
Journal articles on the topic "Generic decoding"


Consult the top 50 journal articles for your research on the topic "Generic decoding".


1

Guobin Shen, Guang-Ping Gao, Shipeng Li, Heung-Yeung Shum and Ya-Qin Zhang. "Accelerate video decoding with generic GPU". IEEE Transactions on Circuits and Systems for Video Technology 15, no. 5 (May 2005): 685–93. http://dx.doi.org/10.1109/tcsvt.2005.846440.

2

Lax, R. F. "Generic interpolation polynomial for list decoding". Finite Fields and Their Applications 18, no. 1 (January 2012): 167–78. http://dx.doi.org/10.1016/j.ffa.2011.07.007.

3

Kushnerov, Alexander V. and Valery A. Lipnitski. "Generic BCH codes. Polynomial-norm error decoding". Journal of the Belarusian State University. Mathematics and Informatics, no. 2 (July 30, 2020): 36–48. http://dx.doi.org/10.33581/2520-6508-2020-2-36-48.

Abstract
The classic Bose–Chaudhuri–Hocquenghem (BCH) codes are a famous and well-studied part of the theory of error-correcting codes. Generalizing BCH codes makes it possible to widen the scope of practical error correction. Some generic BCH codes are able to correct more errors per message block than the classic BCH code, so it is important to provide an appropriate method of error correction. Our investigation found the polynomial-norm method to be the most convenient and effective for that task. The result of the study is a model of a polynomial-norm decoder for a generic BCH code of length 65.
4

Dupuis, Frédéric, Jan Florjanczyk, Patrick Hayden and Debbie Leung. "The locking-decoding frontier for generic dynamics". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 469, no. 2159 (November 8, 2013): 20130289. http://dx.doi.org/10.1098/rspa.2013.0289.

Abstract
It is known that the maximum classical mutual information, which can be achieved between measurements on pairs of quantum systems, can drastically underestimate the quantum mutual information between them. In this article, we quantify this distinction between classical and quantum information by demonstrating that after removing a logarithmic-sized quantum system from one half of a pair of perfectly correlated bitstrings, even the most sensitive pair of measurements might yield only outcomes essentially independent of each other. This effect is a form of information locking but the definition we use is strictly stronger than those used previously. Moreover, we find that this property is generic, in the sense that it occurs when removing a random subsystem. As such, the effect might be relevant to statistical mechanics or black hole physics. While previous works had always assumed a uniform message, we assume only a min-entropy bound and also explore the effect of entanglement. We find that classical information is strongly locked almost until it can be completely decoded. Finally, we exhibit a quantum key distribution protocol that is ‘secure’ in the sense of accessible information but in which leakage of even a logarithmic number of bits compromises the secrecy of all others.
5

Jouguet, Paul and Sebastien Kunz-Jacques. "High performance error correction for quantum key distribution using polar codes". Quantum Information and Computation 14, no. 3&4 (March 2014): 329–38. http://dx.doi.org/10.26421/qic14.3-4-8.

Abstract
We study the use of polar codes for both discrete- and continuous-variable Quantum Key Distribution (QKD). Although very large blocks must be used to obtain the efficiency required by QKD, and especially continuous-variable QKD, their implementation on generic x86 Central Processing Units (CPUs) is practical. Thanks to recursive decoding, they exhibit excellent decoding speed, much higher than large, irregular Low Density Parity Check (LDPC) codes implemented on similar hardware, and competitive with implementations of the same codes on high-end Graphic Processing Units (GPUs).
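The recursive decoding that gives polar codes their speed on generic CPUs reduces to two combining kernels applied over log2(N) stages. A minimal sketch of these textbook successive-cancellation updates, using the common min-sum approximation (function names are ours; this is not the authors' optimized implementation):

```python
import numpy as np

def f(l1, l2):
    """Check-node LLR combination (min-sum approximation)."""
    return np.sign(l1) * np.sign(l2) * np.minimum(np.abs(l1), np.abs(l2))

def g(l1, l2, u):
    """Variable-node LLR combination, conditioned on the already-decoded bit u."""
    return l2 + (1 - 2 * u) * l1

# one decoding butterfly on a length-2 block
llr = np.array([1.2, -0.4])
u0 = int(f(llr[0], llr[1]) < 0)       # decide the first bit
u1 = int(g(llr[0], llr[1], u0) < 0)   # then the second, given u0
```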
6

Xu, Liyan, Fabing Duan, Xiao Gao, Derek Abbott and Mark D. McDonnell. "Adaptive recursive algorithm for optimal weighted suprathreshold stochastic resonance". Royal Society Open Science 4, no. 9 (September 2017): 160889. http://dx.doi.org/10.1098/rsos.160889.

Abstract
Suprathreshold stochastic resonance (SSR) is a distinct form of stochastic resonance, which occurs in multilevel parallel threshold arrays with no requirements on signal strength. In the generic SSR model, an optimal weighted decoding scheme shows its superiority in minimizing the mean square error (MSE). In this study, we extend the proposed optimal weighted decoding scheme to more general input characteristics by combining a Kalman filter and a least mean square (LMS) recursive algorithm, wherein the weighted coefficients can be adaptively adjusted so as to minimize the MSE without complete knowledge of input statistics. We demonstrate that the optimal weighted decoding scheme based on the Kalman–LMS recursive algorithm is able to robustly decode the outputs from the system in which SSR is observed, even for complex situations where the signal and noise vary over time.
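As a rough illustration of the adaptive idea, the sketch below applies a plain LMS update to the weights used to decode a threshold-array output. The array model, step size, and variable names are illustrative assumptions, not the authors' Kalman–LMS formulation:

```python
import numpy as np

def lms_step(w, node_outputs, target, mu=0.01):
    """One LMS iteration: nudge the decoding weights along the negative
    gradient of the instantaneous squared decoding error."""
    estimate = w @ node_outputs                 # weighted decoding of the array
    error = target - estimate
    return w + mu * error * node_outputs, estimate

rng = np.random.default_rng(0)
w = np.zeros(7)                                 # 7 parallel threshold nodes
for _ in range(2000):
    x = rng.normal()                                       # input sample
    y = (x + rng.normal(0.0, 0.5, 7) > 0).astype(float)    # noisy node outputs
    w, est = lms_step(w, y, x)                  # weights adapt to minimize MSE
```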
7

Florescu, Dorian and Daniel Coca. "A Novel Reconstruction Framework for Time-Encoded Signals with Integrate-and-Fire Neurons". Neural Computation 27, no. 9 (September 2015): 1872–98. http://dx.doi.org/10.1162/neco_a_00764.

Abstract
Integrate-and-fire (IF) neurons are time encoding machines that convert the amplitude of an analog signal into a nonuniform, strictly increasing sequence of spike times. Under certain conditions, the encoded signals can be reconstructed from the nonuniform spike time sequences using a time decoding machine. Time encoding and time decoding methods have been studied using the nonuniform sampling theory for band-limited spaces, as well as for generic shift-invariant spaces. This letter proposes a new framework for studying IF time encoding and decoding by reformulating the IF time encoding problem as a uniform sampling problem. This framework forms the basis for two new algorithms for reconstructing signals from spike time sequences. We demonstrate that the proposed reconstruction algorithms are faster, and thus better suited for real-time processing, while providing a similar level of accuracy, compared to the standard reconstruction algorithm.
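A toy integrate-and-fire encoder conveys the basic amplitude-to-spike-time mapping (no leak or refractory period; the threshold and bias values are arbitrary assumptions):

```python
import numpy as np

def if_encode(signal, dt, threshold, bias=0.0):
    """Integrate the biased input; emit a spike time whenever the accumulated
    integral crosses the threshold, then reset by subtraction."""
    spikes, acc = [], 0.0
    for k, u in enumerate(signal):
        acc += (u + bias) * dt
        if acc >= threshold:
            spikes.append(k * dt)
            acc -= threshold
    return np.array(spikes)

t = np.arange(0.0, 1.0, 1e-4)
spike_times = if_encode(np.sin(2 * np.pi * 5 * t), dt=1e-4, threshold=0.05, bias=2.0)
```

The bias keeps the integrand positive, so the spike sequence is strictly increasing, as the abstract requires.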
8

Li, Yinan, Jianan Lu and Badrish Chandramouli. "Selection Pushdown in Column Stores using Bit Manipulation Instructions". Proceedings of the ACM on Management of Data 1, no. 2 (June 13, 2023): 1–26. http://dx.doi.org/10.1145/3589323.

Abstract
Modern analytical database systems predominantly rely on column-oriented storage, which offers superior compression efficiency due to the nature of the columnar layout. This compression, however, creates challenges in decoding speed during query processing. Previous research has explored predicate pushdown on encoded values to avoid decoding, but these techniques are restricted to specific encoding schemes and predicates, limiting their practical use. In this paper, we propose a generic predicate pushdown approach that supports arbitrary predicates by leveraging selection pushdown to reduce decoding costs. At the core of our approach is a fast select operator capable of directly extracting selected encoded values without decoding, by using Bit Manipulation Instructions, an instruction set extension to the x86 architecture. We empirically evaluate the proposed techniques in the context of Apache Parquet using both micro-benchmarks and the TPC-H benchmark, and show that our techniques improve the query performance of Parquet by up to one order of magnitude with representative scan queries. Further experimentation using Apache Spark demonstrates speed improvements of up to 5.5X even for end-to-end queries involving complex joins.
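The select operator's effect can be imitated in a few lines: given fixed-width codes packed into a word and a selection bitmap, it pulls out the selected codes while they are still encoded. This Python sketch is only a software stand-in for the hardware PEXT-style extraction the paper relies on:

```python
def select_packed(packed: int, width: int, count: int, bitmap: int):
    """Extract the encoded values whose positions are set in the selection
    bitmap, without decoding them back to their original domain."""
    mask = (1 << width) - 1
    return [(packed >> (i * width)) & mask
            for i in range(count) if (bitmap >> i) & 1]

# four 4-bit dictionary codes packed into one integer, least significant first
packed = 0x4321                              # codes [1, 2, 3, 4]
print(select_packed(packed, 4, 4, 0b1010))   # -> [2, 4], still encoded
```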
9

Rybalov, A. N. "On the generic complexity of the decoding problem for linear codes". Prikladnaya diskretnaya matematika. Prilozhenie, no. 12 (September 1, 2019): 198–202. http://dx.doi.org/10.17223/2226308x/12/56.

10

Jia, Xiaojun and Zihao Liu. "One-Shot M-Array Pattern Based on Coded Structured Light for Three-Dimensional Object Reconstruction". Journal of Control Science and Engineering 2021 (June 2, 2021): 1–16. http://dx.doi.org/10.1155/2021/6676704.

Abstract
Pattern encoding and decoding are two challenging problems in a three-dimensional (3D) reconstruction system using coded structured light (CSL). In this paper, a one-shot pattern is designed as an M-array with eight embedded geometric shapes, in which each 2 × 2 subwindow appears only once. A robust pattern decoding method for reconstructing objects from a one-shot pattern is then proposed. The decoding approach relies on the robust pattern element tracking algorithm (PETA) and generic features of pattern elements to segment and cluster the projected structured light pattern from a single captured image. A deep convolution neural network (DCNN) and chain sequence features are used to accurately classify pattern elements and key points (KPs), respectively. Meanwhile, a training dataset is established, which contains many pattern elements with various blur levels and distortions. Experimental results show that the proposed approach can be used to reconstruct 3D objects.
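The defining window property of such an M-array is easy to state in code. A small checker, with the eight geometric shapes abstracted to the symbols 0-7 (the pattern below is a made-up toy, not the paper's array):

```python
def windows_unique(pattern):
    """True iff every 2x2 subwindow of symbols occurs at most once,
    which is what makes each window position-identifying."""
    seen = set()
    rows, cols = len(pattern), len(pattern[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            win = (pattern[r][c], pattern[r][c + 1],
                   pattern[r + 1][c], pattern[r + 1][c + 1])
            if win in seen:
                return False
            seen.add(win)
    return True

print(windows_unique([[0, 1, 2, 3],
                      [4, 5, 6, 7],
                      [1, 3, 5, 7]]))   # True: all six 2x2 windows differ
```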
11

Barreto, Paulo S. L. M., Rafael Misoczki and Marcos A. Simplicio Jr. "One-time signature scheme from syndrome decoding over generic error-correcting codes". Journal of Systems and Software 84, no. 2 (February 2011): 198–204. http://dx.doi.org/10.1016/j.jss.2010.09.016.

12

Aitken, Martin. "Generic assumptions in utterance interpretation: the case of indirect instructions". HERMES - Journal of Language and Communication in Business 15, no. 28 (March 2, 2017): 109. http://dx.doi.org/10.7146/hjlcb.v15i28.25669.

Abstract
This article addresses the role played by genre in the way in which language users interpret ‘indirect’ directive utterances in the special discourse context of technical instructions. In linguistics, issues of genre have most often been approached from socially oriented frameworks such as systemic functional linguistics and the ethnomethodology of the Swales-Bhatia school. The present article instead adopts the cognitive framework of relevance theory to account for a process of comprehension founded on two modularised cognitive processes, viz. decoding of semantic content and relevance-driven inferential manipulation of resulting representations to which generic assumptions about the discourse provide significant contextual input.
13

Guo, Jianzhong, Cong Cao, Dehui Shi, Jing Chen, Shuai Zhang, Xiaohu Huo, Dejin Kong, Jian Li, Yukang Tian and Min Guo. "Matching Pursuit Algorithm for Decoding of Binary LDPC Codes". Wireless Communications and Mobile Computing 2021 (October 31, 2021): 1–5. http://dx.doi.org/10.1155/2021/9980774.

Abstract
This paper presents a novel hard-decision decoding algorithm for low-density parity-check (LDPC) codes, in which the standard matching pursuit (MP) is adapted for error pattern recovery from the syndrome over GF(2). In this algorithm, the inner-product operation can be converted into XOR and accumulation, which makes the matching pursuit work with high efficiency. In addition, the maximum number of iterations is theoretically explored in relation to sparsity and error probability according to sparse theory. To evaluate the proposed algorithm, two MP-based decoding algorithms are simulated and compared over an AWGN channel, i.e., generic MP (GMP) and syndrome MP (SMP). Simulation results show that the GMP algorithm outperforms the SMP by 0.8 dB at BER = 10⁻⁵.
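A greedy sketch of syndrome-domain matching pursuit over GF(2): each iteration flips the bit whose parity-check column cancels the most unsatisfied checks, and the inner-product update becomes an XOR, as the abstract notes. The matrix and error pattern are toy values, not the paper's codes:

```python
import numpy as np

def mp_syndrome_decode(H, syndrome, max_iter=10):
    """Recover a sparse error pattern from its syndrome over GF(2) by
    greedily flipping the bit whose column best matches the residual."""
    residual = syndrome.copy()
    error = np.zeros(H.shape[1], dtype=np.uint8)
    for _ in range(max_iter):
        if not residual.any():
            break                                     # syndrome cleared
        scores = (H & residual[:, None]).sum(axis=0)  # checks each flip cancels
        j = int(np.argmax(scores))
        error[j] ^= 1
        residual ^= H[:, j]                           # XOR replaces the inner product
    return error

H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 0]], dtype=np.uint8)
e = np.array([0, 1, 0, 0, 0], dtype=np.uint8)         # single-bit error
print(mp_syndrome_decode(H, (H @ e) % 2))             # recovers e
```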
14

Yang, Bang, Yuexian Zou, Fenglin Liu and Can Zhang. "Non-Autoregressive Coarse-to-Fine Video Captioning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3119–27. http://dx.doi.org/10.1609/aaai.v35i4.16421.

Abstract
It is encouraging to see that progress has been made to bridge videos and natural language. However, mainstream video captioning methods suffer from slow inference speed due to the sequential manner of autoregressive decoding, and prefer generating generic descriptions due to the insufficient training of visual words (e.g., nouns and verbs) and an inadequate decoding paradigm. In this paper, we propose a non-autoregressive decoding based model with a coarse-to-fine captioning procedure to alleviate these defects. In implementations, we employ a bi-directional self-attention based network as our language model for achieving inference speedup, based on which we decompose the captioning procedure into two stages, where the model has different focuses. Specifically, given that visual words determine the semantic correctness of captions, we design a mechanism of generating visual words to not only promote the training of scene-related words but also capture relevant details from videos to construct a coarse-grained sentence "template". Thereafter, we devise dedicated decoding algorithms that fill in the "template" with suitable words and modify inappropriate phrasing via iterative refinement to obtain a fine-grained description. Extensive experiments on two mainstream video captioning benchmarks, i.e., MSVD and MSR-VTT, demonstrate that our approach achieves state-of-the-art performance, generates diverse descriptions, and obtains high inference efficiency.
15

Liu, Qi, Lei Yu, Laura Rimell and Phil Blunsom. "Pretraining the Noisy Channel Model for Task-Oriented Dialogue". Transactions of the Association for Computational Linguistics 9 (2021): 657–74. http://dx.doi.org/10.1162/tacl_a_00390.

Abstract
Direct decoding for task-oriented dialogue is known to suffer from the explaining-away effect, manifested in models that prefer short and generic responses. Here we argue for the use of Bayes' theorem to factorize the dialogue task into two models: the distribution of the context given the response, and the prior for the response itself. This approach, an instantiation of the noisy channel model, both mitigates the explaining-away effect and allows the principled incorporation of large pretrained models for the response prior. We present extensive experiments showing that a noisy channel model decodes better responses compared to direct decoding and that a two-stage pretraining strategy, employing both open-domain and task-oriented dialogue data, improves over randomly initialized models.
16

El-Abbasy, Karim, Ramy Taki Eldin, Salwa El Ramly and Bassant Abdelhamid. "Optimized Polar Codes as Forward Error Correction Coding for Digital Video Broadcasting Systems". Electronics 10, no. 17 (September 3, 2021): 2152. http://dx.doi.org/10.3390/electronics10172152.

Abstract
Polar codes are featured by their low encoding/decoding complexity for symmetric binary-input discrete memoryless channels. Recently, flexible generic Successive Cancellation List (SCL) decoders for polar codes were proposed to provide different throughput, latency, and decoding performance. In this paper, we propose to use polar codes with flexible fast-adaptive SCL decoders in Digital Video Broadcasting (DVB) systems to meet the growing demand for higher bitrates. In addition, they can provide more interactive services with less latency and more throughput. First, we start with the construction of polar codes and propose a new mathematical relation to obtain the optimized design point for the polar code. We prove that our optimized design point lies very close to the one that achieves the minimum Bit Error Rate (BER). Then, we compare the performance of polar and Low-Density Parity Check (LDPC) codes in terms of BER, encoder/decoder latencies, and throughput. The results show that both channel coding techniques have comparable BER. However, polar codes are superior to LDPC in terms of decoding latency and system throughput. Finally, we present the possible performance enhancement of DVB systems in terms of decoding latency and complexity when using optimized polar codes as a Forward Error Correction (FEC) technique instead of the Bose–Chaudhuri–Hocquenghem (BCH) and LDPC codes that are currently adopted in DVB standards.
17

Salija, P., B. Yamuna, T. R. Padmanabhan and D. Mishra. "Generic Direct Approach for Decoding Turbo Codes Using Probability Density Based Reliability Model". Journal of Communications Technology and Electronics 66, no. 2 (February 2021): 175–83. http://dx.doi.org/10.1134/s1064226921020133.

18

Horikawa, Tomoyasu and Yukiyasu Kamitani. "Generic decoding of seen and imagined objects using features of deep neural networks". Journal of Vision 16, no. 12 (September 1, 2016): 372. http://dx.doi.org/10.1167/16.12.372.

19

Sunil J. Wimalawansa. "Decoding the paradox: Understanding Elevated Hospitalization and Reduced Mortality in SARS-CoV-2 Variants". International Journal of Frontiers in Science and Technology Research 6, no. 2 (April 30, 2024): 001–20. http://dx.doi.org/10.53294/ijfstr.2024.6.2.0031.

Abstract
Introduction and aim: SARS-CoV-2 outbreaks occur cyclically, aligning with winter when vitamin D levels are lowest, except after new variant outbreaks. Adequate vitamin D is crucial for robust immune function. Hypovitaminosis diminishes immune responses, increasing susceptibility to viral infections. The manuscript explores the discrepancy between increased SARS-CoV-2 hospitalizations and lower mortality. Method: SARS-CoV-2 mutants, including Delta, BQ, and XBB Omicron lineages, developed immune evasion capabilities, reducing the effectiveness of COVID-19 vaccines and bivalent boosters. The failure of COVID-19 vaccines to prevent infections and spread to others, coupled with the immune evasion exhibited by mutant viruses, contributed to continued SARS-CoV-2 outbreaks. Interestingly, dominant new mutants, despite their increased transmissibility, have caused fewer deaths. This article scrutinizes the mentioned incongruity through an analysis of published data. Results: Achieving herd immunity and eradicating SARS-CoV-2 has proven elusive due to ongoing mutagenesis and immune evasion, leading to recurrent viral outbreaks. The failure of regulators to approve repurposed early therapies for COVID-19, as well as misinformation and weak strategies undertaken by leading health authorities, exacerbated the situation. Repurposed agents, including vitamin D and ivermectin, have demonstrated high efficacy against SARS-CoV-2 from the beginning, and remain unaffected by mutations. Despite their cost-effectiveness and widespread availability, regulatory approval for these generic agents in COVID-19 treatment is pending. Conclusion: Regulators hesitated to approve cost-effective, repurposed generic agents primarily to safeguard the temporary approval status of COVID-19 vaccines and anti-viral agents under Emergency Use Authorization, which persists. This reluctance overlooked the opportunity to implement an integrated approach with repurposed agents alongside COVID-19 vaccines, potentially reducing hospitalizations and fatalities and preventing outbreaks; it led to the failure to eradicate SARS-CoV-2 and to the virus becoming endemic. It is imperative that regulators now reconsider approving affordable generics for SARS-CoV-2 to effectively control future viral outbreaks. Non-technical importance (lay abstract): Adequate vitamin D levels significantly bolster the human immune system; deficiency compromises immune responses and increases susceptibility, particularly to viruses. While new SARS-CoV-2 mutations like Omicron are less severe, they are more infectious and adept at evading immunity from vaccines; thus, vaccines offer a limited spectrum and duration of protection. Primary COVID-19 vaccines have reduced disease severity but have failed to prevent viral spread, contributing to outbreaks. Booster doses had little effect on the virus but caused immune paresis, thus increasing susceptibility to infections. Regulators should consider approving generic, repurposed agents like vitamin D and ivermectin as adjunct therapies to address this challenge and better prepare for future pandemics. Proactively integrating vitamin D supplementation to fortify the immune system can mitigate viral outbreaks, alleviate hospital burdens, and reduce healthcare costs.
20

Roy, Rinku, Feng Xu, Derek G. Kamper and Xiaogang Hu. "A generic neural network model to estimate populational neural activity for robust neural decoding". Computers in Biology and Medicine 144 (May 2022): 105359. http://dx.doi.org/10.1016/j.compbiomed.2022.105359.

21

López Parrado, Alexander, Jaime Velasco Medina and Julián Adolfo Ramírez Gutiérrez. "Efficient hardware implementation of a full COFDM processor with robust channel equalization and reduced power consumption". Revista Facultad de Ingeniería Universidad de Antioquia, no. 68 (October 18, 2013): 48–60. http://dx.doi.org/10.17533/udea.redin.17040.

Abstract
This work presents the design of a 12 Mb/s Coded Orthogonal Frequency Division Multiplexing (COFDM) baseband processor for the IEEE 802.11a standard. The COFDM baseband processor was designed using our own circuits for carrier phase correction, symbol timing synchronization, robust channel equalization and Viterbi decoding. These circuits are flexible, parameterized and described using generic structural VHDL. The COFDM processor has two clock domains for reducing power consumption; it was synthesized on a Stratix II FPGA and experimentally tested using 2.4 GHz Radio Frequency (RF) circuitry.
22

Aziz Momin, Md Sorique and Ayan Biswas. "The role of gene regulation in redundant and synergistic information transfers in coherent feed-forward loop". Journal of Statistical Mechanics: Theory and Experiment 2023, no. 2 (February 1, 2023): 023501. http://dx.doi.org/10.1088/1742-5468/acb42e.

Abstract
For the ubiquitous coherent type-1 feed-forward loop (C1-FFL) motif, the master and co-regulators act as sources of information in decoding the output gene expression state. Using the variance-based definition of information within a Gaussian framework at steady state, we apply the partial information decomposition technique to quantify the redundant (common) and synergistic (complementary) information transfers to the output gene. By enabling the generic C1-FFL motif with complementarily tunable regulatory pathways and fixed gene product abundances, we examine the role of output gene regulation in maintaining the flow of these two multivariate information flavors. We find that the redundant and synergistic information transfers are simultaneously maximized when the direct and indirect output regulatory strengths are nearly balanced. All other manifestations of the generic C1-FFL motif, including the two terminal ones, namely, the two-step cascade and fan-out, transduce lesser amounts of these two types of information. This optimal decoding of the output gene expression state by a nearly balanced C1-FFL motif holds true in an extended repertoire of biologically relevant parametric situations. These realizations involve additional layers of regulation through changing gene product abundances, activation coefficients, and degradation rates. Our analyses underline the regulatory mechanisms through which the C1-FFL motif is able to optimally reduce its output uncertainty concurrently via redundant and synergistic modes of information transfer. We find that these information transfers are guided by fluctuations in the motif. The prevalence of redundancy over synergy in all regulatory implementations is also noteworthy.
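For reference, the quantities being separated satisfy the standard partial information decomposition identity (in Williams–Beer form), with X1 and X2 the master and co-regulator and Y the output gene; the Gaussian estimators the authors use for each term are not reproduced here:

```latex
I(X_1, X_2; Y) = \mathrm{Rdn}(Y; X_1, X_2)
               + \mathrm{Unq}(Y; X_1 \setminus X_2)
               + \mathrm{Unq}(Y; X_2 \setminus X_1)
               + \mathrm{Syn}(Y; X_1, X_2)
```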
23

Kuang, Mei, Zongyi Zhan and Shaobing Gao. "Natural Image Reconstruction from fMRI Based on Node–Edge Interaction and Multi–Scale Constraint". Brain Sciences 14, no. 3 (February 28, 2024): 234. http://dx.doi.org/10.3390/brainsci14030234.

Abstract
Reconstructing natural stimulus images using functional magnetic resonance imaging (fMRI) is one of the most challenging problems in brain decoding and is also the crucial component of a brain–computer interface. Previous methods cannot fully exploit the information about interactions among brain regions. In this paper, we propose a natural image reconstruction method based on node–edge interaction and a multi-scale constraint. Inspired by the extensive information interactions in the brain, a novel graph neural network block with node–edge interaction (NEI–GNN block) is presented, which can adequately model the information exchange between brain areas via alternatively updating the nodes and edges. Additionally, to enhance the quality of reconstructed images in terms of both global structure and local detail, we employ a multi-stage reconstruction network that restricts the reconstructed images in a coarse-to-fine manner across multiple scales. Qualitative experiments on the generic object decoding (GOD) dataset demonstrate that the reconstructed images contain accurate structural information and rich texture details. Furthermore, the proposed method surpasses the existing state-of-the-art methods in terms of accuracy in the commonly used n-way evaluation. Our approach achieves 82.00%, 59.40%, 45.20% in n-way mean squared error (MSE) evaluation and 83.50%, 61.80%, 46.00% in n-way structural similarity index measure (SSIM) evaluation, respectively. Our experiments reveal the importance of information interaction among brain areas and also demonstrate the potential for developing visual-decoding brain–computer interfaces.
24

Schweitzer, Marlis. "Decoding the Lecture on Heads: Personal Satire on the Eighteenth-Century Stage". Eighteenth-Century Studies 57, no. 3 (March 2024): 301–24. http://dx.doi.org/10.1353/ecs.2024.a923780.

Abstract
This article asserts that one of the most popular plays of the era, George Alexander Stevens's Lecture on Heads (1764), was so successful in its use of objects, mimicry, and other deverbalizing techniques that historians have subsequently misinterpreted its satirical purpose. The wooden and papier mâché heads Stevens exhibited were anything but generic: they were skillfully crafted representations of recognizable public figures. Targeting these figures with keen precision, Stevens engaged his audiences in a complex "gazing game" that required deep knowledge of 1760s court gossip, political intrigue, mezzotint imagery, and the semiotics of print caricature. This article plays that game through close analysis of caricatures, etchings, engravings, mezzotints, oil paintings, and other visual sources, aided by contemporary accounts of the Lecture, and various (unauthorized) publications of Stevens's script.
25

Leuschner, Jeff and Shahram Yousefi. "A New Generic Maximum-Likelihood Metric Expression for Space–Time Block Codes With Applications to Decoding". IEEE Transactions on Information Theory 54, no. 2 (February 2008): 888–94. http://dx.doi.org/10.1109/tit.2007.913417.

26

Quiles, Vicente, Laura Ferrero, Eduardo Iáñez, Mario Ortiz and José M. Azorín. "Decoding of Turning Intention during Walking Based on EEG Biomarkers". Biosensors 12, no. 8 (July 22, 2022): 555. http://dx.doi.org/10.3390/bios12080555.

Abstract
In the EEG literature, there is a lack of asynchronous intention models that realistically propose interfaces for applications that must operate in real time. In this work, a novel BMI approach to detect the intention to turn in real time is proposed. For this purpose, offline, pseudo-online and online analyses are presented to validate the EEG as a biomarker for the intention to turn. This article presents a methodology for the creation of a BMI that can differentiate two classes: monotonous walk and intention to turn. A comparison of some of the most popular algorithms in the literature is conducted. To filter the signal, two relevant algorithms are used: the H∞ filter and ASR. For processing and classification, the mean of the covariance matrices in the Riemannian space is calculated and then, with various classifiers of different types, the distance of the test samples to each class in the Riemannian space is estimated. This dispenses with power-based models and the necessary baseline correction, which is a problem in realistic scenarios. In the cross-validation for a generic selection (valid for any subject) and a personalized one, the results were, on average, 66.2% and 69.6% with the best filter, H∞. In the pseudo-online analysis, the custom configuration for each subject achieved an average of 40.2% TP and 9.3 FP/min; the best subject obtained 43.9% TP and 2.9 FP/min. In the final validation test, this subject obtained 2.5 FP/min and an accuracy rate of 71.43%, and the turn anticipation was 0.21 s on average.
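The classification step described above (mean covariance matrices plus Riemannian distances) can be sketched compactly; the geometric-mean computation, EEG preprocessing, and class labels are omitted or illustrative:

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemann_dist(A, B):
    """Affine-invariant Riemannian distance between SPD covariance matrices:
    sqrt of the sum of squared log generalized eigenvalues of (A, B)."""
    return np.sqrt(np.sum(np.log(eigvalsh(A, B)) ** 2))

def classify(trial_cov, class_means):
    """Minimum distance to mean on the SPD manifold:
    index 0 = monotonous walk, 1 = intention to turn (illustrative)."""
    return int(np.argmin([riemann_dist(trial_cov, M) for M in class_means]))

print(riemann_dist(np.eye(3), 2 * np.eye(3)))   # sqrt(3) * log(2) ≈ 1.2
```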
27

Song, Kai, Kun Wang, Heng Yu, Yue Zhang, Zhongqiang Huang, Weihua Luo, Xiangyu Duan and Min Zhang. "Alignment-Enhanced Transformer for Constraining NMT with Pre-Specified Translations". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8886–93. http://dx.doi.org/10.1609/aaai.v34i05.6418.

Abstract
We investigate the task of constraining NMT with pre-specified translations, which has practical significance for a number of research and industrial applications. Existing works impose pre-specified translations as lexical constraints during decoding, which are based on word alignments derived from target-to-source attention weights. However, multiple recent studies have found that word alignment derived from generic attention heads in the Transformer is unreliable. We address this problem by introducing a dedicated head in the multi-head Transformer architecture to capture external supervision signals. Results on five language pairs show that our method is highly effective in constraining NMT with pre-specified translations, consistently outperforming previous methods in translation quality.
28

Li, Xin, Piji Li, Wei Bi, Xiaojiang Liu and Wai Lam. "Relevance-Promoting Language Model for Short-Text Conversation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8253–60. http://dx.doi.org/10.1609/aaai.v34i05.6340.

Abstract
Despite the effectiveness of the sequence-to-sequence framework on the task of Short-Text Conversation (STC), the issue of under-exploitation of training data (i.e., the supervision signals from the query text are ignored) still remains unresolved. Also, the adopted maximization-based decoding strategies, inclined to generating generic responses or responses with repetition, are unsuited to the STC task. In this paper, we propose to formulate the STC task as a language modeling problem and tailor-make a training strategy to adapt a language model for response generation. To enhance generation performance, we design a relevance-promoting transformer language model, which performs additional supervised source attention after the self-attention to increase the importance of informative query tokens in calculating the token-level representation. The model further refines the query representation with relevance clues inferred from its multiple references during training. In testing, we adopt a randomization-over-maximization strategy to reduce the generation of generic responses. Experimental results on a large Chinese STC dataset demonstrate the superiority of the proposed model on relevance metrics and diversity metrics.
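The "randomization-over-maximization" strategy contrasts with greedy argmax decoding roughly as follows; the temperature value and token set are placeholders, not the paper's configuration:

```python
import numpy as np

def sample_token(logits, temperature=0.8, rng=np.random.default_rng()):
    """Sample the next token from the softmax distribution rather than
    taking the argmax, trading a little likelihood for diversity."""
    z = logits / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

logits = np.array([2.0, 1.5, 0.2, -1.0])
greedy = int(np.argmax(logits))   # maximization: always token 0, often generic
varied = sample_token(logits)     # randomization: sometimes tokens 1-3
```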
29

Zhang, Yue and Stephen Clark. "Syntactic Processing Using the Generalized Perceptron and Beam Search". Computational Linguistics 37, no. 1 (March 2011): 105–51. http://dx.doi.org/10.1162/coli_a_00037.

Abstract
We study a range of syntactic processing tasks using a general statistical framework that consists of a global linear model, trained by the generalized perceptron together with a generic beam-search decoder. We apply the framework to word segmentation, joint segmentation and POS-tagging, dependency parsing, and phrase-structure parsing. Both components of the framework are conceptually and computationally very simple. The beam-search decoder only requires the syntactic processing task to be broken into a sequence of decisions, such that, at each stage in the process, the decoder is able to consider the top-n candidates and generate all possibilities for the next stage. Once the decoder has been defined, it is applied to the training data, using trivial updates according to the generalized perceptron to induce a model. This simple framework performs surprisingly well, giving accuracy results competitive with the state-of-the-art on all the tasks we consider. The computational simplicity of the decoder and training algorithm leads to significantly higher test speeds and lower training times than their main alternatives, including log-linear and large-margin training algorithms and dynamic-programming for decoding. Moreover, the framework offers the freedom to define arbitrary features which can make alternative training and decoding algorithms prohibitively slow. We discuss how the general framework is applied to each of the problems studied in this article, making comparisons with alternative learning and decoding algorithms. We also show how the comparability of candidates considered by the beam is an important factor in the performance. We argue that the conceptual and computational simplicity of the framework, together with its language-independent nature, make it a competitive choice for a range of syntactic processing tasks and one that should be considered for comparison by developers of alternative approaches.
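The decoder the framework builds on is an ordinary beam search over incremental decisions. A generic skeleton matching the abstract's description, where expand enumerates next-stage candidates and score is the linear model (both supplied by the task; the toy usage is ours):

```python
def beam_search(initial, expand, score, n_steps, beam_width):
    """Keep the top-n candidates at each stage, expand each into its
    possible successor decisions, and re-prune by model score."""
    beam = [initial]
    for _ in range(n_steps):
        candidates = [nxt for state in beam for nxt in expand(state)]
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(beam, key=score)

# toy usage: build a binary string, scoring by the number of ones
best = beam_search("", lambda s: [s + "0", s + "1"],
                   lambda s: s.count("1"), n_steps=5, beam_width=3)
print(best)   # "11111"
```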
30

Shah, Stark and Bauch. "Coarsely Quantized Decoding and Construction of Polar Codes Using the Information Bottleneck Method". Algorithms 12, no. 9 (September 10, 2019): 192. http://dx.doi.org/10.3390/a12090192.

Abstract
The information bottleneck method is a generic clustering framework from the field of machine learning which allows compressing an observed quantity while retaining as much of the mutual information it shares with the quantity of primary relevance as possible. The framework was recently used to design message-passing decoders for low-density parity-check codes in which all the arithmetic operations on log-likelihood ratios are replaced by table lookups of unsigned integers. This paper presents, in detail, the application of the information bottleneck method to polar codes, where the framework is used to compress the virtual bit channels defined in the code structure, and shows that the benefits are twofold. On the one hand, the compression restricts the output alphabet of the bit channels to a manageable size. This facilitates computing the capacities of the bit channels in order to identify the ones with larger capacities. On the other hand, the intermediate steps of the compression process can be used to replace the log-likelihood ratio computations in the decoder with table lookups of unsigned integers. Hence, a single procedure produces a polar encoder as well as its tailored, quantized decoder. Moreover, we also use a technique called message alignment to reduce the space complexity of the quantized decoder obtained using the information bottleneck framework.
31

Qureshi, Sara Shakil, Sajid Ali and Syed Ali Hassan. "Linear and Decoupled Decoders for Dual-Polarized Antenna-Based MIMO Systems". Sensors 20, no. 24 (December 13, 2020): 7141. http://dx.doi.org/10.3390/s20247141.

Abstract
Quaternion orthogonal designs (QODs) have been used to design STBCs that provide improved performance in terms of various design parameters. In this paper, we show that all QODs obtained from generic iterative construction techniques based on the Adams-Lax-Phillips approach have linear and decoupled decoders, which significantly reduce the computational complexity at the receiver. Our result is based on the quaternionic description of communication channels among dual-polarized antennas. Another contribution of this work is a linear and decoupled decoder for quasi-orthogonal codes for non-square as well as square designs. The proposed solution promises diversity gains with the quaternionic channel model, and the decoding solution is independent of the number of receive dual-polarized antennas. A brief comparison is presented at the end to demonstrate the effectiveness of quaternion designs on two dual-polarized antennas over available STBCs for four single-polarized antennas. Linear and decoupled decoding of two quasi-orthogonal designs is shown, which had not been achieved previously. In addition, a QOD for a 2×1 dual-polarized antenna configuration using the quaternionic channel model shows a 3 dB gain at a BER of 10⁻⁵ in comparison to the same code evaluated for a 2×2 complex representation of the quaternionic channel. This gain is further enhanced when the receive diversity for the two cases is matched, i.e., 2×2. The code using the quaternionic channel model then shows a further 13 dB improvement at a BER of 10⁻⁵.
32

Ren, Pengjie, Zhumin Chen, Christof Monz, Jun Ma and Maarten De Rijke. "Thinking Globally, Acting Locally: Distantly Supervised Global-to-Local Knowledge Selection for Background Based Conversation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8697–704. http://dx.doi.org/10.1609/aaai.v34i05.6395.

Abstract
Background Based Conversations (BBCs) have been introduced to help conversational systems avoid generating overly generic responses. In a BBC, the conversation is grounded in a knowledge source. A key challenge in BBCs is Knowledge Selection (KS): given a conversational context, try to find the appropriate background knowledge (a text fragment containing related facts or comments, etc.) based on which to generate the next response. Previous work addresses KS by employing attention and/or pointer mechanisms. These mechanisms use a local perspective, i.e., they select a token at a time based solely on the current decoding state. We argue for the adoption of a global perspective, i.e., pre-selecting some text fragments from the background knowledge that could help determine the topic of the next response. We enhance KS in BBCs by introducing a Global-to-Local Knowledge Selection (GLKS) mechanism. Given a conversational context and background knowledge, we first learn a topic transition vector to encode the most likely text fragments to be used in the next response, which is then used to guide the local KS at each decoding timestamp. In order to effectively learn the topic transition vector, we propose a distantly supervised learning schema. Experimental results show that the GLKS model significantly outperforms state-of-the-art methods in terms of both automatic and human evaluation. More importantly, GLKS achieves this without requiring any extra annotations, which demonstrates its high degree of scalability.
33

Azimov, O. T. "GENERATION OF THE GENERIC FLOWCHART OF TRANSFORMATION, DECODING AND THEMATIC INTERPRETATION OF REMOTE SENSING DATA FOR GEOLOGIC OBJECTIVES SOLVING". Collection of Scientific Works of the Institute of Geological Sciences of the NAS of Ukraine 4 (March 11, 2011): 11–18. http://dx.doi.org/10.30836/igs.2522-9753.2011.152622.

34

Rehman, Muhammad Shakil Ur, Saima Akhter Chattha and Zia Ullah Khan Niazi. "Decoding of a Gendered Marginalized Discourse in V.S. Naipaul's A House for Mr. Biswas". International Journal of Linguistics and Culture 3, no. 1 (June 10, 2022): 61–71. http://dx.doi.org/10.52700/ijlc.v3i1.86.

Abstract
Focusing on Sara Mills's feminist stylistics in A House for Mr. Biswas (1961) by V.S. Naipaul, the article provides an alternative understanding of the novel through the lens of this framework to reveal marginalization on the basis of gender. The disparity between the socio-cultural roles of men and women in the novel, conveyed through a particular use of language, divulges the gender inequality of a society. The preference for masculinity, embedded in patriarchal titles, names, generic pronouns and professions, leads to gender labeling. The disparaging portrayal of women in this fictional representation of a postcolonial work manifests that women are discriminated against. It is essential, therefore, to problematize gender representation through the application of a "toolkit" (Mills, 1995, p. 2), one of the significant theoretical frameworks that critiques a literary piece to unearth concealed objectives wrought in implicit and explicit language use. In the selected work, Naipaul's use of language at the levels of word, phrase/sentence and discourse demonstrates how rampant socio-cultural gender norms play a pivotal role in upholding gender inequality. The study therefore concludes that the submissive role assigned to women through the specific use of discourse makes the application of feminist stylistics pertinent for highlighting the stereotypical gender norms used to strengthen patriarchal hegemony: the socio-cultural gender norms, in collusion with the specific discourse of the novel, are coded for the dominance of patriarchy and the segregation of women.
35

Ali, I., U. Wasenmüller and N. Wehn. "A code-aided synchronization IP core for iterative channel decoders". Advances in Radio Science 11 (July 4, 2013): 137–42. http://dx.doi.org/10.5194/ars-11-137-2013.

Abstract
Synchronization and channel decoding are integral parts of each receiver in wireless communication systems. The task of synchronization is the estimation of the generally unknown parameters of phase, frequency and timing offset, as well as correction of the received symbol sequence according to the estimated parameters. The synchronized symbol sequence serves as input for the channel decoder. Advanced channel decoders are able to operate at very low signal-to-noise ratios (SNR). For small values of SNR, the parameter estimation suffers from increased noise and impacts the communication performance. To improve the synchronization quality and thus decoder performance, the synchronizers are integrated into the iterative decoding structure. Intermediate results of the channel decoder after each iteration are used to improve the synchronization. This approach is referred to as code-aided (CA) synchronization or turbo synchronization. A number of CA synchronization algorithms have already been published, but there is no publication so far on a generic hardware implementation of CA synchronization. Therefore, we present an algorithm which can be implemented efficiently in hardware and demonstrate its communication performance. Furthermore, we present a high-throughput, flexible, area- and power-efficient code-aided synchronization IP core for various satellite communication standards. The core is synthesized for 65 nm low-power CMOS technology. After placement and routing, the core has an area of 0.194 mm², a throughput of 207 Msymbols/s, and consumes 41.4 mW at a 300 MHz clock frequency. The architecture is designed in such a way that it does not affect the throughput of the system.
36

Isonuma, Masaru, Junichiro Mori, Danushka Bollegala and Ichiro Sakata. "Unsupervised Abstractive Opinion Summarization by Generating Sentences with Tree-Structured Topic Guidance". Transactions of the Association for Computational Linguistics 9 (2021): 945–61. http://dx.doi.org/10.1162/tacl_a_00406.

Abstract
This paper presents a novel unsupervised abstractive summarization method for opinionated texts. While the basic variational autoencoder-based models assume a unimodal Gaussian prior for the latent code of sentences, we alternate it with a recursive Gaussian mixture, where each mixture component corresponds to the latent code of a topic sentence and is mixed by a tree-structured topic distribution. By decoding each Gaussian component, we generate sentences with tree-structured topic guidance, where the root sentence conveys generic content, and the leaf sentences describe specific topics. Experimental results demonstrate that the generated topic sentences are appropriate as a summary of opinionated texts, which are more informative and cover more input contents than those generated by the recent unsupervised summarization model (Bražinskas et al., 2020). Furthermore, we demonstrate that the variance of latent Gaussians represents the granularity of sentences, analogous to Gaussian word embedding (Vilnis and McCallum, 2015).
37

Anushree Bose and Dr. Rajesh K. Pandey. "Dilemma to Dominance: Decoding the Evolution of the Indian Pharmaceutical Industry - A Case Study". Applied Science and Biotechnology Journal for Advanced Research 2, no. 4 (July 31, 2023): 18–25. http://dx.doi.org/10.31033/abjar.2.4.5.

Abstract
The Indian pharmaceutical industry has undergone a significant evolution, transitioning from a domestically-focused sector to a global powerhouse. Initially, Indian pharmaceutical companies primarily catered to domestic healthcare needs. However, with the implementation of economic reforms in the 1990s, the industry witnessed rapid growth and expanded into international markets. The Indian pharmaceutical industry has experienced a remarkable transformation, establishing itself as a global leader in the production and export of generic drugs. The industry has demonstrated superior growth performance, making it one of the fastest-growing sectors worldwide. Productivity levels and innovation play a significant role in determining competitiveness, and the Indian pharmaceutical industry has shown impressive capabilities in these areas. Government policies have been instrumental in driving the industry's self-sufficiency and internationalization. The establishment of public sector pharmaceutical enterprises and the introduction of intellectual property regulations, pricing control, and research and development (R&D) support have propelled the industry's growth. This case study aims to provide an in-depth analysis of the journey of the Indian pharmaceutical industry, exploring its growth, challenges, and the dilemma it faces in balancing innovation and affordability. By examining the sector's perspective, challenges, and identified dilemmas, this case study aims to equip readers with a comprehensive understanding of the industry. The case delves into the journey and challenges faced by the Indian pharmaceutical industry. It investigates the industry's growth, the perspective of Indian pharmaceutical companies, the challenges they encounter, and the identified dilemmas. The case also provides teaching notes and objectives to enhance the readers' knowledge and decision-making skills within the pharmaceutical industry.
38

Ian F. Akyildiz and Hongzhi Guo. "Holographic-type communication: A new challenge for the next decade". ITU Journal on Future and Evolving Technologies 3, no. 2 (September 30, 2022): 421–42. http://dx.doi.org/10.52953/yrll3571.

Abstract
Holographic-Type Communication (HTC) is an important technology that will be supported by 6G and beyond wireless systems. It provides truly immersive experiences for a large number of novel applications, such as holographic telepresence, healthcare, retail, education, training, entertainment, sports, and gaming, by displaying multi-view high-resolution 3D holograms of humans or objects/items and creating multi-sensory media (mulsemedia), including audio, haptic, smell, and taste. HTC faces great challenges in transmitting high-volume data with guaranteed end-to-end latency, which cannot be addressed by existing communication and networking technologies. The contribution of this paper is two-fold. First, it introduces the basics and generic architectures of HTC systems. The encoding and decoding of holograms and mulsemedia are discussed, and the envisioned use cases and technical requirements are introduced. Second, this paper identifies limitations of existing wireless and wired networks in realizing HTC and points out the promising 6G and beyond networking technologies. Particularly, for HTC sources, point cloud encoding and mulsemedia creation and synchronization are introduced. For HTC networking, new directions and associated research challenges, such as semantic communications, deterministic networks, time-sensitive networks, distributed encoding and decoding, and predictive networks, are discussed as they may enable high data rate communications with guaranteed end-to-end latency. For HTC destinations, the heterogeneity of HTC devices, synchronization, and user motion prediction are explored and associated research challenges are pointed out. Video communication with 2D content has profoundly changed our daily life and working style. HTC is an advanced technology that provides 3D immersive experiences, which will become the next research frontier.
39

Kitazaki, M., S. Hariyama, Y. Inoue and S. Nakauchi. "Correlation between neural decoding and perceptual performance in visual processing of human body postures: generic views, inversion effect and biomechanical constraint". Journal of Vision 9, no. 8 (March 24, 2010): 609. http://dx.doi.org/10.1167/9.8.609.

40

Okubo, Satoru, Aoi Nikkeshi, Chisato S. Tanaka, Kiyoshi Kimura, Mikio Yoshiyama and Nobuo Morimoto. "Forage area estimation in European honeybees (Apis mellifera) by automatic waggle decoding of videos using a generic camcorder in field apiaries". Apidologie 50, no. 2 (April 2019): 243–52. http://dx.doi.org/10.1007/s13592-019-00638-3.

41

Lee, Sangku, Janghyuk Youn and Bang Chul Jung. "Hybrid AF/DF Cooperative Relaying Technique with Phase Steering for Industrial IoT Networks". Energies 14, no. 4 (February 10, 2021): 937. http://dx.doi.org/10.3390/en14040937.

Abstract
For the next generation of manufacturing, the industrial internet of things (IoT) has been considered as a key technology that enables smart factories, in which sensors transfer measured data, actuators are controlled, and systems are connected wirelessly. In particular, the wireless sensor network (WSN) needs to operate with low cost, low power (energy), and narrow spectrum, which are the most technical challenges for industrial IoT networks. In general, a relay-assisted communication network has been known to overcome scarce energy problems, and a spectrum-sharing technique has been considered as a promising technique for the radio spectrum shortage problem. In this paper, we propose a phase steering based hybrid cooperative relaying (PSHCR) technique for the generic relay-assisted spectrum-shared WSN, which consists of a secondary transmitter, multiple secondary relays (SRs), a secondary access point, and multiple primary access points. Basically, SRs in the proposed PSHCR technique operate with decode-and-forward (DF) relaying protocol, but it does not abandon the SRs that failed in decoding at the first hop. Instead, the SRs operate with amplify-and-forward (AF) protocol when they failed in decoding at the first hop. Furthermore, the SRs (regardless of operating with AF or DF protocol) that satisfy interference constraints to the primary network are allowed to transmit a signal to the secondary access point at the second hop. Note that phase distortion is compensated through phase steering operation at each relay node before second-hop transmission, and thus all relay nodes can operate in a fully distributed manner. Finally, we validate that the proposed PSHCR technique significantly outperforms the existing best single relay selection (BSR) technique and cooperative phase steering (CPS) technique in terms of outage performance via extensive computer simulations.
42

Gueron, Shay, Edoardo Persichetti and Paolo Santini. "Designing a Practical Code-Based Signature Scheme from Zero-Knowledge Proofs with Trusted Setup". Cryptography 6, no. 1 (January 27, 2022): 5. http://dx.doi.org/10.3390/cryptography6010005.

Abstract
This paper defines a new practical construction for a code-based signature scheme. We introduce a new protocol that is designed to follow the recent paradigm known as “Sigma protocol with helper”, and prove that the protocol’s security reduces directly to the Syndrome Decoding Problem. The protocol is then converted to a full-fledged signature scheme via a sequence of generic steps that include: removing the role of the helper; incorporating a variety of protocol optimizations (using e.g., Merkle trees); applying the Fiat–Shamir transformation. The resulting signature scheme is EUF-CMA secure in the QROM, with the following advantages: (a) Security relies on only minimal assumptions and is backed by a long-studied NP-complete problem; (b) the trusted setup structure allows for obtaining an arbitrarily small soundness error. This minimizes the required number of repetitions, thus alleviating a major bottleneck associated with Fiat–Shamir schemes. We outline an initial performance estimation to confirm that our scheme is competitive with respect to existing solutions of similar type.
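The Fiat–Shamir step mentioned above replaces the verifier by a hash of the prover's commitments and the message. A toy derivation of per-round binary challenges (the real scheme's commitment format and challenge space differ; names are ours):

```python
import hashlib, secrets

def fiat_shamir_challenges(commitments: bytes, message: bytes, rounds: int):
    """Derive the challenges non-interactively by hashing the prover's
    commitments together with the message being signed."""
    digest = hashlib.shake_256(commitments + message).digest(rounds)
    return [b & 1 for b in digest]       # one binary challenge per round

commitments = secrets.token_bytes(32)    # stand-in for the protocol transcript
print(fiat_shamir_challenges(commitments, b"message to sign", rounds=8))
```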
43

Sideridis, Georgios D., Panagiotis Simos, Angeliki Mouzaki, Dimitrios Stamovlasis and George K. Georgiou. "Can the Relationship Between Rapid Automatized Naming and Word Reading Be Explained by a Catastrophe? Empirical Evidence From Students With and Without Reading Difficulties". Journal of Learning Disabilities 52, no. 1 (May 17, 2018): 59–70. http://dx.doi.org/10.1177/0022219418775112.

Abstract
The purpose of the present study was to explain the moderating role of rapid automatized naming (RAN) in word reading with a cusp catastrophe model. We hypothesized that increases in RAN performance speed beyond a critical point would be associated with the disruption in word reading, consistent with a “generic shutdown” hypothesis. Participants were 587 elementary schoolchildren (Grades 2–4), among whom 87 had reading comprehension difficulties per the IQ-achievement discrepancy criterion. Data were analyzed via a cusp catastrophe model derived from the nonlinear dynamics systems theory. Results indicated that for children with reading comprehension difficulties, as naming speed falls below a critical level, the association between core reading processes (word recognition and decoding) becomes chaotic and unpredictable. However, after the significant common variance attributed to motivation, emotional, and internalizing symptoms measures from RAN scores was partialed out, its role as a bifurcation variable was no longer evident. Taken together, these findings suggest that RAN represents a salient cognitive measure that may be associated with psychoemotional processes that are, at least in part, responsible for unpredictable and chaotic word reading behavior among children with reading comprehension deficits.
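For orientation, the canonical cusp model has the potential and equilibrium surface below, with x the behavioral variable (word reading), a the asymmetry factor, and b the bifurcation factor (the role RAN plays in the study); the study's fitted model may be parameterized differently:

```latex
V(x) = \tfrac{1}{4}x^{4} - \tfrac{1}{2} b x^{2} - a x,
\qquad
\frac{\partial V}{\partial x} = x^{3} - b x - a = 0
```

Beyond the cusp point, small changes in b move the system across the fold of the equilibrium surface, producing the sudden, unpredictable shifts in reading behavior described above.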
44

Guan, Xuefeng, Peter van Oosterom, and Bo Cheng. "A Parallel N-Dimensional Space-Filling Curve Library and Its Application in Massive Point Cloud Management". ISPRS International Journal of Geo-Information 7, no. 8 (August 15, 2018): 327. http://dx.doi.org/10.3390/ijgi7080327.

Full text
Abstract
Because of their locality-preservation properties, Space-Filling Curves (SFCs) have been widely used in massive point-dataset management. However, the completeness, universality, and scalability of current SFC implementations are still not well resolved. To address this problem, a generic n-dimensional (nD) SFC library is proposed and validated for massive multiscale nD point management. The library supports two well-known types of SFCs (Morton and Hilbert) with an object-oriented design, and provides common interfaces for encoding, decoding, and nD box queries. Parallel implementation permits effective exploitation of the underlying multicore resources. During massive point-cloud management, each xyz point is assigned an additional random level-of-detail (LOD) value l. A unique 4D SFC key is generated from each xyzl with this library, and only the keys are then stored as flat records in an Oracle Index-Organized Table (IOT). The key-only schema benefits both data compression and multiscale clustering. Experiments show that the proposed nD SFC library provides complete functionality and robust scalability for massive point management. When loading 23 billion Light Detection and Ranging (LiDAR) points into an Oracle database, the parallel mode takes about 10 h, an estimated four times faster than sequential loading. Furthermore, 4D queries using the Hilbert keys take about 1–5 s and scale well with the dataset size.
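The core of such a library is the encode/decode pair that maps an nD point to a 1D key. A minimal sketch of the 4D Morton (Z-order) case, interleaving the bits of an xyzl tuple, is shown below; the function names and the 16-bits-per-dimension resolution are assumptions, not the library's actual API (the Hilbert variant additionally applies per-level rotations).

```python
# Minimal 4D Morton (Z-order) encode/decode by bit interleaving (sketch).
def morton_encode_4d(x: int, y: int, z: int, l: int, bits: int = 16) -> int:
    key = 0
    for i in range(bits):
        for d, v in enumerate((x, y, z, l)):
            key |= ((v >> i) & 1) << (4 * i + d)   # interleave bit i of each dim
    return key

def morton_decode_4d(key: int, bits: int = 16) -> tuple:
    coords = [0, 0, 0, 0]
    for i in range(bits):
        for d in range(4):
            coords[d] |= ((key >> (4 * i + d)) & 1) << i
    return tuple(coords)

key = morton_encode_4d(12, 34, 56, 7)   # x, y, z plus a random LOD value l
assert morton_decode_4d(key) == (12, 34, 56, 7)
print(hex(key))                         # the flat record stored in the IOT
```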
45

Lee, Sangku, Janghyuk Yoon, and Bang Chul Jung. "A Cooperative Phase-Steering Technique with On-Off Power Control for Spectrum Sharing-Based Wireless Sensor Networks". Sensors 20, no. 7 (March 30, 2020): 1942. http://dx.doi.org/10.3390/s20071942.

Full text
Abstract
With the growth in the number of Internet of Things (IoT) devices, a wide range of wireless sensor networks (WSNs) will be deployed for various applications. In general, WSNs are constrained by limited spectrum and energy resources. To circumvent these technical challenges, we propose a novel cooperative phase-steering (CPS) technique with simple on-off power control for a generic spectrum sharing-based WSN, which consists of a single secondary source (SS) node, multiple secondary relay (SR) nodes, a single secondary destination (SD) node, and multiple primary destination (PD) nodes. In the proposed technique, each SR node that succeeds in decoding the packet from the SS and whose interference power at the PD nodes is lower than a certain threshold is allowed to transmit the signal to the SD node. All SR nodes allowed to transmit adjust the phase of their transmit signal such that the phases of the signals received at the SD node from the SR nodes are aligned to a certain angle. Moreover, we mathematically analyze the outage probability of the proposed scheme. Our analytical and simulation results show that the proposed technique outperforms conventional cooperative relaying schemes in terms of outage probability. Through extensive computer simulations, the analytical results are shown to match the simulated outage probability well as a lower bound.
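The two-part admission rule (decode success plus an interference constraint) amounts to a simple on-off decision per SR node, after which the surviving nodes phase-align at the SD. A hypothetical one-shot sketch of that rule, with all thresholds and channel models assumed for illustration:

```python
# One-shot sketch of on-off power control with phase steering (illustration).
import numpy as np

rng = np.random.default_rng(1)
n_sr, n_pd = 8, 2
i_threshold = 0.5                           # interference constraint (assumed)

decoded = rng.random(n_sr) > 0.3            # which SRs decoded the SS packet
g_pd = rng.exponential(size=(n_sr, n_pd))   # SR -> PD interference channel gains
h_sd = (rng.normal(size=n_sr) + 1j * rng.normal(size=n_sr)) / np.sqrt(2)

# On-off rule: transmit only if decoded AND below threshold at every PD node.
on = decoded & (g_pd.max(axis=1) < i_threshold)
# Phase steering: active SRs rotate so their signals align at the SD node.
coherent_gain = np.sum(np.abs(h_sd[on]))
print(f"{on.sum()} of {n_sr} SR nodes transmit; coherent amplitude = {coherent_gain:.2f}")
```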
46

Segal, Yoram, and Ofer Hadar. "Covert channel implementation using motion vectors over H.264 compression". Revista de Estudos e Pesquisas Avançadas do Terceiro Setor 2, no. 2 (August 18, 2019): 111. http://dx.doi.org/10.31501/repats.v2i2.10567.

Full text
Abstract
Embedding information inside streaming video is a hot topic in the world of video broadcasting. Information embedding can be used for positive purposes, such as copyright protection; on the other hand, it can be used for malicious purposes, such as a remote hostile takeover of end-user devices. The basic idea of embedding information within a video is to take advantage of the sequence of frames flowing between the video server and the viewer. By casting foreign data into each frame, a hidden communication channel, namely a covert channel, is created. Attackers find the multimedia world in general, and video streaming in particular, an attractive backdoor for cyber-attacks. Multimedia covert channels provide reasonable bandwidth and long-lasting transmission streams, suitable for planting malicious information, and are therefore used as an exploit alternative. In this article, we propose a method to protect against attacks that use the video payload to transfer confidential data over a covert channel. This work is part of a large-scale study of video attack methods, whose goal is to build a generic platform for investigating the reliability of video sequences. The platform allows video to be encoded and decoded, and a plugin can be added to each encoder or decoder. Each plugin is an algorithm studied and developed in the framework of this study. One of the algorithms on this platform, and the topic of this article, is information transmission over video using motion vectors.
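To make the motion-vector channel concrete, the toy sketch below hides payload bits in the least-significant bits of motion-vector components. It operates on plain integer pairs and is only a schematic of the general idea; the paper's plugin works inside an actual H.264 encoder, and its embedding rule may differ.

```python
# Toy LSB embedding of a covert payload in motion vectors (schematic only).
def embed(mvs, bits):
    """Overwrite the LSB of each motion-vector component with a payload bit."""
    out, i = [], 0
    for mvx, mvy in mvs:
        if i < len(bits):
            mvx = (mvx & ~1) | bits[i]; i += 1
        if i < len(bits):
            mvy = (mvy & ~1) | bits[i]; i += 1
        out.append((mvx, mvy))
    return out

def extract(mvs, n):
    """Read the payload back from the LSBs, in the same scan order."""
    return [c & 1 for mv in mvs for c in mv][:n]

payload = [1, 0, 1, 1]
stego = embed([(4, -2), (7, 3)], payload)   # hypothetical per-block vectors
assert extract(stego, len(payload)) == payload
```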
47

Eberhard, Monika J. B., Jan-Hendrik Schleimer, Susanne Schreiber, and Bernhard Ronacher. "A temperature rise reduces trial-to-trial variability of locust auditory neuron responses". Journal of Neurophysiology 114, no. 3 (September 2015): 1424–37. http://dx.doi.org/10.1152/jn.00980.2014.

Full text
Abstract
The neurophysiology of ectothermic animals, such as insects, is affected by environmental temperature, as their body temperature fluctuates with ambient conditions. Changes in temperature alter properties of neurons and, consequently, have an impact on the processing of information. Nevertheless, nervous system function is often maintained over a broad temperature range, exhibiting a surprising robustness to variations in temperature. A special problem arises for acoustically communicating insects, as in these animals mate recognition and mate localization typically rely on the decoding of fast amplitude modulations in calling and courtship songs. In the auditory periphery, however, temporal resolution is constrained by intrinsic neuronal noise. Such noise predominantly arises from the stochasticity of ion channel gating and potentially impairs the processing of sensory signals. On the basis of intracellular recordings of locust auditory neurons, we show that intrinsic neuronal variability on the level of spikes is reduced with increasing temperature. We use a detailed mathematical model including stochastic ion channel gating to shed light on the underlying biophysical mechanisms in auditory receptor neurons: because of a redistribution of channel-induced current noise toward higher frequencies and specifics of the temperature dependence of the membrane impedance, membrane potential noise is indeed reduced at higher temperatures. This finding holds under generic conditions and physiologically plausible assumptions on the temperature dependence of the channels' kinetics and peak conductances. We demonstrate that the identified mechanism also can explain the experimentally observed reduction of spike timing variability at higher temperatures.
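The temperature dependence of channel kinetics in such models is conventionally captured by a Q10 factor, with rate constants scaled by Q10^((T - T_ref)/10). A minimal sketch with an assumed Q10 of 3, a typical textbook value for gating kinetics rather than a figure from this paper:

```python
# Q10 scaling of channel-gating rate constants (illustrative values).
def q10_factor(temp_c: float, ref_c: float = 23.0, q10: float = 3.0) -> float:
    """Multiplicative speed-up of rate constants at temp_c relative to ref_c."""
    return q10 ** ((temp_c - ref_c) / 10.0)

# Faster gating at higher temperature pushes channel-induced current noise
# toward higher frequencies, where the membrane's low-pass impedance
# attenuates it -- one way to rationalize the reduced voltage noise above.
for t in (23.0, 30.0, 35.0):
    print(f"{t:4.1f} C: rate constants x{q10_factor(t):.2f}")
```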
48

Quan, Yujun, Rongrong Zhang, Jian Li, Song Ji, Hengliang Guo, and Anzhu Yu. "Learning SAR-Optical Cross Modal Features for Land Cover Classification". Remote Sensing 16, no. 2 (January 22, 2024): 431. http://dx.doi.org/10.3390/rs16020431.

Full text
Abstract
Synthetic aperture radar (SAR) and optical images provide highly complementary ground information. The fusion of SAR and optical data can significantly enhance semantic segmentation results. However, fusion methods for multimodal data remain a challenge for current research due to significant disparities in the imaging mechanisms of the diverse sources. Our goal was to bridge the significant gaps between optical and SAR images by developing a dual-input model that utilizes image-level fusion. To improve on most existing state-of-the-art image fusion methods, which often assign equal weights to the modalities, we employed the principal component analysis (PCA) transform approach. Subsequently, we performed feature-level fusion on shallow feature maps, which retain rich geometric information. We also incorporated a channel attention module to highlight feature-rich channels and suppress irrelevant information. This step is crucial because SAR and optical images are substantially similar in shallow layers, such as in geometric features. In summary, we propose a generic multimodal fusion strategy, attachable to most encoder–decoder structures for feature classification tasks, designed with two inputs: one input is the optical image, and the other is the three-band fusion data obtained by combining the PCA component of the optical image with the SAR data. Our feature-level fusion method effectively integrates multimodal data. The efficiency of our approach was validated on various public datasets, and the results showed significant improvements when applied to several land cover classification models.
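The PCA step of such image-level fusion can be sketched in a few lines: project the optical bands onto their principal components, substitute a statistics-matched SAR band for the first component, and invert the transform. The sketch below follows the generic PCA-substitution recipe on random stand-in arrays; the paper's exact three-band construction may differ.

```python
# Generic PCA-substitution fusion of an optical image with a SAR band (sketch).
import numpy as np

rng = np.random.default_rng(2)
h, w = 64, 64
optical = rng.random((h, w, 3))     # stand-in for a co-registered 3-band image
sar = rng.random((h, w))            # stand-in for the SAR band

X = optical.reshape(-1, 3)
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)   # PCA via SVD
pcs = (X - mean) @ Vt.T                                   # component scores

# Match the SAR band's mean/std to PC1, then substitute it as the new PC1.
s = sar.reshape(-1)
s = (s - s.mean()) / s.std() * pcs[:, 0].std() + pcs[:, 0].mean()
pcs[:, 0] = s

fused = (pcs @ Vt + mean).reshape(h, w, 3)   # the three-band fusion input
print(fused.shape)
```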
49

Revtova, Elena. "Construction of the Definition of the "Credit" Category". Vestnik Volgogradskogo gosudarstvennogo universiteta. Ekonomika, no. 4 (February 2021): 122–31. http://dx.doi.org/10.15688/ek.jvolsu.2020.4.11.

Full text
Abstract
The subject of the research is the essence, nature, and meaning of the "credit" category. The goal is to develop a definition of the "credit" category reflecting the nature of the phenomenon it refers to. Hypothesis: the author believes that general scientific research methods based on the idea of triadicity can reveal and describe the nature of the phenomenon of credit, and allow a scientifically based definition of the corresponding category to be formulated. Methods: the formal-logical method and the method of triadic decoding of categories. Using the formal-logical method, the author defined: (a) the generic concept "loan" as a universe, of which the "credit" category is a part; (b) non-credit forms of loans: "bill", "bond", "factoring", "leasing"; (c) necessary and sufficient conditions for classifying an object as a "credit". A scientifically grounded definition of the "credit" category was formulated. As a result of deciphering the "credit" category, the essential qualities of the "credit" object, which together make up its essence, nature, and meaning, were revealed, and a detailed definition of the "credit" category was obtained. The research into the nature of credit has shown that the closest generic concept to credit is the loan; a necessary condition for classifying a credit as a loan is the transfer or receipt of money or goods on loan; repayment, payment, and urgency are sufficient conditions for classifying a credit as a loan category; as a result, "credit" is defined as a kind of loan category whose objective essential properties are repayment, payment, and urgency. In theory, the results are applicable as follows: using the formal-logical method to investigate the nature of a research object, to check the obtained definition of the object for logical correctness, and to introduce one's own definitions of the research object into the subject field with the help of "triads". In lending practice, the results are applicable as a scientifically grounded definition of the "credit" category and an understanding of the essence and nature of credit and its properties (repayment, payment, and urgency) for determining the specific variety of credit.
50

Raveendran, Nithin, Narayanan Rengaswamy, Filip Rozpędek, Ankur Raina, Liang Jiang, and Bane Vasić. "Finite Rate QLDPC-GKP Coding Scheme that Surpasses the CSS Hamming Bound". Quantum 6 (July 20, 2022): 767. http://dx.doi.org/10.22331/q-2022-07-20-767.

Full text
Abstract
Quantum error correction has recently been shown to benefit greatly from specific physical encodings of the code qubits. In particular, several researchers have considered the individual code qubits being encoded with the continuous-variable Gottesman-Kitaev-Preskill (GKP) code, with an outer discrete-variable code such as the surface code then imposed on these GKP qubits. Under such a concatenation scheme, the analog information from the inner GKP error correction improves the noise threshold of the outer code. However, the surface code has vanishing rate and demands substantial resources with growing distance. In this work, we concatenate the GKP code with generic quantum low-density parity-check (QLDPC) codes and demonstrate a natural way to exploit the GKP analog information in iterative decoding algorithms. We first show the noise thresholds for two lifted-product QLDPC code families, and then show the improvement in noise thresholds when the iterative decoder, a hardware-friendly min-sum algorithm (MSA), utilizes the GKP analog information. We also show that, when the GKP analog information is combined with a sequential update schedule for the MSA, the scheme surpasses the well-known CSS Hamming bound for these code families. Furthermore, we observe that the GKP analog information helps the iterative decoder escape harmful trapping sets in the Tanner graph of the QLDPC code, thereby eliminating or significantly lowering the error floor of the logical error rate curves. Finally, we discuss new fundamental and practical questions arising from this work on channel capacity under GKP analog information and on improving decoder design and analysis.
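The way analog information enters an iterative decoder is through the channel log-likelihood ratios: a GKP measurement yields a per-qubit reliability rather than a hard bit, and the min-sum updates propagate it. The sketch below runs plain min-sum with a flooding schedule on a toy parity-check matrix; the matrix, LLR values, and schedule are assumptions for illustration, not the paper's lifted-product codes or its sequential-schedule decoder.

```python
# Min-sum decoding with soft (analog-information) channel LLRs (toy example).
import numpy as np

H = np.array([[1, 1, 0, 1, 0],      # toy parity-check matrix (assumed)
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
llr = np.array([2.0, -0.4, 1.5, 0.3, -1.2])   # per-bit analog reliabilities

msgs = np.where(H == 1, llr, 0.0)   # initial variable -> check messages
total = llr.copy()
for _ in range(10):                 # flooding schedule, for brevity
    c2v = np.zeros_like(msgs)
    for c in range(H.shape[0]):
        idx = np.flatnonzero(H[c])
        for j in idx:
            others = [k for k in idx if k != j]
            # Check update: sign product and minimum magnitude of the others.
            c2v[c, j] = np.prod(np.sign(msgs[c, others])) * np.abs(msgs[c, others]).min()
    total = llr + c2v.sum(axis=0)               # posterior LLR per variable
    msgs = np.where(H == 1, total - c2v, 0.0)   # exclude own incoming message

print("hard decisions:", (total < 0).astype(int))
```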