
Journal articles on the topic "Auditory Signal Encoding Schemes"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the top 50 journal articles on the topic "Auditory Signal Encoding Schemes".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in ".pdf" format and read its abstract online, provided the relevant data are available in the metadata.

Browse journal articles from a wide variety of disciplines and compile your bibliography correctly.

1

Buschermöhle, Michael, Ulrike Feudel, Georg M. Klump, Mark A. Bee, and Jan A. Freund. "Signal Detection Enhanced by Comodulated Noise". Fluctuation and Noise Letters 06, no. 04 (December 2006): L339–L347. http://dx.doi.org/10.1142/s0219477506003483.

Abstract:
Signal detection in fluctuating background noise is a common problem in diverse fields of research and technology. It has been shown in hearing research that the detection of signals in noise that is correlated in amplitude across the frequency spectrum (comodulated) can be improved compared to uncorrelated background noise. We show that the mechanism leading to this effect is a general phenomenon which may be utilized in other areas where signal detection in comodulated noise needs to be done with a limited frequency resolution. Our model is based on neurophysiological experiments. The proposed signal detection scheme evaluates a fluctuating envelope, the statistics of which depend on the correlation structure across the spectrum of the noise. In our model, signal detection does not require a sophisticated neuronal network but can be accomplished through the encoding of the compressed stimulus envelope in the firing rate of neurons in the auditory system.
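As a rough illustration of the envelope principle described in this abstract (not the authors' neurophysiological model), the Python sketch below builds three noise bands that either share one slow envelope (comodulated) or carry differently phased, decorrelated envelopes, and compares how deep the summed signal's envelope dips. The band edges, sample rate, modulation rate, and percentile statistic are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(2 * fs) / fs

def band_noise(lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, rng.standard_normal(t.size))

def envelope_at(phase):
    return 1.0 + 0.9 * np.sin(2 * np.pi * 4.0 * t + phase)   # slow 4 Hz modulation

bands = [(500, 700), (900, 1100), (1300, 1500)]
# Comodulated: every band shares the same slow envelope.
comod = sum(band_noise(lo, hi) * envelope_at(0.0) for lo, hi in bands)
# Uncorrelated: each band gets a differently phased (decorrelated) envelope.
indep = sum(band_noise(lo, hi) * envelope_at(2 * np.pi * k / 3)
            for k, (lo, hi) in enumerate(bands))

for name, x in [("comodulated", comod), ("uncorrelated", indep)]:
    env = np.abs(hilbert(x))   # fluctuating envelope of the summed noise
    # Deeper envelope minima (small 5th percentile relative to the mean) are
    # what an envelope/rate-based detector can exploit to find a tone in the dips.
    print(name, round(np.percentile(env, 5) / env.mean(), 3))
```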
2

Schafer, Phillip B., and Dezhe Z. Jin. "Noise-Robust Speech Recognition Through Auditory Feature Detection and Spike Sequence Decoding". Neural Computation 26, no. 3 (March 2014): 523–56. http://dx.doi.org/10.1162/neco_a_00557.

Abstract:
Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences—one using a hidden Markov model–based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
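The template-based decoder described above scores spike sequences by the length of their longest common subsequence (LCS). A minimal sketch of such a similarity measure, using made-up neuron IDs and templates; the paper's actual feature detectors and normalization may differ.

```python
def lcs_length(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[m][n]

def classify(test_seq, templates):
    # Similarity = LCS length normalized by the longer sequence, so long
    # templates are not automatically favored (one reasonable choice).
    def score(tpl):
        return lcs_length(test_seq, tpl) / max(len(test_seq), len(tpl))
    return max(templates, key=lambda word: score(templates[word]))

# Sequences are neuron IDs in order of firing (hypothetical data).
templates = {"one": [3, 7, 7, 1, 9, 4], "two": [2, 5, 8, 8, 6, 0]}
print(classify([3, 7, 1, 9], templates))   # -> "one"
```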
3

Lee, Yong, Chung-Heon Lee, and Jun Dong Cho. "3D Sound Coding Color for the Visually Impaired". Electronics 10, no. 9 (27.04.2021): 1037. http://dx.doi.org/10.3390/electronics10091037.

Abstract:
Contemporary art is evolving beyond simply looking at works, and the development of various sensory technologies has had a great influence on culture and art. Accordingly, opportunities for the visually impaired to appreciate visual artworks through various senses such as auditory and tactile senses are expanding. However, insufficient sound expression and lack of portability make it less understandable and accessible. This paper attempts to convey a color and depth coding scheme to the visually impaired, based on alternative sensory modalities, such as hearing (by encoding the color and depth information with 3D sounds of audio description) and touch (to be used for interface-triggering information such as color and depth). The proposed color-coding scheme represents light, saturated, and dark colors for red, orange, yellow, yellow-green, green, blue-green, blue, and purple. The paper’s proposed system can be used for both mobile platforms and 2.5D (relief) models.
4

Guo, Yitong, Ping Zhou, Zhao Yao, and Jun Ma. "Biophysical mechanism of signal encoding in an auditory neuron". Nonlinear Dynamics 105, no. 4 (5.08.2021): 3603–14. http://dx.doi.org/10.1007/s11071-021-06770-z.

5

Gururaj, Bharathi, and G. N. Sadashivappa. "Channel encoding system for transmitting image over wireless network". International Journal of Electrical and Computer Engineering (IJECE) 10, no. 5 (1.10.2020): 4655. http://dx.doi.org/10.11591/ijece.v10i5.pp4655-4662.

Abstract:
Various encoding schemes have been introduced to date that focus on effective image transmission in the presence of error-prone artifacts in the wireless communication channel. A review of existing channel encoding systems shows that they are mostly oriented toward compression and pay less attention to faithful signal retention, as they lack an essential consideration of network states. Therefore, the proposed manuscript introduces a cost-effective lossless encoding scheme that ensures resilient transmission of different forms of images. Adopting an analytical research methodology, the modeling has been carried out so that a novel series of encoding operations is performed over an image, followed by an effective indexing mechanism. The study outcome confirms that the proposed system outperforms existing encoding schemes in every respect.
6

Smotherman, M. S., and P. M. Narins. "Hair cells, hearing and hopping: a field guide to hair cell physiology in the frog". Journal of Experimental Biology 203, no. 15 (1.08.2000): 2237–46. http://dx.doi.org/10.1242/jeb.203.15.2237.

Abstract:
For more than four decades, hearing in frogs has been an important source of information for those interested in auditory neuroscience, neuroethology and the evolution of hearing. Individual features of the frog auditory system can be found represented in one or many of the other vertebrate classes, but collectively the frog inner ear represents a cornucopia of evolutionary experiments in acoustic signal processing. The mechano-sensitive hair cell, as the focal point of transduction, figures critically in the encoding of acoustic information in the afferent auditory nerve. In this review, we provide a short description of how auditory signals are encoded by the specialized anatomy and physiology of the frog inner ear and examine the role of hair cell physiology and its influence on the encoding of sound in the frog auditory nerve. We hope to demonstrate that acoustic signal processing in frogs may offer insights into the evolution and biology of hearing not only in amphibians but also in reptiles, birds and mammals, including man.
7

Rupp, Kyle, Jasmine L. Hect, Madison Remick, Avniel Ghuman, Bharath Chandrasekaran, Lori L. Holt, and Taylor J. Abel. "Neural responses in human superior temporal cortex support coding of voice representations". PLOS Biology 20, no. 7 (28.07.2022): e3001675. http://dx.doi.org/10.1371/journal.pbio.3001675.

Abstract:
The ability to recognize abstract features of voice during auditory perception is an intricate feat of human audition. For the listener, this occurs in near-automatic fashion to seamlessly extract complex cues from a highly variable auditory signal. Voice perception depends on specialized regions of auditory cortex, including superior temporal gyrus (STG) and superior temporal sulcus (STS). However, the nature of voice encoding at the cortical level remains poorly understood. We leverage intracerebral recordings across human auditory cortex during presentation of voice and nonvoice acoustic stimuli to examine voice encoding at the cortical level in 8 patient-participants undergoing epilepsy surgery evaluation. We show that voice selectivity increases along the auditory hierarchy from supratemporal plane (STP) to the STG and STS. Results show accurate decoding of vocalizations from human auditory cortical activity even in the complete absence of linguistic content. These findings show an early, less-selective temporal window of neural activity in the STG and STS followed by a sustained, strongly voice-selective window. Encoding models demonstrate divergence in the encoding of acoustic features along the auditory hierarchy, wherein STG/STS responses are best explained by voice category and acoustics, as opposed to acoustic features of voice stimuli alone. This is in contrast to neural activity recorded from STP, in which responses were accounted for by acoustic features. These findings support a model of voice perception that engages categorical encoding mechanisms within STG and STS to facilitate feature extraction.
8

Suruliandi, A., and S. P. Raja. "Empirical evaluation of EZW and other encoding techniques in the wavelet-based image compression domain". International Journal of Wavelets, Multiresolution and Information Processing 13, no. 02 (March 2015): 1550012. http://dx.doi.org/10.1142/s0219691315500125.

Abstract:
This paper discusses the embedded zerotree wavelet (EZW) and other wavelet-based encoding techniques employed in lossy image compression. The objective of this paper is twofold. First, wavelet-based encoding techniques such as EZW, set partitioning in hierarchical trees (SPIHT), wavelet difference reduction (WDR), adaptively scanned wavelet difference reduction (ASWDR), set partitioned embedded block (SPECK), compression with reversible embedded wavelet (CREW) and space frequency quantization (SFQ) are implemented and their performance is analyzed. Second, wavelet-based compression schemes such as Haar, Daubechies and Biorthogonal are used to evaluate the performance of the encoding techniques. Peak signal-to-noise ratio (PSNR) and mean square error (MSE) are used as the evaluation metrics. From the results, it is observed that the SPIHT encoding technique provides better results than the other encoding schemes.
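For reference, the two evaluation metrics named in this abstract can be computed as follows (a minimal sketch assuming 8-bit grayscale images held in NumPy arrays; the image data here are synthetic):

```python
import numpy as np

def mse(original, reconstructed):
    # Mean squared error between two images of the same shape.
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB, relative to the 8-bit peak value.
    err = mse(original, reconstructed)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(img.astype(int) + rng.integers(-5, 6, img.shape), 0, 255).astype(np.uint8)
print(round(mse(img, noisy), 2), "MSE;", round(psnr(img, noisy), 2), "dB PSNR")
```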
9

Tajima, Satohiro, Hiromasa Takemura, Ikuya Murakami, and Masato Okada. "Neuronal Population Decoding Explains the Change in Signal Detection Sensitivity Caused by Task-Irrelevant Perceptual Bias". Neural Computation 22, no. 10 (October 2010): 2586–614. http://dx.doi.org/10.1162/neco_a_00019.

Abstract:
Spatiotemporal context in sensory stimulus has profound effects on neural responses and perception, and it sometimes affects task difficulty. Recently reported experimental data suggest that human detection sensitivity to motion in a target stimulus can be enhanced by adding a slow surrounding motion in an orthogonal direction, even though the illusory motion component caused by the surround is not relevant to the task. It is not computationally clear how the task-irrelevant component of motion modulates the subject's sensitivity to motion detection. In this study, we investigated the effects of encoding biases on detection performance by modeling the stochastic neural population activities. We modeled two types of modulation on the population activity profiles caused by a contextual stimulus: one type is identical to the activity evoked by a physical change in the stimulus, and the other is expressed more simply in terms of response gain modulation. For both encoding schemes, the motion detection performance of the ideal observer is enhanced by a task-irrelevant, additive motion component, replicating the properties observed for real subjects. The success of these models suggests that human detection sensitivity can be characterized by a noisy neural encoding that limits the resolution of information transmission in the cortical visual processing pathway. On the other hand, analyses of the neuronal contributions to the task predict that the effective cell populations differ between the two encoding schemes, posing a question concerning the decoding schemes that the nervous system used during illusory states.
10

Levy, Deborah F., and Stephen M. Wilson. "Categorical Encoding of Vowels in Primary Auditory Cortex". Cerebral Cortex 30, no. 2 (25.06.2019): 618–27. http://dx.doi.org/10.1093/cercor/bhz112.

Abstract:
Speech perception involves mapping from a continuous and variable acoustic speech signal to discrete, linguistically meaningful units. However, it is unclear where in the auditory processing stream speech sound representations cease to be veridical (faithfully encoding precise acoustic properties) and become categorical (encoding sounds as linguistic categories). In this study, we used functional magnetic resonance imaging and multivariate pattern analysis to determine whether tonotopic primary auditory cortex (PAC), defined as tonotopic voxels falling within Heschl’s gyrus, represents one class of speech sounds—vowels—veridically or categorically. For each of 15 participants, 4 individualized synthetic vowel stimuli were generated such that the vowels were equidistant in acoustic space, yet straddled a categorical boundary (with the first 2 vowels perceived as [i] and the last 2 perceived as [ɪ]). Each participant’s 4 vowels were then presented in a block design with an irrelevant but attention-demanding level change detection task. We found that in PAC bilaterally, neural discrimination between pairs of vowels that crossed the categorical boundary was more accurate than neural discrimination between equivalently spaced vowel pairs that fell within a category. These findings suggest that PAC does not represent vowel sounds veridically, but that encoding of vowels is shaped by linguistically relevant phonemic categories.
11

K., Manjunath Kamath, and R. Sanjeev Kunte. "Framework for reversible data hiding using cost-effective encoding system for video steganography". International Journal of Electrical and Computer Engineering (IJECE) 10, no. 5 (1.10.2020): 5487. http://dx.doi.org/10.11591/ijece.v10i5.pp5487-5496.

Abstract:
Reversible data hiding is more important than conventional data hiding schemes owing to its capability to generate distortion-free cover media. A review of existing reversible data hiding approaches shows a variety of schemes focusing mainly on the embedding mechanism; however, such schemes could be further improved with an encoding scheme for optimal embedding performance. Therefore, the proposed manuscript discusses a cost-effective scheme in which a novel encoding scheme is used with larger block sizes, reducing the dependencies over a large number of blocks. Further, a gradient-based image registration technique is applied to ensure higher quality of the reconstructed signal at the decoding end. The study outcome shows that the proposed data hiding technique performs better than existing schemes, with a good balance between security and restored signal quality upon extraction of the data.
12

Sussman, Harvey Martin. "A Functional Role for Neural Columns: Resolving F2 Transition Variability in Stop Place Categorization". Biolinguistics 10 (28.08.2016): 060–77. http://dx.doi.org/10.5964/bioling.9049.

Abstract:
Documented examples from neuroethology have revealed species-specific neural encoding mechanisms capable of mapping highly variable, but lawful, visual and auditory inputs within neural columns. By virtue of the entire column being the functional unit of both representation and processing, signal variation is collectively ‘absorbed’, and hence normalized, to help form natural categories possessing an underlying physically-based commonality. Stimulus-specific ‘tolerance ranges’ define the limits of signal variation, effectively shaping the functionality of the columnar-based processing. A conceptualization for an analogous human model utilizing this evolutionarily conserved neural encoding strategy for signal variability absorption is described for the non-invariance issue in stop place perception.
13

Schmiedchen, Kristina, Nicole Richter, Stephan Getzmann, Erich Schröger, and Rudolf Rübsamen. "ERP evidence for crossmodal interactions during the encoding of audio–visual motion offsets". Seeing and Perceiving 25 (2012): 104. http://dx.doi.org/10.1163/187847612x647360.

Abstract:
Previous behavioural studies have challenged the notion of unrestricted visual dominance in motion perception by demonstrating that the auditory modality can affect visual motion perception in the peripheral field. Thus far, electrophysiological evidence for the interplay between both modalities across space has not been provided, yet. The present study investigated crossmodal interactions during the encoding of bimodal motion offsets at different locations in space. To this end, moving audio–visual stimuli with either congruent or spatially and temporally disparate motion offsets were presented along different azimuthal trajectories in free-field while event-related potentials (ERPs) were recorded. The pattern of interactions at motion offset suggests that dominance effects between modalities are reflected in a latency shift of ERP components. The latencies of the visual offset N1 were shifted towards the respective termination of the moving acoustic signal at peripheral motion offset locations. The presence of a concurrent visual motion stream less consistently affected the latencies of auditory offset components at frontal motion offset locations which only partially supports the widely reported visual dominance in the central visual field. However, the results provide electrophysiological evidence that the auditory system can modulate the processing of motion streams in the peripheral visual field which is supposed to be due to superior tracking of an ongoing motion in the periphery by the auditory system compared to the visual system. Our findings are well in line with previous studies that highlighted the crucial role of the auditory modality when the reliability of the visual signal is degraded.
14

Shavali, Vennapusapalli, Sreeramareddy Gorlagummanahally Maripareddy, and Patil Ramana Reddy. "Reconfigurable data encoding schemes for on-chip interconnect power reduction in deep submicron technology". Indonesian Journal of Electrical Engineering and Computer Science 28, no. 3 (7.10.2022): 1330. http://dx.doi.org/10.11591/ijeecs.v28.i3.pp1330-1344.

Abstract:
With technology scaling, the sizes of both transistors and interconnects are reduced. Power dissipation due to dynamic switching is high in the interconnects. Suitable encoding schemes that reduce transitions between data bits are used to minimize interconnect power dissipation. In this paper, transitions between data bits are minimized by three novel data encoding schemes: the methods estimate bit transitions in a pair of data bits and perform half inversion or full inversion on one byte of data, thus reducing the switching activity by 50%. The encoders and decoders for the three encoding schemes are modelled in Verilog hardware description language (HDL) and implemented using an application-specific integrated circuit (ASIC) flow targeting 32 nm technology. The overall power dissipation of the encoding scheme is 1.04 μW, with an overhead area of 210 cells and an encoding delay of 340 ps. The encoder-decoder register transfer logic (RTL) code is implemented, and the total area required is 34980 units. The data encoding and decoding schemes are suitable for low-power applications.
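The decision rule sketched in this abstract (count bit transitions, then invert when too many bits would toggle) belongs to the bus-invert coding family. The following Python sketch shows the classic full-inversion form for an 8-bit bus; the paper's half-inversion variant and exact thresholds are not reproduced here.

```python
def hamming(a, b, width=8):
    # Number of bit positions that would toggle on the bus.
    return bin((a ^ b) & ((1 << width) - 1)).count("1")

def encode(words, width=8):
    prev_on_bus, out = 0, []
    for w in words:
        if hamming(prev_on_bus, w, width) > width // 2:
            w_enc, invert_flag = (~w) & ((1 << width) - 1), 1   # send inverted word
        else:
            w_enc, invert_flag = w, 0
        out.append((w_enc, invert_flag))   # the flag travels on one extra bus line
        prev_on_bus = w_enc
    return out

def decode(encoded, width=8):
    return [((~w) & ((1 << width) - 1)) if flag else w for w, flag in encoded]

data = [0x0F, 0xF0, 0xFF, 0x00, 0xAA]
enc = encode(data)
assert decode(enc) == data
print(enc)
```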
15

Kaur, Satwinder, Lavish Kansal, Gurjot Singh Gaba, and Mohannad A. M. Al-Ja'afari. "BER Assessment of FBMC Systems Augmented with Different Space-Time Coding Schemes Over Diverse Channels". International Journal of Engineering & Technology 7, no. 3.8 (7.07.2018): 111. http://dx.doi.org/10.14419/ijet.v7i3.8.16844.

Abstract:
Diverse encoding methodologies such as space-time block codes (STBC), orthogonal space-time block codes (OSTBC), and quasi-orthogonal space-time block codes (QOSTBC) have been proposed as alternatives to the basic Alamouti space-time encoding scheme for multiple input multiple output (MIMO) operation in existing wireless communication systems. Since the filter bank multi-carrier (FBMC) scheme is an integral part of 5th generation (5G) cellular systems, the performance of these schemes also needs to be investigated for the FBMC methodology. Alamouti and space-time block codes are widely used in MIMO systems because of their ability to achieve full diversity, and different channels are used at the receiver. In this work, we propose different approaches for the bit error rate (BER) of Alamouti, STBC3, and STBC4 in FBMC. These approaches are based on the type of space-time encoding and the number of receiving antennas used for each space-time encoding scheme when analyzing MIMO-FBMC. Moreover, we also investigate the performance of these proposed MIMO schemes over Rayleigh and additive white Gaussian noise (AWGN) channels and compare their BER versus signal-to-noise ratio (SNR) performance across the different channels.
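As a point of reference for the schemes compared above, here is a minimal sketch of the basic 2-antenna Alamouti space-time block encoder; the symbol values and normalization are illustrative, and the OSTBC/QOSTBC variants for more antennas are not shown.

```python
import numpy as np

def alamouti_encode(symbols):
    # Pairs (s1, s2) map to two transmit antennas over two symbol times:
    #   time 1: antenna1 -> s1,        antenna2 -> s2
    #   time 2: antenna1 -> -conj(s2), antenna2 -> conj(s1)
    s = np.asarray(symbols, dtype=complex).reshape(-1, 2)
    tx1 = np.empty(s.shape[0] * 2, dtype=complex)
    tx2 = np.empty_like(tx1)
    tx1[0::2], tx2[0::2] = s[:, 0], s[:, 1]
    tx1[1::2], tx2[1::2] = -np.conj(s[:, 1]), np.conj(s[:, 0])
    return tx1, tx2

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # example symbols
print(alamouti_encode(qpsk))
```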
16

Afzali, Maryam, Santiago Aja‐Fernández, and Derek K. Jones. "Direction‐averaged diffusion‐weighted MRI signal using different axisymmetric B‐tensor encoding schemes". Magnetic Resonance in Medicine 84, no. 3 (21.02.2020): 1579–91. http://dx.doi.org/10.1002/mrm.28191.

17

Burmeister, Sabrina S., Verónica G. Rodriguez Moncalvo, and Karin S. Pfennig. "Differential encoding of signals and preferences by noradrenaline in the anuran brain". Journal of Experimental Biology 223, no. 18 (9.07.2020): jeb214148. http://dx.doi.org/10.1242/jeb.214148.

Abstract:
Social preferences enable animals to selectively interact with some individuals over others. One influential idea for the evolution of social preferences is that preferred signals evolve because they elicit greater neural responses from sensory systems. However, in juvenile plains spadefoot toad (Spea bombifrons), a species with condition-dependent mating preferences, responses of the preoptic area, but not of the auditory midbrain, mirror adult social preferences. To examine whether this separation of signal representation from signal valuation generalizes to other anurans, we compared the relative contributions of noradrenergic signalling in the preoptic area and auditory midbrain of S. bombifrons and its close relative Spea multiplicata. We manipulated body condition in juvenile toads by controlling diet and used high pressure liquid chromatography to compare call-induced levels of noradrenaline and its metabolite MHPG in the auditory midbrain and preoptic area of the two species. We found that calls from the two species induced different levels of noradrenaline and MHPG in the auditory system, with higher levels measured in both species for the more energetic S. bombifrons call. In contrast, noradrenaline levels in the preoptic area mirrored patterns of social preferences in both S. bombifrons and S. multiplicata. That is, noradrenaline levels were higher in response to the preferred calls within each species and were modified by diet in S. bombifrons (with condition-dependent preferences) but not S. multiplicata (with condition-independent preferences). Our results are consistent with a potentially important role for preoptic noradrenaline in the development of social preferences and indicate that it could be a target of selection in the evolution of condition-dependent social preferences.
18

Zhu, Bi, Chuansheng Chen, Xuhao Shao, Wenzhi Liu, Zhifang Ye, Liping Zhuang, Li Zheng, Elizabeth F. Loftus, and Gui Xue. "Multiple interactive memory representations underlie the induction of false memory". Proceedings of the National Academy of Sciences 116, no. 9 (14.02.2019): 3466–75. http://dx.doi.org/10.1073/pnas.1817925116.

Abstract:
Theoretical and computational models such as transfer-appropriate processing (TAP) and global matching models have emphasized the encoding–retrieval interaction of memory representations in generating false memories, but relevant neural mechanisms are still poorly understood. By manipulating the sensory modalities (visual and auditory) at different processing stages (learning and test) in the Deese–Roediger–McDermott task, we found that the auditory-learning visual-test (AV) group produced more false memories (59%) than the other three groups (42∼44%) [i.e., visual learning visual test (VV), auditory learning auditory test (AA), and visual learning auditory test (VA)]. Functional imaging results showed that the AV group’s proneness to false memories was associated with (i) reduced representational match between the tested item and all studied items in the visual cortex, (ii) weakened prefrontal monitoring process due to the reliance on frontal memory signal for both targets and lures, and (iii) enhanced neural similarity for semantically related words in the temporal pole as a result of auditory learning. These results are consistent with the predictions based on the TAP and global matching models and highlight the complex interactions of representations during encoding and retrieval in distributed brain regions that contribute to false memories.
19

Scharinger, Mathias, William J. Idsardi, and Samantha Poe. "A Comprehensive Three-dimensional Cortical Map of Vowel Space". Journal of Cognitive Neuroscience 23, no. 12 (December 2011): 3972–82. http://dx.doi.org/10.1162/jocn_a_00056.

Abstract:
Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral–medial, anterior–posterior, and inferior–superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom–up information but crucially involves featural–phonetic top–down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.
20

Tomchik, Seth M., and Zhongmin Lu. "Modulation of Auditory Signal-to-Noise Ratios by Efferent Stimulation". Journal of Neurophysiology 95, no. 6 (June 2006): 3562–70. http://dx.doi.org/10.1152/jn.00063.2006.

Abstract:
One of the primary challenges that sensory systems face is extracting relevant information from background noise. In the auditory system, the ear receives efferent feedback, which may help it extract signals from noise. Here we directly test the hypothesis that efferent activity increases the signal-to-noise ratio (SNR) of the ear, using the relatively simple teleost ear. Tone-evoked saccular potentials were recorded before and after efferent stimulation, and the SNR of the responses was calculated. In quiet conditions, efferent stimulation suppressed saccular responses to a tone, reducing the SNR. However, when masking noise was added, efferent stimulation increased the SNR of the saccular responses within a range of stimulus combinations. These data demonstrate that auditory efferent feedback can increase SNR in conditions where a signal is masked by noise, thereby enhancing the encoding of signals in noise. Efferent feedback thus performs a fundamental signal processing function, helping the animal to hear sounds in difficult listening conditions.
21

Ni, Ruiye, David A. Bender, Amirali M. Shanechi, Jeffrey R. Gamble, and Dennis L. Barbour. "Contextual effects of noise on vocalization encoding in primary auditory cortex". Journal of Neurophysiology 117, no. 2 (1.02.2017): 713–27. http://dx.doi.org/10.1152/jn.00476.2016.

Abstract:
Robust auditory perception plays a pivotal function for processing behaviorally relevant sounds, particularly with distractions from the environment. The neuronal coding enabling this ability, however, is still not well understood. In this study, we recorded single-unit activity from the primary auditory cortex (A1) of awake marmoset monkeys ( Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise and vocalization babble. Noise effects on neural representation of target vocalizations were quantified by measuring the responses' similarity to those elicited by natural vocalizations as a function of signal-to-noise ratio. A clustering approach was used to describe the range of response profiles by reducing the population responses to a summary of four response classes (robust, balanced, insensitive, and brittle) under both noise conditions. This clustering approach revealed that, on average, approximately two-thirds of the neurons change their response class when encountering different noises. Therefore, the distortion induced by one particular masking background in single-unit responses is not necessarily predictable from that induced by another, suggesting the low likelihood of a unique group of noise-invariant neurons across different background conditions in A1. Regarding noise influence on neural activities, the brittle response group showed addition of spiking activity both within and between phrases of vocalizations relative to clean vocalizations, whereas the other groups generally showed spiking activity suppression within phrases, and the alteration between phrases was noise dependent. Overall, the variable single-unit responses, yet consistent response types, imply that primate A1 performs scene analysis through the collective activity of multiple neurons. NEW & NOTEWORTHY The understanding of where and how auditory scene analysis is accomplished is of broad interest to neuroscientists. In this paper, we systematically investigated neuronal coding of multiple vocalizations degraded by two distinct noises at various signal-to-noise ratios in nonhuman primates. In the process, we uncovered heterogeneity of single-unit representations for different auditory scenes yet homogeneity of responses across the population.
22

Spanton, Rory W., and Christopher J. Berry. "The unequal variance signal-detection model of recognition memory: Investigating the encoding variability hypothesis". Quarterly Journal of Experimental Psychology 73, no. 8 (27.02.2020): 1242–60. http://dx.doi.org/10.1177/1747021820906117.

Abstract:
Despite the unequal variance signal-detection (UVSD) model’s prominence as a model of recognition memory, a psychological explanation for the unequal variance assumption has yet to be verified. According to the encoding variability hypothesis, old item memory strength variance (σo) is greater than that of new items because items are incremented by variable, rather than fixed, amounts of strength at encoding. Conditions that increase encoding variability should therefore result in greater estimates of σo. We conducted three experiments to test this prediction. In Experiment 1, encoding variability was manipulated by presenting items for a fixed or variable (normally distributed) duration at study. In Experiment 2, we used an attentional manipulation whereby participants studied items while performing an auditory one-back task in which distractors were presented at fixed or variable intervals. In Experiment 3, participants studied stimuli with either high or low variance in word frequency. Across experiments, estimates of σo were unaffected by our attempts to manipulate encoding variability, even though the manipulations weakly affected subsequent recognition. Instead, estimates of σo tended to be positively correlated with estimates of the mean difference in strength between new and studied items ( d), as might be expected if σo generally scales with d. Our results show that it is surprisingly hard to successfully manipulate encoding variability, and they provide a signpost for others seeking to test the encoding variability hypothesis.
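For readers unfamiliar with the UVSD model referenced here, a toy sketch of how its parameters map to predicted hit and false-alarm rates, assuming new-item strengths distributed N(0, 1), old-item strengths N(d, sigma_o^2), and a yes/no criterion c; all numbers are illustrative.

```python
from math import erf, sqrt

def phi(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1 + erf(z / sqrt(2)))

def uvsd_rates(d, sigma_o, c):
    # Hit rate: P(old-item strength > c); false-alarm rate: P(new-item strength > c).
    hit_rate = 1 - phi((c - d) / sigma_o)
    false_alarm_rate = 1 - phi(c)
    return hit_rate, false_alarm_rate

print(uvsd_rates(d=1.2, sigma_o=1.25, c=0.6))
```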
23

Heikkilä, Jenni, Kimmo Alho, and Kaisa Tiippana. "Semantically Congruent Visual Stimuli Can Improve Auditory Memory". Multisensory Research 30, no. 7-8 (2017): 639–51. http://dx.doi.org/10.1163/22134808-00002584.

Abstract:
We investigated the effects of audiovisual semantic congruency on recognition memory performance. It has been shown previously that memory performance is better for semantically congruent stimuli that are presented together in different modalities (e.g., a dog’s bark with a picture of the dog) during encoding, compared to stimuli presented together with an incongruent or non-semantic stimulus across modalities. We wanted to clarify whether this congruency effect is also present when the effects of response bias and uncertainty of stimulus type are removed. The participants memorized auditory or visual stimuli (sounds, spoken words or written words), which were either presented with a semantically congruent stimulus in the other modality or presented alone during encoding. This experimental paradigm allowed us to utilize signal detection theory in performance analysis. In addition, it enabled us to eliminate possible effects caused by intermingling congruent stimuli with incongruent or non-semantic stimuli, as previously done in other studies. The memory of sounds was facilitated when accompanied by semantically congruent pictures or written words, in comparison to sounds presented in isolation. The memory of spoken words was facilitated by semantically congruent pictures. However, written words did not facilitate memory of spoken words, or vice versa. These results suggest that semantically congruent verbal and non-verbal visual stimuli presented in tandem with auditory counterparts, can enhance the precision of auditory encoding, except when the stimuli in each modality are both verbal.
24

Oliver, D., A. M. Taberner, H. Thurm, M. Sausbier, C. Arntz, P. Ruth, B. Fakler, and M. C. Liberman. "The Role of BKCa Channels in Electrical Signal Encoding in the Mammalian Auditory Periphery". Journal of Neuroscience 26, no. 23 (7.06.2006): 6181–89. http://dx.doi.org/10.1523/jneurosci.1047-06.2006.

25

Lau, Joseph C. Y., Patrick C. M. Wong, and Bharath Chandrasekaran. "Context-dependent plasticity in the subcortical encoding of linguistic pitch patterns". Journal of Neurophysiology 117, no. 2 (1.02.2017): 594–603. http://dx.doi.org/10.1152/jn.00656.2016.

Abstract:
We examined the mechanics of online experience-dependent auditory plasticity by assessing the influence of prior context on the frequency-following responses (FFRs), which reflect phase-locked responses from neural ensembles within the subcortical auditory system. FFRs were elicited to a Cantonese falling lexical pitch pattern from 24 native speakers of Cantonese in a variable context, wherein the falling pitch pattern randomly occurred in the context of two other linguistic pitch patterns; in a patterned context, wherein, the falling pitch pattern was presented in a predictable sequence along with two other pitch patterns, and in a repetitive context, wherein the falling pitch pattern was presented with 100% probability. We found that neural tracking of the stimulus pitch contour was most faithful and accurate when listening context was patterned and least faithful when the listening context was variable. The patterned context elicited more robust pitch tracking relative to the repetitive context, suggesting that context-dependent plasticity is most robust when the context is predictable but not repetitive. Our study demonstrates a robust influence of prior listening context that works to enhance online neural encoding of linguistic pitch patterns. We interpret these results as indicative of an interplay between contextual processes that are responsive to predictability as well as novelty in the presentation context. NEW & NOTEWORTHY Human auditory perception in dynamic listening environments requires fine-tuning of sensory signal based on behaviorally relevant regularities in listening context, i.e., online experience-dependent plasticity. Our finding suggests what partly underlie online experience-dependent plasticity are interplaying contextual processes in the subcortical auditory system that are responsive to predictability as well as novelty in listening context. These findings add to the literature that looks to establish the neurophysiological bases of auditory system plasticity, a central issue in auditory neuroscience.
26

Lee, Vanessa, Benjamin A. Pawlisch, Matheus Macedo-Lima, and Luke Remage-Healey. "Norepinephrine enhances song responsiveness and encoding in the auditory forebrain of male zebra finches". Journal of Neurophysiology 119, no. 1 (1.01.2018): 209–20. http://dx.doi.org/10.1152/jn.00251.2017.

Abstract:
Norepinephrine (NE) can dynamically modulate excitability and functional connectivity of neural circuits in response to changes in external and internal states. Regulation by NE has been demonstrated extensively in mammalian sensory cortices, but whether NE-dependent modulation in sensory cortex alters response properties in downstream sensorimotor regions is less clear. Here we examine this question in male zebra finches, a songbird species with complex vocalizations and a well-defined neural network for auditory processing of those vocalizations. We test the hypothesis that NE modulates auditory processing and encoding, using paired extracellular electrophysiology recordings and pattern classifier analyses. We report that a NE infusion into the auditory cortical region NCM (caudomedial nidopallium; analogous to mammalian secondary auditory cortex) enhances the auditory responses, burst firing, and coding properties of single NCM neurons. Furthermore, we report that NE-dependent changes in NCM coding properties, but not auditory response strength, are transmitted downstream to the sensorimotor nucleus HVC. Finally, NE modulation in the NCM of males is qualitatively similar to that observed in females: in both sexes, NE increases auditory response strengths. However, we observed a sex difference in the mechanism of enhancement: whereas NE increases response strength in females by decreasing baseline firing rates, NE increases response strength in males by increasing auditory-evoked activity. Therefore, NE signaling exhibits a compensatory sex difference to achieve a similar, state-dependent enhancement in signal-to-noise ratio and coding accuracy in males and females. In summary, our results provide further evidence for adrenergic regulation of sensory processing and modulation of auditory/sensorimotor functional connectivity. NEW & NOTEWORTHY This study documents that the catecholamine norepinephrine (also known as noradrenaline) acts in the auditory cortex to shape local processing of complex sound stimuli. Moreover, it also enhances the coding accuracy of neurons in the auditory cortex as well as in the downstream sensorimotor cortex. Finally, this study shows that while the sensory-enhancing effects of norepinephrine are similar in males and females, there are sex differences in the mode of action.
27

Ghahabi, Omid, and Mohammad Hassan Savoji. "Adaptive Variable Degree- Zero-Trees for Re-Encoding of Perceptually Quantized Wavelet Packet Transformed Audio and High-Quality Speech". ISRN Signal Processing 2011 (6.03.2011): 1–16. http://dx.doi.org/10.5402/2011/145758.

Abstract:
A fast, efficient, and scalable algorithm is proposed, in this paper, for re-encoding of perceptually quantized wavelet-packet transform (WPT) coefficients of audio and high quality speech and is called “adaptive variable degree- zero-trees” (AVDZ). The quantization process is carried out by taking into account some basic perceptual considerations and achieves good subjective quality with low complexity. The performance of the proposed AVDZ algorithm is compared with two other zero-tree-based schemes comprising (1) embedded zero-tree wavelet (EZW) and (2) the set partitioning in hierarchical trees (SPIHT). Since EZW and SPIHT are designed for image compression, some modifications are incorporated in these schemes for their better matching to audio signals. It is shown that the proposed modifications can improve their performance by about 15–25%. Furthermore, it is concluded that the proposed AVDZ algorithm outperforms these modified versions in terms of both output average bit-rates and computation times.
28

Fukushima, K., S. Kiyomoto, T. Tanaka, and K. Sakurai. "Analysis of Program Obfuscation Schemes with Variable Encoding Technique". IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E91-A, no. 1 (1.01.2008): 316–29. http://dx.doi.org/10.1093/ietfec/e91-a.1.316.

29

Ahmed, Md Firoz, Md Sofiqul Islam, and Abu Zafor Md Touhidul Islam. "Comparative Performance Assessment of V-Blast Encoded 8×8 MIMO MC-CDMA Wireless System". International Journal on AdHoc Networking Systems 11, no. 2 (30.04.2021): 1–7. http://dx.doi.org/10.5121/ijans.2021.11201.

Abstract:
The bit error rate (BER) performance of a V-BLAST encoded 8×8 MIMO MC-CDMA wireless communication system has been investigated in this paper for different signal detection (MMSE and ZF) and digital modulation (BPSK, QPSK, DPSK, and 4QAM) schemes for grayscale image transmission. The proposed wireless system employs ½-rate convolutional and cyclic redundancy check (CRC) channel encoding over the AWGN channel and a Walsh-Hadamard code as the orthogonal spreading code. The present MATLAB-based simulation study demonstrates that the V-BLAST encoded 8×8 MIMO MC-CDMA wireless system with ½-rate convolutional and cyclic redundancy check (CRC) channel encoding shows good performance for grayscale image transmission when BPSK digital modulation and the ZF signal detection scheme are used.
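A hedged sketch of the two linear detectors compared in this abstract (ZF and MMSE) for an 8×8 flat-fading MIMO channel with BPSK symbols; the channel model, noise level, unit symbol energy, and perfect channel knowledge are simplifying assumptions, and the MC-CDMA spreading and channel coding stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx = n_rx = 8
# Rayleigh-like complex Gaussian channel matrix (illustrative).
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], n_tx) + 0j            # BPSK symbols
noise_var = 0.01
y = H @ x + np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))

x_zf = np.linalg.pinv(H) @ y                      # zero-forcing detection
W_mmse = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_tx)) @ H.conj().T
x_mmse = W_mmse @ y                               # linear MMSE detection (unit symbol energy)

print(np.sign(x_zf.real) == x.real)               # detected BPSK bits vs. transmitted
```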
30

Malone, B. J., Marc A. Heiser, Ralph E. Beitel, and Christoph E. Schreiner. "Background noise exerts diverse effects on the cortical encoding of foreground sounds". Journal of Neurophysiology 118, no. 2 (1.08.2017): 1034–54. http://dx.doi.org/10.1152/jn.00152.2017.

Abstract:
The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may contribute to robust signal representation and discrimination in acoustic environments with prominent background noise.
31

Niwa, Mamiko, Kevin N. O'Connor, Elizabeth Engall, Jeffrey S. Johnson, and M. L. Sutter. "Hierarchical effects of task engagement on amplitude modulation encoding in auditory cortex". Journal of Neurophysiology 113, no. 1 (1.01.2015): 307–27. http://dx.doi.org/10.1152/jn.00458.2013.

Abstract:
We recorded from middle lateral belt (ML) and primary (A1) auditory cortical neurons while animals discriminated amplitude-modulated (AM) sounds and also while they sat passively. Engagement in AM discrimination improved ML and A1 neurons' ability to discriminate AM with both firing rate and phase-locking; however, task engagement affected neural AM discrimination differently in the two fields. The results suggest that these two areas utilize different AM coding schemes: a “single mode” in A1 that relies on increased activity for AM relative to unmodulated sounds and a “dual-polar mode” in ML that uses both increases and decreases in neural activity to encode modulation. In the dual-polar ML code, nonsynchronized responses might play a special role. The results are consistent with findings in the primary and secondary somatosensory cortices during discrimination of vibrotactile modulation frequency, implicating a common scheme in the hierarchical processing of temporal information among different modalities. The time course of activity differences between behaving and passive conditions was also distinct in A1 and ML and may have implications for auditory attention. At modulation depths ≥ 16% (approximately behavioral threshold), A1 neurons' improvement in distinguishing AM from unmodulated noise is relatively constant or improves slightly with increasing modulation depth. In ML, improvement during engagement is most pronounced near threshold and disappears at highly suprathreshold depths. This ML effect is evident later in the stimulus, and mainly in nonsynchronized responses. This suggests that attention-related increases in activity are stronger or longer-lasting for more difficult stimuli in ML.
32

Kim, Seonjae, Dongsan Jun, Byung-Gyu Kim, Seungkwon Beack, Misuk Lee, and Taejin Lee. "Two-Dimensional Audio Compression Method Using Video Coding Schemes". Electronics 10, no. 9 (6.05.2021): 1094. http://dx.doi.org/10.3390/electronics10091094.

Abstract:
As video compression is one of the core technologies that enables seamless media streaming within the available network bandwidth, it is crucial to employ media codecs to support powerful coding performance and higher visual quality. Versatile Video Coding (VVC) is the latest video coding standard developed by the Joint Video Experts Team (JVET) that can compress original data hundreds of times in the image or video; the latest audio coding standard, Unified Speech and Audio Coding (USAC), achieves a compression rate of about 20 times for audio or speech data. In this paper, we propose a pre-processing method to generate a two-dimensional (2D) audio signal as an input of a VVC encoder, and investigate the applicability to 2D audio compression using the video coding scheme. To evaluate the coding performance, we measure both signal-to-noise ratio (SNR) and bits per sample (bps). The experimental result shows the possibility of researching 2D audio encoding using video coding schemes.
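The pre-processing step described above maps a one-dimensional audio signal into a two-dimensional array so that a video encoder can treat it like a picture. A minimal sketch of one such mapping; the frame width, 8-bit quantization, and row-major layout are this example's assumptions, not the paper's exact method.

```python
import numpy as np

def audio_to_2d(samples, width=64):
    # Pad the 1-D signal to a multiple of the frame width, then reshape.
    x = np.asarray(samples, dtype=np.float64)
    pad = (-len(x)) % width
    x = np.pad(x, (0, pad))
    # Map floats in [-1, 1] to 8-bit "pixel" values for the video encoder.
    img = np.clip((x + 1) * 127.5, 0, 255).astype(np.uint8)
    return img.reshape(-1, width)

audio = np.sin(2 * np.pi * 440 * np.arange(8000) / 16000)   # synthetic test tone
print(audio_to_2d(audio).shape)   # e.g. (125, 64)
```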
33

Heinz, Michael G., H. Steven Colburn, and Laurel H. Carney. "Evaluating Auditory Performance Limits: I. One-Parameter Discrimination Using a Computational Model for the Auditory Nerve". Neural Computation 13, no. 10 (1.10.2001): 2273–316. http://dx.doi.org/10.1162/089976601750541804.

Abstract:
A method for calculating psychophysical performance limits based on stochastic neural responses is introduced and compared to previous analytical methods for evaluating auditory discrimination of tone frequency and level. The method uses signal detection theory and a computational model for a population of auditory nerve (AN) fiber responses. The use of computational models allows predictions to be made over a wider parameter range and with more complete descriptions of AN responses than in analytical models. Performance based on AN discharge times (all-information) is compared to performance based only on discharge counts (rate-place). After the method is verified over the range of parameters for which previous analytical models are applicable, the parameter space is then extended. For example, a computational model of AN activity that extends to high frequencies is used to explore the common belief that rate-place information is responsible for frequency encoding at high frequencies due to the rolloff in AN phase locking above 2 kHz. This rolloff is thought to eliminate temporal information at high frequencies. Contrary to this belief, results of this analysis show that rate-place predictions for frequency discrimination are inconsistent with human performance in the dependence on frequency for high frequencies and that there is significant temporal information in the AN up to at least 10 kHz. In fact, the all-information predictions match the functional dependence of human performance on frequency, although optimal performance is much better than human performance. The use of computational AN models in this study provides new constraints on hypotheses of neural encoding of frequency in the auditory system; however, the method is limited to simple tasks with deterministic stimuli. A companion article in this issue (“Evaluating Auditory Performance Limits: II”) describes an extension of this approach to more complex tasks that include random variation of one parameter, for example, random-level variation, which is often used in psychophysics to test neural encoding hypotheses.
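The analysis combines a computational auditory nerve model with signal detection theory. As a toy illustration of the detection-theoretic step only, the sketch below computes the sensitivity d' of an ideal observer comparing Poisson spike counts for two nearby tone frequencies; the firing rates are made up, and the paper uses full population responses (rate-place and all-information) rather than a single fiber rate.

```python
import numpy as np

def dprime(rate_a, rate_b, duration):
    # Expected Poisson counts for the two stimuli; variance equals the mean.
    mu_a, mu_b = rate_a * duration, rate_b * duration
    return (mu_b - mu_a) / np.sqrt(0.5 * (mu_a + mu_b))

# Hypothetical rates (spikes/s) for two tones 4 Hz apart, 200 ms stimulus.
print(dprime(rate_a=100.0, rate_b=104.0, duration=0.2))
```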
34

Kumar N. S, Pradeep, and H. N. Suresh. "Encoding time optimization for intra-frame reconstruction schemes for H.264". International Journal of Engineering & Technology 7, no. 3.3 (8.06.2018): 245. http://dx.doi.org/10.14419/ijet.v7i2.33.14161.

Abstract:
The area of video compression is gaining momentum, with many different compression protocols and coding methods evolving. However, the majority of existing communication devices still use H.264 as the standard compression protocol. We reviewed a number of existing systems that use H.264 for video compression and found that it does not offer processing efficiency, although it may deliver better reconstructed data at the receiver end. This paper introduces a distinctive optimization mechanism that reduces encoding time by introducing a cost-function formulation. The presented architecture runs over conventional H.264 and adds value by making it more resource friendly. The algorithm takes an MPEG file as input and optimizes the encoding time used for coding I and P frames. The study outcome shows that the proposed system offers a better reduction in encoding time compared with the existing Lagrangian cost-function mechanism. As a contribution, our results show a better balance between the quality of the reconstructed signal and the encoding time.
35

Menchetti, Marco, Liam W. Bussey, Daniel Gilks, Tim Whitley, Costas Constantinou, and Kai Bongs. "Digitally encoded RF to optical data transfer using excited Rb without the use of a local oscillator". Journal of Applied Physics 133, no. 1 (7.01.2023): 014401. http://dx.doi.org/10.1063/5.0129107.

Abstract:
We present a passive RF to optical data transfer without a local oscillator using an atomic “Rydberg” receiver. We demonstrate the ability to detect a 5G frequency carrier wave (3.5 GHz) and decode digital data from the carrier wave without the use of a local oscillator to detect the modulation of the RF signal. The encoding and decoding of the data are achieved using an intermediate frequency (IF). The rubidium vapor detects the changes in the carrier wave's amplitude, which comes from the mixing of the IF onto the carrier. The rubidium vapor then upconverts the IF into the optical domain for detection. Using this technique for data encoding and extraction, we achieve data rates up to 238 kbps with a variety of encoding schemes.
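A rough numerical sketch of the encoding idea described above: the digital data keys an intermediate frequency (IF) tone, which is then mixed onto the carrier as an amplitude modulation that the atomic receiver can detect. All frequencies are scaled down to the kHz range so the example runs quickly, and on-off keying of the IF is just one illustrative choice, not necessarily the scheme used in the paper.

```python
import numpy as np

fs = 1_000_000                    # sample rate (Hz), illustrative
f_carrier = 100_000               # stand-in for the 3.5 GHz carrier
f_if = 10_000                     # intermediate frequency
bit_rate = 1_000
bits = np.array([1, 0, 1, 1, 0])  # hypothetical data

t = np.arange(int(fs * len(bits) / bit_rate)) / fs
bit_wave = bits[(t * bit_rate).astype(int)]          # on-off keyed data stream
if_tone = bit_wave * np.cos(2 * np.pi * f_if * t)    # data carried on the IF
carrier = np.cos(2 * np.pi * f_carrier * t)
rf = (1 + 0.5 * if_tone) * carrier                   # AM of the carrier by the IF
print(rf[:5])
```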
36

Ro, Jae-Hyun, Bit-Na Kwon, Seongjoo Lee, and Hyoung-Kyu Song. "Adaptive encoding scheme providing optimal performance for Internet of Things industry in the backscatter system". International Journal of Distributed Sensor Networks 13, no. 2 (February 2017): 155014771769362. http://dx.doi.org/10.1177/1550147717693620.

Abstract:
In the wireless communication market, the Internet of Things is one of the major issues. The Internet of Things is a technology which connects all objects to the Internet and enables them to share information with each other. Energy harvesting for batteryless devices is also one of the new issues in realizing the Internet of Things. Through the realization of the Internet of Things, all objects can connect to the Internet and share their own information with each other. Since sensor communication with a low-power battery, or without a battery, is made possible by the backscatter technique, backscattering is highly useful for the realization of the Internet of Things. This article proposes a scheme in which a sensor is connected to the Internet and optimal performance is obtained. For a reliable backscatter technique for mobile users, this article proposes two encoding schemes, FM0 and Miller-n. Also, to minimize the loss of data rate, these two encoding schemes are used adaptively according to mobility. The mobility is decided by the signal-to-noise ratio and the channel state. For mobile users, the proposed backscatter technique achieves good error performance thanks to the proposed adaptive encoding scheme.
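For illustration, a hedged sketch of FM0 (bi-phase space) baseband encoding, one of the two line codes named above: the level inverts at every bit boundary, and a data 0 adds an extra mid-bit inversion. Miller-n encoding and the adaptive switching rule are not shown.

```python
def fm0_encode(bits, start_level=1):
    level, out = start_level, []
    for b in bits:
        level = -level                 # inversion at every bit boundary
        first_half = level
        if b == 0:
            level = -level             # extra mid-bit inversion encodes a 0
        out.extend([first_half, level])
    return out                         # two half-bit samples per data bit

print(fm0_encode([1, 0, 1, 1, 0]))
```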
37

Bendor, Daniel, and Xiaoqin Wang. "Neural Coding of Periodicity in Marmoset Auditory Cortex". Journal of Neurophysiology 103, no. 4 (April 2010): 1809–22. http://dx.doi.org/10.1152/jn.00281.2009.

Abstract:
Pitch, our perception of how high or low a sound is on a musical scale, crucially depends on a sound's periodicity. If an acoustic signal is temporally jittered so that it becomes aperiodic, the pitch will no longer be perceivable even though other acoustical features that normally covary with pitch are unchanged. Previous electrophysiological studies investigating pitch have typically used only periodic acoustic stimuli, and as such these studies cannot distinguish between a neural representation of pitch and an acoustical feature that only correlates with pitch. In this report, we examine in the auditory cortex of awake marmoset monkeys ( Callithrix jacchus) the neural coding of a periodicity's repetition rate, an acoustic feature that covaries with pitch. We first examine if individual neurons show similar repetition rate tuning for different periodic acoustic signals. We next measure how sensitive these neural representations are to the temporal regularity of the acoustic signal. We find that neurons throughout auditory cortex covary their firing rate with the repetition rate of an acoustic signal. However, similar repetition rate tuning across acoustic stimuli and sensitivity to temporal regularity were generally only observed in a small group of neurons found near the anterolateral border of primary auditory cortex, the location of a previously identified putative pitch processing center. These results suggest that although the encoding of repetition rate is a general component of auditory cortical processing, the neural correlate of periodicity is confined to a special class of pitch-selective neurons within the putative pitch processing center of auditory cortex.
38

Halliday, David F., and Ian Moore. "A comparison of random and periodic marine simultaneous-source encoding". Leading Edge 37, no. 6 (June 2018): 471a1–471a11. http://dx.doi.org/10.1190/tle37060471a1.1.

Abstract:
Separation algorithms for marine simultaneous-source data generally require encoded sources. Proposed encoding schemes include random time delays (time dithers), periodic time sequences (such as those referred to as seismic apparition), and periodic phase sequences (for sources with fully controlled phase like a marine vibrator). At a given frequency, time dithers spread energy at a given wavenumber over all wavenumbers, phase sequences shift the energy by a fixed wavenumber (independent of frequency), and time sequences split energy over multiple wavenumbers in a frequency-dependent way. The way the encoding scheme distributes energy in the wavenumber domain is important because separation algorithms generally assume that, in the absence of encoding, all energy falls into the signal cone. Time dithering allows separation by inversion. At low frequencies, the inverse problem is overdetermined and easily solved. At higher frequencies, sparse inversion works well, provided the data exhibit a sufficiently sparse representation (consistent with compressive sensing theory). Phase sequencing naturally separates the sources in the wavenumber domain at low frequencies. At higher frequencies, ambiguities must be resolved using assumptions such as limited dispersion and limited complexity. Time sequencing allows a simple separation at low frequencies based on a scaling and subtraction process in the wavenumber domain. However, the scaling becomes unstable near notch frequencies, including DC. At higher frequencies, a similar problem to that for phase sequencing must be solved. The encoding schemes, therefore, have similar overall properties and require similar assumptions, but differ in some potentially important details. Phase sequencing is clearly only applicable to phase-controllable sources, and the different encoding schemes have other implications for data acquisition, for example, with respect to operational complexity, efficiency, spatial sampling, and tolerance to errors.
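To make the wavenumber-domain picture above concrete, the following toy sketch compares the two encodings on a single, arbitrary temporal-frequency slice of data sampled across shots: an alternating 0/π phase sequence shifts the shot-axis (wavenumber) spectrum by exactly the Nyquist wavenumber, whereas random time dithers apply a random phase per shot and scatter energy across wavenumbers. The frequency, dither range, and data values are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots = 64
d = rng.standard_normal(n_shots)             # one frequency slice, sampled across shots

# Periodic phase sequencing: flip the source phase by pi from shot to shot,
# i.e. multiply by exp(i*pi*n) = (-1)^n.
d_phase = d * (-1.0) ** np.arange(n_shots)

# Random time dithering: a delay t_n multiplies this frequency slice by
# exp(-2*pi*i*f*t_n), i.e. a random phase per shot (values chosen for illustration).
f_hz = 30.0                                  # assumed frequency of this slice
t_dither = rng.uniform(0.0, 4e-3, n_shots)   # assumed random dithers of up to 4 ms
d_dither = d * np.exp(-2j * np.pi * f_hz * t_dither)

D = np.fft.fft(d)                            # spectra along the shot (wavenumber) axis
D_phase = np.fft.fft(d_phase)
D_dither = np.fft.fft(d_dither)

# Phase sequencing = the original spectrum circularly shifted by the Nyquist wavenumber:
print(np.allclose(np.abs(D_phase), np.abs(np.roll(D, n_shots // 2))))   # True
# Time dithering produces no such simple shift; the energy is spread out:
print(np.allclose(np.abs(D_dither), np.abs(np.roll(D, n_shots // 2))))  # False
```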
39

Rhode, W. S., and S. Greenberg. "Encoding of amplitude modulation in the cochlear nucleus of the cat". Journal of Neurophysiology 71, no. 5 (May 1, 1994): 1797–825. http://dx.doi.org/10.1152/jn.1994.71.5.1797.

Abstract:
1. Amplitude modulation (AM) is a pervasive property of acoustic communication systems. In the present study we investigate neural temporal mechanisms in the auditory nerve and cochlear nuclei of the pentobarbital sodium-anesthetized cat associated with the neural coding of 100% AM tones, both in quiet and in the presence of wideband, quasi-flat-spectrum noise. The AM carrier frequency was set to the neuron's characteristic frequency (CF) and the sound pressure level (SPL) of acoustic stimuli was varied over a wide dynamic range of intensities (≤40 dB). The temporal AM-encoding capability of auditory neurons was measured by computing the synchronization coefficient (SC) of the neural response to the signal's modulation and carrier frequency. The temporal modulation transfer function (tMTF) of a neuron was then computed by measuring the SC of the response to signals of variable fmod (50–2550 Hz). 2. Neurons in the cochlear nuclei synchronize on average more highly to the modulation frequency than fibers of comparable CF, threshold, and spontaneous rate in the auditory nerve. The disparity in performance is greatest at high SPLs and low signal-to-noise ratios. However, there is a significant degree of diversity in AM-encoding capability among neurons in both the cochlear nuclei and auditory nerve. Among auditory nerve fibers (ANFs), low- and medium-spontaneous-rate (SR) units (SR < 18 spikes/s) phase-lock with greater precision than comparable high-SR units at any given frequency, particularly at moderate to high SPLs, consistent with previous studies. 3. The phase-locking capabilities of neurons in the cochlear nucleus are considerably more variable than in the auditory nerve. Moreover, the variability itself depends on two distinct measures of phase-locking performance. Most ANFs are capable of phase-locking to frequencies as high as 3–4 kHz. In the cochlear nucleus many unit types do not phase-lock to modulation frequencies > 1 kHz. As a result, phase-locking performance is measured on the basis of two parameters: maximum synchronization, irrespective of stimulus frequency, and the upper frequency limit for significant phase-locking. 4. Cochlear nucleus neurons may be divided into three distinct groups on the basis of maximum synchronization capability. In group 1 are the primary-like (PL) units of the anteroventral division, whose phase-locking capabilities are comparable with those of high-SR ANFs. (ABSTRACT TRUNCATED AT 400 WORDS)
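The synchronization coefficient used in such studies is commonly computed as vector strength. The minimal sketch below is a generic implementation, not the authors' analysis code: each spike time is projected onto the phase of a reference frequency, the resulting unit vectors are averaged, and sweeping the modulation frequency then yields a temporal modulation transfer function.

```python
import numpy as np

def synchronization_coefficient(spike_times_s, freq_hz):
    """Vector strength of spike times relative to a reference frequency:
    1.0 means perfect phase locking, values near 0 mean no locking."""
    phases = 2.0 * np.pi * freq_hz * np.asarray(spike_times_s, dtype=float)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(phases)

def tmtf(spike_trains_by_fmod):
    """Temporal MTF: SC to the modulation frequency for each tested fmod,
    given a dict {fmod_hz: spike_times_s}."""
    return {f: synchronization_coefficient(st, f) for f, st in spike_trains_by_fmod.items()}

# Spikes locked exactly to a 100 Hz modulation give SC = 1.0:
print(synchronization_coefficient(np.arange(50) / 100.0, 100.0))
```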
40

Jensen, Jesper, and Richard Heusdens. "Schemes for optimal frequency-differential encoding of sinusoidal model parameters". Signal Processing 83, no. 8 (August 2003): 1721–35. http://dx.doi.org/10.1016/s0165-1684(03)00069-0.

41

Waters, D. "The peripheral auditory characteristics of noctuid moths: information encoding and endogenous noise". Journal of Experimental Biology 199, no. 4 (April 1, 1996): 857–68. http://dx.doi.org/10.1242/jeb.199.4.857.

Abstract:
The ability of the noctuid A1 cell acoustic receptor to encode biologically relevant information from bat echolocation calls is examined. Short-duration stimuli (less than approximately 6 ms) reduce the dynamic resolution of the receptor, making intensity, and hence range, estimates of foraging bats unreliable. This low dynamic range is further reduced by inaccurate encoding of stimulus intensity, reducing the real dynamic range of the A1 cell to 1 bit at stimulus durations below 3.1 ms. Interspike interval is also an unreliable measure of stimulus intensity at low stimulus levels and/or for short-duration stimuli. The quantity of information encoded per stimulus is reduced as the presentation rate of stimuli is increased. The spontaneous generation of A1 cell action potentials may reduce the ability of the moth to discriminate bat from non-bat signals. Even with a recognition criterion of three A1 cell spikes per call, the moth would regularly make wrong decisions about a bat being present in the immediate environment. Removing this noise would necessitate a considerable loss of information through filtering at the interneurone level. It is proposed that, for bats using short-duration calls, the moth would only be able to recognise an approaching bat from the repetitious nature of the incoming signal.
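To make the false-alarm argument concrete, the sketch below treats spontaneous A1 firing as a Poisson process and asks how often spontaneous spikes alone would reach a spike-count criterion within one call's integration window. The spontaneous rate and window length are placeholder values for illustration; only the three-spike criterion comes from the abstract.

```python
from math import exp, factorial

def p_false_alarm(spont_rate_hz, window_s, criterion_spikes=3):
    """Probability that a Poisson spontaneous process alone produces at
    least `criterion_spikes` spikes within the integration window."""
    lam = spont_rate_hz * window_s
    p_below = sum(exp(-lam) * lam**k / factorial(k) for k in range(criterion_spikes))
    return 1.0 - p_below

# Illustrative numbers only (not measurements from the paper):
print(p_false_alarm(spont_rate_hz=20.0, window_s=0.05, criterion_spikes=3))  # ~0.08
```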
42

Chaudhary, Sushank, Deepika Thakur, and Abhishek Sharma. "10 Gbps-60 GHz RoF Transmission System for 5 G Applications". Journal of Optical Communications 40, no. 3 (July 26, 2019): 281–84. http://dx.doi.org/10.1515/joc-2017-0079.

Abstract:
This work focuses on the transmission of 10 Gbps data and a 60 GHz millimeter-wave signal over 60 km of optical fiber for 5G applications. 5G networks generally use millimeter-wave frequencies, and radio over fiber is a revolutionary technology for transmitting radio signals over optical fiber. A comparative analysis of non-return-to-zero (NRZ) and return-to-zero (RZ) encoding schemes is also carried out. The results are reported in terms of Q-factor, bit error rate, and eye diagrams, and show the successful transmission of the high-speed 10 Gbps data and the 60 GHz millimeter-wave signal over 60 km of optical fiber.
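As a reminder of what the two line codes compared here look like, the sketch below generates sampled NRZ and RZ baseband waveforms for an arbitrary bit pattern; the samples-per-bit and RZ duty cycle are illustrative choices, not the simulation settings used in the paper.

```python
import numpy as np

def nrz(bits, samples_per_bit=8):
    """Non-return-to-zero: the level holds for the whole bit slot."""
    return np.repeat(np.asarray(bits, dtype=float), samples_per_bit)

def rz(bits, samples_per_bit=8, duty=0.5):
    """Return-to-zero: a '1' is high only for the first `duty` fraction of
    the bit slot and then returns to zero; a '0' stays low."""
    high = int(round(samples_per_bit * duty))
    wave = np.zeros(len(bits) * samples_per_bit)
    for i, b in enumerate(bits):
        if b:
            wave[i * samples_per_bit : i * samples_per_bit + high] = 1.0
    return wave

bits = [1, 0, 1, 1, 0]
print(nrz(bits, samples_per_bit=4))
print(rz(bits, samples_per_bit=4))
```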
43

Gorina-Careta, Natàlia, Teresa Ribas-Prats, Sonia Arenillas-Alcón, Marta Puertollano, M. Dolores Gómez-Roig, and Carles Escera. "Neonatal Frequency-Following Responses: A Methodological Framework for Clinical Applications". Seminars in Hearing 43, no. 03 (August 2022): 162–76. http://dx.doi.org/10.1055/s-0042-1756162.

Abstract:
The frequency-following response (FFR) to periodic complex sounds is a noninvasive scalp-recorded auditory evoked potential that reflects synchronous phase-locked neural activity to the spectrotemporal components of the acoustic signal along the ascending auditory hierarchy. The FFR has gained recent interest in the fields of audiology and auditory cognitive neuroscience, as it has great potential to answer both basic and applied questions about processes involved in sound encoding, language development, and communication. Specifically, it has become a promising tool in neonates, as its study may allow both early identification of future language disorders and the opportunity to leverage brain plasticity during the first 2 years of life, as well as enable early interventions to prevent and/or ameliorate sound and language encoding disorders. Throughout the present review, we summarize the state of the art of the neonatal FFR and, based on our own extensive experience, present methodological approaches to record it in a clinical environment. Overall, the present review is the first to focus comprehensively on applications of the neonatal FFR, supporting both the feasibility of recording the FFR during the first days of life and its predictive potential for detecting short- and long-term language abilities and disruptions.
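A typical FFR analysis step is to average the single-trial epochs time-locked to the periodic sound and read out the response amplitude at the stimulus fundamental. The sketch below is only a generic illustration of that step under assumed array shapes and parameter values; it is not the recording or analysis pipeline described by the authors.

```python
import numpy as np

def ffr_f0_amplitude(epochs, fs_hz, f0_hz):
    """Average FFR epochs (trials x samples) and return the spectral
    amplitude of the averaged response at the bin nearest the stimulus f0."""
    avg = epochs.mean(axis=0)
    spectrum = np.abs(np.fft.rfft(avg)) * 2.0 / avg.size
    freqs = np.fft.rfftfreq(avg.size, d=1.0 / fs_hz)
    return spectrum[np.argmin(np.abs(freqs - f0_hz))]

# Synthetic check: 2000 noisy epochs of a 125 Hz response sampled at 16 kHz.
rng = np.random.default_rng(1)
t = np.arange(4096) / 16000.0
epochs = 0.2 * np.sin(2 * np.pi * 125.0 * t) + rng.standard_normal((2000, t.size))
print(ffr_f0_amplitude(epochs, fs_hz=16000.0, f0_hz=125.0))   # ~0.2, the simulated amplitude
```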
44

Lüdtke, Niklas, and Mark E. Nelson. "Short-Term Synaptic Plasticity Can Enhance Weak Signal Detectability in Nonrenewal Spike Trains". Neural Computation 18, no. 12 (December 2006): 2879–916. http://dx.doi.org/10.1162/neco.2006.18.12.2879.

Abstract:
We study the encoding of weak signals in spike trains with interspike interval (ISI) correlations and the signals' subsequent detection in sensory neurons. Motivated by the observation of negative ISI correlations in auditory and electrosensory afferents, we assess the theoretical performance limits of an individual detector neuron receiving a weak signal distributed across multiple afferent inputs. We assess the functional role of ISI correlations in the detection process using statistical detection theory and derive two sequential likelihood ratio detector models: one for afferents with renewal statistics; the other for afferents with negatively correlated ISIs. We suggest a mechanism that might enable sensory neurons to implicitly compute conditional probabilities of presynaptic spikes by means of short-term synaptic plasticity. We demonstrate how this mechanism can enhance a postsynaptic neuron's sensitivity to weak signals by exploiting the correlation structure of the input spike trains. Our model not only captures fundamental aspects of early electrosensory signal processing in weakly electric fish, but may also bear relevance to the mammalian auditory system and other sensory modalities.
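The detector models derived in such work are built on the sequential probability ratio test. The sketch below is the textbook form of that test, accumulating log-likelihood ratios until a decision boundary is crossed; the paper's renewal and nonrenewal afferent models would supply the specific likelihood terms, which are not reproduced here, and the usage values are purely hypothetical.

```python
def sprt(observations, log_likelihood_ratio, upper, lower):
    """Sequential probability ratio test: accumulate the log-likelihood
    ratio of 'signal present' vs 'signal absent' observation by observation
    and stop as soon as either decision boundary is crossed."""
    total = 0.0
    for n, x in enumerate(observations, start=1):
        total += log_likelihood_ratio(x)
        if total >= upper:
            return "signal present", n
        if total <= lower:
            return "signal absent", n
    return "undecided", len(observations)

# Toy usage with a hypothetical per-observation log-likelihood ratio:
decision, n_used = sprt([0.4, 0.6, 0.7, 0.9], lambda x: x - 0.5, upper=0.5, lower=-0.5)
print(decision, n_used)   # signal present 4
```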
45

Sun, Jie, ZhaoFang Yang, Yu Zhang, Teng Li, and Sha Wang. "High-Capacity Data Hiding Method Based on Two Subgroup Pixels-Value Adjustment Using Encoding Function". Security and Communication Networks 2022 (July 22, 2022): 1–14. http://dx.doi.org/10.1155/2022/4336526.

Abstract:
Confidential information can be hidden in digital images through data hiding technology, which has practical value for copyright and intellectual property protection, public information protection, and related applications. In recent years researchers have proposed many data hiding schemes, but existing schemes suffer from low hiding capacity or poor stego-image quality. This paper uses a new method of multiple pixels-value adjustment with an encoding function (MPA) to further improve the overall performance, achieving good results in both hiding capacity and stego-image quality. The main idea is to divide n adjacent cover pixels into two sub-groups and apply multi-bit modulus operations in each group. The efficacy of the proposed method is evaluated by peak signal-to-noise ratio (PSNR), embedding payload, structural similarity index (SSIM), and quality index (QI). The recorded PSNR value is 30.01 dB and the embedding payload is 5 bpp (bits per pixel). In addition, steganalysis tests do not detect this steganography technique.
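The quality and capacity figures quoted above are computed with standard definitions. The sketch below implements the usual PSNR and bits-per-pixel payload formulas for 8-bit grayscale images; it is a generic illustration with made-up example data and does not implement the MPA embedding scheme itself.

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio between cover and stego images, in dB."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def payload_bpp(total_bits_embedded, image_shape):
    """Embedding payload in bits per pixel."""
    return total_bits_embedded / float(np.prod(image_shape))

# Toy example: a 512x512 cover with small random pixel perturbations.
rng = np.random.default_rng(2)
cover = rng.integers(0, 256, (512, 512), dtype=np.uint8)
stego = np.clip(cover.astype(int) + rng.integers(-4, 5, cover.shape), 0, 255)
print(round(psnr(cover, stego), 2), payload_bpp(5 * cover.size, cover.shape))
```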
46

Taehyun Kim and M. H. Ammar. "A comparison of heterogeneous video multicast schemes: Layered encoding or stream replication". IEEE Transactions on Multimedia 7, no. 6 (December 2005): 1123–30. http://dx.doi.org/10.1109/tmm.2005.858376.

47

Scheller, Bertram C. A., Michael Daunderer, and Gordon Pipa. "General Anesthesia Increases Temporal Precision and Decreases Power of the Brainstem Auditory-evoked Response-related Segments of the Electroencephalogram". Anesthesiology 111, no. 2 (August 1, 2009): 340–55. http://dx.doi.org/10.1097/aln.0b013e3181acf7c0.

Abstract:
Background: Brainstem auditory-evoked responses (BAEP) have been reported to be unchanged in the presence of drugs used for induction and maintenance of general anesthesia. The aim of this study was to investigate if the signal segments after the auditory stimulus that are used to average the evoked response change under the influence of general anesthesia. Methods: BAEPs of 156 patients scheduled for elective surgery under general anesthesia were investigated. Anesthetic regimen was randomized as a combination of one of four hypnotic drugs supplemented by one of four opioids. Signal segments after the auditory stimulus were obtained at six different periods of anesthesia. Power and phase properties of wavelet-filtered single-sweep auditory-evoked activity accounting for the waveform of the averaged BAEP wave V and the stability of amplitude and latency of the averaged BAEP wave V over periods were analyzed. Results: Amplitude and latency of wave V change slightly with no significant difference between the periods. During anesthesia, however, the power of single sweeps is significantly reduced, whereas phase-locking properties of the according signal segments are significantly enhanced. This effect is independent of the anesthetic or opioid used. Conclusions: General anesthesia affects phase and power of the segments of the electroencephalogram related to BAEP wave V. This study's results support the idea that temporally precise responses from a large number of neurons in the brainstem might play a crucial role in encoding and passing sensory information to higher subcortical and cortical areas of the brain.
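The phase-locking measure referred to here can be thought of as inter-trial phase coherence of the wavelet-filtered single sweeps. The sketch below shows that generic computation (unit phase vectors averaged across sweeps) together with mean single-sweep power, under the assumption that complex-valued filtered sweeps are already available; it is not the authors' exact analysis.

```python
import numpy as np

def phase_locking_and_power(filtered_sweeps):
    """Given complex (e.g. wavelet-filtered) single sweeps with shape
    (n_sweeps, n_samples), return the across-sweep phase-locking value
    (0..1) and the mean single-sweep power at each time point."""
    unit_phasors = filtered_sweeps / np.abs(filtered_sweeps)
    plv = np.abs(unit_phasors.mean(axis=0))
    power = np.mean(np.abs(filtered_sweeps) ** 2, axis=0)
    return plv, power

# Toy check: identical phases across sweeps give a phase-locking value of ~1.
sweeps = np.exp(1j * np.linspace(0, 2 * np.pi, 128))[None, :] * np.ones((50, 1))
plv, power = phase_locking_and_power(sweeps)
print(plv.min(), plv.max(), power.mean())   # ~1.0, ~1.0, ~1.0
```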
48

Sommers, Mitchell S., Brent Spehar, and Nancy Tye‐Murray. "The effects of signal‐to‐noise ratio on auditory‐visual integration: Integration and encoding are not independent". Journal of the Acoustical Society of America 117, no. 4 (April 2005): 2574. http://dx.doi.org/10.1121/1.4788583.

49

Ren, Jianfeng, Xudong Jiang, and Junsong Yuan. "LBP Encoding Schemes Jointly Utilizing the Information of Current Bit and Other LBP Bits". IEEE Signal Processing Letters 22, no. 12 (December 2015): 2373–77. http://dx.doi.org/10.1109/lsp.2015.2481435.

50

Gai, Yan, Brent Doiron, Vibhakar Kotak, and John Rinzel. "Noise-Gated Encoding of Slow Inputs by Auditory Brain Stem Neurons With a Low-Threshold K+ Current". Journal of Neurophysiology 102, no. 6 (December 2009): 3447–60. http://dx.doi.org/10.1152/jn.00538.2009.

Abstract:
Phasic neurons, which do not fire repetitively to steady depolarization, are found at various stages of the auditory system. Phasic neurons are commonly described as band-pass filters because they do not respond to low-frequency inputs even when the amplitude is large. However, we show that phasic neurons can encode low-frequency inputs when noise is present. With a low-threshold potassium current (IKLT), a phasic neuron model responds to rising and falling phases of a subthreshold low-frequency signal with white noise. When the white noise was low-pass filtered, the phasic model also responded to the signal's trough but still not to the peak. In contrast, a tonic neuron model fired mostly to the signal's peak. To test the model predictions, whole cell slice recordings were obtained in the medial (MSO) and lateral (LSO) superior olivary neurons in gerbil from postnatal day 10 (P10) to 22. The phasic MSO neurons with strong IKLT, mostly from gerbils aged P17 or older, showed firing patterns consistent with the preceding predictions. Moreover, injecting a virtual IKLT into weak-phasic MSO and tonic LSO neurons with putative weak or no IKLT (from gerbils younger than P17) shifted the neural response from the signal's peak to the rising phase. These findings advance our knowledge about how noise gates the signal pathway and how phasic neurons encode slow envelopes of sounds with high-frequency carriers.