Academic literature on the topic 'Sensory input'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sensory input.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Sensory input"

1

Bui, Tuan V., and Robert M. Brownstone. "Sensory-evoked perturbations of locomotor activity by sparse sensory input: a computational study." Journal of Neurophysiology 113, no. 7 (April 2015): 2824–39. http://dx.doi.org/10.1152/jn.00866.2014.

Abstract:
Sensory inputs from muscle, cutaneous, and joint afferents project to the spinal cord, where they are able to affect ongoing locomotor activity. Activation of sensory input can initiate or prolong bouts of locomotor activity depending on the identity of the sensory afferent activated and the timing of the activation within the locomotor cycle. However, the mechanisms by which afferent activity modifies locomotor rhythm and the distribution of sensory afferents to the spinal locomotor networks have not been determined. Considering the many sources of sensory inputs to the spinal cord, determining this distribution would provide insights into how sensory inputs are integrated to adjust ongoing locomotor activity. We asked whether a sparsely distributed set of sensory inputs could modify ongoing locomotor activity. To address this question, several computational models of locomotor central pattern generators (CPGs) that were mechanistically diverse and generated locomotor-like rhythmic activity were developed. We show that sensory inputs restricted to a small subset of the network neurons can perturb locomotor activity in the same manner as seen experimentally. Furthermore, we show that an architecture with sparse sensory input improves the capacity to gate sensory information by selectively modulating sensory channels. These data demonstrate that sensory input to rhythm-generating networks need not be extensively distributed.
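To make the kind of perturbation experiment described above concrete, here is a minimal sketch: a toy half-center oscillator (two mutually inhibiting cells with slow adaptation) whose rhythm is shifted by a brief 'sensory' pulse delivered to only one cell. The model and every parameter are invented for illustration and are not any of the authors' CPG models.

```python
# Minimal sketch: a two-cell mutual-inhibition ("half-center") oscillator
# whose rhythm is perturbed by a brief sensory pulse delivered to only one
# cell (a "sparse" input). Illustrative parameters, not the paper's models.
import numpy as np

def simulate(t_pulse=None, T=10.0, dt=1e-3):
    n = int(T / dt)
    x = np.array([0.6, 0.1])      # firing rates of the two cells
    a = np.array([0.5, 0.5])      # slow adaptation variables
    trace = np.zeros((n, 2))
    for i in range(n):
        t = i * dt
        drive = np.array([1.0, 1.0])
        if t_pulse is not None and t_pulse <= t < t_pulse + 0.2:
            drive[0] += 1.5       # sensory input to cell 0 only
        inhib = 2.0 * x[::-1]     # reciprocal inhibition
        dx = (-x + np.clip(drive - inhib - a, 0, None)) / 0.05
        da = (-a + 1.5 * x) / 0.5 # slow adaptation terminates each burst
        x, a = x + dt * dx, a + dt * da
        trace[i] = x
    return trace

free = simulate()                  # unperturbed rhythm
perturbed = simulate(t_pulse=5.0)  # the pulse shifts the phase of the rhythm
print(np.argmax(free[8000:, 0]), np.argmax(perturbed[8000:, 0]))
```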
2

Santos, Bruno A., Rogerio M. Gomes, Xabier E. Barandiaran, and Phil Husbands. "Active Role of Self-Sustained Neural Activity on Sensory Input Processing: A Minimal Theoretical Model." Neural Computation 34, no. 3 (February 17, 2022): 686–715. http://dx.doi.org/10.1162/neco_a_01471.

Abstract:
A growing body of work has demonstrated the importance of ongoing oscillatory neural activity in sensory processing and the generation of sensorimotor behaviors. It has been shown, for several different brain areas, that sensory-evoked neural oscillations are generated from the modulation by sensory inputs of inherent self-sustained neural activity (SSA). This letter contributes to that strand of research by introducing a methodology to investigate how much of the sensory-evoked oscillatory activity is generated by SSA and how much is generated by sensory inputs within the context of sensorimotor behavior in a computational model. We develop an abstract model consisting of a network of three Kuramoto oscillators controlling the behavior of a simulated agent performing a categorical perception task. The effects of sensory inputs and SSAs on sensory-evoked oscillations are quantified by the cross product of velocity vectors in the phase space of the network under different conditions (disconnected without input, connected without input, and connected with input). We found that while the agent is carrying out the task, sensory-evoked activity is predominantly generated by SSA (93.10%) with much less influence from sensory inputs (6.90%). Furthermore, the influence of sensory inputs can be reduced by 10.4% (from 6.90% to 6.18%) with a decay in the agent's performance of only 2%. A dynamical analysis shows how sensory-evoked oscillations are generated from a dynamic coupling between the level of sensitivity of the network and the intensity of the input signals. This work may suggest interesting directions for neurophysiological experiments investigating how self-sustained neural activity influences sensory input processing, and ultimately affects behavior.
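For readers who want to see the shape of such a model, the following is a minimal sketch of three coupled Kuramoto phase oscillators whose phase velocities are compared across the three conditions mentioned in the abstract (disconnected without input, connected without input, and connected with input). The frequencies, coupling strength, and input signal are made-up values; the sketch does not reproduce the paper's model or its 93.10%/6.90% decomposition.

```python
# Minimal sketch: three coupled Kuramoto oscillators, with phase velocities
# compared across conditions (disconnected/no input, connected/no input,
# connected/with input). All parameters are illustrative.
import numpy as np

omega = np.array([1.0, 1.3, 0.7])          # natural frequencies (rad/s)
K = 0.8                                    # coupling strength

def phase_velocity(theta, coupled=True, inp=0.0):
    coupling = np.zeros(3)
    if coupled:
        diff = theta[None, :] - theta[:, None]
        coupling = (K / 3.0) * np.sin(diff).sum(axis=1)
    drive = np.array([inp, 0.0, 0.0])      # "sensory" input enters oscillator 0 only
    return omega + coupling + drive

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 3)
dt, steps = 1e-3, 5000
for i in range(steps):
    t = i * dt
    v_full = phase_velocity(theta, coupled=True,  inp=0.5 * np.sin(3.0 * t))
    v_ssa  = phase_velocity(theta, coupled=True,  inp=0.0)
    v_disc = phase_velocity(theta, coupled=False, inp=0.0)
    # the gap between v_full and v_ssa is the part of the evoked dynamics
    # attributable to the input; v_ssa vs v_disc isolates the self-sustained part
    theta = (theta + dt * v_full) % (2 * np.pi)
print(v_full - v_ssa, v_ssa - v_disc)
```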
3

Fadli, Muhammad, Wahyuni Wahyuni, and Farid Rahman. "Penatalaksanaan Fisioterapi pada Pasien Diabetic Peripheral Neuropaty dengan Metode Sensorimotor Exercise." Ahmar Metastasis Health Journal 1, no. 3 (December 31, 2021): 92–100. http://dx.doi.org/10.53770/amhj.v1i3.53.

Abstract:
Introduction: Diabetic peripheral neuropathy causes sensory disturbances such as reduced sensation of vibration, pressure, pain, and joint position; this results in a reduced ability to balance and to coordinate a person's gait. Sensory-motor exercise is used to correct muscle imbalances through sensory input. This study aims to determine the effect of exercise therapy on sensory improvement, balance, and functional ability using the sensory-motor exercise method in patients with diabetic peripheral neuropathy. The research method used in this study is an experiment with the case-report method, and the sample was taken using an incidental sampling technique. Results: After exercise therapy using the sensory-motor exercise method, the results were an increase in sensory input sensitivity, an increase in static balance, an increase in dynamic balance, and an improvement in the patient's functional ability in the form of a better walking pattern. Conclusion: Exercise therapy using the sensory-motor exercise method effectively improves balance and walking patterns in patients with diabetic peripheral neuropathy. Suggestion: Exercises to improve sensory and functional movement ability can be combined with sensory training in patients with diabetic peripheral neuropathy.
4

Ugawa, Yoshikazu. "Sensory input and basal ganglia." Rinsho Shinkeigaku 52, no. 11 (2012): 862–65. http://dx.doi.org/10.5692/clinicalneurol.52.862.

5

Mao, Yu-Ting, Tian-Miao Hua, and Sarah L. Pallas. "Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas." Journal of Neurophysiology 105, no. 4 (April 2011): 1558–73. http://dx.doi.org/10.1152/jn.00407.2010.

Abstract:
Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into account that sensory cortex may become substantially more multisensory after alteration of its input during development.
6

Henn, V. "Sensory Input Modifying Central Motor Actions." Stereotactic and Functional Neurosurgery 49, no. 5 (1986): 251–55. http://dx.doi.org/10.1159/000100183.

7

Franosch, Jan-Moritz P., Sebastian Urban, and J. Leo van Hemmen. "Supervised Spike-Timing-Dependent Plasticity: A Spatiotemporal Neuronal Learning Rule for Function Approximation and Decisions." Neural Computation 25, no. 12 (December 2013): 3113–30. http://dx.doi.org/10.1162/neco_a_00520.

Abstract:
How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as “supervisor.” Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.
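A toy sketch of the general idea, in which spikes from a 'supervisor' modality strengthen synapses from a 'trainee' modality whose spikes shortly precede them, is given below. The rule, parameters, and spike statistics are illustrative only and are not the learning rule analysed in the paper.

```python
# Toy sketch of supervised STDP: synapses from a "trainee" modality are
# potentiated when their presynaptic spikes shortly precede spikes of a
# "supervisor" modality, and mildly depressed otherwise. Parameters are
# illustrative, not the rule analysed in the paper.
import numpy as np

rng = np.random.default_rng(1)
T, n_pre = 2000, 20                     # time steps, trainee input channels
pre = rng.random((T, n_pre)) < 0.05     # trainee spike trains (e.g., tactile)
sup = np.roll(pre[:, 3], 2)             # supervisor follows channel 3 with a 2-step lag

w = np.full(n_pre, 0.5)
tau, a_plus, a_minus = 5.0, 0.02, 0.021
pre_trace = np.zeros(n_pre)             # low-pass filtered presynaptic activity

for t in range(T):
    pre_trace = pre_trace * np.exp(-1.0 / tau) + pre[t]
    if sup[t]:                          # supervisor spike: potentiate recently active inputs
        w += a_plus * pre_trace
    w -= a_minus * pre[t] * 0.1         # mild depression for uncorrelated activity
    w = np.clip(w, 0.0, 1.0)

print(np.argmax(w), w.round(2))         # channel 3 should end up with the largest weight
```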
8

Etesami, Jalal, and Philipp Geiger. "Causal Transfer for Imitation Learning and Decision Making under Sensor-Shift." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10118–25. http://dx.doi.org/10.1609/aaai.v34i06.6571.

Abstract:
Learning from demonstrations (LfD) is an efficient paradigm to train AI agents. But major issues arise when there are differences between (a) the demonstrator's own sensory input, (b) our sensors that observe the demonstrator and (c) the sensory input of the agent we train. In this paper, we propose a causal model-based framework for transfer learning under such “sensor-shifts”, for two common LfD tasks: (1) inferring the effect of the demonstrator's actions and (2) imitation learning. First we rigorously analyze, on the population level, to what extent the relevant underlying mechanisms (the action effects and the demonstrator policy) can be identified and transferred from the available observations together with prior knowledge of sensor characteristics, and we devise an algorithm to infer these mechanisms. Then we introduce several proxy methods which are easier to calculate, estimate from finite data and interpret than the exact solutions, alongside theoretical bounds on their closeness to the exact ones. We validate our two main methods on simulated and semi-real-world data.
9

Havrylovych, Mariia, and Valeriy Danylov. "Research of autoencoder-based user biometric verification with motion patterns." System research and information technologies, no. 2 (August 30, 2022): 128–36. http://dx.doi.org/10.20535/srit.2308-8893.2022.2.10.

Abstract:
In the current research, we continue our previous study of motion-based user biometric verification, which consumes sensory data. Sensor-based verification systems support the continuous-authentication narrative, as physiological biometric methods based mainly on photo or video input face many difficulties in implementation. The research aims to analyze how the various components of accelerometer sensor data contribute to defining a person's unique motion patterns and how they may express human behavioral patterns across different activity types. The study used a recurrent long short-term memory autoencoder as the baseline model; the choice of model was based on our previous research. The results show that the various data components contribute differently to the verification process depending on the type of activity. However, we conclude that a single sensor data source may not be enough for a robust authentication system. As further research, a multimodal authentication system should be proposed to utilize and aggregate the input streams from multiple sensors.
10

Stolz, Thomas, Max Diesner, Susanne Neupert, Martin E. Hess, Estefania Delgado-Betancourt, Hans-Joachim Pflüger, and Joachim Schmidt. "Descending octopaminergic neurons modulate sensory-evoked activity of thoracic motor neurons in stick insects." Journal of Neurophysiology 122, no. 6 (December 1, 2019): 2388–413. http://dx.doi.org/10.1152/jn.00196.2019.

Abstract:
Neuromodulatory neurons located in the brain can influence activity in locomotor networks residing in the spinal cord or ventral nerve cords of invertebrates. How inputs to and outputs of neuromodulatory descending neurons affect walking activity is largely unknown. With the use of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry and immunohistochemistry, we show that a population of dorsal unpaired median (DUM) neurons descending from the gnathal ganglion to thoracic ganglia of the stick insect Carausius morosus contains the neuromodulatory amine octopamine. These neurons receive excitatory input coupled to the legs’ stance phases during treadmill walking. Inputs did not result from connections with thoracic central pattern-generating networks, but, instead, most are derived from leg load sensors. In excitatory and inhibitory retractor coxae motor neurons, spike activity in the descending DUM (desDUM) neurons increased depolarizing reflexlike responses to stimulation of leg load sensors. In these motor neurons, descending octopaminergic neurons apparently functioned as components of a positive feedback network mainly driven by load-detecting sense organs. Reflexlike responses in excitatory extensor tibiae motor neurons evoked by stimulations of a femur-tibia movement sensor either are increased or decreased or were not affected by the activity of the descending neurons, indicating different functions of desDUM neurons. The increase in motor neuron activity is often accompanied by a reflex reversal, which is characteristic for actively moving animals. Our findings indicate that some descending octopaminergic neurons can facilitate motor activity during walking and support a sensory-motor state necessary for active leg movements. NEW & NOTEWORTHY We investigated the role of descending octopaminergic neurons in the gnathal ganglion of stick insects. The neurons become active during walking, mainly triggered by input from load sensors in the legs rather than pattern-generating networks. This report provides novel evidence that octopamine released by descending neurons on stimulation of leg sense organs contributes to the modulation of leg sensory-evoked activity in a leg motor control system.

Dissertations / Theses on the topic "Sensory input"

1

McNair, Nicolas A. "Input-specificity of sensory-induced neural plasticity in humans." Thesis, University of Auckland, 2008. http://hdl.handle.net/2292/3285.

Abstract:
The aim of this thesis was to investigate the input-specificity of sensory-induced plasticity in humans. This was achieved by varying the characteristics of sine gratings so that they selectively targeted distinct populations of neurons in the visual cortex. In Experiments 1-3, specificity was investigated with electroencephalography using horizontally- and vertically-oriented sine gratings (Experiment 1) or gratings of differing spatial frequency (Experiments 2 & 3). Increases in the N1b potential were observed only for sine gratings that were the same in orientation or spatial frequency as that used as the tetanus, suggesting that the potentiation is specific to the visual pathways stimulated during the induction of the tetanus. However, the increase in the amplitude of the N1b in Experiment 1 was not maintained when tested again at 50 minutes post-tetanus. This may have been due to depotentiation caused by the temporal frequency of stimulus presentation in the first post-tetanus block. To try to circumvent this potential confound, immediate and maintained (tested 30 minutes post-tetanus) spatial-frequency-specific potentiation were tested separately in Experiments 2 and 3, respectively. Experiment 3 demonstrated that the increased N1b was maintained for up to half an hour post-tetanus. In addition, the findings from Experiment 1, as well as the pattern of results from Experiments 2 and 3, indicate that the potentiation must be occurring in the visual cortex rather than further upstream at the lateral geniculate nucleus. In Experiment 4 functional magnetic resonance imaging was used to more accurately localise where these plastic changes were taking place using sine gratings of differing spatial frequency. A small, focal post-tetanic increase in the blood-oxygen-level-dependent (BOLD) response was observed for the tetanised grating in the right temporo-parieto-occipital junction. For the non-tetanised grating, decreases in BOLD were found in the primary visual cortex and bilaterally in the cuneus and pre-cuneus. These decreases may have been due to inhibitory interconnections between neurons tuned to different spatial frequencies. These data indicate that tetanic sensory stimulation selectively targets and potentiates specific populations of neurons in the visual cortex.
2

Nargis, Sultana Mahbuba. "Sensory Input and Mental Imagery in Second Language Acquisition." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1418370678.

3

Kim, Jung-Kyong. "Sensory substitution learning using auditory input: Behavioral and neural correlates." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96695.

Abstract:
Sensory substitution refers to the replacement of one sensory input with another. This concept, originally developed to aid the blind, presents a scientific opportunity to study crossmodal perceptual learning and neural plasticity. Using a technique that translates vision into sound, the present dissertation examined sensory substitution learning. Four studies tested the hypotheses that mental representations of spatial information such as shape are abstract, and that they are based on involvement of common brain regions independently of sensory modality. Study 1 aimed to develop a training paradigm in auditory vision substitution. We examined the minimum amount of learning necessary to identify visual images using sound, and the effects of more extensive training on a wide range of stimuli to test the hypothesis that sensory substitution would be based on generalized crossmodal rule learning. Study 2 was a functional magnetic resonance imaging (fMRI) adaptation of study 1. Subjects were scanned before and after training during a task in which shape-coded sound was to be matched to visually presented shape. It was predicted that training would lead to sound-induced visual recruitment. Study 3 examined auditory touch substitution learning. Blindfolded sighted subjects were trained to recognize tactile shapes using shape-coded sounds and tested on a matching task. We also tested post-training transfer to vision. It was predicted that shape could be conveyed across sensory modalities. Study 4 was an fMRI adaptation of Study 3. Subjects were scanned before and after training during a task in which shape-coded sound was matched to tactually presented shape. Visual recruitment driven by non-visual inputs was predicted. Results showed that sighted people learned to extract visual or tactile patterns from auditory input. This learning was generalizable across stimuli within and across modalities, suggesting an abstract mental representation of shape. Auditory shape learning was associated with change in the functional network between the auditory cortex and the lateral occipital complex (LOC), a region known for visual shape processing. The auditory access to the LOC supports the notion that sensory specificity of the brain is not determined by the nature of the stimuli but rather by the task demand of the information to be processed.
4

Lovell, Nathan. "Machine Vision as the Primary Sensory Input for Mobile, Autonomous Robots." Griffith University. School of Information and Communication Technology, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070911.152447.

Abstract:
Image analysis, and its application to sensory input (computer vision), is a fairly mature field, so it is surprising that its techniques are not extensively used in robotic applications. The reason for this is that, traditionally, robots have been used in controlled environments where sophisticated computer vision was not necessary, for example in car manufacturing. As the field of robotics has moved toward providing general purpose robots that must function in the real world, it has become necessary that the robots be provided with robust sensors capable of understanding the complex world around them. However, when researchers apply techniques previously studied in image analysis literature to the field of robotics, several difficult problems emerge. In this thesis we examine four reasons why it is difficult to apply work in image analysis directly to real-time, general purpose computer vision applications. These are: improvement in the computational complexity of image analysis algorithms, robustness to dynamic and unpredictable visual conditions, independence from domain-specific knowledge in object recognition and the development of debugging facilities. This thesis examines each of these areas, making several innovative contributions in each area. We argue that, although each area is distinct, improvement must be made in all four areas before vision will be utilised as the primary sensory input for mobile, autonomous robotic applications. In the first area, the computational complexity of image analysis algorithms, we note the dependence of a large number of high-level processing routines on a small number of low-level algorithms. Therefore, improvement to a small set of highly utilised algorithms will yield benefits in a large number of applications. In this thesis we examine the common tasks of image segmentation, edge and straight line detection and vectorisation. In the second area, robustness to dynamic and unpredictable conditions, we examine how vision systems can be made more tolerant to changes of illumination in the visual scene. We examine the classical image segmentation task and present a method for illumination independence that builds on our work from the first area. The third area is the reliance on domain-specific knowledge in object recognition. Many current systems depend on a large amount of hard-coded domain-specific knowledge to understand the world around them. This makes the system hard to modify, even for slight changes in the environment, and very difficult to apply in a different context entirely. We present an XML-based language, the XML Object Definition (XOD) language, as a solution to this problem. The language is largely descriptive instead of imperative so, instead of describing how to locate objects within each image, the developer simply describes the properties of the objects. The final area is the development of support tools. Vision system programming is extremely difficult because large amounts of data are handled at a very fast rate. If the system is running on an embedded device (such as a robot) then locating defects in the code is a time-consuming and frustrating task. Many development-support applications are available for specific applications. We present a general purpose development-support tool for embedded, real-time vision systems. The primary case study for this research is that of robotic soccer, in the international RoboCup Four-Legged league. We utilise all of the research of this thesis to provide the first illumination-independent object recognition system for RoboCup. Furthermore, we illustrate the flexibility of our system by applying it to several other tasks and to marked changes in the visual environment for RoboCup itself.
5

Xin, Yifei. "Exploring the Chinese Room: Parallel Sensory Input in Second Language Learning." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1333762798.

6

Lovell, Nathan. "Machine Vision as the Primary Sensory Input for Mobile, Autonomous Robots." Thesis, Griffith University, 2006. http://hdl.handle.net/10072/367107.

Abstract:
Image analysis, and its application to sensory input (computer vision), is a fairly mature field, so it is surprising that its techniques are not extensively used in robotic applications. The reason for this is that, traditionally, robots have been used in controlled environments where sophisticated computer vision was not necessary, for example in car manufacturing. As the field of robotics has moved toward providing general purpose robots that must function in the real world, it has become necessary that the robots be provided with robust sensors capable of understanding the complex world around them. However, when researchers apply techniques previously studied in image analysis literature to the field of robotics, several difficult problems emerge. In this thesis we examine four reasons why it is difficult to apply work in image analysis directly to real-time, general purpose computer vision applications. These are: improvement in the computational complexity of image analysis algorithms, robustness to dynamic and unpredictable visual conditions, independence from domain-specific knowledge in object recognition and the development of debugging facilities. This thesis examines each of these areas, making several innovative contributions in each area. We argue that, although each area is distinct, improvement must be made in all four areas before vision will be utilised as the primary sensory input for mobile, autonomous robotic applications. In the first area, the computational complexity of image analysis algorithms, we note the dependence of a large number of high-level processing routines on a small number of low-level algorithms. Therefore, improvement to a small set of highly utilised algorithms will yield benefits in a large number of applications. In this thesis we examine the common tasks of image segmentation, edge and straight line detection and vectorisation. In the second area, robustness to dynamic and unpredictable conditions, we examine how vision systems can be made more tolerant to changes of illumination in the visual scene. We examine the classical image segmentation task and present a method for illumination independence that builds on our work from the first area. The third area is the reliance on domain-specific knowledge in object recognition. Many current systems depend on a large amount of hard-coded domain-specific knowledge to understand the world around them. This makes the system hard to modify, even for slight changes in the environment, and very difficult to apply in a different context entirely. We present an XML-based language, the XML Object Definition (XOD) language, as a solution to this problem. The language is largely descriptive instead of imperative so, instead of describing how to locate objects within each image, the developer simply describes the properties of the objects. The final area is the development of support tools. Vision system programming is extremely difficult because large amounts of data are handled at a very fast rate. If the system is running on an embedded device (such as a robot) then locating defects in the code is a time-consuming and frustrating task. Many development-support applications are available for specific applications. We present a general purpose development-support tool for embedded, real-time vision systems. The primary case study for this research is that of robotic soccer, in the international RoboCup Four-Legged league. We utilise all of the research of this thesis to provide the first illumination-independent object recognition system for RoboCup. Furthermore, we illustrate the flexibility of our system by applying it to several other tasks and to marked changes in the visual environment for RoboCup itself.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
7

Ortman, Robert L. "Sensory input encoding and readout methods for in vitro living neuronal networks." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44856.

Abstract:
Establishing and maintaining successful communication stands as a critical prerequisite for achieving the goals of inducing and studying advanced computation in small-scale living neuronal networks. The following work establishes a novel and effective method for communicating arbitrary "sensory" input information to cultures of living neurons, living neuronal networks (LNNs), consisting of approximately 20 000 rat cortical neurons plated on microelectrode arrays (MEAs) containing 60 electrodes. The sensory coding algorithm determines a set of effective codes (symbols), comprised of different spatio-temporal patterns of electrical stimulation, to which the LNN consistently produces unique responses to each individual symbol. The algorithm evaluates random sequences of candidate electrical stimulation patterns for evoked-response separability and reliability via a support vector machine (SVM)-based method, and employing the separability results as a fitness metric, a genetic algorithm subsequently constructs subsets of highly separable symbols (input patterns). Sustainable input/output (I/O) bit rates of 16-20 bits per second with a 10% symbol error rate resulted for time periods of approximately ten minutes to over ten hours. To further evaluate the resulting code sets' performance, I used the system to encode approximately ten hours of sinusoidal input into stimulation patterns that the algorithm selected and was able to recover the original signal with a normalized root-mean-square error of 20-30% using only the recorded LNN responses and trained SVM classifiers. Response variations over the course of several hours observed in the results of the sine wave I/O experiment suggest that the LNNs may retain some short-term memory of the previous input sample and undergo neuroplastic changes in the context of repeated stimulation with sensory coding patterns identified by the algorithm.
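The separability metric at the heart of this pipeline can be sketched with a standard SVM library: cross-validated decoding accuracy over evoked responses grouped by stimulation symbol serves as the fitness score. The synthetic features below merely stand in for the MEA recordings and are not the study's data or code.

```python
# Sketch of the separability metric: how well an SVM can tell apart the
# network's responses to different candidate stimulation symbols.
# Synthetic response features stand in for real MEA recordings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_symbols, trials_per_symbol, n_features = 4, 50, 60   # e.g., one feature per electrode

# Hypothetical evoked-response features: each symbol has its own mean response
means = rng.normal(0, 1.0, (n_symbols, n_features))
X = np.vstack([rng.normal(means[s], 0.8, (trials_per_symbol, n_features))
               for s in range(n_symbols)])
y = np.repeat(np.arange(n_symbols), trials_per_symbol)

def separability(X, y):
    """Cross-validated decoding accuracy; used as the fitness of a symbol set."""
    return cross_val_score(SVC(kernel="linear", C=1.0), X, y, cv=5).mean()

print(f"separability of this symbol set: {separability(X, y):.2f}")
```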
8

Chakrabarty, Arnab. "Role of sensory input in structural plasticity of dendrites in adult neuronal networks." Diss., lmu, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-155241.

9

Zhao, Yifan. "Language Learning through Dialogs: Mental Imagery and Parallel Sensory Input in Second Language Learning." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1396634043.

10

MacBride, Claire Ann. "Mental Imagery as a Substitute for Parallel Sensory Input in the Field of SLA." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1525379740507044.


Books on the topic "Sensory input"

1

Proebster, Walter E. Peripherie von Informationssystemen: Technologie und Anwendung : Eingabe, Tastatur, Sensoren, Sprache etc. : Ausgabe, Drucker, Bildschirm, Anzeigen etc. : externe Speicher, Magnetik, Optik etc. Berlin: Springer-Verlag, 1987.

2

AIPR Workshop (26th 1997 Washington, D.C.). Exploiting new image sources and sensors: 26th AIPR Workshop, 15-17 October 1997, Washington, D.C. Edited by Selander J. Michael 1952-, Society of Photo-optical Instrumentation Engineers., and AIPR Executive Committee. Bellingham, Wash: SPIE, 1998.

3

Tyagi, Amit Kumar. Multimedia and Sensory Input for Augmented, Mixed, and Virtual Reality. IGI Global, 2021.

4

Tyagi, Amit Kumar. Multimedia and Sensory Input for Augmented, Mixed, and Virtual Reality. IGI Global, 2021.

5

Tyagi, Amit Kumar. Multimedia and Sensory Input for Augmented, Mixed, and Virtual Reality. IGI Global, 2021.

6

Tyagi, Amit, and Shamila Mohammed. Multimedia and Sensory Input for Augmented, Mixed, and Virtual Reality. IGI Global, 2020.

7

Tyagi, Amit Kumar. Multimedia and Sensory Input for Augmented, Mixed, and Virtual Reality. IGI Global, 2021.

8

Stoneley, Sarah, and Simon Rinald. Sensory loss. Edited by Patrick Davey and David Sprigings. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780199568741.003.0047.

Abstract:
Sensory disturbance can either be a complete loss (anaesthesia) or a reduction (hypoaesthesia) in the ability to perceive the sensory input. Dysaesthesia is an abnormal increase in the perception of normal sensory stimuli. Hyperalgesia is an increased sensitivity to normally painful stimuli, and allodynia is the perception of usually innocuous stimuli as painful. A complete loss of sensation is likely to be due to a central nervous system problem, while a tingling/paraesthesia (large fibre) or burning/temperature (small fibre) sensation is likely due to an acquired peripheral nervous system problem. Shooting, electric-shock-like pains suggest radicular pathology, while a tight-band sensation suggests spinal cord dysfunction. Positive sensory symptoms are usually absent in inherited neuropathies, even in the context of significant deficits on examination. This chapter describes the clinical approach to patients with sensory symptoms. Common patterns of sensory loss and their causes are described.
9

Strayer. Lose Weight by Decreasing Sensory Input: A Revolutionary Mind-Body Approach. Dorrance Publishing Co., Inc., 2004.

10

Heller, Sharon. Yoga Bliss: How Sensory Input in Yoga Calms & Organizes the Nervous System. Symmetry, 2023.


Book chapters on the topic "Sensory input"

1

Stein, Wolfgang. "Sensory Input to Central Pattern Generators." In Encyclopedia of Computational Neuroscience, 2668–76. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4614-6675-8_465.

2

Johansson, Roland S. "Sensory Input and Control of Grip." In Novartis Foundation Symposia, 45–63. Chichester, UK: John Wiley & Sons, Ltd., 2007. http://dx.doi.org/10.1002/9780470515563.ch4.

3

Stein, Wolfgang. "Sensory Input to Central Pattern Generators." In Encyclopedia of Computational Neuroscience, 1–11. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4614-7320-6_465-3.

4

Stein, Wolfgang. "Sensory Input to Central Pattern Generators." In Encyclopedia of Computational Neuroscience, 1–10. New York, NY: Springer New York, 2020. http://dx.doi.org/10.1007/978-1-4614-7320-6_465-4.

5

Strösslin, Thomas, Christophe Krebser, Angelo Arleo, and Wulfram Gerstner. "Combining Multimodal Sensory Input for Spatial Learning." In Artificial Neural Networks — ICANN 2002, 87–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_15.

6

Bullock, Theodore H. "The Comparative Neurology of Expectation: Stimulus Acquisition and Neurobiology of Anticipated and Unanticipated Input." In Sensory Biology of Aquatic Animals, 269–84. New York, NY: Springer New York, 1988. http://dx.doi.org/10.1007/978-1-4612-3714-3_10.

7

Bereiter, D. A., E. J. DeMaria, W. C. Engeland, and D. S. Gann. "Endocrine Responses to Multiple Sensory Input Related to Injury." In Advances in Experimental Medicine and Biology, 251–63. Boston, MA: Springer US, 1988. http://dx.doi.org/10.1007/978-1-4899-2064-5_20.

8

Clark, Lauren. "Sensory Awareness – Understanding Your Unique Brain Response to Sensory Input from the World Around You." In Das menschliche Büro - The human(e) office, 179–85. Wiesbaden: Springer Fachmedien Wiesbaden, 2021. http://dx.doi.org/10.1007/978-3-658-33519-9_9.

9

Politis, Dionysios, Rafail Tzimas, Dimitrios Margounakis, Veljko Aleksić, and Nektarios-Kyriakos Paris. "User Experience and Music Perception in Broadcasts: Sensory Input Classification." In New Realities, Mobile Systems and Applications, 410–19. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96296-8_37.

10

Di Ferdinando, Andrea, and Domenico Parisi. "Internal Representations of Sensory Input Reflect the Motor Output with Which Organisms Respond to the Input." In Seeing, Thinking and Knowing, 115–41. Dordrecht: Springer Netherlands, 2004. http://dx.doi.org/10.1007/1-4020-2081-3_6.


Conference papers on the topic "Sensory input"

1

Evans, Richard, Matko Bošnjak, Lars Buesing, Kevin Ellis, David Pfau, Pushmeet Kohli, and Marek Sergot. "Making Sense of Raw Input (Extended Abstract)." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/799.

Abstract:
How should a machine intelligence perform unsupervised structure discovery over streams of sensory input? One approach to this problem is to cast it as an apperception task. Here, the task is to construct an explicit interpretable theory that both explains the sensory sequence and also satisfies a set of unity conditions, designed to ensure that the constituents of the theory are connected in a relational structure. However, the original formulation of the apperception task had one fundamental limitation: it assumed the raw sensory input had already been parsed using a set of discrete categories, so that all the system had to do was receive this already-digested symbolic input, and make sense of it. But what if we don't have access to pre-parsed input? What if our sensory sequence is raw unprocessed information? The central contribution of this paper is a neuro-symbolic framework for distilling interpretable theories out of streams of raw, unprocessed sensory experience. First, we extend the definition of the apperception task to include ambiguous (but still symbolic) input: sequences of sets of disjunctions. Next, we use a neural network to map raw sensory input to disjunctive input. Our binary neural network is encoded as a logic program, so the weights of the network and the rules of the theory can be solved jointly as a single SAT problem. This way, we are able to jointly learn how to perceive (mapping raw sensory information to concepts) and apperceive (combining concepts into declarative rules).
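A small sketch of the 'disjunctive input' idea follows, with invented categories and thresholds rather than the paper's learned binary-network mapping: each raw sensory reading is mapped to the set of discrete symbols it is compatible with, so ambiguous readings keep several candidates.

```python
# Sketch: mapping raw (noisy, continuous) sensor readings to *disjunctions* of
# symbols, i.e. the set of discrete categories each reading could plausibly be.
# Categories and thresholds are illustrative, not the paper's learned mapping.
from typing import Set

def to_disjunction(reading: float, tolerance: float = 0.15) -> Set[str]:
    # Nominal centres for three hypothetical discrete light levels
    categories = {"off": 0.0, "dim": 0.5, "bright": 1.0}
    compatible = {name for name, centre in categories.items()
                  if abs(reading - centre) <= 0.25 + tolerance}
    return compatible or set(categories)   # if nothing matches, stay maximally ambiguous

raw_sequence = [0.05, 0.42, 0.61, 0.97]
disjunctive_input = [to_disjunction(r) for r in raw_sequence]
print(disjunctive_input)
# e.g. [{'off'}, {'dim'}, {'dim', 'bright'}, {'bright'}]: ambiguous readings keep
# several candidates, and the apperception engine must make sense of all of them
```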
2

Hill, Chris, Casey Lee Hunt, Sammie Crowder, Brett Fiedler, Emily B. Moore, and Ann Eisenberg. "Investigating Sensory Extensions as Input for Interactive Simulations." In TEI '23: Seventeenth International Conference on Tangible, Embedded, and Embodied Interaction. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3569009.3573108.

3

Jeon, Soo. "State Estimation for Kinematic Model Over Lossy Network." In ASME 2010 Dynamic Systems and Control Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/dscc2010-4297.

Abstract:
The major benefit of the kinematic Kalman filter (KKF), i.e., state estimation based on a kinematic model, is that it is immune to parameter variations and unknown disturbances regardless of the operating conditions. In carrying out complex motion tasks such as coordinated manipulation among multiple machines, some of the motion variables measured by sensors may only be available through the communication layer, which requires formulating the optimal state estimator subject to a lossy network. In contrast to standard dynamic systems, the kinematic model used in the KKF relies on sensory data not only for the output but also for the process input. This paper studies how packet dropout occurring at the input sensor as well as the output sensor affects the performance of the KKF. When the output sensory data are delivered through the lossy network, it has been shown that the mean error covariance of the KKF is bounded for any non-zero packet arrival rate. On the other hand, if the input sensory data are subject to a lossy network, the Bernoulli dropout model results in an unbounded mean error covariance. A more practical strategy is to adopt the previous input estimate in case the current packet is dropped. For each of the packet dropout models, the stochastic characteristics of the mean error covariance are analyzed and compared. Simulation results are presented to illustrate the analytical results and to compare the performance of the time-varying (optimal) filter gain with that of the static (sub-optimal) filter gain.
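A compact sketch of this setting is given below: a kinematic (position-velocity) Kalman filter whose process input is an accelerometer sample and whose output is a position measurement, both delivered over Bernoulli lossy links, using the 'hold the previous input' strategy mentioned in the abstract. The model, noise levels, and drop rates are illustrative and are not the paper's formulation.

```python
# Sketch of a kinematic Kalman filter (KKF) fed by an accelerometer (process
# input) and a position sensor (output), both subject to Bernoulli packet loss.
# When an input packet is dropped, the last received input is reused.
# All models and noise levels are illustrative, not the paper's formulation.
import numpy as np

dt = 0.01
A = np.array([[1, dt], [0, 1]])          # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])      # acceleration enters as the process input
C = np.array([[1.0, 0.0]])               # position measurement
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])

rng = np.random.default_rng(0)
x_true = np.zeros((2, 1)); x_hat = np.zeros((2, 1)); P = np.eye(2)
u_held = 0.0                             # last accelerometer sample that got through

for k in range(1000):
    a_true = np.sin(0.02 * k)            # true acceleration
    x_true = A @ x_true + B * a_true

    # accelerometer packet: if dropped, reuse the previous sample
    if rng.random() > 0.2:               # 20% input-packet drop rate
        u_held = a_true + 0.05 * rng.normal()
    x_hat = A @ x_hat + B * u_held       # time update with (possibly stale) input
    P = A @ P @ A.T + Q

    # position packet: if dropped, skip the measurement update entirely
    if rng.random() > 0.3:               # 30% output-packet drop rate
        y = C @ x_true + 0.1 * rng.normal()
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x_hat = x_hat + K @ (y - C @ x_hat)
        P = (np.eye(2) - K @ C) @ P

print("position error:", float(abs(x_true[0] - x_hat[0])))
```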
4

Wurdemann, Helge A., Evangelos Georgiou, Lei Cui, and Jian S. Dai. "SLAM Using 3D Reconstruction via a Visual RGB and RGB-D Sensory Input." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-47735.

Abstract:
This paper investigates the simultaneous localization and mapping (SLAM) problem by exploiting the Microsoft Kinect™ sensor array and an autonomous mobile robot capable of self-localization. The combination of them covers the major features of SLAM, including mapping, sensing, locating, and modeling. The Kinect™ sensor array provides a dual camera output of RGB, using a CMOS camera, and RGB-D, using a depth camera. The sensors are mounted on the KCLBOT, an autonomous, nonholonomic, two-wheel maneuverable mobile robot. The mobile robot platform has the ability to self-localize and perform navigation maneuvers to traverse to set target points using intelligent processes. The target point for this operation is a fixed coordinate position, which will be the goal for the mobile robot to reach, taking into consideration the obstacles in the environment, which are represented in a 3D spatial model. After a calibration routine, the images are extracted from the sensor and a 3D reconstruction of the traversable environment is produced for the mobile robot to navigate. Using the constructed 3D model, the autonomous mobile robot follows a polynomial-based nonholonomic trajectory with obstacle avoidance. The experimental results demonstrate the cost effectiveness of this off-the-shelf sensor array. The results show the effectiveness of producing a 3D reconstruction of an environment and the feasibility of using the Microsoft Kinect™ sensor for mapping, sensing, locating, and modeling, which enables the implementation of SLAM on this type of platform.
5

Kruijff, Ernst, Gerold Wesche, Kai Riege, Gernot Goebbels, Martijn Kunstman, and Dieter Schmalstieg. "Tactylus, a pen-input device exploring audiotactile sensory binding." In the ACM symposium. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1180495.1180557.

6

Wakatabe, Ryo, Yasuo Kuniyoshi, and Gordon Cheng. "O(log n) algorithm for forward kinematics under asynchronous sensory input." In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989291.

7

Richards, Deborah. "Intimately intelligent virtual agents: knowing the human beyond sensory input." In ICMI '17: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3139491.3139505.

8

Connor, Jack, Jordan Nowell, Benjamin Champion, and Matthew Joordens. "Analysis of Robotic Fish Using Swarming Rules with Limited Sensory Input." In 2019 14th Annual Conference System of Systems Engineering (SoSE). IEEE, 2019. http://dx.doi.org/10.1109/sysose.2019.8753879.

9

Scherlen, Anne-Catherine, and Vincent Gautier. "Eye movements: sensory input to command and control adaptive visual aids." In 2007 3rd International IEEE/EMBS Conference on Neural Engineering. IEEE, 2007. http://dx.doi.org/10.1109/cne.2007.369669.

10

Atashzar, S. Farokh, Mahya Shahbazi, Fariborz Rahimi, Mehdi Delrobaei, Jack Lee, Rajni V. Patel, and Mandar Jog. "Effect of kinesthetic force feedback and visual sensory input on writer's cramp." In 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER 2013). IEEE, 2013. http://dx.doi.org/10.1109/ner.2013.6696076.


Reports on the topic "Sensory input"

1

Parker, Michael, Alex Stott, Brian Quinn, Bruce Elder, Tate Meehan, and Sally Shoop. Joint Chilean and US mobility testing in extreme environments. Engineer Research and Development Center (U.S.), November 2021. http://dx.doi.org/10.21079/11681/42362.

Abstract:
Vehicle mobility in cold and challenging terrains is of interest to both the US and Chilean Armies. Mobility in winter conditions is highly vehicle dependent with autonomous vehicles experiencing additional challenges over manned vehicles. They lack the ability to make informed decisions based on what they are “seeing” and instead need to rely on input from sensors on the vehicle, or from Unmanned Aerial Systems (UAS) or satellite data collections. This work focuses on onboard vehicle Controller Area Network (CAN) Bus sensors, driver input sensors, and some externally mounted sensors to assist with terrain identification and overall vehicle mobility. Analysis of winter vehicle/sensor data collected in collaboration with the Chilean Army in Lonquimay, Chile during July and August 2019 will be discussed in this report.
2

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation of ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy with half the required processing of solely the numerical classifier or neural network. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints. Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine the external quality of tomatoes based on visual information. An improved model for color sorting, which is stable and does not require recalibration for each season, was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, and in a manner consistent with the human graders and inspectors.
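The multi-sensor fusion step described in this report can be sketched generically: per-sensor feature vectors are concatenated and fed to a classifier, and the fused model is compared against each single-sensor model. The synthetic data and the scikit-learn MLP below are placeholders for the project's sensors and its specific neural-network models.

```python
# Generic sketch of multi-sensory fusion for produce grading: concatenate
# per-sensor feature vectors and compare a fused classifier against the best
# single-sensor one. Synthetic data; not the project's sensors or models.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 300
grade = rng.integers(0, 3, n)                         # three quality grades
sensors = {                                           # each sensor sees the grade noisily
    "vision":   grade[:, None] + rng.normal(0, 1.2, (n, 4)),
    "firmness": grade[:, None] + rng.normal(0, 0.9, (n, 2)),
    "sniffer":  grade[:, None] + rng.normal(0, 1.5, (n, 3)),
}

def accuracy(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()

single = {name: accuracy(X, grade) for name, X in sensors.items()}
fused = accuracy(np.hstack(list(sensors.values())), grade)
print(single, "fused:", round(fused, 2))              # fusion typically beats any single sensor
```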
APA, Harvard, Vancouver, ISO, and other styles
3

Jones, Scott B., Shmuel P. Friedman, and Gregory Communar. Novel streaming potential and thermal sensor techniques for monitoring water and nutrient fluxes in the vadose zone. United States Department of Agriculture, January 2011. http://dx.doi.org/10.32747/2011.7597910.bard.

Full text
Abstract:
The “Novel streaming potential (SP) and thermal sensor techniques for monitoring water and nutrient fluxes in the vadose zone” project ended Oct. 30, 2015, after an extension to complete travel and intellectual exchange of ideas and sensors. A significant component of this project was the development and testing of the Penta-needle Heat Pulse Probe (PHPP), in addition to testing of the streaming potential concept, both aimed at soil water flux determination. The PHPP was successfully completed and shown to provide soil water flux estimates down to 1 cm day⁻¹ with altered heat input and timing as well as use of larger heater needles. The PHPP was developed by Scott B. Jones at Utah State University with a plan to share sensors with Shmulik P. Friedman, the ARO collaborator. Delays in completion of the PHPP resulted in limited testing at USU and a late delivery of sensors (Sept. 2015) to Dr. Friedman. Two key aspects of the subsurface water flux sensor development that delayed the availability of the PHPP sensors were the addition of integrated electrical conductivity measurements (available in February 2015) and the resolution of bugs in the microcontroller firmware (problems resolved in April 2015). Furthermore, testing of the streaming potential method with a wide variety of non-polarizable electrodes at both institutions did not prove practical as a measurement tool for water flux, owing to numerous sources of interference, and the M.S. student in Israel terminated his program prematurely for personal reasons. In spite of these challenges, the project funded several undergraduate students building sensors and several master's students and postdocs participating in theory and sensor development and testing. Four peer-reviewed journal articles have been published or submitted to date, and six oral/poster presentations were also delivered by various authors associated with this project. We intend to continue testing the "new generation" PHPP probes at both USU and the ARO, which should result in several additional publications from this follow-on research. Furthermore, Jones is presently awaiting word on an internal grant application for commercialization of the PHPP at USU.
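For readers unfamiliar with heat-pulse flux sensing, the sketch below shows the standard downstream/upstream temperature-ratio calculation that multi-needle probes of this kind commonly rely on; it is not the PHPP firmware, and the spacing, thermal properties, and temperature rises are assumed values for illustration only.

import numpy as np

kappa = 1.4e-7     # soil thermal diffusivity, m^2/s (assumed)
x = 0.006          # heater-to-sensor spacing, m (assumed)
dT_down = 0.52     # peak temperature rise at the downstream needle, K (assumed)
dT_up = 0.48       # peak temperature rise at the upstream needle, K (assumed)

# Heat pulse velocity from the ratio of downstream to upstream temperature rise.
v_h = (kappa / x) * np.log(dT_down / dT_up)

# Convert to water flux density using assumed volumetric heat capacities.
C_soil = 2.5e6     # J m^-3 K^-1
C_water = 4.18e6   # J m^-3 K^-1
J_w = v_h * C_soil / C_water
print("water flux = %.1f cm/day" % (J_w * 100 * 86400))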
APA, Harvard, Vancouver, ISO, and other styles
4

Kuznetsov, Victor, Vladislav Litvinenko, Egor Bykov, and Vadim Lukin. A program for determining the area of the object entering the IR sensor grid, as well as determining the dynamic characteristics. Science and Innovation Center Publishing House, April 2021. http://dx.doi.org/10.12731/bykov.0415.15042021.

Full text
Abstract:
Currently, the dynamic characteristics of objects are evaluated with a large number of chronograph-type devices built from various optical, thermal and laser sensors. These devices share several shortcomings: the received data are not recorded, and neither the trajectory of the object flying through the sensor area nor its trajectory on approach to the device frame is taken into account. The signal received from the infrared sensors is recorded in a separate txt-format document as a table. When the document is accessed, data are read from the current position of the input stream into the specified list, argument by argument, according to the given condition. Reading the data produces an array of N columns, constructed so that the first column holds time values and columns 2...N hold voltage values. The algorithm uses loops that delete array rows in which the threshold value is exceeded in more than two columns, as well as rows in which the threshold is not exceeded at all. The modified array is split into two new arrays, each containing data from a different sensor frame. An array with the coordinates of the centers of the sensor operation zones was created so that the Pythagorean theorem in three-dimensional space could be applied to calculate the exact distance between the zones. The time is taken as the difference between the responses of the first and second sensor frames. Knowing the path and the time, the exact speed of the object can be calculated. For visualization, the oscillograms of each sensor channel are displayed, and a chronograph model was created; the model highlights in purple the area where the threshold has been exceeded.
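The processing chain described in this abstract can be illustrated with a short Python sketch; the file name, column layout, threshold, and zone coordinates are assumptions, not the published program.

import numpy as np

THRESHOLD = 2.5                    # trigger voltage, V (assumed)
data = np.loadtxt("sensors.txt")   # column 0: time, columns 1..N: sensor voltages
t, v = data[:, 0], data[:, 1:]

# Keep only rows where the threshold is exceeded in one or two columns,
# mirroring the row-deletion step described in the abstract.
n_over = (v > THRESHOLD).sum(axis=1)
keep = (n_over >= 1) & (n_over <= 2)
t, v = t[keep], v[keep]

# Assume the first half of the channels belong to frame 1, the rest to frame 2.
half = v.shape[1] // 2
t1 = t[(v[:, :half] > THRESHOLD).any(axis=1)][0]   # first response of frame 1
t2 = t[(v[:, half:] > THRESHOLD).any(axis=1)][0]   # first response of frame 2

# Distance between the triggered zone centres via the 3-D Pythagorean theorem.
p1 = np.array([0.00, 0.05, 0.05])  # zone centre on frame 1, m (assumed)
p2 = np.array([0.50, 0.06, 0.04])  # zone centre on frame 2, m (assumed)
speed = np.linalg.norm(p2 - p1) / (t2 - t1)
print("object speed = %.1f m/s" % speed)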
APA, Harvard, Vancouver, ISO, and other styles
5

McMurtrey, Michael, Kunal Mondal, Joseph Bass, Kiyo Fujimoto, and Austin Biaggne. Report on plasma jet printer for sensor fabrication with process parameters optimized by simulation input. Office of Scientific and Technical Information (OSTI), September 2019. http://dx.doi.org/10.2172/1668670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Alchanatis, Victor, Stephen W. Searcy, Moshe Meron, W. Lee, G. Y. Li, and A. Ben Porath. Prediction of Nitrogen Stress Using Reflectance Techniques. United States Department of Agriculture, November 2001. http://dx.doi.org/10.32747/2001.7580664.bard.

Full text
Abstract:
Commercial agriculture has come under increasing pressure to reduce nitrogen fertilizer inputs in order to minimize potential nonpoint-source pollution of ground and surface waters. This has resulted in increased interest in site-specific fertilizer management. One way to solve pollution problems would be to determine crop nutrient needs in real time, using remote detection, and to regulate the fertilizer dispensed by an applicator. By detecting actual plant needs, only the additional nitrogen necessary to optimize production would be supplied. This research aimed to develop techniques for real-time assessment of the nitrogen status of corn using a mobile sensor with the potential to regulate nitrogen application based on data from that sensor. Specifically, the research first attempted to determine the system parameters necessary to optimize reflectance spectra of corn plants as a function of growth stage, chlorophyll and nitrogen status. In addition, an adaptable multispectral sensor and the signal processing algorithm to provide real-time, in-field assessment of corn nitrogen status were developed. Spectral characteristics of corn leaf reflectance were investigated in order to estimate the nitrogen status of the plants, using a commercial laboratory spectrometer. Statistical models relating leaf N and reflectance spectra were developed for both greenhouse and field plots. A basis was established for assessing nitrogen status using spectral reflectance from plant canopies. The combined effect of variety and N treatment was studied by measuring the reflectance of three varieties with different characteristic leaf colors under five different N treatments. The variety effect on the reflectance at 552 nm was not significant (α = 0.01), while canonical discriminant analysis showed promising results for distinguishing different varieties and N treatments using spectral reflectance. Ambient illumination was found inappropriate for reliable, one-beam spectral reflectance measurement of the plant canopy due to the strong spectral lines of sunlight; therefore, artificial light was used. For in-field N status measurement, a dark chamber was constructed to include the sensor along with artificial illumination. Two different approaches were tested: (i) use of spatially scattered artificial light, and (ii) use of a collimated artificial light beam. It was found that the collimated beam, along with a proper design of the sensor-beam geometry, yielded the best results in terms of reducing the noise due to variable background and maintaining the same distance from the sensor to the sample point of the canopy. A multispectral sensor assembly, based on a linear variable filter, was designed, constructed and tested. The sensor assembly combined two sensors to cover the range of 400 to 1100 nm, a mounting frame, and a field data acquisition system. Using the mobile dark chamber and the developed sensor, as well as an off-the-shelf sensor, the in-field nitrogen status of the plant canopy was measured. Statistical analysis of the acquired in-field data showed that the nitrogen status of the corn leaves can be predicted with a SEP (Standard Error of Prediction) of 0.27%. The stage of maturity of the crop affected the relationship between the reflectance spectrum and the nitrogen status of the leaves. Specifically, the best prediction results were obtained when a separate model was used for each maturity stage.
In-field assessment of the nitrogen status of corn leaves was successfully carried out by non-contact measurement of the reflectance spectrum. This technology is now mature enough to be incorporated into field implements for on-line control of fertilizer application.
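As a rough illustration of how reflectance spectra can be related statistically to leaf nitrogen and scored with a standard error of prediction (SEP), the following Python sketch fits a partial least squares model to synthetic spectra covering the 400-1100 nm range of the sensor assembly; the data, wavelength grid, and model choice are assumptions, not the project's calibration.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
wavelengths = np.arange(400, 1101, 10)                 # nm (assumed grid)
spectra = rng.normal(size=(120, wavelengths.size))     # synthetic canopy reflectance
leaf_N = rng.normal(loc=3.0, scale=0.4, size=120)      # synthetic leaf N, %

X_tr, X_te, y_tr, y_te = train_test_split(spectra, leaf_N, random_state=0)
model = PLSRegression(n_components=5).fit(X_tr, y_tr)

# Standard error of prediction, the statistic quoted above.
resid = y_te - model.predict(X_te).ravel()
sep = np.sqrt(np.mean((resid - resid.mean()) ** 2))
print("SEP = %.2f %% N" % sep)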
APA, Harvard, Vancouver, ISO, and other styles
7

Baker, John L., James L. Olds, and Joel L. Davis. A Novel Approach to Large Scale Brain Network Models: An Algorithmic Model for Place Cell Emergence With Robotic Sensor Input. Fort Belvoir, VA: Defense Technical Information Center, June 2004. http://dx.doi.org/10.21236/ada425321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Berney, Ernest, Andrew Ward, and Naveen Ganesh. First generation automated assessment of airfield damage using LiDAR point clouds. Engineer Research and Development Center (U.S.), March 2021. http://dx.doi.org/10.21079/11681/40042.

Full text
Abstract:
This research developed an automated software technique for identifying type, size, and location of man-made airfield damage including craters, spalls, and camouflets from a digitized three-dimensional point cloud of the airfield surface. Point clouds were initially generated from Light Detection and Ranging (LiDAR) sensors mounted on elevated lifts to simulate aerial data collection and, later, an actual unmanned aerial system. LiDAR data provided a high-resolution, globally positioned, and dimensionally scaled point cloud exported in a LAS file format that was automatically retrieved and processed using volumetric detection algorithms developed in the MATLAB software environment. Developed MATLAB algorithms used a three-stage filling technique to identify the boundaries of craters first, then spalls, then camouflets, and scaled their sizes based on the greatest pointwise extents. All pavement damages and their locations were saved as shapefiles and uploaded into the GeoExPT processing environment for visualization and quality control. This technique requires no user input between data collection and GeoExPT visualization, allowing for a completely automated software analysis with all filters and data processing hidden from the user.
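A minimal Python analog of one stage of this kind of volumetric damage detection is sketched below: a gridded surface is compared with a smoothed reference, connected depressions deeper than a threshold are labeled, and their greatest extents are reported. The grid, kernel size, and 10 cm threshold are assumptions for illustration, not the report's MATLAB implementation.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
dem = rng.normal(0.0, 0.01, size=(200, 200))      # synthetic pavement surface, m
dem[80:120, 90:130] -= 0.30                       # synthetic crater-like depression

reference = ndimage.uniform_filter(dem, size=51)  # smoothed undamaged reference
depth = reference - dem                           # positive where material is missing

labels, n = ndimage.label(depth > 0.10)           # connected regions deeper than 10 cm
for region in range(1, n + 1):
    rows, cols = np.nonzero(labels == region)
    extent = (rows.ptp() + 1, cols.ptp() + 1)     # greatest pointwise extents, cells
    print("damage %d: extent %s cells, max depth %.2f m"
          % (region, extent, depth[labels == region].max()))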
APA, Harvard, Vancouver, ISO, and other styles
9

Meiri, Noam, Michael D. Denbow, and Cynthia J. Denbow. Epigenetic Adaptation: The Regulatory Mechanisms of Hypothalamic Plasticity that Determine Stress-Response Set Point. United States Department of Agriculture, November 2013. http://dx.doi.org/10.32747/2013.7593396.bard.

Full text
Abstract:
Our hypothesis was that postnatal stress exposure or sensory input alters brain activity, which induces acetylation and/or methylation on lysine residues of histone 3 and alters methylation levels in the promoter regions of stress-related genes, ultimately resulting in long-lasting changes in the stress-response set point. Therefore, the objectives of the proposal were: 1. To identify the levels of total histone 3 acetylation and different levels of methylation on lysine 9 and/or 14 during both heat and feed stress and challenge. 2. To evaluate the methylation and acetylation levels of histone 3 lysine 9 and/or 14 at the Bdnf promoter during both heat and feed stress and challenge. 3. To evaluate the levels of the relevant methyltransferases and transmethylases during stress exposure. 4. To identify, by applying each of the stressors, the specific localization in the hypothalamus of the cells that respond with both the specific histone modification and the enzyme involved. 5. To evaluate the physiological effects of antisense knockdown of Ezh2 on the stress responses. 6. To measure the level of CpG methylation in the promoter region of BDNF in thermal treatments and in free-fed, 12-hour fasted, and re-fed chicks during post-natal day 3, which is the critical period for feed-control establishment, and 10 days later to evaluate long-term effects. 7. To determine the phenotypic effect of antisense “knock down” of the transmethylase DNMT3a. Background: The growing demand for improvements in poultry production requires an understanding of the mechanisms governing stress responses. Two of the major stressors affecting animal welfare, and hence the poultry industry in both the U.S. and Israel, are feed intake and thermal responses. Recently, it has been shown that the regulation of energy intake and expenditure, including feed intake and thermal regulation, resides in the hypothalamus and develops during a critical post-hatch period. However, little is known about the regulatory steps involved. The hypothesis to be tested in this proposal is that epigenetic changes in the hypothalamus during post-hatch early development determine the stress-response set point for both feed and thermal stressors. The ambitious goals that were set for this proposal were met. It was established that both stressors, i.e., feed and thermal stress, can be manipulated during the critical period of development at day 3 to induce resilience to stress later in life. Specifically, it was established that unfavorable nutritional conditions or heat exposure during early developmental periods influence subsequent adaptability to those same stressful conditions. Furthermore, it was demonstrated that epigenetic marks on the promoters of genes involved in stress memory are altered both during stress and, as a result, later in life. Specifically, it was demonstrated that fasting and heat had an effect on methylation and acetylation of histone 3 at various lysine residues in the hypothalamus during exposure to stress on day 3 and during stress challenge on day 10. Furthermore, the enzymes that perform these modifications are altered both during stress conditioning and challenge. Finally, these modifications are both necessary and sufficient, since antisense "knockdown" of these enzymes affects histone modifications and, as a consequence, stress resilience. DNA methylation was also demonstrated at the promoters of genes involved in heat stress regulation and long-term resilience.
It should be noted that the only goal we did not meet, for technical reasons, was No. 7. In conclusion: The outcome of this research may provide information for improving stress responses in high-yield poultry breeds using epigenetic adaptation approaches during critical periods of early development, in order to improve animal welfare even under suboptimal environmental conditions.
APA, Harvard, Vancouver, ISO, and other styles
10

Galili, Naftali, Roger P. Rohrbach, Itzhak Shmulevich, Yoram Fuchs, and Giora Zauberman. Non-Destructive Quality Sensing of High-Value Agricultural Commodities Through Response Analysis. United States Department of Agriculture, October 1994. http://dx.doi.org/10.32747/1994.7570549.bard.

Full text
Abstract:
The objectives of this project were to develop nondestructive methods for detection of internal properties and firmness of fruits and vegetables. One method was based on a soft piezoelectric film transducer developed at the Technion, for analysis of fruit response to low-energy excitation. The second method was a dot-matrix piezoelectric transducer of North Carolina State University, developed for contact-pressure analysis of fruit during impact. Two research teams, one in Israel and the other in North Carolina, coordinated their research effort according to the specific objectives of the project, to develop and apply the two complementary methods for quality control of agricultural commodities. In Israel: An improved firmness testing system was developed and tested with tropical fruits. The new system included an instrumented fruit-bed of three flexible piezoelectric sensors and miniature electromagnetic hammers, which served as fruit support and low-energy excitation device, respectively. Resonant frequencies were detected for determination of a firmness index. Two new acoustic parameters were developed for evaluation of fruit firmness and maturity: a damping ratio and a centroid of the frequency response. Experiments were performed with avocado and mango fruits. The internal damping ratio, which may indicate fruit ripeness, increased monotonically with time, while resonant frequencies and firmness indices decreased with time. Fruit samples were tested daily by a destructive penetration test. A fairly high correlation was found in tropical fruits between the penetration force and the new acoustic parameters; a lower correlation was found between this parameter and the conventional firmness index. Improved table-top firmness testing units, Firmalon, with a data-logging system and on-line data analysis capacity were built. The new device was used for the full-scale experiments in the next two years, ahead of the original program and BARD timetable. Close cooperation was initiated with local industry for development of both off-line and on-line sorting and quality control of more agricultural commodities. Firmalon units were produced and operated in major packaging houses in Israel, Belgium and Washington State, on mango and avocado, apples, pears, tomatoes, melons and some other fruits, to gain field experience with the new method. The accumulated experimental data from all these activities are still being analyzed, to improve firmness sorting criteria and shelf-life prediction curves for the different fruits. The test program in commercial CA storage facilities in Washington State included seven apple varieties (Fuji, Braeburn, Gala, Granny Smith, Jonagold, Red Delicious, and Golden Delicious) and the D'Anjou pear variety. FI master-curves could be developed for the Braeburn, Gala, Granny Smith and Jonagold apples. These fruits showed a steady ripening process during the test period. Yet, more work should be conducted to reduce scattering of the data and to determine the confidence limits of the method. The nearly constant FI in Red Delicious and the fluctuations of FI in the Fuji apples should be re-examined. Three sets of experiments were performed with Flandria tomatoes. Despite the complex structure of the tomatoes, the acoustic method could be used for firmness evaluation and to follow the ripening evolution with time. Close agreement was achieved between the auction expert evaluation and that of the nondestructive acoustic test, where a firmness index of 4.0 or more indicated grade-A tomatoes.
More work is being performed to refine the sorting algorithm and to develop a general ripening scale for automatic grading of tomatoes for the fresh fruit market. Galia melons were tested in Israel, in simulated export conditions. It was concluded that the Firmalon is capable of detecting the ripening of melons nondestructively and of sorting the defective fruits out of the export shipment. The cooperation with local industry resulted in the development of an automatic on-line prototype of the acoustic sensor that may be incorporated into the export quality control system for melons. More interesting is the development of the remote firmness sensing method for sealed CA cool-rooms, where most of the full-year fruit yield is stored for off-season consumption. Hundreds of ripening monitor systems have been installed in major fruit storage facilities and are now being evaluated by the consumers. If successful, the new method may cause a major change in long-term fruit storage technology. More uses of the acoustic test method have been considered: monitoring fruit maturity and harvest time, testing fruit samples or each individual fruit on entering the storage facilities, packaging house and auction, and in the supermarket. This approach may result in a full line of equipment for nondestructive quality control of fruits and vegetables, from the orchard or the greenhouse, through the entire sorting, grading and storage process, up to the consumer table. The developed technology offers a tool to determine the maturity of the fruits nondestructively by monitoring their acoustic response to mechanical impulse on the tree. A special device was built and preliminarily tested on mango fruit. More development is needed to produce a portable, hand-operated sensing method for this purpose. In North Carolina: An analysis method based on an Auto-Regressive (AR) model was developed for detecting the first resonance of fruit from their response to mechanical impulse. The algorithm included a routine that detects the first resonant frequency from as many sensors as possible. Experiments on Red Delicious apples were performed and their firmness was determined. The AR method allowed the detection of the first resonance. The method could be fast enough to be utilized in a real-time sorting machine. Yet, further study is needed to improve the method's search algorithm. An impact contact-pressure measurement system and a Neural Network (NN) identification method were developed to investigate the relationships between surface pressure distributions on selected fruits and their respective internal textural qualities. A piezoelectric dot-matrix pressure transducer was developed for the purpose of acquiring time-sampled pressure profiles during impact. The acquired data were transferred to a personal computer, and accurate visualizations of the animated data were presented. A preliminary test with 10 apples was performed. Measurements were made by the contact-pressure transducer in two different positions. Complementary measurements were made on the same apples by using the Firmalon and Magness Taylor (MT) testers. A three-layer neural network was designed. Two-thirds of the contact-pressure data were used as training input data and the corresponding MT data as training target data. The remaining data were used as NN checking data. Six samples randomly chosen from the ten measured samples and their corresponding Firmalon values were used as the NN training and target data, respectively.
The remaining four samples' data were input to the NN. The NN results were consistent with the Firmness Tester values. Thus, if more training data were obtained, the output should be more accurate. In addition, the Firmness Tester values are not consistent with the MT firmness tester values. The NN method developed in this study appears to be a useful tool to emulate the MT firmness test results without destroying the apple samples. To obtain a more accurate estimation of MT firmness, a much larger training data set is required. When the larger sensitive area of the pressure sensor being developed in this project becomes available, the entire contact 'shape' will provide additional information and the neural network results would be more accurate. It has been shown that the impact information can be utilized in the determination of internal quality factors of fruit. Until now,
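To make the acoustic firmness idea concrete, the sketch below estimates the first resonant frequency of a synthetic impulse response with an FFT and forms the widely used acoustic firmness index f^2 * m^(2/3); the signal, sampling rate, damping, and mass are assumed values, and this is not the Firmalon or AR algorithm itself.

import numpy as np

fs = 20000                                  # sampling rate, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
f_res, zeta = 750.0, 0.03                   # synthetic resonance and damping ratio
signal = np.exp(-2 * np.pi * f_res * zeta * t) * np.sin(2 * np.pi * f_res * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
f_detected = freqs[np.argmax(spectrum)]     # dominant (first) resonant frequency

mass = 0.180                                # fruit mass, kg (assumed)
firmness_index = f_detected ** 2 * mass ** (2 / 3)
print("f = %.0f Hz, firmness index = %.0f" % (f_detected, firmness_index))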
APA, Harvard, Vancouver, ISO, and other styles