Academic literature on the topic 'Visual learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Visual learning"

1

Sze, Daniel Y. "Visual Learning." Journal of Vascular and Interventional Radiology 32, no. 3 (March 2021): 331. http://dx.doi.org/10.1016/j.jvir.2021.01.265.

2

Liu, Yan, Yang Liu, Shenghua Zhong, and Songtao Wu. "Implicit Visual Learning." ACM Transactions on Intelligent Systems and Technology 8, no. 2 (January 18, 2017): 1–24. http://dx.doi.org/10.1145/2974024.

3

Cruz, Rodrigo Santa, Basura Fernando, Anoop Cherian, and Stephen Gould. "Visual Permutation Learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 12 (December 1, 2019): 3100–3114. http://dx.doi.org/10.1109/tpami.2018.2873701.

4

Jones, Rachel. "Visual learning visualized." Nature Reviews Neuroscience 4, no. 1 (January 2003): 10. http://dx.doi.org/10.1038/nrn1014.

5

Lu, Zhong-Lin, Tianmiao Hua, Chang-Bing Huang, Yifeng Zhou, and Barbara Anne Dosher. "Visual perceptual learning." Neurobiology of Learning and Memory 95, no. 2 (February 2011): 145–51. http://dx.doi.org/10.1016/j.nlm.2010.09.010.

6

Richler, Jennifer J., and Thomas J. Palmeri. "Visual category learning." Wiley Interdisciplinary Reviews: Cognitive Science 5, no. 1 (November 26, 2013): 75–94. http://dx.doi.org/10.1002/wcs.1268.

7

Nida, Diini Fitrahtun, Muhyiatul Fadilah, Ardi Ardi, and Suci Fajrina. "Characteristics of Visual Literacy-Based Biology Learning Module Validity on Photosynthesis Learning Materials." JURNAL PAJAR (Pendidikan dan Pengajaran) 7, no. 4 (July 29, 2023): 785. http://dx.doi.org/10.33578/pjr.v7i4.9575.

Abstract:
Visual literacy is the skill of interpreting and giving meaning to information presented as images or visuals, and it is included in the list of 21st-century skills. Observation results indicate that most students have not mastered visual literacy well. One effort that can improve visual literacy is the provision of appropriate teaching materials. The research is a Research and Development (R&D) study using a 4-D model modified to 3-D (define, design, develop). The instruments used were content analysis sheets and validation questionnaires. The results imply that the developed module's validity has three characteristics. First, it fosters students' critical thinking and communication skills by having them build their own meaning or conclusions about a given image. Second, it fosters students' creative thinking by having them recreate the provided visual information as images or other visual objects. Third, it fosters students' critical thinking skills by having them connect the visual objects or images distributed to them. The module is considered highly valid (feasible) to use, with a validity score of 94.23%.
8

Guinibert, Matthew. "Learn from your environment: A visual literacy learning model." Australasian Journal of Educational Technology 36, no. 4 (September 28, 2020): 173–88. http://dx.doi.org/10.14742/ajet.5200.

Abstract:
Based on the presupposition that visual literacy skills are not usually learned unaided by osmosis, but require targeted learning support, this article explores how everyday encounters with visuals can be leveraged as contingent learning opportunities. The author proposes that a learner's environment can become a visual learning space if appropriate learning support is provided. This learning support may be delivered via the anytime and anywhere capabilities of mobile learning (m-learning), which facilitates peer learning in informal settings. The study proposed a rhizomatic m-learning model of visual skills that describes how the visuals one encounters in one's everyday physical environment can be leveraged as visual literacy learning opportunities. The model was arrived at by following an approach based on heuristic inquiry and user-centred design, including testing prototypes with representative learners. The model describes one means by which novice learners could achieve visual literacy from contingent learning encounters in informal learning environments, through collaboration, and with context-aware learning support. Such a model shifts the onus of visual literacy learning away from academic programmes and, in this way, opens an alternative pathway for the learning of visual skills. Implications for practice or policy: This research proposes a means for learners to leverage the visuals they encounter in their everyday physical environment as visual literacy learning opportunities. M-learning software developers may find the pedagogical model useful in informing their own software. Educators teaching visual skills may find application of the learning model's pedagogical assumptions in isolation in their own formal learning settings.
9

Taga, Tadashi, Kazuhito Yoshizaki, and Kimiko Kato. "Visual field difference in visual statistical learning." Proceedings of the Annual Convention of the Japanese Psychological Association 79 (September 22, 2015): 2EV-074. http://dx.doi.org/10.4992/pacjpa.79.0_2ev-074.

10

Holland, Keith. "Visual skills for learning." Set: Research Information for Teachers, no. 2 (August 1, 1996): 1–4. http://dx.doi.org/10.18296/set.0900.


Dissertations / Theses on the topic "Visual learning"

1

Zhu, Fan. "Visual feature learning." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/8218/.

Abstract:
Categorization is a fundamental problem of many computer vision applications, e.g., image classification, pedestrian detection and face recognition. The robustness of a categorization system heavily relies on the quality of the features by which data are represented. Prior art in feature extraction can be grouped into different levels, which, in bottom-up order, are low-level features (e.g., pixels and gradients) and middle/high-level features (e.g., the BoW model and sparse coding). Low-level features can be directly extracted from images or videos, while middle/high-level features are constructed upon low-level features and are designed to enhance the capability of categorization systems based on different considerations (e.g., guaranteeing domain-invariance and improving discriminative power). This thesis focuses on the study of visual feature learning. The challenges that remain in designing visual features lie in intra-class variation, occlusions, illumination and viewpoint changes, and insufficient prior knowledge. To address these challenges, I present several visual feature learning methods covering the following sub-topics: (i) I start by introducing a segmentation-based object recognition system. (ii) When training data are insufficient, I seek data from other resources, which include images or videos in a different domain, actions captured from a different viewpoint, and information in a different media form. In order to appropriately transfer such resources into the target categorization system, four transfer-learning-based feature learning methods are presented in this section, addressing cross-view, cross-domain, and cross-modality scenarios accordingly. (iii) Finally, I present a random-forest-based feature fusion method for multi-view action recognition.
2

Goh, Hanlin. "Learning deep visual representations." Paris 6, 2013. http://www.theses.fr/2013PA066356.

Abstract:
Recent advancements in the areas of deep learning and visual information processing have presented an opportunity to unite both fields. These complementary fields combine to tackle the problem of classifying images into their semantic categories. Deep learning brings learning and representational capabilities to a visual processing model that is adapted for image classification. This thesis addresses problems that lead to the proposal of learning deep visual representations for image classification. The problem of deep learning is tackled on two fronts. The first aspect is the problem of unsupervised learning of latent representations from input data. The main focus is the integration of prior knowledge into the learning of restricted Boltzmann machines (RBMs) through regularization. Regularizers are proposed to induce sparsity, selectivity and topographic organization in the coding to improve discrimination and invariance. The second direction introduces the notion of gradually transiting from unsupervised layer-wise learning to supervised deep learning. This is done through the integration of bottom-up information with top-down signals. Two novel implementations supporting this notion are explored. The first method uses top-down regularization to train a deep network of RBMs. The second method combines predictive and reconstructive loss functions to optimize a stack of encoder-decoder networks. The proposed deep learning techniques are applied to tackle the image classification problem. The bag-of-words model is adopted due to its strengths in image modeling through the use of local image descriptors and spatial pooling schemes. Deep learning with spatial aggregation is used to learn a hierarchical visual dictionary for encoding the image descriptors into mid-level representations. This method achieves leading image classification performance for object and scene images. The learned dictionaries are diverse and non-redundant, and the speed of inference is high. From this, a further optimization is performed for the subsequent pooling step, by introducing a differentiable pooling parameterization and applying the error backpropagation algorithm. This thesis represents one of the first attempts to synthesize deep learning and the bag-of-words model. This union results in many challenging research problems, leaving much room for further study in this area.
3

Walker, Catherine Livesay. "Visual learning through Hypermedia." CSUSB ScholarWorks, 1996. https://scholarworks.lib.csusb.edu/etd-project/1148.

4

Owens, Andrew (Andrew Hale). "Learning visual models from paired audio-visual examples." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107352.

Abstract:
From the clink of a mug placed onto a saucer to the bustle of a busy café, our days are filled with visual experiences that are accompanied by distinctive sounds. In this thesis, we show that these sounds can provide a rich training signal for learning visual models. First, we propose the task of predicting the sound that an object makes when struck as a way of studying physical interactions within a visual scene. We demonstrate this idea by training an algorithm to produce plausible soundtracks for videos in which people hit and scratch objects with a drumstick. Then, with human studies and automated evaluations on recognition tasks, we verify that the sounds produced by the algorithm convey information about actions and material properties. Second, we show that ambient audio - e.g., crashing waves, people speaking in a crowd - can also be used to learn visual models. We train a convolutional neural network to predict a statistical summary of the sounds that occur within a scene, and we demonstrate that the visual representation learned by the model conveys information about objects and scenes.
5

Peyre, Julia. "Learning to detect visual relations." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE016.

Abstract:
In this thesis, we study the problem of detecting visual relations of the form (subject, predicate, object) in images, which are intermediate-level semantic units between objects and complex scenes. Our work addresses two main challenges in visual relation detection: (1) the difficulty of obtaining box-level annotations to train fully-supervised models, and (2) the variability of appearance of visual relations. We first propose a weakly-supervised approach which, given pre-trained object detectors, enables us to learn relation detectors using image-level labels only, maintaining performance close to that of fully-supervised models. Second, we propose a model that combines different granularities of embeddings (for subject, object, predicate and triplet) to better model appearance variation, and we introduce an analogical reasoning module to generalize to unseen triplets. Experimental results demonstrate the improvement of our hybrid model over a purely compositional model and validate the benefits of our transfer by analogy to retrieve unseen triplets.
6

Wang, Zhaoqing. "Self-supervised Visual Representation Learning." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29595.

Abstract:
In general, large-scale annotated data are essential to training deep neural networks to achieve better performance in visual feature learning for various computer vision applications. Unfortunately, such annotations are challenging to obtain, requiring a high cost in money and human resources. The dependence on large-scale annotated data has become a crucial bottleneck in developing an advanced intelligent perception system. Self-supervised visual representation learning, a subset of unsupervised learning, has gained popularity because of its ability to avoid the high cost of annotated data. A series of methods has designed various pretext tasks to learn general representations from unlabeled data and use these representations for different downstream tasks. Although previous methods achieved great success, a label noise problem exists in these pretext tasks due to the lack of human-annotated supervision, which harms transfer performance. This thesis discusses two types of noise problem in self-supervised learning and designs corresponding methods to alleviate their negative effects and learn transferable representations. First, in pixel-level self-supervised learning, the pixel-level correspondences are easily noisy because of complicated context relationships (e.g., misleading pixels in the background). Second, two views of the same image share the foreground object and some background information; when optimizing the pretext task (e.g., contrastive learning), the model easily captures the foreground object and noisy background information simultaneously. Such background information can be harmful to transfer performance on downstream tasks, including image classification, object detection, and instance segmentation. To address the above-mentioned issues, our core idea is to leverage data regularities and prior knowledge. Experimental results demonstrate that the proposed methods effectively alleviate the negative effects of label noise in self-supervised learning and surpass a series of previous methods.
7

Tang-Wright, Kimmy. "Visual topography and perceptual learning in the primate visual system." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:388b9658-dceb-443a-a19b-c960af162819.

Abstract:
The primate visual system is organised and wired in a topological manner. From the eye well into extrastriate visual cortex, a preserved spatial representation of the visual world is maintained across many levels of processing. Diffusion-weighted imaging (DWI), together with probabilistic tractography, is a non-invasive technique for mapping connectivity within the brain. In this thesis I probed the sensitivity and accuracy of DWI and probabilistic tractography by quantifying its capacity to detect topological connectivity in the post mortem macaque brain, between the lateral geniculate nucleus (LGN) and primary visual cortex (V1). The results were validated against electrophysiological and histological data from previous studies. Using the methodology developed in this thesis, it was possible to segment the LGN reliably into distinct subregions based on its structural connectivity to different parts of the visual field represented in V1. Quantitative differences in connectivity from magno- and parvocellular subcomponents of the LGN to different parts of V1 could be replicated with this method in post mortem brains. The topological corticocortical connectivity between extrastriate visual area V5/MT and V1 could also be mapped in the post mortem macaque. In vivo DWI scans previously obtained from the same brains have lower resolution and signal-to-noise because of the shorter scan times. Nevertheless, in many cases, these yielded topological maps similar to the post mortem maps. These results indicate that the preserved topology of connection between LGN and V1, and V5/MT and V1, can be revealed using non-invasive measures of diffusion-weighted imaging and tractography in vivo. In a preliminary investigation using Human Connectome data obtained in vivo, I was not able to segment the retinotopic map in LGN based on connections to V1. This may be because information about the topological connectivity is not carried in the much lower resolution human diffusion data, or because of other methodological limitations. I also investigated the mechanisms of perceptual learning by developing a novel task-irrelevant perceptual learning paradigm designed to adapt neuronal elements early in visual processing in a certain region of the visual field. There is evidence, although not clear-cut, to suggest that the paradigm elicits task-irrelevant perceptual learning, but that these effects only emerge when practice-related effects are accounted for. When orientation- and location-specific effects on perceptual performance are examined, the largest improvement occurs at the trained location; however, there is also significant improvement at one other 'untrained' location, and a significant improvement in performance for a control group that did not receive any training at any location. The work highlights inherent difficulties in investigating perceptual learning, which relate to the fact that learning likely takes place at both lower and higher levels of processing; however, the paradigm provides a good starting point for comprehensively investigating the complex mechanisms underlying perceptual learning.
8

Shi, Xiaojin. "Visual learning from small training datasets /." Diss., Digital Dissertations Database. Restricted to UC campuses, 2005. http://uclibs.org/PID/11984.

9

Liu, Jingen. "Learning Semantic Features for Visual Recognition." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3358.

Abstract:
Visual recognition (e.g., object, scene and action recognition) is an active area of research in computer vision due to its increasing number of real-world applications such as video (image) indexing and search, intelligent surveillance, human-machine interaction, robot navigation, etc. Effective modeling of objects, scenes and actions is critical for visual recognition. Recently, the bag of visual words (BoVW) representation, in which image patches or video cuboids are quantized into visual words (i.e., mid-level features) based on their appearance similarity using clustering, has been widely and successfully explored. The advantages of this representation are: no explicit detection of objects or object parts and their tracking are required; the representation is somewhat tolerant to within-class deformations; and it is efficient for matching. However, the performance of the BoVW is sensitive to the size of the visual vocabulary, so computationally expensive cross-validation is needed to find the appropriate quantization granularity. This limitation is partially due to the fact that the visual words are not semantically meaningful, which limits the effectiveness and compactness of the representation. To overcome these shortcomings, in this thesis we present a principled approach to learn a semantic vocabulary (i.e., high-level features) from a large number of visual words (mid-level features). In this context, the thesis makes two major contributions. First, we have developed an algorithm to discover a compact yet discriminative semantic vocabulary. This vocabulary is obtained by grouping the visual words, based on their distribution in videos (images), into visual-word clusters. The mutual information (MI) between the clusters and the videos (images) depicts the discriminative power of the semantic vocabulary, while the MI between visual words and visual-word clusters measures the compactness of the vocabulary. We apply the information bottleneck (IB) algorithm to find the optimal number of visual-word clusters by finding a good tradeoff between compactness and discriminative power. We tested our proposed approach on the state-of-the-art KTH dataset and obtained an average accuracy of 94.2%. However, this approach performs one-sided clustering, because only visual words are clustered regardless of which video they appear in. In order to leverage the co-occurrence of visual words and images, we have developed a co-clustering algorithm to simultaneously group the visual words and images. We tested our approach on the publicly available fifteen-scene dataset and obtained about a 4% increase in average accuracy compared to the one-sided clustering approaches. Second, instead of grouping the mid-level features directly, we first embed the features into a low-dimensional semantic space by manifold learning, and then perform the clustering. We apply Diffusion Maps (DM) to capture the local geometric structure of the mid-level feature space. The DM embedding is able to preserve the explicitly defined diffusion distance, which reflects the semantic similarity between any two features. Furthermore, the DM provides multi-scale analysis capability by adjusting the time steps in the Markov transition matrix. The experiments on the KTH dataset show that DM can perform much better (about 3% to 6% improvement in average accuracy) than other manifold learning approaches and the IB method. The above methods use only a single type of feature. In order to combine multiple heterogeneous features for visual recognition, we further propose the Fiedler Embedding to capture the complicated semantic relationships between all entities (i.e., videos, images, and heterogeneous features). The discovered relationships are then employed to further increase the recognition rate. We tested our approach on the Weizmann dataset, and achieved about 17%–21% improvement in average accuracy.
10

Beale, Dan. "Autonomous visual learning for robotic systems." Thesis, University of Bath, 2012. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558886.

Abstract:
This thesis investigates the problem of visual learning using a robotic platform. Given a set of objects, the robot's task is to autonomously manipulate, observe, and learn. This allows the robot to recognise objects in a novel scene and pose, or separate them into distinct visual categories. The main focus of the work is autonomously acquiring object models using robotic manipulation. Autonomous learning is important for robotic systems. In the context of vision, it allows a robot to adapt to new and uncertain environments, updating its internal model of the world. It also reduces the amount of human supervision needed for building visual models. This leads to machines which can operate in environments with rich and complicated visual information, such as the home or industrial workspace, and in environments which are potentially hazardous for humans. The hypothesis claims that inducing robot motion on objects aids the learning process. It is shown that extra information from the robot's sensors provides enough information to localise an object and distinguish it from the background, and that decisive planning allows the object to be separated and observed from a variety of different poses, giving a good foundation for building a robust classification model. Contributions include a new segmentation algorithm, a new classification model for object learning, and a method for allowing a robot to supervise its own learning in cluttered and dynamic environments.

Books on the topic "Visual learning"

1

Ikeuchi, Katsushi, and Manuela M. Veloso, eds. Symbolic visual learning. New York: Oxford University Press, 1997.

2

Nayar, Shree K., and Tomaso Poggio, eds. Early visual learning. New York: Oxford University Press, 1996.

3

Moore, David M., and Francis M. Dwyer, eds. Visual literacy: A spectrum of visual learning. Englewood Cliffs, N.J.: Educational Technology Publications, 1994.

4

Liberty, Jesse. Learning Visual Basic .NET. Sebastopol, CA: O'Reilly, 2002.

5

Erin, Jane N., ed. Visual handicaps and learning. 3rd ed. Austin, Tex.: PRO-ED, 1992.

6

Rourke, Adrianne. Improving visual teaching materials. Hauppauge, N.Y: Nova Science Publishers, 2009.

7

Baratta, Alex. Visual writing. Newcastle upon Tyne: Cambridge Scholars, 2010.

8

Vakanski, Aleksandar, and Farrokh Janabi-Sharifi. Robot Learning by Visual Observation. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2017. http://dx.doi.org/10.1002/9781119091882.

9

Beatty, Grace Joely. PowerPoint: The visual learning guide. Rocklin, CA: Prima Pub., 1994.

10

Fahle, Manfred, and Tomaso Poggio, eds. Perceptual learning. Cambridge, Mass.: MIT Press, 2002.


Book chapters on the topic "Visual learning"

1

Burge, M., and W. Burger. "Learning visual ideals." In Image Analysis and Processing, 316–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63508-4_138.

2

Burge, M., and W. Burger. "Learning visual ideals." In Lecture Notes in Computer Science, 464–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0025067.

3

Panciroli, Chiara, Laura Corazza, and Anita Macauda. "Visual-Graphic Learning." In Proceedings of the 2nd International and Interdisciplinary Conference on Image and Imagination, 49–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41018-6_6.

4

Lu, Zhong-Lin, and Barbara Anne Dosher. "Visual Perceptual Learning." In Encyclopedia of the Sciences of Learning, 3415–18. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_258.

5

Lovegrove, William. "The Visual Deficit Hypothesis." In Learning Disabilities, 246–69. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4613-9133-3_8.

6

Golon, Alexandra Shires. "Learning Styles Differentiation." In VISUAL-SPATIAL learners, 1–18. 2nd ed. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003239482-1.

7

Wu, Qi, Peng Wang, Xin Wang, Xiaodong He, and Wenwu Zhu. "Video Representation Learning." In Visual Question Answering, 111–17. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0964-1_7.

8

Wu, Qi, Peng Wang, Xin Wang, Xiaodong He, and Wenwu Zhu. "Deep Learning Basics." In Visual Question Answering, 15–26. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0964-1_2.

9

Grobstein, Paul, and Kao Liang Chow. "Visual System Development, Plasticity." In Learning and Memory, 56–58. Boston, MA: Birkhäuser Boston, 1989. http://dx.doi.org/10.1007/978-1-4899-6778-7_22.


Conference papers on the topic "Visual learning"

1

Buijs, Jean M., and Michael S. Lew. "Learning visual concepts." In the seventh ACM international conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/319878.319880.

2

Zhao, Qi, and Christof Koch. "Learning visual saliency." In 2011 45th Annual Conference on Information Sciences and Systems (CISS). IEEE, 2011. http://dx.doi.org/10.1109/ciss.2011.5766178.

3

Berardi, Nicoletta, and Adriana Fiorentini. "Visual Perceptual Learning." In Proceedings of the International School of Biophysics. World Scientific, 2001. http://dx.doi.org/10.1142/9789812799975_0034.

4

Ji, Daomin, Hui Luo, and Zhifeng Bao. "Visualization Recommendation Through Visual Relation Learning and Visual Preference Learning." In 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023. http://dx.doi.org/10.1109/icde55515.2023.00145.

5

Chang, Guangming, Chunfen Yuan, and Weiming Hu. "Interclass visual similarity based visual vocabulary learning." In 2011 First Asian Conference on Pattern Recognition (ACPR 2011). IEEE, 2011. http://dx.doi.org/10.1109/acpr.2011.6166597.

6

Mahouachi, Dorra, and Moulay A. Akhloufi. "Deep learning visual programming." In Disruptive Technologies in Information Sciences III, edited by Misty Blowers, Russell D. Hall, and Venkateswara R. Dasari. SPIE, 2019. http://dx.doi.org/10.1117/12.2519882.

7

Cruz, Rodrigo Santa, Basura Fernando, Anoop Cherian, and Stephen Gould. "DeepPermNet: Visual Permutation Learning." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.640.

8

Cai, Haipeng, Shiv Raj Pant, and Wen Li. "Towards learning visual semantics." In ESEC/FSE '20: 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3368089.3417040.

9

Teow, Matthew Y. W. "Convolutional Visual Feature Learning." In the 2018 International Conference. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3232651.3232672.

10

Yeh, Tom, and Trevor Darrell. "Dynamic visual category learning." In 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2008. http://dx.doi.org/10.1109/cvpr.2008.4587616.


Reports on the topic "Visual learning"

1

Bhanu, Bir. Learning Integrated Visual Database for Image Exploitation. Fort Belvoir, VA: Defense Technical Information Center, November 2002. http://dx.doi.org/10.21236/ada413389.

2

Edelman, Shimon, Heinrich H. Buelthoff, and Erik Sklar. Task and Object Learning in Visual Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada259961.

3

Jiang, Yuhong V. Implicit Learning of Complex Visual Contexts Under Non-Optimal Conditions. Fort Belvoir, VA: Defense Technical Information Center, July 2007. http://dx.doi.org/10.21236/ada482119.

4

Petrie, Christopher, and Katija Aladin. Spotlight: Visual Arts. HundrED, December 2020. http://dx.doi.org/10.58261/azgu5536.

Abstract:
HundrED and Supercell believe that fostering Visual Arts skills can be just as important as fostering numeracy and literacy. Furthermore, we also believe that Visual Arts can be integrated into all learning in schools and developed in a diversity of ways. To this end, the purpose of this project is to shine a spotlight on, and make globally visible, leading education innovations from around the world doing exceptional work on developing the skill of Visual Arts for all students, teachers, and leaders in schools today.
5

Poggio, Tomaso, and Stephen Smale. Hierarchical Kernel Machines: The Mathematics of Learning Inspired by Visual Cortex. Fort Belvoir, VA: Defense Technical Information Center, February 2013. http://dx.doi.org/10.21236/ada580529.

6

Harmon, Jennifer. Exploring the Efficacy of Active and Authentic Learning in the Visual Merchandising Classroom. Ames: Iowa State University, Digital Repository, November 2016. http://dx.doi.org/10.31274/itaa_proceedings-180814-1524.

7

Mills, Kathy, Elizabeth Heck, Alinta Brown, Patricia Funnell, and Lesley Friend. Senses together : Multimodal literacy learning in primary education : Final project report. Institute for Learning Sciences and Teacher Education, Australian Catholic University, 2023. http://dx.doi.org/10.24268/acu.8zy8y.

Abstract:
[Executive summary] Literacy studies have traditionally focussed on the seen. The other senses are typically under-recognised in literacy studies and research, where the visual sense has been previously prioritised. However, spoken and written language, images, gestures, touch, movement, and sound are part of everyday literacy practices. Communication is no longer focussed on visual texts but is a multisensory experience. Effective communication depends then on sensory orchestration, which unifies the body and its senses. Understanding sensory orchestration is crucial to literacy learning in the 21st century where the combination of multisensory practices is both digital and multimodal. Unfortunately, while multimodal literacy has become an increasing focus in school curriculum, research has still largely remained focussed on the visual. The Sensory Orchestration for Multimodal Literacy Learning in Primary Education project, led by ARC Future Fellow Professor Kathy Mills, sought to address this research deficit. In addressing this gap, the project built an evidence base for understanding how students become critical users of sensory techniques to communicate through digital, virtual, and augmented-reality texts. The project has contributed to the development of new multimodal literacy programs and a next-generation approach to multimodality through the utilisation of innovative sensorial education programs in various educational environments including primary schools, digital labs, and art museums.
8

Yu, Wanchi. Implicit Learning of Children with and without Developmental Language Disorder across Auditory and Visual Categories. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.7460.

9

Nahorniak, Maya. Occupation of profession: Methodology of laboratory classes from practically-oriented courses under distance learning (on an example of discipline «Radioproduction»). Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11412.

Abstract:
The article deals with the peculiarities of using verbal, visual, and practical methods in the distance teaching of the professional, practice-oriented discipline «Radioproduction», and it offers new techniques for applying these methods during the presentation of theoretical material and the creation of a media product (audiovisual content) while the specialty is acquired online. It is proved that in distance teaching of this discipline it is inadmissible to absolutize the significance of verbal methods (narration, explanation, conversation, discussion, lecture) and that all varieties of verbal methods require an intensified interactive factor. Drawing on the author's own experience, the article demonstrates how visual learning methods can be used most appropriately with the help of various educational platforms. Particular attention is paid to the fact that practical teaching methods based on students' professional activities acquire priority in their professional training. It is established that only the parity application of new techniques of verbal, visual, and practical online teaching methods can have a proper pedagogical effect and ensure the qualitative acquisition of the specialty. Teaching methods – verbal, visual, and practical – are intended to support all levels of assimilation of knowledge and skills and to promote full mastery of the radio journalist's profession.
10

Shepiliev, Dmytro S., Yevhenii O. Modlo, Yuliia V. Yechkalo, Viktoriia V. Tkachuk, Mykhailo M. Mintii, Iryna S. Mintii, Oksana M. Markova, et al. WebAR development tools: An overview. CEUR Workshop Proceedings, March 2021. http://dx.doi.org/10.31812/123456789/4356.

Abstract:
Web augmented reality (WebAR) development tools aimed at improving the visual aspects of learning are far from being visual and available themselves. This causes problems in selecting and testing WebAR development tools for CS undergraduates mastering web-design basics. The research is aimed at conducting a comparative analysis of WebAR tools to select those appropriate for beginners.