Selected scientific literature on the topic "Visual learning"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Visual learning".

Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read its abstract online, when one is included in the metadata.

Journal articles on the topic "Visual learning"

1

Sze, Daniel Y. "Visual Learning". Journal of Vascular and Interventional Radiology 32, no. 3 (March 2021): 331. http://dx.doi.org/10.1016/j.jvir.2021.01.265.

2

Liu, Yan, Yang Liu, Shenghua Zhong, and Songtao Wu. "Implicit Visual Learning". ACM Transactions on Intelligent Systems and Technology 8, no. 2 (January 18, 2017): 1–24. http://dx.doi.org/10.1145/2974024.

3

Cruz, Rodrigo Santa, Basura Fernando, Anoop Cherian, and Stephen Gould. "Visual Permutation Learning". IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 12 (December 1, 2019): 3100–3114. http://dx.doi.org/10.1109/tpami.2018.2873701.

4

Jones, Rachel. "Visual learning visualized". Nature Reviews Neuroscience 4, no. 1 (January 2003): 10. http://dx.doi.org/10.1038/nrn1014.

5

Lu, Zhong-Lin, Tianmiao Hua, Chang-Bing Huang, Yifeng Zhou, and Barbara Anne Dosher. "Visual perceptual learning". Neurobiology of Learning and Memory 95, no. 2 (February 2011): 145–51. http://dx.doi.org/10.1016/j.nlm.2010.09.010.

6

Richler, Jennifer J., and Thomas J. Palmeri. "Visual category learning". Wiley Interdisciplinary Reviews: Cognitive Science 5, no. 1 (November 26, 2013): 75–94. http://dx.doi.org/10.1002/wcs.1268.

7

Nida, Diini Fitrahtun, Muhyiatul Fadilah, Ardi Ardi, and Suci Fajrina. "CHARACTERISTICS OF VISUAL LITERACY-BASED BIOLOGY LEARNING MODULE VALIDITY ON PHOTOSYNTHESIS LEARNING MATERIALS". JURNAL PAJAR (Pendidikan dan Pengajaran) 7, no. 4 (July 29, 2023): 785. http://dx.doi.org/10.33578/pjr.v7i4.9575.

Abstract:
Visual literacy is the skill to interpret and give meaning to information in the form of images or visuals. Visual literacy is included in the list of 21st-century skills. The observation results indicate that most of the students have not mastered visual literacy well. One of the efforts that can be made to improve visual literacy is the provision of appropriate and right teaching materials. The research is an R&D (Research and Development) using a 4-D model, which is modified to 3-D (define, design, develop). The instruments used were content analysis sheets and validation questionnaires. The results of the research imply that there are three characteristics of the validity of the developed module. First, visual literacy produces students’ critical thinking and communication skills by building their own meaning or conclusions regarding the given image object. Second, visual literacy produces students' creative thinking by recreating it in the form of images or other visual objects from the provided visual information. Third, visual literacy produces students' critical thinking skills by connecting visual objects or images that are distributed to them. The module is considered to be very valid (feasible) to use with a percentage of 94.23%.
8

Guinibert, Matthew. "Learn from your environment: A visual literacy learning model". Australasian Journal of Educational Technology 36, no. 4 (September 28, 2020): 173–88. http://dx.doi.org/10.14742/ajet.5200.

Abstract:
Based on the presupposition that visual literacy skills are not usually learned unaided by osmosis, but require targeted learning support, this article explores how everyday encounters with visuals can be leveraged as contingent learning opportunities. The author proposes that a learner’s environment can become a visual learning space if appropriate learning support is provided. This learning support may be delivered via the anytime and anywhere capabilities of mobile learning (m-learning), which facilitates peer learning in informal settings. The study propositioned a rhizomatic m-learning model of visual skills that describes how the visuals one encounters in their physical everyday environment can be leveraged as visual literacy learning opportunities. The model was arrived at by following an approach based on heuristic inquiry and user-centred design, including testing prototypes with representative learners. The model describes one means visual literacy could be achieved by novice learners from contingent learning encounters in informal learning environments, through collaboration and by providing context-aware learning support. Such a model shifts the onus of visual literacy learning away from academic programmes and, in this way, opens an alternative pathway for the learning of visual skills. Implications for practice or policy: This research proposes a means for learners to leverage visuals they encounter in their physical everyday environment as visual literacy learning opportunities. M-learning software developers may find the pedagogical model useful in informing their own software. Educators teaching visual skills may find application of the learning model’s pedagogical assumptions in isolation in their own formal learning settings.
9

Taga, Tadashi, Kazuhito Yoshizaki, and Kimiko Kato. "Visual field difference in visual statistical learning." Proceedings of the Annual Convention of the Japanese Psychological Association 79 (September 22, 2015): 2EV—074–2EV—074. http://dx.doi.org/10.4992/pacjpa.79.0_2ev-074.

10

Holland, Keith. "Visual skills for learning". Set: Research Information for Teachers, no. 2 (August 1, 1996): 1–4. http://dx.doi.org/10.18296/set.0900.


Theses on the topic "Visual learning"

1

Zhu, Fan. "Visual feature learning". Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/8218/.

Abstract:
Categorization is a fundamental problem of many computer vision applications, e.g., image classification, pedestrian detection and face recognition. The robustness of a categorization system heavily relies on the quality of features, by which data are represented. The prior arts of feature extraction can be concluded in different levels, which, in a bottom up order, are low level features (e.g., pixels and gradients) and middle/high-level features (e.g., the BoW model and sparse coding). Low level features can be directly extracted from images or videos, while middle/high-level features are constructed upon low-level features, and are designed to enhance the capability of categorization systems based on different considerations (e.g., guaranteeing the domain-invariance and improving the discriminative power). This thesis focuses on the study of visual feature learning. Challenges that remain in designing visual features lie in intra-class variation, occlusions, illumination and view-point changes and insufficient prior knowledge. To address these challenges, I present several visual feature learning methods, where these methods cover the following sub-topics: (i) I start by introducing a segmentation-based object recognition system. (ii) When training data are insufficient, I seek data from other resources, which include images or videos in a different domain, actions captured from a different viewpoint and information in a different media form. In order to appropriately transfer such resources into the target categorization system, four transfer learning-based feature learning methods are presented in this section, where both cross-view, cross-domain and cross-modality scenarios are addressed accordingly. (iii) Finally, I present a random-forest based feature fusion method for multi-view action recognition.
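
As an illustration of the mid-level "BoW model" mentioned in the abstract, the following is a minimal sketch (not code from the thesis): local descriptors pooled from training images are clustered into a visual vocabulary, and each image is then encoded as a normalized histogram of visual-word assignments. The descriptor dimension, vocabulary size, and the random arrays standing in for SIFT-like descriptors are illustrative assumptions.

```python
# Minimal bag-of-visual-words sketch (illustrative only, not the thesis code).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, n_words=64, seed=0):
    """Cluster pooled local descriptors (N x D) into a visual vocabulary."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(descriptors)

def bow_encode(image_descriptors, vocabulary):
    """Encode one image's descriptors (M x D) as an L1-normalized word histogram."""
    words = vocabulary.predict(image_descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

rng = np.random.default_rng(0)
pooled = rng.normal(size=(5000, 128)).astype(np.float32)   # stand-in for SIFT-like descriptors
vocab = build_vocabulary(pooled)
print(bow_encode(rng.normal(size=(300, 128)).astype(np.float32), vocab).shape)  # (64,)
```
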
2

Goh, Hanlin. "Learning deep visual representations". Paris 6, 2013. http://www.theses.fr/2013PA066356.

Abstract:
Recent advancements in the areas of deep learning and visual information processing have presented an opportunity to unite both fields. These complementary fields combine to tackle the problem of classifying images into their semantic categories. Deep learning brings learning and representational capabilities to a visual processing model that is adapted for image classification. This thesis addresses problems that lead to the proposal of learning deep visual representations for image classification. The problem of deep learning is tackled on two fronts. The first aspect is the problem of unsupervised learning of latent representations from input data. The main focus is the integration of prior knowledge into the learning of restricted Boltzmann machines (RBM) through regularization. Regularizers are proposed to induce sparsity, selectivity and topographic organization in the coding to improve discrimination and invariance. The second direction introduces the notion of gradually transiting from unsupervised layer-wise learning to supervised deep learning. This is done through the integration of bottom-up information with top-down signals. Two novel implementations supporting this notion are explored. The first method uses top-down regularization to train a deep network of RBMs. The second method combines predictive and reconstructive loss functions to optimize a stack of encoder-decoder networks. The proposed deep learning techniques are applied to tackle the image classification problem. The bag-of-words model is adopted due to its strengths in image modeling through the use of local image descriptors and spatial pooling schemes. Deep learning with spatial aggregation is used to learn a hierarchical visual dictionary for encoding the image descriptors into mid-level representations. This method achieves leading image classification performances for object and scene images. The learned dictionaries are diverse and non-redundant. The speed of inference is also high. From this, a further optimization is performed for the subsequent pooling step. This is done by introducing a differentiable pooling parameterization and applying the error backpropagation algorithm. This thesis represents one of the first attempts to synthesize deep learning and the bag-of-words model. This union results in many challenging research problems, leaving much room for further study in this area.
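
The sketch below illustrates the kind of sparsity regularization discussed above, applied here to the hidden code of a small autoencoder rather than an RBM; the thesis's own regularizers, hyperparameters, and training setup are not reproduced, so the target activation level and penalty weight are illustrative assumptions.

```python
# Sparsity-regularized encoder sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=256):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))        # hidden code in (0, 1)
        return self.dec(h), h

def sparsity_penalty(h, rho=0.05, eps=1e-6):
    """KL(rho || mean hidden activation), summed over hidden units."""
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

model = SparseAutoencoder()
x = torch.rand(32, 784)                        # stand-in for image patches
recon, h = model(x)
loss = F.mse_loss(recon, x) + 0.1 * sparsity_penalty(h)   # penalty weight 0.1 is illustrative
loss.backward()
```
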
3

Walker, Catherine Livesay. "Visual learning through Hypermedia". CSUSB ScholarWorks, 1996. https://scholarworks.lib.csusb.edu/etd-project/1148.

4

Owens, Andrew (Andrew Hale). "Learning visual models from paired audio-visual examples". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107352.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 93-104).
From the clink of a mug placed onto a saucer to the bustle of a busy café, our days are filled with visual experiences that are accompanied by distinctive sounds. In this thesis, we show that these sounds can provide a rich training signal for learning visual models. First, we propose the task of predicting the sound that an object makes when struck as a way of studying physical interactions within a visual scene. We demonstrate this idea by training an algorithm to produce plausible soundtracks for videos in which people hit and scratch objects with a drumstick. Then, with human studies and automated evaluations on recognition tasks, we verify that the sounds produced by the algorithm convey information about actions and material properties. Second, we show that ambient audio - e.g., crashing waves, people speaking in a crowd - can also be used to learn visual models. We train a convolutional neural network to predict a statistical summary of the sounds that occur within a scene, and we demonstrate that the visual representation learned by the model conveys information about objects and scenes.
by Andrew Owens.
Ph. D.
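
The following is an illustrative sketch, not the thesis model: a small convolutional network that regresses a fixed-length statistical summary of a clip's ambient audio (for example, mean mel-band energies) from a single frame, which is the flavour of self-supervised training signal described in the abstract. The architecture, the 42-dimensional summary, and the random tensors standing in for frames and audio statistics are assumptions.

```python
# Illustrative frame-to-audio-statistics regressor (not the thesis architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameToSoundStats(nn.Module):
    def __init__(self, n_stats=42):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_stats)

    def forward(self, frames):                 # frames: (B, 3, H, W)
        return self.head(self.features(frames).flatten(1))

model = FrameToSoundStats()
frames = torch.rand(8, 3, 128, 128)            # stand-in video frames
target_stats = torch.rand(8, 42)               # stand-in audio summary statistics
loss = F.mse_loss(model(frames), target_stats)
loss.backward()
```
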
5

Peyre, Julia. "Learning to detect visual relations". Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE016.

Abstract:
In this thesis, we study the problem of detection of visual relations of the form (subject, predicate, object) in images, which are intermediate level semantic units between objects and complex scenes. Our work addresses two main challenges in visual relation detection: (1) the difficulty of obtaining box-level annotations to train fully-supervised models, (2) the variability of appearance of visual relations. We first propose a weakly-supervised approach which, given pre-trained object detectors, enables us to learn relation detectors using image-level labels only, maintaining a performance close to fully-supervised models. Second, we propose a model that combines different granularities of embeddings (for subject, object, predicate and triplet) to better model appearance variation and introduce an analogical reasoning module to generalize to unseen triplets. Experimental results demonstrate the improvement of our hybrid model over a purely compositional model and validate the benefits of our transfer by analogy to retrieve unseen triplets.
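
As a rough sketch of the compositional scoring idea mentioned above (not the thesis implementation), a visual feature for a candidate (subject, object) box pair can be compared against the sum of learned embeddings for the subject, predicate, and object words; the vocabulary, embedding dimension, and mocked features below are illustrative.

```python
# Compositional triplet scoring sketch; vocabulary and features are mocked.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["person", "horse", "ride", "dog", "hold"]
emb = {w: rng.normal(size=64) for w in vocab}     # stand-in language embeddings

def triplet_score(visual_feat, subj, pred, obj):
    """Cosine similarity between a visual pair feature and the composed triplet embedding."""
    t = emb[subj] + emb[pred] + emb[obj]
    return float(visual_feat @ t / (np.linalg.norm(visual_feat) * np.linalg.norm(t) + 1e-8))

pair_feat = rng.normal(size=64)                   # stand-in feature for a (subject, object) box pair
print(triplet_score(pair_feat, "person", "ride", "horse"))
```
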
6

Wang, Zhaoqing. "Self-supervised Visual Representation Learning". Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29595.

Abstract:
In general, large-scale annotated data are essential to training deep neural networks in order to achieve better performance in visual feature learning for various computer vision applications. Unfortunately, the amount of annotations is challenging to obtain, requiring a high cost of money and human resources. The dependence on large-scale annotated data has become a crucial bottleneck in developing an advanced intelligence perception system. Self-supervised visual representation learning, a subset of unsupervised learning, has gained popularity because of its ability to avoid the high cost of annotated data. A series of methods designed various pretext tasks to explore the general representations from unlabeled data and use these general representations for different downstream tasks. Although previous methods achieved great success, the label noise problem exists in these pretext tasks due to the lack of human-annotation supervision, which causes harmful effects on the transfer performance. This thesis discusses two types of the noise problem in self-supervised learning and designs the corresponding methods to alleviate the negative effects and explore the transferable representations. Firstly, in pixel-level self-supervised learning, the pixel-level correspondences are easily noisy because of complicated context relationships (e.g., misleading pixels in the background). Secondly, two views of the same image share the foreground object and some background information. As optimizing the pretext task (e.g., contrastive learning), the model is easily to capture the foreground object and noisy background information, simultaneously. Such background information can be harmful to the transfer performance on downstream tasks, including image classification, object detection, and instance segmentation. To address the above mentioned issues, our core idea is to leverage the data regularities and prior knowledge. Experimental results demonstrate that the proposed methods effectively alleviate the negative effects of label noise in self-supervised learning and surpass a series of previous methods.
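
For context, the following is a minimal InfoNCE-style contrastive loss over two augmented views of the same images, the family of pretext tasks the abstract refers to; this generic sketch is not the thesis implementation, and the batch size, embedding dimension, and temperature are illustrative.

```python
# Generic InfoNCE contrastive loss over two views (illustrative, not the thesis code).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """z1, z2: (B, D) view embeddings; row i of z2 is the positive for row i of z1."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature         # (B, B) cosine-similarity matrix
    labels = torch.arange(z1.size(0))          # the i-th positive sits on the diagonal
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(16, 128), torch.randn(16, 128)   # stand-in view embeddings
print(info_nce(z1, z2).item())
```
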
7

Tang-Wright, Kimmy. "Visual topography and perceptual learning in the primate visual system". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:388b9658-dceb-443a-a19b-c960af162819.

Abstract:
The primate visual system is organised and wired in a topological manner. From the eye well into extrastriate visual cortex, a preserved spatial representation of the visual world is maintained across many levels of processing. Diffusion-weighted imaging (DWI), together with probabilistic tractography, is a non-invasive technique for mapping connectivity within the brain. In this thesis I probed the sensitivity and accuracy of DWI and probabilistic tractography by quantifying its capacity to detect topological connectivity in the post mortem macaque brain, between the lateral geniculate nucleus (LGN) and primary visual cortex (V1). The results were validated against electrophysiological and histological data from previous studies. Using the methodology developed in this thesis, it was possible to segment the LGN reliably into distinct subregions based on its structural connectivity to different parts of the visual field represented in V1. Quantitative differences in connectivity from magno- and parvocellular subcomponents of the LGN to different parts of V1 could be replicated with this method in post mortem brains. The topological corticocortical connectivity between extrastriate visual area V5/MT and V1 could also be mapped in the post mortem macaque. In vivo DWI scans previously obtained from the same brains have lower resolution and signal-to-noise because of the shorter scan times. Nevertheless, in many cases, these yielded topological maps similar to the post mortem maps. These results indicate that the preserved topology of connection between LGN to V1, and V5/MT to V1, can be revealed using non-invasive measures of diffusion-weighted imaging and tractography in vivo. In a preliminary investigation using Human Connectome data obtained in vivo, I was not able to segment the retinotopic map in LGN based on connections to V1. This may be because information about the topological connectivity is not carried in the much lower resolution human diffusion data, or because of other methodological limitations. I also investigated the mechanisms of perceptual learning by developing a novel task-irrelevant perceptual learning paradigm designed to adapt neuronal elements early on in visual processing in a certain region of the visual field. There is evidence, although not clear-cut, to suggest that the paradigm elicits task-irrelevant perceptual learning, but that these effects only emerge when practice-related effects are accounted for. When orientation and location specific effects on perceptual performance are examined, the largest improvement occurs at the trained location, however, there is also significant improvement at one other 'untrained' location, and there is also a significant improvement in performance for a control group that did not receive any training at any location. The work highlights inherent difficulties in investigating perceptual learning, which relate to the fact that learning likely takes place at both lower and higher levels of processing, however, the paradigm provides a good starting point for comprehensively investigating the complex mechanisms underlying perceptual learning.
8

Shi, Xiaojin. "Visual learning from small training datasets /". Diss., Digital Dissertations Database. Restricted to UC campuses, 2005. http://uclibs.org/PID/11984.

9

Liu, Jingen. "Learning Semantic Features for Visual Recognition". Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3358.

Abstract:
Visual recognition (e.g., object, scene and action recognition) is an active area of research in computer vision due to its increasing number of real-world applications such as video (image) indexing and search, intelligent surveillance, human-machine interaction, robot navigation, etc. Effective modeling of the objects, scenes and actions is critical for visual recognition. Recently, bag of visual words (BoVW) representation, in which the image patches or video cuboids are quantized into visual words (i.e., mid-level features) based on their appearance similarity using clustering, has been widely and successfully explored. The advantages of this representation are: no explicit detection of objects or object parts and their tracking are required; the representation is somewhat tolerant to within-class deformations, and it is efficient for matching. However, the performance of the BoVW is sensitive to the size of the visual vocabulary. Therefore, computationally expensive cross-validation is needed to find the appropriate quantization granularity. This limitation is partially due to the fact that the visual words are not semantically meaningful. This limits the effectiveness and compactness of the representation. To overcome these shortcomings, in this thesis we present principled approach to learn a semantic vocabulary (i.e. high-level features) from a large amount of visual words (mid-level features). In this context, the thesis makes two major contributions. First, we have developed an algorithm to discover a compact yet discriminative semantic vocabulary. This vocabulary is obtained by grouping the visual-words based on their distribution in videos (images) into visual-word clusters. The mutual information (MI) between the clusters and the videos (images) depicts the discriminative power of the semantic vocabulary, while the MI between visual-words and visual-word clusters measures the compactness of the vocabulary. We apply the information bottleneck (IB) algorithm to find the optimal number of visual-word clusters by finding the good tradeoff between compactness and discriminative power. We tested our proposed approach on the state-of-the-art KTH dataset, and obtained average accuracy of 94.2%. However, this approach performs one-side clustering, because only visual words are clustered regardless of which video they appear in. In order to leverage the co-occurrence of visual words and images, we have developed the co-clustering algorithm to simultaneously group the visual words and images. We tested our approach on the publicly available fifteen scene dataset and have obtained about 4% increase in the average accuracy compared to the one side clustering approaches. Second, instead of grouping the mid-level features, we first embed the features into a low-dimensional semantic space by manifold learning, and then perform the clustering. We apply Diffusion Maps (DM) to capture the local geometric structure of the mid-level feature space. The DM embedding is able to preserve the explicitly defined diffusion distance, which reflects the semantic similarity between any two features. Furthermore, the DM provides multi-scale analysis capability by adjusting the time steps in the Markov transition matrix. The experiments on KTH dataset show that DM can perform much better (about 3% to 6% improvement in average accuracy) than other manifold learning approaches and IB method. Above methods use only single type of features. In order to combine multiple heterogeneous features for visual recognition, we further propose the Fielder Embedding to capture the complicated semantic relationships between all entities (i.e., videos, images, heterogeneous features). The discovered relationships are then employed to further increase the recognition rate. We tested our approach on Weizmann dataset, and achieved about 17%–21% improvements in the average accuracy.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science PhD
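
Below is a rough numpy sketch of the diffusion-map embedding step described in the abstract above: build a Gaussian affinity over mid-level features, row-normalize it into a Markov matrix, and use its leading non-trivial eigenvectors (scaled by eigenvalue^t) as semantic coordinates prior to clustering. The kernel width, embedding dimension, diffusion time, and the random stand-in features are assumptions, not values from the dissertation.

```python
# Diffusion-map embedding sketch for grouping mid-level features (parameters are illustrative).
import numpy as np

def diffusion_map(X, sigma=1.0, n_dims=2, t=1):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                     # Gaussian affinities
    P = W / W.sum(axis=1, keepdims=True)                   # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    vals, vecs = vals.real, vecs.real
    order = np.argsort(-vals)
    vals, vecs = vals[order], vecs[:, order]
    # Skip the trivial constant eigenvector; scale by eigenvalue^t (diffusion time).
    return vecs[:, 1:n_dims + 1] * (vals[1:n_dims + 1] ** t)

X = np.random.default_rng(0).normal(size=(50, 16))         # stand-in visual-word features
print(diffusion_map(X).shape)                               # (50, 2)
```
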
10

Beale, Dan. "Autonomous visual learning for robotic systems". Thesis, University of Bath, 2012. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558886.

Abstract:
This thesis investigates the problem of visual learning using a robotic platform. Given a set of objects, the robot's task is to autonomously manipulate, observe, and learn. This allows the robot to recognise objects in a novel scene and pose, or separate them into distinct visual categories. The main focus of the work is in autonomously acquiring object models using robotic manipulation. Autonomous learning is important for robotic systems. In the context of vision, it allows a robot to adapt to new and uncertain environments, updating its internal model of the world. It also reduces the amount of human supervision needed for building visual models. This leads to machines which can operate in environments with rich and complicated visual information, such as the home or industrial workspace; also, in environments which are potentially hazardous for humans. The hypothesis claims that inducing robot motion on objects aids the learning process. It is shown that extra information from the robot sensors provides enough information to localise an object and distinguish it from the background. Also, that decisive planning allows the object to be separated and observed from a variety of different poses, giving a good foundation to build a robust classification model. Contributions include a new segmentation algorithm, a new classification model for object learning, and a method for allowing a robot to supervise its own learning in cluttered and dynamic environments.

Books on the topic "Visual learning"

1

Katsushi, Ikeuchi, and Veloso Manuela M, eds. Symbolic visual learning. New York: Oxford University Press, 1997.

2

K, Nayar Shree, and Poggio Tomaso, eds. Early visual learning. New York: Oxford University Press, 1996.

3

M, Moore David, and Dwyer Francis M, eds. Visual literacy: A spectrum of visual learning. Englewood Cliffs, N.J: Educational Technology Publications, 1994.

4

N, Erin Jane, ed. Visual handicaps and learning. 3rd ed. Austin, Tex: PRO-ED, 1992.

5

Liberty, Jesse. Learning Visual Basic .NET. Sebastopol, CA: O'Reilly, 2002.

6

Rourke, Adrianne. Improving visual teaching materials. Hauppauge, N.Y: Nova Science Publishers, 2009.

7

Baratta, Alex. Visual writing. Newcastle upon Tyne: Cambridge Scholars, 2010.

8

Manfred, Fahle, and Poggio Tomaso, eds. Perceptual learning. Cambridge, Mass: MIT Press, 2002.

9

Vakanski, Aleksandar, and Farrokh Janabi-Sharifi. Robot Learning by Visual Observation. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2017. http://dx.doi.org/10.1002/9781119091882.

10

Beatty, Grace Joely. PowerPoint: The visual learning guide. Rocklin, CA: Prima Pub., 1994.


Book chapters on the topic "Visual learning"

1

Burge, M., and W. Burger. "Learning visual ideals". In Image Analysis and Processing, 316–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63508-4_138.

2

Burge, M., and W. Burger. "Learning visual ideals". In Lecture Notes in Computer Science, 464–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0025067.

3

Panciroli, Chiara, Laura Corazza, and Anita Macauda. "Visual-Graphic Learning". In Proceedings of the 2nd International and Interdisciplinary Conference on Image and Imagination, 49–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41018-6_6.

4

Lu, Zhong-Lin, and Barbara Anne Dosher. "Visual Perceptual Learning". In Encyclopedia of the Sciences of Learning, 3415–18. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_258.

5

Lovegrove, William. "The Visual Deficit Hypothesis". In Learning Disabilities, 246–69. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4613-9133-3_8.

6

Golon, Alexandra Shires. "Learning Styles Differentiation". In VISUAL-SPATIAL learners, 1–18. 2nd ed. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003239482-1.

7

Golon, Alexandra Shires. "Learning Styles Differentiation". In VISUAL-SPATIAL learners, 1–18. 2nd ed. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003239482-1.

8

Wu, Qi, Peng Wang, Xin Wang, Xiaodong He, and Wenwu Zhu. "Video Representation Learning". In Visual Question Answering, 111–17. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0964-1_7.

9

Wu, Qi, Peng Wang, Xin Wang, Xiaodong He, and Wenwu Zhu. "Deep Learning Basics". In Visual Question Answering, 15–26. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0964-1_2.

10

Grobstein, Paul, and Kao Liang Chow. "Visual System Development, Plasticity". In Learning and Memory, 56–58. Boston, MA: Birkhäuser Boston, 1989. http://dx.doi.org/10.1007/978-1-4899-6778-7_22.


Conference proceedings on the topic "Visual learning"

1

Buijs, Jean M., and Michael S. Lew. "Learning visual concepts". In the seventh ACM international conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/319878.319880.

2

Zhao, Qi, and Christof Koch. "Learning visual saliency". In 2011 45th Annual Conference on Information Sciences and Systems (CISS). IEEE, 2011. http://dx.doi.org/10.1109/ciss.2011.5766178.

3

BERARDI, NICOLETTA, and ADRIANA FIORENTINI. "VISUAL PERCEPTUAL LEARNING". In Proceedings of the International School of Biophysics. WORLD SCIENTIFIC, 2001. http://dx.doi.org/10.1142/9789812799975_0034.

4

Ji, Daomin, Hui Luo, and Zhifeng Bao. "Visualization Recommendation Through Visual Relation Learning and Visual Preference Learning". In 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023. http://dx.doi.org/10.1109/icde55515.2023.00145.

5

Guangming Chang, Chunfen Yuan, and Weiming Hu. "Interclass visual similarity based visual vocabulary learning". In 2011 First Asian Conference on Pattern Recognition (ACPR 2011). IEEE, 2011. http://dx.doi.org/10.1109/acpr.2011.6166597.

6

Mahouachi, Dorra, and Moulay A. Akhloufi. "Deep learning visual programming". In Disruptive Technologies in Information Sciences III, edited by Misty Blowers, Russell D. Hall, and Venkateswara R. Dasari. SPIE, 2019. http://dx.doi.org/10.1117/12.2519882.

7

Cruz, Rodrigo Santa, Basura Fernando, Anoop Cherian, and Stephen Gould. "DeepPermNet: Visual Permutation Learning". In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.640.

8

Cai, Haipeng, Shiv Raj Pant, and Wen Li. "Towards learning visual semantics". In ESEC/FSE '20: 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3368089.3417040.

9

Teow, Matthew Y. W. "Convolutional Visual Feature Learning". In the 2018 International Conference. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3232651.3232672.

10

Yeh, Tom, and Trevor Darrell. "Dynamic visual category learning". In 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2008. http://dx.doi.org/10.1109/cvpr.2008.4587616.


Reports of organizations on the topic "Visual learning"

1

Bhanu, Bir. Learning Integrated Visual Database for Image Exploitation. Fort Belvoir, VA: Defense Technical Information Center, November 2002. http://dx.doi.org/10.21236/ada413389.

2

Edelman, Shimon, Heinrich H. Buelthoff, and Erik Sklar. Task and Object Learning in Visual Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada259961.

3

Jiang, Yuhong V. Implicit Learning of Complex Visual Contexts Under Non-Optimal Conditions. Fort Belvoir, VA: Defense Technical Information Center, July 2007. http://dx.doi.org/10.21236/ada482119.

4

Petrie, Christopher, and Katija Aladin. Spotlight: Visual Arts. HundrED, December 2020. http://dx.doi.org/10.58261/azgu5536.

Abstract:
HundrED and Supercell believe that fostering Visual Art skills can be just as important as numeracy and literacy. Furthermore, we also believe that Visual Arts can be integrated into all learning in schools and developed in a diversity of ways. To this end, the purpose of this project is to shine a spotlight, and make globally visible, leading education innovations from around the world doing exceptional work on developing the skill of Visual Arts for all students, teachers, and leaders in schools today.
5

Poggio, Tomaso, and Stephen Smale. Hierarchical Kernel Machines: The Mathematics of Learning Inspired by Visual Cortex. Fort Belvoir, VA: Defense Technical Information Center, February 2013. http://dx.doi.org/10.21236/ada580529.

6

Harmon, Dr Jennifer. Exploring the Efficacy of Active and Authentic Learning in the Visual Merchandising Classroom. Ames: Iowa State University, Digital Repository, November 2016. http://dx.doi.org/10.31274/itaa_proceedings-180814-1524.

7

Mills, Kathy, Elizabeth Heck, Alinta Brown, Patricia Funnell, and Lesley Friend. Senses together: Multimodal literacy learning in primary education: Final project report. Institute for Learning Sciences and Teacher Education, Australian Catholic University, 2023. http://dx.doi.org/10.24268/acu.8zy8y.

Abstract:
[Executive summary] Literacy studies have traditionally focussed on the seen. The other senses are typically under-recognised in literacy studies and research, where the visual sense has been previously prioritised. However, spoken and written language, images, gestures, touch, movement, and sound are part of everyday literacy practices. Communication is no longer focussed on visual texts but is a multisensory experience. Effective communication depends then on sensory orchestration, which unifies the body and its senses. Understanding sensory orchestration is crucial to literacy learning in the 21st century where the combination of multisensory practices is both digital and multimodal. Unfortunately, while multimodal literacy has become an increasing focus in school curriculum, research has still largely remained focussed on the visual. The Sensory Orchestration for Multimodal Literacy Learning in Primary Education project, led by ARC Future Fellow Professor Kathy Mills, sought to address this research deficit. In addressing this gap, the project built an evidence base for understanding how students become critical users of sensory techniques to communicate through digital, virtual, and augmented-reality texts. The project has contributed to the development of new multimodal literacy programs and a next-generation approach to multimodality through the utilisation of innovative sensorial education programs in various educational environments including primary schools, digital labs, and art museums.
8

Yu, Wanchi. Implicit Learning of Children with and without Developmental Language Disorder across Auditory and Visual Categories. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.7460.

9

Nahorniak, Maya. Occupation of profession: Methodology of laboratory classes from practically-oriented courses under distance learning (on an example of discipline «Radioproduction»). Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11412.

Abstract:
The article deals with the peculiarities of the use of verbal, visual and practical methods in the distance learning of professional practically-oriented discipline «Radioproduction», are offered new techniques for the use of these methods during the presentation of theoretical material and the creation of a media product (audiovisual content), due to the acquisition of a specialty in conditions online. It is proved that in distance learning, this discipline is inadmissible to absolutize the significance of verbal methods (narrative, explanation, conversation, discussion, lecture) and that all varieties of verbal methods require the intensification of an interactive factor. Based on its own experience, it has been demonstrated, as with the help of various educational platforms, the most appropriate use of visual learning methods. Particular attention is paid to the fact that practical teaching methods based on professional activities of students acquire priority in their professional training. It has been established that only when parity application of new receptions of verbal, visual and practical methods of online learning may have a proper pedagogical effect and will ensure the qualitative acquisition of the specialty. Training methods – verbal, visual, practical – are intended to provide all levels of assimilation of knowledge and skills to promote the full master of the radiojournalist specialist.
10

Shepiliev, Dmytro S., Yevhenii O. Modlo, Yuliia V. Yechkalo, Viktoriia V. Tkachuk, Mykhailo M. Mintii, Iryna S. Mintii, Oksana M. Markova et al. WebAR development tools: An overview. CEUR Workshop Proceedings, March 2021. http://dx.doi.org/10.31812/123456789/4356.

Abstract:
Web augmented reality (WebAR) development tools aimed at improving the visual aspects of learning are far from being visual and available themselves. This causes problems in selecting and testing WebAR development tools for CS undergraduates mastering web-design basics. The research is aimed at conducting a comparative analysis of WebAR tools to select those appropriate for beginners.
