Contents
A selection of scholarly literature on the topic "Visual learning"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Visual learning."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are present in the metadata.
Journal articles on the topic "Visual learning"
Sze, Daniel Y. "Visual Learning." Journal of Vascular and Interventional Radiology 32, no. 3 (March 2021): 331. http://dx.doi.org/10.1016/j.jvir.2021.01.265.
Liu, Yan, Yang Liu, Shenghua Zhong, and Songtao Wu. "Implicit Visual Learning." ACM Transactions on Intelligent Systems and Technology 8, no. 2 (January 18, 2017): 1–24. http://dx.doi.org/10.1145/2974024.
Cruz, Rodrigo Santa, Basura Fernando, Anoop Cherian, and Stephen Gould. "Visual Permutation Learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 12 (December 1, 2019): 3100–3114. http://dx.doi.org/10.1109/tpami.2018.2873701.
Jones, Rachel. "Visual Learning Visualized." Nature Reviews Neuroscience 4, no. 1 (January 2003): 10. http://dx.doi.org/10.1038/nrn1014.
Lu, Zhong-Lin, Tianmiao Hua, Chang-Bing Huang, Yifeng Zhou, and Barbara Anne Dosher. "Visual Perceptual Learning." Neurobiology of Learning and Memory 95, no. 2 (February 2011): 145–51. http://dx.doi.org/10.1016/j.nlm.2010.09.010.
Richler, Jennifer J., and Thomas J. Palmeri. "Visual Category Learning." Wiley Interdisciplinary Reviews: Cognitive Science 5, no. 1 (November 26, 2013): 75–94. http://dx.doi.org/10.1002/wcs.1268.
Nida, Diini Fitrahtun, Muhyiatul Fadilah, Ardi Ardi, and Suci Fajrina. "Characteristics of Visual Literacy-Based Biology Learning Module Validity on Photosynthesis Learning Materials." JURNAL PAJAR (Pendidikan dan Pengajaran) 7, no. 4 (July 29, 2023): 785. http://dx.doi.org/10.33578/pjr.v7i4.9575.
Guinibert, Matthew. "Learn from Your Environment: A Visual Literacy Learning Model." Australasian Journal of Educational Technology 36, no. 4 (September 28, 2020): 173–88. http://dx.doi.org/10.14742/ajet.5200.
Taga, Tadashi, Kazuhito Yoshizaki, and Kimiko Kato. "Visual Field Difference in Visual Statistical Learning." Proceedings of the Annual Convention of the Japanese Psychological Association 79 (September 22, 2015): 2EV-074. http://dx.doi.org/10.4992/pacjpa.79.0_2ev-074.
Holland, Keith. "Visual Skills for Learning." Set: Research Information for Teachers, no. 2 (August 1, 1996): 1–4. http://dx.doi.org/10.18296/set.0900.
Dissertations and theses on the topic "Visual learning"
Zhu, Fan. "Visual Feature Learning." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/8218/.
Goh, Hanlin. "Learning Deep Visual Representations." Paris 6, 2013. http://www.theses.fr/2013PA066356.
Recent advances in deep learning and visual information processing have presented an opportunity to unite the two fields in tackling the problem of classifying images into semantic categories: deep learning brings learning and representational capabilities to a visual processing model adapted for image classification. This thesis proposes methods for learning deep visual representations for image classification, tackling deep learning on two fronts. The first is the unsupervised learning of latent representations from input data, focusing on the integration of prior knowledge into the training of restricted Boltzmann machines (RBMs) through regularization: regularizers are proposed to induce sparsity, selectivity, and topographic organization in the coding, improving discrimination and invariance. The second introduces a gradual transition from unsupervised layer-wise learning to supervised deep learning by integrating bottom-up information with top-down signals; two novel implementations of this notion are explored, one using top-down regularization to train a deep network of RBMs, the other combining predictive and reconstructive loss functions to optimize a stack of encoder-decoder networks. The proposed techniques are applied to image classification using the bag-of-words model, adopted for its strengths in image modeling through local image descriptors and spatial pooling schemes. Deep learning with spatial aggregation is used to learn a hierarchical visual dictionary for encoding image descriptors into mid-level representations, achieving leading image classification performance for object and scene images, with diverse, non-redundant dictionaries and fast inference. The subsequent pooling step is further optimized by introducing a differentiable pooling parameterization and applying the error backpropagation algorithm. This thesis represents one of the first attempts to synthesize deep learning and the bag-of-words model, a union that raises many challenging research problems and leaves much room for further study.
Walker, Catherine Livesay. "Visual Learning through Hypermedia." CSUSB ScholarWorks, 1996. https://scholarworks.lib.csusb.edu/etd-project/1148.
Owens, Andrew (Andrew Hale). "Learning Visual Models from Paired Audio-Visual Examples." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107352.
From the clink of a mug placed onto a saucer to the bustle of a busy café, our days are filled with visual experiences that are accompanied by distinctive sounds. In this thesis, we show that these sounds can provide a rich training signal for learning visual models. First, we propose the task of predicting the sound an object makes when struck as a way of studying physical interactions within a visual scene. We demonstrate this idea by training an algorithm to produce plausible soundtracks for videos in which people hit and scratch objects with a drumstick. Then, with human studies and automated evaluations on recognition tasks, we verify that the sounds produced by the algorithm convey information about actions and material properties. Second, we show that ambient audio (e.g., crashing waves, people speaking in a crowd) can also be used to learn visual models. We train a convolutional neural network to predict a statistical summary of the sounds that occur within a scene, and we demonstrate that the visual representation learned by the model conveys information about objects and scenes.
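As an illustration of the ambient-sound idea, the following is a minimal PyTorch sketch, not the thesis implementation: a small CNN is trained to predict a fixed-length statistical summary of a scene's sounds from a single frame. The architecture, the 42-dimensional summary, and the plain MSE regression objective are hypothetical choices made for brevity.

```python
# Hypothetical sketch of sound-supervised visual learning (assumptions only).
import torch
import torch.nn as nn

class SoundStatPredictor(nn.Module):
    def __init__(self, n_stats=42):  # n_stats: size of the audio summary (assumed)
        super().__init__()
        self.features = nn.Sequential(      # the visual representation being learned
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, n_stats)  # regresses the audio summary

    def forward(self, frames):               # frames: (batch, 3, H, W)
        return self.head(self.features(frames))

model = SoundStatPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One toy training step on random tensors standing in for (frame, audio-stat) pairs.
frames = torch.randn(8, 3, 128, 128)
audio_stats = torch.randn(8, 42)
loss = loss_fn(model(frames), audio_stats)
opt.zero_grad()
loss.backward()
opt.step()
```

The design point is that the target vector can be computed automatically from the video's own soundtrack, so the convolutional features are learned without any manual labels.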
Peyre, Julia. "Learning to Detect Visual Relations." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE016.
In this thesis, we study the problem of detecting visual relations of the form (subject, predicate, object) in images, which are intermediate-level semantic units between objects and complex scenes. Our work addresses two main challenges in visual relation detection: (1) the difficulty of obtaining box-level annotations to train fully-supervised models, and (2) the variability in the appearance of visual relations. We first propose a weakly-supervised approach which, given pre-trained object detectors, learns relation detectors from image-level labels only, maintaining performance close to that of fully-supervised models. Second, we propose a model that combines embeddings at different granularities (for subject, object, predicate, and triplet) to better model appearance variation, and we introduce an analogical reasoning module to generalize to unseen triplets. Experimental results demonstrate the improvement of our hybrid model over a purely compositional model and validate the benefits of our transfer-by-analogy approach to retrieving unseen triplets.
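At a high level, scoring a candidate relation in such a model can be sketched as a joint visual-language embedding. The fragment below (PyTorch) is an assumed, simplified architecture for illustration only: the feature dimensions, the use of detector features for the subject, object, and union boxes, and the cosine-similarity score are hypothetical rather than the thesis's exact formulation.

```python
# Hypothetical sketch of triplet scoring for visual relation detection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationScorer(nn.Module):
    def __init__(self, appearance_dim=2048, word_dim=300, joint_dim=256):
        super().__init__()
        # Visual branch: subject box, object box, and their union box features.
        self.visual_proj = nn.Linear(3 * appearance_dim, joint_dim)
        # Language branch: word embeddings of the (subject, predicate, object) triplet.
        self.language_proj = nn.Linear(3 * word_dim, joint_dim)

    def forward(self, subj_feat, obj_feat, union_feat, triplet_words):
        v = self.visual_proj(torch.cat([subj_feat, obj_feat, union_feat], dim=-1))
        t = self.language_proj(triplet_words.flatten(start_dim=-2))
        # Cosine similarity in the joint space = image/triplet compatibility.
        return F.cosine_similarity(v, t, dim=-1)

scorer = RelationScorer()
subj = torch.randn(4, 2048)     # pre-extracted detector features (assumed)
obj = torch.randn(4, 2048)
union = torch.randn(4, 2048)
words = torch.randn(4, 3, 300)  # word vectors for ("person", "rides", "horse"), etc.
scores = scorer(subj, obj, union, words)  # higher = more compatible
```

Because the triplet is represented compositionally from word embeddings, a compatibility score can be computed even for triplets never seen during training, which is the property that makes retrieval of unseen triplets possible.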
Wang, Zhaoqing. "Self-Supervised Visual Representation Learning." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29595.
Tang-Wright, Kimmy. "Visual Topography and Perceptual Learning in the Primate Visual System." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:388b9658-dceb-443a-a19b-c960af162819.
Shi, Xiaojin. "Visual Learning from Small Training Datasets." Diss., Digital Dissertations Database. Restricted to UC campuses, 2005. http://uclibs.org/PID/11984.
Liu, Jingen. "Learning Semantic Features for Visual Recognition." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3358.
Beale, Dan. "Autonomous Visual Learning for Robotic Systems." Thesis, University of Bath, 2012. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558886.
Books on the topic "Visual learning"
Ikeuchi, Katsushi, and Manuela M. Veloso, eds. Symbolic Visual Learning. New York: Oxford University Press, 1997.
Nayar, Shree K., and Tomaso Poggio, eds. Early Visual Learning. New York: Oxford University Press, 1996.
Moore, David M., and Francis M. Dwyer, eds. Visual Literacy: A Spectrum of Visual Learning. Englewood Cliffs, NJ: Educational Technology Publications, 1994.
Erin, Jane N., ed. Visual Handicaps and Learning. 3rd ed. Austin, TX: PRO-ED, 1992.
Liberty, Jesse. Learning Visual Basic .NET. Sebastopol, CA: O'Reilly, 2002.
Rourke, Adrianne. Improving Visual Teaching Materials. Hauppauge, NY: Nova Science Publishers, 2009.
Baratta, Alex. Visual Writing. Newcastle upon Tyne: Cambridge Scholars, 2010.
Fahle, Manfred, and Tomaso Poggio, eds. Perceptual Learning. Cambridge, MA: MIT Press, 2002.
Vakanski, Aleksandar, and Farrokh Janabi-Sharifi. Robot Learning by Visual Observation. Hoboken, NJ: John Wiley & Sons, 2017. http://dx.doi.org/10.1002/9781119091882.
Beatty, Grace Joely. PowerPoint: The Visual Learning Guide. Rocklin, CA: Prima Pub., 1994.
Book chapters on the topic "Visual learning"
Burge, M., and W. Burger. "Learning Visual Ideals." In Image Analysis and Processing, 316–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63508-4_138.
Burge, M., and W. Burger. "Learning Visual Ideals." In Lecture Notes in Computer Science, 464–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0025067.
Panciroli, Chiara, Laura Corazza, and Anita Macauda. "Visual-Graphic Learning." In Proceedings of the 2nd International and Interdisciplinary Conference on Image and Imagination, 49–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41018-6_6.
Lu, Zhong-Lin, and Barbara Anne Dosher. "Visual Perceptual Learning." In Encyclopedia of the Sciences of Learning, 3415–18. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_258.
Lovegrove, William. "The Visual Deficit Hypothesis." In Learning Disabilities, 246–69. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4613-9133-3_8.
Golon, Alexandra Shires. "Learning Styles Differentiation." In Visual-Spatial Learners, 1–18. 2nd ed. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003239482-1.
Wu, Qi, Peng Wang, Xin Wang, Xiaodong He, and Wenwu Zhu. "Video Representation Learning." In Visual Question Answering, 111–17. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0964-1_7.
Wu, Qi, Peng Wang, Xin Wang, Xiaodong He, and Wenwu Zhu. "Deep Learning Basics." In Visual Question Answering, 15–26. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0964-1_2.
Grobstein, Paul, and Kao Liang Chow. "Visual System Development, Plasticity." In Learning and Memory, 56–58. Boston, MA: Birkhäuser Boston, 1989. http://dx.doi.org/10.1007/978-1-4899-6778-7_22.
Conference papers on the topic "Visual learning"
Buijs, Jean M., and Michael S. Lew. "Learning Visual Concepts." In Proceedings of the Seventh ACM International Conference. New York: ACM Press, 1999. http://dx.doi.org/10.1145/319878.319880.
Zhao, Qi, and Christof Koch. "Learning Visual Saliency." In 2011 45th Annual Conference on Information Sciences and Systems (CISS). IEEE, 2011. http://dx.doi.org/10.1109/ciss.2011.5766178.
Berardi, Nicoletta, and Adriana Fiorentini. "Visual Perceptual Learning." In Proceedings of the International School of Biophysics. World Scientific, 2001. http://dx.doi.org/10.1142/9789812799975_0034.
Ji, Daomin, Hui Luo, and Zhifeng Bao. "Visualization Recommendation through Visual Relation Learning and Visual Preference Learning." In 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023. http://dx.doi.org/10.1109/icde55515.2023.00145.
Chang, Guangming, Chunfen Yuan, and Weiming Hu. "Interclass Visual Similarity Based Visual Vocabulary Learning." In 2011 First Asian Conference on Pattern Recognition (ACPR 2011). IEEE, 2011. http://dx.doi.org/10.1109/acpr.2011.6166597.
Mahouachi, Dorra, and Moulay A. Akhloufi. "Deep Learning Visual Programming." In Disruptive Technologies in Information Sciences III, edited by Misty Blowers, Russell D. Hall, and Venkateswara R. Dasari. SPIE, 2019. http://dx.doi.org/10.1117/12.2519882.
Cruz, Rodrigo Santa, Basura Fernando, Anoop Cherian, and Stephen Gould. "DeepPermNet: Visual Permutation Learning." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.640.
Cai, Haipeng, Shiv Raj Pant, and Wen Li. "Towards Learning Visual Semantics." In ESEC/FSE '20: 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. New York: ACM, 2020. http://dx.doi.org/10.1145/3368089.3417040.
Teow, Matthew Y. W. "Convolutional Visual Feature Learning." In Proceedings of the 2018 International Conference. New York: ACM Press, 2018. http://dx.doi.org/10.1145/3232651.3232672.
Yeh, Tom, and Trevor Darrell. "Dynamic Visual Category Learning." In 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2008. http://dx.doi.org/10.1109/cvpr.2008.4587616.
Organization reports on the topic "Visual learning"
Bhanu, Bir. Learning Integrated Visual Database for Image Exploitation. Fort Belvoir, VA: Defense Technical Information Center, November 2002. http://dx.doi.org/10.21236/ada413389.
Edelman, Shimon, Heinrich H. Buelthoff, and Erik Sklar. Task and Object Learning in Visual Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada259961.
Jiang, Yuhong V. Implicit Learning of Complex Visual Contexts under Non-Optimal Conditions. Fort Belvoir, VA: Defense Technical Information Center, July 2007. http://dx.doi.org/10.21236/ada482119.
Petrie, Christopher, and Katija Aladin. Spotlight: Visual Arts. HundrED, December 2020. http://dx.doi.org/10.58261/azgu5536.
Poggio, Tomaso, and Stephen Smale. Hierarchical Kernel Machines: The Mathematics of Learning Inspired by Visual Cortex. Fort Belvoir, VA: Defense Technical Information Center, February 2013. http://dx.doi.org/10.21236/ada580529.
Harmon, Jennifer. Exploring the Efficacy of Active and Authentic Learning in the Visual Merchandising Classroom. Ames: Iowa State University, Digital Repository, November 2016. http://dx.doi.org/10.31274/itaa_proceedings-180814-1524.
Mills, Kathy, Elizabeth Heck, Alinta Brown, Patricia Funnell, and Lesley Friend. Senses Together: Multimodal Literacy Learning in Primary Education: Final Project Report. Institute for Learning Sciences and Teacher Education, Australian Catholic University, 2023. http://dx.doi.org/10.24268/acu.8zy8y.
Yu, Wanchi. Implicit Learning of Children with and without Developmental Language Disorder across Auditory and Visual Categories. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.7460.
Nahorniak, Maya. Occupation of Profession: Methodology of Laboratory Classes from Practically-Oriented Courses under Distance Learning (On the Example of the Discipline "Radioproduction"). Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11412.
Shepiliev, Dmytro S., Yevhenii O. Modlo, Yuliia V. Yechkalo, Viktoriia V. Tkachuk, Mykhailo M. Mintii, Iryna S. Mintii, Oksana M. Markova, et al. WebAR Development Tools: An Overview. CEUR Workshop Proceedings, March 2021. http://dx.doi.org/10.31812/123456789/4356.