Scientific literature on the topic "Visual capabilities"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Choose a source:

Consult thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Visual capabilities."

Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Visual capabilities"

1

Ateniese, Giuseppe, Carlo Blundo, Alfredo De Santis, and Douglas R. Stinson. "Extended capabilities for visual cryptography." Theoretical Computer Science 250, no. 1-2 (January 2001): 143–61. http://dx.doi.org/10.1016/s0304-3975(99)00127-9.

2

Wu, Xiaotian, and Wei Sun. "Extended Capabilities for XOR-Based Visual Cryptography." IEEE Transactions on Information Forensics and Security 9, no. 10 (October 2014): 1592–605. http://dx.doi.org/10.1109/tifs.2014.2346014.

3

Bowskill, Jerry, and John Downie. "Extending the capabilities of the human visual system." ACM SIGGRAPH Computer Graphics 29, no. 2 (May 1995): 61–65. http://dx.doi.org/10.1145/204362.204378.

4

Stephen, L., and K. Andrej. "Superior visual detection capabilities in congenitally deaf cats." Journal of Vision 7, no. 9 (March 19, 2010): 308. http://dx.doi.org/10.1167/7.9.308.

5

Les, Zbigniew, and Magdalena Les. "SHAPE UNDERSTANDING SYSTEM: THE VISUAL REASONING PROCESS." International Journal of Pattern Recognition and Artificial Intelligence 17, no. 04 (June 2003): 663–83. http://dx.doi.org/10.1142/s0218001403002551.

Abstract:
In this paper, the visual reasoning that forms part of the visual thinking capabilities of the shape understanding system (SUS) is investigated. This research continues the authors' previous work on the understanding capabilities of intelligent systems based on the shape understanding system. SUS is an example of a visual understanding system in which sensory information is transformed into a multilevel representation during the concept formation process, which is itself part of the visual thinking capabilities. The visual reasoning involves transforming the description of the object as it passes through consecutive stages of the reasoning process; the reasoning and the processing of the data are mutually dependent.
6

Yeritsyan, Naira, Konrad Lehmann, Oliver Puk, Jochen Graw, and Siegrid Löwel. "Visual capabilities and cortical maps in BALB/c mice." European Journal of Neuroscience 36, no. 6 (June 28, 2012): 2801–11. http://dx.doi.org/10.1111/j.1460-9568.2012.08195.x.

7

Yang, Mingyu, Xiaoning Gui, Run Wang, Shiju Jiang, Jing Zhou, Jian Chen, Meiling Wang, et al. "Clinical Evaluation of the Pre-Analytical Capabilities of Hemostasis Instrument." Clinical and Applied Thrombosis/Hemostasis 28 (January 2022): 107602962211184. http://dx.doi.org/10.1177/10760296221118483.

Abstract:
Objective: Evaluate the technical performance of the pre-analytical hemolysis-icterus-lipemia (HIL) check module on the ACL-TOP-750. Methods: 8433 routine coagulation samples were evaluated for HIL, the presence of clotting and low sample volume by both visual inspection and the pre-analytical HIL check module on the ACL-TOP-750. Results: 7726 samples were in agreement with both methods and 707 were not consistent. 356 samples with low volume were identified by visual inspection and 920 by the instrument (2.7 mL threshold). Visual inspection identified 56 lipemic samples while 13 of those with moderate or high lipemia were identified by the instrument. Visual inspection identified 47 hemolyzed samples while 7 with moderate or high hemolysis were identified by the instrument. Both visual inspection and the instrument identified 36 icteric samples. For triglyceride concentration and bilirubin concentration, there was good correlation between the ACL-TOP-750 and the DXC800 biochemistry analyzer. Among 30 samples with varying amounts of clotting, 27 were discovered by visual inspection and 3 were discovered by the instrument. Conclusion: The pre-analytical check module on the ACL-TOP-750 improved the detection rate of samples below the target 2.7 mL volume, and the accuracy in detection of HIL. However, the automated method could not replace visual assessment of clotting in samples.
8

Siddins, Eileen Maree, Ryan Daniel, and Robert Johnstone. "Building Visual Artists' Resilience Capabilities: Current Educator Strategies and Methods." Journal of Arts and Humanities 5, no. 7 (July 21, 2016): 24. http://dx.doi.org/10.18533/journal.v5i7.968.

Abstract:
Enrolments in higher education programs in the creative and performing arts are increasing in many countries. Yet graduates of these degrees, who enter the broad sector known as the creative industries, face particular challenges in terms of securing long-term and sustainable employment. In addition, creative and performing artists face a range of mental challenges, caused by such factors as: the solitary nature of much creative practice, critical feedback by audiences and gatekeepers, or the general pressures associated with maintaining artistic relevance or integrity. The concepts of resilience and professional wellbeing are therefore highly relevant to those who pursue a career in creative industries, and while there has been an emerging body of work in this area, to date it has focussed on the performing arts area (e.g. music, theatre). Hence, in order to expand knowledge relevant to resilience and artists, this paper sets out to explore the extent to which current educators in the Australian context specifically address these issues within higher visual arts curricula; specifically the areas of illustration, design, film and photography. This was achieved via interviews with seventeen current academics working in these areas. The findings propose that higher education providers of programs in the visual arts consider placing a stronger emphasis on the embedded development of resilience and professional wellbeing capacities.
9

Levine, Michael W., and J. Jason McAnany. "The relative capabilities of the upper and lower visual hemifields." Vision Research 45, no. 21 (October 2005): 2820–30. http://dx.doi.org/10.1016/j.visres.2005.04.001.

10

Sergievskaya, Irina (Сергиевская). "Multimedia Capabilities for Teaching Listening Foreign-Language Text." Modern Communication Studies 6, no. 3 (May 15, 2017): 45–48. http://dx.doi.org/10.12737/19155.

Abstract:
Semantic processing of information received while listening is easier if it is accompanied by the creation of a visual image in iconic and symbolic form in a multimedia context. The sounding information is consistently supported by an unfolding visual image that engages the listener with its dynamics and is easy to understand. Developing a symbolic-character code in the multimedia context is required to build the automatic skill of binding audible speech to the image. The technology of image creation helps the listener adequately understand the meaning of the individual stories produced on the screen, put them together into a single image, complete the image, and then use it as a structural model of the statement.

Theses on the topic "Visual capabilities"

1

Srisamang, Richard, Richard Todd, Sudarshan Bhat, and Terry Moore. "UAV INTEGRATED VISUAL CONTROL AND SIMULATION SYSTEM ARCHITECTURE AND CAPABILITIES IN ACTION." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/606815.

Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
Unmanned Aerial Vehicles (UAV) are becoming a significant asset to the military. This has given rise to the development of the Vehicle Control and Simulation System (VCSS), a low-cost ground support and control system deployable to any UAV testing site, with the capability to support ground crew and pilot training, real-time telemetry simulation, distribution, transmission and reception, mission planning, and Global Positioning System (GPS) reception. This paper describes the development of the VCSS detailing its capabilities, demonstrating its use in the field, and showing its novel use of internet technology for vehicle control telemetry distribution.
2

Murabito, Francesca. "Deeply Incorporating Human Capabilities into Machine Learning Models for Fine-Grained Visual Categorization." Doctoral thesis, Università di Catania, 2019. http://hdl.handle.net/10761/4144.

Abstract:
Artificial intelligence and machine learning have long attempted to emulate the human visual system. With the recent advances in deep neural networks, which take inspiration from the architecture of the primate visual hierarchy, human-level visual abilities are now coming within reach of artificial systems. However, existing computational models are designed with engineering goals, only loosely emulating the computations and connections of biological neurons, especially in terms of intermediate visual representations. In this thesis we investigate how human skills can be integrated into computational models in order to perform fine-grained image categorization, a task which requires specific perceptive and cognitive abilities to solve. In particular, our goal is to develop systems which, either implicitly or explicitly, combine human reasoning processes with deep classification models. Our claim is that by emulating the process humans carry out while performing a recognition task, it is possible to yield improved classification performance. To this end, we first attempt to replicate human visual attention by modeling a saliency detection system able to emulate the integration of top-down (task-controlled, classification-driven) and bottom-up (sensory information) processes; the generated saliency maps thus implicitly represent the way humans perceive and focus their attention while performing recognition, and therefore provide useful supervision for the automatic classification system. We then investigate whether, and to what extent, the learned saliency maps can support visual classification in nontrivial cases. To achieve this, we propose SalClassNet, a CNN framework consisting of two jointly trained networks: a) the first computing top-down saliency maps from input images, and b) the second exploiting the computed saliency maps for visual classification.
Gaze shifting in relation to a task is not the only process at work when performing classification in specific domains; humans also leverage a priori specialized knowledge to perform recognition. For example, distinguishing between different dog breeds or fruit varieties requires skills that not all humans possess, only domain experts. One may argue that the typical learning-by-example approach can be applied by asking domain experts to collect enough annotations from which machine learning methods can derive the features necessary for classification. Nevertheless, this is a costly and often infeasible process. Thus, the second part of this thesis aims at explicitly modeling and exploiting domain-specific knowledge to perform recognition. To this end, we introduce computational ontologies and demonstrate that they can explicitly encode human knowledge and support multiple tasks, from data annotation to classification. In particular, we propose an ontology-based annotation tool, able to significantly reduce the effort needed to collect highly specialized labels, and demonstrate its effectiveness by building the VegImage dataset, a collection of about 4,000 images belonging to 24 fruit varieties, annotated with over 65,000 bounding boxes and enriched with a large knowledge base consisting of more than 1,000,000 OWL triples. We then exploit this ontology-structured knowledge by combining a semantic classifier, which performs inference based on the information encoded in the domain ontology, with a visual convolutional neural network, showing that integrating semantics into automatic classification models can be the key to solving a complex task such as the fine-grained recognition of fruit varieties, which requires the contribution of domain experts to be completely solved.
Performance evaluation of the proposed approaches provides a basis for assessing the validity of our claim, along with the scientific soundness of the developed models.
3

Eziolisa, Ositadimma Nnanna. "Investigation of Capabilities of Observers in a Watch Window Study." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1401889055.

4

Znotinas, Katherine. "Sensory Capabilities of Polypterus Senegalus in Aquatic and Terrestrial Environments." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37088.

Abstract:
In the amphibious fish Polypterus senegalus, focussing on lateral line, vision and electrosensation, we investigated sensory abilities, their interactions, and changes in their effects on locomotor behaviour between aquatic and terrestrial environments. First, we blocked lateral line, vision, or both, and examined effects on locomotion in both environments. Both senses affected both types of locomotion. When fish could see but not feel, variation in several kinematic variables increased, suggesting that sensory integration may affect locomotor control. Next, we assessed response to optokinetic stimuli of varying size and speed. Temporal and spatial visual acuity were both low, as expected in a nocturnal ambush predator. Visual ability in air was much reduced. Finally, we attempted to record electrogenesis in Polypterus, but did not observe the electric discharges reported in a previous study. Future studies might examine changes in sensory function, interaction and importance in behaviour in Polypterus raised in a terrestrial environment.
5

Matts, Tobias, and Anton Sterner. "Vision-based Driver Assistance Systems for Teleoperation of On-Road Vehicles: Compensating for Impaired Visual Perception Capabilities Due to Degraded Video Quality." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167146.

Abstract:
Autonomous vehicles are going to be part of the future transport of goods and people, but to make them usable in the unpredictable situations presented by real traffic, backup systems for manual vehicle control are needed. Teleoperation, where a driver controls the vehicle remotely, has been proposed as such a backup system. This technique is highly dependent on a stable, high-bandwidth wireless network to transmit high-resolution video from the vehicle to the driver station. A reduction in network bandwidth, resulting in a reduced level of detail in the video stream, could lead to a higher risk of driver error. This thesis is a two-part investigation. One part looks into whether lower resolution and increased lossy compression of video at the operator station affect driver performance and safety of operation during teleoperation. The second part covers the implementation of two vision-based driver assistance systems: one that detects and highlights vehicles and pedestrians in front of the vehicle, and one that detects and highlights lane markings. A driving test was performed on an asphalt track with white markings for track boundaries, with different levels of video quality presented to the driver. Reducing video quality had a negative effect on lap time and increased the number of times the track boundary was crossed. The test was performed with a small group of drivers, so the results can only be interpreted as an indication that video quality can negatively affect driver performance. The driver assistance system for detecting and marking pedestrians was tested by showing a test group pre-recorded video shot in traffic and having them react when they saw a pedestrian about to cross the road. The results of a one-way analysis of variance show that video quality significantly affects reaction times, with p = 0.02181 at significance level α = 0.05.
A two-way analysis of variance was also conducted, accounting for video quality, the use of a driver assistance system marking pedestrians, and the interaction between the two. The results suggest that marking pedestrians in very low-quality video does help reduce reaction times, but the effect is not significant at significance level α = 0.05.
6

Ryan, Kathryn Mary. "Pieces of practice | avian spaces." Thesis, The University of Sydney, 2012. http://hdl.handle.net/2123/12008.

Abstract:
This research paper is written in a first-person narrative style. The style mirrors the practice-led research methodology I have used, which privileges process over resolution and acknowledges that making can be both generative and interrogative. More traditional research methods rely on distancing the researcher from production and placing them within an external framework. Practice-led researchers "construct experiential starting points from which practice follows. They tend to 'dive in', to commence practising to see what emerges. They acknowledge that what emerges is individualistic and idiosyncratic." In this paper the reader is taken on a journey through the spaces of the future, present and past in search of the 'unfound'. The 'unfound' is also to some extent 'unknown', but occasionally reveals itself in the text through accidents of poetic association between objects, art and literary moments. The space of the paper is also an avian one. It doesn't interrogate the material egg and bird object motifs in my practical work, but occupies the air to which these forms owe their qualities of transience, agility and fragility. It is this element that exemplifies the space of my work's production. Instead of dissecting and pinning down this element (which would be antithetical), I have tried to occupy its spirit. A substantial part of the paper is made up of footnotes and references to exterior sites, elements that in this paper are far from peripheral. They are employed here as literary devices that enable a visual and conceptual illustration of the distance between process and analysis. Alberto Manguel wrote that "all writing depends on the generosity of the reader." This paper requires a 'generous reader' willing to follow an experimental journey. 1. Brad Haseman, 'Tightrope Writing: Creative Writing Programs in the RQF Environment' http://www.textjournal.com.au/april07/haseman.htm 2. Alberto Manguel, A History of Reading (London: Flamingo, 1997), 179.
7

Jonsson, Mårten. "Digital tools for the blind: How to increase navigational capabilities for visually impaired persons." Thesis, Högskolan i Skövde, Institutionen för kommunikation och information, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-9735.

Abstract:
The development of human-computer interaction (HCI) systems usable by people with visual impairments is a progressing field of research. Similarly, the creation of audio-only games and digital tools has been investigated somewhat thoroughly, with many interesting results. This thesis aims to combine the two fields in the creation of an audio-only digital tool aimed at aiding visually impaired persons in navigating unknown areas. This is done by looking at the field of HCI systems and games for the blind, and at the concepts of mental maps and spatial orientation within cognitive science. An application is created, evaluated and tested based on a set number of criteria. An experiment is performed and the results are evaluated and compared to another digital tool in order to learn more about how to increase the usability and functionality of digital tools for the visually impaired. The results give a strong indication of how best to proceed with future research.
8

Lin, Min-Chen (林旻蓁). "Visual Analytics with Data Integration Capabilities." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/78580185788947466124.

Abstract:
Master's thesis, Chung Hua University, Department of Information Management, academic year 104.
The quantity of data grows at the speed of light with advancing technology. One of the most discussed topics today is big data, as it carries far more value than it appears to. The faster the implications hidden in data are deciphered after it is produced, the greater the opportunity to stay ahead of competitors. One effective technique that allows people to interpret what is hidden in data in the shortest possible time is visual analysis. Visualization tools allow complicated data to be transformed into easy-to-read graphics. This process requires integrating data from a wide variety of sources in order to demonstrate their value graphically. Many visualization tools are available on the market; however, most support importing only a single file, and the few that allow importing multiple files are not necessarily capable of data integration. Professional statistical analysis programs, on the other hand, are complicated, which makes them difficult to use. For this reason, this study integrates data from multiple files and sources. The data integration consists of data merging and the addition of new attributes: data merging combines different data tables, while the addition of new attributes extends existing data fields and creates new attribute fields. This helps sort out the data to be visualized before visualization and maximizes the effect of the visualization. The Google Visualization API, which offers a large collection of chart types, is used as the visualization tool; the user's visualization settings are passed to the API to create the graphics. The framework designed for this study also provides a portable graphic service: a website created specifically for the generated graphics is produced and encrypted based on their visualization settings.
The user only has to share the address and password to allow others to view the graphics through a browser. The integrated visual analysis system framework built in this study is intended for data analysts, allowing them to integrate data before visualization and thereby maximize the visual effect of the data to be visualized. The portable graphic service lets users share the visualized results with others. The feasibility of the framework is demonstrated by applications such as cross-referencing college examination lists and nationwide cancer mortality data.
9

Hsu, Yu-Wen (許又文). "Design and Implementation of Parallel Biped Robot with Visual Capabilities." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/7ukt6f.

Abstract:
Master's thesis, National Taiwan Ocean University, Department of Electrical Engineering, academic year 107.
The purpose of this thesis is to design a biped robot with a parallel mechanism, in contrast to the tandem structure of most products on the market. The walking principle of the parallel biped mechanism is designed using inverse kinematics: the swing angle of each motor is calculated from the derived formulas and implemented on an Arduino to move the biped robot to the planned position. This is expected to be simpler than a tandem biped robot. For vision, the robot uses a LinkIt™ Smart 7688 Duo with a webcam to stream images; the biped robot can thus be used to remotely monitor the surrounding environment and also to perform visual tracking control. The system uses an Arduino Pro Mini to control the MG996R servo motors that actuate the robot, and transmits and receives data through a Bluetooth HC-05 module. The video stream is served by the LinkIt™ Smart 7688 Duo, and the video is shown on a human-machine interface written in Processing for remote control of the robot's moving directions. For the visual tracking capability, the system also uses Processing for image analysis to recognize the desired path and to calculate the path's centroid position and offset; the control decision is then sent back to the server side to perform the tracking task.
10

Fluckiger, S. Joseph. "Security with visual understanding: Kinect human recognition capabilities applied in a home security system." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5077.

Abstract:
Vision is the most celebrated human sense. Eighty percent of the information humans receive is obtained through vision. Machines capable of capturing images are now ubiquitous, but until recently, they have been unable to recognize objects in the images they capture. In effect, machines have been blind. This paper explores the revolutionary new capability of a camera to recognize whether a human is present in an image and take detailed measurements of the person’s dimensions. It explains how the hardware and software of the camera work to provide this remarkable capability in just 200 milliseconds per image. To demonstrate these capabilities, a home security application has been built called Security with Visual Understanding (SVU). SVU is a hardware/software solution that detects a human and then performs biometric authentication by comparing the dimensions of the seen person against a database of known people. If the person is unrecognized, an alarm is sounded, and a picture of the intruder is sent via SMS text message to the home owner. Analysis is performed to measure the tolerance of the SVU algorithm for differentiating between two people based on their body dimensions.

Books on the topic "Visual capabilities"

1

Korneev, Viktor, Larisa Gagarina, and Mariya Korneeva. Visualization in Scientific Research. INFRA-M Academic Publishing LLC, 2021. http://dx.doi.org/10.12737/1029660.

Abstract:
The textbook describes the methods of graphical representation of the results of the calculation of physical and engineering problems, represented by specialized programs and operating system tools. The graphic capabilities of the MATLAB package, which, along with powerful calculation tools, has excellent computer graphics, are studied in detail. A number of visualization tasks are solved by computer graphics programming methods in C++. The GDI GUI functions are used from the set of system API functions that the Windows operating system provides to the user. All examples in C++ are tested in the Visual Studio 2008 project development environment. The issues of interaction between the MATLAB package and programs written in C++ in the Visual Studio environment are considered. Meets the requirements of the federal state educational standards of higher education of the latest generation. For students studying in the field of training "Software Engineering".
2

Brocker, Susan. Vision Without Sight : Human Capabilities (Shockwave Social Studies). Children's Press (CT), 2007.

3

Grossberg, Stephen. The Visual World as Illusion. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199794607.003.0007.

Abstract:
This chapter shows how visual illusions arise from neural processes that play an adaptive role in achieving the remarkable perceptual capabilities of advanced brains. It clarifies that many visual percepts are visual illusions, in the sense that they arise from active processes that reorganize and complete perceptual representations from the noisy data received by retinas. Some of these representations look illusory, whereas others look real. The chapter heuristically summarizes explanations of illusions that arise due to completion of perceptual groupings, filling-in of surface lightnesses and colors, transformation of ambiguous motion signals into coherent percepts of object motion direction and speed, and interactions between the form and motion cortical processing streams. A central theme is that the brain is organized into parallel processing streams with computationally complementary properties, that interstream interactions overcome these complementary deficiencies to compute effective representations of the world, and how these representations generate visual illusions.
4

Tossell, Mark, Blair Hutchinson, Roberto Andreoli, and Joshua N. Milligan. Learning Tableau 2022: Create Effective Data Visualizations, Build Interactive Visual Analytics, and Improve Your Data Storytelling Capabilities. Packt Publishing, Limited, 2022.

5

Sansone, Joseph. Seeing Is Believing: A Quantitative Study of Posthypnotic Suggestion and the Altering of Subconscious Beliefs to Enhance Visual Capabilities Including the Potential for Nonphysical Sight. High Energy Publishing LLC, 2019.

6

Saremi, Ahmad Reza. Determination of human visual capabilities in the identification of the color of highway signs under a combination of vehicle headlamp and high intensity discharge light sources. 1990.

7

Weinel, Jonathan. Inner Sound. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190671181.001.0001.

Abstract:
Inner Sound explores how altered states of consciousness have shaped the design of electronic music and audio-visual media. The book begins by discussing consciousness, and how this may change during states such as dreaming, psychedelic experience, meditation, and trance. Next, a variety of shamanic traditions are reviewed, in order to explore how indigenous societies have reflected visionary experiences through visual art and music. This provides the necessary background from which to consider how analogue and digital audio technologies enable specific capabilities for representing or inducing altered states of consciousness in psychedelic rock, electronic dance music, and electroacoustic music. Developing the discussion to consider sound in the context of audio-visual media, the role of altered states of consciousness in films, visual music, VJ performances, interactive video games, and virtual reality applications is also discussed. Through the analysis of these examples, the author uncovers common mechanisms, and ultimately proposes a conceptual model for ‘Altered States of Consciousness Simulations’. This theoretical model describes how sound can be used to simulate various subjective states of consciousness from a first-person perspective, in an interactive context. Throughout the book, the ethical issues regarding altered states of consciousness in electronic music and audio-visual media are also explored, ultimately allowing the reader to consider not only the design of Altered States of Consciousness Simulations, but also the implications of their use for digital society. In this way, Inner Sound explores the limits of technology for representing and manipulating consciousness, at the frontiers of electronic music and art.
8. Schotter, Jesse. Introduction: A Hieroglyphic Civilisation. Edinburgh University Press, 2018. http://dx.doi.org/10.3366/edinburgh/9781474424776.003.0001.

Abstract:
The introduction traces how, through the comparison to Egyptian hieroglyphs, twentieth-century writers, directors, and theorists incessantly invoked other media as well as other nations as they sought to define the most essential qualities and capabilities of their own. Rather than attempting to combine media, the modernists defined the uniqueness of any medium by its hybridity, its ability to enclose or embody the sonic, visual, or semantic characteristics of other media forms. At the same time, by situating conceptions of hieroglyphics within the historical context of Egypt in the 1920s and in relation to the novels of Tawfiq al-Hakim and Naguib Mahfouz, the book insists on the fundamental connection between theories of new technologies on the one hand and colonialism, nationalism, and the universalist desire to bridge linguistic and cultural boundaries on the other.
9. Walden, Joshua S. Epilogue. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190653507.003.0006.

Abstract:
The book’s epilogue explores the place of musical portraiture in the context of posthumous depictions of the deceased, and in relation to the so-called posthuman condition, which describes contemporary changes in the relationship of the individual with such aspects of life as technology and the body. It first examines Alfred Hitchcock’s Vertigo to view how Bernard Herrmann’s score relates to issues of portraiture and the depiction of the identity of the deceased. It then considers the work of cyborg composer-artist Neil Harbisson, who has aimed, through the use of new capabilities of hybridity between the body and technology, to convey something akin to visual likeness in his series of Sound Portraits. The epilogue shows how an examination of contemporary views of posthumous and posthuman identities helps to illuminate the ways music represents the self throughout the genre of musical portraiture.
10. Weinel, Jonathan. Virtual Unreality. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190671181.003.0008.

Abstract:
This chapter explores altered states of consciousness in interactive video games and virtual reality applications. First, a brief overview of advances in the sound and graphics of video games is provided, which has led to ever-more immersive capabilities within the medium. Following this, a variety of games that represent states of intoxication, drug use, and hallucinations are discussed, in order to reveal how these states are portrayed with the aid of sound and music, and for what purpose. An alternative trajectory in games is also explored, as various synaesthetic titles are reviewed, which provide high-adrenaline experiences for ravers, and simulate dreams, meditation, or psychedelic states. Through the analysis of these, and building upon the previous chapters of Inner Sound, this chapter presents a conceptual model for ‘Altered States of Consciousness Simulations’: interactive audio-visual systems that represent altered states with regards to the sensory components of the experience.

Book chapters on the topic "Visual capabilities"

1. Daw, Nigel W. "Development of Visual Capabilities." In Visual Development, 29–57. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4757-6940-1_3.

2. Daw, Nigel W. "Development of Visual Capabilities." In Visual Development, 27–53. Boston, MA: Springer US, 2013. http://dx.doi.org/10.1007/978-1-4614-9059-3_3.

3. Snyder, Harry L. "The Visual System: Capabilities and Limitations." In Flat-Panel Displays and CRTs, 54–69. Dordrecht: Springer Netherlands, 1985. http://dx.doi.org/10.1007/978-94-011-7062-8_3.

4. Little, James J., Jesse Hoey, and Pantelis Elinas. "Visual Capabilities in an Interactive Autonomous Robot." In Cognitive Vision Systems, 295–312. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11414353_17.

5. Amat, J., and A. Casals. "Visual Inspection System with Qualitative Analysis Capabilities." In Sensor Devices and Systems for Robotics, 323–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/978-3-642-74567-6_23.

6. Douglas, Ron H., and Craig W. Hawryshyn. "Behavioural studies of fish vision: an analysis of visual capabilities." In The Visual System of Fish, 373–418. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0411-8_11.

7. Tripi, Ferdinando, Rita Toni, Angela Lucia Calogero, Pasqualino Maietta Latessa, Antonio Tempesta, Stefania Toselli, Alessia Grigoletto, et al. "Visual and Motor Capabilities of Future Car Drivers." In Advances in Intelligent Systems and Computing, 214–20. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39512-4_34.

8. Kovalerchuk, Boris. "Discovering Visual Features and Shape Perception Capabilities in GLC." In Intelligent Systems Reference Library, 141–71. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73040-0_6.

9. Cornish, Katie, Joy Goodman-Deane, and P. John Clarkson. "Visual Capabilities: What Do Graphic Designers Want to See?" In Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods, 56–66. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58706-6_5.

10. Cárdenas, Martha I., Alfredo Vellido, and Jesús Giraldo. "Visual Exploratory Assessment of Class C GPCR Extracellular Domains Discrimination Capabilities." In Advances in Intelligent Systems and Computing, 31–39. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-40126-3_4.


Conference proceedings on the topic "Visual capabilities"

1. Nightingale, James, Qi Wang, Jose M. Alcaraz Calero, Ian Owens, and Christos Grecos. "Enhancing visual communications capabilities in tactical networks." In 2015 International Conference on Military Communications and Information Systems (ICMCIS). IEEE, 2015. http://dx.doi.org/10.1109/icmcis.2015.7158692.

2. Livingston, Mark. "Quantification of visual capabilities using augmented reality displays." In 2006 IEEE/ACM International Symposium on Mixed and Augmented Reality. IEEE, 2006. http://dx.doi.org/10.1109/ismar.2006.297788.

3. Dias, Joao Pedro, Andre Restivo, and Hugo Sereno Ferreira. "Empowering Visual Internet-of-Things Mashups with Self-Healing Capabilities." In 2021 IEEE/ACM 3rd International Workshop on Software Engineering Research and Practices for the IoT (SERP4IoT). IEEE, 2021. http://dx.doi.org/10.1109/serp4iot52556.2021.00014.

4. Yong, Wen Lin, Jun Kit Chaw, and Yiqi Tew. "Interactive Dashboard with Visual Sensing and Zero-Shot Learning Capabilities." In International Conference on Digital Transformation and Applications (ICDXA 2021). Tunku Abdul Rahman University College, 2021. http://dx.doi.org/10.56453/icdxa.2021.1009.

Abstract:
These days, technology is growing rapidly, and the market has been introduced with lots of fascinating ways to interact with computers. The advancement of deep learning models and hardware technology also enables more applications with fancy features to be built. The importance of hand gesture recognition has increased due to the prevalence of touchless applications. However, developing an efficient recognition system needs to overcome the challenges of hand segmentation, local hand shape representation, global body configuration representation, and a gesture sequence model. This paper proposed an interactive dashboard that could react to hand gestures. This is also an initiative of the Tunku Abdul Rahman University College (TAR UC) Smart Campus project. Deep learning models were investigated in this research and the optimal model was selected for the dashboard. In addition, 20BN Jester Dataset was used for the dashboard development. To set up a more user-friendly dashboard, the data communication stream between the captured input stream and commands among the devices will also be studied. As to achieve higher responsiveness from the dashboard, evaluation on data communication protocols which were used to pass the input data included in the study. Keywords: Computer Vision, Human-Computer Interaction (HCI), Gesture Detection, Real-time systems, Feature Extraction
5. Yindi, Dong. "Visual Basic Program Designing Based on Computational Thinking Capabilities Training." In The 2nd Information Technology and Mechatronics Engineering Conference (ITOEC 2016). Paris, France: Atlantis Press, 2016. http://dx.doi.org/10.2991/itoec-16.2016.31.

6. Wickens, Christopher D. "Three-dimensional stereoscopic display implementation: guidelines derived from human visual capabilities." In SC - DL tentative, edited by John O. Merritt and Scott S. Fisher. SPIE, 1990. http://dx.doi.org/10.1117/12.19883.

7. Guvensan, M. Amac, A. Gokhan Yavuz, Z. Cihan Taysi, M. Elif Karsligil, and Esra Celik. "Image Processing Capabilities of ARM-based Micro-controllers for Visual Sensor Networks." In 2011 IEEE/IFIP 9th International Conference on Embedded and Ubiquitous Computing (EUC). IEEE, 2011. http://dx.doi.org/10.1109/euc.2011.44.

8. Coianiz, Tarcisio, and Marco Aste. "Improving robot's indoor navigation capabilities by integrating visual, sonar, and odometric measurements." In Optical Tools for Manufacturing and Advanced Automation, edited by Paul S. Schenker. SPIE, 1993. http://dx.doi.org/10.1117/12.150258.

9. Johnson, Chris A., Craig W. Adams, Richard A. Lewis, and John L. Keltner. "Fatigue Effects in Automated Perimetry." In Noninvasive Assessment of the Visual System. Washington, D.C.: Optica Publishing Group, 1987. http://dx.doi.org/10.1364/navs.1987.wb2.

Abstract:
Automated static perimetry has improved the detection and differential diagnostic capabilities of clinical visual field testing. However, automated perimetry is more demanding and less flexible than manual visual field testing. The increased effort and attentional requirements of automated perimetry may adversely influence the sensitivity and reliability of visual field testing.
10. Karan, Kapil Yadav, and Amandeep Singh. "Comparative analysis of Visual Recognition Capabilities of CNN Architecture Enhanced with Gabor Filter." In 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC). IEEE, 2020. http://dx.doi.org/10.1109/icesc48915.2020.9155891.


Organization reports on the topic "Visual capabilities"

1. Fendrich, Robert. DURIP - Improved Eye Movement Monitoring Capabilities for Studies in Visual Cognition. Fort Belvoir, VA: Defense Technical Information Center, February 1990. http://dx.doi.org/10.21236/ada220355.

2. Acharya, Ashwin, Max Langenkamp, and James Dunham. Trends in AI Research for the Visual Surveillance of Populations. Center for Security and Emerging Technology, January 2022. http://dx.doi.org/10.51593/20200097.

Abstract:
Progress in artificial intelligence has led to growing concern about the capabilities of AI-powered surveillance systems. This data brief uses bibliometric analysis to chart recent trends in visual surveillance research — what share of overall computer vision research it comprises, which countries are leading the way, and how things have varied over time.
3. Kulhandjian, Hovannes. Detecting Driver Drowsiness with Multi-Sensor Data Fusion Combined with Machine Learning. Mineta Transportation Institute, September 2021. http://dx.doi.org/10.31979/mti.2021.2015.

Abstract:
In this research work, we develop a drowsy driver detection system through the application of visual and radar sensors combined with machine learning. The system concept was derived from the desire to achieve a high level of driver safety through the prevention of potentially fatal accidents involving drowsy drivers. According to the National Highway Traffic Safety Administration, drowsy driving resulted in 50,000 injuries across 91,000 police-reported accidents, and a death toll of nearly 800 in 2017. The objective of this research work is to provide a working prototype of Advanced Driver Assistance Systems that can be installed in present-day vehicles. By integrating two modes of visual surveillance to examine a biometric expression of drowsiness, a camera and a micro-Doppler radar sensor, our system offers high reliability over 95% in the accuracy of its drowsy driver detection capabilities. The camera is used to monitor the driver’s eyes, mouth and head movement and recognize when a discrepancy occurs in the driver's blinking pattern, yawning incidence, and/or head drop, thereby signaling that the driver may be experiencing fatigue or drowsiness. The micro-Doppler sensor allows the driver's head movement to be captured both during the day and at night. Through data fusion and deep learning, the ability to quickly analyze and classify a driver's behavior under various conditions such as lighting, pose-variation, and facial expression in a real-time monitoring system is achieved.
4. Ivanova, Halyna I., Olena O. Lavrentieva, Larysa F. Eivas, Iuliia O. Zenkovych, and Aleksandr D. Uchitel. The students' brainwork intensification via the computer visualization of study materials. [n.p.], July 2020. http://dx.doi.org/10.31812/123456789/3859.

Abstract:
The paper the approaches to the intensification of the students’ brainwork by means of computer visualization of study material have been disclosed. In general, the content of students’ brainwork has been presented as a type of activity providing the cognitive process, mastering the techniques and ways of thinking, developing the capabilities and abilities of the individual, the product of which is a certain form of information, as a result of the brainwork the outlook of the subject of work is enriched. It is shown the visualization is the process of presenting data in the form of an image with the aim of maximum ease of understanding; the giving process of visual form to any mental object. In the paper the content, techniques, methods and software for creating visualization tools for study material has exposed. The essence and computer tools for creating such types of visualization of educational material like mind maps, supporting notes and infographics have been illustrated; they have been concretized from the point of view of application in the course of studying the mathematical sciences. It is proved the use of visualization tools for study materials helps to increase the intensity and effectiveness of students’ brainwork. Based on the results of an empirical study, it has been concluded the visualization of study materials contributes to the formation of students’ key intellectual competencies and forming their brainwork culture.