Journal articles on the topic 'Visual Computing (VIC)'

Consult the top 50 journal articles for your research on the topic 'Visual Computing (VIC).'

1

Koch, C., H. T. Wang, and B. Mathur. "Computing motion in the primate's visual system." Journal of Experimental Biology 146, no. 1 (September 1, 1989): 115–39. http://dx.doi.org/10.1242/jeb.146.1.115.

Full text
Abstract:
Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We will show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate's visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision, and is performed in two steps. In the first stage, local motion is computed, while in the second stage spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as onto cells in the middle temporal area (MT). Our algorithm mimics a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as electrophysiological recordings. Thus, a single unifying principle ‘the final optical flow should be as smooth as possible’ (except at isolated motion discontinuities) explains a large number of phenomena and links single-cell behavior with perception and computational theory.
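The smoothness principle this abstract highlights is the same one underlying classic gradient-based flow solvers in the Horn and Schunck tradition. A minimal NumPy sketch of such a smoothness-regularized estimator follows; it illustrates the general variational scheme only, not the authors' neural implementation, and the parameter names are this sketch's own:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
    """Estimate optical flow by minimizing a brightness-constancy data term
    plus a smoothness term (the 'as smooth as possible' principle)."""
    I1, I2 = I1.astype(float), I2.astype(float)
    Ix = np.gradient(I1, axis=1)   # spatial derivative in x
    Iy = np.gradient(I1, axis=0)   # spatial derivative in y
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def local_avg(f):              # mean of the four neighbours
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):        # Jacobi iteration on the Euler-Lagrange eqs.
        u_avg, v_avg = local_avg(u), local_avg(v)
        t = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v
```

The `alpha` parameter trades the data term against smoothness: larger values yield smoother, more spatially integrated flow fields, loosely analogous to the second (spatial-integration) stage described above.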
APA, Harvard, Vancouver, ISO, and other styles
2

Xue-Ming, Li, Li Fang-Hua, and Fan Hai-Fu. "A revised version of the program VEC (visual computing in electron crystallography)." Chinese Physics B 18, no. 6 (June 2009): 2459–63. http://dx.doi.org/10.1088/1674-1056/18/6/056.

Full text
3

Li, XiaoYong, QinYang Yu, Yong Zhang, JinWei Dai, and BaoCai Yin. "Visual Analytic Method for Students’ Association via Modularity Optimization." Applied Sciences 10, no. 8 (April 18, 2020): 2813. http://dx.doi.org/10.3390/app10082813.

Full text
Abstract:
Students spend most of their time living and studying on campus, especially in Asia, and they form various types of associations in addition to those with classmates and roommates. It is necessary for university authorities to understand these types of associations, so as to provide appropriate services, such as psychological guidance and academic advice. With the rapid development of the “smart campus,” many kinds of student behavior data are recorded, which provides an unprecedented opportunity to deeply analyze students’ associations. In this paper, we propose a visual analytic method to construct students’ association networks by computing the similarity of their behavior data. We discover student communities using the popular Louvain (or BGLL) algorithm, which extracts community structures based on modularity optimization. Using various visualization charts, we visualized associations among students to express them intuitively. We evaluated our method using the real behavior data of undergraduates at a university in Beijing. The experimental results indicate that this method is effective and intuitive for student association analysis.
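Modularity optimization, which the Louvain algorithm performs greedily, boils down to scoring candidate partitions with Newman's modularity Q. A small illustrative sketch of that score (not the paper's implementation) is:

```python
import numpy as np

def modularity(adj, communities):
    """Newman's Q = (1/2m) * sum_ij (A_ij - k_i*k_j / 2m) * delta(c_i, c_j).
    Louvain-style methods move nodes between communities to increase Q."""
    A = np.asarray(adj, dtype=float)
    k = A.sum(axis=1)              # node degrees
    two_m = A.sum()                # 2m = total degree of the graph
    Q = 0.0
    for comm in communities:       # only same-community pairs contribute
        idx = np.fromiter(comm, dtype=int)
        Q += A[np.ix_(idx, idx)].sum() - k[idx].sum() ** 2 / two_m
    return Q / two_m
```

For two triangles joined by a single edge, the natural two-community split scores Q = 5/14, while lumping all nodes into one community scores exactly 0, which is why maximizing Q recovers the intuitive communities.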
4

Hokkanen, J. E. "Visual simulations, artificial animals and virtual ecosystems." Journal of Experimental Biology 202, no. 23 (December 1, 1999): 3477–84. http://dx.doi.org/10.1242/jeb.202.23.3477.

Full text
Abstract:
This review is about a field that does not traditionally belong to biological sciences. A branch of computer animation has its mission to create active self-powered objects living artificial lives in the theoretical biology zone. Selected work, of particular interest to biologists, is presented here. These works include animated simulations of legged locomotion, flexible-bodied animals swimming and crawling, artificial fish in virtual ecosystems, automated learning of swimming and the evolution of virtual creatures with respect to morphology, locomotion and behaviour. The corresponding animations are available for downloading via the Internet. I hope that watching these intriguing pieces of visual simulation will stimulate digitally oriented biologists to seize the interactive methods made possible by ever-increasing computing power.
5

Naranjo, Diana M., José R. Prieto, Germán Moltó, and Amanda Calatrava. "A Visual Dashboard to Track Learning Analytics for Educational Cloud Computing." Sensors 19, no. 13 (July 4, 2019): 2952. http://dx.doi.org/10.3390/s19132952.

Full text
Abstract:
Cloud providers such as Amazon Web Services (AWS) stand out as useful platforms to teach distributed computing concepts as well as the development of Cloud-native scalable application architectures on real-world infrastructures. Instructors can benefit from high-level tools to track the progress of students during their learning paths on the Cloud, and this information can be disclosed via educational dashboards for students to understand their progress through the practical activities. To this end, this paper introduces CloudTrail-Tracker, an open-source platform to obtain enhanced usage analytics from a shared AWS account. The tool provides the instructor with a visual dashboard that depicts the aggregated usage of resources by all the students during a certain time frame and the specific use of AWS for a specific student. To facilitate self-regulation of students, the dashboard also depicts the percentage of progress for each lab session and the pending actions by the student. The dashboard has been integrated in four Cloud subjects that use different learning methodologies (from face-to-face to online learning) and the students positively highlight the usefulness of the tool for Cloud instruction in AWS. This automated procurement of evidence of student activity on the Cloud results in close to real-time learning analytics useful both for semi-automated assessment and student self-awareness of their own training progress.
6

KENDER, JOHN R. "VISUAL INTERFACES TO COMPUTERS: A SYSTEMS-ORIENTED FIRST COURSE IN RELIABLE CONTROL VIA IMAGERY ("VISUAL INTERFACES")." International Journal of Pattern Recognition and Artificial Intelligence 15, no. 05 (August 2001): 869–84. http://dx.doi.org/10.1142/s0218001401001209.

Full text
Abstract:
We present the rationale, description, and critique of a first course in image computing that is not a traditional computer vision principles-and-tools course. "Visual Interfaces to Computers" is instead complementary to standard Computer Vision, User Interface, and Graphics courses; in fact, VI:CV::UI:G. It is organized by case studies of working visual systems that use camera input for data or control information in service of higher user goals, such as GUI control, user identification, or automobile steering. Many CV scientific principles and engineering tools are therefore taught, as well as those of psychophysics, AI, and EE, but taught selectively and always within the context of total system design. Course content is derived from conference and journal articles and Ph.D. theses, augmented with video tapes and real-time web site demos. Students do two homework assignments, one to design a "visual combination lock", and one to parse an image into English. They also do a final paper or project of their own choosing, often in teams of two, and often with surprisingly deep results. The course is assisted by a custom C-based tool kit, "XILite", a user-friendly (and comparatively bug-free) modification of Sun's X-windows Image Library for our lab's camera-equipped Sun workstations. The course has been offered twice to a wide audience with good reviews.
7

Jiang, Ming-xin, Xian-xian Luo, Tao Hai, Hai-yan Wang, Song Yang, and Ahmed N. Abdalla. "Visual Object Tracking in RGB-D Data via Genetic Feature Learning." Complexity 2019 (May 2, 2019): 1–8. http://dx.doi.org/10.1155/2019/4539410.

Full text
Abstract:
Visual object tracking is a fundamental component in many computer vision applications. Extracting robust object features is one of the most important steps in tracking. As trackers formulated only on RGB data are usually affected by occlusions, appearance changes, or illumination variations, we propose a novel RGB-D tracking method based on genetic feature learning in this paper. Our approach addresses feature learning as an optimization problem. Owing to its inherent parallelism, the genetic algorithm (GA) converges quickly and offers excellent global optimization performance. At the same time, unlike handcrafted features and deep learning methods, GA can solve the feature representation problem without prior knowledge, and it does not require learning a large number of parameters. The candidate solution in the RGB or depth modality is represented as an encoding of an image in the GA, and genetic features are learned through population initialization, fitness evaluation, selection, crossover, and mutation. The proposed RGB-D tracker is evaluated on a popular benchmark dataset, and experimental results indicate that our method achieves higher accuracy and faster tracking speed.
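As a toy sketch of the genetic loop named in this abstract (population initialization, fitness evaluation, selection, crossover, mutation), a binary feature mask can be evolved against an arbitrary black-box fitness function. All names and parameter values below are this sketch's own assumptions, not the authors' tracker:

```python
import random

def evolve_feature_mask(fitness, n_features, pop_size=30, generations=60,
                        p_mut=0.05, seed=42):
    """Evolve a 0/1 feature mask that maximizes a black-box fitness function."""
    rng = random.Random(seed)
    # Population initialization: random binary masks.
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness evaluation + truncation selection: keep the top half.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut)     # bit-flip mutation
                     for bit in child]
            children.append(child)
        pop = parents + children                      # elitist replacement
    return max(pop, key=fitness)
```

Keeping the parents unchanged each generation makes the best score monotonically non-decreasing, a common safeguard when GA fitness evaluations are expensive, as they would be for tracking features.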
8

Sun, Linfeng, Zhongrui Wang, Jinbao Jiang, Yeji Kim, Bomin Joo, Shoujun Zheng, Seungyeon Lee, Woo Jong Yu, Bai-Sun Kong, and Heejun Yang. "In-sensor reservoir computing for language learning via two-dimensional memristors." Science Advances 7, no. 20 (May 2021): eabg1455. http://dx.doi.org/10.1126/sciadv.abg1455.

Full text
Abstract:
The dynamic processing of optoelectronic signals carrying temporal and sequential information is critical to various machine learning applications including language processing and computer vision. Despite extensive efforts to emulate the visual cortex of the human brain, large energy/time overhead and extra hardware costs are incurred by the physically separated sensing, memory, and processing units. The challenge is further intensified by the tedious training of conventional recurrent neural networks for edge deployment. Here, we report in-sensor reservoir computing for language learning. High dimensionality, nonlinearity, and fading memory for the in-sensor reservoir were achieved via two-dimensional memristors based on tin sulfide (SnS), uniquely having dual-type defect states associated with Sn and S vacancies. Our in-sensor reservoir computing demonstrates an accuracy of 91% in classifying short sentences of language, thus pointing toward a low-training-cost, real-time solution for processing temporal and sequential signals for machine learning applications at the edge.
9

Stokes, John F., and Marlene A. Devine. "Mirp: A Wearable Tool for Evaluating Effectiveness of Information Display." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 6 (September 2005): 728–31. http://dx.doi.org/10.1177/154193120504900602.

Full text
Abstract:
Traditional approaches to HMI design focus on the use of visual displays and manual inputs, but these do not take advantage of the full range of means by which humans can perceive and interact with their environment. For wearable computing systems, the selection of modalities depends greatly on the proper consideration of human cognitive capabilities. The Multimodal Interface Research Platform (MIRP) is a wearable platform for evaluating task-relevant human performance by presenting information using three modalities: Visual (via head mounted display), Auditory (via earphones), and Haptic (via four vibrating actuators on the shoulders). Within the context of a predetermined task scenario, MIRP is able to monitor and record the user's interactions with the system and collect reaction time and a coarse accuracy determination of whether a message was understood. This enables observations about simple reaction time with respect to different alert/message modalities, as well as inferences about their understandability.
10

Gong, Dawei, Zhiheng He, Xiaolong Ye, and Ziyun Fang. "Visual Saliency Detection for Over-Temperature Regions in 3D Space via Dual-Source Images." Sensors 20, no. 12 (June 17, 2020): 3414. http://dx.doi.org/10.3390/s20123414.

Full text
Abstract:
To allow mobile robots to visually observe the temperature of equipment in complex industrial environments and act on temperature anomalies in time, it is necessary to accurately find the coordinates of temperature anomalies and obtain information on the surrounding obstacles. This paper proposes a visual saliency detection method for over-temperature regions in three-dimensional space using dual-source images. The key novelty of this method is that it achieves accurate salient object detection without relying on high-performance hardware. First, redundant point clouds are removed through adaptive sampling to reduce the computational memory. Second, the original images are merged with infrared images and the dense point clouds are surface-mapped to visually display the temperature of the reconstructed surface, and infrared imaging characteristics are used to detect the plane coordinates of temperature anomalies. Finally, coordinate transformation and mapping are performed according to the pose relationship to obtain the spatial position. Experimental results show that this method not only displays the temperature of the device directly but also accurately obtains the spatial coordinates of the heat source without relying on a high-performance computing platform.
11

Li, Xinxin, Gaoming Jiang, Aijun Zhang, and Jonathan Y. Chen. "A photograph-based approach for visual simulation of wrapped Jacquardtronic lace." Textile Research Journal 88, no. 23 (September 14, 2017): 2654–64. http://dx.doi.org/10.1177/0040517517729386.

Full text
Abstract:
A computing approach is proposed in this paper to simulate wrapped Jacquardtronic® lace appearance based on wrapped-yarn photographs. The approach focuses on the determination of yarn color distribution functions using yarn photographs and extracted hue, saturation and value meshes in HSV color mode. Section curves of these meshes are fitted with a piecewise linear fitting algorithm and the least-squares method to respectively describe the distribution functions of dull, semi-dull and shining wrapped yarns. A modified depth-buffer method is also selected for visible lapping detection when dealing with multi-bar lapping coverage on the simulation plane. With the photograph-based approach, a simulator is implemented for visualization of wrapped Jacquardtronic lace via the Visual C++ programming language. It is confirmed that the developed method is capable of simulating sophisticated wrapped Jacquardtronic® lace with a superior stereoscopic impression and high efficiency.
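The piecewise linear least-squares fitting of section curves described above can be sketched in a few lines. This is a generic illustration under the assumption of fixed, known breakpoints; the paper's actual HSV-mesh pipeline (and its Visual C++ implementation) is more involved:

```python
import numpy as np

def piecewise_linear_fit(x, y, breakpoints):
    """Fit one least-squares straight line per segment between consecutive
    breakpoints; returns a (lo, hi, slope, intercept) tuple per segment."""
    segments = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        mask = (x >= lo) & (x <= hi)           # samples in this segment
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        segments.append((lo, hi, slope, intercept))
    return segments
```

Each segment's line minimizes squared error independently, so the fitted pieces need not join continuously at the breakpoints; continuity would require a constrained (spline-style) fit instead.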
12

Franz, Marcel, Barbara Schmidt, Holger Hecht, Ewald Naumann, and Wolfgang H. R. Miltner. "Suggested visual blockade during hypnosis: Top-down modulation of stimulus processing in a visual oddball task." PLOS ONE 16, no. 9 (September 15, 2021): e0257380. http://dx.doi.org/10.1371/journal.pone.0257380.

Full text
Abstract:
Several theories of hypnosis assume that responses to hypnotic suggestions are implemented through top-down modulations via a frontoparietal network that is involved in monitoring and cognitive control. The current study addressed this issue re-analyzing previously published event-related-potentials (ERP) (N1, P2, and P3b amplitudes) and combined it with source reconstruction and connectivity analysis methods. ERP data were obtained from participants engaged in a visual oddball paradigm composed of target, standard, and distractor stimuli during a hypnosis (HYP) and a control (CON) condition. In both conditions, participants were asked to count the rare targets presented on a video screen. During HYP participants received suggestions that a wooden board in front of their eyes would obstruct their view of the screen. The results showed that participants’ counting accuracy was significantly impaired during HYP compared to CON. ERP components in the N1 and P2 window revealed no amplitude differences between CON and HYP at sensor-level. In contrast, P3b amplitudes in response to target stimuli were significantly reduced during HYP compared to CON. Source analysis of the P3b amplitudes in response to targets indicated that HYP was associated with reduced source activities in occipital and parietal brain areas related to stimulus categorization and attention. We further explored how these brain sources interacted by computing time-frequency effective connectivity between electrodes that best represented frontal, parietal, and occipital sources. This analysis revealed reduced directed information flow from parietal attentional to frontal executive sources during processing of target stimuli. These results provide preliminary evidence that hypnotic suggestions of a visual blockade are associated with a disruption of the coupling within the frontoparietal network implicated in top-down control.
13

An, Xiaoyu, Zijie Meng, Yanfeng Wang, and Junwei Sun. "Proportional-Integral-Derivative Control of Four-Variable Chaotic Oscillatory Circuit Based on DNA Strand Displacement." Journal of Nanoelectronics and Optoelectronics 16, no. 4 (April 1, 2021): 612–23. http://dx.doi.org/10.1166/jno.2021.2994.

Full text
Abstract:
DNA molecular computing based on DNA strand displacement (DSD) technology is a promising computing model. Different functions can be realized by constructing DNA strand-displacement analog circuits and analyzing their dynamic characteristics. In this paper, exploiting chemical reaction networks (CRNs) as the middle layer, a new chaotic oscillation circuit is constructed via DNA strand displacement and controlled by a PID controller. The design of the four-variable chaotic oscillatory circuit requires the combination and cascading of five DNA reaction modules. Based on stability theory and controller design principles, proportional, integral, and derivative terms are added to the chaotic oscillatory circuit to implement the PID controller. The PID controller is realized by five DNA reaction modules to stabilize the chaotic oscillation circuit. The validity of the reaction-module circuits, their corresponding DSD reaction modules, and the controller is verified with Visual DSD and Matlab. The PID controller may perform better than the PI controller, of which it is an extension.
14

Burghardt, Dirk, Wolfgang Nejdl, Jochen Schiewe, and Monika Sester. "Volunteered Geographic Information: Interpretation, Visualisation and Social Computing (VGIscience)." Proceedings of the ICA 1 (May 16, 2018): 1–5. http://dx.doi.org/10.5194/ica-proc-1-15-2018.

Full text
Abstract:
In the past years Volunteered Geographic Information (VGI) has emerged as a novel form of user-generated content, which involves the active generation of geo-data, for example in citizen science projects or during crisis mapping, as well as the passive collection of data via users' location-enabled mobile devices. In addition, more and more sensors are available that sense our environment in ever greater detail and dynamics. These data can be used for a variety of applications: not only for societal tasks in fields such as environment, health or transport, but also for the development of commercial products and services. The interpretation, visualisation and usage of such multi-source data are challenging because of their large heterogeneity, differences in quality, high update frequencies, varying spatial-temporal resolution, subjective characteristics and low semantic structuring. Therefore the German Research Foundation has launched a priority programme for the next 3–6 years which will support interdisciplinary research projects. This priority programme aims to provide a scientific basis for raising the potential of VGI and sensor data. The research questions described in this short paper span from the extraction of spatial information to visual analysis and knowledge presentation, taking into account the social context while collecting and using VGI.
15

Mann, Steve. "Existential Technology: Wearable Computing Is Not the Real Issue!" Leonardo 36, no. 1 (February 2003): 19–25. http://dx.doi.org/10.1162/002409403321152239.

Full text
Abstract:
The author presents “Existential Technology” as a new category of in(ter)ventions and as a new theoretical framework for understanding privacy and identity. His thesis is twofold: (1) The unprotected individual has lost ground to invasive surveillance technologies and complex global organizations that undermine the humanistic property of the individual; (2) A way for the individual to be free and collegially assertive in such a world is to be “bound to freedom” by an articulably external force. To that end, the author explores empowerment via self-demotion. He has founded a federally incorporated company and appointed himself to a low enough position to be bound to freedom within that company. His performances and in(ter)ventions over the last 30 years have led him to an understanding of such concepts as individual self-corporatization and submissivity reciprocity for the creation of a balance of bureaucracy.
16

Padmanaban, Nitish, Robert Konrad, Tal Stramer, Emily A. Cooper, and Gordon Wetzstein. "Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays." Proceedings of the National Academy of Sciences 114, no. 9 (February 13, 2017): 2183–88. http://dx.doi.org/10.1073/pnas.1617251114.

Full text
Abstract:
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
17

Wang, Chao-Ming, and Yu-Chen Chen. "Design of an Interactive Mind Calligraphy System by Affective Computing and Visualization Techniques for Real-Time Reflections of the Writer’s Emotions." Sensors 20, no. 20 (October 9, 2020): 5741. http://dx.doi.org/10.3390/s20205741.

Full text
Abstract:
A novel interactive system for calligraphy called mind calligraphy, which reflects the writer’s emotions in real time through affective computing and visualization techniques, is proposed. Unlike traditional calligraphy, which emphasizes artistic expression, the system is designed to visualize the writer’s mental-state changes during writing using audio-visual tools. The writer’s mental state is measured with a brain wave machine to yield attention and meditation signals, which are then classified into four types of emotion, namely, focusing, relaxation, calmness, and anxiety. These emotion types are then represented both by animations and color palettes for bystanders to appreciate. Based on conclusions drawn from data collected from on-site observations, surveys via Likert-scale questionnaires, and semi-structured interviews, the proposed system was improved gradually. The participating writers’ cognitive, emotional, and behavioral engagements in the system were recorded and analyzed to obtain the following findings: (1) the interactions with the system raise the writer’s interest in calligraphy; (2) the proposed system reveals the writer’s emotions during the writing process in real time via animations of mixtures of fish swimming and sounds of raindrops, insects, and thunder; (3) the dynamic visualization of the writer’s emotion through animations and color-palette displays makes the writer understand better the connection of calligraphy and personal emotions; (4) the real-time audio-visual feedback increases the writer’s willingness to continue in calligraphy; and (5) the engagement of the writer in the system with interactions of diversified forms provides the writer with a new experience of calligraphy.
18

Saptaputra, Emmanuel Himawan, Arsa Widitiarsa Utoyo, and Nia Karlna. "Gamification Analysis in UI and UX for Parking Spot Apps." Journal of Games, Game Art, and Gamification 5, no. 1 (October 19, 2021): 9–14. http://dx.doi.org/10.21512/jggag.v5i1.7470.

Full text
Abstract:
Advances in personal computing and information technology have changed how content is updated and published online or via mobile devices. Consequently, we must consider interaction as a fundamental complement of representation in cartography and visualization. User interface (UI) and user experience (UX) design describe a set of concepts, guidelines and workflows for critically reflecting on the design and use of an interactive, map-based or other product. This entry presents the basic concepts of UI/UX design that are important for cartography and visualization, focusing on issues related to visual design. First, a fundamental distinction is made between the use of an interface as a tool and the broader experience of an interaction, a distinction that separates UI design from UX design. The stages of Norman's interaction framework are then introduced as a way to structure interaction. Finally, three dimensions of user interface design are described: the fundamental interaction operators that form the basic building blocks of interfaces, the interface styles in which these primitive operators are implemented, and recommendations for the visual design of an interface.
19

Yar, Hikmat, Ali Shariq Imran, Zulfiqar Ahmad Khan, Muhammad Sajjad, and Zenun Kastrati. "Towards Smart Home Automation Using IoT-Enabled Edge-Computing Paradigm." Sensors 21, no. 14 (July 20, 2021): 4932. http://dx.doi.org/10.3390/s21144932.

Full text
Abstract:
Smart home applications are ubiquitous and have gained popularity due to the overwhelming use of Internet of Things (IoT)-based technology. The revolution in technologies has made homes more convenient, efficient, and even more secure. The need for advancement in smart home technology is necessary due to the scarcity of intelligent home applications that cater to several aspects of the home simultaneously, i.e., automation, security, safety, and reducing energy consumption using less bandwidth, computation, and cost. Our research work provides a solution to these problems by deploying a smart home automation system with the applications mentioned above over a resource-constrained Raspberry Pi (RPI) device. The RPI is used as a central controlling unit, which provides a cost-effective platform for interconnecting a variety of devices and various sensors in a home via the Internet. We propose a cost-effective integrated system for the smart home based on the IoT and Edge-Computing paradigm. The proposed system provides remote and automatic control of home appliances, ensuring security and safety. Additionally, the proposed solution uses the edge-computing paradigm to store sensitive data in a local cloud to preserve the customer’s privacy. Moreover, visual and scalar sensor-generated data are processed and stored on the edge device (RPI) to reduce bandwidth, computation, and storage cost. In comparison with state-of-the-art solutions, the proposed system is 5% faster in detecting motion, and 5 ms and 4 ms faster in switching the relay on and off, respectively. It is also 6% more efficient than the existing solutions with respect to energy consumption.
20

Wu, Sihan, and Xue Zhang. "Visualization of Railway Transportation Engineering Management Using BIM Technology under the Application of Internet of Things Edge Computing." Wireless Communications and Mobile Computing 2022 (February 28, 2022): 1–15. http://dx.doi.org/10.1155/2022/4326437.

Full text
Abstract:
In the past, railway line planning usually required engineers to design based on their own experience after a series of field visits, leading to heavy workload and low efficiency. Moreover, operation and maintenance management is complicated by the abundance of railway station equipment. Based on the above problems, this paper first puts forward a railway transportation line planning and design method based on Building Information Modeling (BIM) technology. LocaSpace Viewer realizes the three-dimensional (3D) visual scene modeling of the railway environment to improve the efficiency of railway line planning and design. Secondly, the railway station’s visual operation and maintenance management system is constructed via BIM technology. In addition, the Internet of Things (IoT) is combined with edge computing and deep learning technology to build a 3D model of station equipment, collect data in real time, and analyze data efficiently. Finally, the design effect of the model, the performance of the visual management system, and the test results of network transmission delay are displayed and analyzed. The results show that BIM can construct a high-fidelity 3D visualization model of the railway environment. This model can produce a reasonable line planning scheme and analyze its feasibility, provide a reliable basis for engineers to plan railway transportation lines, and improve design efficiency. In addition, the GPU, CPU, and memory occupation rates of the operation and maintenance management system in different operating environments are within the standard range; when multiple clients access the system, the system's data access delay remains below 8 ms in all cases, demonstrating good performance. Furthermore, the IoT real-time data-transmission scheduling model and the edge computing optimization algorithm applied to this system outperform other popular methods and can significantly improve the operating efficiency of the system. This study aims to enhance the efficiency of railway transportation line planning and station operation and maintenance management with the help of digital technologies.
APA, Harvard, Vancouver, ISO, and other styles
21

Ji, Xiao Gang, Zhong Bin Wang, and Ke Niu. "Application of LOD Technology in the 3DVR Remote Control Platform for Shearer." Applied Mechanics and Materials 79 (July 2011): 87–92. http://dx.doi.org/10.4028/www.scientific.net/amm.79.87.

Full text
Abstract:
The scene model in the 3DVR remote control platform for the shearer is extremely complex. In order to speed up 3D scene rendering and reduce the computational load of the system, we simplified the scene model based on LOD (level-of-detail) technology. The LOD algorithm was designed, in which the computing method for the simplification rate, the triangle mesh simplification algorithm based on edge contraction, and the data structure of the original mesh were described. Two shearer drums were simulated on the XNA platform via the above method. The simulation results show that LOD models generated by the algorithm exhibit almost no loss in visual effect, while the complexity of the scene model is reduced and the scene rendering speed is improved.
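The contraction-based mesh simplification this abstract refers to can be sketched minimally. The shortest-edge ordering, midpoint placement, and target triangle count below are illustrative assumptions, not the paper's exact algorithm:

```python
import math

def edge_length(vertices, a, b):
    """Euclidean length of the edge between vertex indices a and b."""
    return math.dist(vertices[a], vertices[b])

def simplify(vertices, triangles, target):
    """Repeatedly contract the shortest edge to its midpoint until the
    triangle count reaches the target (the simplification rate)."""
    vertices = [list(v) for v in vertices]
    triangles = [tuple(t) for t in triangles]
    while len(triangles) > target:
        edges = {tuple(sorted((t[i], t[(i + 1) % 3])))
                 for t in triangles for i in range(3)}
        if not edges:
            break
        a, b = min(edges, key=lambda e: edge_length(vertices, *e))
        # Contract vertex b into vertex a, placed at the edge midpoint.
        vertices[a] = [(p + q) / 2 for p, q in zip(vertices[a], vertices[b])]
        remapped = [tuple(a if v == b else v for v in t) for t in triangles]
        # Drop triangles that collapsed (repeated vertex indices).
        triangles = [t for t in remapped if len(set(t)) == 3]
    return vertices, triangles
```

Production LOD pipelines typically order contractions by an error metric (e.g. quadric error) rather than edge length; the shortest-edge heuristic here only illustrates the contract-and-drop structure.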
APA, Harvard, Vancouver, ISO, and other styles
22

Utterson, Andrew. "Early Visions of Interactivity: The In(put)s and Out(put)s of Real-Time Computing." Leonardo 46, no. 1 (February 2013): 67–72. http://dx.doi.org/10.1162/leon_a_00487.

Full text
Abstract:
Analyzing technical and other texts of the late 1960s and early 1970s, this paper explores the early discourses of interactivity—including writings by Charles Csuri, J.C.R. Licklider, Michael Noll, Ivan Sutherland and other notable figures—via the intersecting fields of computing and the arts, with a particular emphasis on the dynamic (in this instance, a disjuncture) between visionary ideas and the technical preconditions necessary for their realization.
APA, Harvard, Vancouver, ISO, and other styles
23

Ali, Zulfiqar, Aiman M. Ayyal Awwad, and Wolfgang Slany. "Using Executable Specification and Regression Testing for Broadcast Mechanism of Visual Programming Language on Smartphones." International Journal of Interactive Mobile Technologies (iJIM) 13, no. 02 (February 22, 2019): 50. http://dx.doi.org/10.3991/ijim.v13i02.9851.

Full text
Abstract:
The rapid advancement of mobile computing technology and the rising usage of mobile apps have made our daily lives more productive. A mobile app should operate bug-free at all times in order to improve user satisfaction and offer great business value to the end user. At the same time, smartphones are full of special features that make testing of apps more challenging. Quality is a must for successful applications, and it cannot be achieved without testing and verification. In this paper, we present the Behavior Driven Development (BDD) methodology and the Cucumber framework to automate regression testing of Android apps. In particular, the proposed methods use the visual programming language for smartphones, Catrobat, as a reference. Catrobat program scripts communicate via a broadcast mechanism. The objective is to test the broadcast mechanism from different angles, track regression errors, and specify and diagnose bugs with the help of executable specifications. The results show that the methods are able to effectively reveal deficiencies in the broadcast mechanism and ensure that the app matches all expectations and needs of end users.
APA, Harvard, Vancouver, ISO, and other styles
24

Pan, Zhibin, Jin Tang, Tardi Tjahjadi, and Fan Guo. "Fast Geo-Location Method Based on Panoramic Skyline in Hilly Area." ISPRS International Journal of Geo-Information 10, no. 8 (August 9, 2021): 537. http://dx.doi.org/10.3390/ijgi10080537.

Full text
Abstract:
Skyline-based localization for visual geo-location is an important auxiliary localization method that does not rely on a satellite positioning system. Due to the computational complexity, existing panoramic skyline localization methods determine a small search area using prior knowledge or auxiliary sensors. After correcting the camera orientation using inertial navigation sensors, a fine position is achieved via the skyline. In this paper, a new panoramic skyline localization method is proposed. By clustering the sampling points in the location area and improving the existing retrieval method, the computing efficiency of panoramic skyline localization is increased fourfold. Furthermore, the camera orientation is estimated accurately from the terrain features in the image. Experimental results show that the proposed method achieves higher localization accuracy and requires less computation for a large area without the aid of external sensors.
APA, Harvard, Vancouver, ISO, and other styles
25

Zhang, Kunlin, Wei Huang, Xiaoyu Hou, Jihui Xu, Ruidan Su, and Huaiyu Xu. "A Fault Diagnosis and Visualization Method for High-Speed Train Based on Edge and Cloud Collaboration." Applied Sciences 11, no. 3 (January 29, 2021): 1251. http://dx.doi.org/10.3390/app11031251.

Full text
Abstract:
Safety is the most important aspect of railway transportation. To ensure the safety of high-speed trains, various train components are equipped with sensor devices for real-time monitoring. Sensor monitoring data can be used for fast intelligent diagnosis and accurate positioning of train faults. However, existing train fault diagnosis technology based on cloud computing has disadvantages of long processing times and high consumption of computing resources, which conflict with the real-time response requirements of fault diagnosis. Aiming at the problems of train fault diagnosis in the cloud environment, this paper proposes a train fault diagnosis model based on edge and cloud collaboration. The model first utilizes a SAES-DNN (stacked auto-encoders deep neural network) fault recognition method, which can integrate automatic feature extraction and type recognition and complete fault classification over deep hidden features in high-dimensional data, so as to quickly locate faults. Next, to adapt to the characteristics of edge computing, the model applies a SAES-DNN model trained in the cloud and deployed in the edge via the transfer learning strategy and carries out real-time fault diagnosis on the vehicle sensor monitoring data. Using a motor fault as an example, when compared with a similar intelligent learning model, the proposed intelligent fault diagnosis model can greatly improve diagnosis accuracy and significantly reduce training time. Through the transfer learning approach, adaptability of the fault diagnosis algorithm for personalized applications and real-time performance of the fault diagnosis is enhanced. This paper also proposes a visual analysis method of train fault data based on knowledge graphs, which can effectively analyze fault causes and fault correlation.
APA, Harvard, Vancouver, ISO, and other styles
26

Hu, Jianming, Xiyang Zhi, Wei Zhang, Longfei Ren, and Lorenzo Bruzzone. "Salient Ship Detection via Background Prior and Foreground Constraint in Remote Sensing Images." Remote Sensing 12, no. 20 (October 15, 2020): 3370. http://dx.doi.org/10.3390/rs12203370.

Full text
Abstract:
Automatic ship detection in complicated maritime background is a challenging task in the field of optical remote sensing image interpretation and analysis. In this paper, we propose a novel and reliable ship detection framework based on a visual saliency model, which can efficiently detect multiple targets of different scales in complex scenes with sea clutter, clouds, wake and islands interferences. Firstly, we present a reliable background prior extraction method adaptive for the random locations of targets by computing boundary probability and then generate a saliency map based on the background prior. Secondly, we compute the prior probability of salient foreground regions and propose a weighting function to constrain false foreground clutter, gaining the foreground-based prediction map. Thirdly, we integrate the two prediction maps and improve the details of the integrated map by a guided filter function and a wake adjustment function, obtaining the fine selection of candidate regions. Afterwards, a classification is further performed to reduce false alarms and produce the final ship detection results. Qualitative and quantitative evaluations on two public available datasets demonstrate the robustness and efficiency of the proposed method against four advanced baseline methods.
APA, Harvard, Vancouver, ISO, and other styles
27

Pyasetsky, V. B. "Estimating Perceived Brightness in Low Luminance Conditions." Herald of the Bauman Moscow State Technical University. Series Instrument Engineering, no. 1 (130) (February 2020): 33–49. http://dx.doi.org/10.18698/0236-3933-2020-1-33-49.

Full text
Abstract:
Mesopic photometry, which studies visual perception of low-level optical radiation, is of great interest today in lighting engineering. It involves investigating human responses to visual observations in low light conditions in the object space, determining the optimum artificial illumination levels in industrial areas, and solving clinical perimetry problems. The estimation procedure for mesopic photometry recommended by the International Commission on Illumination (CIE) is based on computing a combination of photopic (daylight) and scotopic (nighttime) visual perception levels. Because this procedure is iterative, it is inconvenient to apply in engineering practice: the number of iterative steps proves to be several dozen on average, exceeding a hundred in certain cases. As a result, the feasibility of using the CIE procedure instead of a purely photopic perception technique becomes questionable. The discrepancy in the results obtained via these methods informs the selection criterion. The paper compares computation results for perceived brightness in photopic and mesopic vision in low luminance conditions. We also establish whether it is possible to find analytical solutions using the CIE procedure. We show that, for radiation of a colour temperature in the range of 950–12,000 K, the maximum computational discrepancy between photopic and mesopic vision scenarios lies in the −200 % to 50 % range, while the minimum discrepancy is approximately 5 % for radiation characterised by a colour temperature of approximately 2000 K. We also present analytical solutions for several specific cases according to the CIE procedure.
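The iterative CIE-style blend of photopic and scotopic luminance discussed in this abstract can be sketched as a fixed-point loop. The constants a, b and the scotopic weighting at 555 nm below are the commonly cited MES2 values and should be treated as assumptions here, not as the paper's exact parameters:

```python
import math

def mesopic_luminance(L_p, L_s, m0=0.5, tol=1e-6, max_iter=200):
    """Iterate a CIE-style mesopic blend of photopic (L_p) and scotopic
    (L_s) luminance until the adaptation coefficient m converges.
    Constants are assumed MES2 values: a, b for the m update and the
    scotopic luminous efficiency at 555 nm for the blend."""
    a, b, vs555 = 0.7670, 0.3334, 0.4019  # assumed MES2 constants
    m = m0
    for _ in range(max_iter):
        # Blend the two luminances, weighted by the current m.
        L_mes = (m * L_p + (1 - m) * L_s * vs555) / (m + (1 - m) * vs555)
        # Update m from the mesopic luminance, clamped to [0, 1].
        m_new = min(1.0, max(0.0, a + b * math.log10(L_mes)))
        if abs(m_new - m) < tol:
            return L_mes, m_new
        m = m_new
    return L_mes, m
```

At high luminance the loop clamps m to 1 and the mesopic value collapses to the photopic one, which is the purely photopic limit the abstract compares against.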
APA, Harvard, Vancouver, ISO, and other styles
28

Rehman, Amjad, Tanzila Saba, Khalid Haseeb, Teg Alam, and Jaime Lloret. "Sustainability Model for the Internet of Health Things (IoHT) Using Reinforcement Learning with Mobile Edge Secured Services." Sustainability 14, no. 19 (September 26, 2022): 12185. http://dx.doi.org/10.3390/su141912185.

Full text
Abstract:
In wireless multimedia networks, the Internet of Things (IoT) and visual sensors are used to interpret and exchange vast data in the form of images. The digital images are subsequently delivered to cloud systems via a sink node, where they are interacted with by smart communication systems using physical devices. Visual sensors are becoming a more significant part of digital systems and can help us live in a more intelligent world. However, for IoT-based data analytics, optimizing communications overhead by balancing the usage of energy and bandwidth resources is a new research challenge. Furthermore, protecting the IoT network’s data from anonymous attackers is critical. As a result, utilizing machine learning, this study proposes a mobile edge computing model with a secured cloud (MEC-Seccloud) for a sustainable Internet of Health Things (IoHT), providing real-time quality of service (QoS) for big data analytics while maintaining the integrity of green technologies. We investigate a reinforcement learning optimization technique to enable sensor interaction by examining metaheuristic methods and optimally transferring health-related information with the interaction of mobile edges. Furthermore, two-phase encryptions are used to guarantee data concealment and to provide secured wireless connectivity with cloud networks. The proposed model has shown considerable performance for various network metrics compared with earlier studies.
APA, Harvard, Vancouver, ISO, and other styles
29

Yang, Jing, Yang Xiu, Yan Ting Shi, Long Tao Zhao, and Xiao Yan Tan. "Substation Equipment Temperature Monitoring System Based on Heterogeneous Wireless Sensor Network." Applied Mechanics and Materials 511-512 (February 2014): 165–68. http://dx.doi.org/10.4028/www.scientific.net/amm.511-512.165.

Full text
Abstract:
Since records show that an excessive temperature rise in equipment easily leads to accidents, a temperature monitoring system based on ZigBee, GPRS, and RS232 technology is designed. The perception layer uses a heterogeneous network topology, adding appropriate heterogeneous nodes to the ordinary ones; the heterogeneous nodes are much better than the ordinary nodes in computing power, storage capacity, etc., and this topology extends the transmission distance. Both node types use CC2530 chips, and the heterogeneous nodes are responsible not only for collecting temperature data but also for forwarding data from ordinary nodes. The sink nodes send alarm messages to staff via GPRS and forward data from heterogeneous nodes to a PC via RS232. The system software, built with Visual Studio, realizes data processing, display, and storage functions, achieving real-time monitoring of substation equipment and early warning.
APA, Harvard, Vancouver, ISO, and other styles
30

Wei, Hui, Luping Wang, Shanshan Wang, Yuxiang Jiang, and Jingmeng Li. "A Signal-Processing Neural Model Based on Biological Retina." Electronics 9, no. 1 (December 27, 2019): 35. http://dx.doi.org/10.3390/electronics9010035.

Full text
Abstract:
Image signal processing has considerable value in artificial intelligence. However, due to diverse disturbances (e.g., color, noise), image signal processing, especially the representation of the signal, remains a big challenge. In the human visual system, it has been established that simple cells in the primary visual cortex are sensitive to vision signals with partial orientation features. In other words, image signals are extracted and described along the pathway of visual processing. Inspired by this neural mechanism of the primary visual cortex, it is possible to build an image signal-processing model as a neural architecture. In this paper, we present a method to process image signals involving a multitude of disturbances. For image signals, we first extract 4 rivalry pathways via the projection of color. Secondly, we design an algorithm in which the computing process for stimuli with partial orientation features is recast as a problem of analytical geometry, so that signals with orientation features can be extracted and characterized. Finally, through the integration of characterizations from the 4 different rivalry pathways, the image signals can be effectively interpreted and reconstructed. Unlike data-driven methods, the presented approach requires no prior training. With the use of geometric inferences, the method lends itself to interpretation and application in a signal processor. The extraction and integration of rivalry pathways of different colors make the method effective and robust to image noise and color disturbances. Experimental results show that the approach can extract and describe image signals with diverse disturbances. Based on this characterization, it is possible to reconstruct signal features that effectively represent the important information from the original image signal.
APA, Harvard, Vancouver, ISO, and other styles
31

Wei, Hui, Qingsong Zuo, and XuDong Guan. "Main Retina Information Processing Pathways Modeling." International Journal of Cognitive Informatics and Natural Intelligence 5, no. 3 (July 2011): 30–46. http://dx.doi.org/10.4018/ijcini.2011070102.

Full text
Abstract:
In many fields, including digital image processing and artificial retina design, designers always confront a balance among real-time performance, accuracy, computing load, power consumption, and other factors. It is difficult to achieve an optimal balance among these conflicting requirements. However, the human retina balances them very well: it can efficiently and economically accomplish almost all visual tasks. This paper presents a bio-inspired model of the retina that simulates not only the various types of retinal cells but also the complex structure of the retina. The model covers the main information processing pathways of the retina, so it is much closer to the real retina. In this paper, the authors investigate various characteristics of the retina via large-scale statistical experiments and further analyze the relationship between the retina's structure and functions. The model can be used in bionic chip design, verification of physiological assumptions, image processing, and computer vision.
APA, Harvard, Vancouver, ISO, and other styles
32

Beer, Max, Niclas Eich, Martin Erdmann, Peter Fackeldey, Benjamin Fischer, Katharina Hafner, Dennis Daniel Nick Noll, et al. "Knowledge sharing on deep learning in physics research using VISPA." EPJ Web of Conferences 245 (2020): 05040. http://dx.doi.org/10.1051/epjconf/202024505040.

Full text
Abstract:
The VISPA (VISual Physics Analysis) project provides a streamlined work environment for physics analyses and hands-on teaching experiences with a focus on deep learning. VISPA has already been successfully used in HEP analyses and teaching and is now being further developed into an interactive deep learning platform. One specific example is meeting knowledge-sharing needs in deep learning by combining paper, code, and data in a central place. Additionally, the possibility to run it directly from the web browser is a key feature of this development. Any SSH-reachable resource can be accessed via the VISPA web interface. This enables a flexible and experiment-agnostic computing experience. The user interface is based on JupyterLab and is extended with analysis-specific tools, such as a parametric file browser and TensorBoard. Our VISPA instance is backed by extensive GPU resources and a rich software environment. We present the current status of the VISPA project and its upcoming new features.
APA, Harvard, Vancouver, ISO, and other styles
33

Kosowatz, John. "High-Tech Eyes." Mechanical Engineering 139, no. 03 (March 1, 2017): 36–41. http://dx.doi.org/10.1115/1.2017-mar-2.

Full text
Abstract:
This article provides an overview of high-tech sensors, visual detection software, and mobile computing power applications, which are being developed to enable visually impaired people to navigate. By adapting technology developed for robots, automobiles, and other products, researchers and developers are creating wearable devices that can aid the visually impaired as they navigate through their daily routines, even identifying people and places. The Eyeronman system, developed by NYU's Visuomotor Integration Laboratory and Tactile Navigation Tools, combines a sensor-laden outer garment or belt with a vest studded with vibrating actuators. The sensors detect objects in the immediate environment and relay their locations via buzzes on the wearer's torso. At OrCam, a computer vision company in Jerusalem, a team of programmers, computer engineers, and hardware designers has developed the MyEye device, which attaches to the temple of a pair of eyeglasses. The device instructs the user on how to store items in memory, including things such as credit cards and the faces of friends and family.
APA, Harvard, Vancouver, ISO, and other styles
34

Arvind, Siddapuram, Puja Sahay Prasad, R. Vijaya Saraswathi, Y. Vijayalata, Zarin Tasneem, R. N. Ashlin Deepa, and Y. Ramadevi. "Deep Learning Regression-Based Retinal Layer Segmentation Process for Early Diagnosis of Retinal Anamolies and Secure Data Transmission through ThingSpeak." Mobile Information Systems 2022 (May 31, 2022): 1–12. http://dx.doi.org/10.1155/2022/8960132.

Full text
Abstract:
Diabetic retinopathy (DR) is a progressive type of problem that affects diabetic people. In general, this condition is asymptomatic in its early stages. When the condition progresses, it can cause hazy and unclear vision of objects. As a result, it is necessary to develop a framework for early diagnosis in order to prevent visual morbidity. The suggested method entails acquiring fundus and OCT images of the retina. To acquire the lesions, techniques such as preprocessing, sophisticated Chan–Vese segmentation, and object clustering are used. Furthermore, regression-based neural network (RNN) categorization is used to achieve expected results that help foretell retinal diseases. The methodology is implemented using the MATLAB technical computing language, together with the necessary toolboxes and blocksets. The proposed system requires two steps. In the first stage, the detection of diabetic retinopathy via the proposed deep learning technique is carried out. The data collected from the MATLAB are transmitted to the approved PC via the IoT module known as ThingSpeak in the second stage. To validate the robustness of the proposed approach, comparisons with regard to plots of confusion matrices, mean square error (MSE) plots, and receiver operating characteristic (ROC) plots are performed.
APA, Harvard, Vancouver, ISO, and other styles
35

Levelt, Willem J. M., Peter Praamstra, Antje S. Meyer, Päivi Helenius, and Riitta Salmelin. "An MEG Study of Picture Naming." Journal of Cognitive Neuroscience 10, no. 5 (September 1998): 553–67. http://dx.doi.org/10.1162/089892998562960.

Full text
Abstract:
The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
APA, Harvard, Vancouver, ISO, and other styles
36

Aravena, Ricardo A., Mitchell B. Lyons, Adam Roff, and David A. Keith. "A Colourimetric Approach to Ecological Remote Sensing: Case Study for the Rainforests of South-Eastern Australia." Remote Sensing 13, no. 13 (June 29, 2021): 2544. http://dx.doi.org/10.3390/rs13132544.

Full text
Abstract:
To facilitate the simplification, visualisation and communicability of satellite imagery classifications, this study applied visual analytics to validate a colourimetric approach via the direct and scalable measurement of hue angle from enhanced false colour band ratio RGB composites. A holistic visual analysis of the landscape was formalised by creating and applying an ontological image interpretation key from an ecological-colourimetric deduction for rainforests within the variegated landscapes of south-eastern Australia. A workflow based on simple one-class, one-index density slicing was developed to implement this deductive approach to mapping using freely available Sentinel-2 imagery and the supercomputing power of Google Earth Engine for general public use. A comprehensive accuracy assessment based on existing field observations showed that the hue from a new false colour blend combining two band ratio RGBs provided the best overall results, producing a 15 m classification with an overall average accuracy of 79%. Additionally, a new index based on a band ratio subtraction performed better than any existing vegetation index typically used for tropical evergreen forests, with comparable results to the false colour blend. The results emphasise the importance of the SWIR1 band in discriminating rainforests from other vegetation types. While traditional vegetation indices focus on productivity, colourimetric measurement offers versatile multivariate indicators that can encapsulate properties such as greenness, wetness and brightness as physiognomic indicators. The results confirmed the potential for the large-scale, high-resolution mapping of broadly defined vegetation types.
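The core colourimetric step described above, measuring a hue angle and density-slicing a single class on it, can be sketched as follows. The RGB values and slice bounds are placeholders, not the paper's band-ratio composites or thresholds:

```python
import colorsys

def hue_angle(r, g, b):
    """Hue angle in degrees [0, 360) of an RGB triple scaled to [0, 1]."""
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0

def density_slice(pixels, hue_min, hue_max):
    """One-class, one-index classification: keep pixels whose hue angle
    falls inside the slice [hue_min, hue_max] in degrees."""
    return [p for p in pixels if hue_min <= hue_angle(*p) <= hue_max]

# Pure green sits at 120 degrees; red at 0; blue at 240.
pixels = [(1.0, 0.0, 0.0), (0.1, 0.8, 0.2), (0.0, 0.0, 1.0)]
greens = density_slice(pixels, 90, 150)  # keeps only the greenish pixel
```

In the study itself the inputs are false colour band ratio composites of Sentinel-2 bands rather than natural RGB, but the hue measurement and slicing logic are of this shape.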
APA, Harvard, Vancouver, ISO, and other styles
37

Ancau, Dorina-Marcela, Mircea Ancau, and Mihai Ancau. "Deep-learning online EEG decoding brain-computer interface using error-related potentials recorded with a consumer-grade headset." Biomedical Physics & Engineering Express 8, no. 2 (January 28, 2022): 025006. http://dx.doi.org/10.1088/2057-1976/ac4c28.

Full text
Abstract:
Objective. Brain-computer interfaces (BCIs) allow subjects with sensorimotor disability to interact with the environment. Non-invasive BCIs relying on EEG signals such as event-related potentials (ERPs) have been established as a reliable compromise between spatio-temporal resolution and patient impact, but limitations due to portability and versatility preclude their broad application. Here we describe a deep-learning augmented error-related potential (ErrP) discriminating BCI using a consumer-grade portable headset EEG, the Emotiv EPOC+. Approach. We recorded and discriminated ErrPs offline and online from 14 subjects during a visual feedback task. Main results. We achieved online discrimination accuracies of up to 81%, comparable to those obtained with professional 32/64-channel EEG devices, via deep learning using either a generative-adversarial network or an intrinsic-mode function augmentation of the training data and minimalistic computing resources. Significance. Our BCI model has the potential of expanding the spectrum of BCIs to more portable, artificial-intelligence-enhanced, efficient interfaces, accelerating the routine deployment of these devices outside the controlled environment of a scientific laboratory.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Yingying, Junyu Gao, Xiaoshan Yang, Chang Liu, Yan Li, and Changsheng Xu. "Find Objects and Focus on Highlights: Mining Object Semantics for Video Highlight Detection via Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12902–9. http://dx.doi.org/10.1609/aaai.v34i07.6988.

Full text
Abstract:
With the increasing prevalence of portable computing devices, browsing unedited videos is time-consuming and tedious. Video highlight detection, which discovers the moments of a user's major or special interest in a video, has the potential to significantly ease this situation. Existing methods suffer from two problems. Firstly, most existing approaches only focus on learning holistic visual representations of videos but ignore object semantics for inferring video highlights. Secondly, current state-of-the-art approaches often adopt a pairwise ranking-based strategy, which cannot exploit global information to infer highlights. Therefore, we propose a novel video highlight framework, named VH-GNN, to construct an object-aware graph and model the relationships between objects from a global view. To reduce computational cost, we decompose the whole graph into two types of graphs: a spatial graph to capture the complex interactions of objects within each frame, and a temporal graph to obtain an object-aware representation of each frame and capture global information. In addition, we optimize the framework via a proposed multi-stage loss, where the first stage aims to determine the highlight probability and the second stage leverages the relationships between frames and focuses on hard examples from the former stage. Extensive experiments on two standard datasets strongly evidence that VH-GNN obtains significant performance gains compared with the state of the art.
APA, Harvard, Vancouver, ISO, and other styles
39

Chi, Ed H., Lichan Hong, Julie Heiser, Stuart K. Card, and Michelle Gumbrecht. "ScentIndex and ScentHighlights: Productive Reading Techniques for Conceptually Reorganizing Subject Indexes and Highlighting Passages." Information Visualization 6, no. 1 (January 11, 2007): 32–47. http://dx.doi.org/10.1057/palgrave.ivs.9500140.

Full text
Abstract:
A great deal of analytical work has been carried out in the context of reading, in digesting the semantics of the material, the identification of important entities, and capturing the relationship between entities. Visual analytic environments, therefore, must encompass reading tools that enable the rapid digestion of large amounts of reading material. Other than plain text search, subject indexes, and basic highlighting, tools are needed for rapid foraging of the text. In this paper, we describe a technique that presents an enhanced subject index for a book by conceptually reorganizing it to suit particular expressed user information needs. Users first enter information needs via keywords, describing the concepts they are trying to retrieve and comprehend. Then our system, called ScentIndex, computes what index entries are conceptually related, and reorganizes and displays these index entries on a single page. We provide a number of navigational cues to help users peruse over this list of index entries and find relevant passages quickly. We report some initial results in a new technique called ScentHighlights that enhances skimming activity by conceptually highlighting sentences. Both use similar techniques by computing what conceptual keywords are related to each other via word co-occurrence and spreading activation. Compared to regular reading of a paper book, our study showed that users are more efficient and more accurate in finding, comparing, and comprehending material in our system.
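The co-occurrence plus spreading-activation scheme this abstract describes can be sketched minimally. The single spreading step, decay factor, and highlight threshold below are simplifying assumptions; the actual ScentIndex/ScentHighlights systems use richer lexical statistics:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence(sentences):
    """Symmetric word co-occurrence counts within each sentence."""
    co = defaultdict(float)
    for s in sentences:
        words = set(s.lower().split())
        for a, b in combinations(sorted(words), 2):
            co[(a, b)] += 1.0
            co[(b, a)] += 1.0
    return co

def spread(query, co, decay=0.5):
    """One step of spreading activation from the query keywords."""
    act = {w: 1.0 for w in query}
    for (a, b), n in co.items():
        if a in query:
            act[b] = act.get(b, 0.0) + decay * n
    return act

def highlight(sentences, act, threshold=1.0):
    """Conceptually highlight sentences whose summed word activation
    clears the threshold, a la ScentHighlights."""
    return [s for s in sentences
            if sum(act.get(w, 0.0) for w in s.lower().split()) >= threshold]
```

Usage: build `co` from the document once, call `spread` with the user's keywords, and pass the resulting activations to `highlight` to select passages for skimming.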
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Shang, Guixuan Zhang, Zhengxiong Luo, and Jie Liu. "DFAN: Dual Feature Aggregation Network for Lightweight Image Super-Resolution." Wireless Communications and Mobile Computing 2022 (January 24, 2022): 1–12. http://dx.doi.org/10.1155/2022/8116846.

Full text
Abstract:
With the power of deep learning, super-resolution (SR) methods enjoy a dramatic boost in performance. However, they usually have a large model size and high computational complexity, which hinders the application in devices with limited memory and computing power. Some lightweight SR methods solve this issue by directly designing shallower architectures, but it will adversely affect the representation capability of convolutional neural networks. To address this issue, we propose the dual feature aggregation strategy for image SR. It enhances feature utilization via feature reuse, which largely improves the representation ability while only introducing marginal computational cost. Thus, a smaller model could achieve better cost-effectiveness with the dual feature aggregation strategy. Specifically, it consists of Local Aggregation Module (LAM) and Global Aggregation Module (GAM). LAM and GAM work together to further fuse hierarchical features adaptively along the channel and spatial dimensions. In addition, we propose a compact basic building block to compress the model size and extract hierarchical features in a more efficient way. Extensive experiments suggest that the proposed network performs favorably against state-of-the-art SR methods in terms of visual quality, memory footprint, and computational complexity.
41

Plaisted Grant, Kate, and Greg Davis. "Perception and apperception in autism: rejecting the inverse assumption." Philosophical Transactions of the Royal Society B: Biological Sciences 364, no. 1522 (May 27, 2009): 1393–98. http://dx.doi.org/10.1098/rstb.2009.0001.

Full text
Abstract:
In addition to those with savant skills, many individuals with autism spectrum conditions (ASCs) show superior perceptual and attentional skills relative to the general population. These superior skills and savant abilities raise important theoretical questions, including whether they develop as compensations for other underdeveloped cognitive mechanisms, and whether one skill is inversely related to another weakness via a common underlying neurocognitive mechanism. We discuss studies of perception and visual processing that show that this inverse hypothesis rarely holds true. Instead, they suggest that enhanced performance is not always accompanied by a complementary deficit and that there are undeniable difficulties in some aspects of perception that are not related to compensating strengths. Our discussion emphasizes the qualitative differences in perceptual processing revealed in these studies between individuals with and without ASCs. We argue that this research is important not only in furthering our understanding of the nature of the qualitative differences in perceptual processing in ASCs, but can also be used to highlight to society at large the exceptional skills and talent that individuals with ASCs are able to contribute in domains such as engineering, computing and mathematics that are highly valued in industry.
42

Naeem, Muhammad Rashid, Mansoor Khan, Ako Muhammad Abdullah, Fazal Noor, Muhammad Ijaz Khan, Muhammad Asghar Khan, Insaf Ullah, and Shah Room. "A Malware Detection Scheme via Smart Memory Forensics for Windows Devices." Mobile Information Systems 2022 (October 3, 2022): 1–16. http://dx.doi.org/10.1155/2022/9156514.

Full text
Abstract:
With the introduction of 4G/5G Internet and the increase in the number of users, malicious cyberattacks on computing devices have increased, making them vulnerable to external threats. High-availability Windows servers are designed to ensure the delivery of consistent services, such as business activities and e-services, to their customers without interruption. At the same time, a cyberattack on any of the clustered computers can put servers and customer devices in danger. A memory dump mechanism can capture the contents of memory in the event of a system or device crash, such as corrupted files, damaged hardware, or irregular CPU power consumption. In this paper, we present a smart memory forensics scheme to recognize malicious attacks on high-availability servers by capturing the memory dump of suspicious processes in the form of RGB visual images. Then, the local and global properties of malware images are captured using local binary patterns (LBP) and gray-level co-occurrence matrices (GLCM). A state-of-the-art t-distributed stochastic neighbor embedding scheme (t-SNE) is applied to reduce data dimensionality and improve the detection time of unknown malware and its variants. An optimized CNN model is designed to predict malicious files harming servers or user devices. Throughout this study, we employed a public data set of 4294 malicious samples covering malware variants and benign executables. A baseline is prepared to compare the performance of the proposed model with state-of-the-art malware detection methods. The combined LBP + GLCM feature extraction along with the t-SNE dimensionality reduction scheme further improved the detection accuracy by 98%, while the detection time also improves by a factor of 73. The overall performance shows that memory forensics is more effective for malware detection in terms of accuracy and response time.
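The two texture descriptors named in the abstract are standard; a minimal numpy sketch of both (an 8-neighbour LBP code map and a single-offset normalized GLCM, with an assumed quantization to 8 grey levels) looks like this:

```python
import numpy as np

def lbp(img):
    """Basic 8-neighbour local binary pattern code map for a grayscale image."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def glcm(img, levels=8):
    """Normalized grey-level co-occurrence matrix for the horizontal (0, 1) offset."""
    q = np.clip((img.astype(float) * levels / 256).astype(int), 0, levels - 1)
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1], q[:, 1:]), 1)
    return m / m.sum()

img = (np.arange(64).reshape(8, 8) * 4 % 256).astype(np.uint8)
codes = lbp(img)
mat = glcm(img)
```

In the paper's pipeline, histograms of such LBP codes and statistics of the GLCM would be concatenated into the feature vector fed to t-SNE and the CNN.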
43

Signell, Richard P., and Dharhas Pothina. "Analysis and Visualization of Coastal Ocean Model Data in the Cloud." Journal of Marine Science and Engineering 7, no. 4 (April 19, 2019): 110. http://dx.doi.org/10.3390/jmse7040110.

Full text
Abstract:
The traditional flow of coastal ocean model data is from High-Performance Computing (HPC) centers to the local desktop, or to a file server where just the needed data can be extracted via services such as OPeNDAP. Analysis and visualization are then conducted using local hardware and software. This requires moving large amounts of data across the internet as well as acquiring and maintaining local hardware, software, and support personnel. Further, as data sets increase in size, the traditional workflow may not be scalable. Alternatively, recent advances make it possible to move data from HPC to the Cloud and perform interactive, scalable, data-proximate analysis and visualization, with simply a web browser user interface. We use the framework advanced by the NSF-funded Pangeo project, a free, open-source Python system which provides multi-user login via JupyterHub and parallel analysis via Dask, both running in Docker containers orchestrated by Kubernetes. Data are stored in the Zarr format, a Cloud-friendly n-dimensional array format that allows performant extraction of data by anyone without relying on data services like OPeNDAP. Interactive visual exploration of data on complex, large model grids is made possible by new tools in the Python PyViz ecosystem, which can render maps at screen resolution, dynamically updating on pan and zoom operations. Two examples are given: (1) Calculating the maximum water level at each grid cell from a 53-GB, 720-time-step, 9-million-node triangular mesh ADCIRC simulation of Hurricane Ike; (2) Creating a dashboard for visualizing data from a curvilinear orthogonal COAWST/ROMS forecast model.
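Example (1) is, at heart, a chunked per-node maximum over time. A minimal numpy sketch of that reduction pattern follows; Dask performs the same tree reduction over Zarr chunks, and the chunk sizes here are made up:

```python
import numpy as np

def running_max(chunks):
    """Streaming per-node maximum over time chunks (the reduction Dask parallelizes)."""
    result = None
    for chunk in chunks:                      # each chunk: (time, nodes)
        cmax = chunk.max(axis=0)
        result = cmax if result is None else np.maximum(result, cmax)
    return result

# three chunks of 4 time steps over 5 mesh nodes
rng = np.random.default_rng(0)
chunks = [rng.random((4, 5)) for _ in range(3)]
max_wl = running_max(chunks)
```

Because each chunk is reduced independently before the final combine, the computation scales out across workers without ever holding the full 53-GB array in memory.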
44

Zarei, Mohammad, Seyyed Ashkezari, and Mehrdad Yari. "The investigation of the function of the central courtyard in moderating the harsh environmental conditions of a hot and dry climate (Case study: City of Yazd, Iran)." Spatium, no. 38 (2017): 1–9. http://dx.doi.org/10.2298/spat1738001z.

Full text
Abstract:
As one of the arid areas of Iran, Yazd is constantly exposed to extreme winds carrying dust and shifting sands. Therefore, the architectural principles of the city's residential architecture need to be adapted to such environmental conditions in order to minimize the influence of severe winds on the interior spaces. This study investigates the influence of storms on the interior space of central courtyards in Yazd constructed during the Muzaffarid, Safavid and Qajar periods, using CFD simulation. Three-dimensional models were prepared in Gambit software and studied in Fluent software. The wind speed entering the computational domain was 26.4 m/s, and the Dutch wind nuisance standard NEN 8100 was applied as the comfort criterion. The results showed a relationship between the extent of the central courtyard and the impact of severe storms on it, since an increase in the area of the courtyard provides enough space for the wind to flow and move around it. This effect is most pronounced when the length-to-height ratio increases, as the wind brings shifting sands into large courtyards; the architects therefore tried to provide better conditions by creating microclimates.
45

Cessac, Bruno. "Retinal Processing: Insights from Mathematical Modelling." Journal of Imaging 8, no. 1 (January 17, 2022): 14. http://dx.doi.org/10.3390/jimaging8010014.

Full text
Abstract:
The retina is the entrance of the visual system. Although based on common biophysical principles, the dynamics of retinal neurons are quite different from their cortical counterparts, raising interesting problems for modellers. In this paper, I address some mathematically stated questions in this spirit, discussing, in particular: (1) How could lateral amacrine cell connectivity shape the spatio-temporal spike response of retinal ganglion cells? (2) How could spatio-temporal stimuli correlations and retinal network dynamics shape the spike train correlations at the output of the retina? These questions are addressed by first introducing a mathematically tractable model of the layered retina, integrating amacrine cells' lateral connectivity and piecewise linear rectification, which allows computing the retinal ganglion cells' receptive fields together with the voltage and spike correlations of retinal ganglion cells resulting from the amacrine cell network. Then, I review some recent results showing how the concepts of spatio-temporal Gibbs distributions and linear response theory can be used to characterize the collective spike response of a set of retinal ganglion cells, coupled via effective interactions corresponding to the amacrine cell network, to a spatio-temporal stimulus. On these bases, I briefly discuss several potential consequences of these results at the cortical level.
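As a toy illustration of the piecewise linear rectification such models build on, one can integrate a small rate network whose units interact through rectified activities; the two-unit connectivity and stimulus below are arbitrary choices, not values from the paper:

```python
import numpy as np

def simulate(W, stim, dt=0.1, steps=200):
    """Euler integration of rate units coupled through piecewise linear rectification."""
    relu = lambda v: np.maximum(v, 0.0)       # the piecewise linear rectifier
    V = np.zeros(W.shape[0])
    for _ in range(steps):
        V = V + dt * (-V + W @ relu(V) + stim)
    return V

# two mutually inhibiting units driven by a constant stimulus
W = np.array([[0.0, -0.5],
              [-0.5, 0.0]])
V = simulate(W, stim=np.array([1.0, 0.2]))
```

At the fixed point the second unit sits below threshold, so its rectified output is zero and it stops influencing the first unit: the rectification makes the effective connectivity stimulus-dependent, which is what makes such models tractable piecewise.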
46

Prasetyo, Chrisna Joshua Sergio, I. Putu Gede Hendra Suputra, Luh Arida Ayu Rahning Putri, I. Made Widiartha, I. Ketut Gede Suhartana, and Anak Agung Istri Ngurah Eka Karyawati. "Video Steganography Encryption on Cloud Storage for Securing Digital Image." JELIKU (Jurnal Elektronik Ilmu Komputer Udayana) 11, no. 1 (July 8, 2022): 45. http://dx.doi.org/10.24843/jlk.2022.v11.i01.p05.

Full text
Abstract:
Cloud storage is a data storage service in cloud computing that allows stored data to be shared and accessed via the internet. Cloud storage is usually used to store personal data such as files, photos, or videos so that these data can be accessed anywhere via the internet without the need for physical storage media. However, cases of data leaks in cloud storage still occur, allowing personal data stored in cloud storage to be accessed by other people who do not have access rights. The Client-Side Steganography Encryption on Cloud Storage Application was developed using the Modified Least Significant Bit (LSB) method and the Advanced Encryption Standard (AES) algorithm. This desktop-based application was developed to protect personal digital images embedded in a video so that unauthorized parties cannot view the data. The application is expected to be a data security solution for cloud storage that prevents theft of personal data by unauthorized parties. In testing, the developed application received input, processed it, and produced the desired output. The image recovered by the extraction process from the video is also visually unchanged. The test yielded PSNR values averaging 36.395 dB. A good PSNR value is above 30 dB, which indicates that the quality of the extracted image is good and that the developed application can protect digital images embedded in video.
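Plain (unmodified) LSB embedding, the baseline behind the paper's modified variant, can be sketched as follows; the carrier frame and payload are synthetic:

```python
import numpy as np

def embed_lsb(carrier, payload_bits):
    """Hide a bit stream in the least significant bits of the carrier samples."""
    out = carrier.copy()
    flat = out.reshape(-1)
    n = len(payload_bits)
    flat[:n] = (flat[:n] & 0xFE) | payload_bits
    return out

def extract_lsb(carrier, n_bits):
    """Read the hidden bit stream back out."""
    return carrier.reshape(-1)[:n_bits] & 1

frame = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(frame, bits)
```

Since each carrier sample changes by at most 1, the distortion stays below the visibility threshold, which is why PSNR values above 30 dB are achievable; in the paper's pipeline the payload would additionally be AES-encrypted before embedding.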
47

Ebejer, Oriana. "Is Block-Based Programming an Effective Teaching Tool?" MCAST Journal of Applied Research & Practice 5, no. 1 (July 5, 2021): 40–65. http://dx.doi.org/10.5604/01.3001.0015.0182.

Full text
Abstract:
Block-based Programming (BBP) is used as a teaching tool and has gained popularity with young people who start to learn computer programming. BBP hides the programming complexity and allows the user to write programs via a simple, visual interface. In this study, MCAST students' opinions about BBP are compared to more traditional programming methods, including whether they think introducing programming through BBP at Level 2 is the right approach. This research is important for keeping MCAST's computing curricula relevant to students' needs. The research participants are students who joined MCAST from Level 2 and have now completed Level 3 or a higher qualification. Twenty independent variables related to BBP have been identified from secondary data. These factors include: BBP is fun and media rich, it simplifies complex programming concepts, it has poor debugging tools, and it targets a younger audience. These factors were presented to the research participants via an online survey, which they answered using a five-point Likert scale (from Strongly Disagree to Strongly Agree). The participants were also asked some questions specifically on BBP to measure how much they value BBP as a teaching tool at their level. Out of 73 students who were contacted, 32 responded to the survey. A test question was planted in the survey to check for consistency; one response did not pass the validity test and was removed from the dataset. The data collected from the 31 surveys was analysed using SPSS. A dimensionality reduction technique (Principal Component Analysis) was then applied to the dataset to reduce the number of independent variables. The rotation matrix suggested that the variables could be reduced from 20 to five. Almost all the participants approve of BBP as the ideal approach for Level 2 students, but they also suggest that a text-based programming language should be introduced as well.
Based on this research, it is suggested to keep using BBP as a pedagogical tool but to change the specific tool currently used in class, Scratch, to one which offers a translation of the visual steps into text-based code, or alternatively to complement the BBP tool with a text-based one.
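The dimensionality-reduction step reported above (PCA from 20 Likert items to five components) can be sketched with a plain numpy SVD; the survey matrix here is synthetic stand-in data, not the study's responses:

```python
import numpy as np

def pca_reduce(X, k):
    """Project standardized responses onto the top-k principal components."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = s**2 / (s**2).sum()           # variance ratio per component
    return Z @ Vt[:k].T, explained[:k]

# 31 respondents x 20 Likert items (1-5): synthetic stand-in for the survey data
rng = np.random.default_rng(7)
X = rng.integers(1, 6, (31, 20)).astype(float)
scores, var = pca_reduce(X, 5)
```

The study additionally applied a rotation to make components interpretable; the sketch stops at the unrotated projection.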
48

Ji, Xiaofei, Ce Wang, and Yibo Li. "A View-Invariant Action Recognition Based on Multi-View Space Hidden Markov Models." International Journal of Humanoid Robotics 11, no. 01 (March 2014): 1450011. http://dx.doi.org/10.1142/s021984361450011x.

Full text
Abstract:
Vision-based action recognition is already widely used in human–machine interfaces. However, recognizing human actions from different viewpoints remains a challenging research problem. To solve this issue, a novel multi-view space hidden Markov models (HMMs) algorithm for view-invariant action recognition is proposed. First, a view-insensitive feature representation combining the bag-of-words of interest points with the amplitude histogram of optical flow is used to describe human action sequences. The combined features not only make it possible to link the traditional bag-of-words-of-interest-points method with HMMs, but also greatly reduce redundancy in the video. Second, the view space is partitioned into multiple sub-view spaces according to the camera rotation viewpoint, and human action models are trained with the HMM algorithm in each sub-view space. By computing the probabilities of the test sequence (i.e., the observation sequence) under the given multi-view space HMMs, the similarity between each sub-view space and the test sequence viewpoint is analyzed during recognition. Finally, the action with an unknown viewpoint is recognized via a probability-weighted combination. Experimental results on the multi-view action dataset IXMAS demonstrate that the proposed approach is highly efficient and effective for view-invariant action recognition.
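Recognition by scoring an observation sequence against one HMM per sub-view space can be sketched with the scaled forward algorithm; the two toy models and the discrete observation sequence below are illustrative, not trained IXMAS models:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs) under a discrete HMM (pi, A, B)."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp

# one HMM per sub-view space; recognition picks the best-scoring model
models = {
    "view_0": (np.array([0.5, 0.5]),
               np.array([[0.9, 0.1], [0.1, 0.9]]),   # sticky transitions
               np.array([[0.8, 0.2], [0.2, 0.8]])),  # informative emissions
    "uniform": (np.array([0.5, 0.5]),
                np.array([[0.5, 0.5], [0.5, 0.5]]),
                np.array([[0.5, 0.5], [0.5, 0.5]])),
}
obs = np.zeros(6, dtype=int)
best = max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

In the paper's scheme the per-model log-likelihoods would be combined with probability weights rather than a hard argmax, but the scoring step is the same.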
49

Wei, Longsheng, Wei Liu, Xinmei Wang, Feng Liu, and Dapeng Luo. "Objective Image Quality Assessment Based on Saliency Map." Journal of Advanced Computational Intelligence and Intelligent Informatics 20, no. 2 (March 18, 2016): 205–11. http://dx.doi.org/10.20965/jaciii.2016.p0205.

Full text
Abstract:
The development of objective image quality assessment metrics aligned with human perception is of fundamental importance to numerous image processing applications. In this paper, an objective image quality assessment approach based on a saliency map is proposed. Using a local shift estimation method, the retargeted image is resized to the same size as the reference image. A gradient magnitude similarity map is computed by comparing the retargeted and reference images: the more similar the two images are at a location, the brighter the corresponding pixel in the map. At the same time, a saliency map of the reference image is obtained via visual attention. Finally, an overall image quality score is computed from the gradient magnitude similarity map via a saliency pooling strategy. The most important step in our approach is generating a gradient magnitude similarity map that indicates, at each spatial location in the source image, how well the structural information is preserved in the retargeted image. There are two key contributions in this paper: one is that we add the texture feature when computing the saliency map, because the image gradient is very sensitive to texture information; the other is that we propose a new objective image quality metric by introducing the saliency map into image quality evaluation. Experimental results indicate that the evaluation indexes of our approach are better than those of existing methods in the literature.
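A common form of the gradient magnitude similarity map (used in GMSD-style metrics) combined with saliency-weighted pooling can be sketched as follows; the constant c and the toy images are assumptions, not the paper's exact parameters:

```python
import numpy as np

def gradient_magnitude(img):
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx**2 + gy**2)

def quality_score(ref, ret, saliency, c=0.0026):
    """Saliency-weighted pooling of a gradient magnitude similarity map."""
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(ret)
    gms = (2 * g1 * g2 + c) / (g1**2 + g2**2 + c)   # 1.0 where gradients agree
    return (gms * saliency).sum() / saliency.sum()

ref = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))    # smooth horizontal ramp
noise = 0.3 * np.random.default_rng(3).random(ref.shape)
score_same = quality_score(ref, ref, np.ones_like(ref))
score_noisy = quality_score(ref, ref + noise, np.ones_like(ref))
```

With a non-uniform saliency map, distortions in salient regions pull the score down more than those in the background, which is the pooling idea the abstract describes.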
50

Busaeed, Sahar, Iyad Katib, Aiiad Albeshri, Juan M. Corchado, Tan Yigitcanlar, and Rashid Mehmood. "LidSonic V2.0: A LiDAR and Deep-Learning-Based Green Assistive Edge Device to Enhance Mobility for the Visually Impaired." Sensors 22, no. 19 (September 30, 2022): 7435. http://dx.doi.org/10.3390/s22197435.

Full text
Abstract:
Over a billion people around the world are disabled, among whom 253 million are visually impaired or blind, and this number is greatly increasing due to ageing, chronic diseases, and poor environments and health. Despite many proposals, the current devices and systems lack maturity and do not completely fulfill user requirements and satisfaction. Increased research activity in this field is required in order to encourage the development, commercialization, and widespread acceptance of low-cost and affordable assistive technologies for visual impairment and other disabilities. This paper proposes a novel approach using a LiDAR with a servo motor and an ultrasonic sensor to collect data and predict objects using deep learning for environment perception and navigation. We adopted this approach using a pair of smart glasses, called LidSonic V2.0, to enable the identification of obstacles for the visually impaired. The LidSonic system consists of an Arduino Uno edge computing device integrated into the smart glasses and a smartphone app that transmits data via Bluetooth. Arduino gathers data, operates the sensors on the smart glasses, detects obstacles using simple data processing, and provides buzzer feedback to visually impaired users. The smartphone application collects data from Arduino, detects and classifies items in the spatial environment, and gives spoken feedback to the user on the detected objects. In comparison to image-processing-based glasses, LidSonic uses far less processing time and energy to classify obstacles using simple LiDAR data, according to several integer measurements. We comprehensively describe the proposed system’s hardware and software design, having constructed their prototype implementations and tested them in real-world environments. Using the open platforms, WEKA and TensorFlow, the entire LidSonic system is built with affordable off-the-shelf sensors and a microcontroller board costing less than USD 80. 
Essentially, we provide designs of an inexpensive, miniature green device that can be built into, or mounted on, any pair of glasses or even a wheelchair to help the visually impaired. Our approach enables faster inference and decision-making using relatively low energy with smaller data sizes, as well as faster communications for edge, fog, and cloud computing.