Journal articles on the topic 'Vision – Computer simulation'




Consult the top 50 journal articles for your research on the topic 'Vision – Computer simulation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Cheng, Wen-Huang, Sijie Song, Chieh-Yun Chen, Shintami Chusnul Hidayati, and Jiaying Liu. "Fashion Meets Computer Vision." ACM Computing Surveys 54, no. 4 (July 2021): 1–41. http://dx.doi.org/10.1145/3447239.

Full text
Abstract:
Fashion is the way we present ourselves to the world and has become one of the world’s largest industries. Fashion, mainly conveyed by vision, has thus attracted much attention from computer vision researchers in recent years. Given the rapid development, this article provides a comprehensive survey of more than 200 major fashion-related works covering four main aspects for enabling intelligent fashion: (1) Fashion detection includes landmark detection, fashion parsing, and item retrieval; (2) Fashion analysis contains attribute recognition, style learning, and popularity prediction; (3) Fashion synthesis involves style transfer, pose transformation, and physical simulation; and (4) Fashion recommendation comprises fashion compatibility, outfit matching, and hairstyle suggestion. For each task, the benchmark datasets and the evaluation protocols are summarized. Furthermore, we highlight promising directions for future research.
APA, Harvard, Vancouver, ISO, and other styles
2

Gao, Fa Zhao. "The Simulation of the Psychological Impact of Computer Vision De-Noising Technology." Applied Mechanics and Materials 556-562 (May 2014): 5013–16. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.5013.

Full text
Abstract:
The paper mainly discusses an analysis method for the psychological impact of computer vision de-noising technology. People's psychological acceptance and corresponding memory capacity for computer vision images containing heavy noise are relatively poor, while de-noising such images improves their clarity and thus generates a positive psychological impact. The paper therefore proposes a de-noising method for computer vision based on a spatial-domain filtering algorithm. It establishes a wavelet packet decomposition tree for computer vision images and de-noises them in accordance with the decomposition results. The experimental results show that the proposed de-noising method has a positive psychological influence and improves the memory capacity of computer vision images.
APA, Harvard, Vancouver, ISO, and other styles
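As a rough illustration of the wavelet-thresholding idea this abstract describes (the paper builds a full wavelet packet decomposition tree; here we show only one level of a Haar transform with soft thresholding, and all function names and the threshold value are our own):

```python
import math

def haar_forward(signal):
    # One level of the Haar wavelet transform: pairwise averages and details.
    avg = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    return avg, det

def haar_inverse(avg, det):
    # Reconstruct the signal from averages and (possibly thresholded) details.
    out = []
    for a, d in zip(avg, det):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

def soft_threshold(coeffs, t):
    # Shrink detail coefficients toward zero; small (noise-like) details vanish.
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t=0.5):
    avg, det = haar_forward(signal)
    return haar_inverse(avg, soft_threshold(det, t))
```

Small high-frequency fluctuations are absorbed into the threshold, so a nearly flat noisy signal comes back smoothed.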
3

Wang, Jun, and Xiao Hua Ni. "Angle Measurement Based on Computer Vision." Applied Mechanics and Materials 456 (October 2013): 115–19. http://dx.doi.org/10.4028/www.scientific.net/amm.456.115.

Full text
Abstract:
In order to improve the precision and speed of angle measurement, this paper presents a new method for measuring the angle of a workpiece based on computer vision testing technology. The workpiece image is acquired and first preprocessed; the image is then processed by edge detection with the Canny algorithm so that the specific features of the workpiece edges are fully extracted; line detection is then accomplished using the Hough transform; finally, the angle value is obtained by angle calculation. Practical engineering examples and simulation experiments show that the method has stronger anti-interference ability and higher accuracy and speed than traditional methods.
APA, Harvard, Vancouver, ISO, and other styles
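The Canny-plus-Hough pipeline summarized above ultimately reduces to computing the angle between two detected edge lines. A minimal sketch of that final step, assuming the Hough transform has already returned two line segments (the segment endpoints and function names here are illustrative, not from the paper):

```python
import math

def line_angle(p1, p2):
    # Orientation of the segment p1->p2 in degrees, folded into [0, 180).
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0])) % 180.0

def workpiece_angle(seg_a, seg_b):
    # Acute/right angle between two edge lines detected by a Hough transform.
    diff = abs(line_angle(*seg_a) - line_angle(*seg_b))
    return min(diff, 180.0 - diff)
```

For example, a horizontal edge and a 45-degree edge yield a workpiece angle of 45 degrees.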
4

Rauhut, Markus. "Paradigmenwechsel durch Simulation." wt Werkstattstechnik online 110, no. 01-02 (2020): 70–72. http://dx.doi.org/10.37544/1436-4980-2020-01-02-72.

Full text
Abstract:
Using computer vision, computer graphics, machine learning, and robotics, a virtual framework is being designed at the Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM that supports the iterative design of an inspection system and thus avoids taking a fixed image-acquisition setup as the starting point.
APA, Harvard, Vancouver, ISO, and other styles
5

ter Haar Romeny, Bart M. "Computer vision and Mathematica." Computing and Visualization in Science 5, no. 1 (July 2002): 53–65. http://dx.doi.org/10.1007/s00791-002-0087-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Masroeri, Agoes Achmad, Juniarko Prananda, and Muhammad Bahru Sholahuddin. "Motion Detection Simulation of Container Crane Spreader Using Computer Vision." International Review of Mechanical Engineering (IREME) 13, no. 8 (August 31, 2019): 438. http://dx.doi.org/10.15866/ireme.v13i8.16117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chavan, Y. V., D. K. Mishra, D. S. Bormane, and A. D. Shaligram. "Simulation of improved CMOS digital pixel sensor for computer vision." Journal of Optics 46, no. 1 (June 29, 2016): 1–7. http://dx.doi.org/10.1007/s12596-016-0350-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hu, Rui Min, Zhen Dong He, and Feng Bai. "The Research of 3D Human Motion Simulation and Video Analysis System Implemented in Sports Training." Advanced Materials Research 926-930 (May 2014): 2743–46. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.2743.

Full text
Abstract:
Video-based human motion tracking uses ordinary cameras to track unmarked human movement. It has important application value in automatic monitoring, human-computer interaction, sports analysis, and many other fields, and has been a hot research direction in computer vision in recent years. Because of the complexity of the problem and the limited understanding of the nature of human vision, video-based tracking has always been a difficult problem in computer vision. The research content of this article is set in sports training: to meet the requirements of non-contact, non-interfering measurement and simulation for motion analysis, it uses computer graphics and computer vision technology to discuss 3D human motion simulation based on video analysis.
APA, Harvard, Vancouver, ISO, and other styles
9

Xu, Fang. "Analysis and Simulation of Dynamic Vision in the City." Enquiry A Journal for Architectural Research 16, no. 2 (November 24, 2019): 64–89. http://dx.doi.org/10.17831/enq:arcc.v16i2.1059.

Full text
Abstract:
This paper proposes a computer-aided Dynamic Visual Research and Design Protocol for environmental designers to analyze humans’ dynamic visual experiences in the city and to simulate dynamic vision in the design process. The Protocol recommends using action cameras to collect massive dynamic visual data from participants’ first-person perspectives. It prescribes a computer-aided visual analysis approach to produce cinematic charts and storyboards, which further afford qualitative interpretations for aesthetic assessment and discussion. Employing real-time 3D simulation technologies, the Protocol enables the simulation of people’s dynamic vision in designed urban environments to support evaluation in design. Detailed contents and merits of the Protocol were demonstrated by its application in the Urbanscape Studio, a community participatory design course based at Watertown, South Dakota.
APA, Harvard, Vancouver, ISO, and other styles
10

Jing, Sheng Gang, and Wen Yuan Wan. "Analysis of Lower Limbs Dynamics and its Application in the Sports Training Based on Computer Vision." Applied Mechanics and Materials 513-517 (February 2014): 3212–15. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.3212.

Full text
Abstract:
A lower-limb dynamics analysis model was researched and applied to sports training based on computer vision. Through theoretical analysis and computer visual simulation, detailed theoretical and simulation data were obtained for the lower-limb torque values and applied to sports training. Using computer simulation technology, a skeletal dynamics model of the human lower limbs was established on a computer visual simulation platform, and the joint torque values in different sports models were calculated to analyze the optimum force and power modes. In the model-building process, a single-leg support motion model and a running model were constructed, lower-limb dynamics analysis was carried out for both models, and the joint torque values were calculated in theory and in simulation. Finally, the ADAMS software was used for dynamic visual simulation in computer vision. The simulation results show the force torque values vividly, and the simulation and theoretical results are compared and analyzed. This provides important data references and effective theoretical guidance for sports training and is meaningful for optimizing physical training and improving training effects.
APA, Harvard, Vancouver, ISO, and other styles
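The joint-torque calculations described above can be illustrated, in greatly simplified form, by the static torque of a supported mass about a joint; this toy formula stands in for the paper's full ADAMS dynamics model, and all names and numbers are hypothetical:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def joint_torque(mass_kg, com_distance_m, angle_deg):
    # Static torque about a joint supporting a mass whose centre of mass lies
    # com_distance_m away along the limb, with the limb at angle_deg from horizontal.
    # The moment arm is the horizontal projection of that distance.
    return mass_kg * G * com_distance_m * math.cos(math.radians(angle_deg))
```

With the limb horizontal the full moment arm applies; with the limb vertical the static torque vanishes.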
11

Alvarez, Luis. "Computer Vision and Image Processing in Environmental Research." Systems Analysis Modelling Simulation 43, no. 9 (September 2003): 1229–42. http://dx.doi.org/10.1080/0232929032000115010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Zou, Xiang Jun, Hai Xin Zou, J. Lu, Yan Chen, Ai Xia Guo, J. Z. Deng, and Lu Feng Luo. "Research on Manipulator Positioning Based on Stereo Vision in Virtual Environment." Key Engineering Materials 392-394 (October 2008): 200–204. http://dx.doi.org/10.4028/www.scientific.net/kem.392-394.200.

Full text
Abstract:
Inherent uncertainties always exist in the positioning of picking manipulators in complex environments. The modeling and simulation of manipulator positioning were discussed based on binocular stereo vision in a virtual environment (VE). Based on stereo vision, a method by which virtual manipulators locate the picking object through human-computer interaction (HCI) was proposed. The data input from vision were mapped to the virtual picking manipulators to enable positioning and simulation with a route- and event-driven mechanism. The positioning experimental platform in the VE consists of CCD stereo vision hardware and simulation software; the visualized simulation system was developed with EON SDK. The simulation of the manipulator's positioning was realized in the VE on this platform. This method can be used for virtual robots to perform long-distance positioning of objects in complex environments.
APA, Harvard, Vancouver, ISO, and other styles
13

Wahab Hashmi, Abdul, Harlal Singh Mali, Anoj Meena, Mohammad Farukh Hashmi, and Neeraj Dhanraj Bokde. "Surface Characteristics Measurement Using Computer Vision: A Review." Computer Modeling in Engineering & Sciences 135, no. 2 (2023): 917–1005. http://dx.doi.org/10.32604/cmes.2023.021223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Dai, Luo. "Modeling and Simulation of Athlete’s Error Motion Recognition Based on Computer Vision." Complexity 2021 (March 16, 2021): 1–10. http://dx.doi.org/10.1155/2021/5513957.

Full text
Abstract:
Computer vision is widely used in manufacturing, sports, medical diagnosis, and other fields. In this article, a multi-feature-fusion error-action expression method based on silhouette and optical flow information is proposed to overcome the limited effectiveness of single-feature expression methods for human body error-action recognition. We analyse and discuss human error-action recognition based on the idea of template matching to identify the key issues that affect the overall expression of error-action sequences. We then propose a motion energy model based on the direct motion-energy decomposition, through a filter group, of video clips of human error actions in the 3D action-sequence space. The method avoids preprocessing operations such as target localization and segmentation. We then use MET features combined with an SVM to test a human body error database and compare the experimental results obtained with different feature-reduction and classification methods. The results show that the method has an obvious comparative advantage in recognition rate and is suitable for other dynamic scenes.
APA, Harvard, Vancouver, ISO, and other styles
15

Li, Peng. "Research on Visual Art Design Method Based on Virtual Reality." International Journal of Gaming and Computer-Mediated Simulations 13, no. 2 (April 2021): 16–25. http://dx.doi.org/10.4018/ijgcms.2021040102.

Full text
Abstract:
In today's society, computer technology has become deeply rooted in people's lives. Computers are wonderful tools for creative thinking: they extend our visual function and the function of the visual cortex of the brain, and through this extension we can see scenes that we could not see before. As a computer simulation system that creates and conveys virtual worlds, 3D digital virtual reality technology uses the computer as a medium to simulate real or imagined scenes. It is a system simulation of interactive 3D dynamic vision and entity behavior based on diversified information fusion. As creators of visual arts, we must try to observe the world at a deeper level and establish models that resonate with the viewer. At every level, our technology conveys the way we view the world more deeply, and we will be all the more amazed at the richness of the real world. This paper explores a visual art design method based on virtual reality.
APA, Harvard, Vancouver, ISO, and other styles
16

Nedel, Luciana, Anderson Maciel, Carla Dal Sasso Freitas, Claudio Jung, Manuel Oliveira, Jacob Scharcanski, Joao Comba, and Marcelo Walter. "Enhancing the Human with Computers: Ongoing research at the Computer Graphics, Image Processing and Interaction Group." Journal on Interactive Systems 2, no. 2 (November 16, 2011): 1. http://dx.doi.org/10.5753/jis.2011.581.

Full text
Abstract:
The Computer Graphics, Image Processing and Interaction (CGIP) group at UFRGS concentrates expertise from many different and complementary graphics-related domains. In this paper we introduce the group and present our research lines and some ongoing projects. We selected mainly the projects related to 3D interaction and navigation, which include applications such as massive data visualization, surgery planning and simulation, tracking and computer vision algorithms, and modeling approaches for human perception and the natural world.
APA, Harvard, Vancouver, ISO, and other styles
17

Gallo Sanabria, Jorge Daniel, Paula Andrea Mozuca Tamayo, and Rafael Iván Rincón Fonseca. "Autonomous trajectory following for an UAV based on computer vision." Visión electrónica 14, no. 1 (January 31, 2020): 51–56. http://dx.doi.org/10.14483/22484728.15968.

Full text
Abstract:
Trajectory following performed by unmanned aerial vehicles has several advantages that carry over to many applications, from package delivery to agriculture. However, it involves several challenges depending on how the following is performed, particularly in the case of trajectory following using computer vision. Here we show the design, simulation, and implementation of a simple computer-vision-based trajectory-following algorithm, executed on a drone that must reach a desired point.
APA, Harvard, Vancouver, ISO, and other styles
18

Qasim, Mohammed, and Omar Y. Ismael. "Shared Control of a Robot Arm Using BCI and Computer Vision." Journal Européen des Systèmes Automatisés 55, no. 1 (February 28, 2022): 139–46. http://dx.doi.org/10.18280/jesa.550115.

Full text
Abstract:
Brain-Computer Interface (BCI) is a device that can transform human thoughts into control commands. However, BCI aggravates the common problems of robot teleoperation due to its low-dimensional and noisy control commands, particularly when utilized to control high-DOF robots. Thus, a shared control strategy can enhance the BCI performance and reduce the workload for humans. This paper presents a shared control scheme that assists disabled people to control a robotic arm through a non-invasive Brain-Computer Interface (BCI) for reach and grasp activities. A novel algorithm is presented which generates a trajectory (position and orientation) for the end-effector to reach and grasp an object based on a specially designed color-coded tag placed on the object. A single camera is used for tag detection. The simulation is performed using the CoppeliaSim robot simulator in conjunction with MATLAB to implement the tag detection algorithm and Python script to receive the commands from the BCI. The human-in-the-loop simulation results prove the effectiveness of the proposed algorithm to reach and grasp objects.
APA, Harvard, Vancouver, ISO, and other styles
19

Cai, De Liang, and Shao Na Lin. "A Study on Posture Correction Based on Computer Vision." Applied Mechanics and Materials 513-517 (February 2014): 3207–11. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.3207.

Full text
Abstract:
During basketball play, joint movement is quite complex, and when the movement is fast it is difficult for a fixed algorithm to capture the subtle angle changes between joints. Traditional sports-vision modeling methods cannot describe the motion changes of small areas, which leads to unsatisfactory measurement of subtle postures in motion. This paper proposes a measurement method for the three-dimensional motion posture of basketball athletes. It converts the constrained optimization problem for the motion parameters into a nonlinear minimization problem by optimizing the human motion parameters, and uses L-M motion-constraint parameters to provide a fast-converging regularization method, in order to obtain the motion and structural parameter matrices of non-rigid basketball movement and complete the three-dimensional measurement of the motion parameters. The simulation results show that the method can accurately measure the athletes' 3D movement parameters.
APA, Harvard, Vancouver, ISO, and other styles
20

Wu, Zhi Long, and Zhi Jie Wang. "The Simulation of Retinal Inner Plexiform Layer Based on Parallel Algorithm." Advanced Materials Research 680 (April 2013): 509–14. http://dx.doi.org/10.4028/www.scientific.net/amr.680.509.

Full text
Abstract:
The final objective of retinal simulation is to construct an artificial computer retina to replace the biological retina and thus compensate vision-impaired people. Due to the complexity of the retinal structure and the great number of bipolar and ganglion cells in the retina (exceeding tens of millions), both the speed and the accuracy of retinal simulation to date have been low. In this paper we present a method for simulating the inner plexiform layer of the retina based on a Compute Unified Device Architecture (CUDA) parallel algorithm, to achieve maximum utilization of the CPU and the Graphics Processing Unit (GPU) and to improve the speed and accuracy of the retina simulation.
APA, Harvard, Vancouver, ISO, and other styles
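The key property exploited by the CUDA approach above is that each cell's response is independent of the others, so the layer maps onto a data-parallel primitive. A toy sketch using Python threads in place of GPU threads (the centre-surround response function is invented for illustration and is not the paper's cell model):

```python
from concurrent.futures import ThreadPoolExecutor

def bipolar_response(stimulus):
    # Toy centre-surround response of one cell: excitatory centre minus
    # inhibitory surround, rectified at zero.
    centre, surround = stimulus
    return max(centre - 0.5 * surround, 0.0)

def simulate_layer(stimuli, workers=4):
    # Each cell is independent, so the whole layer is an embarrassingly
    # parallel map (threads here; CUDA blocks/threads in the paper's setting).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(bipolar_response, stimuli))
```

Because no cell reads another cell's output within a layer, the same map scales from four threads to millions of GPU threads without synchronization.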
21

Gong, Lin. "Simulating 3D Cloud Shape Based on Computer Vision and Particle System." Applied Mechanics and Materials 182-183 (June 2012): 819–22. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.819.

Full text
Abstract:
Clouds are an important part of the natural environment, and the realistic simulation of clouds is a challenging topic in computer graphics. This paper proposes a simple, efficient approach based on computer vision and a particle system to model various 3D clouds. The method uses computer vision technology to extract 3D structure information of clouds from images, then uses particle techniques to fill the 3D space and render the cloud. The method is suitable for modeling all kinds of clouds, such as stratus, cumulus, and cirrus, and is an improvement over earlier systems that modeled only one type of cloud.
APA, Harvard, Vancouver, ISO, and other styles
22

Wei, Yuan, Tao Hong, and Chaoqun Fang. "Research on Information Fusion of Computer Vision and Radar Signals in UAV Target Identification." Discrete Dynamics in Nature and Society 2022 (July 19, 2022): 1–13. http://dx.doi.org/10.1155/2022/3898277.

Full text
Abstract:
As one of the crucial sensing methods, multisensor fusion recognition aids the Internet of Things (IoT) in connecting things through ubiquitous perceptual terminals. The small size, sluggish flying speed, low flight altitude, and low electromagnetic intensity of unmanned aerial vehicles (UAVs) have put enormous strain on air traffic management and airspace security. It is urgent to achieve effective UAV target detection. The radio monitoring method, acoustic detection scheme, computer vision, and radar signal detection are commonly used technologies in this field. The radio monitoring approach has low accuracy, the acoustic detection strategy has a limited detection range, computer vision is limited by weather conditions, and radar signals at low altitudes are influenced by ground clutter. To address these issues, this paper proposes an information fusion strategy based on two levels of fusion: data-level fusion and decision-level fusion. In this strategy, computer vision and radar signals complement each other to improve the detection accuracy. For each level, the method of information fusion is introduced in detail. Furthermore, the effectiveness of the method has been demonstrated by a series of comprehensive experiments. The results show that the accuracy of the fusion method is improved, and the proposed method can still work even when a single method loses function.
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Fa Yu, Zhen Li Gao, and Hong Yu Yin. "Design of Robot Lawn Mower Based on Computer Vision." Applied Mechanics and Materials 404 (September 2013): 624–30. http://dx.doi.org/10.4028/www.scientific.net/amm.404.624.

Full text
Abstract:
In this paper, we introduce new research on a mowing robot's obstacle avoidance and path-tracking control, theoretically and practically, and propose a mowing control method based on computer vision. Lawn images are collected by a camera, and the characteristics of the lawn, obstacles, and boundaries are extracted programmatically to achieve edge separation. A dedicated processing algorithm determines the location and size of boundaries and obstacles and the speed of the robot, with real-time capture and processing. The paper gives the complete image-recognition process, analyzes several methods of image filtering and edge detection, proposes a simple control algorithm for obstacle avoidance, and applies MATLAB for obstacle-avoidance simulation. The results show that the method can correctly identify obstacles and boundaries and, through the output of the obstacles' location and size, the boundary slope, and the robot's speed, control the robot mower. Most importantly, applying computer vision in the mowing robot is itself an innovation.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhu, Y. X., and X. S. Duan. "The Simulation of Vision Measurement System for Cannon Barrel Pose." Advanced Materials Research 588-589 (November 2012): 1337–40. http://dx.doi.org/10.4028/www.scientific.net/amr.588-589.1337.

Full text
Abstract:
For the pose measurement of a cannon barrel, a vision method using a checkered plane has been proposed. To test and improve the precision of this new method without considering hardware error and other intractable objective factors, the imaging model of the marker (the checkered plane) is derived from the motion model of the cannon barrel and the marker's position relative to it, using a variable-controlling method. A computer simulation platform of the vision measurement system for cannon barrel pose was established based on C++ Builder. The simulation experiments validate the accuracy and reliability of the method.
APA, Harvard, Vancouver, ISO, and other styles
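Deriving the imaging model of a planar marker, as the abstract describes, rests on the pinhole projection of rotated marker points. A minimal sketch under assumed camera intrinsics (the focal length, principal point, and function names are made-up illustration values, not the paper's):

```python
import math

def elevate(point3d, angle_deg):
    # Rotate a barrel-fixed marker point about the camera X axis,
    # mimicking a change in barrel elevation.
    a = math.radians(angle_deg)
    X, Y, Z = point3d
    return (X, Y * math.cos(a) - Z * math.sin(a), Y * math.sin(a) + Z * math.cos(a))

def project(point3d, f=800.0, cx=320.0, cy=240.0):
    # Ideal pinhole projection of a camera-frame point (X, Y, Z), Z > 0:
    # u = f*X/Z + cx, v = f*Y/Z + cy.
    X, Y, Z = point3d
    return (f * X / Z + cx, f * Y / Z + cy)
```

Simulating the measurement then amounts to elevating the marker's corner points by a known angle, projecting them, and checking that the pose recovered from the image matches the commanded angle.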
25

Zhang, Yang, and Lin Li. "AR Sand Table with VSTAR System." Key Engineering Materials 480-481 (June 2011): 650–55. http://dx.doi.org/10.4028/www.scientific.net/kem.480-481.650.

Full text
Abstract:
We have proposed a three-dimensional sand table based on a video see-through tabletop augmented reality (VSTAR) system. A hybrid registration algorithm is used, combining a marker-based OI iterative computer vision method with a mechanical tracking method using photoelectric encoders, instead of computer vision alone. This makes tracking independent of the markers, so the tracking field of view is not limited to the scope of the markers. The system is of high precision and can be used for architectural design shows, military topography simulation, museum exhibitions, and other virtual sand table applications.
APA, Harvard, Vancouver, ISO, and other styles
26

Álvarez-Tuñón, Olaya, Alberto Jardón, and Carlos Balaguer. "Generation and Processing of Simulated Underwater Images for Infrastructure Visual Inspection with UUVs." Sensors 19, no. 24 (December 12, 2019): 5497. http://dx.doi.org/10.3390/s19245497.

Full text
Abstract:
The development of computer vision algorithms for navigation or object detection is one of the key issues of underwater robotics. However, extracting features from underwater images is challenging due to the presence of lighting defects, which need to be counteracted. This requires good environmental knowledge, either as a dataset or as a physics model. The lack of available data and the high variability of the conditions make the development of robust enhancement algorithms difficult. A framework for the development of underwater computer vision algorithms is presented, consisting of a method for underwater imaging simulation and an image enhancement algorithm, both integrated in the open-source robotics simulator UUV Simulator. The imaging simulation is based on a novel combination of the scattering model and style transfer techniques. The use of style transfer allows a realistic simulation of different environments without any prior knowledge of them. Moreover, an enhancement algorithm has been developed that successfully corrects the imaging defects in any given scenario, for either real or synthetic images. The proposed approach thus showcases a novel framework for the development of underwater computer vision algorithms for SLAM, navigation, or object detection in UUVs.
APA, Harvard, Vancouver, ISO, and other styles
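The scattering model mentioned above is commonly written as an attenuated scene term plus a veiling-light term. A one-pixel sketch with made-up attenuation and ambient-light values (the paper combines this with style transfer, which is omitted here):

```python
import math

def underwater_pixel(J, depth_m, beta=0.2, A=0.6):
    # Simplified underwater image formation: the scene radiance J is
    # attenuated by the water, and veiling light A fills in the rest.
    t = math.exp(-beta * depth_m)   # transmission along the line of sight
    return J * t + A * (1.0 - t)
```

At zero range the pixel equals the scene radiance; as range grows, it fades toward the ambient veiling light, which is exactly the haze an enhancement algorithm must invert.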
27

Ganoni, Ori, Ramakrishnan Mukundan, and Richard Green. "A Generalized Simulation Framework for Tethered Remotely Operated Vehicles in Realistic Underwater Environments." Drones 3, no. 1 (December 21, 2018): 1. http://dx.doi.org/10.3390/drones3010001.

Full text
Abstract:
This paper presents a framework for simulating visually realistic motion of underwater Remotely Operated Vehicles (ROVs) in highly complex models of aquatic environments. The models include a wide range of objects such as rocks, fish and marine plankton in addition to an ROV tether. A modified cable simulation for the underwater physical conditions has been developed for a tethered ROV. The simulation framework also incorporates models for low visibility conditions and intrinsic camera effects unique to the underwater environment. The visual models were implemented using the Unreal Engine 4 realistic game engine to be part of the presented framework. We developed a generalized method for implementing an ROV dynamics model and this method serves as a highly configurable component inside our framework. In this paper, we explore the unique characteristics of underwater simulation and the specialized models we developed for that environment. We use computer vision algorithms for feature extraction and feature tracking as a probe for comparing experiments done in our simulated environment against real underwater experiments. The experimental results presented in this paper successfully demonstrate the contribution of this realistic simulation framework to the understanding, analysis and development of computer vision and control algorithms to be used in today’s ROVs.
APA, Harvard, Vancouver, ISO, and other styles
28

Tang, Yi, Jin Qiu, and Ming Gao. "Fuzzy Medical Computer Vision Image Restoration and Visual Application." Computational and Mathematical Methods in Medicine 2022 (June 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/6454550.

Full text
Abstract:
In order to shorten the image registration time and improve the imaging quality, this paper proposes a fuzzy medical computer vision image information recovery algorithm based on the fuzzy sparse representation algorithm. Firstly, by constructing a computer vision image acquisition model, the visual feature quantity of the fuzzy medical computer vision image is extracted, and the feature registration design of the fuzzy medical computer vision image is carried out by using the 3D visual reconstruction technology. Then, by establishing a multidimensional histogram structure model, the wavelet multidimensional scale feature detection method is used to achieve grayscale feature extraction of fuzzy medical computer vision images. Finally, the fuzzy sparse representation algorithm is used to automatically optimize the fuzzy medical computer vision images. The experimental results show that the proposed method has a short image information registration time, less than 10 ms, and has a high peak PSNR. When the number of pixels is 700, its peak PSNR can reach 83.5 dB, which is suitable for computer image restoration.
APA, Harvard, Vancouver, ISO, and other styles
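PSNR, the quality metric quoted in the abstract above, is computed from the mean squared error between the original and restored images. A minimal sketch over flat pixel lists (the function name and test values are ours, not the paper's):

```python
import math

def psnr(original, restored, max_val=255.0):
    # Peak signal-to-noise ratio, in dB, between two equally sized pixel lists:
    # PSNR = 10 * log10(max_val^2 / MSE).
    mse = sum((a - b) ** 2 for a, b in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher values mean a restoration closer to the original; identical images give infinite PSNR.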
29

Hoy, D. E. P., and F. Yu. "Surface quality assessment using computer vision methods." Journal of Materials Processing Technology 28, no. 1-2 (September 1991): 265–74. http://dx.doi.org/10.1016/0924-0136(91)90225-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Zhou, Tong, and Yilei Wang. "The Application and Development Trend of Youth Sports Simulation Based on Computer Vision." Wireless Communications and Mobile Computing 2022 (August 21, 2022): 1–9. http://dx.doi.org/10.1155/2022/8500869.

Full text
Abstract:
Based on computer vision technology, this paper presents human motion analysis and target tracking. For moving-target detection, current techniques are summarized and some experimental results of the algorithms are given, with emphasis on the background-difference method under a monocular camera. A preliminary human contour is obtained by background differencing; to obtain a smoother target contour, mathematical morphology is used to remove noise, a judgment on the size of connected image domains is added, and a threshold is set to remove noise blobs whose connected domain is smaller than the threshold. For human motion recognition, this paper selects human motion features, including the aspect ratio of the minimum enclosing rectangle, rectangularity, circularity, and moment invariants; the selection criteria are strong noise resistance and obvious distinctiveness. The three types of human motion images are then classified and recognized, and after cross-validation and parameter optimization the recognition accuracy is significantly improved. In the experiments, the video sequence collected in the field has 376 frames at a frame rate of 10 frames/s. Because the traffic is light, a mean-shift algorithm based on adaptive feature fusion is used to track the target every 2-3 frames, with the negative X direction set as the direction of entering the scene, the positive X direction as the direction of leaving it, and an allowable error of 10 for the distance between detection and tracking results. The weight of each feature is dynamically updated by the similarity between the candidate model and the target model, which solves the problem that the mean-shift algorithm is not robust enough when similar objects occlude and interfere, and achieves more accurate tracking.
APA, Harvard, Vancouver, ISO, and other styles
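The detection pipeline described in the abstract above (background differencing, then discarding connected regions smaller than a noise threshold) can be sketched in plain NumPy. The function name and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def foreground_mask(frame, background, diff_thresh=25, min_area=20):
    """Background-difference segmentation with small-component removal."""
    # Background difference: pixels far from the background model are foreground.
    mask = np.abs(frame.astype(int) - background.astype(int)) > diff_thresh

    # Label 4-connected components with a simple flood fill and drop
    # components whose area is below min_area (the noise-block threshold).
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                labels[i, j] = current
                pixels = []
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
                if len(pixels) < min_area:
                    for y, x in pixels:
                        mask[y, x] = False
    return mask
```

A production system would use a morphological open/close step and a library labeling routine instead of this explicit flood fill, but the area-threshold logic is the same.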
31

A. Bukhari, Hanan. "FINE VISION TO HIGHLIGHT 3D RENDERING FOR PLAID FABRIC SIMULATION BY USING COMPUTER." مجلة دراسات وبحوث التربية النوعية 2, no. 2 (July 1, 2016): 370–92. http://dx.doi.org/10.21608/jsezu.2016.237045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Varagul, Jittima, and Toshio Ito. "Simulation of Detecting Function object for AGV Using Computer Vision with Neural Network." Procedia Computer Science 96 (2016): 159–68. http://dx.doi.org/10.1016/j.procs.2016.08.122.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sun, Zhe, Cheng Zhang, Jiawei Chen, Pingbo Tang, and Alper Yilmaz. "Predictive nuclear power plant outage control through computer vision and data-driven simulation." Progress in Nuclear Energy 127 (September 2020): 103448. http://dx.doi.org/10.1016/j.pnucene.2020.103448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Aneela, Banda. "Implementing a Real Time Virtual Mouse System and Fingertip Detection based on Artificial Intelligence." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 25, 2021): 2265–70. http://dx.doi.org/10.22214/ijraset.2021.35485.

Full text
Abstract:
Artificial intelligence refers to the simulation of human intelligence in computers that have been trained to think and act like humans. It is a broad branch of computer science devoted to the creation of intelligent machines capable of performing activities that would normally require human intelligence. Although artificial intelligence is a heterogeneous science with several techniques, developments in machine learning and deep learning are driving a paradigm shift in practically every industry. Human-computer interaction requires the identification of hand gestures using vision-based technology. The keyboard and mouse have grown more significant in human-computer interaction in recent decades, a progression from buttons to touch technology and a variety of other gesture control modalities. A hand-tracking-based virtual mouse application can be built with an ordinary camera. The proposed system combines the camera with computer vision techniques, such as fingertip identification and gesture recognition, to handle mouse operations (volume control, right click, left click), and shows that it can perform everything existing mouse devices can.
APA, Harvard, Vancouver, ISO, and other styles
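The core of a vision-based virtual mouse like the one in the abstract above is locating a fingertip in a binary hand mask and mapping it to screen coordinates. A minimal sketch, with illustrative function names (the paper's own detection method may differ):

```python
import numpy as np

def fingertip_from_mask(mask):
    """Topmost foreground pixel of a binary hand mask as a crude fingertip estimate."""
    ys, xs = np.nonzero(mask)
    k = np.argmin(ys)  # smallest row index = highest point in the image
    return int(xs[k]), int(ys[k])

def to_screen(pt, cam_size, screen_size, prev=None, smooth=0.5):
    """Map camera-frame coordinates to screen coordinates, optionally smoothed."""
    x = pt[0] * screen_size[0] / cam_size[0]
    y = pt[1] * screen_size[1] / cam_size[1]
    if prev is not None:  # exponential smoothing stabilises the cursor
        x = prev[0] + smooth * (x - prev[0])
        y = prev[1] + smooth * (y - prev[1])
    return x, y
```

In a real system the mask would come from skin segmentation or a learned hand detector, and the screen position would be passed to an OS-level cursor API.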
35

Hégron, Gérard, Bruno Arnaldi, and Thierry Priol. "VISYR: a simulation tool for mobile robots using vision sensors." Visual Computer 3, no. 5 (March 1988): 298–303. http://dx.doi.org/10.1007/bf01914865.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Xiang, Li, Lu Zheng, Li Zhicen, Zhu Wanchun, He Jintao, Yu Yaxiong, and Gong Jian. "Application of Computer 3D Digital Technology in Surgical Treatment of Pediatric Skull Deformity." E3S Web of Conferences 185 (2020): 03025. http://dx.doi.org/10.1051/e3sconf/202018503025.

Full text
Abstract:
Pediatric skull deformity requires immediate surgery when indicated by increased cranial pressure, mental retardation, impaired or absent vision, cranial deformity, and mental and spiritual defects. This study explores the value of computer-aided simulation in the treatment of pediatric skull deformity. Computer-simulated surgery allows surgeons to become familiar with the operative process in advance. Using computer 3D digital technology for preoperative design, planning, and simulation can reduce surgical difficulty to a certain extent, improve surgical efficiency, significantly increase intraoperative accuracy, and reduce the risk of intraoperative bleeding and postoperative complications.
APA, Harvard, Vancouver, ISO, and other styles
37

Shen, Chen. "A Fabric Simulation Optimization Method Based on Intelligent 3D Vision Technology." Applied Mechanics and Materials 513-517 (February 2014): 4143–46. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.4143.

Full text
Abstract:
Because the motion state is random, the simulated three-dimensional coordinates of a fabric drift as it moves. Mutations in the motion state significantly increase the 3D randomness of the fabric's motion parameters, so the fabric's three-dimensional coordinates change within a neighborhood, producing unrealistic simulation results. To solve this problem, the paper proposes a three-dimensional visual fabric simulation algorithm based on feature matching and energy-constrained estimation. It imposes area constraints on the deformation and directional randomness of the moving fabric according to estimates of the fabric's characteristic parameters, and uses the fabric model and characteristic groups to limit the moving scope, realizing a true 3D computer simulation of the fabric. Experiments show that the method achieves three-dimensional fabric simulation with high fidelity.
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Dalong, Yanfang Pan, and Liwei Li. "Logistics Engineering Simulation Using Computer 3D Modeling Technology." Journal of Physics: Conference Series 2143, no. 1 (December 1, 2021): 012018. http://dx.doi.org/10.1088/1742-6596/2143/1/012018.

Full text
Abstract:
3D modeling technology is an important branch of interdisciplinary fields such as computer graphics, intelligent information processing, computer vision, and artificial intelligence. Collecting three-dimensional data of a target object through computer digitization, and then processing and reproducing it in simulation, plays an important role in logistics engineering (LE). The purpose of this paper is simulation research on LE based on computer three-dimensional modeling technology. Taking LE as the research object, the paper first elaborates the functional and non-functional requirements of the system and establishes an intelligent logistics system. Flexsim simulation software is used to build a logistics distribution simulation model, which is parameterized with data collected in a survey. Using the data output from the simulation, the simulation data of the original logistics system and the logistics system designed in this paper are compared and analyzed. The simulation output shows that the 6 transport planes of the original system moved a total of 15,559 products in and out of the warehouse, while the 6 transport planes of the proposed logistics system moved a total of 17,144 products. The proposed system therefore offers strong transportation efficiency in LE.
APA, Harvard, Vancouver, ISO, and other styles
39

Dong, Liang, Ding Gangyi, Yan Dapeng, and Huang Kexiang. "Multithreshold Image Segmentation and Computer Simulation Based on Interactive Processing System." Mobile Information Systems 2022 (July 8, 2022): 1–9. http://dx.doi.org/10.1155/2022/8091701.

Full text
Abstract:
As interactive processing systems find application at all levels, and to broaden their range of application efficiently, this study develops an application system for interactive online processing of DVB-SMODIS data and proposes corrections and improvements for most of the problems in such data. To improve the global search ability of the BBO algorithm in multithreshold image segmentation, the study discusses a new BBO algorithm. Image segmentation is an important stage of digital image processing systems and computer vision systems. Computer simulation programming can simulate the imaging process of objects to produce simulated maps: from the original object computed by the system, the image can be reconstructed and reproduced by applying the mathematical formulas of light waves with traditional simulation techniques. This article therefore uses interactive processing technology to study multithreshold image segmentation and computer simulation technology.
APA, Harvard, Vancouver, ISO, and other styles
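Multithreshold segmentation of the kind discussed above typically maximizes the between-class variance (the multi-level Otsu criterion) over candidate threshold pairs. The paper uses a BBO metaheuristic to search this space; the sketch below substitutes a plain exhaustive search over two thresholds to show the objective itself, and is not the paper's algorithm:

```python
import numpy as np

def multi_otsu_two_thresholds(image, levels=256):
    """Exhaustive search for two thresholds maximizing between-class variance."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()          # normalized gray-level histogram
    bins = np.arange(levels)
    mu_total = (p * bins).sum()    # global mean intensity
    best, best_t = -1.0, (0, 0)
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            var = 0.0
            # Three classes: [0, t1), [t1, t2), [t2, levels)
            for lo, hi in ((0, t1), (t1, t2), (t2, levels)):
                w = p[lo:hi].sum()
                if w > 0:
                    mu = (p[lo:hi] * bins[lo:hi]).sum() / w
                    var += w * (mu - mu_total) ** 2
            if var > best:
                best, best_t = var, (t1, t2)
    return best_t
```

A metaheuristic such as BBO replaces the double loop with a population-based search, which matters when more than two or three thresholds are sought.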
40

Wang, Junli, Shitong Wang, and Wenhao Leng. "Autonomous Piloting and Simulation on Underwater Manipulations Based on Vision Positioning." Mobile Information Systems 2022 (November 1, 2022): 1–10. http://dx.doi.org/10.1155/2022/2884611.

Full text
Abstract:
A remotely operated vehicle (ROV) equipped with an underwater manipulator plays a very important role in underwater investigation, construction, and other operations. Precisely moving the ROV and operating the manipulator to grasp and move objects are frequent underwater operations; they also consume much of the operators' physical strength, which seriously degrades efficiency. This paper proposes a scheme for grasping a rod-shaped object autonomously. Two cameras arranged on the ROV frame form a stereo vision system, and the spatial position parameters of the rod-shaped object are calculated from the stereo images. The ROV is then driven and the manipulator controlled according to these parameters so that the end effector of the manipulator can clamp the rod-shaped object exactly, completing the capture task autonomously. Images of the underwater manipulation scene are simulated with the marine engineering simulation software Vortex Studio; the position parameters of the rod-shaped cable in the scene are obtained by the proposed algorithm, from which the displacement to move the ROV and the joint angles to operate the manipulator follow. The feasibility of autonomously capturing an underwater object is thus verified.
APA, Harvard, Vancouver, ISO, and other styles
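Recovering a 3D position from two calibrated cameras, as in the stereo system above, is classically done by linear (DLT) triangulation. A minimal sketch under standard pinhole assumptions (the paper's exact formulation may differ):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two cameras.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null vector of A (least-squares for noisy data)
    return X[:3] / X[3]   # dehomogenize
```

With noisy correspondences, the SVD solution minimizes algebraic error; practical systems refine it by minimizing reprojection error.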
41

Jara, Carlos A., Francisco A. Candelas, Pablo Gil, Fernando Torres, Francisco Esquembre, and Sebastián Dormido. "EJS+EjsRL: An interactive tool for industrial robots simulation, Computer Vision and remote operation." Robotics and Autonomous Systems 59, no. 6 (June 2011): 389–401. http://dx.doi.org/10.1016/j.robot.2011.02.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Bao, Hongshu, and Xiang Yao. "Dynamic 3D image simulation of basketball movement based on embedded system and computer vision." Microprocessors and Microsystems 81 (March 2021): 103655. http://dx.doi.org/10.1016/j.micpro.2020.103655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lee, Dongwoon. "Modeling and Simulation of Skeletal Muscle for Computer Graphics: A Survey." Foundations and Trends® in Computer Graphics and Vision 7, no. 4 (2011): 229–76. http://dx.doi.org/10.1561/0600000036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Xu, Siping, and Lan Chen. "Construction and Simulation of Deep Learning Algorithm for Robot Vision Tracking." Journal of Sensors 2022 (July 20, 2022): 1–6. http://dx.doi.org/10.1155/2022/1522657.

Full text
Abstract:
As an indispensable branch of computer vision, visual object tracking has significant research value. This paper therefore evaluates a deep learning approach to robot vision tracking. Based on the basic principles of target tracking and search, a deep learning algorithm for visual tracking is constructed, then evaluated and simulated. The results showed that the accuracy rate increased from 90.9% to 90.13% after the channel attention mechanism module was added, while the variance was reduced from 3.78% to 1.27%, giving better stability. The EAO, accuracy, and robustness of the algorithm exceed those of the version without the salient-region weighting strategy. The strategy of using the improved residual network SE-ResNet to extract multiresolution features within a correlation filtering framework is effective and helps to improve the tracking performance.
APA, Harvard, Vancouver, ISO, and other styles
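The channel attention module mentioned above is the squeeze-and-excitation (SE) mechanism used in SE-ResNet: global-average-pool each channel, pass through a small bottleneck network, and rescale the channels by the resulting weights. A NumPy sketch with placeholder weight matrices (a trained network would learn `w1` and `w2`):

```python
import numpy as np

def se_channel_attention(x, w1, w2):
    """Squeeze-and-excitation: reweight the channels of a (C, H, W) feature map."""
    z = x.mean(axis=(1, 2))              # squeeze: global average pooling per channel
    h = np.maximum(w1 @ z, 0.0)          # excitation: FC -> ReLU (bottleneck)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))  # FC -> sigmoid gives per-channel weights
    return x * s[:, None, None]          # scale each channel by its weight
```

The bottleneck dimension (rows of `w1`) is typically C divided by a reduction ratio such as 16; the example below uses tiny shapes purely for illustration.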
45

Mattoccia, Stefano, Branislav Kisačanin, Margrit Gelautz, Sek Chai, Ahmed Nabil Belbachir, Goksel Dedeoglu, and Fridtjof Stein. "Guest Editorial: Special Issue on Embedded Computer Vision." Journal of Signal Processing Systems 90, no. 6 (April 26, 2018): 873–76. http://dx.doi.org/10.1007/s11265-018-1365-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Peterson, Marco, Minzhen Du, Bryant Springle, and Jonathan Black. "SpaceDrones 2.0—Hardware-in-the-Loop Simulation and Validation for Orbital and Deep Space Computer Vision and Machine Learning Tasking Using Free-Flying Drone Platforms." Aerospace 9, no. 5 (May 6, 2022): 254. http://dx.doi.org/10.3390/aerospace9050254.

Full text
Abstract:
The proliferation of reusable space vehicles has fundamentally changed how assets are injected into the low earth orbit and beyond, increasing both the reliability and frequency of launches. Consequently, it has led to the rapid development and adoption of new technologies in the aerospace sector, including computer vision (CV), machine learning (ML)/artificial intelligence (AI), and distributed networking. All these technologies are necessary to enable truly autonomous “Human-out-of-the-loop” mission tasking for spaceborne applications as spacecraft travel further into the solar system and our missions become more ambitious. This paper proposes a novel approach for space-based computer vision sensing and machine learning simulation and validation using synthetically trained models to generate the large amounts of space-based imagery needed to train computer vision models. We also introduce a method of image data augmentation known as domain randomization to enhance machine learning performance in the dynamic domain of spaceborne computer vision to tackle unique space-based challenges such as orientation and lighting variations. These synthetically trained computer vision models then apply that capability for hardware-in-the-loop testing and evaluation via free-flying robotic platforms, thus enabling sensor-based orbital vehicle control, onboard decision making, and mobile manipulation similar to air-bearing table methods. Given the current energy constraints of space vehicles using solar-based power plants, cameras provide an energy-efficient means of situational awareness when compared to active sensing instruments. When coupled with computationally efficient machine learning algorithms and methods, they can enable space systems proficient in classifying, tracking, capturing, and ultimately manipulating objects for orbital/planetary assembly and maintenance (tasks commonly referred to as In-Space Assembly and On-Orbit Servicing).
Given the inherent dangers of manned spaceflight/extravehicular activities (EVAs) currently employed to perform spacecraft maintenance and the current limitation of long-duration human spaceflight outside the low earth orbit, space robotics armed with generalized sensing and control and machine learning architecture have a unique automation potential. However, the tools and methodologies required for hardware-in-the-loop simulation, testing, and validation at a large scale and at an affordable price point are in developmental stages. By leveraging a drone’s free-flight maneuvering capability, theater projection technology, synthetically generated orbital and celestial environments, and machine learning, this work strives to build a robust hardware-in-the-loop testing suite. While the focus of the specific computer vision models in this paper is narrowed down to solving visual sensing problems in orbit, this work can very well be extended to solve any problem set that requires a robust onboard computer vision, robotic manipulation, and free-flight capabilities.
APA, Harvard, Vancouver, ISO, and other styles
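Domain randomization, as described in the abstract above, trains a model on many synthetic renderings whose nuisance factors (lighting, orientation, background) are randomized so that the real scene looks like just another variation. A toy NumPy sketch of one randomized rendering; the specific transforms and ranges are illustrative, not the paper's pipeline:

```python
import numpy as np

def domain_randomize(image, rng):
    """Produce one randomized rendering: vary lighting, orientation, background."""
    out = image.astype(float)
    out *= rng.uniform(0.5, 1.5)                    # lighting variation
    out = np.rot90(out, k=rng.integers(0, 4))       # orientation variation
    noise_bg = rng.uniform(0, 255, size=out.shape)  # randomized background
    mask = out > 0                                  # foreground = nonzero pixels
    out = np.where(mask, out, noise_bg)
    return np.clip(out, 0, 255).astype(np.uint8)
```

A rendering engine would vary camera pose, textures, and star-field backgrounds in the same spirit, generating thousands of labeled variations per object.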
47

Sun, Zhiyuan, Linlin Huang, and Ruiqing Jia. "Coal and Gangue Separating Robot System Based on Computer Vision." Sensors 21, no. 4 (February 14, 2021): 1349. http://dx.doi.org/10.3390/s21041349.

Full text
Abstract:
In coal production, raw coal contains a large amount of gangue, which degrades coal quality and pollutes the environment. Separating coal from gangue improves coal quality, saves energy, reduces consumption, and makes rational use of resources; the separated gangue can also be reused. Robots using computer vision have become a research hotspot because the equipment is simple, efficient, and non-polluting. The difficulty in identifying coal and gangue is that they differ only slightly and the background and foreground are similar; in addition, the irregular shape of gangue and real-time grasping requirements make robot control difficult. This paper presents a coal and gangue separating robot system based on computer vision, proposes a convolutional neural network to extract classification and location information, and designs a multi-objective motion planning algorithm for the robot. Simulation and experiments verify that the accuracy of coal gangue identification reaches 98% while guaranteeing real-time performance, and the average separating rate reaches 75% on low-, medium-, and high-speed moving conveyor belts, meeting the needs of actual projects. The method offers important guidance for detecting and separating objects in complex scenes.
APA, Harvard, Vancouver, ISO, and other styles
48

Sahoo, Saumya R., and Shital S. Chiddarwar. "Flatness-based control scheme for hardware-in-the-loop simulations of omnidirectional mobile robot." SIMULATION 96, no. 2 (June 26, 2019): 169–83. http://dx.doi.org/10.1177/0037549719859064.

Full text
Abstract:
Omnidirectional robots offer better maneuverability and more degrees of freedom than conventional wheeled mobile robots, but the design of their control systems remains a challenge. In this study, a real-time simulation system is used to design and develop a hardware-in-the-loop (HIL) simulation platform for an omnidirectional mobile robot using bond graphs and a flatness-based controller. The control input from the simulation model is transferred to the robot hardware through an Arduino microcontroller input board, and a Kinect-based vision system provides feedback to the simulation model. The developed controller, the Kinect-based vision system, and the HIL configuration are validated in the HIL simulation environment. The results confirm that the proposed HIL system can be an efficient tool for verifying the performance of the hardware and simulation designs of flatness-based control systems for omnidirectional mobile robots.
APA, Harvard, Vancouver, ISO, and other styles
49

Kalantan, Zakiah I., and Jochen Einbeck. "Quantile-Based Estimation of the Finite Cauchy Mixture Model." Symmetry 11, no. 9 (September 19, 2019): 1186. http://dx.doi.org/10.3390/sym11091186.

Full text
Abstract:
Heterogeneity and outliers are two aspects which add considerable complexity to the analysis of data. The Cauchy mixture model is an attractive device to deal with both issues simultaneously. This paper develops an Expectation-Maximization-type algorithm to estimate the Cauchy mixture parameters. The main ingredients of the algorithm are appropriately weighted component-wise quantiles which can be efficiently computed. The effectiveness of the method is demonstrated through a simulation study, and the techniques are illustrated by real data from the fields of psychology, engineering and computer vision.
APA, Harvard, Vancouver, ISO, and other styles
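The abstract's key ingredient, weighted component-wise quantiles inside an EM-type loop, can be sketched for a two-component mixture as follows. The quantile-based initialization and the half-interquartile-range scale update are illustrative choices consistent with Cauchy quantile properties, not necessarily the authors' exact scheme:

```python
import numpy as np

def cauchy_pdf(x, loc, scale):
    return scale / (np.pi * (scale**2 + (x - loc)**2))

def weighted_quantile(x, w, q):
    """Quantile of x under (responsibility) weights w."""
    order = np.argsort(x)
    xs, ws = x[order], w[order]
    cw = np.cumsum(ws) / ws.sum()
    return xs[np.searchsorted(cw, q)]

def em_cauchy_mixture(x, iters=50):
    """EM-type fit of a two-component Cauchy mixture via weighted quantiles."""
    loc = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])  # crude init
    scale = np.array([1.0, 1.0])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each point.
        dens = np.stack([pi[k] * cauchy_pdf(x, loc[k], scale[k]) for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: weighted median for location, half weighted IQR for scale
        # (for a Cauchy, half the interquartile range equals the scale).
        for k in range(2):
            loc[k] = weighted_quantile(x, r[k], 0.5)
            iqr = (weighted_quantile(x, r[k], 0.75)
                   - weighted_quantile(x, r[k], 0.25))
            scale[k] = max(iqr / 2.0, 1e-6)
        pi = r.mean(axis=1)
    return pi, loc, scale
```

Because medians and quantiles ignore extreme observations, each M-step stays stable under the heavy Cauchy tails, which is exactly why moment-based updates fail here.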
50

Summers, Kenneth L., Thomas Preston Caudell, Kathryn Berkbigler, Brian Bush, Kei Davis, and Steve Smith. "Graph Visualization for the Analysis of the Structure and Dynamics of Extreme-Scale Supercomputers." Information Visualization 3, no. 3 (July 8, 2004): 209–22. http://dx.doi.org/10.1057/palgrave.ivs.9500079.

Full text
Abstract:
We are exploring the development and application of information visualization techniques for the analysis of new massively parallel supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often non-standard networks. The scale, complexity, and inherent non-locality of the structure and dynamics of this hardware, and the operating systems and applications distributed over them, challenge traditional analysis methods. As part of the á la carte (A Los Alamos Computer Architecture Toolkit for Extreme-Scale Architecture Simulation) team at Los Alamos National Laboratory, who are simulating these new architectures, we are exploring advanced visualization techniques and creating tools to enhance analysis of these simulations with intuitive three-dimensional representations and interfaces. This work complements existing and emerging algorithmic analysis tools. In this paper, we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree communications network), and a presentation of three classes of visualizations that clearly display the switching fabric and the flow of information in the interconnecting network.
APA, Harvard, Vancouver, ISO, and other styles