Journal articles on the topic "Kinematic identification- Vision based techniques"

To see the other types of publications on this topic, follow the link: Kinematic identification- Vision based techniques.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Consult the top 50 journal articles for your research on the topic "Kinematic identification- Vision based techniques".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Seah, Shao Xuan, Yan Han Lau, and Sutthiphong Srigrarom. "Multiple Aerial Targets Re-Identification by 2D- and 3D- Kinematics-Based Matching." Journal of Imaging 8, no. 2 (January 28, 2022): 26. http://dx.doi.org/10.3390/jimaging8020026.

Abstract:
This paper presents two techniques in the matching and re-identification of multiple aerial target detections from multiple electro-optical devices: 2-dimensional and 3-dimensional kinematics-based matching. The main advantage of these methods over traditional image-based methods is that no prior image-based training is required; instead, relatively simpler graph matching algorithms are used. The first 2-dimensional method relies solely on the kinematic and geometric projections of the detected targets onto the images captured by the various cameras. Matching and re-identification across frames were performed using a series of correlation-based methods. This method is suitable for all targets with distinct motion observed by the camera. The second 3-dimensional method relies on the change in the size of detected targets to estimate motion in the focal axis by constructing an instantaneous direction vector in 3D space that is independent of camera pose. Matching and re-identification were achieved by directly comparing these vectors across frames under a global coordinate system. Such a method is suitable for targets in near to medium range where changes in detection sizes may be observed. While no overlapping field of view requirements were explicitly imposed, it is necessary for the aerial target to be detected in both cameras before matching can be carried out. Preliminary flight tests were conducted using 2–3 drones at varying ranges, and the effectiveness of these techniques was tested and compared. Using these proposed techniques, an MOTA score of more than 80% was achieved.
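As a rough sketch of the second, 3D technique, the snippet below matches targets across two cameras by comparing instantaneous direction vectors in a shared global frame. The array layout, the cosine-similarity cost, and the Hungarian assignment step are illustrative assumptions; the paper itself uses graph-matching-based association.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def direction_vectors(tracks):
    """Unit 3D direction vectors from two consecutive position estimates.

    tracks: (N, 2, 3) array of N targets with positions at frames t-1 and t,
    expressed in a global coordinate system so they are camera-pose independent.
    """
    d = tracks[:, 1, :] - tracks[:, 0, :]
    return d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-9)

def match_across_cameras(dirs_a, dirs_b):
    """Associate targets seen by camera A with targets seen by camera B
    by maximising the cosine similarity of their direction vectors."""
    cost = -dirs_a @ dirs_b.T                 # negative cosine similarity
    rows, cols = linear_sum_assignment(cost)  # one-to-one assignment
    return list(zip(rows.tolist(), cols.tolist()))
```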
2

Chen, Biao, Chaoyang Chen, Jie Hu, Zain Sayeed, Jin Qi, Hussein F. Darwiche, Bryan E. Little, et al. "Computer Vision and Machine Learning-Based Gait Pattern Recognition for Flat Fall Prediction." Sensors 22, no. 20 (October 19, 2022): 7960. http://dx.doi.org/10.3390/s22207960.

Abstract:
Background: Gait recognition has been applied in the prediction of the probability of elderly flat-ground falls, in functional evaluation during rehabilitation, and in the training of patients with lower extremity motor dysfunction. Distinguishing between seemingly similar kinematic gait patterns associated with different pathological entities is a challenge for the clinician. How to realize automatic identification and judgment of abnormal gait is a significant challenge in clinical practice. The long-term goal of our study is to develop a gait recognition computer vision system using artificial intelligence (AI) and machine learning (ML) computing. This study aims to find an optimal ML algorithm using computer vision techniques and measured variables from the lower limbs to classify gait patterns in healthy people. The purpose of this study is to determine the feasibility of computer vision and machine learning (ML) computing in discriminating different gait patterns associated with flat-ground falls. Methods: We used the Kinect® Motion system to capture spatiotemporal gait data from seven healthy subjects in three walking trials, including normal gait, pelvic-obliquity-gait, and knee-hyperextension-gait walking. Four different classification methods, including convolutional neural network (CNN), support vector machine (SVM), K-nearest neighbors (KNN), and long short-term memory (LSTM) neural networks, were used to automatically classify the three gait patterns. Overall, 750 sets of data were collected, and the dataset was divided into 80% for algorithm training and 20% for evaluation. Results: The SVM and KNN had higher accuracy than the CNN and LSTM. The SVM (94.9 ± 3.36%) had the highest accuracy in the classification of gait patterns, followed by KNN (94.0 ± 4.22%). The accuracy of the CNN was 87.6 ± 7.50% and that of the LSTM was 83.6 ± 5.35%. Conclusions: This study revealed that the proposed AI machine learning (ML) techniques can be used to design gait biometric systems and machine vision for gait pattern recognition. Potentially, this method can be used to remotely evaluate elderly patients and help clinicians make decisions regarding disposition, follow-up, and treatment.
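A minimal sketch of the best-performing classical pipeline described above (SVM and KNN over extracted spatiotemporal gait features), assuming the features have already been exported to arrays; file names and hyperparameters are placeholders:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Hypothetical exports: one spatiotemporal feature vector per gait sample,
# labels 0 = normal, 1 = pelvic obliquity, 2 = knee hyperextension.
X, y = np.load("gait_features.npy"), np.load("gait_labels.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    model = make_pipeline(StandardScaler(), clf)  # scale features, then classify
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```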
3

Huang, Jianbing, and Chia-Hsiang Menq. "Identification and Characterization of Regular Surfaces from Unorganized Points by Normal Sensitivity Analysis." Journal of Computing and Information Science in Engineering 2, no. 2 (June 1, 2002): 115–24. http://dx.doi.org/10.1115/1.1509075.

Abstract:
In this paper, the concept of free motion subspace is introduced and utilized to characterize the special kinematic properties of regular surfaces, including planes, natural quadrics, and regular swept surfaces. Based on the concept, a general approach is developed to automatically identify the surface type and calculate the associated geometric parameters of an unknown surface from unorganized measurement points. In the approach, a normal sensitivity matrix that characterizes the normal perturbation of surface points under differential motions is derived. With the normal sensitivity matrix, it is shown that the free motion subspace of a surface can be determined through a regular eigen-analysis. From the identified free motion subspace, the surface type of a regular surface can be determined and its geometric parameters can be simultaneously computed. An algorithm that identifies the free motion subspace of an unknown surface from its unorganized sample points has been implemented. Experiments are carried out to investigate the robustness and efficiency of the developed algorithm. The developed algorithm can be used to solve various problems, including geometric primitive classification and parameter estimation, regular swept surface reconstruction, geometric constraint recognition, and multi-view data registration. Integrated with state-of-the-art segmentation techniques, the proposed method can be used for object recognition, robot vision, and reverse engineering.
4

Amami, Mustafa M. "Fast and Reliable Vision-Based Navigation for Real Time Kinematic Applications." International Journal for Research in Applied Science and Engineering Technology 10, no. 2 (February 28, 2022): 922–32. http://dx.doi.org/10.22214/ijraset.2022.40395.

Abstract:
Automatic Image Matching (AIM) is the term used for the automatic detection of corresponding points located on the overlapping areas of multiple images. AIM is extensively used with Mobile Mapping Systems (MMS) for different engineering applications, such as highway infrastructure mapping, monitoring of road surface quality and markings, telecommunication, emergency response, and collecting data for Geographical Information Systems (GIS). The robotics community and Simultaneous Localization And Mapping (SLAM) based applications are other important areas that require fast and well-distributed AIM for robust vision navigation solutions. Different robust feature detection methods are commonly used for AIM, such as the Scale Invariant Feature Transform (SIFT), Principal Component Analysis (PCA)-SIFT, and Speeded Up Robust Features (SURF). The performance of such techniques has been widely investigated and compared, showing high capability to provide reliable and precise results. However, these techniques are still of limited use for real-time and near-real-time SLAM based applications, such as intelligent robots and low-cost Unmanned Aerial Vehicles (UAVs) based on vision navigation. The main limitations of these AIM techniques are the relatively long processing time and the random distribution of matched points over the common area between images. This paper works on overcoming these two limitations, providing extremely fast AIM with well-distributed common points for robust real-time vision navigation. A digital image pyramid, the epipolar line, and 2D transformation have been utilized to limit the size of the search windows significantly and to determine the rotation angle and scale level of features, reducing the overall processing time considerably. Using a limited number of well-distributed common points has also helped to speed up the automatic matching, besides providing a robust vision navigation solution. The idea has been tested with terrestrial MMS images and surveying UAV aerial images. The results reflect the high capability of the followed technique in providing fast and robust AIM for real-time SLAM based applications. Keywords: Automatic Image Matching, Epipolar Line, Image Pyramid, SLAM, Vision Navigation, Real Time.
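A toy sketch of the central idea of constraining the correspondence search to a narrow band around the epipolar line (the pyramid and 2D-transformation steps are omitted). The fundamental matrix F is assumed known from calibration, the images are assumed to be single-channel, and the template and band sizes are arbitrary:

```python
import cv2
import numpy as np

def epipolar_band_match(img1, img2, pt, F, tmpl_half=8, band=6):
    """Template-match a point from img1 along its epipolar line in img2.

    Searching only a narrow band around the line l = F @ [x, y, 1]
    shrinks the search window dramatically compared with a full scan.
    """
    x, y = int(pt[0]), int(pt[1])
    tmpl = img1[y - tmpl_half:y + tmpl_half + 1, x - tmpl_half:x + tmpl_half + 1]
    a, b, c = F @ np.array([pt[0], pt[1], 1.0])
    if abs(b) < 1e-9:                        # (near-)vertical line: skip in this toy
        return None, 0.0
    best, best_xy = -1.0, None
    for u in range(tmpl_half, img2.shape[1] - tmpl_half):
        v = int(round(-(a * u + c) / b))     # epipolar line v(u)
        if v < tmpl_half + band or v >= img2.shape[0] - tmpl_half - band:
            continue
        win = img2[v - tmpl_half - band:v + tmpl_half + band + 1,
                   u - tmpl_half:u + tmpl_half + 1]
        res = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best:
            best, best_xy = score, (u, v - band + loc[1])
    return best_xy, best
```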
5

Sanchez Guinea, Alejandro, Simon Heinrich, and Max Mühlhäuser. "Activity-Free User Identification Using Wearables Based on Vision Techniques." Sensors 22, no. 19 (September 28, 2022): 7368. http://dx.doi.org/10.3390/s22197368.

Abstract:
In order to achieve the promise of smart spaces, where the environment acts to fulfill the needs of users in an unobtrusive and personalized manner, it is necessary to provide means for seamless and continuous identification of users, to know who indeed is interacting with the system and to whom the smart services are to be provided. In this paper, we propose a new approach capable of performing activity-free identification of users based on hand and arm motion patterns obtained from a wrist-worn inertial measurement unit (IMU). Our approach is not constrained to particular types of movements, gestures, or activities, thus allowing users to perform their daily routines freely and without constraints while the user identification takes place. We evaluate our approach based on IMU data collected from 23 people performing their daily routines unconstrained. Our results indicate that our approach is able to perform activity-free user identification with an accuracy of 0.9485 for 23 users, without requiring any direct input or specific action from the users. Furthermore, our evaluation provides evidence regarding the robustness of our approach in various configurations.
6

Silva, José Luís, Rui Bordalo, José Pissarra, and Paloma de Palacios. "Computer Vision-Based Wood Identification: A Review." Forests 13, no. 12 (November 30, 2022): 2041. http://dx.doi.org/10.3390/f13122041.

Abstract:
Wood identification is an important tool in many areas, from biology to cultural heritage. In the fight against illegal logging, its application is especially necessary and impactful. Identifying a wood sample to genus or species level is difficult, expensive, and time-consuming, even when using the most recent methods, resulting in a growing need for a readily accessible and field-applicable method for scientific wood identification. Providing fast results and ease of use, computer vision-based technology is an economically accessible option currently applied to meet the demand for automated wood identification. However, despite the promising characteristics and accurate results of this method, it remains a niche research area in wood sciences and is little known in other fields of application such as cultural heritage. To share the results and applicability of computer vision-based wood identification, this paper reviews the most frequently cited and relevant published research based on computer vision and machine learning techniques, aiming to facilitate and promote the use of this technology in research and encourage its application among end-users who need quick and reliable results.
7

RADHIKA, K. R., S. V. SHEELA, M. K. VENKATESHA, and G. N. SEKHAR. "SIGNATURE AND IRIS AUTHENTICATION BASED ON DERIVED KINEMATIC VALUES." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 08 (December 2010): 1237–60. http://dx.doi.org/10.1142/s021800141000841x.

Abstract:
Authentication systems that rely on measurable behavioral and physiological traits are essential for online systems. In this paper, two types of biometric sample authentication from different databases on a common algorithm using Continuous Dynamic Programming [CDP] are discussed. Using a common algorithm, a method for user-dependent threshold decisions can be achieved for both biometrics in a uniform fashion. The integration of static iris information and dynamic signature information is done at the decision level. Inferences are drawn using voting techniques. The derived kinematic feature, acceleration, is used in this paper.
8

Dang, Minh. "Efficient Vision-Based Face Image Manipulation Identification Framework Based on Deep Learning." Electronics 11, no. 22 (November 17, 2022): 3773. http://dx.doi.org/10.3390/electronics11223773.

Abstract:
Image manipulation of the human face is a trending topic of image forgery, in which face regions are transformed or altered using a set of techniques to accomplish the desired outputs. Manipulated face images are spreading on the internet due to the rise of social media, causing various societal threats. It is challenging to detect manipulated face images effectively because (i) there has been a limited number of manipulated face datasets, as most datasets contain images generated by GAN models; (ii) previous studies have mainly extracted handcrafted features and fed them into machine learning algorithms to perform manipulated face detection, which was complicated, error-prone, and laborious; and (iii) previous models failed to explain why they achieved good performance. In order to address these issues, this study introduces a large face manipulation dataset containing vast variations of manipulated images created and manually validated using various manipulation techniques. The dataset is then used to train a fine-tuned RegNet model to detect manipulated face images robustly and efficiently. Finally, a manipulated region analysis technique is implemented to provide in-depth insights into the manipulated regions. The experimental results revealed that the RegNet model showed the highest classification accuracy of 89% on the proposed dataset compared to standard deep learning models.
9

Bryła, Jakub, Adam Martowicz, Maciej Petko, Konrad Gac, Konrad Kobus, and Artur Kowalski. "Wear Analysis of 3D-Printed Spur and Herringbone Gears Used in Automated Retail Kiosks Based on Computer Vision and Statistical Methods." Materials 16, no. 16 (August 10, 2023): 5554. http://dx.doi.org/10.3390/ma16165554.

Abstract:
This paper focuses on a wear evaluation conducted for prototype spur and herringbone gears made from PET-G filament using additive manufacturing. The main objective of this study is to verify whether 3D-printed gears can be considered a reliable choice for long-term exploitation in selected mechanical systems, specifically automated retail kiosks. For this reason, two methods were applied, utilizing: (1) vision-based inspection of the gears' cross-sectional geometry and (2) statistical characterization of selected kinematic parameters and of the torques generated by the drives. The former method involves destructive testing and allows for identification of the gears' operation-induced geometric shape evolution, whereas the latter method focuses on searching for nondestructive kinematic and torque-based indicators that allow tracking of the wear. The novel contribution presented in this paper is the conceptual and experimental demonstration of identifying wear-induced changes in the geometric properties of 3D-printed parts. The inspected exploited and non-exploited 3D-printed parts underwent encasing in resin and a curing process, followed by cutting in a specific plane to reveal the desired shapes, before finally being subjected to a vision-based geometric characterization. The authors have experimentally demonstrated, in real industrial conditions and on batch production parts, the usefulness of the presented destructive testing technique, which provides valid indices for wear identification.
10

Heydarzadeh, Mohsen, Nima Karbasizadeh, Mehdi Tale Masouleh, and Ahmad Kalhor. "Experimental kinematic identification and position control of a 3-DOF decoupled parallel robot." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 233, no. 5 (May 31, 2018): 1841–55. http://dx.doi.org/10.1177/0954406218775906.

Abstract:
This paper aims at using a kinematic identification procedure in order to enhance the control of a 3-DOF fully decoupled parallel robot, the so-called "Tripteron." From a practical standpoint, manufacturing errors lead to some kinematic uncertainties in the robot, which cause the real kinematic equations of the robot to differ from the theoretical ones. In this paper, using a white-box identification procedure, the independence of the degrees of freedom in the robot is studied. Considering the fact that the kinematic identification of a robotic manipulator requires the position of its end-effector to be known, in this paper the "Kinect" sensor, a combined vision and infrared sensor, is utilized to obtain the spatial coordinates of the end-effector. In order to calibrate the Kinect, a novel approach based on a neuro-fuzzy algorithm, the so-called "LoLiMoT" algorithm, is used. Moreover, the results of experimentally performing the identification and calibration approach are used to implement a closed-loop classic controller for path tracking purposes. Furthermore, the theoretical unidentified model was implemented in a sliding mode robust controller in order to compare the results with the classic controller. The comparison reveals that the classic controller, which uses the identified model, leads to better performance in terms of accuracy and control effort with respect to the robust controller, which is purely based on the theoretical model.
11

Chen, Yen-Lin, Chin-Hsuan Liu, Chao-Wei Yu, Posen Lee, and Yao-Wen Kuo. "An Upper Extremity Rehabilitation System Using Efficient Vision-Based Action Identification Techniques." Applied Sciences 8, no. 7 (July 17, 2018): 1161. http://dx.doi.org/10.3390/app8071161.

Abstract:
This study proposes an action identification system for home upper extremity rehabilitation. In the proposed system, we apply an RGB-depth (color-depth) sensor to capture image sequences of the patient's upper extremity actions to identify its movements. We apply a skin color detection technique to assist with extremity identification and to build up the upper extremity skeleton points. We use the dynamic time warping algorithm to determine the rehabilitation actions. The system presented herein builds up upper extremity skeleton points rapidly. Through the upper extremity of the human skeleton and human skin color information, the upper extremity skeleton points are effectively established by the proposed system, and the rehabilitation actions of patients are identified by the dynamic time warping algorithm. Thus, the proposed system can achieve a high recognition rate of 98% for the defined rehabilitation actions for the various muscles. Moreover, the computational speed of the proposed system can reach 125 frames per second; the processing time per frame is less than 8 ms on a personal computer platform. This computational efficiency allows extensibility for future developments dealing with complex ambient environments and implementation in embedded and pervasive systems. The major contributions of the study are: (1) the proposed system is not only a physical exercise game, but also a movement training program for specific muscle groups; (2) the hardware of the upper extremity rehabilitation system comprises a personal computer and a depth camera, which is economical equipment, so that patients who need this system can set one up at home; (3) patients can perform rehabilitation actions in a sitting position to prevent them from falling during training; (4) the accuracy rate of identifying rehabilitation actions is as high as 98%, which is sufficient for distinguishing between correct and wrong actions when performing specific action trainings; (5) the proposed upper extremity rehabilitation system is real-time, efficient in vision-based action identification, and uses low-cost hardware and software, which is affordable for most families.
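A minimal sketch of the dynamic time warping step used to match a recorded movement against reference rehabilitation actions; the per-frame skeleton-point feature layout and the template dictionary are assumptions:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two skeleton sequences.

    seq_a: (Ta, D), seq_b: (Tb, D) arrays of D-dimensional upper-extremity
    skeleton-point features per frame (the layout is an assumption).
    """
    Ta, Tb = len(seq_a), len(seq_b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]

def identify_action(query, templates):
    """Label the query with the nearest reference rehabilitation action."""
    return min(templates, key=lambda name: dtw_distance(query, templates[name]))
```

A query sequence would then be labeled with `identify_action(query, {"raise_arm": ref1, "rotate_shoulder": ref2})`, where the template names are hypothetical.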
12

Jung, SeHee, Bingyi Su, Hanwen Wang, Lu Lu, Ziyang Xie, Xu Xu, and Edward P. Fitts. "A computer vision-based lifting task recognition method." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 66, no. 1 (September 2022): 1210–14. http://dx.doi.org/10.1177/1071181322661507.

Abstract:
Low-back musculoskeletal disorders (MSDs) are a major cause of work-related injury among workers in manual material handling (MMH). Epidemiology studies show that excessive repetition is one of the major risk factors for low-back MSDs. Thus, it is essential to monitor the frequency of lifting tasks for an ergonomics intervention. In current field practice, safety practitioners need to manually observe workers to identify their lifting frequency, which is time-consuming and labor-intensive. In this study, we propose a method that can recognize lifting actions from videos using computer vision and deep neural networks. The open-source package OpenPose was first adopted to detect bony landmarks of the human body in real time. Interpolation and scaling techniques were then applied to prevent missing points and to offset different recording environments. Spatial and temporal kinematic features of human motion were then derived. These features were fed into long short-term memory networks for lifting action recognition. The results show that the F1-score of the lifting action recognition is 0.88. The proposed method has the potential to monitor lifting frequency in an automated way and thus could lead to a more practical ergonomics intervention.
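The interpolation and scaling steps might look like the sketch below: low-confidence OpenPose keypoints are filled in by linear interpolation over time, and each frame is normalised by hip position and torso length to offset different recording environments. The confidence threshold and the BODY_25 joint indices are assumptions:

```python
import numpy as np

def clean_keypoints(frames, conf_thresh=0.1):
    """Interpolate missing OpenPose keypoints and normalise scale.

    frames: (T, J, 3) array of (x, y, confidence) per joint; joints with
    low confidence are treated as missing.
    """
    xy = frames[..., :2].copy()
    missing = frames[..., 2] < conf_thresh
    t = np.arange(len(frames))
    for j in range(xy.shape[1]):
        ok = ~missing[:, j]
        if ok.any():
            for k in range(2):
                xy[:, j, k] = np.interp(t, t[ok], xy[ok, j, k])
    # Offset different recording setups: centre on the hips, scale by torso length
    hip, neck = xy[:, 8], xy[:, 1]        # OpenPose BODY_25: MidHip, Neck
    torso = np.linalg.norm(neck - hip, axis=1, keepdims=True) + 1e-6
    return (xy - hip[:, None, :]) / torso[:, None, :]
```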
13

Knyaz, V. A., A. A. Maksimov, and M. M. Novikov. "VISION BASED AUTOMATED ANTHROPOLOGICAL MEASUREMENTS AND ANALYSIS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W12 (May 9, 2019): 117–22. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w12-117-2019.

Abstract:
Modern techniques of optical 3D measurement, such as photogrammetry, computer tomography, and laser 3D scanning, provide new possibilities for acquiring accurate 2D and 3D data of high resolution, thus creating new conditions for anthropological data analysis. Traditional manual anthropological point measurements can be substituted by the analysis of accurate textured 3D models, which allow more information about the studied object to be retrieved and data to be easily shared for independent analysis. The paper presents vision-based techniques for the anthropological measurement and investigation needed in various applications, such as morphometric analysis, craniofacial identification, face approximation, and others. Photogrammetric methods and their practical implementation in an automatic system for accurate digital 3D reconstruction of anthropological objects are described.
14

Cosenza, C., R. Brancati, V. Niola, and S. Savino. "Experimental Investigation on the Kinematics of an Underactuated Mechanical Finger through Vision-Based Technology." WSEAS TRANSACTIONS ON ENVIRONMENT AND DEVELOPMENT 18 (February 9, 2022): 322–32. http://dx.doi.org/10.37394/232015.2022.18.32.

Abstract:
Marker-less vision techniques represent a promising route in the development of experimental methods to study the kinematic and dynamic parameters of mechanical systems. Knowledge of a great number of these parameters is a fundamental issue in system behaviour analysis and represents an even more crucial aspect for underactuated mechanical systems. In this paper, a technique is proposed to identify the kinematics of the phalanges of an underactuated mechanical finger, starting from the acquisition of the finger point cloud data by means of contactless vision system devices. The identified analytical model makes it possible to determine the underactuated finger configuration as a function of the shaft rotation of the mechanical system's single motor.
15

Kwon, Jaerock, Yunju Lee, and Jehyung Lee. "Comparative Study of Markerless Vision-Based Gait Analyses for Person Re-Identification." Sensors 21, no. 24 (December 8, 2021): 8208. http://dx.doi.org/10.3390/s21248208.

Abstract:
Model-based gait analysis of the kinematic characteristics of the human body has been used to identify individuals. To extract gait features, spatiotemporal changes of anatomical landmarks of the human body in 3D are preferable. Without special lab settings, 2D images are easily acquired by monocular video cameras in real-world settings. The 2D and 3D locations of key joint positions are estimated by 2D and 3D pose estimators, so the 3D joint positions can be estimated from 2D image sequences of human gait. Yet, it has been challenging to obtain the exact gait features of a person due to viewpoint variance and occlusion of body parts in the 2D images. In this study, we conducted a comparative study of two different approaches to viewpoint-invariant person re-identification using gait patterns: feature-based and spatiotemporal-based. The first method uses gait features extracted from time-series 3D joint positions to identify an individual. The second method uses a neural network, a Siamese Long Short Term Memory (LSTM) network, with the 3D spatiotemporal changes of key joint positions in a gait cycle to classify an individual without extracting gait features. To validate and compare these two methods, we conducted experiments with two open datasets, MARS and CASIA-A. The results show that the Siamese LSTM outperforms the gait feature-based approaches by 20% on the MARS dataset and by 55% on the CASIA-A dataset. The results suggest that feature-based gait analysis using 2D and 3D pose estimators is premature. As a future study, we suggest developing large-scale human gait datasets and designing accurate 2D and 3D joint position estimators specifically for gait patterns. We expect that the current comparative study and the future work could contribute to rehabilitation studies, forensic gait analysis, and the early detection of neurological disorders.
16

Andreff, Nicolas, and Philippe Martinet. "Unifying Kinematic Modeling, Identification, and Control of a Gough–Stewart Parallel Robot Into a Vision-Based Framework." IEEE Transactions on Robotics 22, no. 6 (December 2006): 1077–86. http://dx.doi.org/10.1109/tro.2006.882931.

17

Zhang, Dashan, Andong Zhu, Wenhui Hou, Lu Liu, and Yuwei Wang. "Vision-Based Structural Modal Identification Using Hybrid Motion Magnification." Sensors 22, no. 23 (November 29, 2022): 9287. http://dx.doi.org/10.3390/s22239287.

Abstract:
As a promising alternative to conventional contact sensors, vision-based technologies for structural dynamic response measurement and health monitoring have attracted much attention from the research community. Among these technologies, Eulerian video magnification has a unique capability of analyzing modal responses and visualizing modal shapes. To reduce noise interference and improve the quality and stability of the modal shape visualization, this study proposes a hybrid motion magnification framework that combines linear and phase-based motion processing. Based on the assumption that temporal variations can represent spatial motions, the linear motion processing extracts and manipulates the temporal intensity variations related to modal responses through matrix decomposition and underdetermined blind source separation (BSS) techniques. Meanwhile, the theory of Fourier transform profilometry (FTP) is utilized to reduce spatial high-frequency noise. As all spatial motions in a video are linearly controllable, the subsequent phase-based motion processing highlights the motions and visualizes the modal shapes with higher quality. The proposed method is validated by two laboratory experiments and a field test on a large-scale truss bridge. The quantitative evaluation results with high-speed cameras demonstrate that the hybrid method performs better than the single-step phase-based motion magnification method in visualizing sound-induced subtle motions. In the field test, the vibration characteristics of the truss bridge when a train is driving across it are studied with a commercial camera over 400 m away. Moreover, four full-field modal shapes of the bridge are successfully observed.
18

Jurczyk, Karolina, Paweł Piskur, and Piotr Szymak. "Parameters Identification of the Flexible Fin Kinematics Model Using Vision and Genetic Algorithms." Polish Maritime Research 27, no. 2 (June 1, 2020): 39–47. http://dx.doi.org/10.2478/pomr-2020-0025.

Abstract:
Recently, a new type of autonomous underwater vehicle has used artificial fins to imitate the movements of marine animals, e.g. fish. These vehicles are biomimetic, and their driving system is an undulating propulsion. There are two main methods of reproducing undulating motion. The first method uses a flexible tail fin, which is connected to a rigid hull by a movable axis. The second method is based on the synchronised operation of several mechanical joints to imitate the tail movement that can be observed in real marine animals such as fish. This paper examines the first method of reproducing tail fin movement. The goal of the research presented in the paper is to identify the parameters of the one-piece flexible fin kinematics model. The model needs further analysis, e.g. using it with Computational Fluid Dynamics (CFD), in order to select the most suitable prototype for a Biomimetic Underwater Vehicle (BUV). The background of the work is explained in the first section of the paper, and the kinematic model of the flexible fin is described in the next section. The following section, entitled Materials and Methods, includes a description of a laboratory water tunnel test, a description of the Vision Algorithm (VA), which was used to determine the positions of the fin, and the Genetic Algorithm (GA), which was used to find the parameters of the fin kinematics model. In the next section, the results of the research are presented and discussed. The paper ends with a summary containing the main conclusions and an outline of future research.
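As a rough illustration of fitting a fin kinematics model to vision-extracted deflections with an evolutionary optimiser, the sketch below fits an assumed travelling-wave model using SciPy's differential evolution as a GA-like stand-in for the paper's genetic algorithm; the model form, parameter bounds, and file name are all assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

# y_obs[t_i, x_j]: fin deflections extracted from video frames by the vision
# algorithm; chord positions x and timestamps t are assumed known.
x = np.linspace(0.0, 0.15, 20)            # m, along the fin
t = np.linspace(0.0, 2.0, 100)            # s
y_obs = np.load("fin_deflections.npy")    # hypothetical export, shape (100, 20)

def model(params, x, t):
    """Assumed travelling wave: amplitude grows linearly along the chord."""
    a0, a1, k, omega, phi = params
    X, T = np.meshgrid(x, t)
    return (a0 + a1 * X) * np.sin(k * X - omega * T + phi)

def fitness(params):
    """Mean squared error between the model and the observed deflections."""
    return np.mean((model(params, x, t) - y_obs) ** 2)

bounds = [(0, 0.05), (0, 0.5), (0, 50), (0, 30), (-np.pi, np.pi)]
result = differential_evolution(fitness, bounds, seed=0)
print("identified parameters:", result.x)
```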
19

Gokaraju, Jnana Sai Abhishek Varma, Weon Keun Song, Min-Ho Ka, and Somyot Kaitwanidvilai. "Human and bird detection and classification based on Doppler radar spectrograms and vision images using convolutional neural networks." International Journal of Advanced Robotic Systems 18, no. 3 (May 1, 2021): 172988142110105. http://dx.doi.org/10.1177/17298814211010569.

Abstract:
The study investigated object detection and classification based on both Doppler radar spectrograms and vision images using two deep convolutional neural networks. The kinematic models of a walking human and of a bird flapping its wings were incorporated into MATLAB simulations to create data sets. The dynamic simulator identified the final position of each ellipsoidal body segment, taking its rotational motion into consideration in addition to its bulk motion at each sampling point, to describe its specific motion naturally. The total motion induced a micro-Doppler effect and created a micro-Doppler signature that varied in response to changes in the input parameters, such as varying body segment size, velocity, and radar location. Micro-Doppler signature identification of the radar signals returned from the target objects animated by the simulator required kinematic modeling based on a short-time Fourier transform analysis of the signals. Both You Only Look Once V3 and Inception V3 were used for the detection and classification of the objects, with different red, green, and blue colors on black or white backgrounds. The results suggest that clear micro-Doppler signature image-based object recognition can be achieved in low-visibility conditions. This feasibility study demonstrated the possibility of applying Doppler radar to autonomous vehicle driving as a backup sensor for cameras in darkness. In this study, the first successful application of animated kinematic models and their synchronized radar spectrograms to object recognition was made.
20

Sultan, Ahmed, Walied Makram, Mohammed Kayed, and Abdelmaged Amin Ali. "Sign language identification and recognition: A comparative study." Open Computer Science 12, no. 1 (January 1, 2022): 191–210. http://dx.doi.org/10.1515/comp-2022-0240.

Abstract:
Sign Language (SL) is the main language for handicapped and disabled people. Each country has its own SL, which is different from those of other countries. Each sign in a language is represented with variant hand gestures, body movements, and facial expressions. Researchers in this field aim to remove any obstacles that prevent communication with deaf people by replacing all device-based techniques with vision-based techniques using Artificial Intelligence (AI) and Deep Learning. This article highlights two main SL processing tasks: Sign Language Recognition (SLR) and Sign Language Identification (SLID). The latter task aims to identify the signer's language, while the former aims to translate the signer's conversation into tokens (signs). The article addresses the most common datasets used in the literature for the two tasks (static and dynamic datasets collected from different corpora), with contents including numerals, alphabets, words, and sentences from different SLs. It also discusses the devices required to build these datasets, as well as the different preprocessing steps applied before training and testing. The article compares the different approaches and techniques applied to these datasets. It discusses both the vision-based and the data-glove-based approaches, aiming to analyze and focus on the main methods used in vision-based approaches, such as hybrid methods and deep learning algorithms. Furthermore, the article presents a graphical depiction and a tabular representation of various SLR approaches.
21

Parkhi, Abhinav, and Atish Khobragade. "Review on deep learning based techniques for person re-identification." 3C TIC: Cuadernos de desarrollo aplicados a las TIC 11, no. 2 (December 29, 2022): 208–23. http://dx.doi.org/10.17993/3ctic.2022.112.208-223.

Abstract:
In-depth research has recently been concentrated on human re-identification, which is a crucial component of automated video surveillance. Re-identification is the act of identifying someone in photos or videos acquired from other cameras after they have already been recognized in an image or video from one camera. Re-identification, which involves generating consistent labelling between several cameras, or even just one camera, is required to reconnect missing or interrupted tracks. In addition to surveillance, it may be used in forensics, multimedia, and robotics. Re-identification of a person is a difficult problem, since their appearance fluctuates across many cameras with visual ambiguity and spatiotemporal uncertainty. These issues can largely be caused by inadequate video feeds or low-resolution photos that are full of irrelevant details and prevent re-identification. The geographical or temporal restrictions of the challenge are difficult to capture. The computer vision research community has given the problem a lot of attention because of how widely used and valuable it is. In this article, we look at the issue of human re-identification and discuss some viable approaches.
22

Tahir, Muhammad, and Saeed Anwar. "Transformers in Pedestrian Image Retrieval and Person Re-Identification in a Multi-Camera Surveillance System." Applied Sciences 11, no. 19 (October 2, 2021): 9197. http://dx.doi.org/10.3390/app11199197.

Abstract:
Person Re-Identification is an essential task in computer vision, particularly in surveillance applications. The aim is to identify a person based on an input image from surveillance photographs in various scenarios. Most person re-ID techniques utilize Convolutional Neural Networks (CNNs); however, Vision Transformers are replacing pure CNNs in various computer vision tasks such as object recognition, classification, etc. Vision transformers contain information about local regions of the image. Current techniques take advantage of this to improve the accuracy of the tasks at hand. We propose to use vision transformers in conjunction with vanilla CNN models to investigate the true strength of transformers in person re-identification. We employ three backbones with different combinations of vision transformers on two benchmark datasets. The overall performance of the backbones increased, showing the importance of vision transformers. We provide ablation studies and show the importance of various components of the vision transformers in re-identification tasks.
23

Slutski, Leonid. "Online Telecontrol Techniques Based on Object Parameter Adjusting." Presence: Teleoperators and Virtual Environments 6, no. 3 (June 1997): 255–67. http://dx.doi.org/10.1162/pres.1997.6.3.255.

Abstract:
An approach to telerobotic system organization with manipulator variable parameters is developed. It is intended for solution of manipulation problems when fast transportation operations are combined with high-precision positioning operations. Based on previous research, manipulator gain was chosen as a means for system quality control. It was proposed that the human operator should personally adjust the robot parameters in compliance with situation requirements. Besides such direct parameter adjustment, another implementation of this concept based on indirect adjustment (such as by analog circuit) was also developed. An additional channel of parameter control was introduced into the system in these cases. A new hand-controller design and a method for synthesis of such system algorithms were also developed. The ground has thus been laid for the kinematic coordinate-parameter control for the main regimes of telerobot work. The described approach results in the organization of effective online systems with sufficiently simple control algorithms. By means of testing, the efficiency of these systems is shown.
24

Hoang, Xuan-Bach, Phu-Cuong Pham, and Yong-Lin Kuo. "Collision Detection of a HEXA Parallel Robot Based on Dynamic Model and a Multi-Dual Depth Camera System." Sensors 22, no. 15 (August 8, 2022): 5923. http://dx.doi.org/10.3390/s22155923.

Abstract:
This paper introduces a Hexa parallel robot obstacle collision detection method based on dynamic modeling and a computer vision system. The processes for dealing with collision issues in this paper cover collision detection, collision isolation, and collision identification applied to the Hexa robot. Initially, the configuration and the kinematic and dynamic characteristics of the Hexa parallel robot during movement trajectories are analyzed to perform the knowledge extraction for the method. Next, a virtual force sensor is presented to estimate the collision detection signal, created as a combination of the solution to the inverse dynamics and a low-pass filter. Then, a vision system consisting of dual depth cameras is designed for obstacle isolation and for determining the contact point location at the end-effector, an arm, or a rod of the Hexa robot. Finally, a recursive Newton-Euler algorithm is applied to compute the contact forces caused by collision cases with the real Hexa robot. Based on the experimental results, the identified forces are compared to measured sensor forces to evaluate the performance of the proposed collision detection method.
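A minimal sketch of such a virtual force sensor, assuming arrays of measured drive torques and of torques predicted by the inverse dynamic model for the commanded trajectory; the filter order, cut-off frequency, and threshold logic are assumptions rather than the paper's values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def collision_signal(tau_measured, tau_model, fs=1000.0, fc=20.0):
    """Virtual force sensor: low-pass-filtered joint-torque residual.

    tau_measured: (T, n) drive torques, tau_model: (T, n) torques predicted
    by the inverse dynamic model; fs is the sampling rate in Hz.
    """
    residual = tau_measured - tau_model
    b, a = butter(2, fc / (fs / 2.0), btype="low")  # 2nd-order Butterworth
    return filtfilt(b, a, residual, axis=0)         # zero-phase filtering

def detect_collision(signal, threshold):
    """Flag samples whose filtered residual exceeds a per-joint threshold."""
    return np.any(np.abs(signal) > threshold, axis=1)
```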
25

WANG, MIN, XIADONG LV, and XINHAN HUANG. "VISION BASED MOTION CONTROL AND TRAJECTORY TRACKING FOR MICROASSEMBLY ROBOTS." International Journal of Information Acquisition 04, no. 03 (September 2007): 237–49. http://dx.doi.org/10.1142/s0219878907001319.

Abstract:
This paper presents vision-based motion control and trajectory tracking strategies for microassembly robots, including a self-optimizing visual servoing depth motion control method and a novel trajectory snake tracking strategy. To measure micromanipulator depth motion, a normalized gray-variance focus measure operator is developed using depth-from-focus techniques. The extracted defocus features are theoretically distributed with one peak point, which can be applied to locate the microscopic focal depth via self-optimizing control. Tracking differentiators are developed to suppress noise and track the features and their differential values without oscillation. Based on the differential defocus signals, a coarse-to-fine self-optimizing controller is presented for the micromanipulator to precisely locate the focus depth. In addition, a novel trajectory snake energy function of robotic motion is defined, involving kinematic energy, curve potential, and image potential energy. The motion trajectory can be located by searching the converged energy distribution of the snake function. Energy weights in the function are adjusted in real time to avoid local minima during convergence. To improve snake searching efficiency, a quadratic-trajectory least square estimator is employed to predict the manipulator motion position before tracking. Experimental results in a microassembly robotic system demonstrate that the proposed strategies are successful and effective.
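A sketch of a normalized gray-variance focus measure and a naive depth sweep built on it; `grab_frame` is a hypothetical stage-plus-camera callable, and a real implementation would refine the peak with the coarse-to-fine self-optimizing search the paper describes:

```python
import numpy as np

def normalized_gray_variance(img):
    """Normalized gray-variance focus measure: intensity variance divided
    by mean intensity, which peaks near the focal plane."""
    g = img.astype(np.float64)
    return g.var() / (g.mean() + 1e-9)

def locate_focus_depth(grab_frame, depths):
    """Coarse sweep over candidate depths, keeping the sharpest one.

    grab_frame(z) moves the stage to depth z and returns an image.
    """
    scores = [normalized_gray_variance(grab_frame(z)) for z in depths]
    return depths[int(np.argmax(scores))]
```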
26

Rohling, Robert N., and John M. Hollerbach. "Calibrating the Human Hand for Haptic Interfaces." Presence: Teleoperators and Virtual Environments 2, no. 4 (January 1993): 281–96. http://dx.doi.org/10.1162/pres.1993.2.4.281.

Abstract:
Determination of human hand poses from hand master measurements of joint angles requires an accurate human hand model for each operator. A new method for human hand calibration is proposed, based on open-loop kinematic calibration. The parameters of a kinematic model of the human index finger are determined as an example. Singular value decomposition is used as a tool for analyzing the kinematic model and the identification process. It was found that accurate and reliable results are obtained only when the numerical condition is minimized through parameter scaling, model reduction and pose set selection. The identified kinematic parameters of the index finger with the Utah Dextrous Hand Master show that the kinematic model and the calibration procedure have an accuracy of about 2 mm.
27

Kaur, Surleen, and Prabhpreet Kaur. "Plant Species Identification based on Plant Leaf Using Computer Vision and Machine Learning Techniques." Journal of Multimedia Information System 6, no. 2 (June 30, 2019): 49–60. http://dx.doi.org/10.33851/jmis.2019.6.2.49.

28

Kong, Xuan, Tengyi Wang, Jie Zhang, Lu Deng, Jiwei Zhong, Yuping Cui, and Shudong Xia. "Tire Contact Force Equations for Vision-Based Vehicle Weight Identification." Applied Sciences 12, no. 9 (April 28, 2022): 4487. http://dx.doi.org/10.3390/app12094487.

Abstract:
Overloaded vehicles have a variety of adverse effects; they not only damage pavements, bridges, and other infrastructure but also threaten the safety of human life. Therefore, it is necessary to address the problem of overloading, and this requires the accurate identification of the vehicle weight. Many methods have been used to identify vehicle weights. Most of them use contact methods that require sensors attached to or embedded in the road or bridge, which have disadvantages such as high cost, low accuracy, and poor durability. The authors have developed a vehicle weight identification method based on computer vision. The methodology identifies the tire–road contact force by establishing the relationship using the tire vertical deflection, which is extracted using computer vision techniques from the tire image. The focus of the present paper is to study the tire–road contact mechanism and develop tire contact force equations. Theoretical derivations and numerical simulations were conducted first to establish the tire force–deformation equations. The proposed vision-based vehicle weight identification method was then validated with field experiments using two passenger cars and two trucks. The effects of different tire specifications, loads, and inflation pressures were studied as well. The experiment showed that the results predicted by the proposed method agreed well with the measured results. Compared with the traditional method, the developed method based on tire mechanics and computer vision has the advantages of high accuracy and efficiency, easy operation, low cost, and there is no need to lay out sensors; thus, it provides a new approach to vehicle weighing.
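To make the weighing idea concrete, here is a heavily simplified sketch in which each tire's contact force is estimated from its vision-measured vertical deflection and inflation pressure, assuming the load is carried by the pressure acting over the contact patch; the geometry values are placeholders, not the contact force equations derived in the paper:

```python
import numpy as np

def tire_contact_force(deflection, pressure, width=0.215, radius=0.33):
    """Simplified contact force from the vision-measured vertical deflection.

    Assumes F ~= p * A, with the patch length taken as the chord of a circle
    of radius R deflected by delta; width and radius are placeholder tire
    dimensions in metres, pressure is in Pa.
    """
    contact_len = 2.0 * np.sqrt(max(2.0 * radius * deflection - deflection**2, 0.0))
    return pressure * width * contact_len      # N

def vehicle_weight(deflections, pressures, g=9.81):
    """Sum per-tire contact forces and convert to a mass estimate in kg."""
    forces = [tire_contact_force(d, p) for d, p in zip(deflections, pressures)]
    return sum(forces) / g

# Example: four tires, 18 mm deflection each at 240 kPa
print(vehicle_weight([0.018] * 4, [240e3] * 4))
```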
29

Raza, Ali, Mohammad Rustom Al Nasar, Essam Said Hanandeh, Raed Abu Zitar, Ahmad Yacoub Nasereddin, and Laith Abualigah. "A Novel Methodology for Human Kinematics Motion Detection Based on Smartphones Sensor Data Using Artificial Intelligence." Technologies 11, no. 2 (April 11, 2023): 55. http://dx.doi.org/10.3390/technologies11020055.

Abstract:
Kinematic motion detection aims to determine a person's actions based on activity data. Human kinematic motion detection has many valuable applications in health care, such as health monitoring, preventing obesity, virtual reality, daily life monitoring, assisting workers during industrial manufacturing, and caring for the elderly. Computer vision-based activity recognition is challenging due to problems such as partial occlusion, background clutter, appearance, lighting, viewpoint, and changes in scale. Our research aims to detect human kinematic motions such as walking or running using smartphone sensor data within a high-performance framework. An existing dataset based on smartphones' gyroscope and accelerometer sensor values is utilized for the experiments in our study. Exploratory analysis of the sensor data was conducted in order to identify valuable patterns and insights from the sensor values. Six hyperparameter-tuned artificial intelligence-based machine learning and deep learning techniques were applied for comparison. Extensive experimentation showed that the ensemble learning-based novel ERD (ensemble random forest decision tree) method outperformed other state-of-the-art studies with high-performance accuracy scores. The proposed ERD method combines the random forest and decision tree models, which achieved a 99% classification accuracy score. The proposed method was successfully validated with the k-fold cross-validation approach.
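One plausible reading of the ERD combination is a soft-voting ensemble of a random forest and a decision tree over windowed accelerometer and gyroscope features, sketched below with scikit-learn; the feature files and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# X: windowed accelerometer/gyroscope feature vectors, y: activity labels
# (e.g., walking vs. running); hypothetical exported files.
X, y = np.load("motion_features.npy"), np.load("motion_labels.npy")

erd = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("dt", DecisionTreeClassifier(max_depth=10))],
    voting="soft",          # average predicted class probabilities
)
scores = cross_val_score(erd, X, y, cv=10)   # k-fold validation as in the paper
print("mean accuracy:", scores.mean())
```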
30

Kumar, Santosh, Sanjay Kumar Singh, and Amit Kumar Singh. "Muzzle point pattern based techniques for individual cattle identification." IET Image Processing 11, no. 10 (October 1, 2017): 805–14. http://dx.doi.org/10.1049/iet-ipr.2016.0799.

31

Gupta, Ashish Kumar, Ayan Seal, Mukesh Prasad, and Pritee Khanna. "Salient Object Detection Techniques in Computer Vision—A Survey." Entropy 22, no. 10 (October 19, 2020): 1174. http://dx.doi.org/10.3390/e22101174.

Abstract:
Detection and localization of regions of images that attract immediate human visual attention is currently an intensive area of research in computer vision. The capability of automatic identification and segmentation of such salient image regions has immediate consequences for applications in the fields of computer vision, computer graphics, and multimedia. A large number of salient object detection (SOD) methods have been devised to effectively mimic the capability of the human visual system to detect salient regions in images. These methods can be broadly categorized into two categories based on their feature engineering mechanism: conventional or deep learning-based. In this survey, most of the influential advances in image-based SOD from both the conventional and deep learning-based categories have been reviewed in detail. Relevant saliency modeling trends, with key issues, core techniques, and the scope for future research work, have been discussed in the context of difficulties often faced in salient object detection. Results are presented for various challenging cases on some large-scale public datasets. Different metrics considered for the assessment of the performance of state-of-the-art salient object detection models are also covered. Some future directions for SOD are presented toward the end.
32

Chopra, Gaurav, and Pawan Whig. "Analysis of Tomato Leaf Disease Identification Techniques." Journal of Computer Science and Engineering (JCSE) 2, no. 2 (August 10, 2021): 98–103. http://dx.doi.org/10.36596/jcse.v2i2.171.

Abstract:
India loses thousands of metric tons of tomato crop every year due to pests and diseases. Tomato leaf disease is a major issue that causes significant losses to farmers and poses a threat to the agriculture sector. Understanding how an algorithm learns to classify different types of tomato leaf disease will help scientists and engineers build accurate models for tomato leaf disease detection. Convolutional neural networks with backpropagation algorithms have achieved great success in diagnosing various plant diseases. However, human benchmarks in diagnosing plant disease have still not been matched by any computer vision method. Under different conditions, the accuracy of plant identification systems is much lower than expected. This study analyzes the features learned by the backpropagation algorithm and reviews the state-of-the-art results achieved by image-based classification methods. The analysis is shown through gradient-based visualization methods. In our analysis, the most descriptive approach to generating attention maps is Grad-CAM. Moreover, it is also shown that, using a learning algorithm other than backpropagation, it is possible to achieve accuracy comparable to that of deep learning models. Hence, state-of-the-art results might show that Convolutional Neural Networks achieve human-comparable accuracy in tomato leaf disease classification through supervised learning. However, both genetic algorithms and semi-supervised models hold the potential to build precise systems for tomato leaf disease detection.
33

Lin, Huei-Yung, Jyun-Min Dai, Lu-Ting Wu, and Li-Qi Chen. "A Vision-Based Driver Assistance System with Forward Collision and Overtaking Detection." Sensors 20, no. 18 (September 9, 2020): 5139. http://dx.doi.org/10.3390/s20185139.

Abstract:
One major concern in the development of intelligent vehicles is improving driving safety. It is also an essential issue for future autonomous driving and intelligent transportation. In this paper, we present a vision-based system for driving assistance. A front and a rear on-board camera are adopted for visual sensing and environment perception. The purpose is to avoid potential traffic accidents due to forward collision and vehicle overtaking, and to assist drivers or self-driving cars in performing safe lane change operations. The proposed techniques consist of lane change detection, forward collision warning, and overtaking vehicle identification. A new cumulative density function (CDF)-based symmetry verification method is proposed for the detection of front vehicles. The motion cue obtained from optical flow is used for overtaking detection. It is further combined with a convolutional neural network to remove repetitive patterns for more accurate overtaking vehicle identification. Our approach is able to adapt to a variety of highway and urban scenarios under different illumination conditions. The experiments and performance evaluation carried out on real scene images have demonstrated the effectiveness of the proposed techniques.
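A toy sketch of a CDF-based symmetry check on a candidate region: the cumulative distribution of column intensities of the left half is compared with that of the mirrored right half, and near-symmetric patches are kept as front-vehicle candidates. The statistic and threshold are assumptions, not the paper's exact formulation:

```python
import numpy as np

def column_cdf(gray_patch):
    """Normalised cumulative distribution of column intensity sums."""
    col = gray_patch.sum(axis=0).astype(np.float64)
    cdf = np.cumsum(col)
    return cdf / (cdf[-1] + 1e-9)

def symmetry_score(gray_patch):
    """Compare the left-half CDF with the mirrored right-half CDF;
    vehicle rears are roughly left-right symmetric, so a small maximum
    difference (a KS-style statistic) supports a vehicle hypothesis."""
    w = gray_patch.shape[1] // 2
    left, right = gray_patch[:, :w], gray_patch[:, -w:][:, ::-1]
    return np.max(np.abs(column_cdf(left) - column_cdf(right)))

def is_vehicle_candidate(gray_patch, thresh=0.05):
    return symmetry_score(gray_patch) < thresh   # threshold is an assumption
```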
34

Brambilla, Paolo, Chiara Conese, Davide Maria Fabris, Paolo Chiariotti, and Marco Tarabini. "Algorithms for Vision-Based Quality Control of Circularly Symmetric Components." Sensors 23, no. 5 (February 24, 2023): 2539. http://dx.doi.org/10.3390/s23052539.

Abstract:
Quality inspection in the industrial production field is experiencing strong technological development that benefits from the combination of vision-based techniques with artificial intelligence algorithms. This paper initially addresses the problem of defect identification for circularly symmetric mechanical components characterized by the presence of periodic elements. In the specific case of knurled washers, we compare the performance of a standard algorithm for the analysis of grey-scale images with a Deep Learning (DL) approach. The standard algorithm is based on the extraction of pseudo-signals derived from the conversion of concentric annuli of the grey-scale image. In the DL approach, the component inspection is shifted from the entire sample to specific areas repeated along the object profile where a defect may occur. The standard algorithm provides better results in terms of accuracy and computational time with respect to the DL approach. Nevertheless, DL reaches an accuracy higher than 99% when performance is evaluated on the identification of damaged teeth. The possibility of extending the methods and results to other circularly symmetric components is analyzed and discussed.
35

Chen, Zhiwei, Xuzhi Ruan, and Yao Zhang. "Vision-Based Dynamic Response Extraction and Modal Identification of Simple Structures Subject to Ambient Excitation." Remote Sensing 15, no. 4 (February 9, 2023): 962. http://dx.doi.org/10.3390/rs15040962.

Abstract:
Vision-based modal analysis has gained popularity in the field of structural health monitoring due to significant advancements in optics and computer science. For long term monitoring, the structures are subjected to ambient excitation, so that their vibration amplitudes are quite small. Hence, although natural frequencies can be usually identified from the extracted displacements by vision-based techniques, it is still difficult to evaluate the corresponding mode shapes accurately due to limited resolution. In this study, a novel signal reconstruction algorithm is proposed to reconstruct the dynamic response extracted by the vision-based approach to identify the mode shapes of structures with low amplitude vibration due to environmental excitation. The experimental test of a cantilever beam shows that even if the vibration amplitude is as low as 0.01 mm, the first two mode shapes can be accurately identified if the proposed signal reconstruction algorithm is implemented, while without the proposed algorithm, they can only be identified when the vibration amplitude is at least 0.06 mm. The proposed algorithm can also perform well with various camera settings, indicating great potential to be used for vision-based structural health monitoring.
36

Baclig, Maria Martine, Noah Ergezinger, Qipei Mei, Mustafa Gül, Samer Adeeb, and Lindsey Westover. "A Deep Learning and Computer Vision Based Multi-Player Tracker for Squash." Applied Sciences 10, no. 24 (December 9, 2020): 8793. http://dx.doi.org/10.3390/app10248793.

Abstract:
Sports pose a unique challenge for high-speed, unobtrusive, uninterrupted motion tracking due to speed of movement and player occlusion, especially in the fast and competitive sport of squash. The objective of this study is to use video tracking techniques to quantify kinematics in elite-level squash. With the increasing availability and quality of elite tournament matches filmed for entertainment purposes, a new methodology of multi-player tracking for squash that only requires broadcast video as an input is proposed. This paper introduces and evaluates a markerless motion capture technique that uses an autonomous deep-learning-based human pose estimation algorithm and computer vision to detect and identify players. Inverse perspective mapping is utilized to convert pixel coordinates to court coordinates, and the distance traveled, court position, 'T' dominance, and average speeds of elite players are determined. The method was validated against results from a previous study that used manual tracking; the proposed method (with filtered coordinates) showed an average absolute percent error, relative to the manual approach, of 3.73% in total distance traveled, and of 3.52% and 1.26% in average speeds below 9 m/s with and without speeds below 1 m/s, respectively. The method has proven effective for collecting kinematic data on elite squash players in a timely manner, with no special camera setup and limited manual intervention.
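The inverse perspective mapping step can be sketched as a planar homography from the court corners seen in the broadcast frame to real-world floor coordinates (a squash court floor is 9.75 m by 6.4 m). The pixel corner positions below are illustrative assumptions.

```python
import cv2
import numpy as np

# Pixel positions of the four court floor corners in the broadcast frame
# (assumed values): front-left, front-right, back-right, back-left
px = np.float32([[672, 333], [1248, 331], [1499, 708], [421, 712]])
# Corresponding floor coordinates in metres (court is 6.4 m x 9.75 m)
court = np.float32([[0, 0], [6.4, 0], [6.4, 9.75], [0, 9.75]])

H = cv2.getPerspectiveTransform(px, court)

def to_court(u, v):
    """Map a player's foot position from pixels to court metres."""
    p = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
    return p[0, 0]

print(to_court(960, 540))   # e.g. a detected ankle keypoint
```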
37

Taha, Zahari, Jouh Yeong Chew, and Hwa Jen Yap. "Omnidirectional Vision for Mobile Robot Navigation." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 1 (January 20, 2010): 55–62. http://dx.doi.org/10.20965/jaciii.2010.p0055.

Abstract:
Machine vision has been widely studied, leading to many image-processing and identification techniques. At the same time, rapid advances in computer processing speed have triggered a growing need for vision sensor data and faster robot response. Considering the use of omnidirectional cameras in machine vision, we study omnidirectional image features in depth to determine correlations between parameters and ways to flatten 3-dimensional images into 2 dimensions. We also discuss ways to process omnidirectional images based on their individual features.
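One common way to flatten a catadioptric omnidirectional image into a 2-D panorama is to resample the donut-shaped mirror region along rays, as in the sketch below; the mirror centre, radii, and file names are illustrative assumptions rather than values from the paper.

```python
import cv2
import numpy as np

omni = cv2.imread("omni.png")          # hypothetical omnidirectional frame
cx, cy = 320.0, 240.0                  # assumed mirror centre
r_in, r_out = 40.0, 220.0              # assumed inner/outer mirror radii
pano_w, pano_h = 720, 180              # panorama size: angle x radius

theta = np.linspace(0, 2 * np.pi, pano_w, endpoint=False)
radius = np.linspace(r_in, r_out, pano_h)
# Each panorama pixel samples the source image at a polar location
map_x = (cx + radius[:, None] * np.cos(theta[None, :])).astype(np.float32)
map_y = (cy + radius[:, None] * np.sin(theta[None, :])).astype(np.float32)
panorama = cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("panorama.png", panorama)
```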
38

Singh, Law Kumar, Pooja, Hitendra Garg, and Munish Khanna. "An Artificial Intelligence-Based Smart System for Early Glaucoma Recognition Using OCT Images." International Journal of E-Health and Medical Communications 12, no. 4 (July 2021): 32–59. http://dx.doi.org/10.4018/ijehmc.20210701.oa3.

Abstract:
Glaucoma is a progressive, chronic eye disease that leads to loss of peripheral vision and, ultimately, to irreversible loss of sight. Detection and identification of glaucoma are essential for early treatment and for reducing vision loss. This motivates a study of an intelligent diagnosis system based on machine learning algorithms for glaucoma identification using three-dimensional optical coherence tomography (OCT) data. The experimental work covers 70 glaucomatous and 70 healthy eyes drawn from a combination of a public (Mendeley) dataset and a private dataset. Forty-five vital features were extracted from the OCT images using two approaches. K-nearest neighbor (KNN), linear discriminant analysis (LDA), decision tree, random forest, and support vector machine (SVM) classifiers were applied to categorize the OCT images into glaucomatous and non-glaucomatous classes. The largest AUC, 0.97, is achieved by KNN. Accuracy is estimated using five-fold cross-validation. This study will help raise the standard of glaucoma diagnosis.
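The classification stage can be reproduced in outline with scikit-learn: KNN on the 45 extracted features, scored with five-fold cross-validation as in the paper. The feature matrix below is random stand-in data, since the actual OCT features are not available here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 45))     # stand-in: 45 features, 140 eyes
y = np.repeat([0, 1], 70)          # 70 healthy, 70 glaucomatous

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("5-fold AUC: %.3f +/- %.3f" % (auc.mean(), auc.std()))
```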
39

Touya, G., B. Decherf, M. Lalanne, and M. Dumont. "COMPARING IMAGE-BASED METHODS FOR ASSESSING VISUAL CLUTTER IN GENERALIZED MAPS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W5 (August 19, 2015): 227–33. http://dx.doi.org/10.5194/isprsannals-ii-3-w5-227-2015.

Abstract:
Map generalization abstracts and simplifies geographic information to derive maps at smaller scales. Automating map generalization requires techniques to evaluate the global quality of a generalized map. The quality and legibility of a generalized map are related to its complexity, or the amount of clutter in the map, i.e., an excessive amount of information and its disorganization. Computer vision research has a strong interest in measuring clutter in images, and this paper compares some of the existing computer vision techniques as applied to the evaluation of generalized maps. Four techniques from the literature are described and tested on a large set of maps generalized at different scales: edge density, subband entropy, quadtree complexity, and segmentation clutter. The results are analyzed against several criteria related to generalized maps: the identification of cluttered areas, the preservation of the global amount of information, the handling of occlusions and overlaps, foreground vs. background, and blank space reduction.
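The simplest of the four measures, edge density, is easy to sketch: the fraction of pixels that an edge detector marks as edges. The Canny thresholds and map raster paths below are illustrative defaults, not the paper's calibrated settings.

```python
import cv2
import numpy as np

def edge_density(image_path):
    """Fraction of pixels marked as edges by the Canny detector."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 100, 200)
    return np.count_nonzero(edges) / edges.size

# Higher values indicate a more cluttered, less legible generalized map
for raster in ("map_50k.png", "map_250k.png"):   # hypothetical rasters
    print(raster, edge_density(raster))
```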
40

Cruz Ulloa, Christyan, Lourdes Sánchez, Jaime Del Cerro, and Antonio Barrientos. "Deep Learning Vision System for Quadruped Robot Gait Pattern Regulation." Biomimetics 8, no. 3 (July 3, 2023): 289. http://dx.doi.org/10.3390/biomimetics8030289.

Abstract:
Robots with bio-inspired locomotion systems, such as quadruped robots, have recently attracted significant scientific interest, especially those designed to tackle missions in unstructured terrains, as in search-and-rescue robotics. At the same time, artificial intelligence systems have allowed the locomotion capabilities of these robots to be improved and adapted to specific terrains, imitating the natural behavior of quadruped animals. The main contribution of this work is a method to adjust adaptive gait patterns for overcoming unstructured terrains with the ARTU-R (A1 Rescue Task UPM Robot) quadruped robot, based on a central pattern generator (CPG) and on automatic terrain identification and obstacle characterization (number, size, position, and superability analysis) through convolutional neural networks for pattern regulation. To develop this method, a study of dog gait patterns was carried out, with validation and adjustment through simulation on the robot model in ROS-Gazebo and subsequent transfer to the real robot. Outdoor tests were carried out to evaluate and validate the efficiency of the proposed method in terms of its success rate in overcoming stretches of unstructured terrain, as well as the kinematic and dynamic variables of the robot. The main results show that the proposed method achieves over 93% efficiency for terrain characterization (terrain identification, segmentation, and obstacle characterization) and over 91% success in overcoming unstructured terrains. The work was also compared against the main developments in the state of the art and against benchmark models.
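A CPG of the kind referenced here is often built from coupled limit-cycle oscillators. The sketch below integrates two diffusively coupled Hopf oscillators that lock in anti-phase, producing rhythmic set-points such as those driving opposing legs; the gains, frequency, and coupling are illustrative, not ARTU-R's tuned values.

```python
import numpy as np

def hopf_cpg(steps=4000, dt=0.001, mu=1.0, omega=2 * np.pi * 1.5, k=0.5):
    """Two coupled Hopf oscillators producing anti-phase rhythmic signals."""
    x1, y1, x2, y2 = 0.1, 0.0, -0.1, 0.0
    out = np.zeros((steps, 2))
    for i in range(steps):
        r1, r2 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2
        # (mu - r) attracts each oscillator to a limit cycle of radius
        # sqrt(mu); the -k * x_other coupling pushes the legs anti-phase
        dx1 = (mu - r1) * x1 - omega * y1 - k * x2
        dy1 = (mu - r1) * y1 + omega * x1
        dx2 = (mu - r2) * x2 - omega * y2 - k * x1
        dy2 = (mu - r2) * y2 + omega * x2
        x1, y1 = x1 + dt * dx1, y1 + dt * dy1
        x2, y2 = x2 + dt * dx2, y2 + dt * dy2
        out[i] = x1, x2
    return out   # rhythmic set-points for two opposing legs

signals = hopf_cpg()   # e.g. joint phase references at 1.5 Hz
```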
41

Gupta, Himanshu, Parul Jindal, Om Prakash Verma, Raj Kumar Arya, Abdelhamied A. Ateya, Naglaa F. Soliman, and Vijay Mohan. "Computer Vision-Based Approach for Automatic Detection of Dairy Cow Breed." Electronics 11, no. 22 (November 18, 2022): 3791. http://dx.doi.org/10.3390/electronics11223791.

Abstract:
Purpose: Identification of individual cow breeds may offer various farming opportunities for disease detection, disease prevention and treatment, fertility and feeding, and welfare monitoring. However, due to the large population of cows, with hundreds of breeds of almost identical visible appearance, their exact identification and detection is a tedious task. Automatic detection of cow breeds would therefore benefit the dairy industry. This study presents a computer-vision-based approach for identifying the breed of individual cattle. Methods: Eight breeds of cows are considered to verify the classification process: Afrikaner, Brown Swiss, Gyr, Holstein Friesian, Limousin, Marchigiana, White Park, and Simmental cattle. A custom dataset is developed using web-mining techniques, comprising 1835 images grouped into 238, 223, 220, 212, 253, 185, 257, and 247 images for the individual breeds. YOLOv4, a deep learning approach, is employed for breed classification and localization. The performance of the YOLOv4 algorithm is evaluated by training the model on different sets of training parameters. Results: Comprehensive analysis of the experimental results reveals that the proposed approach achieves an accuracy of 81.07%, with a maximum kappa of 0.78 obtained at an image size of 608 × 608 and an intersection over union (IoU) threshold of 0.75 on the test dataset. Conclusions: YOLOv4 performed better than the other compared models, placing the proposed model among the top-ranked cow breed detection models. For future work, it would be beneficial to incorporate simple tracking techniques between video frames to check the efficiency of this approach.
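Running a trained YOLOv4 network for this kind of breed detection can be sketched with OpenCV's DNN module, using the 608 × 608 input size reported as best in the paper. The cfg/weights paths are placeholders, and the exact output format of detect() varies slightly across OpenCV 4.x versions.

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov4-cows.cfg", "yolov4-cows.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

breeds = ["Afrikaner", "Brown Swiss", "Gyr", "Holstein Friesian",
          "Limousin", "Marchigiana", "White Park", "Simmental"]

img = cv2.imread("cow.jpg")                       # hypothetical test image
classes, scores, boxes = model.detect(img, confThreshold=0.5,
                                      nmsThreshold=0.4)
for cls, score, box in zip(np.array(classes).flatten(),
                           np.array(scores).flatten(), boxes):
    print(breeds[int(cls)], float(score), box)    # breed, confidence, bbox
```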
42

Wang, Zhengpu, Baoshi Cao, Zongwu Xie, Boyu Ma, Kui Sun, and Yang Liu. "Kinematic Calibration of a Space Manipulator Based on Visual Measurement System with Extended Kalman Filter." Machines 11, no. 3 (March 21, 2023): 409. http://dx.doi.org/10.3390/machines11030409.

Abstract:
Calibration of kinematic parameters is widely used to improve the pose (position and orientation) accuracy of robot arms. Industrial manipulators are usually served by high-accuracy intelligent measuring equipment; unfortunately, the vision measurement systems available to space manipulators carry large noise. To overcome the adverse effect of measurement noise and optimize calibration time, a calibration method based on the extended Kalman filter (EKF) for space manipulators is proposed in this paper. Firstly, an identification model based on the Denavit–Hartenberg (D-H) modeling method is established. Then, a camera rigidly attached to the end-effector takes pictures of a calibration board placed around the manipulator, and the actual pose of the end-effector is calculated from these pictures. Subsequently, the differences between the actual and theoretical poses are taken as input, and the error parameters are estimated by the EKF and compensated in the kinematic algorithm. Simulation results show that the pose accuracy is improved by approximately 90 percent. Compared with calibration based on the least squares estimate (LSE), the EKF further optimizes calibration time through faster computation and ensures the stability of the calibration.
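The EKF update at the heart of such a calibration can be sketched generically: the state is the vector of D-H error parameters (assumed constant, so the prediction step is the identity) and the measurement model is forward kinematics. The fk function, dimensions, and noise levels below are illustrative assumptions.

```python
import numpy as np

def ekf_calibrate(fk, q_list, pose_meas, n_params, R_var=1e-4):
    """EKF over joint configurations q_list and measured poses pose_meas.

    fk(params, q) -> predicted end-effector pose (6-vector) given the
    current error-parameter estimate; supplied by the caller.
    """
    x = np.zeros(n_params)              # error-parameter estimate
    P = np.eye(n_params) * 1e-2         # state covariance
    for q, z in zip(q_list, pose_meas):
        h = fk(x, q)                    # predicted pose
        # Numerical Jacobian of the measurement model around x
        H = np.zeros((h.size, n_params))
        eps = 1e-6
        for j in range(n_params):
            dx = np.zeros(n_params)
            dx[j] = eps
            H[:, j] = (fk(x + dx, q) - h) / eps
        R = np.eye(h.size) * R_var      # vision measurement noise
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ (z - h)             # update parameter estimate
        P = (np.eye(n_params) - K @ H) @ P
    return x
```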
43

Chen, Yen-Lin, Wen-Yew Liang, Chuan-Yen Chiang, Tung-Ju Hsieh, Da-Cheng Lee, Shyan-Ming Yuan, and Yang-Lang Chang. "Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems." Sensors 11, no. 7 (July 1, 2011): 6868–92. http://dx.doi.org/10.3390/s110706868.

44

Latif, Ghazanfar, Jaafar Alghazo, R. Maheswar, V. Vijayakumar, and Mohsin Butt. "Deep learning based intelligence cognitive vision drone for automatic plant diseases identification and spraying." Journal of Intelligent & Fuzzy Systems 39, no. 6 (December 4, 2020): 8103–14. http://dx.doi.org/10.3233/jifs-189132.

Abstract:
The agriculture industry is of great importance in many countries and plays a considerable role in the national budget. There is also increased interest in plantation and its effect on the environment. With vast areas suitable for farming, countries are always encouraging farmers through various programs to increase national farming production. However, the vast areas and large farms make it difficult for farmers and workers to continually monitor the plants to protect them from diseases and adverse weather conditions. A new concept dubbed precision farming has recently surfaced, in which the latest technologies play an integral role in the farming process. In this paper, we propose a SMART drone system equipped with high-precision cameras and high computing power, running the proposed image processing methodologies, with connectivity for precision farming. The SMART system automatically monitors vast farming areas with precision, identifies infected plants, and decides on the chemical and the exact amount to spray. In addition, the system is connected to a cloud server that receives the images, generates reports, and predicts crop yield. The system is equipped with a user-friendly human-computer interface (HCI) for communication with the farm base, and the multi-drone setup can cover vast areas of farmland daily. The image processing technique proposed in this paper is a modified ResNet architecture, compared against a deep CNN architecture and other machine-learning-based systems. The ResNet architecture achieves the highest average accuracy, 99.78%, on a dataset of 70,295 leaf images covering 26 diseases of 14 plants. The results were compared with the CNN results obtained in this paper and with similar techniques from previous literature; the comparisons indicate that the proposed ResNet architecture performs better.
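As a rough stand-in for the modified ResNet, the sketch below sets up standard ResNet transfer learning in PyTorch for the 26 disease classes; the paper's actual architecture modifications are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-18 with a new 26-class head (leaf diseases)
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 26)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimisation step on a batch of leaf images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on a random batch of four 224x224 RGB images
print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 2, 3])))
```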
45

Dharmik, R. C., Sushilkumar Chavhan, and S. R. Sathe. "Deep learning based missing object detection and person identification: an application for smart CCTV." 3C Tecnología_Glosas de innovación aplicadas a la pyme 11, no. 2 (December 29, 2022): 51–57. http://dx.doi.org/10.17993/3ctecno.2022.v11n2e42.51-57.

Abstract:
Security and protection are the most crucial concerns in today's quickly developing world, and deep learning methods and computer vision help in resolving both problems. Object detection is one of the computer vision subtasks that allows us to recognize things; videos are the source considered for detection, and image processing technology helps to increase the effectiveness of state-of-the-art techniques. Among all of these technologies, CCTV is recognized as a key element. In this article, we process CCTV data in real time using a deep convolutional neural network, with the main objective of putting scene content at the centre of the analysis. In the context of surveillance systems, where object detection is a crucial step, the YOLO technique enabled us to detect missing items with a 10% sparsity improvement over the current state-of-the-art algorithm, so that immediate follow-up action can be taken.
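The missing-object logic on top of a per-frame detector can be sketched independently of the detector itself: objects present in a baseline frame that go undetected for several consecutive frames are flagged. The detection format and patience threshold below are illustrative assumptions.

```python
from collections import defaultdict

PATIENCE = 30   # frames an object may go unseen before raising an alert

def monitor(frames_detections, baseline):
    """frames_detections: iterable of sets of detected object labels."""
    unseen = defaultdict(int)
    for t, detected in enumerate(frames_detections):
        for obj in baseline:
            unseen[obj] = 0 if obj in detected else unseen[obj] + 1
            if unseen[obj] == PATIENCE:
                print(f"frame {t}: '{obj}' appears to be missing")

# Toy stream: the laptop disappears after frame 10
baseline = {"bag", "laptop", "chair"}
stream = [{"bag", "laptop", "chair"}] * 10 + [{"bag", "chair"}] * 40
monitor(stream, baseline)
```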
46

Morshed, Md Golam, Tangina Sultana, Aftab Alam, and Young-Koo Lee. "Human Action Recognition: A Taxonomy-Based Survey, Updates, and Opportunities." Sensors 23, no. 4 (February 15, 2023): 2182. http://dx.doi.org/10.3390/s23042182.

Abstract:
Human action recognition systems use data collected from a wide range of sensors to accurately identify and interpret human actions. One of the most challenging issues in computer vision is the automatic and precise identification of human activities. A significant increase in feature-learning-based representations for action recognition has emerged in recent years, due to the widespread use of deep-learning-based features. This study presents an in-depth analysis of human activity recognition that investigates recent developments in computer vision. Augmented reality, human-computer interaction, cybersecurity, home monitoring, and surveillance cameras are all examples of computer vision applications that often go hand in hand with human action detection. We give a taxonomy-based, rigorous study of human activity recognition techniques, discussing the best ways to acquire human action features derived from RGB and depth data, as well as the latest research on deep learning and hand-crafted techniques. We also describe a generic architecture for recognizing human actions in the real world and the field's current prominent research topics. Finally, we offer analysis concepts and proposals for future study. In-depth researchers of human action recognition will find this review an effective tool.
47

Mesejo, Pablo, Rubén Martos, Óscar Ibáñez, Jorge Novo, and Marcos Ortega. "A Survey on Artificial Intelligence Techniques for Biomedical Image Analysis in Skeleton-Based Forensic Human Identification." Applied Sciences 10, no. 14 (July 8, 2020): 4703. http://dx.doi.org/10.3390/app10144703.

Abstract:
This paper presents the first survey on the application of AI techniques to the analysis of biomedical images for forensic human identification purposes. Human identification is of great relevance in today's society and, in particular, in medico-legal contexts. As a consequence, every technological advance introduced in this field can contribute to the increasing need for accurate and robust tools for establishing and verifying human identity. We first describe the importance and applicability of forensic anthropology in many identification scenarios. We then present the main trends in the application of computer vision, machine learning, and soft computing techniques to the estimation of the biological profile, identification through comparative radiography and craniofacial superimposition, trauma and pathology analysis, and facial reconstruction. The potentialities and limitations of the employed approaches are described, and we conclude with a discussion of methodological issues and future research.
48

Batista, Josias, Darielson Souza, Laurinda dos Reis, Antônio Barbosa, and Rui Araújo. "Dynamic Model and Inverse Kinematic Identification of a 3-DOF Manipulator Using RLSPSO." Sensors 20, no. 2 (January 11, 2020): 416. http://dx.doi.org/10.3390/s20020416.

Abstract:
This paper presents the identification of the inverse kinematics of a cylindrical manipulator using Least Squares (LS), Recursive Least Squares (RLS), and a dynamic parameter identification algorithm based on Particle Swarm Optimization with a search space defined by RLS (RLSPSO). A helical trajectory in Cartesian space is used as input. The dynamic model is found through the Lagrange equation and the equations of motion, which are used to calculate the torque values of each joint. The torques are calculated from the inverse kinematics values identified by each algorithm and from the manipulator's joint speeds and accelerations. The results obtained for the trajectories, speeds, accelerations, and torques of each joint are compared across the algorithms, and the computational costs as well as the multiple correlation coefficient (R²) are computed. The results demonstrate that the identification accuracy of RLSPSO is better than that of LS and PSO. Because classic RLS has high computational complexity, the proposed hybrid method aims to improve both its computational cost and its results.
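The RLS core used in such identification can be sketched in a few lines: it recursively estimates theta in y = phi^T theta as samples arrive. The forgetting factor and initial covariance below are common illustrative choices, not the paper's settings.

```python
import numpy as np

def rls(phi_seq, y_seq, n, lam=0.99):
    """Recursive least squares for y = phi^T theta with forgetting factor."""
    theta = np.zeros(n)
    P = np.eye(n) * 1e3                      # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        phi = np.asarray(phi).reshape(-1)
        k = P @ phi / (lam + phi @ P @ phi)  # gain vector
        theta = theta + k * (y - phi @ theta)
        P = (P - np.outer(k, phi) @ P) / lam
    return theta

# Toy check: recover theta = [2, -1] from noisy regressor/output pairs
rng = np.random.default_rng(0)
phis = rng.normal(size=(500, 2))
ys = phis @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=500)
print(rls(phis, ys, n=2))
```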
49

Balakrishnan, Umarani, Krishnamurthi Venkatachalapathy, and Girirajkumar S. Marimuthu. "An enhanced PSO-DEFS based feature selection with biometric authentication for identification of diabetic retinopathy." Journal of Innovative Optical Health Sciences 09, no. 06 (August 2016): 1650020. http://dx.doi.org/10.1142/s1793545816500206.

Abstract:
Recently, automatic diagnosis of diabetic retinopathy (DR) from retinal images has become a significant research topic in medical applications. Diabetic macular edema (DME) is the major cause of vision loss in patients suffering from DR; early identification of DR makes it possible to prevent vision loss and encourages diabetic control activities. Many techniques have been developed to diagnose DR, but the major drawbacks of the existing ones are low accuracy and high time complexity. To overcome these issues, this paper proposes an enhanced particle swarm optimization-differential evolution feature selection (PSO-DEFS) approach with biometric authentication for the identification of DR. Initially, a hybrid median filter (HMF) is used to pre-process the input images. The pre-processed images are then embedded with each other using least significant bit (LSB) embedding for authentication purposes. In parallel, image features are extracted using the convoluted local tetra pattern (CLTrP) and Tamura features. Feature selection is performed using PSO-DEFS and a PSO-gravitational search algorithm (PSO-GSA) to reduce time complexity; based on several performance metrics, PSO-DEFS is chosen as the better option, with selection driven by the fitness value. A multi-relevance vector machine (M-RVM) is introduced to classify 75 images (13 normal and 62 abnormal) from 60 patients, and the DR patients are further classified by the M-RVM. The experimental results show that the proposed approach achieves better accuracy, sensitivity, and specificity than the existing techniques.
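A plain-PSO feature selection loop (the PSO core without the differential evolution component of PSO-DEFS) can be sketched as follows: each particle encodes a candidate feature mask scored by a classifier's cross-validated accuracy. All hyper-parameters are illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def pso_select(X, y, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5):
    """Return a boolean mask of selected features found by PSO."""
    rng = np.random.default_rng(0)
    d = X.shape[1]
    pos = rng.random((n_particles, d))         # continuous positions in [0,1]
    vel = np.zeros((n_particles, d))

    def fitness(p):
        mask = p > 0.5                         # threshold to a feature mask
        if not mask.any():
            return 0.0
        return cross_val_score(KNeighborsClassifier(),
                               X[:, mask], y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    g = pbest[pbest_fit.argmax()].copy()       # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0, 1)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        g = pbest[pbest_fit.argmax()].copy()
    return g > 0.5
```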
50

Esquina Limão, Caio Henrique, Thabatta Moreira Alves de Araújo, and Carlos Renato Lisboa Frances. "Deep learning based slope erosion detection." IAES International Journal of Artificial Intelligence (IJ-AI) 12, no. 3 (September 1, 2023): 1428. http://dx.doi.org/10.11591/ijai.v12.i3.pp1428-1438.

Abstract:
Increasingly present in the most diverse structural health monitoring (SHM) scenarios, high-performance artificial intelligence techniques have been able to solve structural analysis problems. When it comes to image classification, convolutional neural networks (CNNs) deliver the best results. This encourages us to explore machine learning techniques such as computer vision and to merge them with other technologies to achieve the best performance. This paper proposes a custom CNN architecture trained with slope erosion images that achieved satisfactory results, with an accuracy of 96.67%, enabling precise and improved identification of instability indicators. When detected in advance, these instabilities can be addressed through proper maintenance, preventing disasters, given that slope integrity directly affects the structures built around and above it.
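A small custom CNN for binary erosion classification can be sketched in PyTorch as below; it is in the spirit of, but not identical to, the paper's architecture.

```python
import torch
import torch.nn as nn

class ErosionNet(nn.Module):
    """Small CNN classifying slope images as eroded vs intact."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # eroded vs intact

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ErosionNet()
logits = model(torch.randn(4, 3, 224, 224))  # batch of 4 RGB slope images
print(logits.shape)                          # torch.Size([4, 2])
```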