Academic literature on the topic 'Kinematic identification- Vision based techniques'


Journal articles on the topic "Kinematic identification- Vision based techniques"

1

Seah, Shao Xuan, Yan Han Lau, and Sutthiphong Srigrarom. "Multiple Aerial Targets Re-Identification by 2D- and 3D- Kinematics-Based Matching." Journal of Imaging 8, no. 2 (January 28, 2022): 26. http://dx.doi.org/10.3390/jimaging8020026.

Abstract:
This paper presents two techniques in the matching and re-identification of multiple aerial target detections from multiple electro-optical devices: 2-dimensional and 3-dimensional kinematics-based matching. The main advantage of these methods over traditional image-based methods is that no prior image-based training is required; instead, relatively simpler graph matching algorithms are used. The first 2-dimensional method relies solely on the kinematic and geometric projections of the detected targets onto the images captured by the various cameras. Matching and re-identification across frames were performed using a series of correlation-based methods. This method is suitable for all targets with distinct motion observed by the camera. The second 3-dimensional method relies on the change in the size of detected targets to estimate motion in the focal axis by constructing an instantaneous direction vector in 3D space that is independent of camera pose. Matching and re-identification were achieved by directly comparing these vectors across frames under a global coordinate system. Such a method is suitable for targets in near to medium range where changes in detection sizes may be observed. While no overlapping field of view requirements were explicitly imposed, it is necessary for the aerial target to be detected in both cameras before matching can be carried out. Preliminary flight tests were conducted using 2–3 drones at varying ranges, and the effectiveness of these techniques was tested and compared. Using these proposed techniques, an MOTA score of more than 80% was achieved.
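The 3D kinematics-based matching that this abstract describes, comparing camera-pose-independent direction vectors across views, can be sketched in a few lines. The following is an illustrative reconstruction under assumed inputs (direction vectors already expressed in a shared global frame), not the authors' implementation:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_targets(vectors_cam1, vectors_cam2, threshold=0.9):
    """Greedily associate targets seen by two cameras whose instantaneous
    direction vectors are most aligned; pairs below `threshold` stay unmatched."""
    matches = {}
    used = set()
    for i, u in enumerate(vectors_cam1):
        best_j, best_sim = None, threshold
        for j, v in enumerate(vectors_cam2):
            if j in used:
                continue
            sim = cosine_similarity(u, v)
            if sim > best_sim:
                best_j, best_sim = j, sim
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches
```

In this toy setting, two drones whose motion directions nearly coincide across the two camera views are re-identified as the same target, regardless of where each camera is posed.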
2

Chen, Biao, Chaoyang Chen, Jie Hu, Zain Sayeed, Jin Qi, Hussein F. Darwiche, Bryan E. Little, et al. "Computer Vision and Machine Learning-Based Gait Pattern Recognition for Flat Fall Prediction." Sensors 22, no. 20 (October 19, 2022): 7960. http://dx.doi.org/10.3390/s22207960.

Abstract:
Background: Gait recognition has been applied in the prediction of the probability of elderly flat-ground falls, functional evaluation during rehabilitation, and the training of patients with lower extremity motor dysfunction. Distinguishing between seemingly similar kinematic gait patterns associated with different pathological entities is a challenge for clinicians, and realizing automatic identification and judgment of abnormal gait is a significant challenge in clinical practice. The long-term goal of our study is to develop a gait recognition computer vision system using artificial intelligence (AI) and machine learning (ML) computing. This study aims to find an optimal ML algorithm using computer vision techniques and variables measured from the lower limbs to classify gait patterns in healthy people, and to determine the feasibility of computer vision and ML computing in discriminating different gait patterns associated with flat-ground falls. Methods: We used the Kinect® Motion system to capture spatiotemporal gait data from seven healthy subjects in three walking trials: normal gait, pelvic-obliquity gait, and knee-hyperextension gait. Four classification methods, convolutional neural network (CNN), support vector machine (SVM), K-nearest neighbors (KNN), and long short-term memory (LSTM) neural networks, were used to automatically classify the three gait patterns. Overall, 750 sets of data were collected, and the dataset was divided into 80% for algorithm training and 20% for evaluation. Results: The SVM and KNN had higher accuracy than the CNN and LSTM. The SVM (94.9 ± 3.36%) had the highest accuracy in the classification of gait patterns, followed by KNN (94.0 ± 4.22%). The accuracy of the CNN was 87.6 ± 7.50% and that of the LSTM was 83.6 ± 5.35%.
Conclusions: This study revealed that the proposed AI machine learning (ML) techniques can be used to design gait biometric systems and machine vision for gait pattern recognition. Potentially, this method can be used to remotely evaluate elderly patients and help clinicians make decisions regarding disposition, follow-up, and treatment.
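The KNN classifier reported as second-best above can be illustrated with a minimal from-scratch sketch. The two-dimensional features and labels below are synthetic stand-ins for the Kinect-derived gait variables, not the study's data:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a gait feature vector by majority vote among the k
    nearest training samples (Euclidean distance).
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical (step_length, knee_angle) features per gait type.
train = [
    ((0.60, 5.0), "normal"), ((0.62, 6.0), "normal"),
    ((0.55, 15.0), "pelvic-obliquity"), ((0.54, 16.0), "pelvic-obliquity"),
    ((0.58, -8.0), "knee-hyperextension"), ((0.57, -9.0), "knee-hyperextension"),
]
```

A query near the "normal" cluster, e.g. `knn_classify(train, (0.61, 5.5))`, is assigned the "normal" label by majority vote of its three nearest neighbours.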
3

Huang, Jianbing, and Chia-Hsiang Menq. "Identification and Characterization of Regular Surfaces from Unorganized Points by Normal Sensitivity Analysis." Journal of Computing and Information Science in Engineering 2, no. 2 (June 1, 2002): 115–24. http://dx.doi.org/10.1115/1.1509075.

Abstract:
In this paper, the concept of the free motion subspace is introduced and utilized to characterize the special kinematic properties of regular surfaces, including planes, natural quadrics, and regular swept surfaces. Based on this concept, a general approach is developed to automatically identify the surface type and calculate the associated geometric parameters of an unknown surface from unorganized measurement points. In the approach, a normal sensitivity matrix, which characterizes the normal perturbation of surface points under differential motions, is derived. With the normal sensitivity matrix, it is shown that the free motion subspace of a surface can be determined through a regular eigen analysis. From the identified free motion subspace, the surface type of a regular surface can be determined and its geometric parameters simultaneously computed. An algorithm that identifies the free motion subspace of an unknown surface from its unorganized sample points has been implemented, and experiments are carried out to investigate its robustness and efficiency. The developed algorithm can be used to solve various problems, including geometric primitive classification and parameter estimation, regular swept surface reconstruction, geometric constraint recognition, and multi-view data registration. Integrated with state-of-the-art segmentation techniques, the proposed method can be used for object recognition, robot vision, and reverse engineering.
4

Amami, Mustafa M. "Fast and Reliable Vision-Based Navigation for Real Time Kinematic Applications." International Journal for Research in Applied Science and Engineering Technology 10, no. 2 (February 28, 2022): 922–32. http://dx.doi.org/10.22214/ijraset.2022.40395.

Abstract:
Automatic Image Matching (AIM) is the term used for the automatic detection of corresponding points located in the overlapping areas of multiple images. AIM is extensively used with Mobile Mapping Systems (MMS) for different engineering applications, such as highway infrastructure mapping, monitoring of road surface quality and markings, telecommunication, emergency response, and collecting data for Geographical Information Systems (GIS). The robotics community and Simultaneous Localization And Mapping (SLAM) based applications are other important areas that require fast and well-distributed AIM for robust vision navigation solutions. Different robust feature detection methods are commonly used for AIM, such as the Scale Invariant Feature Transform (SIFT), Principal Component Analysis (PCA)–SIFT, and Speeded Up Robust Features (SURF). The performance of such techniques has been widely investigated and compared, showing high capability to provide reliable and precise results. However, these techniques remain of limited use for real and nearly real-time SLAM-based applications, such as intelligent robots and low-cost Unmanned Aerial Vehicles (UAVs) based on vision navigation. The main limitations of these AIM techniques are the relatively long processing time and the random distribution of matched points over the common area between images. This paper works on overcoming these two limitations, providing extremely fast AIM with well-distributed common points for robust real-time vision navigation. Digital image pyramids, epipolar lines, and 2D transformations have been utilized to limit the size of search windows significantly and to determine the rotation angle and scale level of features, reducing the overall processing time considerably. Using a limited number of well-distributed common points has also helped to speed up the automatic matching, besides providing a robust vision navigation solution. 
The idea has been tested with terrestrial MMS images and surveying UAV aerial images. The results reflect the high capability of the followed technique in providing fast and robust AIM for real-time SLAM-based applications. Keywords: Automatic Image Matching, Epipolar Line, Image Pyramid, SLAM, Vision Navigation, Real Time.
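The paper's central idea of shrinking the matching search window with the epipolar constraint can be sketched as follows. This is a simplified illustration assuming a known fundamental matrix F and pixel coordinates, not the author's implementation: candidate points in the second image are kept only if they lie within a narrow band around the epipolar line of a point from the first image, before any descriptor comparison takes place.

```python
import math

def epipolar_line(F, p):
    """Epipolar line l = F @ p_h in image 2 for point p = (x, y) in image 1."""
    x, y = p
    ph = (x, y, 1.0)
    return tuple(sum(F[i][j] * ph[j] for j in range(3)) for i in range(3))

def point_line_distance(line, q):
    """Perpendicular pixel distance from point q to line (a, b, c)."""
    a, b, c = line
    x, y = q
    return abs(a * x + b * y + c) / math.hypot(a, b)

def filter_candidates(F, p, candidates, band=2.0):
    """Keep only candidates within `band` pixels of the epipolar line,
    shrinking the search window before descriptor matching."""
    line = epipolar_line(F, p)
    return [q for q in candidates if point_line_distance(line, q) <= band]
```

For a rectified stereo pair, F reduces to the canonical form below and the epipolar line of a point is simply its own image row, so the filter keeps only candidates on (almost) the same scanline.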
5

Sanchez Guinea, Alejandro, Simon Heinrich, and Max Mühlhäuser. "Activity-Free User Identification Using Wearables Based on Vision Techniques." Sensors 22, no. 19 (September 28, 2022): 7368. http://dx.doi.org/10.3390/s22197368.

Abstract:
In order to achieve the promise of smart spaces, where the environment acts to fulfill the needs of users in an unobtrusive and personalized manner, it is necessary to provide means for seamless and continuous identification of users, to know who indeed is interacting with the system and to whom the smart services are to be provided. In this paper, we propose a new approach capable of performing activity-free identification of users based on hand and arm motion patterns obtained from a wrist-worn inertial measurement unit (IMU). Our approach is not constrained to particular types of movements, gestures, or activities, thus allowing users to go about their daily routine freely and unconstrained while user identification takes place. We evaluate our approach based on IMU data collected from 23 people performing their daily routines unconstrained. Our results indicate that our approach is able to perform activity-free user identification with an accuracy of 0.9485 for 23 users, without requiring any direct input or specific action from users. Furthermore, our evaluation provides evidence of the robustness of our approach in various configurations.
6

Silva, José Luís, Rui Bordalo, José Pissarra, and Paloma de Palacios. "Computer Vision-Based Wood Identification: A Review." Forests 13, no. 12 (November 30, 2022): 2041. http://dx.doi.org/10.3390/f13122041.

Abstract:
Wood identification is an important tool in many areas, from biology to cultural heritage, and its application in the fight against illegal logging is especially necessary and impactful. Identifying a wood sample to genus or species level is difficult, expensive and time-consuming, even when using the most recent methods, resulting in a growing need for a readily accessible and field-applicable method for scientific wood identification. Providing fast results and ease of use, computer vision-based technology is an economically accessible option currently applied to meet the demand for automated wood identification. However, despite the promising characteristics and accurate results of this method, it remains a niche research area in wood sciences and is little known in other fields of application, such as cultural heritage. To share the results and applicability of computer vision-based wood identification, this paper reviews the most frequently cited and relevant published research based on computer vision and machine learning techniques, aiming to facilitate and promote the use of this technology in research and encourage its application among end-users who need quick and reliable results.
7

Radhika, K. R., S. V. Sheela, M. K. Venkatesha, and G. N. Sekhar. "Signature and Iris Authentication Based on Derived Kinematic Values." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 08 (December 2010): 1237–60. http://dx.doi.org/10.1142/s021800141000841x.

Abstract:
Authentication systems based on measurable behavioral and physiological traits are essential for online systems. In this paper, authentication of two types of biometric samples from different databases with a common algorithm, Continuous Dynamic Programming (CDP), is discussed. Using a common algorithm, user-dependent threshold decisions can be made for both biometrics in a uniform fashion. The integration of static iris information and dynamic signature information is done at the decision level, and inferences are drawn using voting techniques. The derived kinematic feature used in this paper is acceleration.
8

Dang, Minh. "Efficient Vision-Based Face Image Manipulation Identification Framework Based on Deep Learning." Electronics 11, no. 22 (November 17, 2022): 3773. http://dx.doi.org/10.3390/electronics11223773.

Abstract:
Image manipulation of the human face is a trending topic of image forgery, in which face regions are transformed or altered using a set of techniques to accomplish desired outputs. Manipulated face images are spreading on the internet due to the rise of social media, causing various societal threats. Detecting manipulated face images effectively is challenging because (i) manipulated face datasets have been limited in number, with most containing images generated by GAN models; (ii) previous studies mainly extracted handcrafted features and fed them into machine learning algorithms to perform manipulated face detection, which was complicated, error-prone, and laborious; and (iii) previous models failed to explain why they achieved good performance. In order to address these issues, this study introduces a large face manipulation dataset containing vast variations of manipulated images created and manually validated using various manipulation techniques. The dataset is then used to train a fine-tuned RegNet model to detect manipulated face images robustly and efficiently. Finally, a manipulated region analysis technique is implemented to provide in-depth insights into the manipulated regions. The experimental results revealed that the RegNet model showed the highest classification accuracy, 89%, on the proposed dataset compared to standard deep learning models.
9

Bryła, Jakub, Adam Martowicz, Maciej Petko, Konrad Gac, Konrad Kobus, and Artur Kowalski. "Wear Analysis of 3D-Printed Spur and Herringbone Gears Used in Automated Retail Kiosks Based on Computer Vision and Statistical Methods." Materials 16, no. 16 (August 10, 2023): 5554. http://dx.doi.org/10.3390/ma16165554.

Abstract:
This paper focuses on a wear evaluation conducted for prototype spur and herringbone gears made from PET-G filament using additive manufacturing. The main objective of this study is to verify whether 3D-printed gears can be considered a reliable choice for long-term exploitation in selected mechanical systems, specifically automated retail kiosks. For this reason, two methods were applied, utilizing: (1) vision-based inspection of the gears' cross-sectional geometry and (2) statistical characterization of selected kinematic parameters and the torques generated by drives. The former method involves destructive testing and allows for identification of the gears' operation-induced geometric shape evolution, whereas the latter focuses on searching for nondestructive kinematic and torque-based indicators that allow tracking of the wear. The novel contribution of this paper is the conceptual and experimental identification of wear-induced changes in the geometric properties of 3D-printed parts. The inspected exploited and non-exploited 3D-printed parts underwent encasing in resin and a curing process, followed by cutting in a specific plane to reveal the desired shapes, before finally being subjected to a vision-based geometric characterization. The authors have experimentally demonstrated, in real industrial conditions and on batch production parts, the usefulness of the presented destructive testing technique in providing valid indices for wear identification.
10

Heydarzadeh, Mohsen, Nima Karbasizadeh, Mehdi Tale Masouleh, and Ahmad Kalhor. "Experimental kinematic identification and position control of a 3-DOF decoupled parallel robot." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 233, no. 5 (May 31, 2018): 1841–55. http://dx.doi.org/10.1177/0954406218775906.

Abstract:
This paper aims at using a kinematic identification procedure in order to enhance the control of a 3-DOF fully decoupled parallel robot, the so-called "Tripteron." From a practical standpoint, manufacturing errors lead to kinematic uncertainties that cause the real kinematic equations of the robot to differ from the theoretical ones. In this paper, using a white-box identification procedure, the independence of the degrees of freedom of the robot is studied. Since the kinematic identification of a robotic manipulator requires the position of its end-effector to be known, the "Kinect," a vision/infrared sensor, is utilized to obtain the spatial coordinates of the end-effector. In order to calibrate the Kinect, a novel approach based on a neuro-fuzzy algorithm, the so-called "LoLiMoT" algorithm, is used. The results of experimentally performing the identification and calibration are then used to implement a closed-loop classic controller for path-tracking purposes. Furthermore, the theoretical, unidentified model was implemented in a sliding-mode robust controller in order to compare the results with the classic controller. The comparison reveals that the classic controller, which uses the identified model, leads to better performance in terms of accuracy and control effort than the robust controller, which is based purely on the theoretical model.

Dissertations / Theses on the topic "Kinematic identification- Vision based techniques"

1

Yang, Xu. "One sample based feature learning and its application to object identification." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950624.

2

Werner, Felix. "Vision-based topological mapping and localisation." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/31815/1/Felix_Werner_Thesis.pdf.

Abstract:
Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Nowadays, many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data which is computationally expensive. This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise and graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well-suited for basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. In order to make vision-based topological navigation suitable for inexpensive mobile robots for the mass market we propose to characterise key places of the environment based on their visual appearance through colour histograms. The approach for representing places using visual appearance is based on the fact that colour histograms change slowly as the field of vision sweeps the scene when a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate in experiments using an indoor data set, that a topological map in which places are characterised using visual appearance augmented with metric clues provides sufficient information to perform continuous metric localisation which is robust to the kidnapped robot problem. Many topological mapping methods build a topological map by clustering visual observations to places. 
However, due to perceptual aliasing observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping. We propose to incorporate neighbourhood relations for disambiguating places which otherwise are indistinguishable. We present a constraint based stochastic local search method which integrates the approach for place disambiguation in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method to reliably localise the robot while building a topological map.
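The appearance-based place characterization described in this thesis can be illustrated with histogram intersection over normalized colour histograms. The 4-bin histograms and place names below are hypothetical stand-ins, not the thesis data:

```python
def histogram_intersection(h1, h2):
    """Similarity of two normalized colour histograms, in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def localise(place_histograms, observation):
    """Return the map place whose stored appearance histogram best
    matches the current observation (nearest-neighbour matching)."""
    return max(place_histograms,
               key=lambda p: histogram_intersection(place_histograms[p], observation))
```

Because colour histograms change slowly as the field of view sweeps the scene, an observation taken anywhere within a place's region scores close to that place's stored histogram, which is what makes a place a region rather than a single position.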
3

Anguzza, Umberto. "A method to develop a computer-vision based system for the automatic dairy cow identification and behaviour detection in free stall barns." Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1334.

Abstract:
In this thesis, a method to develop a computer-vision based system (CVBS) for automatic dairy cow identification and behaviour detection in free-stall barns is proposed. Two methodologies based on digital image processing were proposed in order to achieve dairy cow identification and behaviour detection, respectively. Suitable algorithms among those used in computer vision science were chosen and adapted to the specific characteristics of the breeding environment under study. The trial was carried out during the years 2011 and 2012 in a dairy cow free-stall barn located in the municipality of Vittoria in the province of Ragusa. A multi-camera video-recording system was designed to obtain sequences of panoramic top-view images of the barn. The two proposed methodologies were implemented in a software component of the CVBS and tested. Finally, the CVBS was validated by comparing the detection and identification results with those generated by an operator through visual recognition of cows in sequences of panoramic top-view images. This comparison allowed the computation of accuracy indices. The detection of the dairy cow behavioural activities in the barn provided a Cow Detection Percentage (CDP) index greater than 86% and a Quality Percentage (QP) index greater than 75%. With regard to cow identification, the CVBS provided a CDP > 90% and a QP > 85%.
4

Kuo, Yao-wen (郭耀文). "Vision-based Techniques for Real-time Action Identification of Upper Body Rehabilitation." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/m26nmg.

Abstract:
Master's thesis, National Taipei University of Technology, Graduate Institute of Computer Science and Information Engineering, 2012 (ROC year 101).
Practicing rehabilitation actions takes a great deal of time, contributing to a shortage of medical and rehabilitation staff. Home rehabilitation can reduce the burden on patients' families and relieve these staff shortages, so this thesis proposes an action identification system for upper-body rehabilitation. The most important step in the proposed system is building the upper-body skeleton, and this thesis presents an algorithm to feasibly and rapidly build upper-body skeleton points. Using the structure of the human upper-body skeleton and skin-colour information, the skeleton points can be effectively established. As a result, the proposed system achieves a high recognition rate of 98% for the defined rehabilitation actions across different muscle groups. Moreover, the computational speed of the proposed system reaches 125 FPS, i.e. a processing time of 8 ms per frame, leaving headroom for future extensions such as handling complex ambient environments and implementation on embedded and pervasive systems.
5

Chien, Rong-Chun (簡榮均). "Deep Learning Based Computer Vision Techniques for Real-time Identification of Construction Site Personal Equipment Violations." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/x7n995.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Civil Engineering, 2018 (ROC year 107).
Being well equipped with Personal Protective Equipment (PPE) plays an essential role on construction sites in protecting individuals from accidents. However, due to inconvenience and discomfort, it is common to see workers not wearing it on site, so ensuring that PPE is worn remains an important subject for construction site safety. In recent years, thanks to the increasing performance of graphics cards and the rise of deep learning, convolutional neural network (CNN)-based computer vision techniques have received increasing attention, and monitoring PPE use via computer vision is considered more effective than sensor-based methods. In this paper, a two-stage method is proposed to automatically detect three kinds of PPE violations: non-hardhat use, missing safety vest, and bare to the waist. By combining an object detection model with a classification model, the two-stage method avoids feature loss on small-scale PPE and achieves better accuracy. First, an object detection model based on RetinaNet detects the workers present in the image. Then, using InceptionNet as the classification model, the worker images are classified for PPE violations. This study collected 3015 site images to perform transfer learning. The results show that the method can effectively detect PPE violations at a frame rate of 10 fps. When the image resolution of the worker is 120 pixels or more, both precision and recall are above 0.9; at a resolution of only 60 pixels, the method still achieves a precision of 0.8.
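The two-stage structure described above, a person detector feeding per-worker crops to a PPE classifier, can be sketched with the models injected as callables. The stand-in detector and classifier below are hypothetical placeholders for the RetinaNet and InceptionNet components, used only to show the hand-off between stages:

```python
def two_stage_ppe_check(image, detect_workers, classify_crop):
    """Stage 1: detect worker bounding boxes in the full image.
    Stage 2: classify each worker region for PPE violations.
    Models are injected as callables so the pipeline structure is
    independent of the particular networks used."""
    violations = []
    for box in detect_workers(image):
        labels = classify_crop(image, box)  # e.g. {"no-hardhat", "no-vest"}
        if labels:
            violations.append((box, sorted(labels)))
    return violations

# Hypothetical stand-in models for illustration only.
def fake_detector(image):
    return [(0, 0, 50, 120), (60, 0, 110, 120)]

def fake_classifier(image, box):
    return {"no-hardhat"} if box[0] == 0 else set()
```

Classifying a cropped worker region rather than the full frame is what avoids the feature loss on small-scale PPE that the abstract mentions: the classifier sees the hardhat or vest at the worker's own resolution.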
6

Lu, Ming-Kun (呂鳴崑). "Multi-Camera Vision-based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Human Computer Interactive Systems." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/vgss3p.

Abstract:
Master's thesis, National Taipei University of Technology, Graduate Institute of Computer Science and Information Engineering, 2011 (ROC year 100).
Multi-touch technology has become a popular topic. Multi-touch has been implemented in several ways, including resistive and capacitive sensing; because of their limitations, however, these implementations cannot support large screens. This thesis therefore proposes multi-camera vision-based finger detection, tracking, and event identification techniques for multi-touch sensing, together with an implementation. The proposed system detects multiple fingers pressing on an acrylic board by capturing the infrared light through four infrared cameras. The captured infrared points, which correspond to the touched points, can serve as an input device and make the human-computer interface more convenient. Compared with conventional touch technology, multi-touch technology allows users to input complex commands. The proposed multi-touch point detection algorithm identifies the touched points using bright-object segmentation techniques; the extracted bright objects are then tracked, and their trajectories are recorded. Furthermore, the system analyses the trajectories and identifies the corresponding events pre-defined in the system. In terms of applications, this thesis aims to provide a simple, easy-to-operate human-computer interface in which users input commands by touching and moving their fingers. The proposed system is implemented with a table-sized screen that supports multi-user interaction.
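The bright-object segmentation step at the core of this system can be sketched as thresholding followed by connected-component labelling. The following is a minimal illustration on a grayscale frame represented as a 2D list of intensities, not the thesis code:

```python
def touch_points(frame, threshold=200):
    """Segment bright blobs in a grayscale frame (2D list of intensities)
    and return one centroid per blob: candidate fingertip contacts."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:  # flood-fill one connected component
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and frame[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in blob) / len(blob)
                cx = sum(p[1] for p in blob) / len(blob)
                centroids.append((cy, cx))
    return centroids
```

The centroids returned per frame would then be fed to the tracking stage, which links them across frames into the trajectories used for event identification.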

Books on the topic "Kinematic identification- Vision based techniques"

1

MCBR-CDS 2009 (2009, London, England). Medical Content-Based Retrieval for Clinical Decision Support: First MICCAI International Workshop, MCBR-CDS 2009, London, UK, September 20, 2009: Revised Selected Papers. Berlin: Springer, 2010.

2

Feature Dimension Reduction for Content-Based Image Identification. IGI Global, 2018.


Book chapters on the topic "Kinematic identification- Vision based techniques"

1

Fontenla-Carrera, Gabriel, Ángel Manuel Fernández Vilán, and Pablo Izquierdo Belmonte. "Automatic Identification of Kinematic Diagrams with Computer Vision." In Proceedings of the XV Ibero-American Congress of Mechanical Engineering, 425–31. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-38563-6_62.

Abstract:
In this work, a computer vision algorithm for the detection and recognition of 2D kinematic diagrams, both from paper schemes and digital files, was developed. It works even with hand-drawn diagrams, which can be correctly identified. The algorithm is mainly based on the free computer vision library OpenCV and is able to identify each element of the kinematic diagram and its connections with the other elements, and to store its pixels, which will allow future research to implement motion in the sketches themselves. Supported elements are revolute, prismatic, fixed, cylindrical and rigid joints, and rigid bars. The main applications of this work are in teaching, communicating ideas in a quick and graphical way, and fast preliminary design of new mechanisms: people can draw a diagram on a tablet or on paper and simulate it in real time, avoiding the need to learn specialized simulation software and the time it takes to prepare the virtual model and obtain its results.
2

Harary, Sivan, and Eugene Walach. "Identification of Malignant Breast Tumors Based on Acoustic Attenuation Mapping of Conventional Ultrasound Images." In Medical Computer Vision. Recognition Techniques and Applications in Medical Imaging, 233–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36620-8_23.

3

Sajeendran, B. S., and R. Durairaj. "On-Orbit Real-Time Avionics Package Identification Using Vision-Based Machine Learning Techniques." In Lecture Notes in Mechanical Engineering, 429–37. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1724-2_41.

4

Cruz, Diego A., Cristian C. Cristancho, and Jorge E. Camargo. "Automatic Identification of Traditional Colombian Music Genres Based on Audio Content Analysis and Machine Learning Techniques." In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 646–55. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33904-3_61.

5

Lockner, Yannik, Paul Buske, Maximilian Rudack, Zahra Kheirandish, Moritz Kröger, Stoyan Stoyanov, Seyed Ruhollah Dokhanchi, et al. "Improving Manufacturing Efficiency for Discontinuous Processes by Methodological Cross-Domain Knowledge Transfer." In Internet of Production, 1–33. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-98062-7_8-1.

Abstract:
Discontinuous processes face common tasks when implementing modeling and optimization techniques for process optimization. While domain data may be unequal, knowledge about approaches for each step toward the solution, e.g., data gathering, model reduction, and model optimization, may be useful across different processes. A joint development of methodologies for machine learning methods, among other things, ultimately supports fast advances in cross-domain production technologies. In this work, an overview of common maturation stages of data-intensive modeling approaches for production efficiency enhancement is given. The stages are analyzed and communal challenges are elaborated. The used approaches include both physically motivated surrogate modeling as well as the advanced use of machine learning technologies. Apt research is depicted for each stage based on demonstrator work for diverse production technologies, among them high-pressure die casting, surface engineering, plastics injection molding, open-die forging, and automated tape placement. Finally, a holistic and general framework is illustrated covering the main concepts regarding the transfer of mature models into production environments on the example of laser technologies. Increasing customer requirements regarding process stability, transparency and product quality as well as desired high production efficiency in diverse manufacturing processes pose high demands on production technologies. The further development of digital support systems for manufacturing technologies can contribute to meet these demands in various production settings. Especially for discontinuous production, such as injection molding and laser cutting, the joint research for different technologies helps to identify common challenges, ranging from problem identification to knowledge perpetuation after successfully installing digital tools.
Workstream CRD-B2.II “Discontinuous Production” confronts this research task by use case-based joint development of transferable methods. Based on the joint definition of a standard pipeline to solve problems with digital support, various stages of this pipeline, such as data generation and collection, model training, optimization, and the development and deployment of assistance systems are actively being researched. Regarding data generation, e.g., for the high-pressure die-casting process, data acquisition and extraction approaches for machines and production lines using OPC UA are investigated to get detailed process insights. For diverse discontinuous processes and use cases, relevant production data is not directly available in sufficient quality and needs to be preprocessed. For vision systems, ptychographic methods may improve recorded data by enhancing the picture sharpness to enable the usage of inline or low-cost equipment to detect small defects. Further down the pipeline, several research activities concern the domain-specific model training and optimization tasks. Within the realm of surface technologies, machine learning is applied to predict process behavior, e.g., by predicting the particle properties in plasma spraying process or plasma intensities in the physical vapor deposition process. The injection molding process can also be modeled by data-based approaches. The modeling efficiency based on the used amount of data can furthermore be effectively reduced by using transfer learning to transfer knowledge stored in artificial neural networks from one process to the next. Successful modeling approaches can then be transferred prototypically into production. On the examples of vision-based defect classification in the tape-laying process and a process optimization assistance system in open-die forging, the realization of prototypical support systems is demonstrated. 
Once mature, research results and consequent digital services must be made available for integrated usage in specific production settings using relevant architecture. By the example of a microservice-based infrastructure for laser technology, a suitable and flexible implementation of a service framework is realized. The connectivity to production assets is guaranteed by state-of-the-art communication protocols. This chapter illustrates the state of research for use-case-driven development of joint approaches.
6

Kavati, Ilaiah, Munaga V. N. K. Prasad, and Chakravarthy Bhagvati. "Search Space Reduction in Biometric Databases." In Computer Vision, 1600–1626. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5204-8.ch066.

Abstract:
Deployment of biometrics for personal recognition in various real-time applications leads to large-scale databases. Identifying an individual in such large biometric databases using one-to-one matching (i.e., exhaustive search) increases the response time of the system. Reducing the search space during identification increases the search speed and reduces the response time of the system. This chapter presents a comprehensive review of current developments in search space reduction techniques for biometric databases. Search space reduction techniques for fingerprint databases are categorized into classification and indexing approaches. For the palmprint, current search space reduction techniques are classified as hierarchical matching, classification, and indexing approaches. Likewise, iris indexing approaches are classified as texture-based and color-based techniques.
7

Singh, Law Kumar, Pooja, Hitendra Garg, and Munish Khanna. "An Artificial Intelligence-Based Smart System for Early Glaucoma Recognition Using OCT Images." In Research Anthology on Improving Medical Imaging Techniques for Analysis and Intervention, 1424–54. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-7544-7.ch073.

Abstract:
Glaucoma is a progressive, chronic eye disease that leads to a loss of peripheral vision and, ultimately, to irreversible loss of vision. Detection and identification of glaucoma are essential for earlier treatment and reduced vision loss. This motivates us to present a study on an intelligent diagnosis system based on machine learning algorithms for glaucoma identification using three-dimensional optical coherence tomography (OCT) data. This experimental work uses 70 glaucomatous and 70 healthy eyes from a combination of a public (Mendeley) dataset and a private dataset. Forty-five vital features were extracted from the OCT images using two approaches. K-nearest neighbor (KNN), linear discriminant analysis (LDA), decision tree, random forest, and support vector machine (SVM) classifiers were applied for the categorization of OCT images into the glaucomatous and non-glaucomatous classes. The largest AUC is achieved by KNN (0.97). The accuracy is obtained using fivefold cross-validation. This study will facilitate reaching high standards in glaucoma diagnosis.
8

Tuzova, Lyudmila N., Dmitry V. Tuzoff, Sergey I. Nikolenko, and Alexey S. Krasnov. "Teeth and Landmarks Detection and Classification Based on Deep Neural Networks." In Computational Techniques for Dental Image Analysis, 129–50. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-6243-6.ch006.

Abstract:
In the recent decade, deep neural networks have enjoyed rapid development in various domains, including medicine. Convolutional neural networks (CNNs), deep neural network structures commonly used for image interpretation, brought the breakthrough in computer vision and became state-of-the-art techniques for various image recognition tasks, such as image classification, object detection, and semantic segmentation. In this chapter, the authors provide an overview of deep learning algorithms and review available literature for dental image analysis with methods based on CNNs. The present study is focused on the problems of landmarks and teeth detection and classification, as these tasks comprise an essential part of dental image interpretation both in clinical dentistry and in human identification systems based on the dental biometrical information.
9

Verma, Vivek K., and Tarun Jain. "Machine-Learning-Based Image Feature Selection." In Feature Dimension Reduction for Content-Based Image Identification, 65–73. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5775-3.ch004.

Abstract:
This is the age of big data, where aggregating information is simple and keeping it is economical. Unfortunately, as the amount of machine-readable data grows, the ability to comprehend and make use of it does not keep pace with its growth. In content-based image retrieval (CBIR) applications, every database needs its corresponding parameter setting for feature extraction. CBIR is the application of computer vision techniques to the image retrieval problem, that is, the problem of searching for digital images in large databases. However, most CBIR frameworks perform indexing with a set of fixed and pre-specified parameters. All the major machine-learning-based search algorithms are discussed in this chapter for a better understanding of image retrieval accuracy. The efficiency of feature selection (FS) using machine learning is compared with some other search algorithms and assessed for the improvement of the CBIR system.
10

Latha, Y. L. Malathi, and Munaga V. N. K. Prasad. "A Survey on Palmprint-Based Biometric Recognition System." In Innovative Research in Attention Modeling and Computer Vision Applications, 304–26. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-8723-3.ch012.

Abstract:
The automatic use of physiological or behavioral characteristics to determine or verify an individual's identity is regarded as biometrics. Fingerprints, iris, face, and palmprints are considered physiological biometrics, whereas voice and signature are behavioral biometrics. Palmprint recognition is one of the popular methods that have been investigated over the last fifteen years. The palmprint has a very large inner surface and contains several unique, stable characteristic features used to identify individuals. Several palmprint recognition methods have been extensively studied. This chapter is an attempt to review current palmprint research, describing image acquisition, preprocessing, palmprint feature extraction and matching, palmprint-related fusion, and techniques used for real-time palmprint identification in large databases. Various palmprint recognition methods are compared.

Conference papers on the topic "Kinematic identification- Vision based techniques"

1

Talakoub, Omid, and Farrokh Janabi-Sharifi. "A robust vision-based technique for human arm kinematics identification." In Optics East 2006, edited by Yukitoshi Otani and Farrokh Janabi-Sharifi. SPIE, 2006. http://dx.doi.org/10.1117/12.686229.

2

Das, Arpita, and Mahua Bhattacharya. "GA Based Neuro Fuzzy Techniques for Breast Cancer Identification." In 2008 International Machine Vision and Image Processing Conference (IMVIP). IEEE, 2008. http://dx.doi.org/10.1109/imvip.2008.19.

3

Anil, Abhishek, Hardik Gupta, and Monika Arora. "Computer vision based method for identification of freshness in mushrooms." In 2019 International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT). IEEE, 2019. http://dx.doi.org/10.1109/icict46931.2019.8977698.

4

Yang, Shanglin, Yang Lin, Yong Li, Suyi Zhang, Lihui Peng, and Defu Xu. "Machine Vision Based Granular Raw Material Adulteration Identification in Baijiu Brewing." In 2022 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2022. http://dx.doi.org/10.1109/ist55454.2022.9827757.

5

Wang, Jin, Mary She, Saeid Nahavandi, and Abbas Kouzani. "A Review of Vision-Based Gait Recognition Methods for Human Identification." In 2010 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2010. http://dx.doi.org/10.1109/dicta.2010.62.

6

Cui, Lulu, Lu Wang, Jinyu Su, Zihan Song, and Xilai Li. "Classification and identification of degraded alpine meadows based on machine learning techniques." In 2023 4th International Conference on Computer Vision, Image and Deep Learning (CVIDL). IEEE, 2023. http://dx.doi.org/10.1109/cvidl58838.2023.10167398.

7

Chen, Yen-Lin, Chuan-Yen Chiang, Wen-Yew Liang, Tung-Ju Hsieh, Da-Cheng Lee, Shyan-Ming Yuan, and Yang-Lang Chang. "Developing Ubiquitous Multi-touch Sensing and Displaying Systems with Vision-Based Finger Detection and Event Identification Techniques." In 2011 IEEE International Conference on High Performance Computing and Communications (HPCC). IEEE, 2011. http://dx.doi.org/10.1109/hpcc.2011.129.

8

Freitas, Uéliton, Marcio Pache, Wesley Gonçalves, Edson Matsubara, José Sabino, Diego Sant'Ana, and Hemerson Pistori. "Analysis of color feature extraction techniques for Fish Species Identification." In Workshop de Visão Computacional. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/wvc.2020.13495.

Abstract:
Color recognition is an important step for computer vision to be able to recognize objects under the most varied environmental conditions. Classifying objects by color using computer vision is a good alternative for different color conditions, such as an aquarium, where it is possible to use the resources of a smartphone with real-time image classification applications. This paper presents some experimental results regarding the use of five different feature extraction techniques for the problem of fish species identification. The feature extractors tested are the Bag of Visual Words (BoVW), the Bag of Colors (BoC), the Bag of Features and Colors (BoFC), the Bag of Colored Words (BoCW), and histograms in the HSV and RGB color spaces. The experiments were performed using a dataset, which is also a contribution of this work, containing 1120 images of fishes from 28 different species. The feature extractors were tested under three different supervised learning setups based on Decision Trees, K-Nearest Neighbors, and Support Vector Machines. Among the feature extraction techniques described, the best performance was achieved by BoC using a Support Vector Machine as the classifier, with an F-measure of 0.90 and an AUC of 0.983348 at a dictionary size of 2048.
9

Gai, Vasily, Irina Ephode, Roman Barinov, Igor Polyakov, Vladimir Golubenko, and Olga Andreeva. "Model and Algorithms for User Identification by Network Traffic." In 31th International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-1017-1027.

Abstract:
This paper proposes a method of user identification by network traffic. We describe the information model created, as well as the implementation of each of the proposed problem solving stages. During the network traffic collection stage, a method of capturing network packets on the user's device using specialized software is used. The information obtained is further filtered by removing redundant data. During the object feature descriptor construction stage, we extract and describe the characteristics of network sessions from which the behavioral habits of users are derived. Classification of users according to the extracted characteristics of the network sessions is performed using machine learning techniques. When analyzing the test results, the most appropriate machine learning algorithms for solving the problem of user identification by network traffic were proposed, such as: logistic regression, decision trees, SVM with a linear hyperplane and the boosting method. The accuracy of the above methods was more than 95%. The results proved that it is possible to identify a particular user with a sufficiently high accuracy based on the characteristics of the data transmitted through the network, without examining the contents of the transmitted packets. Comparison of the developed model has shown that the proposed model of user identification by network traffic works as effectively as the existing analogues.
10

Wang, Shichao, and Kaiyu Liu. "Optimization inspection method for concrete girder bridges using vision‐based deep learning and images acquired by unmanned aerial vehicles." In IABSE Conference, Seoul 2020: Risk Intelligence of Infrastructures. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2020. http://dx.doi.org/10.2749/seoul.2020.257.

Abstract:
Traditional concrete girder bridge inspection and monitoring techniques are usually carried out by trained inspectors, which is time-consuming, dangerous and expensive. With the continuous development of unmanned aerial vehicle and computer vision techniques, crack width identification on concrete girder bridges by unmanned aerial vehicles can meet engineering precision requirements. Several image processing techniques have been implemented for detecting civil infrastructure defects to replace human-conducted on-site inspections. In this study, a deep learning algorithm based on a regional convolutional neural network is applied, which combines deep learning techniques with image processing techniques to identify surface cracks of concrete girder bridges. The bridge inspection images are captured by an unmanned aerial vehicle and transmitted to a computer. A sliding window algorithm is used to divide the bridge crack images into smaller bridge crack patches and bridge background patches. Based on the patch analysis, the background patches and crack patches of concrete bridges are identified using the ResNet convolutional neural network. The crack identification process for the concrete girder bridge is executed on the computer through the proposed algorithm. The results show that the proposed method performs excellently and can indeed identify the shape of concrete cracks on the surface of concrete girder bridges.