Ready-made bibliography on the topic "Kinematic identification- Vision based techniques"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Consult lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Kinematic identification- Vision based techniques".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and a bibliographic reference to the selected work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided the relevant details are available in the metadata.
Journal articles on the topic "Kinematic identification- Vision based techniques"
Seah, Shao Xuan, Yan Han Lau, and Sutthiphong Srigrarom. "Multiple Aerial Targets Re-Identification by 2D- and 3D- Kinematics-Based Matching". Journal of Imaging 8, no. 2 (January 28, 2022): 26. http://dx.doi.org/10.3390/jimaging8020026.
Chen, Biao, Chaoyang Chen, Jie Hu, Zain Sayeed, Jin Qi, Hussein F. Darwiche, Bryan E. Little, et al. "Computer Vision and Machine Learning-Based Gait Pattern Recognition for Flat Fall Prediction". Sensors 22, no. 20 (October 19, 2022): 7960. http://dx.doi.org/10.3390/s22207960.
Huang, Jianbing, and Chia-Hsiang Menq. "Identification and Characterization of Regular Surfaces from Unorganized Points by Normal Sensitivity Analysis". Journal of Computing and Information Science in Engineering 2, no. 2 (June 1, 2002): 115–24. http://dx.doi.org/10.1115/1.1509075.
Amami, Mustafa M. "Fast and Reliable Vision-Based Navigation for Real Time Kinematic Applications". International Journal for Research in Applied Science and Engineering Technology 10, no. 2 (February 28, 2022): 922–32. http://dx.doi.org/10.22214/ijraset.2022.40395.
Sanchez Guinea, Alejandro, Simon Heinrich, and Max Mühlhäuser. "Activity-Free User Identification Using Wearables Based on Vision Techniques". Sensors 22, no. 19 (September 28, 2022): 7368. http://dx.doi.org/10.3390/s22197368.
Silva, José Luís, Rui Bordalo, José Pissarra, and Paloma de Palacios. "Computer Vision-Based Wood Identification: A Review". Forests 13, no. 12 (November 30, 2022): 2041. http://dx.doi.org/10.3390/f13122041.
RADHIKA, K. R., S. V. SHEELA, M. K. VENKATESHA, and G. N. SEKHAR. "SIGNATURE AND IRIS AUTHENTICATION BASED ON DERIVED KINEMATIC VALUES". International Journal of Pattern Recognition and Artificial Intelligence 24, no. 08 (December 2010): 1237–60. http://dx.doi.org/10.1142/s021800141000841x.
Dang, Minh. "Efficient Vision-Based Face Image Manipulation Identification Framework Based on Deep Learning". Electronics 11, no. 22 (November 17, 2022): 3773. http://dx.doi.org/10.3390/electronics11223773.
Bryła, Jakub, Adam Martowicz, Maciej Petko, Konrad Gac, Konrad Kobus, and Artur Kowalski. "Wear Analysis of 3D-Printed Spur and Herringbone Gears Used in Automated Retail Kiosks Based on Computer Vision and Statistical Methods". Materials 16, no. 16 (August 10, 2023): 5554. http://dx.doi.org/10.3390/ma16165554.
Heydarzadeh, Mohsen, Nima Karbasizadeh, Mehdi Tale Masouleh, and Ahmad Kalhor. "Experimental kinematic identification and position control of a 3-DOF decoupled parallel robot". Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 233, no. 5 (May 31, 2018): 1841–55. http://dx.doi.org/10.1177/0954406218775906.
Pełny tekst źródłaRozprawy doktorskie na temat "Kinematic identification- Vision based techniques"
Yang, Xu. "One sample based feature learning and its application to object identification". Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950624.
Werner, Felix. "Vision-based topological mapping and localisation". Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/31815/1/Felix_Werner_Thesis.pdf.
Anguzza, Umberto. "A method to develop a computer-vision based system for the automatic dairy cow identification and behaviour detection in free stall barns". Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1334.
Kuo, Yao-wen, and 郭耀文. "Vision-based Techniques for Real-time Action Identification of Upper Body Rehabilitation". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/m26nmg.
Pełny tekst źródła國立臺北科技大學
資訊工程系研究所
101
Rehabilitation requires patients to spend a great deal of time practicing rehabilitation actions, which contributes to a shortage of medical and rehabilitation staff. Because home rehabilitation can both reduce the burden on patients' family members and relieve this staff shortage, this thesis proposes an action identification system for upper body rehabilitation. The key element of the proposed system is the construction of the upper body skeleton, and the thesis presents an algorithm that builds upper body skeleton points feasibly and rapidly. Using the structure of the human upper body skeleton together with skin color information, the system establishes the upper body skeleton points effectively. As a result, the proposed system achieves a recognition rate of 98% for the defined rehabilitation actions covering different muscle groups. Moreover, its processing speed reaches 125 FPS (8 ms per frame), and this computational efficiency leaves room for future extensions such as handling complex ambient environments and implementation on embedded and pervasive systems.
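The thesis itself does not publish its skeleton-construction algorithm, but the abstract's description (upper body skeleton points derived from body structure and skin color) suggests a classic color-segmentation pipeline. The sketch below is a minimal illustration of that idea with OpenCV; the HSV thresholds, the choice of the three largest blobs as head and hand candidates, and the function name are assumptions made for illustration, not the author's implementation.

```python
# Minimal sketch of skin-color-based upper-body keypoint extraction (illustrative only).
import cv2
import numpy as np

def skin_keypoint_candidates(frame_bgr, max_points=3):
    """Return centroids of the largest skin-colored blobs (e.g., face and both hands)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Hypothetical skin range in HSV; a real system tunes this per camera and lighting.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:max_points]:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    return centroids  # candidate skeleton endpoints
```

A full system would then connect such candidates into an upper body skeleton using body proportions and match the resulting joint motions against the defined rehabilitation actions.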
Chien, Rong-Chun, and 簡榮均. "Deep Learning Based Computer Vision Techniques for Real-time Identification of Construction Site Personal Equipment Violations". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/x7n995.
Pełny tekst źródła國立臺灣大學
土木工程學研究所
107
Being well equipped with Personal Protective Equipment (PPE) plays an essential role on construction sites in protecting individuals from accidents. However, due to inconvenience and discomfort, it is common to see workers on site not wearing it, so ensuring the proper use of PPE remains an important part of construction site safety. In recent years, thanks to the increasing performance of graphics cards and the rise of deep learning, computer vision techniques based on convolutional neural networks (CNNs) have received growing attention, and monitoring PPE use with computer vision is considered more effective than sensor-based methods. This work proposes a two-stage method to automatically detect three kinds of PPE violations: non-hardhat use, missing safety vest, and being bare to the waist. By combining an object detection model with a classification model, the two-stage method avoids the feature loss of small-scale PPE and achieves better accuracy. First, an object detection model based on RetinaNet detects workers in the image. Then, the worker images are fed into an InceptionNet-based classification model to identify PPE violations. This study collected 3015 site images to perform transfer learning. The results show that the method can detect PPE violations at a frame rate of 10 fps. When the image resolution of the worker is 120 pixels or more, both precision and recall exceed 0.9; at a resolution of only 60 pixels, the method still achieves a precision of 0.8.
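As a rough illustration of the two-stage idea described in this abstract (worker detection followed by per-crop classification), the sketch below wires together off-the-shelf torchvision models. The pretrained RetinaNet stands in for the worker detector, the Inception v3 head is merely sized for three violation classes and would still need training, and the label names and thresholds are placeholders rather than the author's trained models.

```python
# Illustrative two-stage PPE-violation pipeline (placeholder models and labels).
import torch
import torchvision
from torchvision.transforms import functional as F

VIOLATION_CLASSES = ["non_hardhat_use", "no_safety_vest", "bare_to_waist"]  # hypothetical label set

detector = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT").eval()
# Classifier head sized for the three violation classes; it would still need training.
classifier = torchvision.models.inception_v3(weights=None, aux_logits=False,
                                             num_classes=len(VIOLATION_CLASSES)).eval()

@torch.no_grad()
def detect_violations(image, person_score_thr=0.5):
    """Stage 1: detect persons (COCO class 1); Stage 2: classify each person crop."""
    tensor = F.to_tensor(image)
    detections = detector([tensor])[0]
    results = []
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if label.item() != 1 or score.item() < person_score_thr:
            continue
        x1, y1, x2, y2 = [int(v.item()) for v in box]
        crop = F.resize(tensor[:, y1:y2, x1:x2], [299, 299])  # Inception v3 input size
        logits = classifier(crop.unsqueeze(0))
        results.append((box.tolist(), VIOLATION_CLASSES[int(logits.argmax())]))
    return results
```

In practice both stages would be fine-tuned on annotated site images, as the abstract reports with its 3015-image transfer learning dataset.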
Lu, Ming-Kun, and 呂鳴崑. "Multi-Camera Vision-based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Human Computer Interactive Systems". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/vgss3p.
Pełny tekst źródła國立臺北科技大學
資訊工程系研究所
100
Multi-touch technology has become a popular topic. Multi-touch has been implemented in several ways, including resistive and capacitive sensing, but because of their physical limitations these implementations cannot support large screens. This thesis therefore proposes and implements multi-camera vision-based finger detection, tracking, and event identification techniques for multi-touch sensing. The proposed system detects multiple fingers pressing on an acrylic board by capturing the infrared light with four infrared cameras. The captured infrared points, which correspond to the finger touch points, serve as input for a convenient human-computer interface. Compared with conventional touch technology, multi-touch technology allows users to issue more complex commands. The proposed multi-touch point detection algorithm identifies the touched points using bright object segmentation; the extracted bright objects are then tracked and their trajectories recorded. The system analyzes these trajectories and identifies the corresponding events pre-defined in the system. As an application, the thesis aims to provide a simple, easy-to-operate human-computer interface in which users input commands by touching and moving their fingers. The proposed system is implemented with a table-sized screen that supports multi-user interaction.
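The abstract above outlines a pipeline of bright-object segmentation on infrared frames followed by trajectory tracking and event analysis. The following sketch shows one plausible way to implement the first two steps with OpenCV; the intensity threshold, minimum blob area, and matching radius are assumptions rather than the thesis's parameters, and fusing the four camera views and recognizing gesture events are left out.

```python
# Sketch of bright-object segmentation and greedy nearest-neighbor trajectory tracking.
import cv2
import numpy as np

def segment_touch_points(ir_frame_gray, intensity_thr=200, min_area=20):
    """Extract centroids of bright infrared blobs (candidate finger touch points)."""
    _, binary = cv2.threshold(ir_frame_gray, intensity_thr, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        m = cv2.moments(c)
        points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return points

def update_trajectories(trajectories, points, max_dist=40.0):
    """Append each new point to the nearest existing trajectory, or start a new one."""
    for p in points:
        dist = lambda t: np.hypot(t[-1][0] - p[0], t[-1][1] - p[1])
        nearest = min(trajectories, key=dist, default=None)
        if nearest is not None and dist(nearest) < max_dist:
            nearest.append(p)
        else:
            trajectories.append([p])
    return trajectories
```

Event identification would then classify the recorded trajectories (for example taps or drags) against the gestures pre-defined in the system.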
Books on the topic "Kinematic identification- Vision based techniques"
MCBR-CDS 2009 (2009 London, England). Medical content-based retrieval for clinical decision support: First MICCAI international workshop, MCBR-CDS 2009, London, UK, September 20, 2009: revised selected papers. Berlin: Springer, 2010.
Feature Dimension Reduction for Content-Based Image Identification. IGI Global, 2018.
Book chapters on the topic "Kinematic identification- Vision based techniques"
Fontenla-Carrera, Gabriel, Ángel Manuel Fernández Vilán, and Pablo Izquierdo Belmonte. "Automatic Identification of Kinematic Diagrams with Computer Vision". In Proceedings of the XV Ibero-American Congress of Mechanical Engineering, 425–31. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-38563-6_62.
Harary, Sivan, and Eugene Walach. "Identification of Malignant Breast Tumors Based on Acoustic Attenuation Mapping of Conventional Ultrasound Images". In Medical Computer Vision. Recognition Techniques and Applications in Medical Imaging, 233–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36620-8_23.
Sajeendran, B. S., and R. Durairaj. "On-Orbit Real-Time Avionics Package Identification Using Vision-Based Machine Learning Techniques". In Lecture Notes in Mechanical Engineering, 429–37. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1724-2_41.
Cruz, Diego A., Cristian C. Cristancho, and Jorge E. Camargo. "Automatic Identification of Traditional Colombian Music Genres Based on Audio Content Analysis and Machine Learning Techniques". In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 646–55. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33904-3_61.
Lockner, Yannik, Paul Buske, Maximilian Rudack, Zahra Kheirandish, Moritz Kröger, Stoyan Stoyanov, Seyed Ruhollah Dokhanchi, et al. "Improving Manufacturing Efficiency for Discontinuous Processes by Methodological Cross-Domain Knowledge Transfer". In Internet of Production, 1–33. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-98062-7_8-1.
Kavati, Ilaiah, Munaga V. N. K. Prasad, and Chakravarthy Bhagvati. "Search Space Reduction in Biometric Databases". In Computer Vision, 1600–1626. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5204-8.ch066.
Singh, Law Kumar, Pooja, Hitendra Garg, and Munish Khanna. "An Artificial Intelligence-Based Smart System for Early Glaucoma Recognition Using OCT Images". In Research Anthology on Improving Medical Imaging Techniques for Analysis and Intervention, 1424–54. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-7544-7.ch073.
Tuzova, Lyudmila N., Dmitry V. Tuzoff, Sergey I. Nikolenko, and Alexey S. Krasnov. "Teeth and Landmarks Detection and Classification Based on Deep Neural Networks". In Computational Techniques for Dental Image Analysis, 129–50. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-6243-6.ch006.
Verma, Vivek K., and Tarun Jain. "Machine-Learning-Based Image Feature Selection". In Feature Dimension Reduction for Content-Based Image Identification, 65–73. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5775-3.ch004.
Latha, Y. L. Malathi, and Munaga V. N. K. Prasad. "A Survey on Palmprint-Based Biometric Recognition System". In Innovative Research in Attention Modeling and Computer Vision Applications, 304–26. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-8723-3.ch012.
Conference papers on the topic "Kinematic identification- Vision based techniques"
Talakoub, Omid, and Farrokh Janabi Sharifi. "A robust vision-based technique for human arm kinematics identification". In Optics East 2006, edited by Yukitoshi Otani and Farrokh Janabi-Sharifi. SPIE, 2006. http://dx.doi.org/10.1117/12.686229.
Das, Arpita, and Mahua Bhattacharya. "GA Based Neuro Fuzzy Techniques for Breast Cancer Identification". In 2008 International Machine Vision and Image Processing Conference (IMVIP). IEEE, 2008. http://dx.doi.org/10.1109/imvip.2008.19.
Anil, Abhishek, Hardik Gupta, and Monika Arora. "Computer vision based method for identification of freshness in mushrooms". In 2019 International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT). IEEE, 2019. http://dx.doi.org/10.1109/icict46931.2019.8977698.
Yang, Shanglin, Yang Lin, Yong Li, Suyi Zhang, Lihui Peng, and Defu Xu. "Machine Vision Based Granular Raw Material Adulteration Identification in Baijiu Brewing". In 2022 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2022. http://dx.doi.org/10.1109/ist55454.2022.9827757.
Wang, Jin, Mary She, Saeid Nahavandi, and Abbas Kouzani. "A Review of Vision-Based Gait Recognition Methods for Human Identification". In 2010 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2010. http://dx.doi.org/10.1109/dicta.2010.62.
Cui, Lulu, Lu Wang, Jinyu Su, Zihan Song, and Xilai Li. "Classification and identification of degraded alpine meadows based on machine learning techniques". In 2023 4th International Conference on Computer Vision, Image and Deep Learning (CVIDL). IEEE, 2023. http://dx.doi.org/10.1109/cvidl58838.2023.10167398.
Chen, Yen-Lin, Chuan-Yen Chiang, Wen-Yew Liang, Tung-Ju Hsieh, Da-Cheng Lee, Shyan-Ming Yuan, and Yang-Lang Chang. "Developing Ubiquitous Multi-touch Sensing and Displaying Systems with Vision-Based Finger Detection and Event Identification Techniques". In Communication (HPCC). IEEE, 2011. http://dx.doi.org/10.1109/hpcc.2011.129.
Freitas, Uéliton, Marcio Pache, Wesley Gonçalves, Edson Matsubara, José Sabino, Diego Sant'Ana, and Hemerson Pistori. "Analysis of color feature extraction techniques for Fish Species Identification". In Workshop de Visão Computacional. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/wvc.2020.13495.
Gai, Vasily, Irina Ephode, Roman Barinov, Igor Polyakov, Vladimir Golubenko, and Olga Andreeva. "Model and Algorithms for User Identification by Network Traffic". In 31th International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-1017-1027.
Wang, Shichao, and Kaiyu Liu. "Optimization inspection method for concrete girder bridges using vision-based deep learning and images acquired by unmanned aerial vehicles". In IABSE Conference, Seoul 2020: Risk Intelligence of Infrastructures. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2020. http://dx.doi.org/10.2749/seoul.2020.257.