Academic literature on the topic 'First-person hand activity recognition'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'First-person hand activity recognition.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "First-person hand activity recognition"
Medarevic, Jelena, Marija Novicic, and Marko Markovic. "Feasibility test of activity index summary metric in human hand activity recognition." Serbian Journal of Electrical Engineering 19, no. 2 (2022): 225–38. http://dx.doi.org/10.2298/sjee2202225m.
Senyurek, Volkan, Masudul Imtiaz, Prajakta Belsare, Stephen Tiffany, and Edward Sazonov. "Electromyogram in Cigarette Smoking Activity Recognition." Signals 2, no. 1 (February 9, 2021): 87–97. http://dx.doi.org/10.3390/signals2010008.
Ramirez, Heilym, Sergio A. Velastin, Paulo Aguayo, Ernesto Fabregas, and Gonzalo Farias. "Human Activity Recognition by Sequences of Skeleton Features." Sensors 22, no. 11 (May 25, 2022): 3991. http://dx.doi.org/10.3390/s22113991.
Ray, Sujan, Khaldoon Alshouiliy, and Dharma P. Agrawal. "Dimensionality Reduction for Human Activity Recognition Using Google Colab." Information 12, no. 1 (December 23, 2020): 6. http://dx.doi.org/10.3390/info12010006.
Gao, Zhiqiang, Dawei Liu, Kaizhu Huang, and Yi Huang. "Context-Aware Human Activity and Smartphone Position-Mining with Motion Sensors." Remote Sensing 11, no. 21 (October 29, 2019): 2531. http://dx.doi.org/10.3390/rs11212531.
Guo, Jiang, Jun Cheng, Yu Guo, and Jian Xin Pang. "A Real-Time Dynamic Gesture Recognition System." Applied Mechanics and Materials 333–335 (July 2013): 849–55. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.849.
Bieck, Richard, Reinhard Fuchs, and Thomas Neumuth. "Surface EMG-based Surgical Instrument Classification for Dynamic Activity Recognition in Surgical Workflows." Current Directions in Biomedical Engineering 5, no. 1 (September 1, 2019): 37–40. http://dx.doi.org/10.1515/cdbme-2019-0010.
Bragin, A. D., and V. G. Spitsyn. "Motor imagery recognition in electroencephalograms using convolutional neural networks." Computer Optics 44, no. 3 (June 2020): 482–87. http://dx.doi.org/10.18287/2412-6179-co-669.
Liu, Dan, Mao Ye, and Jianwei Zhang. "Improving Action Recognition Using Sequence Prediction Learning." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 12 (March 20, 2020): 2050029. http://dx.doi.org/10.1142/s0218001420500299.
Yin, Guanghao, Shouqian Sun, Dian Yu, Dejian Li, and Kejun Zhang. "A Multimodal Framework for Large-Scale Emotion Recognition by Fusing Music and Electrodermal Activity Signals." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 3 (August 31, 2022): 1–23. http://dx.doi.org/10.1145/3490686.
Full textDissertations / Theses on the topic "First-person hand activity recognition"
Boutaleb, Mohamed Yasser. "Egocentric Hand Activity Recognition : The principal components of an egocentric hand activity recognition framework, exploitable for augmented reality user assistance." Electronic Thesis or Diss., CentraleSupélec, 2022. http://www.theses.fr/2022CSUP0007.
Humans use their hands for various tasks in daily life and industry, making research in this area a recent focus of significant interest. Moreover, analyzing and interpreting human behavior from visual signals is one of the most active and explored areas of computer vision. With the advent of new augmented reality technologies, researchers are increasingly interested in hand activity understanding from a first-person perspective, exploring its suitability for human guidance and assistance. Our work builds on machine learning to contribute to this research area. Recently, deep neural networks have proven outstandingly effective in many research areas, enabling significant gains in efficiency and robustness. This thesis's main objective is to propose a user activity recognition framework with four key components, which can be used to assist users during activities oriented towards specific objectives: Industry 4.0 (e.g., assisted assembly, maintenance) and teaching. The system observes the user's hands and the manipulated objects from the user's viewpoint to recognize the performed hand activity. The desired framework must robustly recognize the user's usual activities; it must also detect unusual ones in order to give feedback and prevent the user from performing wrong maneuvers, a fundamental requirement for user assistance. This thesis therefore combines techniques from the research fields of computer vision and machine learning to propose comprehensive hand activity recognition components essential for a complete assistance tool.
Zhan, Kai. "First-Person Activity Recognition." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/12948.
Tadesse, Girmaw Abebe. "Human activity recognition using a wearable camera." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/668914.
Full textLos avances en tecnologías wearables facilitan la comprensión de actividades humanas utilizando cuando se usan videos grabados en primera persona para una amplia gama de aplicaciones. En esta tesis, proponemos características robustas de movimiento para el reconocimiento de actividades humana a partir de videos en primera persona. Las características propuestas codifican características discriminativas estimadas a partir de optical flow como magnitud, dirección y dinámica de movimiento. Además, diseñamos nuevas características de inercia virtual a partir de video, sin usar sensores inerciales, utilizando el movimiento del centroide de intensidad a través de los fotogramas. Los resultados obtenidos en múltiples bases de datos demuestran que las características inerciales basadas en centroides mejoran el rendimiento de reconocimiento en comparación con grid-based características. Además, proponemos un algoritmo multicapa que codifica las relaciones jerárquicas y temporales entre actividades. La primera capa opera en grupos de características que codifican eficazmente las dinámicas del movimiento y las variaciones temporales de características de apariencia entre múltiples fotogramas utilizando una jerarquía. La segunda capa aprovecha el contexto temporal ponderando las salidas de la jerarquía durante el modelado. Además, diseñamos una técnica de postprocesado para filtrar las decisiones utilizando estimaciones pasadas y la confianza de la estimación actual. Validamos el algoritmo propuesto utilizando varios clasificadores. El modelado temporal muestra una mejora del rendimiento en el reconocimiento de actividades. También investigamos el uso de redes profundas (deep networks) para simplificar el diseño manual de características a partir de videos en primera persona. Proponemos apilar espectrogramas para representar movimientos globales a corto plazo. Estos espectrogramas contienen una representación espaciotemporal de múltiples componentes de movimiento. Esto nos permite aplicar convoluciones bidimensionales para aprender funciones de movimiento. Empleamos long short-term memory recurrent networks para codificar la dependencia temporal a largo plazo entre las actividades. Además, aplicamos transferencia de conocimiento entre diferentes dominios (cross-domain knowledge) entre enfoques inerciales y basados en la visión para el reconocimiento de la actividad en primera persona. Proponemos una combinación ponderada de información de diferentes modalidades de movimiento y/o secuencias. Los resultados muestran que el algoritmo propuesto obtiene resultados competitivos en comparación con existentes algoritmos basados en deep learning, a la vez que se reduce la complejidad.
Fathi, Alireza. "Learning descriptive models of objects and activities from egocentric video." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/48738.
Full textLiu, Hsuan-Ming, and 劉軒銘. "Activity Recognition in First-Person Camera View Based onTemporal Pyramid." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/92962830683022916719.
National Taiwan University
Graduate Institute of Networking and Multimedia
Academic year: 101 (ROC calendar)
We present a simple but effective online recognition system for detecting interleaved activities of daily living (ADLs) in first-person-view videos. The two major difficulties in detecting ADLs are interleaving and variability in duration. Our system uses a temporal pyramid to attack these difficulties, which lets us use relatively simple models instead of time-dependent probabilistic ones such as hidden semi-Markov models or nested models. The proposed solution combines conditional random fields (CRFs) with an online inference algorithm that explicitly considers multiple interleaved sequences by inferring multi-stage activities on the temporal pyramid. Although our system only uses a linear chain-structured CRF model, which can be learned without a large amount of training data, it still recognizes complicated activity sequences. The system is evaluated on a dataset provided by state-of-the-art work, and the result is comparable to their method. We also provide experimental results on a customized dataset.
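A minimal sketch of the temporal-pyramid pooling this abstract relies on (our illustration, not the thesis implementation; the level count, mean pooling, and feature dimension are assumptions): per-frame features are pooled over the whole window and over progressively finer segments, giving a fixed-length descriptor that a linear-chain CRF could take as its observation.

```python
# Temporal-pyramid pooling sketch: average per-frame features over the full
# window, halves, and quarters, then concatenate the pooled vectors.
import numpy as np

def temporal_pyramid(frames: np.ndarray, levels: int = 3) -> np.ndarray:
    """frames: (n_frames, feat_dim) array of per-frame features."""
    pooled = []
    for level in range(levels):
        # Split the window into 2**level equal segments and pool each one.
        for segment in np.array_split(frames, 2 ** level):
            pooled.append(segment.mean(axis=0))
    return np.concatenate(pooled)  # dim = feat_dim * (2**levels - 1)

# Example: 60 frames of 16-dim features -> one fixed-length descriptor.
desc = temporal_pyramid(np.random.rand(60, 16))
print(desc.shape)  # (112,) = 16 * (1 + 2 + 4)
```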
Lei, Yan-Jing (雷晏菁). "Activity Recognition of First-Person Vision and Sleep Posture Analysis." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/ygh973.
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year: 105 (ROC calendar)
First-person vision cameras are becoming widely used in daily life to record every second of our activities, exercise, adventures, and so on. We present a succinct and robust 3D convolutional neural network (CNN) architecture for both long-term and short-term activity recognition in first-person-view (FPV) videos. Recognizing activities allows us to segment amorphous input videos into meaningful chapters, enabling efficient browsing and immediate retrieval of the fragments we need. Previous methods for this task are based on hand-crafted features, such as the subject's hands, visual objects, and optical flow. Our 3D CNN is deeper and uses small kernels with strides. The network is designed for both long-term and short-term activities and is trained on low-resolution sparse optical flow to classify the camera wearer's activity; training on sparse optical flow reduces the computational complexity. Next, we train an ensemble-learning meta-classifier to aggregate the predictions of multiple models, requiring neither long training times nor large amounts of data to converge. We achieve a classification accuracy of 90%, outperforming the current state of the art by 15%. We evaluate on an extended FPV video dataset with almost twice as many subjects as the current state of the art and nine classes of daily-life activity. Our method balances long-term and short-term activities, for example, sleeping and watching TV (long-term) versus taking medicine and using a phone (short-term). No assumptions are made about the scene structure, so different backgrounds should in principle work fine. For sleep posture classification, we propose a three-stream network to recognize ten types of sleep posture. We use a Kinect depth camera to capture the sleep image stream, normalize the depth images, and compute vertical distance maps as network input. The system distinguishes the ten major sleep postures under different covering conditions, such as no covering, blanket covering, and quilt covering, allowing sleep status to be observed all night long and improvements to be recommended for better sleep quality. Furthermore, we recorded sleep images of the ten postures from 36 subjects and verified that the proposed network completes these tasks well. Elderly home care is currently a widely discussed topic, and everyone is looking for solutions. We therefore propose a daily-diary method to assist people experiencing memory decline, hoping to delay the deterioration of the disease: users can review their whole day by browsing the daily diary in our application, accomplishing the goal of recollecting and recording memories.
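The flow-based 3D CNN described above might look roughly like the following sketch (assumed kernel sizes, strides, channel widths, and input resolution; not the thesis architecture):

```python
# Illustrative small 3D CNN over low-resolution optical-flow volumes.
import torch
import torch.nn as nn

class Flow3DCNN(nn.Module):
    def __init__(self, n_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            # 2 input channels: horizontal and vertical flow components.
            nn.Conv3d(2, 16, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, frames, height, width) flow volume
        return self.classifier(self.features(x).flatten(1))

# Example: 16-frame flow clips at 32x32 resolution, 9 daily-life classes.
logits = Flow3DCNN()(torch.randn(2, 2, 16, 32, 32))
```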
Xia, Lu. "Recognizing human activity using RGBD data." Thesis, 2014. http://hdl.handle.net/2152/24981.
Books on the topic "First-person hand activity recognition"
Norris, Pippa. Political Activism: New Challenges, New Opportunities. Edited by Carles Boix and Susan C. Stokes. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199566020.003.0026.
Book chapters on the topic "First-person hand activity recognition"
Siddiqi, Faisal. "Paradoxes of Strategic Labour Rights Litigation: Insights from the Baldia Factory Fire Litigation." In Interdisciplinary Studies in Human Rights, 59–96. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73835-8_4.
Verma, Kamal Kant, and Brij Mohan Singh. "A Six-Stream CNN Fusion-Based Human Activity Recognition on RGBD Data." In Challenges and Applications for Hand Gesture Recognition, 124–55. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9434-6.ch007.
Oladapo Adenaiye, Oluwasanmi, Kathleen Marie McPhaul, and Donald K. Milton. "Acute Respiratory Infections: Diagnosis, Epidemiology, Management, and Prevention." In Modern Occupational Diseases Diagnosis, Epidemiology, Management and Prevention, 145–63. Bentham Science Publishers, 2022. http://dx.doi.org/10.2174/9789815049138122010012.
Jeon, Moon-Jin, Sang Wan Lee, and Zeungnam Bien. "Hand Gesture Recognition Using Multivariate Fuzzy Decision Tree and User Adaptation." In Contemporary Theory and Pragmatic Approaches in Fuzzy Computing Utilization, 105–19. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-1870-1.ch008.
Anderson, Cindy L., and Kevin M. Anderson. "Practical Examples of Using Switch-Adapted and Battery-Powered Technology to Benefit Persons With Disabilities." In Handmade Teaching Materials for Students With Disabilities, 212–30. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-6240-5.ch009.
Anderson, Cindy L., and Kevin M. Anderson. "Practical Examples of Using Switch-Adapted and Battery-Powered Technology to Benefit Persons With Disabilities." In Research Anthology on Physical and Intellectual Disabilities in an Inclusive Society, 736–53. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3542-7.ch040.
Kumar Sharma, Avinash, Pratiyaksha Mittal, Ritik Ranjan, and Rishabh Chaturvedi. "Bank Robbery Detection System Using Computer Vision." In Advances in Transdisciplinary Engineering. IOS Press, 2023. http://dx.doi.org/10.3233/atde221322.
Tsatsoulis, P. Daphne, Aaron Jaech, Robert Batie, and Marios Savvides. "Multimodal Biometric Hand-Off for Robust Unobtrusive Continuous Biometric Authentication." In IT Policy and Ethics, 389–409. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2919-6.ch018.
Lynch, John Roy. "Democrats in the South: The Race Question." In Reminiscences of an Active Life, edited by John Hope Franklin, 503–12. University Press of Mississippi, 2008. http://dx.doi.org/10.14325/mississippi/9781604731149.003.0050.
Anisimov, Dmytro, Dmytro Petrushin, and Victor Boguslavsky. "Improvement of Physical Training of First-Year Cadets of Dnipropetrovsk State University of Internal Affairs." In Scientific Space in the Conditions of Global Transformations of the Modern World. Publishing House "Baltija Publishing", 2022. http://dx.doi.org/10.30525/978-9934-26-255-5-1.
Conference papers on the topic "First-person hand activity recognition"
Grewe, Lynne L., Chengzhi Hu, Krishna Tank, Aditya Jaiswal, Thomas Martin, Sahil Sutaria, Tran Huynh, and Francis David Bustos. "First person perspective video activity recognition." In Signal Processing, Sensor/Information Fusion, and Target Recognition XXIX, edited by Lynne L. Grewe, Erik P. Blasch, and Ivan Kadar. SPIE, 2020. http://dx.doi.org/10.1117/12.2557922.
Ma, Minghuang, Haoqi Fan, and Kris M. Kitani. "Going Deeper into First-Person Activity Recognition." In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.209.
Iwashita, Yumi, Asamichi Takamine, Ryo Kurazume, and M. S. Ryoo. "First-Person Animal Activity Recognition from Egocentric Videos." In 2014 22nd International Conference on Pattern Recognition (ICPR). IEEE, 2014. http://dx.doi.org/10.1109/icpr.2014.739.
Zhan, Kai, Vitor Guizilini, and Fabio Ramos. "Dense motion segmentation for first-person activity recognition." In 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV). IEEE, 2014. http://dx.doi.org/10.1109/icarcv.2014.7064291.
Demachi, Kazuyuki, and Shi Chen. "Development of Malicious Hand Behaviors Detection Method by Movie Analysis." In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-81643.
Ozkan, Fatih, Mehmet Ali Arabaci, Elif Surer, and Alptekin Temizel. "Boosted multiple kernel learning for first-person activity recognition." In 2017 25th European Signal Processing Conference (EUSIPCO). IEEE, 2017. http://dx.doi.org/10.23919/eusipco.2017.8081368.
Prabhakar, Manav, and Snehasis Mukherjee. "First-person Activity Recognition by Modelling Subject-Action Relevance." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892547.
Spriggs, Ekaterina H., Fernando De La Torre, and Martial Hebert. "Temporal segmentation and activity classification from first-person sensing." In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2009. http://dx.doi.org/10.1109/cvprw.2009.5204354.
Garcia-Hernando, Guillermo, Shanxin Yuan, Seungryul Baek, and Tae-Kyun Kim. "First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00050.
Baydoun, Mohamad, Alejandro Betancourt, Pietro Morerio, Lucio Marcenaro, Matthias Rauterberg, and Carlo Regazzoni. "Hand pose recognition in First Person Vision through graph spectral analysis." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952481.