Scientific literature on the topic "First-person hand activity recognition"
Below are thematic lists of journal articles, books, theses, conference papers, and other scholarly sources on the topic "First-person hand activity recognition".
Journal articles on the topic "First-person hand activity recognition"
Medarevic, Jelena, Marija Novicic, and Marko Markovic. "Feasibility test of activity index summary metric in human hand activity recognition." Serbian Journal of Electrical Engineering 19, no. 2 (2022): 225–38. http://dx.doi.org/10.2298/sjee2202225m.
Senyurek, Volkan, Masudul Imtiaz, Prajakta Belsare, Stephen Tiffany, and Edward Sazonov. "Electromyogram in Cigarette Smoking Activity Recognition." Signals 2, no. 1 (February 9, 2021): 87–97. http://dx.doi.org/10.3390/signals2010008.
Ramirez, Heilym, Sergio A. Velastin, Paulo Aguayo, Ernesto Fabregas, and Gonzalo Farias. "Human Activity Recognition by Sequences of Skeleton Features." Sensors 22, no. 11 (May 25, 2022): 3991. http://dx.doi.org/10.3390/s22113991.
Ray, Sujan, Khaldoon Alshouiliy, and Dharma P. Agrawal. "Dimensionality Reduction for Human Activity Recognition Using Google Colab." Information 12, no. 1 (December 23, 2020): 6. http://dx.doi.org/10.3390/info12010006.
Gao, Zhiqiang, Dawei Liu, Kaizhu Huang, and Yi Huang. "Context-Aware Human Activity and Smartphone Position-Mining with Motion Sensors." Remote Sensing 11, no. 21 (October 29, 2019): 2531. http://dx.doi.org/10.3390/rs11212531.
Guo, Jiang, Jun Cheng, Yu Guo, and Jian Xin Pang. "A Real-Time Dynamic Gesture Recognition System." Applied Mechanics and Materials 333–335 (July 2013): 849–55. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.849.
Bieck, Richard, Reinhard Fuchs, and Thomas Neumuth. "Surface EMG-based Surgical Instrument Classification for Dynamic Activity Recognition in Surgical Workflows." Current Directions in Biomedical Engineering 5, no. 1 (September 1, 2019): 37–40. http://dx.doi.org/10.1515/cdbme-2019-0010.
Bragin, A. D., and V. G. Spitsyn. "Motor imagery recognition in electroencephalograms using convolutional neural networks." Computer Optics 44, no. 3 (June 2020): 482–87. http://dx.doi.org/10.18287/2412-6179-co-669.
Liu, Dan, Mao Ye, and Jianwei Zhang. "Improving Action Recognition Using Sequence Prediction Learning." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 12 (March 20, 2020): 2050029. http://dx.doi.org/10.1142/s0218001420500299.
Yin, Guanghao, Shouqian Sun, Dian Yu, Dejian Li, and Kejun Zhang. "A Multimodal Framework for Large-Scale Emotion Recognition by Fusing Music and Electrodermal Activity Signals." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 3 (August 31, 2022): 1–23. http://dx.doi.org/10.1145/3490686.
Texte intégralThèses sur le sujet "First-person hand activity recognition"
Boutaleb, Mohamed Yasser. "Egocentric Hand Activity Recognition: The principal components of an egocentric hand activity recognition framework, exploitable for augmented reality user assistance." Electronic Thesis or Diss., CentraleSupélec, 2022. http://www.theses.fr/2022CSUP0007.
Humans use their hands for various tasks in daily life and industry, making this area a recent focus of significant research interest. Moreover, analyzing and interpreting human behavior from visual signals is one of the most active and explored areas of computer vision. With the advent of new augmented reality technologies, researchers are increasingly interested in understanding hand activity from a first-person perspective, exploring its suitability for human guidance and assistance. Our work draws on machine learning technology to contribute to this research area. Recently, deep neural networks have proven outstandingly effective in many research areas, enabling significant gains in efficiency and robustness. The main objective of this thesis is to propose a user activity recognition framework comprising four key components, which can be used to assist users during activities oriented towards specific objectives: Industry 4.0 (e.g., assisted assembly, maintenance) and teaching. The system observes the user's hands and the manipulated objects from the user's viewpoint to recognize the performed hand activity. The framework must robustly recognize the user's usual activities; nevertheless, it must also detect unusual ones in order to give feedback and prevent the user from performing incorrect maneuvers, a fundamental requirement for user assistance. This thesis therefore combines techniques from the research fields of computer vision and machine learning to propose comprehensive hand activity recognition components essential for a complete assistance tool.
Zhan, Kai. "First-Person Activity Recognition." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/12948.
Tadesse, Girmaw Abebe. "Human activity recognition using a wearable camera." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/668914.
Advances in wearable technologies are facilitating the understanding of human activities from first-person video recordings for a wide range of applications. In this thesis, we propose robust motion features for human activity recognition from first-person videos. The proposed features encode discriminative characteristics estimated from optical flow, such as the magnitude, direction, and dynamics of motion. In addition, we design novel virtual-inertial features from video alone, without using inertial sensors, based on the movement of the intensity centroid across frames. Results on multiple datasets demonstrate that centroid-based inertial features improve recognition performance compared to grid-based features. Furthermore, we propose a multi-layer algorithm that encodes hierarchical and temporal relationships among activities. The first layer operates on groups of features that effectively encode motion dynamics and temporal variations of appearance features across multiple frames using a hierarchy. The second layer exploits temporal context by weighting the outputs of the hierarchy during modeling. In addition, we design a post-processing technique to filter decisions using past estimates and the confidence of the current estimate. We validate the proposed algorithm using several classifiers; temporal modeling improves recognition performance. We also investigate the use of deep networks to simplify manual feature design for first-person videos. We propose stacking spectrograms to represent short-term global motion. These spectrograms contain a spatio-temporal representation of multiple motion components, which allows us to apply two-dimensional convolutions to learn motion features. We employ long short-term memory recurrent networks to encode long-term temporal dependencies among activities. Finally, we apply cross-domain knowledge transfer between inertial and vision-based approaches for first-person activity recognition, proposing a weighted combination of information from different motion modalities and/or sequences. Results show that the proposed algorithm achieves competitive results compared with existing deep-learning-based algorithms while reducing complexity.
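To make the centroid-based virtual-inertial idea above concrete, here is a minimal sketch (ours, not the thesis code) of how such features could be computed, assuming grayscale frames as NumPy arrays and OpenCV for image moments:

    import cv2
    import numpy as np

    def intensity_centroid(frame_gray):
        # Centroid of pixel intensities: a single "virtual sensor" point per frame.
        m = cv2.moments(frame_gray)
        if m["m00"] == 0:  # guard against blank frames
            return np.zeros(2)
        return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    def virtual_inertial_features(frames):
        # frames: sequence of T >= 3 grayscale frames (2D uint8/float32 arrays).
        # Frame-to-frame displacement of the centroid plays the role of velocity;
        # its difference plays the role of acceleration, mimicking an inertial
        # signal without any physical sensor.
        centroids = np.stack([intensity_centroid(f) for f in frames])
        velocity = np.diff(centroids, axis=0)
        acceleration = np.diff(velocity, axis=0)
        return np.concatenate([velocity[1:], acceleration], axis=1)  # (T-2, 4)

The actual descriptors in the thesis (magnitude, direction, and dynamics statistics) would be computed on top of such trajectories.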
Fathi, Alireza. "Learning descriptive models of objects and activities from egocentric video." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/48738.
Liu, Hsuan-Ming, and 劉軒銘. "Activity Recognition in First-Person Camera View Based on Temporal Pyramid." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/92962830683022916719.
Texte intégral國立臺灣大學
資訊網路與多媒體研究所
101
We present a simple but effective online recognition system for detecting interleaved activities of daily living (ADLs) in first-person-view videos. The two major difficulties in detecting ADLs are interleaving and variability in duration. Our system uses a temporal pyramid to address these difficulties, which means we can use relatively simple models instead of time-dependent probabilistic ones such as hidden semi-Markov models or nested models. The proposed solution combines conditional random fields (CRF) with an online inference algorithm that explicitly considers multiple interleaved sequences by inferring multi-stage activities on the temporal pyramid. Although our system uses only a linear chain-structured CRF model, which can be learned without a large amount of training data, it still recognizes complicated activity sequences. The system is evaluated on a dataset provided by state-of-the-art prior work, and the result is comparable to their method. We also provide experimental results on a customized dataset.
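As an illustration of the temporal-pyramid representation this abstract relies on, here is a minimal sketch (our rendering of the general technique, not the authors' code) that mean-pools per-frame descriptors over progressively finer segments, so a simple linear-chain model sees both short and long temporal spans:

    import numpy as np

    def temporal_pyramid(features, levels=3):
        # features: (T, D) array of per-frame descriptors.
        # Level l splits the sequence into 2**l segments and mean-pools each,
        # yielding (2**levels - 1) pooled vectors concatenated into one.
        pooled = []
        for level in range(levels):
            for seg in np.array_split(features, 2 ** level, axis=0):
                pooled.append(seg.mean(axis=0))
        return np.concatenate(pooled)  # shape: ((2**levels - 1) * D,)

Each activity stage then contributes features at several temporal resolutions, which is what lets a chain-structured CRF cope with interleaving and variable durations without explicit duration models.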
Lei, Yan-Jing, and 雷晏菁. "Activity Recognition of First-Person Vision and Sleep Posture Analysis." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/ygh973.
Texte intégral國立臺灣大學
資訊工程學研究所
105
First-person vision cameras are becoming widely used in daily life to record our activities, exercise, adventures, and more. We present a succinct and robust 3D convolutional neural network (CNN) architecture for both long-term and short-term activity recognition in first-person-view (FPV) videos. Recognizing activities allows us to categorize amorphous input videos into meaningful chapters, enabling efficient browsing and immediate retrieval of the fragments we need. Previous methods for this task are based on hand-crafted features, such as the subject's hands, visual objects, and optical flow. Our 3D CNN is deeper and uses small kernels with strides. The network is designed for both long-term and short-term activities and is trained on low-resolution sparse optical flow to classify the camera wearer's activity; training on sparse optical flow reduces computational complexity. We then train an ensemble-learning meta-classifier to aggregate the predictions of multiple models, requiring neither extensive training time nor large amounts of data to converge. We achieve a classification accuracy of 90%, outperforming the current state of the art by 15%. We evaluate on an extended FPV video dataset with almost twice as many subjects as the current state of the art and nine classes of daily-life activities. Our method balances long-term activities (e.g., sleeping, watching TV) against short-term activities (e.g., taking medicine, using a phone). No assumptions are made about scene structure, so different backgrounds should in principle work equally well. For sleep posture classification, we propose a three-stream network to recognize ten types of sleep posture. We use the Kinect depth camera to capture the sleep image stream, normalize the depth images, and compute vertical distance maps as network input. The network distinguishes the ten major sleep postures under different covering conditions (no covering, blanket, and quilt), allowing sleep status to be observed throughout the night and improvements to be recommended for better sleep quality. Furthermore, we recorded sleep images of the ten sleep postures from 36 subjects and show that the proposed network completes these tasks well. Elderly home care is a widely discussed topic today, and solutions are actively sought. We therefore propose a daily-diary method to assist people experiencing memory decline, in the hope of delaying the deterioration of the disease: users review their whole day by browsing the daily diary in the application, achieving the goal of recollecting and recording memories.
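Below is a minimal sketch of the kind of 3D CNN this abstract describes (deep, small kernels with strides, operating on stacked low-resolution optical-flow volumes). The layer sizes and the nine-class output are our assumptions based on the abstract's nine activity classes; this is an illustration in PyTorch, not the thesis's actual architecture:

    import torch
    import torch.nn as nn

    class Flow3DCNN(nn.Module):
        # Input: (batch, 2, T, H, W) -- x/y optical-flow channels over T frames.
        def __init__(self, num_classes=9):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(2, 32, kernel_size=3, stride=(1, 2, 2), padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool3d(1),  # global spatio-temporal pooling
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # Example: one 16-frame clip of 32x32 flow fields.
    logits = Flow3DCNN()(torch.randn(1, 2, 16, 32, 32))

An ensemble meta-classifier, as in the thesis, would then aggregate the outputs of several such models trained on different clips or modalities.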
Xia, Lu. "Recognizing human activity using RGBD data." Thesis, 2014. http://hdl.handle.net/2152/24981.
Books on the topic "First-person hand activity recognition"
Norris, Pippa. Political Activism: New Challenges, New Opportunities. Edited by Carles Boix and Susan C. Stokes. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199566020.003.0026.
Book chapters on the topic "First-person hand activity recognition"
Siddiqi, Faisal. "Paradoxes of Strategic Labour Rights Litigation: Insights from the Baldia Factory Fire Litigation." In Interdisciplinary Studies in Human Rights, 59–96. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73835-8_4.
Verma, Kamal Kant, and Brij Mohan Singh. "A Six-Stream CNN Fusion-Based Human Activity Recognition on RGBD Data." In Challenges and Applications for Hand Gesture Recognition, 124–55. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9434-6.ch007.
Oladapo Adenaiye, Oluwasanmi, Kathleen Marie McPhaul, and Donald K. Milton. "Acute Respiratory Infections: Diagnosis, Epidemiology, Management, and Prevention." In Modern Occupational Diseases Diagnosis, Epidemiology, Management and Prevention, 145–63. Bentham Science Publishers, 2022. http://dx.doi.org/10.2174/9789815049138122010012.
Jeon, Moon-Jin, Sang Wan Lee, and Zeungnam Bien. "Hand Gesture Recognition Using Multivariate Fuzzy Decision Tree and User Adaptation." In Contemporary Theory and Pragmatic Approaches in Fuzzy Computing Utilization, 105–19. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-1870-1.ch008.
Anderson, Cindy L., and Kevin M. Anderson. "Practical Examples of Using Switch-Adapted and Battery-Powered Technology to Benefit Persons With Disabilities." In Handmade Teaching Materials for Students With Disabilities, 212–30. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-6240-5.ch009.
Anderson, Cindy L., and Kevin M. Anderson. "Practical Examples of Using Switch-Adapted and Battery-Powered Technology to Benefit Persons With Disabilities." In Research Anthology on Physical and Intellectual Disabilities in an Inclusive Society, 736–53. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3542-7.ch040.
Kumar Sharma, Avinash, Pratiyaksha Mittal, Ritik Ranjan, and Rishabh Chaturvedi. "Bank Robbery Detection System Using Computer Vision." In Advances in Transdisciplinary Engineering. IOS Press, 2023. http://dx.doi.org/10.3233/atde221322.
Tsatsoulis, P. Daphne, Aaron Jaech, Robert Batie, and Marios Savvides. "Multimodal Biometric Hand-Off for Robust Unobtrusive Continuous Biometric Authentication." In IT Policy and Ethics, 389–409. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2919-6.ch018.
Lynch, John Roy. "Democrats in the South: The Race Question." In Reminiscences of an Active Life, edited by John Hope Franklin, 503–12. University Press of Mississippi, 2008. http://dx.doi.org/10.14325/mississippi/9781604731149.003.0050.
Anisimov, Dmytro, Dmytro Petrushin, and Victor Boguslavsky. "Improvement of Physical Training of First-Year Cadets of Dnipropetrovsk State University of Internal Affairs." In Scientific space in the conditions of global transformations of the modern world. Publishing House "Baltija Publishing", 2022. http://dx.doi.org/10.30525/978-9934-26-255-5-1.
Conference papers on the topic "First-person hand activity recognition"
Grewe, Lynne L., Chengzhi Hu, Krishna Tank, Aditya Jaiswal, Thomas Martin, Sahil Sutaria, Tran Huynh, and Francis David Bustos. "First person perspective video activity recognition." In Signal Processing, Sensor/Information Fusion, and Target Recognition XXIX, edited by Lynne L. Grewe, Erik P. Blasch, and Ivan Kadar. SPIE, 2020. http://dx.doi.org/10.1117/12.2557922.
Ma, Minghuang, Haoqi Fan, and Kris M. Kitani. "Going Deeper into First-Person Activity Recognition." In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.209.
Iwashita, Yumi, Asamichi Takamine, Ryo Kurazume, and M. S. Ryoo. "First-Person Animal Activity Recognition from Egocentric Videos." In 2014 22nd International Conference on Pattern Recognition (ICPR). IEEE, 2014. http://dx.doi.org/10.1109/icpr.2014.739.
Zhan, Kai, Vitor Guizilini, and Fabio Ramos. "Dense motion segmentation for first-person activity recognition." In 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV). IEEE, 2014. http://dx.doi.org/10.1109/icarcv.2014.7064291.
Demachi, Kazuyuki, and Shi Chen. "Development of Malicious Hand Behaviors Detection Method by Movie Analysis." In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-81643.
Ozkan, Fatih, Mehmet Ali Arabaci, Elif Surer, and Alptekin Temizel. "Boosted multiple kernel learning for first-person activity recognition." In 2017 25th European Signal Processing Conference (EUSIPCO). IEEE, 2017. http://dx.doi.org/10.23919/eusipco.2017.8081368.
Prabhakar, Manav, and Snehasis Mukherjee. "First-person Activity Recognition by Modelling Subject-Action Relevance." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892547.
Spriggs, Ekaterina H., Fernando De La Torre, and Martial Hebert. "Temporal segmentation and activity classification from first-person sensing." In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2009. http://dx.doi.org/10.1109/cvprw.2009.5204354.
Garcia-Hernando, Guillermo, Shanxin Yuan, Seungryul Baek, and Tae-Kyun Kim. "First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00050.
Baydoun, Mohamad, Alejandro Betancourt, Pietro Morerio, Lucio Marcenaro, Matthias Rauterberg, and Carlo Regazzoni. "Hand pose recognition in First Person Vision through graph spectral analysis." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952481.