Journal articles on the topic 'Microsoft Kinect v2'


Consult the top 50 journal articles for your research on the topic 'Microsoft Kinect v2.'


Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Cai, Laisi, Ye Ma, Shuping Xiong, and Yanxin Zhang. "Validity and Reliability of Upper Limb Functional Assessment Using the Microsoft Kinect V2 Sensor." Applied Bionics and Biomechanics 2019 (February 11, 2019): 1–14. http://dx.doi.org/10.1155/2019/7175240.

Abstract:
Objective. To quantify the concurrent accuracy and the test-retest reliability of a Kinect V2-based upper limb functional assessment system. Approach. Ten healthy males performed a series of upper limb movements, which were measured concurrently with Kinect V2 and the Vicon motion capture system (gold standard). Each participant attended two testing sessions, seven days apart. Four tasks were performed including hand to contralateral shoulder, hand to mouth, combing hair, and hand to back pocket. Upper limb kinematics were calculated using our developed kinematic model and the UWA model for Kinect V2 and Vicon. The interdevice coefficient of multiple correlation (CMC) and the root mean squared error (RMSE) were used to evaluate the validity of the kinematic waveforms. Mean absolute bias and Pearson’s r correlation were used to evaluate the validity of the angles at the points of target achieved (PTA) and the range of motion (ROM). The intersession CMC and RMSE and the intraclass correlation coefficient (ICC) were used to assess the test-retest reliability of Kinect V2. Main Results. Both validity and reliability are found to be task-dependent and plane-dependent. Kinect V2 had good accuracy in measuring shoulder and elbow flexion/extension angular waveforms (CMC>0.87), moderate accuracy of measuring shoulder adduction/abduction angular waveforms (CMC=0.69-0.82), and poor accuracy of measuring shoulder internal/external angles (CMC<0.6). We also found high test-retest reliability of Kinect V2 in most of the upper limb angular waveforms (CMC=0.75-0.99), angles at the PTA (ICC=0.65-0.91), and the ROM (ICC=0.68-0.96). Significance. Kinect V2 has great potential as a low-cost, easy implemented device for assessing upper limb angular waveforms when performing functional tasks. The system is suitable for assessing relative within-person change in upper limb motions over time, such as disease progression or improvement due to intervention.
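The agreement metrics this study relies on (RMSE and Pearson's r between Kinect and reference angular waveforms) are straightforward to reproduce. A minimal sketch follows; the two waveforms are made-up toy data, not the study's measurements:

```python
import math

def rmse(a, b):
    """Root mean squared error between two equal-length waveforms."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def pearson_r(a, b):
    """Pearson correlation coefficient between two waveforms."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Toy elbow-angle waveforms (degrees): a noisy "Kinect" copy of a "Vicon" trace.
vicon = [10.0, 20.0, 35.0, 50.0, 40.0, 25.0, 12.0]
kinect = [11.0, 19.0, 37.0, 48.0, 41.0, 23.0, 14.0]
print(round(rmse(vicon, kinect), 2))       # small error, in degrees
print(round(pearson_r(vicon, kinect), 3))  # close to 1 for similar shapes
```

The CMC used in the study is a related but more involved waveform-similarity statistic; r and RMSE capture the same intuition of shape agreement and offset.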
2

Gray, Aaron D., Brad W. Willis, Marjorie Skubic, Zhiyu Huo, Swithin Razu, Seth L. Sherman, Trent M. Guess, et al. "Development and Validation of a Portable and Inexpensive Tool to Measure the Drop Vertical Jump Using the Microsoft Kinect V2." Sports Health: A Multidisciplinary Approach 9, no. 6 (August 28, 2017): 537–44. http://dx.doi.org/10.1177/1941738117726323.

Abstract:
Background: Noncontact anterior cruciate ligament (ACL) injury in adolescent female athletes is an increasing problem. The knee-ankle separation ratio (KASR), calculated at initial contact (IC) and peak flexion (PF) during the drop vertical jump (DVJ), is a measure of dynamic knee valgus. The Microsoft Kinect V2 has shown promise as a reliable and valid marker-less motion capture device. Hypothesis: The Kinect V2 will demonstrate good to excellent correlation between KASR results at IC and PF during the DVJ, as compared with a “gold standard” Vicon motion analysis system. Study Design: Descriptive laboratory study. Level of Evidence: Level 2. Methods: Thirty-eight healthy volunteer subjects (20 male, 18 female) performed 5 DVJ trials, simultaneously measured by a Vicon MX-T40S system, 2 AMTI force platforms, and a Kinect V2 with customized software. A total of 190 jumps were completed. The KASR was calculated at IC and PF during the DVJ. The intraclass correlation coefficient (ICC) assessed the degree of KASR agreement between the Kinect and Vicon systems. Results: The ICCs of the Kinect V2 and Vicon KASR at IC and PF were 0.84 and 0.95, respectively, showing excellent agreement between the 2 measures. The Kinect V2 successfully identified the KASR at PF and IC frames in 182 of 190 trials, demonstrating 95.8% reliability. Conclusion: The Kinect V2 demonstrated excellent ICC of the KASR at IC and PF during the DVJ when compared with the Vicon system. A customized Kinect V2 software program demonstrated good reliability in identifying the KASR at IC and PF during the DVJ. Clinical Relevance: Reliable, valid, inexpensive, and efficient screening tools may improve the accessibility of motion analysis assessment of adolescent female athletes.
3

Kurillo, Gregorij, Evan Hemingway, Mu-Lin Cheng, and Louis Cheng. "Evaluating the Accuracy of the Azure Kinect and Kinect v2." Sensors 22, no. 7 (March 23, 2022): 2469. http://dx.doi.org/10.3390/s22072469.

Abstract:
The Azure Kinect represents the latest generation of Microsoft Kinect depth cameras. Of interest in this article is the depth and spatial accuracy of the Azure Kinect and how it compares to its predecessor, the Kinect v2. In one experiment, the two sensors are used to capture a planar whiteboard at 15 locations in a grid pattern with laser scanner data serving as ground truth. A set of histograms reveals the temporal-based random depth error inherent in each Kinect. Additionally, a two-dimensional cone of accuracy illustrates the systematic spatial error. At distances greater than 2.5 m, we find the Azure Kinect to have improved accuracy in both spatial and temporal domains as compared to the Kinect v2, while for distances less than 2.5 m, the spatial and temporal accuracies were found to be comparable. In another experiment, we compare the distribution of random depth error between each Kinect sensor by capturing a flat wall across the field of view in horizontal and vertical directions. We find the Azure Kinect to have improved temporal accuracy over the Kinect v2 in the range of 2.5 to 3.5 m for measurements close to the optical axis. The results indicate that the Azure Kinect is a suitable substitute for Kinect v2 in 3D scanning applications.
4

Caruso, L., R. Russo, and S. Savino. "Microsoft Kinect V2 vision system in a manufacturing application." Robotics and Computer-Integrated Manufacturing 48 (December 2017): 174–81. http://dx.doi.org/10.1016/j.rcim.2017.04.001.

5

Guffanti, Diego, Alberto Brunete, Miguel Hernando, Javier Rueda, and Enrique Navarro Cabello. "The Accuracy of the Microsoft Kinect V2 Sensor for Human Gait Analysis. A Different Approach for Comparison with the Ground Truth." Sensors 20, no. 16 (August 7, 2020): 4405. http://dx.doi.org/10.3390/s20164405.

Abstract:
Several studies have examined the accuracy of the Kinect V2 sensor during gait analysis. Usually the data retrieved by the Kinect V2 sensor are compared with the ground truth of certified systems using a Euclidean comparison. Due to the Kinect V2 sensor latency, applying a uniform temporal alignment is not adequate for comparing the signals. On that basis, the purpose of this study was to explore the ability of the dynamic time warping (DTW) algorithm to compensate for sensor latency (3 samples, or 90 ms) and develop a proper accuracy estimation. During the experimental stage, six iterations were performed using a dual Kinect V2 system. The walking tests were performed at a self-selected speed. The sensor accuracy under Euclidean matching was consistent with that reported in previous studies. After latency compensation, the sensor accuracy showed considerably lower error rates for all joints. This demonstrates that the accuracy had been underestimated due to the use of inappropriate comparison techniques. DTW, by contrast, is a promising method that compensates for sensor latency and performs adequately in comparisons against certified systems.
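The core idea above is that DTW lets a lagged signal line up with its reference without penalising the constant shift. A minimal, self-contained sketch of classic DTW on toy signals (not the study's gait data) illustrates how a 3-sample lag, the value reported for the Kinect V2, is absorbed by the warping path:

```python
def dtw_path(a, b):
    """Classic dynamic-time-warping alignment between two 1-D signals.

    Returns the matched index pairs (i, j) on the optimal warping path, so a
    constant lag of a few samples is absorbed by the warping instead of
    inflating the per-sample error.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    # Backtrack the optimal warping path (ties prefer the diagonal step).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return path[::-1]

# A reference trace and the same trace delayed by 3 samples.
ref = [0, 0, 1, 4, 9, 16, 9, 4, 1, 0, 0]
lagged = [0, 0, 0] + ref  # 3-sample latency
pairs = dtw_path(ref, lagged)
err = sum(abs(ref[i] - lagged[j]) for i, j in pairs) / len(pairs)
print(err)  # → 0.0 after warping
```

A sample-by-sample Euclidean comparison of the same pair would report a large error at every slope of the signal, which is exactly the underestimation of accuracy the study describes.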
6

Ayed, Ines, Antoni Jaume-i-Capó, Pau Martínez-Bueso, Arnau Mir, and Gabriel Moyà-Alcover. "Balance Measurement Using Microsoft Kinect v2: Towards Remote Evaluation of Patient with the Functional Reach Test." Applied Sciences 11, no. 13 (June 30, 2021): 6073. http://dx.doi.org/10.3390/app11136073.

Abstract:
To prevent falls, it is important to measure periodically the balance ability of an individual using reliable clinical tests. As Red Green Blue Depth (RGBD) devices have been increasingly used for balance rehabilitation at home, they may also be used to assess objectively the balance ability and determine the effectiveness of a therapy. For this, we developed a system based on the Microsoft Kinect v2 for measuring the Functional Reach Test (FRT); one of the most used balance clinical tools to predict falls. Two experiments were conducted to compare the FRT measures computed by our system using the Microsoft Kinect v2 with those obtained by the standard method, i.e., manually. In terms of validity, we found a very strong correlation between the two methods (r = 0.97 and r = 0.99 (p < 0.05), for experiments 1 and 2, respectively). However, we needed to correct the measurements using a linear model to fit the data obtained by the Kinect system. Consequently, a linear regression model has been applied and examining the regression assumptions showed that the model works well for the data. Applying the paired t-test to the data after correction indicated that there is no statistically significant difference between the measurements obtained by both methods. As for the reliability of the test, we obtained good to excellent within repeatability of the FRT measurements tracked by Kinect (ICC = 0.86 and ICC = 0.99, for experiments 1 and 2, respectively). These results suggested that the Microsoft Kinect v2 device is reliable and adequate to calculate the standard FRT.
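The linear correction described above amounts to an ordinary least-squares fit of the manual FRT measurements against the raw Kinect readings, then applying the fitted line to new Kinect values. A sketch with hypothetical FRT distances (illustrative numbers, not the study's data):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ≈ a*x + b (closed form, one predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Hypothetical FRT distances (cm): manual measurements vs. raw Kinect values
# with a systematic scale and offset (perfectly linear here for clarity).
manual = [20.0, 25.0, 30.0, 35.0, 40.0]
kinect = [23.5, 28.0, 32.5, 37.0, 41.5]
a, b = linear_fit(kinect, manual)
corrected = [a * k + b for k in kinect]
print([round(c, 2) for c in corrected])  # recovers the manual values
```

In practice the fit would be estimated on one set of paired trials and then used to correct subsequent Kinect-only measurements; the paired t-test mentioned in the abstract then checks that corrected and manual values no longer differ systematically.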
7

Benedetti, Elisa, Roberta Ravanelli, Monica Moroni, Andrea Nascetti, and Mattia Crespi. "Exploiting Performance of Different Low-Cost Sensors for Small Amplitude Oscillatory Motion Monitoring: Preliminary Comparisons in View of Possible Integration." Journal of Sensors 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/7490870.

Abstract:
We address the problem of low amplitude oscillatory motion detection through different low-cost sensors: a LIS3LV02DQ MEMS accelerometer, a Microsoft Kinect v2 range camera, and a uBlox 6 GPS receiver. Several tests were performed using a one-direction vibrating table with different oscillation frequencies (in the range 1.5–3 Hz) and small challenging amplitudes (0.02 m and 0.03 m). A Mikrotron EoSens high-resolution camera was used to give reference data. A dedicated software tool was developed to retrieve Kinect v2 results. The capabilities of the VADASE algorithm were employed to process uBlox 6 GPS receiver observations. In the investigated time interval (in the order of tens of seconds) the results obtained indicate that displacements were detected with the resolution of fractions of millimeters with MEMS accelerometer and Kinect v2 and few millimeters with uBlox 6. MEMS accelerometer displays the lowest noise but a significant bias, whereas Kinect v2 and uBlox 6 appear more stable. The results suggest the possibility of sensor integration both for indoor (MEMS accelerometer + Kinect v2) and for outdoor (MEMS accelerometer + uBlox 6) applications and seem promising for structural monitoring applications.
8

Banh, Tien Long, and Van Bien Bui. "First Experiences with Microsoft Kinect V2 for 3D Modelling of Mechanical Parts." Applied Mechanics and Materials 889 (March 2019): 329–36. http://dx.doi.org/10.4028/www.scientific.net/amm.889.329.

Abstract:
Most types of laser distance-measuring instruments, such as the Atos scanner, cost hundreds of thousands of dollars, even though depth cameras can produce depth maps of a space very quickly. Moreover, their handling is too complicated for non-professional users, so the use of 3D reconstruction remains limited. This paper introduces a 3D reconstruction workflow using a much cheaper instrument, the Microsoft Kinect. First experiences with the Microsoft Kinect v2 sensor are presented, and its suitability for 3D modelling of mechanical parts is investigated. For this purpose, the output point cloud data as well as a calibration approach are demonstrated.
9

Koontz, Alicia Marie, Ahlad Neti, Cheng-Shiu Chung, Nithin Ayiluri, Brooke A. Slavens, Celia Genevieve Davis, and Lin Wei. "Reliability of 3D Depth Motion Sensors for Capturing Upper Body Motions and Assessing the Quality of Wheelchair Transfers." Sensors 22, no. 13 (June 30, 2022): 4977. http://dx.doi.org/10.3390/s22134977.

Abstract:
Wheelchair users must use proper technique when performing sitting-pivot-transfers (SPTs) to prevent upper extremity pain and discomfort. Current methods to analyze the quality of SPTs include the TransKinect, a combination of machine learning (ML) models, and the Transfer Assessment Instrument (TAI), to automatically score the quality of a transfer using Microsoft Kinect V2. With the discontinuation of the V2, there is a necessity to determine the compatibility of other commercial sensors. The Intel RealSense D435 and the Microsoft Kinect Azure were compared against the V2 for inter- and intra-sensor reliability. A secondary analysis with the Azure was also performed to analyze its performance with the existing ML models used to predict transfer quality. The intra- and inter-sensor reliability was higher for the Azure and V2 (n = 7; ICC = 0.63 to 0.92) than the RealSense and V2 (n = 30; ICC = 0.13 to 0.7) for four key features. Additionally, the V2 and the Azure both showed high agreement with each other on the ML outcomes but not against a ground truth. Therefore, the ML models may need to be retrained ideally with the Azure, as it was found to be a more reliable and robust sensor for tracking wheelchair transfers in comparison to the V2.
10

Gau, Michael-Lian, Huong Yong Ting, Jackie Tiew-Wei Ting, Marcella Peter, and Khairunnisa Ibrahim. "Sarawak Traditional Dance Motion Analysis and Comparison using Microsoft Kinect V2." Green Intelligent Systems and Applications 2, no. 1 (April 17, 2022): 42–52. http://dx.doi.org/10.53623/gisa.v2i1.78.

Abstract:
This research project aimed to develop a software program or an interactive dance motion analysis application that utilizes modern technology to preserve and maintain the Sarawak traditional dance culture. The software program employs the Microsoft Kinect V2 to collect the digital dance data. The proposed method analyses the collected dance data for comparison purposes only. The comparison process was executed by displaying a traditional dance on the screen where the user who wants to learn the traditional dance can follow it and obtain results on how similar the dance is compared to the recorded dance data. The comparison of the performed and recorded dance data was visualized in graph form. The comparison graph showed that the Microsoft Kinect V2 sensors were capable of comparing the dance motion but with minor glitches in detecting the joint orientation. Using better depth sensors would make the comparison more accurate and less likely to have problems with figuring out how the joints move.
11

Venek, Verena, Wolfgang Kremser, and Thomas Stöggl. "Towards a Live Feedback Training System: Interchangeability of Orbbec Persee and Microsoft Kinect for Exercise Monitoring." Designs 5, no. 2 (April 15, 2021): 30. http://dx.doi.org/10.3390/designs5020030.

Abstract:
Many existing motion sensing applications in research, entertainment and exercise monitoring are based on the Microsoft Kinect and its skeleton tracking functionality. With the Kinect’s development and production halted, researchers and system designers are in need of a suitable replacement. We investigated the interchangeability of the discontinued Kinect v2 and the all-in-one, image-based motion tracking system Orbbec Persee for the use in an exercise monitoring system prototype called ILSE. Nine functional training exercises were performed by six healthy subjects in front of both systems simultaneously. Comparing the systems’ internal tracking states from ’not tracked’ to ‘tracked’ showed that the Persee system is more confident during motion sequences, while the Kinect is more confident for hip and trunk joint positions. Assessing the skeleton tracking robustness, the Persee’s tracking of body segment lengths was more consistent. Furthermore, we used both skeleton datasets as input for the ILSE exercise monitoring including posture recognition and repetition-counting. Persee data from exercises with lateral movement and in uncovered full-body frontal view provided the same results as Kinect data. The Persee further preferred tracking of quasi-static lower limb motions and tight-fitting clothes. With these limitations in mind, we find that the Orbbec Persee is a suitable replacement for the Microsoft Kinect for motion sensing within the ILSE exercise monitoring system.
12

Edmunds, David M., Sophie E. Bashforth, Fatemeh Tahavori, Kevin Wells, and Ellen M. Donovan. "The feasibility of using Microsoft Kinect v2 sensors during radiotherapy delivery." Journal of Applied Clinical Medical Physics 17, no. 6 (November 2016): 446–53. http://dx.doi.org/10.1120/jacmp.v17i6.6377.

13

Dajime, Peter Fermin, Heather Smith, and Yanxin Zhang. "Automated classification of movement quality using the Microsoft Kinect V2 sensor." Computers in Biology and Medicine 125 (October 2020): 104021. http://dx.doi.org/10.1016/j.compbiomed.2020.104021.

14

Silverstein, Evan, and Michael Snyder. "Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor." Journal of Applied Clinical Medical Physics 19, no. 3 (March 25, 2018): 193–204. http://dx.doi.org/10.1002/acm2.12318.

15

Darby, John, María B. Sánchez, Penelope B. Butler, and Ian D. Loram. "An evaluation of 3D head pose estimation using the Microsoft Kinect v2." Gait & Posture 48 (July 2016): 83–88. http://dx.doi.org/10.1016/j.gaitpost.2016.04.030.

16

Alzahrani, Mona Saleh, Salma Kammoun Jarraya, Hanêne Ben-Abdallah, and Manar Salamah Ali. "Comprehensive evaluation of skeleton features-based fall detection from Microsoft Kinect v2." Signal, Image and Video Processing 13, no. 7 (May 15, 2019): 1431–39. http://dx.doi.org/10.1007/s11760-019-01490-9.

17

Silverstein, Evan, and Michael Snyder. "Implementation of facial recognition with Microsoft Kinect v2 sensor for patient verification." Medical Physics 44, no. 6 (May 12, 2017): 2391–99. http://dx.doi.org/10.1002/mp.12241.

18

Liu, Jiantao, and Xiaoxiang Yang. "Artificial Neural Network for Vibration Frequency Measurement Using Kinect V2." Shock and Vibration 2019 (March 12, 2019): 1–16. http://dx.doi.org/10.1155/2019/9064830.

Abstract:
Optical measurement can substantially reduce the required amount of labor and simplify the measurement process. Furthermore, the optical measurement method can provide full-field measurement results of the target object without affecting the physical properties of the measurement target, such as stiffness, mass, or damping. The advent of consumer grade depth cameras, such as the Microsoft Kinect, Intel RealSence, and ASUS Xtion, has attracted significant research attention owing to their availability and robustness in sampling depth information. This paper presents an effective method employing the Kinect sensor V2 and an artificial neural network for vibration frequency measurement. Experiments were conducted to verify the performance of the proposed method. The proposed method can provide good frequency prediction within acceptable accuracy compared to an industrial vibrometer, with the advantages of contactless process and easy pipeline implementation.
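The paper itself trains an artificial neural network on Kinect data; as a much simpler baseline (not the authors' method), the dominant vibration frequency of a displacement trace sampled at the Kinect V2's roughly 30 fps depth rate can be read off a discrete Fourier transform. The signal below is synthetic:

```python
import cmath
import math

def dominant_frequency(samples, fs):
    """Peak of a naive DFT magnitude spectrum (O(n^2), fine for a sketch)."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]  # remove the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(centered[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * fs / n  # bin index → frequency in Hz

fs = 30.0  # Kinect V2 depth stream runs at ~30 fps
t = [i / fs for i in range(90)]  # 3 s of displacement samples
signal = [math.sin(2 * math.pi * 2.0 * ti) for ti in t]  # 2 Hz vibration
print(dominant_frequency(signal, fs))  # → 2.0
```

Frequency resolution here is fs/n (0.33 Hz for 3 s of data); the 30 fps sampling rate caps measurable frequencies at 15 Hz, which is why depth cameras suit only low-frequency structural vibration.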
19

Lee, Jaehoon, Min Hong, and Sungyong Ryu. "Sleep Monitoring System Using Kinect Sensor." International Journal of Distributed Sensor Networks 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/875371.

Abstract:
Sleep activity is one of the crucial factors determining the quality of human life. However, traditional sleep monitoring systems onerously require many devices to be attached to the human body to obtain sleep-related information. In this paper, we propose and implement a sleep monitoring system which can detect movement and posture during sleep using a Microsoft Kinect v2 sensor, without any devices attached to the body. The proposed sleep monitoring system can readily gather sleep-related information that reveals the sleep patterns of individuals. We expect that the analyzed sleep-related data can significantly improve sleep quality.
20

Mateo, Fernando, Emilio Soria-Olivas, Juan Carrasco, Santiago Bonanad, Felipe Querol, and Sofía Pérez-Alenda. "HemoKinect: A Microsoft Kinect V2 Based Exergaming Software to Supervise Physical Exercise of Patients with Hemophilia." Sensors 18, no. 8 (July 26, 2018): 2439. http://dx.doi.org/10.3390/s18082439.

Abstract:
Patients with hemophilia need to strictly follow exercise routines to minimize their risk of suffering bleeding in joints, known as hemarthrosis. This paper introduces and validates a new exergaming software tool called HemoKinect that intends to keep track of exercises using Microsoft Kinect V2’s body tracking capabilities. The software has been developed in C++ and MATLAB. The Kinect SDK V2.0 libraries have been used to obtain 3D joint positions from the Kinect color and depth sensors. Performing angle calculations and center-of-mass (COM) estimations using these joint positions, HemoKinect can evaluate the following exercises: elbow flexion/extension, knee flexion/extension (squat), step climb (ankle exercise) and multi-directional balance based on COM. The software generates reports and progress graphs and is able to directly send the results to the physician via email. Exercises have been validated with 10 controls and eight patients. HemoKinect successfully registered elbow and knee exercises, while displaying real-time joint angle measurements. Additionally, steps were successfully counted in up to 78% of the cases. Regarding balance, differences were found in the scores according to the difficulty level and direction. HemoKinect supposes a significant leap forward in terms of exergaming applicability to rehabilitation of patients with hemophilia, allowing remote supervision.
21

Pankov, B., and M. Makolkina. "OVERVIEW OF EQUIPMENT FOR CAPTURING 3D IMAGES WITH SUBSEQUENT TRANSMISSION THROUGH A COMMUNICATION NETWORK." Telecom IT 9, no. 3 (December 15, 2021): 56–71. http://dx.doi.org/10.31854/2307-1303-2021-9-3-56-71.

Abstract:
The relatively recent advent of the publicly available consumer RGB-D sensors has enabled a wide range of functionality to interact with 3D technology. For example, devices such as Microsoft Kinect v1, Microsoft Kinect v2, and Intel Realsense F200 are currently available RGB-D sensors using Structured light and Time of Flight technologies for getting information about pixel depth. The article contains an extensive comparison of RGB-D devices in terms of the resolution of RGB and Depth cameras, latency (the time required to build a depth map and its processing), and also compares viewing angles, required USB interfaces, sensor sizes, etc. The main goal of the article is to provide a complete and visual comparison of devices that can potentially be used to capture three-dimensional images with their subsequent transmission by communication networks.
22

Usami, Takuya, Kazuki Nishida, Hirotaka Iguchi, Taro Okumura, Hiroaki Sakai, Ruido Ida, Mitsuya Horiba, et al. "Evaluation of lower extremity gait analysis using Kinect V2® tracking system." SICOT-J 8 (2022): 27. http://dx.doi.org/10.1051/sicotj/2022027.

Abstract:
Introduction: Microsoft Kinect V2® (Kinect) is a peripheral device of Xbox® and acquires information such as depth, posture, and skeleton definition. In this study, we investigated whether Kinect can be used for human gait analysis. Methods: Ten healthy volunteers walked 20 trials, and each walk was recorded by a Kinect and infrared- and marker-based-motion capture system. Pearson’s correlation and overall agreement with a method of meta-analysis of Pearson’s correlation coefficient were used to assess the reliability of each parameter, including gait velocity, gait cycle time, step length, hip and knee joint angle, ground contact time of foot, and max ankle velocity. Hip and knee angles in one gait cycle were calculated in Kinect and motion capture groups. Results: The coefficients of correlation for gait velocity (r = 0.92), step length (r = 0.81) were regarded as strong reliability. Gait cycle time (r = 0.65), minimum flexion angle of hip joint (r = 0.68) were regarded as moderate reliability. The maximum flexion angle of the hip joint (r = 0.43) and maximum flexion angle of the knee joint (r = 0.54) were regarded as fair reliability. Minimum flexion angle of knee joint (r = 0.23), ground contact time of foot (r = 0.23), and maximum ankle velocity (r = 0.22) were regarded as poor reliability. The method of meta-analysis revealed that participants with small hip and knee flexion angles tended to have poor correlations in maximum flexion angle of hip and knee joints. Similar trajectories of hip and knee angles were observed in Kinect and motion capture groups. Conclusions: Our results strongly suggest that Kinect could be a reliable device for evaluating gait parameters, including gait velocity, gait cycle time, step length, minimum flexion angle of the hip joint, and maximum flexion angle of the knee joint.
23

Guidi, G., S. Gonizzi, and L. Micoli. "3D CAPTURING PERFORMANCES OF LOW-COST RANGE SENSORS FOR MASS-MARKET APPLICATIONS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B5 (June 15, 2016): 33–40. http://dx.doi.org/10.5194/isprs-archives-xli-b5-33-2016.

Abstract:
Since the advent of the first Kinect as a motion controller for the Microsoft XBOX platform (November 2010), several similar active, low-cost range sensing devices have been introduced on the mass market for purposes including gesture-based interfaces, 3D multimedia interaction, robot navigation, finger tracking, 3D body scanning for garment design, and proximity sensing for automotive applications. However, given their capability to generate a real-time stream of range images, these have also been used in some projects as general-purpose range devices, with performance that may be satisfactory for some applications. This paper presents the working principles of the various devices, analyzing them in terms of systematic and random errors to explore their applicability to standard 3D capturing problems. Five devices featuring three different technologies have been tested: i) the Kinect V1 by Microsoft, the Structure Sensor by Occipital, and the Xtion PRO by ASUS, all based on different implementations of the Primesense sensor; ii) the F200 by Intel/Creative, implementing the Realsense pattern-projection technology; and iii) the Kinect V2 by Microsoft, equipped with the Canesta TOF camera. A critical analysis of the results first compares the devices and then identifies the range of applications for which they could work as a viable solution.
25

Albert, Justin Amadeus, Victor Owolabi, Arnd Gebel, Clemens Markus Brahms, Urs Granacher, and Bert Arnrich. "Evaluation of the Pose Tracking Performance of the Azure Kinect and Kinect v2 for Gait Analysis in Comparison with a Gold Standard: A Pilot Study." Sensors 20, no. 18 (September 8, 2020): 5104. http://dx.doi.org/10.3390/s20185104.

Abstract:
Gait analysis is an important tool for the early detection of neurological diseases and for the assessment of risk of falling in elderly people. The availability of low-cost camera hardware on the market today and recent advances in Machine Learning enable a wide range of clinical and health-related applications, such as patient monitoring or exercise recognition at home. In this study, we evaluated the motion tracking performance of the latest generation of the Microsoft Kinect camera, Azure Kinect, compared to its predecessor Kinect v2 in terms of treadmill walking using a gold standard Vicon multi-camera motion capturing system and the 39 marker Plug-in Gait model. Five young and healthy subjects walked on a treadmill at three different velocities while data were recorded simultaneously with all three camera systems. An easy-to-administer camera calibration method developed here was used to spatially align the 3D skeleton data from both Kinect cameras and the Vicon system. With this calibration, the spatial agreement of joint positions between the two Kinect cameras and the reference system was evaluated. In addition, we compared the accuracy of certain spatio-temporal gait parameters, i.e., step length, step time, step width, and stride time calculated from the Kinect data, with the gold standard system. Our results showed that the improved hardware and the motion tracking algorithm of the Azure Kinect camera led to a significantly higher accuracy of the spatial gait parameters than the predecessor Kinect v2, while no significant differences were found between the temporal parameters. Furthermore, we explain in detail how this experimental setup could be used to continuously monitor the progress during gait rehabilitation in older people.
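The calibration step above spatially aligns skeleton data from different cameras, i.e., it estimates a rigid transform between coordinate frames. The study's alignment is three-dimensional; as a hedged illustration, the planar (2-D) case has a simple closed form, shown here on a toy point set rather than real skeleton data:

```python
import math

def align_2d(src, dst):
    """Closed-form 2-D rigid alignment (rotation + translation) of paired points.

    A simplified planar stand-in for 3-D skeleton-frame calibration: center
    both point sets, recover the rotation from cross/dot sums, then solve for
    the translation.
    """
    n = len(src)
    csx, csy = sum(p[0] for p in src) / n, sum(p[1] for p in src) / n
    cdx, cdy = sum(p[0] for p in dst) / n, sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        num += ax * by - ay * bx  # cross terms
        den += ax * bx + ay * by  # dot terms
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# Toy check: rotate a triangle by 30° and shift it, then recover the motion.
theta0 = math.radians(30)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
c0, s0 = math.cos(theta0), math.sin(theta0)
dst = [(c0 * x - s0 * y + 0.5, s0 * x + c0 * y - 1.0) for x, y in src]
theta, (tx, ty) = align_2d(src, dst)
print(round(math.degrees(theta), 6))  # → 30.0
```

The full 3-D version (the Kabsch/Procrustes algorithm) replaces the closed-form angle with an SVD of the cross-covariance matrix, but the structure, centering, rotation, then translation, is the same.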
APA, Harvard, Vancouver, ISO, and other styles
26

Chiu, Chuang-Yuan, Michael Thelwell, Terry Senior, Simon Choppin, John Hart, and Jon Wheat. "Comparison of depth cameras for three-dimensional reconstruction in medicine." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 233, no. 9 (June 28, 2019): 938–47. http://dx.doi.org/10.1177/0954411919859922.

Full text
Abstract:
KinectFusion is a typical three-dimensional reconstruction technique which enables generation of individual three-dimensional human models from consumer depth cameras for understanding body shapes. The aim of this study was to compare three-dimensional reconstruction results obtained using KinectFusion from data collected with two different types of depth camera (time-of-flight and stereoscopic cameras) and compare these results with those of a commercial three-dimensional scanning system to determine which type of depth camera gives improved reconstruction. Torso mannequins and machined aluminium cylinders were used as the test objects for this study. Two depth cameras, Microsoft Kinect V2 and Intel Realsense D435, were selected as the representatives of time-of-flight and stereoscopic cameras, respectively, to capture scan data for the reconstruction of three-dimensional point clouds by KinectFusion techniques. The results showed that both time-of-flight and stereoscopic cameras, using the developed rotating camera rig, provided repeatable body scanning data with minimal operator-induced error. However, the time-of-flight camera generated more accurate three-dimensional point clouds than the stereoscopic sensor. Thus, this suggests that applications requiring the generation of accurate three-dimensional human models by KinectFusion techniques should consider using a time-of-flight camera, such as the Microsoft Kinect V2, as the image capturing sensor.
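A common way to quantify how closely a KinectFusion reconstruction matches a reference scan is a nearest-neighbour cloud-to-cloud RMSE. A sketch under assumed, simulated data, with a synthetic cylinder standing in for the machined test objects:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reference surface: points on a cylinder (r = 5 cm, h = 30 cm),
# a stand-in for the machined aluminium cylinders used in the study.
theta = rng.uniform(0, 2 * np.pi, 500)
z = rng.uniform(0, 0.3, 500)
reference = np.column_stack([0.05 * np.cos(theta), 0.05 * np.sin(theta), z])

# Simulated depth-camera reconstruction: reference plus measurement noise.
scan = reference + rng.normal(0, 0.002, reference.shape)

# Cloud-to-cloud error: for each scan point, distance to its nearest reference point.
d2 = ((scan[:, None, :] - reference[None, :, :]) ** 2).sum(-1)  # (N, M) squared distances
nn_dist = np.sqrt(d2.min(axis=1))
rmse = np.sqrt((nn_dist ** 2).mean())
print(f"cloud-to-cloud RMSE: {rmse * 1000:.2f} mm")
```

The brute-force distance matrix is fine for small clouds; real scans would use a k-d tree and an ICP alignment step first.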
APA, Harvard, Vancouver, ISO, and other styles
27

Nichols, Andrew, Matteo Rubinato, Yun-Hang Cho, and Jiayi Wu. "Optimal Use of Titanium Dioxide Colourant to Enable Water Surfaces to Be Measured by Kinect Sensors." Sensors 20, no. 12 (June 21, 2020): 3507. http://dx.doi.org/10.3390/s20123507.

Full text
Abstract:
Recent studies have sought to use Microsoft Kinect sensors to measure water surface shape in steady flows or transient flow processes. They have typically employed a white colourant, usually titanium dioxide (TiO2), in order to make the surface opaque and visible to the infrared-based sensors. However, the ability of Kinect Version 1 (KV1) and Kinect Version 2 (KV2) sensors to measure the deformation of ostensibly smooth reflective surfaces has never been compared, with most previous studies using a V1 sensor with no justification. Furthermore, the TiO2 has so far been used liberally and indeterminately, with no consideration as to the type of TiO2 to use, the optimal proportion to use or the effect it may have on the very fluid properties being measured. This paper examines the use of anatase TiO2 with two generations of the Microsoft Kinect sensor. Assessing their performance for an ideal flat surface, it is shown that surface data obtained using the V2 sensor is substantially more reliable. Further, the minimum quantity of colourant to enable reliable surface recognition is discovered (0.01% by mass). A stability test shows that the colourant has a strong tendency to settle over time, meaning the fluid must remain well mixed, having serious implications for studies with low Reynolds number or transient processes such as dam breaks. Furthermore, the effect of TiO2 concentration on fluid properties is examined. It is shown that previous studies using concentrations in excess of 1% may have significantly affected the viscosity and surface tension, and thus the surface behaviour being measured. It is therefore recommended that future studies employ the V2 sensor with an anatase TiO2 concentration of 0.01%, and that the effects of TiO2 on the fluid properties are properly quantified before any TiO2-Kinect-derived dataset can be of practical use, for example, in validation of numerical models or in physical models of hydrodynamic processes.
APA, Harvard, Vancouver, ISO, and other styles
28

Kösesoy, İrfan, Cemil Öz, Fatih Aslan, Fahri Köroğlu, and Mustafa Yığılıtaş. "Reliability and Validity of an Innovative Method of ROM Measurement Using Microsoft Kinect V2." Pamukkale University Journal of Engineering Sciences 24, no. 5 (2018): 915–20. http://dx.doi.org/10.5505/pajes.2017.65707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Hannink, Erin, Thomas Shannon, Karen L. Barker, and Helen Dawes. "The reliability and reproducibility of sagittal spinal curvature measurement using the Microsoft Kinect V2." Journal of Back and Musculoskeletal Rehabilitation 33, no. 2 (March 19, 2020): 295–301. http://dx.doi.org/10.3233/bmr-191554.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Edmunds, D., K. Tang, R. Symonds-Tayler, and E. Donovan. "EP-1624: Respiratory gating of an Elekta linac using a Microsoft Kinect v2 system." Radiotherapy and Oncology 123 (May 2017): S879—S880. http://dx.doi.org/10.1016/s0167-8140(17)32059-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Dolatabadi, Elham, Babak Taati, and Alex Mihailidis. "Concurrent validity of the Microsoft Kinect for Windows v2 for measuring spatiotemporal gait parameters." Medical Engineering & Physics 38, no. 9 (September 2016): 952–58. http://dx.doi.org/10.1016/j.medengphy.2016.06.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Oudah, Munir, Ali Al-Naji, and Javaan Chahl. "Elderly Care Based on Hand Gestures Using Kinect Sensor." Computers 10, no. 1 (December 26, 2020): 5. http://dx.doi.org/10.3390/computers10010005.

Full text
Abstract:
Technological advances have allowed hand gestures to become an important research field, especially in applications such as health care and assisting applications for elderly people, providing natural interaction with the assisting system through a camera by making specific gestures. In this study, we proposed three different scenarios using a Microsoft Kinect V2 depth sensor and then evaluated the effectiveness of the outcomes. The first scenario used joint tracking combined with a depth threshold to enhance hand segmentation and efficiently recognise the number of fingers extended. The second scenario utilised the metadata parameters provided by the Kinect V2 depth sensor, which provided 11 parameters related to the tracked body and gave information about three gestures for each hand. The third scenario used a simple convolutional neural network with joint tracking by depth metadata to recognise and classify five hand gesture categories. In this study, deaf-mute elderly people performed five different hand gestures, each related to a specific request, such as needing water, a meal, the toilet, help or medicine. The request was then sent via the global system for mobile communication (GSM) as a text message to the care provider's smartphone, because the elderly subjects could not execute any activity independently.
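The depth-threshold segmentation of the first scenario can be sketched in a few lines: keep only the pixels whose depth lies within a band around the tracked hand joint. The frame and joint depth below are fabricated for illustration:

```python
import numpy as np

# Hypothetical 424x512 depth frame (the Kinect v2 depth resolution), values in mm.
depth = np.full((424, 512), 2000, dtype=np.uint16)  # background at 2 m
depth[180:260, 230:300] = 750                       # hand region at 0.75 m

hand_joint_depth = 760  # depth of the tracked hand joint, in mm
band = 100              # keep pixels within +/-100 mm of the joint

# Depth-threshold segmentation around the tracked joint.
mask = np.abs(depth.astype(np.int32) - hand_joint_depth) < band
print("hand pixels:", mask.sum())
```

The resulting mask would then feed a contour/convexity analysis to count extended fingers.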
APA, Harvard, Vancouver, ISO, and other styles
33

IVORRA, EUGENIO, MARIO ORTEGA, and MARIANO ALCANIZ. "AZURE KINECT BODY TRACKING UNDER REVIEW FOR THE SPECIFIC CASE OF UPPER LIMB EXERCISES." MM Science Journal 2021, no. 2 (June 2, 2021): 4333–41. http://dx.doi.org/10.17973/mmsj.2021_6_2021012.

Full text
Abstract:
A tool for human pose estimation and quantification using consumer-level equipment is a long-pursued objective. Many studies have employed the Microsoft Kinect v2 depth camera, but with the recent release of the new Azure Kinect a revision is required. This work investigates the specific case of estimating the range of motion in five upper limb exercises using four different pose estimation methods. These exercises were recorded with the Azure Kinect camera and assessed against the OptiTrack motion tracking system as baseline. The statistical analysis consisted of evaluation of intra-rater reliability with intra-class correlation, the Pearson correlation coefficient and the Bland-Altman procedure. The modified version of the OpenPose algorithm with the post-processing algorithm PoseFix had excellent reliability, with most intra-class correlations being over 0.75. The Azure body tracking algorithm had intermediate results. The results obtained justify clinicians employing these methods as quick, low-cost and simple tools to assess upper limb angles.
APA, Harvard, Vancouver, ISO, and other styles
34

Noonan, P. J., J. Howard, W. A. Hallett, and R. N. Gunn. "Repurposing the Microsoft Kinect for Windows v2 for external head motion tracking for brain PET." Physics in Medicine and Biology 60, no. 22 (November 3, 2015): 8753–66. http://dx.doi.org/10.1088/0031-9155/60/22/8753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Alimasi, Alimina, Hongchen Liu, and Chengang Lyu. "Low Frequency Vibration Visual Monitoring System Based on Multi-Modal 3DCNN-ConvLSTM." Sensors 20, no. 20 (October 17, 2020): 5872. http://dx.doi.org/10.3390/s20205872.

Full text
Abstract:
Low frequency vibration monitoring has significant implications for environmental safety and engineering practices. Vibration expressed by visual information should contain sufficient spatial information. An RGB-D camera can record diverse spatial information of vibration in frame images. Deep learning can adaptively transform frame images into deep abstract features through nonlinear mapping, which is an effective method to improve the intelligence of vibration monitoring. In this paper, a multi-modal low frequency visual vibration monitoring system based on Kinect v2 and 3DCNN-ConvLSTM is proposed. Microsoft Kinect v2 collects RGB and depth video information of vibrating objects in unstable ambient light. The 3DCNN-ConvLSTM architecture can effectively learn the spatial-temporal characteristics of multi-frequency vibration: the short-term spatiotemporal features of the collected vibration information are learned through 3D convolution networks and the long-term spatiotemporal features are learned through convolutional LSTM. Multi-modal fusion of the RGB and depth modes is used to further improve the monitoring accuracy to 93% in the low frequency vibration range of 0-10 Hz. The results show that the system can monitor low frequency vibration and meet the basic measurement requirements.
APA, Harvard, Vancouver, ISO, and other styles
36

Seredin, O. S., A. V. Kopylov, S. C. Huang, and D. S. Rodionov. "A SKELETON FEATURES-BASED FALL DETECTION USING MICROSOFT KINECT V2 WITH ONE CLASS-CLASSIFIER OUTLIER REMOVAL." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W12 (May 9, 2019): 189–95. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w12-189-2019.

Full text
Abstract:
Real-time and robust fall detection is one of the key components of care and monitoring systems for elderly people. Depth sensors, as they have become more available, occupy an increasing place in event recognition systems. Some of them can directly produce a skeletal description of the human figure as a compact representation of a person's posture. A skeleton description makes exporting the source video, or detailed depth information, outside the system unnecessary and raises the privacy of the entire system. Based on a comparative study of different RGB-D cameras, the most promising model for further development was chosen: the Microsoft Kinect v2. The TST Fall Detection Dataset v2 is used here as the basis for experiments. The proposed algorithm is based on encoding skeleton features over a sequence of neighbouring frames and a support vector machine classifier. A version of the cumulative sum method is applied to combine the individual decisions on consecutive frames, and a one-class classifier is used to detect low-quality skeletons. A fall detection accuracy of 0.958 was obtained in a leave-one-person-out cross-validation procedure, in which all records of a given person were removed from the training database.
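The cumulative-sum combination of per-frame classifier decisions can be illustrated with a standard one-sided CUSUM detector; the exact variant used by the authors may differ, and the scores below are hypothetical SVM outputs:

```python
# Per-frame classifier scores (hypothetical): positive = "fall-like" posture.
scores = [-0.8, -0.5, -0.2, 0.6, 0.9, 1.1, 0.7, -0.1, -0.6]

# One-sided CUSUM: accumulate evidence above a drift term, alarm at threshold h.
drift, h = 0.2, 1.5
s, alarm_frame = 0.0, None
for i, x in enumerate(scores):
    s = max(0.0, s + x - drift)  # statistic resets to zero while evidence is weak
    if s >= h and alarm_frame is None:
        alarm_frame = i          # first frame at which a fall is declared
print(alarm_frame)
```

Accumulating over frames suppresses isolated misclassifications that would trigger a naive per-frame detector.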
APA, Harvard, Vancouver, ISO, and other styles
37

Timmi, Alessandro, Gino Coates, Karine Fortin, David Ackland, Adam L. Bryant, Ian Gordon, and Peter Pivonka. "Accuracy of a novel marker tracking approach based on the low-cost Microsoft Kinect v2 sensor." Medical Engineering & Physics 59 (September 2018): 63–69. http://dx.doi.org/10.1016/j.medengphy.2018.04.020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Scimmi, Leonardo Sabatino, Matteo Melchiorre, Mario Troise, Stefano Mauro, and Stefano Pastorelli. "A Practical and Effective Layout for a Safe Human-Robot Collaborative Assembly Task." Applied Sciences 11, no. 4 (February 17, 2021): 1763. http://dx.doi.org/10.3390/app11041763.

Full text
Abstract:
This work describes a layout for carrying out a demonstrative assembly task, during which a collaborative robot performs pick-and-place tasks to supply an operator with the parts that he/she has to assemble. In this scenario, the robot and operator share the workspace, and a real-time collision avoidance algorithm is implemented to modify the planned trajectories of the robot, avoiding any collision with the human worker. The movements of the operator are tracked by two Microsoft Kinect v2 sensors to overcome problems related to occlusions and the poor perception of a single camera. The data obtained by the two Kinect sensors are combined and then given as input to the collision avoidance algorithm. The experimental results show the effectiveness of the collision avoidance algorithm and the significant gain in terms of task times that the highest level of human-robot collaboration can bring.
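Real-time collision avoidance of this kind is often built on repulsive potential fields: when the tracked human comes within an influence radius of the robot's tool centre point, a velocity directed away from the human is added to the planned motion. A generic sketch of that idea, not the authors' specific algorithm, with hypothetical positions:

```python
import numpy as np

def avoidance_velocity(tcp, obstacle, d0=0.5, gain=0.8):
    """Repulsive velocity pushing the robot TCP away from a tracked point
    when it is closer than the influence distance d0 (a common potential-field
    scheme; parameters are illustrative)."""
    diff = tcp - obstacle
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0:
        return np.zeros(3)
    # Magnitude grows as the obstacle gets closer; direction is away from it.
    return gain * (1.0 / d - 1.0 / d0) * diff / d

tcp = np.array([0.4, 0.0, 0.5])
hand = np.array([0.6, 0.0, 0.5])  # hypothetical fused Kinect hand position
v = avoidance_velocity(tcp, hand)
print(v)  # points along -x, away from the hand
```

The fused skeleton from the two Kinect sensors would supply `hand` at each control cycle.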
APA, Harvard, Vancouver, ISO, and other styles
39

Gené-Mola, Jordi, Jordi Llorens, Joan R. Rosell-Polo, Eduard Gregorio, Jaume Arnó, Francesc Solanelles, José A. Martínez-Casasnovas, and Alexandre Escolà. "Assessing the Performance of RGB-D Sensors for 3D Fruit Crop Canopy Characterization under Different Operating and Lighting Conditions." Sensors 20, no. 24 (December 10, 2020): 7072. http://dx.doi.org/10.3390/s20247072.

Full text
Abstract:
The use of 3D sensors combined with appropriate data processing and analysis has provided tools to optimise agricultural management through the application of precision agriculture. The recent development of low-cost RGB-Depth cameras has presented an opportunity to introduce 3D sensors into the agricultural community. However, due to the sensitivity of these sensors to highly illuminated environments, it is necessary to know under which conditions RGB-D sensors are capable of operating. This work presents a methodology to evaluate the performance of RGB-D sensors under different lighting and distance conditions, considering both geometrical and spectral (colour and NIR) features. The methodology was applied to evaluate the performance of the Microsoft Kinect v2 sensor in an apple orchard. The results show that sensor resolution and precision decreased significantly under middle to high ambient illuminance (>2000 lx). However, this effect was minimised when measurements were conducted closer to the target. In contrast, illuminance levels below 50 lx affected the quality of colour data and may require the use of artificial lighting. The methodology was useful for characterizing sensor performance throughout the full range of ambient conditions in commercial orchards. Although Kinect v2 was originally developed for indoor conditions, it performed well under a range of outdoor conditions.
APA, Harvard, Vancouver, ISO, and other styles
40

Valdivia, Sergio, Robin Blanco, Alvaro Uribe-Quevedo, Lina Penuela, David Rojas, and Bill Kapralos. "Development and evaluation of two posture-tracking user interfaces for occupational health care." Advances in Mechanical Engineering 10, no. 6 (June 2018): 168781401876948. http://dx.doi.org/10.1177/1687814018769489.

Full text
Abstract:
The spinal column requires special care through exercises focused on muscle strengthening, flexibility, and mobility to minimize the risk of developing musculoskeletal disorders that may affect the quality of life. Guidelines for spinal column exercises are commonly presented through printed and multimedia guides accompanied by demonstrations performed by a physiotherapist, occupational health expert, or physical fitness trainer. However, existing guides lack interaction, and oral explanations may not always be clear to the user, leading to decreased engagement and motivation to start, continue, or complete an exercise program. In this article, we present two interactive and engaging posture-tracking user interfaces intended to promote proper spinal column exercise form. One user interface employs a wooden manikin with an integrated inertial measurement unit to provide tangible user interaction. The other user interface presents a mobile application that provides instructions and explanations about the exercises. Both user interfaces allow recording key postures during the exercise for reference and feedback. We compared the usability of the interfaces through a series of flexion and extension exercises, monitored with an inertial measurement unit worn around the torso and a Microsoft Kinect V2 vision-based sensor. Although no significant differences between the manikin user interface and the mobile application were found in terms of usability, the inertial measurement unit provided more accurate and reliable data in comparison to the Microsoft Kinect V2, as a result of body occlusions in front of the sensor caused during torso flexion. Although both user interfaces provide different experiences and performed well, we believe that a combination of both will improve user engagement and motivation, while providing a more accurate motion profile.
APA, Harvard, Vancouver, ISO, and other styles
41

Marchisotti, Daniele, Pietro Marzaroli, Remo Sala, Michele Sculati, Hermes Giberti, and Marco Tarabini. "Automatic measurement of the hand dimensions using consumer 3D cameras." ACTA IMEKO 9, no. 2 (June 30, 2020): 75. http://dx.doi.org/10.21014/acta_imeko.v9i2.706.

Full text
Abstract:
This article describes the metrological characterisation of two prototypes that use the point clouds acquired by consumer 3D cameras for the measurement of the geometrical parameters of the human hand. The initial part of the work is focused on the general description of algorithms that allow for the derivation of dimensional parameters of the hand. Algorithms were tested on data acquired using Microsoft Kinect v2 and Intel RealSense D400 series sensors. The accuracy of the proposed measurement methods has been evaluated in different tests aiming to identify bias errors deriving from point-cloud inaccuracy and at the identification of the effect of the hand pressure and the wrist flexion/extension. Results evidenced an accuracy better than 1 mm in the identification of the hand's linear dimensions and better than 20 cm³ for hand volume measurements. The relative uncertainty of linear dimensions, areas, and volumes was in the range of 1-10%. Measurements performed with the Intel RealSense D400 were, on average, more repeatable than those performed with the Microsoft Kinect. The uncertainty values limit the use of these devices to applications where the requested accuracy is larger than 5% (volume measurements), 3% (area measurements), and 1 mm (hands' linear dimensions and thickness).
APA, Harvard, Vancouver, ISO, and other styles
42

Marchisotti, Daniele, Paolo Schito, and Emanuele Zappa. "3D Measurement of Large Deformations on a Tensile Structure during Wind Tunnel Tests Using Microsoft Kinect V2." Sensors 22, no. 16 (August 17, 2022): 6149. http://dx.doi.org/10.3390/s22166149.

Full text
Abstract:
Wind tunnel tests often require deformation and displacement measures to determine the behavior of structures and evaluate their response to wind excitation. However, common measurement techniques make it possible to measure these quantities only at a few specific points. Moreover, these kinds of measurements, such as linear variable differential transformers (LVDTs) or fiber-optic sensors, usually influence the downstream and upstream air fluxes and the structure under test. In order to characterize the displacement of the entire structure rather than just a few points, this article presents the application of 3D cameras during a wind tunnel test. To validate this measurement technique in this application field, a wind tunnel test was executed. Three Kinect V2 depth sensors were used for a 3D displacement measurement of a test structure that did not present any optical marker or feature. The results highlighted that, with a low-cost and user-friendly measurement system, it is possible to obtain 3D measurements in a volume of several cubic meters (a 4 m × 4 m × 4 m wind tunnel chamber), without significant disturbance of the wind flux and by means of a simple calibration of the sensors, executed directly inside the wind tunnel. The results showed a displacement directed towards the internal part of the structure for the side most exposed to wind, while the sides parallel to the wind flux were more subject to vibrations and showed an outwards average displacement. These results are compliant with the expected behavior of the structure.
APA, Harvard, Vancouver, ISO, and other styles
43

Van-Bien, Bui, Banh Tien-Long, and Nguyen Duc-Toan. "Improving the Depth Accuracy and Assessment of Microsoft Kinect v2 Towards a Usage for Mechanical Part Modeling." Journal of the Korean Society for Precision Engineering 36, no. 8 (August 1, 2019): 691–97. http://dx.doi.org/10.7736/kspe.2019.36.8.691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Naeemabadi, Mreza, Birthe Dinesen, Ole Kaeseler Andersen, and John Hansen. "Influence of a Marker-Based Motion Capture System on the Performance of Microsoft Kinect v2 Skeleton Algorithm." IEEE Sensors Journal 19, no. 1 (January 1, 2019): 171–79. http://dx.doi.org/10.1109/jsen.2018.2876624.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Amini, Amin, Konstantinos Banitsas, and William R. Young. "Kinect4FOG: monitoring and improving mobility in people with Parkinson’s using a novel system incorporating the Microsoft Kinect v2." Disability and Rehabilitation: Assistive Technology 14, no. 6 (May 23, 2018): 566–73. http://dx.doi.org/10.1080/17483107.2018.1467975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Edmunds, D., and E. Donovan. "SU-G-JeP4-01: An Assessment of a Microsoft Kinect V2 Sensor for Voluntary Breath-Hold Monitoring in Radiotherapy." Medical Physics 43, no. 6Part28 (June 2016): 3681. http://dx.doi.org/10.1118/1.4957111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Fuertes Muñoz, Gabriel, Ramón A. Mollineda, Jesús Gallardo Casero, and Filiberto Pla. "A RGBD-Based Interactive System for Gaming-Driven Rehabilitation of Upper Limbs." Sensors 19, no. 16 (August 9, 2019): 3478. http://dx.doi.org/10.3390/s19163478.

Full text
Abstract:
Current physiotherapy services may not be effective or suitable for certain patients due to lack of motivation, poor adherence to exercises, insufficient supervision and feedback or, in the worst case, refusal to continue with the rehabilitation plan. This paper introduces a novel approach for rehabilitation of upper limbs through KineActiv, a platform based on Microsoft Kinect v2 and developed in Unity Engine. KineActiv proposes exergames to encourage patients to perform rehabilitation exercises prescribed by a specialist, controls the patient's performance, and corrects execution errors on the fly. KineActiv comprises a web platform where the physiotherapist can review session results, monitor patient health, and adjust rehabilitation routines. We recruited 10 patients to assess the system usability as well as the system performance. Results show that KineActiv is a usable, enjoyable and reliable system that does not cause any negative feelings.
APA, Harvard, Vancouver, ISO, and other styles
48

Kozlow, Patrick, Noor Abid, and Svetlana Yanushkevich. "Gait Type Analysis Using Dynamic Bayesian Networks." Sensors 18, no. 10 (October 4, 2018): 3329. http://dx.doi.org/10.3390/s18103329.

Full text
Abstract:
This paper focuses on gait abnormality type identification, specifically recognizing antalgic gait. Through experimentation, we demonstrate that detecting an individual's gait type is a viable biometric that can be used along with other common biometrics for applications such as forensics. To classify gait, the gait data are represented by body joint coordinates obtained using a Microsoft Kinect v2 system. Features such as cadence, stride length, and various joint angles are extracted from the input data. Using approaches such as the dynamic Bayesian network, the obtained features are used to model as well as perform gait type classification. The proposed approach is compared with other classification techniques, and experimental results reveal that it is capable of obtaining an 88.68% recognition rate. The results illustrate the potential of using a dynamic Bayesian network for gait abnormality classification.
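Features such as cadence and stride length can be computed directly from heel-strike events in the Kinect joint trajectories. A minimal sketch with fabricated strike data:

```python
import numpy as np

# Hypothetical heel-strike times (s) and forward ankle positions (m) for one
# foot, e.g. taken from the Kinect v2 AnkleLeft joint over a walking bout.
strike_times = np.array([0.0, 1.1, 2.2, 3.3, 4.4])
strike_pos = np.array([0.0, 1.2, 2.4, 3.6, 4.8])

stride_times = np.diff(strike_times)    # same-foot strike-to-strike intervals
stride_lengths = np.diff(strike_pos)    # forward distance per stride

cadence = 2 * 60.0 / stride_times.mean()  # steps/min (two steps per stride)
print(cadence, stride_lengths.mean())
```

In the paper these scalar features, together with joint angles, form the observation vector fed to the dynamic Bayesian network.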
APA, Harvard, Vancouver, ISO, and other styles
49

Vonstad, Elise Klæbo, Xiaomeng Su, Beatrix Vereijken, Kerstin Bach, and Jan Harald Nilsen. "Comparison of a Deep Learning-Based Pose Estimation System to Marker-Based and Kinect Systems in Exergaming for Balance Training." Sensors 20, no. 23 (December 4, 2020): 6940. http://dx.doi.org/10.3390/s20236940.

Full text
Abstract:
Using standard digital cameras in combination with deep learning (DL) for pose estimation is promising for the in-home and independent use of exercise games (exergames). We need to investigate to what extent such DL-based systems can provide satisfying accuracy on exergame-relevant measures. Our study assesses temporal variation (i.e., variability) in body segment lengths while using a deep learning image processing tool (DeepLabCut, DLC) on two-dimensional (2D) video. This variability is then compared with a gold-standard, marker-based three-dimensional motion capture system (3DMoCap, Qualisys AB) and a 3D RGB-depth camera system (Kinect V2, Microsoft Inc). Simultaneous data were collected from all three systems while participants (N = 12) played a custom balance training exergame. The pose estimation DLC model is pre-trained on a large-scale dataset (ImageNet) and optimized with context-specific pose-annotated images. Wilcoxon's signed-rank test was performed to assess the statistical significance of the differences in variability between systems. The results showed that the DLC method performs comparably to the Kinect and, in some segments, even to the 3DMoCap gold standard system with regard to variability. These results are promising for making exergames more accessible and easier to use, thereby increasing their availability for in-home exercise.
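The variability measure used here, temporal variation in body segment lengths, is straightforward to reproduce: for a rigid segment the inter-joint distance should be constant, so its standard deviation across frames reflects tracking noise. A sketch with simulated joint tracks (noise levels are illustrative, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(1)

def segment_length_sd(joint_a, joint_b):
    """SD of the frame-by-frame distance between two joints; for a rigid
    segment this should be near zero, so a larger SD means noisier tracking."""
    return np.std(np.linalg.norm(joint_a - joint_b, axis=1))

n = 300  # frames
shoulder = np.zeros((n, 3))
# Hypothetical elbow tracks: a true 0.3 m upper arm plus per-system noise.
elbow_mocap = np.tile([0.3, 0.0, 0.0], (n, 1)) + rng.normal(0, 0.001, (n, 3))
elbow_kinect = np.tile([0.3, 0.0, 0.0], (n, 1)) + rng.normal(0, 0.010, (n, 3))

sd_mocap = segment_length_sd(shoulder, elbow_mocap)
sd_kinect = segment_length_sd(shoulder, elbow_kinect)
print(sd_mocap < sd_kinect)  # the noisier system shows higher variability
```

Per-segment SDs computed this way per participant are exactly the kind of paired samples a Wilcoxon signed-rank test compares across systems.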
APA, Harvard, Vancouver, ISO, and other styles
50

Naeemabadi, MReza, Birthe Dinesen, Ole Kæseler Andersen, and John Hansen. "Investigating the impact of a motion capture system on Microsoft Kinect v2 recordings: A caution for using the technologies together." PLOS ONE 13, no. 9 (September 14, 2018): e0204052. http://dx.doi.org/10.1371/journal.pone.0204052.

Full text
APA, Harvard, Vancouver, ISO, and other styles
