Journal articles on the topic "Multi-Kinect"

Consult the top 50 journal articles for your research on the topic "Multi-Kinect".

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles from a variety of disciplines and compile your bibliography correctly.

1

Xinchen Ye, Jingyu Yang, Hao Huang, Chunping Hou, and Yao Wang. "Computational Multi-View Imaging with Kinect". IEEE Transactions on Broadcasting 60, no. 3 (September 2014): 540–54. http://dx.doi.org/10.1109/tbc.2014.2345931.

2

Rahman, Md Wasiur, Fatema Tuz Zohra, and Marina L. Gavrilova. "Score Level and Rank Level Fusion for Kinect-Based Multi-Modal Biometric System". Journal of Artificial Intelligence and Soft Computing Research 9, no. 3 (July 1, 2019): 167–76. http://dx.doi.org/10.2478/jaiscr-2019-0001.

Annotation:
Computational intelligence has firmly made its way into the areas of consumer applications, banking, education, social networks, and security. Among these applications, biometric systems play a significant role in ensuring uncompromised and secure access to resources and facilities. This article presents the first multimodal biometric system that combines the KINECT gait modality with the KINECT face modality utilizing rank-level and score-level fusion. For the KINECT gait modality, a new approach is proposed based on skeletal information processing. The gait cycle is calculated using three consecutive local minima computed for the distance between the left and right ankles. The feature distance vectors are calculated for each person's gait cycle, which allows extracting biometric features such as the mean and the variance of the feature distance vector. For Kinect face recognition, a novel method based on HOG features has been developed. Then, a K-nearest-neighbors feature-matching algorithm is applied for feature classification of both gait and face biometrics. Two fusion algorithms are implemented: a combination of Borda count and logistic regression approaches is used for rank-level fusion, and the weighted sum method is used for score-level fusion. The recognition accuracy obtained for the multi-modal biometric recognition system, tested on the KINECT Gait and KINECT Eurocom Face datasets, is 93.33% for Borda count rank-level fusion, 96.67% for logistic regression rank-level fusion, and 96.6% for score-level fusion.
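The gait-cycle segmentation described in this abstract (three consecutive local minima of the left-right ankle distance, then per-cycle mean/variance features) can be sketched as follows. This is a hypothetical illustration based only on the abstract's description, not the authors' code; all function names are invented:

```python
import math

def ankle_distance(left, right):
    """Euclidean distance between the left and right ankle joints (3D points)."""
    return math.dist(left, right)

def local_minima(signal):
    """Indices of strict local minima in a 1D sequence of distances."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] < signal[i - 1] and signal[i] < signal[i + 1]]

def gait_cycles(distances):
    """A gait cycle spans three consecutive local minima of the
    left-right ankle distance, as stated in the abstract."""
    minima = local_minima(distances)
    return [(minima[i], minima[i + 2]) for i in range(len(minima) - 2)]

def cycle_features(distances, cycle):
    """Mean and variance of the distance signal over one gait cycle."""
    start, end = cycle
    window = distances[start:end + 1]
    mean = sum(window) / len(window)
    var = sum((d - mean) ** 2 for d in window) / len(window)
    return mean, var
```

On a real skeleton stream, `distances` would be the per-frame ankle distance computed from the Kinect skeletal joints; the per-cycle mean and variance would then form the gait feature vector fed to the K-nearest-neighbors matcher.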
3

Abdurrahman, Muhammad Rijal, Tatacipta Dirgantara, Sandro Mihradi, and Andi Isra Mahyuddin. "Validity of Kinect for Assessment of Joint Motion during Gait". Applied Mechanics and Materials 660 (October 2014): 921–26. http://dx.doi.org/10.4028/www.scientific.net/amm.660.921.

Annotation:
One of the most common methods employed in gait analysis is optical measurement. While many analyzer systems are available commercially, their prices are rather prohibitive. In this work, an alternative method to obtain gait data using the Microsoft Kinect™ (Kinect) is investigated. The Kinect, a 3D camera system created for gaming purposes, offers capabilities that may be suitable for gait analysis: it is highly mobile, needs no markers, is easy to use, and is relatively affordable. However, the performance of the Kinect as a measurement tool in gait analysis must first be evaluated. In this work, the Kinect is used to record joint movements of human walking motion to evaluate its suitability as an alternative motion analyzer in gait analysis. The data generated by the Kinect are then processed to obtain gait parameters. The resulting parameters are compared to those obtained by a previously developed multi-camera 3D Motion Analyzer System. The results show promising prospects for Kinect application in gait analysis.
4

Albert, Justin Amadeus, Victor Owolabi, Arnd Gebel, Clemens Markus Brahms, Urs Granacher, and Bert Arnrich. "Evaluation of the Pose Tracking Performance of the Azure Kinect and Kinect v2 for Gait Analysis in Comparison with a Gold Standard: A Pilot Study". Sensors 20, no. 18 (September 8, 2020): 5104. http://dx.doi.org/10.3390/s20185104.

Annotation:
Gait analysis is an important tool for the early detection of neurological diseases and for the assessment of risk of falling in elderly people. The availability of low-cost camera hardware on the market today and recent advances in Machine Learning enable a wide range of clinical and health-related applications, such as patient monitoring or exercise recognition at home. In this study, we evaluated the motion tracking performance of the latest generation of the Microsoft Kinect camera, Azure Kinect, compared to its predecessor Kinect v2 in terms of treadmill walking using a gold standard Vicon multi-camera motion capturing system and the 39 marker Plug-in Gait model. Five young and healthy subjects walked on a treadmill at three different velocities while data were recorded simultaneously with all three camera systems. An easy-to-administer camera calibration method developed here was used to spatially align the 3D skeleton data from both Kinect cameras and the Vicon system. With this calibration, the spatial agreement of joint positions between the two Kinect cameras and the reference system was evaluated. In addition, we compared the accuracy of certain spatio-temporal gait parameters, i.e., step length, step time, step width, and stride time calculated from the Kinect data, with the gold standard system. Our results showed that the improved hardware and the motion tracking algorithm of the Azure Kinect camera led to a significantly higher accuracy of the spatial gait parameters than the predecessor Kinect v2, while no significant differences were found between the temporal parameters. Furthermore, we explain in detail how this experimental setup could be used to continuously monitor the progress during gait rehabilitation in older people.
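The spatio-temporal gait parameters compared in this study (step length, step time, step width, stride time) have simple definitions once heel-strike events and ankle positions are extracted from the skeleton data. A minimal sketch, assuming heel strikes alternate between feet and the walking direction is the x-axis with the z-axis lateral (all names hypothetical, not from the paper):

```python
def step_times(heel_strike_times):
    """Step time: interval between consecutive heel strikes of alternating feet."""
    return [t2 - t1 for t1, t2 in zip(heel_strike_times, heel_strike_times[1:])]

def stride_times(heel_strike_times):
    """Stride time: interval between consecutive heel strikes of the SAME foot,
    i.e. every second event in the alternating sequence."""
    return [heel_strike_times[i + 2] - heel_strike_times[i]
            for i in range(len(heel_strike_times) - 2)]

def step_length(ankle_a, ankle_b):
    """Step length: anterior-posterior distance between the two ankles
    at heel strike (walking direction assumed along the x-axis)."""
    return abs(ankle_a[0] - ankle_b[0])

def step_width(ankle_a, ankle_b):
    """Step width: medio-lateral distance between the ankles (z-axis assumed lateral)."""
    return abs(ankle_a[2] - ankle_b[2])
```

The study's comparison then amounts to computing these quantities from each camera's skeleton stream and from the Vicon markers, and testing the differences for significance.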
5

Rafiuzzaman, Mohammad, and Cemil Öz. "Distance Physical Rehabilitation System Framework with Multi-Kinect Motion Captured Data". Communications on Applied Electronics 1, no. 5 (April 25, 2015): 29–39. http://dx.doi.org/10.5120/cae-1558.

6

Kim, Bonghyun, and Sangyoung Oh. "Design of Multi-Screen Digital Experience Contents System Based on Kinect". Advanced Science Letters 23, no. 3 (March 1, 2017): 1581–84. http://dx.doi.org/10.1166/asl.2017.8638.

7

Lin, Xizhou, Fei Yuan, and En Cheng. "Kinect depth image enhancement with adaptive joint multi-lateral discrete filters". Journal of Difference Equations and Applications 23, no. 1–2 (September 26, 2016): 350–66. http://dx.doi.org/10.1080/10236198.2016.1235159.

8

Rausch, Johannes, Andreas Maier, Rebecca Fahrig, Jang-Hwan Choi, Waldo Hinshaw, Frank Schebesch, Sven Haase, Jakob Wasza, Joachim Hornegger, and Christian Riess. "Kinect-Based Correction of Overexposure Artifacts in Knee Imaging with C-Arm CT Systems". International Journal of Biomedical Imaging 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/2502486.

Annotation:
Objective. To demonstrate a novel approach for compensating overexposure artifacts in CT scans of the knees without attaching any supporting appliances to the patient. C-arm CT systems offer the opportunity to perform weight-bearing knee scans on standing patients to diagnose diseases like osteoarthritis. However, one serious issue is overexposure of the detector in regions close to the patella, which cannot be tackled with common techniques. Methods. A Kinect camera is used to algorithmically remove overexposure artifacts close to the knee surface. Overexposed near-surface knee regions are corrected by extrapolating the absorption values from more reliable projection data. To achieve this, we develop a cross-calibration procedure to transform surface points from Kinect to CT voxel coordinates. Results. Artifacts at both knee phantoms are reduced significantly in the reconstructed data, and a major part of the truncated regions is restored. Conclusion. The results emphasize the feasibility of the proposed approach. The accuracy of the cross-calibration procedure can be increased to further improve correction results. Significance. The correction method can be extended to a multi-Kinect setup for use in real-world scenarios. Using depth cameras does not require prior scans and offers the possibility of a temporally synchronized correction of overexposure artifacts.
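Once the cross-calibration between the depth camera and the CT geometry has been estimated, mapping a Kinect surface point into CT voxel coordinates is a single rigid (homogeneous) transform per point. A minimal sketch of that step, with an invented function name and a pure-Python 4x4 matrix (not the paper's implementation):

```python
def apply_rigid_transform(T, point):
    """Map a 3D surface point through a 4x4 homogeneous transform
    (rotation + translation), e.g. from Kinect camera coordinates
    into CT voxel coordinates."""
    x, y, z = point
    p = (x, y, z, 1.0)  # homogeneous coordinates
    # Only the top three rows are needed; the bottom row is (0, 0, 0, 1).
    return tuple(sum(T[r][c] * p[c] for c in range(4)) for r in range(3))
```

Estimating `T` itself (the actual cross-calibration) requires corresponding point pairs in both coordinate frames, which is the harder part the paper addresses.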
9

Alimasi, Alimina, Hongchen Liu, and Chengang Lyu. "Low Frequency Vibration Visual Monitoring System Based on Multi-Modal 3DCNN-ConvLSTM". Sensors 20, no. 20 (October 17, 2020): 5872. http://dx.doi.org/10.3390/s20205872.

Annotation:
Low frequency vibration monitoring has significant implications for environmental safety and engineering practices. Vibration expressed by visual information should contain sufficient spatial information, and an RGB-D camera can record diverse spatial information of vibration in frame images. Deep learning can adaptively transform frame images into deep abstract features through nonlinear mapping, which is an effective way to improve the intelligence of vibration monitoring. In this paper, a multi-modal low frequency visual vibration monitoring system based on Kinect v2 and 3DCNN-ConvLSTM is proposed. The Microsoft Kinect v2 collects RGB and depth video information of vibrating objects under unstable ambient light. The 3DCNN-ConvLSTM architecture can effectively learn the spatio-temporal characteristics of multi-frequency vibration: the short-term spatio-temporal features of the collected vibration information are learned through 3D convolutional networks, and the long-term spatio-temporal features are learned through convolutional LSTM. Multi-modal fusion of the RGB and depth modes further improves the monitoring accuracy to 93% in the low frequency vibration range of 0–10 Hz. The results show that the system can monitor low frequency vibration and meets the basic measurement requirements.
10

Seddik, Bassem, Sami Gazzah, and Najoua Essoukri Ben Amara. "Human-action recognition using a multi-layered fusion scheme of Kinect modalities". IET Computer Vision 11, no. 7 (August 18, 2017): 530–40. http://dx.doi.org/10.1049/iet-cvi.2016.0326.

11

Vijayanagar, Krishna Rao, Maziar Loghman, and Joohee Kim. "Real-Time Refinement of Kinect Depth Maps using Multi-Resolution Anisotropic Diffusion". Mobile Networks and Applications 19, no. 3 (September 1, 2013): 414–25. http://dx.doi.org/10.1007/s11036-013-0458-7.

12

Kim, Da-won, Hee-jo Nam, Seung-yeon Lee, You-kyung Haam, O.-k. Seo, and HyungJune Lee. "Multi-User Home-Training Healthcare System Using Kinect Sensor and Wearable Devices". Journal of Korean Institute of Communications and Information Sciences 44, no. 4 (April 30, 2019): 719–27. http://dx.doi.org/10.7840/kics.2019.44.4.719.

13

Ibánez, José de Jesús Luis González, and Alf Inge Wang. "Learning Recycling from Playing a Kinect Game". International Journal of Game-Based Learning 5, no. 3 (July 2015): 25–44. http://dx.doi.org/10.4018/ijgbl.2015070103.

Annotation:
The emergence of gesture-based computing and inexpensive gesture recognition technology such as the Kinect has opened doors for a new generation of educational games. Gesture-based interfaces make it possible to provide user interfaces that are more natural and closer to the tasks being carried out, helping students who learn best through movement (compared to audio and vision). For younger students, motion interfaces can stimulate the development of motor skills and let students be physically active during the school day. In this article, an evaluation is presented of a Kinect educational game where students learn to recycle using body gestures. The focus of the evaluation was to investigate potential advantages of using gesture interfaces in educational games, how the game affected the students' engagement, motivation, and learning, and whether there were any social preferences for playing the game. The results show that elementary school students get highly motivated and engaged playing a Kinect recycling game. The students also report that they learn from playing this game and prefer such game-based learning to traditional lectures. Finally, the students preferred playing this game as a multi-player game, where the boys preferred to play competitively while the girls preferred playing collaboratively.
14

Mateo, Fernando, Emilio Soria-Olivas, Juan Carrasco, Santiago Bonanad, Felipe Querol, and Sofía Pérez-Alenda. "HemoKinect: A Microsoft Kinect V2 Based Exergaming Software to Supervise Physical Exercise of Patients with Hemophilia". Sensors 18, no. 8 (July 26, 2018): 2439. http://dx.doi.org/10.3390/s18082439.

Annotation:
Patients with hemophilia need to strictly follow exercise routines to minimize their risk of suffering bleeding in joints, known as hemarthrosis. This paper introduces and validates a new exergaming software tool called HemoKinect that keeps track of exercises using the Microsoft Kinect V2's body-tracking capabilities. The software has been developed in C++ and MATLAB. The Kinect SDK V2.0 libraries have been used to obtain 3D joint positions from the Kinect color and depth sensors. Performing angle calculations and center-of-mass (COM) estimations using these joint positions, HemoKinect can evaluate the following exercises: elbow flexion/extension, knee flexion/extension (squat), step climb (ankle exercise), and multi-directional balance based on COM. The software generates reports and progress graphs and is able to send the results directly to the physician via email. The exercises have been validated with 10 controls and eight patients. HemoKinect successfully registered elbow and knee exercises while displaying real-time joint angle measurements. Additionally, steps were successfully counted in up to 78% of the cases. Regarding balance, differences were found in the scores according to the difficulty level and direction. HemoKinect represents a significant leap forward in the applicability of exergaming to the rehabilitation of patients with hemophilia, allowing remote supervision.
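The joint-angle and center-of-mass (COM) computations mentioned in this abstract reduce to elementary vector geometry on the 3D joint positions delivered by the Kinect SDK. A hedged sketch (not the HemoKinect code; function names are invented):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by segments b->a and b->c,
    e.g. elbow flexion from shoulder, elbow, and wrist positions."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to [-1, 1] to guard against floating-point domain errors.
    ratio = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(ratio))

def center_of_mass(joints, weights=None):
    """Weighted average of joint positions as a crude COM estimate
    (uniform weights by default)."""
    if weights is None:
        weights = [1.0] * len(joints)
    total = sum(weights)
    return tuple(sum(w * p[i] for w, p in zip(weights, joints)) / total
                 for i in range(3))
```

In practice the weights would come from anthropometric segment-mass tables rather than being uniform, and the joint triples would be read from the Kinect body frame each tick.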
15

Akbari, Ghasem, Mohammad Nikkhoo, Lizhen Wang, Carl P. C. Chen, Der-Sheng Han, Yang-Hua Lin, Hung-Bin Chen, and Chih-Hsiu Cheng. "Frailty Level Classification of the Community Elderly Using Microsoft Kinect-Based Skeleton Pose: A Machine Learning Approach". Sensors 21, no. 12 (June 10, 2021): 4017. http://dx.doi.org/10.3390/s21124017.

Annotation:
Frailty is one of the most important geriatric syndromes, which can be associated with increased risk for incident disability and hospitalization. Developing a real-time classification model of elderly frailty level could be beneficial for designing a clinical predictive assessment tool. Hence, the objective of this study was to predict the elderly frailty level utilizing the machine learning approach on skeleton data acquired from a Kinect sensor. Seven hundred and eighty-seven community elderly were recruited in this study. The Kinect data were acquired from the elderly performing different functional assessment exercises including: (1) 30-s arm curl; (2) 30-s chair sit-to-stand; (3) 2-min step; and (4) gait analysis tests. The proposed methodology was successfully validated by gender classification with accuracies up to 84 percent. Regarding frailty level evaluation and prediction, the results indicated that support vector classifier (SVC) and multi-layer perceptron (MLP) are the most successful estimators in prediction of the Fried’s frailty level with median accuracies up to 97.5 percent. The high level of accuracy achieved with the proposed methodology indicates that ML modeling can identify the risk of frailty in elderly individuals based on evaluating the real-time skeletal movements using the Kinect sensor.
16

Chang, M., and Z. Kang. "AN INDOOR SLAM METHOD BASED ON KINECT AND MULTI-FEATURE EXTENDED INFORMATION FILTER". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 12, 2017): 327–32. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-327-2017.

Annotation:
Based on the framework of ORB-SLAM, in this paper the transformation parameters between adjacent Kinect image frames are computed using ORB keypoints, from which the priori information matrix and information vector are calculated, realizing the motion update of the multi-feature extended information filter. Using the point cloud formed from the depth image, the ICP algorithm is applied to extract point features of the scene and build an observation model, while calculating the a-posteriori information matrix and information vector and weakening the influence of error accumulation in the positioning process. Furthermore, this paper applies the ORB-SLAM framework to realize real-time autonomous positioning in an unknown indoor environment. Finally, Lidar was used to acquire data in the scene in order to estimate the positioning accuracy of the proposed method.
17

SHIMURA, Kouyou, Yoshinobu ANDO, Takashi YOSHIMI, and Makoto MIZUKAWA. "2P1-P22 Development of Multi-Kinect Sensor Targeted to Guidance Robot (Communication Robot)". Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2013 (2013): _2P1—P22_1—_2P1—P22_2. http://dx.doi.org/10.1299/jsmermd.2013._2p1-p22_1.

18

Sevrin, L., N. Noury, N. Abouchi, F. Jumel, B. Massot, and J. Saraydaryan. "Preliminary results on algorithms for multi-kinect trajectory fusion in a living lab". IRBM 36, no. 6 (November 2015): 361–66. http://dx.doi.org/10.1016/j.irbm.2015.10.003.

19

Lachat, E., T. Landes, and P. Grussenmeyer. "COMBINATION OF TLS POINT CLOUDS AND 3D DATA FROM KINECT V2 SENSOR TO COMPLETE INDOOR MODELS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B5 (June 16, 2016): 659–66. http://dx.doi.org/10.5194/isprsarchives-xli-b5-659-2016.

Annotation:
The combination of data coming from multiple sensors is more and more applied for remote sensing issues (multi-sensor imagery) but also in cultural heritage or robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, for example time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.
21

Sanchez, Jesus Adrian Garcia, Kazuhiro Ohnishi, Atsushi Shibata, Fangyan Dong, and Kaoru Hirota. "Deep Level Emotion Understanding Using Customized Knowledge for Human-Robot Communication". Journal of Advanced Computational Intelligence and Intelligent Informatics 19, no. 1 (January 20, 2015): 91–99. http://dx.doi.org/10.20965/jaciii.2015.p0091.

Annotation:
In this study, a method for acquiring deep level emotion understanding is proposed to facilitate better human-robot communication, where customized learning knowledge of an observed agent (human or robot) is used together with the observed input information from a Kinect sensor device. It aims to obtain agent-dependent emotion understanding by utilizing special customized knowledge of the agent rather than ordinary surface level emotion understanding that uses visual/acoustic/distance information without any customized knowledge. In an experiment employing special demonstration scenarios, where a company employee's emotion is understood by a secretary eye robot equipped with a Kinect sensor device, it is confirmed that the proposed method provides deep level emotion understanding that differs from ordinary surface level emotion understanding. The proposal is planned to be applied to part of the emotion understanding module in the demonstration experiments of an ongoing robotics research project titled "Multi-Agent Fuzzy Atmosfield."
22

Wu, Yingnian, Guojun Yang, and Lin Zhang. "Mouse simulation in human–machine interface using kinect and 3 gear systems". International Journal of Modeling, Simulation, and Scientific Computing 05, no. 04 (September 29, 2014): 1450015. http://dx.doi.org/10.1142/s1793962314500159.

Annotation:
We never stop finding better ways to communicate with machines. To interact with computers we have tried several approaches, from punched tape and tape readers to QWERTY keyboards and command lines, from graphical user interfaces and mice to multi-touch screens. The ways we communicate with computers and devices are getting more direct and easier. In this paper, we present gesture-based mouse simulation in a human–computer interface based on the 3 Gear Systems with two Kinect sensors. The Kinect sensor is well suited to dynamic gesture tracking and pose recognition. We use the 3 Gear Systems as a mouse; more specifically, gestures perform click, double click, and scroll. We use a coordinate-converting matrix and a Kalman filter to reduce the jitter caused by errors, giving the interface a better user experience. Finally, the future of the human–computer interface is discussed.
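A Kalman filter of the kind mentioned here, applied per cursor coordinate, can be sketched as a scalar constant-position filter. This is an illustrative guess at the smoothing step, not the paper's implementation; the parameter values are arbitrary:

```python
def kalman_smooth(measurements, q=1e-3, r=0.25):
    """Scalar constant-position Kalman filter: smooths a noisy 1D
    coordinate stream (q: process noise variance, r: measurement
    noise variance)."""
    x, p = measurements[0], 1.0  # initial state and uncertainty
    out = [x]
    for z in measurements[1:]:
        p += q                   # predict: uncertainty grows
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update toward the measurement
        p *= (1 - k)             # posterior uncertainty shrinks
        out.append(x)
    return out
```

Running one such filter on each of the cursor's x and y coordinates damps the hand-tracking jitter: an isolated spike in the raw stream is pulled back toward the filtered estimate instead of being passed straight to the pointer.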
23

Kim, Minseok, and Jae Yeol Lee. "Interactive lens through smartphones for supporting level-of-detailed views in a public display". Journal of Computational Design and Engineering 2, no. 2 (January 7, 2015): 73–78. http://dx.doi.org/10.1016/j.jcde.2014.12.001.

Annotation:
In this paper, we propose a new approach to providing an interactive and collaborative lens among multiple users for supporting level-of-detail views using smartphones in a public display. In order to provide smartphone-based lens capability, the locations of smartphones are effectively detected and tracked using Kinect, which provides RGB data and depth data (RGB-D). In particular, human skeleton information is extracted from the Kinect 3D depth data to calculate the smartphone location more efficiently and correctly with respect to the public display and to support head tracking for easy target selection and adaptive view generation. The suggested interactive and collaborative lens using smartphones can not only explore local spaces of the shared display but also provide various kinds of activities such as LOD viewing and collaborative interaction. Implementation results are given to show the advantages and effectiveness of the proposed approach.
24

Mat Sanusi, Khaleel Asyraaf, Daniele Di Mitri, Bibeg Limbu, and Roland Klemke. "Table Tennis Tutor: Forehand Strokes Classification Based on Multimodal Data and Neural Networks". Sensors 21, no. 9 (April 30, 2021): 3121. http://dx.doi.org/10.3390/s21093121.

Annotation:
Beginner table-tennis players require constant real-time feedback while learning the fundamental techniques. However, due to various constraints, such as the mentor's inability to be around all the time and expensive sensors and equipment for sports training, beginners are unable to get the immediate real-time feedback they need during training. Sensors have been widely used to train beginners and novices in various skills, including psychomotor skills. Sensors enable the collection of multimodal data which can be utilised with machine learning to classify training mistakes, give feedback, and further improve the learning outcomes. In this paper, we introduce the Table Tennis Tutor (T3), a multi-sensor system consisting of a smartphone with its built-in sensors for collecting motion data and a Microsoft Kinect for tracking body position. We focused on forehand stroke mistake detection. We collected a dataset recording an experienced table tennis player performing 260 short forehand strokes (correct) and mimicking 250 long forehand strokes (mistake). We analysed and annotated the multimodal data for training a recurrent neural network that classifies correct and incorrect strokes. To investigate the accuracy level of the aforementioned sensors, three combinations were validated in this study: smartphone sensors only, the Kinect only, and both devices combined. The results show that the smartphone sensors alone perform worse than the Kinect, but achieve similar accuracy, with better precision, when combined with the Kinect. To further strengthen T3's potential for training, an expert interview session was held virtually with a table tennis coach to investigate the coach's perception of having a real-time feedback system to assist beginners during training sessions. The outcome of the interview shows positive expectations and provided further input that can benefit future implementations of the T3.
25

Jun, Chanmo, Ju Yeon Lee, and Sang Do Noh. "A Study on Modeling Automation of Human Engineering Simulation Using Multi Kinect Depth Cameras". Transactions of the Society of CAD/CAM Engineers 21, no. 1 (March 1, 2016): 9–19. http://dx.doi.org/10.7315/cadcam.2016.009.

26

Liu, Shenglan, Muxin Sun, Xiaodong Huang, Wei Wang, and Feilong Wang. "Feature fusion using Extended Jaccard Graph and word embedding for robot". Assembly Automation 37, no. 3 (August 7, 2017): 278–84. http://dx.doi.org/10.1108/aa-01-2017-005.

Annotation:
Purpose Robot vision is a fundamental device for human–robot interaction and robot complex tasks. In this paper, the authors aim to use Kinect and propose a feature graph fusion (FGF) for robot recognition. Design/methodology/approach The feature fusion utilizes red green blue (RGB) and depth information to construct fused feature from Kinect. FGF involves multi-Jaccard similarity to compute a robust graph and word embedding method to enhance the recognition results. Findings The authors also collect DUT RGB-Depth (RGB-D) face data set and a benchmark data set to evaluate the effectiveness and efficiency of this method. The experimental results illustrate that FGF is robust and effective to face and object data sets in robot applications. Originality/value The authors first utilize Jaccard similarity to construct a graph of RGB and depth images, which indicates the similarity of pair-wise images. Then, fusion feature of RGB and depth images can be computed by the Extended Jaccard Graph using word embedding method. The FGF can get better performance and efficiency in RGB-D sensor for robots.
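The pairwise Jaccard similarity underlying the Extended Jaccard Graph can be illustrated with a small sketch: similarity between per-image feature sets, with graph edges kept above a threshold. This is a simplified stand-in for the paper's multi-Jaccard construction, with invented names:

```python
def jaccard(a, b):
    """Jaccard similarity between two feature sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def similarity_graph(named_sets, threshold=0.0):
    """Weighted edges between all pairs of images whose feature-set
    Jaccard similarity exceeds the threshold."""
    names = list(named_sets)
    edges = {}
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            w = jaccard(named_sets[u], named_sets[v])
            if w > threshold:
                edges[(u, v)] = w
    return edges
```

In the paper, one such graph would be built for the RGB features and one for the depth features, and the fused representation would then be derived from both graphs via the word-embedding step.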
27

Wang, Sen, Runxiao Wang, Xinxin Zuo, and Weiwei Yu. "Real-Time Artifact Compensation for Depth Images of Multi-Frequency ToF". Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 37, no. 1 (February 2019): 152–59. http://dx.doi.org/10.1051/jnwpu/20193710152.

Annotation:
During the last few years, the Time-of-Flight (ToF) sensor has had a significant impact on research and industrial fields because it can capture depth easily. For dynamic scenes and phase fusion, the ToF sensor's working principles can lead to significant artifacts; therefore, an efficient method combining motion compensation with kernel-density-estimate multi-frequency unwrapping is proposed. First, the raw multi-phase images are captured and the optical flow between each frequency is calculated. Second, multiple depth hypotheses are generated and a spatial kernel density estimate is used to rank them against the wrapped phase images. Finally, the accurate depth is obtained from the fused phase image. The algorithm is validated on the Kinect V2 and the pixel-wise part is optimized using the GPU. The method shows superior real-time performance on real datasets.
28

Diao, Xiaolei, Xiaoqiang Li, and Chen Huang. "Multi-Term Attention Networks for Skeleton-Based Action Recognition". Applied Sciences 10, no. 15 (July 31, 2020): 5326. http://dx.doi.org/10.3390/app10155326.

Annotation:
The same action takes a different amount of time in different instances, and this difference affects the accuracy of action recognition to a certain extent. We propose an end-to-end deep neural network called “Multi-Term Attention Networks” (MTANs), which addresses this problem by extracting temporal features at different time scales. The network consists of a Multi-Term Attention Recurrent Neural Network (MTA-RNN) and a Spatio-Temporal Convolutional Neural Network (ST-CNN). In MTA-RNN, a method for fusing multi-term temporal features is proposed to extract the temporal dependence of different time scales, and the weighted fused temporal feature is recalibrated by the attention mechanism. Ablation studies show that this network has powerful spatio-temporal dynamic modeling capabilities for actions with different time scales. We perform extensive experiments on four challenging benchmark datasets, including the NTU RGB+D, UT-Kinect, Northwestern-UCLA, and UWA3DII datasets. Our method achieves better results than the state-of-the-art benchmarks, which demonstrates the effectiveness of MTANs.
29

Oliver, Miguel, Francisco Montero, José Pascual Molina, Pascual González und Antonio Fernández-Caballero. „Multi-camera systems for rehabilitation therapies: a study of the precision of Microsoft Kinect sensors“. Frontiers of Information Technology & Electronic Engineering 17, Nr. 4 (April 2016): 348–64. http://dx.doi.org/10.1631/fitee.1500347.

30

Geerse, Daphne J., Bert H. Coolen und Melvyn Roerdink. „Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments“. PLOS ONE 10, Nr. 10 (13.10.2015): e0139913. http://dx.doi.org/10.1371/journal.pone.0139913.

31

Kuang, Yiqun, Hong Cheng, Jiasheng Hao, Ruimeng Xie und Fang Cui. „Multi-modal gesture recognition with voting-based dynamic time warping“. International Journal of Advanced Robotic Systems 16, Nr. 6 (01.11.2019): 172988141989239. http://dx.doi.org/10.1177/1729881419892398.

Annotation:
Gesture recognition has remained a challenging problem in the field of human–robot interaction. With the development of depth sensors such as Kinect, different modalities have become available for gesture recognition, but their advantages have not been fully exploited. One of the critical issues for multi-modal gesture recognition is how to fuse features from different modalities. In this article, we present a unified framework for multi-modal gesture recognition based on dynamic time warping. The 3D implicit shape model is applied to characterize the space-time structure of the local features extracted from different modalities. All votes from the local features are then incorporated into a common probability space, which is used for building the distance matrix. Meanwhile, an upper-bounding method, UB_Pro, is proposed to speed up dynamic time warping. The proposed approach is evaluated on the challenging ChaLearn Isolated Gesture Dataset, showing performance comparable to state-of-the-art approaches for the multi-modal gesture recognition problem.
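Plain dynamic time warping, the distance the voting framework builds on, is easy to sketch (the paper's UB_Pro upper bound for pruning is not reproduced here):

```python
# Sketch: classic dynamic time warping between two 1-D sequences.
# D[i][j] is the minimal cumulative cost of aligning s[:i] with t[:j].

def dtw(s, t):
    n, m = len(s), len(t)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # best of insertion, deletion, and match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # 0.0: same shape, shifted timing
```

An upper bound such as UB_Pro lets a search skip candidates whose bound already exceeds the best distance found so far.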
32

Salau, Jennifer, Jan H. Haas, Wolfgang Junge und Georg Thaller. „Extrinsic calibration of a multi-Kinect camera scanning passage for measuring functional traits in dairy cows“. Biosystems Engineering 151 (November 2016): 409–24. http://dx.doi.org/10.1016/j.biosystemseng.2016.10.008.

33

Essmaeel, Kyis, Cyrille Migniot, Albert Dipanda, Luigi Gallo, Ernesto Damiani und Giuseppe De Pietro. „A new 3D descriptor for human classification: application for human detection in a multi-kinect system“. Multimedia Tools and Applications 78, Nr. 16 (17.04.2019): 22479–508. http://dx.doi.org/10.1007/s11042-019-7568-6.

34

Salous, S., J. Newton, L. Leroy und S. S. Chendeb. „Dynamic Sensor Selection Based on Joint Data Quality in the Context of a Multi-Kinect Module inside the CAVE “Le SAS”“. International Journal of Computer Theory and Engineering 8, Nr. 6 (Dezember 2016): 471–74. http://dx.doi.org/10.7763/ijcte.2016.v8.1091.

35

Zeng, Ming, Hong Lin Ren, Qing Hao Meng, Chang Wei Chen und Shu Gen Ma. „Motion Comparison Method Combining Segmented Multi-Joint Line Graphs with the SIFT Matching Technique“. Applied Mechanics and Materials 599-601 (August 2014): 1566–70. http://dx.doi.org/10.4028/www.scientific.net/amm.599-601.1566.

Annotation:
In this paper, an effective motion comparison method based on segmented multi-joint line graphs combined with SIFT feature matching is proposed. First, the multi-joint 3D motion data are captured using the Kinect. Second, the 3D motion data are normalized and distorted data are removed, so that a 2D line graph can be obtained. Next, SIFT features of the 2D motion line graph are extracted. Finally, the line graphs are divided into several regions and the comparison results are calculated from the SIFT matching ratios between the tutor's local line graph and the trainee's local line graph. The experimental results show that the proposed method can not only handle several challenging problems in motion analysis, e.g., differing motion rhythms and large amounts of data, but also provide detailed error-correction cues.
36

Comeras-Chueca, Cristina, Lorena Villalba-Heredia, Marcos Pérez-Llera, Gabriel Lozano-Berges, Jorge Marín-Puyalto, Germán Vicente-Rodríguez, Ángel Matute-Llorente, José A. Casajús und Alejandro González-Agüero. „Assessment of Active Video Games’ Energy Expenditure in Children with Overweight and Obesity and Differences by Gender“. International Journal of Environmental Research and Public Health 17, Nr. 18 (15.09.2020): 6714. http://dx.doi.org/10.3390/ijerph17186714.

Annotation:
(1) Background: Childhood obesity has become a major global health problem, and active video games (AVG) could be used to increase energy expenditure. The aim of this study was to investigate the energy expenditure during an AVG intervention combined with exercise, differentiating by gender. (2) Methods: A total of 45 children with overweight or obesity (19 girls) performed an AVG intervention combined with exercise. The AVG used were the Xbox Kinect, Nintendo Wii, dance mats, the BKOOL cycling simulator, and Nintendo Switch. The energy expenditure was estimated from the heart rate recorded during the sessions and the data from the individual maximal tests. (3) Results: The mean energy expenditure was 315.1 kilocalories in a one-hour session. Participants spent the most energy on BKOOL, followed by Ring Fit Adventures, dance mats, Xbox Kinect, and the Nintendo Wii, with significant differences between BKOOL and the Nintendo Wii. Significant differences between boys and girls were found, but they were partially due to differences in weight, VO2max, and fat-free mass. (4) Conclusions: The energy expenditure with AVG combined with multi-component exercise was 5.68 kcal/min in boys and 4.66 kcal/min in girls with overweight and obesity. AVG could be an effective strategy to increase energy expenditure in children and adolescents with overweight and obesity.
37

Tsai, G. J., K. W. Chiang, C. H. Chu, Y. L. Chen, N. El-Sheimy und A. Habib. „THE PERFORMANCE ANALYSIS OF AN INDOOR MOBILE MAPPING SYSTEM WITH RGB-D SENSOR“. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1/W4 (26.08.2015): 183–88. http://dx.doi.org/10.5194/isprsarchives-xl-1-w4-183-2015.

Annotation:
Over the years, Mobile Mapping Systems (MMSs) have been widely applied to urban mapping, path management and monitoring, cyber cities, etc. The key concept of mobile mapping is based on positioning technology and photogrammetry, and multi-sensor integrated mapping technology has been established to achieve this integration. In recent years, robotic technology has developed rapidly. Another mapping technology, based on low-cost sensors and generally used in robotic systems, is known as Simultaneous Localization and Mapping (SLAM). The objective of this study is to develop a prototype indoor MMS for mobile mapping applications, in particular to reduce costs, enhance the efficiency of data collection, and validate direct georeferencing (DG) performance. The proposed indoor MMS is composed of a tactical-grade Inertial Measurement Unit (IMU), a Kinect RGB-D sensor, a light detection and ranging (LIDAR) sensor, and a robot. In summary, this paper designs the payload for an indoor MMS to generate floor plans. The first session concentrates on comparing different positioning algorithms in the indoor environment. Next, the indoor plans are generated by the two sensors, the Kinect RGB-D sensor and the LIDAR on the robot. Moreover, the generated floor plan is compared with the known plan for both validation and verification.
38

Sun, Xinyao, Anup Basu und Irene Cheng. „Multi-Sensor Motion Fusion Using Deep Neural Network Learning“. International Journal of Multimedia Data Engineering and Management 8, Nr. 4 (Oktober 2017): 1–18. http://dx.doi.org/10.4018/ijmdem.2017100101.

Annotation:
Hand pose estimation for continuous sequences has been an important topic not only in computer vision but also in human-computer interaction. Exploring the feasibility of using hand gestures to replace input devices, e.g., mouse, keyboard, joystick and touch screen, has attracted increasing attention from academic and industrial researchers. The fast advancement of hand pose estimation techniques is complemented by the rapid development of smart sensor technologies such as Kinect and Leap. We introduce a multi-sensor hand pose estimation system. Two tracking models are proposed based on Deep (Recurrent) Neural Network (DRNN) architectures. Data captured from different sensors are analyzed and fused to produce an optimal hand pose sequence. Experimental results show that our models outperform previous methods with better accuracy, meeting real-time application requirements. Performance comparisons between DNN and DRNN, spatial and spatial-temporal features, and single- and dual-sensor setups are also presented.
39

Rosas-Cervantes, Vinicio Alejandro, Quoc-Dong Hoang, Soon-Geul Lee und Jae-Hwan Choi. „Multi-Robot 2.5D Localization and Mapping Using a Monte Carlo Algorithm on a Multi-Level Surface“. Sensors 21, Nr. 13 (04.07.2021): 4588. http://dx.doi.org/10.3390/s21134588.

Annotation:
Most indoor environments have wheelchair adaptations or ramps, providing an opportunity for mobile robots to navigate sloped areas while avoiding steps. Indoor environments with integrated sloped areas are divided into different levels. These multi-level areas represent a challenge for mobile robot navigation due to sudden changes in reference sensors such as visual, inertial, or laser-scan instruments. Using multiple cooperative robots is advantageous for mapping and localization, since they permit rapid exploration of the environment and provide higher redundancy than a single robot. This study proposes a multi-robot localization scheme using two robots (a leader and a follower) to perform fast and robust environment exploration in multi-level areas. The leader robot is equipped with a 3D LIDAR for 2.5D mapping and a Kinect camera for RGB image acquisition. Using the 3D LIDAR, the leader robot obtains information for particle localization, with particles sampled from the walls and obstacle tangents. We employ a convolutional neural network on the RGB images for multi-level area detection. Once the leader robot detects a multi-level area, it generates a path and sends a notification to the follower robot to go to the detected location. The follower robot utilizes a 2D LIDAR to explore the boundaries of the even areas and generates a 2D map using an extension of the iterative closest point algorithm. The 2D map is utilized as a re-localization resource in case of failure of the leader robot.
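The particle-localization idea can be reduced to a one-dimensional toy: weight particles by how well a simulated range measurement fits them, then resample. The measurement model and all numbers below are illustrative, not the paper's:

```python
# Sketch: one weight-and-resample step of Monte Carlo localization in 1-D.
# A robot somewhere in [0, 10] measures its range to a wall at x = 10;
# particles consistent with the measurement survive resampling.
import random

def mcl_step(particles, measured_range, wall_at, noise=0.5):
    # weight: particles whose predicted range matches the measurement score high
    weights = [max(1e-9, noise - abs((wall_at - p) - measured_range))
               for p in particles]
    # resample proportionally to weight
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]
survivors = mcl_step(particles, measured_range=3.0, wall_at=10.0)
mean = sum(survivors) / len(survivors)   # concentrates near x = 7
```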
40

Salau, J., J. H. Haas, G. Thaller, M. Leisen und W. Junge. „Developing a multi-Kinect-system for monitoring in dairy cows: object recognition and surface analysis using wavelets“. Animal 10, Nr. 9 (2016): 1513–24. http://dx.doi.org/10.1017/s1751731116000021.

41

Strbac, Matija, Nebojsa Malesevic, Radoje Cobeljic und Laszlo Schwirtlich. „Feedback control of the forearm movement of tetraplegic patient based on microsoft kinect and multi-pad electrodes“. Journal of Automatic Control 21, Nr. 1 (2013): 7–11. http://dx.doi.org/10.2298/jac1301007s.

Annotation:
We present a novel system for control of elbow movements by electrical stimulation of the biceps and triceps in tetraplegic patients. The system uses a novel algorithm and applies closed-loop control. Movement of the arm is generated via multi-pad electrodes, developed by Tecnalia Serbia, Ltd., driven by a stimulator that allows asynchronous activation of individual pads. The electrodes are positioned over the innervation of the biceps and triceps muscles on the upper arm. This layout allows distributed, and thereby selective and low-fatiguing, activation of paralyzed muscles. The sensory feedback comes from the image acquired by the Microsoft Kinect system, and the depth stream analysis is performed in real time by a computer running MATLAB. The image-based feedback allows control of the hand position at the target by co-contraction of the antagonists. The controller adjusts the stimulation intensity and results in tracking of the desired movement. The algorithm was shown to operate efficiently in a tetraplegic patient.
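The closed-loop idea can be sketched as a proportional controller that adjusts stimulation intensity from the angle error reported by the depth feedback; the gain and the first-order elbow model below are hypothetical, and the actual control law is more involved:

```python
# Sketch: stimulation intensity adjusted in proportion to the elbow-angle
# error, with the arm simulated by a hypothetical first-order actuator model.

def control_step(target, measured, intensity, gain, i_min=0.0, i_max=1.0):
    """Raise or lower stimulation intensity in proportion to the angle error."""
    error = target - measured
    return min(i_max, max(i_min, intensity + gain * error))

angle, intensity = 0.0, 0.0                  # degrees, normalized intensity
for _ in range(200):
    intensity = control_step(90.0, angle, intensity, gain=0.01)
    angle += 0.2 * (100.0 * intensity - angle)   # hypothetical elbow response
```

Because the intensity accumulates the error over time, the loop settles on the target angle without a steady-state offset.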
42

Gao, Mingjing, Min Yu, Hang Guo und Yuan Xu. „Mobile Robot Indoor Positioning Based on a Combination of Visual and Inertial Sensors“. Sensors 19, Nr. 8 (13.04.2019): 1773. http://dx.doi.org/10.3390/s19081773.

Annotation:
Multi-sensor integrated navigation technology has been applied to the indoor navigation and positioning of robots. To address the low navigation accuracy and error accumulation of mobile robots with a single sensor, this paper presents an indoor mobile robot positioning method based on a combination of visual and inertial sensors. First, the visual sensor (Kinect) is used to obtain the color image and the depth image, and feature matching is performed by the improved scale-invariant feature transform (SIFT) algorithm. Then, the absolute orientation algorithm is used to calculate the rotation matrix and translation vector of the robot between two consecutive image frames. An inertial measurement unit (IMU) has the advantages of high-frequency updating and rapid, accurate positioning, and can compensate for the Kinect's lower speed and lack of precision. Three-dimensional data such as acceleration, angular velocity, magnetic field strength, and temperature can be obtained in real time with an IMU. The data obtained by the visual sensor are loosely combined with those obtained by the IMU; that is, the differences in the positions and attitudes of the two sensor outputs are optimally combined by an adaptive fade-out extended Kalman filter to estimate the errors. Finally, several experiments show that this method can significantly improve the accuracy of indoor positioning of mobile robots based on visual and inertial sensors.
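Loosely coupled fusion can be hinted at with a scalar Kalman filter: the IMU drives the predict step and a Kinect position fix drives the update step (the paper uses an adaptive fade-out extended Kalman filter over the full pose, which this toy filter only gestures at):

```python
# Sketch: scalar Kalman fusion of a drifting IMU with an absolute Kinect fix.
# x is the position estimate, p its variance.

def predict(x, p, u, q=0.01):
    """IMU displacement increment u advances the state; uncertainty grows."""
    return x + u, p + q

def update(x, p, z, r=0.25):
    """Absolute position fix z pulls the state back; uncertainty shrinks."""
    k = p / (p + r)              # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0
for u in [0.1] * 10:             # drifting IMU: true step is smaller than 0.1
    x, p = predict(x, p, u)
x, p = update(x, p, z=0.9)       # Kinect says we are at 0.9, not 1.0
```

After the update the estimate sits between the dead-reckoned 1.0 and the measured 0.9, weighted by the accumulated uncertainty.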
43

Salau, Jennifer, Jan H. Haas, Wolfgang Junge und Georg Thaller. „A multi-Kinect cow scanning system: Calculating linear traits from manually marked recordings of Holstein-Friesian dairy cows“. Biosystems Engineering 157 (Mai 2017): 92–98. http://dx.doi.org/10.1016/j.biosystemseng.2017.03.001.

44

Pu, Haitao, Jian Lian und Mingqu Fan. „Automatic Recognition of Flock Behavior of Chickens with Convolutional Neural Network and Kinect Sensor“. International Journal of Pattern Recognition and Artificial Intelligence 32, Nr. 07 (14.03.2018): 1850023. http://dx.doi.org/10.1142/s0218001418500234.

Annotation:
In this paper, we propose an automatic convolutional neural network (CNN)-based method to recognize chicken behavior within a poultry farm using a Kinect sensor. It resolves the difficulties in flock behavior image classification by leveraging a data-driven mechanism and exploiting non-manually extracted multi-scale image features which combine both the local and global characteristics of the image. To the best of our knowledge, this is probably the first attempt at a deep learning strategy in the field of domestic animal behavior recognition. To test the performance of our proposed method, we conducted experiments comparing it with state-of-the-art methods. Experimental results show that our proposed approach outperforms the state-of-the-art methods in both effectiveness and efficiency. Our proposed CNN architecture for recognizing flock behavior of chickens produces an extremely impressive accuracy of 99.17%.
45

Kim, Yejin. „Interactive Dance Guidance Using Example Motions“. International Journal of Engineering & Technology 7, Nr. 3.34 (01.09.2018): 521. http://dx.doi.org/10.14419/ijet.v7i3.34.19372.

Annotation:
Background/Objectives: Human movements in dance are difficult to train without taking an actual class. In this paper, an interactive system of dance guidance is proposed to teach dance motions using examples. Methods/Statistical analysis: In the proposed system, a set of example motions is captured from experts through a method of marker-free motion capture, which consists of multiple Kinect cameras. The captured motions are calibrated and optimally reconstructed into a motion database. For the efficient exchange of motion data between a student and an instructor, a posture-based motion search and multi-mode views are provided for online lessons. Findings: To capture accurate example motions, the proposed system solves the joint occlusion problem by using multiple Kinect cameras. An iterative closest point (ICP) method is used to unify the multiple camera data into the same coordinate system, which generates an output motion in real time. Compared to a commercial system, our system can capture various dance motions with an average accuracy above 85%, as shown in the experimental results. Using touch screen devices, a student can browse a desired motion from the database to start a dance practice and send their own motion to an instructor for feedback. By conducting online dance lessons such as ballet, K-pop, and traditional Korean dance, our experimental results show that the participating students can improve their dance skills over a given period. Improvements/Applications: Our system is applicable to any student who wants to learn dance motions without taking an actual class and to receive online feedback from a distant instructor.
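The ICP step that unifies the camera coordinate systems rests on a closed-form alignment of matched point pairs, shown here in 2D (full ICP iterates this with nearest-neighbour matching; `align_2d` is an illustrative name, not the paper's code):

```python
# Sketch: closed-form 2-D rigid alignment of matched point pairs, the core
# step inside ICP. Recovers the rotation and translation taking src onto dst.
import math

def align_2d(src, dst):
    n = len(src)
    csx = sum(x for x, _ in src) / n; csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n; cdy = sum(y for _, y in dst) / n
    # rotation angle from centred correspondences
    s_cross = sum((x - csx) * (v - cdy) - (y - csy) * (u - cdx)
                  for (x, y), (u, v) in zip(src, dst))
    s_dot = sum((x - csx) * (u - cdx) + (y - csy) * (v - cdy)
                for (x, y), (u, v) in zip(src, dst))
    theta = math.atan2(s_cross, s_dot)
    # translation maps the rotated src centroid onto the dst centroid
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

# points seen by camera B are camera A's points rotated 90 degrees and shifted
src = [(0, 0), (1, 0), (0, 2)]
dst = [(3, 1), (3, 2), (1, 1)]
theta, tx, ty = align_2d(src, dst)
```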
46

Liu, Xueping, Yibo Li und Qingjun Wang. „Multi-View Hierarchical Bidirectional Recurrent Neural Network for Depth Video Sequence Based Action Recognition“. International Journal of Pattern Recognition and Artificial Intelligence 32, Nr. 10 (20.06.2018): 1850033. http://dx.doi.org/10.1142/s0218001418500337.

Annotation:
Human action recognition based on depth video sequences is an important research direction in the field of computer vision. The present study proposes a hierarchical multi-view classification framework for depth-video-sequence-based action recognition. Considering the distinguishing features of the 3D human action space, we project the 3D human action image onto the three coordinate planes, so that the 3D depth image is converted into three 2D images, which are then fed to three subnets, respectively. As the number of layers increases, the representations of the subnets are hierarchically fused to become the inputs of the next layers. The final representations of the depth video sequence are fed into a single-layer perceptron, and the final result is decided by accumulating the perceptron's output over time. We compare with other methods on two publicly available datasets, and we also verify the proposed method on a human action database acquired with our Kinect system. Our experimental results demonstrate that our model has high computational efficiency and achieves the performance of state-of-the-art methods.
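The projection onto the three coordinate planes is straightforward to sketch; the subnets that consume these views are omitted, and the point cloud below is a toy stand-in:

```python
# Sketch: project a 3-D point cloud onto the xy, xz, and yz coordinate planes,
# turning one 3-D representation into three 2-D views.

def project_views(points):
    """Return (xy, xz, yz) 2-D views of a list of (x, y, z) points."""
    xy = [(x, y) for x, y, z in points]
    xz = [(x, z) for x, y, z in points]
    yz = [(y, z) for x, y, z in points]
    return xy, xz, yz

cloud = [(1, 2, 3), (4, 5, 6)]
xy, xz, yz = project_views(cloud)
print(xz)  # [(1, 3), (4, 6)]
```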
47

Salau, Jennifer, Jan H. Haas, Wolfgang Junge und Georg Thaller. „Automated calculation of udder depth and rear leg angle in Holstein-Friesian cows using a multi-Kinect cow scanning system“. Biosystems Engineering 160 (August 2017): 154–69. http://dx.doi.org/10.1016/j.biosystemseng.2017.06.006.

48

Setyonugroho, Adityo, James Purnama und Maulahikmah Galinium. „Multi Modal Gender Recognition for Gender-Based Marketing Using Depth Camera“. Journal of Applied Information, Communication and Technology 2, Nr. 2 (13.05.2015): 45–52. http://dx.doi.org/10.33555/ejaict.v2i2.87.

Annotation:
This research was conducted to show that gender recognition can be done by a computer in real time using a depth camera. Gender recognition can be used in many industries, such as security, marketing, and other sectors. The purpose of this research is to detect gender by using images of the user (RGB images) and voice. Furthermore, gender-based marketing is used as the implementation of this system. By using multiple modalities, the result is more accurate than using only one factor. An image processing algorithm, Linear Discriminant Analysis (LDA), is used for processing facial images. Furthermore, gender can also be detected from the characteristic frequency range of each gender's speech. Autocorrelation is one method able to detect pitch from the captured audio. Kinect for Windows v2 was used as the visual and audio sensor. This research proved that gender can be detected using these modalities with the right algorithms. Several problems were also found during the experiments, such as input data problems, mismatched algorithms, and low accuracy in some cases. In conclusion, gender detection can be done by computer (in real time or not), and several ideal conditions must be met to get proper, high-accuracy results, such as the person's distance from the camera, lighting, and image size.
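Autocorrelation pitch detection, the cue used for the voice modality, can be sketched as follows; the 165 Hz decision threshold is an illustrative assumption, not the paper's trained boundary:

```python
# Sketch: pitch estimation by autocorrelation. The lag that maximizes the
# autocorrelation within the plausible pitch range gives the period.
import math

def pitch_autocorr(signal, rate, f_lo=60.0, f_hi=400.0):
    lo, hi = int(rate / f_hi), int(rate / f_lo)   # lag bounds from pitch range
    best_lag = max(range(lo, hi + 1),
                   key=lambda lag: sum(signal[i] * signal[i + lag]
                                       for i in range(len(signal) - lag)))
    return rate / best_lag

rate = 8000
tone = [math.sin(2 * math.pi * 120 * t / rate) for t in range(800)]  # 120 Hz
f0 = pitch_autocorr(tone, rate)
label = "male" if f0 < 165.0 else "female"   # toy threshold, assumption only
```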
49

Adduci, M., K. Amplianitis und R. Reulke. „A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System“. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (05.06.2014): 9–15. http://dx.doi.org/10.5194/isprsarchives-xl-5-9-2014.

Annotation:
Human detection and tracking has been a prominent research area for scientists around the globe. State-of-the-art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi-camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi-RGB-D-camera indoor tracking system, examining how camera calibration and pose can affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state-of-the-art single-camera pose estimators were evaluated to check how well poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi-camera configuration. Results have shown that single-camera estimators provide high-accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the 3D trajectories produced by each sensor.
50

Salau, Jennifer, Jan Henning Haas, Wolfgang Junge und Georg Thaller. „Determination of Body Parts in Holstein Friesian Cows Comparing Neural Networks and k Nearest Neighbour Classification“. Animals 11, Nr. 1 (29.12.2020): 50. http://dx.doi.org/10.3390/ani11010050.

Annotation:
Machine learning methods have become increasingly important in animal science, and the success of an automated application using machine learning often depends on the right choice of method for the respective problem and data set. The recognition of objects in 3D data is still a widely studied topic and especially challenging when it comes to the partition of objects into predefined segments. In this study, two machine learning approaches were utilized for the recognition of body parts of dairy cows from 3D point clouds, i.e., sets of data points in space. The low-cost off-the-shelf depth sensor Microsoft Kinect V1 has been used in various studies related to dairy cows. The 3D data were gathered from a multi-Kinect recording unit designed to record Holstein Friesian cows from both sides in free walking from three different camera positions. For the determination of the body parts head, rump, back, legs and udder, five properties of the pixels in the depth maps (row index, column index, depth value, variance, mean curvature) were used as features in the training data set. For each camera position, a k-nearest-neighbour classifier and a neural network were trained and compared. Both methods showed small Hamming losses (between 0.007 and 0.027 for k-nearest-neighbour (kNN) classification and between 0.045 and 0.079 for neural networks) and could be considered successful regarding the classification of pixels to body parts. However, the kNN classifier was superior, reaching overall accuracies of 0.888 to 0.976, varying with the camera position. Precision and recall values associated with individual body parts ranged from 0.84 to 1 and from 0.83 to 1, respectively. Once trained, kNN classification is at runtime prone to higher costs in terms of computational time and memory compared to the neural networks. The cost vs. accuracy ratio for each methodology needs to be taken into account when deciding which method should be implemented in the application.
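The kNN side of the comparison can be sketched with toy per-pixel features and the Hamming loss used for scoring (the three-value tuples below stand in for the paper's five-feature training data; the labels are from its body-part set):

```python
# Sketch: k-nearest-neighbour labelling of depth-map pixels from per-pixel
# features, plus the Hamming loss used to score the classification.
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_tuple, label); returns the majority label."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda fl: dist(fl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def hamming_loss(truth, pred):
    """Fraction of labels that disagree."""
    return sum(t != p for t, p in zip(truth, pred)) / len(truth)

train = [((0, 0, 1.2), "head"), ((0, 1, 1.3), "head"),
         ((9, 9, 0.8), "udder"), ((9, 8, 0.7), "udder"),
         ((5, 5, 1.0), "back"),  ((5, 6, 1.1), "back")]
queries = [(0, 0.5, 1.25), (9, 8.5, 0.75)]
pred = [knn_predict(train, q) for q in queries]
loss = hamming_loss(["head", "udder"], pred)   # 0.0 on this toy split
```

The runtime cost the abstract mentions is visible even here: every prediction scans the whole training set, whereas a trained network's forward pass is fixed-size.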