A selection of scientific literature on the topic "Multi-Kinect"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, theses, reports, and other scientific sources on the topic "Multi-Kinect".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Multi-Kinect"

1

Ye, Xinchen, Jingyu Yang, Hao Huang, Chunping Hou, and Yao Wang. "Computational Multi-View Imaging with Kinect". IEEE Transactions on Broadcasting 60, no. 3 (September 2014): 540–54. http://dx.doi.org/10.1109/tbc.2014.2345931.

2

Rahman, Md Wasiur, Fatema Tuz Zohra, and Marina L. Gavrilova. "Score Level and Rank Level Fusion for Kinect-Based Multi-Modal Biometric System". Journal of Artificial Intelligence and Soft Computing Research 9, no. 3 (July 1, 2019): 167–76. http://dx.doi.org/10.2478/jaiscr-2019-0001.

Abstract:
Computational intelligence has firmly made its way into the areas of consumer applications, banking, education, social networks, and security. Among all these applications, biometric systems play a significant role in ensuring uncompromised and secure access to resources and facilities. This article presents a first multimodal biometric system that combines a Kinect gait modality with a Kinect face modality utilizing rank-level and score-level fusion. For the Kinect gait modality, a new approach is proposed based on skeletal information processing. The gait cycle is calculated using three consecutive local minima computed for the distance between the left and right ankles. The feature distance vectors are calculated for each person's gait cycle, which allows extracting biometric features such as the mean and the variance of the feature distance vector. For Kinect face recognition, a novel method based on HOG features has been developed. The K-nearest neighbors algorithm is then applied for feature classification for both gait and face biometrics. Two fusion algorithms are implemented: Borda count and logistic regression approaches are used in the rank-level fusion, and the weighted sum method is used for score-level fusion. The recognition accuracy obtained for the multimodal biometric recognition system, tested on the Kinect Gait and Kinect Eurocom Face datasets, is 93.33% for Borda count rank-level fusion, 96.67% for logistic regression rank-level fusion, and 96.6% for score-level fusion.
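The score-level fusion step described above can be sketched as a weighted sum of min-max-normalized matcher scores. The weights, candidate names, and scores below are illustrative placeholders, not values from the paper:

```python
def weighted_sum_fusion(face_score, gait_score, w_face=0.6, w_gait=0.4):
    """Fuse two matcher scores (already normalized to [0, 1]) by a weighted sum."""
    return w_face * face_score + w_gait * gait_score

# Fuse per-candidate (face, gait) scores and pick the best match.
candidates = {"alice": (0.91, 0.78), "bob": (0.55, 0.82)}
fused = {name: weighted_sum_fusion(f, g) for name, (f, g) in candidates.items()}
best = max(fused, key=fused.get)
```

Rank-level fusion with a Borda count would instead sum each candidate's rank position across the two matchers and sort by the total.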
3

Abdurrahman, Muhammad Rijal, Tatacipta Dirgantara, Sandro Mihradi, and Andi Isra Mahyuddin. "Validity of Kinect for Assessment of Joint Motion during Gait". Applied Mechanics and Materials 660 (October 2014): 921–26. http://dx.doi.org/10.4028/www.scientific.net/amm.660.921.

Abstract:
One of the most common methods employed in gait analysis is the optical measurement method. While many analyzer systems are available commercially, their prices are rather prohibitive. In this work, an alternative method to obtain gait data using the Microsoft Kinect™ (Kinect) is investigated. The Kinect, a 3D camera system created for gaming purposes, offers capabilities that may be suitable for gait analysis: it has high mobility, needs no markers, is easy to use, and is relatively affordable. However, the performance of the Kinect as a measurement tool in gait analysis must first be evaluated. In this work, the Kinect is utilized to capture the joint movements of human walking motion to evaluate its suitability as an alternative motion analyzer in gait analysis. The data generated by the Kinect are then processed to obtain gait parameters. The resulting parameters are compared to those obtained by a previously developed multi-camera 3D Motion Analyzer System. The results show promising prospects for Kinect application in gait analysis.
4

Albert, Justin Amadeus, Victor Owolabi, Arnd Gebel, Clemens Markus Brahms, Urs Granacher, and Bert Arnrich. "Evaluation of the Pose Tracking Performance of the Azure Kinect and Kinect v2 for Gait Analysis in Comparison with a Gold Standard: A Pilot Study". Sensors 20, no. 18 (September 8, 2020): 5104. http://dx.doi.org/10.3390/s20185104.

Abstract:
Gait analysis is an important tool for the early detection of neurological diseases and for the assessment of the risk of falling in elderly people. The availability of low-cost camera hardware on the market today and recent advances in machine learning enable a wide range of clinical and health-related applications, such as patient monitoring or exercise recognition at home. In this study, we evaluated the motion tracking performance of the latest generation of the Microsoft Kinect camera, the Azure Kinect, compared to its predecessor, the Kinect v2, during treadmill walking, using a gold-standard Vicon multi-camera motion capture system and the 39-marker Plug-in Gait model. Five young and healthy subjects walked on a treadmill at three different velocities while data were recorded simultaneously with all three camera systems. An easy-to-administer camera calibration method developed here was used to spatially align the 3D skeleton data from both Kinect cameras and the Vicon system. With this calibration, the spatial agreement of joint positions between the two Kinect cameras and the reference system was evaluated. In addition, we compared the accuracy of certain spatio-temporal gait parameters, i.e., step length, step time, step width, and stride time calculated from the Kinect data, with the gold-standard system. Our results showed that the improved hardware and motion tracking algorithm of the Azure Kinect camera led to significantly higher accuracy of the spatial gait parameters than the predecessor Kinect v2, while no significant differences were found between the temporal parameters. Furthermore, we explain in detail how this experimental setup could be used to continuously monitor progress during gait rehabilitation in older people.
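Once heel-strike events have been extracted from the skeleton stream, the spatio-temporal gait parameters compared in the study (step time, step length, stride time) reduce to simple arithmetic. A minimal sketch, with an invented event format (the paper does not specify one):

```python
import statistics

def gait_parameters(heel_strikes):
    """heel_strikes: list of (time_s, foot, x_m) tuples sorted by time,
    where foot is 'L' or 'R' and x_m is position along the walking direction."""
    pairs = list(zip(heel_strikes, heel_strikes[1:]))
    step_times = [b[0] - a[0] for a, b in pairs]
    step_lengths = [abs(b[2] - a[2]) for a, b in pairs]
    # Stride time: interval between consecutive strikes of the same foot.
    stride_times = [heel_strikes[i + 2][0] - heel_strikes[i][0]
                    for i in range(len(heel_strikes) - 2)
                    if heel_strikes[i][1] == heel_strikes[i + 2][1]]
    return (statistics.mean(step_times), statistics.mean(step_lengths),
            statistics.mean(stride_times))

step_t, step_len, stride_t = gait_parameters(
    [(0.0, "L", 0.0), (0.5, "R", 0.6), (1.0, "L", 1.2), (1.5, "R", 1.8)])
```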
5

Rafiuzzaman, Mohammad, and Cemil Öz. "Distance Physical Rehabilitation System Framework with Multi-Kinect Motion Captured Data". Communications on Applied Electronics 1, no. 5 (April 25, 2015): 29–39. http://dx.doi.org/10.5120/cae-1558.

6

Kim, Bonghyun, and Sangyoung Oh. "Design of Multi-Screen Digital Experience Contents System Based on Kinect". Advanced Science Letters 23, no. 3 (March 1, 2017): 1581–84. http://dx.doi.org/10.1166/asl.2017.8638.

7

Lin, Xizhou, Fei Yuan, and En Cheng. "Kinect depth image enhancement with adaptive joint multi-lateral discrete filters". Journal of Difference Equations and Applications 23, no. 1-2 (September 26, 2016): 350–66. http://dx.doi.org/10.1080/10236198.2016.1235159.

8

Rausch, Johannes, Andreas Maier, Rebecca Fahrig, Jang-Hwan Choi, Waldo Hinshaw, Frank Schebesch, Sven Haase, Jakob Wasza, Joachim Hornegger, and Christian Riess. "Kinect-Based Correction of Overexposure Artifacts in Knee Imaging with C-Arm CT Systems". International Journal of Biomedical Imaging 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/2502486.

Abstract:
Objective. To demonstrate a novel approach for compensating overexposure artifacts in CT scans of the knees without attaching any supporting appliances to the patient. C-arm CT systems offer the opportunity to perform weight-bearing knee scans on standing patients to diagnose diseases like osteoarthritis. However, one serious issue is overexposure of the detector in regions close to the patella, which cannot be tackled with common techniques. Methods. A Kinect camera is used to algorithmically remove overexposure artifacts close to the knee surface. Overexposed near-surface knee regions are corrected by extrapolating the absorption values from more reliable projection data. To achieve this, we develop a cross-calibration procedure to transform surface points from Kinect to CT voxel coordinates. Results. Artifacts at both knee phantoms are reduced significantly in the reconstructed data, and a major part of the truncated regions is restored. Conclusion. The results emphasize the feasibility of the proposed approach. The accuracy of the cross-calibration procedure can be increased to further improve the correction results. Significance. The correction method can be extended to a multi-Kinect setup for use in real-world scenarios. Using depth cameras does not require prior scans and offers the possibility of a temporally synchronized correction of overexposure artifacts.
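At its core, the cross-calibration step amounts to applying an estimated rigid transform to each Kinect surface point and quantizing by the CT voxel spacing. A hypothetical sketch; the rotation, translation, and spacing values are placeholders, not the paper's calibration:

```python
def kinect_to_voxel(point_mm, R, t, voxel_mm):
    """Map a Kinect surface point (mm) into CT voxel indices via a rigid transform."""
    x, y, z = point_mm
    world = [R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i] for i in range(3)]
    return tuple(round(c / voxel_mm) for c in world)

# Illustrative calibration: identity rotation, 10 mm shift in x, 2 mm voxels.
R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
idx = kinect_to_voxel((10.0, 4.0, 6.0), R_identity, (10.0, 0.0, 0.0), 2.0)
```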
9

Alimasi, Alimina, Hongchen Liu, and Chengang Lyu. "Low Frequency Vibration Visual Monitoring System Based on Multi-Modal 3DCNN-ConvLSTM". Sensors 20, no. 20 (October 17, 2020): 5872. http://dx.doi.org/10.3390/s20205872.

Abstract:
Low-frequency vibration monitoring has significant implications for environmental safety and engineering practice. Vibration expressed through visual information contains rich spatial information, and an RGB-D camera can record diverse spatial information about vibration in frame images. Deep learning can adaptively transform frame images into deep abstract features through nonlinear mapping, which is an effective way to make vibration monitoring more intelligent. In this paper, a multi-modal low-frequency visual vibration monitoring system based on the Kinect v2 and a 3DCNN-ConvLSTM architecture is proposed. The Microsoft Kinect v2 collects RGB and depth video of vibrating objects under unstable ambient light. The 3DCNN-ConvLSTM architecture can effectively learn the spatial-temporal characteristics of multi-frequency vibration: the short-term spatiotemporal features of the collected vibration information are learned through 3D convolution networks, and the long-term spatiotemporal features are learned through convolutional LSTM. Multi-modal fusion of the RGB and depth modes further improves the monitoring accuracy to 93% in the low-frequency vibration range of 0–10 Hz. The results show that the system can monitor low-frequency vibration and meets the basic measurement requirements.
10

Seddik, Bassem, Sami Gazzah, and Najoua Essoukri Ben Amara. "Human-action recognition using a multi-layered fusion scheme of Kinect modalities". IET Computer Vision 11, no. 7 (August 18, 2017): 530–40. http://dx.doi.org/10.1049/iet-cvi.2016.0326.

More sources

Dissertations on the topic "Multi-Kinect"

1

Yang, Lin. "3D Sensing and Tracking of Human Gait". Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32540.

Abstract:
Motion capture technology has been applied in many fields such as animation, medicine, and the military since it was first proposed in the 1970s. Based on the principles applied, motion capture technology is generally classified into six categories: 1) optical; 2) inertial; 3) magnetic; 4) mechanical; 5) acoustic; and 6) markerless. Different from the other five kinds of motion capture technologies, which try to track the paths of specific points with different equipment, markerless systems recognize the motion of human or non-human bodies with vision-based technology, which focuses on analyzing and processing the captured images. The user does not need to wear any equipment and is free to perform any action in an extensible measurement area while a markerless motion capture system is working. Though this kind of system is considered the preferred solution for motion capture, realizing an effective, high-accuracy markerless system is much harder than with the other technologies mentioned, which makes markerless motion capture a popular research direction. The Microsoft Kinect sensor has attracted much attention since the launch of its first version thanks to its depth-sensing feature, which gives the sensor the ability to do motion capture without any extra devices. Recently, Microsoft released a new version of the Kinect sensor with improved hardware, targeted at the consumer market. However, to the best of our knowledge, the question of the sensor's accuracy has remained open since its release. In this thesis, we measure the depth accuracy of the newly released Kinect v2 depth sensor from different aspects and propose a trilateration method to improve the depth accuracy with multiple Kinects simultaneously. Based on the trilateration method, a low-cost, easy-to-set-up human gait tracking system that requires no wearable equipment is realized.
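The trilateration idea, locating a point from its distances to several calibrated sensors, can be illustrated in 2-D, where subtracting pairs of circle equations yields a small linear system. This is the generic textbook construction, not necessarily the thesis's exact formulation:

```python
def trilaterate_2d(anchors, ranges):
    """Locate a point from three anchor positions and measured distances.
    Linearized by subtracting the first circle equation from the other two;
    exact when the three range measurements are mutually consistent."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # solve the 2x2 system by Cramer's rule
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]   # illustrative sensor positions
ranges = [2 ** 0.5, 10 ** 0.5, 10 ** 0.5]        # distances to the point (1, 1)
px, py = trilaterate_2d(anchors, ranges)
```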
2

Salous, Saleh. "Fusion de données multi-Kinect visant à améliorer l'interaction gestuelle au sein d'une installation de réalité virtuelle". Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080085/document.

Abstract:
Virtual Reality is the most modern technology that allows a user to interact with an artificial environment created by hardware and software, with visual and aural feedback powerful enough to create the impression of a realistic environment. As a consequence, this form of computer interaction can be used in various contexts such as entertainment, medicine, or vehicle-driving training. Furthermore, numerous types of VR installations exist depending on the physical and financial constraints as well as on the intended final user experience provided by the system. The subject of this thesis is user interaction in a specific type of VR installation called a CAVE. Our CAVE, named "Le SAS", currently relies on AR technology to detect users, and a joystick is used to provide directional inputs. Our objective is to present, describe, and analyze an alternative user-tracking method relying on a four-Kinect set-up tasked with tracking the user's movements inside this CAVE. Proper user tracking is one of the main challenges of Virtual Reality as well as one of the core elements that define a functional VR system; it is therefore important to implement an effective tracking system. In order to create true interaction with the virtual world provided by the CAVE, the sensors can detect various types of input. In the case of a multi-Kinect system, interaction with the CAVE is based on user gestures, whose recognition is performed by the Kinects on a skeleton created after fusing the joint data from the various sensors.
This thesis focuses on four main points. The first part provides a context analysis of our immersive CAVE "Le SAS" and defines the features as well as the constraints of this specific environment in which the multi-Kinect system is installed. In the second part, the topic of tracking algorithms is discussed. Indeed, the immersive CAVE's large scale implies a tracking system composed of several sensors, and the use of a network of cameras to track a user inside the CAVE calls for an algorithm that determines in real time which sensors provide the most accurate tracking data and will therefore properly recognize the user's inputs and movements. Subsequently, we propose a gesture detection algorithm. Once the user's gestures are properly tracked, such an algorithm is necessary in order to provide interaction: while the Kinects can capture the user's movements, the CAVE needs to be configured to recognize specific gestures as potential inputs. The presented algorithm focuses on three specific gestures: raising the right hand, raising the left hand, and short hopping. Lastly, we provide experimental results comparing the effectiveness of a multi-Kinect set-up with that of a single sensor, and present data showing a noticeable increase in gesture detection accuracy with the four-Kinect system.
3

Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces". Thesis, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Abstract:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect depth sensor's capabilities and performance. The study comprises an examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of the color and reflectance characteristics of an object are also analyzed. The study examines two versions of the Kinect sensor: one dedicated to the Xbox 360 video game console, and the more recent Microsoft Kinect for Windows. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces through the linkage of multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurement accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of the sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is further extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system thus provides a comprehensive calibration between the reference Kinect and the robot, which can then be used to interact under visual guidance with large objects, such as vehicles, positioned within the significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment (an underground parking garage). The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
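Calibrating each Kinect against a reference sensor boils down to estimating a rigid transform from corresponding points. A closed-form 2-D version is sketched below to illustrate the idea (the full 3-D case typically uses an SVD-based method such as Kabsch's algorithm); all coordinates are made up:

```python
import math

def align_2d(src, dst):
    """Closed-form 2-D rigid alignment (rotation + translation) mapping src
    points onto corresponding dst points -- the core of registering one
    sensor's point cloud into a reference sensor's frame."""
    cs = [sum(p[i] for p in src) / len(src) for i in (0, 1)]   # src centroid
    cd = [sum(p[i] for p in dst) / len(dst) for i in (0, 1)]   # dst centroid
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx, sy, dx, dy = sx - cs[0], sy - cs[1], dx - cd[0], dy - cd[1]
        num += sx * dy - sy * dx   # cross terms
        den += sx * dx + sy * dy   # dot terms
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, (tx, ty)

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # points in the extra Kinect's frame
dst = [(2.0, 3.0), (2.0, 4.0), (1.0, 3.0)]   # same points in the reference frame
theta, (tx, ty) = align_2d(src, dst)
```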
4

Almeida, Caio Sacramento de Britto. "A multi-view environment for markerless augmented reality". Instituto de Matemática, Departamento de Ciência da Computação, 2014. http://repositorio.ufba.br/ri/handle/ri/19287.

Abstract:
Augmented reality is a technology that allows 2D and 3D computer graphics to be aligned, or registered, with real-world scenes in real time. This projection of virtual images requires a reference in the captured real image, which is usually obtained through the use of one or more markers. There are situations, however, in which the use of markers may not be suitable, for example in medical applications. In this work, a multi-camera environment is presented, composed of augmented-reality glasses and two Kinect devices, which runs augmented-reality applications without fiducial markers. All devices are calibrated to a common reference frame, and the virtual models are then transformed accordingly. For this, two approaches were specified and implemented: the first based on one Kinect together with optical-flow and accelerometer data from the augmented-reality glasses, and the other based on two Kinect devices alone. The quality and performance results obtained by these two approaches are presented and discussed, along with a comparison between them and all the related issues encountered and addressed in this work.
5

Holmquist, Karl. "SLAMIt A Sub-Map Based SLAM System: On-line creation of multi-leveled map". Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133974.

Abstract:
In many situations after a big catastrophe such as the one in Fukushima, the disaster area is highly dangerous for humans to enter. It is in such environments that a semi-autonomous robot could limit the risks to humans by exploring and mapping the area on its own. This thesis intends to design and implement a software-based SLAM system with the potential to run in real time using a Kinect 2 sensor as input. The focus of the thesis has been to create a system that allows for efficient storage and representation of the map, in order to be able to explore large environments. This is done by separating the map into different abstraction levels corresponding to local maps connected by a global map. During the implementation, this structure has been kept in mind in order to allow modularity, which makes it possible for each sub-component in the system to be exchanged if needed. The thesis is broad in the sense that it uses techniques from distinct areas to solve the sub-problems that exist, some examples being object detection and classification, point-cloud registration, and efficient 3D-based occupancy trees.
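The two-level map organization the abstract describes, local sub-maps indexed by a global map, can be sketched as a dictionary of fixed-size tiles. Class and parameter names here are illustrative, not taken from the thesis:

```python
class TiledMap:
    """Global map kept as a dictionary of local occupancy sets (sub-maps)."""

    def __init__(self, tile_size=10.0):
        self.tile_size = tile_size
        self.submaps = {}          # (tile_x, tile_y) -> set of occupied cells

    def _key(self, x, y):
        """Assign a world coordinate to its fixed-size local sub-map tile."""
        return (int(x // self.tile_size), int(y // self.tile_size))

    def mark_occupied(self, x, y, resolution=0.1):
        """Record an occupied cell, creating the local sub-map on demand."""
        cell = (round(x / resolution), round(y / resolution))
        self.submaps.setdefault(self._key(x, y), set()).add(cell)

m = TiledMap()
m.mark_occupied(1.0, 1.0)      # falls in tile (0, 0)
m.mark_occupied(25.0, 3.0)     # falls in tile (2, 0)
```

Only the tiles the robot has actually visited consume memory, which is what makes the sub-map split attractive for large environments.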
6

Lu, Yu-Chen (呂昱辰). "AR Painting: Multi Kinect Reconstruction and 3D Painting". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/30833531016720675740.

Abstract:
Master's thesis, Ming Chuan University, Department of Computer and Communication Engineering, academic year 105 (2016).
Many people have experience with graffiti; however, not every real object can be drawn on. We therefore designed a system to draw graffiti not only on paper or walls but on any object. Our system can paint on a virtual object reconstructed from Kinect v2 cameras, and the result is displayed on the real object in real time by projection mapping. The system consists of three parts: model reconstruction, 3D painting, and projection mapping. First, the 3D coordinates of chessboard corners are used to compute the transformation matrices of multiple Kinect v2 cameras, and the point cloud data are aligned by these matrices during model reconstruction. Second, the user can hold controllers to paint the model in virtual reality. Because the number of points in the point cloud data is enormous, we use a KD-tree to increase the speed of interaction. Finally, we construct a corresponding texture for the model and display the texture on the object via projection mapping. In conclusion, we integrate Kinect v2 cameras, an HTC Vive, and a projector in our graffiti system. The user can therefore paint an object using virtual reality devices while, at the same time, other audience members see the result through projection mapping.
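The KD-tree lookup that keeps the painting interaction fast can be sketched with a minimal 3-D nearest-neighbor search; this is a generic implementation, not the thesis code:

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a 3-D KD-tree over point tuples."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, query, depth=0, best=None):
    """Return the stored point closest to query, pruning far branches."""
    if node is None:
        return best
    point, left, right = node
    if best is None or math.dist(query, point) < math.dist(query, best):
        best = point
    axis = depth % 3
    near, far = (left, right) if query[axis] < point[axis] else (right, left)
    best = nearest(near, query, depth + 1, best)
    # Only descend the far side if the splitting plane is closer than best.
    if abs(query[axis] - point[axis]) < math.dist(query, best):
        best = nearest(far, query, depth + 1, best)
    return best

cloud = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (2.0, 2.0, 2.0), (5.0, 5.0, 5.0)]
tree = build_kdtree(cloud)
hit = nearest(tree, (1.2, 0.9, 1.1))   # point the virtual brush touches
```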
7

Huang, Kuan-Chih (黃冠智). "3D Video Surveillance by An Octal-shaped Multi-Kinect Imaging Device". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/30020673456274369347.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Multimedia Engineering, academic year 104 (2015).
In this study, a system for 3D video surveillance using an octal-shaped multi-Kinect imaging device is proposed. Mounted on the ceiling of a building to monitor the indoor environment below, the octal-shaped multi-Kinect imaging device has a 360° view of the environment, being composed of eight Kinect devices looking outward and one Kinect device looking downward. The system includes functions for 3D human image construction, 3D environment image construction, and the monitoring and display of human activities in the environment. To implement these functions, several methods and strategies are proposed. First, a method for 3D image construction using Kinect images is adopted. Then, a method for calibrating the relative transformation between every two consecutive Kinect images using a new distance measure, color-filtered distance-weighted correlation (DWC), is proposed for use in constructing a complete 3D human image. Furthermore, in order to handle the blurring that appears in a completed 3D human image, a strategy for selecting typical image frames with the largest or most suitable human body parts, and a method for splitting human limbs in the images based on skin-color filtering and Canny edge detection, are proposed. Second, an environment construction method based on the color-filtered DWC measure is proposed, which can be used to merge the background images acquired by the octal-shaped multi-Kinect imaging device into a 360° indoor environment image. A method for viewing the complete 3D human image in the environment image from different perspective views is also proposed, which uses human body detection and regression analysis to decide the three axes of the human in the 3D image and displays the complete 3D human image accordingly. Good experimental results are also shown, which demonstrate the feasibility of the proposed system and methods for real applications.
8

Ke, Chia-Yu, und 柯佳佑. „Multi-Human Occlusion Handling and Tracking Using Particle Filter based on Kinect“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/20078774177112122440.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Electrical and Control Engineering (ROC academic year 100).
This paper presents a novel human detection and tracking system using a particle filter approach based on Kinect sensors. The human detection module extracts human positions by integrating foreground extraction with the depth image, and human bodies are identified according to features scaled with depth information. The contribution of this work is a real-time particle filtering approach that tracks multiple humans with parameters drawn from several features, including position, color, and depth. Furthermore, unlike most tracking systems, it handles occlusion through the cooperation of depth and tracking information, and scales its processing to different human postures and depth positions. Experimental results show that the proposed tracking system achieves good results on challenging videos; even in completely dark scenes and under changing lighting conditions, the presented algorithm still performs well.
9

Lin, Huan-Po, und 林煥博. „Human Modeling and Tracking for 3D Video Surveillance Using a Multi-KINECT Imaging Device“. Thesis, 2014. http://ndltd.ncl.edu.tw/handle/19375687328980222186.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Computer Science and Engineering (ROC academic year 102).
In this study, several methods and algorithms are proposed for 3D human modeling and environment monitoring using KINECT images for 3D video surveillance. An octagonal multi-KINECT imaging device is adopted to monitor the indoor environment; it has a 360-degree view, composed of eight KINECT devices looking outward and one looking downward. With this device used as a 3D video surveillance system, methods are proposed for detecting humans, tracking human activities, and conducting handoff processes between the nine KINECT devices. After collecting human data from the tracking process, a human modeling method is applied; once the human model is complete, human body features are extracted and shown to users. In more detail, firstly a method for 3D image construction using KINECT images proposed by Ma and Tsai [11] is reviewed. A method for calibration between KINECT devices using the DWC measure, based on an evolution strategy, is proposed; it avoids being trapped in local minima and yields a good calibration result. Then, a method using these calibration results and the 3D images converted from KINECT images is proposed to construct an indoor environment model. Furthermore, an algorithm for background learning using RGBD images is proposed, together with a method for human detection based on the background learning scheme and a 3D connected-component labeling technique. Afterwards, a method for human tracking is proposed, which uses the detection results and conducts dynamic tracking after solving the handoff problem between the nine KINECT devices. With the data saved during tracking, human modeling and human body feature extraction are conducted. A method for building the human model using the DWC measure and a randomized k-d tree structure is proposed. Then, a method for finding the bounding box circumscribing the human model is proposed; some useful geometric properties are exploited to find an optimal bounding box. Finally, the bounding box is measured to compute human body features such as height, width, and thickness for use in video surveillance and other applications. Good experimental results are shown, demonstrating the feasibility of the proposed methods for real applications.
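The evolution strategy mentioned for inter-KINECT calibration can be illustrated by its simplest form, a (1+1)-ES with an annealed step size. This sketch is not the thesis's exact algorithm; the cost function, parameter names, and annealing schedule are all assumptions (in the thesis the cost would be something like a negated DWC score over the transformation parameters).

```python
import random

def evolution_strategy(cost, x0, sigma=0.5, decay=0.995, iters=1000, seed=1):
    """A minimal (1+1) evolution strategy: mutate the current best parameter
    vector with Gaussian noise and keep the child only if it lowers the cost.
    Random restarts of the mutation direction at every step are what lets
    this kind of search escape shallow local minima."""
    rng = random.Random(seed)
    best, best_cost = list(x0), cost(x0)
    for _ in range(iters):
        # mutate every parameter with Gaussian noise of the current step size
        child = [x + rng.gauss(0, sigma) for x in best]
        child_cost = cost(child)
        if child_cost < best_cost:
            best, best_cost = child, child_cost
        sigma *= decay              # anneal the step size for fine convergence
    return best, best_cost
```

For calibration, `x0` would hold an initial guess of the rotation and translation between two KINECT views.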
10

Chien, Chi-Liang, und 簡綺良. „Fast Construction of Smooth 3D Whole Human-body Color Models by A Two-level Multi-Kinect System“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/04267756455025924077.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Multimedia Engineering (ROC academic year 104).
Nowadays, 3D printing has become more and more popular: it is a technology for printing various models in suitable materials with a 3D printer. A related technology, 3D scanning, is equally indispensable; it captures the color and depth information of an object with a 3D scanner. In this study, both technologies are used to construct and print human models. As a continuation of the research of Chiu and Tsai [2], their two-level multi-Kinect system, composed of one Kinect version 2 device and eleven Kinect version 1 devices, is used to scan the color and depth data of a human body. The major goal is to refine their system to construct smooth, colored whole-body models for 3D printing. The proposed system operates in two phases, learning and construction. In the learning phase, the human head is first segmented out by a method based on skin-color detection. Then, the merging parameters for the 3D data of three body parts, namely the head, the upper body, and the lower body, are derived by a newly proposed color-filtered distance-weighted correlation (DWC) measure, together with a speedup based on the use of a distance map. In the construction phase, the calibrated merging parameters are applied to the 3D images obtained through vision-based transformations of the color and depth data from the Kinect devices, yielding a set of well-merged 3D whole-body data. To construct a smooth whole-body model, the 3D whole-body data are first split into three parts, the head, the middle body, and the legs, by edge detection and skin-color segmentation. Different modeling parameters are then applied to construct partial models with different degrees of smoothness for the three parts, and composing these partial models yields a smooth whole-body model. To construct a colored human-body model, a coloring method based on the concept of the k nearest neighbors (kNN) is proposed to assign colors to the vertex points of the original monochrome model. Finally, before sending the constructed colored model to the 3D printer, the colors of the model in the CMYK color space are adjusted to make the printed result visually close to the model as originally constructed and seen on screen. Good experimental results are presented, showing the feasibility of the proposed methods and system for real applications of 3D human-body scanning and printing.
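The kNN coloring step the abstract describes, transferring colors from scan points to the vertices of a monochrome mesh, can be sketched as follows. This is an illustrative sketch, not the thesis's code; the averaging rule and all names are assumptions.

```python
def knn_vertex_colors(vertices, scan_points, k=3):
    """Assign a color to each vertex of a monochrome mesh by averaging the
    colors of its k nearest colored scan points. `vertices` is a list of
    (x, y, z) tuples; `scan_points` is a list of ((x, y, z), (r, g, b))
    pairs. Brute-force search is used here; a spatial index such as a k-d
    tree would be used at scale."""
    colors = []
    for v in vertices:
        # the k nearest colored samples by squared Euclidean distance
        nearest = sorted(
            scan_points,
            key=lambda sp: sum((a - b) ** 2 for a, b in zip(v, sp[0])),
        )[:k]
        # average the neighbor colors channel-wise
        colors.append(tuple(
            round(sum(color[ch] for _, color in nearest) / len(nearest))
            for ch in range(3)
        ))
    return colors
```

Averaging over k neighbors rather than copying the single nearest color smooths out sensor noise in the scanned colors at the cost of slightly blurring sharp color boundaries.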

Book chapters on the topic "Multi-Kinect"

1

Jiang, Feng, Shengping Zhang, Shen Wu, Yang Gao and Debin Zhao. „Multi-layered Gesture Recognition with Kinect“. In Gesture Recognition, 387–416. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_13.

2

Støvring, Nikolaj Marimo, Esbern Torgard Kaspersen, Jeppe Milling Korsholm, Yousif Ali Hassan Najim, Soraya Makhlouf, Alireza Khani and Cumhur Erkut. „Multi-kinect Skeleton Fusion for Enactive Games“. In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 173–80. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-55834-9_20.

3

Gottfried, Jens-Malte, Janis Fehr and Christoph S. Garbe. „Computing Range Flow from Multi-modal Kinect Data“. In Advances in Visual Computing, 758–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24028-7_70.

4

Hossain Bari, A. S. M., and Marina L. Gavrilova. „Multi-layer Perceptron Architecture for Kinect-Based Gait Recognition“. In Advances in Computer Graphics, 356–63. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22514-8_31.

5

Sang, Haifeng, and Wei Li. „Gesture Detection and Recognition Fused with Multi-feature Based on Kinect“. In Biometric Recognition, 597–606. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25417-3_70.

6

Chakraborty, Saikat, Rishabh Mishra, Anurag Dwivedi, Tania Das and Anup Nandy. „A Low-Cost Pathological Gait Detection System in Multi-Kinect Environment“. In Springer Proceedings in Physics, 97–104. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6467-3_13.

7

Silveira, Mariana Lyra, Thiago Loureiro Carvalho, Anselmo Frizera Neto and Teodiano Bastos Filho. „A Multi-Kinect System for Serious Game Development Using ROS and Unity“. In XXVI Brazilian Congress on Biomedical Engineering, 585–91. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-2119-1_91.

8

Kurillo, Gregorij, Ferda Ofli, Jennifer Marcoe, Paul Gorman, Holly Jimison, Misha Pavel and Ruzena Bajcsy. „Multi-disciplinary Design and In-Home Evaluation of Kinect-Based Exercise Coaching System for Elderly“. In Lecture Notes in Computer Science, 101–13. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20913-5_10.

9

Tang, Tiffany Y., and Relic Yongfu Wang. „A Comparative Study of Applying Low-Latency Smoothing Filters in a Multi-kinect Virtual Play Environment“. In HCI International 2016 – Posters' Extended Abstracts, 144–48. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-40542-1_23.

10

Galatas, Georgios, Gerasimos Potamianos and Fillia Makedon. „Robust Multi-Modal Speech Recognition in Two Languages Utilizing Video and Distance Information from the Kinect“. In Human-Computer Interaction. Interaction Modalities and Techniques, 43–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39330-3_5.

Conference papers on the topic "Multi-Kinect"

1

Li, Yi. „Multi-scenario gesture recognition using Kinect“. In 2012 17th International Conference on Computer Games: AI, Animation, Mobile, Interactive Multimedia, Educational & Serious Games (CGAMES). IEEE, 2012. http://dx.doi.org/10.1109/cgames.2012.6314563.

2

Faion, Florian, Simon Friedberger, Antonio Zea and Uwe D. Hanebeck. „Intelligent sensor-scheduling for multi-kinect-tracking“. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012). IEEE, 2012. http://dx.doi.org/10.1109/iros.2012.6386007.

3

Zhao, Yue, Yunda Liu, Min Dong and Sheng Bi. „Multi-feature gesture recognition based on Kinect“. In 2016 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER). IEEE, 2016. http://dx.doi.org/10.1109/cyber.2016.7574856.

4

Zha, Yu, and Yijie Fan. „Multi-person gait recognition system based on Kinect“. In 2016 2nd IEEE International Conference on Computer and Communications (ICCC). IEEE, 2016. http://dx.doi.org/10.1109/compcomm.2016.7924722.

5

Radkowski, Rafael. „HoloLens Integration into a Multi-Kinect Tracking Environment“. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2018. http://dx.doi.org/10.1109/ismar-adjunct.2018.00052.

6

Yang, Roy Sirui, Yuk Hin Chan, Rui Gong, Minh Nguyen, Alfonso Gastelum Strozzi, Patrice Delmas, Georgy Gimel'farb and Rachel Ababou. „Multi-Kinect scene reconstruction: Calibration and depth inconsistencies“. In 2013 28th International Conference of Image and Vision Computing New Zealand (IVCNZ). IEEE, 2013. http://dx.doi.org/10.1109/ivcnz.2013.6726991.

7

Li, Saiyi, Pubudu N. Pathirana and Terry Caelli. „Multi-kinect skeleton fusion for physical rehabilitation monitoring“. In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2014. http://dx.doi.org/10.1109/embc.2014.6944762.

8

Zhang, Hanzhen, Xiaojuan He and Yuehu Liu. „A Human Skeleton Data Optimization Algorithm For Multi-Kinect“. In 2020 Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). IEEE, 2020. http://dx.doi.org/10.1109/ipec49694.2020.9115142.

9

He, Xiaojuan, Chengcheng Chen, Hanzhen Zhang and Yuehu Liu. „A Human Gait Sequence Merging Method For Multi-kinect“. In 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). IEEE, 2020. http://dx.doi.org/10.1109/itnec48623.2020.9085205.

10

Han, Guozhu, and Wu Song. „Motion capture of maintenance personnel based on multi-Kinect“. In 2013 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (QR2MSE). IEEE, 2013. http://dx.doi.org/10.1109/qr2mse.2013.6625806.