Table of contents
A selection of scientific literature on the topic "Multi-Kinect"
Cite a source in APA, MLA, Chicago, Harvard, or another citation style
Browse the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Multi-Kinect".
Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online abstract, provided the relevant parameters are present in the metadata.
Journal articles on the topic "Multi-Kinect"
Xinchen Ye, Jingyu Yang, Hao Huang, Chunping Hou, and Yao Wang. "Computational Multi-View Imaging with Kinect". IEEE Transactions on Broadcasting 60, no. 3 (September 2014): 540–54. http://dx.doi.org/10.1109/tbc.2014.2345931.
Rahman, Md Wasiur, Fatema Tuz Zohra, and Marina L. Gavrilova. "Score Level and Rank Level Fusion for Kinect-Based Multi-Modal Biometric System". Journal of Artificial Intelligence and Soft Computing Research 9, no. 3 (July 1, 2019): 167–76. http://dx.doi.org/10.2478/jaiscr-2019-0001.
Abdurrahman, Muhammad Rijal, Tatacipta Dirgantara, Sandro Mihradi, and Andi Isra Mahyuddin. "Validity of Kinect for Assessment of Joint Motion during Gait". Applied Mechanics and Materials 660 (October 2014): 921–26. http://dx.doi.org/10.4028/www.scientific.net/amm.660.921.
Albert, Justin Amadeus, Victor Owolabi, Arnd Gebel, Clemens Markus Brahms, Urs Granacher, and Bert Arnrich. "Evaluation of the Pose Tracking Performance of the Azure Kinect and Kinect v2 for Gait Analysis in Comparison with a Gold Standard: A Pilot Study". Sensors 20, no. 18 (September 8, 2020): 5104. http://dx.doi.org/10.3390/s20185104.
Rafiuzzaman, Mohammad, and Cemil Öz. "Distance Physical Rehabilitation System Framework with Multi-Kinect Motion Captured Data". Communications on Applied Electronics 1, no. 5 (April 25, 2015): 29–39. http://dx.doi.org/10.5120/cae-1558.
Kim, Bonghyun, and Sangyoung Oh. "Design of Multi-Screen Digital Experience Contents System Based on Kinect". Advanced Science Letters 23, no. 3 (March 1, 2017): 1581–84. http://dx.doi.org/10.1166/asl.2017.8638.
Lin, Xizhou, Fei Yuan, and En Cheng. "Kinect depth image enhancement with adaptive joint multi-lateral discrete filters". Journal of Difference Equations and Applications 23, no. 1-2 (September 26, 2016): 350–66. http://dx.doi.org/10.1080/10236198.2016.1235159.
Rausch, Johannes, Andreas Maier, Rebecca Fahrig, Jang-Hwan Choi, Waldo Hinshaw, Frank Schebesch, Sven Haase, Jakob Wasza, Joachim Hornegger, and Christian Riess. "Kinect-Based Correction of Overexposure Artifacts in Knee Imaging with C-Arm CT Systems". International Journal of Biomedical Imaging 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/2502486.
Alimasi, Alimina, Hongchen Liu, and Chengang Lyu. "Low Frequency Vibration Visual Monitoring System Based on Multi-Modal 3DCNN-ConvLSTM". Sensors 20, no. 20 (October 17, 2020): 5872. http://dx.doi.org/10.3390/s20205872.
Seddik, Bassem, Sami Gazzah, and Najoua Essoukri Ben Amara. "Human-action recognition using a multi-layered fusion scheme of Kinect modalities". IET Computer Vision 11, no. 7 (August 18, 2017): 530–40. http://dx.doi.org/10.1049/iet-cvi.2016.0326.
Dissertations on the topic "Multi-Kinect"
Yang, Lin. "3D Sensing and Tracking of Human Gait". Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32540.
Salous, Saleh. "Fusion de données multi-Kinect visant à améliorer l'interaction gestuelle au sein d'une installation de réalité virtuelle". Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080085/document.
Der volle Inhalt der QuelleVirtual Reality is the most modern technology that allows a user to interact with an artificial environment created by Hardware and Software, with visual and aural feedback powerful enough to create the impression of a realistic environment. As a consequence, this form of computer interaction can be used in various contexts such as entertainment, medicine or vehicle driving training. Furthermore, numerous types of VR installations exist depending on the physical and financial constraints as well as on the intended final user experience provided by the system. The subject of this thesis is user interaction in a specific type of VR installation called a CAVE. Our CAVE, named “Le SAS”, currently relies on AR technology technology to detect users, and a joystick is used to provide directional inputs. Our objective is to present, describe and analyze an alternative user-tracking method relying on a 4-Kinect set-up tasked with tracking the user‟s movements inside this CAVE. Proper usertracking is one of the main challenges provided by Virtual Reality as well as one of the core elements that define a proper and functional VR system; therefore it is important to implement an effective tracking system.In order to create true interaction with the virtual world provided by the CAVE, the sensors can detect various types of input. In the case of a multi-Kinect system, interaction with the CAVE will be based on user gestures which recognition is performed by the Kinects on a skeleton created after fusing the joint data from the various sensors.This thesis will focus on four main points, as described below. The first part will provide a context analysis of our immersive CAVE “Le SAS” and define thefeatures as well as the constraints of this specific environment in which the multi-Kinect system is installed.In the second part, the topic of tracking algorithms will be discussed. 
Indeed, the immersive CAVE‟s large-scale implies a tracking system composed of several sensors. The use of a network of cameras to track a user inside the CAVE is synonymous with the use of an algorithm that determines in real-time what sensors provide the most accurate tracking data and will therefore properly recognize the user‟s inputs and movements.Subsequently, we will propose a gesture detection algorithm. Once the user‟s gestures are properly tracked, such an algorithm is necessary in order to provide interaction. While the Kinects can capture the user‟s movements, the question of the detection of specific gestures by this system comes into play as the CAVE needs to be configured as to recognize specific gestures as potential inputs. The presented algorithm will focus on three specific gestures: Raising the right hand, raising the left hand and short hopping. Lastly, we will provide experimental results comparing the effectiveness of a multi-Kinect set-up with the effectiveness of a single sensor and present data showing a noticeable increase in accuracy with the 4-Kinect system
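The thesis does not publish its fusion code, but the core idea of merging per-sensor skeleton joints into a single skeleton can be sketched as a confidence-weighted average; the function name and weighting scheme below are illustrative assumptions, not taken from the thesis:

```python
def fuse_joints(observations):
    """Fuse one joint's 3D positions reported by several Kinects.

    observations: list of (position, confidence) pairs, where position
    is an (x, y, z) tuple and confidence lies in [0, 1].
    Returns the confidence-weighted mean position.
    """
    total = sum(c for _, c in observations)
    if total == 0:
        raise ValueError("no confident observation for this joint")
    return tuple(
        sum(p[i] * c for p, c in observations) / total
        for i in range(3)
    )

# Two sensors see the right hand; the unoccluded sensor reports
# higher confidence and therefore dominates the fused estimate.
fused = fuse_joints([((0.0, 1.0, 2.0), 0.8), ((0.2, 1.0, 2.0), 0.2)])
```

Running this per joint over all tracked joints yields the single fused skeleton on which gesture recognition then operates.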
Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces". Thesis, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.
Almeida, Caio Sacramento de Britto. "A multi-view environment for markerless augmented reality". Instituto de Matemática, Departamento de Ciência da Computação, 2014. http://repositorio.ufba.br/ri/handle/ri/19287.
Der volle Inhalt der QuelleMade available in DSpace on 2016-05-25T16:33:11Z (GMT). No. of bitstreams: 1 Caio Sacramento - Versão final.pdf: 44848885 bytes, checksum: 5e9d6d2dbd205475b3fd3d6642804fbc (MD5)
Augmented reality is a technology that allows 2D and 3D computer graphics to be aligned, or registered, with real-world scenes in real time. This projection of virtual images requires a reference in the captured real image, which is usually obtained through the use of one or more markers. There are, however, situations in which markers may not be appropriate, such as medical applications. In this work, a multi-camera environment is presented, composed of augmented-reality glasses and two Kinect devices, which does not use fiducial markers to run augmented-reality applications. All devices are calibrated with respect to a common reference frame, and the virtual models are then transformed accordingly. For this, two approaches were specified and implemented: the first based on one Kinect plus optical-flow and accelerometer data from the augmented-reality glasses, and the other based only on two Kinect devices. The results regarding the quality and performance obtained by these two approaches are presented and discussed, along with a comparison between them and all the related issues that were encountered and addressed in this work.
Holmquist, Karl. "SLAMIt, A Sub-Map Based SLAM System: On-line creation of multi-leveled map". Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133974.
Der volle Inhalt der QuelleI många situationer efter en stor katastrof, såsom den i Fukushima, är området ytterst farligt för människor att vistas. Det är i sådana miljöer som semi-autonomarobotar kan begränsa risken för människor genom att utforska och kartlägga området på egen hand. Det här exjobbet fokuserar på att designa och implementera ett mjukvarubaserat SLAM system med real-tids potential användandes en Kinect 2 sensor. Exjobbet har fokuserat på att skapa ett system som tillåter effektiv lagring och representering av kartan för att tillåta utforskning utav stora områden. Det görs genom att separera kartan i olika abstraktionsnivåer, vilka korresponderar mot lokala kartor sammankopplade med en global karta. Strukturen av system har tagit hänsyn till under utvecklingen för att tillåta modularitet. Vilket gör det möjligt att byta ut komponenter i systemet. Det här exjobbet är brett i det avseende att det använder tekniker från flera olika områden för att lösa de sub-problem som finns. Några exempel är objektdetektion och klassificering, punkt-molnsregistrering och effektiva 3D-baserade okupationsträd.
Después de grandes catástrofes, cómo la reciente en Fukushima, está demasiado peligroso para permitir humanes a entrar. En estás situaciones estaría más preferible entrar con un robot semi-automático que puede explorar, crear un mapa de la ambiente y encontrar los riesgos que hay. Está obra intente de diseñar e implementar un sistema SLAM, con la potencial de crear está mapa en tiempo real, utilizando una camera Kinect 2. En el centro de la tesis está el diseño de una mapa que será eficiente alojar y manejar, para ser utilizado explorando áreas grandes. Se logra esto por la manera de la separación del mapa en distintas niveles de abstracción qué corresponde a mapas métricos locales y una mapa topológica que conecta estas. La estructura del sistema ha sido considerado para permitir utilizar varios tipos de sensores, además que permitir cambiar ciertas partes de la sistema. Esté tesis cobra distintas áreas cómo lo de detección de objetos, estimación de la posición del sistema, registrar nubes de puntos y alojamiento de 3D-mapas.
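The layered map described in this abstract, local metric sub-maps stitched together by a topological global map, can be illustrated with a minimal data structure. This is a sketch under the assumption of sparse occupancy sets; the actual system uses 3D occupancy octrees and full 6-DoF poses, and all names here are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SubMap:
    """A local metric map: occupied cells stored sparsely as a set."""
    origin: tuple                       # pose of this sub-map in the global frame
    occupied: set = field(default_factory=set)

    def mark(self, cell):
        """Mark an integer voxel cell (x, y, z) as occupied."""
        self.occupied.add(cell)

class GlobalMap:
    """Topological layer: sub-maps as nodes, traversable links as edges."""
    def __init__(self):
        self.submaps = []
        self.links = []                 # pairs of sub-map indices

    def add_submap(self, origin):
        self.submaps.append(SubMap(origin))
        return len(self.submaps) - 1

    def connect(self, i, j):
        self.links.append((i, j))

# Two rooms five meters apart, joined by a doorway link.
world = GlobalMap()
a = world.add_submap((0.0, 0.0, 0.0))
b = world.add_submap((5.0, 0.0, 0.0))
world.connect(a, b)
world.submaps[a].mark((1, 2, 0))
```

The point of the split is that each sub-map stays small enough to update in real time, while the global graph only grows by one node per new region.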
Lu, Yu-Chen (呂昱辰). "AR Painting: Multi Kinect Reconstruction and 3D Painting". Master's thesis, Department of Information and Communication Engineering, Ming Chuan University, 2017. http://ndltd.ncl.edu.tw/handle/30833531016720675740.
Many people have had the experience of graffiti; however, not every real object can be drawn on. So we designed a system to draw graffiti not only on paper or walls but on any object. Our system can paint on a virtual object reconstructed from KinectV2 cameras, and the result is displayed on the real object in real time by projection mapping. The system consists of three parts: model reconstruction, 3D painting, and projection mapping. First, the 3D coordinates of chessboard corners are used to compute the transformation matrices of multiple KinectV2 cameras; the point-cloud data are aligned by these matrices during model reconstruction. Second, the user can hold controllers to paint the model in virtual reality. Because the number of points in the point-cloud data is enormous, we use a KD-tree to speed up the interaction. Finally, we construct a corresponding texture for the model and display it on the object by projection mapping. In conclusion, we integrate KinectV2 cameras, an HTC Vive, and a projector into our graffiti system, so that the user can paint an object with virtual-reality devices while other audience members see the result through projection mapping.
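The chessboard-based alignment step amounts to estimating a rigid transform between corresponding 3D corner points seen by two cameras. The sketch below uses the standard Kabsch/Procrustes SVD solution; it is a generic illustration of that step, not code from the thesis:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. chessboard
    corners measured by two Kinects. Kabsch algorithm via SVD.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Recover a known 90-degree rotation about z plus a 0.5 m shift
# from four synthetic corner correspondences.
pts = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 0]])
Rz = np.array([[0.0, -1.0, 0], [1.0, 0.0, 0], [0, 0, 1.0]])
moved = pts @ Rz.T + np.array([0.5, 0.0, 0.0])
R, t = rigid_transform(pts, moved)
```

Once `R` and `t` are known per camera pair, every captured point cloud can be mapped into one shared coordinate frame before painting.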
Huang, Kuan-Chih (黃冠智). "3D Video Surveillance by an Octal-shaped Multi-Kinect Imaging Device". Master's thesis, Institute of Multimedia Engineering, National Chiao Tung University, 2016. http://ndltd.ncl.edu.tw/handle/30020673456274369347.
In this study, a system for 3D video surveillance by an octal-shaped multi-Kinect imaging device is proposed. Affixed to the ceiling of a building to monitor the indoor environment below, the device has a 360° view of the environment and is composed of eight Kinect devices looking outward and one looking downward. The system includes functions for 3D human-image construction, 3D environment-image construction, and the monitoring and display of human activities in the environment. To implement these functions, several methods and strategies are proposed. Firstly, a method for 3D image construction using Kinect images is adopted. Then, a method for calibrating the relative transformation between every two consecutive Kinect images using a new distance measure, the color-filtered distance-weighted correlation (DWC), is proposed for constructing a complete 3D human image. Furthermore, to handle the blurring that appears in a completed 3D human image, a strategy for selecting typical image frames with the largest or most suitable human body parts is proposed, together with a method for splitting human limbs in the images based on skin-color filtering and Canny edge detection. Secondly, an environment-construction method based on the color-filtered DWC measure is proposed, which merges the background images acquired by the device into a 360° indoor environment image. A method for viewing the complete 3D human image in the environment image from different perspectives is also proposed, which uses human-body detection and regression analysis to decide the three axes of the human in the 3D image and displays the complete 3D human image accordingly. Good experimental results are shown, which prove the feasibility of the proposed system and methods for real applications.
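The abstract names a color-filtered distance-weighted correlation but does not define it. As a rough illustration of the family of measures involved, one can score how well two colored point clouds align by weighting nearest-neighbor agreement with a Gaussian of the 3D distance, after filtering out color-incompatible pairs. The formula and names below are a generic stand-in, not the thesis's definition:

```python
import math

def alignment_score(cloud_a, cloud_b, color_tol=30, sigma=0.05):
    """Generic distance-weighted score between two colored point clouds.

    Each cloud is a list of ((x, y, z), (r, g, b)) samples. For every point
    in cloud_a, find the nearest color-compatible point in cloud_b and
    accumulate a Gaussian weight of the 3D distance. Higher = better aligned.
    """
    score = 0.0
    for pa, ca in cloud_a:
        best = None
        for pb, cb in cloud_b:
            if max(abs(x - y) for x, y in zip(ca, cb)) > color_tol:
                continue                  # color filter: skip mismatched colors
            d = math.dist(pa, pb)
            best = d if best is None else min(best, d)
        if best is not None:
            score += math.exp(-(best / sigma) ** 2)
    return score / len(cloud_a)

a = [((0.0, 0.0, 0.0), (200, 10, 10)), ((1.0, 0.0, 0.0), (10, 200, 10))]
b_good = [((0.0, 0.0, 0.01), (205, 12, 8)), ((1.0, 0.01, 0.0), (12, 198, 11))]
b_bad = [((0.5, 0.5, 0.5), (205, 12, 8)), ((1.5, 0.5, 0.0), (12, 198, 11))]
```

A calibration search would then vary a candidate transform applied to `cloud_a` and keep the transform that maximizes this score.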
Ke, Chia-Yu (柯佳佑). "Multi-Human Occlusion Handling and Tracking Using Particle Filter based on Kinect". Master's thesis, Institute of Electrical Control Engineering, National Chiao Tung University, 2011. http://ndltd.ncl.edu.tw/handle/20078774177112122440.
This paper presents a novel human detection and tracking system using a particle-filter approach based on Kinect sensors. The human-detection module extracts the human position by integrating foreground extraction and the depth image, and human bodies are identified according to features scaled with depth information. The contribution of this work is a real-time particle-filtering approach for tracking multiple humans with parameters drawn from several features, including position, color, and depth information. Furthermore, unlike most tracking systems, it handles occlusion through the cooperation of depth and tracking information, and scales its models according to different human postures and depth positions. Experimental results show that the proposed system achieves good results on challenging video; even in fully dark scenes and under changing lighting, the presented algorithm still performs well.
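The paper's tracker combines position, color, and depth cues; stripped down to a single depth coordinate, the predict-weight-resample loop of a bootstrap particle filter looks like the toy sketch below. This is an illustration of the general technique, not the authors' implementation, and the noise parameters are invented:

```python
import math
import random

def particle_filter_step(particles, measurement, motion_noise=0.05, meas_noise=0.1):
    """One predict-weight-resample cycle of a bootstrap particle filter.

    particles: list of scalar states (e.g. a person's depth in meters).
    measurement: the new noisy depth reading for that person.
    """
    # Predict: diffuse particles with random motion.
    predicted = [p + random.gauss(0, motion_noise) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-((p - measurement) / meas_noise) ** 2) for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles proportionally to their weights.
    return random.choices(predicted, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 5.0) for _ in range(500)]
for z in [2.0, 2.1, 2.2]:              # person walking away from the camera
    particles = particle_filter_step(particles, z)
estimate = sum(particles) / len(particles)
```

A multi-cue tracker like the paper's would multiply per-cue likelihoods (color histogram similarity, depth consistency) into the weight instead of using a single Gaussian term.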
Lin, Huan-Po (林煥博). "Human Modeling and Tracking for 3D Video Surveillance Using a Multi-KINECT Imaging Device". Master's thesis, Institute of Computer Science and Engineering, National Chiao Tung University, 2014. http://ndltd.ncl.edu.tw/handle/19375687328980222186.
In this study, several methods and algorithms are proposed for 3D human modeling and environment monitoring with KINECT images for 3D video surveillance. An octagonal multi-KINECT imaging device is adopted to monitor the indoor environment; it has a 360-degree view and is composed of eight KINECT devices looking outward and one looking downward. With this device used as a 3D video surveillance system, methods are proposed for detecting humans, tracking human activities, and conducting handoff processes among the nine KINECT devices. After human data are collected by the tracking process, a human-modeling method is applied; once the model is complete, human body features are extracted and shown to users. In more detail, first a method for 3D image construction using KINECT images proposed by Ma and Tsai [11] is reviewed. A method for calibration between KINECT devices using the DWC measure and an evolution strategy is proposed, which avoids being trapped in local minima and yields a good calibration result. Then, using the calibration results and the 3D images converted from KINECT images, an indoor environment model is constructed. Furthermore, an algorithm for background learning using RGBD images is proposed, along with a method for human detection based on the background-learning scheme and a 3D connected-component labeling technique. Afterwards, a human-tracking method is proposed, which uses the detection results and conducts dynamic tracking after solving the handoff problem among the nine KINECT devices. With the data saved during tracking, human modeling and body-feature extraction are conducted: a method for building the human model using the DWC measure and a randomized K-d tree structure is proposed, followed by a method for finding the bounding box circumscribing the model, which exploits several useful geometric properties to obtain an optimal box. Finally, the bounding box is measured to compute body features such as height, width, and thickness for use in video surveillance and other applications. Good experimental results are shown, which prove the feasibility of the proposed methods for real applications.
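The human-detection step described in this abstract relies on 3D connected-component labeling of foreground voxels. A bare-bones flood-fill version over a sparse voxel set (an illustration of the standard technique, not the thesis's code) looks like this:

```python
from collections import deque

def label_components(voxels):
    """Group occupied voxels into 26-connected 3D components.

    voxels: set of (x, y, z) integer cells marked as foreground.
    Returns a list of components, each a set of voxels.
    """
    neighbors = [(dx, dy, dz)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                 if (dx, dy, dz) != (0, 0, 0)]
    unvisited, components = set(voxels), []
    while unvisited:
        seed = unvisited.pop()
        queue, comp = deque([seed]), {seed}
        while queue:                      # breadth-first flood fill
            x, y, z = queue.popleft()
            for dx, dy, dz in neighbors:
                n = (x + dx, y + dy, z + dz)
                if n in unvisited:
                    unvisited.remove(n)
                    comp.add(n)
                    queue.append(n)
        components.append(comp)
    return components

# Two spatially separated blobs should yield two components.
blobs = {(0, 0, 0), (0, 1, 0), (1, 1, 0)} | {(10, 0, 0), (10, 0, 1)}
parts = label_components(blobs)
```

In a detection pipeline, each sufficiently large component would then be treated as one human candidate.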
Chien, Chi-Liang (簡綺良). "Fast Construction of Smooth 3D Whole Human-body Color Models by A Two-level Multi-Kinect System". Master's thesis, Institute of Multimedia Engineering, National Chiao Tung University, 2016. http://ndltd.ncl.edu.tw/handle/04267756455025924077.
Nowadays, 3D printing has become more and more popular: a 3D printer can print various models in suitable materials, and the related technology of 3D scanning, equally indispensable, captures the color and depth information of an object. In this study, both technologies are used to construct and print human models. As a continuation of the research of Chiu and Tsai [2], their two-level multi-Kinect system, composed of one Kinect version 2 device and eleven Kinect version 1 devices, is used to scan the color and depth data of a human body. The major goal is to refine their system to construct smooth, colored whole-body models for 3D printing. The proposed system operates in two phases, learning and construction. In the learning phase, the human head is first segmented out by a method based on skin-color detection. Then the merging parameters for the 3D data of three body parts, namely the head, the upper body, and the lower body, are derived by a newly proposed color-filtered distance-weighted correlation (DWC) measure together with a speedup based on a distance map. In the model-construction phase, the calibrated merging parameters are applied to the 3D images obtained through vision-based transformations of the color and depth data from the Kinect devices, yielding a set of well-merged 3D whole-body data. To construct a smooth whole-body model, the data are split into three parts, the head, the middle body, and the legs, by edge-detection and skin-color segmentation techniques; different modeling parameters are then applied to construct partial models with different degrees of smoothness, and composing these partial models yields a smooth whole-body model.
To construct a colored human-body model, a coloring method based on the concept of k nearest neighbors (kNN) is proposed to assign colors to the vertex points of the original monochrome model. Finally, before the colored model is sent to the 3D printer, its colors in the CMYK color space are adjusted so that the printed result is visually close to the model as constructed and seen on screen. Good experimental results are presented to show the feasibility of the proposed methods and system for real applications of 3D human-body scanning and printing.
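The kNN coloring idea, assigning each monochrome mesh vertex the average color of its k nearest scanned points, can be sketched with a brute-force neighbor search. A real system would use a spatial index such as a K-d tree, and all names here are illustrative:

```python
import math

def knn_color(vertex, colored_points, k=3):
    """Color a mesh vertex from its k nearest colored scan points.

    colored_points: list of ((x, y, z), (r, g, b)) samples.
    Returns the rounded average color of the k nearest points.
    """
    nearest = sorted(colored_points, key=lambda pc: math.dist(vertex, pc[0]))[:k]
    return tuple(
        round(sum(c[i] for _, c in nearest) / len(nearest))
        for i in range(3)
    )

# Three nearby scan points in red, green, and blue dominate; the far
# white point is ignored for k=3.
points = [((0, 0, 0), (255, 0, 0)),
          ((1, 0, 0), (0, 255, 0)),
          ((0, 1, 0), (0, 0, 255)),
          ((9, 9, 9), (255, 255, 255))]
color = knn_color((0.2, 0.2, 0.0), points, k=3)
```

Averaging over k neighbors also smooths out per-point sensor noise in the captured colors, which matters for print quality.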
Book chapters on the topic "Multi-Kinect"
Jiang, Feng, Shengping Zhang, Shen Wu, Yang Gao, and Debin Zhao. "Multi-layered Gesture Recognition with Kinect". In Gesture Recognition, 387–416. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_13.
Støvring, Nikolaj Marimo, Esbern Torgard Kaspersen, Jeppe Milling Korsholm, Yousif Ali Hassan Najim, Soraya Makhlouf, Alireza Khani, and Cumhur Erkut. "Multi-kinect Skeleton Fusion for Enactive Games". In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 173–80. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-55834-9_20.
Gottfried, Jens-Malte, Janis Fehr, and Christoph S. Garbe. "Computing Range Flow from Multi-modal Kinect Data". In Advances in Visual Computing, 758–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24028-7_70.
Hossain Bari, A. S. M., and Marina L. Gavrilova. "Multi-layer Perceptron Architecture for Kinect-Based Gait Recognition". In Advances in Computer Graphics, 356–63. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22514-8_31.
Sang, Haifeng, and Wei Li. "Gesture Detection and Recognition Fused with Multi-feature Based on Kinect". In Biometric Recognition, 597–606. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25417-3_70.
Chakraborty, Saikat, Rishabh Mishra, Anurag Dwivedi, Tania Das, and Anup Nandy. "A Low-Cost Pathological Gait Detection System in Multi-Kinect Environment". In Springer Proceedings in Physics, 97–104. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6467-3_13.
Silveira, Mariana Lyra, Thiago Loureiro Carvalho, Anselmo Frizera Neto, and Teodiano Bastos Filho. "A Multi-Kinect System for Serious Game Development Using ROS and Unity". In XXVI Brazilian Congress on Biomedical Engineering, 585–91. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-2119-1_91.
Kurillo, Gregorij, Ferda Ofli, Jennifer Marcoe, Paul Gorman, Holly Jimison, Misha Pavel, and Ruzena Bajcsy. "Multi-disciplinary Design and In-Home Evaluation of Kinect-Based Exercise Coaching System for Elderly". In Lecture Notes in Computer Science, 101–13. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20913-5_10.
Tang, Tiffany Y., and Relic Yongfu Wang. "A Comparative Study of Applying Low-Latency Smoothing Filters in a Multi-kinect Virtual Play Environment". In HCI International 2016 – Posters' Extended Abstracts, 144–48. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-40542-1_23.
Galatas, Georgios, Gerasimos Potamianos, and Fillia Makedon. "Robust Multi-Modal Speech Recognition in Two Languages Utilizing Video and Distance Information from the Kinect". In Human-Computer Interaction. Interaction Modalities and Techniques, 43–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39330-3_5.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Multi-Kinect"
Li, Yi. "Multi-scenario gesture recognition using Kinect". In 2012 17th International Conference on Computer Games: AI, Animation, Mobile, Interactive Multimedia, Educational & Serious Games (CGAMES). IEEE, 2012. http://dx.doi.org/10.1109/cgames.2012.6314563.
Faion, Florian, Simon Friedberger, Antonio Zea, and Uwe D. Hanebeck. "Intelligent sensor-scheduling for multi-kinect-tracking". In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012). IEEE, 2012. http://dx.doi.org/10.1109/iros.2012.6386007.
Zhao, Yue, Yunda Liu, Min Dong, and Sheng Bi. "Multi-feature gesture recognition based on Kinect". In 2016 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER). IEEE, 2016. http://dx.doi.org/10.1109/cyber.2016.7574856.
Zha, Yu, and Yijie Fan. "Multi-person gait recognition system based on Kinect". In 2016 2nd IEEE International Conference on Computer and Communications (ICCC). IEEE, 2016. http://dx.doi.org/10.1109/compcomm.2016.7924722.
Radkowski, Rafael. "HoloLens Integration into a Multi-Kinect Tracking Environment". In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2018. http://dx.doi.org/10.1109/ismar-adjunct.2018.00052.
Yang, Roy Sirui, Yuk Hin Chan, Rui Gong, Minh Nguyen, Alfonso Gastelum Strozzi, Patrice Delmas, Georgy Gimel'farb, and Rachel Ababou. "Multi-Kinect scene reconstruction: Calibration and depth inconsistencies". In 2013 28th International Conference of Image and Vision Computing New Zealand (IVCNZ). IEEE, 2013. http://dx.doi.org/10.1109/ivcnz.2013.6726991.
Li, Saiyi, Pubudu N. Pathirana, and Terry Caelli. "Multi-kinect skeleton fusion for physical rehabilitation monitoring". In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2014. http://dx.doi.org/10.1109/embc.2014.6944762.
Zhang, Hanzhen, Xiaojuan He, and Yuehu Liu. "A Human Skeleton Data Optimization Algorithm for Multi-Kinect". In 2020 Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). IEEE, 2020. http://dx.doi.org/10.1109/ipec49694.2020.9115142.
He, Xiaojuan, Chengcheng Chen, Hanzhen Zhang, and Yuehu Liu. "A Human Gait Sequence Merging Method for Multi-kinect". In 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). IEEE, 2020. http://dx.doi.org/10.1109/itnec48623.2020.9085205.
Han, Guozhu, and Wu Song. "Motion capture of maintenance personnel based on multi-Kinect". In 2013 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (QR2MSE). IEEE, 2013. http://dx.doi.org/10.1109/qr2mse.2013.6625806.