
Journal articles on the topic "3D video system"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Check the 50 best scholarly journal articles on the topic "3D video system".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read the annotation of the work online, if the relevant details are available in the metadata.

Browse journal articles from a wide variety of disciplines and create appropriate bibliographies.

1

Cha, Jongeun, Mohamad Eid, and Abdulmotaleb El Saddik. "Touchable 3D video system". ACM Transactions on Multimedia Computing, Communications, and Applications 5, no. 4 (October 2009): 1–25. http://dx.doi.org/10.1145/1596990.1596993.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
2

Okui, Makoto. "6-2. 3D Video System". Journal of The Institute of Image Information and Television Engineers 65, no. 9 (2011): 1282–86. http://dx.doi.org/10.3169/itej.65.1282.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
3

Aggoun, Amar, Emmanuel Tsekleves, Mohammad Rafiq Swash, Dimitrios Zarpalas, Anastasios Dimou, Petros Daras, Paulo Nunes, and Luis Ducla Soares. "Immersive 3D Holoscopic Video System". IEEE MultiMedia 20, no. 1 (January 2013): 28–37. http://dx.doi.org/10.1109/mmul.2012.42.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
4

Mohammed, Dhrgham Hani, and Laith Ali Abdul-Rahaim. "A Proposed of Multimedia Compression System Using Three - Dimensional Transformation". Webology 18, SI05 (30.10.2021): 816–31. http://dx.doi.org/10.14704/web/v18si05/web18264.

Full text of source
Abstract:
Video compression has become especially important nowadays with the increase of data transmitted over transmission channels; the size of videos must be reduced without affecting their quality. This process is done by cutting the video stream into frames of specific lengths and converting them into a three-dimensional matrix. The proposed compression scheme uses the traditional red-green-blue color space representation and applies a three-dimensional discrete Fourier transform (3D-DFT) or three-dimensional discrete wavelet transform (3D-DWT) to the signal matrix after converting the video stream to three-dimensional matrices. The resulting transform coefficients are encoded using the EZW encoder algorithm. The performance of the proposed video compression system is tested against three main criteria: compression ratio (CR), peak signal-to-noise ratio (PSNR), and processing time (PT). Experiments showed high compression efficiency for videos using the proposed technique at the required bit rate, the best bit rate for traditional video compression. The 3D discrete wavelet transform offers a high frame rate with natural spatial resolution and scalability through visual and spatial resolution, besides quality and other advantages over current conventional systems in complexity, low power, high throughput, low latency, and minimal storage requirements. All proposed systems are implemented using MATLAB R2020b.
APA, Harvard, Vancouver, ISO, etc. styles
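The pipeline sketched in this abstract (stack frames into a 3D array, transform, keep only significant coefficients, measure CR and PSNR) can be illustrated with a toy example. The sketch below is only a rough stand-in for the paper's method: it uses a plain 3D FFT with hard thresholding instead of the authors' 3D-DWT/EZW coder, synthetic frames instead of real video, and an assumed keep_ratio parameter.

import numpy as np

def compress_3d(frames, keep_ratio=0.05):
    """Toy 3D-transform coder: keep only the largest-magnitude FFT coefficients."""
    coeffs = np.fft.fftn(frames)                            # 3D transform over (t, y, x)
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep_ratio)
    sparse = np.where(np.abs(coeffs) >= threshold, coeffs, 0)
    recon = np.real(np.fft.ifftn(sparse))                   # inverse transform
    return recon, np.count_nonzero(sparse)

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original - reconstructed) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Smooth synthetic 16-frame grayscale "video" with values in 0..255.
t = np.arange(16)[:, None, None]
y = np.arange(64)[None, :, None]
x = np.arange(64)[None, None, :]
video = 127.5 * (1 + np.sin(0.05 * x + 0.08 * y + 0.3 * t))

recon, nonzero = compress_3d(video, keep_ratio=0.05)
cr = video.size / nonzero                                   # crude compression-ratio proxy
print(f"CR ~ {cr:.1f}:1, PSNR = {psnr(video, recon):.2f} dB")

Counting surviving coefficients is only a rough proxy for the bitstream an EZW coder would actually emit, but it makes the CR/PSNR trade-off named in the abstract tangible.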
5

Kim, Han-Kil, Sang-Woong Joo, Hun-Hee Kim, and Hoe-Kyung Jung. "3D Video Simulation System Using GPS". Journal of the Korea Institute of Information and Communication Engineering 18, no. 4 (30.04.2014): 855–60. http://dx.doi.org/10.6109/jkiice.2014.18.4.855.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
6

Hasan, Md Mehedi, Md Ariful Islam, Sejuti Rahman, Michael R. Frater, and John F. Arnold. "No-Reference Quality Assessment of Transmitted Stereoscopic Videos Based on Human Visual System". Applied Sciences 12, no. 19 (7.10.2022): 10090. http://dx.doi.org/10.3390/app121910090.

Full text of source
Abstract:
Provisioning the stereoscopic 3D (S3D) video transmission services of admissible quality in a wireless environment is an immense challenge for video service providers. Unlike for 2D videos, a widely accepted No-reference objective model for assessing transmitted 3D videos that explores the Human Visual System (HVS) appropriately has not been developed yet. Distortions perceived in 2D and 3D videos are significantly different due to the sophisticated manner in which the HVS handles the dissimilarities between the two different views. In real-time video transmission, viewers only have the distorted or receiver end content of the original video acquired through the communication medium. In this paper, we propose a No-reference quality assessment method that can estimate the quality of a stereoscopic 3D video based on HVS. By evaluating perceptual aspects and correlations of visual binocular impacts in a stereoscopic movie, the approach creates a way for the objective quality measure to assess impairments similarly to a human observer who would experience the similar material. Firstly, the disparity is measured and quantified by the region-based similarity matching algorithm, and then, the magnitude of the edge difference is calculated to delimit the visually perceptible areas of an image. Finally, an objective metric is approximated by extracting these significant perceptual image features. Experimental analysis with standard S3D video datasets demonstrates the lower computational complexity for the video decoder and comparison with the state-of-the-art algorithms shows the efficiency of the proposed approach for 3D video transmission at different quantization (QP 26 and QP 32) and loss rate (1% and 3% packet loss) parameters along with the perceptual distortion features.
APA, Harvard, Vancouver, ISO, etc. styles
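Two of the building blocks mentioned in the abstract above, a region-based disparity estimate and an edge-difference magnitude, can be approximated with generic image-processing code. The sketch below is a simplified stand-in, not the authors' algorithm: it uses brute-force SAD block matching and Sobel edge maps on synthetic views, and the block size and disparity range are assumed values.

import numpy as np
from scipy.ndimage import sobel

def block_disparity(left, right, block=8, max_disp=16):
    """Brute-force block matching: per-block horizontal disparity using a SAD criterion."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = left[y0:y0 + block, x0:x0 + block]
            best_sad, best_d = np.inf, 0
            for d in range(min(max_disp, x0) + 1):          # search only valid shifts
                cand = right[y0:y0 + block, x0 - d:x0 - d + block]
                sad = np.abs(ref - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by, bx] = best_d
    return disp

def edge_difference(left, right):
    """Magnitude of the difference between the edge maps of the two views."""
    edges_l = np.hypot(sobel(left, axis=0), sobel(left, axis=1))
    edges_r = np.hypot(sobel(right, axis=0), sobel(right, axis=1))
    return np.abs(edges_l - edges_r)

left = np.random.rand(64, 64)
right = np.roll(left, -4, axis=1)                           # right view shifted by 4 pixels
print(block_disparity(left, right).mean())                  # close to 4
print(edge_difference(left, right).mean())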
7

Zhang, Yizhong, Jiaolong Yang, Zhen Liu, Ruicheng Wang, Guojun Chen, Xin Tong, and Baining Guo. "VirtualCube: An Immersive 3D Video Communication System". IEEE Transactions on Visualization and Computer Graphics 28, no. 5 (May 2022): 2146–56. http://dx.doi.org/10.1109/tvcg.2022.3150512.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
8

Zhang, Yingchun, Jianbo Huang, and Siwen Duan. "3D video conversion system based on depth information extraction". MATEC Web of Conferences 232 (2018): 02048. http://dx.doi.org/10.1051/matecconf/201823202048.

Full text of source
Abstract:
3D movies have received more and more attention in recent years. However, the investment required to make 3D movies is high, which restricts their development. There are also many existing 2D movie resources, and how to convert them into 3D movies is another problem. Therefore, this paper proposes a 3D video conversion system based on depth information extraction. The system consists of four parts: segmentation of movie video frame sequences, extraction of frame image depth information, generation of virtual multi-viewpoint views, and synthesis of 3D video. The system can effectively extract the depth information of a movie and use it to convert a 2D movie into a 3D movie.
APA, Harvard, Vancouver, ISO, etc. styles
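The "generation of virtual multi-viewpoint views" step is commonly realized with depth-image-based rendering (DIBR): each pixel of the 2D frame is shifted horizontally in proportion to its depth to synthesize a second viewpoint. The following is a minimal, assumption-laden sketch of that idea (naive forward warping with crude hole filling), not the system described in the paper.

import numpy as np

def render_virtual_view(frame, depth, max_shift=8):
    """Naive DIBR: shift each pixel horizontally in proportion to its nearness."""
    h, w = depth.shape
    nearness = 1.0 - depth / depth.max()                    # nearer pixels shift more
    shift = np.round(nearness * max_shift).astype(int)
    view = np.zeros_like(frame)
    filled = np.zeros((h, w), dtype=bool)
    for yy in range(h):
        for xx in range(w):
            xs = xx + shift[yy, xx]
            if 0 <= xs < w:
                view[yy, xs] = frame[yy, xx]
                filled[yy, xs] = True
    for yy in range(h):                                     # crude hole filling from the left
        for xx in range(1, w):
            if not filled[yy, xx]:
                view[yy, xx] = view[yy, xx - 1]
    return view

frame = np.random.rand(48, 64, 3)                           # synthetic RGB frame
depth = np.tile(np.linspace(1.0, 10.0, 64), (48, 1))        # depth grows left to right
right_eye_view = render_virtual_view(frame, depth)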
9

XIAO, JIANGJIAN, HUI CHENG, FENG HAN, and HARPREET SAWHNEY. "GEO-BASED AERIAL SURVEILLANCE VIDEO PROCESSING FOR SCENE UNDERSTANDING AND OBJECT TRACKING". International Journal of Pattern Recognition and Artificial Intelligence 23, no. 07 (November 2009): 1285–307. http://dx.doi.org/10.1142/s0218001409007582.

Full text of source
Abstract:
This paper presents an approach to extract semantic layers from aerial surveillance videos for scene understanding and object tracking. The input videos are captured by low flying aerial platforms and typically consist of strong parallax from non-ground-plane structures as well as moving objects. Our approach leverages the geo-registration between video frames and reference images (such as those available from Terraserver and Google satellite imagery) to establish a unique geo-spatial coordinate system for pixels in the video. The geo-registration process enables Euclidean 3D reconstruction with absolute scale unlike traditional monocular structure from motion where continuous scale estimation over long periods of time is an issue. Geo-registration also enables correlation of video data to other stored information sources such as GIS (Geo-spatial Information System) databases. In addition to the geo-registration and 3D reconstruction aspects, the other key contributions of this paper also include: (1) providing a reliable geo-based solution to estimate camera pose for 3D reconstruction, (2) exploiting appearance and 3D shape constraints derived from geo-registered videos for labeling of structures such as buildings, foliage, and roads for scene understanding, and (3) elimination of moving object detection and tracking errors using 3D parallax constraints and semantic labels derived from geo-registered videos. Experimental results on extended time aerial video data demonstrates the qualitative and quantitative aspects of our work.
APA, Harvard, Vancouver, ISO, etc. styles
10

Yan, Ming, Xin Gang Wang, and Jun Feng Li. "3D Video Transmission System for China Mobile Multimedia Broadcasting". Applied Mechanics and Materials 519-520 (February 2014): 469–72. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.469.

Full text of source
Abstract:
With the popularization of three-dimensional (3D) video on TV, related aspects such as broadcast television network transmission, technology application, and content products will face the challenges of the new technology. Rapid progress has been achieved in China Mobile Multimedia Broadcasting (CMMB) technology, and 3D TV applied to mobile TV will become a bright spot in the future. This paper first introduces the transmission modes of 3D TV, and then gives the technology solution for 3D TV and the upgrade solution for the Electronic Service Guide (ESG) based on CMMB. Test results show that the new system with 3D video works correctly when the 3D video is encoded with the H.264 standard and the image resolution conforms to standard definition (320x240).
APA, Harvard, Vancouver, ISO, etc. styles
11

Zhebin, Zhang, Zhang Ji'an, Zhang Xuexi, Wang Yizhou, and Gao Wen. "A distributed 2D-to-3D video conversion system". China Communications 10, no. 5 (May 2013): 30–38. http://dx.doi.org/10.1109/cc.2013.6520936.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
12

Min, Dongbo, Donghyun Kim, SangUn Yun, and Kwanghoon Sohn. "2D/3D freeview video generation for 3DTV system". Signal Processing: Image Communication 24, no. 1-2 (January 2009): 31–48. http://dx.doi.org/10.1016/j.image.2008.10.009.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
13

Wurmlin, Stephan, Edouard Lamboray, Oliver G. Staadt, and Markus H. Gross. "3D Video Recorder: a System for Recording and Playing Free-Viewpoint Video+". Computer Graphics Forum 22, no. 2 (June 2003): 181–93. http://dx.doi.org/10.1111/1467-8659.00659.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
14

West, Greg L., Kyoko Konishi, and Veronique D. Bohbot. "Video Games and Hippocampus-Dependent Learning". Current Directions in Psychological Science 26, no. 2 (April 2017): 152–58. http://dx.doi.org/10.1177/0963721416687342.

Full text of source
Abstract:
Research examining the impact of video games on neural systems has largely focused on visual attention and motor control. Recent evidence now shows that video games can also impact the hippocampal memory system. Further, action and 3D-platform video-game genres are thought to have differential impacts on this system. In this review, we examine the specific design elements unique to either action or 3D-platform video games and break down how they could either favor or discourage use of the hippocampal memory system during gameplay. Analysis is based on well-established principles of hippocampus-dependent and non-hippocampus-dependent forms of learning from the human and rodent literature.
APA, Harvard, Vancouver, ISO, etc. styles
15

Mihradi, Sandro, Ferryanto, Tatacipta Dirgantara, and Andi I. Mahyuddin. "Tracking of Markers for 2D and 3D Gait Analysis Using Home Video Cameras". International Journal of E-Health and Medical Communications 4, no. 3 (July 2013): 36–52. http://dx.doi.org/10.4018/jehmc.2013070103.

Full text of source
Abstract:
This work presents the development of an affordable optical motion-capture system which uses home video cameras for 2D and 3D gait analysis. The 2D gait analyzer system consists of one camcorder and one PC while the 3D gait analyzer system uses two camcorders, a flash and two PCs. Both systems make use of 25 fps camcorder, LED markers and technical computing software to track motions of markers attached to human body during walking. In the experiment for 3D gait analyzer system, the two cameras are synchronized by using flash. The recorded videos for both systems are extracted into frames and then converted into binary images, and bridge morphological operation is applied for unconnected pixel to facilitate marker detection process. Least distance method is then employed to track the markers motions, and 3D Direct Linear Transformation is used to reconstruct 3D markers positions. The correlation between length in pixel and in the real world resulted from calibration process is used to reconstruct 2D markers positions. To evaluate the reliability of the 2D and 3D optical motion-capture system developed in the present work, spatio-temporal and kinematics parameters calculated from the obtained markers positions are qualitatively compared with the ones from literature, and the results show good compatibility.
APA, Harvard, Vancouver, ISO, etc. styles
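The 3D Direct Linear Transformation step mentioned in the abstract above reduces, for each marker, to solving a small homogeneous least-squares system built from the two calibrated camera projection matrices and the marker's image coordinates. A minimal sketch with synthetic cameras (the matrices and the 3D point are made up for illustration, not taken from the paper) could look like this:

import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Recover a 3D marker position from two views with the Direct Linear Transformation.

    P1, P2   : 3x4 camera projection matrices obtained from calibration
    uv1, uv2 : (u, v) image coordinates of the marker in each view
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)                             # least-squares homogeneous solution
    X = vt[-1]
    return X[:3] / X[3]                                     # de-homogenize

# Two synthetic cameras observing the same marker.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_dlt(P1, P2, uv1, uv2))                    # ~ [0.3, -0.2, 4.0]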
16

Yu, Jia Xi, and Wen Hui Zhang. "Design of 3D-TV Horizontal Parallax Obtaining System Based on FPGA". Applied Mechanics and Materials 401-403 (September 2013): 1834–38. http://dx.doi.org/10.4028/www.scientific.net/amm.401-403.1834.

Full text of source
Abstract:
In this paper, the design of an FPGA-based 3D-TV horizontal parallax acquisition system is presented. The system receives stereoscopic video through an HD-SDI receiver (GS2971) and outputs a horizontal parallax video to a digital TV through an HDMI transmitter (SiI9134). In this system, the FPGA plays the key role of converting the stereoscopic video into the horizontal parallax video. In addition, a microcontroller serves as the control center of the entire system. The system can obtain the horizontal parallax of stereoscopic video in real time and helps stereoscopic program producers control the horizontal parallax of a 3D program.
APA, Harvard, Vancouver, ISO, etc. styles
17

Zhang, Qiuwen, Shuaichao Wei, and Rijian Su. "Low-Complexity Texture Video Coding Based on Motion Homogeneity for 3D-HEVC". Scientific Programming 2019 (15.01.2019): 1–13. http://dx.doi.org/10.1155/2019/1574081.

Full text of source
Abstract:
Three-dimensional extension of the high efficiency video coding (3D-HEVC) is an emerging international video compression standard for multiview video system applications. Similar to HEVC, a computationally expensive mode decision is performed using all depth levels and prediction modes to select the least rate-distortion (RD) cost for each coding unit (CU). In addition, new tools and intercomponent prediction techniques have been introduced to 3D-HEVC for improving the compression efficiency of the multiview texture videos. These techniques, despite achieving the highest texture video coding efficiency, involve extremely high-complex procedures, thus limiting 3D-HEVC encoders in practical applications. In this paper, a fast texture video coding method based on motion homogeneity is proposed to reduce 3D-HEVC computational complexity. Because the multiview texture videos instantly represent the same scene at the same time (considering that the optimal CU depth level and prediction modes are highly multiview content dependent), it is not efficient to use all depth levels and prediction modes in 3D-HEVC. The motion homogeneity model of a CU is first studied according to the motion vectors and prediction modes from the corresponding CUs. Based on this model, we present three efficient texture video coding approaches, such as the fast depth level range determination, early SKIP/Merge mode decision, and adaptive motion search range adjustment. Experimental results demonstrate that the proposed overall method can save 56.6% encoding time with only trivial coding efficiency degradation.
APA, Harvard, Vancouver, ISO, etc. styles
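The core idea in the abstract above, measuring how homogeneous the motion of corresponding and neighboring CUs is and pruning the depth-level and mode search accordingly, can be caricatured in a few lines. This is an illustrative toy, not the 3D-HEVC reference encoder; the threshold and the returned search parameters are invented for the example.

import numpy as np

def motion_homogeneity(motion_vectors):
    """Spread of neighboring motion vectors; small values indicate homogeneous motion."""
    mv = np.asarray(motion_vectors, dtype=float)
    return float(np.mean(np.var(mv, axis=0)))

def plan_cu_search(neighbor_mvs, hom_threshold=0.5):
    """Toy encoder decision: shrink the search for CUs whose neighborhood moves uniformly."""
    if motion_homogeneity(neighbor_mvs) < hom_threshold:
        return {"depth_levels": [0, 1], "early_skip": True, "search_range": 16}
    return {"depth_levels": [0, 1, 2, 3], "early_skip": False, "search_range": 64}

static_region = [(0, 0), (0, 1), (1, 0), (0, 0)]            # MVs of co-located/neighboring CUs
busy_region = [(12, -3), (0, 8), (-9, 5), (4, -11)]
print(plan_cu_search(static_region))                        # reduced depth range, early SKIP
print(plan_cu_search(busy_region))                          # full search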
18

Chen, Hanqing, Chunyan Hu, Feifei Lee, Chaowei Lin, Wei Yao, Lu Chen, and Qiu Chen. "A Supervised Video Hashing Method Based on a Deep 3D Convolutional Neural Network for Large-Scale Video Retrieval". Sensors 21, no. 9 (29.04.2021): 3094. http://dx.doi.org/10.3390/s21093094.

Full text of source
Abstract:
Recently, with the popularization of camera tools such as mobile phones and the rise of various short video platforms, a lot of videos are being uploaded to the Internet at all times, for which a video retrieval system with fast retrieval speed and high precision is very necessary. Therefore, content-based video retrieval (CBVR) has aroused the interest of many researchers. A typical CBVR system mainly contains the following two essential parts: video feature extraction and similarity comparison. Feature extraction of video is very challenging; previous video retrieval methods are mostly based on extracting features from single video frames, resulting in the loss of temporal information in the videos. Hashing methods are extensively used in multimedia information retrieval due to their retrieval efficiency, but most of them are currently only applied to image retrieval. In order to solve these problems in video retrieval, we build an end-to-end framework called deep supervised video hashing (DSVH), which employs a 3D convolutional neural network (CNN) to obtain spatial-temporal features of videos, then trains a set of hash functions by supervised hashing to transfer the video features into binary space and obtain compact binary codes of videos. Finally, we use triplet loss for network training. We conduct extensive experiments on three public video datasets, UCF-101, JHMDB and HMDB-51, and the results show that the proposed method has advantages over many state-of-the-art video retrieval methods. Compared with the DVH method, the mAP value on the UCF-101 dataset is improved by 9.3%, and the minimum improvement on the JHMDB dataset is 0.3%. At the same time, we also demonstrate the stability of the algorithm on the HMDB-51 dataset.
APA, Harvard, Vancouver, ISO, etc. styles
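A minimal sketch of a DSVH-style pipeline, a small 3D CNN producing relaxed hash codes trained with a triplet loss and binarized by sign at retrieval time, is shown below. It assumes PyTorch and uses a deliberately tiny architecture and random tensors in place of real clips; it is not the network described in the paper.

import torch
import torch.nn as nn

class VideoHashNet(nn.Module):
    """Tiny 3D CNN mapping a video clip to a relaxed binary hash code."""
    def __init__(self, hash_bits=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.hash_layer = nn.Linear(32, hash_bits)

    def forward(self, clip):                                # clip: (N, 3, frames, H, W)
        feat = self.features(clip).flatten(1)
        return torch.tanh(self.hash_layer(feat))            # relaxed codes in (-1, 1)

net = VideoHashNet()
anchor = torch.randn(2, 3, 16, 64, 64)                      # dummy clips stand in for real videos
positive = anchor + 0.05 * torch.randn_like(anchor)
negative = torch.randn(2, 3, 16, 64, 64)
loss = nn.TripletMarginLoss()(net(anchor), net(positive), net(negative))
codes = torch.sign(net(anchor))                             # compact binary codes at retrieval time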
19

Li, Hao Jun, and Yue Sheng Zhu. "A New Approach of 2D-to-3D Video Conversion and Its Implementation on Embedded System". Applied Mechanics and Materials 58-60 (June 2011): 2552–57. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.2552.

Full text of source
Abstract:
Rather than relying on the stereo vision principle and the relationship between the camera position and the video scene objects used in most current 2D-to-3D video conversion algorithms, a new method for predicting video depth information based on video frame differences is proposed and implemented on an embedded platform in this paper. 3D stereoscopic video sequences are generated by using the original 2D video sequences and the depth information. The theoretical analysis and experimental results show that the proposed method is more feasible and efficient compared with current algorithms.
APA, Harvard, Vancouver, ISO, etc. styles
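The depth cue described in the abstract above, inferring relative depth from inter-frame differences rather than from stereo geometry, can be mocked up in a few lines. The sketch below is a toy interpretation (moving regions treated as nearer, with Gaussian smoothing), not the authors' embedded implementation; the smoothing parameter is assumed.

import numpy as np
from scipy.ndimage import gaussian_filter

def depth_from_frame_difference(prev_frame, curr_frame, smooth_sigma=3.0):
    """Toy depth cue: regions with large inter-frame change are assumed to be nearer."""
    motion = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    motion = gaussian_filter(motion, smooth_sigma)          # suppress pixel noise
    return 1.0 - motion / (motion.max() + 1e-9)             # high motion -> small depth

prev_frame = np.zeros((48, 64))
curr_frame = np.zeros((48, 64))
curr_frame[20:30, 30:45] = 200.0                            # a moving object
depth_map = depth_from_frame_difference(prev_frame, curr_frame)
print(depth_map[25, 37], depth_map[5, 5])                   # object reads as nearer than background

Such a depth map would then feed a view-synthesis step like the DIBR sketch shown under entry 8.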
20

Vasudevan, Vani. "3D projection mapping for facial cosmetic surgery by creating a 3D video". International Journal of Engineering & Technology 7, no. 2-1 (23.03.2018): 409. http://dx.doi.org/10.14419/ijet.v7i2.9520.

Full text of source
Abstract:
Due to the rapid growth of technology, millions of systems are designed day by day to fulfill their duties in facilitating human lives. In this paper, the projection mapping method is used to help users who need facial cosmetic surgery to make the right decisions by providing them with a unique system that can be used in cosmetic surgery clinics. The main purpose of the system is to make a projection mapping on a face model that presents the changes that need to be applied to the facial features with some facial expressions; as a result, the user is able to compare his/her face before and after the surgery and make the decision with certainty. A suitable web-based UI is created to make it easier for the user by enabling him/her to choose the needed surgery and produce his/her own projection mapping video.
APA, Harvard, Vancouver, ISO, etc. styles
21

Estrela, Vania Vieira, Maria Aparecida de Jesus, Jenice Aroma, Kumudha Raimond, Sandro R. Fernandes, Nikolaos Andreopoulos, Edwiges G. H. Grata, Andrey Terziev, Ricardo Tadeu Lopes, and Anand Deshpande. "Motion Estimation Role in the Context of 3D Video". International Journal of Multimedia Data Engineering and Management 12, no. 3 (July 2021): 1–23. http://dx.doi.org/10.4018/ijmdem.291556.

Full text of source
Abstract:
The 3D end-to-end video system (i.e., 3D acquisition, processing, streaming, error concealment, virtual/augmented reality handling, content retrieval, rendering, and displaying) still needs improvements. This paper scrutinizes in depth the Motion Compensation/Motion Estimation (MCME) impact in 3D Video (3DV) from the end-to-end users' point of view. The concepts of Motion Vectors (MVs) and disparities are very close, and they help to ameliorate all the stages of the end-to-end 3DV system. The High-Efficiency Video Coding (HEVC) video codec standard is taken into consideration to evaluate the emergent trend towards computational treatment throughout the Cloud whenever possible. The tight bond between movement and depth affects 3D information recovery from these cues and optimizes the performance of algorithms and standards in several parts of the 3D system. Still, 3DV lacks support for engaging interactive 3DV services. Better bit allocation strategies also ameliorate all 3D pipeline stages while being attentive to Cloud-based deployments for 3D streaming.
APA, Harvard, Vancouver, ISO, etc. styles
22

Zhu, Ge, Huili Zhang, Yirui Jiang, Juan Lei, Linqing He, and Hongwei Li. "Dynamic Fusion Technology of Mobile Video and 3D GIS: The Example of Smartphone Video". ISPRS International Journal of Geo-Information 12, no. 3 (14.03.2023): 125. http://dx.doi.org/10.3390/ijgi12030125.

Full text of source
Abstract:
Mobile videos contain a large amount of data, where the information of interest to the user can be either discrete or distributed. This paper introduces a method for fusing 3D geographic information systems (GIS) and video image textures. For the dynamic fusion of video in 3DGIS, where the position and pose angle of the filming device change moment by moment, it integrates GIS 3D visualization, pose resolution, and motion interpolation, and proposes a projection texture mapping method that constructs a dynamic depth camera to achieve dynamic fusion. In this paper, the accuracy and time efficiency of gradient descent and complementary filtering algorithms under different reference systems are analyzed mainly by quantitative analysis, and the effect of dynamic fusion is analyzed using the playback delay and rendering frame rate of video on 3DGIS as indicators. The experimental results show that the gradient descent method under the Attitude and Heading Reference System (AHRS) is more suitable for solving smartphone attitude and can keep the root mean square error of the attitude solution within 2°; the delay of video playback on 3DGIS is within 29 ms, and the rendering frame rate is 34.9 fps, which meets the requirements of the minimum resolution of the human eye.
APA, Harvard, Vancouver, ISO, etc. styles
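One of the attitude estimators compared in the abstract above belongs to the complementary-filter family. As a hedged illustration of how such a filter keeps the attitude error small, here is a basic one-axis complementary filter fusing a drifting gyro rate with a noisy accelerometer-derived pitch; the noise levels and blending factor are assumed values, and this is not the paper's AHRS formulation.

import numpy as np

def complementary_filter(gyro_rate, accel_pitch, dt=0.01, alpha=0.98):
    """Fuse gyro rate (deg/s) with accelerometer-derived pitch (deg) into one pitch estimate."""
    pitch = accel_pitch[0]
    estimate = []
    for rate, acc in zip(gyro_rate, accel_pitch):
        pitch = alpha * (pitch + rate * dt) + (1.0 - alpha) * acc
        estimate.append(pitch)
    return np.array(estimate)

t = np.arange(0, 5, 0.01)
true_pitch = 10.0 * np.sin(0.5 * t)                                         # degrees
gyro_rate = np.gradient(true_pitch, t) + np.random.normal(0, 0.2, t.size)   # noisy rate gyro
accel_pitch = true_pitch + np.random.normal(0, 2.0, t.size)                 # noisy but unbiased
est = complementary_filter(gyro_rate, accel_pitch)
print(f"RMSE = {np.sqrt(np.mean((est - true_pitch) ** 2)):.2f} deg")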
23

SAKAMOTO, Naohisa, Takeshi TAKAI, Koji KOYAMADA, Takashi MATSUYAMA, and Yoshihito KIKKAWA. "Development of 3D Video Display System Using Omnidirectional Display". Journal of the Visualization Society of Japan 23, Supplement1 (2003): 399–402. http://dx.doi.org/10.3154/jvs.23.supplement1_399.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
24

Hefeeda, Mohamed. "Spider: A System for Finding Illegal 3D Video Copies". Qatar Foundation Annual Research Forum Proceedings, no. 2011 (November 2011): CSO10. http://dx.doi.org/10.5339/qfarf.2011.cso10.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
25

Morvan, Yannick, Dirk Farin, and Peter De With. "System architecture for free-viewpoint video and 3D-TV". IEEE Transactions on Consumer Electronics 54, no. 2 (May 2008): 925–32. http://dx.doi.org/10.1109/tce.2008.4560180.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
26

Chung, Young-uk, Yong-Hoon Choi, Suwon Park, and Hyukjoon Lee. "A QoS Aware Resource Allocation Strategy for 3D A/V Streaming in OFDMA Based Wireless Systems". Scientific World Journal 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/419236.

Full text of source
Abstract:
Three-dimensional (3D) video is expected to be a “killer app” for OFDMA-based broadband wireless systems. The main limitation of 3D video streaming over a wireless system is the shortage of radio resources due to the large size of the 3D traffic. This paper presents a novel resource allocation strategy to address this problem. In the paper, the video-plus-depth 3D traffic type is considered. The proposed resource allocation strategy focuses on the relationship between 2D video and the depth map, handling them with different priorities. It is formulated as an optimization problem and is solved using a suboptimal heuristic algorithm. Numerical results show that the proposed scheme provides a better quality of service compared to conventional schemes.
APA, Harvard, Vancouver, ISO, etc. styles
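The key design point in the abstract above, giving the 2D video layer priority over the depth map when radio resources run short, can be shown with a toy greedy allocator. This is only an illustration of the priority idea, not the paper's optimization problem or its heuristic; the field names and demand figures are invented.

def allocate_resource_blocks(users, total_rbs):
    """Greedy toy allocation: serve the 2D video layer first, then depth maps if blocks remain.

    users: list of dicts {"id", "video_rbs", "depth_rbs"} giving per-user demands.
    """
    allocation = {u["id"]: {"video": 0, "depth": 0} for u in users}
    remaining = total_rbs
    for layer in ("video", "depth"):                        # the video layer has strict priority
        for u in users:
            grant = min(u[f"{layer}_rbs"], remaining)
            allocation[u["id"]][layer] = grant
            remaining -= grant
    return allocation

demands = [{"id": "A", "video_rbs": 20, "depth_rbs": 10},
           {"id": "B", "video_rbs": 25, "depth_rbs": 12}]
print(allocate_resource_blocks(demands, total_rbs=60))
# Both 2D video layers are fully served; the depth maps share the 15 leftover blocks.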
27

Siddique, Arslan, Francesco Banterle, Massimiliano Corsini, Paolo Cignoni, Daniel Sommerville, and Chris Joffe. "MoReLab: A Software for User-Assisted 3D Reconstruction". Sensors 23, no. 14 (17.07.2023): 6456. http://dx.doi.org/10.3390/s23146456.

Full text of source
Abstract:
We present MoReLab, a tool for user-assisted 3D reconstruction. This reconstruction requires an understanding of the shapes of the desired objects. Our experiments demonstrate that existing Structure from Motion (SfM) software packages fail to estimate accurate 3D models in low-quality videos due to several issues such as low resolution, featureless surfaces, low lighting, etc. In such scenarios, which are common for industrial utility companies, user assistance becomes necessary to create reliable 3D models. In our system, the user first needs to add features and correspondences manually on multiple video frames. Then, classic camera calibration and bundle adjustment are applied. At this point, MoReLab provides several primitive shape tools such as rectangles, cylinders, curved cylinders, etc., to model different parts of the scene and export 3D meshes. These shapes are essential for modeling industrial equipment whose videos are typically captured by utility companies with old video cameras (low resolution, compression artifacts, etc.) and in disadvantageous lighting conditions (low lighting, torchlight attached to the video camera, etc.). We evaluate our tool on real industrial case scenarios and compare it against existing approaches. Visual comparisons and quantitative results show that MoReLab achieves superior results with regard to other user-interactive 3D modeling tools.
APA, Harvard, Vancouver, ISO, etc. styles
28

Chen, Hao, Ya Ping Hu, Jin Gyi Yan, Jiong Cong Chen, and Juan Hu. "Research on the Substation 3D Real Scene Surveillance". Advanced Materials Research 1008-1009 (August 2014): 742–47. http://dx.doi.org/10.4028/www.scientific.net/amr.1008-1009.742.

Full text of source
Abstract:
The reasonable distribution of video monitoring points in a substation is one of the technical problems to be solved in engineering. We combined the camera calibration algorithm with the real application of substation video monitoring point layout and proposed a nonlinear camera calibration method for the external parameters based on the 3D real scene, which is suited to the substation video surveillance system. The optimal video monitoring points are determined by the relationship between the video produced by the camera movement and the visual scope displayed in the 3D real scene, and verified by the location of each object through its boundary geometric elements shown in the pictures. The method has been proved applicable by video simulation analysis of the 110 kV main transformer monitoring points, and it has been used in the engineering design of the Guangdong Power Grid substation video surveillance system.
APA, Harvard, Vancouver, ISO, etc. styles
29

Sedik, Ahmed, Mohamed Marey, and Hala Mostafa. "An Adaptive Fatigue Detection System Based on 3D CNNs and Ensemble Models". Symmetry 15, no. 6 (16.06.2023): 1274. http://dx.doi.org/10.3390/sym15061274.

Full text of source
Abstract:
Due to the widespread issue of road accidents, researchers have been drawn to investigate strategies to prevent them. One major contributing factor to these accidents is driver fatigue resulting from exhaustion. Various approaches have been explored to address this issue, with machine and deep learning proving to be effective in processing images and videos to detect asymmetric signs of fatigue, such as yawning, facial characteristics, and eye closure. This study proposes a multistage system utilizing machine and deep learning techniques. The first stage is designed to detect asymmetric states, including tiredness and non-vigilance as well as yawning. The second stage is focused on detecting eye closure. The machine learning approach employs several algorithms, including Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Multi-layer Perceptron (MLP), Decision Tree (DT), Logistic Regression (LR), and Random Forest (RF). Meanwhile, the deep learning approach utilizes 2D and 3D Convolutional Neural Networks (CNNs). The architectures of proposed deep learning models are designed after several trials, and their parameters have been selected to achieve optimal performance. The effectiveness of the proposed methods is evaluated using video and image datasets, where the video dataset is classified into three states: alert, tired, and non-vigilant, while the image dataset is classified based on four facial symptoms, including open or closed eyes and yawning. A more robust system is achieved by combining the image and video datasets, resulting in multiple classes for detection. Simulation results demonstrate that the 3D CNN proposed in this study outperforms the other methods, with detection accuracies of 99 percent, 99 percent, and 98 percent for the image, video, and mixed datasets, respectively. Notably, this achievement surpasses the highest accuracy of 97 percent found in the literature, suggesting that the proposed methods for detecting drowsiness are indeed effective solutions.
APA, Harvard, Vancouver, ISO, etc. styles
30

Saputra, Ade, Dandi Sunardi, Ardi Wijaya, and Agung Kharismah Hidayah. "Visualisasi SD N 47 Kota Bengkulu Berbasis 3D Dengan Menggunakan Teknik Chrome Key Dan Sketchup". JURNAL MEDIA INFOTAMA 19, no. 1 (19.04.2023): 197–204. http://dx.doi.org/10.37676/jmi.v19i1.3684.

Full text of source
Abstract:
A profile video is an electronic medium for conveying information that is very effective in introducing a school. Through this visual medium, all information can be easily digested by all circles of society. State Elementary School 47 of Bengkulu City needs promotional media aimed at the community to increase the admission of new students. On this basis, a profile video of Sekolah Dasar Negeri 47 Bengkulu City was made to carry out the promotion. In making this video, SketchUp and Enscape software are used to make it easier to create profile videos. The purpose of the research is to make a 3D animated video as a promotional medium for Sekolah Dasar Negeri 47 Bengkulu City. The research method follows a multimedia system development model using Luther Sutopo's Multimedia Development Life Cycle (MDLC). The research produced a promotional medium in the form of 3D animated videos made with the SketchUp application. The respondents' due diligence test received a response of 87.5%, which means that the profile video that was made is liked by the general public.
APA, Harvard, Vancouver, ISO, etc. styles
31

Thanh Le, Tuan, JongBeom Jeong, and Eun-Seok Ryu. "Efficient Transcoding and Encryption for Live 360 CCTV System". Applied Sciences 9, no. 4 (21.02.2019): 760. http://dx.doi.org/10.3390/app9040760.

Full text of source
Abstract:
In recent years, with the rapid development of surveillance information, closed-circuit television (CCTV) has become an indispensable element in security systems. CCTV systems designed for video compression and encryption need to be improved for the best performance and different security levels. Especially, the advent of 360 video makes CCTV promising for surveillance without any blind areas. Compared to current systems, 360 CCTV requires large bandwidth with low latency to run smoothly, so the system needs to be made more robust to deliver this performance. Video transmission and transcoding are essential processes for converting codecs, changing bitrates, or resizing the resolution of 360 videos, and high-performance transcoding is one of the key factors for real-time CCTV streaming. Additionally, the security of video streams from cameras to endpoints is an important priority in CCTV research. In this paper, a real-time transcoding system designed with the ARIA block cipher encryption algorithm is presented. Experimental results show that the proposed method achieved an approximately 200% speedup compared to libx265 in FFmpeg for the transcoding task, and it could handle multiple transcoding sessions simultaneously at high performance for both a live 360 CCTV system and existing 2D/3D CCTV systems.
APA, Harvard, Vancouver, ISO, etc. styles
32

Tai, Yong Hang, Jun Sheng Shi, Zai Qing Chen, Qiong Li, and Bin Zhuo. "Application of AM-OLED Micro-Display in Stereo Display System". Advanced Materials Research 936 (June 2014): 2209–13. http://dx.doi.org/10.4028/www.scientific.net/amr.936.2209.

Full text of source
Abstract:
OLED micro-displays are widely applied in the HMD stereo display field. Based on dual 0.5-inch 800×600 high-resolution AM-OLED displays, we proposed a 3D circuit design scheme which uses the PC's VGA interface as video input and a PIC18LF2550 as the MCU. According to the principle of human binocular disparity, the PC stereo video source displays odd frames and even frames simultaneously on the two AM-OLED panels, which implements the 3D function. The display system has been tested by playing a stereoscopic video source in left-and-right format from the PC, and the good performance achieved verified the effectiveness of the proposed scheme.
APA, Harvard, Vancouver, ISO, etc. styles
33

Yun, Kugjin, Won-Sik Cheong, Jinyoung Lee, and Kyuheon Kim. "Design and Implementation of Hybrid Network Associated 3D Video Broadcasting System". Journal of Broadcast Engineering 19, no. 5 (30.09.2014): 687–98. http://dx.doi.org/10.5909/jbe.2014.19.5.687.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
34

Fu, Yong Hua. "The Dynamic Multi-Angle Video Capture System Study Based on Glass-Free 3D". Applied Mechanics and Materials 740 (March 2015): 714–17. http://dx.doi.org/10.4028/www.scientific.net/amm.740.714.

Full text of source
Abstract:
The paper implements a dynamic system for multi-angle glass-free 3D video, a technology that places higher requirements on content acquisition for playback. The system uses the camera matrix to gather content from multi-angle shooting and then, after decoding, correction, synthesis, and processing, continuously produces video images from multiple angles in order to achieve glass-free 3D viewing. The system driver is implemented in Java. The experimental results show that camera capture problems such as distortion and image alignment are solved, and dynamic full-HD video with four view angles is realized.
APA, Harvard, Vancouver, ISO, etc. styles
35

Lian, Ping Ping. "A Novel USB3.0 High Definition 3D Video Camera Based on ARM". Advanced Materials Research 1037 (October 2014): 474–77. http://dx.doi.org/10.4028/www.scientific.net/amr.1037.474.

Full text of source
Abstract:
This paper focuses on the design and implementation of an innovative 3D video camera. Traditional 3D cameras use the USB2.0 bus or similar buses and are not capable of transferring high-definition (HD) video due to bus speed limitations. Utilizing the newest USB3.0 bus and high-definition image sensors, this thesis designs an HD 3D camera that solves the above problem. To maximize data transfer speed, it employs an ARM controller with a 200 MHz operating frequency, which at the same time guarantees real-time system response. The field trial has demonstrated that the HD 3D camera presented in the article is feasible and of considerable research value.
APA, Harvard, Vancouver, ISO, etc. styles
36

Roganov, Vladimir, Michail Miheev, Elvira Roganova, Olga Grintsova, and Jurijs Lavendels. "Modernisation of Endoscopic Equipment Using 3D Indicators". Applied Computer Systems 23, no. 1 (1.05.2018): 75–80. http://dx.doi.org/10.2478/acss-2018-0010.

Full text of source
Abstract:
The development of new software to improve the operation of modernised and developed technological facilities in different sectors of the national economy requires a systematic approach. For example, the use of video recordings obtained during operations with endoscopic equipment allows the work of doctors to be monitored. A minor change to the software used makes it possible to use additionally processed video fragments for the creation of training complexes. The authors of the present article took part in the development of many educational software and hardware systems. The first such system was the "Contact" system, developed in the eighties of the last century at Riga Polytechnic Institute. Later on, car simulators, airplane simulators, walking excavator simulators, and the optical software-hardware training system "Three-Dimensional Medical Atlas" were developed. Analysis of various simulators and training systems showed that the computers used in them could not by themselves constitute a learning system. When creating a learning system, many factors must be considered so that the student does not acquire false skills. The goal of the study is to analyse the training systems created for the professional training of medical personnel working with endoscopic equipment, in particular with equipment fitted with 3D indicators.
APA, Harvard, Vancouver, ISO, etc. styles
37

Oikonomou, Andreas, Saad Amin, Raouf N. G. Naguib, Alison Todman, and Hassanein Al-Omishy. "Interactive Reality System (IRiS): Interactive 3D Video Playback in Multimedia Applications". Journal of Advanced Computational Intelligence and Intelligent Informatics 10, no. 2 (20.03.2006): 145–49. http://dx.doi.org/10.20965/jaciii.2006.p0145.

Full text of source
Abstract:
We developed a novel interactive video recording and playback technique for biomedical multimedia training but also applicable to other areas of multimedia. The Interactive Reality System (IRiS) improves on video playback used in most multimedia applications by controlling not only time, as in conventional video playback, but also space. A prototype is being tested and evaluated for multimedia training in breast self-examination (BSE). We discuss the advantages of IRiS and compare it to other similar approaches, such as QuickTime and iPIX. We detail the design of IRiS, its development, refinement, final implementation, evaluation, some projected plans and its uses in other biomedical and multimedia training scenarios.
APA, Harvard, Vancouver, ISO, etc. styles
38

Obayashi, Mizuki, Shohei Mori, Hideo Saito, Hiroki Kajita, and Yoshifumi Takatsume. "Multi-View Surgical Camera Calibration with None-Feature-Rich Video Frames: Toward 3D Surgery Playback". Applied Sciences 13, no. 4 (14.02.2023): 2447. http://dx.doi.org/10.3390/app13042447.

Full text of source
Abstract:
Mounting multi-view cameras within a surgical light is a practical choice since some cameras are expected to observe surgery with few occlusions. Such multi-view videos must be reassembled for easy reference. A typical way is to reconstruct the surgery in 3D. However, the geometrical relationship among cameras is changed because each camera independently moves every time the lighting is reconfigured (i.e., every time surgeons touch the surgical light). Moreover, feature matching between surgical images is potentially challenging because of missing rich features. To address the challenge, we propose a feature-matching strategy that enables robust calibration of the multi-view camera system by collecting a set of a small number of matches over time while the cameras stay stationary. Our approach would enable conversion from multi-view videos to a 3D video. However, surgical videos are long and, thus, the cost of the conversion rapidly grows. Therefore, we implement a video player where only selected frames are converted to minimize time and data until playbacks. We demonstrate that sufficient calibration quality with real surgical videos can lead to a promising 3D mesh and a recently emerged 3D multi-layer representation. We reviewed comments from surgeons to discuss the differences between those 3D representations on an autostereoscopic display with respect to medical usage.
APA, Harvard, Vancouver, ISO, etc. styles
39

Lee, Dohoon, Yoonmo Yang, and Byung Tae Oh. "Boundary Artifacts Reduction in View Synthesis of 3D Video System". Journal of Broadcast Engineering 21, no. 6 (30.11.2016): 878–88. http://dx.doi.org/10.5909/jbe.2016.21.6.878.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
40

Gürler, C. Goktug, and Murat Tekalp. "Peer-to-peer system design for adaptive 3D video streaming". IEEE Communications Magazine 51, no. 5 (May 2013): 108–14. http://dx.doi.org/10.1109/mcom.2013.6515054.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
41

Tsai, Sung-Fang, Chao-Chung Cheng, Chung-Te Li, and Liang-Gee Chen. "A real-time 1080p 2D-to-3D video conversion system". IEEE Transactions on Consumer Electronics 57, no. 2 (May 2011): 915–22. http://dx.doi.org/10.1109/tce.2011.5955240.

Full text of source
APA, Harvard, Vancouver, ISO, etc. styles
42

Hashimoto, Hideyuki, Yuki Fujibayashi, and Hiroki Imamura. "3D Video Communication System by Using Kinect and Head Mounted Displays". International Journal of Image and Graphics 15, no. 02 (April 2015): 1540003. http://dx.doi.org/10.1142/s0219467815400033.

Full text of source
Abstract:
Existing video communication systems, used in private business or video teleconferencing, show only a part of the users' bodies on the display. This lacks a realistic sensation because only part of the body is shown and the other users are displayed in 2D, which makes users feel that they are communicating with the other users at a long distance. Furthermore, although these existing communication systems have file transfer functions such as sending and receiving file data, they do not use intuitive manipulation; they rely on a mouse or a touch display only. In order to solve these problems, we propose a 3D communication system supported by Kinect and a head mounted display (HMD) to provide users with communication that has a realistic sensation and intuitive manipulation. This system is able to show the whole body of each user on the HMD, as if they were in the same room, by 3D reconstruction. It will also enable users to transfer and share information through intuitive manipulation of augmented reality (AR) objects toward the other users in the future. The result of this paper is a system that extracts the human body by using Kinect, reconstructs the extracted body on the HMD, and also recognizes users' hands so that AR objects can be manipulated by hand.
APA, Harvard, Vancouver, ISO, etc. styles
43

Hidayat, Riyan, Hendri Gunawan, and Diki Susandi. "PEMBUATAN VIDEO PROFIL PERUSAHAAN BERBASIS ANIMASI 3D DI PT. KRAKATAU INSAN MANDIRI". Jurnal Sistem Informasi dan Informatika (Simika) 2, no. 1 (21.02.2019): 64–80. http://dx.doi.org/10.47080/simika.v2i1.353.

Full text of source
Abstract:
Multimedia, as a medium of communication, information delivery, and promotion, is one of the most popular technology fields. As a field that is flexible and attractive, and that engages the human senses, multimedia can simultaneously affect humans visually, aurally, and through touch, so that consumers can digest the message it contains more optimally. One product that requires multimedia to improve its appeal is a company profile. In this case, PT. Krakatau Insan Mandiri seeks to introduce itself clearly and attractively to consumers through a 3D-animation-based company profile video. The benefits are documentation and promotional media as well as information about the company profile of PT. Krakatau Insan Mandiri. In making this company profile video, the author uses methodologies including: Literature Study, Analysis and Design of Systems, System Implementation, System Testing, and Documentation.
APA, Harvard, Vancouver, ISO, etc. styles
44

Kadosh, Michael, and Yitzhak Yitzhaky. "3D Object Detection via 2D Segmentation-Based Computational Integral Imaging Applied to a Real Video". Sensors 23, no. 9 (22.04.2023): 4191. http://dx.doi.org/10.3390/s23094191.

Full text of source
Abstract:
This study aims to achieve accurate three-dimensional (3D) localization of multiple objects in a complicated scene using passive imaging. It is challenging, as it requires accurate localization of the objects in all three dimensions given recorded 2D images. An integral imaging system captures the scene from multiple angles and is able to computationally produce blur-based depth information about the objects in the scene. We propose a method to detect and segment objects in a 3D space using integral-imaging data obtained by a video camera array. Using objects’ two-dimensional regions detected via deep learning, we employ local computational integral imaging in detected objects’ depth tubes to estimate the depth positions of the objects along the viewing axis. This method analyzes object-based blurring characteristics in the 3D environment efficiently. Our camera array produces an array of multiple-view videos of the scene, called elemental videos. Thus, the proposed 3D object detection applied to the video frames allows for 3D tracking of the objects with knowledge of their depth positions along the video. Results show successful 3D object detection with depth localization in a real-life scene based on passive integral imaging. Such outcomes have not been obtained in previous studies using integral imaging; mainly, the proposed method outperforms them in its ability to detect the depth locations of objects that are in close proximity to each other, regardless of the object size. This study may contribute when robust 3D object localization is desired with passive imaging, but it requires a camera or lens array imaging apparatus.
APA, Harvard, Vancouver, ISO, etc. styles
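The local computational integral imaging step mentioned in the abstract above relies on the classic shift-and-sum reconstruction: elemental images from the camera array are shifted in proportion to a candidate depth and averaged, so objects at that depth come into focus while others blur. The sketch below illustrates that principle on a simulated four-camera capture; the baseline and depth values are arbitrary, and it is not the authors' pipeline.

import numpy as np

def shift_and_sum(elemental_images, baseline_px, depth):
    """Refocus a camera-array capture at a candidate depth by shift-and-sum averaging.

    elemental_images : 2D images taken from equally spaced horizontal positions
    baseline_px      : per-camera disparity (pixels) of an object at unit depth
    depth            : candidate depth plane (larger depth -> smaller shift)
    """
    acc = np.zeros_like(elemental_images[0], dtype=float)
    for k, img in enumerate(elemental_images):
        shift = int(round(k * baseline_px / depth))
        acc += np.roll(img, -shift, axis=1)
    return acc / len(elemental_images)

rng = np.random.default_rng(0)
scene = rng.random((48, 64))
# Simulate 4 cameras: an object at depth 2 shows a per-camera disparity of 8 / 2 = 4 pixels.
views = [np.roll(scene, k * 4, axis=1) for k in range(4)]
in_focus = shift_and_sum(views, baseline_px=8, depth=2.0)   # matches the true depth
off_focus = shift_and_sum(views, baseline_px=8, depth=4.0)  # wrong depth plane
print(np.var(in_focus - scene), np.var(off_focus - scene))  # error is smaller when in focus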
45

Ali, Sharifnezhad, Abdollahzadekan Mina, Shafieian Mehdi, and Sahafnejad-Mohammadi Iman. "C3D data based on 2-dimensional images from video camera". Annals of Biomedical Science and Engineering 5, no. 1 (13.01.2021): 001–5. http://dx.doi.org/10.29328/journal.abse.1001010.

Full text of source
Abstract:
The human three-dimensional (3D) musculoskeletal model is based on motion analysis methods and can be obtained with particular motion capture systems that export 3D data in the coordinate 3D (C3D) format. Special cameras and specific software are essential for analyzing the data; this equipment is quite expensive, and using it is time-consuming. Because of these problems, this research intends to use ordinary video cameras and open source systems to obtain 3D data and create the C3D format. By capturing movements with two video cameras, marker coordinates are obtainable using Skill-Spector. To create C3D data from the 3D coordinates of the body points, MATLAB functions were used. The subject was captured simultaneously with both the Cortex system and the two video cameras during each validation test. The mean correlation coefficient of the datasets is 0.7. Following a more detailed comparison, this method can be used as an alternative method for motion analysis. The C3D data collection presented in this research is more accessible and cost-efficient than other systems, and only two cameras are used.
APA, Harvard, Vancouver, ISO, etc. styles
46

Chen, Zhicheng, and Lulu Ding. "3D Video Analysis and Its Application in Developmental and Educational Psychology Teaching". Discrete Dynamics in Nature and Society 2022 (14.05.2022): 1–9. http://dx.doi.org/10.1155/2022/2551272.

Full text of source
Abstract:
With the rapid development of information technology and network technology and the new requirements of education and teaching in the new era, 3D video technology has been more and more widely used in the field of education and teaching. Educational psychology plays a positive role in the field of education. Applying its relevant theories to classroom teaching is not only conducive to the establishment of a good teacher-student relationship but also helps teachers to understand the psychological characteristics and learning process of students, so as to greatly improve the quality of teaching. Therefore, based on 3D video analysis technology, this article observes and codes teaching videos, analyzes the coded verbal interaction between teachers and students (mainly its frequency, overall duration, and single-event duration), and explores the current situation of, commonalities in, and differences between the teaching behaviors of the teachers in the video cases. From the perspective of learner psychology, a model of teachers' teaching behavior in MOOC teaching videos is constructed. In addition, from the perspective of the coding system and learner psychology, this paper proposes strategies for improving teachers' teaching behavior in MOOC teaching videos in China. The results show that the application of the 3D video analysis method and educational psychology in student education can improve students' work efficiency.
APA, Harvard, Vancouver, ISO, etc. styles
47

Reino, Anthony J., William Lawson, Baxter J. Garcia, and Robert J. Greenstein. "Three Dimensional Video Imaging for Endoscopic Sinus Surgery and Diagnosis". American Journal of Rhinology 9, no. 4 (July 1995): 197–202. http://dx.doi.org/10.2500/105065895781873746.

Full text of source
Abstract:
Technological advances in video imaging over the last decade have resulted in remarkable additions to the armamentarium of instrumentation for the otolaryngologist. The use of video cameras and computer generated imaging in the operating room and office is invaluable for documentation and teaching purposes. Despite the obvious advantages of these systems, problems are evident, the most serious of which include image distortion and inability to judge depth of field. For more than 6 decades 3D imaging has been neither technically nor commercially successful. Reasons include alignment difficulties and image distortion. The result is “visual fatigue,” usually in about 15 minutes. At its extreme, this may be characterized by headache, nausea, and even vomiting. In this study, we employed the first 3D video imager to electronically manipulate a single video source to produce 3D images; therefore, neither alignment nor image distortions were produced. Of interest to the clinical surgeon, “visual fatigue” does not seem to occur; however, with prolonged procedures (greater than 2 hours) there exists the potential for physical intolerance for some individuals. This is the first unit that is compatible with any rigid or flexible videoendoscopic system and the small diameter endoscopes available for endoscopic sinus surgery. Moreover, prerecorded 2D tapes may be viewed in 3D on an existing VCR. The 3D image seems to provide enhanced anatomic awareness with less image distortion. We have found this system to be optically superior to the 2D video imagers currently available.
APA, Harvard, Vancouver, ISO, etc. styles
48

Xu, Li Hong, Lu Lin Yang, and Rui Hua Wei. "Design of a Greenhouse Visualization System Based on Cloud Computing and Android System". Applied Mechanics and Materials 519-520 (February 2014): 1455–60. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.1455.

Full text of source
Abstract:
This visualization system aims at collecting and storing environmental data and crop growth data so as to establish the relationship between them. In order to solve the problem of big data storage, this paper puts forward a new system architecture, "Data collect + Cloud + Android", which takes advantage of a cloud platform to store greenhouse environmental data and 3D crop growth data. Greenhouse environmental data is automatically uploaded to the cloud platform, and an Android smartphone exchanges data with the cloud platform. The greenhouse visualization system includes: 3D visualization of greenhouse parameters, a 3D crop model, a traceability system, and video in the greenhouse. During testing, the system ran smoothly with no data packet loss.
APA, Harvard, Vancouver, ISO, etc. styles
49

Lee, Jaehyun, Sungjae Ha, Philippe Gentet, Leehwan Hwang, Soonchul Kwon, and Seunghyun Lee. "A Novel Real-Time Virtual 3D Object Composition Method for 360° Video". Applied Sciences 10, no. 23 (4.12.2020): 8679. http://dx.doi.org/10.3390/app10238679.

Full text of source
Abstract:
As highly immersive virtual reality (VR) content, 360° video allows users to observe all viewpoints within the desired direction from the position where the video is recorded. In 360° video content, virtual objects are inserted into recorded real scenes to provide a higher sense of immersion. These techniques are called 3D composition. For a realistic 3D composition in a 360° video, it is important to obtain the internal (focal length) and external (position and rotation) parameters from a 360° camera. Traditional methods estimate the trajectory of a camera by extracting the feature point from the recorded video. However, incorrect results may occur owing to stitching errors from a 360° camera attached to several high-resolution cameras for the stitching process, and a large amount of time is spent on feature tracking owing to the high-resolution of the video. We propose a new method for pre-visualization and 3D composition that overcomes the limitations of existing methods. This system achieves real-time position tracking of the attached camera using a ZED camera and a stereo-vision sensor, and real-time stabilization using a Kalman filter. The proposed system shows high time efficiency and accurate 3D composition.
APA, Harvard, Vancouver, ISO, etc. styles
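The real-time stabilization mentioned in the abstract above is attributed to a Kalman filter. As a hedged illustration of what such smoothing does to a noisy position track, here is a textbook one-dimensional constant-velocity Kalman filter; the noise covariances and frame rate are assumed values, not those of the described system.

import numpy as np

def kalman_smooth_1d(measurements, dt=1.0 / 30, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over noisy 1D position samples."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                              # only the position is observed
    Q = q * np.eye(2)                                       # process noise
    R = np.array([[r]])                                     # measurement noise
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    smoothed = []
    for z in measurements:
        x = F @ x                                           # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)               # update with the new measurement
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    return np.array(smoothed)

t = np.arange(0, 2, 1.0 / 30)
true_pos = 0.5 * t                                          # camera drifting at 0.5 m/s
noisy = true_pos + np.random.normal(0, 0.05, t.size)
print(np.std(kalman_smooth_1d(noisy) - true_pos), np.std(noisy - true_pos))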
50

Kitahara, Itaru, and Yuichi Ohta. "Scalable 3D Representation for 3D Video in a Large-Scale Space". Presence: Teleoperators and Virtual Environments 13, no. 2 (April 2004): 164–77. http://dx.doi.org/10.1162/1054746041382401.

Full text of source
Abstract:
In this paper, we introduce our research aimed at realizing a 3D video system in a very large-scale space such as a soccer stadium or concert hall. We propose a method for describing the shape of a 3D object with a set of planes in order to effectively synthesize a novel view of the object. The most effective layout of the planes can be determined based on the relative locations of the observer's viewing position, multiple cameras, and 3D objects. We describe a method for controlling the LOD of the 3D representation by adjusting the planes' orientation, interval, and resolution. The data size of the 3D model and the processing time can be reduced drastically. The effectiveness of the proposed method is demonstrated by our experimental results.
APA, Harvard, Vancouver, ISO, etc. styles
