Journal articles on the topic '3D video system'

Consult the top 50 journal articles for your research on the topic '3D video system.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Cha, Jongeun, Mohamad Eid, and Abdulmotaleb El Saddik. "Touchable 3D video system." ACM Transactions on Multimedia Computing, Communications, and Applications 5, no. 4 (October 2009): 1–25. http://dx.doi.org/10.1145/1596990.1596993.

2

Okui, Makoto. "6-2. 3D Video System." Journal of The Institute of Image Information and Television Engineers 65, no. 9 (2011): 1282–86. http://dx.doi.org/10.3169/itej.65.1282.

3

Aggoun, Amar, Emmanuel Tsekleves, Mohammad Rafiq Swash, Dimitrios Zarpalas, Anastasios Dimou, Petros Daras, Paulo Nunes, and Luis Ducla Soares. "Immersive 3D Holoscopic Video System." IEEE MultiMedia 20, no. 1 (January 2013): 28–37. http://dx.doi.org/10.1109/mmul.2012.42.

4

Mohammed, Dhrgham Hani, and Laith Ali Abdul-Rahaim. "A Proposed of Multimedia Compression System Using Three-Dimensional Transformation." Webology 18, SI05 (October 30, 2021): 816–31. http://dx.doi.org/10.14704/web/v18si05/web18264.

Abstract:
Video compression has become especially important with the increase in data transmitted over communication channels; the size of a video must be reduced without affecting its quality. The process cuts the video stream into frames of specific lengths and converts them into three-dimensional matrices. The proposed compression scheme uses the traditional red-green-blue color space representation and applies a three-dimensional discrete Fourier transform (3D-DFT) or three-dimensional discrete wavelet transform (3D-DWT) to the signal matrix once the video stream has been converted into three-dimensional matrices. The resulting transform coefficients are encoded with the EZW encoder algorithm. The performance of the proposed video compression system is tested against three main criteria: compression ratio (CR), peak signal-to-noise ratio (PSNR), and processing time (PT). Experiments showed high compression efficiency at the required bit rate. The 3D discrete wavelet transform offers a high frame rate with natural spatial resolution and scalability through visual and spatial resolution, along with advantages over current conventional systems in complexity, power consumption, throughput, latency, and storage requirements. All proposed systems were implemented in MATLAB R2020b.
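Two of the evaluation criteria this abstract names, compression ratio (CR) and peak signal-to-noise ratio (PSNR), are standard and easy to reproduce. The following minimal Python sketch is an editorial illustration of how they are typically computed, not the paper's MATLAB code:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two frames of equal shape."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(raw_size_bytes, compressed_size_bytes):
    """CR: how many times smaller the compressed stream is than the raw one."""
    return raw_size_bytes / compressed_size_bytes
```

For 8-bit video, `peak` stays at 255; PSNR values above roughly 35 dB are conventionally read as good reconstruction quality.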
5

Kim, Han-Kil, Sang-Woong Joo, Hun-Hee Kim, and Hoe-Kyung Jung. "3D Video Simulation System Using GPS." Journal of the Korea Institute of Information and Communication Engineering 18, no. 4 (April 30, 2014): 855–60. http://dx.doi.org/10.6109/jkiice.2014.18.4.855.

6

Hasan, Md Mehedi, Md Ariful Islam, Sejuti Rahman, Michael R. Frater, and John F. Arnold. "No-Reference Quality Assessment of Transmitted Stereoscopic Videos Based on Human Visual System." Applied Sciences 12, no. 19 (October 7, 2022): 10090. http://dx.doi.org/10.3390/app121910090.

Abstract:
Provisioning stereoscopic 3D (S3D) video transmission services of admissible quality in a wireless environment is an immense challenge for video service providers. Unlike for 2D videos, a widely accepted no-reference objective model for assessing transmitted 3D videos that appropriately exploits the Human Visual System (HVS) has not yet been developed. Distortions perceived in 2D and 3D videos differ significantly because of the sophisticated manner in which the HVS handles the dissimilarities between the two views. In real-time video transmission, viewers only have the distorted, receiver-end content of the original video acquired through the communication medium. In this paper, we propose a no-reference quality assessment method that can estimate the quality of a stereoscopic 3D video based on the HVS. By evaluating perceptual aspects and correlations of visual binocular impacts in a stereoscopic video, the approach lets the objective quality measure assess impairments similarly to a human observer experiencing the same material. First, the disparity is measured and quantified by a region-based similarity matching algorithm; then, the magnitude of the edge difference is calculated to delimit the visually perceptible areas of an image. Finally, an objective metric is approximated by extracting these significant perceptual image features. Experimental analysis with standard S3D video datasets demonstrates lower computational complexity for the video decoder, and comparison with state-of-the-art algorithms shows the efficiency of the proposed approach for 3D video transmission at different quantization (QP 26 and QP 32) and loss rate (1% and 3% packet loss) parameters, along with the perceptual distortion features.
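The region-based similarity matching used here for disparity belongs to the block-matching family. As a rough illustration, here is an assumed sum-of-absolute-differences (SAD) scanline search, a generic sketch rather than the authors' algorithm:

```python
import numpy as np

def sad_disparity(left, right, y, x, block=3, max_disp=8):
    """Estimate the disparity at pixel (y, x) by sliding a block from the
    left image across the right image along the same scanline and keeping
    the shift with the lowest sum of absolute differences (SAD)."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.int64)
    best_d, best_cost = 0, None
    for d in range(0, max_disp + 1):
        if x - d - h < 0:  # candidate block would leave the image
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.int64)
        cost = np.abs(ref - cand).sum()
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Real implementations add sub-pixel refinement and consistency checks; this sketch only shows the core matching idea.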
7

Zhang, Yizhong, Jiaolong Yang, Zhen Liu, Ruicheng Wang, Guojun Chen, Xin Tong, and Baining Guo. "VirtualCube: An Immersive 3D Video Communication System." IEEE Transactions on Visualization and Computer Graphics 28, no. 5 (May 2022): 2146–56. http://dx.doi.org/10.1109/tvcg.2022.3150512.

8

Zhang, Yingchun, Jianbo Huang, and Siwen Duan. "3D video conversion system based on depth information extraction." MATEC Web of Conferences 232 (2018): 02048. http://dx.doi.org/10.1051/matecconf/201823202048.

Abstract:
3D movies have received more and more attention in recent years. However, making 3D movies requires high investment and is difficult, which restricts their development. Meanwhile, many 2D movie resources already exist, and how to convert them into 3D movies is also a problem. Therefore, this paper proposes a 3D video conversion system based on depth information extraction. The system consists of four parts: segmentation of movie video frame sequences, extraction of frame image depth information, generation of virtual multi-viewpoints, and synthesis of 3D video. The system can effectively extract the depth information of a movie and use it to convert a 2D movie into a 3D movie.
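The virtual multi-viewpoint generation stage in such pipelines is usually a depth-image-based rendering (DIBR) step: each pixel is shifted horizontally in proportion to its depth value. A toy sketch follows, with all parameter names assumed for illustration and not taken from the paper:

```python
import numpy as np

def render_virtual_view(image, depth, max_shift=4):
    """Naive DIBR: shift each pixel left by a disparity proportional to its
    normalized depth value (assuming the map encodes nearness, so larger
    values shift more). Disoccluded pixels are left as zeros (holes)."""
    h, w = depth.shape
    out = np.zeros_like(image)
    shift = np.rint(depth / max(depth.max(), 1) * max_shift).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - shift[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out
```

Production systems follow this warp with hole filling (inpainting) for the disoccluded regions; the sketch keeps only the geometric shift.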
9

Xiao, Jiangjian, Hui Cheng, Feng Han, and Harpreet Sawhney. "Geo-Based Aerial Surveillance Video Processing for Scene Understanding and Object Tracking." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 07 (November 2009): 1285–307. http://dx.doi.org/10.1142/s0218001409007582.

Abstract:
This paper presents an approach to extract semantic layers from aerial surveillance videos for scene understanding and object tracking. The input videos are captured by low flying aerial platforms and typically consist of strong parallax from non-ground-plane structures as well as moving objects. Our approach leverages the geo-registration between video frames and reference images (such as those available from Terraserver and Google satellite imagery) to establish a unique geo-spatial coordinate system for pixels in the video. The geo-registration process enables Euclidean 3D reconstruction with absolute scale, unlike traditional monocular structure from motion where continuous scale estimation over long periods of time is an issue. Geo-registration also enables correlation of video data to other stored information sources such as GIS (Geo-spatial Information System) databases. Beyond the geo-registration and 3D reconstruction aspects, the other key contributions of this paper include: (1) providing a reliable geo-based solution to estimate camera pose for 3D reconstruction, (2) exploiting appearance and 3D shape constraints derived from geo-registered videos for labeling of structures such as buildings, foliage, and roads for scene understanding, and (3) elimination of moving object detection and tracking errors using 3D parallax constraints and semantic labels derived from geo-registered videos. Experimental results on extended-time aerial video data demonstrate the qualitative and quantitative aspects of our work.
10

Yan, Ming, Xin Gang Wang, and Jun Feng Li. "3D Video Transmission System for China Mobile Multimedia Broadcasting." Applied Mechanics and Materials 519-520 (February 2014): 469–72. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.469.

Abstract:
With the popularization of three-dimensional (3D) video on TV, related aspects such as broadcast television network transmission, technology application, and content products will face the challenges of the new technology. Rapid progress has been achieved in China Mobile Multimedia Broadcasting (CMMB) technology, and 3D TV applied to mobile TV will become a bright spot in the future. This paper first introduces the transmission modes of 3D TV, and then gives the technology solution for 3D TV and the upgrade solution for the Electronic Service Guide (ESG) based on CMMB. Test results show that the new system works correctly when the 3D video is encoded with the H.264 standard and the image resolution conforms to standard definition (320x240).
11

Zhebin, Zhang, Zhang Ji'an, Zhang Xuexi, Wang Yizhou, and Gao Wen. "A distributed 2D-to-3D video conversion system." China Communications 10, no. 5 (May 2013): 30–38. http://dx.doi.org/10.1109/cc.2013.6520936.

12

Min, Dongbo, Donghyun Kim, SangUn Yun, and Kwanghoon Sohn. "2D/3D freeview video generation for 3DTV system." Signal Processing: Image Communication 24, no. 1-2 (January 2009): 31–48. http://dx.doi.org/10.1016/j.image.2008.10.009.

13

Wurmlin, Stephan, Edouard Lamboray, Oliver G. Staadt, and Markus H. Gross. "3D Video Recorder: a System for Recording and Playing Free-Viewpoint Video+." Computer Graphics Forum 22, no. 2 (June 2003): 181–93. http://dx.doi.org/10.1111/1467-8659.00659.

14

West, Greg L., Kyoko Konishi, and Veronique D. Bohbot. "Video Games and Hippocampus-Dependent Learning." Current Directions in Psychological Science 26, no. 2 (April 2017): 152–58. http://dx.doi.org/10.1177/0963721416687342.

Abstract:
Research examining the impact of video games on neural systems has largely focused on visual attention and motor control. Recent evidence now shows that video games can also impact the hippocampal memory system. Further, action and 3D-platform video-game genres are thought to have differential impacts on this system. In this review, we examine the specific design elements unique to either action or 3D-platform video games and break down how they could either favor or discourage use of the hippocampal memory system during gameplay. Analysis is based on well-established principles of hippocampus-dependent and non-hippocampus-dependent forms of learning from the human and rodent literature.
15

Mihradi, Sandro, Ferryanto, Tatacipta Dirgantara, and Andi I. Mahyuddin. "Tracking of Markers for 2D and 3D Gait Analysis Using Home Video Cameras." International Journal of E-Health and Medical Communications 4, no. 3 (July 2013): 36–52. http://dx.doi.org/10.4018/jehmc.2013070103.

Abstract:
This work presents the development of an affordable optical motion-capture system that uses home video cameras for 2D and 3D gait analysis. The 2D gait analyzer consists of one camcorder and one PC, while the 3D gait analyzer uses two camcorders, a flash, and two PCs. Both systems make use of 25 fps camcorders, LED markers, and technical computing software to track the motions of markers attached to the human body during walking. In the experiment for the 3D gait analyzer, the two cameras are synchronized using the flash. The recorded videos for both systems are extracted into frames and then converted into binary images, and a bridge morphological operation is applied to unconnected pixels to facilitate the marker detection process. A least-distance method is then employed to track the marker motions, and 3D Direct Linear Transformation is used to reconstruct 3D marker positions. The correlation between length in pixels and in the real world, obtained from the calibration process, is used to reconstruct 2D marker positions. To evaluate the reliability of the 2D and 3D optical motion-capture systems developed in the present work, spatio-temporal and kinematic parameters calculated from the obtained marker positions are qualitatively compared with those from the literature, and the results show good compatibility.
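The least-distance method described for tracking marker motions amounts to greedy nearest-neighbour association of marker centroids between consecutive frames. A minimal sketch of that idea, not the authors' implementation:

```python
def track_markers(prev_positions, curr_positions):
    """Associate each previous marker with its nearest unclaimed detection
    in the current frame (greedy least-distance matching).
    Positions are (x, y) tuples; returns {prev_index: curr_index}."""
    def dist2(a, b):
        # squared Euclidean distance (no sqrt needed for comparison)
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    matches = {}
    used = set()
    for i, p in enumerate(prev_positions):
        best_j = min(
            (j for j in range(len(curr_positions)) if j not in used),
            key=lambda j: dist2(p, curr_positions[j]),
        )
        matches[i] = best_j
        used.add(best_j)
    return matches
```

Greedy matching works well at 25 fps because markers move little between frames; globally optimal assignment (e.g. the Hungarian algorithm) would be the next step if markers cross paths.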
16

Yu, Jia Xi, and Wen Hui Zhang. "Design of 3D-TV Horizontal Parallax Obtaining System Based on FPGA." Applied Mechanics and Materials 401-403 (September 2013): 1834–38. http://dx.doi.org/10.4028/www.scientific.net/amm.401-403.1834.

Abstract:
In this paper, a design for an FPGA-based 3D-TV horizontal parallax acquisition system is presented. The system receives stereoscopic video through an HD-SDI receiver (GS2971) and outputs a horizontal parallax video to a digital TV through an HDMI transmitter (SiI9134). The FPGA plays the central role of converting the stereoscopic video into the horizontal parallax video, and a microcontroller serves as the control center of the entire system. The system obtains the horizontal parallax of stereoscopic video in real time and helps the stereoscopic program producer control the horizontal parallax of a 3D program.
17

Zhang, Qiuwen, Shuaichao Wei, and Rijian Su. "Low-Complexity Texture Video Coding Based on Motion Homogeneity for 3D-HEVC." Scientific Programming 2019 (January 15, 2019): 1–13. http://dx.doi.org/10.1155/2019/1574081.

Abstract:
The three-dimensional extension of high efficiency video coding (3D-HEVC) is an emerging international video compression standard for multiview video system applications. As in HEVC, a computationally expensive mode decision is performed over all depth levels and prediction modes to select the least rate-distortion (RD) cost for each coding unit (CU). In addition, new tools and intercomponent prediction techniques have been introduced in 3D-HEVC to improve the compression efficiency of multiview texture videos. These techniques, despite achieving the highest texture video coding efficiency, involve extremely complex procedures, limiting 3D-HEVC encoders in practical applications. In this paper, a fast texture video coding method based on motion homogeneity is proposed to reduce 3D-HEVC computational complexity. Because the multiview texture videos represent the same scene at the same instant, and the optimal CU depth level and prediction modes are highly content dependent, it is not efficient to test all depth levels and prediction modes in 3D-HEVC. The motion homogeneity model of a CU is first studied according to the motion vectors and prediction modes of the corresponding CUs. Based on this model, we present three efficient texture video coding approaches: fast depth level range determination, early SKIP/Merge mode decision, and adaptive motion search range adjustment. Experimental results demonstrate that the proposed overall method saves 56.6% of encoding time with only trivial coding efficiency degradation.
18

Chen, Hanqing, Chunyan Hu, Feifei Lee, Chaowei Lin, Wei Yao, Lu Chen, and Qiu Chen. "A Supervised Video Hashing Method Based on a Deep 3D Convolutional Neural Network for Large-Scale Video Retrieval." Sensors 21, no. 9 (April 29, 2021): 3094. http://dx.doi.org/10.3390/s21093094.

Abstract:
Recently, with the popularization of camera tools such as mobile phones and the rise of various short-video platforms, many videos are being uploaded to the Internet at all times, so a video retrieval system with fast retrieval speed and high precision is very necessary. Content-based video retrieval (CBVR) has therefore aroused the interest of many researchers. A typical CBVR system mainly contains two essential parts: video feature extraction and similarity comparison. Feature extraction of video is very challenging; previous video retrieval methods are mostly based on extracting features from single video frames, resulting in the loss of temporal information in the videos. Hashing methods are extensively used in multimedia information retrieval owing to their retrieval efficiency, but most of them are currently applied only to image retrieval. To solve these problems in video retrieval, we build an end-to-end framework called deep supervised video hashing (DSVH), which employs a 3D convolutional neural network (CNN) to obtain spatial-temporal features of videos, then trains a set of hash functions by supervised hashing to transfer the video features into binary space and obtain compact binary codes of videos. Finally, we use triplet loss for network training. We conduct extensive experiments on three public video datasets, UCF-101, JHMDB and HMDB-51, and the results show that the proposed method has advantages over many state-of-the-art video retrieval methods. Compared with the DVH method, the mAP value on the UCF-101 dataset is improved by 9.3%, and the minimum improvement, on the JHMDB dataset, is 0.3%. At the same time, we also demonstrate the stability of the algorithm on the HMDB-51 dataset.
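The retrieval side of any hashing approach, binarizing real-valued features and ranking by Hamming distance, is independent of the particular network that produced the features. A small illustrative sketch of that step (generic, not DSVH itself):

```python
import numpy as np

def binarize(features):
    """Sign-threshold real-valued features into a {0,1} hash code."""
    return (features > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two equal-length binary codes."""
    return int(np.count_nonzero(a != b))

def rank_by_hamming(query_code, database_codes):
    """Indices of database codes sorted from nearest to farthest."""
    dists = [hamming(query_code, c) for c in database_codes]
    return sorted(range(len(dists)), key=lambda i: dists[i])
```

Because Hamming distance reduces to XOR-and-popcount, ranking millions of compact codes is far cheaper than comparing the underlying real-valued features, which is the main appeal of hashing-based retrieval.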
19

Li, Hao Jun, and Yue Sheng Zhu. "A New Approach of 2D-to-3D Video Conversion and Its Implementation on Embedded System." Applied Mechanics and Materials 58-60 (June 2011): 2552–57. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.2552.

Abstract:
Rather than relying on the stereo vision principle and the relationship between the camera position and the video scene objects used in most current 2D-to-3D video conversion algorithms, a new prediction method for video depth information based on video frame differences is proposed and implemented on an embedded platform in this paper. 3D stereoscopic video sequences are generated from the original 2D video sequences and the depth information. Theoretical analysis and experimental results show that the proposed method is more feasible and efficient than current algorithms.
20

Vasudevan, Vani. "3D projection mapping for facial cosmetic surgery by creating a 3D video." International Journal of Engineering & Technology 7, no. 2-1 (March 23, 2018): 409. http://dx.doi.org/10.14419/ijet.v7i2.9520.

Abstract:
Due to the rapid growth of technology, countless systems are designed every day to facilitate human lives. In this paper, the projection mapping method is used to help users who need facial cosmetic surgery to make the right decisions by providing them with a unique system that can be used in cosmetic surgery clinics. The main purpose of the system is to make a projection mapping on a face model that presents the changes to be applied to the facial features, together with some facial expressions; as a result, the user will be able to compare his/her face before and after the surgery and make the decision with confidence. A suitable web-based UI is created to make it easier for the user, enabling him/her to choose the needed surgery and produce his/her own projection mapping video.
21

Estrela, Vania Vieira, Maria Aparecida de Jesus, Jenice Aroma, Kumudha Raimond, Sandro R. Fernandes, Nikolaos Andreopoulos, Edwiges G. H. Grata, Andrey Terziev, Ricardo Tadeu Lopes, and Anand Deshpande. "Motion Estimation Role in the Context of 3D Video." International Journal of Multimedia Data Engineering and Management 12, no. 3 (July 2021): 1–23. http://dx.doi.org/10.4018/ijmdem.291556.

Abstract:
The 3D end-to-end video system (i.e., 3D acquisition, processing, streaming, error concealment, virtual/augmented reality handling, content retrieval, rendering, and display) still needs improvements. This paper scrutinizes in depth the Motion Compensation/Motion Estimation (MCME) impact in 3D Video (3DV) from the end-to-end users' point of view. The concepts of Motion Vectors (MVs) and disparities are very close, and they help to ameliorate all stages of the end-to-end 3DV system. The High-Efficiency Video Coding (HEVC) codec standard is taken into consideration to evaluate the emergent trend towards computational treatment in the Cloud whenever possible. The tight bond between movement and depth affects 3D information recovery from these cues and helps optimize the performance of algorithms and standards across the 3D system. Still, 3DV lacks support for engaging interactive 3DV services. Better bit allocation strategies also ameliorate all 3D pipeline stages while remaining attentive to Cloud-based deployments for 3D streaming.
22

Zhu, Ge, Huili Zhang, Yirui Jiang, Juan Lei, Linqing He, and Hongwei Li. "Dynamic Fusion Technology of Mobile Video and 3D GIS: The Example of Smartphone Video." ISPRS International Journal of Geo-Information 12, no. 3 (March 14, 2023): 125. http://dx.doi.org/10.3390/ijgi12030125.

Abstract:
Mobile videos contain a large amount of data, where the information of interest to the user can be either discrete or distributed. This paper introduces a method for fusing 3D geographic information systems (GIS) and video image textures. For the dynamic fusion of video in 3D GIS, where the position and pose angle of the filming device change from moment to moment, it integrates GIS 3D visualization, pose resolution, and motion interpolation, and proposes a projection texture mapping method that constructs a dynamic depth camera to achieve dynamic fusion. The accuracy and time efficiency of gradient descent and complementary filtering algorithms under different reference systems are analyzed mainly by quantitative analysis, and the effect of dynamic fusion is analyzed using the playback delay and rendering frame rate of video on 3D GIS as indicators. The experimental results show that the gradient descent method under the Attitude and Heading Reference System (AHRS) is more suitable for resolving smartphone attitude and can keep the root mean square error of the attitude solution within 2°; the delay of video playback on 3D GIS is within 29 ms, and the rendering frame rate is 34.9 fps, which meets the requirements of the minimum resolution of the human eye.
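The complementary filtering baseline compared in this paper can be illustrated in one dimension: blend the integrated gyroscope rate (reliable short-term) with the accelerometer tilt angle (reliable long-term). A sketch with an assumed blend factor, not the paper's parameters:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One-axis attitude update: trust the gyro at high frequency and the
    accelerometer at low frequency. alpha is the blend factor (an assumed
    value; it is tuned per device, not taken from the paper).
    Angles in degrees, gyro_rate in deg/s, dt in seconds."""
    gyro_angle = angle_prev + gyro_rate * dt  # integrate the gyro rate
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```

With alpha near 1, gyro drift is corrected slowly but accelerometer noise (e.g. from hand shake) barely leaks into the estimate; gradient descent methods such as Madgwick's filter generalize this trade-off to full 3D quaternion attitude.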
23

Sakamoto, Naohisa, Takeshi Takai, Koji Koyamada, Takashi Matsuyama, and Yoshihito Kikkawa. "Development of 3D Video Display System Using Omnidirectional Display." Journal of the Visualization Society of Japan 23, Supplement1 (2003): 399–402. http://dx.doi.org/10.3154/jvs.23.supplement1_399.

24

Hefeeda, Mohamed. "Spider: A System for Finding Illegal 3D Video Copies." Qatar Foundation Annual Research Forum Proceedings, no. 2011 (November 2011): CSO10. http://dx.doi.org/10.5339/qfarf.2011.cso10.

25

Morvan, Yannick, Dirk Farin, and Peter De With. "System architecture for free-viewpoint video and 3D-TV." IEEE Transactions on Consumer Electronics 54, no. 2 (May 2008): 925–32. http://dx.doi.org/10.1109/tce.2008.4560180.

26

Chung, Young-uk, Yong-Hoon Choi, Suwon Park, and Hyukjoon Lee. "A QoS Aware Resource Allocation Strategy for 3D A/V Streaming in OFDMA Based Wireless Systems." Scientific World Journal 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/419236.

Abstract:
Three-dimensional (3D) video is expected to be a “killer app” for OFDMA-based broadband wireless systems. The main limitation of 3D video streaming over a wireless system is the shortage of radio resources due to the large size of the 3D traffic. This paper presents a novel resource allocation strategy to address this problem. In the paper, the video-plus-depth 3D traffic type is considered. The proposed resource allocation strategy focuses on the relationship between 2D video and the depth map, handling them with different priorities. It is formulated as an optimization problem and is solved using a suboptimal heuristic algorithm. Numerical results show that the proposed scheme provides a better quality of service compared to conventional schemes.
27

Siddique, Arslan, Francesco Banterle, Massimiliano Corsini, Paolo Cignoni, Daniel Sommerville, and Chris Joffe. "MoReLab: A Software for User-Assisted 3D Reconstruction." Sensors 23, no. 14 (July 17, 2023): 6456. http://dx.doi.org/10.3390/s23146456.

Abstract:
We present MoReLab, a tool for user-assisted 3D reconstruction. This reconstruction requires an understanding of the shapes of the desired objects. Our experiments demonstrate that existing Structure from Motion (SfM) software packages fail to estimate accurate 3D models in low-quality videos due to several issues such as low resolution, featureless surfaces, low lighting, etc. In such scenarios, which are common for industrial utility companies, user assistance becomes necessary to create reliable 3D models. In our system, the user first needs to add features and correspondences manually on multiple video frames. Then, classic camera calibration and bundle adjustment are applied. At this point, MoReLab provides several primitive shape tools such as rectangles, cylinders, curved cylinders, etc., to model different parts of the scene and export 3D meshes. These shapes are essential for modeling industrial equipment whose videos are typically captured by utility companies with old video cameras (low resolution, compression artifacts, etc.) and in disadvantageous lighting conditions (low lighting, torchlight attached to the video camera, etc.). We evaluate our tool on real industrial case scenarios and compare it against existing approaches. Visual comparisons and quantitative results show that MoReLab achieves superior results with regard to other user-interactive 3D modeling tools.
28

Chen, Hao, Ya Ping Hu, Jin Gyi Yan, Jiong Cong Chen, and Juan Hu. "Research on the Substation 3D Real Scene Surveillance." Advanced Materials Research 1008-1009 (August 2014): 742–47. http://dx.doi.org/10.4028/www.scientific.net/amr.1008-1009.742.

Abstract:
The reasonable distribution of video monitoring points in a substation is one of the technical problems to be solved in engineering. We combined a camera calibration algorithm with the practical layout of substation video monitoring points, and proposed a nonlinear camera calibration method for the external parameters based on the 3D real scene, suited to substation video surveillance systems. The optimal video monitoring points are determined from the relationship between the video produced by the camera movement and the visual scope displayed in the 3D real scene, and verified by locating each object through its boundary geometric elements shown in the pictures. The method has been proved applicable by video simulation analysis of the 110kV main transformer monitoring points, and has been used in the engineering design of the Guangdong Power Grid substation video surveillance system.
29

Sedik, Ahmed, Mohamed Marey, and Hala Mostafa. "An Adaptive Fatigue Detection System Based on 3D CNNs and Ensemble Models." Symmetry 15, no. 6 (June 16, 2023): 1274. http://dx.doi.org/10.3390/sym15061274.

Abstract:
Due to the widespread issue of road accidents, researchers have been drawn to investigate strategies to prevent them. One major contributing factor to these accidents is driver fatigue resulting from exhaustion. Various approaches have been explored to address this issue, with machine and deep learning proving to be effective in processing images and videos to detect asymmetric signs of fatigue, such as yawning, facial characteristics, and eye closure. This study proposes a multistage system utilizing machine and deep learning techniques. The first stage is designed to detect asymmetric states, including tiredness and non-vigilance as well as yawning. The second stage is focused on detecting eye closure. The machine learning approach employs several algorithms, including Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Multi-layer Perceptron (MLP), Decision Tree (DT), Logistic Regression (LR), and Random Forest (RF). Meanwhile, the deep learning approach utilizes 2D and 3D Convolutional Neural Networks (CNNs). The architectures of the proposed deep learning models were designed after several trials, and their parameters have been selected to achieve optimal performance. The effectiveness of the proposed methods is evaluated using video and image datasets, where the video dataset is classified into three states: alert, tired, and non-vigilant, while the image dataset is classified based on four facial symptoms, including open or closed eyes and yawning. A more robust system is achieved by combining the image and video datasets, resulting in multiple classes for detection. Simulation results demonstrate that the 3D CNN proposed in this study outperforms the other methods, with detection accuracies of 99 percent, 99 percent, and 98 percent for the image, video, and mixed datasets, respectively. Notably, this achievement surpasses the highest accuracy of 97 percent found in the literature, suggesting that the proposed methods for detecting drowsiness are indeed effective solutions.
30

Saputra, Ade, Dandi Sunardi, Ardi Wijaya, and Agung Kharismah Hidayah. "Visualisasi SD N 47 Kota Bengkulu Berbasis 3D Dengan Menggunakan Teknik Chrome Key Dan Sketchup." JURNAL MEDIA INFOTAMA 19, no. 1 (April 19, 2023): 197–204. http://dx.doi.org/10.37676/jmi.v19i1.3684.

Abstract:
A profile video is an effective electronic medium for conveying information when introducing a school. Through this visual medium, all information can be easily digested by all segments of society. Sekolah Dasar Negeri 47 Bengkulu City (State Elementary School 47) needs promotional media aimed at the community to increase the admission of new students. On this basis, a profile video of Sekolah Dasar Negeri 47 Bengkulu City was made for promotion. Sketchup and Enscape software were used to simplify the creation of the profile video. The purpose of the research is to make a 3D animated video as a promotional medium for Sekolah Dasar Negeri 47 Bengkulu City. The research method follows Luther Sutopo's Multimedia Development Life Cycle (MDLC) model for multimedia system development, and the result serves as a promotional medium through a 3D animated video made with the Sketchup application. The respondents' feasibility test received a response of 87.5%, which means the resulting video profile is liked by the general public.
APA, Harvard, Vancouver, ISO, and other styles
31

Thanh Le, Tuan, JongBeom Jeong, and Eun-Seok Ryu. "Efficient Transcoding and Encryption for Live 360 CCTV System." Applied Sciences 9, no. 4 (February 21, 2019): 760. http://dx.doi.org/10.3390/app9040760.

Full text
Abstract:
In recent years, with the rapid development of surveillance, closed-circuit television (CCTV) has become an indispensable element of security systems. CCTV systems designed for video compression and encryption need improvement to achieve the best performance at different security levels. In particular, the advent of 360° video makes CCTV promising for surveillance without any blind areas. Compared to current systems, 360° CCTV requires large bandwidth and low latency to run smoothly, so the system must be made more robust to improve performance. Video transcoding, that is, converting codecs, changing bitrates, or resizing the resolution, is an essential process for transmitting 360° videos, and high-performance transcoding is one of the key factors for a real-time CCTV stream. Additionally, securing the video streams from cameras to endpoints is an important priority in CCTV research. In this paper, a real-time transcoding system designed with the ARIA block cipher encryption algorithm is presented. Experimental results show that the proposed method achieved approximately a 200% speedup over libx265 in FFmpeg for the transcoding task, and that it can handle multiple transcoding sessions simultaneously at high performance for both a live 360° CCTV system and existing 2D/3D CCTV systems.
APA, Harvard, Vancouver, ISO, and other styles
32

Tai, Yong Hang, Jun Sheng Shi, Zai Qing Chen, Qiong Li, and Bin Zhuo. "Application of AM-OLED Micro-Display in Stereo Display System." Advanced Materials Research 936 (June 2014): 2209–13. http://dx.doi.org/10.4028/www.scientific.net/amr.936.2209.

Full text
Abstract:
OLED micro-displays are widely applied in the HMD stereo display field. Based on dual 0.5-inch 800×600 high-resolution AM-OLED displays, we propose a 3D circuit design scheme that uses a PC's VGA interface as the video input and a PIC18LF2550 as the MCU. Following the principle of human binocular disparity, the PC's stereo video source displays odd and even frames simultaneously on the two AM-OLED panels, which implements the 3D function. The display system was tested by playing a left-right format stereoscopic video source from the PC, and it performed well, verifying the effectiveness of the proposed scheme.
APA, Harvard, Vancouver, ISO, and other styles
33

Yun, Kugjin, Won-Sik Cheong, Jinyoung Lee, and Kyuheon Kim. "Design and Implementation of Hybrid Network Associated 3D Video Broadcasting System." Journal of Broadcast Engineering 19, no. 5 (September 30, 2014): 687–98. http://dx.doi.org/10.5909/jbe.2014.19.5.687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Fu, Yong Hua. "The Dynamic Multi-Angle Video Capture System Study Based on Glass-Free 3D." Applied Mechanics and Materials 740 (March 2015): 714–17. http://dx.doi.org/10.4028/www.scientific.net/amm.740.714.

Full text
Abstract:
This paper implements a dynamic multi-angle glass-free 3D video system, which places high demands on content acquisition for playback. The system uses a camera array to capture content from multiple shooting angles; after decoding, correction, and synthesis, it produces continuous multi-angle video images to achieve glass-free 3D viewing. The driver is implemented in Java. Experimental results show that camera capture problems such as distortion and image misalignment are solved, and dynamic full-HD video with four viewing angles is realized.
APA, Harvard, Vancouver, ISO, and other styles
35

Lian, Ping Ping. "A Novel USB3.0 High Definition 3D Video Camera Based on ARM." Advanced Materials Research 1037 (October 2014): 474–77. http://dx.doi.org/10.4028/www.scientific.net/amr.1037.474.

Full text
Abstract:
This paper focuses on the design and implementation of an innovative 3D video camera. Traditional 3D cameras use the USB 2.0 bus or similar buses and are not capable of transferring high-definition (HD) video because of bus speed limitations. Utilizing the newest USB 3.0 bus and high-definition image sensors, this paper designs an HD 3D camera that solves this problem. To maximize the data transfer speed, it employs an ARM controller with a 200 MHz operating frequency, which at the same time guarantees real-time system responses. Field trials demonstrate that the HD 3D camera described in the article is feasible and of considerable research value.
APA, Harvard, Vancouver, ISO, and other styles
36

Roganov, Vladimir, Michail Miheev, Elvira Roganova, Olga Grintsova, and Jurijs Lavendels. "Modernisation of Endoscopic Equipment Using 3D Indicators." Applied Computer Systems 23, no. 1 (May 1, 2018): 75–80. http://dx.doi.org/10.2478/acss-2018-0010.

Full text
Abstract:
The development of new software to improve the operation of modernised and newly developed technological facilities in different sectors of the national economy requires a systematic approach. For example, the use of video recordings obtained during operations with endoscopic equipment allows the work of doctors to be monitored. A minor change to the software in use allows additionally processed video fragments to be used to create training complexes. The authors of the present article took part in the development of many educational software and hardware systems. The first such system was the "Contact" system, developed in the eighties of the last century at Riga Polytechnic Institute. Later on, car simulators, airplane simulators, walking excavator simulators and the optical software-hardware training system "Three-Dimensional Medical Atlas" were developed. Analysis of various simulators and training systems showed that the computers used in them could not by themselves constitute a learning system. When creating a learning system, many factors must be considered so that the student does not acquire false skills. The goal of the study is to analyse the training systems created for the professional training of medical personnel working with endoscopic equipment, in particular equipment fitted with 3D indicators.
APA, Harvard, Vancouver, ISO, and other styles
37

Oikonomou, Andreas, Saad Amin, Raouf N. G. Naguib, Alison Todman, and Hassanein Al-Omishy. "Interactive Reality System (IRiS): Interactive 3D Video Playback in Multimedia Applications." Journal of Advanced Computational Intelligence and Intelligent Informatics 10, no. 2 (March 20, 2006): 145–49. http://dx.doi.org/10.20965/jaciii.2006.p0145.

Full text
Abstract:
We developed a novel interactive video recording and playback technique for biomedical multimedia training that is also applicable to other areas of multimedia. The Interactive Reality System (IRiS) improves on the video playback used in most multimedia applications by controlling not only time, as in conventional video playback, but also space. A prototype is being tested and evaluated for multimedia training in breast self-examination (BSE). We discuss the advantages of IRiS and compare it to similar approaches such as QuickTime and iPIX. We detail the design of IRiS, its development, refinement, final implementation, and evaluation, some projected plans, and its uses in other biomedical and multimedia training scenarios.
APA, Harvard, Vancouver, ISO, and other styles
38

Obayashi, Mizuki, Shohei Mori, Hideo Saito, Hiroki Kajita, and Yoshifumi Takatsume. "Multi-View Surgical Camera Calibration with None-Feature-Rich Video Frames: Toward 3D Surgery Playback." Applied Sciences 13, no. 4 (February 14, 2023): 2447. http://dx.doi.org/10.3390/app13042447.

Full text
Abstract:
Mounting multi-view cameras within a surgical light is a practical choice since some cameras are expected to observe surgery with few occlusions. Such multi-view videos must be reassembled for easy reference. A typical way is to reconstruct the surgery in 3D. However, the geometrical relationship among cameras is changed because each camera independently moves every time the lighting is reconfigured (i.e., every time surgeons touch the surgical light). Moreover, feature matching between surgical images is potentially challenging because of missing rich features. To address the challenge, we propose a feature-matching strategy that enables robust calibration of the multi-view camera system by collecting a set of a small number of matches over time while the cameras stay stationary. Our approach would enable conversion from multi-view videos to a 3D video. However, surgical videos are long and, thus, the cost of the conversion rapidly grows. Therefore, we implement a video player where only selected frames are converted to minimize time and data until playbacks. We demonstrate that sufficient calibration quality with real surgical videos can lead to a promising 3D mesh and a recently emerged 3D multi-layer representation. We reviewed comments from surgeons to discuss the differences between those 3D representations on an autostereoscopic display with respect to medical usage.
APA, Harvard, Vancouver, ISO, and other styles
39

Lee, Dohoon, Yoonmo Yang, and Byung Tae Oh. "Boundary Artifacts Reduction in View Synthesis of 3D Video System." Journal of Broadcast Engineering 21, no. 6 (November 30, 2016): 878–88. http://dx.doi.org/10.5909/jbe.2016.21.6.878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Gürler, C. Goktug, and Murat Tekalp. "Peer-to-peer system design for adaptive 3D video streaming." IEEE Communications Magazine 51, no. 5 (May 2013): 108–14. http://dx.doi.org/10.1109/mcom.2013.6515054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Tsai, Sung-Fang, Chao-Chung Cheng, Chung-Te Li, and Liang-Gee Chen. "A real-time 1080p 2D-to-3D video conversion system." IEEE Transactions on Consumer Electronics 57, no. 2 (May 2011): 915–22. http://dx.doi.org/10.1109/tce.2011.5955240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hashimoto, Hideyuki, Yuki Fujibayashi, and Hiroki Imamura. "3D Video Communication System by Using Kinect and Head Mounted Displays." International Journal of Image and Graphics 15, no. 02 (April 2015): 1540003. http://dx.doi.org/10.1142/s0219467815400033.

Full text
Abstract:
Existing video communication systems, used for private business or video teleconferencing, show only part of a user's body on the display. This lacks realistic sensation, because only part of the body is shown and the other users appear in 2D, making users feel they are communicating with people far away. Furthermore, although these systems have file transfer functions for sending and receiving data, they do not offer intuitive manipulation: they rely on a mouse or a touch display only. To solve these problems, we propose a 3D communication system based on Kinect and a head-mounted display (HMD) that provides users with realistic sensation and intuitive manipulation. The system shows the whole body of each user on the HMD, as if they were in the same room, through 3D reconstruction. In future work it will also enable users to transfer and share information through intuitive manipulation of augmented reality (AR) objects directed toward other users. The result of this paper is a system that extracts the human body using Kinect, reconstructs the extracted body on the HMD, and recognizes the users' hands so that AR objects can be manipulated by hand.
APA, Harvard, Vancouver, ISO, and other styles
43

Hidayat, Riyan, Hendri Gunawan, and Diki Susandi. "PEMBUATAN VIDEO PROFIL PERUSAHAAN BERBASIS ANIMASI 3D DI PT. KRAKATAU INSAN MANDIRI." Jurnal Sistem Informasi dan Informatika (Simika) 2, no. 1 (February 21, 2019): 64–80. http://dx.doi.org/10.47080/simika.v2i1.353.

Full text
Abstract:
Multimedia, as a medium of communication, information delivery, and promotion, is one of the most popular technology fields. Being flexible, attractive, and engaging to the human senses, multimedia can affect people visually, aurally, and through touch simultaneously, allowing consumers to digest the message it carries more fully. One product that benefits from multimedia to highlight its strengths is the company profile. In this case, PT. Krakatau Insan Mandiri seeks to introduce itself clearly and attractively to consumers through a 3D-animation-based company profile video. The benefits are documentation and promotional media with information about the company profile of PT. Krakatau Insan Mandiri. In making this company profile video, the author used the following methodology: literature study, system analysis and design, system implementation, system testing, and documentation.
APA, Harvard, Vancouver, ISO, and other styles
44

Kadosh, Michael, and Yitzhak Yitzhaky. "3D Object Detection via 2D Segmentation-Based Computational Integral Imaging Applied to a Real Video." Sensors 23, no. 9 (April 22, 2023): 4191. http://dx.doi.org/10.3390/s23094191.

Full text
Abstract:
This study aims to achieve accurate three-dimensional (3D) localization of multiple objects in a complicated scene using passive imaging. It is challenging, as it requires accurate localization of the objects in all three dimensions given recorded 2D images. An integral imaging system captures the scene from multiple angles and is able to computationally produce blur-based depth information about the objects in the scene. We propose a method to detect and segment objects in a 3D space using integral-imaging data obtained by a video camera array. Using objects’ two-dimensional regions detected via deep learning, we employ local computational integral imaging in detected objects’ depth tubes to estimate the depth positions of the objects along the viewing axis. This method analyzes object-based blurring characteristics in the 3D environment efficiently. Our camera array produces an array of multiple-view videos of the scene, called elemental videos. Thus, the proposed 3D object detection applied to the video frames allows for 3D tracking of the objects with knowledge of their depth positions along the video. Results show successful 3D object detection with depth localization in a real-life scene based on passive integral imaging. Such outcomes have not been obtained in previous studies using integral imaging; mainly, the proposed method outperforms them in its ability to detect the depth locations of objects that are in close proximity to each other, regardless of the object size. This study may contribute when robust 3D object localization is desired with passive imaging, but it requires a camera or lens array imaging apparatus.
APA, Harvard, Vancouver, ISO, and other styles
45

Ali, Sharifnezhad, Abdollahzadekan Mina, Shafieian Mehdi, and Sahafnejad-Mohammadi Iman. "C3D data based on 2-dimensional images from video camera." Annals of Biomedical Science and Engineering 5, no. 1 (January 13, 2021): 001–5. http://dx.doi.org/10.29328/journal.abse.1001010.

Full text
Abstract:
Three-dimensional (3D) human musculoskeletal models are based on motion analysis methods and can be obtained with particular motion capture systems that export 3D data in the coordinate 3D (C3D) format. Specialized cameras and specific software are essential for analyzing the data; this equipment is quite expensive, and using it is time-consuming. Because of these problems, this research uses ordinary video cameras and open-source systems to obtain 3D data and create C3D files. By capturing movements with two video cameras, marker coordinates are obtainable using Skill-Spector. MATLAB functions were used to create C3D data from the 3D coordinates of the body points. The subject was captured simultaneously with both the Cortex system and the two video cameras during each validation test. The mean correlation coefficient between the datasets is 0.7. Pending a more detailed comparison, this method can be used as an alternative method for motion analysis. The C3D data collection presented in this research is more accessible and cost-efficient than other systems, as only two cameras are used.
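The two-camera approach described above rests on classical linear (DLT) triangulation: each view contributes two linear constraints on the unknown 3D marker position. A minimal numpy sketch with synthetic projection matrices (illustrative only, not the authors' actual pipeline) shows the computation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices
    x1, x2: (u, v) pixel coordinates of the same marker in each view
    returns: (X, Y, Z) in the world frame
    """
    # Each view gives two rows of the homogeneous system A @ X = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic pinhole cameras, the second shifted along X (a toy setup)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0, 1.0])
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [0.5, 0.2, 4.0]
```

With real footage the projection matrices would come from calibrating the two video cameras, and noise in the 2D marker detections would make the recovered point approximate rather than exact.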
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Zhicheng, and Lulu Ding. "3D Video Analysis and Its Application in Developmental and Educational Psychology Teaching." Discrete Dynamics in Nature and Society 2022 (May 14, 2022): 1–9. http://dx.doi.org/10.1155/2022/2551272.

Full text
Abstract:
With the rapid development of information and network technology and the new requirements of education in the new era, 3D video technology has been used more and more widely in the field of education and teaching. Educational psychology plays a positive role in education: applying its theories to classroom teaching is not only conducive to a good teacher-student relationship but also helps teachers understand students' psychological characteristics and learning processes, greatly improving the quality of teaching. Therefore, based on 3D video analysis technology, this article observes and codes teaching videos, analyzing the verbal interaction between teachers and students in terms of frequency, total duration, and the duration of single exchanges, and explores the commonalities and differences in the teachers' teaching behavior across the video cases. From the perspective of learner psychology, a model of teachers' teaching behavior in MOOC teaching videos is constructed. From the perspectives of the coding system and learner psychology, the paper then proposes strategies for improving teachers' teaching behavior in MOOC teaching videos in China. The results show that applying the 3D video analysis method and educational psychology to student education can improve students' work efficiency.
APA, Harvard, Vancouver, ISO, and other styles
47

Reino, Anthony J., William Lawson, Baxter J. Garcia, and Robert J. Greenstein. "Three Dimensional Video Imaging for Endoscopic Sinus Surgery and Diagnosis." American Journal of Rhinology 9, no. 4 (July 1995): 197–202. http://dx.doi.org/10.2500/105065895781873746.

Full text
Abstract:
Technological advances in video imaging over the last decade have resulted in remarkable additions to the armamentarium of instrumentation for the otolaryngologist. The use of video cameras and computer generated imaging in the operating room and office is invaluable for documentation and teaching purposes. Despite the obvious advantages of these systems, problems are evident, the most serious of which include image distortion and inability to judge depth of field. For more than 6 decades 3D imaging has been neither technically nor commercially successful. Reasons include alignment difficulties and image distortion. The result is “visual fatigue,” usually in about 15 minutes. At its extreme, this may be characterized by headache, nausea, and even vomiting. In this study, we employed the first 3D video imager to electronically manipulate a single video source to produce 3D images; therefore, neither alignment nor image distortions were produced. Of interest to the clinical surgeon, “visual fatigue” does not seem to occur; however, with prolonged procedures (greater than 2 hours) there exists the potential for physical intolerance for some individuals. This is the first unit that is compatible with any rigid or flexible videoendoscopic system and the small diameter endoscopes available for endoscopic sinus surgery. Moreover, prerecorded 2D tapes may be viewed in 3D on an existing VCR. The 3D image seems to provide enhanced anatomic awareness with less image distortion. We have found this system to be optically superior to the 2D video imagers currently available.
APA, Harvard, Vancouver, ISO, and other styles
48

Xu, Li Hong, Lu Lin Yang, and Rui Hua Wei. "Design of a Greenhouse Visualization System Based on Cloud Computing and Android System." Applied Mechanics and Materials 519-520 (February 2014): 1455–60. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.1455.

Full text
Abstract:
This visualization system is aimed at collecting and storing environmental data and crop growth data so as to establish the relationship between them. To solve the problem of big data storage, this paper puts forward a new system architecture, "Data collect + Cloud + Android", which takes advantage of a cloud platform to store greenhouse environmental data and 3D crop growth data. Greenhouse environmental data is automatically uploaded to the cloud platform, and an Android smartphone exchanges data with it. The greenhouse visualization system includes 3D visualization of greenhouse parameters, a 3D crop model, a traceability system, and in-greenhouse video. During testing, the system ran smoothly with no data packet loss.
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Jaehyun, Sungjae Ha, Philippe Gentet, Leehwan Hwang, Soonchul Kwon, and Seunghyun Lee. "A Novel Real-Time Virtual 3D Object Composition Method for 360° Video." Applied Sciences 10, no. 23 (December 4, 2020): 8679. http://dx.doi.org/10.3390/app10238679.

Full text
Abstract:
As highly immersive virtual reality (VR) content, 360° video allows users to observe all viewpoints within the desired direction from the position where the video is recorded. In 360° video content, virtual objects are inserted into recorded real scenes to provide a higher sense of immersion. These techniques are called 3D composition. For a realistic 3D composition in a 360° video, it is important to obtain the internal (focal length) and external (position and rotation) parameters from a 360° camera. Traditional methods estimate the trajectory of a camera by extracting the feature point from the recorded video. However, incorrect results may occur owing to stitching errors from a 360° camera attached to several high-resolution cameras for the stitching process, and a large amount of time is spent on feature tracking owing to the high-resolution of the video. We propose a new method for pre-visualization and 3D composition that overcomes the limitations of existing methods. This system achieves real-time position tracking of the attached camera using a ZED camera and a stereo-vision sensor, and real-time stabilization using a Kalman filter. The proposed system shows high time efficiency and accurate 3D composition.
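The real-time stabilization mentioned above uses a Kalman filter. A minimal one-axis constant-velocity sketch in numpy (hypothetical noise settings, not the paper's actual filter) shows the predict/update cycle applied to a noisy camera position track:

```python
import numpy as np

def kalman_smooth(measurements, dt=1.0, q=1e-3, r=0.05):
    """Constant-velocity Kalman filter over a 1D position track.

    measurements: noisy position readings (one axis of the camera track)
    q, r: process / measurement noise variances (illustrative values)
    returns: filtered positions
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # Predict: propagate state and covariance forward one step
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend the prediction with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

# Smooth a noisy linear camera trajectory
np.random.seed(0)
t = np.arange(50)
noisy = 0.1 * t + np.random.normal(0, 0.05, 50)
smooth = kalman_smooth(noisy)
```

In the paper's setting, the same cycle would run per frame on the tracked camera pose, trading a little lag for a much steadier composited 3D object.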
APA, Harvard, Vancouver, ISO, and other styles
50

Kitahara, Itaru, and Yuichi Ohta. "Scalable 3D Representation for 3D Video in a Large-Scale Space." Presence: Teleoperators and Virtual Environments 13, no. 2 (April 2004): 164–77. http://dx.doi.org/10.1162/1054746041382401.

Full text
Abstract:
In this paper, we introduce our research aimed at realizing a 3D video system in a very large-scale space such as a soccer stadium or concert hall. We propose a method for describing the shape of a 3D object with a set of planes in order to effectively synthesize a novel view of the object. The most effective layout of the planes can be determined based on the relative locations of the observer's viewing position, multiple cameras, and 3D objects. We describe a method for controlling the LOD of the 3D representation by adjusting the planes' orientation, interval, and resolution. The data size of the 3D model and the processing time can be reduced drastically. The effectiveness of the proposed method is demonstrated by our experimental results.
APA, Harvard, Vancouver, ISO, and other styles