Ready-made bibliography on the topic "3D video system"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "3D video system".

Next to every entry in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever these details are available in the source metadata.

Journal articles on the topic "3D video system"

1. Cha, Jongeun, Mohamad Eid, and Abdulmotaleb El Saddik. "Touchable 3D video system". ACM Transactions on Multimedia Computing, Communications, and Applications 5, no. 4 (October 2009): 1–25. http://dx.doi.org/10.1145/1596990.1596993.

2. Okui, Makoto. "6-2. 3D Video System". Journal of The Institute of Image Information and Television Engineers 65, no. 9 (2011): 1282–86. http://dx.doi.org/10.3169/itej.65.1282.

3. Aggoun, Amar, Emmanuel Tsekleves, Mohammad Rafiq Swash, Dimitrios Zarpalas, Anastasios Dimou, Petros Daras, Paulo Nunes, and Luis Ducla Soares. "Immersive 3D Holoscopic Video System". IEEE MultiMedia 20, no. 1 (January 2013): 28–37. http://dx.doi.org/10.1109/mmul.2012.42.

4. Mohammed, Dhrgham Hani, and Laith Ali Abdul-Rahaim. "A Proposed of Multimedia Compression System Using Three-Dimensional Transformation". Webology 18, SI05 (October 30, 2021): 816–31. http://dx.doi.org/10.14704/web/v18si05/web18264.

Abstract: Video compression has become especially important with the increase of data transmitted over transmission channels; the size of videos must be reduced without affecting their quality. This is done by cutting the video stream into frames of specific lengths and converting them into a three-dimensional matrix. The proposed compression scheme uses the traditional red-green-blue color space representation and applies a three-dimensional discrete Fourier transform (3D-DFT) or three-dimensional discrete wavelet transform (3D-DWT) to the signal matrix after the video stream has been converted into three-dimensional matrices. The resulting transform coefficients are encoded using the EZW encoder algorithm. The performance of the proposed video compression system is tested against three main criteria: compression ratio (CR), peak signal-to-noise ratio (PSNR), and processing time (PT). Experiments showed high compression efficiency for videos using the proposed technique at the required bit rate, comparable to the best bit rates of traditional video compression. The 3D discrete wavelet transform offers a high frame rate with natural spatial resolution and scalability across visual and spatial resolutions, besides quality and other advantages over current conventional systems in complexity, power, throughput, latency, and storage requirements. All proposed systems were implemented using MATLAB R2020b.

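To make the pipeline described in the abstract above concrete, here is a minimal Python sketch of 3D-DWT video compression: a clip is treated as a 3D array, transformed, and only the largest coefficients are kept. Simple hard thresholding stands in for the EZW coder used in the paper, and the synthetic single-channel clip is an assumption for illustration; this is not the authors' MATLAB implementation.

```python
# Minimal 3D-DWT compression sketch: transform, discard small coefficients,
# reconstruct, and measure PSNR. Requires numpy and PyWavelets.
import numpy as np
import pywt

def compress_3d_dwt(video, wavelet="haar", level=2, keep_ratio=0.05):
    """Keep only the largest `keep_ratio` fraction of 3D-DWT coefficients."""
    coeffs = pywt.wavedecn(video.astype(np.float64), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)           # flatten coefficient tree
    thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)  # magnitude cut-off
    arr_kept = np.where(np.abs(arr) >= thresh, arr, 0.0)
    rec = pywt.waverecn(pywt.array_to_coeffs(arr_kept, slices,
                                             output_format="wavedecn"), wavelet)
    return rec[: video.shape[0], : video.shape[1], : video.shape[2]]

def psnr(orig, rec, peak=255.0):
    mse = np.mean((orig.astype(np.float64) - rec) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

# Toy example: 16 frames of a 64x64 synthetic single-channel "video".
video = np.random.randint(0, 256, size=(16, 64, 64)).astype(np.uint8)
rec = compress_3d_dwt(video)
print(f"PSNR with 5% of coefficients kept: {psnr(video, rec):.2f} dB")
```

A real codec would entropy-code the surviving coefficients (EZW in the paper) rather than merely zeroing the rest; the thresholding here only exposes the rate-distortion trade-off.
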
5. Kim, Han-Kil, Sang-Woong Joo, Hun-Hee Kim, and Hoe-Kyung Jung. "3D Video Simulation System Using GPS". Journal of the Korea Institute of Information and Communication Engineering 18, no. 4 (April 30, 2014): 855–60. http://dx.doi.org/10.6109/jkiice.2014.18.4.855.

6. Hasan, Md Mehedi, Md Ariful Islam, Sejuti Rahman, Michael R. Frater, and John F. Arnold. "No-Reference Quality Assessment of Transmitted Stereoscopic Videos Based on Human Visual System". Applied Sciences 12, no. 19 (October 7, 2022): 10090. http://dx.doi.org/10.3390/app121910090.

Abstract: Provisioning stereoscopic 3D (S3D) video transmission services of admissible quality in a wireless environment is an immense challenge for video service providers. Unlike for 2D videos, a widely accepted no-reference objective model for assessing transmitted 3D videos that appropriately exploits the Human Visual System (HVS) has not been developed yet. Distortions perceived in 2D and 3D videos differ significantly due to the sophisticated manner in which the HVS handles the dissimilarities between the two views. In real-time video transmission, viewers only have the distorted, receiver-end content of the original video acquired through the communication medium. In this paper, we propose a no-reference quality assessment method that can estimate the quality of a stereoscopic 3D video based on the HVS. By evaluating perceptual aspects and correlations of visual binocular impacts in a stereoscopic movie, the approach allows the objective quality measure to assess impairments similarly to a human observer experiencing the same material. First, disparity is measured and quantified by a region-based similarity matching algorithm; then, the magnitude of the edge difference is calculated to delimit the visually perceptible areas of an image. Finally, an objective metric is approximated by extracting these significant perceptual image features. Experimental analysis with standard S3D video datasets demonstrates low computational complexity for the video decoder, and comparison with state-of-the-art algorithms shows the efficiency of the proposed approach for 3D video transmission at different quantization (QP 26 and QP 32) and loss rate (1% and 3% packet loss) parameters along with the perceptual distortion features.

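As an illustration of the two ingredients named in the abstract above, disparity from similarity matching and edge-difference magnitude, here is a hedged Python sketch using OpenCV. The block matcher and the crude feature summary are stand-ins chosen for brevity, not the authors' metric, and the input file names are hypothetical.

```python
# Extract two perceptual features from a stereo pair: block-matched
# disparity and left/right edge-magnitude difference. Needs opencv-python
# and numpy; `left.png` / `right.png` are placeholder file names.
import cv2
import numpy as np

def stereo_quality_features(left, right):
    # OpenCV block matching stands in for the paper's region-based matcher.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    def edge_mag(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        return cv2.magnitude(gx, gy)

    # Edge-magnitude difference delimits visually perceptible regions.
    edge_diff = np.abs(edge_mag(left) - edge_mag(right))
    valid = disparity > 0  # the matcher marks unmatched pixels negative
    return {
        "mean_disparity": float(disparity[valid].mean()),
        "edge_diff_mean": float(edge_diff.mean()),
    }

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
print(stereo_quality_features(left, right))
```

A full no-reference metric would pool such features over frames and map them to a quality score; this sketch stops at feature extraction.
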
7. Zhang, Yizhong, Jiaolong Yang, Zhen Liu, Ruicheng Wang, Guojun Chen, Xin Tong, and Baining Guo. "VirtualCube: An Immersive 3D Video Communication System". IEEE Transactions on Visualization and Computer Graphics 28, no. 5 (May 2022): 2146–56. http://dx.doi.org/10.1109/tvcg.2022.3150512.

8. Zhang, Yingchun, Jianbo Huang, and Siwen Duan. "3D video conversion system based on depth information extraction". MATEC Web of Conferences 232 (2018): 02048. http://dx.doi.org/10.1051/matecconf/201823202048.

Abstract: 3D movies have received more and more attention in recent years. However, making 3D movies is expensive and difficult, which restricts their development. There are also many existing 2D movie resources, and how to convert them into 3D movies is a further problem. Therefore, this paper proposes a 3D video conversion system based on depth information extraction. The system consists of four parts: segmentation of movie video frame sequences, extraction of frame image depth information, generation of virtual multi-viewpoints, and synthesis of 3D video. The system can effectively extract the depth information of a movie and use it to convert a 2D movie into a 3D movie.

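The virtual-viewpoint step in such 2D-to-3D systems is commonly realized with depth-image-based rendering (DIBR). Below is a minimal sketch, assuming a per-pixel depth map is already available (the hard part of the system above); it warps the frame horizontally by a depth-derived disparity and fills holes naively, without the occlusion ordering a production renderer would use.

```python
# Naive DIBR view synthesis: shift each pixel by a disparity proportional
# to its depth, then fill disocclusion holes from the left neighbour.
import numpy as np

def synthesize_view(frame, depth, max_disparity=16):
    """frame: (H, W, 3) uint8; depth: (H, W) in [0, 1], 1 = nearest."""
    h, w = depth.shape
    disparity = (depth * max_disparity).astype(np.int32)
    out = np.zeros_like(frame)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):              # forward-warp each pixel
            xs = x - disparity[y, x]    # shift left for the right-eye view
            if 0 <= xs < w:
                out[y, xs] = frame[y, x]   # naive: later pixels may overwrite
                filled[y, xs] = True       # nearer ones; real DIBR sorts by depth
        for x in range(1, w):
            if not filled[y, x]:        # hole filling: copy previous pixel
                out[y, x] = out[y, x - 1]
    return out

# Toy usage: a random frame with depth decreasing from left to right.
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
depth = np.tile(np.linspace(1.0, 0.0, 160), (120, 1))
right_view = synthesize_view(frame, depth)
```
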
9. Xiao, Jiangjian, Hui Cheng, Feng Han, and Harpreet Sawhney. "Geo-Based Aerial Surveillance Video Processing for Scene Understanding and Object Tracking". International Journal of Pattern Recognition and Artificial Intelligence 23, no. 07 (November 2009): 1285–307. http://dx.doi.org/10.1142/s0218001409007582.

Abstract: This paper presents an approach to extract semantic layers from aerial surveillance videos for scene understanding and object tracking. The input videos are captured by low-flying aerial platforms and typically contain strong parallax from non-ground-plane structures as well as moving objects. Our approach leverages the geo-registration between video frames and reference images (such as those available from Terraserver and Google satellite imagery) to establish a unique geo-spatial coordinate system for pixels in the video. The geo-registration process enables Euclidean 3D reconstruction with absolute scale, unlike traditional monocular structure from motion, where continuous scale estimation over long periods of time is an issue. Geo-registration also enables correlation of video data to other stored information sources such as GIS (Geo-spatial Information System) databases. In addition to the geo-registration and 3D reconstruction aspects, the other key contributions of this paper include: (1) providing a reliable geo-based solution to estimate camera pose for 3D reconstruction, (2) exploiting appearance and 3D shape constraints derived from geo-registered videos for labeling of structures such as buildings, foliage, and roads for scene understanding, and (3) elimination of moving object detection and tracking errors using 3D parallax constraints and semantic labels derived from geo-registered videos. Experimental results on extended-time aerial video data demonstrate the qualitative and quantitative aspects of our work.

10. Yan, Ming, Xin Gang Wang, and Jun Feng Li. "3D Video Transmission System for China Mobile Multimedia Broadcasting". Applied Mechanics and Materials 519-520 (February 2014): 469–72. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.469.

Abstract: With the popularization of three-dimensional (3D) video on TV, related aspects such as broadcast television network transmission, technology application, and content products will face the challenges of the new technology. Rapid progress has been achieved in China Mobile Multimedia Broadcasting (CMMB) technology, and 3D TV technology applied to mobile TV will become a bright spot in the future. This paper first introduces the transmission modes of 3D TV, and then gives the technology solution for 3D TV and the upgrade solution for the Electronic Service Guide (ESG) based on CMMB. Test results show that the new system with 3D video works correctly when the 3D video is encoded with the H.264 standard and the image resolution conforms to standard definition (320×240).


Doctoral dissertations on the topic "3D video system"

1. Chen, Dongbin. "Development of a 3D video-theodolite image based survey system". Thesis, University of East London, 2003. http://roar.uel.ac.uk/3555/.

Abstract: The scope of this thesis is to investigate and develop a zoom lens video-theodolite system, formed by a zoom lens CCD camera, a motorised theodolite, and a computer running the developed system software. A novel automatic calibration procedure is developed for the zoom lens CCD video-theodolite system; this method is suitable for the efficient calibration of any other video-theodolite system used in fieldwork. A novel image edge detection algorithm is developed: the maximum directional edge detection algorithm relies on the maximum directional gradient to judge the edges in an image. A novel line detection algorithm based on the Hough line transform was developed for the applications of the video-theodolite system; this algorithm obtains not only the line parameters r and θ but also the two terminal image points of the detected line. A novel method of constructing panoramic images from sequential images is developed based on the zoom lens video-theodolite system and is effectively applied in the system to generate a panorama of a scene. A novel image matching algorithm is developed in which line features are matched under the constraint of epipolar lines; an experiment matching real-world buildings shows that this stereo matching algorithm is robust and effective. Another image matching algorithm is also developed to automatically measure the image displacement between the stereo images for the video-theodolite system. The accuracy of the zoom lens video-theodolite system is evaluated by three experiments; the measuring accuracy of the system can be within 0.09 pixels. The system software, developed for the PC, has a standard MFC Windows interface with menu controls and includes the control, measuring, and image processing functions for the zoom lens video-theodolite system.

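The line-detection behaviour described above, recovering both the line parameters r and θ and the two terminal points of each line, can be approximated with OpenCV's probabilistic Hough transform. The sketch below is an illustration of that idea, not the thesis's algorithm; the input file name is hypothetical.

```python
# Detect line segments with the probabilistic Hough transform, which
# returns endpoints directly; r and theta are recovered from them.
import cv2
import numpy as np

img = cv2.imread("survey_frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)  # gradient-based edge map stands in for the
                                 # thesis's maximum directional gradient detector
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=40, maxLineGap=5)

for x1, y1, x2, y2 in segments[:, 0]:
    theta = np.arctan2(y2 - y1, x2 - x1) + np.pi / 2  # angle of the line normal
    r = x1 * np.cos(theta) + y1 * np.sin(theta)       # normal distance to origin
    print(f"endpoints=({x1},{y1})-({x2},{y2})  r={r:.1f}  "
          f"theta={np.degrees(theta):.1f} deg")
```
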
2. Magaia, Lourenco Lazaro. "A video-based traffic monitoring system". Thesis, Stellenbosch: University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/1243.

Abstract: Thesis (PhD (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2006. This thesis addresses the problem of building a video-based traffic monitoring system. We employ clustering, tracking and three-dimensional reconstruction of moving objects over a long image sequence. We present an algorithm that robustly recovers the motion and reconstructs three-dimensional shapes from a sequence of video images, Magaia et al [91]. The problem ...

3. Markström, Johannes. "3D Position Estimation of a Person of Interest in Multiple Video Sequences: People Detection". Thesis, Linköpings universitet, Datorseende, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-98140.

Abstract: In most cases today, when a specific person's whereabouts are monitored through video surveillance, it is done manually, and his or her location when not seen is based on assumptions about how fast he or she can move. Since humans are good at recognizing people, this can be done accurately given good video data, but the time needed to go through all the data is extensive and therefore expensive. Because of rapid technical development, computers are getting cheaper to use and therefore more interesting for tedious work. This thesis is part of a larger project that aims to see to what extent it is possible to estimate a person of interest's time-dependent 3D position when seen in surveillance videos. The surveillance videos are recorded with non-overlapping monocular cameras. Furthermore, the project aims to see whether the person of interest's movement, when position data is unavailable, can be predicted. The outcome of the project is software capable of following a person of interest's movement, with an error estimate visualized as an area indicating where the person of interest might be at a specific time. This thesis's main focus is to implement and evaluate a people detector meant to be used in the project, reduce noise in position measurements, predict the position when the person of interest's location is unknown, and evaluate the complete project. The project combines known methods in computer vision and signal processing, and the outcome is software that can be used on a normal PC running a Windows operating system. The software implemented in the thesis uses a Hough transform based people detector and a Kalman filter for one-step-ahead prediction. The detector is evaluated with known methods such as miss rate vs. false positives per window or image (FPPW and FPPI, respectively) and recall vs. 1-precision. The results indicate that it is possible to estimate a person of interest's 3D position with single monocular cameras. It is also possible to follow the movement, to some extent, where position data is unavailable. However, the software needs more work to be robust enough to handle the diversity that may appear in different environments and to handle large-scale sensor networks.

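A compact example of the one-step-ahead prediction idea mentioned above: a constant-velocity Kalman filter that keeps predicting while detections are missing and corrects when one arrives. The matrices and noise levels below are illustrative assumptions, not the tuned values from the thesis.

```python
# Constant-velocity Kalman filter over state [x y z vx vy vz].
import numpy as np

dt = 1.0                                           # one video frame
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])      # state transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])       # we only measure position
Q = 0.01 * np.eye(6)                               # process noise (assumed)
R = 0.5 * np.eye(3)                                # measurement noise (assumed)

x = np.zeros(6)
P = np.eye(6)

def step(x, P, z=None):
    """Predict one step ahead; update only if a detection z is available."""
    x, P = F @ x, F @ P @ F.T + Q                  # prediction (always runs)
    if z is not None:                              # update (skipped when unseen)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
    return x, P

# Two detections with a missed frame in between.
for z in [np.array([0.1, 0.0, 0.0]), None, np.array([0.35, 0.02, 0.01])]:
    x, P = step(x, P, z)
    print("estimate:", np.round(x[:3], 3))
```

The growing covariance P during missed frames is exactly what the thesis visualizes as an expanding area where the person might be.
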
4. Johansson, Victor. "3D Position Estimation of a Person of Interest in Multiple Video Sequences: Person of Interest Recognition". Thesis, Linköpings universitet, Datorseende, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97970.

Abstract: Because of the increase in the number of security cameras, there is more video footage available than a human could efficiently process. In combination with the fact that computers are getting more efficient, it is becoming more and more interesting to solve the problem of detecting and recognizing people automatically. Therefore, a method is proposed for estimating the 3D path of a person of interest in multiple, non-overlapping, monocular cameras. This project is a collaboration between two master theses. This thesis focuses on recognizing a person of interest from several possible candidates, as well as estimating the 3D position of a person and providing a graphical user interface for the system. Recognition of the person of interest includes keeping track of said person frame by frame and identifying said person in video sequences where the person of interest has not been seen before. The final product is able to both detect and recognize people in video, as well as estimate their 3D position relative to the camera. The product is modular, and any part can be improved or changed completely without changing the rest of the product. This results in a highly versatile product which can be tailored for any given situation.

5. Martell, Angel Alfredo. "Benchmarking structure from motion algorithms with video footage taken from a drone against laser-scanner generated 3D models". Thesis, Luleå tekniska universitet, Rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-66280.

Abstract: Structure from motion is a novel approach to generate 3D models of objects and structures. The dataset simply consists of a series of images of an object taken from different positions. The ease of data acquisition and the wide array of available algorithms make the technique easily accessible. The structure from motion method identifies features in all the images of the dataset, like edges with gradients in multiple directions, tries to match these features between all the images, and then computes the relative motion that the camera was subject to between any pair of images. It builds a 3D model with the correlated features and then creates a 3D point cloud with colour information of the scanned object. There are different implementations of the structure from motion method that use different approaches to solve the feature-correlation problem between the images of the dataset, different methods for detecting the features, and different alternatives for sparse and dense reconstruction. These differences cause variations in the final output across distinct algorithms. This thesis benchmarked these different algorithms in accuracy and processing time. For this purpose, a terrestrial 3D laser scanner was used to scan structures and buildings to generate a ground-truth reference to which the structure from motion algorithms were compared. A video feed from a drone with a built-in camera was then captured while flying around the structure or building to generate the input for the structure from motion algorithms. Different structures are considered according to how rich or poor in features they are, since this impacts the result of the structure from motion algorithms. The structure from motion algorithms generated 3D point clouds, which were then analysed with a tool like CloudCompare to benchmark how similar they are to the laser-scanner generated data, and the runtime was recorded for comparison across all algorithms. Subjective analysis was also made, such as how easy the algorithm is to use and how complete the produced model looks in comparison to the others. The comparison found that there is no absolute best algorithm, since every algorithm excels in different aspects. There are algorithms that can generate a model very fast, managing to scale execution time linearly with the size of their input, but at the expense of accuracy. There are also algorithms that take a long time for dense reconstruction but generate almost complete models even in the presence of featureless surfaces, like COLMAP's modified PatchMatch algorithm. The structure from motion methods are able to generate models with an accuracy of up to 3 cm when scanning a simple building, where Visual Structure from Motion and Open Multi-View Environment ranked among the most accurate. It is worth highlighting that the error in accuracy grows as the complexity of the scene increases. Finally, it was found that the structure from motion method cannot correctly reconstruct structures with reflective surfaces, or repetitive patterns when the images are taken from mid to close range, as the produced errors can be as high as 1 m on a large structure.

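The accuracy benchmarking described above boils down to cloud-to-cloud distances between the SfM output and the laser-scanner ground truth. A minimal sketch, assuming the clouds are already co-registered, is shown below; it mimics the spirit of CloudCompare's C2C distance, not its implementation.

```python
# Nearest-neighbour cloud-to-cloud error statistics. Needs numpy and scipy.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(sfm_points, reference_points):
    """Distance from each SfM point to its nearest reference point."""
    tree = cKDTree(reference_points)
    dists, _ = tree.query(sfm_points, k=1)
    return {"mean_m": dists.mean(),
            "rms_m": np.sqrt((dists**2).mean()),
            "p95_m": np.quantile(dists, 0.95)}

# Toy example: a flat reference surface vs. a reconstruction with ~3 cm noise,
# roughly the best-case accuracy reported in the thesis.
ref = np.random.rand(10000, 3) * [10.0, 10.0, 0.0]
sfm = ref[:5000] + np.random.normal(scale=0.03, size=(5000, 3))
print(cloud_to_cloud_error(sfm, ref))
```
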
6. Yin, Ling. "Automatic Stereoscopic 3D Chroma-Key Matting Using Perceptual Analysis and Prediction". Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31851.

Abstract: This research presents a novel framework for automatic chroma keying and optimizations for real-time and stereoscopic 3D processing. It first simulates the process of human perception in isolating foreground elements of a given scene by perceptual analysis, and then predicts foreground colours and the alpha map based on the analysis results and a restored clean background plate rather than direct sampling. In addition, an object-level depth map is generated through stereo matching on a carefully determined feature map. Three prototypes on different platforms have been implemented according to their hardware capability based on the proposed framework. To achieve real-time performance, the entire procedure is optimized for parallel processing and data paths on the GPU, as well as heterogeneous computing between GPU and CPU. Qualitative comparisons between results generated by the proposed algorithm and other existing algorithms show that the proposed one generates more acceptable alpha maps and foreground colours, especially in regions that contain translucencies and details. Quantitative evaluations also validate its advantages in both quality and speed.

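For readers unfamiliar with the baseline such work improves upon, a deliberately simple chroma-key matte is sketched below: alpha is derived from each pixel's chroma distance to the key colour, with a soft ramp. The thesis's perceptual analysis, background restoration, and stereo consistency go far beyond this; the file name and thresholds here are illustrative assumptions.

```python
# Basic soft chroma-key matte in YCrCb space. Needs opencv-python and numpy.
import cv2
import numpy as np

def chroma_key_alpha(frame_bgr, key_bgr=(0, 255, 0), lo=20.0, hi=60.0):
    """Alpha in [0,1]: 0 on the key colour, 1 far from it, soft in between."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    key = cv2.cvtColor(np.uint8([[key_bgr]]),
                       cv2.COLOR_BGR2YCrCb)[0, 0].astype(np.float32)
    # Distance in the chroma (Cr, Cb) plane only, so luma changes such as
    # shadows on the green screen do not punch holes in the matte.
    d = np.linalg.norm(ycrcb[..., 1:] - key[1:], axis=-1)
    return np.clip((d - lo) / (hi - lo), 0.0, 1.0)

frame = cv2.imread("greenscreen.png")          # hypothetical input frame
alpha = chroma_key_alpha(frame)
cv2.imwrite("alpha.png", (alpha * 255).astype(np.uint8))
```
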
7. Koz, Alper. "Watermarking for 3D Representations". PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608886/index.pdf.

Abstract: In this thesis, a number of novel watermarking techniques for different 3D representations are presented. A novel watermarking method is proposed for mono-view video, which might be interpreted as the basic implicit representation of 3D scenes. The proposed method solves the common flickering problem in existing video watermarking schemes by adjusting the watermark strength with respect to the temporal contrast thresholds of the human visual system (HVS), which define the maximum invisible distortions in the temporal direction. The experimental results indicate that the proposed method gives better results in both objective and subjective measures compared to some recognized methods in the literature. The watermarking techniques for the geometry- and image-based representations of 3D scenes, denoted as 3D watermarking, are examined and classified into three groups, 3D-3D, 3D-2D and 2D-2D watermarking, in which the pair of symbols identifies whether the watermark is embedded and detected in a 3D model or a 2D projection of it. A detailed literature survey on 3D-3D watermarking is presented that mainly focuses on protection of the intellectual property rights of 3D geometrical representations. This analysis points out specific problems in 3D-3D geometry watermarking, such as the lack of a unique 3D scene representation, of standardization for the coding schemes, and of benchmarking tools for 3D geometry watermarking. For the 2D-2D watermarking category, the copyright problem for the emerging free-view television (FTV) is introduced. The proposed watermarking method for this original problem embeds watermarks into each view of the multi-view video by utilizing the spatial sensitivity of the HVS. The hidden signal in a selected virtual view is detected by computing the normalized correlation between the selected view and a generated pattern, namely the rendered watermark, which is obtained by applying the same rendering operations that have occurred on the selected view to the original watermark. An algorithm for the estimation of the virtual camera position and rotation is also developed based on the projective planar relations between image planes. The simulation results show the applicability of the method to FTV systems. Finally, the thesis also presents a novel 3D-2D watermarking method, in which a watermark is embedded into the 3D representation of the object and detected from a 2D projection (image) of the same model. A novel solution based on projective invariants is proposed, which modifies the cross-ratio of five coplanar points on the 3D model according to the watermark bit and extracts the embedded bit from 2D projections of the model by computing the cross-ratio. After presenting the applicability of the algorithm via simulations, future directions for this novel problem in 3D watermarking are addressed.

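The 3D-2D method above hinges on the projective invariance of the cross-ratio. The short numerical check below uses the classic four collinear points for simplicity (the thesis works with five coplanar points) and shows that the value survives an arbitrary homography, which is why a bit encoded in it can be read back from a 2D projection.

```python
# Verify that the cross-ratio of four collinear points is preserved
# under a projective transform. Needs numpy only.
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC) / (AD/BD) of four collinear 2D points."""
    dist = lambda p, q: np.linalg.norm(p - q)
    return (dist(a, c) / dist(b, c)) / (dist(a, d) / dist(b, d))

def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])  # homogeneous coordinates
    return q[:2] / q[2]

pts = [np.array([t, 2.0 * t + 1.0]) for t in (0.0, 1.0, 2.5, 4.0)]  # collinear
H = np.array([[1.2,  0.1,  3.0],
              [0.05, 0.9, -1.0],
              [1e-3, 2e-3, 1.0]])        # a generic projective transform

print("before projection:", cross_ratio(*pts))
print("after projection: ", cross_ratio(*[apply_h(H, p) for p in pts]))
# Both lines print the same value (1.25): the embedded quantity survives.
```
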
8. Göransson, Rasmus. "Automatic 3D reconstruction with Kinect: A modular system for creating high quality lightweight textured meshes from RGBD video". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142480.

9. Wong, Timothy. "System Design and Analysis for Creating a 3D Virtual Street Scene for Autonomous Vehicles using Geometric Proxies from a Single Video Camera". DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/2041.

Abstract: Self-driving vehicles use a variety of sensors to understand the environment they are in. To do so, they must accurately measure the distances and positions of the objects around them. A common representation of the environment around the vehicle is a 3D point cloud, a set of 3D data points which represent the positions of objects in the real world relative to the car. However, while accurate and useful, these point clouds require large amounts of memory compared to other representations such as lightweight polygonal meshes. In addition, 3D point clouds can be difficult for a human to visually understand, as the data points do not always form a naturally coherent object. This paper introduces a system to lower the memory consumption needed for the graphical representation of a virtual street environment. At this time, the proposed system takes a single front-facing video as input. The system uses the video to retrieve still images of a scene, which are then segmented to distinguish the relevant objects, such as cars and stop signs. The system generates a corresponding virtual street scene, and these key objects are visualized in the virtual world as low-poly, or low-resolution, models of the respective objects. This virtual 3D street environment is created to allow a remote operator to visualize the world that the car is traveling through. At this time, the virtual street includes geometric proxies for parallel-parked cars in the form of lightweight polygonal meshes. These meshes are predefined, taking up less memory than a point cloud, which can be costly to transmit from the remote vehicle and potentially difficult for a remote human operator to understand. This paper contributes a design and analysis of an initial system for generating and placing these geometric proxies of parked cars in a virtual street environment from one input video. We discuss the limitations and measure the error of this system, and reflect on future improvements.

10. Gurram, Prudhvi K. "Automated 3D object modeling from aerial video imagery". Online version of thesis, 2009. http://hdl.handle.net/1850/11207.


Books on the topic "3D video system"

1. Chen, Dongbin. Development of a 3D video-theodolite image based survey system. London: University of East London, 2003.
2. 3D and HD broadband video networking. Boston: Artech House, 2010.
3. 3D video technologies: An overview of research trends. Bellingham, Wash: SPIE, 2011.
4. Argyriou, Vasileios. Image, video & 3D data registration: Medical, satellite and video processing applications with quality metrics. Hoboken: Wiley, 2015.
5. Novel 3D media technologies. New York: Springer, 2015.
6. Zatt, Bruno. 3D Video Coding for Embedded Devices: Energy Efficient Algorithms and Architectures. New York, NY: Springer New York, 2013.
7. 3D game animation for dummies. Hoboken, N.J: Wiley, 2005.
8. Professionelle Videotechnik: Grundlagen, Filmtechnik, Fernsehtechnik, Geräte- und Studiotechnik in SD, HD, DI, 3D. 5th ed. Berlin: Springer, 2009.
9. Schreer, Oliver, Peter Kauff, and Thomas Sikora, eds. 3D videocommunication: Algorithms, concepts, and real-time systems in human centred communication. Chichester, England: Wiley, 2005.
10. Photoshop 3D for animators. Burlington, MA: Focal Press, 2011.

Book chapters on the topic "3D video system"

1. Müller, Karsten, Philipp Merkle, and Gerhard Tech. "3D Video Compression". In 3D-TV System with Depth-Image-Based Rendering, 223–48. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4419-9964-1_8.
2. Watt, Simon J., and Kevin J. MacKenzie. "3D Media and the Human Visual System". In Emerging Technologies for 3D Video, 349–76. Chichester, UK: John Wiley & Sons, Ltd, 2013. http://dx.doi.org/10.1002/9781118583593.ch18.
3. Ishii, Ryo, Shiro Ozawa, Takafumi Mukouchi, and Norihiko Matsuura. "MoPaCo: Pseudo 3D Video Communication System". In Lecture Notes in Computer Science, 131–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21669-5_16.
4. Matsuyama, Takashi, Shohei Nobuhara, Takeshi Takai, and Tony Tung. "Active Camera System for Object Tracking and Multi-view Observation". In 3D Video and Its Applications, 45–85. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4120-4_3.
5. Angueira, Pablo, David de la Vega, Javier Morgade, and Manuel María Vélez. "Transmission of 3D Video over Broadcasting". In 3D-TV System with Depth-Image-Based Rendering, 299–344. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4419-9964-1_11.
6. Yan, Fei, Wei-Qi Liu, Yin-Ping Liu, Bao-Yi Lu, and Zhen-Shen Song. "Design of Video Mosaic System". In Advances in 3D Image and Graphics Representation, Analysis, Computing and Information Technology, 293–303. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3867-4_34.
7. Daribo, Ismael, Hideo Saito, Ryo Furukawa, Shinsaku Hiura, and Naoki Asada. "Effects of Wavelet-Based Depth Video Compression". In 3D-TV System with Depth-Image-Based Rendering, 277–98. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4419-9964-1_10.
8. Furukawa, Ryo, Masahito Naito, Daisuke Miyazaki, Masahi Baba, Shinsaku Hiura, Yoji Sanomura, Shinji Tanaka, and Hiroshi Kawasaki. "Auto-calibration Method for Active 3D Endoscope System Using Silhouette of Pattern Projector". In Image and Video Technology, 222–36. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75786-5_19.
9. Frick, Anatol, and Reinhard Koch. "LDV Generation from Multi-View Hybrid Image and Depth Video". In 3D-TV System with Depth-Image-Based Rendering, 191–220. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4419-9964-1_7.
10. Zhang, Liang, Carlos Vázquez, Grégory Huchet, and Wa James Tam. "DIBR-Based Conversion from Monoscopic to Stereoscopic and Multi-View Video". In 3D-TV System with Depth-Image-Based Rendering, 107–43. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4419-9964-1_4.

Conference papers on the topic "3D video system"

1. d'Ursel, Wauthier. "3D holographic video system". In 2011 International Conference on 3D Imaging (IC3D). IEEE, 2011. http://dx.doi.org/10.1109/ic3d.2011.6584377.
2. Hibino, Hioki, Morita, Watanabe, Yauada, and Asano. "3D HDTV video disc system". In IEEE 1990 International Conference on Consumer Electronics. IEEE, 1990. http://dx.doi.org/10.1109/icce.1990.665856.
3. Steurer, Johannes H., Matthias Pesch, and Christopher Hahne. "3D holoscopic video imaging system". In IS&T/SPIE Electronic Imaging. SPIE, 2012. http://dx.doi.org/10.1117/12.915294.
4. Patil, Vishwas, Manu T. M., Basavaraj S. Anami, and Darshankumar Billur. "Effective 3D Video Streaming Using 2D Video Encoding System". In 2022 2nd Asian Conference on Innovation in Technology (ASIANCON). IEEE, 2022. http://dx.doi.org/10.1109/asiancon55314.2022.9909378.
5. Tonchev, Krasimir, Ivaylo Bozhilov, and Agata Manolova. "Semantic Communication System for 3D Video". In 2023 Joint International Conference on Digital Arts, Media and Technology with ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT & NCON). IEEE, 2023. http://dx.doi.org/10.1109/ectidamtncon57770.2023.10139761.
6. Dongdong Zhang, Ye Yao, Dian Liu, Yanyu Chen, and Di Zang. "Kinect-based 3D video conference system". In 2013 IEEE Global High Tech Congress on Electronics (GHTCE). IEEE, 2013. http://dx.doi.org/10.1109/ghtce.2013.6767265.
7. Lee, Pei-Jun, and Xu-Xian Huang. "3D motion estimation algorithm in 3D video coding". In 2011 International Conference on System Science and Engineering (ICSSE). IEEE, 2011. http://dx.doi.org/10.1109/icsse.2011.5961924.
8. Megyesi, Zoltan, Attila Barsi, and Tibor Balogh. "3D Video Visualization on the Holovizio System". In 2008 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video. IEEE, 2008. http://dx.doi.org/10.1109/3dtv.2008.4547860.
9. Eng, Wei Yong, Dongbo Min, Viet-Anh Nguyen, Jiangbo Lu, and Minh N. Do. "Gaze correction for 3D tele-immersive communication system". In 2013 11th IVMSP Workshop: 3D Image/Video Technologies and Applications. IEEE, 2013. http://dx.doi.org/10.1109/ivmspw.2013.6611942.
10. Xin, Baicheng, Ronggang Wang, Zhenyu Wang, Wenmin Wang, Chenchen Gu, Quanzhan Zheng, and Wen Gao. "AVS 3D video streaming system over internet". In 2012 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). IEEE, 2012. http://dx.doi.org/10.1109/icspcc.2012.6335735.

Organizational reports on the topic "3D video system"

1. Barkatov, Igor V., Volodymyr S. Farafonov, Valeriy O. Tiurin, Serhiy S. Honcharuk, Vitaliy I. Barkatov, and Hennadiy M. Kravtsov. New effective aid for teaching technology subjects: 3D spherical panoramas joined with virtual reality. [б. в.], November 2020. http://dx.doi.org/10.31812/123456789/4407.

Abstract: Rapid development of modern technology and its increasing complexity place high demands on the quality of training of its users. Among others, an important class is vehicles, both civil and military. In the teaching of associated subjects, the accepted hierarchy of teaching aids includes common visual aids (posters, videos, scale models etc.) at the first stage, followed by simulators ranging in complexity, and finishing with real vehicles. It allows achieving some balance between cost and efficiency by partial replacement of more expensive and elaborate aids with less expensive ones. However, analysis of teaching experience in the Military Institute of Armored Forces of National Technical University "Kharkiv Polytechnic Institute" (Institute) reveals that the balance is still suboptimal, and the present teaching aids are not enough to allow efficient teaching. This fact raises the problem of extending the range of available teaching aids for vehicle-related subjects, which is the aim of the work. Benefiting from modern information and visualization technologies, we present a new teaching aid that comprises a spherical (360° or 3D) photographic panorama and a Virtual Reality (VR) device. The nature of the aid, its potential applications, limitations, and benefits in comparison to common aids are discussed. The proposed aid is shown to be cost-effective and is proved to increase the efficiency of training, according to the results of a teaching experiment carried out in the Institute. For the implementation, a tight collaboration between the Institute and the IT company "Innovative Distance Learning Systems Limited" was established. A series of panoramas, which are already available, and its planned expansions are presented. The authors conclude that the proposed aid may significantly improve the cost-efficiency balance of teaching a range of technology subjects.

2. Ruby, Jeffrey, Richard Massaro, John Anderson, and Robert Fischer. Three-dimensional geospatial product generation from tactical sources, co-registration assessment, and considerations. Engineer Research and Development Center (U.S.), February 2023. http://dx.doi.org/10.21079/11681/46442.

Abstract: According to Army Multi-Domain Operations (MDO) doctrine, generating timely, accurate, and exploitable geospatial products from tactical platforms is a critical capability to meet threats. The US Army Corps of Engineers, Engineer Research and Development Center, Geospatial Research Laboratory (ERDC-GRL) is carrying out 6.2 research to facilitate the creation of three-dimensional (3D) products from tactical sensors, including full-motion video, framing cameras, and sensors integrated on small Unmanned Aerial Systems (sUAS). This report describes an ERDC-GRL processing pipeline comprising custom code, open-source software, and commercial off-the-shelf (COTS) tools to geospatially rectify tactical imagery to authoritative foundation sources. Four datasets from different sensors and locations were processed against National Geospatial-Intelligence Agency-supplied foundation data. Results showed that the co-registration of tactical drone data to the reference foundation varied from 0.34 m to 0.75 m, exceeding the accuracy objective of 1 m described in briefings presented to Army Futures Command (AFC) and the Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT)). A discussion summarizes the results, describes steps to address processing gaps, and considers future efforts to optimize the pipeline for generation of geospatial data for specific end-user devices and tactical applications.

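A figure like the 0.34–0.75 m co-registration range above is typically computed from offsets between surveyed check points in the foundation data and the same points in the rectified product. The sketch below shows only that arithmetic on made-up coordinates; it is not the ERDC-GRL pipeline.

```python
# Per-point horizontal co-registration error and RMSE. Needs numpy only.
import numpy as np

# Hypothetical planar (x, y) check-point coordinates in metres.
foundation = np.array([[368201.42, 4309912.10],
                       [368244.90, 4309877.55],
                       [368190.13, 4309850.02]])
rectified = np.array([[368201.88, 4309911.71],
                      [368245.31, 4309877.02],
                      [368190.60, 4309849.55]])

errors = np.linalg.norm(rectified - foundation, axis=1)  # per-point error (m)
print("per-point error (m):", np.round(errors, 2))
print(f"RMSE: {np.sqrt((errors**2).mean()):.2f} m")      # vs. the 1 m objective
```
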