Selection of scholarly literature on the topic "Video processing"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Video processing".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and a bibliographic entry for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of a scholarly publication in PDF format and read its online abstract, provided the relevant parameters are present in the metadata.

Journal articles on the topic "Video processing"

1

Jiang, Zhiying, Chong Guan, and Ivo L. de Haaij. "Congruity and processing fluency". Asia Pacific Journal of Marketing and Logistics 32, no. 5 (October 4, 2019): 1070–88. http://dx.doi.org/10.1108/apjml-03-2019-0128.

Annotation:
Purpose The purpose of this paper is to investigate the benefits of Ad-Video and Product-Video congruity for embedded online video advertising. A conceptual model is constructed to test how congruity between online advertisements, advertised products and online videos impacts consumer post-viewing attitudes via processing fluency. Design/methodology/approach An online experiment with eight versions of mock video sections (with embedded online video advertisements) was conducted. The study is a 2 (type of appeal: informational vs emotional) × 2 (Ad-Video congruity: congruent vs incongruent) × 2 (Product-Video congruity: congruent vs incongruent) full-factorial between-subject design. A total of 252 valid responses were collected for data analysis. Findings Results show that congruity is related to improved processing fluency only for informational ads/videos. The positive effect of Ad-Video congruity on processing fluency is significant only for informational appeals, not emotional ones. Similarly, the positive effects of Product-Video congruity on processing fluency are significant only for informational appeals, not emotional ones. Involvement was also found to be positively related to processing fluency. Processing fluency has a positive impact on attitudes toward the ads, advertised products and videos. Research limitations/implications The finding that congruity is related to improved processing fluency only for informational ads/videos extends the existing literature by identifying the type of appeal as a boundary condition. Practical implications Both brand managers and online video platform owners should monitor and operationalize content and appeal congruity, especially for informational ads at large scale, to improve consumers' responses. Originality/value To the best of the authors' knowledge, this is the first paper to examine the effects of Ad-Video and Product-Video congruity of embedded advertisements on video sharing platforms. The findings of this study add to the literature on congruity and processing fluency.
2

Coutts, Maurice D., and Dennis L. Matthies. "Video disc processing". Journal of the Acoustical Society of America 81, no. 5 (May 1987): 1659. http://dx.doi.org/10.1121/1.395033.

3

Uytterhoeven, G. "Digital video processing". Journal of Computational and Applied Mathematics 66, no. 1–2 (January 1996): N5–N6. http://dx.doi.org/10.1016/0377-0427(96)80474-2.

4

Baranwal, Ritwik. "Automatic Summarization of Cricket Highlights using Audio Processing". International Journal for Modern Trends in Science and Technology 7, no. 01 (January 4, 2021): 48–53. http://dx.doi.org/10.46501/ijmtst070111.

Annotation:
The problem of automatic excitement detection in cricket videos is considered and applied to highlight generation. This paper focuses on detecting exciting events in video using complementary information from the audio and video domains. First, a method for separating the audio and video elements is proposed. Thereafter, the "level of excitement" is measured using features such as amplitude and spectral center of gravity, extracted from the commentators' speech, which are used to set the detection threshold. Our experiments using actual cricket videos show that these features are well correlated with human assessment of excitability. Finally, audio/video information is fused according to the time order of scenes exhibiting "excitability" in order to generate cricket highlights. The techniques described in this paper are generic and applicable to a variety of topics and video/acoustic domains.
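
The two audio features the paper relies on, short-time amplitude and spectral center of gravity, are simple to compute. The sketch below is an illustration of that feature extraction, not the authors' code; it assumes the commentary has already been separated into a mono WAV file, and the window length and thresholds are placeholders:

    import numpy as np
    from scipy.io import wavfile

    # Load the separated commentary track ("commentary.wav" is a placeholder).
    rate, audio = wavfile.read("commentary.wav")
    audio = audio.astype(np.float64)
    if audio.ndim > 1:                       # stereo -> mono
        audio = audio.mean(axis=1)

    win = rate // 2                          # 0.5 s analysis window (arbitrary)
    freqs = np.fft.rfftfreq(win, d=1.0 / rate)

    def features(frame):
        """Short-time amplitude (RMS) and spectral center of gravity."""
        rms = np.sqrt(np.mean(frame ** 2))
        spectrum = np.abs(np.fft.rfft(frame))
        centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-9)
        return rms, centroid

    # Windows where the commentator is loud and high-pitched are candidates.
    for start in range(0, len(audio) - win, win):
        rms, centroid = features(audio[start:start + win])
        if rms > 3000 and centroid > 1500:   # thresholds are placeholders
            print(f"candidate highlight at {start / rate:.1f} s")
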
5

Xu, Long, Yan Yihua, and Cheng Jun. "Guided filtering for solar image/video processing". Solar-Terrestrial Physics 3, no. 2 (August 9, 2017): 9–15. http://dx.doi.org/10.12737/stp-3220172.

Annotation:
A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily figure out important fine structures embedded in the recorded images/movies for solar observation. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it can further highlight fibrous structures on/beyond the solar disk. These fibrous structures can clearly demonstrate the progress of solar flares, prominence coronal mass emission, the magnetic field, and so on. The experimental results show that the proposed algorithm gives a significant enhancement of the visual quality of solar images beyond the original input and several classical image enhancement algorithms, thus facilitating easier determination of interesting solar burst activities from recorded images/movies.
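
OpenCV ships a guided filter in its ximgproc contrib module, which makes a base/detail decomposition of this kind easy to prototype. A minimal sketch, assuming opencv-contrib-python is installed; the radius, eps, detail gain, and file names are placeholders, not the paper's settings:

    import cv2
    import numpy as np

    # Load a solar image; using the image as its own guide gives
    # edge-preserving smoothing that suppresses Gaussian/impulse noise.
    img = cv2.imread("solar_disk.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

    # Base layer: guided filter (radius and eps are placeholders).
    base = cv2.ximgproc.guidedFilter(guide=img, src=img, radius=8, eps=100.0)

    # Detail layer: what the filter removed; boosting it highlights
    # fibrous structures on and beyond the solar disk.
    detail = img.astype(np.float32) - base.astype(np.float32)
    enhanced = np.clip(base + 3.0 * detail, 0, 255).astype(np.uint8)

    cv2.imwrite("solar_disk_enhanced.png", enhanced)
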
6

Wira Widjanarko, Kresna, Krisna Aditya Herlambang, and Muhamad Abdul Karim. "Faster Video Processing Menggunakan Teknik Parallel Processing Dengan Library OpenCV". Jurnal Komunikasi, Sains dan Teknologi 1, no. 1 (June 30, 2022): 10–18. http://dx.doi.org/10.61098/jkst.v1i1.2.

Annotation:
Video playback often takes too long to process, especially in applications that require real-time processing, such as webcam video applications; parallel processing is therefore applied to speed up video computation. This study discusses parallel processing of video so that video computation runs faster than it does without parallel processing. Tests were carried out with two types of data: a video stream from a laptop webcam and an .mp4 video file. The programming language used in the tests is Python with the OpenCV library. The study found a significant difference in video processing, for both the webcam source and .mp4 video files, between runs without parallel processing and multithreaded runs with Video Show and Video Read threads, as well as the combination of the two (multithreading).
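
The pattern the paper benchmarks, reading frames on one thread while displaying on another so that neither blocks the other, looks roughly like this in Python with OpenCV. A simplified sketch rather than the authors' implementation; the file name is a placeholder:

    import threading
    import cv2

    class VideoReader:
        """Grab frames on a background thread so capture never blocks display."""
        def __init__(self, source):
            self.cap = cv2.VideoCapture(source)   # webcam index or video path
            self.ok, self.frame = self.cap.read()
            self.stopped = False

        def start(self):
            threading.Thread(target=self._update, daemon=True).start()
            return self

        def _update(self):
            while not self.stopped and self.ok:
                self.ok, self.frame = self.cap.read()
            self.cap.release()

        def stop(self):
            self.stopped = True

    reader = VideoReader("input.mp4").start()     # or VideoReader(0) for a webcam
    while True:
        ok, frame = reader.ok, reader.frame       # snapshot shared state
        if not ok or frame is None:
            break
        cv2.imshow("video", frame)                # display on the main thread
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    reader.stop()
    cv2.destroyAllWindows()
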
7

Megala, G., et al. "State-Of-The-Art In Video Processing: Compression, Optimization And Retrieval". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 1256–72. http://dx.doi.org/10.17762/turcomat.v12i5.1793.

Annotation:
Video compression plays a vital role in modern social media networking, with its plethora of multimedia applications. It empowers the transmission medium to transfer videos competently and enables resources to store video efficiently. Nowadays, high-resolution video data are transferred through high-bit-rate communication channels in order to send multiple compressed videos. There are many advances in the transmission and efficient storage of compressed video, where compression is the primary task involved in multimedia services. This paper summarizes the compression standards and describes the main concepts involved in video coding. Video compression converts the large raw bit stream of a video sequence into a small, compact one, achieving a high compression ratio with good perceptual video quality. Removing redundant information is the main task in video sequence compression. The paper surveys various block matching algorithms, quantization, and entropy coding. It finds that many of the methods have high computational complexity and need improvement through optimization.
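
Of the components surveyed, block matching is the most self-contained to illustrate: for each block of the current frame, search the reference frame for the best-matching block under a sum-of-absolute-differences (SAD) cost. A toy exhaustive-search sketch; block and search-range sizes are arbitrary choices, not values from the paper:

    import numpy as np

    def best_motion_vector(ref, cur, top, left, block=16, search=8):
        """Exhaustive-search block matching with a SAD cost."""
        target = cur[top:top + block, left:left + block].astype(np.int32)
        best, best_sad = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                    continue
                sad = np.abs(ref[y:y + block, x:x + block].astype(np.int32) - target).sum()
                if sad < best_sad:
                    best_sad, best = sad, (dy, dx)
        return best, best_sad

    # Toy usage: the current frame is the reference shifted by (2, 3) pixels.
    ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
    print(best_motion_vector(ref, cur, top=16, left=16))   # -> ((-2, -3), 0)
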
8

Moon, Nazmun Nessa, Imrus Salehin, Masuma Parvin, Md Mehedi Hasan, Iftakhar Mohammad Talha, Susanta Chandra Debnath, Fernaz Narin Nur, and Mohd Saifuzzaman. "Natural language processing based advanced method of unnecessary video detection". International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5411. http://dx.doi.org/10.11591/ijece.v11i6.pp5411-5419.

Annotation:
In this study we describe the process of identifying unnecessary video using an advanced combined method of natural language processing and machine learning. The system also includes a framework that contains analytics databases, helps to determine statistical accuracy, and can detect, accept, or reject unnecessary and unethical video content. In our video detection system, we extract text data from video content in two steps: first from video to MPEG-1 audio layer 3 (MP3), and then from MP3 to WAV format. We use the text part of natural language processing to analyze and prepare the data set. We use both naive Bayes and logistic regression classification algorithms in this detection system to determine the best accuracy for our system. In our research, the MP4 video data is converted to plain text using advanced Python library functions. This brief study discusses the identification of unauthorized, unsocial, unnecessary, unfinished, and malicious videos from oral video record data. By analyzing our data sets with this advanced model, we can decide which videos should be accepted or rejected for further action.
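
The final classification stage of such a pipeline, once speech has been transcribed from the extracted audio, can be sketched with scikit-learn; both classifiers the paper compares are shown below on made-up placeholder transcripts. Illustrative only, not the authors' pipeline:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Transcripts extracted from videos (placeholder data); label 1 marks
    # unnecessary/unethical content, label 0 acceptable content.
    texts = ["buy now limited offer click the link", "lecture on signal processing basics"]
    labels = [1, 0]

    for clf in (MultinomialNB(), LogisticRegression()):
        model = make_pipeline(TfidfVectorizer(), clf)
        model.fit(texts, labels)
        verdict = model.predict(["free prize click here"])[0]
        print(type(clf).__name__, "->", "reject" if verdict else "accept")
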
9

Beric, Aleksandar, Jef van Meerbergen, Gerard de Haan, and Ramanathan Sethuraman. "Memory-centric video processing". IEEE Transactions on Circuits and Systems for Video Technology 18, no. 4 (April 2008): 439–52. http://dx.doi.org/10.1109/tcsvt.2008.918775.

10

Merigot, Alain, and Alfredo Petrosino. "Parallel processing for image and video processing". Parallel Computing 34, no. 12 (December 2008): 693. http://dx.doi.org/10.1016/j.parco.2008.09.001.


Dissertations on the topic "Video processing"

1

Aggoun, Amar. "DPCM video signal/image processing". Thesis, University of Nottingham, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335792.

2

Zhao, Jin. "Video/Image Processing on FPGA". Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/503.

Annotation:
Video/image processing is a fundamental topic in computer science. It is widely used in a broad range of applications, such as weather prediction, computerized tomography (CT), and artificial intelligence (AI). Video-based advanced driver assistance systems (ADAS) have attracted great attention in recent years; they aim at helping drivers stay focused while driving and at giving proper warnings if any danger is in sight. Typical ADAS features include lane departure warning, traffic sign detection, and pedestrian detection. Both basic and advanced video/image processing technologies are deployed in video-based driver assistance systems. The key requirements of a driver assistance system are rapid processing time and low power consumption. We consider the Field Programmable Gate Array (FPGA) the most appropriate embedded platform for ADAS. Owing to its parallel architecture, an FPGA can perform high-speed video processing such that it issues warnings in time and gives drivers longer to respond. Besides, the cost and power consumption of modern FPGAs, particularly small ones, are considerably low. Compared to a CPU implementation, FPGA video/image processing achieves a speedup of about tens of times for video-based driver assistance and other applications.
3

Isaieva, O. A., and О. Г. Аврунін. "Image processing for video dermatoscopy". Thesis, Osaka, Japan, 2019. http://openarchive.nure.ua/handle/document/10347.

4

Chen, Juan. "Content-based Digital Video Processing. Digital Videos Segmentation, Retrieval and Interpretation". Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4256.

Annotation:
Recent research approaches in semantics-based video content analysis require shot boundary detection as the first step to divide video sequences into sections. Furthermore, with the advances in networking and computing capability, efficient retrieval of multimedia data has become an important issue. Content-based retrieval technologies have been widely implemented to protect intellectual property rights (IPR). In addition, automatic recognition of highlights from videos is a fundamental and challenging problem for content-based indexing and retrieval applications. In this thesis, a paradigm is proposed to segment, retrieve and interpret digital videos. Five algorithms are presented to solve the video segmentation task. Firstly, a simple shot cut detection algorithm is designed for real-time implementation. Secondly, a systematic method is proposed for shot detection using content-based rules and an FSM (finite state machine). Thirdly, shot detection is implemented using local and global indicators. Fourthly, a context awareness approach is proposed to detect shot boundaries. Fifthly, a fuzzy logic method is implemented for shot detection. Furthermore, a novel analysis approach is presented for the detection of video copies. It is robust to complicated distortions and capable of locating copied segments inside original videos. Then, objects and events are extracted from MPEG sequences for video highlights indexing and retrieval. Finally, a human fighting detection algorithm is proposed for movie annotation.
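
The simplest member of this family, histogram-based shot cut detection, compares the color histograms of consecutive frames and declares a cut when their distance spikes. A minimal sketch, not the thesis's own algorithm; the threshold, histogram bins, and file name are placeholder choices:

    import cv2

    cap = cv2.VideoCapture("input.mp4")            # placeholder file name
    prev_hist = None
    frame_no = 0
    THRESHOLD = 0.5                                # placeholder, tune per content

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Bhattacharyya distance: near 0 for similar frames, near 1 across a cut.
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > THRESHOLD:
                print(f"shot cut at frame {frame_no}")
        prev_hist = hist
        frame_no += 1
    cap.release()
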
5

Haynes, Simon Dominic. "Reconfigurable architectures for video image processing". Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322797.

6

Fernando, Warnakulasuriya Anil Chandana. "Video processing in the compressed domain". Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326724.

7

Leonce, Andrew. "HDR video enhancement, processing and coding". Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19639.

Annotation:
Advances in digital camera technology have led to the development of image sensors that are capable of capturing High Dynamic Range (HDR) images. Although this has enabled the capture of greater depths of colour and illumination, problems remain with regard to transmitting and displaying the HDR image data. Current consumer-level displays are designed to show images with a depth of only 8 bits per pixel per channel. Typical HDR images can be 10 bits per pixel per channel and upwards, leading to the first problem: how to display HDR images on Standard Dynamic Range (SDR) displays. This is linked to a further problem, that of transmitting the HDR data to SDR devices, since most state-of-the-art image and video coding standards deal only with SDR data. Further, as with most technologies of this kind, current HDR displays are extremely expensive. Furthermore, media broadcast organisations have invested significant sums of money into their current architecture and are unwilling to completely change their systems at further cost.
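
The display problem described here, fitting more than 8 bits of dynamic range onto an 8-bit screen, is conventionally attacked with tone mapping. The crude global-gamma sketch below only illustrates the idea and is not the thesis's method; all values are placeholders:

    import numpy as np

    def tonemap_gamma(frame16, gamma=2.2):
        """Map a 16-bit frame to 8 bits with global gamma compression."""
        norm = frame16.astype(np.float32) / 65535.0    # to [0, 1]
        compressed = np.power(norm, 1.0 / gamma)       # lift shadows, compress highlights
        return (compressed * 255.0 + 0.5).astype(np.uint8)

    # Toy usage: a synthetic 10-bit-range frame stored in a 16-bit container.
    frame = (np.random.rand(480, 640) * 1023).astype(np.uint16) * 64
    print(tonemap_gamma(frame).dtype, tonemap_gamma(frame).max())
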
8

Wu, Hao-Yu. "Eulerian Video Processing and medical applications". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77452.

Annotation:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Includes bibliographical references (p. 68-69).
Our goal is to reveal subtle yet informative signals in videos that are difficult or impossible to see with the naked eye. We can either display them in an indicative manner or analyse them to extract important measurements, such as vital signs. Our method, which we call Eulerian Video Processing, takes a standard video sequence as input and applies spatial decomposition followed by temporal filtering to the frames. The resulting signals can be visually amplified to reveal hidden information, a process we call Eulerian Video Magnification. Using Eulerian Video Magnification, we are able to visualize the flow of blood as it fills the face and to amplify and reveal small motions. Our technique can be run in real time to instantly show phenomena occurring at the temporal frequencies selected by the user. These signals can also be used to extract vital signs without contact. We present a heart rate extraction system that estimates the heart rate of newborns from videos recorded in a real nursery environment. Our system produces heart rate measurements of clinical accuracy when newborns have only mild motion and the videos are acquired in brightly lit environments.
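
The chain described, spatial decomposition followed by temporal bandpass filtering and amplification, can be caricatured in a few lines of Python. A heavily simplified sketch assuming OpenCV and SciPy, with a plain downsample standing in for the full Laplacian pyramid; band edges, gain, and file name are placeholders:

    import cv2
    import numpy as np
    from scipy.signal import butter, filtfilt

    cap = cv2.VideoCapture("face.mp4")             # placeholder file name
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Crude stand-in for spatial decomposition: a low-pass, downsampled level.
        frames.append(cv2.resize(frame, None, fx=0.25, fy=0.25).astype(np.float32))
    cap.release()

    video = np.stack(frames)                       # (T, H, W, 3); needs T > ~15

    # Temporal bandpass around typical heart rates, 0.8-3 Hz (48-180 bpm).
    b, a = butter(2, [0.8, 3.0], btype="band", fs=fps)
    pulsation = filtfilt(b, a, video, axis=0)

    alpha = 50.0                                   # amplification gain (placeholder)
    magnified = np.clip(video + alpha * pulsation, 0, 255).astype(np.uint8)
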
9

Tsoligkas, Nick A. "Video/Image Processing Algorithms for Video Compression and Image Stabilization Applications". Thesis, Teesside University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517469.

Annotation:
As the use of video becomes increasingly popular and widespread in the areas of broadcast services, the internet, entertainment and security-related applications, providing fast, automated, and effective techniques to represent video based on its content, such as objects and meanings, is an important topic of research. In many applications, removing the hand-shake effect to make video images stable and clear, or decomposing (and then transmitting) the video content as a collection of meaningful objects, is a necessity. Therefore automatic techniques for video stabilization and for extracting objects from video data, as well as for transmitting their shapes, motion and texture at very low bit rates over error-prone networks, are desired. In this thesis the design of a new low bit rate codec is presented, together with a method for video stabilization. The main technical contributions of this work are as follows. Firstly, an adaptive change detection algorithm identifies the objects from the background. In the first stage, the luminance difference between frames is modelled so as to separate contributions caused by noise and illumination variations from those caused by meaningful moving objects. In the second stage, a segmentation tool based on image blocks, histograms and clustering algorithms segments the difference image into areas corresponding to objects. In the third stage, morphological edge detection, contour analysis and object labelling are the main tasks of the proposed segmentation algorithm. Secondly, a new low bit rate codec is designed and analyzed based on the proposed segmentation tool. The estimated motion vectors inside the change detection mask, the corner points of the shapes, and the residual information inside the motion failure regions are transmitted to the decoder using different coding techniques, thus achieving efficient compression. Thirdly, a novel approach to estimating and removing unwanted video motion, which does not require accelerometers or gyros, is presented. The algorithm estimates the camera motion from the incoming video stream and compensates for unwanted translation and rotation. A synchronization unit supervises and generates the stabilized video sequence. The reliability of all the proposed algorithms is demonstrated by extensive experimentation on various video shots.
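
In its simplest form, the change detection stage described here reduces to thresholded frame differencing followed by morphological clean-up. An illustrative OpenCV sketch, where a fixed threshold stands in for the statistical noise/illumination model of the thesis; the file name and parameter values are placeholders:

    import cv2

    cap = cv2.VideoCapture("scene.mp4")            # placeholder file name
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        # Pixels whose luminance change exceeds the threshold are treated
        # as object motion rather than noise or illumination variation.
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # drop specks
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill holes
        prev = gray
    cap.release()
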
10

Tsoi, Yau Chat. "Video cosmetics: digital removal of blemishes from video". Thesis (M.Phil.), Hong Kong University of Science and Technology, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20TSOI.

Annotation:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 83-86). Also available in electronic version. Access restricted to campus users.

Books on the topic "Video processing"

1

Cremers, Daniel, Marcus Magnor, Martin R. Oswald, and Lihi Zelnik-Manor, eds. Video Processing and Computational Video. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24870-2.

2

Digital video processing. Upper Saddle River, NJ: Prentice Hall PTR, 1995.

3

Shivakumara, Palaiahnakote, and Umapada Pal. Cognitively Inspired Video Text Processing. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7069-5.

4

Sicuranza, Giovanni L., and Sanjit K. Mitra, eds. Multidimensional Processing of Video Signals. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3616-1.

5

Pereira, Rafael Silva, and Karin K. Breitman. Video Processing in the Cloud. London: Springer London, 2011. http://dx.doi.org/10.1007/978-1-4471-2137-4.

6

Comaniciu, Dorin, Rudolf Mester, Kenichi Kanatani, and David Suter, eds. Statistical Methods in Video Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/b104157.

7

Breitman, K. K. (Karin K.), and SpringerLink (Online service), eds. Video Processing in the Cloud. London: Rafael Silva Pereira, 2011.

8

Sicuranza, Giovanni L., and Sanjit Kumar Mitra, eds. Multidimensional processing of video signals. Boston: Kluwer Academic, 1992.

9

Nishitani, Takao, Peng H. Ang, and Francky Catthoor, eds. VLSI video/image signal processing. Boston: Kluwer Academic Publishers, 1993.

10

Sicuranza, Giovanni L., and Sanjit Kumar Mitra. Multidimensional processing of video signals. New York: Springer, 1992.


Book chapters on the topic "Video processing"

1

Pajankar, Ashwin. "Video Processing". In Raspberry Pi Image Processing Programming, 189–221. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8270-0_10.

2

Parekh, Ranjan. "Video Processing". In Fundamentals of IMAGE, AUDIO, and VIDEO PROCESSING Using MATLAB®, 259–301. First edition. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003019718-3.

3

Inoué, Shinya, and Kenneth R. Spring. "Digital Image Processing". In Video Microscopy, 509–58. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-5859-0_12.

4

Schu, Markus. "Video Processing Tasks". In Handbook of Visual Display Technology, 795–816. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-14346-0_42.

5

Schu, Markus. "Video Processing Tasks". In Handbook of Visual Display Technology, 1–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-642-35947-7_42-2.

6

Schu, Markus. "Video Processing Tasks". In Handbook of Visual Display Technology, 549–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-540-79567-4_42.

7

Müller, Karsten, Heiko Schwarz, Peter Eisert, and Thomas Wiegand. "Video Data Processing". In Digital Transformation, 43–62. Berlin, Heidelberg: Springer Berlin Heidelberg, 2019. http://dx.doi.org/10.1007/978-3-662-58134-6_4.

8

Huang, Yung-Lin. "Video Signal Processing". In Advances in Multirate Systems, 143–68. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59274-9_6.

9

Zhang, Yu-Jin. "Video Image Processing". In Handbook of Image Engineering, 773–805. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-5873-3_21.

10

Walter, Robert J., and Michael W. Berns. "Digital Image Processing and Analysis". In Video Microscopy, 327–92. Boston, MA: Springer US, 1986. http://dx.doi.org/10.1007/978-1-4757-6925-8_10.


Conference papers on the topic "Video processing"

1

Jensen, Bob. "Processing techniques for technical video production". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.tha1.

Annotation:
This presentation will focus on how a technical video production is assembled. Participants will discover the advantages and secrets of producing good technical videos. Various productions from the Maui Space Surveillance Site will be used to demonstrate how many separate elements are combined to achieve videos that meet complex technical objectives. Discussion of key production elements will include establishing objectives, choosing production values for a particular audience, script and storyboard writing, pre- production work, and techniques for production/post-production. Participants will learn about camera set-up, different camera shooting techniques, and the basic elements of lighting a scene. Effective use of audio will also be explored, including microphone types and applications, narration types and effects, and how to enhance productions through the use of appropriate music tracks. Basic editing will be covered, including aesthetics, transitions, movement, and shot variety. Effective presentation of data in technical productions will be demonstrated. Participants will learn how to use switcher and special effects, slow motion, still frames, and animation to provide scene emphasis and heighten viewer interest. Incorporating documentary footage and video from outside sources will be explained. Finally, effective methods of editing to upgrade and update older productions will also be presented.
2

Pinto, Carlos, Aleksandar Beric, Satendra Singh, and Sachin Farfade. "HiveFlex-Video VSP1: Video Signal Processing Architecture for Video Coding and Post-Processing". In Eighth IEEE International Symposium on Multimedia (ISM'06). IEEE, 2006. http://dx.doi.org/10.1109/ism.2006.83.

3

Hannan, Tanveer, Rajat Koner, Jonathan Kobold, and Matthias Schubert. "Box Supervised Video Segmentation Proposal Network". In 24th Irish Machine Vision and Image Processing Conference. Irish Pattern Recognition and Classification Society, 2022. http://dx.doi.org/10.56541/azwk8552.

Annotation:
Bounding box supervision provides a balanced compromise between labeling effort and result quality for image segmentation. However, no such work exists that is explicitly tailored to videos. Applying image segmentation methods directly to videos produces sub-optimal solutions because they do not exploit temporal information. In this work, we propose a box-supervised video segmentation proposal network. We take advantage of intrinsic video properties by introducing a novel box-guided motion calculation pipeline and a motion-aware affinity loss. As motion is utilized only during training, the run-time remains unchanged at inference. We evaluate our model on the Video Object Segmentation (VOS) challenge. The method outperforms state-of-the-art self-supervised methods by 16.4% and 6.9% J&F score, and the majority of fully supervised ones, on the DAVIS and YouTube-VOS datasets. Code is available at https://github.com/Tanveer81/BoxVOS.git.
4

"Image and Video Processing". In 2020 International Conference on Smart Systems and Technologies (SST). IEEE, 2020. http://dx.doi.org/10.1109/sst49455.2020.9264108.

5

"Image and video processing". In 2010 2nd International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, 2010. http://dx.doi.org/10.1109/ipta.2010.5586833.

6

"Session WA7b: Video processing". In 2017 51st Asilomar Conference on Signals, Systems, and Computers. IEEE, 2017. http://dx.doi.org/10.1109/acssc.2017.8335724.

7

"Session TP8b2: Video processing". In 2015 49th Asilomar Conference on Signals, Systems and Computers. IEEE, 2015. http://dx.doi.org/10.1109/acssc.2015.7421369.

8

"Image and Video Processing". In 2022 29th International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2022. http://dx.doi.org/10.1109/iwssip55020.2022.9854455.

9

"Image and Video Processing". In 2019 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2019. http://dx.doi.org/10.1109/iwssip.2019.8787278.

10

"Image and Video Processing". In 2022 International Conference on Smart Systems and Technologies (SST). IEEE, 2022. http://dx.doi.org/10.1109/sst55530.2022.9954742.


Reports of organizations on the topic "Video processing"

1

Roach, A. B. WebRTC Video Processing and Codec Requirements. RFC Editor, March 2016. http://dx.doi.org/10.17487/rfc7742.

2

Mitchell, Owen R., and Dominick Andrisani. Video Processing for Target Extraction, Recognition, and Tracking. Fort Belvoir, VA: Defense Technical Information Center, July 1989. http://dx.doi.org/10.21236/ada232403.

3

Fenimore, Charles, and Bruce F. Field. Video processing with the Princeton engine at NIST. Gaithersburg, MD: National Bureau of Standards, 1991. http://dx.doi.org/10.6028/nist.tn.1288.

4

Senecal, J., and A. Wegner. Extension of 4-8 Texture Hierarchies to Large Video Processing and Visualization. Office of Scientific and Technical Information (OSTI), November 2007. http://dx.doi.org/10.2172/924013.

5

Anastasiadis, S. H., J. K. Chen, J. T. Koberstein, A. F. Siegel, and J. E. Sohn. The Determination of Interfacial Tension by Video Image Processing of Pendant Fluid Drops. Fort Belvoir, VA: Defense Technical Information Center, March 1986. http://dx.doi.org/10.21236/ada165745.

6

Carter, R. J. Modification and Validation of an Automotive Data Processing Unit, Compressed Video System, and Communications Equipment. Office of Scientific and Technical Information (OSTI), April 1997. http://dx.doi.org/10.2172/2734.

7

Qi, Yuan. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches. Fort Belvoir, VA: Defense Technical Information Center, August 2000. http://dx.doi.org/10.21236/ada458739.

8

Selvaraju, Ragul, Shabariraj Siddeswaran, and Hariharan Sankarasubramanian. The Validation of Auto Rickshaw Model for Frontal Crash Studies Using Video Capture Data. SAE International, September 2020. http://dx.doi.org/10.4271/2020-28-0490.

Annotation:
Although auto rickshaws are among the most important modes of public transportation in Asian countries, and especially in India, safety standards and regulations for them have not been established to the extent they have for the car segment. Crash simulation has evolved as a way to analyze vehicle crashworthiness, since crash experiments are costly. This work provides a validation of an auto rickshaw model by comparing a frontal crash simulation with a random head-on crash video. A MATLAB video processing tool is used to process the crash video and obtain the impact velocity of the frontal crash. The vehicle, modelled in CATIA, is imported into the LS-DYNA simulation environment to perform a frontal crash simulation at the captured speed. The simulation is compared with the crash video at 5, 25, and 40 milliseconds respectively. The comparison shows that the crash patterns of the simulation and the real crash video are similar in detail. The modelled auto rickshaw can therefore be used in future work to validate real-time crashes and provide scope for improvements in three-wheeler safety.
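
Extracting an impact velocity from footage, as done here in MATLAB, comes down to tracking a point on the vehicle across frames and converting its pixel displacement to metres using the frame rate and a known scale. A language-neutral sketch in Python with placeholder numbers; the report's actual values and tooling differ:

    import numpy as np

    # Pixel positions of one tracked point on the vehicle in successive frames
    # (placeholder data; in practice they come from feature tracking).
    track = np.array([[100.0, 240.0], [112.0, 240.5], [124.1, 241.0]])

    FPS = 30.0                   # frame rate of the footage (placeholder)
    METERS_PER_PIXEL = 0.02      # scale from a known reference length (placeholder)

    step = np.diff(track, axis=0)                         # per-frame displacement
    speed = np.linalg.norm(step, axis=1) * METERS_PER_PIXEL * FPS
    print(f"mean approach speed: {speed.mean():.1f} m/s "
          f"({speed.mean() * 3.6:.0f} km/h)")
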
9

Ruby, Jeffrey, Richard Massaro, John Anderson, and Robert Fischer. Three-dimensional geospatial product generation from tactical sources, co-registration assessment, and considerations. Engineer Research and Development Center (U.S.), February 2023. http://dx.doi.org/10.21079/11681/46442.

Annotation:
According to Army Multi-Domain Operations (MDO) doctrine, generating timely, accurate, and exploitable geospatial products from tactical platforms is a critical capability to meet threats. The US Army Corps of Engineers, Engineer Research and Development Center, Geospatial Research Laboratory (ERDC-GRL) is carrying out 6.2 research to facilitate the creation of three-dimensional (3D) products from tactical sensors, including full-motion video, framing cameras, and sensors integrated on small Unmanned Aerial Systems (sUAS). This report describes an ERDC-GRL processing pipeline comprising custom code, open-source software, and commercial off-the-shelf (COTS) tools to geospatially rectify tactical imagery to authoritative foundation sources. Four datasets from different sensors and locations were processed against National Geospatial-Intelligence Agency-supplied foundation data. Results showed that the co-registration of tactical drone data to the reference foundation varied from 0.34 m to 0.75 m, exceeding the accuracy objective of 1 m described in briefings presented to Army Futures Command (AFC) and the Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT)). A discussion summarizes the results, describes steps to address processing gaps, and considers future efforts to optimize the pipeline for the generation of geospatial data for specific end-user devices and tactical applications.