Selection of scientific literature on the topic "Blurry frame detection"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Blurry frame detection".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Blurry frame detection"

1

Li, Dengshan, Rujing Wang, Chengjun Xie, Liu Liu, Jie Zhang, Rui Li, Fangyuan Wang, Man Zhou, and Wancai Liu. "A Recognition Method for Rice Plant Diseases and Pests Video Detection Based on Deep Convolutional Neural Network." Sensors 20, no. 3 (January 21, 2020): 578. http://dx.doi.org/10.3390/s20030578.

Annotation:
Increasing grain production is essential in areas where food is scarce, and controlling crop diseases and pests in time is an effective way to achieve it. Toward a video detection system for plant diseases and pests, and ultimately a real-time one, we propose a deep learning-based video detection architecture with a custom backbone. The video is first decomposed into still frames, each frame is sent to a still-image detector, and the processed frames are finally synthesized back into a video. The still-image detector uses Faster R-CNN as its framework, and image-trained models are used to detect relatively blurry videos. Additionally, a set of video-based evaluation metrics built on a machine learning classifier is proposed, which reflected the quality of video detection effectively in the experiments. Experiments showed that, in our experimental environment, the system with the custom backbone was more suitable for detecting untrained rice videos than systems with VGG16, ResNet-50, or ResNet-101 backbones, or than YOLOv3.
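The frame-by-frame pipeline described above (decode the video, run a still-image detector on each frame, re-encode the results) is straightforward to express with OpenCV. Below is a minimal sketch, assuming a hypothetical run_detector() function as a stand-in for the paper's Faster R-CNN still-image detector; it is not the authors' implementation.

    # Minimal sketch of a frame-by-frame video detection pipeline.
    # run_detector() is a hypothetical placeholder, not the paper's model.
    import cv2

    def run_detector(frame):
        """Placeholder: return a list of (x, y, w, h, label) detections."""
        return []

    def detect_in_video(in_path, out_path):
        cap = cv2.VideoCapture(in_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
        while True:
            ok, frame = cap.read()                     # video -> still frames
            if not ok:
                break
            for x, y, bw, bh, label in run_detector(frame):
                cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
                cv2.putText(frame, label, (x, y - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
            writer.write(frame)                        # frames -> video again
        cap.release()
        writer.release()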
2

Müller, Carola, and Sven Lidin. "Freeze Frames vs. Movies." Acta Crystallographica Section A Foundations and Advances 70, a1 (August 5, 2014): C178. http://dx.doi.org/10.1107/s2053273314098210.

Annotation:
Sometimes, model building in crystallography is like solving a puzzle: all obvious symmetrical or methodological errors are excluded, you apparently understand the measured patterns in 3D, but the structure solution and/or refinement is just not working. One such nerve-stretching problem arises from metrically commensurate structures (MCS). This expression means that the observed values of the components of the modulation wave vectors are rational by chance and not because of a lock-in. Hence, it is not a superstructure, although the boundaries between the two descriptions are blurry. Using a superstructure model for a MCS decreases the degrees of freedom and forces the atomic arrangement into an artificial state of ordering. Just imagine it as looking at a freeze frame from a movie instead of watching the whole film. The consequences for structure solution and refinement of MCS are not always as dramatic as stated in the beginning. On the contrary, treating a superstructure like a MCS might be a worthwhile idea. Converting from a superstructure model to a superspace model may lead to a substantial decrease in the number of parameters needed to model the structure. Further, it can permit the refinement of parameters that the paucity of data does not allow in a conventional description. However, it is well known that families of superstructures can be described elegantly by the use of superspace models that collectively treat a whole range of structures, commensurate and incommensurate. Nevertheless, practical complications in the refinement are not uncommon; examples are overlapping satellites of different orders and parameter correlations. Notably, MCS occur in intermetallic compounds that are important for the performance of next-generation electronic devices. Based on examples of their (pseudo)hexagonal 3+1D and 3+2D structures, we will discuss the detection and occurrence of MCS as well as the benefits and limitations of implementing them artificially.
3

Zhu, Haidi, Haoran Wei, Baoqing Li, Xiaobing Yuan, and Nasser Kehtarnavaz. "A Review of Video Object Detection: Datasets, Metrics and Methods." Applied Sciences 10, no. 21 (November 4, 2020): 7834. http://dx.doi.org/10.3390/app10217834.

Annotation:
Although there are well-established object detection methods based on static images, applying them to video data on a frame-by-frame basis faces two shortcomings: (i) a lack of computational efficiency, due to redundancy across image frames and to not exploiting the temporal and spatial correlation of features across frames, and (ii) a lack of robustness to real-world conditions such as motion blur and occlusion. Since the introduction of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2015, a growing number of methods have appeared in the literature on video object detection, many of which utilize deep learning models. The aim of this paper is to provide a review of these papers on video object detection. An overview of the existing datasets for video object detection, together with commonly used evaluation metrics, is first presented. Video object detection methods are then categorized and each category is described. Two comparison tables are provided to show their differences in terms of both accuracy and computational efficiency. Finally, some future trends in video object detection to address the remaining challenges are noted.
4

Wan, Jixiang, Ming Xia, Zunkai Huang, Li Tian, Xiaoying Zheng, Victor Chang, Yongxin Zhu, and Hui Wang. "Event-Based Pedestrian Detection Using Dynamic Vision Sensors." Electronics 10, no. 8 (April 8, 2021): 888. http://dx.doi.org/10.3390/electronics10080888.

Annotation:
Pedestrian detection has attracted great research attention in video surveillance, traffic statistics, and especially autonomous driving. To date, almost all pedestrian detection solutions derive from conventional frame-based image sensors, which have limited reaction speed and high data redundancy. The dynamic vision sensor (DVS), inspired by biological retinas, efficiently captures visual information as sparse, asynchronous events rather than dense, synchronous frames. It can eliminate redundant data transmission and avoid motion blur or data leakage in high-speed imaging applications. However, it is usually impractical to apply event streams directly to conventional object detection algorithms. To address this issue, we first propose a novel event-to-frame conversion method that integrates the inherent characteristics of events more efficiently. Moreover, we design an improved feature extraction network that can reuse intermediate features to further reduce the computational effort. We evaluate the performance of our proposed method on a custom dataset containing multiple real-world pedestrian scenes. The results indicate that our proposed method improves pedestrian detection accuracy by about 5.6–10.8%, and its detection speed is nearly 20% faster than previously reported methods. Furthermore, it achieves a processing speed of about 26 FPS and an AP of 87.43% when deployed on a single CPU, so it fully meets the requirement of real-time detection.
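As a rough illustration of the event-to-frame idea (not the paper's specific conversion method, which integrates the inherent event characteristics more elaborately), DVS events can be accumulated into a fixed-size two-channel frame over a time window:

    # Illustrative sketch only: accumulate DVS events (t, x, y, polarity)
    # into a two-channel frame over a fixed time window.
    import numpy as np

    def events_to_frame(events, height, width, t_start, t_end):
        """events: iterable of (t, x, y, p) tuples with polarity p in {+1, -1}."""
        frame = np.zeros((2, height, width), dtype=np.float32)
        for t, x, y, p in events:
            if t_start <= t < t_end:
                channel = 0 if p > 0 else 1    # keep ON and OFF events separate
                frame[channel, y, x] += 1.0    # simple per-pixel event count
        for c in range(2):                     # bound the input for a detector
            m = frame[c].max()
            if m > 0:
                frame[c] /= m
        return frame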
5

Zhang, Junpeng, Xiuping Jia, and Jiankun Hu. "Local Region Proposing for Frame-Based Vehicle Detection in Satellite Videos." Remote Sensing 11, no. 20 (October 12, 2019): 2372. http://dx.doi.org/10.3390/rs11202372.

Annotation:
Recent developments in remote sensing imagery enable satellites to capture videos from space. These satellite videos record the motion of vehicles over a vast territory, offering significant advantages for traffic monitoring over ground-based systems. However, detecting vehicles in satellite videos is challenged by the low spatial resolution and low contrast of each video frame. The vehicles in these videos are small, and most of them blur into their background regions. While region proposals are often generated for efficient target detection, they perform poorly on satellite videos. To meet this challenge, we propose a Local Region Proposing approach (LRP) with three steps. A video frame is first segmented into semantic regions, and possible targets are then detected in these coarse-scale regions. A discrete Histogram Mixture Model (HistMM) is proposed in the third step to narrow down the region proposals by quantifying their likelihood of belonging to the target category, where training is conducted on positive samples only. Experimental results demonstrate that LRP generates region proposals with improved target recall rates. When a slim Fast-RCNN detector is applied, LRP achieves better detection performance than the state-of-the-art approaches tested.
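To make the proposal-narrowing idea concrete, here is a deliberately simplified, single-component sketch: learn an intensity histogram from positive (vehicle) patches only and score each region proposal by its log-likelihood under that histogram. The paper's HistMM is a mixture of such models, so this is only an illustration of the principle, not its implementation.

    # Simplified stand-in for histogram-likelihood proposal scoring.
    # The paper's discrete Histogram Mixture Model (HistMM) is richer.
    import numpy as np

    def fit_histogram(positive_patches, bins=32):
        values = np.concatenate([p.ravel() for p in positive_patches])
        hist, edges = np.histogram(values, bins=bins, range=(0, 255))
        probs = (hist + 1.0) / (hist.sum() + bins)   # Laplace smoothing
        return probs, edges

    def score_proposal(patch, probs, edges):
        idx = np.clip(np.digitize(patch.ravel(), edges) - 1, 0, len(probs) - 1)
        return np.log(probs[idx]).mean()             # mean log-likelihood per pixel

Proposals with low scores are discarded, which narrows the candidate set before the detector runs.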
6

Liu, Jie. "Impact of High-Tech Image Formats Based on Full-Frame Sensors on Visual Experience and Film-Television Production." Wireless Communications and Mobile Computing 2021 (August 27, 2021): 1–13. http://dx.doi.org/10.1155/2021/9881641.

Annotation:
Today, high-tech image format technology is thoroughly established in contemporary visual experience and film and television production. In the era of modern technology combined with the Internet, virtual numbers connect the past with the future, merge reality and myth, and even synchronize the primitive and modern worlds. This article adopts experimental and comparative analysis, setting up an experimental group and a reference group, with the aim of using high-tech image formats from the perspective of a full-frame sensor to realize a perspective screen and a three-dimensional screen for observers in indoor scenes. In addition, the process of reconstructing a 3D model using high-precision geometric information and realistic color information is described. The experimental results show that the sharpness threshold cannot be too small; otherwise, some clear images are misjudged as blurred. If the threshold is too large, missed detections of blurred images increase. Combined with the subjective evaluation of the images, when the threshold is 0.8 the experimental result is close to the subjective evaluation, with a missed detection rate of 2.41%. This shows that the ASODVS three-dimensional digital scene constructed in this article can meet the needs of real-time image processing and can effectively evaluate the clarity of realistic analog images. It also shows that controlling the size of the y-coordinate value can affect the user's visual experience: the smaller the y value within a certain range, the clearer the result.
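The threshold trade-off described in this abstract can be reproduced with any standard sharpness measure. The sketch below uses the variance of the Laplacian, normalized over a batch of frames, as a stand-in for the paper's unspecified sharpness metric; the 0.8 threshold is simply the value quoted above.

    # Blur flagging with a common sharpness measure (variance of the
    # Laplacian). The paper's actual metric and threshold convention are
    # not specified; this is an assumption-laden sketch.
    import cv2
    import numpy as np

    def sharpness(gray):
        return cv2.Laplacian(gray, cv2.CV_64F).var()   # higher = sharper

    def flag_blurry(frames, threshold=0.8):
        scores = np.array([sharpness(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY))
                           for f in frames])
        top = scores.max()
        norm = scores / top if top > 0 else scores     # normalize to [0, 1]
        return norm < threshold    # True where a frame is judged blurry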
7

Leung, Ho Kwan, Xiu-Zhi Chen, Chao-Wei Yu, Hong-Yi Liang, Jian-Yi Wu, and Yen-Lin Chen. "A Deep-Learning-Based Vehicle Detection Approach for Insufficient and Nighttime Illumination Conditions." Applied Sciences 9, no. 22 (November 8, 2019): 4769. http://dx.doi.org/10.3390/app9224769.

Annotation:
Most object detection models cannot achieve satisfactory performance under nighttime and other insufficient illumination conditions, which may be due to the collection of data sets and typical labeling conventions. Public data sets collected for object detection are usually photographed with sufficient ambient lighting. However, their labeling conventions typically focus on clear objects and ignore blurry and occluded objects. Consequently, the detection performance levels of traditional vehicle detection techniques are limited in nighttime environments without sufficient illumination. When objects occupy a small number of pixels and the existence of crucial features is infrequent, traditional convolutional neural networks (CNNs) may suffer from serious information loss due to the fixed number of convolutional operations. This study presents solutions for data collection and the labeling convention of nighttime data to handle various types of situations, including in-vehicle detection. Moreover, the study proposes a specifically optimized system based on the Faster region-based CNN model. The system has a processing speed of 16 frames per second for 500 × 375-pixel images, and it achieved a mean average precision (mAP) of 0.8497 in our validation segment involving urban nighttime and extremely inadequate lighting conditions. The experimental results demonstrated that our proposed methods can achieve high detection performance in various nighttime environments, such as urban nighttime conditions with insufficient illumination, and extremely dark conditions with nearly no lighting. The proposed system outperforms original methods that have an mAP value of approximately 0.2.
8

Travers, Théo, Vincent G. Colin, Matthieu Loumaigne, Régis Barillé, and Denis Gindre. "Single-Particle Tracking with Scanning Non-Linear Microscopy." Nanomaterials 10, no. 8 (August 3, 2020): 1519. http://dx.doi.org/10.3390/nano10081519.

Annotation:
This study describes the adaptation of non-linear microscopy to single-particle tracking (SPT), a method commonly used in biology with single-photon fluorescence. Imaging moving objects with non-linear microscopy raises difficulties due to the scanning nature of the acquisitions. The interest of the study lies in balancing all the experimental parameters (objective, resolution, frame rate), which must be optimized to record long trajectories with the best accuracy and frame rate. To evaluate the performance of the setup for SPT, several basic estimation methods are used and adapted to the new detection process. The covariance-based estimator (CVE) appears to be the best way to evaluate the diffusion coefficient from trajectories, using specific factors for motion blur and localization error.
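The covariance-based estimator mentioned here has a compact closed form (Vestergaard, Blainey, and Flyvbjerg, 2014): for 1D positions sampled at interval Δt with displacements Δx_n, D̂ = ⟨Δx_n²⟩/(2Δt) + ⟨Δx_n·Δx_{n+1}⟩/Δt, where the covariance term corrects for motion blur and localization error. A minimal NumPy sketch of that standard form, not the authors' implementation:

    # Covariance-based estimator (CVE) for the diffusion coefficient.
    # x: 1D positions sampled at interval dt; R is the motion-blur
    # coefficient (R = 1/6 for continuous illumination over the frame time).
    import numpy as np

    def cve(x, dt, R=1.0 / 6.0):
        dx = np.diff(x)
        mean_sq = np.mean(dx ** 2)
        mean_cov = np.mean(dx[:-1] * dx[1:])
        D = mean_sq / (2.0 * dt) + mean_cov / dt
        sigma2 = R * mean_sq + (2.0 * R - 1.0) * mean_cov  # localization variance
        return D, sigma2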
9

Vanden Berk, Daniel E., Sarah C. Wesolowski, Mary J. Yeckley, Joseph M. Marcinik, Jean M. Quashnock, Lawrence M. Machia, and Jian Wu. "Extreme ultraviolet quasar colours from GALEX observations of the SDSS DR14Q catalogue." Monthly Notices of the Royal Astronomical Society 493, no. 2 (February 18, 2020): 2745–64. http://dx.doi.org/10.1093/mnras/staa411.

Annotation:
The rest-frame far to extreme ultraviolet (UV) colour–redshift relationship has been constructed from data on over 480,000 quasars carefully cross-matched between SDSS Data Release 14 and the final GALEX photometric catalogue. UV matching and detection probabilities are given for all the quasars, including dependencies on separation, optical brightness, and redshift. Detection limits are also provided for all objects. The UV colour distributions are skewed redward at virtually all redshifts, especially when detection limits are accounted for. The median GALEX far-UV minus near-UV (FUV − NUV) colour–redshift relation is reliably determined up to z ≈ 2.8, corresponding to rest-frame wavelengths as short as 400 Å. Extreme UV (EUV) colours are substantially redder than found previously, when detection limits are properly accounted for. Quasar template spectra were forward modelled through the GALEX bandpasses, accounting for intergalactic opacity, intrinsic reddening, and continuum slope variations. Intergalactic absorption by itself cannot account for the very red EUV colours. The colour–redshift relation is consistent with no intrinsic reddening, at least for SMC-like extinction. The best model fit has a FUV continuum power-law slope α_ν,FUV = −0.34 ± 0.03, consistent with previous results, but an EUV slope α_ν,EUV = −2.90 ± 0.04 that is much redder and inconsistent with any previous composite value (all ≳ −2.0). The EUV slope difference can be attributed in part to the tendency of previous studies to preferentially select UV-brighter and bluer objects. The weak EUV flux suggests quasar accretion disc models that include outflows such as disc winds.
10

Holešovský, Ondřej, Radoslav Škoviera, Václav Hlaváč, and Roman Vítek. "Experimental Comparison between Event and Global Shutter Cameras." Sensors 21, no. 4 (February 6, 2021): 1137. http://dx.doi.org/10.3390/s21041137.

Annotation:
We compare event-cameras with fast (global shutter) frame-cameras experimentally, asking: “What is the application domain, in which an event-camera surpasses a fast frame-camera?” Surprisingly, finding the answer has been difficult. Our methodology was to test event- and frame-cameras on generic computer vision tasks where event-camera advantages should manifest. We used two methods: (1) a controlled, cheap, and easily reproducible experiment (observing a marker on a rotating disk at varying speeds); (2) selecting one challenging practical ballistic experiment (observing a flying bullet having a ground truth provided by an ultra-high-speed expensive frame-camera). The experimental results include sampling/detection rates and position estimation errors as functions of illuminance and motion speed; and the minimum pixel latency of two commercial state-of-the-art event-cameras (ATIS, DVS240). Event-cameras respond more slowly to positive than to negative large and sudden contrast changes. They outperformed a frame-camera in bandwidth efficiency in all our experiments. Both camera types provide comparable position estimation accuracy. The better event-camera was limited by pixel latency when tracking small objects, resulting in motion blur effects. Sensor bandwidth limited the event-camera in object recognition. However, future generations of event-cameras might alleviate bandwidth limitations.

Dissertations on the topic "Blurry frame detection"

1

Vraňáková, Sofia. "Zpracování snímků sítnice s vysokým rozlišením" [Processing of high-resolution retinal images]. Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442580.

Annotation:
The thesis focuses on the processing of high-resolution retinal images. Its goal is to improve the quality of the final retinal image obtained from a sequence of lower-quality frames. Individual frames are first processed with bilateral filtering and contrast enhancement. In the next step, blurred frames and frames showing other parts of the retina are removed. The shift between individual frames in the sequence is estimated by phase correlation, and the images are then fused into the final high-resolution image by averaging and by a super-resolution technique, specifically regularization using bilateral total variation. The resulting median quality scores of the obtained images are PIQUE 0.2600, NIQE 0.0701, and BRISQUE 0.3936 for the averaging technique, and PIQUE 0.1063, NIQE 0.0507, and BRISQUE 0.1570 for the super-resolution technique.
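The phase-correlation registration step mentioned in the abstract has a compact FFT formulation. The sketch below estimates the integer-pixel shift between two grayscale frames; the rest of the thesis pipeline (bilateral filtering, blurry-frame rejection, fusion, bilateral-total-variation regularization) is not shown.

    # Integer-pixel shift estimation by phase correlation between two
    # equally sized grayscale frames given as 2D NumPy arrays.
    import numpy as np

    def phase_correlation_shift(a, b):
        A = np.fft.fft2(a)
        B = np.fft.fft2(b)
        cross_power = A * np.conj(B)
        cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
        corr = np.fft.ifft2(cross_power).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > a.shape[0] // 2:                     # map peaks to signed shifts
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return dy, dx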

Book chapters on the topic "Blurry frame detection"

1

Strelkova, Tatyana, Alexander I. Strelkov, Vladimir M. Kartashov, Alexander P. Lytyuga, and Alexander S. Kalmykov. "Methods of Reception and Signal Processing in Machine Vision Systems." In Examining Optoelectronics in Machine Vision and Applications in Industry 4.0, 71–102. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-6522-3.ch003.

Annotation:
The chapter covers the development of a mathematical model of signals in optoelectronic systems. The model can serve as a basis for developing an algorithm to detect optical signals from objects. Analytical expressions for the mean values and dispersions of the signal and noise components are given. These expressions can be used to estimate the efficiency of the proposed algorithm by the criteria of probabilistic detection characteristics and of signal-to-noise ratio. The possibility of improving signal detection characteristics at low signal-to-noise ratios is shown. A method is proposed for detecting moving objects that combines correlation and threshold methods, as well as optimization of the interframe processing of the sequence of analyzed frames. This method allows estimating the statistical characteristics of the signal and noise components and calculating the correlation integral when detecting moving low-contrast objects. The proposed algorithm for detecting moving objects in low-illuminance conditions helps prevent the manifestation of the blur effect.
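As a toy illustration of combining correlation and threshold methods for detection (the chapter's actual algorithm, including its optimized interframe processing and statistical noise model, is more elaborate), one can cross-correlate each frame with an object template and threshold the normalized correlation map:

    # Toy correlation-plus-threshold detector: report detections where the
    # normalized cross-correlation with a template exceeds a threshold.
    import cv2
    import numpy as np

    def detect_by_correlation(frame_gray, template_gray, threshold=0.7):
        corr = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(corr >= threshold)    # threshold the correlation map
        h, w = template_gray.shape
        return [(x, y, w, h, float(corr[y, x])) for y, x in zip(ys, xs)]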

Conference papers on the topic "Blurry frame detection"

1

Oh, JungHwan, Sae Hwang, Wallapak Tavanapong, Piet C. de Groen, and Johnny Wong. "Blurry-frame detection and shot segmentation in colonoscopy videos." In Electronic Imaging 2004, edited by Minerva M. Yeung, Rainer W. Lienhart, and Chung-Sheng Li. SPIE, 2003. http://dx.doi.org/10.1117/12.527108.

2

Yeh, Ching-Feng, Aaron Heidel, Hong-Yi Lee, and Lin-Shan Lee. "Recognition of highly imbalanced code-mixed bilingual speech with frame-level language detection based on blurred posteriorgram." In ICASSP 2012 - 2012 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2012. http://dx.doi.org/10.1109/icassp.2012.6289011.

