Scientific literature on the topic "Multi-image superresolution"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Multi-image superresolution".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Multi-image superresolution"

1. Peng, Zhen-ming, Liang Jing, Yan-min He, and Ping Zhang. "Superresolution fusion of multi-focus image based on multiscale sparse dictionary". Optics and Precision Engineering 22, no. 1 (2014): 169–76. http://dx.doi.org/10.3788/ope.20142201.0169.

2. Wang, Yingqian, Jungang Yang, Chao Xiao, and Wei An. "Fast Convergence Strategy for Multi-Image Superresolution via Adaptive Line Search". IEEE Access 6 (2018): 9129–39. http://dx.doi.org/10.1109/access.2018.2799161.

3. Xu, Qingqing, Zhiyu Zhu, Huilin Ge, Zheqing Zhang, and Xu Zang. "Effective Face Detector Based on YOLOv5 and Superresolution Reconstruction". Computational and Mathematical Methods in Medicine 2021 (November 16, 2021): 1–9. http://dx.doi.org/10.1155/2021/7748350.

Abstract:
The application of face detection and recognition technology in security monitoring systems has made a huge contribution to public security. Face detection is an essential first step in many face analysis systems. In complex scenes, face detection accuracy is limited by missed and false detections of small faces caused by image quality, face scale, lighting, and other factors. In this paper, a two-level face detection model called SR-YOLOv5 is proposed to address some problems of dense small faces in actual scenarios. The research first optimized the backbone and loss function of YOLOv5, aiming at better performance in terms of mean average precision (mAP) and speed. Then, to improve face detection in blurred or low-resolution situations, we integrated image superresolution technology on the detection head. In addition, representative deep-learning face detection algorithms are discussed by grouping them into a few major categories, and the popular face detection benchmarks are enumerated in detail. Finally, the WIDER FACE dataset is used to train and test the SR-YOLOv5 model. Compared with multitask convolutional neural network (MTCNN), Contextual Multi-Scale Region-based CNN (CMS-RCNN), Finding Tiny Faces (HR), Single Shot Scale-invariant Face Detector (S3FD), and TinaFace algorithms, it is verified that the proposed model has higher detection precision, which is 0.7%, 0.6%, and 2.9% higher than the best competing method. SR-YOLOv5 can effectively use face information to accurately detect hard-to-detect face targets in complex scenes.
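The superresolution-before-detection idea summarized in the abstract above can be sketched generically. This is a minimal illustration, not the paper's method: a plain bilinear upscaler stands in for the learned SR module that SR-YOLOv5 attaches to its detection head.

```python
import numpy as np

def upscale_bilinear(img, factor=2):
    """Bilinear upscaling of a 2-D grayscale image; a stand-in for a
    learned superresolution module applied before detection."""
    h, w = img.shape
    new_h, new_w = h * factor, w * factor
    # Source-image coordinates for every target pixel.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend the four neighbouring source pixels for each target pixel.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# A blurry 8x8 face crop, upscaled 4x before being fed to a detector.
rng = np.random.default_rng(0)
crop = rng.random((8, 8))
sr_crop = upscale_bilinear(crop, factor=4)
print(sr_crop.shape)  # (32, 32)
```

In the actual system a trained SR network would replace `upscale_bilinear`, and the enlarged crop would be passed to the YOLOv5 detection head.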
4. Shen, Kai, Hui Lu, Sarfaraz Baig, and Michael R. Wang. "Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging". Biomedical Optics Express 8, no. 11 (October 6, 2017): 4887. http://dx.doi.org/10.1364/boe.8.004887.

5. Han, Xian-Hua, Yongqing Sun, Jian Wang, Boxin Shi, Yinqiang Zheng, and Yen-Wei Chen. "Spectral Representation via Data-Guided Sparsity for Hyperspectral Image Super-Resolution". Sensors 19, no. 24 (December 7, 2019): 5401. http://dx.doi.org/10.3390/s19245401.

Abstract:
Hyperspectral imaging is capable of acquiring the rich spectral information of scenes and has great potential for understanding the characteristics of different materials in many applications ranging from remote sensing to medical imaging. However, due to hardware limitations, existing hyper-/multi-spectral imaging devices usually cannot achieve high spatial resolution. This study aims to generate a high-resolution hyperspectral image from the available low-resolution hyperspectral and high-resolution RGB images. We propose a novel hyperspectral image superresolution method via non-negative sparse representation of reflectance spectra with a data-guided sparsity constraint. The proposed method first learns the hyperspectral dictionary from the low-resolution hyperspectral image and then transforms it into an RGB dictionary with the camera response function, which is determined by the physical properties of the RGB imaging camera. Given the RGB vector and the RGB dictionary, the sparse representation of each pixel in the high-resolution image is calculated with the guidance of a sparsity map, which measures pixel material purity. The sparsity map is generated by analyzing the local content similarity of the pixel under consideration in the available high-resolution RGB image and quantifying the spectral mixing degree, motivated by the fact that the spectrum of a pure material should have a sparse representation over the spectral dictionary. Since the proposed method adaptively adjusts the sparsity in the spectral representation based on the local content of the available high-resolution RGB image, it can produce a more robust spectral representation for recovering the target high-resolution hyperspectral image. Comprehensive experiments on two public hyperspectral datasets and three real remote sensing images validate that the proposed method achieves promising performance compared to existing state-of-the-art methods.
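The projection-and-sparse-coding step described above can be illustrated with a toy example. This sketch omits the paper's data-guided sparsity map and uses plain non-negative least squares (scipy's `nnls`) in place of the sparsity-constrained solver; the dictionary sizes and camera response below are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy setup: 31 spectral bands, a dictionary of 8 reflectance atoms,
# and a 3x31 camera response function mapping spectra to RGB.
bands, atoms = 31, 8
D_hs = np.abs(rng.standard_normal((bands, atoms)))  # hyperspectral dictionary
C = np.abs(rng.standard_normal((3, bands)))         # camera response (assumed known)
D_rgb = C @ D_hs                                    # dictionary projected to RGB

# Ground-truth pixel: a sparse, non-negative mix of two atoms.
alpha_true = np.zeros(atoms)
alpha_true[[1, 5]] = [0.7, 0.3]
rgb_pixel = D_rgb @ alpha_true

# Recover non-negative coefficients from the RGB observation, then
# reconstruct the high-resolution spectrum with the HS dictionary.
alpha, _ = nnls(D_rgb, rgb_pixel)
spectrum = D_hs @ alpha
print(spectrum.shape)  # (31,)
```

In the full method this per-pixel solve is repeated over the high-resolution RGB image, with the sparsity map steering how many atoms each pixel may use.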
6. Максимов, А. И. "Each frame point restoration error based super-resolution method". Южно-Сибирский научный вестник, no. 4(38) (August 31, 2021): 30–38. http://dx.doi.org/10.25699/sssb.2021.38.4.011.

Abstract:
This paper proposes a multi-frame superresolution method that exploits the restoration-error values at each point of every frame to form the resulting high-resolution image. The method combines the results of many years of the author's research in image and video processing. It was developed for forensic video analysis tasks and is intended to improve the visual quality of a flat local object located close to the center of the frame. The method consists of three stages. The first stage is an optimal super-resolution recovery of each frame under a continuous-discrete observation model; the recovery errors are stored in an additional image channel. The second stage is frame registration, with the geometric transformation also applied to the additional channel. The final stage is a weighted fusion that is optimal under the mean-squared-error criterion. The advantages of the proposed method are the estimation of the error of the restored image at each point and accounting for image degradations in the continuous domain. An experimental study of the method's reconstruction error was carried out; the results were compared with a baseline that does not use the novel elements of the proposed method: averaging fusion of linearly interpolated frames. Linear interpolation was chosen because it also fits the filtering model of image recovery used in the method's first stage. The obtained results show that the proposed method outperforms this baseline.
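The third stage described above, fusion that is optimal under the mean-squared-error criterion, reduces to inverse-variance weighting when the registered frames are treated as independent, unbiased estimates with known per-pixel error variances. A minimal numpy sketch under those assumptions (not the author's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three registered frames estimating the same 4x4 patch, each carrying
# a per-pixel error-variance map in an additional channel.
truth = rng.standard_normal((4, 4))
sigma2 = np.array([0.1, 0.4, 0.9])  # assumed per-frame noise variances
frames = np.stack([truth + rng.standard_normal((4, 4)) * np.sqrt(s)
                   for s in sigma2])
var_maps = np.stack([np.full((4, 4), s) for s in sigma2])

# MSE-optimal fusion of independent unbiased estimates weights each
# frame by the inverse of its error variance at every pixel.
w = 1.0 / var_maps
fused = (w * frames).sum(axis=0) / w.sum(axis=0)

# The fused error variance 1 / sum(1/sigma_i^2) never exceeds the
# best single frame's variance, so fusion can only help.
fused_var = 1.0 / w.sum(axis=0)
```

Here the variance maps are constant per frame for simplicity; in the paper they vary per pixel, coming from the first-stage recovery error carried through registration.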

Conference proceedings on the topic "Multi-image superresolution"

1. Zhou, Yu, Sam Kwong, Wei Gao, Xiao Zhang, and Xu Wang. "Complexity reduction in multi-dictionary based single-image superresolution reconstruction via phase congruency". In 2015 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR). IEEE, 2015. http://dx.doi.org/10.1109/icwapr.2015.7295941.

