Journal articles on the topic "Automatic Colorization"

To see the other types of publications on this topic, follow the link: Automatic Colorization.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Automatic Colorization".

Next to every source in the list of references, there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the online abstract of the work, where such details are available in the metadata.

Browse journal articles across a wide range of disciplines and organise your bibliography correctly.

1

Aoki, Terumasa, and Van Nguyen. "Global Distribution Adjustment and Nonlinear Feature Transformation for Automatic Colorization." Advances in Multimedia 2018 (2018): 1–15. http://dx.doi.org/10.1155/2018/1504691.

Abstract:
Automatic colorization methods are generally classified into two groups: propagation-based methods and reference-based methods. In reference-based automatic colorization, one or more color images are used as references to reconstruct the original color of a gray target image. The most important task here is to find the best matching pairs for all pixels between the reference and target images, in order to transfer color information from reference to target pixels. Many attractive local feature-based image matching methods have been developed over the last two decades. Unfortunately, as far as we know, there are no optimal matching methods for automatic colorization, because the requirements for pixel matching in automatic colorization are wholly different from those of traditional image matching. To design an efficient matching algorithm for automatic colorization, clustering pixels at low computational cost and generating descriptive feature vectors are the most important challenges to be solved. In this paper, we present a novel method to address these two problems. In particular, our work concentrates on solving the second problem (designing a descriptive feature vector); namely, we discuss how to learn a descriptive texture feature using a scaled sparse texture feature combined with a nonlinear transformation to construct an optimal feature descriptor. Our experimental results show that our proposed method outperforms state-of-the-art methods in terms of robustness of color reconstruction for automatic colorization applications.
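
To make the matching requirement concrete, here is a deliberately small sketch of the reference-based transfer loop the abstract describes: compute a feature vector per pixel, match target pixels to reference pixels with a nearest-neighbor search, and copy the matched chrominance. The luminance-plus-local-standard-deviation feature is an illustrative stand-in, not the learned descriptor proposed in the paper.

```python
# Rough sketch of per-pixel reference-based color transfer (assumed
# feature: luminance + local std; the paper learns a far richer descriptor).
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.spatial import cKDTree
from skimage import color

def pixel_features(l_channel, patch=5):
    # Luminance plus local standard deviation as a crude texture cue.
    mean = uniform_filter(l_channel, patch)
    sq_mean = uniform_filter(l_channel ** 2, patch)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return np.stack([l_channel.ravel(), std.ravel()], axis=1)

def transfer_colors(gray_target, rgb_reference):
    # gray_target in [0, 1]; rgb_reference in [0, 1].
    ref_lab = color.rgb2lab(rgb_reference)
    tgt_l = gray_target.astype(np.float64) * 100.0   # gray -> Lab lightness
    tree = cKDTree(pixel_features(ref_lab[..., 0]))  # index reference pixels
    _, idx = tree.query(pixel_features(tgt_l))       # best match per pixel
    ab = ref_lab[..., 1:].reshape(-1, 2)[idx].reshape(tgt_l.shape + (2,))
    return np.clip(color.lab2rgb(np.dstack([tgt_l, ab])), 0, 1)
```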
2

Alam khan, Sharique, and Alok Katiyar. "Automatic colorization of natural images using deep learning." YMER Digital 21, no. 05 (May 20, 2022): 946–51. http://dx.doi.org/10.37896/ymer21.05/a6.

Abstract:
This paper presents a deep learning approach for automatic colorization of images with optional user-guided hints. The system maps a grayscale image, along with user "hints" (selected colors), to an output colorization with a Convolutional Neural Network (CNN). Previous approaches have relied heavily on user input, which results in non-real-time, desaturated outputs. The network takes user edits by fusing low-level information of the source with high-level information learned from large-scale data. Some networks are trained on a large dataset to eliminate this dependency. Image colorization systems find applications in astronomical photography, CCTV footage, electron microscopy, etc. Combining color data from large datasets with user inputs provides a model for accurate and efficient colorization of grayscale images. Keywords: image colorization; deep learning; convolutional neural network; image processing.
3

Prasanna, N. Lakshmi, Sk Sohal Rehman, V. Naga Phani, S. Koteswara Rao, and T. Ram Santosh. "AUTOMATIC COLORIZATION USING CONVOLUTIONAL NEURAL NETWORKS." International Journal of Computer Science and Mobile Computing 10, no. 7 (July 30, 2021): 10–19. http://dx.doi.org/10.47760/ijcsmc.2021.v10i07.002.

Abstract:
Automatic colorization helps to hallucinate what an input grayscale image would look like when colorized; automatic coloring makes it look and feel better than grayscale. One of the most important technologies used in machine learning is deep learning, which trains the computer with algorithms that imitate the working of the human brain. Some of the areas in which it is used are medicine, industrial automation, electronics, etc. The main objective of this project is coloring grayscale images. We have combined the concepts of convolutional neural networks with the OpenCV library in Python to construct our desired model. A user interface has also been fabricated to get personalized inputs using PIL. Traditionally, the user had to give details about boundaries, what colors to put, etc., so colorization required considerable user intervention and remained a tedious, time-consuming, and expensive task. In this paper, we therefore try to build a model that colorizes grayscale images automatically using modern deep learning techniques. In the colorization task, the model needs to find characteristics that map grayscale images to colored ones.
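
As a concrete starting point for the CNN-plus-OpenCV pipeline the abstract mentions, the sketch below follows the widely circulated OpenCV DNN colorization sample built on Zhang et al.'s pretrained Caffe model. The three model file paths are placeholders for files assumed to be downloaded beforehand, and this is a generic sketch rather than the authors' own code.

```python
import cv2
import numpy as np

# Placeholder paths; the files come from the public colorization model zoo.
net = cv2.dnn.readNetFromCaffe("colorization_deploy_v2.prototxt",
                               "colorization_release_v2.caffemodel")
pts = np.load("pts_in_hull.npy")  # 313 quantized ab cluster centers

# Load the cluster centers into the network's rebalancing layers.
net.getLayer(net.getLayerId("class8_ab")).blobs = \
    [pts.transpose().reshape(2, 313, 1, 1).astype(np.float32)]
net.getLayer(net.getLayerId("conv8_313_rh")).blobs = \
    [np.full([1, 313], 2.606, dtype=np.float32)]

bgr = cv2.imread("input.jpg")
lab = cv2.cvtColor(bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
L = cv2.resize(lab[:, :, 0], (224, 224)) - 50      # network expects centered L

net.setInput(cv2.dnn.blobFromImage(L))
ab = net.forward()[0].transpose((1, 2, 0))         # predicted ab, low-res
ab = cv2.resize(ab, (bgr.shape[1], bgr.shape[0]))  # upsample to input size

out = cv2.cvtColor(np.dstack([lab[:, :, 0], ab]), cv2.COLOR_LAB2BGR)
cv2.imwrite("colorized.jpg", (np.clip(out, 0, 1) * 255).astype(np.uint8))
```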
4

Netha, Guda Pranay, M. S. S. Manohar, M. Sai Amartya Maruth, and Ganjikunta Ganesh Kumar. "Colourization of Black and White Images using Deep Learning." International Journal of Computer Science and Mobile Computing 11, no. 1 (January 30, 2022): 116–21. http://dx.doi.org/10.47760/ijcsmc.2022.v11i01.014.

Abstract:
Colorization is the process of transforming grayscale photos into colour images that are aesthetically appealing. The basic objective is to persuade the spectator that the outcome is genuine. The majority of grayscale photographs that need to be colourized are of natural scenes. Over the last 20 years, a broad range of colorization methods have been created, ranging from algorithmically simple but time- and energy-consuming procedures, due to inescapable human participation, to more difficult but also more automated ones. Automatic conversion has evolved into a challenging field that mixes machine learning and deep learning with art. The purpose of this study is to provide an overview and assessment of grayscale picture colorization methods and techniques used on natural photos. The study categorises existing colorization approaches, discusses the ideas underlying them, and highlights their benefits and drawbacks. Deep learning methods are given special consideration. The picture quality and processing time of relevant approaches are compared. Different measures are used to judge the quality of colour images. Because of the complexity of the human visual system, measuring the perceived quality of a colour image is difficult. The multiple metrics used to assess colorization systems produce results by calculating the difference between the expected colour value and the ground truth, which is not always consistent with image plausibility. According to the findings, user-guided neural networks are the most promising category for colorization, since they successfully blend human participation with machine learning and neural network automation.
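
The metric caveat raised at the end of this abstract is easy to demonstrate: the commonly reported scores compare predicted and ground-truth pixels directly, as in the sketch below, so a plausible but differently colored result can still score poorly. The `channel_axis` argument assumes scikit-image 0.19 or newer.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score(ground_truth_rgb, colorized_rgb):
    # Both inputs: uint8 H x W x 3 arrays of the same size.
    psnr = peak_signal_noise_ratio(ground_truth_rgb, colorized_rgb,
                                   data_range=255)
    ssim = structural_similarity(ground_truth_rgb, colorized_rgb,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```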
5

Farella, Elisa Mariarosaria, Salim Malek, and Fabio Remondino. "Colorizing the Past: Deep Learning for the Automatic Colorization of Historical Aerial Images." Journal of Imaging 8, no. 10 (October 1, 2022): 269. http://dx.doi.org/10.3390/jimaging8100269.

Abstract:
The colorization of grayscale images can, nowadays, take advantage of recent progress and the automation of deep-learning techniques. From the media industry to medical or geospatial applications, image colorization is an attractive and investigated image processing practice, and it is also helpful for revitalizing historical photographs. After exploring some of the existing fully automatic learning methods, the article presents a new neural network architecture, Hyper-U-NET, which combines a U-NET-like architecture and HyperConnections to handle the colorization of historical black and white aerial images. The training dataset (about 10,000 colored aerial image patches) and the realized neural network are available on our GitHub page to boost further research investigations in this field.
6

Xu, Min, and YouDong Ding. "Fully automatic image colorization based on semantic segmentation technology." PLOS ONE 16, no. 11 (November 30, 2021): e0259953. http://dx.doi.org/10.1371/journal.pone.0259953.

Abstract:
To address problems of deep-learning-based image colorization algorithms, such as color bleeding and insufficient color, this paper converts the study of image colorization into the optimization of image semantic segmentation and proposes a fully automatic image colorization model based on semantic segmentation technology. Firstly, we use the encoder as the local feature extraction network and VGG-16 as the global feature extraction network. These two parts do not interfere with each other, but they share the low-level features. Then, the first fusion module is constructed to merge local features and global features, and the fusion results are fed into the semantic segmentation network and the color prediction network, respectively. Finally, the color prediction network obtains the semantic segmentation information of the image through the second fusion module and predicts the chrominance of the image based on it. Several sets of experiments show that the performance of our model improves steadily as more training data are provided. Even in some complex scenes, our model can predict reasonable colors and colorize correctly, and the output looks real and natural.
7

Liu, Shiguang, and Xiang Zhang. "Automatic grayscale image colorization using histogram regression." Pattern Recognition Letters 33, no. 13 (October 2012): 1673–81. http://dx.doi.org/10.1016/j.patrec.2012.06.001.

8

Huang, Zhitong, Nanxuan Zhao, and Jing Liao. "UniColor." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–16. http://dx.doi.org/10.1145/3550454.3555471.

Abstract:
We propose the first unified framework UniColor to support colorization in multiple modalities, including both unconditional and conditional ones, such as stroke, exemplar, text, and even a mix of them. Rather than learning a separate model for each type of condition, we introduce a two-stage colorization framework for incorporating various conditions into a single model. In the first stage, multi-modal conditions are converted into a common representation of hint points. Particularly, we propose a novel CLIP-based method to convert the text to hint points. In the second stage, we propose a Transformer-based network composed of Chroma-VQGAN and Hybrid-Transformer to generate diverse and high-quality colorization results conditioned on hint points. Both qualitative and quantitative comparisons demonstrate that our method outperforms state-of-the-art methods in every control modality and further enables multi-modal colorization that was not feasible before. Moreover, we design an interactive interface showing the effectiveness of our unified framework in practical usage, including automatic colorization, hybrid-control colorization, local recolorization, and iterative color editing. Our code and models are available at https://luckyhzt.github.io/unicolor .
9

Furusawa, Chie. "2-1 Colorization Techniques for Manga and Line Drawings; Comicolorization: Semi-Automatic Manga Colorization." Journal of The Institute of Image Information and Television Engineers 72, no. 5 (2018): 347–52. http://dx.doi.org/10.3169/itej.72.347.

10

Sugumar, S. J. "Colorization of Digital Images: An Automatic and Efficient Approach through Deep learning." Journal of Innovative Image Processing 4, no. 3 (September 16, 2022): 183–94. http://dx.doi.org/10.36548/jiip.2022.3.006.

Abstract:
Colorization is not a guaranteed mapping between intensity and chrominance values, but it is a feasible one. This paper presents a colorization system that draws inspiration from recent developments in deep learning and makes use of both locally and globally relevant data. One such property is the rarity of each color category on the quantized plane. The denoising model uses a hybrid approach with cluster normalization built on a U-Net deep learning framework, following the basic U-Net design for segmentation. To eliminate Gaussian noise in digital images, this article develops and tests a generic deep learning denoising model. PSNR and MSE are used as performance measures for comparison purposes.
11

Serebryanaya, L. V., and V. V. Potaraev. "Automatic Image Colorization Based on Convolutional Neural Networks." Digital Transformation, no. 2 (July 11, 2020): 58–64. http://dx.doi.org/10.38086/2522-9613-2020-2-58-64.

12

Zhang, Zuyu, Yan Li, and Byeong-Seok Shin. "Robust Medical Image Colorization with Spatial Mask-Guided Generative Adversarial Network." Bioengineering 9, no. 12 (November 22, 2022): 721. http://dx.doi.org/10.3390/bioengineering9120721.

Abstract:
Color medical images provide better visualization and diagnostic information for doctors during clinical procedures than grayscale medical images. Although generative adversarial network-based image colorization approaches have shown promising results, in these methods, adversarial training is applied to the whole image without considering the appearance conflicts between the foreground objects and the background contents, resulting in generating various artifacts. To remedy this issue, we propose a fully automatic spatial mask-guided colorization with generative adversarial network (SMCGAN) framework for medical image colorization. It generates colorized images with fewer artifacts by introducing spatial masks, which encourage the network to focus on the colorization of the foreground regions instead of the whole image. Specifically, we propose a novel spatial mask-guided method by introducing an auxiliary foreground segmentation branch combined with the main colorization branch to obtain the spatial masks. The spatial masks are then used to generate masked colorized images where most background contents are filtered out. Moreover, two discriminators are utilized for the generated colorized images and masked generated colorized images, respectively, to assist the model in focusing on the colorization of foreground regions. We validate our proposed framework on two publicly available datasets, including the Visible Human Project (VHP) dataset and the prostate dataset from NCI-ISBI 2013 challenge. The experimental results demonstrate that SMCGAN outperforms the state-of-the-art GAN-based image colorization approaches with an average improvement of 8.48% in the PSNR metric. The proposed SMCGAN can also generate colorized medical images with fewer artifacts.
13

Lee, Yeongseop, and Seongjin Lee. "Automatic Colorization of Anime Style Illustrations Using a Two-Stage Generator." Applied Sciences 10, no. 23 (December 4, 2020): 8699. http://dx.doi.org/10.3390/app10238699.

Abstract:
Line-arts are used in many ways in the media industry. However, line-art colorization is tedious, labor-intensive, and time-consuming. For such reasons, Generative Adversarial Network (GAN)-based image-to-image colorization methods have received much attention because of their promising results. In this paper, we propose a color point hinting method with two GAN-based generators used for enhancing image quality. To improve the coloring performance on drawings with various line styles, the generator takes into account the line-art loss. We propose a Line Detection Model (LDM), a method of extracting lines from a color image, which is used in measuring line loss. We also apply histogram equalization to the input line-art to generalize the distribution of line styles; this approach generalizes the distribution of line styles without increasing the complexity of the inference stage. In addition, we propose seven segment hint pointing constraints to evaluate the colorization performance of the model with the Fréchet Inception Distance (FID) score. We present visual and qualitative evaluations of the proposed methods. The results show that using histogram equalization together with the LDM-enabled line loss exhibits the best result. The base model with XDoG (eXtended Difference-of-Gaussians)-generated line-art exhibits FID scores of 35.83 and 44.70 for colorized images with and without color hints, respectively, whereas the proposed model in the same scenario exhibits 32.16 and 39.77.
14

Wang, Zhiyuan, Yi Yu, Daqun Li, Yuanyuan Wan, and Mingyang Li. "Colorful Image Colorization with Classification and Asymmetric Feature Fusion." Sensors 22, no. 20 (October 20, 2022): 8010. http://dx.doi.org/10.3390/s22208010.

Abstract:
An automatic colorization algorithm can convert a grayscale image to a colorful image using regression loss functions or classification loss functions. However, the regression loss function leads to brownish results, while the classification loss function leads to color overflow, and the computation of the color categories and balance weights of the ground truth required for the weighted classification loss is too large. In this paper, we propose a new method to compute the color categories and balance weights of color images. Furthermore, we propose a U-Net-based colorization network. First, we propose a category conversion module and a category balance module to obtain the color categories and balance weights, which dramatically reduces the training time. Second, we construct a classification subnetwork to constrain the colorization network with a category loss, which improves colorization accuracy and saturation. Finally, we introduce an asymmetric feature fusion (AFF) module to fuse the multiscale features, which effectively prevents color overflow and improves the colorization effect. The experiments show that our colorization network achieves peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) metrics of 25.8803 and 0.9368, respectively, on the ImageNet dataset. Compared with existing algorithms, our algorithm produces colorful images with vivid colors, no significant color overflow, and higher saturation.
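
The two preprocessing steps named here, converting continuous ab values into discrete color categories and re-weighting rare categories, can be sketched in a few lines. The grid size and the uniform-prior blend below are illustrative assumptions, not the paper's category conversion and balance modules.

```python
import numpy as np

def ab_to_category(ab, grid=10):
    # ab roughly spans [-110, 110]; bin each channel on a coarse grid.
    n = 220 // grid
    bins = np.clip(((ab + 110) // grid).astype(int), 0, n - 1)
    return bins[..., 0] * n + bins[..., 1]          # one category id per pixel

def balance_weights(categories, n_classes, lam=0.5):
    # Inverse-frequency weights blended with a uniform prior, normalized so
    # the expected weight over the empirical distribution equals 1.
    counts = np.bincount(categories.ravel(), minlength=n_classes)
    p = counts / max(counts.sum(), 1)
    w = 1.0 / (lam * p + (1.0 - lam) / n_classes)
    return w / (w * p).sum()
```

For grid=10 this yields 22 x 22 = 484 candidate categories; classification-based colorizers such as Zhang et al. (2016) instead keep only the 313 bins that fall inside the sRGB gamut.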
15

Attea, Bara'a Ali, and Sana'a Khudayer Jaddwa Al-Janaby. "A FULLY AUTOMATIC GENETIC APPROACH FOR GRAYSCALE IMAGE COLORIZATION." Journal of Engineering 12, no. 02 (June 1, 2006): 237–45. http://dx.doi.org/10.31026/j.eng.2006.02.05.

Abstract:
Colorization is a computer-assisted process of adding color to a monochrome (grayscale) image or movie. Early published methods for image colorization rely on heuristic techniques for choosing RGB colors from a global palette and applying them to regions of the target grayscale image. The main improvement of the proposed technique is the fully automatic adoption of a genetic algorithm as an efficient search method to find the best match for each pixel in the target image. The proposed genetic algorithm evolves a population of randomly selected individuals, each representing a possible color setting for the target image based on a reference colored source image, toward a solution that resembles natural or real colors for the objects of the target scene. Moreover, this study proposes a new crossover operator, called Spread-out Uniform Crossover (SUX), which turns the recombination scheme of uniform crossover toward spreading vital genes, at the expense of lethal genes, rather than simply exchanging genes between mating parents in the generated offspring. The results of the proposed colorization technique are good and plausible.
16

Zhang, Jiangning, Chao Xu, Jian Li, Yue Han, Yabiao Wang, Ying Tai, and Yong Liu. "SCSNet: An Efficient Paradigm for Learning Simultaneously Image Colorization and Super-resolution." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3271–79. http://dx.doi.org/10.1609/aaai.v36i3.20236.

Abstract:
In the practical application of restoring low-resolution grayscale images, we generally need to run three separate processes: image colorization, super-resolution, and a down-sampling operation for the target device. However, this pipeline is redundant and inefficient because of the independent processes, and some inner features could have been shared. Therefore, we present an efficient paradigm to perform Simultaneous Image Colorization and Super-resolution (SCS) and propose an end-to-end SCSNet to achieve this goal. The proposed method consists of two parts: a colorization branch for learning color information, which employs the proposed plug-and-play Pyramid Valve Cross Attention (PVCAttn) module to aggregate feature maps between source and reference images; and a super-resolution branch for integrating color and texture information to predict target images, which uses the designed Continuous Pixel Mapping (CPM) module to predict high-resolution images at continuous magnification. Furthermore, our SCSNet supports both automatic and referential modes, which is more flexible for practical application. Abundant experiments demonstrate the superiority of our method for generating authentic images over state-of-the-art methods, e.g., decreasing FID on average by 1.8 and 5.1 compared with the current best scores for automatic and referential modes, respectively, while owning fewer parameters (more than 2x fewer) and faster running speed (more than 3x faster).
17

Lee, Yeongseop, and Seongjin Lee. "Service Platform for Serving Line-art Automatic Colorization Model." TRANSACTION OF THE KOREAN INSTITUTE OF ELECTRICAL ENGINEERS P 71, no. 1 (March 31, 2022): 41–47. http://dx.doi.org/10.5370/kieep.2022.71.1.41.

18

Abbadi, Nidhal K. El, and Eman Saleem Razaq. "Automatic gray images colorization based on lab color space." Indonesian Journal of Electrical Engineering and Computer Science 18, no. 3 (June 1, 2020): 1501. http://dx.doi.org/10.11591/ijeecs.v18.i3.pp1501-1509.

Abstract:
Colorization aims to transform a black and white image into a color image. This is a very hard problem that usually requires manual intervention by the user to produce high-quality images free of artifacts. The general problem of inserting gradient colors into a gray image has no exact method. The proposed method is fully automatic: we suggest using a reference color image to help transfer colors from the reference image to the gray image. The reference image is converted to the Lab color space, while the grayscale image is normalized according to the lightness channel L. The gray image is then concatenated with both the a and b channels before converting back to an RGB image. The results were promising compared with other methods.
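
A minimal sketch of that Lab-space recipe, assuming the reference image has already been resized to the grayscale image's dimensions:

```python
import numpy as np
from skimage import color

def colorize_from_reference(gray, reference_rgb):
    # gray in [0, 1], reference_rgb in [0, 1], same height and width.
    ref_lab = color.rgb2lab(reference_rgb)
    L = gray.astype(np.float64) * 100.0          # gray -> lightness channel
    lab = np.dstack([L, ref_lab[..., 1], ref_lab[..., 2]])
    return np.clip(color.lab2rgb(lab), 0, 1)
```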
19

Haji-Esmaeili, Mohammad Mahdi, and Gholamali Montazer. "Automatic Colorization of Grayscale Images Using Generative Adversarial Networks." Signal and Data Processing 16, no. 1 (May 1, 2019): 57–74. http://dx.doi.org/10.29252/jsdp.16.1.57.

20

Chen, Changjian, Yi Xu, and Xiaokang Yang. "User tailored colorization using automatic scribbles and hierarchical features." Digital Signal Processing 87 (April 2019): 155–65. http://dx.doi.org/10.1016/j.dsp.2019.01.021.

21

Zhang, Wei, Chao-Wei Fang, and Guan-Bin Li. "Automatic Colorization with Improved Spatial Coherence and Boundary Localization." Journal of Computer Science and Technology 32, no. 3 (May 2017): 494–506. http://dx.doi.org/10.1007/s11390-017-1739-6.

22

Schmitt, M., L. H. Hughes, M. Körner, and X. X. Zhu. "COLORIZING SENTINEL-1 SAR IMAGES USING A VARIATIONAL AUTOENCODER CONDITIONED ON SENTINEL-2 IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 1045–51. http://dx.doi.org/10.5194/isprs-archives-xlii-2-1045-2018.

Abstract:
In this paper, we have shown an approach for the automatic colorization of SAR backscatter images, which are usually provided in the form of single-channel gray-scale imagery. Using a deep generative model proposed for the purpose of photograph colorization and a Lab-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images, which disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adaptation of the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate whether the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.
23

Petre, Cosmin-Gheorghe, and Stefan Trausan-Matu. "Automatic black and white image colorization using generative adversarial networks." International Journal of User-System Interaction 13, no. 2 (2020): 110–20. http://dx.doi.org/10.37789/ijusi.2020.13.2.4.

24

Othman, Omar Abdulwahhab, Sait Ali Uymaz, and Betül Uzbaş. "Automatic Black & White Images colorization using Convolutional neural network." Academic Perspective Procedia 2, no. 3 (November 22, 2019): 1189–95. http://dx.doi.org/10.33793/acperpro.02.03.131.

Abstract:
In this paper, an automatic black and white image colorization method is proposed. The study is based on the best-known deep learning algorithm, the CNN (convolutional neural network). The developed model takes a grayscale input and predicts the colors of the image based on the dataset it was trained on. The color space used in this work is the Lab color space: the model takes the L channel as input and the ab channels as output. Randomly selected images from the ImageNet dataset were used to construct a mini dataset of 39,604 images, split into 80% for training and 20% for testing. The proposed method has been tested and evaluated on sample images with mean squared error and peak signal-to-noise ratio, reaching averages of MSE = 51.36 and PSNR = 31.
25

Poterek, Quentin, Pierre-Alexis Herrault, Grzegorz Skupinski, and David Sheeren. "Deep Learning for Automatic Colorization of Legacy Grayscale Aerial Photographs." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020): 2899–915. http://dx.doi.org/10.1109/jstars.2020.2992082.

26

Salmona, Antoine, Lucía Bouza, and Julie Delon. "DeOldify: A Review and Implementation of an Automatic Colorization Method." Image Processing On Line 12 (September 5, 2022): 347–68. http://dx.doi.org/10.5201/ipol.2022.403.

27

Li, Bo, Yu-Kun Lai, and Paul L. Rosin. "Example-based image colorization via automatic feature selection and fusion." Neurocomputing 266 (November 2017): 687–98. http://dx.doi.org/10.1016/j.neucom.2017.05.083.

28

Bartenyev, Oleg V., and Emil R. Salakhutdinov. "Sketch Colorization Based on Generative-Adversarial Neural Networks." Vestnik MEI, no. 1 (2022): 120–29. http://dx.doi.org/10.24160/1993-6982-2022-1-120-129.

Abstract:
Generative adversarial neural networks (GAN) successfully perform automatic colorization of character sketches. Such models can be further improved in such aspects as increasing the throughput, improving the colorization quality, and reducing the model size. Steps aimed at improving the colorization quality are taken. Known solutions are reviewed, and an initial GAN model with eight blocks in the generator encoder and decoder is developed and trained based on the review results. The second GAN model is obtained on the basis of the first one by including residual blocks in the generator encoder blocks with simultaneously using the attention blocks and residual blocks in the generator decoder. The third GAN model has been developed based on the second one: one block is added to the generator encoder and decoder. All models, including the initial and modified ones, have been trained on the same data set. The models have been trained either with or without using additional information about the image color (the color palette of the reference image or color hint labels). The trained models are evaluated with respect to the quality of the images they generate (colored sketches), determined by the Frechet Inception Distance metric. All modified GAN models generate images with quality superior to that of the initial model.
29

Li, Bo, Yu-Kun Lai, Matthew John, and Paul L. Rosin. "Automatic Example-Based Image Colorization Using Location-Aware Cross-Scale Matching." IEEE Transactions on Image Processing 28, no. 9 (September 2019): 4606–19. http://dx.doi.org/10.1109/tip.2019.2912291.

30

Oishi, Shuji, and Ryo Kurazume. "Manual/automatic colorization for three-dimensional geometric models utilizing laser reflectivity." Advanced Robotics 28, no. 24 (December 17, 2014): 1637–51. http://dx.doi.org/10.1080/01691864.2014.968616.

31

Chybicki, Mariusz, Wiktor Kozakiewicz, Dawid Sielski, and Anna Fabijańska. "Deep cartoon colorizer: An automatic approach for colorization of vintage cartoons." Engineering Applications of Artificial Intelligence 81 (May 2019): 37–46. http://dx.doi.org/10.1016/j.engappai.2019.02.006.

32

Ambadkar, Tanmay, and Jignesh S. Bhatt. "A Simple Fast Resource-efficient Deep Learning for Automatic Image Colorization." Color and Imaging Conference 31, no. 1 (November 13, 2023): 126–31. http://dx.doi.org/10.2352/cic.2023.31.1.24.

33

Luo, Hui, and Qiang Zeng. "Study on the Application of Visual Communication Design in APP Interface Design in the Context of Deep Learning." Computational Intelligence and Neuroscience 2022 (June 20, 2022): 1–7. http://dx.doi.org/10.1155/2022/9262676.

Abstract:
Visual communication concepts bring linguistics and semiotics into the teaching of visual communication design, turning graphic design into an innovative and scientific discipline. The use of storyline techniques in visual communication not only inspires the imagination of the designer but also arouses the visual memory of the audience. Besides, restoring cultural heritage such as historical images is important to protect cultural diversity. Recently, developments in deep learning (DL) and computer vision (CV) approaches have made the automatic colorization of grayscale images into color images possible. The usage of visual communication design in APP interface design has also increased. With this motivation, this work introduces the enhanced deep learning-based automated historical image colorization (EDL-AHIC) technique for wireless network-enabled visual communication. The proposed EDL-AHIC technique intends to effectually convert grayscale images into color images. The presented EDL-AHIC technique extracts local as well as global features. For global feature extraction, the enhanced capsule network (ECN) model is applied. Finally, the fusion layer and decoding unit are employed to determine the output, i.e., the chrominance component of the input image. A comprehensive experimental validation process is performed to ensure the betterment of the EDL-AHIC technique. The comparison study reported the supremacy of the EDL-AHIC technique over other recent methods.
34

Alrubaie, Shymaa Akram, and Israa Mohammed Hassoon. "Support Vector Machine (SVM) for Colorization the Grayscale Image." Al-Qadisiyah Journal for Engineering Sciences 13, no. 3 (September 30, 2020): 207–14. http://dx.doi.org/10.30772/qjes.v13i3.658.

Abstract:
Recently, several automatic approaches have been proposed to color grayscale images, which depend on the internal features of the grayscale images. Several measures are considered prominent keys to extracting the chromatic value corresponding to a gray level. In this respect, colorizing methods that rely on automatic algorithms are still under investigation, especially after the development of neural networks used to recognize the features of images. This paper develops a new model to obtain a color image from an original grayscale image using a Support Vector Machine to recognize the features of grayscale images, which are extracted in two stages: in the first stage, a Haar discrete wavelet transform is used to build the feature vector, which is combined with six statistical measurements (mean, variance, skewness, kurtosis, energy, and standard deviation) extracted from the grayscale image in the second stage. After the Support Vector Machine recognition is done, the colorization process uses its result to restore color to the grayscale image using the YCbCr color system, and then converts the result to the default color system (RGB) to be clearer. The proposed model is able to move away from relying on the user to identify a source image that matches in color levels, and it overcomes the network's restriction to image types with similar color levels. In addition, the Support Vector Machine is considered more reliable than neural networks among classification algorithms. The model performance is evaluated using the Root Mean Squared Error (RMSE) to measure how well the colored (resulting) images match the original color images, and a result close to reality has been obtained at a good rate for all the tested images. This model has proved successful in recognizing the chromatic values of grayscale images and then retrieving them; it has low time complexity in training and is not complex to operate.
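
The two-stage feature vector described above is straightforward to prototype. In the sketch below, processing per image block, the use of only the Haar approximation sub-band, and the exact vector layout are illustrative assumptions; the resulting vectors would then be fed to an SVM classifier (e.g., sklearn.svm.SVC) to predict a chrominance class per block.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

def block_features(block):
    # Stage 1: Haar DWT approximation coefficients of the block.
    cA, (cH, cV, cD) = pywt.dwt2(block.astype(np.float64), "haar")
    # Stage 2: the six statistical measurements named in the abstract.
    flat = block.ravel().astype(np.float64)
    stats = np.array([flat.mean(), flat.var(), skew(flat), kurtosis(flat),
                      np.sum(flat ** 2), flat.std()])
    return np.concatenate([cA.ravel(), stats])
```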
35

Li, Hui, Wei Zeng, Guorong Xiao, and Huabin Wang. "The Instance-Aware Automatic Image Colorization Based on Deep Convolutional Neural Network." Intelligent Automation & Soft Computing 26, no. 4 (2020): 841–46. http://dx.doi.org/10.32604/iasc.2020.010118.

36

Chen, Shu-Chuan, and Aaron Ogata. "MixtureTree Annotator: A Program for Automatic Colorization and Visual Annotation of MixtureTree." PLOS ONE 10, no. 3 (March 31, 2015): e0118893. http://dx.doi.org/10.1371/journal.pone.0118893.

37

Koleini, Mina, S. Amirhassan Monadjemi, and Payman Moallem. "Automatic black and white film colorization using texture features and artificial neural networks." Journal of the Chinese Institute of Engineers 33, no. 7 (November 2010): 1049–57. http://dx.doi.org/10.1080/02533839.2010.9671693.

38

Tian, Nannan, Yuan Liu, Bo Wu, and Xiaofeng Li. "Colorization of Logo Sketch Based on Conditional Generative Adversarial Networks." Electronics 10, no. 4 (February 20, 2021): 497. http://dx.doi.org/10.3390/electronics10040497.

Abstract:
Logo design is a complex process for designers, and color plays a very important role in it. The automatic colorization of logo sketches is of great value and full of challenges. In this paper, we propose a new logo design method based on Conditional Generative Adversarial Networks, which can output multiple colorful logos from a single logo sketch. We improve the traditional U-Net structure, adding channel attention and spatial attention in the process of skip-connection. In addition, the generator consists of parallel attention-based U-Net blocks, which can output multiple logo images. During the model optimization process, a style loss function is proposed to improve the color diversity of the logos. We evaluate our method on the self-built edges2logos dataset and the public edges2shoes dataset. Experimental results show that our method can generate more colorful and realistic logo images based on simple sketches. Compared to classic networks, the logos generated by our network are also superior in visual effects.
39

Seo, Chang Wook, and Yongduek Seo. "Seg2pix: Few Shot Training Line Art Colorization with Segmented Image Data." Applied Sciences 11, no. 4 (February 5, 2021): 1464. http://dx.doi.org/10.3390/app11041464.

Abstract:
There are various challenging issues in automating line art colorization. In this paper, we propose a GAN approach incorporating semantic segmentation image data. Our GAN-based method, named Seg2pix, can automatically generate high quality colorized images, aiming at computerizing one of the most tedious and repetitive jobs performed by coloring workers in the webtoon industry. The network structure of Seg2pix is mostly a modification of the architecture of Pix2pix, which is a convolution-based generative adversarial network for image-to-image translation. Through this method, we can generate high quality colorized images of a particular character with only a few training data. Seg2pix is designed to reproduce a segmented image, which becomes the suggestion data for line art colorization. The segmented image is automatically generated through a generative network with a line art image and a segmentation ground truth. In the next step, this generative network creates a colorized image from the line art and segmented image, which is generated from the former step of the generative network. To summarize, only one line art image is required for testing the generative model, and an original colorized image and segmented image are additionally required as the ground truth for training the model. These generations of the segmented image and colorized image proceed by an end-to-end method sharing the same loss functions. By using this method, we produce better qualitative results for automatic colorization of a particular character’s line art. This improvement can also be measured by quantitative results with Learned Perceptual Image Patch Similarity (LPIPS) comparison. We believe this may help artists exercise their creative expertise mainly in the area where computerization is not yet capable.
40

Tan, Cong, and Shaoyu Yang. "Automatic Extraction of Color Features from Landscape Images Based on Image Processing." Traitement du Signal 38, no. 3 (June 30, 2021): 747–55. http://dx.doi.org/10.18280/ts.380322.

Abstract:
The dominant color features determine the presentation effect and visual experience of landscapes. The existing studies rarely quantify the application effect of landscape colors through image colorization. Besides, it is unreasonable to analyze landscape images with multiple standard colors with a single color space. To solve the problem, this paper proposes an automatic extraction method for color features from landscape images based on image processing. Firstly, a landscape lighting model was constructed based on color constancy theories, and the quality of landscape images was improved with color constant image enhancement technology. In this way, the low-level color features were extracted from the landscape image library. Next, support vector machine (SVM) and fuzzy c-means (FCM) were innovatively integrated to extract high-level color features from landscape images. The proposed method was proved effective through experiments.
41

Viana, Monique Simplicio, Orides Morandin Junior, and Rodrigo Colnago Contreras. "An Improved Local Search Genetic Algorithm with a New Mapped Adaptive Operator Applied to Pseudo-Coloring Problem." Symmetry 12, no. 10 (October 14, 2020): 1684. http://dx.doi.org/10.3390/sym12101684.

Abstract:
In many situations, an expert must visually analyze an image arranged in grey levels. However, the human eye has strong difficulty in detecting details in this type of image, making it necessary to use artificial coloring techniques. The pseudo-coloring problem (PsCP) consists of assigning to a grey-level image, pre-segmented in K sub-regions, a set of K colors that are as dissimilar as possible. This problem is part of the well-known class of NP-Hard problems and, therefore, does not present an exact solution for all instances. Thus, meta-heuristics has been widely used to overcome this problem. In particular, genetic algorithm (GA) is one of those techniques that stands out in the literature and has already been used in PsCP. In this work, we present a new method that consists of an improvement of the GA specialized in solving the PsCP. In addition, we propose the addition of local search operators and rules for adapting parameters based on symmetric mapping functions to avoid common problems in this type of technique such as premature convergence and inadequate exploration in the search space. Our method is evaluated in three different case studies: the first consisting of the pseudo-colorization of real-world images on the RGB color space; the second consisting of the pseudo-colorization in RGB color space considering synthetic and abstract images in which its sub-regions are fully-connected; and the third consisting of the pseudo-colorization in the Munsell atlas color set. In all scenarios, our method is compared with other state-of-the-art techniques and presents superior results. Specifically, the use of mapped automatic adjustment operators proved to be powerful in boosting the proposed meta-heuristic to obtain more robust results in all evaluated instances of PsCP in all the considered case studies.
42

Ibraheem, Baraa Qasim, Kassem Danach, and Ahmad Ghandour. "Colorization of Black and White Images Using a Hybrid Deep Learning Framework." International Research Journal of Innovations in Engineering and Technology 08, no. 05 (2024): 06–11. http://dx.doi.org/10.47001/irjiet/2024.805002.

Abstract:
With the development of deep learning algorithms and their great success in the field of computer vision, the field of automatic image colorization has witnessed significant improvements in accuracy and realism. This study introduces a novel deep learning-based method for colorizing black and white photographs, utilizing the powerful feature extraction of the InceptionResNetV2 model and the generative capabilities of autoencoders. A custom data generator was developed for efficient preprocessing, data augmentation, and batch processing, enhancing memory usage and scalability. The system encodes grayscale images and extracts high-level features, which are then fused and decoded into two color channels, combined with the original luminance to recreate the image in the LAB color space. The method demonstrates strong performance with a PSNR of 22.8154 and a SSIM of 0.9097, showcasing its potential for applications like historical image restoration and media enhancement.
43

Gao, Baoquan, Yiding Ping, Yao Lu, and Chen Zhang. "Nighttime Cloud Cover Estimation Method at the Saishiteng 3850 m Site." Universe 8, no. 10 (October 18, 2022): 538. http://dx.doi.org/10.3390/universe8100538.

Abstract:
Cloud cover is critical for astronomical sites because it can be used to assess the observability of the local sky and, further, the fractional photometric time. For cloud monitoring in site-testing campaigns with all-sky cameras, previous studies have mainly focused on moonless images, while automatic processing methods for moonlight images have rarely been explored. This paper proposes an automatic estimation method for cloud cover, which takes all cases of nighttime gray-scale all-sky images into account. For moonless images, the efficient Otsu algorithm is directly used to detect clouds. Moonlight images are first transformed into a cloud feature image using a colorization procedure, and then the Otsu algorithm is used to distinguish cloud pixels from sky pixels on the cloud feature image. The reliability of this method was evaluated on manually labeled images. The results show that the cloud cover error of this method is less than 9% in all scenarios. The fractional photometric time derived from this method is basically consistent with the published result for the Lenghu site.
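
The moonless branch of the method reduces to a global Otsu threshold, roughly as sketched below. The sketch assumes an 8-bit image in which cloud pixels are brighter than clear sky and a precomputed mask of valid sky pixels; the paper's moonlight branch first converts the frame into a cloud feature image before thresholding.

```python
import cv2
import numpy as np

def cloud_cover(gray_allsky_u8, sky_mask):
    # Otsu picks the histogram split automatically; no manual threshold.
    _, cloud = cv2.threshold(gray_allsky_u8, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cloudy = np.count_nonzero(cloud[sky_mask > 0])
    return cloudy / np.count_nonzero(sky_mask)  # cloudy fraction of the sky
```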
44

Sufian, Maisarah Mohd, Ervin Gubin Moung, Mohd Hanafi Ahmad Hijazi, Farashazillah Yahya, Jamal Ahmad Dargham, Ali Farzamnia, Florence Sia, and Nur Faraha Mohd Naim. "COVID-19 Classification through Deep Learning Models with Three-Channel Grayscale CT Images." Big Data and Cognitive Computing 7, no. 1 (February 16, 2023): 36. http://dx.doi.org/10.3390/bdcc7010036.

Abstract:
COVID-19, an infectious coronavirus disease, has triggered a pandemic that has claimed many lives. Clinical institutes have long considered computed tomography (CT) an excellent and complementary screening method to reverse transcriptase-polymerase chain reaction (RT-PCR). Because of the limited dataset available on COVID-19, transfer learning-based models have become the go-to solutions for automatic COVID-19 detection. However, CT images are typically provided in grayscale, thus posing a challenge for automatic detection using pre-trained models, which were previously trained on RGB images. Several methods have been proposed in the literature for converting grayscale images to RGB (three-channel) images for use with pre-trained deep-learning models, such as pseudo-colorization, replication, and colorization. The most common method is replication, where the one-channel grayscale image is repeated in the three-channel image. While this technique is simple, it does not provide new information and can lead to poor performance due to redundant image features fed into the DL model. This study proposes a novel image pre-processing method for grayscale medical images that utilizes Histogram Equalization (HE) and Contrast Limited Adaptive Histogram Equalization (CLAHE) to create a three-channel image representation that provides different information on each channel. The effectiveness of this method is evaluated using six pre-trained models, including InceptionV3, MobileNet, ResNet50, VGG16, ViT-B16, and ViT-B32. The results show that the proposed image representation significantly improves the classification performance of the models, with the InceptionV3 model achieving an accuracy of 99.60% and a recall (also referred to as sensitivity) of 99.59%. The proposed method addresses the limitation of using grayscale medical images for COVID-19 detection and can potentially improve the early detection and control of the disease. Additionally, the proposed method can be applied to other medical imaging tasks with a grayscale image input, thus making it a generalizable solution.
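
One plausible reading of the proposed representation is sketched below: the raw grayscale slice, its histogram-equalized version, and a CLAHE-enhanced version stacked as a pseudo-RGB input for an ImageNet-pretrained backbone. The channel composition and the CLAHE parameters are assumptions (common OpenCV defaults), since the abstract does not spell them out.

```python
import cv2
import numpy as np

def three_channel(gray_u8):
    # Channel 1: raw slice; channel 2: global histogram equalization;
    # channel 3: CLAHE. Each channel carries different contrast information.
    he = cv2.equalizeHist(gray_u8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return np.stack([gray_u8, he, clahe.apply(gray_u8)], axis=-1)  # H x W x 3
```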
45

Lee, Yeongseop, and Seongjin Lee. "Automatic Colorization of High-resolution Animation Style Line-art based on Frequency Separation and Two-Stage Generator." TRANSACTION OF THE KOREAN INSTITUTE OF ELECTRICAL ENGINEERS P 69, no. 4 (December 31, 2020): 275–83. http://dx.doi.org/10.5370/kieep.2020.69.4.275.

46

Xu, Junhao, Chunjing Yao, Hongchao Ma, Chen Qian, and Jie Wang. "Automatic Point Cloud Colorization of Ground-Based LiDAR Data Using Video Imagery without Position and Orientation System." Remote Sensing 15, no. 10 (May 19, 2023): 2658. http://dx.doi.org/10.3390/rs15102658.

Abstract:
With the continuous development of three-dimensional city modeling, traditional close-range photogrammetry is limited by complex processing procedures and incomplete 3D depth information, making it unable to meet high-precision modeling requirements. In contrast, the integration of light detection and ranging (LiDAR) and cameras in mobile measurement systems provides a new and highly effective solution. Currently, integrated mobile measurement systems commonly require cameras, lasers, a position and orientation system, and inertial measurement units; thus, the hardware cost is relatively expensive, and the system integration is complex. Therefore, in this paper, we propose a ground mobile measurement system composed only of a LiDAR and a GoPro camera, providing a more convenient and reliable way to automatically obtain 3D point cloud data with spectral information. The automatic point cloud coloring based on video images mainly includes four aspects: (1) Establishing models for radial distortion and tangential distortion to correct video images. (2) Establishing a registration method based on normalized Zernike moments to obtain the exterior orientation elements. The error of the result is only 0.5–1 pixel, which is far better than registration based on a collinearity equation. (3) Establishing relative orientation based on essential matrix decomposition and nonlinear optimization. This involves uniformly using the speeded-up robust features algorithm with distance restriction and random sample consensus to select corresponding points. The vertical parallax of the stereo image pair model is less than one pixel, indicating that the accuracy is high. (4) A point cloud coloring method based on a Gaussian distribution with central region restriction is adopted. Only pixels within the central region are considered valid for coloring. Then, the point cloud is colored based on the mean of the Gaussian distribution of the color set. In the colored point cloud, the textures of the buildings are clear, and targets such as windows, grass, trees, and vehicles can be clearly distinguished. Overall, the result meets the accuracy requirements of applications such as tunnel detection, street-view modeling, and 3D urban modeling.
47

Srivastava, Kshitija, Saksham Gogia, and G. Rohith. "An approach to pseudocoloring of grey scale image using deep learning technique." Journal of Physics: Conference Series 2466, no. 1 (March 1, 2023): 012030. http://dx.doi.org/10.1088/1742-6596/2466/1/012030.

Abstract:
Image pseudo-colorization is the process of adding RGB colours to grayscale images to make them more appealing. Deep learning technology has made progress in the field of automatic colouring. In general, automatic colouring methods are divided into three groups based on where the colour information comes from, e.g., colouring based on prior knowledge and colouring based on reference pictures. These colouring methods can meet the needs of most users, but there are drawbacks; for example, users cannot assign different reference graphs to the different objects in a picture. To solve this problem by recognising the several objects and background regions in a picture and combining the final colouring results, the proposed method uses a deep learning approach in which regional mixed colours are used. Qualitative results (visual perception) validate the effectiveness of the pseudo-colorization, which splits the task into foreground colouring based on a reference picture and background colouring based on prior knowledge. Quantitative results such as Structural Similarity (SSIM), Peak Signal-to-Noise Ratio (PSNR), Image Matching Error, and Entropy validate the effectiveness in terms of strong edge strength, visually appealing quality, and retention of maximum information without disturbing image quality.
48

Lin, Horng-Horng, Harshad Kumar Dandage, Keh-Moh Lin, You-Teh Lin, and Yeou-Jiunn Chen. "Efficient Cell Segmentation from Electroluminescent Images of Single-Crystalline Silicon Photovoltaic Modules and Cell-Based Defect Identification Using Deep Learning with Pseudo-Colorization." Sensors 21, no. 13 (June 23, 2021): 4292. http://dx.doi.org/10.3390/s21134292.

Abstract:
Solar cells may develop defects during the manufacturing process in photovoltaic (PV) industries. To precisely evaluate the effectiveness of solar PV modules, manufacturing defects must be identified. Conventional defect inspection in industry mainly depends on manual inspection by highly skilled inspectors, which may still give inconsistent, subjective identification results. In order to automate the visual defect inspection process, an automatic cell segmentation technique and a convolutional neural network (CNN)-based defect detection system with pseudo-colorization of defects are designed in this paper. High-resolution electroluminescence (EL) images of single-crystalline silicon (sc-Si) solar PV modules are used in our study for the detection of defects and quality inspection. Firstly, an automatic cell segmentation methodology is developed to extract cells from an EL image. Secondly, defect detection is carried out by a CNN-based defect detector and can be visualized with pseudo-colors. We used contour tracing to accurately localize the panel region and a probabilistic Hough transform to identify gridlines and busbars on the extracted panel region for cell segmentation. A cell-based defect identification system was developed using state-of-the-art deep learning in CNNs. The detected defects are overlaid with pseudo-colors to enhance defect visualization using K-means clustering. Our automatic cell segmentation methodology can segment cells from an EL image in about 2.71 s. The average segmentation errors along the x-direction and y-direction are only 1.6 pixels and 1.4 pixels, respectively. The defect detection approach on segmented cells achieves 99.8% accuracy. Along with defect detection, the defect regions on a cell are furnished with pseudo-colors to enhance visualization.
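
The gridline/busbar detection step mentioned in this abstract is a textbook use of the probabilistic Hough transform; a sketch with placeholder thresholds (to be tuned to the EL image resolution) might look like this:

```python
import cv2
import numpy as np

def detect_gridlines(el_panel_gray):
    # Edges first, then line segments; returns (x1, y1, x2, y2) tuples.
    edges = cv2.Canny(el_panel_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                            minLineLength=200, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```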
49

Bieda, I. "Scene Change Localization in a Video." Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics, no. 1 (2021): 57–62. http://dx.doi.org/10.17721/1812-5409.2021/1.6.

Abstract:
Millions of videos are uploaded each day to YouTube and similar platforms. One of the many issues that these services face is the extraction of useful metadata, and many tasks arise in the processing of videos. For example, it is better to put an ad in the middle of a video, and as an advertiser, one would probably prefer to show the ad between scene cuts, where it would be less intrusive. Another example is when one would like to watch only the most interesting or important pieces of a video recording. In many cases, it is better to have an automatic scene cut detection approach instead of manually labeling thousands of videos. Scene change detection can help to analyze a video stream automatically: which characters appear in which scenes, how they interact and for how long, their relations and importance, and many other issues. A potential solution can rely on different cues: objects appearing, contrast or intensity changes, different colorization, background changes, and also sound changes. In this work, we propose a method for effective scene change detection, which is based on thresholding together with fade-in/fade-out scene analysis. It uses computer vision and image analysis approaches to identify the scene cuts. Experiments demonstrate the effectiveness of the proposed scene change detection approach.
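
A minimal version of the thresholding idea flags a hard cut whenever the mean absolute difference between consecutive frames jumps. The threshold below is an assumption to tune per source, and gradual fade-in/fade-out transitions need the additional luminance-trend analysis the abstract mentions.

```python
import cv2

def find_cuts(video_path, threshold=30.0):
    # Flag a hard cut between frames idx-1 and idx when the mean absolute
    # grayscale difference exceeds the (tunable) threshold.
    cap = cv2.VideoCapture(video_path)
    cuts, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None and cv2.absdiff(gray, prev).mean() > threshold:
            cuts.append(idx)
        prev, idx = gray, idx + 1
    cap.release()
    return cuts
```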
50

Song, Tianyu, Michael Sommersperger, The Anh Baran, Matthias Seibold, and Nassir Navab. "HAPPY: Hip Arthroscopy Portal Placement Using Augmented Reality." Journal of Imaging 8, no. 11 (November 6, 2022): 302. http://dx.doi.org/10.3390/jimaging8110302.

Abstract:
Correct positioning of the endoscope is crucial for successful hip arthroscopy. Only with adequate alignment can the anatomical target area be visualized and the procedure be successfully performed. Conventionally, surgeons rely on anatomical landmarks such as bone structure, and on intraoperative X-ray imaging, to correctly place the surgical trocar and insert the endoscope to gain access to the surgical site. One factor complicating the placement is deformable soft tissue, as it can obscure important anatomical landmarks. In addition, the commonly used endoscopes with an angled camera complicate hand–eye coordination and, thus, navigation to the target area. Adjusting for an incorrectly positioned endoscope prolongs surgery time, requires a further incision and increases the radiation exposure as well as the risk of infection. In this work, we propose an augmented reality system to support endoscope placement during arthroscopy. Our method comprises the augmentation of a tracked endoscope with a virtual augmented frustum to indicate the reachable working volume. This is further combined with an in situ visualization of the patient anatomy to improve perception of the target area. For this purpose, we highlight the anatomy that is visible in the endoscopic camera frustum and use an automatic colorization method to improve spatial perception. Our system was implemented and visualized on a head-mounted display. The results of our user study indicate the benefit of the proposed system compared to baseline positioning without additional support, such as an increased alignment speed, improved positioning error and reduced mental effort. The proposed approach might aid in the positioning of an angled endoscope, and may result in better access to the surgical area, reduced surgery time, less patient trauma, and less X-ray exposure during surgery.
