A ready-made bibliography on the topic "Automatic Colorization"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Select the source type:

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Automatic Colorization".

Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online whenever the relevant details are available in the source's metadata.

Journal articles on the topic "Automatic Colorization"

1

Aoki, Terumasa, and Van Nguyen. "Global Distribution Adjustment and Nonlinear Feature Transformation for Automatic Colorization". Advances in Multimedia 2018 (2018): 1–15. http://dx.doi.org/10.1155/2018/1504691.

Full text source
Abstract:
Automatic colorization is generally classified into two groups: propagation-based methods and reference-based methods. In reference-based automatic colorization methods, color images are used as references to reconstruct the original color of a gray target image. The most important task here is to find the best matching pairs for all pixels between the reference and target images in order to transfer color information from reference to target pixels. Many attractive local feature-based image matching methods have been developed over the last two decades. Unfortunately, as far as we know, there are no optimal matching methods for automatic colorization, because the requirements for pixel matching in automatic colorization are wholly different from those for traditional image matching. To design an efficient matching algorithm for automatic colorization, clustering pixels at low computational cost and generating a descriptive feature vector are the most important challenges to be solved. In this paper, we present a novel method to address these two problems. In particular, our work concentrates on solving the second problem (designing a descriptive feature vector); namely, we discuss how to learn a descriptive texture feature using a scaled sparse texture feature combined with a nonlinear transformation to construct an optimal feature descriptor. Our experimental results show that the proposed method outperforms state-of-the-art methods in terms of robustness of color reconstruction for automatic colorization applications.
APA, Harvard, Vancouver, ISO, and other styles
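As a rough illustration of the reference-based setting described in the abstract above, the sketch below transfers chrominance from a color reference to a grayscale target by nearest-neighbor matching of per-pixel features; the simple intensity-plus-texture feature used here is a placeholder assumption, not the learned descriptor proposed in the paper.

```python
# Hedged sketch: reference-based color transfer by nearest-neighbor feature matching.
# The per-pixel feature (intensity + local standard deviation) is a deliberately
# simple stand-in for the descriptive texture feature discussed in the paper.
import cv2
import numpy as np
from scipy.spatial import cKDTree

def simple_features(gray):
    """Per-pixel feature: intensity plus local standard deviation (5x5 window)."""
    gray = gray.astype(np.float32)
    mean = cv2.blur(gray, (5, 5))
    sq_mean = cv2.blur(gray * gray, (5, 5))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return np.stack([gray, std], axis=-1).reshape(-1, 2)

def transfer_color(target_gray, reference_bgr):
    """Copy the ab chrominance of the best-matching reference pixel to each target pixel."""
    ref_lab = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB)
    ref_feat = simple_features(ref_lab[:, :, 0])
    tgt_feat = simple_features(target_gray)
    _, idx = cKDTree(ref_feat).query(tgt_feat, k=1)       # nearest neighbor in feature space
    ab = ref_lab[:, :, 1:].reshape(-1, 2)[idx]
    out_lab = np.dstack([target_gray, ab.reshape(*target_gray.shape, 2)]).astype(np.uint8)
    return cv2.cvtColor(out_lab, cv2.COLOR_LAB2BGR)
```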
2

Alam Khan, Sharique, and Alok Katiyar. "Automatic colorization of natural images using deep learning". YMER Digital 21, no. 05 (20.05.2022): 946–51. http://dx.doi.org/10.37896/ymer21.05/a6.

Full text source
Abstract:
This paper presents an approach based on deep learning for the automatic colorization of images with optional user-guided hints. The system maps a grey-scale image, along with user "hints" (selected colors), to an output colorization with a Convolutional Neural Network (CNN). Previous approaches have relied heavily on user input, which results in non-real-time, desaturated outputs. The network incorporates user edits by fusing low-level information from the source with high-level information learned from large-scale data. Some networks are trained on a large data set to eliminate this dependency. Image colorization systems find applications in astronomical photography, CCTV footage, electron microscopy, etc. The various approaches, combining color data from large data sets with user inputs, provide a model for accurate and efficient colorization of grey-scale images. Keywords: image colorization; deep learning; convolutional neural network; image processing.
APA, Harvard, Vancouver, ISO, and other styles
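The hint-based setup summarized above is commonly realized by stacking the grayscale channel with sparse user-supplied ab values and a binary hint mask as the network input. The toy module below only illustrates that input layout; the layer sizes are arbitrary assumptions, and it is not the architecture of the cited paper.

```python
# Hedged sketch of a hint-conditioned colorization network input:
# 1 luminance channel + 2 sparse ab-hint channels + 1 binary hint mask -> 2 ab channels.
import torch
import torch.nn as nn

class TinyHintColorizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh(),   # ab predictions in [-1, 1]
        )

    def forward(self, luminance, ab_hints, hint_mask):
        # luminance: (B,1,H,W); ab_hints: (B,2,H,W), zero where no hint;
        # hint_mask: (B,1,H,W), one at user-clicked pixels.
        x = torch.cat([luminance, ab_hints, hint_mask], dim=1)
        return self.net(x)

# Usage on a random 64x64 "image" with a single hint pixel.
model = TinyHintColorizer()
L = torch.rand(1, 1, 64, 64)
hints = torch.zeros(1, 2, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
hints[0, :, 32, 32] = torch.tensor([0.3, -0.5])   # one user-selected color
mask[0, 0, 32, 32] = 1.0
ab_pred = model(L, hints, mask)                   # -> (1, 2, 64, 64)
```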
3

Prasanna, N. Lakshmi, Sk Sohal Rehman, V. Naga Phani, S. Koteswara Rao, and T. Ram Santosh. "AUTOMATIC COLORIZATION USING CONVOLUTIONAL NEURAL NETWORKS". International Journal of Computer Science and Mobile Computing 10, no. 7 (30.07.2021): 10–19. http://dx.doi.org/10.47760/ijcsmc.2021.v10i07.002.

Full text source
Abstract:
Automatic colorization helps to hallucinate what an input grayscale image would look like when colorized; automatic coloring makes an image look and feel better than grayscale. One of the most important technologies used in machine learning is deep learning, which trains the computer with algorithms that imitate the working of the human brain. Some of the areas in which it is used are medicine, industrial automation, and electronics. The main objective of this project is coloring grayscale images. We combine the concepts of convolutional neural networks with the OpenCV library in Python to construct our desired model. A user interface has also been built with PIL to collect personalized inputs. Traditionally, the user had to give details about boundaries, which colors to apply, and so on; such colorization requires considerable user intervention and remains a tedious, time-consuming, and expensive task. In this paper we therefore build a model that colorizes grayscale images automatically using modern deep learning techniques. In the colorization task, the model needs to find characteristics that map grayscale images to colored ones.
APA, Harvard, Vancouver, ISO, and other styles
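Models of this kind are usually trained in a self-supervised way: any color photograph provides a grayscale input and a chrominance target. Below is a minimal sketch of such data preparation in Lab space with OpenCV; the resolution and normalization constants are illustrative assumptions rather than the authors' pipeline.

```python
# Hedged sketch: turn a color image into an (input L, target ab) training pair in Lab space.
import cv2
import numpy as np

def make_training_pair(path, size=(256, 256)):
    bgr = cv2.imread(path)                      # any color photograph on disk
    bgr = cv2.resize(bgr, size)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L = lab[:, :, :1] / 255.0                   # network input: lightness in [0, 1]
    ab = (lab[:, :, 1:] - 128.0) / 128.0        # regression target: ab roughly in [-1, 1]
    return L, ab

def reassemble(L, ab_pred):
    """Recombine the input lightness with predicted chrominance into a BGR image."""
    lab = np.concatenate([L * 255.0, ab_pred * 128.0 + 128.0], axis=-1)
    return cv2.cvtColor(np.clip(lab, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```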
4

Netha, Guda Pranay, M. S. S. Manohar, M. Sai Amartya Maruth, and Ganjikunta Ganesh Kumar. "Colourization of Black and White Images using Deep Learning". International Journal of Computer Science and Mobile Computing 11, no. 1 (30.01.2022): 116–21. http://dx.doi.org/10.47760/ijcsmc.2022.v11i01.014.

Full text source
Abstract:
Colorization is the process of transforming grayscale photos into colour images that are aesthetically appealing. The basic objective is to persuade the viewer that the outcome is genuine. The majority of grayscale photographs that need to be colourized depict natural scenes. Over the last 20 years, a broad range of colorization methods have been created, ranging from algorithmically simple procedures that are time- and energy-consuming due to unavoidable human participation, to more complex but also more automated ones. Automatic conversion has evolved into a difficult field that mixes machine learning and deep learning with art. The purpose of this study is to provide an overview and assessment of grayscale picture colorization methods and techniques used on natural photos. The study categorises existing colorization approaches, discusses the ideas underlying them, and highlights their benefits and drawbacks. Deep learning methods are given special consideration. The picture quality and processing time of relevant approaches are compared. Different measures are used to judge the quality of colour images. Because of the complexity of the human visual system, measuring the perceived quality of a colour image is difficult. Multiple metrics used to assess colorization systems compute the difference between the predicted colour value and the ground truth, which is not always consistent with image plausibility. According to the findings, user-guided neural networks are the most promising category for colorization since they successfully blend human participation with machine learning and neural network automation.
APA, Harvard, Vancouver, ISO, and other styles
5

Farella, Elisa Mariarosaria, Salim Malek, and Fabio Remondino. "Colorizing the Past: Deep Learning for the Automatic Colorization of Historical Aerial Images". Journal of Imaging 8, no. 10 (1.10.2022): 269. http://dx.doi.org/10.3390/jimaging8100269.

Full text source
Abstract:
The colorization of grayscale images can, nowadays, take advantage of recent progress and the automation of deep-learning techniques. From the media industry to medical or geospatial applications, image colorization is an attractive and widely investigated image processing practice, and it is also helpful for revitalizing historical photographs. After exploring some of the existing fully automatic learning methods, the article presents a new neural network architecture, Hyper-U-NET, which combines a U-NET-like architecture and HyperConnections to handle the colorization of historical black and white aerial images. The training dataset (about 10,000 colored aerial image patches) and the realized neural network are available on our GitHub page to boost further research investigations in this field.
APA, Harvard, Vancouver, ISO, and other styles
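The entry above builds on a U-NET-style encoder-decoder. For orientation, the sketch below shows the generic skip-connection pattern that such colorization networks share; it deliberately omits the HyperConnections specific to the paper's Hyper-U-NET, and all channel counts are arbitrary assumptions.

```python
# Hedged sketch: a two-level U-Net-style colorizer (L channel in, ab channels out).
# Skip connections concatenate encoder features with the decoder path.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = block(1, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bott = block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)     # 32 upsampled + 32 from the skip connection
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)     # 16 + 16
        self.head = nn.Conv2d(16, 2, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

ab = MiniUNet()(torch.rand(1, 1, 128, 128))   # -> (1, 2, 128, 128)
```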
6

Xu, Min, and YouDong Ding. "Fully automatic image colorization based on semantic segmentation technology". PLOS ONE 16, no. 11 (30.11.2021): e0259953. http://dx.doi.org/10.1371/journal.pone.0259953.

Full text source
Abstract:
Aiming at problems of deep-learning-based image colorization algorithms such as color bleeding and insufficient color, this paper recasts the study of image colorization as the optimization of image semantic segmentation and proposes a fully automatic image colorization model based on semantic segmentation technology. Firstly, we use the encoder as the local feature extraction network and VGG-16 as the global feature extraction network. These two parts do not interfere with each other, but they share low-level features. Then, the first fusion module is constructed to merge local and global features, and the fusion results are fed into the semantic segmentation network and the color prediction network, respectively. Finally, the color prediction network obtains the semantic segmentation information of the image through the second fusion module and predicts the chrominance of the image based on it. Several sets of experiments show that the performance of our model grows steadily stronger as more data become available. Even in complex scenes, our model can predict reasonable and correct colors, and the output looks very real and natural.
APA, Harvard, Vancouver, ISO, and other styles
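The abstract above hinges on a fusion module that merges a spatial map of local features with a global image descriptor before the segmentation and color-prediction branches. A minimal sketch of that kind of fusion follows (tile the global vector over the spatial grid, concatenate, and mix with a 1x1 convolution); the channel sizes are assumptions, not the paper's.

```python
# Hedged sketch of a local/global feature fusion module:
# broadcast a global descriptor over the spatial grid and mix it with local features.
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    def __init__(self, local_ch=256, global_ch=512, out_ch=256):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(local_ch + global_ch, out_ch, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, local_feat, global_feat):
        # local_feat: (B, local_ch, H, W); global_feat: (B, global_ch)
        b, _, h, w = local_feat.shape
        tiled = global_feat[:, :, None, None].expand(b, global_feat.shape[1], h, w)
        return self.mix(torch.cat([local_feat, tiled], dim=1))

fused = FusionModule()(torch.rand(2, 256, 28, 28), torch.rand(2, 512))  # -> (2, 256, 28, 28)
```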
7

Liu, Shiguang, and Xiang Zhang. "Automatic grayscale image colorization using histogram regression". Pattern Recognition Letters 33, no. 13 (October 2012): 1673–81. http://dx.doi.org/10.1016/j.patrec.2012.06.001.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Zhitong, Nanxuan Zhao, and Jing Liao. "UniColor". ACM Transactions on Graphics 41, no. 6 (30.11.2022): 1–16. http://dx.doi.org/10.1145/3550454.3555471.

Full text source
Abstract:
We propose the first unified framework UniColor to support colorization in multiple modalities, including both unconditional and conditional ones, such as stroke, exemplar, text, and even a mix of them. Rather than learning a separate model for each type of condition, we introduce a two-stage colorization framework for incorporating various conditions into a single model. In the first stage, multi-modal conditions are converted into a common representation of hint points. Particularly, we propose a novel CLIP-based method to convert the text to hint points. In the second stage, we propose a Transformer-based network composed of Chroma-VQGAN and Hybrid-Transformer to generate diverse and high-quality colorization results conditioned on hint points. Both qualitative and quantitative comparisons demonstrate that our method outperforms state-of-the-art methods in every control modality and further enables multi-modal colorization that was not feasible before. Moreover, we design an interactive interface showing the effectiveness of our unified framework in practical usage, including automatic colorization, hybrid-control colorization, local recolorization, and iterative color editing. Our code and models are available at https://luckyhzt.github.io/unicolor .
APA, Harvard, Vancouver, ISO, and other styles
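UniColor's first stage, as described above, reduces every condition type to a shared set of hint points. The toy code below only illustrates such an intermediate representation, rasterizing a list of (row, col, a, b) hints into dense hint and mask maps; the exact format used in the paper may differ.

```python
# Hedged sketch: rasterize hint points (row, col, a, b) into dense ab-hint and mask maps
# that a second-stage colorizer could consume.
import numpy as np

def rasterize_hints(hints, height, width):
    ab_map = np.zeros((height, width, 2), dtype=np.float32)
    mask = np.zeros((height, width, 1), dtype=np.float32)
    for row, col, a, b in hints:
        ab_map[row, col] = (a, b)
        mask[row, col] = 1.0
    return ab_map, mask

# Hints could come from strokes, an exemplar image, or a text-to-color mapping;
# here they are simply hard-coded for illustration.
hints = [(10, 12, 0.4, -0.2), (40, 55, -0.1, 0.6)]
ab_map, mask = rasterize_hints(hints, height=64, width=64)
```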
9

Furusawa, Chie. "2-1 Colorization Techniques for Manga and Line Drawings; Comicolorization: Semi-Automatic Manga Colorization". Journal of The Institute of Image Information and Television Engineers 72, no. 5 (2018): 347–52. http://dx.doi.org/10.3169/itej.72.347.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
10

Sugumar, S. J. "Colorization of Digital Images: An Automatic and Efficient Approach through Deep learning". Journal of Innovative Image Processing 4, no. 3 (16.09.2022): 183–94. http://dx.doi.org/10.36548/jiip.2022.3.006.

Full text source
Abstract:
Colorization is a feasible, though not guaranteed, mapping between intensity and chrominance values. This paper presents a colorization system that draws inspiration from recent developments in deep learning and makes use of both locally and globally relevant data. One such property is the rarity of each color category on the quantized plane. The denoising model uses a hybrid approach with cluster normalization built on a U-Net deep learning framework, following the basic U-Net design used for segmentation. To eliminate Gaussian noise in digital images, this article develops and tests a generic deep learning denoising model. PSNR and MSE are used as performance measures for comparison purposes.
APA, Harvard, Vancouver, ISO, and other styles
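Since the entry above reports results with MSE and PSNR, here is a short reminder of how those scores are computed between a colorized output and its ground truth (for 8-bit images, MAX = 255):

```python
# Hedged sketch: MSE and PSNR between a colorized result and its ground-truth color image.
import numpy as np

def mse(pred, target):
    return float(np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2))

def psnr(pred, target, max_value=255.0):
    err = mse(pred, target)
    if err == 0:
        return float("inf")                      # identical images
    return 10.0 * np.log10(max_value ** 2 / err)

pred = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(mse(pred, target), psnr(pred, target))
```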

Doctoral dissertations on the topic "Automatic Colorization"

1

Hati, Yliess. "Expression Créative Assistée par IA : Le Cas de La Colorisation Automatique de Line Art". Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS060.

Full text source
Abstract:
Automatic lineart colorization is a challenging task for computer vision. Contrary to grayscale images, linearts lack semantic information such as shading and texture, making the task even more difficult. This thesis builds upon related work and explores the use of modern generative Artificial Intelligence (AI) architectures such as Generative Adversarial Networks (GANs) and Denoising Diffusion Models (DDMs) both to improve the quality of previous techniques and to better capture the user's colorization intent through three contributions: PaintsTorch, StencilTorch and StablePaint. As a result, an iterative and interactive framework based on colored strokes and masks provided by the end user is built to foster Human-Machine collaboration in favour of natural, emerging workflows inspired by digital painting processes.
APA, Harvard, Vancouver, ISO, and other styles
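The interactive workflow summarized above conditions a generator on the lineart together with user strokes and a region mask. The sketch below shows one plausible way to assemble such a conditioning tensor for a single editing iteration; the channel layout is an illustrative assumption, not the architecture used in the thesis.

```python
# Hedged sketch: stack lineart, user stroke colors, and an editing mask into a
# single conditioning tensor for a (hypothetical) stroke-guided generator.
import torch

def build_condition(lineart, stroke_rgb, stroke_mask, edit_mask):
    """
    lineart:     (1, H, W)  black-and-white line drawing in [0, 1]
    stroke_rgb:  (3, H, W)  colors painted by the user, zero elsewhere
    stroke_mask: (1, H, W)  1 where a stroke was painted
    edit_mask:   (1, H, W)  1 inside the region the user wants recolorized
    """
    return torch.cat([lineart, stroke_rgb, stroke_mask, edit_mask], dim=0)  # (6, H, W)

H = W = 128
cond = build_condition(
    torch.rand(1, H, W),
    torch.zeros(3, H, W),
    torch.zeros(1, H, W),
    torch.ones(1, H, W),
)
# cond would be batched and fed to a GAN or diffusion generator at each interaction step.
```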
2

Chang, Yu-wei, and 張佑瑋. "Automatic grayscale image colorization". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/89981338295360370277.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Yung-An, and 陳勇安. "Automatic Colorization Defects Inspection using Deep Learning Network". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3zbtg6.

Full text source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Automatic Colorization"

1

Tran, Tan-Bao, and Thai-Son Tran. "Automatic Natural Image Colorization". In Intelligent Information and Database Systems, 612–21. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41964-6_53.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Larsson, Gustav, Michael Maire, and Gregory Shakhnarovich. "Learning Representations for Automatic Colorization". In Computer Vision – ECCV 2016, 577–93. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46493-0_35.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Dhir, Rashi, Meghna Ashok, Shilpa Gite, and Ketan Kotecha. "Automatic Image Colorization Using GANs". In Soft Computing and its Engineering Applications, 15–26. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-0708-0_2.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Charpiat, Guillaume, Matthias Hofmann, and Bernhard Schölkopf. "Automatic Image Colorization Via Multimodal Predictions". In Lecture Notes in Computer Science, 126–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88690-7_10.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Ding, Xiaowei, Yi Xu, Lei Deng, and Xiaokang Yang. "Colorization Using Quaternion Algebra with Automatic Scribble Generation". In Lecture Notes in Computer Science, 103–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27355-1_12.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Golyadkin, Maksim, and Ilya Makarov. "Semi-automatic Manga Colorization Using Conditional Adversarial Networks". In Lecture Notes in Computer Science, 230–42. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72610-2_17.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
7

Kouzouglidis, Panagiotis, Giorgos Sfikas, and Christophoros Nikou. "Automatic Video Colorization Using 3D Conditional Generative Adversarial Networks". In Advances in Visual Computing, 209–18. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33720-9_16.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Lee, Hyejin, Daehee Kim, Daeun Lee, Jinkyu Kim, and Jaekoo Lee. "Bridging the Domain Gap Towards Generalization in Automatic Colorization". In Lecture Notes in Computer Science, 527–43. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19790-1_32.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
9

Mouzon, Thomas, Fabien Pierre, and Marie-Odile Berger. "Joint CNN and Variational Model for Fully-Automatic Image Colorization". In Lecture Notes in Computer Science, 535–46. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22368-7_42.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
10

Dai, Jiawu, Bin Jiang, Chao Yang, Lin Sun, and Bolin Zhang. "Local Pyramid Attention and Spatial Semantic Modulation for Automatic Image Colorization". In Big Data, 165–81. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9709-8_12.

Full text source
APA, Harvard, Vancouver, ISO, and other styles

Conference abstracts on the topic "Automatic Colorization"

1

Watanabe, Taiki, Seitaro Shinagawa, Takuya Funatomi, Akinobu Maejima, Yasuhiro Mukaigawa, Satoshi Nakamura, and Hiroyuki Kubo. "Improved Automatic Colorization by Optimal Pre-colorization". In SIGGRAPH '23: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3588028.3603669.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Śluzek, Andrzej. "On Unguided Automatic Colorization of Monochrome Images". In WSCG 2023 – 31st International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision. University of West Bohemia, Czech Republic, 2023. http://dx.doi.org/10.24132/csrn.3301.38.

Full text source
Abstract:
Image colorization is a challenging problem due to the infinite RGB solutions for a grayscale picture. Therefore, human assistance, either directly or indirectly, is essential for achieving visually plausible colorization. This paper aims to perform colorization using only a grayscale image as the data source, without any reliance on metadata or human hints. The method assumes an (arbitrary) rgb2gray model and utilizes a few simple heuristics. Despite probabilistic elements, the results are visually acceptable and repeatable, making this approach feasible (e.g., for aesthetic purposes) in domains where only monochrome visual representations exist. The paper explains the method, presents exemplary results, and discusses a few supplementary issues.
APA, Harvard, Vancouver, ISO, and other styles
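In the unguided setting above, each colorized pixel is constrained only through the assumed rgb2gray model. As a toy illustration (not the paper's heuristics), the snippet below samples an RGB triple consistent with a given gray level under the common BT.601 luma weights, which makes the under-determination mentioned in the abstract concrete:

```python
# Hedged sketch: sample an RGB value whose BT.601 luma matches a given gray level.
# Infinitely many triples satisfy 0.299 R + 0.587 G + 0.114 B = Y, which is exactly
# why unguided colorization needs extra heuristics to pick a plausible one.
import random

W_R, W_G, W_B = 0.299, 0.587, 0.114

def sample_rgb_for_gray(y, tries=10_000):
    """Return one (r, g, b) in [0, 255] with luma approximately equal to y."""
    for _ in range(tries):
        r = random.uniform(0, 255)
        g = random.uniform(0, 255)
        b = (y - W_R * r - W_G * g) / W_B      # solve the luma constraint for B
        if 0.0 <= b <= 255.0:
            return r, g, b
    return y, y, y                             # fall back to a neutral gray

r, g, b = sample_rgb_for_gray(120.0)
assert abs(W_R * r + W_G * g + W_B * b - 120.0) < 1e-6
```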
3

Thasarathan, Harrish, Kamyar Nazeri, and Mehran Ebrahimi. "Automatic Temporally Coherent Video Colorization". In 2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019. http://dx.doi.org/10.1109/crv.2019.00033.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Konovalov, Vitaly. "Method for automatic cartoon colorization". In 2023 IX International Conference on Information Technology and Nanotechnology (ITNT). IEEE, 2023. http://dx.doi.org/10.1109/itnt57377.2023.10139184.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

AbdulHalim, Mayada F., and Zaineb A. Mejbil. "Automatic colorization without human intervention". In 2008 International Conference on Computer and Communication Engineering (ICCCE). IEEE, 2008. http://dx.doi.org/10.1109/iccce.2008.4580569.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Lal, Shamit, Vineet Garg, and Om Prakash Verma. "Automatic Image Colorization Using Adversarial Training". In the 9th International Conference. New York, New York, USA: ACM Press, 2017. http://dx.doi.org/10.1145/3163080.3163104.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
7

Deshpande, Aditya, Jason Rock, and David Forsyth. "Learning Large-Scale Automatic Image Colorization". In 2015 IEEE International Conference on Computer Vision (ICCV). IEEE, 2015. http://dx.doi.org/10.1109/iccv.2015.72.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Lu, Yurong, Xianglin Huang, Yan Zhai, Lifang Yang, and Yirui Wang. "ColorGAN: Automatic Image Colorization with GAN". In 2023 IEEE 3rd International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA). IEEE, 2023. http://dx.doi.org/10.1109/iciba56860.2023.10164924.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
9

Goel, Divyansh, Sakshi Jain, Dinesh Kumar Vishwakarma, and Aryan Bansal. "Automatic Image Colorization using U-Net". In 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2021. http://dx.doi.org/10.1109/icccnt51525.2021.9580001.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
10

Nguyen, Van, Vicky Sintunata, and Terumasa Aoki. "Automatic Image Colorization based on Feature Lines". In International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2016. http://dx.doi.org/10.5220/0005676401260133.

Full text source
APA, Harvard, Vancouver, ISO, and other styles