Selected scholarly literature on the topic "Automatic Colorization"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Automatic Colorization".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.

Journal articles on the topic "Automatic Colorization"

1

Aoki, Terumasa, and Van Nguyen. "Global Distribution Adjustment and Nonlinear Feature Transformation for Automatic Colorization". Advances in Multimedia 2018 (2018): 1–15. http://dx.doi.org/10.1155/2018/1504691.

Abstract:
Automatic colorization is generally classified into two groups: propagation-based methods and reference-based methods. In reference-based automatic colorization methods, color image(s) are used as reference(s) to reconstruct the original color of a gray target image. The most important task here is to find the best matching pairs for all pixels between reference and target images in order to transfer color information from reference to target pixels. Many attractive local feature-based image matching methods have been developed over the last two decades. Unfortunately, as far as we know, there are no optimal matching methods for automatic colorization, because the requirements for pixel matching in automatic colorization are wholly different from those for traditional image matching. To design an efficient matching algorithm for automatic colorization, clustering pixels with low computational cost and generating a descriptive feature vector are the most important challenges to be solved. In this paper, we present a novel method to address these two problems. In particular, our work concentrates on solving the second problem (designing a descriptive feature vector); namely, we discuss how to learn a descriptive texture feature using a scaled sparse texture feature combined with a nonlinear transformation to construct an optimal feature descriptor. Our experimental results show that our proposed method outperforms state-of-the-art methods in terms of robustness of color reconstruction for automatic colorization applications.
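The matching-and-transfer pipeline this abstract describes can be illustrated with a short sketch. The following is a generic, simplified reference-based color transfer in Python (simple luminance/local-variance features and nearest-neighbour matching stand in for the paper's learned sparse-texture descriptor and nonlinear transformation; all helper names here are illustrative):

```python
# Generic reference-based color transfer: match target pixels to reference
# pixels in a feature space, then copy chrominance. Illustrative only.
import numpy as np
import cv2
from scipy.spatial import cKDTree

def pixel_features(luminance: np.ndarray, win: int = 5) -> np.ndarray:
    """Per-pixel features: (luminance, local standard deviation)."""
    g = luminance.astype(np.float32)
    mean = cv2.blur(g, (win, win))
    sq_mean = cv2.blur(g * g, (win, win))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return np.stack([g.ravel(), std.ravel()], axis=1)

def transfer_color(target_gray: np.ndarray, reference_bgr: np.ndarray) -> np.ndarray:
    """target_gray: uint8 (H, W); reference_bgr: uint8 color image."""
    ref_lab = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB)
    ref_feats = pixel_features(ref_lab[:, :, 0])
    tgt_feats = pixel_features(target_gray)
    # Match each target pixel against a subsample of reference pixels.
    idx = np.random.choice(len(ref_feats), min(5000, len(ref_feats)), replace=False)
    _, nn = cKDTree(ref_feats[idx]).query(tgt_feats)
    best = idx[nn]
    out = np.zeros((*target_gray.shape, 3), np.uint8)
    out[:, :, 0] = target_gray  # keep the target luminance
    out[:, :, 1] = ref_lab[:, :, 1].ravel()[best].reshape(target_gray.shape)
    out[:, :, 2] = ref_lab[:, :, 2].ravel()[best].reshape(target_gray.shape)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```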
2

Alam Khan, Sharique, and Alok Katiyar. "Automatic colorization of natural images using deep learning". YMER Digital 21, no. 05 (May 20, 2022): 946–51. http://dx.doi.org/10.37896/ymer21.05/a6.

Abstract:
An approach based on deep learning for automatic colorization of images with optional user-guided hints. The system maps a grayscale image, along with user "hints" (selected colors), to an output colorization with a Convolutional Neural Network (CNN). Previous approaches have relied heavily on user input, which results in non-real-time, desaturated outputs. The network incorporates user edits by fusing low-level information from the source with high-level information learned from large-scale data. Some networks are trained on a large dataset to eliminate this dependency. Image colorization systems find applications in astronomical photography, CCTV footage, electron microscopy, etc. The various approaches, which combine color data from large datasets with user inputs, provide a model for accurate and efficient colorization of grayscale images. Keywords: image colorization; deep learning; convolutional neural network; image processing.
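As a concrete illustration of the hint-based setup described above (a minimal sketch, not the authors' network), the model input can stack the grayscale L channel with sparse ab hints and a hint mask:

```python
# Hint-conditioned colorization sketch in PyTorch: the CNN sees the grayscale
# L channel, sparse user-selected ab colors, and a mask marking hinted pixels.
import torch
import torch.nn as nn

class HintColorizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),   # 4 = L + a,b hints + mask
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),   # predict ab in [-1, 1]
        )

    def forward(self, L, ab_hints, mask):
        return self.net(torch.cat([L, ab_hints, mask], dim=1))

# One 64x64 image with a single user hint at pixel (10, 20):
L = torch.rand(1, 1, 64, 64)
hints = torch.zeros(1, 2, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
hints[0, :, 10, 20] = torch.tensor([0.3, -0.5])
mask[0, 0, 10, 20] = 1.0
ab = HintColorizer()(L, hints, mask)  # (1, 2, 64, 64)
```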
3

Prasanna, N. Lakshmi, Sk Sohal Rehman, V. Naga Phani, S. Koteswara Rao, and T. Ram Santosh. "AUTOMATIC COLORIZATION USING CONVOLUTIONAL NEURAL NETWORKS". International Journal of Computer Science and Mobile Computing 10, no. 7 (July 30, 2021): 10–19. http://dx.doi.org/10.47760/ijcsmc.2021.v10i07.002.

Abstract:
Automatic colorization hallucinates what an input grayscale image would look like when colorized; automatic coloring makes the image look and feel better than grayscale. Deep learning, one of the most important technologies in machine learning, trains the computer with algorithms that imitate the working of the human brain; it is applied in areas such as medicine, industrial automation, and electronics. The main objective of this project is coloring grayscale images. We combine convolutional neural networks with the OpenCV library in Python to construct our model, and a user interface built with PIL collects personalized inputs. Previously, the user had to give details about boundaries, which colors to apply, and so on; such colorization requires considerable user intervention and remains a tedious, time-consuming, and expensive task. In this paper, we therefore build a model that colorizes grayscale images automatically using modern deep learning techniques. In the colorization task, the model needs to find characteristics that map grayscale images to colored ones.
4

Netha, Guda Pranay, M. S. S. Manohar, M. Sai Amartya Maruth, and Ganjikunta Ganesh Kumar. "Colourization of Black and White Images using Deep Learning". International Journal of Computer Science and Mobile Computing 11, no. 1 (January 30, 2022): 116–21. http://dx.doi.org/10.47760/ijcsmc.2022.v11i01.014.

Abstract:
Colorization is the process of transforming grayscale photos into colour images that are aesthetically appealing. The basic objective is to persuade the viewer that the outcome is genuine. The majority of grayscale photographs that need to be colourized depict natural scenes. Over the last 20 years, a broad range of colorization methods has been created, ranging from algorithmically simple but time- and energy-consuming procedures, owing to inescapable human participation, to more difficult but also more automated ones. Automatic conversion has evolved into a demanding field that mixes machine learning and deep learning with art. The purpose of this study is to provide an overview and assessment of grayscale picture colorization methods and techniques applied to natural photos. The study categorises existing colorization approaches, discusses the ideas underlying them, and highlights their benefits and drawbacks. Deep learning methods are given special consideration. The picture quality and processing time of relevant approaches are compared. Different measures are used to judge the quality of colour images; because of the complexity of the human visual system, measuring the perceived quality of a colour image is difficult. The multiple metrics used to assess colorization systems compute the difference between the predicted colour value and the ground truth, which is not always consistent with image plausibility. According to the findings, user-guided neural networks are the most promising category for colorization, since they successfully blend human participation with machine learning and neural network automation.
5

Farella, Elisa Mariarosaria, Salim Malek, and Fabio Remondino. "Colorizing the Past: Deep Learning for the Automatic Colorization of Historical Aerial Images". Journal of Imaging 8, no. 10 (October 1, 2022): 269. http://dx.doi.org/10.3390/jimaging8100269.

Abstract:
The colorization of grayscale images can nowadays take advantage of recent progress in, and the automation of, deep-learning techniques. From the media industry to medical or geospatial applications, image colorization is an attractive and well-investigated image-processing practice, and it is also helpful for revitalizing historical photographs. After exploring some of the existing fully automatic learning methods, the article presents a new neural network architecture, Hyper-U-NET, which combines a U-NET-like architecture with HyperConnections to handle the colorization of historical black-and-white aerial images. The training dataset (about 10,000 colored aerial image patches) and the trained neural network are available on our GitHub page to support further research in this field.
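For orientation, a bare-bones U-Net-style encoder/decoder with a skip connection is sketched below (a generic illustration only; the paper's HyperConnections are not reproduced here, and the real Hyper-U-NET is on the authors' GitHub page):

```python
# Minimal U-Net-style colorizer: downsample, upsample, and concatenate the
# encoder features back in (the skip connection), then predict ab channels.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Conv2d(32, 2, 3, padding=1)  # 32 = decoder + skip features

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.down(e))
        return self.dec(torch.cat([e, d], dim=1))  # skip connection

ab = TinyUNet()(torch.rand(1, 1, 64, 64))  # (1, 2, 64, 64)
```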
6

Xu, Min, and YouDong Ding. "Fully automatic image colorization based on semantic segmentation technology". PLOS ONE 16, no. 11 (November 30, 2021): e0259953. http://dx.doi.org/10.1371/journal.pone.0259953.

Abstract:
To address problems of deep-learning-based image colorization algorithms, such as color bleeding and insufficient color, this paper recasts image colorization as an optimization of image semantic segmentation and proposes a fully automatic image colorization model based on semantic segmentation technology. First, an encoder serves as the local feature extraction network and VGG-16 as the global feature extraction network; the two parts do not interfere with each other but share low-level features. Then a first fusion module merges local and global features, and the fused result is fed into both a semantic segmentation network and a color prediction network. Finally, the color prediction network obtains the image's semantic segmentation information through a second fusion module and predicts the image's chrominance based on it. Several sets of experiments show that the model's performance keeps improving as more data is supplied. Even in complex scenes, the model predicts reasonable, correct colors, and the output looks real and natural.
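The two-branch, two-fusion layout this abstract describes can be sketched as follows (a simplified stand-in: tiny conv stacks replace the paper's encoder and VGG-16, and the segmentation class count is arbitrary):

```python
# Two-branch sketch: shared low-level features feed a local branch and a
# global branch; the global vector is tiled and fused with local features
# (first fusion), and segmentation logits join the color head (second fusion).
import torch
import torch.nn as nn

class TwoBranchColorizer(nn.Module):
    def __init__(self, n_classes=21):
        super().__init__()
        self.low = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU())
        self.local = nn.Sequential(nn.Conv2d(32, 64, 3, 1, 1), nn.ReLU())
        self.glob = nn.Sequential(                      # stand-in for VGG-16
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.fuse1 = nn.Conv2d(128, 64, 1)              # first fusion module
        self.seg_head = nn.Conv2d(64, n_classes, 1)     # semantic segmentation
        self.ab_head = nn.Conv2d(64 + n_classes, 2, 1)  # second fusion -> chrominance

    def forward(self, gray):
        low = self.low(gray)
        loc = self.local(low)
        g = self.glob(low)[:, :, None, None].expand(-1, -1, *loc.shape[2:])
        fused = torch.relu(self.fuse1(torch.cat([loc, g], dim=1)))
        seg = self.seg_head(fused)
        ab = self.ab_head(torch.cat([fused, seg], dim=1))
        return seg, ab

seg, ab = TwoBranchColorizer()(torch.rand(1, 1, 128, 128))  # (1,21,64,64), (1,2,64,64)
```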
7

Liu, Shiguang, and Xiang Zhang. "Automatic grayscale image colorization using histogram regression". Pattern Recognition Letters 33, no. 13 (October 2012): 1673–81. http://dx.doi.org/10.1016/j.patrec.2012.06.001.

8

Huang, Zhitong, Nanxuan Zhao, and Jing Liao. "UniColor". ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–16. http://dx.doi.org/10.1145/3550454.3555471.

Abstract:
We propose UniColor, the first unified framework to support colorization in multiple modalities, including both unconditional and conditional ones such as stroke, exemplar, text, and even a mix of them. Rather than learning a separate model for each type of condition, we introduce a two-stage colorization framework that incorporates various conditions into a single model. In the first stage, multi-modal conditions are converted into a common representation of hint points. In particular, we propose a novel CLIP-based method to convert text to hint points. In the second stage, we propose a Transformer-based network composed of Chroma-VQGAN and Hybrid-Transformer to generate diverse and high-quality colorization results conditioned on hint points. Both qualitative and quantitative comparisons demonstrate that our method outperforms state-of-the-art methods in every control modality and further enables multi-modal colorization that was not feasible before. Moreover, we design an interactive interface demonstrating the effectiveness of our unified framework in practical usage, including automatic colorization, hybrid-control colorization, local recolorization, and iterative color editing. Our code and models are available at https://luckyhzt.github.io/unicolor.
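The common hint-point representation is the unifying idea; a toy sketch of stage one follows (a hypothetical data layout, not the authors' code), reducing a user stroke to sparse (row, col, a, b) hint points on a coarse grid:

```python
# Collapse stroke pixels into one averaged chrominance hint per grid cell,
# the kind of sparse common representation all modalities are mapped to.
def stroke_to_hint_points(stroke_pixels, stroke_ab, grid=16):
    cells = {}
    for (r, c), ab in zip(stroke_pixels, stroke_ab):
        cells.setdefault((r // grid, c // grid), []).append(ab)
    hints = []
    for (r, c), abs_ in cells.items():
        a = sum(x[0] for x in abs_) / len(abs_)
        b = sum(x[1] for x in abs_) / len(abs_)
        hints.append((r * grid + grid // 2, c * grid + grid // 2, a, b))
    return hints

# Three stroke pixels collapse to two hint points (one per occupied cell):
print(stroke_to_hint_points([(3, 5), (4, 6), (40, 41)],
                            [(0.2, -0.1), (0.25, -0.05), (-0.4, 0.3)]))
```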
9

Furusawa, Chie. "2-1 Colorization Techniques for Manga and Line Drawings; Comicolorization: Semi-Automatic Manga Colorization". Journal of The Institute of Image Information and Television Engineers 72, no. 5 (2018): 347–52. http://dx.doi.org/10.3169/itej.72.347.

10

Sugumar, S. J. "Colorization of Digital Images: An Automatic and Efficient Approach through Deep learning". Journal of Innovative Image Processing 4, no. 3 (September 16, 2022): 183–94. http://dx.doi.org/10.36548/jiip.2022.3.006.

Abstract:
Colorization is not a guaranteed mapping between intensity and chrominance values, but a feasible one. This paper presents a colorization system that draws inspiration from recent developments in deep learning and makes use of both locally and globally relevant data; one such property is the rarity of each color category on the quantized plane. The denoising model takes a hybrid approach, applying cluster normalization within a U-Net deep-learning framework built on the basic U-Net design for segmentation. To remove Gaussian noise from digital images, the article develops and tests a generic deep-learning denoising model, with PSNR and MSE used as performance measures for comparison.
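Since the abstract names PSNR and MSE as its performance measures, here is a quick reference implementation for 8-bit images (standard definitions, not code from the paper):

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two same-shape images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher is better."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```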

Dissertations on the topic "Automatic Colorization"

1

Hati, Yliess. "Expression Créative Assistée par IA : Le Cas de La Colorisation Automatique de Line Art". Electronic thesis or dissertation, Reims, 2023. http://www.theses.fr/2023REIMS060.

Abstract:
Automatic lineart colorization is a challenging task for computer vision. Contrary to grayscale images, linearts lack semantic information such as shading and texture, making the task even more difficult. This thesis builds upon related work and explores the use of modern generative Artificial Intelligence (AI) architectures such as Generative Adversarial Networks (GANs) and Denoising Diffusion Models (DDMs), both to improve the quality of previous techniques and to better capture the user's colorization intent, through three contributions: PaintsTorch, StencilTorch, and StablePaint. The result is an iterative and interactive framework based on colored strokes and masks provided by the end user, built to foster Human-Machine collaboration in favour of natural, emerging workflows inspired by digital painting processes.
2

Chang, Yu-wei, and 張佑瑋. "Automatic grayscale image colorization". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/89981338295360370277.

3

Chen, Yung-An, and 陳勇安. "Automatic Colorization Defects Inspection using Deep Learning Network". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3zbtg6.


Book chapters on the topic "Automatic Colorization"

1

Tran, Tan-Bao, and Thai-Son Tran. "Automatic Natural Image Colorization". In Intelligent Information and Database Systems, 612–21. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41964-6_53.

2

Larsson, Gustav, Michael Maire, and Gregory Shakhnarovich. "Learning Representations for Automatic Colorization". In Computer Vision – ECCV 2016, 577–93. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46493-0_35.

3

Dhir, Rashi, Meghna Ashok, Shilpa Gite, and Ketan Kotecha. "Automatic Image Colorization Using GANs". In Soft Computing and its Engineering Applications, 15–26. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-0708-0_2.

4

Charpiat, Guillaume, Matthias Hofmann, and Bernhard Schölkopf. "Automatic Image Colorization Via Multimodal Predictions". In Lecture Notes in Computer Science, 126–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88690-7_10.

5

Ding, Xiaowei, Yi Xu, Lei Deng, and Xiaokang Yang. "Colorization Using Quaternion Algebra with Automatic Scribble Generation". In Lecture Notes in Computer Science, 103–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27355-1_12.

6

Golyadkin, Maksim, and Ilya Makarov. "Semi-automatic Manga Colorization Using Conditional Adversarial Networks". In Lecture Notes in Computer Science, 230–42. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72610-2_17.

7

Kouzouglidis, Panagiotis, Giorgos Sfikas, and Christophoros Nikou. "Automatic Video Colorization Using 3D Conditional Generative Adversarial Networks". In Advances in Visual Computing, 209–18. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33720-9_16.

8

Lee, Hyejin, Daehee Kim, Daeun Lee, Jinkyu Kim, and Jaekoo Lee. "Bridging the Domain Gap Towards Generalization in Automatic Colorization". In Lecture Notes in Computer Science, 527–43. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19790-1_32.

9

Mouzon, Thomas, Fabien Pierre, and Marie-Odile Berger. "Joint CNN and Variational Model for Fully-Automatic Image Colorization". In Lecture Notes in Computer Science, 535–46. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22368-7_42.

10

Dai, Jiawu, Bin Jiang, Chao Yang, Lin Sun, and Bolin Zhang. "Local Pyramid Attention and Spatial Semantic Modulation for Automatic Image Colorization". In Big Data, 165–81. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9709-8_12.


Conference papers on the topic "Automatic Colorization"

1

Watanabe, Taiki, Seitaro Shinagawa, Takuya Funatomi, Akinobu Maejima, Yasuhiro Mukaigawa, Satoshi Nakamura, and Hiroyuki Kubo. "Improved Automatic Colorization by Optimal Pre-colorization". In SIGGRAPH '23: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3588028.3603669.

2

Śluzek, Andrzej. "On Unguided Automatic Colorization of Monochrome Images". In WSCG 2023 – 31st International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision. University of West Bohemia, Czech Republic, 2023. http://dx.doi.org/10.24132/csrn.3301.38.

Abstract:
Image colorization is a challenging problem because a grayscale picture admits infinitely many RGB solutions. Therefore, human assistance, whether direct or indirect, is essential for achieving visually plausible colorization. This paper aims to perform colorization using only a grayscale image as the data source, without any reliance on metadata or human hints. The method assumes an (arbitrary) rgb2gray model and utilizes a few simple heuristics. Despite probabilistic elements, the results are visually acceptable and repeatable, making this approach feasible (e.g. for aesthetic purposes) in domains where only monochrome visual representations exist. The paper explains the method, presents exemplary results, and discusses a few supplementary issues.
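The constraint an assumed rgb2gray model imposes is easy to make concrete (a minimal sketch using the common ITU-R BT.601 luma weights as the assumed model; the paper's heuristics are not reproduced): any colorization of a pixel must reproduce its gray value, so a candidate color can be rescaled onto that constraint surface.

```python
# Rescale an RGB candidate so its weighted luminance matches the input gray
# value; clipping at gamut edges can break exactness slightly.
import numpy as np

W = np.array([0.299, 0.587, 0.114])  # assumed rgb2gray weights (BT.601)

def project_to_gray(rgb_guess: np.ndarray, gray: float) -> np.ndarray:
    lum = float(W @ rgb_guess)
    if lum == 0.0:
        return np.full(3, gray)  # degenerate guess: fall back to neutral gray
    return np.clip(rgb_guess * (gray / lum), 0.0, 255.0)

print(project_to_gray(np.array([200.0, 80.0, 40.0]), gray=120.0))
# -> roughly [215.6, 86.2, 43.1]; its BT.601 luminance is 120.
```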
3

Thasarathan, Harrish, Kamyar Nazeri, and Mehran Ebrahimi. "Automatic Temporally Coherent Video Colorization". In 2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019. http://dx.doi.org/10.1109/crv.2019.00033.

4

Konovalov, Vitaly. "Method for automatic cartoon colorization". In 2023 IX International Conference on Information Technology and Nanotechnology (ITNT). IEEE, 2023. http://dx.doi.org/10.1109/itnt57377.2023.10139184.

5

AbdulHalim, Mayada F., and Zaineb A. Mejbil. "Automatic colorization without human intervention". In 2008 International Conference on Computer and Communication Engineering (ICCCE). IEEE, 2008. http://dx.doi.org/10.1109/iccce.2008.4580569.

6

Lal, Shamit, Vineet Garg, and Om Prakash Verma. "Automatic Image Colorization Using Adversarial Training". In the 9th International Conference. New York, New York, USA: ACM Press, 2017. http://dx.doi.org/10.1145/3163080.3163104.

7

Deshpande, Aditya, Jason Rock, and David Forsyth. "Learning Large-Scale Automatic Image Colorization". In 2015 IEEE International Conference on Computer Vision (ICCV). IEEE, 2015. http://dx.doi.org/10.1109/iccv.2015.72.

8

Lu, Yurong, Xianglin Huang, Yan Zhai, Lifang Yang, and Yirui Wang. "ColorGAN: Automatic Image Colorization with GAN". In 2023 IEEE 3rd International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA). IEEE, 2023. http://dx.doi.org/10.1109/iciba56860.2023.10164924.

9

Goel, Divyansh, Sakshi Jain, Dinesh Kumar Vishwakarma, and Aryan Bansal. "Automatic Image Colorization using U-Net". In 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2021. http://dx.doi.org/10.1109/icccnt51525.2021.9580001.

10

Nguyen, Van, Vicky Sintunata, and Terumasa Aoki. "Automatic Image Colorization based on Feature Lines". In International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2016. http://dx.doi.org/10.5220/0005676401260133.

