Journal articles on the topic "ARTISTIC STYLE TRANSFER"

Below are the top 50 journal articles for research on the topic "ARTISTIC STYLE TRANSFER".

1

Zhu, Xuanying, Mugang Lin, Kunhui Wen, Huihuang Zhao, and Xianfang Sun. "Deep Deformable Artistic Font Style Transfer." Electronics 12, no. 7 (March 26, 2023): 1561. http://dx.doi.org/10.3390/electronics12071561.

Abstract:
The essence of font style transfer is to move the style features of an image into a font while maintaining the font’s glyph structure. At present, generative adversarial networks based on convolutional neural networks play an important role in font style generation. However, traditional convolutional neural networks that recognize font images suffer from poor adaptability to unknown image changes, weak generalization ability, and poor texture feature extraction. When the glyph structure is very complex, stylized font images cannot be effectively recognized. In this paper, a deep deformable style transfer network is proposed for artistic font style transfer, which can adjust the degree of font deformation according to the style and realize multiscale artistic style transfer of text. The new model consists of a sketch module for learning glyph mapping, a glyph module for learning style features, and a transfer module for fusing style textures. In the glyph module, the Deform-Resblock encoder is designed to extract glyph features: a deformable convolution is introduced, and the size of the residual module is changed to fuse feature information at different scales, preserve the font structure better, and enhance the controllability of text deformation. As a result, our network has greater control over text, processes image feature information better, and can produce more exquisite artistic fonts.
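The deformable convolution at the core of the Deform-Resblock can be illustrated with a minimal single-channel NumPy sketch (stride 1, 'valid' padding). This is an illustration of the general operator, not the paper's implementation; the function names and the per-tap offset layout are assumptions.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample a 2D array at continuous coordinates (y, x), zero outside."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < h and 0 <= xx < w:
                val += (1 - abs(y - yy)) * (1 - abs(x - xx)) * img[yy, xx]
    return val

def deform_conv2d(img, kernel, offsets):
    """Deformable 2D convolution, single channel, stride 1, 'valid' padding.
    offsets[i, j, a, b] is the learned (dy, dx) shift for kernel tap (a, b)
    at output position (i, j); with all-zero offsets this reduces to an
    ordinary (cross-correlation style) convolution."""
    kh, kw = kernel.shape
    h_out, w_out = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            for a in range(kh):
                for b in range(kw):
                    dy, dx = offsets[i, j, a, b]
                    out[i, j] += kernel[a, b] * bilinear_sample(
                        img, i + a + dy, j + b + dx)
    return out
```

In a real network the offsets are predicted by a small convolutional branch and trained end to end, which is what lets the glyph module bend the sampling grid to the font's structure.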
2

Lyu, Yanru, Chih-Long Lin, Po-Hsien Lin, and Rungtai Lin. "The Cognition of Audience to Artistic Style Transfer." Applied Sciences 11, no. 7 (April 6, 2021): 3290. http://dx.doi.org/10.3390/app11073290.

Abstract:
Artificial Intelligence (AI) is becoming more popular in various fields, including art creation. Advances in AI technology bring new opportunities and challenges in the creation, experience, and appreciation of art. Neural style transfer (NST) realizes the intelligent conversion of any artistic style using neural networks. However, artistic style is a product of cognition, extending from visual perception to feeling. The purpose of this paper is to study the factors affecting audiences’ cognitive differences and preferences regarding artistic style transfer; these factors are discussed in order to investigate the application of AI generative models in art creation. Based on the artist’s encoding attributes (color, stroke, texture) and the audience’s decoding cognitive levels (technical, semantic, effectiveness), this study proposes a framework to evaluate artistic style transfer from the perspective of cognition. Thirty-one subjects with a background in art, aesthetics, or design were recruited to participate in the experiment. The experimental process covered four style groups: Fauvism, Expressionism, Cubism, and Renaissance. According to the findings of this study, participants can still recognize different artistic styles after transfer by neural networks. Moreover, the texture and stroke features have a greater impact on the perception of fitness than color does. Audiences tend to prefer samples with high cognition at the semantic and effectiveness levels. These results indicate that even when AI automates routine work, the audience’s cognition of artistic style can still be kept and transferred.
3

Liu, Kunxiao, Guowu Yuan, Hao Wu, and Wenhua Qian. "Coarse-to-Fine Structure-Aware Artistic Style Transfer." Applied Sciences 13, no. 2 (January 10, 2023): 952. http://dx.doi.org/10.3390/app13020952.

Abstract:
Artistic style transfer aims to use a style image and a content image to synthesize a target image that retains the same artistic expression as the style image while preserving the basic content of the content image. Many recently proposed style transfer methods have a common problem; that is, they simply transfer the texture and color of the style image to the global structure of the content image. As a result, the content image has a local structure that is not similar to the local structure of the style image. In this paper, we present an effective method that can be used to transfer style patterns while fusing the local style structure to the local content structure. In our method, different levels of coarse stylized features are first reconstructed at low resolution using a coarse network, in which style color distribution is roughly transferred, and the content structure is combined with the style structure. Then, the reconstructed features and the content features are adopted to synthesize high-quality structure-aware stylized images with high resolution using a fine network with three structural selective fusion (SSF) modules. The effectiveness of our method is demonstrated through the generation of appealing high-quality stylization results and a comparison with some state-of-the-art style transfer methods.
4

Zhang, Chi, Yixin Zhu, and Song-Chun Zhu. "MetaStyle: Three-Way Trade-off among Speed, Flexibility, and Quality in Neural Style Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1254–61. http://dx.doi.org/10.1609/aaai.v33i01.33011254.

Abstract:
An unprecedented boom has been witnessed in the research area of artistic style transfer ever since Gatys et al. introduced the neural method. One of the remaining challenges is to balance a trade-off among three critical aspects (speed, flexibility, and quality): (i) the vanilla optimization-based algorithm produces impressive results for arbitrary styles but is unsatisfyingly slow due to its iterative nature, (ii) fast approximation methods based on feed-forward neural networks generate satisfactory artistic effects but are bound to only a limited number of styles, and (iii) feature-matching methods like AdaIN achieve arbitrary style transfer in real time but at the cost of compromised quality. We find it considerably difficult to balance the trade-off well using merely a single feed-forward step and ask, instead, whether there exists an algorithm that could adapt quickly to any style, while the adapted model maintains high efficiency and good image quality. Motivated by this idea, we propose a novel method, coined MetaStyle, which formulates neural style transfer as a bilevel optimization problem and combines learning with only a few post-processing update steps to adapt to a fast approximation model whose artistic effects are comparable to those of optimization-based methods for an arbitrary style. The qualitative and quantitative analysis in the experiments demonstrates that the proposed approach achieves high-quality arbitrary artistic style transfer effectively, with a good trade-off among speed, flexibility, and quality.
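The AdaIN-style feature matching that the abstract contrasts with the other two approaches reduces to a channel-wise statistic swap on encoder features. A minimal NumPy sketch (the (C, H, W) array layout is an assumption; in practice the inputs are pretrained-encoder activations):

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization: re-scale each channel of the content
    features so its mean/std match those of the style features.
    Both inputs are (C, H, W) activation arrays."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

Because this is a single closed-form transform rather than an optimization loop, it runs in real time, which is exactly the speed/quality trade-off the abstract describes.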
5

Han, Xinying, Yang Wu, and Rui Wan. "A Method for Style Transfer from Artistic Images Based on Depth Extraction Generative Adversarial Network." Applied Sciences 13, no. 2 (January 8, 2023): 867. http://dx.doi.org/10.3390/app13020867.

Abstract:
Depth extraction generative adversarial network (DE-GAN) is designed for artistic work style transfer. Traditional style transfer models focus on extracting texture features and color features from style images through an autoencoding network by mixing texture features and color features using high-dimensional coding. In the aesthetics of artworks, the color, texture, shape, and spatial features of the artistic object together constitute the artistic style of the work. In this paper, we propose a multi-feature extractor to extract color features, texture features, depth features, and shape masks from style images with U-net, multi-factor extractor, fast Fourier transform, and MiDas depth estimation network. At the same time, a self-encoder structure is used as the content extraction network core to generate a network that shares style parameters with the feature extraction network and finally realizes the generation of artwork images in three-dimensional artistic styles. The experimental analysis shows that compared with other advanced methods, DE-GAN-generated images have higher subjective image quality, and the generated style pictures are more consistent with the aesthetic characteristics of real works of art. The quantitative data analysis shows that images generated using the DE-GAN method have better performance in terms of structural features, image distortion, image clarity, and texture details.
6

Ruder, Manuel, Alexey Dosovitskiy, and Thomas Brox. "Artistic Style Transfer for Videos and Spherical Images." International Journal of Computer Vision 126, no. 11 (April 21, 2018): 1199–219. http://dx.doi.org/10.1007/s11263-018-1089-z.

7

Banar, Nikolay, Matthia Sabatelli, Pierre Geurts, Walter Daelemans, and Mike Kestemont. "Transfer Learning with Style Transfer between the Photorealistic and Artistic Domain." Electronic Imaging 2021, no. 14 (January 18, 2021): 41–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.14.cvaa-041.

Abstract:
Transfer Learning is an important strategy in Computer Vision to tackle problems in the face of limited training data. However, this strategy still heavily depends on the amount of available data, which is a challenge for small heritage institutions. This paper investigates various ways of enriching smaller digital heritage collections to boost the performance of deep learning models, using the identification of musical instruments as a case study. We apply traditional data augmentation techniques as well as the use of an external, photorealistic collection, distorted by Style Transfer. Style Transfer techniques are capable of artistically stylizing images, reusing the style from any other given image. Hence, collections can be easily augmented with artificially generated images. We introduce the distinction between inner and outer style transfer and show that artificially augmented images in both scenarios consistently improve classification results, on top of traditional data augmentation techniques. However, and counter-intuitively, such artificially generated artistic depictions of works are surprisingly hard to classify. In addition, we discuss an example of negative transfer within the non-photorealistic domain.
8

Hien, Ngo Le Huy, Luu Van Huy, and Nguyen Van Hieu. "Artwork style transfer model using deep learning approach." Cybernetics and Physics 10, no. 3 (October 30, 2021): 127–37. http://dx.doi.org/10.35470/2226-4116-2021-10-3-127-137.

Abstract:
Art in general, and fine arts in particular, play a significant role in human life, entertaining people, dispelling stress, and motivating their creativity in specific ways. Many well-known artists have left a rich treasure of paintings for humanity, preserving their exquisite talent and creativity through unique artistic styles. In recent years, a technique called ’style transfer’ has allowed computers to apply famous artistic styles to a picture or photograph while retaining the shape of the image, creating superior visual experiences. The basic model of that process, named ’Neural Style Transfer,’ was introduced promisingly by Leon A. Gatys; however, it has several limitations in output quality and execution time, making it challenging to apply in practice. Based on that model, an image transform network is proposed in this paper to generate higher-quality artwork and to scale to larger numbers of images. The proposed model significantly shortens the execution time and can be used in real-time applications, providing promising results and performance. The outcomes are auspicious and can serve as a reference model in color grading or semantic image segmentation, and future research will focus on improving its applications.
9

Lang, Langtian. "Style transfer with VGG19." Applied and Computational Engineering 6, no. 1 (June 14, 2023): 149–58. http://dx.doi.org/10.54254/2755-2721/6/20230752.

Abstract:
Style transfer is a widely used technique in image and photograph processing that transfers the style of one image onto a target image with different content. This technique has been used in the algorithms of some image processing software as well as in modern artistic creation. However, the intrinsic principles of style transfer and its transfer accuracy are still neither well understood nor stable. This article discusses a new method for preprocessing image data that uses feature extraction to form vector fields and employs multiple VGG19 networks to separately train on the distinct features in images, achieving better predictive performance. Our model can generate more autonomous and original images, rather than simply adding a style filter to the image, which can help the development of AI style transfer and painting.
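VGG19-based style transfer is typically driven by a Gram-matrix style loss over encoder activations. A minimal NumPy sketch, with arrays standing in for actual VGG19 feature maps (the normalization constant varies across implementations; this one is an assumption):

```python
import numpy as np

def gram(feat):
    """Gram matrix of a (C, H, W) activation map, normalized by C*H*W.
    Entry (a, b) measures how often channels a and b co-activate, which
    captures texture statistics independent of spatial layout."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    """Gatys-style loss: sum of squared Gram differences over a set of layers."""
    return sum(np.sum((gram(g) - gram(s)) ** 2)
               for g, s in zip(gen_feats, style_feats))
```

In a full pipeline this loss is summed over several VGG19 layers and combined with a content loss on deeper activations, then minimized with respect to the generated image or a transform network's weights.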
10

Dinesh Kumar, R., E. Golden Julie, Y. Harold Robinson, S. Vimal, Gaurav Dhiman, and Murugesh Veerasamy. "Deep Convolutional Nets Learning Classification for Artistic Style Transfer." Scientific Programming 2022 (January 10, 2022): 1–9. http://dx.doi.org/10.1155/2022/2038740.

Abstract:
Humans have mastered the skill of creativity for many decades. Recently, this mechanism has been replicated using neural networks, which mimic the functioning of the human brain: each unit in the network represents a neuron that transmits messages to other neurons to perform subconscious tasks. There are methods to render an input image in the style of famous artworks; this problem of generating art is commonly called non-photorealistic rendering. Previous approaches relied on directly manipulating the pixel representation of the image. Using deep neural networks built for image recognition, this paper instead operates in a feature space representing the higher-level content of the image. Previously, deep neural networks were used for object recognition and style recognition to categorize artworks by their creation time. This paper uses a Visual Geometry Group (VGG16) neural network to replicate this task. Two images are given as input: a content image, containing the features to retain in the output, and a style reference image, containing the patterns of a famous painting. The two are blended to produce a new image in which the input is transformed to preserve the content image but is “sketched” to look like the style image.
11

WANG, Shenzhi, and Qiu CHEN. "Artistic Image Style Transfer Based on Laplacian Pyramid Network." International Symposium on Affective Science and Engineering ISASE2023 (2023): 1–4. http://dx.doi.org/10.5057/isase.2023-c000038.

12

Lin, Minxuan, Fan Tang, Weiming Dong, Xiao Li, Changsheng Xu, and Chongyang Ma. "Distribution Aligned Multimodal and Multi-domain Image Stylization." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 3 (July 22, 2021): 1–17. http://dx.doi.org/10.1145/3450525.

Abstract:
Multimodal and multi-domain stylization are two important problems in the field of image style transfer. Currently, there are few methods that can perform multimodal and multi-domain stylization simultaneously. In this study, we propose a unified framework for multimodal and multi-domain style transfer with the support of both exemplar-based reference and randomly sampled guidance. The key component of our method is a novel style distribution alignment module that eliminates the explicit distribution gaps between various style domains and reduces the risk of mode collapse. The multimodal diversity is ensured by either guidance from multiple images or random style codes, while the multi-domain controllability is directly achieved by using a domain label. We validate our proposed framework on painting style transfer with various artistic styles and genres. Qualitative and quantitative comparisons with state-of-the-art methods demonstrate that our method can generate high-quality results of multi-domain styles and multimodal instances from reference style guidance or a random sampled style.
13

Ioannou, Eleftherios, and Steve Maddock. "Depth-Aware Neural Style Transfer for Videos." Computers 12, no. 4 (March 27, 2023): 69. http://dx.doi.org/10.3390/computers12040069.

Abstract:
Temporal consistency and content preservation are the prominent challenges in artistic video style transfer. To address these challenges, we present a technique that utilizes depth data and we demonstrate this on real-world videos from the web, as well as on a standard video dataset of three-dimensional computer-generated content. Our algorithm employs an image-transformation network combined with a depth encoder network for stylizing video sequences. For improved global structure preservation and temporal stability, the depth encoder network encodes ground-truth depth information which is fused into the stylization network. To further enforce temporal coherence, we employ ConvLSTM layers in the encoder, and a loss function based on calculated depth information for the output frames is also used. We show that our approach is capable of producing stylized videos with improved temporal consistency compared to state-of-the-art methods whilst also successfully transferring the artistic style of a target painting.
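The two auxiliary penalties the abstract mentions, a depth loss on the output frames and temporal coherence between consecutive stylized frames, can be sketched as simple per-pixel losses. This is a hedged illustration of the general idea from the video style transfer literature; the paper's exact formulation and weighting are not given here.

```python
import numpy as np

def depth_loss(stylized_depth, gt_depth):
    """Mean squared error between the depth estimated for a stylized frame
    and the ground-truth depth of the corresponding input frame."""
    return np.mean((stylized_depth - gt_depth) ** 2)

def temporal_loss(frame_t, warped_prev, mask):
    """Penalize deviation between frame t and the motion-compensated previous
    stylized frame; mask (1 = valid flow, 0 = occluded) disables pixels where
    the warp is unreliable."""
    return np.sum(mask * (frame_t - warped_prev) ** 2) / max(mask.sum(), 1)
```

During training both terms would be added to the usual content and style losses, so the stylization network is rewarded for keeping depth structure and for producing frames that agree with their flow-warped predecessors.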
14

Fu, Xuhui. "Digital Image Art Style Transfer Algorithm Based on CycleGAN." Computational Intelligence and Neuroscience 2022 (January 13, 2022): 1–10. http://dx.doi.org/10.1155/2022/6075398.

Abstract:
With the continuous development and popularization of artificial intelligence technology in recent years, the field of deep learning has also developed rapidly. Deep learning has attracted attention in image detection, image recognition, image recoloring, and image artistic style transfer, and several artistic style transfer techniques with deep learning at their core are widely used. This article presents an image artistic style transfer algorithm based on a generative adversarial network (GAN) to quickly realize artistic style transfer. The generative network replaces the traditional deconvolution operation by resizing the image and then convolving, uses a content encoder and a style encoder to encode the content and style of the selected images, and extracts content and style features. To enhance the effect of artistic style transfer, a multi-scale discriminator is used to assess the generated images. The experimental results show that this algorithm is effective and has great application and promotion value.
15

Lin, Chiu-Chin, Chih-Bin Hsu, Jen-Chun Lee, Chung-Hsien Chen, Te-Ming Tu, and Huang-Chu Huang. "A Variety of Choice Methods for Image-Based Artistic Rendering." Applied Sciences 12, no. 13 (July 2, 2022): 6710. http://dx.doi.org/10.3390/app12136710.

Abstract:
Neural style transfer (NST) is a technique based on the deep learning of a convolutional neural network (CNN) to create entertaining pictures by cleverly stylizing ordinary pictures with the predetermined visual art style. However, three issues must be carefully investigated during the generation of neural-stylized artwork: the color scheme, the strength of style of the strokes, and the adjustment of image contrast. To solve these problems and select image colorization based on personal preference, in this paper, we propose modified universal-style transfer (UST) method combined with the image fusion and color enhancement methods to design a good post-processing framework to tackle the three above-mentioned issues simultaneously. This work provides more visual effects for stylized images, and also can integrate into the UST method effectively. In addition, the proposed method is suitable for stylized images generated by any NST method, but it also works similarly to the Multi-Style Transfer (MST) method, which mixes two different stylized images. Finally, our proposed method successfully combined the modified UST method and post-processing method to meet personal preference.
16

Wang, Zhizhong, Lei Zhao, Zhiwen Zuo, Ailin Li, Haibo Chen, Wei Xing, and Dongming Lu. "MicroAST: Towards Super-fast Ultra-Resolution Arbitrary Style Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 2742–50. http://dx.doi.org/10.1609/aaai.v37i3.25374.

Abstract:
Arbitrary style transfer (AST) transfers arbitrary artistic styles onto content images. Despite the recent rapid progress, existing AST methods are either incapable or too slow to run at ultra-resolutions (e.g., 4K) with limited resources, which heavily hinders their further applications. In this paper, we tackle this dilemma by learning a straightforward and lightweight model, dubbed MicroAST. The key insight is to completely abandon the use of cumbersome pre-trained Deep Convolutional Neural Networks (e.g., VGG) at inference. Instead, we design two micro encoders (content and style encoders) and one micro decoder for style transfer. The content encoder aims at extracting the main structure of the content image. The style encoder, coupled with a modulator, encodes the style image into learnable dual-modulation signals that modulate both intermediate features and convolutional filters of the decoder, thus injecting more sophisticated and flexible style signals to guide the stylizations. In addition, to boost the ability of the style encoder to extract more distinct and representative style signals, we also introduce a new style signal contrastive loss in our model. Compared to the state of the art, our MicroAST not only produces visually superior results but also is 5-73 times smaller and 6-18 times faster, for the first time enabling super-fast (about 0.5 seconds) AST at 4K ultra-resolutions.
17

Kang, Dongwann, Feng Tian, and Sanghyun Seo. "Perceptually inspired real-time artistic style transfer for video stream." Journal of Real-Time Image Processing 13, no. 3 (June 27, 2016): 581–89. http://dx.doi.org/10.1007/s11554-016-0612-0.

18

Alexandru, Ioana, Constantin Nicula, Cristian Prodan, Răzvan-Paul Rotaru, Mihai-Lucian Voncilă, Nicolae Tarbă, and Costin-Anton Boiangiu. "Image Style Transfer via Multi-Style Geometry Warping." Applied Sciences 12, no. 12 (June 14, 2022): 6055. http://dx.doi.org/10.3390/app12126055.

Abstract:
Style transfer of an image has been receiving attention from the scientific community since its inception in 2015. This topic is characterized by an accelerated process of innovation; it has been defined by techniques that blend content and style, first covering only textural details and subsequently incorporating compositional features. The results of such techniques have had a significant impact on our understanding of the inner workings of Convolutional Neural Networks. Recent research has shown an increasing interest in the geometric deformation of images, since it is a defining trait for different artists and various art styles that previous methods failed to account for. However, current approaches are limited to matching class deformations in order to obtain adequate outputs. This paper addresses these limitations by combining previous works in a framework that can perform geometric deformation on images using different styles from multiple artists, building an architecture that takes multiple style images and one content image as input. The proposed framework combines various existing frameworks to obtain a more intriguing artistic result. It first detects objects of interest from various classes inside the image and assigns each a bounding box, then matches each detected object with a similar style image and warps it on the basis of these similarities. Next, the algorithm blends the warped images back together so that they are placed in a position similar to the initial image, and style transfer is finally applied between the merged warped images and a different chosen image. We obtain stylistically pleasing results that can be generated in a reasonable amount of time compared to other existing methods.
19

Choi, Hyun-Chul. "Toward Exploiting Second-Order Feature Statistics for Arbitrary Image Style Transfer." Sensors 22, no. 7 (March 29, 2022): 2611. http://dx.doi.org/10.3390/s22072611.

Abstract:
Generating images of artistic style from input images, also known as image style transfer, has been improved in the quality of output style and the speed of image generation since deep neural networks have been applied in the field of computer vision research. However, the previous approaches used feature alignment techniques that were too simple in their transform layer to cover the characteristics of style features of images. In addition, they used an inconsistent combination of transform layers and loss functions in the training phase to embed arbitrary styles in a decoder network. To overcome these shortcomings, the second-order statistics of the encoded features are exploited to build an optimal arbitrary image style transfer technique. First, a new correlation-aware loss and a correlation-aware feature alignment technique are proposed. Using this consistent combination of loss and feature alignment methods strongly matches the second-order statistics of content features to those of the target-style features and, accordingly, the style capacity of the decoder network is increased. Secondly, a new component-wise style controlling method is proposed. This method can generate various styles from one or several style images by using style-specific components from second-order feature statistics. We experimentally prove that the proposed method achieves improvements in both the style capacity of the decoder network and the style variety without losing the ability of real-time processing (less than 200 ms) on Graphics Processing Unit (GPU) devices.
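Matching second-order feature statistics, as in the correlation-aware alignment described above, amounts to a whitening-and-coloring transform on encoder features. A NumPy sketch under the assumption that activations are flattened to a (channels, positions) matrix (this illustrates the general transform, not the paper's specific loss):

```python
import numpy as np

def coloring_transform(content, style, eps=1e-5):
    """Whitening-and-coloring: remap content features so their mean and
    covariance match those of the style features.
    content, style: (C, N) matrices with N spatial samples per channel."""
    cc = content - content.mean(axis=1, keepdims=True)
    cs = style - style.mean(axis=1, keepdims=True)
    # whitening matrix: Cov(content)^(-1/2) via eigendecomposition
    wc, vc = np.linalg.eigh(cc @ cc.T / (cc.shape[1] - 1))
    whiten = vc @ np.diag((wc + eps) ** -0.5) @ vc.T
    # coloring matrix: Cov(style)^(+1/2)
    ws, vs = np.linalg.eigh(cs @ cs.T / (cs.shape[1] - 1))
    color = vs @ np.diag((ws + eps) ** 0.5) @ vs.T
    return color @ (whiten @ cc) + style.mean(axis=1, keepdims=True)
```

Unlike AdaIN's per-channel mean/std swap, this transform also matches cross-channel correlations, which is the extra expressive power second-order alignment buys.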
20

Wang, Meng, Yixuan Shao, and Haipeng Liu. "APST-Flow: A Reversible Network-Based Artistic Painting Style Transfer Method." Computers, Materials & Continua 75, no. 3 (2023): 5229–54. http://dx.doi.org/10.32604/cmc.2023.036631.

21

Cui, Xin, Meng Qi, Yi Niu, and Bingxin Li. "The Intra-Class and Inter-Class Relationships in Style Transfer." Applied Sciences 8, no. 9 (September 17, 2018): 1681. http://dx.doi.org/10.3390/app8091681.

Abstract:
Neural style transfer, which has attracted great attention in both academic research and industrial engineering and demonstrated very exciting and remarkable results, is the technique of migrating the semantic content of one image to different artistic styles by using convolutional neural network (CNN). Recently, the Gram matrices used in the original and follow-up studies for style transfer were theoretically shown to be equivalent to minimizing a specific Maximum Mean Discrepancy (MMD). Since the Gram matrices are not a must for style transfer, how to design the proper process for aligning the neural activation between images to perform style transfer is an important problem. After careful analysis of some different algorithms for style loss construction, we discovered that some algorithms consider the relationships between different feature maps of a layer obtained from the CNN (inter-class relationships), while some do not (intra-class relationships). Surprisingly, the latter often show more details and finer strokes in the results. To further support our standpoint, we propose two new methods to perform style transfer: one takes inter-class relationships into account and the other does not, and conduct comparative experiments with existing methods. The experimental results verified our observation. Our proposed methods can achieve comparable perceptual quality yet with a lower complexity. We believe that our interpretation provides an effective design basis for designing style loss function for style transfer methods with different visual effects.
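The Gram/MMD equivalence mentioned in the abstract can be checked numerically: for (C, M) feature matrices, the squared Frobenius distance between Gram matrices equals M² times the biased squared MMD with the second-order polynomial kernel k(x, y) = (xᵀy)², treating each of the M spatial positions as a sample. A small NumPy sketch (unnormalized Gram matrices; scaling conventions differ across papers):

```python
import numpy as np

def gram_loss(fx, fy):
    """Squared Frobenius distance between Gram matrices of (C, M) features."""
    gx, gy = fx @ fx.T, fy @ fy.T
    return np.sum((gx - gy) ** 2)

def mmd2_poly(fx, fy):
    """Biased squared MMD with the 2nd-order polynomial kernel k(x, y) = (x.T y)^2,
    where each column (spatial position) of the feature matrix is one sample."""
    kxx = (fx.T @ fx) ** 2
    kyy = (fy.T @ fy) ** 2
    kxy = (fx.T @ fy) ** 2
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()
```

This identity is why the Gram matrix is "not a must": any kernel in the MMD defines a valid style loss, and different kernels correspond to the different neural-activation alignments the abstract compares.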
22

RAMA PRASAD, V. V., and G. VINEELA RATNA. "DEEP LEARNING TECHNIQUE FOR TRANSFER OF ARTISTIC STYLE TO IMAGES AND VIDEOS." i-manager’s Journal on Image Processing 7, no. 2 (2020): 13. http://dx.doi.org/10.26634/jip.7.2.17555.

23

Sun, Yikang, Yanru Lyu, Po-Hsien Lin, and Rungtai Lin. "Comparison of Cognitive Differences of Artworks between Artist and Artistic Style Transfer." Applied Sciences 12, no. 11 (May 29, 2022): 5525. http://dx.doi.org/10.3390/app12115525.

Abstract:
This study explores how audiences responded to perceiving and distinguishing the paintings created by AI or human artists. The stimuli were six paintings which were completed by AI and human artists. A total of 750 subjects participated to identify which ones were completed by human artists or by AI. Results revealed that most participants could correctly distinguish between paintings made by AI or human artists and that accuracy was higher for those who used “intuition” as the criterion for judgment. The participants preferred the paintings created by human artists. Furthermore, there were big differences in the perception of the denotation and connotation of paintings between audiences of different backgrounds. The reasons for this will be analyzed in subsequent research.
24

Wang, Quan, Sheng Li, Xinpeng Zhang, and Guorui Feng. "Multi-granularity Brushstrokes Network for Universal Style Transfer." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (November 30, 2022): 1–17. http://dx.doi.org/10.1145/3506710.

Full text of the source
Abstract:
Neural style transfer has been developed in recent years, and both performance and efficiency have been greatly improved. However, most existing methods do not transfer the brushstroke information of style images well. In this article, we address this issue by training a multi-granularity brushstrokes network based on a parallel coding structure. Specifically, we first adopt the content parsing module to obtain the spatial distribution of the content image and the smoothness of different regions. Then, different brushstroke features are transformed by a multi-granularity style-swap module guided by the region content map. Finally, the stylized features of the two branches are fused to enhance the stylized results. The multi-granularity brushstrokes network is jointly supervised by a new multi-layer brushstroke loss and pre-existing losses. The proposed method is close to the artistic drawing process. In addition, we can control whether the color of the stylized results tends toward the style image or the content image. Experimental results demonstrate the advantage of our proposed method compared with existing schemes.
APA, Harvard, Vancouver, ISO, and other styles
25

Yu, Keyi, Yu Wang, Sihan Zeng, Chen Liang, Xiaoyu Bai, Dachi Chen, and Wenping Wang. "InkGAN: Generative Adversarial Networks for Ink-And-Wash Style Transfer of Photographs." Advances in Artificial Intelligence and Machine Learning 03, no. 02 (2023): 1220–33. http://dx.doi.org/10.54364/aaiml.2023.1171.

Full text of the source
Abstract:
In this work, we present a novel approach for Chinese Ink-and-Wash style transfer using a GAN structure. The proposed method incorporates a specially designed smooth loss tailored for this style transfer task, and an end-to-end framework that seamlessly integrates various components for efficient and effective image style transfer. To demonstrate the superiority of our approach, comparative results against other popular style transfer methods such as CycleGAN are presented. The experimentation showcased the notable improvements achieved with our proposed method in terms of preserving intricate details and capturing the essence of the Chinese Ink-and-Wash style. Furthermore, an ablation study is conducted to evaluate the effectiveness of each loss component in our framework. We conclude by anticipating that our findings will inspire further advancements in this domain and foster new avenues for artistic expression in the digital realm.
APA, Harvard, Vancouver, ISO, and other styles
26

Liu, Shan, Yun Bo, and Lingling Huang. "Application of Image Style Transfer Technology in Interior Decoration Design Based on Ecological Environment." Journal of Sensors 2021 (November 25, 2021): 1–7. http://dx.doi.org/10.1155/2021/9699110.

Full text of the source
Abstract:
With the further development of the social economy, people pay more attention to spiritual and cultural needs. As the main place of people’s daily life, the family is very important to the creation of its cultural atmosphere. In fact, China has fully entered the era of interior decoration, and people are paying more and more attention to decorative effects and to the comfort and individual character of decoration. Therefore, it is of practical significance to develop the application of decorative art in interior space design. However, current interior decoration designs produced by style transfer tend to be overly artistic, which leads to image distortion, and content transfer errors easily occur during the transfer process. Applying image style transfer to interior decoration art can effectively solve such problems. This paper analyzes the basic theory of image style transfer through style transfer technology, the Gram matrix, and Poisson image editing, and designs the transfer process from several aspects, such as image segmentation, content loss, enhanced style loss, and constraining the image spatial gradient with Poisson image editing, thereby realizing the application of image style transfer in interior decoration art. The experimental results show that applying image style transfer to interior decoration design can effectively avoid content errors and distortion and achieves a good style transfer effect.
APA, Harvard, Vancouver, ISO, and other styles
27

Yang, Shuai, Liming Jiang, Ziwei Liu, and Chen Change Loy. "VToonify." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–15. http://dx.doi.org/10.1145/3550454.3555437.

Full text of the source
Abstract:
Generating high-quality artistic portrait videos is an important and desirable task in computer graphics and vision. Although a series of successful portrait image toonification models built upon the powerful StyleGAN have been proposed, these image-oriented methods have obvious limitations when applied to videos, such as the fixed frame size, the requirement of face alignment, missing non-facial details and temporal inconsistency. In this work, we investigate the challenging controllable high-resolution portrait video style transfer by introducing a novel VToonify framework. Specifically, VToonify leverages the mid- and high-resolution layers of StyleGAN to render high-quality artistic portraits based on the multi-scale content features extracted by an encoder to better preserve the frame details. The resulting fully convolutional architecture accepts non-aligned faces in videos of variable size as input, contributing to complete face regions with natural motions in the output. Our framework is compatible with existing StyleGAN-based image toonification models to extend them to video toonification, and inherits appealing features of these models for flexible style control on color and intensity. This work presents two instantiations of VToonify built upon Toonify and DualStyleGAN for collection-based and exemplar-based portrait video style transfer, respectively. Extensive experimental results demonstrate the effectiveness of our proposed VToonify framework over existing methods in generating high-quality and temporally-coherent artistic portrait videos with flexible style controls. Code and pretrained models are available at our project page: www.mmlab-ntu.com/project/vtoonify/.
APA, Harvard, Vancouver, ISO, and other styles
28

Shi, Jinglun. "Artificial Intelligence for Art Creation with Image Style." Highlights in Science, Engineering and Technology 44 (April 13, 2023): 67–74. http://dx.doi.org/10.54097/hset.v44i.7198.

Full text of the source
Abstract:
Artificial intelligence (AI) has become a great success in the past decade. Powered by efficient hardware and neural network models, AI has brought wide-ranging change to almost all disciplines. While still at an early stage, AI has been used in the art field much more frequently than before, with various methods, e.g., generative adversarial networks. However, whether AI can replace human beings in art creation is still an unsolved question. Therefore, how to use AI technology to achieve artistic creation has become very meaningful and has been favored by many research groups. In this paper, we demonstrate the ability of AI for art creation with a case study of image style transfer. The results show that AI is still not satisfactory and consumes too much computation. However, a new creative idea is proposed, and we will conduct lightweight research on this basis in the future, so as to realize artistic creation by real artificial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
29

Karimov, Artur, Ekaterina Kopets, Tatiana Shpilevaya, Evgenii Katser, Sergey Leonov, and Denis Butusov. "Comparing Neural Style Transfer and Gradient-Based Algorithms in Brushstroke Rendering Tasks." Mathematics 11, no. 10 (May 11, 2023): 2255. http://dx.doi.org/10.3390/math11102255.

Full text of the source
Abstract:
Non-photorealistic rendering (NPR) with explicit brushstroke representation is essential for both high-grade imitating of artistic paintings and generating commands for artistically skilled robots. Some algorithms for this purpose have been recently developed based on simple heuristics, e.g., using an image gradient for driving brushstroke orientation. The notable drawback of such algorithms is the impossibility of automatic learning to reproduce an individual artist’s style. In contrast, popular neural style transfer (NST) algorithms are aimed at this goal by their design. The question arises: how good is the performance of neural style transfer methods in comparison with the heuristic approaches? To answer this question, we develop a novel method for experimentally quantifying brushstroke rendering algorithms. This method is based on correlation analysis applied to histograms of six brushstroke parameters: length, orientation, straightness, number of neighboring brushstrokes (NBS-NB), number of brushstrokes with similar orientations in the neighborhood (NBS-SO), and orientation standard deviation in the neighborhood (OSD-NB). This method numerically captures similarities and differences in the distributions of brushstroke parameters and allows comparison of two NPR algorithms. We perform an investigation of the brushstrokes generated by the heuristic algorithm and the NST algorithm. The results imply that while the neural style transfer and the heuristic algorithms give rather different parameter histograms, their capabilities of mimicking individual artistic manner are limited comparably. A direct comparison of NBS-NB histograms of brushstrokes generated by these algorithms and of brushstrokes extracted from a real painting confirms this finding.
APA, Harvard, Vancouver, ISO, and other styles
30

Liu, Kunxiao, Guowu Yuan, Hongyu Liu, and Hao Wu. "Multiscale style transfer based on a Laplacian pyramid for traditional Chinese painting." Electronic Research Archive 31, no. 4 (2023): 1897–921. http://dx.doi.org/10.3934/era.2023098.

Full text of the source
Abstract:
Style transfer is adopted to synthesize appealing stylized images that preserve the structure of a content image but carry the pattern of a style image. Many recently proposed style transfer methods use only western oil paintings as style images to achieve image stylization. As a result, unnatural messy artistic effects are produced in stylized images when using these methods to directly transfer the patterns of traditional Chinese paintings, which are composed of plain colors and abstract objects. Moreover, most of them work only at the original image scale and thus ignore multiscale image information during training. In this paper, we present a novel effective multiscale style transfer method based on Laplacian pyramid decomposition and reconstruction, which can transfer unique patterns of Chinese paintings by learning different image features at different scales. In the first stage, the holistic patterns are transferred at low resolution by adopting a Style Transfer Base Network. Then, the details of the content and style are gradually enhanced at higher resolutions by a Detail Enhancement Network with an edge information selection (EIS) module in the second stage. The effectiveness of our method is demonstrated through the generation of appealing high-quality stylization results and a comparison with some state-of-the-art style transfer methods. Datasets and codes are available at https://github.com/toby-katakuri/LP_StyleTransferNet.
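The Laplacian-pyramid decomposition and reconstruction that this method builds on can be sketched in a few lines. The following is a minimal NumPy illustration (not the authors' implementation; it uses simple box-filter downsampling and nearest-neighbour upsampling in place of the usual Gaussian kernels):

```python
import numpy as np

def downsample(img):
    """Halve resolution by averaging 2x2 blocks (a box-filter stand-in
    for the Gaussian blur + subsample of a classical pyramid)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Double resolution by nearest-neighbour repetition."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Decompose img into `levels` band-pass detail images plus a
    low-resolution residual (last element of the returned list)."""
    pyramid, current = [], img
    for _ in range(levels):
        low = downsample(current)
        pyramid.append(current - upsample(low))  # detail lost at this scale
        current = low
    pyramid.append(current)  # coarsest level: holistic structure
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition exactly: upsample and add back details."""
    img = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        img = upsample(img) + detail
    return img
```

Each detail level carries band-pass information at one scale, which is what lets a method of this kind transfer holistic patterns at low resolution first and then enhance content and style details at higher resolutions.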
APA, Harvard, Vancouver, ISO, and other styles
31

Yang, Fu Wen, Hwei Jen Lin, Shwu-Huey Yen, and Chun-Hui Wang. "A Study on the Convolutional Neural Algorithm of Image Style Transfer." International Journal of Pattern Recognition and Artificial Intelligence 33, no. 05 (April 8, 2019): 1954020. http://dx.doi.org/10.1142/s021800141954020x.

Full text of the source
Abstract:
Recently, deep convolutional neural networks have resulted in noticeable improvements in image classification and have been used to transfer artistic style of images. Gatys et al. proposed the use of a learned Convolutional Neural Network (CNN) architecture VGG to transfer image style, but problems occur during the back propagation process because there is a heavy computational load. This paper solves these problems, including the simplification of the computation of chains of derivatives, accelerating the computation of adjustments, and efficiently choosing weights for different energy functions. The experimental results show that the proposed solutions improve the computational efficiency and render the adjustment of weights for energy functions easier.
APA, Harvard, Vancouver, ISO, and other styles
32

Russell, Will G., and Michelle Hegmon. "Identifying Mimbres Artists." Advances in Archaeological Practice 3, no. 4 (November 2015): 358–77. http://dx.doi.org/10.7183/2326-3768.3.4.358.

Full text of the source
Abstract:
Past researchers have identified individual styles of painting in Mimbres Black-on-white bowls, leading Steven LeBlanc to recently call for the development of quantitative methods to enable and assess such identifications. We propose such a methodology here. Through a process of pair-wise, micro-stylistic comparisons, bowls painted by a single artist or group of closely cooperating artists are analytically linked in chain-like fashion. Two bowls are attributed to the “same hands” if their similarity measure is at or above 70 percent. Similarity measures are determined by comparing minute details that reflect artistic decisions. The method takes into account diachronic development of artistic skill, subject matter diversity, and the transfer of style across generations. Results can contribute to an understanding of stylistic development, craft specialization, and the role of artists in traditional societies.
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Xiahan. "An Improved Particle Swarm Optimization-Powered Adaptive Classification and Migration Visualization for Music Style." Complexity 2021 (April 13, 2021): 1–10. http://dx.doi.org/10.1155/2021/5515095.

Full text of the source
Abstract:
Based on the adaptive particle swarm algorithm and error backpropagation neural network, this paper proposes methods for different styles of music classification and migration visualization. This method has the advantages of simple structure, mature algorithm, and accurate optimization. It can find better network weights and thresholds so that particles can jump out of the local optimal solutions previously searched and search in a larger space. The global search uses the gradient method to accelerate the optimization and control the real-time generation effect of the music style transfer, thereby improving the learning performance and convergence performance of the entire network, ultimately improving the recognition rate of the entire system, and visualizing the musical perception. This kind of real-time information visualization is an artistic expression form, in which artificial intelligence imitates human synesthesia, and it is also a kind of performance art. Combining traditional music visualization and image style transfer adds specific content expression to music visualization and time sequence expression to image style transfer. This visual effect can help users generate unique and personalized portraits with music; it can also be widely used by artists to express the relationship between music and vision. The simulation results show that the method has better classification performance and has certain practical significance and reference value.
APA, Harvard, Vancouver, ISO, and other styles
34

Li, Ke, Degang Yang, and Yan Ma. "Image Style Transfer Based on Dynamic Convolutional Manifold Alignment of Halo Attention." Electronics 12, no. 8 (April 16, 2023): 1881. http://dx.doi.org/10.3390/electronics12081881.

Full text of the source
Abstract:
The objective of image style transfer is to render an image with artistic features of a style reference while preserving the details of the content image. With the development of deep learning, many arbitrary style transfer methods have emerged. From the recent arbitrary style transfer algorithms, it has been found that the images generated suffer from the problem of poorly stylized quality. To solve this problem, we propose an arbitrary style transfer algorithm based on halo attention dynamic convolutional manifold alignment. First, the features of the content image and style image are extracted by a pre-trained VGG encoder. Then, the features are extracted by halo attention and dynamic convolution, and then the content feature space and style feature space are aligned by attention operations and spatial perception interpolation. The output is achieved through dynamic convolution and halo attention. During this process, multi-level loss functions are used, and total variation loss is introduced to eliminate noise. The manifold alignment process is then repeated three times. Finally, the pre-trained VGG decoder is used to output the stylized image. The experimental results show that our proposed method can generate high-quality stylized images, achieving values of 33.861, 2.516, and 3.602 for ArtFID, style loss, and content loss, respectively. A qualitative comparison with existing algorithms showed that it achieved good results. In future work, we will aim to make the model lightweight.
APA, Harvard, Vancouver, ISO, and other styles
35

Lin, Tianwei, Honglin Lin, Fu Li, Dongliang He, Wenhao Wu, Meiling Wang, Xin Li, and Yong Liu. "AdaCM: Adaptive ColorMLP for Real-Time Universal Photo-Realistic Style Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1613–21. http://dx.doi.org/10.1609/aaai.v37i2.25248.

Full text of the source
Abstract:
Photo-realistic style transfer aims at migrating the artistic style from an exemplar style image to a content image, producing a result image without spatial distortions or unrealistic artifacts. Impressive results have been achieved by recent deep models. However, deep neural network based methods are too expensive to run in real-time. Meanwhile, bilateral grid based methods are much faster but still contain artifacts like overexposure. In this work, we propose the Adaptive ColorMLP (AdaCM), an effective and efficient framework for universal photo-realistic style transfer. First, we find the complex non-linear color mapping between input and target domain can be efficiently modeled by a small multi-layer perceptron (ColorMLP) model. Then, in AdaCM, we adopt a CNN encoder to adaptively predict all parameters for the ColorMLP conditioned on each input content and style image pair. Experimental results demonstrate that AdaCM can generate vivid and high-quality stylization results. Meanwhile, our AdaCM is ultrafast and can process a 4K resolution image in 6ms on one V100 GPU.
APA, Harvard, Vancouver, ISO, and other styles
36

Gardini, E., M. J. Ferrarotti, A. Cavalli, and S. Decherchi. "Using Principal Paths to Walk Through Music and Visual Art Style Spaces Induced by Convolutional Neural Networks." Cognitive Computation 13, no. 2 (February 1, 2021): 570–82. http://dx.doi.org/10.1007/s12559-021-09823-y.

Full text of the source
Abstract:
Computational intelligence, particularly deep learning, offers powerful tools for discriminating and generating samples such as images. Deep learning methods have been used in different artistic contexts for neural style transfer, artistic style recognition, and musical genre recognition. Using a constrained manifold analysis protocol, we discuss to what extent spaces induced by deep-learning convolutional neural networks can capture historical/stylistic progressions in music and visual art. We use a path-finding algorithm, called principal path, to move from one point to another. We apply it to the vector space induced by convolutional neural networks. We perform experiments with visual artworks and songs, considering a subset of classes. Within this simplified scenario, we recover a reasonable historical/stylistic progression in several cases. We use the principal path algorithm to conduct an evolutionary analysis of vector spaces induced by convolutional neural networks. We perform several experiments in the visual art and music spaces. The principal path algorithm finds reasonable connections between visual artworks and songs from different styles/genres with respect to the historical evolution when a subset of classes is considered. This approach could be used in many areas to extract evolutionary information from an arbitrary high-dimensional space and deliver interesting cognitive insights.
APA, Harvard, Vancouver, ISO, and other styles
37

Sun, Hongli. "Character Creation Model and Analysis of Figure Painting Fused with Texture Synthesis Style Transfer Algorithm." Scientific Programming 2022 (March 11, 2022): 1–10. http://dx.doi.org/10.1155/2022/3465566.

Full text of the source
Abstract:
Texture fusion is the process of applying the style of one style image to another content image. It is an artistic creation and image editing technology. In recent years, the rapid development of deep learning has injected new power into the field of computer vision, and a large number of image style transfer algorithms based on deep learning have been proposed. At the same time, some current character conversion algorithms based on unsupervised learning also face the loss of the content and structure of the generated characters, and at the same time, they have not learned a good face deformation effect, resulting in poor image generation effects. This paper studies the relevant research background and research significance of image style transfer methods and summarizes them in the time sequence of their development; summarizes the style transfer algorithms based on deep learning, and analyzes the advantages and disadvantages of each type of algorithm. Based on the fast style transfer algorithm, this method adds a saliency detection network and designs a saliency loss function. In the training process, the difference between the saliency map of the generated image and the content image is additionally calculated, and the saliency loss is used as a part of the total loss for iterative training. Experiments show that the stylized image generated by this algorithm can better retain the salient area of the content image and has a good visual effect. Compared with the original network, the amount of parameters of the attention mechanism is very small, and there is almost no additional burden.
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Xinhao, Shuai Yang, Wenjing Wang, and Jiaying Liu. "Artistic Text Style Transfer: An overview of state-of-the-art methods and datasets [SP Forum]." IEEE Signal Processing Magazine 39, no. 6 (November 2022): 10–17. http://dx.doi.org/10.1109/msp.2022.3196763.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
39

Gao, Yiming, and Jiangqin Wu. "GAN-Based Unpaired Chinese Character Image Translation via Skeleton Transformation and Stroke Rendering." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 646–53. http://dx.doi.org/10.1609/aaai.v34i01.5405.

Full text of the source
Abstract:
The automatic style translation of Chinese characters (CH-Char) is a challenging problem. Different from English or general artistic style transfer, Chinese characters contain a large number of glyphs with the complicated content and characteristic style. Early methods on CH-Char synthesis are inefficient and require manual intervention. Recently some GAN-based methods are proposed for font generation. The supervised GAN-based methods require numerous image pairs, which is difficult for many chirography styles. In addition, unsupervised methods often cause the blurred and incorrect strokes. Therefore, in this work, we propose a three-stage Generative Adversarial Network (GAN) architecture for multi-chirography image translation, which is divided into skeleton extraction, skeleton transformation and stroke rendering with unpaired training data. Specifically, we first propose a fast skeleton extraction method (ENet). Secondly, we utilize the extracted skeleton and the original image to train a GAN model, RNet (a stroke rendering network), to learn how to render the skeleton with stroke details in target style. Finally, the pre-trained model RNet is employed to assist another GAN model, TNet (a skeleton transformation network), to learn to transform the skeleton structure on the unlabeled skeleton set. We demonstrate the validity of our method on two chirography datasets we established.
APA, Harvard, Vancouver, ISO, and other styles
40

Sun, Yikang, Cheng-Hsiang Yang, Yanru Lyu, and Rungtai Lin. "From Pigments to Pixels: A Comparison of Human and AI Painting." Applied Sciences 12, no. 8 (April 7, 2022): 3724. http://dx.doi.org/10.3390/app12083724.

Full text of the source
Abstract:
From entertainment to medicine and engineering, artificial intelligence (AI) is now being used in a wide range of fields, yet the extent to which AI can be effectively applied to the creative arts remains to be seen. In this research, a neural algorithm of artistic style was used to generate six AI paintings and these were compared with six paintings on the same theme by an amateur painter. Two sets of paintings were compared by 380 participants, 70 percent of whom had previous painting experience. Results indicate that color and line are the key elements of aesthetic appreciation. Additionally, the style transfer had a marked effect on the viewer when there was a close correspondence between the painting and the style transfer but not when there was little correspondence, indicating that AI is of limited effectiveness in modifying an existing style. Although the use of neural networks simulating human learning has come a long way in narrowing the gap between paintings produced by AI and those produced in the traditional fashion, there remains a fundamental difference in terms of aesthetic appreciation since paintings generated by AI are based on technology, while those produced by humans are based on emotion.
APA, Harvard, Vancouver, ISO, and other styles
41

Lyu, Xinying. "Transforming Music into Chinese Musical Style based on Nyquist." Highlights in Science, Engineering and Technology 39 (April 1, 2023): 292–98. http://dx.doi.org/10.54097/hset.v39i.6542.

Full text of the source
Abstract:
As music recomposing becoming a popular way of artistic creation, computer-aided music making has generated many neoteric pieces through algorithms. Style transfer is one of the methods of recomposition based on an existing musical piece. This paper aims to develop a method that could implement music style transferring to Chinese style by changing the mode into a pentatonic scale. It includes two methods in note-to-note modification that both achieved the purpose well and are suited for different situations. The output of the approach is proved to be successful by certain comparison between the existing Chinese style music and the one that was modified by the method. It is convenient for musicians to utilize the analysis to improve the efficiency of recomposition based on style changing. In addition, they will be given a general view of what the changed music could be without excessive thinking prior to the start of their composing process. These results shed light on guiding further exploration of music composing based on Nyquist.
APA, Harvard, Vancouver, ISO, and other styles
42

Lin, Wenjie, Xueke Zhu, Wujian Ye, Chin-Chen Chang, Yijun Liu, and Chengmin Liu. "An Improved Image Steganography Framework Based on Y Channel Information for Neural Style Transfer." Security and Communication Networks 2022 (January 29, 2022): 1–12. http://dx.doi.org/10.1155/2022/2641615.

Full text of the source
Abstract:
Neural style transfer has effectively assisted artistic design in recent years, but it has also accelerated the unauthorized tampering, synthesis, and dissemination of a large number of digital image resources, resulting in a large number of copyright disputes. Image steganography can hide secret information in cover images to realize copyright protection, but existing methods have poor robustness, making it hard to extract the original secret information from stylized steganographic (stego) images. To solve this problem, we propose an improved image steganography framework for neural style transfer based on Y channel information and a novel structural loss, composed of an encoder, a style transfer network, and a decoder. By introducing a structural loss to constrain network training, the encoder can embed a gray-scale secret image into the Y channel of the cover image and then generate the steganographic image, while the decoder can directly extract this secret image from a stylized stego image output by the style transfer network. The experimental results demonstrate that the proposed method can effectively recover the original secret information from the stylized stego image, and the PSNR between the extracted secret image and the original secret image can reach 23.4 and 27.29 for a gray-scale secret image and a binary image of size 256×256, respectively, maintaining most of the details and semantics. Therefore, the proposed method can not only preserve most of the secret information embedded in a stego image during stylization, but also help to further hide secret information and avoid steganographic attacks to a certain extent due to the stylization of the stego image, thus protecting secret information such as copyright.
APA, Harvard, Vancouver, ISO, and other styles
43

Texler, Aneta, Ondřej Texler, Michal Kučera, Menglei Chai, and Daniel Sýkora. "FaceBlit." Proceedings of the ACM on Computer Graphics and Interactive Techniques 4, no. 1 (April 26, 2021): 1–17. http://dx.doi.org/10.1145/3451270.

Full text of the source
Abstract:
We present FaceBlit---a system for real-time example-based face video stylization that retains the textural details of the style in a semantically meaningful manner, i.e., strokes used to depict specific features in the style are present at the appropriate locations in the target image. Compared to previous techniques, our system preserves the identity of the target subject and runs in real-time without the need for large datasets or a lengthy training phase. To achieve this, we modify the existing face stylization pipeline of Fišer et al. [2017] so that it can quickly generate a set of guiding channels that handle identity preservation of the target subject while still being compatible with a faster variant of the patch-based synthesis algorithm of Sýkora et al. [2019]. Thanks to these improvements, we demonstrate the first face stylization pipeline that can instantly transfer artistic style from a single portrait to a target video at interactive rates, even on mobile devices.
APA, Harvard, Vancouver, ISO, and other styles
44

Geng, Alexander, Ali Moghiseh, Claudia Redenbach, and Katja Schladitz. "Comparing optimization methods for deep learning in image processing applications." tm - Technisches Messen 88, no. 7-8 (May 4, 2021): 443–53. http://dx.doi.org/10.1515/teme-2021-0023.

Full text of the source
Abstract:
Training a deep learning network requires choosing its weights such that the output minimizes a given loss function. In practice, stochastic gradient descent is frequently used for solving the optimization problem. Several variants of this approach have been suggested in the literature. We study the impact of the choice of the optimization method on the outcome of the learning process using two image processing applications from quite different fields. The first one is artistic style transfer, where the content of one image is combined with the style of another one. The second application is a real-world classification task from industry, namely detecting defects in images of air filters. In both cases, clear differences between the results of the individual optimization methods are observed.
45

Huang, Yichen. "Understanding Intuition: Can Rapid Cognition Perform Better than Rational Thinking in Differentiating Artworks between Artist and Artistic Style Transfer." Communications in Humanities Research 3, no. 1 (May 17, 2023): 416–22. http://dx.doi.org/10.54254/2753-7064/3/20220368.

Abstract:
This study attempts to provide evidence that judgements based on rapid cognition can be more accurate than judgements based on rational thinking in particular situations. The design of the experiment was based on a previous study by Sun et al. (2022) that compared cognitive differences between artworks made by artists and by artistic style transfer. In the experiment, the stimuli were generated from 24 pairs of digital artworks produced by AI and by human painters respectively, and participants were asked to differentiate between them. The results indicated that participants made more correct choices when there was not enough time to process all the details than when there was enough time to consider all the evidence. This study once again demonstrates that rapid cognition holds advantages in analyzing complex information within a short period of time.
46

Chen, Hongyi, and Yeyun Luo. "The artistic style transfer from Shanghai modern landmark buildings images to Xiao Jiaochang New Year pictures based on deep learning." Journal of Physics: Conference Series 1678 (November 2020): 012083. http://dx.doi.org/10.1088/1742-6596/1678/1/012083.

47

Kondrashova, Lidiya, and Nataliia Chuvasova. "EMOTIONAL AND ARTISTIC ENVIRONMENT OF UNIVERSITY EDUCATION - A RESOURCE OF QUALITY TRAINING OF FUTURE TEACHERS." Academic Notes Series Pedagogical Science 1, no. 204 (June 2022): 31–35. http://dx.doi.org/10.36550/2415-7988-2022-1-204-31-35.

Abstract:
This study was conducted to define and outline the main theoretical characteristics of the concepts of "quality of education", "emotional and artistic environment", and "professional training of future teachers for successful professional activities in higher education institutions". The authors define the emotional and artistic environment as a system of conditions that determines the purpose and objectives, the planning, functioning and development of learning, the mechanism of subject-subject interaction, and the emotional and aesthetic relations in the "teacher - students" system, and that has a positive impact on the quality of training of future teachers for professional activities. The emotional and artistic environment of the educational process supports the development of the future teacher's personality, abilities, individual style and professional position, as well as the means and ways of mastering cultural values and of entering the world of spirituality, music, art and creative experience in teaching activities. This environment operates with a set of tools by which students evaluate how educational activities are organized and constructed on the basis of aesthetics, art and science; it ensures the "transfer" of objectively existing values in the field of phenomena significant for future teachers, the deepening and expansion of existing professional values, and the intensification of students' thought processes and reflective position.
The following conditions are identified and justified: updating the content of the humanities; giving that content an emotional and artistic orientation; modeling event-role situations in which the results of the student's activity coincide with his expectations; creating an emotionally favorable climate in the educational process; and including participants of the educational process in various types of artistic and creative activities. These conditions ensure the productivity of the educational environment in forming the emotional and artistic image of the future teacher and his readiness for artistic and creative activities through a system of educational and cognitive tasks drawing on cinema and music, which serve as the basis for the visualization of information and for the experience of artistic creativity gained by students.
48

Musinova, G., and K. Tulebayeva. "REFLECTION OF THE SEMIPALATINSK TRAGEDY IN DRAMATIC WORKS." Bulletin of the Eurasian Humanities Institute, Philology Series, no. 3 (September 15, 2022): 175–87. http://dx.doi.org/10.55808/1999-4214.2022-3.17.

Abstract:
The purpose of this research is to identify, through the scientific study of genre and poetic trends, the artistic identity and the ideological and thematic atmosphere of modern Kazakh drama; to show the features of its poetics and the problems of stylistic affiliation; to differentiate novelty from the continuity of traditions; and to identify the author's point of view, the signs of his individual style, and the common thematic features in dramatic works reflecting the Semipalatinsk tragedy. The genre of drama in Kazakh literature in the period after the country gained independence is characterized by a distinctive direction of development, artistic research, genre patterns and stylistic field. The works of playwrights of this period clearly express the changes in the life of the country under independence, the historical continuity in its spiritual world, the vivid responses evoked by free consciousness, and a tense character; the author's position in conveying the drama of the social environment is clearly marked, and a breakthrough in depicting the primordial essence of things is also noticeable. The article examines the trends in the development of Kazakh dramaturgy in the period of independence, the specifics of the reflection of the Semipalatinsk tragedy in dramatic works, the novelty of the plays and the continuity of traditions, their ideological and thematic searches, language, style and problems, and shows the modern directions of development and the artistic integrity of this dramaturgy. It is evident that the cardinal changes taking place in Kazakh society, which acquired sovereignty and moved to a market economy accompanied by globalization, brought social and spiritual life to a new level and significantly influenced the formation of an open human consciousness.
In the first years of independence, dramatic works appeared in the Kazakh artistic word that were freed from the political restrictions of the totalitarian era and its man-made «socialist realism»; they quickly took root and laid down new themes and plots in accordance with the requirements of the post-perestroika time. This trend is a natural consequence of the spirit of the times, because Kazakh drama is an essential component of fiction and of national spirituality, developing in continuity with world artistic thought and having reached significant heights.
49

Gafurova, Khakima Sh. "Songwriting and functional load in the “Diary of Ephemeral Life”." RUDN Journal of Studies in Literature and Journalism 25, no. 3 (December 15, 2020): 511–20. http://dx.doi.org/10.22363/2312-9220-2020-25-3-511-520.

Abstract:
In the history of Japan, the Heian era is considered a golden age, the heyday of aristocratic literature. It was then that the aristocracy became the creator of literary works that to this day remain masterpieces of world literature. Diary literature, including the women's diary literature of the Heian era (9th-12th centuries), is recognized as one of the unique phenomena of Japanese classical literature. Diary literature differs from other genres in its style of presentation, artistic language, plot and composition, figurative system, and methods of presentation. Michitsuna no Haha (935-996), who enriched the Japanese medieval artistic tradition with an original literary style, a new theme, a new poetic vision and a new sensuality, is recognized as the most striking representative of women's diary literature in Japan. In the "Diary of Ephemeral Life" by Michitsuna no Haha, the tradition of poetic inserts is widely embodied, and a special layer of poetic material (three-part, five-part) is highlighted. These poems serve as a necessary attribute for conveying an emotional mood, creating an image, and emphasizing moments of importance. Considering the artistic features of the diary, the author of the article suggests that the poems act as additional material in conveying the various emotional states of the heroine. Poetic insertions enhance one or another mood and establish an emotional connection between the external and internal world of the author. Analysis of the poetic inserts makes it possible to determine their functional load.
50

Levytska, Oksana. "Intermedial strategies in biographical novels about artists (based on biographical works about Vincent van Gogh written by V. Domontovych and about Chaim Soutine by Ralph Dutli)." LITERARY PROCESS: methodology, names, trends, no. 18 (2021): 41–47. http://dx.doi.org/10.28925/2412-2475.2021.18.6.

Abstract:
The article is devoted to the study of the peculiarities of intermedial relationships in biographical works about artists. Based on V. Domontovych's fictionalized biography «A Lonesome Traveller Walking along a Lonesome Road» and Ralph Dutli's novel «Soutine's Last Journey» («Soutines letzte Fahrt»), it analyses the inter-artistic interaction of literature and painting, traces how translation is done from the language of visual art into the language of the literary work, and shows how the artist's creative heritage, and especially the peculiarities of his artistic technique, influence the poetics of the biographical novel. Applying the methodology of intermedial analysis, it explores the ways in which art manifests itself on the compositional, poetical, linguistic and stylistic levels in novels about artists. The interaction of the verbal and fine arts is analysed at the level of thematization, the construction of the artist's image, and the description of the creative process. The intermediality of the biographical novels about Vincent van Gogh and Chaim Soutine is considered through the transfer of features of a work of fine art into verbal art and through the employment of the main means of image creation in painting, such as colour, line and composition. Since the artists under analysis worked within the practices of avant-garde art, particular attention is paid to identifying the poetical principles of artistic trends in the style of a literary work: naturalistic and impressionist elements in Van Gogh's biography, and expressionist and surrealist elements in the novel about Soutine. In addition, the role of pictorial quotation in biographical fiction about artists is examined. Significant attention in biographical novels about artists is given to artistic detail and ekphrasis. Novels about artists provide rich material for research on the dialogue between literature and the fine arts, not only for literary scholars but also for art critics.