A selection of scholarly literature on the topic "Aesthetic image enhancement"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Aesthetic image enhancement".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Aesthetic image enhancement"

1

Zhao, Xiaoyan, Ling Shi, Zhao Han, and Peiyan Yuan. "A Mobile Image Aesthetics Processing System with Intelligent Scene Perception." Applied Sciences 14, no. 2 (January 18, 2024): 822. http://dx.doi.org/10.3390/app14020822.

Abstract:
Image aesthetics processing (IAP) is used primarily to enhance the aesthetic quality of images. However, IAP faces several issues, including its failure to analyze the influence of visual scene information and the difficulty of deploying IAP capabilities to mobile devices. This study proposes an automatic IAP system (IAPS) for mobile devices that integrates machine learning and traditional image-processing methods. First, we employ an extremely computation-efficient deep learning model, ShuffleNet, designed for mobile devices as our scene recognition model. Then, to enable computational inferencing on resource-constrained edge devices, we use a modern mobile machine-learning library, TensorFlow Lite, to convert the model to TFLite format. Subsequently, we adjust the image contrast and color saturation using group filtering. These methods enable us to achieve maximal aesthetic enhancement of images with minimal parameter adjustments. Finally, we use the InceptionResNet-v2 aesthetic evaluation model to rate the images. Even when employing the benchmark model with an accuracy of 70%, images processed by the IAPS are verified to score higher than those produced by a state-of-the-art smartphone's beautification function. Additionally, an anonymous questionnaire survey with 100 participants is conducted, and the results show that the IAPS enhances the aesthetic appeal of images in line with the public's preferences.
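For illustration, the contrast and saturation step of such a pipeline can be sketched as a global adjustment in a few lines of NumPy. This is a generic approximation, not the paper's group-filtering method; the function name and factor values are illustrative only.

```python
import numpy as np

def adjust_contrast_saturation(img, contrast=1.2, saturation=1.3):
    """Globally adjust contrast and colour saturation of an RGB image.

    img: float array in [0, 1], shape (H, W, 3).
    contrast: scale factor applied around the global mean intensity.
    saturation: scale factor for each pixel's distance from its grey value.
    """
    img = img.astype(np.float64)
    # Contrast: stretch values around the global mean.
    mean = img.mean()
    out = mean + contrast * (img - mean)
    # Saturation: push channels away from (or toward) per-pixel luminance.
    gray = out @ np.array([0.299, 0.587, 0.114])
    out = gray[..., None] + saturation * (out - gray[..., None])
    return np.clip(out, 0.0, 1.0)
```

A mid-grey image is a fixed point of both operations, while coloured pixels become more vivid for saturation factors above 1.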
2

Zhang, Fang-Lue, Miao Wang, and Shi-Min Hu. "Aesthetic Image Enhancement by Dependence-Aware Object Recomposition." IEEE Transactions on Multimedia 15, no. 7 (November 2013): 1480–90. http://dx.doi.org/10.1109/tmm.2013.2268051.

3

Ghose, Tandra, Yannik Schelske, Takeshi Suzuki, and Andreas Dengel. "Low-level pixelated representations suffice for aesthetically pleasing contrast adjustment in photographs." Psihologija 50, no. 3 (2017): 239–70. http://dx.doi.org/10.2298/psi1703239g.

Abstract:
Today's web-based automatic image enhancement algorithms decide to apply an enhancement operation by searching for "similar" images in an online database of images and then applying the same level of enhancement as the image in the database. Two key bottlenecks in these systems are the storage cost for images and the cost of the search. Based on the principles of computational aesthetics, we consider storing task-relevant aesthetic summaries, a set of features which are sufficient to predict the level at which an image enhancement operation should be performed, instead of the entire image. The empirical question, then, is to ensure that the reduced representation indeed maintains enough information so that the resulting operation is perceived to be aesthetically pleasing to humans. We focus on the contrast adjustment operation, an important image enhancement primitive. We empirically study the efficacy of storing a pixelated summary of the 16 most representative colors of an image and performing contrast adjustments on this representation. We tested two variants of the pixelated image: a "mid-level pixelized version" that retained spatial relationships and allowed for region segmentation and grouping as in the original image and a "low-level pixelized-random version" which only retained the colors by randomly shuffling the 50 x 50 pixels. In an empirical study on 25 human subjects, we demonstrate that the preferred contrast for the low-level pixelized-random image is comparable to the original image even though it retains very few bits and no semantic information, thereby making it ideal for image matching and retrieval for automated contrast editing. In addition, we use an eye tracking study to show that users focus only on a small central portion of the low-level image, thus improving the performance of image search over commonly used computer vision algorithms to determine interesting key points.
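The low-level summary the authors describe — a 50 x 50 thumbnail quantized to 16 representative colours, optionally shuffled to destroy spatial structure — can be sketched as below. The plain k-means here merely stands in for whatever palette-extraction the study actually used; all names are illustrative.

```python
import numpy as np

def pixelated_summary(img, n_colors=16, size=50, shuffle=True, seed=0):
    """Reduce an RGB image (float, [0, 1]) to a size x size summary
    quantized to its n_colors most representative colours, optionally
    shuffling pixel positions to discard spatial/semantic structure."""
    rng = np.random.default_rng(seed)
    # Downsample by nearest-neighbour index selection.
    h, w, _ = img.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    small = img[ys][:, xs].reshape(-1, 3).astype(np.float64)
    # Plain k-means to find the representative palette.
    centers = small[rng.choice(len(small), n_colors, replace=False)]
    for _ in range(10):
        d = ((small[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_colors):
            pts = small[labels == k]
            if len(pts):
                centers[k] = pts.mean(0)
    quant = centers[labels]
    if shuffle:
        rng.shuffle(quant, axis=0)  # keep colours, drop layout
    return quant.reshape(size, size, 3)
```

The shuffled variant keeps only the colour statistics, which is exactly why it is so cheap to store and match.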
4

Zhang, Xin, Xinyu Jiang, Qing Song, and Pengzhou Zhang. "A Visual Enhancement Network with Feature Fusion for Image Aesthetic Assessment." Electronics 12, no. 11 (June 3, 2023): 2526. http://dx.doi.org/10.3390/electronics12112526.

Abstract:
Image aesthetic assessment (IAA) with neural attention has made significant progress due to its effectiveness in object recognition. Current studies have shown that the features learned by convolutional neural networks (CNN) at different learning stages indicate meaningful information. The shallow feature contains the low-level information of images, and the deep feature perceives the image semantics and themes. Inspired by this, we propose a visual enhancement network with feature fusion (FF-VEN). It consists of two sub-modules, the visual enhancement module (VE module) and the shallow and deep feature fusion module (SDFF module). The former uses an adaptive filter in the spatial domain to simulate human eyes according to the region of interest (ROI) extracted by neural feedback. The latter not only extracts the shallow feature and the deep feature via transverse connection, but also uses a feature fusion unit (FFU) to fuse the pooled features together with the aim of information contribution maximization. Experiments on standard AVA dataset and Photo.net dataset show the effectiveness of FF-VEN.
5

Husselman, Tammy-Ann, Edson Filho, Luca W. Zugic, Emma Threadgold, and Linden J. Ball. "Stimulus Complexity Can Enhance Art Appreciation: Phenomenological and Psychophysiological Evidence for the Pleasure-Interest Model of Aesthetic Liking." Journal of Intelligence 12, no. 4 (April 3, 2024): 42. http://dx.doi.org/10.3390/jintelligence12040042.

Abstract:
We tested predictions deriving from the “Pleasure-Interest Model of Aesthetic Liking” (PIA Model), whereby aesthetic preferences arise from two fluency-based processes: an initial automatic, percept-driven default process and a subsequent perceiver-driven reflective process. One key trigger for reflective processing is stimulus complexity. Moreover, if meaning can be derived from such complexity, then this can engender increased interest and elevated liking. Experiment 1 involved graffiti street-art images, pre-normed to elicit low, moderate and high levels of interest. Subjective reports indicated a predicted enhancement in liking across increasing interest levels. Electroencephalography (EEG) recordings during image viewing revealed different patterns of alpha power in temporal brain regions across interest levels. Experiment 2 enforced a brief initial image-viewing stage and a subsequent reflective image-viewing stage. Differences in alpha power arose in most EEG channels between the initial and deliberative viewing stages. A linear increase in aesthetic liking was again seen across interest levels, with different patterns of alpha activity in temporal and occipital regions across these levels. Overall, the phenomenological data support the PIA Model, while the physiological data suggest that enhanced aesthetic liking might be associated with “flow-feelings” indexed by alpha activity in brain regions linked to visual attention and reducing distraction.
6

Lee, Tae Sung, and Sanghoon Park. "Contouring the Mandible for Aesthetic Enhancement in Asian Patients." Facial Plastic Surgery 36, no. 5 (October 2020): 602–12. http://dx.doi.org/10.1055/s-0040-1717080.

Abstract:
A prominent mandible that gives a squared face in Asians is considered unattractive as it imparts a coarse and masculine image. Mandibular contouring surgery allows slender oval faces. The purpose of conventional mandible reduction is to make the lower face appear slim in frontal view and to have a smooth contour in lateral view. As shaping the lateral contour of the mandible alone may result in minimal improvement in the frontal view, surgical techniques to reduce the width of the lower face through narrowing genioplasty (i.e., the "V-line" surgery) and sagittal resection of the lateral cortex should be combined. Examination of the shape and symmetry, the relationship between the maxilla and the mandible, understanding overlying soft tissue contribution, and understanding the overall balance of the face are mandatory. An important factor influencing ideal facial shape is the patient's personal preference, which is often influenced by his/her ethnic and cultural background. Especially when consulting patients of different nationalities or ethnic backgrounds, careful attention should be paid to the patient's aesthetic sensibility regarding the ideal or desirable facial shape. Narrowing the chin and modification of chin shape can be accomplished by narrowing genioplasty with central strip resection. This midsymphyseal sectioning procedure yields safe and very satisfactory results. This procedure not only augments the narrowing effect by leaving soft tissues attached to the bone but also enables modification of chin shape by altering the shape of resection. The surgeon should customize the surgery based on a comprehensive assessment of the patient's preoperative chin and mandible morphology complemented by an assessment of their aesthetic goals.
7

Veinberga, Maija, Daiga Skujane, and Peteris Rivza. "The impact of landscape aesthetic and ecological qualities on public preference of planting types in urban green spaces." Landscape architecture and art 14 (July 16, 2019): 7–17. http://dx.doi.org/10.22616/j.landarchart.2019.14.01.

Abstract:
Landscape preference in relation to human perception of landscape ecological and aesthetic qualities has been analysed in different studies. The importance of both qualities is highlighted especially for urban green spaces, where enhancing environmental quality while providing a high level of aesthetics is becoming a topical issue. This paper analyses seven planting types in urban green spaces according to six landscape ecological and aesthetic qualities. The aim of this research is to investigate which planting types inhabitants and tourists from four Latvian cities prefer. Planting types were evaluated according to landscape ecological and aesthetic qualities: attractiveness, naturalness, neatness, necessity of care, wilderness and safety. The method of image simulations of the different planting type alternatives was used. The research results showed a correlation between landscape preference and respondents' gender, level of education and place of residence. The research did not display differences in landscape preference in terms of specific regional characteristics of the four selected cities. The results of this research could be used in decision-making for the development of new green spaces and the revitalization of existing ones in the researched cities.
8

Guo, Guanjun, Hanzi Wang, Chunhua Shen, Yan Yan, and Hong-Yuan Mark Liao. "Automatic Image Cropping for Visual Aesthetic Enhancement Using Deep Neural Networks and Cascaded Regression." IEEE Transactions on Multimedia 20, no. 8 (August 2018): 2073–85. http://dx.doi.org/10.1109/tmm.2018.2794262.

9

Hu, Kai, Chenghang Weng, Chaowen Shen, Tianyan Wang, Liguo Weng, and Min Xia. "A multi-stage underwater image aesthetic enhancement algorithm based on a generative adversarial network." Engineering Applications of Artificial Intelligence 123 (August 2023): 106196. http://dx.doi.org/10.1016/j.engappai.2023.106196.

10

Hu, Yaopeng. "Optimizing e-commerce recommendation systems through conditional image generation: Merging LoRA and cGANs for improved performance." Applied and Computational Engineering 32, no. 1 (January 22, 2024): 177–84. http://dx.doi.org/10.54254/2755-2721/32/20230207.

Abstract:
This research concentrates on the integration of Low-Rank Adaptation for Text-to-Image Diffusion Fine-tuning and Conditional Image Generation in e-commerce recommendation systems. Low-Rank Adaptation for Text-to-Image Diffusion Fine-tuning, skilled in producing precise and diverse images from aesthetic descriptions provided by users, is extremely valuable for personalizing product suggestions. The enhancement of the interpretation of textual prompts and consequent image generation is accomplished through the fine-tuning of cross-attention layers in the Stable Diffusion model. In an effort to advance personalization further, Conditional Generative Adversarial Networks are employed to transform these textual descriptions into corresponding product images. In order to assure effective data communication, particularly in areas with low connectivity, the system makes use of Long Range technology, thereby improving system accessibility. Preliminary results demonstrate a considerable improvement in recommendation precision, user engagement, and conversion rates. These results underscore the potential impact of integrating such advanced artificial intelligence techniques in e-commerce, optimizing the shopping experience by generating personalized, accurate, and visually appealing product suggestions.

Theses and dissertations on the topic "Aesthetic image enhancement"

1

Payne, Andrew. "Automatic aesthetic image enhancement for consumer digital photographs." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/34568.

Abstract:
Automatic image enhancement refers to the process of improving the quality of the visual content of an image without user interaction. There has been considerable research done in the area of image enhancement normally as a preprocessing step for computer vision applications. Throughout the literature, objective image quality metrics have been defined and image enhancements have been made to satisfy the quality metric. Quality metrics typically are based upon the signal to noise ratio, focus measurements, or strength of edges within the image content. Subjective human input is rarely considered in image enhancement applications. This thesis investigates the concept of automatic image enhancement. In this thesis, an automatic subjective image enhancement system based on the regional content of the image is proposed.
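Two of the objective quality metrics the abstract mentions — signal statistics such as contrast, and focus/edge strength — can be sketched in a few lines. These are generic textbook measures for illustration, not the thesis's actual metric.

```python
import numpy as np

def rms_contrast(gray):
    """RMS contrast: standard deviation of a luminance image in [0, 1]."""
    return gray.astype(np.float64).std()

def laplacian_focus(gray):
    """Focus measure: variance of a 4-neighbour Laplacian response
    (sharp, edge-rich images score higher than blurred ones)."""
    g = gray.astype(np.float64)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
    return lap.var()
```

A flat image scores zero on both measures, while a high-frequency pattern scores high, which is exactly the behaviour an enhancement loop would reward.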
2

Goswami, Abhishek. "Content-aware HDR tone mapping algorithms." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG013.

Abstract:
The ratio between the brightest and the darkest luminance intensity in High Dynamic Range (HDR) images is larger than the rendering capability of the output media. Tone mapping operators (TMOs) compress the HDR image while preserving the perceptual cues, thereby modifying the subjective aesthetic quality. Age-old painting and photography techniques of manual exposure correction have inspired a lot of research on TMOs. However, unlike the manual retouching process based on the semantic content of the image, TMOs in the literature have mostly relied upon photographic rules or adaptation principles of human vision to aim for the 'best' aesthetic quality, which is ill-posed due to its subjectivity. Our work reformulates the challenges of tone mapping by stepping into the shoes of a photographer, following photographic principles, image statistics and their local retouching recipe to achieve the tonal adjustments. In this thesis, we present two semantic-aware TMOs: a traditional SemanticTMO and a deep learning-based GSemTMO. Our novel TMOs explicitly use semantic information in the tone mapping pipeline. Our novel GSemTMO is the first instance of graph convolutional networks (GCN) being used for aesthetic image enhancement. We show that graph-based learning can leverage the spatial arrangement of semantic segments like the local masks made by experts. It creates a scene understanding based on semantic-specific image statistics and predicts a dynamic local tone mapping. Comparing our results to traditional and modern deep learning-based TMOs, we show that G-SemTMO can emulate an expert's recipe and come closer to reference aesthetic styles than the state-of-the-art methods.
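For context, the classic global operator that semantic TMOs refine can be written in a few lines. This is Reinhard's well-known global tone mapping, shown only as background; it is not the thesis's SemanticTMO or G-SemTMO.

```python
import numpy as np

def reinhard_tonemap(lum, key=0.18, eps=1e-6):
    """Reinhard's global operator: scale HDR luminance by its
    log-average, then compress with L / (1 + L) into [0, 1)."""
    log_avg = np.exp(np.log(lum + eps).mean())  # geometric mean luminance
    scaled = key * lum / log_avg                # map scene to a "key" value
    return scaled / (1.0 + scaled)              # smooth highlight roll-off
```

The compression is monotone, so relative brightness ordering is preserved while the dynamic range is squeezed into what a display can render.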
3

Raud, Charlie. "How post-processing effects imitating camera artifacts affect the perceived realism and aesthetics of digital game graphics." Thesis, Södertörns högskola, Institutionen för naturvetenskap, miljö och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-34888.

Abstract:
This study investigates how post-processing effects affect the realism and aesthetics of digital game graphics. Four focus groups explored a digital game environment and were exposed to various post-processing effects. During qualitative interviews these focus groups were asked questions about their experience and preferences and the results were analysed. The results can illustrate some of the different pros and cons with these popular post-processing effects and this could help graphical artists and game developers in the future to use this tool (post-processing effects) as effectively as possible.
4

Huang, Yong-Jian (黃詠健). "Developing a Professional Image Enhancement Mechanism Based on Contemporary Photograph Aesthetics Criteria Mining." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/17283014763907672061.

Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering, academic year 103 (2014/15).
In recent years, the rise of smartphones and digital cameras has made it easier to take photos, and a massive number of photos are shared on the Internet. Photographic aesthetics is a form of art expressed through professional photographers' aesthetic sensibilities and emotions. Moreover, many professional photographers adjust their photos in post-processing to make them more beautiful and to meet the conditions of photographic aesthetics rules. Enhancing images according to such ambiguous photographic aesthetics is a difficult task for a computer. In this thesis, an automatic image enhancement method based on an aesthetic image dataset collected from the Internet is proposed. We analyze an image with several methods, such as the RMS method, the Laplacian of Gaussian, saliency maps, and Gabor filters; sixteen features extracted in this way are used to judge whether an image is good or not. We present a new concept for enhancing images using cluster styles generated by X-means and a CART decision tree. When an input image is judged to be bad by the CART decision tree, the decision path can be traced back to identify which features need enhancement. We list ten operations that can enhance an image efficiently, such as gamma correction and Gaussian blur, and use the interval halving method to approach the parameter value suggested for a feature by the CART decision tree based on contemporary aesthetics criteria. In the experiments, we apply clustering and classification to our dataset, and the average clustering accuracy is 96.8%. In the enhancement part, the CART decision tree's aesthetic suggestions, indicating which features are too low or too high, enhance the image step by step, yielding differently styled results like those produced by professional photographers.
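The interval halving step the abstract describes — searching for a parameter value that moves a feature toward its target — is ordinary bisection. A sketch under the assumption that the tuned parameter is a gamma exponent and the feature is mean brightness; the names and targets are illustrative, not taken from the thesis.

```python
import numpy as np

def fit_gamma(img, target_mean, iters=30):
    """Bisection (interval halving) on the gamma exponent so that the
    corrected image's mean brightness approaches target_mean.

    img: float array in [0, 1].  mean(img ** g) decreases as g grows,
    so the search interval can be halved at every step.
    """
    lo, hi = 0.1, 10.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (img ** mid).mean() > target_mean:
            lo = mid   # still too bright -> need a larger gamma
        else:
            hi = mid   # too dark -> need a smaller gamma
    return 0.5 * (lo + hi)
```

Thirty halvings of the initial interval pin the exponent down to well below any visually meaningful difference.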

Book chapters on the topic "Aesthetic image enhancement"

1

Chaudhary, Priyanka, Kailash Shaw, and Pradeep Kumar Mallick. "A Survey on Image Enhancement Techniques Using Aesthetic Community." In Advances in Intelligent Systems and Computing, 585–96. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-5520-1_53.

2

Olsson, Liselott Mariett, Robert Lecusay, and Monica Nilsson. "Children and Adults Explore Human Beings' Place in Nature and Culture: A Swedish Case-Study of Early Childhood Commons for More Equal and Inclusive Education." In Educational Commons, 29–48. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51837-9_3.

Abstract:
This chapter accounts for a Swedish case-study on the potential of educational commons to promote more equal and inclusive education in the early years. Several conditions decisive for this potential to be activated are identified and analysed: (1) the relation between research and practice, (2) the image of the child, (3) the role of teachers, (4) the definition of the educational task, and (5) the educational methods used. These conditions are described further in terms of how they were activated within a Playworld/Interactive performance based on a common research question, shared by preschool children and adults, on human beings' place in nature and culture. The chapter concludes that educational commons may function as a catalyst in promoting more equal and inclusive education if the image of children and teachers is embedded within a shared, intergenerational search for meaning in which both are conceived as contributing commoners; if education defines its task not only as compensatory but also as complementary and as a place for children's search for meaning, where imagination, play and the creative co-construction of narratives are allowed to co-exist with more conventional and "rational" modes of learning and teaching; and if methods and theoretical tools in educational practice and research carry an aesthetic variety that incorporates sensuous-perceptive experiences, an enhancement of individual and collective memories, and opportunities for children and adults to formulate and gather around a common object of knowledge and interest.
3

"Aesthetic Enhancement." In The Texture of Images, 183–253. BRILL, 2020. http://dx.doi.org/10.1163/9789004440128_007.


Conference papers on the topic "Aesthetic image enhancement"

1

Zavalishin, Sergey S., and Yuri S. Bekhtin. "Visually aesthetic image contrast enhancement." In 2018 7th Mediterranean Conference on Embedded Computing (MECO). IEEE, 2018. http://dx.doi.org/10.1109/meco.2018.8406077.

2

Li, Ling, Dong Liang, Yuanhang Gao, Sheng-Jun Huang, and Songcan Chen. "ALL-E: Aesthetics-guided Low-light Image Enhancement." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/118.

Abstract:
Evaluating the performance of low-light image enhancement (LLE) is highly subjective, thus making integrating human preferences into image enhancement a necessity. Existing methods fail to consider this and present a series of potentially valid heuristic criteria for training enhancement models. In this paper, we propose a new paradigm, i.e., aesthetics-guided low-light image enhancement (ALL-E), which introduces aesthetic preferences to LLE and motivates training in a reinforcement learning framework with an aesthetic reward. Each pixel, functioning as an agent, refines itself by recursive actions, i.e., its corresponding adjustment curve is estimated sequentially. Extensive experiments show that integrating aesthetic assessment improves both subjective experience and objective evaluation. Our results on various benchmarks demonstrate the superiority of ALL-E over state-of-the-art methods. Source code: https://dongl-group.github.io/project pages/ALLE.html
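The recursive per-pixel adjustment the abstract describes belongs to the curve family popularized by Zero-DCE: a quadratic update applied repeatedly. A sketch with a scalar alpha standing in for the learned per-pixel curve parameters; this illustrates the curve mechanics only, not ALL-E's reinforcement learning or aesthetic reward.

```python
import numpy as np

def apply_adjustment_curve(img, alpha, steps=8):
    """Recursively apply the quadratic adjustment curve
    x <- x + alpha * x * (1 - x) to an image in [0, 1].

    For alpha in [-1, 1] each step maps [0, 1] to itself, so the
    result never leaves the valid range; alpha > 0 brightens.
    """
    x = img.astype(np.float64)
    for _ in range(steps):
        x = x + alpha * x * (1.0 - x)
    return x
```

Note that pure black and pure white are fixed points of the curve, so only mid-tones move — a convenient property for low-light enhancement.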
3

Du, Xiaoyu, Xun Yang, Zhiguang Qin, and Jinhui Tang. "Progressive Image Enhancement under Aesthetic Guidance." In ICMR '19: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3323873.3325055.

4

Deng, Yubin, Chen Change Loy, and Xiaoou Tang. "Aesthetic-Driven Image Enhancement by Adversarial Learning." In MM '18: ACM Multimedia Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3240508.3240531.

5

Bakhshali, Mohamad Amin, Mousa Shamsi, and Amir Golzarfar. "Facial color image enhancement for aesthetic surgery blepharoplasty." In 2012 IEEE Symposium on Industrial Electronics and Applications (ISIEA 2012). IEEE, 2012. http://dx.doi.org/10.1109/isiea.2012.6496659.

6

Li, Leida, Yuzhe Yang, and Hancheng Zhu. "Naturalness Preserved Image Aesthetic Enhancement with Perceptual Encoder Constraint." In ICMR '19: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3323873.3326591.

7

Zhang, Xiaoyan, Martin Constable, and Kap Luk Chan. "Aesthetic enhancement of landscape photographs as informed by paintings across depth layers." In 2011 18th IEEE International Conference on Image Processing (ICIP 2011). IEEE, 2011. http://dx.doi.org/10.1109/icip.2011.6115622.

8

Liu, Xiangfei, Xiushan Nie, Zhen Shen, and Yilong Yin. "Joint Learning of Image Aesthetic Quality Assessment and Semantic Recognition Based on Feature Enhancement." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9414367.

9

Chitale, Manasi, Mihika Choudhary, Ritika Jagtap, Priti Koutikkar, Lata Ragha, and Chaitanya V. Mahamuni. "High-Resolution Image-to-Image Translation for Aesthetic Enhancements Using Generative Adversarial Network." In 2024 2nd International Conference on Disruptive Technologies (ICDT). IEEE, 2024. http://dx.doi.org/10.1109/icdt61202.2024.10489200.

10

Zheng, Naishan, Jie Huang, Qi Zhu, Man Zhou, Feng Zhao, and Zheng-Jun Zha. "Enhancement by Your Aesthetic: An Intelligible Unsupervised Personalized Enhancer for Low-Light Images." In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3547952.
