A selection of scholarly literature on the topic "Computer Vision, Fashion, Computational Aesthetics"

Format your source according to APA, MLA, Chicago, Harvard, and other styles


Consult the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Computer Vision, Fashion, Computational Aesthetics".

Next to each work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, if these details are provided in the source's metadata.

Journal articles on the topic "Computer Vision, Fashion, Computational Aesthetics"

1

Wickramasinghe, W. A. P., A. T. Dharmarathne, and N. D. Kodikara. "A mathematical model for computational aesthetics." International Journal of Computational Vision and Robotics 1, no. 3 (2010): 311. http://dx.doi.org/10.1504/ijcvr.2010.038077.

2

Zhu, Hancheng, Yong Zhou, Zhiwen Shao, Wenliang Du, Guangcheng Wang, and Qiaoyue Li. "Personalized Image Aesthetics Assessment via Multi-Attribute Interactive Reasoning." Mathematics 10, no. 22 (November 9, 2022): 4181. http://dx.doi.org/10.3390/math10224181.

Abstract:
Due to the subjective nature of people's aesthetic experiences with respect to images, personalized image aesthetics assessment (PIAA), which can simulate the aesthetic experiences of individual users to score images, has received extensive attention from researchers in the computational intelligence and computer vision communities. Existing PIAA models are usually built on prior knowledge that directly learns the generic aesthetic results of images from most people or the personalized aesthetic results of images from a large number of individuals. However, this learned prior knowledge ignores the mutual influence of the multiple attributes of images and users in their personalized aesthetic experiences. To this end, this paper proposes a personalized image aesthetics assessment method via multi-attribute interactive reasoning. Unlike existing PIAA models, the multi-attribute interaction constructed from both images and users is used as more effective prior knowledge. First, we design a generic aesthetics extraction module from the perspective of images to obtain the aesthetic score distribution and multiple objective attributes of images rated by most users. Then, we propose a multi-attribute interactive reasoning network from the perspective of users. By interacting multiple subjective attributes of users with multiple objective attributes of images, we fuse the obtained multi-attribute interactive features and aesthetic score distribution to predict personalized aesthetic scores. Experimental results on multiple PIAA datasets demonstrate that our method outperforms state-of-the-art PIAA methods.
3

Jiang, Bin, and Chris de Rijke. "Structural Beauty: A Structure-Based Computational Approach to Quantifying the Beauty of an Image." Journal of Imaging 7, no. 5 (April 23, 2021): 78. http://dx.doi.org/10.3390/jimaging7050078.

Abstract:
To say that beauty is in the eye of the beholder means that beauty is largely subjective and so varies from person to person. While the subjectivity view is commonly held, there is also an objectivity view that seeks to measure beauty or aesthetics in some quantitative manner. Christopher Alexander long ago discovered that beauty or coherence highly correlates with the number of subsymmetries or substructures, and demonstrated that there is a shared notion of beauty—structural beauty—among people and even different peoples, regardless of their faiths, cultures, and ethnicities. This notion of structural beauty arises directly out of living structure or wholeness, a physical and mathematical structure that underlies all space and matter. Based on the concept of living structure, this paper develops an approach for computing the structural beauty or life of an image (L) based on the number of automatically derived substructures (S) and their inherent hierarchy (H). To verify this approach, we conducted a series of case studies applied to eight pairs of images, including Leonardo da Vinci's Mona Lisa and Jackson Pollock's Blue Poles. We discovered, among other things, that Blue Poles is more structurally beautiful than the Mona Lisa, and that traditional buildings are in general more structurally beautiful than their modernist counterparts. This finding implies that the goodness of things or images is largely a matter of fact rather than an opinion or personal preference, as conventionally conceived. The research on structural beauty has deep implications for many disciplines in which beauty or aesthetics is a major concern, such as image understanding and computer vision, architecture and urban design, humanities and arts, neurophysiology, and psychology.
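The score described in the abstract can be sketched in a few lines. This is a minimal illustration, assuming the multiplicative form L = S × H (total substructures times number of hierarchy levels) and taking per-level substructure counts as given; the paper derives both automatically from the image.

```python
def structural_beauty(substructure_counts):
    """Toy structure-based beauty score.

    `substructure_counts[i]` is the number of substructures found at
    hierarchical level i (level 0 = coarsest). S is the total number of
    substructures, H the number of levels, and the score is L = S * H --
    an assumed simplification of the paper's formulation, which derives
    S and H from automatic image segmentation.
    """
    s = sum(substructure_counts)
    h = len(substructure_counts)
    return s * h

# Two toy "images": more substructures spread over more levels score higher.
print(structural_beauty([3, 9, 27]))  # S = 39, H = 3
print(structural_beauty([2, 4]))      # S = 6,  H = 2
```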
4

Rusin, R. M. "POSTMODERNISM: AESTHETICS AND THE ART OF VIRTUALITY." UKRAINIAN CULTURAL STUDIES, no. 1 (4) (2019): 67–69. http://dx.doi.org/10.17721/ucs.2019.1(4).13.

Abstract:
At the end of the 20th and the beginning of the 21st century, the changes that took place in art created a need for a theoretical rethinking of artistic practices. This task was taken up by artists, art critics, and other agents of the art world, who tried to clarify the possibility of a new vision of art and to give it an objective assessment. Clearly, understanding the specifics of contemporary art lies not so much in the assessment itself as in clarifying the foundations of a different understanding of such concepts as "classical art", "contemporary art", and "virtual art". While classical art has received thorough treatment in the history of art, art studies, and aesthetics over the centuries, virtual art, as a specific form of contemporary art, still needs to be thoroughly investigated. Contemporary art is undergoing significant transformations in the context of post-industrial culture, and computational methods for the production of virtual artefacts are becoming increasingly important. The paper notes that contemporary virtual art is a new space dynamically captured by the postmodernist practices of contemporary art. In modern postmodernist practices in the field of virtual art, the rapid development of computer technology sharply reduces the share of human presence in the creative process. Machine modelling, as a product of collective creativity, makes it possible to create a new virtual image regardless of its existence in the real world. Modern practices in the field of virtual art employ the idea of an artificial ("synthetic") imagination: a machine imagination based on the artificial modelling of human imagination. With the help of interactive search, artificial imagination allows images to be synthesized from a database and a new virtual image to be created regardless of its existence in the real world. Thus, the rapid development of computer technology is steadily reducing the share of human presence in the field of virtual art.
Postmodern experiments stimulate the erosion of the boundaries between traditional forms and genres of art. The perfection and availability of technical means of production and the development of computer technology have practically led to the disappearance of original creativity as an act of individual creation.
5

Liu, Jian, Yuchen Zheng, Ke Dong, Haitong Yu, Jianjun Zhou, Ye Jiang, Zhaoneng Jiang, Sujie Guo, and Rui Ding. "Classification of Fashion Article Images Based on Improved Random Forest and VGG-IE Algorithm." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 03 (July 31, 2019): 2051004. http://dx.doi.org/10.1142/s0218001420510040.

Abstract:
In the classification of fashion article images for e-commerce image recommendation systems, existing methods do not meet practical requirements for classification accuracy and computation time. Herein, for the first time to our knowledge, we present two image recognition approaches for the classification of fashion article images, a random-forest method based on a genetic algorithm (GA-RF) and the Visual Geometry Group-Image Enhancement algorithm (VGG-IE), to address the classification accuracy and computation time problem. In GA-RF, the number of segmentation times and the number of decision trees are the key factors affecting the classification results. An improved genetic algorithm is introduced into the parameter optimization of the forests to determine the optimal combination of the two parameters with minimal manual intervention. Finally, we propose six different deep neural network architectures, including VGG-IE, to improve classification accuracy. The VGG-IE algorithm uses batch normalization and seven kinds of training-data augmentation to ease and promote the learning process. We investigate the effectiveness of the proposed method using the Fashion-MNIST dataset of 70,000 pictures. Experimental results demonstrate that, in comparison with state-of-the-art algorithms for 10-category image recognition, our VGG algorithm has the shortest computational time while satisfying a given classification accuracy, and the VGG-IE approach has the highest classification accuracy.
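The GA-RF idea, letting a genetic algorithm choose two forest parameters with minimal manual intervention, can be sketched against a toy fitness function standing in for validation accuracy. Everything below (the fitness surface, parameter ranges, and GA settings) is an illustrative assumption, not the paper's actual setup.

```python
import random

def fitness(n_trees, depth):
    # Toy stand-in for validation accuracy; peaks at n_trees=80, depth=10.
    return -((n_trees - 80) ** 2 + 25 * (depth - 10) ** 2)

def genetic_search(generations=40, pop_size=20, seed=0):
    """Elitist GA over two integer hyperparameters (hypothetical ranges)."""
    rng = random.Random(seed)
    pop = [(rng.randint(1, 200), rng.randint(1, 30)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[: pop_size // 2]            # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                  # crossover: mix parameters
            if rng.random() < 0.3:                # mutation: small perturbation
                child = (max(1, child[0] + rng.randint(-10, 10)),
                         max(1, child[1] + rng.randint(-3, 3)))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(*p))

print(genetic_search())  # converges near the toy optimum (80, 10)
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases; crossover recombines a good tree count from one parent with a good depth from another.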
6

Cowell, Andrew J., Michelle L. Gregory, Joe Bruce, Jereme Haack, Doug Love, Stuart Rose, and Adrienne H. Andrew. "Understanding the Dynamics of Collaborative Multi-Party Discourse." Information Visualization 5, no. 4 (December 2006): 250–59. http://dx.doi.org/10.1057/palgrave.ivs.9500139.

Abstract:
In this paper, we discuss the efforts underway at the Pacific Northwest National Laboratory in understanding the dynamics of multi-party discourse across a number of communication modalities, such as email, instant messaging traffic and meeting data. Two prototype systems are discussed. The Conversation Analysis Tool (ChAT) is an experimental test-bed for the development of computational linguistic components and enables users to easily identify topics or persons of interest within multi-party conversations, including who talked to whom, when, the entities that were discussed, etc. The Retrospective Analysis of Communication Events (RACE) prototype, leveraging many of the ChAT components, is an application built specifically for knowledge workers and focuses on merging different types of communication data so that the underlying message can be discovered in an efficient, timely fashion.
7

Shin, Seong-Yoon, Gwanghyun Jo, and Guangxing Wang. "A Novel Method for Fashion Clothing Image Classification Based on Deep Learning." Journal of Information and Communication Technology 22, no. 1 (January 19, 2023): 127–48. http://dx.doi.org/10.32890/jict2023.22.1.6.

Abstract:
Image recognition and classification is a significant research topic in computational vision and widely used computer technology. The methods often used in image classification and recognition tasks are based on deep learning, like Convolutional Neural Networks (CNNs), LeNet, and Long Short-Term Memory networks (LSTM). Unfortunately, the classification accuracy of these methods is unsatisfactory. In recent years, using large-scale deep learning networks to achieve image recognition and classification can improve classification accuracy, such as VGG16 and Residual Network (ResNet). However, due to the deep network hierarchy and complex parameter settings, these models take more time in the training phase, especially when the sample number is small, which can easily lead to overfitting. This paper suggested a deep learning-based image classification technique based on a CNN model and improved convolutional and pooling layers. Furthermore, the study adopted the approximate dynamic learning rate update algorithm in the model training to realize the learning rate's self-adaptation, ensure the model's rapid convergence, and shorten the training time. Using the proposed model, an experiment was conducted on the Fashion-MNIST dataset, taking 6,000 images as the training dataset and 1,000 images as the testing dataset. In actual experiments, the classification accuracy of the suggested method was 93 percent, 4.6 percent higher than that of the basic CNN model. Simultaneously, the study compared the influence of the batch size of model training on classification accuracy. Experimental outcomes showed this model is very generalized in fashion clothing image classification tasks.
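The abstract does not specify the approximate dynamic learning-rate update algorithm, so as an illustrative stand-in, here is a common step-decay schedule of the kind self-adapting training loops build on (the drop factor and interval are assumptions, not the paper's values):

```python
def step_decay(initial_lr, epoch, drop=0.5, epochs_per_drop=10):
    """Multiply the learning rate by `drop` every `epochs_per_drop` epochs."""
    return initial_lr * (drop ** (epoch // epochs_per_drop))

# Rate shrinks in steps as training progresses, speeding early convergence
# while allowing finer updates later.
for epoch in (0, 10, 25):
    print(epoch, step_decay(0.1, epoch))
```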
8

Bates, Joseph. "Virtual Reality, Art, and Entertainment." Presence: Teleoperators and Virtual Environments 1, no. 1 (January 1992): 133–38. http://dx.doi.org/10.1162/pres.1992.1.1.133.

Abstract:
Most existing research on virtual reality concerns issues close to the interface, primarily how to present an underlying simulated world in a convincing fashion. However, for virtual reality to achieve its promise as a rich and popular artistic form, as have the novel, cinema, and television, we believe it will be necessary to explore well beyond the interface, to those issues of content and style that have made traditional media so powerful. We present a case for the importance of this research, then outline several topics we believe are central to the inquiry: developing computational theories for cognitive-emotional agents, presentation style, and drama.
9

Wang, Wenguan, Jianbing Shen, and Haibin Ling. "A Deep Network Solution for Attention and Aesthetics Aware Photo Cropping." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 7 (July 1, 2019): 1531–44. http://dx.doi.org/10.1109/tpami.2018.2840724.

10

Gu, Zhenfei, Can Chen, and Dengyin Zhang. "A Low-Light Image Enhancement Method Based on Image Degradation Model and Pure Pixel Ratio Prior." Mathematical Problems in Engineering 2018 (July 16, 2018): 1–19. http://dx.doi.org/10.1155/2018/8178109.

Abstract:
Images captured in low-light conditions are prone to suffer from low visibility, which may further degrade the performance of most computational photography and computer vision applications. In this paper, we propose a low-light image degradation model derived from the atmospheric scattering model, which is simple but effective and robust. We then present a physically valid image prior named the pure pixel ratio prior, a statistical regularity observed across a large set of clear natural images. Based on the proposed model and the image prior, a corresponding low-light image enhancement method is also presented. In this method, we first segment the input image into scenes according to brightness similarity and utilize a high-efficiency scene-based transmission estimation strategy rather than the traditional per-pixel fashion. Next, we refine the rough transmission map using a total variation smoothing operator and obtain the enhanced image accordingly. Experiments on a number of challenging natural low-light images verify the effectiveness and robustness of the proposed model, and the corresponding method shows its superiority over several state-of-the-art methods.
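The atmospheric scattering model behind the degradation model is I = J·t + A·(1 − t), with scene radiance J, transmission t, and atmospheric light A. A per-pixel inversion can be sketched as below; this is a simplification, since the paper estimates transmission scene-wise via its pure pixel ratio prior rather than per pixel.

```python
def enhance_pixel(i, a, t, t_min=0.1):
    """Invert I = J*t + A*(1 - t) for a single intensity value.

    `a` is the estimated atmospheric light and `t` the transmission;
    `t` is clamped from below to avoid amplifying noise in dark regions.
    """
    t = max(t, t_min)
    return (i - a) / t + a

# Round-trip check: degrade a clean value with the model, then recover it.
j, a, t = 0.8, 0.05, 0.4
i = j * t + a * (1 - t)
print(enhance_pixel(i, a, t))  # recovers approximately 0.8
```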

Book chapters on the topic "Computer Vision, Fashion, Computational Aesthetics"

1

Datta, Ritendra, Dhiraj Joshi, Jia Li, and James Z. Wang. "Studying Aesthetics in Photographic Images Using a Computational Approach." In Computer Vision – ECCV 2006, 288–301. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11744078_23.

2

Alima, N., R. Snooks, and J. McCormack. "Bio Scaffolds." In Proceedings of the 2021 DigitalFUTURES, 316–29. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5983-6_29.

Abstract:
'Bio Scaffolds' explores a series of design tectonics that emerge from a co-creation between human, machine and natural intelligences. This research establishes an integral connection between form and materiality by enabling biological materials to become a co-creator within the design and fabrication process. In this research paper, we explore a hybrid between architectural aesthetics and biological agency by choreographing natural growth through form. 'Bio Scaffolds' explores a series of 3D printed biodegradable scaffolds that orchestrate both Mycelia growth and degradation through form. A robotic arm is introduced into the system that can respond to the organism's natural behavior by injecting additional Mycelium culture into a series of sacrificial frameworks. Equipped with computer vision systems, feedback controls, scanning processes and a multi-functional end-effector, the machine tends to nature by reacting to its patterns of growth, moisture, and color variation. Using this cybernetic intelligence, developed between human, machine, and Mycelium, our intention is to generate unexpected structural and morphological forms that are represented via a series of 3D printed Mycelium enclosures. 'Bio Scaffolds' explores an interplay between biological and computational complexity through non-anthropocentric micro-habitats.

Conference papers on the topic "Computer Vision, Fashion, Computational Aesthetics"

1

Sardenberg, Victor, and Mirco Becker. "Computational Quantitative Aesthetics Evaluation - Evaluating architectural images using computer vision, machine learning and social media." In eCAADe 2022: Co-creating the Future - Inclusion in and through Design. eCAADe, 2022. http://dx.doi.org/10.52842/conf.ecaade.2022.2.567.

2

Sardenberg, Victor, and Mirco Becker. "Aesthetic Measure of Architectural Photography utilizing Computer Vision: Parts-from-Wholes." In Design Computation Input/Output 2022. Design Computation, 2022. http://dx.doi.org/10.47330/dcio.2022.ggnl1577.

Abstract:
Existing methods for solution space navigation require numerical values to score solutions. The authors introduce a method of quantitative aesthetic evaluation utilizing Computer Vision (CV) as a criterion to navigate solution spaces. Aesthetics can thereby complement structural, environmental, and other quantitative criteria. The work stands in the extended history of quantifying the visual aesthetic experience. Among the precedents, Birkhoff [1933] and Max Bense [1965] built an approach with experiments to empirically support a measure, whereas Birkin [2010] and Ostwald and Vaughan [2016] devised the first computational methods working on vector drawings. Our research automates and accelerates aesthetic quantification by utilizing CV to extract computable datasets from images. We are especially keen on architectural images as a shorthand for assigning an aesthetic value to a design, aiming to navigate the solution space in architecture. This work devises a method for rearranging parts in architectural images focusing on formal aspects, in opposition to semantic segmentation, where objects unrelated to architectural design (cars, persons, sky…) are quantified to score images [Verma and Jana and Ramamritham 2018]. It uses Maximally Stable Extremal Regions (MSER) [Matas 2004] to recognize architectural parts because it is superior to similar methods such as SimpleBlobDetector in this task. Our method disassembles the parts into a diagram of scaled parts (Fig. 2), to analyze them in isolation, and a diagram of a connectivity graph (Fig. 3), to evaluate relationships. These diagrams are examined to compare photos of buildings, cars, and trees to assess the applicability of the method to a range of objects. Parts and connections are thus quantified, and these values are input into a refined version of Birkhoff's formula to calculate an aesthetic score for each image for navigating the solution space.
Finally, the method is tested by drawing comparisons between the discrete and continuous paradigms (Fig. 1) in the contemporary discourse of architecture, comparing Zaha Hadid Architects' Heydar Aliyev Centre and Gilles Retsin's Diamonds House, arguing that there is a difference between the aesthetic effects of continuous and discrete designs beyond their distinction in tectonic logic. The method proved to be an efficient procedure for comparatively quantifying the aesthetic judgment of architectural images, enabling designers to incorporate aesthetics as a complementary criterion for solution space navigation in computational design. The method of computational aesthetic measure for solution space navigation and its calibration via crowdsourced evaluation of images is further detailed in a paper by the authors published at the 2022 eCAADe conference.
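Birkhoff's classic measure referenced in the abstract is M = O/C, order divided by complexity. A minimal sketch, taking the part count (e.g., MSER detections) as complexity and the connection count from the connectivity graph as order; the authors' refined formula is not reproduced here.

```python
def birkhoff_measure(order, complexity):
    """Birkhoff's aesthetic measure M = O / C."""
    if complexity <= 0:
        raise ValueError("complexity must be positive")
    return order / complexity

# Toy example: 30 detected parts, 12 connections/symmetries among them.
print(birkhoff_measure(order=12, complexity=30))  # 0.4
```

The measure rewards images whose many perceivable relations (order) arise from relatively few parts (complexity).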
3

"DETECTION OF FASHION LANDMARKS BASED ON POSE ESTIMATION AND HUMAN PARSING." In 16th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing (CGVCVIP 2022), the 8th International Conference on Connected Smart Cities (CSC 2022), 7th International Conference on Big Data Analytics, Data Mining and Computational Intelligence (BigDaCI’22) and 11th International Conference on Theory and Practice in Modern Computing (TPMC 2022). IADIS Press, 2022. http://dx.doi.org/10.33965/mccsis2022_202206l008.

4

Gokhale, Tejas. "Vision beyond Pixels: Visual Reasoning via Blocksworld Abstractions." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/907.

Abstract:
Deep neural networks trained in an end-to-end fashion have brought about exceptional advances in computer vision, especially in computational perception. We go beyond perception and seek to enable vision modules to reason about perceived visual entities such as scenes, objects and actions. We introduce a challenging visual reasoning task, Image-Based Event Sequencing (IES) and compile the first IES dataset, Blocksworld Image Reasoning Dataset (BIRD). Motivated by the blocksworld concept, we propose a modular approach supported by literature in cognitive psychology and children's development. We decompose the problem into two stages - visual perception and event sequencing, and show that our approach can be extended to natural images without re-training.
5

Guo, Dan, Yang Wang, Peipei Song, and Meng Wang. "Recurrent Relational Memory Network for Unsupervised Image Captioning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/128.

Abstract:
Unsupervised image captioning with no annotations is an emerging challenge in computer vision, where existing approaches usually adopt GAN (Generative Adversarial Network) models. In this paper, we propose a novel memory-based network rather than a GAN, named the Recurrent Relational Memory Network (R2M). Unlike complicated and sensitive adversarial learning, which performs poorly for long sentence generation, R2M implements a concepts-to-sentence memory translator through two-stage memory mechanisms, fusion and recurrent memories, correlating the relational reasoning between common visual concepts and the generated words over long periods. R2M encodes visual context through unsupervised training on images, while enabling the memory to learn from an irrelevant textual corpus in a supervised fashion. Our solution has fewer learnable parameters and higher computational efficiency than GAN-based methods, which suffer from parameter sensitivity. We experimentally validate the superiority of R2M over the state of the art on all benchmark datasets.
