Selection of scholarly literature on the topic "Fashion images"


Browse the lists of recent articles, books, theses, reports and other scholarly sources on the topic "Fashion images".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Fashion images"

1

Lee, Jeong Hwa, and Jisoo Ha. „Black Fashion-manias’ Images and Fashion Styles“. Fashion & Textile Research Journal 22, no. 2 (April 30, 2020): 139–48. http://dx.doi.org/10.5805/sfti.2020.22.2.139.
2

Silva, Ana Cláudia Suriani da, and Maria Claudia Bonadio. „Images of Brazilian fashion“. Film, Fashion & Consumption 2, no. 3 (December 1, 2013): 213–16. http://dx.doi.org/10.1386/ffc.2.3.213_2.
3

Kim, Hyang-Ja, and Young-Sam Kim. „Virtuality in Digital Fashion Images“. Journal of the Korean Society of Clothing and Textiles 39, no. 2 (April 30, 2015): 233. http://dx.doi.org/10.5850/jksct.2015.39.2.233.
4

Davis, Angela Y. „Afro Images: Politics, Fashion, and Nostalgia“. Critical Inquiry 21, no. 1 (October 1994): 37–45. http://dx.doi.org/10.1086/448739.
5

Park, Yi Yeon, and Gi Young Kwon. „Fashion Types & Aesthetic Meanings of Modern Fashion adopting Supervillain Images“. Journal of Basic Design & Art 22, no. 3 (June 2021): 183–98. http://dx.doi.org/10.47294/ksbda.22.3.15.
6

Arjana, Sophia Rose. „Islamic Fashion and Anti-Fashion“. American Journal of Islam and Society 32, no. 2 (April 1, 2015): 104–7. http://dx.doi.org/10.35632/ajis.v32i2.973.

Abstract:
This volume of scholarship surrounding Islamic fashion presents a counternarrative to a dominant story: that Muslim women in the West are subjugated by the oppressive and patriarchal yoke of Islam. Islamic Fashion and Anti-Fashion: New Perspectives from Europe and North America offers a fresh new look at veiling, its intersection with religious piety, family, community, religious authority, fashion, and commoditization through sixteen distinct studies ranging from clothing items like the burqini and the pardosu to larger issues surrounding identity and politics, such as North American Islamophobia and its impact on Canadian Muslims. This book represents a large field of research on Muslim women’s lived experiences, one that reveals the complexities inherent in these religious actors whose choices of dress reveal a large set of competing values, desires, and commitments. The book is organized into five sections: location and encounter, history and heritage, the marketplace, fashion and media, and fashion and anti-fashion. Two of its attractive features are the numerous black and white images running through many of the chapters, as well as the two groups of stunning, provocative color photographs showing the richness of Islamic fashion, from "hijabi street style" to London Muslim hipster style ...
7

Kim, Namhoon, Eunha Chun, and Eunju Ko. „Country of origin effects on brand image, brand evaluation, and purchase intention“. International Marketing Review 34, no. 2 (April 10, 2017): 254–71. http://dx.doi.org/10.1108/imr-03-2015-0071.

Abstract:
Purpose: The purpose of this paper is to analyze how national stereotype, country of origin (COO), and fashion brand images influence consumers’ brand evaluations and purchase intentions regarding fashion collections. Korea (Seoul) and overseas (New York and Paris) collections are compared and analyzed.
Design/methodology/approach: The authors conduct a structural equation modeling and multi-group analysis using data collected from Seoul, New York, and Paris.
Findings: Consumers make higher brand evaluations and ultimately have stronger purchase intentions toward fashion collections from countries that have stronger COO and fashion brand images. In the context of fashion collections, COO image is greatly influenced by a nation’s political-economic and cultural-artistic images. In addition, comparing the domestic Seoul fashion collection with the New York and Paris collections reveals that national stereotype images, COO images of fashion collections, and fashion brand images lead to different brand evaluations and purchase intentions.
Originality/value: The overarching value of the study is that it expands COO research, which has been limited to actual products. The results also provide a basic foundation for establishing marketing strategy based on COO image as a way to enhance the development and image of fashion collections.
8

Lee, Selee. „Fashion Discourses on Images of Calvin Klein“. Archives of Design Research 31, no. 4 (November 30, 2018): 155–68. http://dx.doi.org/10.15187/adr.2018.11.31.4.155.
9

Obaid, Mustafa Amer, and Wesam M. Jasim. „Pre-convoluted neural networks for fashion classification“. Bulletin of Electrical Engineering and Informatics 10, no. 2 (April 1, 2021): 750–58. http://dx.doi.org/10.11591/eei.v10i2.2750.

Abstract:
In this work, the classification of Fashion-MNIST images with convolutional neural networks is discussed. The Fashion-MNIST dataset contains 28×28 grayscale images of 70,000 fashion products from 10 classes, with 7,000 images per category; 60,000 images form the training set and 10,000 images the evaluation set. The data has first been pre-processed for resizing and noise reduction, then normalized to ensure that all the data are on the same scale, which usually improves performance. After normalization the data is augmented so that each image yields three outputs: the first is obtained by rotating the original, the second as an acute-angle image, and the third as a tilted image. The new dataset consists of 180,000 images for the training phase and 30,000 images for the testing phase. Finally, the data is fed as input to the training process of the pre-convolution network, a five-layer convolutional deep neural network trained on the augmented data. The proposed system reaches 94% accuracy, compared with 93% for VGG16 and 92% for AlexNet.
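To make the pipeline sketched in this abstract more concrete, the following minimal example trains a small convolutional classifier on Fashion-MNIST after enlarging the training set with fixed-angle rotations. It is not the authors' code; the layer sizes, rotation angles and training settings are illustrative assumptions.

```python
# Minimal sketch (not the cited authors' implementation): a small CNN on
# Fashion-MNIST with rotation-based augmentation, roughly mirroring the
# normalize -> augment -> train pipeline described in the abstract.
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

# Normalize to [0, 1] and add a channel dimension.
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

def rotate(images, degrees):
    # Rotate a batch by a fixed angle; the factor is a fraction of a full turn.
    layer = tf.keras.layers.RandomRotation(
        factor=(degrees / 360.0, degrees / 360.0), fill_mode="nearest")
    return layer(images, training=True).numpy()

# Each image also appears rotated by +/- 15 degrees (angles are an assumption,
# not taken from the paper), giving 180,000 training images as in the abstract.
x_aug = np.concatenate([x_train, rotate(x_train, 15.0), rotate(x_train, -15.0)])
y_aug = np.concatenate([y_train, y_train, y_train])
perm = np.random.permutation(len(x_aug))
x_aug, y_aug = x_aug[perm], y_aug[perm]

# Five-layer convolutional classifier (depth chosen for illustration).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_aug, y_aug, epochs=5, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```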
10

Häsler, Leonie. „Stereo Imaging In Fashion Photography“. Networking Knowledge: Journal of the MeCCSA Postgraduate Network 11, no. 1 (April 30, 2018): 38–55. http://dx.doi.org/10.31165/nk.2018.111.528.

Abstract:
Fashion photographs are generally two-dimensional images showing one side of a three-dimensional model. This paper, however, deals with far less well-known stereoscopic fashion photographs. Stereoscopy is a technique that creates the illusion of a 3-D image. Based on the image collection of Swiss textile and clothes company HANRO, the article analyzes the composition of 3-D pictures by putting them in a broader media-historical context. The archived stereoscopic photographs date back to the 1950s and show a series of women’s fashion. In the same period, Hollywood experienced a 3-D-boom that may have had a technical and aesthetical impact on these photographs. Although fashion is not mediated in moving images in this case study, codes or formal languages of a film are inscribed in the images, as will be shown in the following text. Building on these findings, this paper further discusses the influence of cinematography and other media practices on the fashion industry’s attempt to free its fashion imagery from the confines of a two-dimensional page.

Theses and dissertations on the topic "Fashion images"

1

Mennborg, Alexander. „AI-Driven Image Manipulation : Image Outpainting Applied on Fashion Images“. Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85148.

Abstract:
The e-commerce industry frequently has to deal with displaying product images on a website where the images are provided by the selling partners. The images in question can have drastically different aspect ratios and resolutions, which makes it harder to present them while maintaining a coherent user experience. Manipulating images by cropping can sometimes result in parts of the foreground (i.e. the product or person within the image) being cut off. Image outpainting is a technique that allows images to be extended past their boundaries and can be used to alter the aspect ratio of images. Together with object detection for locating the foreground, this makes it possible to manipulate images without sacrificing parts of the foreground. For image outpainting a deep learning model was trained on product images that can extend images by at least 25%. The model achieves an 8.29 FID score, a 44.29 PSNR score and a 39.95 BRISQUE score. For testing this solution in practice a simple image manipulation pipeline was created which uses image outpainting when needed, and it shows promising results. Images can be manipulated in under a second running on a ZOTAC GeForce RTX 3060 (12 GB) GPU and in a few seconds running on an Intel Core i7-8700K (16 GB) CPU. There is also a special case of images where the background has been digitally replaced with a solid color; these can be outpainted even faster without deep learning.
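The closing remark of the abstract — that solid-background images can be extended without deep learning — suggests a simple non-learned baseline: pad the canvas with the background colour until a target aspect ratio is reached. The sketch below illustrates that idea only; it is not the thesis implementation, and it assumes the background colour can be sampled from an image corner.

```python
# Illustrative baseline, not the thesis implementation: extend a product image
# that has a solid background to a target aspect ratio by padding with the
# background colour sampled from a corner pixel.
from PIL import Image

def outpaint_solid_background(path, target_ratio=1.0):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    bg = img.getpixel((0, 0))  # assumption: the corner shows the background colour

    # Smallest canvas with the target width/height ratio that contains the image.
    if w / h < target_ratio:
        new_w, new_h = int(round(h * target_ratio)), h
    else:
        new_w, new_h = w, int(round(w / target_ratio))

    canvas = Image.new("RGB", (new_w, new_h), bg)
    canvas.paste(img, ((new_w - w) // 2, (new_h - h) // 2))  # centre the original
    return canvas

# Example: convert a tall product shot to a 1:1 canvas without cropping.
# outpaint_solid_background("product.jpg", target_ratio=1.0).save("product_1x1.jpg")
```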
2

Simó-Serra, Edgar. „Understanding human-centric images: from geometry to fashion“. Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/327030.

Abstract:
Understanding humans from photographs has always been a fundamental goal of computer vision. Early works focused on simple tasks such as detecting the location of individuals by means of bounding boxes. As the field progressed, harder and higher-level tasks have been undertaken. For example, from human detection came 2D and 3D human pose estimation, in which the task consists of identifying the location in the image or space of all the different body parts, e.g., head, torso, knees, arms, etc. Human attributes also became a great source of interest as they allow recognizing individuals and other properties such as gender or age. Later, the attention turned to the recognition of the action being performed. This, in general, relies on the previous works on pose estimation and attribute classification. Currently, even higher-level tasks are being conducted, such as predicting the motivations of human behavior or identifying the fashionability of an individual from a photograph. In this thesis we have developed a hierarchy of tools that covers this whole range of problems, from low-level feature point descriptors to high-level fashion-aware conditional random field models, all with the objective of understanding humans from monocular RGB images. In order to build these high-level models it is paramount to have a battery of robust and reliable low- and mid-level cues. Along these lines, we have proposed two low-level keypoint descriptors: one based on the theory of heat diffusion on images, and the other using a convolutional neural network to learn discriminative image patch representations. We also introduce distinct low-level generative models for representing human pose: in particular we present a discrete model based on a directed acyclic graph and a continuous model that consists of poses clustered on a Riemannian manifold. As mid-level cues we propose two 3D human pose estimation algorithms: one that estimates the 3D pose given a noisy 2D estimation, and an approach that simultaneously estimates both the 2D and 3D pose. Finally, we formulate higher-level models built upon low- and mid-level cues for human understanding. Concretely, we focus on two different tasks in the context of fashion: semantic segmentation of clothing, and predicting fashionability from images with metadata in order to ultimately provide fashion advice to the user. In summary, to robustly extract knowledge from images with the presence of humans it is necessary to build high-level models that integrate low- and mid-level cues. In general, using and understanding strong features is critical for obtaining reliable performance. The main contribution of this thesis is in proposing a variety of low-, mid- and high-level algorithms for human-centric images that can be integrated into higher-level models for comprehending humans from photographs, as well as tackling novel fashion-oriented problems.
3

Ginsburg, Sara A. „Postfeminism Analysis of Sexualized Images in Fashion Advertisements“. Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1543.

Abstract:
This article applies methods of semiotic analysis to representations and understandings of female sexuality in fashion advertising. Through the framework of Paulo Freire’s action learning model, also known as the "empowerment spiral", it is concluded that advertisements dealing in overt sexualizations of traditional conceptions of femininity produce a one-sided discourse on femininity in which the decoding of media images is oversimplified through a binary approach. In effect, this produces conflicts detrimental to feminist progress by virtue of ostracizing postfeminist appreciations of sexual empowerment.
4

Neupane, Aashish. „Visual Saliency Analysis on Fashion Images Using Image Processing and Deep Learning Approaches“. OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2784.

Abstract:
ABSTRACT
AASHISH NEUPANE, for the Master of Science degree in BIOMEDICAL ENGINEERING, presented on July 35, 2020, at Southern Illinois University Carbondale. TITLE: VISUAL SALIENCY ANALYSIS ON FASHION IMAGES USING IMAGE PROCESSING AND DEEP LEARNING APPROACHES. MAJOR PROFESSOR: Dr. Jun Qin.
State-of-the-art computer vision technologies have been applied in fashion in multiple ways, and saliency modeling is one of those applications. In computer vision, a saliency map is a 2D topological map which indicates the probabilistic distribution of visual attention priorities. This study focuses on the analysis of visual saliency on fashion images using multiple saliency models, evaluated by several evaluation metrics. A human subject study has been conducted to collect people’s visual attention on 75 fashion images. Binary ground-truth fixation maps for these images have been created based on the experimentally collected visual attention data using a Gaussian blurring function. Saliency maps for these 75 fashion images were generated using multiple conventional saliency models as well as deep-feature-based state-of-the-art models. DeepFeat has been studied extensively, with 44 sets of saliency maps, exploiting the features extracted from GoogLeNet and ResNet50. Seven other saliency models have also been utilized to predict saliency maps on these images. The results were compared over 5 evaluation metrics – AUC, CC, KL Divergence, NSS and SIM. The performance of all 8 saliency models on prediction of visual attention on fashion images over all five metrics was comparable to the benchmarked scores. Furthermore, the models perform well consistently over multiple evaluation metrics, thus indicating that saliency models could in fact be applied to effectively predict salient regions in random fashion advertisement images.
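For readers unfamiliar with the evaluation metrics listed in this abstract, the sketch below gives generic NumPy implementations of three of them (NSS, CC and SIM). It is a standard textbook formulation, not code from the thesis.

```python
# Generic implementations of three common saliency metrics (NSS, CC, SIM);
# an illustration of the evaluation described in the abstract, not the thesis code.
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    return s[fixations > 0].mean()

def cc(saliency, density):
    """Pearson correlation between predicted and ground-truth saliency maps."""
    a = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    b = (density - density.mean()) / (density.std() + 1e-8)
    return (a * b).mean()

def sim(saliency, density):
    """Similarity: histogram intersection of the two maps normalized to sum to 1."""
    a = saliency / (saliency.sum() + 1e-8)
    b = density / (density.sum() + 1e-8)
    return np.minimum(a, b).sum()

# Toy example with random maps and a sparse binary fixation mask.
pred = np.random.rand(64, 64)
gt_density = np.random.rand(64, 64)
gt_fix = (np.random.rand(64, 64) > 0.99).astype(np.uint8)
print(nss(pred, gt_fix), cc(pred, gt_density), sim(pred, gt_density))
```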
5

Liu, Christine M. (Christine Mae). „Urbanhermes : fashion signaling and the social mobility of images“. Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37393.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.
Includes bibliographical references (p. 91-94).
Urbanhermes is a messenger bag designed to display and disseminate meaningful yet ephemeral images between people in the public realm. These images surface as representation of the daily zeitgeist; the image as fashion emerges and grows in popularity as knowledge diffuses over a very short period of time. A wireless communication infrastructure allows users to pass along images from bag to bag, and potential proximity sensing adds awareness of others nearby who share a similar fashion signal. Dynamically formed communities interplay and merge through the coupled system of shared images. Urbanhermes, through adding layers of highly temporal information upon an individual's public identity, attempts to enrich social interaction and understand the cultural role of electronic fashion. The thesis, combining both social theory and technology, develops a fashion system that can enable further discussion in areas of signaling in sociable media design. We hypothesize that electronic fashion signals in the physical realm will allow people to disclose and perceive expressive qualities about themselves that would not be possible by current material fashions. This project presents a design framework and a proof-of-concept study in which this hypothesis may be examined.
by Christine M. Liu.
S.M.
6

Tu, Guoyun. „Image Captioning On General Data And Fashion Data : An Attribute-Image-Combined Attention-Based Network for Image Captioning on Mutli-Object Images and Single-Object Images“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-282925.

Abstract:
Image captioning is a crucial field across computer vision and natural language processing. It could be widely applied to high-volume web images, such as conveying image content to visually impaired users. Many methods are adopted in this area, such as attention-based methods and semantic-concept-based models. These achieve excellent performance on general image datasets such as the MS COCO dataset. However, it is still left unexplored on single-object images. In this paper, we propose a new attribute-information-combined attention-based network (AIC-AB Net). At each time step, attribute information is added as a supplement to visual information. For sequential word generation, spatial attention determines specific regions of images to pass to the decoder. The sentinel gate decides whether to attend to the image or to the visual sentinel (what the decoder already knows, including the attribute information). Text attribute information is synchronously fed in to help image recognition and reduce uncertainty. We build a new fashion dataset consisting of fashion images to establish a benchmark for single-object images. This fashion dataset consists of 144,422 images from 24,649 fashion products, with one description sentence for each image. Our method is tested on the MS COCO dataset and the proposed Fashion dataset. The results show the superior performance of the proposed model on both multi-object images and single-object images. Our AIC-AB Net outperforms the state-of-the-art network, Adaptive Attention Network, by 0.017, 0.095, and 0.095 (CIDEr score) on the COCO dataset, Fashion dataset (Bestsellers), and Fashion dataset (all vendors), respectively. The results also reveal the complementarity of attention architecture and attribute information.
7

Ragnarsson, Julia. „WHO ARE U WEARING? : investigating iconic celebrity fashion images as dress“. Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-10749.

Abstract:
This collection is an observation of the relationship between celebrity culture, fashion and the female form, exploring how the modern fashion image is communicated to a wider audience through mass media. At the same time, the work aims to explore new ways of developing clothing from a starting point in figurative prints. The work explores the body as the new context of the celebrity image in order to display different perspectives of both image and body. This has been found through an interaction between print and body, the visual perception within the relationship of these, and from a social point of view. The work displays thoughts regarding perspectives on body ideals, female stereotypes, fashion, clothing, mass media and fame in today’s society. The bodies of celebrities are seen as walking billboards and advertisements for designers; the work questions this adopted culture by highlighting the phenomenon. While the work is a comment on the ridiculousness within mass media and celebrity worship, it is also a homage to these women who have put a mark on fashion history. The final result could be seen as a series of examples of possible outcomes from working with the image in relation to the body, but also as a statement on the current state of fashion, where new ideas seem less important than who is wearing what.
8

Cant, Mercedes. „#AerieREAL: Exploring the Tactics of Using Authentic Images in Branding of Young Women’s Fashion Companies“. Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39625.

Abstract:
This thesis explores themes of authenticity in the Aerie REAL branding campaign. In it, I explore how Aerie links notions of authenticity, expressed as a vocal denunciation of photo-editing techniques, with the ideal female body. To do this, I analyze Aerie’s branding materials (including social media posts on two different websites, as well as Aerie product photography) in context of its lack of photo-editing and other branding choices, including its choice of brand spokesperson. I consider these materials within a semiotic framework developed from the French school of semiotics, and analyze them both through this framework and a content analysis. I also consider concepts of Aerie’s brand personality. In this study I illuminate many of the tensions between Aerie’s explicit goals in its REAL campaign and what it has presented within the campaign. This has implications for future representations of women in advertising, as well as the use of authenticity as a brand position.
9

Chanforan, Elsa. „Les griffes et le couturier : Représentations et usages contrastés de l'animalité dans l'iconographie de la mode“. Thesis, Perpignan, 2018. http://www.theses.fr/2018PERP0047/document.

Abstract:
This research explores the connections between fashion and the animal by means of an iconographic study guided by a multidisciplinary approach. Raising both fascination and paradoxes, the use of the animal and its attributes – physical, graphical, symbolic – serves, in the first place, fashion’s material and symbolic productions: animals are involved in the transfiguration-of-reality strategies peculiar to the unique economic sector that is the fashion industry. At the same time, animals appear to be an efficient and aesthetic way of representing human activity: they are a tool to rethink the world, human nature and social relationships. Thus, involved in the general contemporary dynamics of keen interest in a fantasized Wilderness, fashion iconography contributes to the current rewriting of the definition of the human. Nevertheless, fashion pictures play a part in the growing negotiation of boundaries between members of the biological field. By developing a specific work on the human body and its fineries, they offer an alternative path to the reconsideration of an animal otherness whose borders seem more permeable every day. This work is an attempt to examine how fashion’s visual forms and imaginary express the contemporary complexity of fast-changing anthropozoological interactions.
10

Yamamoto, Takahisa. „Deep Learning Approaches on the Recognition of Affective Properties of Images“. Kyoto University, 2020. http://hdl.handle.net/2433/259068.


Books on the topic "Fashion images"

1

Dufresne, Jean-Luc. Images de mode, 1940-1960: Hommage a Bernard Blossac. Granville: Musée Christian Dior, 1996.

2

Laurent, Yves Saint. Yves St. Laurent: Images of design 1958-1988. New York: Knopf, 1988.

3

Koda, Harold, and Fashion Institute of Technology (New York, N.Y.), eds. Giorgio Armani: Images of man. New York: Rizzoli, 1990.
4

Elzingre, Martine. Femmes habillées: La mode de luxe--styles et images. Paris: Editions Austral, 1996.

5

Black & white model photography: Techniques & images. Buffalo, NY: Amherst Media, 1998.

6

Cashin, Marilynn A. A moment in time: Images of Victorian fashions from the mid-1800s. South Plainfield, N.J: Mac Publications, 1992.

7

Ingres in fashion: Representations of dress and appearance in Ingres's images of women. New Haven: Yale University Press, 1999.

8

Newton, Helmut. Helmut Newton: Nouvelle images. Paris: Espace Photographique de Paris, 1988.

9

Newton, Helmut. Helmut Newton: New images. Bologna: Grafis, 1989.

10

Images in time: Flashing forward, backward, in front and behind photography in fashion, advertising and the press. Bath: Wunderkammer, 2011.


Book chapters on the topic "Fashion images"

1

Kritzmöller, Monika. „Images of fashion – Images of passion“. In Kampf um Images, 181–204. Wiesbaden: Springer Fachmedien Wiesbaden, 2014. http://dx.doi.org/10.1007/978-3-658-01712-5_9.

2

Bug, Peter, Marcus Adam, and Katharina Moessle. „Analysis of Moving Images in Fashion Stores in Stuttgart“. In Fashion and Film, 269–77. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9542-0_13.
3

Yan, Cairong, Umar Subhan Malhi, Yongfeng Huang, and Ran Tao. „Unsupervised Deep Clustering for Fashion Images“. In Communications in Computer and Information Science, 85–96. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21451-7_8.
4

Bug, Peter, and Julia Helwig. „Overview of Product Presentation with Moving Images in Fashion E-Commerce“. In Fashion and Film, 217–41. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9542-0_11.
5

Bug, Peter, and Julia Helwig. „Current Use of Moving Images for Product Presentation in Fashion E-Commerce“. In Fashion and Film, 243–67. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9542-0_12.
6

Karessli, Nour, Romain Guigourès, and Reza Shirvany. „Learning Size and Fit from Fashion Images“. In Lecture Notes in Social Networks, 111–31. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55218-3_6.
7

Pillai, Raji S., and K. Sreekumar. „Classification of Fashion Images Using Transfer Learning“. In Evolution in Computational Intelligence, 325–32. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5788-0_32.
8

Makkapati, Vishnu, and Arun Patro. „Enhancing Symmetry in GAN Generated Fashion Images“. In Artificial Intelligence XXXIV, 405–10. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71078-5_34.
9

Jimenez, Cinthia M. „Grotesque Images in Fashion Ads: An Exploration of the Effect of Grotesque Images on Narrative Engagement“. In Fashion Communication in the Digital Age, 245–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15436-3_22.

10

Meshkini, Khatereh, Jan Platos, and Hassan Ghassemain. „An Analysis of Convolutional Neural Network for Fashion Images Classification (Fashion-MNIST)“. In Advances in Intelligent Systems and Computing, 85–95. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50097-9_10.

Conference papers on the topic "Fashion images"

1

Jiang, Shuhui, and Yun Fu. „Fashion Style Generator“. In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/520.

Abstract:
In this paper, we focus on a new problem: applying artificial intelligence to automatically generate fashion style images. Given a basic clothing image and a fashion style image (e.g., leopard print), we generate a clothing image with that style in real time with a neural fashion style generator. Fashion style generation is related to recent artistic style transfer works, but has its own challenges: the synthetic image should preserve a similar design to the basic clothing and meanwhile blend the new style pattern onto the clothing. Neither existing global nor patch-based neural style transfer methods solve these challenges well. In this paper, we propose an end-to-end feed-forward neural network which consists of a fashion style generator and a discriminator. The global and patch-based style and content losses calculated by the discriminator alternately back-propagate through the generator network and optimize it. The global optimization stage preserves the clothing form and design, and the local optimization stage preserves the detailed style pattern. Extensive experiments show that our method outperforms the state of the art.
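The global style and content losses that this abstract contrasts with its patch-based variant are, in essence, the classic Gram-matrix losses from neural style transfer. The following sketch computes those loss terms with a pretrained VGG19 in TensorFlow/Keras; it illustrates the losses only, not the authors' generator–discriminator network, and the layer choices and loss weighting are assumptions.

```python
# Minimal sketch of global content and style (Gram-matrix) losses from classic
# neural style transfer; an illustration, not the cited paper's architecture.
import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
vgg.trainable = False
content_layer = "block5_conv2"
style_layers = ["block1_conv1", "block2_conv1", "block3_conv1", "block4_conv1"]
extractor = tf.keras.Model(
    vgg.input,
    [vgg.get_layer(content_layer).output]
    + [vgg.get_layer(name).output for name in style_layers],
)

def gram(features):
    # Gram matrix of feature maps: channel-by-channel correlations.
    b, h, w, c = features.shape
    f = tf.reshape(features, (b, h * w, c))
    return tf.matmul(f, f, transpose_a=True) / tf.cast(h * w, tf.float32)

def style_content_loss(generated, content_img, style_img):
    def feats(x):
        x = tf.keras.applications.vgg19.preprocess_input(x * 255.0)
        outputs = extractor(x)
        return outputs[0], outputs[1:]

    gen_content, gen_style = feats(generated)
    tgt_content, _ = feats(content_img)
    _, tgt_style = feats(style_img)

    content_loss = tf.reduce_mean(tf.square(gen_content - tgt_content))
    style_loss = tf.add_n(
        [tf.reduce_mean(tf.square(gram(g) - gram(s)))
         for g, s in zip(gen_style, tgt_style)]
    )
    return content_loss + 1e-2 * style_loss  # weighting is an arbitrary choice

# Usage: images as float32 tensors in [0, 1] with shape (1, H, W, 3), e.g.
# loss = style_content_loss(generated, clothing_image, leopard_print_image)
```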
2

Zhan, Huijing, Boxin Shi, Jiawei Chen, Qian Zheng, Ling-Yu Duan, and Alex C. Kot. „Fashion Recommendation on Street Images“. In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8802939.
3

Hsiao, Wei-Lin, and Kristen Grauman. „Creating Capsule Wardrobes from Fashion Images“. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00748.
4

Zhu, Xiangyu, Xiaoguang Han, Wei Zhang, Jian Zhao, and Ligang Liu. „Learning Intrinsic Decomposition of Complex-Textured Fashion Images“. In 2020 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2020. http://dx.doi.org/10.1109/icme46284.2020.9102901.
5

Ak, Kenan, Ashraf Kassim, Joo-Hwee Lim, and Jo Yew Tham. „Attribute Manipulation Generative Adversarial Networks for Fashion Images“. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.01064.
6

Jagadeesh, Vignesh, Robinson Piramuthu, Anurag Bhardwaj, Wei Di, and Neel Sundaresan. „Large scale visual recommendations from street fashion images“. In KDD '14: The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2623330.2623332.
7

Martinsson, John, and Olof Mogren. „Semantic Segmentation of Fashion Images Using Feature Pyramid Networks“. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00382.
8

Yildirim, Gokhan, Nikolay Jetchev, Roland Vollgraf, and Urs Bergmann. „Generating High-Resolution Fashion Model Images Wearing Custom Outfits“. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00389.
9

Bhatnagar, Shobhit, Deepanway Ghosal, and Maheshkumar H. Kolekar. „Classification of fashion article images using convolutional neural networks“. In 2017 Fourth International Conference on Image Information Processing (ICIIP). IEEE, 2017. http://dx.doi.org/10.1109/iciip.2017.8313740.
10

Khurana, Tarasha, Kushagra Mahajan, Chetan Arora, and Atul Rai. „Exploiting Texture Cues for Clothing Parsing in Fashion Images“. In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451281.

Organizational reports on the topic "Fashion images"

1

Son, Jihyeong, Nigel AR Joseph, and Vicki McCracken. Put Faces to Your Instagram Posts. Elements for a Fashion Brand's Social Media Images to Help Overcome the "Algorithm". Ames (Iowa): Iowa State University. Library, January 2019. http://dx.doi.org/10.31274/itaa.10232.
2

Jung, Dongjin, Hyosun An, and Minjung Park. Analysis of Gucci Runway Images Using an Artificial Intelligence Based Visual Search Tool: A Comparison of Fashion Styles by Creative Directors. Ames (Iowa): Iowa State University. Library, January 2019. http://dx.doi.org/10.31274/itaa.8264.
3

Martin, Kathi, Nick Jushchyshyn, and Daniel Caulfield-Sriklad. 3D Interactive Panorama Jessie Franklin Turner Evening Gown c. 1932. Drexel Digital Museum, 2015. http://dx.doi.org/10.17918/9zd6-2x15.

Abstract:
The 3D Interactive Panorama provides multiple views and zoomed-in details of a bias-cut evening gown by Jessie Franklin Turner, an American woman designer of the 1930s. The gown is constructed from pink 100% silk charmeuse with piping along the bodice edges and design lines. It has soft tucks at the neckline and small of back, a unique strap detail in the back and a self belt. The interactive is part of the Drexel Digital Museum, an online archive of fashion images. The original gown is part of the Fox Historic Costume, Drexel University, a Gift of Mrs. Lewis H. Pearson 64-59-7.
4

Martin, Kathi, Nick Jushchyshyn, and Claire King. James Galanos, Silk Chiffon Afternoon Dress c. Fall 1976. Drexel Digital Museum, 2018. http://dx.doi.org/10.17918/q3g5-n257.

Abstract:
The URL links to a website page in the Drexel Digital Museum (DDM) fashion image archive containing a 3D interactive panorama of an afternoon dress by American fashion designer James Galanos, with related text. The dress is from Galanos' Fall 1976 collection. It is made from pale pink silk chiffon and finished with hand stitching on the hems and edges. The dress was gifted to Drexel University as part of The James G. Galanos Archive at Drexel University in 2016. After it was imaged the dress was deemed too fragile to exhibit. By imaging it using high-resolution GigaPan technology we are able to create an archival-quality digital record of the dress and exhibit it virtually at life size in 3D panorama. The panorama is an HTML5-formatted version of an ultra-high-resolution ObjectVR created from stitched tiles captured with GigaPan technology. It is representative of the ongoing research of the DDM, an international, interdisciplinary group of researchers focused on production, conservation and dissemination of new media for exhibition of historic fashion.
5

Lee, Hyun-Jung, Ji-Yeon Lee, and Kyu-Hye Lee. Brand Image and Evaluation Factors of Fashion Product Advertisement. Ames: Iowa State University, Digital Repository, 2013. http://dx.doi.org/10.31274/itaa_proceedings-180814-656.
6

Cho, Eunjoo, Ann Marie Fiore, and Daniel W. Russell. Cross-Cultural Validation of a Fashion Brand Image Scale. Ames: Iowa State University, Digital Repository, November 2015. http://dx.doi.org/10.31274/itaa_proceedings-180814-41.
7

Kim, Young. Fashion Image: Interdisciplinary and Collaborative Approach to Portfolio Presentation. Ames: Iowa State University, Digital Repository, November 2016. http://dx.doi.org/10.31274/itaa_proceedings-180814-1330.

8

Cho, Eunjoo, Ui-Jeen Yu, and Ann Marie Fiore. The Role of Fashion Innovativeness, Brand Image, and Lovemarks in Enhancing Loyalty towards Fashion-Related Brands. Ames: Iowa State University, Digital Repository, November 2015. http://dx.doi.org/10.31274/itaa_proceedings-180814-42.
9

Park, Minjung, and Hyunjoo Im. Imagery Fluency and Fashion Involvement in Online Apparel Shopping. Ames: Iowa State University, Digital Repository, November 2016. http://dx.doi.org/10.31274/itaa_proceedings-180814-1459.
10

Martin, Kathi, Nick Jushchyshyn, and Claire King. James Galanos Evening Gown c. 1957. Drexel Digital Museum, 2018. http://dx.doi.org/10.17918/jkyh-1b56.

Abstract:
The URL links to a website page in the Drexel Digital Museum (DDM) fashion image archive containing a 3D interactive panorama of an evening gown by American fashion designer James Galanos, with related text. The gown is from Galanos' Fall 1957 collection. It is embellished with polychrome glass beads in a red and green tartan plaid pattern on a silk base. It was a gift of Mrs. John Thouron and is in The James G. Galanos Archive at Drexel University. The panorama is an HTML5-formatted version of an ultra-high-resolution ObjectVR created from stitched tiles captured with GigaPan technology. It is representative of the ongoing research of the DDM, an international, interdisciplinary group of researchers focused on production, conservation and dissemination of new media for exhibition of historic fashion.