A bibliography of scholarly literature on the topic "Person re-ID models"

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Person re-ID models".

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its online abstract, when these are available in the metadata.

Journal articles on the topic "Person re-ID models"

1

Liu, Minghui, Yafei Zhang, and Huafeng Li. "Survey of Cross-Modal Person Re-Identification from a Mathematical Perspective." Mathematics 11, no. 3 (January 28, 2023): 654. http://dx.doi.org/10.3390/math11030654.

Abstract:
Person re-identification (Re-ID) aims to retrieve a particular pedestrian’s identification from a surveillance system consisting of non-overlapping cameras. In recent years, researchers have begun to focus on open-world person Re-ID tasks based on non-ideal situations. One of the most representative of these is cross-modal person Re-ID, which aims to match probe data with target data from different modalities. According to the modalities of probe and target data, we divided cross-modal person Re-ID into visible–infrared, visible–depth, visible–sketch, and visible–text person Re-ID. In cross-modal person Re-ID, the most challenging problem is the modal gap. According to the different methods of narrowing the modal gap, we classified the existing works into picture-based style conversion methods, feature-based modality-invariant embedding mapping methods, and modality-unrelated auxiliary information mining methods. In addition, by generalizing the aforementioned works, we find that although deep-learning-based models perform well, the black-box-like learning process makes these models less interpretable and generalizable. Therefore, we attempted to interpret different cross-modal person Re-ID models from a mathematical perspective. Through the above work, we attempt to compensate for the lack of mathematical interpretation of models in previous person Re-ID reviews and hope that our work will bring new inspiration to researchers.
2

Yang, Fengxiang, Zhun Zhong, Hong Liu, Zheng Wang, Zhiming Luo, Shaozi Li, Nicu Sebe, and Shin'ichi Satoh. "Learning to Attack Real-World Models for Person Re-identification via Virtual-Guided Meta-Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3128–35. http://dx.doi.org/10.1609/aaai.v35i4.16422.

Abstract:
Recent advances in person re-identification (re-ID) have led to impressive retrieval accuracy. However, existing re-ID models are challenged by adversarial examples crafted by adding quasi-imperceptible perturbations. Moreover, re-ID systems face the domain shift issue that training and testing domains are not consistent. In this study, we argue that learning powerful attackers with high universality that work well on unseen domains is an important step in promoting the robustness of re-ID systems. Therefore, we introduce a novel universal attack algorithm called "MetaAttack" for person re-ID. MetaAttack can mislead re-ID models on unseen domains by a universal adversarial perturbation. Specifically, to capture common patterns across different domains, we propose a meta-learning scheme to seek the universal perturbation via the gradient interaction between meta-train and meta-test formed by two datasets. We also take advantage of a virtual dataset (PersonX), instead of real ones, to conduct the meta-test. This scheme not only enables us to learn with more comprehensive variation factors but also mitigates the negative effects caused by biased factors of real datasets. Experiments on three large-scale re-ID datasets demonstrate the effectiveness of our method in attacking re-ID models on unseen domains. Our final visualization results reveal some new properties of existing re-ID systems, which can guide us in designing a more robust re-ID model. Code and supplemental material are available at https://github.com/FlyingRoastDuck/MetaAttack_AAAI21.
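
To make the mechanism concrete, here is a minimal PyTorch-style sketch of optimising a universal perturbation over a real (meta-train) and a virtual (meta-test) batch. The toy backbone, input shapes, epsilon budget, and the simple similarity-suppression loss are all illustrative assumptions; the paper's actual meta-learning update and attack objective are more involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy embedding network standing in for a trained re-ID backbone (hypothetical).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64)).eval()
for p in model.parameters():
    p.requires_grad_(False)

delta = torch.zeros(1, 3, 32, 32, requires_grad=True)  # universal perturbation
optimizer = torch.optim.Adam([delta], lr=1e-2)
eps = 8 / 255  # L-infinity budget (assumed value)

def attack_loss(x):
    # Suppress the similarity between clean and perturbed embeddings,
    # a crude surrogate for the paper's attack objective.
    with torch.no_grad():
        f_clean = F.normalize(model(x), dim=1)
    f_adv = F.normalize(model((x + delta).clamp(0, 1)), dim=1)
    return (f_clean * f_adv).sum(dim=1).mean()

for step in range(200):
    x_real = torch.rand(16, 3, 32, 32)     # stand-in for a real dataset (meta-train)
    x_virtual = torch.rand(16, 3, 32, 32)  # stand-in for PersonX (meta-test)
    # First-order simplification: gradients from both splits are summed,
    # whereas the paper couples them through a meta-learning scheme.
    loss = attack_loss(x_real) + attack_loss(x_virtual)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    delta.data.clamp_(-eps, eps)  # keep the perturbation quasi-imperceptible
```
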
3

Liu, Yang, Hao Sheng, Shuai Wang, Yubin Wu, and Zhang Xiong. "Feature-Level Camera Style Transfer for Person Re-Identification." Applied Sciences 12, no. 14 (July 20, 2022): 7286. http://dx.doi.org/10.3390/app12147286.

Abstract:
The person re-identification (re-ID) problem has attracted growing interest in the computer vision community. Most public re-ID datasets are captured by multiple non-overlapping cameras, and the same person may appear dissimilar in different camera views due to variations in illumination, viewpoint and posture. These differences, collectively referred to as camera style variance, make person re-ID still a challenging problem. Recently, researchers have attempted to solve this problem using generative models. The generative adversarial network (GAN) is widely used for pose transfer or data augmentation to bridge the camera style gap. However, these methods, mostly based on image-level GANs, require huge computational power during the training of the generative models. Furthermore, the training process of the GAN is separated from the re-ID model, which makes it hard to achieve a global optimum for both models simultaneously. In this paper, the authors propose to alleviate camera style variance in the re-ID problem by adopting a feature-level Camera Style Transfer (CST) model, which can serve as an intra-class augmentation method and enhance model robustness against camera style variance. Specifically, the proposed CST method transfers the camera style-related information of input features while preserving the corresponding identity information. Moreover, the training process can be embedded into the re-ID model in an end-to-end manner, which means the proposed approach can be deployed with much less time and memory cost. The proposed approach is verified on several different person re-ID baselines. Extensive experiments show the validity of the proposed CST model and its benefits for re-ID performance on the Market-1501 dataset.
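
The abstract does not give implementation details, but an AdaIN-style statistics swap is one minimal way to realise feature-level style transfer; the sketch below (with assumed 4-D feature maps) transfers per-channel feature statistics between cameras while leaving the spatial content, which carries identity, untouched.

```python
import torch

def feature_camera_style_transfer(content_feat, style_feat, eps=1e-5):
    # content_feat, style_feat: (B, C, H, W) feature maps from two cameras.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    # Normalise away the source camera's statistics, then re-dress the
    # identity-bearing content in the target camera's statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean

augmented = feature_camera_style_transfer(torch.randn(8, 256, 24, 12),
                                          torch.randn(8, 256, 24, 12))
```
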
4

Zhu, Xiangping, Xiatian Zhu, Minxian Li, Pietro Morerio, Vittorio Murino, and Shaogang Gong. "Intra-Camera Supervised Person Re-Identification." International Journal of Computer Vision 129, no. 5 (February 26, 2021): 1580–95. http://dx.doi.org/10.1007/s11263-021-01440-4.

Abstract:
Existing person re-identification (re-id) methods mostly exploit a large set of cross-camera identity labelled training data. This requires a tedious data collection and annotation process, leading to poor scalability in practical re-id applications. On the other hand, unsupervised re-id methods do not need identity label information, but they usually suffer from much inferior and insufficient model performance. To overcome these fundamental limitations, we propose a novel person re-identification paradigm based on an idea of independent per-camera identity annotation. This eliminates the most time-consuming and tedious inter-camera identity labelling process, significantly reducing the amount of human annotation effort. Consequently, it gives rise to a more scalable and more feasible setting, which we call Intra-Camera Supervised (ICS) person re-id, for which we formulate a Multi-tAsk mulTi-labEl (MATE) deep learning method. Specifically, MATE is designed for self-discovering the cross-camera identity correspondence in a per-camera multi-task inference framework. Extensive experiments demonstrate the cost-effectiveness superiority of our method over the alternative approaches on three large person re-id datasets. For example, MATE yields 88.7% rank-1 score on Market-1501 in the proposed ICS person re-id setting, significantly outperforming unsupervised learning models and closely approaching conventional fully supervised learning competitors.
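
The per-camera multi-task structure implied by the ICS setting is easy to sketch: one classification head per camera over a shared feature space, with identity labels valid only within each camera. The head sizes and routing below are assumptions, and MATE's self-discovery of cross-camera identity correspondence is omitted.

```python
import torch
import torch.nn as nn

class PerCameraClassifiers(nn.Module):
    """One linear classifier per camera on top of a shared backbone feature;
    identity labels are only consistent within a single camera (sketch)."""
    def __init__(self, feat_dim, ids_per_camera):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(feat_dim, n) for n in ids_per_camera)

    def forward(self, feats, cam_ids):
        # Route each sample's feature to the head of the camera that captured it.
        return [self.heads[c](f) for f, c in zip(feats, cam_ids)]

heads = PerCameraClassifiers(feat_dim=128, ids_per_camera=[50, 42, 61])
logits = heads(torch.randn(4, 128), cam_ids=[0, 2, 1, 0])
```
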
5

Xu, Sheng, Chang Liu, Baochang Zhang, Jinhu Lü, Guodong Guo, and David Doermann. "BiRe-ID: Binary Neural Network for Efficient Person Re-ID." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1s (February 28, 2022): 1–22. http://dx.doi.org/10.1145/3473340.

Abstract:
Person re-identification (Re-ID) has been promoted by the significant success of convolutional neural networks (CNNs). However, the application of such CNN-based Re-ID methods depends on a tremendous consumption of computation and memory resources, which hinders their deployment on resource-limited devices such as next-generation AI chips. As a result, CNN binarization has attracted increasing attention, leading to binary neural networks (BNNs). In this article, we propose a new BNN-based framework for efficient person Re-ID (BiRe-ID). In this work, we discover that the significant performance drop of binarized models on the Re-ID task is caused by the degraded representation capacity of kernels and features. To address these issues, we propose kernel and feature refinement based on generative adversarial learning (KR-GAL and FR-GAL) to enhance the representation capacity of BNNs. We first introduce an adversarial attention mechanism to refine the binarized kernels based on their real-valued counterparts. Specifically, we introduce a scale factor to restore the scale of the 1-bit convolution, and we employ an effective generative adversarial learning method to train the attention-aware scale factor. Furthermore, we introduce a self-supervised generative adversarial network to refine the low-level features using the corresponding high-level semantic information. Extensive experiments demonstrate that BiRe-ID can be effectively implemented on various mainstream backbones for the Re-ID task. In terms of performance, BiRe-ID surpasses existing binarization methods by significant margins, at a level even comparable with its real-valued counterparts. For example, on Market-1501, BiRe-ID achieves 64.0% mAP with a ResNet-18 backbone, with an impressive theoretical 12.51× speedup and 11.75× storage saving. In particular, the KR-GAL and FR-GAL methods show strong generalization on multiple tasks such as Re-ID, image classification, object detection, and 3D point cloud processing.
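
The scale-restoration idea can be sketched independently of the adversarial training: a 1-bit convolution binarises weights and activations with a straight-through estimator and rescales the output by a learnable channel-wise factor. This is a minimal reading of the abstract; the KR-GAL/FR-GAL adversarial refinement is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledBinaryConv2d(nn.Module):
    """1-bit conv with a learnable channel-wise scale factor (sketch)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.scale = nn.Parameter(torch.ones(out_ch, 1, 1))

    def forward(self, x):
        # sign() binarises; the straight-through trick keeps gradients flowing.
        w_bin = self.weight + (self.weight.sign() - self.weight).detach()
        x_bin = x + (x.sign() - x).detach()
        return self.scale * F.conv2d(x_bin, w_bin, padding=1)

out = ScaledBinaryConv2d(3, 16)(torch.randn(2, 3, 32, 32))
```
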
6

Chen, Yun-Chun, Yu-Jhe Li, Xiaofei Du, and Yu-Chiang Frank Wang. "Learning Resolution-Invariant Deep Representations for Person Re-Identification." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8215–22. http://dx.doi.org/10.1609/aaai.v33i01.33018215.

Abstract:
Person re-identification (re-ID) solves the task of matching images across cameras and is among the active research topics in the vision community. Since query images in real-world scenarios might suffer from resolution loss, how to solve the resolution mismatch problem during person re-ID becomes a practical challenge. Instead of applying separate image super-resolution models, we propose a novel network architecture, the Resolution Adaptation and re-Identification Network (RAIN), to solve cross-resolution person re-ID. Advancing the strategy of adversarial learning, we aim at extracting resolution-invariant representations for re-ID, while the proposed model is learned in an end-to-end training fashion. Our experiments confirm that our model can recognize low-resolution query images, even if the resolution is not seen during training. Moreover, the extension of our model for semi-supervised re-ID further confirms the scalability of our proposed method for real-world scenarios and applications.
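
A standard way to extract such resolution-invariant representations adversarially is a gradient reversal layer feeding a resolution discriminator; the sketch below shows this generic pattern, which is an assumption rather than RAIN's exact design.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; sign-flipped gradient in the backward
    pass, so the feature extractor learns to fool a resolution discriminator."""
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

feats = torch.randn(8, 256, requires_grad=True)
rev = GradReverse.apply(feats)  # feed rev into the resolution discriminator
```
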
7

Chen, Di, Shanshan Zhang, Wanli Ouyang, Jian Yang, and Bernt Schiele. "Hierarchical Online Instance Matching for Person Search." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10518–25. http://dx.doi.org/10.1609/aaai.v34i07.6623.

Abstract:
Person search is a challenging task which requires retrieving a person's image and the corresponding position from an image dataset. It consists of two sub-tasks: pedestrian detection and person re-identification (re-ID). One of the key challenges is to properly combine the two sub-tasks into a unified framework. Existing works usually adopt a straightforward strategy by concatenating a detector and a re-ID model directly, either into an integrated model or into separate models. We argue that simply concatenating detection and re-ID is a sub-optimal solution, and we propose a Hierarchical Online Instance Matching (HOIM) loss which exploits the hierarchical relationship between detection and re-ID to guide the learning of our network. Our novel HOIM loss function harmonizes the objectives of the two sub-tasks and encourages better feature learning. In addition, we improve the loss update policy by introducing Selective Memory Refreshment (SMR) for unlabeled persons, which takes advantage of the potential discrimination power of unlabeled data. In experiments on two standard person search benchmarks, i.e. CUHK-SYSU and PRW, we achieve state-of-the-art performance, which justifies the effectiveness of our proposed HOIM loss in learning robust features.
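
HOIM builds on online instance matching; a minimal sketch of an OIM-style lookup table with momentum updates follows (shapes are assumed, and the hierarchical structure and Selective Memory Refreshment are omitted).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OIMLoss(nn.Module):
    """Lookup table of identity prototypes updated by momentum (sketch)."""
    def __init__(self, num_ids, dim, momentum=0.5, temp=0.07):
        super().__init__()
        self.register_buffer("lut", torch.randn(num_ids, dim))
        self.momentum, self.temp = momentum, temp

    def forward(self, feats, ids):
        feats = F.normalize(feats, dim=1)
        logits = feats @ F.normalize(self.lut, dim=1).t() / self.temp
        loss = F.cross_entropy(logits, ids)
        with torch.no_grad():  # momentum update of the matched prototypes
            self.lut[ids] = self.momentum * self.lut[ids] + (1 - self.momentum) * feats
        return loss

loss = OIMLoss(num_ids=100, dim=64)(torch.randn(8, 64), torch.randint(0, 100, (8,)))
```
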
8

Wang, Menglin, Baisheng Lai, Jianqiang Huang, Xiaojin Gong, and Xian-Sheng Hua. "Camera-Aware Proxies for Unsupervised Person Re-Identification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 2764–72. http://dx.doi.org/10.1609/aaai.v35i4.16381.

Abstract:
This paper tackles the purely unsupervised person re-identification (Re-ID) problem that requires no annotations. Some previous methods adopt clustering techniques to generate pseudo labels and use the produced labels to train Re-ID models progressively. These methods are relatively simple but effective. However, most clustering-based methods take each cluster as a pseudo identity class, neglecting the large intra-ID variance caused mainly by the change of camera views. To address this issue, we propose to split each single cluster into multiple proxies, where each proxy represents the instances coming from the same camera. These camera-aware proxies enable us to deal with large intra-ID variance and generate more reliable pseudo labels for learning. Based on the camera-aware proxies, we design both intra- and inter-camera contrastive learning components for our Re-ID model to effectively learn the ID discrimination ability within and across cameras. Meanwhile, a proxy-balanced sampling strategy is also designed, which facilitates our learning further. Extensive experiments on three large-scale Re-ID datasets show that our proposed approach outperforms most unsupervised methods by a significant margin. In particular, on the challenging MSMT17 dataset, we gain 14.3% Rank-1 and 10.2% mAP improvements over the second-best method. Code is available at: https://github.com/Terminator8758/CAP-master.
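
The proxy construction itself is a few lines: every (cluster, camera) pair becomes one proxy label, as sketched below; the intra/inter-camera contrastive losses and proxy-balanced sampling built on top are omitted.

```python
from collections import OrderedDict

def camera_aware_proxies(cluster_ids, cam_ids):
    """Split each pseudo-identity cluster into per-camera proxies (sketch)."""
    proxy_of = OrderedDict()
    labels = []
    for cluster, cam in zip(cluster_ids, cam_ids):
        key = (cluster, cam)
        if key not in proxy_of:
            proxy_of[key] = len(proxy_of)  # next free proxy id
        labels.append(proxy_of[key])
    return labels, proxy_of

labels, proxies = camera_aware_proxies([0, 0, 0, 1, 1], [0, 0, 1, 0, 2])
print(labels)  # [0, 0, 1, 2, 3]: cluster 0 splits into two camera proxies
```
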
9

Yang, Zhao, Jiehao Liu, Tie Liu, Li Wang, and Sai Zhao. "Circle-Based Ratio Loss for Person Reidentification." Complexity 2020 (December 29, 2020): 1–11. http://dx.doi.org/10.1155/2020/9860562.

Abstract:
Person reidentification (re-id) aims to recognize a specific pedestrian from non-overlapping surveillance camera views. Most re-id methods perform the retrieval task by comparing the similarity of pedestrian features extracted from deep learning models. Therefore, learning a discriminative feature is critical for person reidentification. Many works supervise model learning with one or more loss functions to obtain the discriminability of features. Softmax loss is one of the widely used loss functions in re-id. However, traditional softmax loss inherently focuses on feature separability and fails to consider the compactness of within-class features. To further improve the accuracy of re-id, many efforts are devoted to shrinking within-class discrepancy as well as between-class similarity. In this paper, we propose a circle-based ratio loss for person re-identification. Concretely, we normalize the learned features and classification weights to map these vectors onto the hypersphere. Then we take the ratio of the maximal intraclass distance to the minimal interclass distance as an objective loss, so the between-class separability and within-class compactness can be optimized simultaneously during the training stage. Finally, with the joint training of an improved softmax loss and the ratio loss, the deep model can mine discriminative pedestrian information and learn robust features for the re-id task. Comprehensive experiments on three re-id benchmark datasets are carried out to illustrate the effectiveness of the proposed method. Specifically, 83.12% mAP on Market-1501, 71.70% mAP on DukeMTMC-reID, and 66.26%/63.24% mAP on CUHK03 labeled/detected are achieved, respectively.
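
On normalised features the loss reduces to a ratio of two batch statistics; a minimal sketch (assuming the batch holds at least two identities and at least one identity with two samples) could read:

```python
import torch
import torch.nn.functional as F

def ratio_loss(features, labels, eps=1e-12):
    """Maximal intra-class distance over minimal inter-class distance,
    computed on L2-normalised features (sketch of the core idea)."""
    f = F.normalize(features, dim=1)
    d = torch.cdist(f, f)  # pairwise Euclidean distances on the hypersphere
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool)
    intra_max = d[same & ~eye].max()  # worst within-class pair
    inter_min = d[~same].min()        # closest between-class pair
    return intra_max / (inter_min + eps)

loss = ratio_loss(torch.randn(8, 64), torch.tensor([0, 0, 1, 1, 2, 2, 3, 3]))
```
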
10

Gao, Junyao, Xinyang Jiang, Huishuai Zhang, Yifan Yang, Shuguang Dou, Dongsheng Li, Duoqian Miao, Cheng Deng, and Cairong Zhao. "Similarity Distribution Based Membership Inference Attack on Person Re-identification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14820–28. http://dx.doi.org/10.1609/aaai.v37i12.26731.

Abstract:
While person re-identification (Re-ID) has progressed rapidly due to its wide real-world applications, it also causes severe risks of leaking personal information from training data. Thus, this paper focuses on quantifying this risk by membership inference (MI) attack. Most of the existing MI attack algorithms focus on classification models, while Re-ID follows a totally different training and inference paradigm. Re-ID is a fine-grained recognition task with complex feature embedding, and the model outputs commonly used by existing MI attacks, such as logits and losses, are not accessible during inference. Since Re-ID focuses on modelling the relative relationship between image pairs instead of individual semantics, we conduct a formal and empirical analysis which validates that the distribution shift of the inter-sample similarity between training and test sets is a critical criterion for Re-ID membership inference. As a result, we propose a novel membership inference attack method based on the inter-sample similarity distribution. Specifically, a set of anchor images are sampled to represent the similarity distribution conditioned on a target image, and a neural network with a novel anchor selection module is proposed to predict the membership of the target image. Our experiments validate the effectiveness of the proposed approach on both the Re-ID task and the conventional classification task.
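
The attack's input representation follows directly from the abstract: describe a target image by its similarities to a set of anchors and classify membership from that vector. The sketch below fixes the anchors and sorts the similarities for order invariance; the paper's learnable anchor selection module is omitted, and all shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def similarity_descriptor(target_feat, anchor_feats):
    # Cosine similarities of one target feature (D,) to N anchor features (N, D).
    sims = F.normalize(anchor_feats, dim=1) @ F.normalize(target_feat, dim=0)
    return torch.sort(sims, descending=True).values  # order-invariant summary

attack_head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
desc = similarity_descriptor(torch.randn(128), torch.randn(64, 128))
membership_logits = attack_head(desc)  # member vs. non-member prediction
```
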

Dissertations on the topic "Person re-ID models"

1

Jambigi, Chaitra. "Towards Robust and Scalable Video Surveillance: Cross-modal and Domain Generalizable Person Re-identification." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5898.

Abstract:
With rapid technological advances, one can easily find video surveillance systems deployed in public places such as malls, airports, etc., as well as across private residential areas. These systems play a critical role in ensuring safety and security against criminal/anomalous activities. ‘Person Re-Identification’ (re-ID) is a key component of such a system and is well-studied in modern computer vision literature. The task of person re-ID is typically posed as an instance retrieval problem in a large wide-area network of cameras with non-overlapping fields-of-view (FoVs). When presented with an image of a person of interest (query) as observed in any given camera, the goal is to retrieve all image instances of the target with the same identity from all other cameras (gallery) in the network. Despite the extensive research in this area, there is still a gap between the efficacy of the existing re-ID frameworks under laboratory settings and their real-world deployability, thus necessitating the development of practical solutions for person re-ID. In this thesis, we explore two such research directions to build robust and scalable person re-ID models. The first part of the thesis proposes a solution for the challenging and open problem of Visible-Thermal Person Re-ID (VT Re-ID). In this cross-modal retrieval problem, the query image of a target (in dark/low-light conditions) is captured using a thermal imaging camera, and the re-ID system needs to search and retrieve observations corresponding to the same identity from the gallery set, which is composed of visible spectrum images of various targets captured using standard RGB cameras in well-lit environments. Such a system has major applications in night-time surveillance and enables round-the-clock monitoring of the places of interest. Existing cross-modal re-ID methods align the modalities via adversarial learning or complex feature extraction modules that heavily rely on domain knowledge. We propose a simple but effective framework, MMD-ReID, to explicitly reduce the modality gap. MMD-ReID takes inspiration from ‘Maximum Mean Discrepancy’ (MMD), a statistical tool that determines the distance between two distributions. Our method uses a novel margin-based formulation to match class-conditional feature distributions of the visible and thermal samples to minimize intra-class distances while maintaining feature discriminability across identities. Extensive experiments show that our method outperforms state-of-the-art approaches by significant margins. The second part of the thesis attempts to solve the more challenging problem of Domain Generalization (DG) in person re-ID. Most existing re-ID models are trained and tested on the same dataset and perform poorly when evaluated on a new dataset (domain) without any explicit fine-tuning using annotated data samples from the latter. Recent multi-source DG methods use meta-learning approaches, which are prone to overfitting on the seen domains. To overcome this, we propose a novel strategy based on a supervised contrastive learning framework for learning domain-agnostic features. Our method attempts to model domain variations by creating hallucinated ‘positive’ samples that realistically mimic the perturbations one expects from domain-shift. We empirically show that by using our proposed pool of perturbation strategies, we are able to learn better generalizable features, thereby achieving state-of-the-art performance across unseen domains.
We also hypothesize that training on a related, auxiliary task that is preserved across domains can help in learning robust features. With attribute prediction as the chosen auxiliary task, we experimentally show that such training indeed leads to a better generalization of the learnt model.
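
At the core of MMD-ReID is the MMD statistic itself; below is a plain biased RBF-kernel estimate of MMD² between a visible and a thermal feature batch. The margin-based, class-conditional formulation described above is built on top of this quantity and is omitted here.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD with an RBF kernel (sketch)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

visible = torch.randn(32, 256)   # stand-in for visible-spectrum features
thermal = torch.randn(32, 256)   # stand-in for thermal features
gap = mmd_rbf(visible, thermal)  # drives the modality gap down when minimised
```
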

Conference papers on the topic "Person re-ID models"

1

Wang, Guan'an, Yang Yang, Jian Cheng, Jinqiao Wang, and Zengguang Hou. "Color-Sensitive Person Re-Identification." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/131.

Abstract:
Recent deep Re-ID models mainly focus on learning high-level semantic features, while failing to explicitly explore color information, which is one of the most important cues for person Re-ID. In this paper, we propose a novel Color-Sensitive Re-ID to take full advantage of color information. On one hand, we train our model with real and fake images. By using the extra fake images, more color information can be exploited, and overfitting during training can be avoided. On the other hand, we also train our model with images of the same person in different colors. By doing so, features can be forced to focus on the color difference in regions. To generate fake images with specified colors, we propose a novel Color Translation GAN (CTGAN) to learn mappings between different clothing colors and preserve identity consistency among the same clothing color. Extensive evaluations on two benchmark datasets show that our approach significantly outperforms state-of-the-art Re-ID models.
2

Li, Dongdong, Zhigang Wang, Jian Wang, Xinyu Zhang, Errui Ding, Jingdong Wang, and Zhaoxiang Zhang. "Self-Guided Hard Negative Generation for Unsupervised Person Re-Identification." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/149.

Abstract:
Recent unsupervised person re-identification (reID) methods mostly apply pseudo labels from clustering algorithms as supervision signals. Despite great success, this fashion is very likely to aggregate different identities with similar appearances into the same cluster. As a result, the hard negative samples, which play an important role in training reID models, are significantly reduced. To alleviate this problem, we propose a self-guided hard negative generation method for unsupervised person re-ID. Specifically, a joint framework is developed which incorporates a hard negative generation network (HNGN) and a re-ID network. To continuously generate harder negative samples that provide effective supervision in contrastive learning, the two networks are alternately trained in an adversarial manner to improve each other, where the re-ID network guides HNGN to generate challenging data and HNGN enforces the re-ID network to enhance its discrimination ability. During inference, the performance of the re-ID network is improved without introducing any extra parameters. Extensive experiments demonstrate that the proposed method significantly outperforms a strong baseline and also achieves better results than state-of-the-art methods.
3

Rao, Haocong, and Chunyan Miao. "SimMC: Simple Masked Contrastive Learning of Skeleton Representations for Unsupervised Person Re-Identification." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/180.

Abstract:
Recent advances in skeleton-based person re-identification (re-ID) obtain impressive performance via either hand-crafted skeleton descriptors or skeleton representation learning with deep learning paradigms. However, they typically require skeletal pre-modeling and label information for training, which leads to limited applicability of these methods. In this paper, we focus on unsupervised skeleton-based person re-ID, and present a generic Simple Masked Contrastive learning (SimMC) framework to learn effective representations from unlabeled 3D skeletons for person re-ID. Specifically, to fully exploit skeleton features within each skeleton sequence, we first devise a masked prototype contrastive learning (MPC) scheme to cluster the most typical skeleton features (skeleton prototypes) from different subsequences randomly masked from raw sequences, and contrast the inherent similarity between skeleton features and different prototypes to learn discriminative skeleton representations without using any label. Then, considering that different subsequences within the same sequence usually enjoy strong correlations due to the nature of motion continuity, we propose the masked intra-sequence contrastive learning (MIC) to capture intra-sequence pattern consistency between subsequences, so as to encourage learning more effective skeleton representations for person re-ID. Extensive experiments validate that the proposed SimMC outperforms most state-of-the-art skeleton-based methods. We further show its scalability and efficiency in enhancing the performance of existing models. Our codes are available at https://github.com/Kali-Hac/SimMC.
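
The random masking that produces subsequences is simple to sketch; the tensor layout and keep ratio below are assumptions:

```python
import torch

def random_masked_subsequence(seq, keep_ratio=0.5):
    # seq: (T, J, 3) skeleton sequence with T frames and J joints. Keep a
    # random, order-preserving subset of frames, as used to build skeleton
    # prototypes (MPC) and intra-sequence pairs (MIC).
    t = seq.shape[0]
    idx = torch.randperm(t)[: max(1, int(t * keep_ratio))].sort().values
    return seq[idx]

sub = random_masked_subsequence(torch.randn(30, 17, 3))  # -> (15, 17, 3)
```
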
4

Yan, Yuming, Huimin Yu, Shuzhao Li, Zhaohui Lu, Jianfeng He, Haozhuo Zhang, and Runfa Wang. "Weakening the Influence of Clothing: Universal Clothing Attribute Disentanglement for Person Re-Identification." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/212.

Abstract:
Most existing Re-ID studies focus on the short-term cloth-consistent setting and are thus dominated by the visual appearance of clothing. However, in reality the same person may wear different clothes and different people may wear the same clothes, which invalidates these methods. To tackle the challenge of clothes change, we propose a Universal Clothing Attribute Disentanglement network (UCAD) which can effectively weaken the influence of clothing (identity-unrelated) and force the model to learn identity-related features that are unrelated to the worn clothing. For further study of Re-ID in cloth-changing scenarios, we construct a large-scale dataset called CSCC with the following unique features: (1) Severe: a large number of people change clothes across four seasons. (2) High definition: the resolution of the cameras ranges from 1920×1080 to 3840×2160, which ensures that the recorded people are clear. Furthermore, we provide two variants of CSCC considering different degrees of cloth-changing, namely moderate and severe, so that researchers can effectively evaluate their models from various aspects. Experiments on several cloth-changing datasets, including our CSCC and the short-term dataset Market-1501, prove the superiority of UCAD. The dataset is available at https://github.com/yomin-y/UCAD.
5

Yin, Zhou, Wei-Shi Zheng, Ancong Wu, Hong-Xing Yu, Hai Wan, Xiaowei Guo, Feiyue Huang, and Jianhuang Lai. "Adversarial Attribute-Image Person Re-identification." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/153.

Abstract:
While attributes have been widely used for person re-identification (Re-ID), which aims at matching the same person's images across disjoint camera views, they are used either as extra features or for performing multi-task learning to assist the image-image matching task. However, how to find a set of person images according to a given attribute description, which is very practical in many surveillance applications, remains a rarely investigated cross-modality matching problem in person Re-ID. In this work, we present this challenge and leverage adversarial learning to formulate the attribute-image cross-modality person Re-ID model. By imposing a semantic consistency constraint across modalities as a regularization, the adversarial learning enables the generation of image-analogous concepts of query attributes for matching the corresponding images at both the global level and the semantic ID level. We conducted extensive experiments on three attribute datasets and demonstrated that the regularized adversarial modelling is so far the most effective method for the attribute-image cross-modality person Re-ID problem.
6

Cheng, De, Xiaojun Chang, Li Liu, Alexander G. Hauptmann, Yihong Gong, and Nanning Zheng. "Discriminative Dictionary Learning With Ranking Metric Embedded for Person Re-Identification." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/134.

Abstract:
The goal of person re-identification (Re-Id) is to match pedestrians captured from multiple non-overlapping cameras. In this paper, we propose a novel dictionary learning based method with the ranking metric embedded, for person Re-Id. A new and essential ranking graph Laplacian term is introduced, which minimizes the intra-personal compactness and maximizes the inter-personal dispersion in the objective. Different from traditional dictionary learning based approaches and their extensions, which use only binary same/not-same information, our proposed method can explore the ranking relationship among the person images, which is essential for such retrieval-related tasks. Simultaneously, a distance metric is explicitly learned in the model to further improve performance. Since we have reformulated these ranking constraints into the graph Laplacian form, the proposed method is easy to implement yet effective. We conduct extensive experiments on three widely used person Re-Id benchmark datasets, and achieve state-of-the-art performance.
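
From this description, the objective plausibly combines a reconstruction term, a sparsity penalty, a ranking graph Laplacian over the codes, and a jointly learned distance; the form below is a hedged reconstruction, not the paper's exact formulation:

```latex
\min_{D,\,A,\,M}\;
  \lVert X - D A \rVert_F^2
  + \lambda_1 \lVert A \rVert_1
  + \lambda_2 \operatorname{Tr}\!\left( A\, L_{\mathrm{rank}}\, A^{\top} \right),
\qquad
d_M(\alpha_i, \alpha_j) = (\alpha_i - \alpha_j)^{\top} M\, (\alpha_i - \alpha_j)
```

Here D is the dictionary, A the sparse codes, L_rank a graph Laplacian encoding the ranking constraints, and d_M the learned distance used at retrieval time.
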
7

Li, Wei, Xiatian Zhu, and Shaogang Gong. "Person Re-Identification by Deep Joint Learning of Multi-Loss Classification." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/305.

Abstract:
Existing person re-identification (re-id) methods rely mostly on either localised or global feature representation. This ignores their joint benefit and mutual complementary effects. In this work, we show the advantages of jointly learning local and global features in a Convolutional Neural Network (CNN) by aiming to discover correlated local and global features in different contexts. Specifically, we formulate a method for the joint learning of local and global feature selection losses designed to optimise person re-id when using generic matching metrics such as the L2 distance. We design a novel CNN architecture for Jointly Learning Multi-Loss (JLML) of local and global discriminative feature optimisation subject concurrently to the same re-id labelled information. Extensive comparative evaluations demonstrate the advantages of this new JLML model for person re-id over a wide range of state-of-the-art re-id methods on five benchmarks (VIPeR, GRID, CUHK01, CUHK03, Market-1501).
8

Huayta, Felix Olivier Sumari, Esteban Gonzalez Clúa, and Joris Guérin. "A Novel Human-Machine Hybrid Framework for Person Re-Identification from Full Frame Videos." In Anais Estendidos da Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sibgrapi.est.2021.20013.

Abstract:
With the widespread adoption of automation for city security, person re-identification (Re-ID) has been extensively studied. In this dissertation, we argue that the current way of studying person re-identification, i.e. by trying to re-identify a person within already detected and pre-cropped images of people, is not sufficient to implement practical security applications, where the inputs to the system are the full frames of the video streams. To support this claim, we introduce the Full Frame Person Re-ID setting (FF-PRID) and define specific metrics to evaluate FF-PRID implementations. To improve robustness, we also formalize the hybrid human-machine collaboration framework, which is inherent to any Re-ID security application. To demonstrate the importance of considering the FF-PRID setting, we build an experiment showing that combining a good people detection network with a good Re-ID model does not necessarily produce good results for the final application. This underlines a failure of the current formulation in assessing the quality of a Re-ID model and justifies the use of different metrics. We hope that this work will motivate the research community to consider the full problem in order to develop algorithms that are better suited to real-world scenarios.
9

Shen, Yunhang, Rongrong Ji, Xiaopeng Hong, Feng Zheng, Xiaowei Guo, Yongjian Wu, and Feiyue Huang. "A Part Power Set Model for Scale-Free Person Retrieval." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/471.

Abstract:
Recently, person re-identification (re-ID) has attracted increasing research attention, with broad application prospects in video surveillance and beyond. To this end, most existing methods rely heavily on well-aligned pedestrian images and a hand-engineered part-based model on the coarsest feature map. In this paper, to lighten the restriction of such fixed and coarse input alignment, an end-to-end part power set model with multi-scale features is proposed, which captures the discriminative parts of pedestrians from global to local, and from coarse to fine, enabling part-based scale-free person re-ID. In particular, we first factorize the visual appearance by enumerating the k-combinations for all k of the n body parts to exploit rich global and partial information to learn discriminative feature maps. Then, a combination ranking module is introduced to guide the model training with all combinations of body parts, which alternates between ranking combinations and estimating an appearance model. To enable scale-free input, we further exploit the pyramid architecture of deep networks to construct multi-scale feature maps with a feasible amount of extra cost in terms of memory and time. Extensive experiments on the mainstream evaluation datasets, including Market-1501, DukeMTMC-reID and CUHK03, validate that our method achieves state-of-the-art performance.
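
The combinatorial core, enumerating the k-combinations for every k of the n body parts, is a one-liner; the part names below are toy placeholders:

```python
from itertools import combinations

parts = ["head", "torso", "arms", "legs"]  # toy stand-ins for the n body parts
# Non-empty power set: every k-combination for k = 1..n.
power_set = [c for k in range(1, len(parts) + 1)
             for c in combinations(parts, k)]
print(len(power_set))  # 2**4 - 1 = 15 part combinations to rank
```
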
10

Pang, Jian, Dacheng Zhang, Huafeng Li, Weifeng Liu, and Zhengtao Yu. "Hazy Re-ID: An Interference Suppression Model for Domain Adaptation Person Re-Identification Under Inclement Weather Condition." In 2021 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2021. http://dx.doi.org/10.1109/icme51207.2021.9428462.
