A ready-made bibliography on the topic "Cross modal person re-identification"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Cross modal person re-identification".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read the online annotation of the work, if the relevant parameters are available in its metadata.

Journal articles on the topic "Cross modal person re-identification"

1. Hafner, Frank M., Amran Bhuiyan, Julian F. P. Kooij, and Eric Granger. "Cross-modal distillation for RGB-depth person re-identification". Computer Vision and Image Understanding 216 (February 2022): 103352. http://dx.doi.org/10.1016/j.cviu.2021.103352.

2. Liu, Minghui, Yafei Zhang, and Huafeng Li. "Survey of Cross-Modal Person Re-Identification from a Mathematical Perspective". Mathematics 11, no. 3 (January 28, 2023): 654. http://dx.doi.org/10.3390/math11030654.

Abstract:
Person re-identification (Re-ID) aims to retrieve a particular pedestrian’s identification from a surveillance system consisting of non-overlapping cameras. In recent years, researchers have begun to focus on open-world person Re-ID tasks based on non-ideal situations. One of the most representative of these is cross-modal person Re-ID, which aims to match probe data with target data from different modalities. According to the modalities of the probe and target data, we divide cross-modal person Re-ID into visible–infrared, visible–depth, visible–sketch, and visible–text person Re-ID. In cross-modal person Re-ID, the most challenging problem is the modal gap. According to the different methods of narrowing the modal gap, we classify the existing works into picture-based style conversion methods, feature-based modality-invariant embedding mapping methods, and modality-unrelated auxiliary information mining methods. In addition, by generalizing the aforementioned works, we find that although deep-learning-based models perform well, the black-box-like learning process makes these models less interpretable and less generalizable. Therefore, we attempt to interpret different cross-modal person Re-ID models from a mathematical perspective. Through this work, we aim to compensate for the lack of mathematical interpretation of models in previous person Re-ID reviews and hope that our work will bring new inspiration to researchers.
3. Xie, Zhongwei, Lin Li, Xian Zhong, Luo Zhong, and Jianwen Xiang. "Image-to-video person re-identification with cross-modal embeddings". Pattern Recognition Letters 133 (May 2020): 70–76. http://dx.doi.org/10.1016/j.patrec.2019.03.003.

4. Li, Diangang, Xing Wei, Xiaopeng Hong, and Yihong Gong. "Infrared-Visible Cross-Modal Person Re-Identification with an X Modality". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4610–17. http://dx.doi.org/10.1609/aaai.v34i04.5891.

Abstract:
This paper focuses on the emerging Infrared-Visible cross-modal person re-identification task (IV-ReID), which takes infrared images as input and matches them against visible color images. IV-ReID is important yet challenging, as there is a significant gap between the visible and infrared images. To reduce this ‘gap’, we introduce an auxiliary X modality as an assistant and reformulate infrared-visible dual-mode cross-modal learning as an X-Infrared-Visible three-mode learning problem. The X modality re-encodes the RGB channels into a format with which cross-modal learning can be easily performed. With this idea, we propose an X-Infrared-Visible (XIV) ReID cross-modal learning framework. Firstly, the X modality is generated by a lightweight network, which is learnt in a self-supervised manner with the labels inherited from visible images. Secondly, under the XIV framework, cross-modal learning is guided by a carefully designed modality gap constraint, with information exchanged across the visible, X, and infrared modalities. Extensive experiments are performed on two challenging datasets, SYSU-MM01 and RegDB, to evaluate the proposed XIV-ReID approach. Experimental results show that our method achieves a considerable absolute gain of over 7% in terms of rank-1 and mAP even compared with the latest state-of-the-art methods.
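As a rough illustration of the X-modality idea in this abstract, the sketch below generates an auxiliary image from RGB with a tiny pixel-wise network. It is a minimal PyTorch-style reading only: the layer widths, 1x1 convolutions, and channel replication are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch of a "lightweight network" producing an auxiliary X
# modality from RGB, per the XIV-ReID abstract. Layer sizes are assumptions.
import torch
import torch.nn as nn

class XModalityGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Two 1x1 convolutions: a cheap, pixel-wise re-encoding of RGB.
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, rgb):
        x = self.net(rgb)            # (B, 1, H, W) auxiliary image
        return x.repeat(1, 3, 1, 1)  # replicate so a shared backbone accepts it

# The X images inherit the identity labels of the visible images they come
# from, so the generator can be trained with the same ID loss, matching the
# abstract's self-supervised description (no extra annotation needed).
gen = XModalityGenerator()
fake_x = gen(torch.randn(2, 3, 256, 128))  # a typical Re-ID input size
print(fake_x.shape)                        # torch.Size([2, 3, 256, 128])
```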
5. Lin, Ronghui, Rong Wang, Wenjing Zhang, Ao Wu, and Yihan Bi. "Joint Modal Alignment and Feature Enhancement for Visible-Infrared Person Re-Identification". Sensors 23, no. 11 (May 23, 2023): 4988. http://dx.doi.org/10.3390/s23114988.

Abstract:
Visible-infrared person re-identification aims to solve the matching problem between cross-camera and cross-modal person images. Existing methods strive for better cross-modal alignment but often neglect the critical importance of feature enhancement for achieving better performance. Therefore, we propose an effective method that combines both modal alignment and feature enhancement. Specifically, we introduce Visible-Infrared Modal Data Augmentation (VIMDA) for visible images to improve modal alignment. A Margin MMD-ID Loss is also used to further enhance modal alignment and optimize model convergence. Then, we propose a Multi-Grain Feature Extraction (MGFE) structure for feature enhancement to further improve recognition performance. Extensive experiments have been carried out on SYSU-MM01 and RegDB. The results indicate that our method outperforms the current state-of-the-art methods for visible-infrared person re-identification. Ablation experiments verified the effectiveness of the proposed method.
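Since the abstract names a Margin MMD-ID loss for modal alignment, here is a hedged sketch of what a margin-gated MMD term between visible and infrared feature batches could look like; the RBF bandwidth, margin value, and gating form are assumptions, not the paper's exact formulation.

```python
# Illustrative margin-gated MMD alignment term (assumed form, not the
# paper's exact Margin MMD-ID loss).
import torch

def rbf_mmd2(x, y, sigma=1.0):
    # Biased MMD^2 estimate with a single Gaussian (RBF) kernel.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def margin_mmd_loss(feat_vis, feat_ir, margin=0.1):
    # Penalize modality discrepancy only above the margin, so the two
    # feature distributions are aligned without being fully collapsed.
    return torch.clamp(rbf_mmd2(feat_vis, feat_ir) - margin, min=0.0)

loss = margin_mmd_loss(torch.randn(32, 2048), torch.randn(32, 2048))
```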
6. Syed, Muhammad Adnan, Yongsheng Ou, Tao Li, and Guolai Jiang. "Lightweight Multimodal Domain Generic Person Reidentification Metric for Person-Following Robots". Sensors 23, no. 2 (January 10, 2023): 813. http://dx.doi.org/10.3390/s23020813.

Abstract:
Recently, person-following robots have been increasingly used in many real-world applications, and they require robust and accurate person identification for tracking. Recent works proposed using re-identification metrics to identify the target person; however, these metrics suffer from poor generalization and from impostors in the nonlinear multi-modal world. This work learns a domain-generic person re-identification metric to resolve real-world challenges and to identify a target person undergoing appearance changes while moving across different indoor and outdoor environments or domains. Our generic metric takes advantage of a novel attention mechanism to learn deep cross-representations that address pose, viewpoint, and illumination variations, while jointly tackling the impostors and style variations the target person randomly undergoes in various indoor and outdoor domains; thus, our generic metric attains higher recognition accuracy for target person identification in the complex multi-modal open-set world, and attains 80.73% and 64.44% Rank-1 identification in the multi-modal closed-set PRID and VIPeR domains, respectively.
7. Farooq, Ammarah, Muhammad Awais, Josef Kittler, and Syed Safwan Khalid. "AXM-Net: Implicit Cross-Modal Feature Alignment for Person Re-identification". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4477–85. http://dx.doi.org/10.1609/aaai.v36i4.20370.

Abstract:
Cross-modal person re-identification (Re-ID) is critical for modern video surveillance systems. The key challenge is to align cross-modality representations according to the semantic information present for a person and to ignore background information. This work presents a novel convolutional neural network (CNN) based architecture designed to learn semantically aligned cross-modal visual and textual representations. The underlying building block, named AXM-Block, is a unified multi-layer network that dynamically exploits multi-scale knowledge from both modalities and re-calibrates each modality according to shared semantics. To complement the convolutional design, contextual attention is applied in the text branch to capture long-term dependencies. Moreover, we propose a unique design to enhance visual part-based feature coherence and locality information. Our framework is novel in its ability to implicitly learn aligned semantics between modalities during the feature learning stage. The unified feature learning effectively utilizes textual data as a super-annotation signal for visual representation learning and automatically rejects irrelevant information. The entire AXM-Net is trained end-to-end on the CUHK-PEDES data. We report results on two tasks, person search and cross-modal Re-ID. The AXM-Net outperforms the current state-of-the-art (SOTA) methods and achieves 64.44% Rank@1 on the CUHK-PEDES test set. It also outperforms them by more than 10% in cross-viewpoint text-to-image Re-ID scenarios on the CrossRe-ID and CUHK-SYSU datasets.
8. Zheng, Aihua, Zi Wang, Zihan Chen, Chenglong Li, and Jin Tang. "Robust Multi-Modality Person Re-identification". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3529–37. http://dx.doi.org/10.1609/aaai.v35i4.16467.

Abstract:
To avoid the illumination limitation in visible person re-identification (Re-ID) and the heterogeneity issue in cross-modality Re-ID, we propose to utilize the complementary advantages of multiple modalities, including visible (RGB), near-infrared (NI), and thermal-infrared (TI), for robust person Re-ID. A novel progressive fusion network is designed to learn effective multi-modal features from single to multiple modalities and from local to global views. Our method works well in diversely challenging scenarios, even in the presence of missing modalities. Moreover, we contribute a comprehensive benchmark dataset, RGBNT201, including 201 identities captured under various challenging conditions, to facilitate research on RGB-NI-TI multi-modality person Re-ID. Comprehensive experiments on the RGBNT201 dataset, compared with the state-of-the-art methods, demonstrate the contribution of multi-modality person Re-ID and the effectiveness of the proposed approach, which launches a new benchmark and a new baseline for multi-modality person Re-ID.
9. Shi, Shuo, Changwei Huo, Yingchun Guo, Stephen Lean, Gang Yan, and Ming Yu. "Truncated attention mechanism and cascade loss for cross-modal person re-identification". Journal of Intelligent & Fuzzy Systems 41, no. 6 (December 16, 2021): 6575–87. http://dx.doi.org/10.3233/jifs-210382.

Abstract:
Person re-identification with natural language description is the process of retrieving the image of a corresponding person from an image dataset according to a text description of that person. The key challenge in this cross-modal task is to extract visual and text features and to construct loss functions that achieve cross-modal matching between text and image. Firstly, we designed a two-branch network framework for person re-identification with natural language description. In this framework, a Bi-directional Long Short-Term Memory (Bi-LSTM) network is used to extract text features, a truncated attention mechanism is proposed to select the principal components of the text features, and a MobileNet is used to extract image features. Secondly, we proposed a Cascade Loss Function (CLF), which includes a cross-modal matching loss and a single-modal classification loss, both based on the relative entropy function, to fully exploit identity-level information. The experimental results on the CUHK-PEDES dataset demonstrate that our method achieves better Top-5 and Top-10 results than 10 other current state-of-the-art algorithms.
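The cascade loss above pairs a cross-modal matching loss with a classification loss, both built on relative entropy. As one hedged reading of the matching half, the sketch below pushes the softmax over image-text similarities toward the distribution of true identity matches in the batch, in the spirit of CMPM-style losses; the temperature and the exact target construction are assumptions, not the paper's CLF.

```python
# Assumed KL-based cross-modal matching term; the paper's CLF may differ.
import torch
import torch.nn.functional as F

def kl_matching_loss(img_feat, txt_feat, labels, temperature=10.0, eps=1e-8):
    img = F.normalize(img_feat, dim=1)
    txt = F.normalize(txt_feat, dim=1)
    pred = F.softmax(img @ txt.t() * temperature, dim=1)  # predicted match dist.
    match = (labels.unsqueeze(1) == labels.unsqueeze(0)).float()
    target = match / match.sum(dim=1, keepdim=True)       # true-match dist.
    # Relative entropy KL(pred || target), averaged over the batch.
    return (pred * (torch.log(pred + eps) - torch.log(target + eps))).sum(1).mean()

labels = torch.tensor([0, 0, 1, 2])
loss = kl_matching_loss(torch.randn(4, 512), torch.randn(4, 512), labels)
```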
10. Yan, Shiyang, Jianan Zhao, and Lin Xu. "Adaptive multi-task learning for cross domain and modal person re-identification". Neurocomputing 486 (May 2022): 123–34. http://dx.doi.org/10.1016/j.neucom.2021.11.016.


Dissertations and theses on the topic "Cross modal person re-identification"

1. Li, Yu-Jhe (李宇哲). "A Generative Dual Model for Cross-Resolution Person Re-Identification". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/we2598.

Master's thesis (碩士), National Taiwan University (國立臺灣大學), Graduate Institute of Communication Engineering (電信工程學研究所), academic year 107 (2018–2019).
Abstract:
Person re-identification (re-ID) aims at matching images of the same identity across camera views. Due to varying distances between the cameras and the persons of interest, resolution mismatch can be expected, which would degrade person re-ID performance in real-world scenarios. To overcome this problem, we propose a novel generative adversarial network to address cross-resolution person re-ID, allowing query images with varying resolutions. By advancing adversarial learning techniques, our proposed model learns resolution-invariant image representations while being able to recover missing details in low-resolution input images. Thus, the resulting features can be jointly applied for improved re-ID performance. Our experiments on three benchmark datasets confirm the effectiveness of our method and its superiority over the state-of-the-art approaches, especially when the input resolutions are unseen during training.

Book chapters on the topic "Cross modal person re-identification"

1. Uddin, Md Kamal, Antony Lam, Hisato Fukuda, Yoshinori Kobayashi, and Yoshinori Kuno. "Exploiting Local Shape Information for Cross-Modal Person Re-identification". In Intelligent Computing Methodologies, 74–85. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26766-7_8.

2. Xu, Xiaohui, Song Wu, Shan Liu, and Guoqiang Xiao. "Cross-Modal Based Person Re-identification via Channel Exchange and Adversarial Learning". In Neural Information Processing, 500–511. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92185-9_41.

3. Chen, Qingshan, Zhenzhen Quan, Kun Zhao, Yifan Zheng, Zhi Liu, and Yujun Li. "A Cross-Modality Sketch Person Re-identification Model Based on Cross-Spectrum Image Generation". In Digital TV and Wireless Multimedia Communications, 312–24. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2266-4_24.

4. Zhu, Chuanlei, Xiaohong Li, Meibin Qi, Yimin Liu, and Long Zhang. "A Local-Global Self-attention Interaction Network for RGB-D Cross-Modal Person Re-identification". In Pattern Recognition and Computer Vision, 89–102. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18916-6_8.

5. Hu, Yang, Dong Yi, Shengcai Liao, Zhen Lei, and Stan Z. Li. "Cross Dataset Person Re-identification". In Computer Vision - ACCV 2014 Workshops, 650–64. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16634-6_47.

6. Wang, Yanan, Shuzhen Yang, Shuang Liu, and Zhong Zhang. "Cross-Domain Person Re-identification: A Review". In Lecture Notes in Electrical Engineering, 153–60. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8599-9_19.

7. Li, Zhihui, Wenhe Liu, Xiaojun Chang, Lina Yao, Mahesh Prakash, and Huaxiang Zhang. "Domain-Aware Unsupervised Cross-dataset Person Re-identification". In Advanced Data Mining and Applications, 406–20. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-35231-8_29.

8. Zhang, Pengyi, Huanzhang Dou, Yunlong Yu, and Xi Li. "Adaptive Cross-domain Learning for Generalizable Person Re-identification". In Lecture Notes in Computer Science, 215–32. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19781-9_13.

9. Jiang, Kongzhu, Tianzhu Zhang, Xiang Liu, Bingqiao Qian, Yongdong Zhang, and Feng Wu. "Cross-Modality Transformer for Visible-Infrared Person Re-Identification". In Lecture Notes in Computer Science, 480–96. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19781-9_28.

10. Kim, Hyeonwoo, Hyungjoon Kim, Bumyeon Ko, and Eenjun Hwang. "Person Re-identification Scheme Using Cross-Input Neighborhood Differences". In Transactions on Computational Science and Computational Intelligence, 825–31. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70296-0_61.


Conference papers on the topic "Cross modal person re-identification"

1. Hafner, Frank M., Amran Bhuiyan, Julian F. P. Kooij, and Eric Granger. "RGB-Depth Cross-Modal Person Re-identification". In 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2019. http://dx.doi.org/10.1109/avss.2019.8909838.

2. Farooq, Ammarah, Muhammad Awais, Josef Kittler, Ali Akbari, and Syed Safwan Khalid. "Cross Modal Person Re-identification with Visual-Textual Queries". In 2020 IEEE International Joint Conference on Biometrics (IJCB). IEEE, 2020. http://dx.doi.org/10.1109/ijcb48548.2020.9304940.

3. Lu, Lingyi, and Xin Xu. "Visible-Infrared Cross-Modal Person Re-identification based on Positive Feedback". In MMAsia '21: ACM Multimedia Asia. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3469877.3497693.

4. Tian, Xudong, Zhizhong Zhang, Shaohui Lin, Yanyun Qu, Yuan Xie, and Lizhuang Ma. "Farewell to Mutual Information: Variational Distillation for Cross-Modal Person Re-Identification". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00157.

5. Park, Hyunjong, Sanghoon Lee, Junghyup Lee, and Bumsub Ham. "Learning by Aligning: Visible-Infrared Person Re-identification using Cross-Modal Correspondences". In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.01183.

6. Yin, Zhou, Wei-Shi Zheng, Ancong Wu, Hong-Xing Yu, Hai Wan, Xiaowei Guo, Feiyue Huang, and Jianhuang Lai. "Adversarial Attribute-Image Person Re-identification". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/153.

Abstract:
While attributes have been widely used for person re-identification (Re-ID), which aims at matching the same person's images across disjoint camera views, they are used either as extra features or for multi-task learning to assist the image-image matching task. However, how to find a set of person images according to a given attribute description, which is very practical in many surveillance applications, remains a rarely investigated cross-modality matching problem in person Re-ID. In this work, we present this challenge and leverage adversarial learning to formulate an attribute-image cross-modality person Re-ID model. By imposing a semantic consistency constraint across modalities as a regularization, the adversarial learning enables the generation of image-analogous concepts of query attributes for matching the corresponding images at both the global level and the semantic ID level. We conducted extensive experiments on three attribute datasets and demonstrated that the regularized adversarial modelling is so far the most effective method for the attribute-image cross-modality person Re-ID problem.
7. Chen, Yongbiao, Sheng Zhang, and Zhengwei Qi. "MAENet: Boosting Feature Representation for Cross-Modal Person Re-Identification with Pairwise Supervision". In ICMR '20: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3372278.3390699.

8. Hu, Jihui, Pengfei Ye, Danyang Li, Lingyun Dong, Xiaopan Chen, and Xiaoke Zhu. "Point-Level and Set-Level Deep Representation Learning for Cross-Modal Person Re-identification". In 2022 IEEE 8th International Conference on Computer and Communications (ICCC). IEEE, 2022. http://dx.doi.org/10.1109/iccc56324.2022.10065694.

9. Liu, Xiang, and Liang Li. "Research on cross-modality person re-identification based on generative adversarial networks and modal compensation". In International Conference on Computer, Artificial Intelligence, and Control Engineering (CAICE 2023), edited by Aniruddha Bhattacharjya and Xin Feng. SPIE, 2023. http://dx.doi.org/10.1117/12.2681151.

10. Ye, Mang, Zheng Wang, Xiangyuan Lan, and Pong C. Yuen. "Visible Thermal Person Re-Identification via Dual-Constrained Top-Ranking". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/152.

Abstract:
Cross-modality person re-identification between the thermal and visible domains is extremely important for night-time surveillance applications. Existing works in this field mainly focus on learning sharable feature representations to handle the cross-modality discrepancies. However, besides the cross-modality discrepancy caused by different camera spectrums, visible-thermal person re-identification also suffers from large cross-modality and intra-modality variations caused by different camera views and human poses. In this paper, we propose a dual-path network with a novel bi-directional dual-constrained top-ranking loss to learn discriminative feature representations. It is advantageous in two respects: 1) end-to-end feature learning directly from the data without extra metric learning steps, and 2) it simultaneously handles the cross-modality and intra-modality variations to ensure the discriminability of the learnt representations. Meanwhile, an identity loss is further incorporated to model identity-specific information and handle large intra-class variations. Extensive experiments on two datasets demonstrate the superior performance compared to the state-of-the-arts.
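To make the bi-directional ranking idea concrete, here is a minimal sketch of a hard-mined cross-modality triplet term applied in both retrieval directions. It assumes batches where every identity appears in both modalities, and it omits the paper's additional intra-modality constraint and identity loss, so it approximates the described loss rather than reimplementing it.

```python
# Hedged sketch of a bi-directional cross-modality ranking loss.
import torch
import torch.nn.functional as F

def directional_ranking(anchor, gallery, a_labels, g_labels, margin=0.3):
    dist = torch.cdist(anchor, gallery)  # (Na, Ng) pairwise distances
    pos = (a_labels.unsqueeze(1) == g_labels.unsqueeze(0)).float()
    hardest_pos = (dist - (1 - pos) * 1e9).max(dim=1).values  # farthest same-ID
    hardest_neg = (dist + pos * 1e9).min(dim=1).values        # closest other-ID
    return F.relu(hardest_pos - hardest_neg + margin).mean()

def bidirectional_ranking_loss(feat_vis, feat_thm, lab_vis, lab_thm):
    # Rank visible-to-thermal and thermal-to-visible simultaneously.
    return (directional_ranking(feat_vis, feat_thm, lab_vis, lab_thm)
            + directional_ranking(feat_thm, feat_vis, lab_thm, lab_vis))

lab = torch.tensor([0, 0, 1, 1])
loss = bidirectional_ranking_loss(torch.randn(4, 256), torch.randn(4, 256), lab, lab)
```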