Scientific literature on the topic "Adversarial Information Fusion"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Adversarial Information Fusion."

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of a scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Adversarial Information Fusion"

1. Kott, Alexander, Rajdeep Singh, William M. McEneaney, and Wes Milks. "Hypothesis-driven information fusion in adversarial, deceptive environments." Information Fusion 12, no. 2 (April 2011): 131–44. http://dx.doi.org/10.1016/j.inffus.2010.09.001.

2. Wu, Zhaoli, Xuehan Wu, Yuancai Zhu, Jingxuan Zhai, Haibo Yang, Zhiwei Yang, Chao Wang, and Jilong Sun. "Research on Multimodal Image Fusion Target Detection Algorithm Based on Generative Adversarial Network." Wireless Communications and Mobile Computing 2022 (January 24, 2022): 1–10. http://dx.doi.org/10.1155/2022/1740909.

Abstract:
In this paper, we propose a target detection algorithm based on adversarial discriminative domain adaptation for infrared and visible image fusion, using unsupervised learning methods to reduce the differences between multimodal image information. First, we improve a fusion model based on a generative adversarial network, using a dual-discriminator generative adversarial network to generate high-quality IR-visible fused images; we then blend the IR and visible images into a ternary dataset and apply a triplet angular loss function for transfer learning. Finally, the fused images are used as input to a Faster R-CNN object detector, and a new non-maximum suppression algorithm improves the detector, further raising detection accuracy. Experiments show that the method achieves mutual complementation of multimodal feature information, makes up for the lack of information in single-modal scenes, and yields good detection results on information from both modalities (infrared and visible light).
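
The dual-discriminator fusion scheme in the abstract above is a recurring pattern in this literature. Below is a minimal PyTorch sketch of one training step of such a scheme; layer sizes, losses, and the random stand-in batch are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Fuses a 1-channel infrared image with a 1-channel visible image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        return self.net(torch.cat([ir, vis], dim=1))

def make_discriminator():
    """Patch-level real/fake logits for one modality."""
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 1, 3, stride=2, padding=1),
    )

def adv_loss(D, x, real):
    logits = D(x)
    target = torch.ones_like(logits) if real else torch.zeros_like(logits)
    return nn.functional.binary_cross_entropy_with_logits(logits, target)

G, D_ir, D_vis = Generator(), make_discriminator(), make_discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(list(D_ir.parameters()) + list(D_vis.parameters()), lr=1e-4)

ir, vis = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)  # stand-in batch

# Discriminator step: each discriminator separates its own modality from the fusion.
fused = G(ir, vis).detach()
loss_d = (adv_loss(D_ir, ir, True) + adv_loss(D_ir, fused, False) +
          adv_loss(D_vis, vis, True) + adv_loss(D_vis, fused, False))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool both discriminators at once.
fused = G(ir, vis)
loss_g = adv_loss(D_ir, fused, True) + adv_loss(D_vis, fused, True)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```
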
3. Yuan, C., C. Q. Sun, X. Y. Tang, and R. F. Liu. "FLGC-Fusion GAN: An Enhanced Fusion GAN Model by Importing Fully Learnable Group Convolution." Mathematical Problems in Engineering 2020 (October 22, 2020): 1–13. http://dx.doi.org/10.1155/2020/6384831.

Abstract:
The purpose of image fusion is to combine source images of the same scene into a single composite image with more useful information and better visual effect. FusionGAN made a breakthrough in this field by proposing to fuse images with a generative adversarial network. However, in trying to retain infrared radiation information and gradient information at the same time, existing fusion methods ignore image contrast and other elements. To this end, we propose a new end-to-end network structure based on generative adversarial networks (GANs), termed FLGC-Fusion GAN. In the generator, fully learnable group convolution improves the efficiency of the model and saves computing resources, giving a better trade-off between model accuracy and speed. In addition, we take the residual dense block as the basic building unit of the network and use perceptual characteristics of the input as the content loss, achieving the effect of deep network supervision. Experimental results on two public datasets show that the proposed method performs well in subjective visual quality and on objective criteria, with clear advantages over other current typical methods.
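
For readers unfamiliar with the "fully learnable group convolution" named in the title, the sketch below shows one plausible relaxation of the idea: a learnable, softmax-normalized matrix softly routes input channels to groups instead of using a fixed assignment. `SoftGroupConv` is a hypothetical simplification for illustration, not the FLGC layer of the paper.

```python
import torch
import torch.nn as nn

class SoftGroupConv(nn.Module):
    def __init__(self, in_ch, out_ch, groups, kernel_size=3):
        super().__init__()
        self.groups = groups
        # One small convolution per group; the assignment matrix softly gates
        # which input channels each group actually uses.
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch // groups, kernel_size, padding=kernel_size // 2)
            for _ in range(groups)
        )
        self.assign = nn.Parameter(torch.randn(groups, in_ch))  # learnable routing

    def forward(self, x):
        gate = torch.softmax(self.assign, dim=0)  # each channel splits across groups
        outs = [conv(x * gate[g].view(1, -1, 1, 1)) for g, conv in enumerate(self.convs)]
        return torch.cat(outs, dim=1)

layer = SoftGroupConv(in_ch=32, out_ch=64, groups=4)
y = layer(torch.rand(2, 32, 16, 16))  # -> (2, 64, 16, 16)
```
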
4. Chen, Xiaoyu, Zhijie Teng, Yingqi Liu, Jun Lu, Lianfa Bai, and Jing Han. "Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception." Entropy 24, no. 10 (September 21, 2022): 1327. http://dx.doi.org/10.3390/e24101327.

Abstract:
Infrared-visible fusion has great potential in night-vision enhancement for intelligent vehicles. The fusion performance depends on fusion rules that balance target saliency and visual perception. However, most existing methods do not have explicit and effective rules, which leads to poor contrast and saliency of the target. In this paper, we propose SGVPGAN, an adversarial framework for high-quality infrared-visible image fusion, which consists of an infrared-visible image fusion network based on Adversarial Semantic Guidance (ASG) and Adversarial Visual Perception (AVP) modules. Specifically, the ASG module transfers the semantics of the target and background to the fusion process for target highlighting. The AVP module analyzes visual features from the global structure and local details of the visible and fused images and then guides the fusion network to adaptively generate a weight map for signal completion, so that the resulting fused images possess a natural and visible appearance. We construct a joint distribution function between the fused images and the corresponding semantics and use the discriminator to improve the fusion performance in terms of natural appearance and target saliency. Experimental results demonstrate that the proposed ASG and AVP modules effectively guide the image-fusion process by selectively preserving the details in visible images and the salient information of targets in infrared images. SGVPGAN exhibits significant improvements over other fusion methods.
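
The "weight map" idea in the abstract above can be pictured as a per-pixel convex blend of the two modalities: a small network predicts a map w in [0, 1] and the fusion is w * ir + (1 - w) * vis. The sketch below assumes a tiny two-layer weight network; the real SGVPGAN architecture is considerably richer.

```python
import torch
import torch.nn as nn

class WeightMapFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # per-pixel weight map
        )

    def forward(self, ir, vis):
        w = self.weight_net(torch.cat([ir, vis], dim=1))
        return w * ir + (1.0 - w) * vis, w

fuse = WeightMapFusion()
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
fused, w = fuse(ir, vis)  # fused image plus the map that produced it
```
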
5. Jia, Ruiming (贾瑞明), Tong Li (李彤), Shengjie Liu (刘圣杰), Jiali Cui (崔家礼), and Fei Yuan (袁飞). "Infrared Simulation Based on Cascade Multi-Scale Information Fusion Adversarial Network." Acta Optica Sinica 40, no. 18 (2020): 1810001. http://dx.doi.org/10.3788/aos202040.1810001.

6. Song, Xuhui, Hongtao Yu, Shaomei Li, and Huansha Wang. "Robust Chinese Named Entity Recognition Based on Fusion Graph Embedding." Electronics 12, no. 3 (January 22, 2023): 569. http://dx.doi.org/10.3390/electronics12030569.

Abstract:
Named entity recognition is an important basic task in the field of natural language processing. Current mainstream named entity recognition methods are mainly based on deep neural network models. The inherent vulnerability of deep neural networks leads to a significant decline in recognition accuracy when adversarial text is present. To improve the robustness of named entity recognition under adversarial conditions, this paper proposes a Chinese named entity recognition model based on fusion graph embedding. First, the model encodes and represents the phonetic and glyph information of the input text through graph learning and integrates this multimodal knowledge into the model, enhancing its robustness. Second, a Bi-LSTM further captures the contextual information of the text. Finally, a conditional random field decodes and labels the entities. Experimental results on the OntoNotes4.0, MSRA, Weibo, and Resume datasets show that the F1 values of this model increase by 3.76%, 3.93%, 4.16%, and 6.49%, respectively, in the presence of adversarial text, verifying the effectiveness of the model.
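
The Bi-LSTM-plus-CRF backbone mentioned in the abstract is a standard tagging pipeline and easy to sketch. The skeleton below omits the paper's graph-embedding front end and assumes the third-party pytorch-crf package for the CRF layer; all dimensions are illustrative.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=64, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_tags)  # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags)  # negative log-likelihood

    def decode(self, tokens):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions)  # best tag sequence per sentence

model = BiLSTMCRF(vocab_size=5000, num_tags=9)  # e.g., BIO tags for 4 entity types
tokens = torch.randint(0, 5000, (2, 12))
print(model.decode(tokens))
```
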
7. Xu, Dongdong, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, and Xin Zhang. "Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network." Applied Sciences 10, no. 2 (January 11, 2020): 554. http://dx.doi.org/10.3390/app10020554.

Abstract:
Infrared and visible image fusion can produce combined images with salient hidden objectives and abundant visible details simultaneously. In this paper, we propose a novel method for infrared and visible image fusion within a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished through an adversarial game directed by specially designed loss functions. The generator, with residual blocks and skip connections, extracts deep features of the source image pairs and generates an elementary fused image containing infrared thermal radiation information and visible texture information; further details from the visible images are added to the final images through the discriminator. There is no need to design activity-level measurements or fusion rules manually; they are implemented automatically. The method also avoids complicated multi-scale transforms, reducing computational cost and complexity. Experimental results demonstrate that the proposed method produces the desired images, achieving better performance in objective assessment and visual quality than nine representative infrared and visible image fusion methods.
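
A generator built from residual blocks with skip connections, as the abstract describes, has a simple canonical shape. The sketch below shows that structure only; channel counts and depth are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # identity skip connection

class FusionGenerator(nn.Module):
    def __init__(self, ch=32, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(2, ch, 3, padding=1)   # IR + visible stacked
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, ir, vis):
        feat = self.head(torch.cat([ir, vis], dim=1))
        return torch.tanh(self.tail(self.blocks(feat) + feat))  # long skip

g = FusionGenerator()
fused = g(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```
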
8. Tang, Wei, Yu Liu, Chao Zhang, Juan Cheng, Hu Peng, and Xun Chen. "Green Fluorescent Protein and Phase-Contrast Image Fusion via Generative Adversarial Networks." Computational and Mathematical Methods in Medicine 2019 (December 4, 2019): 1–11. http://dx.doi.org/10.1155/2019/5450373.

Abstract:
In the field of cell and molecular biology, green fluorescent protein (GFP) images provide functional information embodying the molecular distribution of biological cells while phase-contrast images maintain structural information with high resolution. Fusion of GFP and phase-contrast images is of high significance to the study of subcellular localization, protein functional analysis, and genetic expression. This paper proposes a novel algorithm to fuse these two types of biological images via generative adversarial networks (GANs) by carefully taking their own characteristics into account. The fusion problem is modelled as an adversarial game between a generator and a discriminator. The generator aims to create a fused image that well extracts the functional information from the GFP image and the structural information from the phase-contrast image at the same time. The target of the discriminator is to further improve the overall similarity between the fused image and the phase-contrast image. Experimental results demonstrate that the proposed method can outperform several representative and state-of-the-art image fusion methods in terms of both visual quality and objective evaluation.
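
The adversarial game between generator and discriminator sketched in the abstract can be written as a standard GAN objective. In the hedged formalization below, x_g is the GFP image, x_p the phase-contrast image, D separates real phase-contrast images from fused ones, and the content weight λ is an assumption, not the paper's setting:

```latex
L_D = -\,\mathbb{E}_{x_p}\big[\log D(x_p)\big]
      -\,\mathbb{E}_{x_g,x_p}\big[\log\big(1 - D(G(x_g,x_p))\big)\big],
\qquad
L_G = -\,\mathbb{E}_{x_g,x_p}\big[\log D(G(x_g,x_p))\big]
      + \lambda\,\mathbb{E}_{x_g,x_p}\big\|\,G(x_g,x_p) - x_g\,\big\|_1 .
```
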
9. He, Gang, Jiaping Zhong, Jie Lei, Yunsong Li, and Weiying Xie. "Hyperspectral Pansharpening Based on Spectral Constrained Adversarial Autoencoder." Remote Sensing 11, no. 22 (November 18, 2019): 2691. http://dx.doi.org/10.3390/rs11222691.

Abstract:
Hyperspectral (HS) imaging is conducive to better describing and understanding the subtle differences in the spectral characteristics of different materials, thanks to richer spectral information than traditional imaging systems provide. However, it remains challenging to obtain HS images with high resolution (HR) in both the spectral and spatial domains. Different from previous methods, we first propose a spectral constrained adversarial autoencoder (SCAAE) to extract deep features of HS images and combine them with the panchromatic (PAN) image to represent the spatial information of HR HS images in a more comprehensive and representative way. In particular, building on the adversarial autoencoder (AAE) network, the SCAAE network adds a spectral constraint to the loss function so that spectral consistency and higher-quality spatial information enhancement can be ensured. Then, an adaptive fusion approach with a simple feature-selection rule is introduced to make full use of the spatial information contained in both the HS image and the PAN image. Specifically, the spatial information from the two different sensors is introduced into a convex optimization equation to obtain the fusion proportion of the two parts and estimate the generated HR HS image. Analysis of the experiments on the tested data sets shows that in CC, SAM, and RMSE the performance of the proposed algorithm improves by about 1.42%, 13.12%, and 29.26%, respectively, on average, which is preferable to the well-performing method HySure. Compared with the MRA-based method, the improvement of the proposed method in the above three indexes is 17.63%, 0.83%, and 11.02%, respectively. Moreover, the results are 0.87%, 22.11%, and 20.66% better, respectively, than the PCA-based method, which fully illustrates the superiority of the proposed method in spatial information preservation. All the experimental results demonstrate that the proposed method is superior to state-of-the-art fusion methods in terms of both subjective and objective evaluation.
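
The "fusion proportion" obtained from a convex optimization equation admits a simple closed-form illustration when the criterion is least squares. The NumPy sketch below is such an illustration only; the criterion and the reference detail map d_ref are assumptions, not the paper's formulation.

```python
import numpy as np

def fusion_proportion(d_hs, d_pan, d_ref):
    """Closed-form minimizer of ||a*d_hs + (1-a)*d_pan - d_ref||^2 over [0, 1]."""
    diff = (d_hs - d_pan).ravel()
    denom = diff @ diff
    if denom == 0.0:
        return 0.5  # branches agree; any weight gives the same fusion
    alpha = ((d_ref - d_pan).ravel() @ diff) / denom
    return float(np.clip(alpha, 0.0, 1.0))

d_hs, d_pan = np.random.rand(64, 64), np.random.rand(64, 64)
d_ref = 0.5 * (d_hs + d_pan)             # stand-in reference for the demo
a = fusion_proportion(d_hs, d_pan, d_ref)
fused_detail = a * d_hs + (1 - a) * d_pan
```
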
10. Jin, Zhou-xiang, and Hao Qin. "Generative Adversarial Network Based on Multi-feature Fusion Strategy for Motion Image Deblurring." 電腦學刊 (Journal of Computers) 33, no. 1 (February 2022): 31–41. http://dx.doi.org/10.53106/199115992022023301004.

Abstract:
Deblurring of motion images is part of the field of image restoration. It is difficult not only because the motion parameters must be estimated, but also because of complex factors such as noise, which make the deblurring algorithm harder. Image deblurring can be divided into two categories: non-blind deblurring, where the blur kernel is known, and blind deblurring, where it is unknown. Traditional motion-deblurring networks ignore the non-uniformity of motion-blurred images and cannot effectively recover high-frequency details or remove artifacts. In this paper, we propose a new generative adversarial network based on a multi-feature fusion strategy for motion image deblurring. An adaptive residual module composed of a deformable convolution module and a channel attention module is constructed in the generator network. The deformable convolution module learns the shape variables of motion-blurred image features and can dynamically adjust the shape and size of the convolution kernel according to the deformation information of the image, improving the network's ability to adapt to image deformation. The channel attention module adjusts the extracted deformation features to obtain more high-frequency features and enhance the texture details of the restored image. Experimental results on the publicly available GOPRO dataset show that the proposed algorithm improves peak signal-to-noise ratio (PSNR) and reconstructs high-quality images with rich texture details compared with other motion-deblurring methods.
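
The channel attention module the abstract mentions is usually realized in squeeze-and-excitation style: globally pool each channel, pass the result through a small bottleneck, and rescale the channels. A minimal sketch of that standard form follows; the paper's exact variant may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool
        return x * w.view(b, c, 1, 1)     # excite: rescale each channel

att = ChannelAttention(32)
y = att(torch.rand(2, 32, 16, 16))  # same shape, channels reweighted
```
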

Theses on the topic "Adversarial Information Fusion"

1. Kallas, Kassem. "A Game-Theoretic Approach for Adversarial Information Fusion in Distributed Sensor Networks." Doctoral thesis, Università di Siena, 2017. http://hdl.handle.net/11365/1005735.

Abstract:
Every day we share our personal information through digital systems that are constantly exposed to threats. For this reason, security-oriented disciplines of signal processing have received increasing attention over the last decades: multimedia forensics, digital watermarking, biometrics, network monitoring, steganography, and steganalysis are just a few examples. Even though each of these fields has its own peculiarities, they all have to deal with a common problem: the presence of one or more adversaries aiming to make the system fail. Adversarial Signal Processing lays the basis of a general theory that takes into account the impact that the presence of an adversary has on the design of effective signal processing tools. By focusing on the application side of Adversarial Signal Processing, namely adversarial information fusion in distributed sensor networks, and by adopting a game-theoretic approach, this thesis contributes to the above mission by addressing four issues. First, we address decision fusion in distributed sensor networks by developing a novel soft-isolation defense scheme that protects the network from adversaries, specifically Byzantines. Second, we develop an optimum decision fusion strategy in the presence of Byzantines. Next, we propose a technique to reduce the complexity of the optimum fusion by relying on a novel nearly optimum message-passing algorithm based on factor graphs. Finally, we introduce a defense mechanism to protect decentralized networks running consensus algorithms against data falsification attacks.
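
To make the thesis topic concrete, the toy sketch below fuses binary sensor reports when a known fraction of nodes are Byzantines that flip their votes: marginalizing over node identity gives a per-report likelihood, and the fusion center thresholds the summed log-likelihood ratio. All parameter values are illustrative, and the thesis's game-theoretic and factor-graph machinery goes far beyond this sketch.

```python
import numpy as np

def fuse(reports, alpha=0.3, eps=0.1):
    """Log-likelihood-ratio fusion of binary reports; returns the MAP state."""
    # Probability a report matches the true state, averaged over honest/Byzantine:
    # honest nodes match with prob 1 - eps, Byzantines flip and match with prob eps.
    p_match = (1 - alpha) * (1 - eps) + alpha * eps
    llr = np.where(reports == 1,
                   np.log(p_match / (1 - p_match)),   # evidence for state 1
                   np.log((1 - p_match) / p_match))   # evidence for state 0
    return int(llr.sum() > 0)

rng = np.random.default_rng(0)
state = 1
honest = rng.random(20) > 0.3                           # ~70% honest nodes
obs = np.where(rng.random(20) < 0.1, 1 - state, state)  # noisy local decisions
reports = np.where(honest, obs, 1 - obs)                # Byzantines flip their vote
print(fuse(reports))                                    # usually recovers state 1
```
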
2. Bittner, Ksenia. "Building Information Extraction and Refinement from VHR Satellite Imagery Using Deep Learning Techniques." Doctoral thesis, 2020. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-202003262703.

Abstract:
Building information extraction and reconstruction from satellite images is an essential task for many applications related to 3D city modeling, planning, disaster management, navigation, and decision-making. Building information can be obtained and interpreted from several data sources, such as terrestrial measurements, airplane surveys, and space-borne imagery. However, the latter acquisition method outperforms the others in terms of cost and worldwide coverage: space-borne platforms can provide imagery of remote places, inaccessible to other missions, at any time. Because the manual interpretation of high-resolution satellite imagery is tedious and time-consuming, its automatic analysis continues to be an intense field of research. At times, however, it is difficult to understand complex scenes with dense placement of buildings, where parts of buildings may be occluded by vegetation or other surrounding constructions, making their extraction or reconstruction even more difficult. Incorporating several data sources representing different modalities may ease the problem. The goal of this dissertation is to integrate multiple high-resolution remote sensing data sources for automatic satellite imagery interpretation with emphasis on building information extraction and refinement, whose challenges are addressed as follows. Building footprint extraction from Very High-Resolution (VHR) satellite images is an important but highly challenging task, due to the large diversity of building appearances and the relatively low spatial resolution of satellite data compared to airborne data. Many algorithms perform building footprint extraction using spectral-based or appearance-based criteria from single or fused data sources. The input features for these algorithms are usually manually extracted, which limits their accuracy. Building on the advantages of recently developed Fully Convolutional Networks (FCNs), i.e., the automatic extraction of relevant features and dense classification of images, an end-to-end framework is proposed which effectively combines the spectral and height information from red, green, and blue (RGB), pan-chromatic (PAN), and normalized Digital Surface Model (nDSM) image data and automatically generates a full-resolution binary building mask. The proposed architecture consists of three parallel networks merged at a late stage, which helps propagate fine detailed information from earlier layers to higher levels in order to produce an output with high-quality building outlines. The performance of the model is examined on new unseen data to demonstrate its generalization capacity. The availability of detailed Digital Surface Models (DSMs) generated by dense matching and representing the elevation surface of the Earth can improve the analysis and interpretation of complex urban scenarios. The generation of DSMs from VHR optical stereo satellite imagery leads to high-resolution DSMs which often suffer from mismatches, missing values, or blunders, resulting in coarse building shape representation. To overcome these problems, a methodology based on conditional Generative Adversarial Networks (cGANs) is developed for generating a good-quality, Level-of-Detail (LoD) 2-like DSM with enhanced 3D object shapes directly from the low-quality, photogrammetric, half-meter-resolution satellite DSM input.
Various deep learning applications benefit from multi-task learning with multiple regression and classification objectives by taking advantage of the similarities between individual tasks. This work therefore demonstrates such influences for important remote sensing applications, namely realistic elevation model generation and roof type classification from stereo half-meter-resolution satellite DSMs. Recently published deep learning architectures for both tasks are investigated, and a new end-to-end cGAN-based network is developed, which combines the models that provide the best results for their individual tasks. To benefit from the information provided by multiple data sources, a different cGAN-based workflow is proposed in which the generative part consists of two encoders and a common decoder that blends the intensity and height information within one network for the DSM refinement task. The inputs to the introduced network are single-channel photogrammetric DSMs with continuous values and pan-chromatic half-meter-resolution satellite images. Information fusion from different modalities helps propagate fine details, completes inaccurate or missing 3D information about building forms, and improves the building boundaries, making them more rectilinear. Lastly, an additional comparison between the proposed methodologies for DSM enhancement is made to discuss and verify the most beneficial workflow and the applicability of the resulting DSMs for different remote sensing approaches.
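
The two-encoder, shared-decoder generator described for the DSM refinement task has a simple structural skeleton: one encoder for the photogrammetric DSM (height), one for the panchromatic image (intensity), with features concatenated before a common decoder. The sketch below shows that shape only, with assumed channel counts and depth; it is not the thesis's network.

```python
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    )

class TwoEncoderGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_dsm, self.enc_pan = encoder(), encoder()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # refined DSM
        )

    def forward(self, dsm, pan):
        fused = torch.cat([self.enc_dsm(dsm), self.enc_pan(pan)], dim=1)
        return self.decoder(fused)

g = TwoEncoderGenerator()
refined = g(torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128))
```
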

Book chapters on the topic "Adversarial Information Fusion"

1. Abrardo, Andrea, Mauro Barni, Kassem Kallas, and Benedetta Tondi. "Adversarial Decision Fusion: A Heuristic Approach." In Information Fusion in Distributed Sensor Networks with Byzantines, 45–55. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-32-9001-3_4.

2. Narute, Bharati, and Prashant Bartakke. "Brain MRI and CT Image Fusion Using Generative Adversarial Network." In Communications in Computer and Information Science, 97–109. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11349-9_9.

3. Wang, Huiying, Huixin Shen, Boyang Zhang, Yu Wen, and Dan Meng. "Generating Adversarial Point Clouds on Multi-modal Fusion Based 3D Object Detection Model." In Information and Communications Security, 187–203. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86890-1_11.

4. Yang, Dongxu, Yongbin Zheng, Peng Sun, Wanying Xu, and Di Zhu. "A Generative Adversarial Network for Image Fusion via Preserving Texture Information." In Lecture Notes in Electrical Engineering, 4795–803. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-6613-2_465.

5. Ballora, Mark, Nicklaus A. Giacobe, Michael McNeese, and David L. Hall. "Information Data Fusion and Computer Network Defense." In Situational Awareness in Computer Network Defense, 141–64. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0104-8.ch009.

Abstract:
Computer networks no longer simply enable military and civilian operations, but have become vital infrastructures for all types of operations ranging from sensing and command/control to logistics, power distribution, and many other functions. Consequently, network attacks have become weapons of choice for adversaries engaged in asymmetric warfare. Traditionally, data and information fusion techniques were developed to improve situational awareness and threat assessment by combining data from diverse sources, and have recently been extended to include both physical (“hard”) sensors and human observers (acting as “soft” sensors). This chapter provides an introduction to traditional data fusion models and adapts them to the domain of cyber security. Recent advances in hard and soft information fusion are summarized and applied to the cyber security domain. Research on the use of sound for human-in-the-loop pattern recognition (sonification) is also introduced. Finally, perspectives are provided on the future for data fusion in cyber security research.
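
Sonification, as introduced in the chapter, maps data streams to sound for human-in-the-loop monitoring. The toy sketch below writes a WAV file in which invented network events play tones whose pitch rises with severity; the event fields and the frequency mapping are illustrative assumptions, not the chapter's design.

```python
import math, struct, wave

EVENTS = [("port_scan", 2), ("login_fail", 1), ("exfil_alert", 3)]  # (type, severity)
BASE_HZ, RATE, DUR = 220.0, 8000, 0.25

frames = bytearray()
for _, severity in EVENTS:
    freq = BASE_HZ * severity                 # higher severity -> higher pitch
    for n in range(int(RATE * DUR)):          # one short tone per event
        sample = int(12000 * math.sin(2 * math.pi * freq * n / RATE))
        frames += struct.pack("<h", sample)

with wave.open("alerts.wav", "wb") as w:      # 16-bit mono PCM
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(bytes(frames))
```
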

Conference papers on the topic "Adversarial Information Fusion"

1. Weerakoon, Dulanga, Kasthuri Jayarajah, Randy Tandriansyah, and Archan Misra. "Resilient Collaborative Intelligence for Adversarial IoT Environments." In 2019 22nd International Conference on Information Fusion (FUSION). IEEE, 2019. http://dx.doi.org/10.23919/fusion43075.2019.9011397.

2. Bostrom-Rost, Per, Daniel Axehill, and Gustaf Hendeby. "Informative Path Planning in the Presence of Adversarial Observers." In 2019 22nd International Conference on Information Fusion (FUSION). IEEE, 2019. http://dx.doi.org/10.23919/fusion43075.2019.9011193.

3. Vyavahare, Pooja, Lili Su, and Nitin H. Vaidya. "Distributed Learning over Time-Varying Graphs with Adversarial Agents." In 2019 22nd International Conference on Information Fusion (FUSION). IEEE, 2019. http://dx.doi.org/10.23919/fusion43075.2019.9011353.

4. Tomsett, Richard, Amy Widdicombe, Tianwei Xing, Supriyo Chakraborty, Simon Julier, Prudhvi Gurram, Raghuveer Rao, and Mani Srivastava. "Why the Failure? How Adversarial Examples Can Provide Insights for Interpretable Machine Learning." In 2018 International Conference on Information Fusion (FUSION). IEEE, 2018. http://dx.doi.org/10.23919/icif.2018.8455710.

5. Kodipaka, Surya, Ajaya Dahal, Logan Smith, Nicholas Smith, Bo Tang, John E. Ball, and Maxwell Young. "Adversarial indoor signal detection." In Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, edited by Lynne L. Grewe, Erik P. Blasch, and Ivan Kadar. SPIE, 2021. http://dx.doi.org/10.1117/12.2587525.

6. Govaers, Felix, and Paul Baggenstoss. "On a Detection Method of Adversarial Samples for Deep Neural Networks." In 2021 IEEE 24th International Conference on Information Fusion (FUSION). IEEE, 2021. http://dx.doi.org/10.23919/fusion49465.2021.9627060.

7. Caballero, William, Mark A. Friend, and Erik Blasch. "Adversarial machine learning and adversarial risk analysis in multi-source command and control." In Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, edited by Lynne L. Grewe, Erik P. Blasch, and Ivan Kadar. SPIE, 2021. http://dx.doi.org/10.1117/12.2589027.

8. Ranjit Ganta, Srivatsava, and Raj Acharya. "On breaching enterprise data privacy through adversarial information fusion." In 2008 IEEE 24th International Conference on Data Engineering Workshop (ICDE Workshop 2008). IEEE, 2008. http://dx.doi.org/10.1109/icdew.2008.4498326.

9. Arnold, G., A. M. Fullenkamp, A. Bornstein, F. Morelli, T. Brown, P. Iyer, J. Lavery, et al. "Research directions in remote detection of covert tactical adversarial intent of individuals in asymmetric operations." In 2010 13th International Conference on Information Fusion (FUSION 2010). IEEE, 2010. http://dx.doi.org/10.1109/icif.2010.5711892.

10. Zhang, Chongyang, Yu Qi, and Hiroyuki Kameda. "Multi-scale perturbation fusion adversarial attack on MTCNN face detection system." In 2022 4th International Conference on Communications, Information System and Computer Engineering (CISCE). IEEE, 2022. http://dx.doi.org/10.1109/cisce55963.2022.9851024.
