A selection of scholarly literature on the topic "Adversarial Information Fusion"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Adversarial Information Fusion".

Next to every entry in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Adversarial Information Fusion"

1. Kott, Alexander, Rajdeep Singh, William M. McEneaney, and Wes Milks. "Hypothesis-driven information fusion in adversarial, deceptive environments." Information Fusion 12, no. 2 (April 2011): 131–44. http://dx.doi.org/10.1016/j.inffus.2010.09.001.

2. Wu, Zhaoli, Xuehan Wu, Yuancai Zhu, Jingxuan Zhai, Haibo Yang, Zhiwei Yang, Chao Wang, and Jilong Sun. "Research on Multimodal Image Fusion Target Detection Algorithm Based on Generative Adversarial Network." Wireless Communications and Mobile Computing 2022 (January 24, 2022): 1–10. http://dx.doi.org/10.1155/2022/1740909.

Abstract:
In this paper, we propose a target detection algorithm based on adversarial discriminative domain adaptation for infrared and visible image fusion, using unsupervised learning methods to reduce the differences between multimodal image information. First, the paper improves a fusion model based on the generative adversarial network: a fusion algorithm built on a dual-discriminator generative adversarial network generates high-quality IR-visible fused images, the IR and visible images are then blended into a ternary dataset, and a triplet angular loss function is combined for transfer learning. Finally, the fused images are used as input to the Faster R-CNN object detection algorithm, and a new non-maximum suppression algorithm improves Faster R-CNN and further raises detection accuracy. Experiments show that the method achieves mutual complementation of multimodal feature information, makes up for the lack of information in single-modal scenes, and achieves good detection results for information from both modalities (infrared and visible light).
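
To make the adversarial part of this pipeline concrete, here is a minimal PyTorch sketch of a dual-discriminator fusion objective: two discriminators pull the fused image toward infrared and visible statistics, respectively. Every module, name, and size below is an illustrative assumption, not the authors' code.

```python
import torch
import torch.nn as nn

class FusionGenerator(nn.Module):
    """Fuses a 1-channel IR image and a 1-channel visible image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, ir, vis):
        return self.net(torch.cat([ir, vis], dim=1))

def discriminator():  # patch-level real/fake scores
    return nn.Sequential(
        nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 1, 4, stride=2, padding=1),
    )

gen, d_ir, d_vis = FusionGenerator(), discriminator(), discriminator()
bce = nn.BCEWithLogitsLoss()

ir, vis = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
fused = gen(ir, vis)

# Each discriminator tries to tell its own source modality from the fused
# image, so the generator is pulled toward preserving both IR and visible
# statistics at once.
g_loss = bce(d_ir(fused), torch.ones_like(d_ir(fused))) \
       + bce(d_vis(fused), torch.ones_like(d_vis(fused)))
print(g_loss.item())
```
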
3. Yuan, C., C. Q. Sun, X. Y. Tang, and R. F. Liu. "FLGC-Fusion GAN: An Enhanced Fusion GAN Model by Importing Fully Learnable Group Convolution." Mathematical Problems in Engineering 2020 (October 22, 2020): 1–13. http://dx.doi.org/10.1155/2020/6384831.

Abstract:
The purpose of image fusion is to combine source images of the same scene into a single composite image with more useful information and better visual effects. FusionGAN made a breakthrough in this field by proposing to use a generative adversarial network to fuse images. However, in trying to retain infrared radiation information and gradient information at the same time, existing fusion methods can ignore image contrast and other elements. To this end, we propose a new end-to-end network structure based on generative adversarial networks (GANs), termed FLGC-Fusion GAN. In the generator, the fully learnable group convolution improves the efficiency of the model and saves computing resources, giving a better trade-off between the accuracy and speed of the model. In addition, we take the residual dense block as the basic network building unit and use perceptual characteristics of inactive features as the content loss of the input, achieving the effect of deep network supervision. Experimental results on two public datasets show that the proposed method performs well in subjective visual performance and objective criteria and has obvious advantages over other current typical methods.
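
The distinguishing idea of fully learnable group convolution is that the assignment of channels to groups is itself learned. Below is a heavily simplified PyTorch sketch of that idea, in which a soft, learnable mask over a 1x1 convolution stands in for the binary assignment matrices of the actual FLGC formulation; the class name and sizes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SoftGroupConv1x1(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch) * 0.02)
        # Learnable logits deciding which input channels each output channel
        # uses; a sigmoid relaxes the hard group assignment into [0, 1].
        self.mask_logits = nn.Parameter(torch.zeros(out_ch, in_ch))

    def forward(self, x):
        w = self.weight * torch.sigmoid(self.mask_logits)  # masked 1x1 kernel
        return nn.functional.conv2d(x, w[:, :, None, None])

layer = SoftGroupConv1x1(16, 32)
print(layer(torch.rand(1, 16, 8, 8)).shape)  # torch.Size([1, 32, 8, 8])
```
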
4. Chen, Xiaoyu, Zhijie Teng, Yingqi Liu, Jun Lu, Lianfa Bai, and Jing Han. "Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception." Entropy 24, no. 10 (September 21, 2022): 1327. http://dx.doi.org/10.3390/e24101327.

Abstract:
Infrared-visible fusion has great potential for night-vision enhancement in intelligent vehicles. Fusion performance depends on fusion rules that balance target saliency and visual perception. However, most existing methods lack explicit and effective rules, which leads to poor contrast and target saliency. In this paper, we propose SGVPGAN, an adversarial framework for high-quality infrared-visible image fusion, which consists of an infrared-visible image fusion network based on Adversarial Semantic Guidance (ASG) and Adversarial Visual Perception (AVP) modules. Specifically, the ASG module transfers the semantics of the target and background to the fusion process to highlight targets. The AVP module analyzes visual features from the global structure and local details of the visible and fusion images and then guides the fusion network to adaptively generate a weight map of signal completion, so that the resulting fusion images possess a natural and visible appearance. We construct a joint distribution function between the fusion images and the corresponding semantics and use the discriminator to improve fusion performance in terms of natural appearance and target saliency. Experimental results demonstrate that the proposed ASG and AVP modules effectively guide the image-fusion process by selectively preserving the details of visible images and the salient information of targets in infrared images. SGVPGAN exhibits significant improvements over other fusion methods.
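
As a toy illustration of the semantic-guidance idea, the sketch below blends the two modalities with a per-pixel semantic mask so that target regions lean on the infrared signal and background regions on the visible one. The function and the seg_mask input are hypothetical stand-ins, not the SGVPGAN interface.

```python
import torch

def semantic_guided_blend(ir, vis, seg_mask):
    """Target regions (mask near 1) favour IR saliency; the background
    (mask near 0) favours visible-light detail, as the ASG idea suggests."""
    return seg_mask * ir + (1.0 - seg_mask) * vis

ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
seg_mask = torch.rand(1, 1, 64, 64)  # stand-in for a segmentation network's output
print(semantic_guided_blend(ir, vis, seg_mask).shape)
```
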
5. Jia, Ruiming (贾瑞明), Tong Li (李彤), Shengjie Liu (刘圣杰), Jiali Cui (崔家礼), and Fei Yuan (袁飞). "Infrared Simulation Based on Cascade Multi-Scale Information Fusion Adversarial Network." Acta Optica Sinica 40, no. 18 (2020): 1810001. http://dx.doi.org/10.3788/aos202040.1810001.

6. Song, Xuhui, Hongtao Yu, Shaomei Li, and Huansha Wang. "Robust Chinese Named Entity Recognition Based on Fusion Graph Embedding." Electronics 12, no. 3 (January 22, 2023): 569. http://dx.doi.org/10.3390/electronics12030569.

Abstract:
Named entity recognition is an important basic task in natural language processing. Current mainstream named entity recognition methods are mainly based on deep neural network models. The vulnerability of deep neural networks leads to a significant decline in named entity recognition accuracy when adversarial text is present. To improve the robustness of named entity recognition under adversarial conditions, this paper proposes a Chinese named entity recognition model based on fusion graph embedding. First, the model encodes and represents the phonetic and glyph information of the input text through graph learning and integrates this multimodal knowledge into the model, thus enhancing its robustness. Second, we use a Bi-LSTM to further obtain the context information of the text. Finally, a conditional random field is used to decode and label entities. Experimental results on the OntoNotes4.0, MSRA, Weibo, and Resume datasets show that the F1 values of this model increased by 3.76%, 3.93%, 4.16%, and 6.49%, respectively, in the presence of adversarial text, which verifies the effectiveness of the model.
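
Structurally, the described model is an embedding-fusion encoder feeding a Bi-LSTM and a CRF tagger. A minimal PyTorch skeleton of that shape follows, with the CRF layer reduced to per-token emission scores for brevity; all vocabulary sizes, dimensions, and names are invented for illustration.

```python
import torch
import torch.nn as nn

class FusionNER(nn.Module):
    def __init__(self, n_chars=5000, n_phon=60, n_glyph=500, dim=64, n_tags=9):
        super().__init__()
        self.char = nn.Embedding(n_chars, dim)
        self.phon = nn.Embedding(n_phon, dim)    # pinyin-like phonetic units
        self.glyph = nn.Embedding(n_glyph, dim)  # radical/glyph components
        self.lstm = nn.LSTM(3 * dim, dim, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * dim, n_tags)   # emission scores for a CRF

    def forward(self, chars, phons, glyphs):
        x = torch.cat([self.char(chars), self.phon(phons), self.glyph(glyphs)], -1)
        h, _ = self.lstm(x)                      # contextual encoding
        return self.emit(h)                      # (batch, seq_len, n_tags)

model = FusionNER()
ids = torch.randint(0, 50, (2, 10))
print(model(ids, ids % 60, ids % 500).shape)     # torch.Size([2, 10, 9])
```
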
7. Xu, Dongdong, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, and Xin Zhang. "Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network." Applied Sciences 10, no. 2 (January 11, 2020): 554. http://dx.doi.org/10.3390/app10020554.

Abstract:
Infrared and visible image fusion can obtain combined images with salient hidden targets and abundant visible details simultaneously. In this paper, we propose a novel method for infrared and visible image fusion with a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished through an adversarial game directed by unique loss functions. The generator, with residual blocks and skip connections, extracts deep features of source image pairs and generates an elementary fused image with infrared thermal radiation information and visible texture information; more details from the visible images are added to the final images through the discriminator. It is unnecessary to design activity-level measurements and fusion rules manually, as these are now implemented automatically. Also, there are no complicated multi-scale transforms in this method, so the computational cost and complexity can be reduced. Experimental results demonstrate that the proposed method produces desirable images, achieving better performance in objective assessment and visual quality compared with nine representative infrared and visible image fusion methods.
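
The generator's basic unit, a residual block with a skip connection, takes only a few lines of PyTorch; this is a generic residual block under assumed channel sizes, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Generic residual block: the skip connection lets source detail bypass
    the convolutions, which eases training of deeper fusion generators."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return x + self.body(x)

print(ResBlock()(torch.rand(1, 32, 16, 16)).shape)  # torch.Size([1, 32, 16, 16])
```
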
8. Tang, Wei, Yu Liu, Chao Zhang, Juan Cheng, Hu Peng, and Xun Chen. "Green Fluorescent Protein and Phase-Contrast Image Fusion via Generative Adversarial Networks." Computational and Mathematical Methods in Medicine 2019 (December 4, 2019): 1–11. http://dx.doi.org/10.1155/2019/5450373.

Abstract:
In the field of cell and molecular biology, green fluorescent protein (GFP) images provide functional information embodying the molecular distribution of biological cells while phase-contrast images maintain structural information with high resolution. Fusion of GFP and phase-contrast images is of high significance to the study of subcellular localization, protein functional analysis, and genetic expression. This paper proposes a novel algorithm to fuse these two types of biological images via generative adversarial networks (GANs) by carefully taking their own characteristics into account. The fusion problem is modelled as an adversarial game between a generator and a discriminator. The generator aims to create a fused image that well extracts the functional information from the GFP image and the structural information from the phase-contrast image at the same time. The target of the discriminator is to further improve the overall similarity between the fused image and the phase-contrast image. Experimental results demonstrate that the proposed method can outperform several representative and state-of-the-art image fusion methods in terms of both visual quality and objective evaluation.
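
The division of labour described above (intensity fidelity to the GFP image, structural fidelity to the phase-contrast image) can be written as a simple content objective for the generator. The sketch below is one plausible PyTorch rendering with finite-difference gradients; the exact terms and the weight lam are assumptions, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def gradients(img):
    """Finite-difference horizontal and vertical gradients."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def content_loss(fused, gfp, phase, lam=5.0):
    fx, fy = gradients(fused)
    px, py = gradients(phase)
    # Stay close to GFP intensities (functional information) and to
    # phase-contrast gradients (structural detail).
    return F.l1_loss(fused, gfp) + lam * (F.l1_loss(fx, px) + F.l1_loss(fy, py))

fused = torch.rand(1, 1, 64, 64)
gfp, phase = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
print(content_loss(fused, gfp, phase).item())
```
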
9. He, Gang, Jiaping Zhong, Jie Lei, Yunsong Li, and Weiying Xie. "Hyperspectral Pansharpening Based on Spectral Constrained Adversarial Autoencoder." Remote Sensing 11, no. 22 (November 18, 2019): 2691. http://dx.doi.org/10.3390/rs11222691.

Abstract:
Hyperspectral (HS) imaging is conducive to better describing and understanding the subtle differences in the spectral characteristics of different materials, thanks to sufficient spectral information compared with traditional imaging systems. However, it is still challenging to obtain HS images with high resolution (HR) in both the spectral and spatial domains. Unlike previous methods, we first propose a spectral constrained adversarial autoencoder (SCAAE) to extract deep features of HS images and combine them with the panchromatic (PAN) image to represent the spatial information of HR HS images in a more comprehensive and representative way. In particular, the SCAAE network is built on the adversarial autoencoder (AAE) network with an added spectral constraint in the loss function, so that spectral consistency and higher-quality spatial information enhancement can be ensured. Then, an adaptive fusion approach with a simple feature selection rule is introduced to make full use of the spatial information contained in both the HS image and the PAN image. Specifically, the spatial information from the two different sensors is introduced into a convex optimization equation to obtain the fusion proportion of the two parts and estimate the generated HR HS image. Analysis of the results of experiments executed on the test data sets with different methods shows that in CC, SAM, and RMSE the performance of the proposed algorithm improves by about 1.42%, 13.12%, and 29.26%, respectively, on average, which is preferable to the well-performing HySure method. Compared to the MRA-based method, the improvement of the proposed method in the above three indexes is 17.63%, 0.83%, and 11.02%, respectively. Moreover, the results are 0.87%, 22.11%, and 20.66% better, respectively, than the PCA-based method, which fully illustrates the superiority of the proposed method in spatial information preservation. All the experimental results demonstrate that the proposed method is superior to state-of-the-art fusion methods in terms of subjective and objective evaluations.
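
The spectral constraint enforces consistency between reconstructed and reference spectra. One generic way to express such a constraint is a spectral-angle penalty, sketched below in PyTorch; this is an assumed formulation for illustration, not necessarily the exact SCAAE loss term.

```python
import torch

def spectral_angle_loss(recon, ref, eps=1e-8):
    """Mean spectral angle (radians) between per-pixel spectra.
    recon, ref: (batch, bands, H, W) hyperspectral cubes."""
    dot = (recon * ref).sum(dim=1)
    norms = recon.norm(dim=1) * ref.norm(dim=1)
    cos = (dot / (norms + eps)).clamp(-1 + 1e-6, 1 - 1e-6)
    return torch.acos(cos).mean()

recon, ref = torch.rand(2, 100, 16, 16), torch.rand(2, 100, 16, 16)
print(spectral_angle_loss(recon, ref).item())
```
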
10. Jin, Zhou-xiang, and Hao Qin. "Generative Adversarial Network Based on Multi-feature Fusion Strategy for Motion Image Deblurring." 電腦學刊 (Journal of Computers) 33, no. 1 (February 2022): 031–41. http://dx.doi.org/10.53106/199115992022023301004.

Abstract:
Deblurring of motion images is part of the field of image restoration. It is difficult not only because the motion parameters are hard to estimate, but also because complicating factors such as noise are involved. Image deblurring can be divided into two categories: non-blind image deblurring with a known blur kernel, and blind image deblurring with an unknown blur kernel. Traditional motion-image deblurring networks ignore the non-uniformity of motion-blurred images and cannot effectively recover high-frequency details or remove artifacts. In this paper, we propose a new generative adversarial network based on a multi-feature fusion strategy for motion image deblurring. An adaptive residual module composed of a deformable convolution module and a channel attention module is constructed in the generative network. The deformable convolution module learns the shape variables of motion-blurred image features and can dynamically adjust the shape and size of the convolution kernel according to the deformation information of the image, thus improving the ability of the network to adapt to image deformation. The channel attention module adjusts the extracted deformation features to obtain more high-frequency features and enhance the texture details of the restored image. Experimental results on the publicly available GoPro dataset show that the proposed algorithm improves the peak signal-to-noise ratio (PSNR) and is able to reconstruct high-quality images with rich texture details compared to other motion image deblurring methods.
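
The adaptive residual module pairs a deformable convolution, whose sampling offsets are predicted from the input, with squeeze-and-excitation-style channel attention. A compact PyTorch/torchvision sketch of such a block follows; channel counts and the exact wiring are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class AdaptiveResBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.offset = nn.Conv2d(ch, 18, 3, padding=1)  # 2 * 3 * 3 offsets
        self.deform = DeformConv2d(ch, ch, 3, padding=1)
        self.attn = nn.Sequential(                     # channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.deform(x, self.offset(x))  # sampling grid adapts to the blur
        return x + y * self.attn(y)         # re-weight channels, keep the skip

block = AdaptiveResBlock()
print(block(torch.rand(1, 32, 16, 16)).shape)  # torch.Size([1, 32, 16, 16])
```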

Dissertations on the topic "Adversarial Information Fusion"

1. Kallas, Kassem. "A Game-Theoretic Approach for Adversarial Information Fusion in Distributed Sensor Networks." Doctoral thesis, Università di Siena, 2017. http://hdl.handle.net/11365/1005735.

Abstract:
Every day we share our personal information through digital systems which are constantly exposed to threats. For this reason, security-oriented disciplines of signal processing have received increasing attention in the last decades: multimedia forensics, digital watermarking, biometrics, network monitoring, steganography and steganalysis are just a few examples. Even though each of these fields has its own peculiarities, they all have to deal with a common problem: the presence of one or more adversaries aiming at making the system fail. Adversarial Signal Processing lays the basis of a general theory that takes into account the impact that the presence of an adversary has on the design of effective signal processing tools. By focusing on the application side of Adversarial Signal Processing, namely adversarial information fusion in distributed sensor networks, and adopting a game-theoretic approach, this thesis contributes to the above mission by addressing four issues. First, we address decision fusion in distributed sensor networks by developing a novel soft isolation defense scheme that protects the network from adversaries, specifically, Byzantines. Second, we develop an optimum decision fusion strategy in the presence of Byzantines. In the next step, we propose a technique to reduce the complexity of the optimum fusion by relying on a novel nearly-optimum message passing algorithm based on factor graphs. Finally, we introduce a defense mechanism to protect decentralized networks running consensus algorithm against data falsification attacks.
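
To make the Byzantine setting concrete, the toy NumPy simulation below measures how a plain majority-vote fusion rule degrades when a fraction of sensors always flip their binary reports. All parameters are arbitrary illustrative choices; the thesis develops optimum and message-passing fusion rules that would replace the majority vote here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_byz, p_err, trials = 21, 6, 0.1, 10_000

errors = 0
for _ in range(trials):
    state = rng.integers(0, 2)                      # true binary hypothesis
    # Honest sensors observe the state through a noisy channel...
    reports = np.where(rng.random(n_sensors) < p_err, 1 - state, state)
    # ...while Byzantine sensors deliberately flip their reports.
    reports[:n_byz] = 1 - reports[:n_byz]
    decision = int(reports.sum() > n_sensors / 2)   # majority-vote fusion
    errors += int(decision != state)

print(f"majority-vote error rate with {n_byz} Byzantines: {errors / trials:.3f}")
```
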
2. Bittner, Ksenia. "Building Information Extraction and Refinement from VHR Satellite Imagery using Deep Learning Techniques." Doctoral thesis, 2020. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-202003262703.

Abstract:
Building information extraction and reconstruction from satellite images is an essential task for many applications related to 3D city modeling, planning, disaster management, navigation, and decision-making. Building information can be obtained and interpreted from several kinds of data, such as terrestrial measurements, airplane surveys, and space-borne imagery. However, the latter acquisition method outperforms the others in terms of cost and worldwide coverage: space-borne platforms can provide imagery of remote places, which are inaccessible to other missions, at any time. Because the manual interpretation of high-resolution satellite images is tedious and time-consuming, their automatic analysis continues to be an intense field of research. At times, however, it is difficult to understand complex scenes with dense placement of buildings, where parts of buildings may be occluded by vegetation or other surrounding constructions, making their extraction or reconstruction even more difficult. Incorporating several data sources representing different modalities may ease the problem. The goal of this dissertation is to integrate multiple high-resolution remote sensing data sources for automatic satellite imagery interpretation, with emphasis on building information extraction and refinement, whose challenges are addressed in the following: Building footprint extraction from Very High-Resolution (VHR) satellite images is an important but highly challenging task, owing to the large diversity of building appearances and the relatively low spatial resolution of satellite data compared to airborne data. Many algorithms are built on spectral-based or appearance-based criteria from single or fused data sources to perform building footprint extraction. The input features for these algorithms are usually manually extracted, which limits their accuracy. Based on the advantages of recently developed Fully Convolutional Networks (FCNs), i.e., the automatic extraction of relevant features and dense classification of images, an end-to-end framework is proposed which effectively combines the spectral and height information from red, green, and blue (RGB), pan-chromatic (PAN), and normalized Digital Surface Model (nDSM) image data and automatically generates a full-resolution binary building mask. The proposed architecture consists of three parallel networks merged at a late stage, which helps in propagating fine detailed information from earlier layers to higher levels, in order to produce an output with high-quality building outlines. The performance of the model is examined on new unseen data to demonstrate its generalization capacity. The availability of detailed Digital Surface Models (DSMs) generated by dense matching and representing the elevation surface of the Earth can improve the analysis and interpretation of complex urban scenarios. The generation of DSMs from VHR optical stereo satellite imagery leads to high-resolution DSMs which often suffer from mismatches, missing values, or blunders, resulting in coarse building shape representation. To overcome these problems, a methodology based on a conditional Generative Adversarial Network (cGAN) is developed for generating a good-quality Level of Detail (LoD) 2-like DSM with enhanced 3D object shapes directly from the low-quality photogrammetric half-meter resolution satellite DSM input.
Various deep learning applications benefit from multi-task learning with multiple regression and classification objectives by taking advantage of the similarities between individual tasks. This work therefore examines such influences for important remote sensing applications such as realistic elevation model generation and roof-type classification from stereo half-meter resolution satellite DSMs. Recently published deep learning architectures for both tasks are investigated, and a new end-to-end cGAN-based network is developed that combines the models providing the best results for their individual tasks. To benefit from the information provided by multiple data sources, a different cGAN-based work-flow is proposed for the DSM refinement task, in which the generative part consists of two encoders and a common decoder that blends the intensity and height information within one network. The inputs to the introduced network are single-channel photogrammetric DSMs with continuous values and pan-chromatic half-meter resolution satellite images. Information fusion from different modalities helps in propagating fine details, completes inaccurate or missing 3D information about building forms, and improves the building boundaries, making them more rectilinear. Lastly, an additional comparison between the proposed methodologies for DSM enhancement is made to discuss and verify the most beneficial work-flow and the applicability of the resulting DSMs to different remote sensing approaches.
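
The two-encoder/one-decoder generator layout mentioned for the DSM refinement work-flow can be sketched compactly in PyTorch: one encoder for the photogrammetric DSM, one for the panchromatic image, with their features concatenated before a shared decoder. Layer sizes and the single down/up-sampling stage are illustrative assumptions, not the dissertation's architecture.

```python
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU())

class TwoEncoderGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_dsm, self.enc_pan = encoder(), encoder()
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # refined DSM, continuous heights
        )

    def forward(self, dsm, pan):
        # Blend height and intensity features inside one network.
        z = torch.cat([self.enc_dsm(dsm), self.enc_pan(pan)], dim=1)
        return self.dec(z)

gen = TwoEncoderGenerator()
out = gen(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```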

Book chapters on the topic "Adversarial Information Fusion"

1. Abrardo, Andrea, Mauro Barni, Kassem Kallas, and Benedetta Tondi. "Adversarial Decision Fusion: A Heuristic Approach." In Information Fusion in Distributed Sensor Networks with Byzantines, 45–55. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-32-9001-3_4.

2. Narute, Bharati, and Prashant Bartakke. "Brain MRI and CT Image Fusion Using Generative Adversarial Network." In Communications in Computer and Information Science, 97–109. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11349-9_9.

3. Wang, Huiying, Huixin Shen, Boyang Zhang, Yu Wen, and Dan Meng. "Generating Adversarial Point Clouds on Multi-modal Fusion Based 3D Object Detection Model." In Information and Communications Security, 187–203. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86890-1_11.

4. Yang, Dongxu, Yongbin Zheng, Peng Sun, Wanying Xu, and Di Zhu. "A Generative Adversarial Network for Image Fusion via Preserving Texture Information." In Lecture Notes in Electrical Engineering, 4795–803. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-6613-2_465.

5. Ballora, Mark, Nicklaus A. Giacobe, Michael McNeese, and David L. Hall. "Information Data Fusion and Computer Network Defense." In Situational Awareness in Computer Network Defense, 141–64. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0104-8.ch009.

Abstract:
Computer networks no longer simply enable military and civilian operations, but have become vital infrastructures for all types of operations ranging from sensing and command/control to logistics, power distribution, and many other functions. Consequently, network attacks have become weapons of choice for adversaries engaged in asymmetric warfare. Traditionally, data and information fusion techniques were developed to improve situational awareness and threat assessment by combining data from diverse sources, and have recently been extended to include both physical (“hard”) sensors and human observers (acting as “soft” sensors). This chapter provides an introduction to traditional data fusion models and adapts them to the domain of cyber security. Recent advances in hard and soft information fusion are summarized and applied to the cyber security domain. Research on the use of sound for human-in-the-loop pattern recognition (sonification) is also introduced. Finally, perspectives are provided on the future for data fusion in cyber security research.

Conference papers on the topic "Adversarial Information Fusion"

1. Weerakoon, Dulanga, Kasthuri Jayarajah, Randy Tandriansyah, and Archan Misra. "Resilient Collaborative Intelligence for Adversarial IoT Environments." In 2019 22nd International Conference on Information Fusion (FUSION). IEEE, 2019. http://dx.doi.org/10.23919/fusion43075.2019.9011397.

2. Bostrom-Rost, Per, Daniel Axehill, and Gustaf Hendeby. "Informative Path Planning in the Presence of Adversarial Observers." In 2019 22nd International Conference on Information Fusion (FUSION). IEEE, 2019. http://dx.doi.org/10.23919/fusion43075.2019.9011193.

3. Vyavahare, Pooja, Lili Su, and Nitin H. Vaidya. "Distributed Learning over Time-Varying Graphs with Adversarial Agents." In 2019 22nd International Conference on Information Fusion (FUSION). IEEE, 2019. http://dx.doi.org/10.23919/fusion43075.2019.9011353.

4. Tomsett, Richard, Amy Widdicombe, Tianwei Xing, Supriyo Chakraborty, Simon Julier, Prudhvi Gurram, Raghuveer Rao, and Mani Srivastava. "Why the Failure? How Adversarial Examples Can Provide Insights for Interpretable Machine Learning." In 2018 International Conference on Information Fusion (FUSION). IEEE, 2018. http://dx.doi.org/10.23919/icif.2018.8455710.

5. Kodipaka, Surya, Ajaya Dahal, Logan Smith, Nicholas Smith, Bo Tang, John E. Ball, and Maxwell Young. "Adversarial indoor signal detection." In Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, edited by Lynne L. Grewe, Erik P. Blasch, and Ivan Kadar. SPIE, 2021. http://dx.doi.org/10.1117/12.2587525.

6. Govaers, Felix, and Paul Baggenstoss. "On a Detection Method of Adversarial Samples for Deep Neural Networks." In 2021 IEEE 24th International Conference on Information Fusion (FUSION). IEEE, 2021. http://dx.doi.org/10.23919/fusion49465.2021.9627060.

7. Caballero, William, Mark A. Friend, and Erik Blasch. "Adversarial machine learning and adversarial risk analysis in multi-source command and control." In Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, edited by Lynne L. Grewe, Erik P. Blasch, and Ivan Kadar. SPIE, 2021. http://dx.doi.org/10.1117/12.2589027.

8. Ranjit Ganta, Srivatsava, and Raj Acharya. "On breaching enterprise data privacy through adversarial information fusion." In 2008 IEEE 24th International Conference on Data Engineering Workshop (ICDE Workshop 2008). IEEE, 2008. http://dx.doi.org/10.1109/icdew.2008.4498326.

9. Arnold, G., A. M. Fullenkamp, A. Bornstein, F. Morelli, T. Brown, P. Iyer, J. Lavery, et al. "Research directions in remote detection of covert tactical adversarial intent of individuals in asymmetric operations." In 2010 13th International Conference on Information Fusion (FUSION 2010). IEEE, 2010. http://dx.doi.org/10.1109/icif.2010.5711892.

10. Zhang, Chongyang, Yu Qi, and Hiroyuki Kameda. "Multi-scale perturbation fusion adversarial attack on MTCNN face detection system." In 2022 4th International Conference on Communications, Information System and Computer Engineering (CISCE). IEEE, 2022. http://dx.doi.org/10.1109/cisce55963.2022.9851024.
