A selection of scholarly literature on the topic "AI security"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "AI security".

Next to each work in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of a publication as a .pdf file and read its abstract online, where these are available in the metadata.

Journal articles on the topic "AI security"

1

Chen, Hsinchun. "AI and Security Informatics." IEEE Intelligent Systems 25, no. 5 (September 2010): 82–90. http://dx.doi.org/10.1109/mis.2010.116.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Agrawal, Jatin, Samarjeet Singh Kalra, and Himanshu Gidwani. "AI in cyber security." International Journal of Communication and Information Technology 4, no. 1 (January 1, 2023): 46–53. http://dx.doi.org/10.33545/2707661x.2023.v4.i1a.59.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

BS, Guru Prasad, Dr Kiran GM, and Dr Dinesha HA. "AI-Driven cyber security: Security intelligence modelling." International Journal of Multidisciplinary Research and Growth Evaluation 4, no. 6 (2023): 961–65. http://dx.doi.org/10.54660/.ijmrge.2023.4.6.961-965.

Full text of the source
Abstract:
The process of defending computer networks from cyber attacks or unintended, unauthorized access is known as cyber security. Organizations, businesses, and governments need cyber security solutions because cyber criminals pose a threat to everyone. Artificial intelligence promises to be a great solution for this. Security experts are better able to defend vulnerable networks and data from cyber attackers by combining the strengths of artificial intelligence and cyber security. This paper provides an introduction to the use of artificial intelligence in cyber security. AI-driven cyber security refers to the use of artificial intelligence and machine learning technologies to enhance the protection of computer systems and networks from cyber threats such as hacking, malware, phishing, and other forms of cyberattacks. AI-powered security solutions are designed to automate the process of detecting, analyzing, and responding to security incidents in real-time, thereby improving the efficiency and effectiveness of cyber defense. These solutions can analyze large amounts of data, identify patterns and anomalies, and make decisions faster and more accurately than humans alone, enabling organizations to stay ahead of evolving cyber threats.
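The automated, pattern-based detection this abstract describes can be illustrated with a deliberately minimal log-scanning sketch. The two signature patterns below are invented for illustration; a real AI-driven system would combine curated rule sets with learned models:

```python
import re

# Hypothetical example signatures, not a real rule set.
SIGNATURES = [
    re.compile(r"failed login", re.IGNORECASE),
    re.compile(r"union\s+select", re.IGNORECASE),  # crude SQL-injection marker
]

def scan_log(lines):
    """Return (index, line) pairs for lines matching any signature."""
    return [(i, line) for i, line in enumerate(lines)
            if any(sig.search(line) for sig in SIGNATURES)]
```

In practice the value the abstract points to comes from replacing static signatures with models that also flag previously unseen anomalies.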
APA, Harvard, Vancouver, ISO, and other styles
4

Abudalou, Mohammad Ali. "Security DevOps: Enhancing Application Delivery with Speed and Security." International Journal of Computer Science and Mobile Computing 13, no. 5 (May 30, 2024): 100–104. http://dx.doi.org/10.47760/ijcsmc.2024.v13i05.009.

Full text of the source
Abstract:
"Security DevOps: Enhancing Application Delivery with Speed and Security" is a paper that explores the integration of artificial intelligence (AI) technologies into the security DevOps framework. This integration aims to enhance application delivery through AI-driven automation, predictive analytics, and threat intelligence. In today's fast-paced digital landscape, organizations face growing pressure to deliver applications quickly while ensuring robust security capabilities are in place. Traditional approaches to security often delay deployment and hamper agility. By incorporating AI capabilities into DevOps security practices, organizations can strike a balance between speed and security. This paper examines how AI can optimize several aspects of security DevOps, including: Automated threat detection: AI-powered tools can analyze massive quantities of data in real time to proactively detect and respond to security threats, identifying anomalies, predicting potential risks, and taking preemptive action. Intelligent security testing: AI-driven testing tools can perform comprehensive security testing, including vulnerability assessments, penetration testing, and code analysis, leveraging machine learning algorithms to identify security gaps and recommend remediation measures. Predictive risk management: AI algorithms can analyze historical security data and patterns to predict future security risks; this proactive approach lets organizations put preventive measures in place and limit the impact of security incidents. Continuous compliance monitoring: AI-based compliance systems can monitor and enforce regulatory compliance requirements throughout the development and deployment lifecycle, ensuring that applications adhere to industry standards and regulatory guidelines.
By incorporating AI into DevOps security, organizations can accelerate application delivery without compromising security. The synergistic combination of AI-driven automation, predictive analytics, and threat intelligence empowers DevOps teams to respond to emerging threats while maintaining a strong security posture. The paper offers insights into the benefits of integrating AI into security DevOps practices, common use cases, implementation strategies, and best practices, guiding organizations that seek to leverage AI technologies to enhance application delivery with speed and security in a dynamic digital environment.
APA, Harvard, Vancouver, ISO, and other styles
5

Reddy, Haritha Madhava. "Role of AI in Security Compliance." International Journal of Scientific Research in Engineering and Management 08, no. 11 (November 23, 2024): 1–6. http://dx.doi.org/10.55041/ijsrem32650.

Full text of the source
Abstract:
Artificial Intelligence (AI) has emerged as a pivotal tool in enhancing security compliance across various industries. Its ability to analyze vast datasets, detect intricate patterns, and automate complex processes significantly improves risk management and regulatory adherence. AI enables real-time data analysis, promptly identifying potential violations and flagging security threats, thereby strengthening an organization's overall security framework. However, while AI offers transformative advantages, its integration into existing security systems introduces new challenges, such as data privacy concerns, algorithmic bias, and the need for transparent decision-making. This paper explores the dual role of AI in both enhancing compliance efforts and presenting risks that require careful management. By adopting a balanced approach, leveraging AI's capabilities while ensuring robust oversight, organizations can optimize compliance processes, address regulatory challenges, and mitigate associated risks. Achieving this balance is critical to securing long-term success in an increasingly regulated and digitized landscape. Keywords: Artificial Intelligence (AI), security compliance, risk management, regulatory compliance, data privacy, algorithmic bias, real-time data analysis, threat detection, automation, cybersecurity, transparency, decision-making, compliance automation, AI integration, ethical AI deployment, organizational security, regulatory frameworks.
APA, Harvard, Vancouver, ISO, and other styles
6

Gudimetla, Sandeep Reddy, and Niranjan Reddy Kotha. "AI-POWERED THREAT DETECTION IN CLOUD ENVIRONMENTS." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 9, no. 1 (April 8, 2018): 638–42. http://dx.doi.org/10.61841/turcomat.v9i1.14730.

Full text of the source
Abstract:
This study assesses the effectiveness of artificial intelligence (AI) technologies in enhancing threat detection within cloud environments, a critical component given the escalating security challenges in cloud computing. Leveraging various AI methodologies, including machine learning models, deep learning, and anomaly detection techniques, the research aims to improve the accuracy and efficiency of security systems. These AI methods were applied to a series of simulated threat scenarios across diverse cloud platforms to evaluate their capability in real-time threat identification and mitigation. Results demonstrated a significant enhancement in detection rates and a decrease in false positives, indicating that AI can substantially improve the robustness of cloud security systems against sophisticated cyber threats. The study highlights the transformative potential of AI in cloud security, showing not only improvements in threat detection but also in the speed and reliability of responses to security incidents. Furthermore, the findings advocate for the integration of AI technologies into existing cloud security infrastructures to achieve more dynamic and adaptable security solutions. The conclusion points towards the need for ongoing research into advanced AI applications in cloud security, suggesting future directions such as the development of self-learning security systems and the exploration of AI's predictive capabilities in pre-empting security breaches. This research provides a foundation for further exploration and potential real-world application of AI in securing cloud environments against an increasingly complex landscape of cyber threats.
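At its simplest, the anomaly-detection idea evaluated in studies like this one can be sketched as a z-score outlier test over a stream of metric values. This is a deliberately minimal stand-in for the machine learning and deep learning models the paper assesses, with an arbitrary threshold chosen for illustration:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag indices whose value deviates from the mean by more than
    `threshold` population standard deviations."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]
```

Real cloud-security models replace this single-feature statistic with multivariate, learned representations, which is where the reported gains in detection rate and false-positive reduction come from.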
APA, Harvard, Vancouver, ISO, and other styles
7

Poonia, Ramesh Chandra. "Securing the Sustainable Future: Cryptography and Security in AI & IoT." Journal of Discrete Mathematical Sciences and Cryptography 27, no. 4 (2024): i–vii. http://dx.doi.org/10.47974/jdmsc-27-4-foreword.

Full text of the source
Abstract:
The objective of this special issue titled “Securing the Sustainable Future: Cryptography and Security in AI & IoT” is to discuss recent advancements of cryptography and security in AI & IoT for a secure, sustainable future. The issue focuses on the social and economic impact of sustainable computing, with sub-topics covering the applications of AI & IoT. Additionally, the conference has explored security mechanisms in the context of sustainability.
APA, Harvard, Vancouver, ISO, and other styles
8

Sengupta, Abhijeet. "Securing the Autonomous Future: A Comprehensive Analysis of Security Challenges and Mitigation Strategies for AI Agents." International Journal of Scientific Research in Engineering and Management 08, no. 12 (December 24, 2024): 1–2. https://doi.org/10.55041/ijsrem40091.

Full text of the source
Abstract:
The proliferation of Artificial Intelligence (AI) agents, characterized by their autonomy and capacity for independent decision-making, presents both unprecedented opportunities and novel security challenges. This research paper provides a comprehensive analysis of the security landscape surrounding AI agents, examining the unique vulnerabilities stemming from their inherent characteristics and the emerging threat vectors targeting these autonomous systems. We delve into a categorized framework of potential attacks, ranging from data poisoning and adversarial manipulation to physical tampering and exploitation of autonomy. Furthermore, we critically evaluate existing and propose novel mitigation strategies, encompassing secure development practices, robustness training, explainable AI techniques for monitoring, and the crucial role of ethical and regulatory frameworks. This paper contributes to the growing body of knowledge on AI security, offering insights for researchers, developers, and policymakers navigating the complexities of securing the autonomous future. Keywords: Artificial Intelligence, AI Agents, Autonomous Systems, Cybersecurity, Machine Learning Security, Adversarial Attacks, Data Poisoning, Robotics Security, Ethical AI, AI Governance
APA, Harvard, Vancouver, ISO, and other styles
9

Samijonov, Nurmukhammad Y. "AI FOR INFORMATION SECURITY AND CYBERSPACE." American Journal of Applied Science and Technology 3, no. 10 (October 1, 2023): 39–43. http://dx.doi.org/10.37547/ajast/volume03issue10-08.

Full text of the source
Abstract:
Artificial intelligence can expand human capabilities, accelerate data analysis many times over, and help people reach correct and accurate conclusions in decision-making; it can therefore improve data security or, on the contrary, be turned to targeted propaganda and fake news. Given that information can be collected with very little accountability, and that social networks amplify both the speed and the reach of its dissemination in two opposite directions at once, AI can either strengthen cybersecurity or create new types of threats to it.
APA, Harvard, Vancouver, ISO, and other styles
10

Samijonov, Nurmukhammad Y. "EMERGING SECURITY CONCERNS BECAUSE OF AI USAGE." Journal of Social Sciences and Humanities Research Fundamentals 3, no. 11 (November 1, 2023): 43–46. http://dx.doi.org/10.55640/jsshrf-03-11-10.

Full text of the source
Abstract:
Recently, there has been a great deal of hype surrounding AI, bringing urgent concerns along with plenty of myth, as the future of AI is still uncertain. While social media headlines warn that AI is about to outperform humans in the near future, there is a good chance that attacks made possible by the increased application of AI will be very potent, precisely targeted, hard to identify, and likely to exploit vulnerabilities in AI systems. This article analyzes the emerging security issues arising from the way AI is used in practice.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "AI security"

1

Djaidja, Taki Eddine Toufik. "Advancing the Security of 5G and Beyond Vehicular Networks through AI/DL." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCK009.

Full text of the source
Abstract:
The emergence of Fifth Generation (5G) and Vehicle-to-Everything (V2X) networks has ushered in an era of unparalleled connectivity and associated services. These networks facilitate seamless interactions among vehicles, infrastructure, and more, providing a range of services through network slices, each tailored to specific requirements. Future generations are expected to bring further advancements to these networks. However, this remarkable progress also exposes them to a myriad of security threats, many of which current measures struggle to detect and mitigate effectively. This underscores the need for advanced intrusion detection mechanisms to ensure the integrity, confidentiality, and availability of data and services. One area of increasing interest in both academia and industry is Artificial Intelligence (AI), particularly its application to cybersecurity threats. Notably, neural networks (NNs) have demonstrated promise in this context, although AI-based solutions come with inherent challenges. These challenges can be summarized as concerns about effectiveness and efficiency: the former pertains to the need for Intrusion Detection Systems (IDSs) to accurately detect threats, while the latter involves achieving time efficiency and early threat detection. This dissertation represents the culmination of our research on the aforementioned challenges of AI-based IDSs in 5G systems in general and 5G-V2X in particular. We initiated our investigation with a comprehensive review of the existing literature. Throughout this thesis, we explore the utilization of Fuzzy Inference Systems (FISs) and NNs, with a specific emphasis on the latter. We leveraged state-of-the-art NN learning, referred to as Deep Learning (DL), including recurrent neural networks and attention mechanisms.
These techniques are innovatively harnessed to make significant progress in enhancing the effectiveness and efficiency of IDSs. Moreover, our research delves into additional challenges related to data privacy when employing DL-based IDSs, which we address by experimenting with state-of-the-art federated learning (FL) algorithms.
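The federated learning (FL) setup mentioned above can be reduced to its core aggregation step, FedAvg, in which a server averages client model weights in proportion to their local data sizes. This is a generic sketch of that step, not the thesis's actual algorithms:

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg core step).

    client_weights: list of equal-length parameter lists, one per client.
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

The privacy benefit for intrusion detection is that only these parameter vectors travel to the server; the raw network traffic used for local training never leaves each client.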
APA, Harvard, Vancouver, ISO, and other styles
2

Hatoum, Makram. "Digital watermarking for PDF documents and images : security, robustness and AI-based attack." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCD016.

Full text of the source
Abstract:
Technological development has its pros and cons. Nowadays, we can easily share, download, and upload digital content using the Internet. Malicious users, however, can illegally change, duplicate, and distribute any kind of information, such as images and documents. We should therefore protect such content and stop the perpetrators. The goal of this thesis is to protect PDF documents and images using Spread Transform Dither Modulation (STDM) as a digital watermarking technique, while taking into consideration the main requirements of transparency, robustness, and security. The STDM watermarking scheme achieves a good level of transparency and robustness against noise attacks. The key to this scheme is the projection vector that aims to spread the embedded message over a set of cover elements. However, such a key vector can be estimated by unauthorized users using Blind Source Separation (BSS) techniques. In our first contribution, we present our proposed CAR-STDM (Component Analysis Resistant-STDM) watermarking scheme, which guarantees security while preserving transparency and robustness against noise attacks. STDM is also affected by the Fixed Gain Attack (FGA). In the second contribution, we present our proposed N-STDM watermarking scheme, which resists the FGA and enhances robustness against the Additive White Gaussian Noise (AWGN) attack, the JPEG compression attack, and a variety of filtering and geometric attacks. Experiments were conducted on PDF documents and on images in both the spatial and frequency domains. Recently, Deep Learning and Neural Networks have achieved notable improvements, especially in image processing, segmentation, and classification. Models such as the Convolutional Neural Network (CNN) are exploited for modeling image priors for denoising. CNNs achieve adequate denoising performance, which can be harmful to watermarked images.
In the third contribution, we present the effect of a Fully Convolutional Neural Network (FCNN), as a denoising attack, on watermarked images. STDM and Spread Spectrum (SS) are used as watermarking schemes to embed watermarks in images under several scenarios. This evaluation shows that this type of denoising attack preserves image quality while breaking the robustness of all evaluated watermarking schemes.
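The spread-spectrum (SS) embedding used as a baseline above can be shown in miniature: a bit is spread over the cover by adding or subtracting a scaled key, and recovered by correlating the received signal with that key. This sketch ignores host-signal interference for clarity (the test uses a zero cover); schemes like STDM exist precisely to handle that interference:

```python
def ss_embed(cover, key, bit, alpha=0.1):
    """Add +alpha*key for bit 1, -alpha*key for bit 0 (spread spectrum)."""
    s = 1 if bit else -1
    return [c + s * alpha * k for c, k in zip(cover, key)]

def ss_detect(signal, key):
    """Recover the bit from the sign of the correlation with the key."""
    corr = sum(x * k for x, k in zip(signal, key))
    return corr > 0
```

The BSS key-estimation attack discussed in the thesis targets exactly the secret `key` vector here: an attacker who recovers it can read or erase the watermark.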
APA, Harvard, Vancouver, ISO, and other styles
3

Radosavljevic, Bojan, and Axel Kimblad. "Etik och säkerhet när AI möter IoT." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20613.

Full text of the source
Abstract:
In today's society, technological development is moving fast. Artificial intelligence and the Internet of Things are two technologies whose popularity has increased in recent years. In combination, these technologies have proven able to deliver major business benefits, including greater analytical precision, better customer value, and reduced downtime. New technology also presents challenges. As the technologies keep growing, issues arise regarding security and ethics and how these should be managed. The aim of this study is to find out how experts value ethical issues when artificial intelligence is used in combination with Internet of Things devices. We focused on the following research question to reach our goal: How are ethical issues evaluated when artificial intelligence is used in combination with the Internet of Things? Our results show that both researchers and industry value the ethical aspects highly. The study also shows that they considered the technologies to be a potential solution to many societal problems, but that ethics should be a topic of ongoing discussion.
APA, Harvard, Vancouver, ISO, and other styles
4

KRAYANI, ALI. "Learning Self-Awareness Models for Physical Layer Security in Cognitive and AI-enabled Radios." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1074612.

Full text of the source
Abstract:
Cognitive Radio (CR) is a paradigm shift in wireless communications that resolves the spectrum scarcity issue through the ability to self-organize, self-plan, and self-regulate. On the other hand, wireless devices that can learn from their environment can also be taught things by malicious elements of that environment; malicious attacks are therefore a great concern in CR, especially for physical-layer security. This thesis introduces a data-driven Self-Awareness (SA) module in CR that can support the system in establishing secure networks against various attacks from malicious users. Such users can manipulate the radio spectrum to make the CR learn wrong behaviours and take mistaken actions. The SA module consists of several functionalities that allow the radio to learn a hierarchical representation of the environment and grow its long-term memory incrementally. This novel SA module is therefore a way forward towards realizing the original vision of CR (i.e. Mitola's radio) and AI-enabled radios. The thesis starts with a basic SA module implemented in two applications, namely CR-based IoT and CR-based mmWave. The two applications differ in data dimensionality (high and low) and in the PHY-layer level at which the SA module is implemented. Choosing an appropriate learning algorithm for each application is crucial to achieving good performance. To this end, several generative models, such as Generative Adversarial Networks, Variational AutoEncoders, and Dynamic Bayesian Networks, and unsupervised machine learning algorithms, such as Self-Organizing Maps and Growing Neural Gas, are proposed in different configurations, and their performances are analysed. In addition, we studied the integration of CR and UAVs from the physical-layer security perspective. It is shown how knowledge acquired from previous experience within Bayesian Filtering facilitates radio spectrum perception and allows the UAV to detect any jamming attacks immediately.
Moreover, exploiting the generalized errors during abnormal situations permits characterizing and identifying the jammer at multiple levels and learning a dynamic model that embeds its dynamic behaviour. In addition, after estimating the jammer's signal, the radio can act proactively by mitigating the jammer's effects on the received stimuli or by designing efficient resource allocation for anti-jamming using Active Inference. Experimental results show that the novel SA functionalities provide high accuracy in characterizing, detecting, classifying, and predicting the jammer's activities, outperforming conventional detection methods such as energy detectors and advanced classification methods such as Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Stacked Autoencoder (SAE) models. The results also verify that the proposed approach achieves a higher degree of explainability than deep learning techniques and learns an efficient strategy to avoid future attacks with higher convergence speed than conventional Frequency Hopping and Q-learning.
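The conventional energy detector that the thesis uses as a baseline is simple to state: declare a jammer present when the average signal energy in a sensing window exceeds a threshold. A minimal sketch (the threshold value is arbitrary, for illustration; real detectors derive it from the noise floor and a target false-alarm rate):

```python
def energy_detect(samples, threshold):
    """Classical energy detector: average |x|^2 over the window vs threshold."""
    energy = sum(abs(x) ** 2 for x in samples) / len(samples)
    return energy > threshold
```

Its weakness, which motivates the learned SA approach, is that a single scalar energy statistic cannot characterize *which* jammer is active or predict its behaviour.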
APA, Harvard, Vancouver, ISO, and other styles
5

Ranang, Martin Thorsen. "An Artificial Immune System Approach to Preserving Security in Computer Networks." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-255.

Full text of the source
Abstract:

It is believed that many of the mechanisms present in the biological immune system are well suited for adoption to the field of computer intrusion detection, in the form of artificial immune systems. In this report mechanisms in the biological immune system are introduced, their parallels in artificial immune systems are presented, and how they may be applied to intrusion detection in a computer environment is discussed. An artificial immune system is designed, implemented and applied to detect intrusive behavior in real network data in a simulated network environment. The effect of costimulation and clonal proliferation combined with somatic hypermutation to perform affinity maturation of detectors in the artificial immune system is explored through experiments. An exact expression for the probability of a match between two randomly chosen strings using the r-contiguous matching rule is developed. The use of affinity maturation makes it possible to perform anomaly detection by using smaller sets of detectors with a high level of specificity while maintaining a high level of cover and diversity, which increases the number of true positives, while keeping a low level of false negatives.
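The r-contiguous matching rule the report analyzes is easy to state concretely: a detector matches a string when the two agree in at least r consecutive positions. A minimal sketch of that rule:

```python
def r_contiguous_match(detector: str, string: str, r: int) -> bool:
    """True if detector and string agree in >= r contiguous positions."""
    run = 0  # length of the current run of agreeing positions
    for d, s in zip(detector, string):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False
```

In an artificial immune system, a candidate detector that r-contiguously matches "self" data is discarded during negative selection; surviving detectors then flag non-self (intrusive) traffic.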

APA, Harvard, Vancouver, ISO, and other styles
6

TOMA, ANDREA. "PHY-layer Security in Cognitive Radio Networks through Learning Deep Generative Models: an AI-based approach." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1003576.

Full text of the source
Abstract:
Recently, Cognitive Radio (CR) has been conceived as an intelligent radio endowed with cognition, which can be developed by implementing Artificial Intelligence (AI) techniques. Specifically, data-driven Self-Awareness (SA) functionalities, such as detection of spectrum abnormalities, can be effectively implemented, as the proposed research shows. One important application is PHY-layer security, since it is essential to establish secure wireless communications against external jamming attacks. In this framework, signals are non-stationary, and features from such a dynamic spectrum, with multiple high-sampling-rate signals, are extracted through the Stockwell Transform (ST) with dual resolution, which has been proposed and validated in this work as part of spectrum sensing techniques. Afterwards, an analysis of the state of the art on learning dynamic models from observed features covers the theoretical aspects of Machine Learning (ML). In particular, following recent advances in ML, learning deep generative models with several layers of non-linear processing has been selected as the AI method for the proposed spectrum abnormality detection in CR for a brain-inspired, data-driven SA. In the proposed approach, the features extracted from the ST representation of the wideband spectrum are organized in a high-dimensional generalized state vector; a generative model is then learned and employed to detect any deviation from normal situations in the analysed spectrum (abnormal signals or behaviours). Specifically, conditional GAN (C-GAN), auxiliary classifier GAN (AC-GAN), and deep VAE have been considered as deep generative models. A dataset of a dynamic spectrum with multi-OFDM signals has been generated using the National Instruments mmWave Transceiver, which operates at 28 GHz (central carrier frequency) with an 800 MHz frequency range.
Training of the deep generative model is performed on the generalized state vector representing the mmWave spectrum with normality pattern without any malicious activity. Testing is based on new and independent data samples corresponding to abnormality pattern where the moving signal follows a different behaviour which has not been observed during training. An abnormality indicator is measured and used for the binary classification (normality hypothesis otherwise abnormality hypothesis), while the performance of the generative models is evaluated and compared through ROC curves and accuracy metrics.
Стилі APA, Harvard, Vancouver, ISO та ін.
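The detection scheme summarised in this abstract (learn a generative model of the normal spectrum, then measure an abnormality indicator on new samples) can be sketched as follows. A diagonal Gaussian stands in for the deep generative models actually used in the thesis (C-GAN, AC-GAN, deep VAE), and the feature dimension, dataset sizes, and 99th-percentile threshold are illustrative assumptions:

```python
import numpy as np

# Sketch of train-on-normal anomaly detection: a model of the "normality
# pattern" is learned, and an abnormality indicator on new samples drives
# the binary decision (normality vs. abnormality hypothesis).
rng = np.random.default_rng(0)

def fit_normal_model(features):
    """Learn per-feature mean/variance of the normality pattern (training)."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6  # avoid division by zero
    return mu, var

def abnormality_indicator(x, mu, var):
    """Per-sample negative log-likelihood under the learned model."""
    return 0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var), axis=-1)

# Training data: generalized state vectors from jamming-free spectrum.
normal = rng.normal(0.0, 1.0, size=(500, 16))
mu, var = fit_normal_model(normal)

# Decision threshold taken from the training-set indicator distribution.
threshold = np.percentile(abnormality_indicator(normal, mu, var), 99)

# A jammed sample deviates strongly from the normality pattern.
jammed = rng.normal(3.0, 1.0, size=(1, 16))
print(abnormality_indicator(jammed, mu, var)[0], threshold)
```

The same train-on-normal, threshold-at-test structure applies regardless of which generative model supplies the likelihood or reconstruction error.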
7

Musgrave, John. "Cognitive Malice Representation and Identification." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1565348664149804.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Yueqian. "Resource Clogging Attacks in Mobile Crowd-Sensing: AI-based Modeling, Detection and Mitigation." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40082.

Full text of the source
Abstract:
Mobile Crowdsensing (MCS) has emerged as a ubiquitous solution for data collection from the embedded sensors of smart devices, improving sensing capacity and reducing sensing costs over large regions. Due to the ubiquitous nature of MCS, smart devices require cyber protection against adversaries that are becoming smarter, with the objective of clogging resources and spreading misinformation in such a non-dedicated sensing environment. In an MCS setting, one adversary type has the primary goal of keeping participant devices occupied by submitting fake/illegitimate sensing tasks so as to clog participant resources such as the battery, sensing, storage, and computing. With this in mind, this thesis proposes a systematic study of fake task injection in MCS, including the modeling, detection, and mitigation of such resource clogging attacks. We introduce a model of fake task attacks in MCS that aims to clog the server and drain battery energy from mobile devices. We grant mobility to the tasks for more extensive coverage of potential participants and propose two task movement patterns, namely the Zone-free Movement (ZFM) model and the Zone-limited Movement (ZLM) model. Based on the attack model and task movement patterns, we design task features and create structured simulation settings that can be modified to suit different research scenarios and purposes. The development of a secure sensing campaign highly depends on the existence of a realistic adversarial model; with this in mind, we apply the self-organizing feature map (SOFM) to maximize the number of impacted participants and recruits according to the user movement patterns of the considered cities. Our simulation results verify the magnified effect of SOFM-based fake task injection compared with randomly selected attack regions, in terms of more affected recruits and participants and increased energy consumption in the recruited devices due to illegitimate task submission.
For the sake of a secure MCS platform, we introduce Machine Learning (ML) methods into the MCS server to detect and eliminate fake tasks, ensuring that the tasks arriving at the user side are legitimate. In our work, two machine learning algorithms, Random Forest and Gradient Boosting, are adopted to train the system to predict the legitimacy of a task, and Gradient Boosting proves to be the more promising algorithm. We have validated the feasibility of ML in differentiating the legitimacy of tasks in terms of precision, recall, and F1 score. By comparing energy consumption, affected recruits, and impacted candidates with and without ML, we confirm the efficiency of applying ML to mitigate the effect of fake task injection.
APA, Harvard, Vancouver, ISO, and other styles
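The precision/recall/F1 evaluation mentioned in this abstract can be sketched in a few lines; the rule-based detector below is a toy stand-in for the trained Random Forest / Gradient Boosting models, and the task feature and threshold are invented for illustration:

```python
# Toy evaluation of a fake-task detector, mirroring the precision/recall/F1
# comparison in the abstract. A battery-demand rule stands in for the
# trained models; the feature name and threshold are invented.

def predict_fake(task):
    # Hypothetical rule: tasks demanding excessive battery are flagged as fake.
    return task["battery_pct"] > 8.0

def precision_recall_f1(tasks):
    tp = sum(1 for t in tasks if predict_fake(t) and not t["legit"])
    fp = sum(1 for t in tasks if predict_fake(t) and t["legit"])
    fn = sum(1 for t in tasks if not predict_fake(t) and not t["legit"])
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

tasks = [
    {"battery_pct": 2.0, "legit": True},
    {"battery_pct": 3.5, "legit": True},
    {"battery_pct": 9.0, "legit": False},   # fake task, caught
    {"battery_pct": 12.5, "legit": False},  # fake task, caught
    {"battery_pct": 6.0, "legit": False},   # fake task, missed
    {"battery_pct": 9.5, "legit": True},    # legitimate but flagged
]

p, r, f1 = precision_recall_f1(tasks)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")  # precision=0.67 recall=0.67 f1=0.67
```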
9

SYED, MUHAMMAD FARRUKH SHAHID. "Data-Driven Approach based on Deep Learning and Probabilistic Models for PHY-Layer Security in AI-enabled Cognitive Radio IoT." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1048543.

Full text of the source
Abstract:
Cognitive Radio Internet of Things (CR-IoT) has revolutionized almost every field of life and reshaped the technological world. Several tiny devices are seamlessly connected in a CR-IoT network to perform various tasks in many applications. Nevertheless, CR-IoT suffers from malicious attacks that disrupt communication and degrade network performance. Therefore, it has recently been envisaged to introduce higher-level Artificial Intelligence (AI) by incorporating Self-Awareness (SA) capabilities into CR-IoT objects, enabling CR-IoT networks to autonomously establish secure transmission against vicious attacks. In this context, sub-band information from the Orthogonal Frequency Division Multiplexing (OFDM) modulated transmission in the spectrum has been extracted at the radio device receiver terminal, and a generalized state vector (GS) is formed containing low-dimensional in-phase and quadrature components. Accordingly, a probabilistic method based on learning a switching Dynamic Bayesian Network (DBN) from OFDM transmission with no abnormalities has been proposed to statistically model signal behaviours inside the CR-IoT spectrum. A Bayesian filter, the Markov Jump Particle Filter (MJPF), is implemented to perform state estimation and capture malicious attacks. Subsequently, a GS containing a higher number of subcarriers has been investigated. In this connection, a Variational Autoencoder (VAE) is used as a deep learning technique to map high-dimensional radio signals into a low-dimensional latent space z, and a DBN is learned based on a GS containing latent-space data. Afterwards, to perform state estimation and capture abnormalities in the spectrum, an Adapted Markov Jump Particle Filter (A-MJPF) is deployed. The proposed method can capture anomalies that appear due either to jammer attacks on transmission or to cognitive devices in a network experiencing transmission sources that have not been observed previously.
The performance is assessed using receiver operating characteristic (ROC) curves and area under the curve (AUC) metrics.
APA, Harvard, Vancouver, ISO, and other styles
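The ROC/AUC assessment mentioned in this abstract reduces to a simple pairwise computation: the AUC is the probability that an abnormal sample's score exceeds a normal sample's. A minimal sketch with synthetic scores (the thesis derives its scores from the A-MJPF state estimation; the score distributions here are assumptions for illustration):

```python
import numpy as np

# AUC as a pairwise probability: the fraction of (abnormal, normal) score
# pairs in which the abnormal sample scores higher, with ties counted as half.
def roc_auc(scores_normal, scores_abnormal):
    sn = np.asarray(scores_normal, dtype=float)
    sa = np.asarray(scores_abnormal, dtype=float)
    greater = (sa[:, None] > sn[None, :]).sum()
    ties = (sa[:, None] == sn[None, :]).sum()
    return (greater + 0.5 * ties) / (sa.size * sn.size)

rng = np.random.default_rng(1)
normal_scores = rng.normal(0.0, 1.0, 200)    # jamming-free transmissions
abnormal_scores = rng.normal(2.0, 1.0, 200)  # jammer / unseen-source samples

print(round(roc_auc(normal_scores, abnormal_scores), 3))
```

A perfect detector yields AUC 1.0, while identical score distributions yield 0.5, which is why AUC is a convenient single-number summary of the ROC curve.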
10

SIGNORI, ROBERTA. "POLIZIA PENITENZIARIA E SORVEGLIANZA DINAMICA IN CARCERE Le risposte ai cambiamenti organizzativi e l’impatto sul benessere del personale." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/158284.

Full text of the source
Abstract:
The Italian penitentiary system is currently undergoing profound organisational changes which concern, in particular, the operating procedures of prison police staff. The introduction of dynamic surveillance in prison has represented a major organisational change, capable of redefining the spaces, times, and modes of interaction within detention wings. Dynamic surveillance refers to an operating model no longer centred on the static control of the detained person, but rather on knowing and observing them. In the minds of its creators, it represents not only a new way of "doing" surveillance, but also, and above all, "a new way of being, in work and in the organisation" (de Pascalis 2013), which directly calls into play the competences of surveillance professionals. In their daily exercise of authority over the detained population, these professionals therefore face a context in continuous transformation. For these reasons, the implementation of this new operating model in detention wings raises a series of questions, above all regarding the influence it may exert on the daily life of detained individuals and of the staff who work in close contact with them, namely prison officers. This research began precisely from the recognition of the importance of this organisational change and of the influence it can exert on the ways in which prison officers conceive their role and carry out their professional duties within detention wings. More precisely, the research is guided by the aim of understanding how prison officers' perception of their role identity evolves within an institutional framework undergoing profound change.
This work therefore brings to light the identity dimension of prison officers' work within a context defined as "liminal" because it is structured around the coexistence of substantially antithetical institutional aims. It is in fact impossible to understand responses to an organisational change, let alone its impact on staff wellbeing, without considering how officers conceive their own role identity and under what conditions, and through which dynamics, this conception develops. This research thus highlights the conditions that can facilitate the transition to the new operating model and increase the wellbeing of prison police staff in relation to it.
The Italian prison system is affected by deep organisational changes which affect the work of prison officers. The implementation of the so called "dynamic security" within detention wings is likely to redefine the interaction patterns between the staff and offenders. The "dynamic security" is regarded as an innovative surveillance procedure which relies on the observation and the knowledge of the offenders, rather than on their physical control. According to policy makers, the "dynamic security" is not just an innovative way of ensuring security, but it should also represent a "new way of being" of prison officers (de Pascalis 2013). The implementation of this organisational change raises questions regarding its influence on the daily life of offenders and prison guards and their interaction within a changing environment. This research focuses on the influence of the implementation of the "dynamic security" on prison officers' role identity. It aims to shed light on the identity related dimension of the prison work within a context that I defined as "liminal" by virtue of the coexistence of two antithetical institutional objectives, that is to say, rehabilitation and reclusion. Indeed, responses to organizational changes cannot be understood and interpreted without taking into consideration the dynamics and processes of identification in the role of prison officer. This research will highlight the conditions which can facilitate the transition to new work practices and foster prison officer wellbeing, through the analysis of the processes of identification within the changing environment of prison.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "AI security"

1

Huang, Ken, Yang Wang, Ben Goertzel, Yale Li, Sean Wright, and Jyoti Ponnapalli, eds. Generative AI Security. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54252-7.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Thakkar, Hiren Kumar, Mayank Swarnkar, and Robin Singh Bhadoria, eds. Predictive Data Security using AI. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-6290-5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Whitlock, Chris, and Frank Strickland. Winning the National Security AI Competition. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-8814-6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Sehgal, Naresh Kumar, Manoj Saxena, and Dhaval N. Shah. AI on the Edge with Security. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-78272-5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Jaswal, Gaurav, Vivek Kanhangad, and Raghavendra Ramachandra, eds. AI and Deep Learning in Biometric Security. 1st ed. Boca Raton, FL: CRC Press, 2021. http://dx.doi.org/10.1201/9781003003489.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Hewage, Chaminda, Liqaa Nawaf, and Nishtha Kesswani, eds. AI Applications in Cyber Security and Communication Networks. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-3973-8.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Filì, Valeria. Il reddito imponibile ai fini contributivi. Torino: G. Giappichelli, 2010.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Kulkarni, Anand J., Patrick Siarry, Apoorva S. Shastri, and Mangal Singh. AI-Based Metaheuristics for Information Security and Digital Media. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003107767.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Karimipour, Hadis, and Farnaz Derakhshan, eds. AI-Enabled Threat Detection and Security Analysis for Industrial IoT. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76613-9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Raj, Balwinder, Brij B. Gupta, Shingo Yamaguchi, and Sandeep Singh Gill. AI for Big Data-Based Engineering Applications from Security Perspectives. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003230113.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "AI security"

1

Cagle, Anton, and Ahmed Ceifelnasr Ahmed. "Security." In Architecting Enterprise AI Applications, 193–212. Berkeley, CA: Apress, 2024. https://doi.org/10.1007/979-8-8688-0902-6_10.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Ken, Aditi Joshi, Sandy Dun, and Nick Hamilton. "AI Regulations." In Generative AI Security, 61–98. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54252-7_3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Huang, Ken, Ben Goertzel, Daniel Wu, and Anita Xie. "GenAI Model Security." In Generative AI Security, 163–98. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54252-7_6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Huang, Ken, Jerry Huang, and Daniele Catteddu. "GenAI Data Security." In Generative AI Security, 133–62. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54252-7_5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Huang, Ken, Grace Huang, Adam Dawson, and Daniel Wu. "GenAI Application Level Security." In Generative AI Security, 199–237. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54252-7_7.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Ken, Fan Zhang, Yale Li, Sean Wright, Vasan Kidambi, and Vishwas Manral. "Security and Privacy Concerns in ChatGPT." In Beyond AI, 297–328. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45282-6_11.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Frenkel, Edward. "AI Safety." In Artificial Intelligence Safety and Security, 199–205. 1st ed. Boca Raton, FL: Chapman and Hall/CRC, 2018. http://dx.doi.org/10.1201/9781351251389-13.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Ken, Yang Wang, and Xiaochen Zhang. "Foundations of Generative AI." In Generative AI Security, 3–30. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54252-7_1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Huang, Ken, Grace Huang, Yuyan Duan, and Ju Hyun. "Utilizing Prompt Engineering to Operationalize Cybersecurity." In Generative AI Security, 271–303. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54252-7_9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Huang, Ken, John Yeoh, Sean Wright, and Henry Wang. "Build Your Security Program for GenAI." In Generative AI Security, 99–132. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54252-7_4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "AI security"

1

Ünal, Hasan Tolga, Arif Furkan Mendi, Özgür Umut Vurgun, Ömer Özkan, and Mehmet Akif Nacar. "AI – Supported Collective Security System." In 2024 8th International Artificial Intelligence and Data Processing Symposium (IDAP), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/idap64064.2024.10711160.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

van Oers, Alexander M., and Jorik T. Venema. "Anti-AI camouflage." In Artificial Intelligence for Security and Defence Applications II, edited by Henri Bouma, Yitzhak Yitzhaky, Radhakrishna Prabhu, and Hugo J. Kuijf, 32. SPIE, 2024. http://dx.doi.org/10.1117/12.3031144.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Diyora, Vishal, and Nilesh Savani. "Blockchain or AI: Web Applications Security Mitigations." In 2024 First International Conference on Pioneering Developments in Computer Science & Digital Technologies (IC2SDT), 418–23. IEEE, 2024. http://dx.doi.org/10.1109/ic2sdt62152.2024.10696861.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Ambati, Sri Haritha, Norah Ridley, Enrico Branca, and Natalia Stakhanova. "Navigating (in)Security of AI-Generated Code." In 2024 IEEE International Conference on Cyber Security and Resilience (CSR), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/csr61664.2024.10679468.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Bertino, Elisa, Murat Kantarcioglu, Cuneyt Gurcan Akcora, Sagar Samtani, Sudip Mittal, and Maanak Gupta. "AI for Security and Security for AI." In CODASPY '21: Eleventh ACM Conference on Data and Application Security and Privacy. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3422337.3450357.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Song, Dawn. "AI and Security." In ASIA CCS '20: The 15th ACM Asia Conference on Computer and Communications Security. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3320269.3384771.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Sasaki, Ryoichi. "AI and Security - What Changes with Generative AI." In 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security Companion (QRS-C). IEEE, 2023. http://dx.doi.org/10.1109/qrs-c60940.2023.00043.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Washizaki, Hironori, and Nobukazu Yoshioka. "AI Security Continuum: Concept and Challenges." In CAIN 2024: IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3644815.3644983.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Jacobs, Arthur S., Roman Beltiukov, Walter Willinger, Ronaldo A. Ferreira, Arpit Gupta, and Lisandro Z. Granville. "AI/ML for Network Security." In CCS '22: 2022 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3548606.3560609.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Deldari, Shohreh, Mohammad Goudarzi, Aditya Joshi, Arash Shaghaghi, Simon Finn, Flora D. Salim, and Sanjay Jha. "AuditNet: Conversational AI Security Assistant." In MobileHCI '24: 26th International Conference on Mobile Human-Computer Interaction, 1–4. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3640471.3680444.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Organizational reports on the topic "AI security"

1

Lewis, Daniel, and Josh Oxby. Energy security and AI. Parliamentary Office of Science and Technology, December 2024. https://doi.org/10.58248/pn735.

Full text of the source
Abstract:
This POSTnote summarises the current and emerging applications of AI and ML in the energy system, barriers to wider implementation, the challenges likely to be encountered, and policy considerations proposed by sector stakeholders.
APA, Harvard, Vancouver, ISO, and other styles
2

Christie, Lorna. AI in policing and security. Parliamentary Office of Science and Technology, April 2021. http://dx.doi.org/10.58248/hs27.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Gehlhaus, Diana. Staying Ahead: Strengthening Tomorrow's U.S. AI and AI-Enabled Workforce. Center for Security and Emerging Technology, November 2021. http://dx.doi.org/10.51593/20210075.

Full text of the source
Abstract:
This research agenda provides a roadmap for the next phase of CSET’s line of research on the U.S. AI workforce. Our goal is to assist policymakers and other stakeholders in the national security community to create policies that will ensure the United States maintains its competitive advantage in AI talent. We welcome comments, feedback and input on this vision at cset@georgetown.edu.
APA, Harvard, Vancouver, ISO, and other styles
4

Bennet, Karen, Gopi Krishnan Rajbahadur, Arthit Suriyawongkul, and Kate Stewart. Implementing AI Bill of Materials (AI BOM) with SPDX 3.0: A Comprehensive Guide to Creating AI and Dataset Bill of Materials. The Linux Foundation, October 2024. https://doi.org/10.70828/rned4427.

Full text of the source
Abstract:
A Software Bill of Materials (SBOM) is becoming an increasingly important tool in regulatory and technical spaces to introduce more transparency and security into a project's software supply chain. Artificial intelligence (AI) projects face unique challenges beyond the security of their software, and thus require a more expansive approach to a bill of materials. In this report, we introduce the concept of an AI-BOM, expanding on the SBOM to include the documentation of algorithms, data collection methods, frameworks and libraries, licensing information, and standard compliance.
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Kyungmee, and Boulanin Vincent. Artificial Intelligence for Climate Security: Possibilities and Challenges. Stockholm International Peace Research Institute, December 2023. http://dx.doi.org/10.55163/qdse8934.

Full text of the source
Abstract:
Recent advances in artificial intelligence (AI)—largely based on machine learning—offer possibilities for addressing climate-related security risks. AI can, for example, make disaster early-warning systems and long-term climate hazard modelling more efficient, reducing the risk that the impacts of climate change will lead to insecurity and conflict. This SIPRI Policy Report outlines the opportunities that AI presents for managing climate-related security risks. It gives examples of the use of AI in the field and delves into the problems—notably methodological and ethical—associated with the use of AI for climate security. The report concludes with recommendations for policymakers and researchers who are active in the area of climate security or who use AI for sustainability.
APA, Harvard, Vancouver, ISO, and other styles
6

Murdick, Dewey, Daniel Chou, Ryan Fedasiuk, and Emily Weinstein. The Public AI Research Portfolio of China’s Security Forces. Center for Security and Emerging Technology, March 2021. http://dx.doi.org/10.51593/20200057.

Full text of the source
Abstract:
New analytic tools are used in this data brief to explore the public artificial intelligence (AI) research portfolio of China’s security forces. The methods contextualize Chinese-language scholarly papers that claim a direct working affiliation with components of the Ministry of Public Security, People's Armed Police Force, and People’s Liberation Army. The authors review potential uses of computer vision, robotics, natural language processing and general AI research.
APA, Harvard, Vancouver, ISO, and other styles
7

Murdick, Dewey, James Dunham, and Jennifer Melot. AI Definitions Affect Policymaking. Center for Security and Emerging Technology, June 2020. http://dx.doi.org/10.51593/20200004.

Full text of the source
Abstract:
The task of artificial intelligence policymaking is complex and challenging, made all the more difficult by such a rapidly evolving technology. In order to address the security and economic implications of AI, policymakers must be able to viably define, categorize and assess AI research and technology. In this issue brief, CSET puts forward a functional definition of AI, based on three core principles, that significantly outperforms methods developed over the last decade.
APA, Harvard, Vancouver, ISO, and other styles
8

Hoffman, Wyatt. Making AI Work for Cyber Defense: The Accuracy-Robustness Tradeoff. Center for Security and Emerging Technology, December 2021. http://dx.doi.org/10.51593/2021ca007.

Full text of the source
Abstract:
Artificial intelligence will play an increasingly important role in cyber defense, but vulnerabilities in AI systems call into question their reliability in the face of evolving offensive campaigns. Because securing AI systems can require trade-offs based on the types of threats, defenders are often caught in a constant balancing act. This report explores the challenges in AI security and their implications for deploying AI-enabled cyber defenses at scale.
APA, Harvard, Vancouver, ISO, and other styles
9

Weinstein, Emily, and Ngor Luong. U.S. Outbound Investment into Chinese AI Companies. Center for Security and Emerging Technology, February 2023. http://dx.doi.org/10.51593/20210067.

Full text of the source
Abstract:
U.S. policymakers are increasingly concerned about the national security implications of U.S. investments in China, and some are considering a new regime for reviewing outbound investment security. The authors identify the main U.S. investors active in the Chinese artificial intelligence market and the set of AI companies in China that have benefitted from U.S. capital. They also recommend next steps for U.S. policymakers to better address the concerns over capital flowing into the Chinese AI ecosystem.
APA, Harvard, Vancouver, ISO, and other styles
10

Mutis, Santiago. Privately Held AI Companies by Sector. Center for Security and Emerging Technology, October 2020. http://dx.doi.org/10.51593/20200019.

Full text of the source
Abstract:
Understanding AI activity in the private sector is crucial both to grasping its economic and security implications and developing appropriate policy frameworks. This data brief shows particularly robust AI activity in software publishing and manufacturing, along with a high concentration of companies in California, Massachusetts and New York.
APA, Harvard, Vancouver, ISO, and other styles