Academic literature on the topic 'Generative AI Risks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Generative AI Risks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Generative AI Risks"

1. El-Hadi, Mohamed. "Generative AI Poses Security Risks." مجلة الجمعية المصرية لنظم المعلومات وتکنولوجيا الحاسبات 34, no. 34 (January 1, 2024): 72–73. http://dx.doi.org/10.21608/jstc.2024.338476.

2. Hu, Fangfei, Shiyuan Liu, Xinrui Cheng, Pengyou Guo, and Mengqi Yu. "Risks of Generative Artificial Intelligence and Multi-Tool Governance." Academic Journal of Management and Social Sciences 9, no. 2 (November 26, 2024): 88–93. http://dx.doi.org/10.54097/stvem930.

Abstract:
While generative AI, represented by ChatGPT, brings a technological revolution and convenience to daily life, it may also raise a series of social and legal risks, chiefly violations of personal privacy and data security, infringement of intellectual property rights, generation of misleading and false content, and exacerbation of discrimination and prejudice. The traditional governance paradigm, oriented towards conventional AI, may not be adequately adapted to generative AI, which is built on large models and has general-purpose potential. To encourage the innovative development of generative AI technology while regulating its risks, this paper explores the construction of a generative AI governance paradigm that combines legal regulation, technological regulation, and the ethical governance of science and technology, promoting the healthy development of generative AI along a track of safety, order, fairness, and co-governance.
3. Moon, Su-Ji. "Effects of Perception of Potential Risk in Generative AI on Attitudes and Intention to Use." International Journal on Advanced Science, Engineering and Information Technology 14, no. 5 (October 30, 2024): 1748–55. http://dx.doi.org/10.18517/ijaseit.14.5.20445.

Abstract:
Generative artificial intelligence (AI) is rapidly advancing, offering numerous benefits to society while presenting unforeseen potential risks. This study aims to identify these potential risks through a comprehensive literature review and to investigate how users' perceptions of risk factors influence their attitudes towards and intentions to use generative AI technologies. Specifically, we examined the impact of four key risk factors: fake news generation, trust, bias, and privacy concerns. Our analysis of data collected from experienced generative AI users yielded several significant findings. First, users' perceptions of fake news generation by generative AI had a significant negative impact on their attitudes towards these technologies. Second, user trust in generative AI positively influenced both attitudes towards and intentions to use these technologies. Third, users' awareness of potential biases in generative AI systems negatively affected their attitudes towards these technologies. Fourth, while users' privacy concerns regarding generative AI did not significantly impact their usage intentions directly, these concerns negatively influenced their overall attitudes towards the technology. Fifth, users' attitudes towards generative AI positively influenced their intentions to use these technologies. Based on these results, increasing the intention to use generative AI requires legal, institutional, and technical countermeasures against fake news generation, trust issues, bias, and privacy concerns, together with literacy education that reduces users' negative perceptions and promotes desirable, efficient use of generative AI.
4. Campbell, Mark, and Mlađan Jovanović. "Disinfecting AI: Mitigating Generative AI’s Top Risks." Computer 57, no. 5 (May 2024): 111–16. http://dx.doi.org/10.1109/mc.2024.3374433.

5. Siyed, Zahm. "Generative AI Increases Cybersecurity Risks for Seniors." Computer and Information Science 17, no. 2 (September 6, 2024): 39. http://dx.doi.org/10.5539/cis.v17n2p39.

Abstract:
We evaluate how generative AI exacerbates the cyber risks faced by senior citizens. We assess the risk that powerful LLMs can easily be misconfigured to serve a malicious purpose, and that platforms such as HackGPT or WormGPT can enable low-skilled 'script kiddies' to replicate the effectiveness of high-skilled threat actors. We surveyed 85 seniors and found that the combination of loneliness and low cyber literacy places 87% of them at high risk of being hacked. Our survey further revealed that 67% of seniors have already been exposed to potentially exploitable digital intrusions, and only 22% have sufficient awareness of the risks to seek remedial assistance from techno-literate helpers. Our risk analysis suggests that existing attack vectors can be augmented with AI to create highly personalized and believable digital exploits that are extremely difficult for seniors to distinguish from legitimate interactions. Technological advances allow for the replication of familiar voices, live digital reconstruction of faces, personalized targeting, and falsification of records. Once an attack vector is identified, certain generative polymorphic capabilities allow rapid mutation and obfuscation to deliver unique payloads. Both inbound and outbound risks exist. In addition to inbound attempts by individual threat actors, seniors are vulnerable to outbound attacks through poisoned LLMs, such as ThreatGPT or PoisonGPT. Generative AI can maliciously alter databases to provide incorrect information or compromised instructions to gullible seniors seeking outbound digital guidance. By analyzing the extent to which senior citizens are at risk of exploitation through new developments in AI, the paper contributes to the development of effective strategies to safeguard this vulnerable population.
6. Sun, Hanpu. "Risks and Legal Governance of Generative Artificial Intelligence." International Journal of Social Sciences and Public Administration 4, no. 1 (August 23, 2024): 306–14. http://dx.doi.org/10.62051/ijsspa.v4n1.35.

Abstract:
Generative Artificial Intelligence (AI) represents a significant advancement in the field of artificial intelligence, characterized by its ability to autonomously generate original content by learning from existing data. Unlike traditional decision-based AI, which primarily aids in decision-making by analyzing data, generative AI can create new texts, images, music, and more, showcasing its immense potential across various domains. However, this technology also presents substantial risks, including data security threats, privacy violations, algorithmic biases, and the dissemination of false information. Addressing these challenges requires a multi-faceted approach involving technical measures, ethical considerations, and robust legal frameworks. This paper explores the evolution and capabilities of generative AI, outlines the associated risks, and discusses the regulatory and legal mechanisms needed to mitigate these risks. By emphasizing transparency, accountability, and ethical responsibility, we aim to ensure that generative AI contributes positively to society while safeguarding against its potential harms.
7. Mortensen, Søren F. "Generative AI in securities services." Journal of Securities Operations & Custody 16, no. 4 (September 1, 2024): 302. http://dx.doi.org/10.69554/zlcg3875.

Abstract:
Generative AI (GenAI) is a technology that, since the launch of ChatGPT in November 2022, has taken the world by storm. While much of the conversation around GenAI is hype, there are real applications of this technology that can bring real value to businesses. Applying the technology blindly, however, carries risks that can sometimes outweigh the value it brings. This paper discusses the potential applicability of GenAI to post-trade processes and what impact it could have on financial institutions and their ability to meet challenges in the market, such as T+1 settlement. We also discuss the risks of implementing this technology and how they can be mitigated, and how to ensure that objectives are met not only from a business perspective but also from technology and compliance perspectives.
8. Wang, Mingzheng. "Generative AI: A New Challenge for Cybersecurity." Journal of Computer Science and Technology Studies 6, no. 2 (April 7, 2024): 13–18. http://dx.doi.org/10.32996/jcsts.2024.6.2.3.

Abstract:
The rapid development of Generative Artificial Intelligence (GAI) technology has shown tremendous potential in fields such as image, text, and video generation, and it has been widely applied across industries. However, GAI also brings new risks and challenges to cybersecurity. This paper analyzes the current application of GAI technology in the field of cybersecurity and discusses the risks and challenges it brings, including data security risks, challenges to the ethics of science and technology, Artificial Intelligence (AI) fraud, and threats from cyberattacks. On this basis, the paper proposes countermeasures for maintaining cybersecurity and addressing the threats posed by GAI: establishing and improving standards and specifications for AI technology to ensure its security and reliability; developing AI-based cybersecurity defense technologies to enhance defense capabilities; and improving the AI literacy of society as a whole so that the public can understand and use AI technology correctly. Starting from the technological background of GAI, the paper systematically analyzes its impact on cybersecurity and proposes targeted countermeasures and suggestions, giving it both theoretical and practical significance.
9. Gabriel, Sonja. "Generative AI in Writing Workshops: A Path to AI Literacy." International Conference on AI Research 4, no. 1 (December 4, 2024): 126–32. https://doi.org/10.34190/icair.4.1.3022.

Abstract:
The widespread use of generative AI tools, which can support or even take over several parts of the writing process, has sparked many discussions about integrity, AI literacy, and changes to academic writing processes. This paper explores the impact of generative artificial intelligence (AI) tools on the academic writing process, drawing on data from a writing workshop and interviews with students at a university teacher college in Austria. Despite the widespread assumption that generative AI, such as ChatGPT, is widely used by students to support their academic tasks, initial findings suggest a notable gap in participants' experience and understanding of these technologies. This discrepancy highlights the critical need for AI literacy and underscores the importance of familiarity with the potential, challenges, and risks associated with generative AI to ensure its ethical and effective use. Through reflective discussions and feedback from workshop participants, this study provides a differentiated perspective on the role of generative AI in academic writing, illustrating its value in generating ideas and overcoming writer's block, as well as its limitations given the indispensable nature of human involvement in critical writing tasks.
10. Tong, Yifeng. "Research on Criminal Risk Analysis and Governance Mechanism of Generative Artificial Intelligence such as ChatGPT." Studies in Law and Justice 2, no. 2 (June 2023): 85–94. http://dx.doi.org/10.56397/slj.2023.06.13.

Abstract:
The launch of ChatGPT marks a breakthrough in the development of generative artificial intelligence (generative AI), whose core operating mechanism is the collection of and learning from big data. Generative AI is now highly intelligent and highly generalizable, and it has consequently given rise to various criminal risks. Current criminal law norms are inadequate, in both their attribution systems and their offence provisions, for the criminal risks created by generative AI technologies. We should therefore clarify the types of risks and governance challenges of generative AI, establish data as the object of risk governance, and on this basis build a pluralistic liability allocation mechanism and a mixed legal governance framework spanning criminal, civil, and administrative law.

Dissertations / Theses on the topic "Generative AI Risks"

1. Haidar, Ahmad. "Responsible Artificial Intelligence: Designing Frameworks for Ethical, Sustainable, and Risk-Aware Practices." Electronic Thesis or Diss., université Paris-Saclay, 2024. https://www.biblio.univ-evry.fr/theses/2024/interne/2024UPASI008.pdf.

Abstract:
Artificial Intelligence (AI) is rapidly transforming the world, redefining the relationship between technology and society. This thesis investigates the critical need for responsible and sustainable development, governance, and usage of AI and Generative AI (GAI). The study addresses the ethical risks, regulatory gaps, and challenges associated with AI systems while proposing actionable frameworks for fostering Responsible Artificial Intelligence (RAI) and Responsible Digital Innovation (RDI). The thesis begins with a comprehensive review of 27 global AI ethical declarations to identify dominant principles such as transparency, fairness, accountability, and sustainability. Despite their significance, these principles often lack the necessary tools for practical implementation. To address this gap, the second study in the research presents an integrative framework for RAI based on four dimensions: technical, AI for sustainability, legal, and responsible innovation management. The third part of the thesis focuses on RDI through a qualitative study of 18 interviews with managers from diverse sectors. Five key dimensions are identified: strategy, digital-specific challenges, organizational KPIs, end-user impact, and catalysts. These dimensions enable companies to adopt sustainable and responsible innovation practices while overcoming obstacles in implementation. The fourth study analyzes emerging risks from GAI, such as misinformation, disinformation, bias, privacy breaches, environmental concerns, and job displacement. Using a dataset of 858 incidents, this research employs binary logistic regression to examine the societal impact of these risks. The results highlight the urgent need for stronger regulatory frameworks, corporate digital responsibility, and ethical AI governance. Thus, this thesis provides critical contributions to the fields of RDI and RAI by evaluating ethical principles, proposing integrative frameworks, and identifying emerging risks. It emphasizes the importance of aligning AI governance with international standards to ensure that AI technologies serve humanity sustainably and equitably.

Books on the topic "Generative AI Risks"

1. Boyd, Brad, Elie Bursztein, Nicholas Carlini, and Brad Chen. Identifying and Mitigating the Security Risks of Generative AI. Now Publishers, 2024.


Book chapters on the topic "Generative AI Risks"

1. Medina, Dylan. "Risks and Opportunities in Pedagogy and Research." In Generative AI in Writing Education, 69–81. New York: Routledge, 2024. http://dx.doi.org/10.4324/9781003493563-5.

2. Thorat, Sachin R., and Davendranath G. Jha. "Revolutionizing Healthcare: The Transformative Power of Generative AI—Risks and Rewards." In Technologies for Energy, Agriculture, and Healthcare, 57–67. London: CRC Press, 2024. https://doi.org/10.1201/9781003596707-7.

3. Oppenlaender, Jonas. "The Cultivated Practices of Text-to-Image Generation." In Humane Autonomous Technology, 325–49. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-66528-8_14.

Abstract:
Humankind is entering a novel creative era in which anybody can synthesise digital information using generative artificial intelligence (AI). Text-to-image generation, in particular, has become vastly popular, and millions of practitioners produce AI-generated images and AI art online. This chapter first gives an overview of the key developments that enabled a healthy co-creative online ecosystem around text-to-image generation to rapidly emerge, followed by a high-level description of key elements in this ecosystem. A particular focus is placed on prompt engineering, a creative practice that has been embraced by the AI art community. It is then argued that the emerging co-creative ecosystem constitutes an intelligent system on its own—a system that not only supports human creativity but also potentially entraps future generations and limits future development efforts in AI. The chapter discusses the potential risks and dangers of cultivating this co-creative ecosystem, such as the bias inherent in today’s training data, potential quality degradation in future image generation systems as synthetic data becomes commonplace, and the potential long-term effects of text-to-image generation on people’s imagination, ambitions, and development.
4. Donati, Pierpaolo. "Impact of AI/Robotics on Human Relations: Co-evolution Through Hybridisation." In Robotics, AI, and Humanity, 213–27. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-54173-6_18.

Abstract:
This chapter examines how the processes of human enhancement that have been brought about by the digital revolution (including AI and robotics, besides ICTs) have given rise to new social identities and relationships. The central question consists in asking how the Digital Technological Matrix, understood as a cultural code that supports artificial intelligence and related technologies, causes a hybridisation between the human and the non-human, and to what extent such hybridisation promotes or puts human dignity at risk. Hybridisation is defined here as entanglements and interchanges between digital machines, their ways of operating, and human elements in social practices. The issue is not whether AI or robots can assume human-like characteristics, but how they interact with humans and affect their social identities and relationships, thereby generating a new kind of society.
5. Christian, Michael, Henilia Yulita, Eko Retno Indriyarti, Suryo Wibowo, and Sunarno Sunarno. "Determinants of COVID-19 Mobile Advertising Acceptance Among Generation Z in Jabodetabek." In AI and Business, and Innovation Research: Understanding the Potential and Risks of AI for Modern Enterprises, 37–47. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-42085-6_4.

6. Putra, Winardi Triasa, and Ratna Roostika. "Consumer Motivation to Visit a New Coffee Shop: Empirical Study on Generation Y and Z Motivation." In AI and Business, and Innovation Research: Understanding the Potential and Risks of AI for Modern Enterprises, 79–88. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-42085-6_8.

7. Okada, Satoshi, and Takuho Mitsunaga. "An Improved Technique for Generating Effective Noises of Adversarial Camera Stickers." In Lecture Notes in Networks and Systems, 289–300. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-4581-4_21.

Abstract:
Cyber-physical systems (CPS) represent the integration of the physical world with digital technologies and are expected to change our everyday lives significantly. With the rapid development of CPS, the importance of artificial intelligence (AI) has been increasingly recognized. Concurrently, adversarial attacks that cause incorrect predictions in AI models have emerged as a new risk. These attacks are no longer limited to digital data and now extend to the physical environment, and they are therefore considered to pose serious practical threats to CPS. In this paper, we focus on the "adversarial camera stickers attack," a type of physical adversarial attack that directly affixes adversarial noise to a camera lens. Since the adversarial noise perturbs various images captured by the camera, it must be universal. To realize more effective adversarial camera stickers, we propose a new method for generating more universal adversarial noise than previous research. We first reveal that the existing method for generating the noise of adversarial camera stickers does not always lead to the creation of universal perturbations. We then address this drawback by improving the optimization problem. Furthermore, we implement our proposed method, achieving an attack success rate 2.5 times higher than existing methods. Our experiments prove the capability of our proposed method to generate more universal adversarial noises, highlighting its potential effectiveness in enhancing security measures against adversarial attacks in CPS.
8. Aulbach, Linda. "5. Artificial Intelligence, ethics and empathy." In Digital Humanities in the India Rim, 83–98. Cambridge, UK: Open Book Publishers, 2024. http://dx.doi.org/10.11647/obp.0423.05.

Abstract:
The development of Artificial Intelligence (AI) has sparked a huge debate about its impacts on individuals, cultures, societies and the world. Through AI, we now can either support, manipulate or even replace humans at a level we have not seen before. One of the core values of happy and thriving relationships between humans is empathy, and understanding another person’s feelings builds the foundation of human connection. Within the past few years, the field of AI has taken on the challenge of becoming empathic towards humans to create more trust, acceptance and attachment towards its applications. There are now ‘carebots’ with simple empathic chat features, which seem to be ‘nice to have’, but there is also a concerning development in the field of erobotics—the next (empathic) generation of sex robots, made for humans to fall in love with. The increase in emotional capacity within AI brings into focus how good or bad empathy really is. There is a high risk of manipulation of humans on a deep psychological level, yet there is also reason to believe that empathy is necessary to truly reach an ethical ‘gold’ standard. This chapter will examine empathic AI and its ethical issues with a focus on humanity. It will also touch on the question of what happens if AI becomes more human than humans.
9. Sivaprasad, Adarsa, Ehud Reiter, Nava Tintarev, and Nir Oren. "Evaluation of Human-Understandability of Global Model Explanations Using Decision Tree." In Communications in Computer and Information Science, 43–65. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-50396-2_3.

Abstract:
In explainable artificial intelligence (XAI) research, the predominant focus has been on interpreting models for experts and practitioners. Model-agnostic and local explanation approaches are deemed interpretable and sufficient in many applications. However, in domains like healthcare, where end users are patients without AI or domain expertise, there is an urgent need for model explanations that are more comprehensible and instil trust in the model's operations. We hypothesise that generating model explanations that are narrative, patient-specific and global (holistic of the model) would enable better understandability and support decision-making. We test this using a decision tree model to generate both local and global explanations for patients identified as having a high risk of coronary heart disease. These explanations are presented to non-expert users. We find a strong individual preference for a specific type of explanation: the majority of participants prefer global explanations, while a smaller group prefers local explanations. A task-based evaluation of these participants' mental models provides valuable feedback to enhance narrative global explanations. This, in turn, guides the design of health informatics systems that are both trustworthy and actionable.
10. Olson, Eric. "Digital Transformation and AI in Energy Systems: Applications, Challenges, and the Path Forward." In Palgrave Studies in Digital Business & Enabling Technologies, 63–79. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-61749-2_4.

Abstract:
The integration of digital technologies like Machine Learning (ML), Artificial Intelligence (AI), and the Internet of Things is transforming energy systems. This digital transformation aims to enhance efficiency, sustainability, and resilience in power generation, transmission, and consumption. A key focus is developing smart grids that leverage real-time data and intelligent algorithms to optimise operations. However, cybersecurity threats pose a major risk as energy infrastructure becomes more interconnected; the Colonial Pipeline ransomware attack in 2021 demonstrated the vulnerabilities of critical infrastructure to cyberattacks. In response, deep learning and reinforcement learning techniques are being applied to bolster cybersecurity in the energy sector. Deep learning excels at detecting threats by identifying patterns in large datasets, while reinforcement learning can simulate attack scenarios to train adaptive defence strategies. Despite great potential, challenges remain regarding model transparency, ethics, and data availability. Overall, realising the promise of AI in the energy sector requires navigating technical complexities and prioritising explainable, trustworthy systems. If implemented thoughtfully, these technologies can catalyse the transition to smarter, more efficient, resilient, and sustainable energy systems.

Conference papers on the topic "Generative AI Risks"

1. Rozek, Karolina. "GPT MODELS IN HIGHER EDUCATION: CHALLENGES AND OPPORTUNITIES." In 24th SGEM International Multidisciplinary Scientific GeoConference 2024, 729–34. STEF92 Technology, 2024. https://doi.org/10.5593/sgem2024/5.1/s22.881.

Abstract:
This article explores the challenges and opportunities presented by the integration of GPT (Generative Pre-trained Transformer) models in higher education. It examines the implications for teaching methodologies, student engagement, and the potential risks associated with the reliance on AI tools in academic settings. It highlights the benefits of AI in providing personalized, efficient and flexible learning environments, enhancing student engagement, and supporting individualized learning. However, it also addresses significant concerns regarding the potential oversimplification of academic tasks, the decline in students' critical thinking skills, and the challenges educators face in effectively incorporating AI tools without causing distractions. The article emphasizes the need for a balanced approach in integrating AI into academic education, ensuring that technology enhances rather than undermines the learning experience, and aligns with contemporary educational demands. Future research directions are suggested to better understand the direct impacts of AI tools like ChatGPT on student learning and academic integrity.
2. Filho, Raimir Holanda, and Daniel Colares. "A Methodology for Risk Management of Generative AI based Systems." In 2024 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), 1–6. IEEE, 2024. http://dx.doi.org/10.23919/softcom62040.2024.10721790.

3. Munasinghe, Sachini Udeshika, Reza Rafeh, and Sarah Rauchas. "Estimating Value at Risk for Central Counterparties: A Generative AI Approach." In 2024 International Conference on Data Science and Its Applications (ICoDSA), 305–10. IEEE, 2024. http://dx.doi.org/10.1109/icodsa62899.2024.10652178.

4. Fulantelli, Giovanni. "GENERATIVE AI SYSTEMS AND THE RISK OF POLARIZING NARRATIVES ABOUT MIGRATION PHENOMENA." In 17th annual International Conference of Education, Research and Innovation, 8052–60. IATED, 2024. https://doi.org/10.21125/iceri.2024.1965.

5. Buchicchio, Emanuele, Alessio De Angelis, Antonio Moschitta, Francesco Santoni, Lucio San Marco, and Paolo Carbone. "Design, Validation, and Risk Assessment of LLM-Based Generative AI Systems Operating in the Legal Sector." In 2024 IEEE International Symposium on Systems Engineering (ISSE), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/isse63315.2024.10741134.

6. Kabbar, Eltahir, and Bashar Barmada. "Assessment Validity in the Era of Generative AI Tools." In CITRENZ 2023 Conference. Unitec ePress, 2024. http://dx.doi.org/10.34074/proc.240105.

Abstract:
Generative AI tools, a recent disruptive educational technology, are expected to change how education is delivered and administered. This study proposes a risk identification framework to support educators in identifying assessment integrity risks caused by generative AI tools. The framework also suggests possible actions to mitigate these risks. The proposed framework uses four factors (Assessment Type, AI Knowledge, Course Level, and Bloom’s Taxonomy Cognitive Domain Level) to identify the risks associated with an assessment resulting from the usage of generative AI tools. It is critical to have such a framework to ensure the integrity of assessments while the education industry adapts to the generative AI tools era.
7. Bird, Charlotte, Eddie Ungless, and Atoosa Kasirzadeh. "Typology of Risks of Generative Text-to-Image Models." In AIES '23: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3600211.3604722.

8. Beltran, Marco Antonio, Marina Ivette Ruiz Mondragon, and Seung Hun Han. "Comparative Analysis of Generative AI Risks in the Public Sector." In dg.o 2024: 25th Annual International Conference on Digital Government Research. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3657054.3657125.

9. Esposito, Mark, and Terence Tse. "Mitigating the Risks of Generative AI in Government through Algorithmic Governance." In dg.o 2024: 25th Annual International Conference on Digital Government Research. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3657054.3657124.

10. Jakovljević, Nemanja, and Veljko Dmitrović. "AI and Internal Audit, Reporting Transformation." In 43rd International Conference on Organizational Science Development. University of Maribor Press, 2024. http://dx.doi.org/10.18690/um.fov.3.2024.27.

Abstract:
The recent emergence of OpenAI and ChatGPT has brought numerous advantages for the accounting and auditing professions, but also numerous risks, threats and challenges. GPT's ability to understand, predict and generate human-like text has turned the technology into a foundation that redefines and shapes a wide range of activities, including internal auditing. GPT models have rapidly evolved from their initial roles in simple text generation to complex applications. Their ability to understand language and context, generate coherent and relevant text, and learn from vast amounts of data makes them well suited to tasks such as compiling internal audit reports. Such reports summarize key findings and identify risks to be remedied for the audit committee, CEOs and senior management; writing and presenting them takes considerable time, and GPT can help significantly. The paper offers a comprehensive review of AI, internal audit, and the transformation of reporting. The main conclusion points to the growing responsibility of internal auditors as generative artificial intelligence services are widely used to support audit reporting. Internal auditors must be aware of the risks and challenges this new AI-based technology brings, which calls for dedicated training and thematic areas to be incorporated into internal auditor certification curricula.

Reports on the topic "Generative AI Risks"

1. Bengio, Yoshua, Caroline Lequesne, Hugo Loiseau, Jocelyn Maclure, Juliette Powell, Sonja Solomun, and Lyse Langlois. Interdisciplinary Dialogues: The Major Risks of Generative AI. Observatoire international sur les impacts sociétaux de l’intelligence artificielle et du numérique, March 2024. http://dx.doi.org/10.61737/xsgm9843.

Abstract:
In an exciting series of Interdisciplinary Dialogues on the societal impacts of AI, we invite a guest speaker and panellists from the fields of science and engineering, health and humanities and social sciences to discuss the advances, challenges and opportunities raised by AI. The first dialogue in this series began with Yoshua Bengio, who, concerned about developments in generative AI and the major risks they pose for society, initiated the organization of a conference on the subject. The event took place on August 14, 2023 in Montreal, and was aimed at initiating collective, interdisciplinary reflection on the issues and risks posed by recent developments in AI. The conference took the form of a panel, moderated by Juliette Powell, to which seven specialists were invited who cover a variety of disciplines, including: computer science (Yoshua Bengio and Golnoosh Farnadi), law (Caroline Lequesne and Claire Boine), philosophy (Jocelyn Maclure), communication (Sonja Solomun) and political science (Hugo Loiseau). This document is the result of this first interdisciplinary dialogue on the societal impacts of AI. The speakers were invited to respond concisely, in the language of their choice, to questions raised during the event. Immerse yourself in reading these fascinating conversations, presented in a Q&A format that transcends disciplinary boundaries. The aim of these dialogues is to offer a critical and diverse perspective on the impact of AI on our everchanging world.