
Journal articles on the topic 'Generative AI Risks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Generative AI Risks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

El-Hadi, Mohamed. "Generative AI Poses Security Risks." مجلة الجمعية المصرية لنظم المعلومات وتکنولوجيا الحاسبات 34, no. 34 (January 1, 2024): 72–73. http://dx.doi.org/10.21608/jstc.2024.338476.

2

Hu, Fangfei, Shiyuan Liu, Xinrui Cheng, Pengyou Guo, and Mengqi Yu. "Risks of Generative Artificial Intelligence and Multi-Tool Governance." Academic Journal of Management and Social Sciences 9, no. 2 (November 26, 2024): 88–93. http://dx.doi.org/10.54097/stvem930.

Abstract:
While generative AI, represented by ChatGPT, brings a technological revolution and convenience to daily life, it may also raise a series of social and legal risks, mainly including violations of personal privacy and data security, infringement of intellectual property rights, generation of misleading and false content, and exacerbation of discrimination and prejudice. However, the traditional governance paradigm oriented toward conventional AI may not be well adapted to generative AI, which is based on large models and has generalized potential. To encourage the innovative development of generative AI technology while regulating its risks, this paper explores the construction of a generative AI governance paradigm that combines legal regulation, technological regulation, and the ethical governance of science and technology, promoting the healthy development of generative AI along a track of safety, order, fairness, and co-governance.
3

Moon, Su-Ji. "Effects of Perception of Potential Risk in Generative AI on Attitudes and Intention to Use." International Journal on Advanced Science, Engineering and Information Technology 14, no. 5 (October 30, 2024): 1748–55. http://dx.doi.org/10.18517/ijaseit.14.5.20445.

Abstract:
Generative artificial intelligence (AI) is rapidly advancing, offering numerous benefits to society while presenting unforeseen potential risks. This study aims to identify these potential risks through a comprehensive literature review and investigate how users' perceptions of risk factors influence their attitudes and intentions to use generative AI technologies. Specifically, we examined the impact of four key risk factors: fake news generation, trust, bias, and privacy concerns. Our analysis of data collected from experienced generative AI users yielded several significant findings: First, users' perceptions of fake news generation by generative AI were found to have a significant negative impact on their attitudes towards these technologies. Second, user trust in generative AI positively influenced both attitudes toward and intentions to use these technologies. Third, users' awareness of potential biases in generative AI systems was shown to affect their attitudes towards these technologies negatively. Fourth, while users' privacy concerns regarding generative AI did not significantly impact their usage intentions directly, these concerns negatively influenced their overall attitudes toward the technology. Fifth, users' attitudes towards generative AI positively influenced their intentions to use these technologies. Based on these results, to increase the intention to use generative AI, legal, institutional, and technical countermeasures should be prepared for fake news generation, trust issues, bias, and privacy concerns, while users' negative perceptions should be improved through literacy education on generative AI and education that promotes its desirable and efficient use.
4

Campbell, Mark, and Mlađan Jovanović. "Disinfecting AI: Mitigating Generative AI’s Top Risks." Computer 57, no. 5 (May 2024): 111–16. http://dx.doi.org/10.1109/mc.2024.3374433.

5

Siyed, Zahm. "Generative AI Increases Cybersecurity Risks for Seniors." Computer and Information Science 17, no. 2 (September 6, 2024): 39. http://dx.doi.org/10.5539/cis.v17n2p39.

Abstract:
We evaluate how generative AI exacerbates the cyber risks faced by senior citizens. We assess the risk that powerful LLMs can easily be misconfigured to serve a malicious purpose, and that platforms such as HackGPT or WormGPT can enable low-skilled script kiddies to replicate the effectiveness of high-skilled threat actors. We surveyed 85 seniors and found that the combination of loneliness and low cyber literacy places 87% of them at high risk of being hacked. Our survey further revealed that 67% of seniors have already been exposed to potentially exploitable digital intrusions, and only 22% of seniors have sufficient awareness of risks to seek remedial assistance from the techno-literate. Our risk analysis suggests that existing attack vectors can be augmented with AI to create highly personalized and believable digital exploits that are extremely difficult for seniors to distinguish from legitimate interactions. Technological advances allow for the replication of familiar voices, live digital reconstruction of faces, personalized targeting, and falsification of records. Once an attack vector is identified, certain generative polymorphic capabilities allow rapid mutation and obfuscation to deliver unique payloads. Both inbound and outbound risks exist. In addition to inbound attempts by individual threat actors, seniors are vulnerable to outbound attacks through poisoned LLMs, such as ThreatGPT or PoisonGPT. Generative AI can maliciously alter databases to provide incorrect information or compromised instructions to gullible seniors seeking outbound digital guidance. By analyzing the extent to which senior citizens are at risk of exploitation through new developments in AI, the paper contributes to the development of effective strategies to safeguard this vulnerable population.
6

Sun, Hanpu. "Risks and Legal Governance of Generative Artificial Intelligence." International Journal of Social Sciences and Public Administration 4, no. 1 (August 23, 2024): 306–14. http://dx.doi.org/10.62051/ijsspa.v4n1.35.

Abstract:
Generative Artificial Intelligence (AI) represents a significant advancement in the field of artificial intelligence, characterized by its ability to autonomously generate original content by learning from existing data. Unlike traditional decision-based AI, which primarily aids in decision-making by analyzing data, generative AI can create new texts, images, music, and more, showcasing its immense potential across various domains. However, this technology also presents substantial risks, including data security threats, privacy violations, algorithmic biases, and the dissemination of false information. Addressing these challenges requires a multi-faceted approach involving technical measures, ethical considerations, and robust legal frameworks. This paper explores the evolution and capabilities of generative AI, outlines the associated risks, and discusses the regulatory and legal mechanisms needed to mitigate these risks. By emphasizing transparency, accountability, and ethical responsibility, we aim to ensure that generative AI contributes positively to society while safeguarding against its potential harms.
7

Mortensen, Søren F. "Generative AI in securities services." Journal of Securities Operations & Custody 16, no. 4 (September 1, 2024): 302. http://dx.doi.org/10.69554/zlcg3875.

Abstract:
Generative AI (GenAI) is a technology that, since the launch of ChatGPT in November 2022, has taken the world by storm. While a lot of the conversation around GenAI is hype, there are some real applications of this technology that can bring real value to businesses. There are, however, risks in applying this technology blindly that sometimes can outweigh the value it brings. This paper discusses the potential applicability of GenAI to the processes in post-trade and what impact it could have on financial institutions and their ability to meet challenges in the market, such as T+1. We also discuss the risks of implementing this technology and how these can be mitigated, as well as ensuring that all the objectives are met not only from a business perspective, but also technology and compliance.
8

Wang, Mingzheng. "Generative AI: A New Challenge for Cybersecurity." Journal of Computer Science and Technology Studies 6, no. 2 (April 7, 2024): 13–18. http://dx.doi.org/10.32996/jcsts.2024.6.2.3.

Abstract:
The rapid development of Generative Artificial Intelligence (GAI) technology has shown tremendous potential in various fields, such as image generation, text generation, and video generation, and it has been widely applied in various industries. However, GAI also brings new risks and challenges to cybersecurity. This paper analyzes the application status of GAI technology in the field of cybersecurity and discusses the risks and challenges it brings, including data security risks, scientific and technological ethics and moral challenges, Artificial Intelligence (AI) fraud, and threats from cyberattacks. On this basis, this paper proposes some countermeasures to maintain cybersecurity and address the threats posed by GAI, including: establishing and improving standards and specifications for AI technology to ensure its security and reliability; developing AI-based cybersecurity defense technologies to enhance cybersecurity defense capabilities; improving the AI literacy of the whole society to help the public understand and use AI technology correctly. From the perspective of GAI technology background, this paper systematically analyzes its impact on cybersecurity and proposes some targeted countermeasures and suggestions, possessing certain theoretical and practical significance.
9

Gabriel, Sonja. "Generative AI in Writing Workshops: A Path to AI Literacy." International Conference on AI Research 4, no. 1 (December 4, 2024): 126–32. https://doi.org/10.34190/icair.4.1.3022.

Abstract:
The widespread use of generative AI tools, which can support or even take over several parts of the writing process, has sparked many discussions about integrity, AI literacy, and changes to academic writing processes. This paper explores the impact of generative artificial intelligence (AI) tools on the academic writing process, drawing on data from a writing workshop and interviews with students at a university teacher college in Austria. Despite the widespread assumption that generative AI, such as ChatGPT, is widely used by students to support their academic tasks, initial findings suggest a notable gap in participants' experience and understanding of these technologies. This discrepancy highlights the critical need for AI literacy and underscores the importance of familiarity with the potential, challenges, and risks associated with generative AI to ensure its ethical and effective use. Through reflective discussions and feedback from workshop participants, this study offers a differentiated perspective on the role of generative AI in academic writing, illustrating its value in generating ideas and overcoming writer's block, as well as its limitations given the indispensable nature of human involvement in critical writing tasks.
10

Tong, Yifeng. "Research on Criminal Risk Analysis and Governance Mechanism of Generative Artificial Intelligence such as ChatGPT." Studies in Law and Justice 2, no. 2 (June 2023): 85–94. http://dx.doi.org/10.56397/slj.2023.06.13.

Abstract:
The launch of ChatGPT marks a breakthrough in the development of generative artificial intelligence (generative AI) technology, whose core operating mechanism is the collection of and learning from big data. Generative AI now has the characteristics of high intelligence and high generalization and has thus given rise to various criminal risks. The current criminal law norms are inadequate in their attribution system and crime norms for the criminal risks caused by generative AI technologies. We should therefore clarify the types of risks and governance challenges of generative AI, establish data as the object of risk governance, and on this basis build a pluralistic liability allocation mechanism and a mixed legal governance framework spanning criminal, civil, and administrative law.
11

Andrei, Andreia Gabriela, Mara Mațcu-Zaharia, and Dragoș Florentin Mariciuc. "Ready to Grip AI's Potential? Insights from an Exploratory Study on Perceptions of Human-AI Collaboration." BRAIN. Broad Research in Artificial Intelligence and Neuroscience 15, no. 2 (July 5, 2024): 01–22. http://dx.doi.org/10.18662/brain/15.2/560.

Abstract:
One of the emerging technologies arising with Industry 4.0 is generative artificial intelligence (AI). Despite its disruptive nature and controversies, the effective and ethical use of AI is increasingly preoccupying organizations of all sizes as well as their employees. Focusing on generative AI, this paper presents findings from a qualitative study that provides insights into how Generation Z, the newest workforce, perceives human-AI collaboration. Based on in-depth interviews and a micro-meso-macro approach, the study reveals a dual perspective. Participants recognized the advantages AI brings, such as increased efficiency, productivity, and information availability. However, they were concerned about various risks such as technology addiction, job loss, data privacy, and ethical issues. At the micro level, generative AI was seen as beneficial for providing information and inspiration, but over-reliance could limit people's skills and create dependency. At the meso, organizational level, it could increase efficiency and productivity, but potentially replace jobs. At the macro, societal level, generative AI could support innovation but risks dehumanizing communication and relationships. Data privacy and ethics concerns were expressed at all three levels, indicating that a combination of institutional safeguards and awareness of data privacy and ethics at all levels is required to achieve the full benefits of generative AI. This would help organisations to capitalise on technological advances and support the development of ethical use of AI tools.
12

柴, 懿庭. "Generative AI Data Collection Risks and Regulatory Paths." Advances in Social Sciences 13, no. 08 (2024): 763–69. http://dx.doi.org/10.12677/ass.2024.138761.

13

Liu, Xiaozhong, Yu‐Ru Lin, Zhuoren Jiang, and Qunfang Wu. "Social Risks in the Era of Generative AI." Proceedings of the Association for Information Science and Technology 61, no. 1 (October 2024): 790–94. http://dx.doi.org/10.1002/pra2.1103.

Abstract:
Generative AI (GAI) technologies have demonstrated human‐level performance on a vast spectrum of tasks. However, recent studies have also delved into the potential threats and vulnerabilities posed by GAI, particularly as they become increasingly prevalent in sensitive domains such as elections and education. Their use in politics raises concerns about manipulation and misinformation. Further exploration is imperative to comprehend the social risks associated with GAI across diverse societal contexts. In this panel, we aim to dissect the impact and risks posed by GAI on our social fabric, examining both technological and societal perspectives. Additionally, we will present our latest investigations, including the manipulation of ideologies using large language models (LLMs), the potential risk of AI self‐consciousness, the application of Explainable AI (XAI) to identify patterns of misinformation and mitigate their dissemination, as well as the influence of GAI on the quality of public discourse. These insights will serve as catalysts for stimulating discussions among the audience on this crucial subject matter, and contribute to fostering a deeper understanding of the importance of responsible development and deployment of GAI technologies.
14

Feretzakis, Georgios, Konstantinos Papaspyridis, Aris Gkoulalas-Divanis, and Vassilios S. Verykios. "Privacy-Preserving Techniques in Generative AI and Large Language Models: A Narrative Review." Information 15, no. 11 (November 4, 2024): 697. http://dx.doi.org/10.3390/info15110697.

Abstract:
Generative AI, including large language models (LLMs), has transformed the paradigm of data generation and creative content, but this progress raises critical privacy concerns, especially when models are trained on sensitive data. This review provides a comprehensive overview of privacy-preserving techniques aimed at safeguarding data privacy in generative AI, such as differential privacy (DP), federated learning (FL), homomorphic encryption (HE), and secure multi-party computation (SMPC). These techniques mitigate risks like model inversion, data leakage, and membership inference attacks, which are particularly relevant to LLMs. Additionally, the review explores emerging solutions, including privacy-enhancing technologies and post-quantum cryptography, as future directions for enhancing privacy in generative AI systems. Recognizing that achieving absolute privacy is mathematically impossible, the review emphasizes the necessity of aligning technical safeguards with legal and regulatory frameworks to ensure compliance with data protection laws. By discussing the ethical and legal implications of privacy risks in generative AI, the review underscores the need for a balanced approach that considers performance, scalability, and privacy preservation. The findings highlight the need for ongoing research and innovation to develop privacy-preserving techniques that keep pace with the scaling of generative AI, especially in large language models, while adhering to regulatory and ethical standards.
15

Kwon, Jungin. "Enhancing ethical awareness through generative AI literacy: A study on user engagement and competence." Edelweiss Applied Science and Technology 8, no. 6 (November 7, 2024): 4136–45. http://dx.doi.org/10.55214/25768484.v8i6.2904.

Abstract:
Generative artificial intelligence (AI) enables users to quickly and easily create desired outputs. However, it poses potential risks of social disruption due to biases in the data used to generate such outputs. This study examines whether providing literacy-based guidance on the risks and cautionary aspects of generative AI at the point of user acceptance of its outputs can enhance the user's ethical competence. For the study, participants were divided into an experimental group and a control group. The experimental group received warnings about the risks and considerations of accepting AI-generated outputs before using the AI, while the control group received no such guidance. The results revealed that the experimental group, which was informed of the risks and cautionary aspects, showed significant improvements in ethical awareness across various dimensions, including values, self-efficacy, self-regulation and engagement, and ethics and security. Based on these findings, the study suggests that systematic education on the risks and cautionary aspects of generative AI, alongside the technology’s dissemination, can play a crucial role in addressing social and ethical challenges.
16

Zhou, Tong, and Mohamad Rizal Abd Rahman. "Legal Perspective on the Risk of Copyright Infringement by AI-Generated Contents in China." JURNAL UNDANG-UNDANG DAN MASYARAKAT 34, no. 2 (November 14, 2024): 141–53. http://dx.doi.org/10.17576/juum-2024-3402-10.

Abstract:
The swift progress of Artificial Intelligence (AI) technology is driving advancements in the internet and cultural sectors, but it also poses challenges to traditional laws, especially copyright law. This article examines the legal complexities associated with AI-generated products and copyright infringement risks in the Chinese internet context. Drawing on the Copyright Law of the People's Republic of China and relevant judicial dispute cases in China, this study explores generative artificial intelligence models that generate creative content such as text, images, and music, and introduces their basic technical mechanisms. Focusing on how AI uses existing copyrighted material to generate new works, the article discusses the attribution of authorship of AI-generated works and its impact on traditional copyright principles. Generative AI risks copyright infringement through the use of machine learning and deep learning technologies to capture data on the Internet for wide-ranging creative training. Although China does not have clear legal provisions on copyright ownership and authorship of generative AI, the legal definition of 'work' adds a new protective interpretation for generative AI. The study suggests that the loopholes in copyright infringement risks can be closed by improving and reforming China's copyright law, or through judicial interpretation of AI, to promote the development of AI technology and keep the law up to date. A comprehensive legal framework would not only protect the rights of copyright owners and encourage technological innovation, but also improve China's competitiveness in the global digital economy.
17

Nahla, Febri, and Anis Masruri. "Analysis of the Impact of Artificial Intelligence on Information-Seeking Behavior." JPUA: Jurnal Perpustakaan Universitas Airlangga: Media Informasi dan Komunikasi Kepustakawanan 14, no. 2 (December 2, 2024): 69–75. https://doi.org/10.20473/jpua.v14i2.2024.69-75.

Abstract:
Background of the study: The need for information in the era of globalization is rapidly growing, causing users to strategize to obtain information effectively. One of the strategies used in information retrieval is generative AI applications. Purpose: This study aims to discuss generative AI applications used in information retrieval and the impact of generative AI usage on information-seeking behavior. Methods: Library research is used to collect data from various relevant literature sources. Finding: The use of generative AI in information retrieval provides positive impacts, such as speeding up the search process and presenting concise and easily understood information. However, the use of generative AI also has negative effects, such as inaccurate data and potential dependence on technology. Conclusion: Although generative AI provides convenience in information retrieval, there are negative risks, such as data inaccuracy and overdependence on technology. Therefore, regulations and oversight are needed to prevent these negative impacts.
18

Lasker, Archit. "EXPLORING ETHICAL CONSIDERATIONS IN GENERATIVE AI." International Journal of Advanced Research 12, no. 04 (April 30, 2024): 531–35. http://dx.doi.org/10.21474/ijar01/18578.

Abstract:
Generative AI, which encompasses a range of technologies such as generative adversarial networks (GANs), language models, and image generators, has shown remarkable progress in recent years. These technologies have the potential to revolutionize various fields, from art and entertainment to healthcare and education. However, along with these advancements come ethical considerations that must be carefully addressed. This research paper examines the ethical challenges posed by generative AI, including issues related to bias, privacy, misinformation, and intellectual property. It also discusses strategies for mitigating these risks and fostering the responsible development and deployment of generative AI technologies.
19

Xu, Honghui, Yingshu Li, Olusesi Balogun, Shaoen Wu, Yue Wang, and Zhipeng Cai. "Security Risks Concerns of Generative AI in the IoT." IEEE Internet of Things Magazine 7, no. 3 (May 2024): 62–67. http://dx.doi.org/10.1109/iotm.001.2400004.

20

Barrett, Clark, Brad Boyd, Elie Bursztein, Nicholas Carlini, Brad Chen, Jihye Choi, Amrita Roy Chowdhury, et al. "Identifying and Mitigating the Security Risks of Generative AI." Foundations and Trends® in Privacy and Security 6, no. 1 (2023): 1–52. http://dx.doi.org/10.1561/3300000041.

21

Noviandy, Teuku Rizky, Aga Maulana, Ghazi Mauer Idroes, Zahriah Zahriah, Maria Paristiowati, Talha Bin Emran, Mukhlisuddin Ilyas, and Rinaldi Idroes. "Embrace, Don’t Avoid: Reimagining Higher Education with Generative Artificial Intelligence." Journal of Educational Management and Learning 2, no. 2 (November 28, 2024): 81–90. https://doi.org/10.60084/jeml.v2i2.233.

Abstract:
This paper explores the potential of generative artificial intelligence (AI) to transform higher education. Generative AI is a technology that can create new content, like text, images, and code, by learning patterns from existing data. As generative AI tools become more popular, there is growing interest in how AI can improve teaching, learning, and research. Higher education faces many challenges, such as meeting diverse learning needs and preparing students for fast-changing careers. Generative AI offers solutions by personalizing learning experiences, making education more engaging, and supporting skill development through adaptive content. It can also help researchers by automating tasks like data analysis and hypothesis generation, making research faster and more efficient. Moreover, generative AI can streamline administrative tasks, improving efficiency across institutions. However, using AI also raises concerns about privacy, bias, academic integrity, and equal access. To address these issues, institutions must establish clear ethical guidelines, ensure data security, and promote fairness in AI use. Training for faculty and AI literacy for students are essential to maximize benefits while minimizing risks. The paper suggests a strategic framework for integrating AI in higher education, focusing on infrastructure, ethical practices, and continuous learning. By adopting AI responsibly, higher education can become more inclusive, engaging, and practical, preparing students for the demands of a technology-driven world.
22

Montalvo, Fernando, Kathren Pavlov, Phuoc Thai, Bijita Devkota, and Brandon Takahashi. "Machine Learning and Human Expertise in the Balance: Generative AI in Healthcare Settings." Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 13, no. 1 (June 2024): 211–15. http://dx.doi.org/10.1177/2327857924131010.

Abstract:
Generative artificial intelligence chatbots are undergoing a rapid surge in popularity and adoption across various industries. Healthcare is one of the many industries undergoing shifts in knowledge acquisition and decision-making as medical professionals learn to interact with the technology and implement it in various aspects of their everyday practice. As a result, healthcare professionals and institutions need to understand the risks involved when generative AI is introduced into information-seeking and decision-making processes, as well as how individual differences among users shift both the type of risk and its likelihood of occurring. The present research presents a framework to guide understanding of risks and opportunities at the intersection of medical expertise, AI use-expertise, and generative AI-dependence. The framework will help researchers and healthcare professionals understand human-AI risk, drive the technology toward practical implementations, and design generative AI systems that are transparent and flexible.
23

Chen, Yan, and Pouyan Esmaeilzadeh. "Generative AI in Medical Practice: In-Depth Exploration of Privacy and Security Challenges." Journal of Medical Internet Research 26 (March 8, 2024): e53008. http://dx.doi.org/10.2196/53008.

Abstract:
As advances in artificial intelligence (AI) continue to transform and revolutionize the field of medicine, understanding the potential uses of generative AI in health care becomes increasingly important. Generative AI, including models such as generative adversarial networks and large language models, shows promise in transforming medical diagnostics, research, treatment planning, and patient care. However, these data-intensive systems pose new threats to protected health information. This Viewpoint paper aims to explore various categories of generative AI in health care, including medical diagnostics, drug discovery, virtual health assistants, medical research, and clinical decision support, while identifying security and privacy threats within each phase of the life cycle of such systems (ie, data collection, model development, and implementation phases). The objectives of this study were to analyze the current state of generative AI in health care, identify opportunities and privacy and security challenges posed by integrating these technologies into existing health care infrastructure, and propose strategies for mitigating security and privacy risks. This study highlights the importance of addressing the security and privacy threats associated with generative AI in health care to ensure the safe and effective use of these systems. The findings of this study can inform the development of future generative AI systems in health care and help health care organizations better understand the potential benefits and risks associated with these systems. By examining the use cases and benefits of generative AI across diverse domains within health care, this paper contributes to theoretical discussions surrounding AI ethics, security vulnerabilities, and data privacy regulations. In addition, this study provides practical insights for stakeholders looking to adopt generative AI solutions within their organizations.
APA, Harvard, Vancouver, ISO, and other styles
24

S, Renjith. "Enhancing Learning Outcomes with Generative AI: A Study on Attention Span of College Students." Commerce & Business Researcher 15, no. 2 (March 1, 2024): 59–68. http://dx.doi.org/10.59640/cbr.v15i2.59-68.

Full text
Abstract:
This study investigates the transformative potential of Generative Artificial Intelligence (AI) in enhancing learning experiences and addressing the issue of low attention span among college students. The research focuses on four key components of AI-generated pedagogy: reducing learning challenges, enhancing learning experiences, ethical considerations, and balancing benefits and risks. Utilizing a quantitative survey-based approach, data were collected from 57 college students in the Kottayam District. The findings reveal that students perceive Generative AI as having a positive impact on learning experiences, effectively reducing learning challenges, enhancing the effectiveness of pedagogical strategies, addressing ethical concerns, and balancing benefits and risks. Notably, a strong positive correlation exists between these Generative AI-generated pedagogical components and low attention span, suggesting that Generative AI can effectively address this issue among college students and open new avenues for leveraging technology to improve educational outcomes. The present research contributes to the broader discourse on the effectiveness of technology-enhanced learning, particularly in improving engagement and attention span in educational settings. It underscores the need for further exploration of the role of Generative AI in education, emphasizing the balance of technological, ethical, and pedagogical considerations to maximize AI's benefits in learning environments.
APA, Harvard, Vancouver, ISO, and other styles
25

Rauh, Maribeth, Nahema Marchal, Arianna Manzini, Lisa Anne Hendricks, Ramona Comanescu, Canfer Akbulut, Tom Stepleton, et al. "Gaps in the Safety Evaluation of Generative AI." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 1200–1217. http://dx.doi.org/10.1609/aies.v7i1.31717.

Full text
Abstract:
Generative AI systems produce a range of ethical and social risks. Evaluation of these risks is a critical step on the path to ensuring the safety of these systems. However, evaluation requires the availability of validated and established measurement approaches and tools. In this paper, we provide an empirical review of the methods and tools that are available for evaluating known safety risks of generative AI systems to date. To this end, we review more than 200 safety-related evaluations that have been applied to generative AI systems. We categorise each evaluation along multiple axes to create a detailed snapshot of the safety evaluation landscape to date. We release this data for researchers and AI safety practitioners (https://bitly.ws/3hUzu). Analysing the current safety evaluation landscape reveals three systemic "evaluation gaps". First, a "modality gap" emerges as few safety evaluations exist for non-text modalities. Second, a "risk coverage gap" arises as evaluations for several ethical and social risks are simply lacking. Third, a "context gap" arises as most safety evaluations are model-centric and fail to take into account the broader context in which AI systems operate. Devising next steps for safety practitioners based on these findings, we present tactical "low-hanging fruit" steps towards closing the identified evaluation gaps, and note their limitations. We close by discussing the role and limitations of safety evaluation in ensuring the safety of generative AI systems.
APA, Harvard, Vancouver, ISO, and other styles
26

Blease, Charlotte, and John Torous. "ChatGPT and mental healthcare: balancing benefits with risks of harms." BMJ Mental Health 26, no. 1 (November 2023): e300884. http://dx.doi.org/10.1136/bmjment-2023-300884.

Full text
Abstract:
Against the global need for increased access to mental services, health organisations are looking to technological advances to improve the delivery of care and lower costs. Since November 2022, with the public launch of OpenAI’s ChatGPT, the field of generative artificial intelligence (AI) has received expanding attention. Although generative AI itself is not new, technical advances and the increased accessibility of large language models (LLMs) (eg, OpenAI’s GPT-4 and Google’s Bard) suggest use of these tools could be clinically significant. LLMs are an application of generative AI technology that can summarise and generate content based on training on vast data sets. Unlike search engines, which provide internet links in response to typed entries, chatbots that rely on generative language models can simulate dialogue that resembles human conversations. We examine the potential promise and the risks of using LLMs in mental healthcare today, focusing on their scope to impact mental healthcare, including global equity in the delivery of care. Although we caution that LLMs should not be used to disintermediate mental health clinicians, we signal how—if carefully implemented—in the long term these tools could reap benefits for patients and health professionals.
APA, Harvard, Vancouver, ISO, and other styles
27

Litan, Avivah. "Mitigate Sensitive Data Risks With ChatGPT." ITNOW 65, no. 4 (November 23, 2023): 30. http://dx.doi.org/10.1093/itnow/bwad120.

Full text
Abstract:
Abstract As the use of generative AI tools becomes commonplace, Avivah Litan, distinguished VP Analyst at Gartner, looks at the key steps that organisations can take to help protect their sensitive data.
APA, Harvard, Vancouver, ISO, and other styles
28

Vujović, Dušan. "Generative AI: Riding the new general purpose technology storm." Ekonomika preduzeca 72, no. 1-2 (2024): 125–36. http://dx.doi.org/10.5937/ekopre2402125v.

Full text
Abstract:
Generative AI promises to revolutionize many industries (entertainment, marketing, healthcare, finance, and research) by empowering machines to create new data content inspired by existing data. It has experienced exponential growth in recent years. In its 2023 breakout year, Gen AI's impact reached 2.6-4.4 trillion USD (2.5-4.2% of global GDP). The development of modern LLM-based models has been facilitated by improvements in computing power, data availability, and algorithms. These models have diverse applications in text, visual, audio, and code generation across various domains. Leading companies are rapidly deploying Gen AI for strategic decision-making at corporate executive levels. While AI-related risks have been identified, mitigation measures are still in their early stages. Leaders in Gen AI adoption anticipate workforce changes and re-skilling needs. Gen AI is primarily used for text functions, big data analysis, and customer services, with the strongest impact in knowledge-based sectors. High-performing AI companies prioritize revenue generation over cost reduction, rapidly expand the use of Gen AI across various business functions, and link business value to organizational performance and structure. There is a notable lack of attention to addressing broader societal risks and the impact on the labor force. Gen AI creates new job opportunities and improves productivity in key areas. Future investment in AI is expected to rise. Concerns about a potential AI singularity, where machines surpass human intelligence, are subject to debate: some view the singularity as a risk, while others are more optimistic based on human control and societal constraints. Leading experts in Gen AI predict that the coming decade could be the most prosperous in history if we manage to harness the benefits of Gen AI and control its downside.
APA, Harvard, Vancouver, ISO, and other styles
29

Neumann, Peter G. "Risks to the Public." ACM SIGSOFT Software Engineering Notes 48, no. 3 (June 21, 2023): 4–7. http://dx.doi.org/10.1145/3599975.3599976.

Full text
Abstract:
ChatBots and Generative AI more generally have really opened up a deluge of RISKS items in the past few months, along with an increased awareness of risks relating to artificial intelligence. This issue includes just the tip of the iceberg with one-liners on that subject, where it is almost impossible to summarize all the risks. PGN
APA, Harvard, Vancouver, ISO, and other styles
30

Yu, Youchun. "Research on the Competition Law Risks and Regulation Brought by Generative Artificial Intelligence." Highlights in Business, Economics and Management 9 (June 13, 2023): 454–61. http://dx.doi.org/10.54097/hbem.v9i.9093.

Full text
Abstract:
The emergence of ChatGPT signifies that Artificial Intelligence (AI) technologies for generating content will have a significant impact on people's work and lifestyle. The R&D of generative AI such as ChatGPT has become the core domain in the latest wave of technological competition among nations, yet we must be aware that it may result in the disclosure of trade secrets, abuse of dominant market position, and other market competition and monopoly risks. In order to regulate the advancement of generative artificial intelligence, it is imperative to refine the existing legislative framework and establish a novel regulatory architecture that involves the participation of the government, corporations, and end-users, to ensure both the free development of technology and its orderly compliance, and to guard against malicious competition and market monopolies.
APA, Harvard, Vancouver, ISO, and other styles
31

Chauhan, Arun. "Generative artificial intelligence and misinformation warfare." International Journal of Multidisciplinary Research and Growth Evaluation 5, no. 4 (2024): 997–1002. http://dx.doi.org/10.54660/.ijmrge.2024.5.4.997-1002.

Full text
Abstract:
This research paper explores the intersection of generative AI and misinformation warfare, focusing on the sophisticated capabilities of AI technologies like Generative Adversarial Networks (GANs) and their potential misuse in creating deceptive content. Utilizing a mixed-methods approach, the study combines an extensive literature review with empirical data from a survey of 125 respondents, revealing high levels of awareness and concern about AI-generated misinformation. The findings indicate that AI-generated misinformation significantly impacts public opinion and societal trust, highlighting the urgent need for advanced detection mechanisms and comprehensive media literacy programs. The study underscores the importance of ethical guidelines and regulatory measures to manage the risks associated with generative AI. By providing a nuanced understanding of the technological and societal implications of AI-generated misinformation, this research contributes to the broader discourse on cybersecurity and information integrity, offering recommendations for future research and policy development to combat the pervasive threat of AI-enhanced misinformation.
APA, Harvard, Vancouver, ISO, and other styles
32

Shumakova, N. I., J. J. Lloyd, and E. V. Titova. "Towards Legal Regulations of Generative AI in the Creative Industry." Journal of Digital Technologies and Law 1, no. 4 (December 15, 2023): 880–908. http://dx.doi.org/10.21202/jdtl.2023.38.

Full text
Abstract:
Objective: this article aims to answer the following questions: 1. Can generative artificial intelligence be a subject of copyright law? 2. What risks can the unregulated use of generative artificial intelligence systems cause? 3. What legal gaps should be filled to minimize such risks? Methods: comparative legal analysis, sociological method, concrete sociological method, quantitative data analysis, qualitative data analysis, statistical analysis, case study, induction, deduction. Results: the authors identified several risks of the unregulated use of generative artificial intelligence in the creative industry, among which are: violation of copyright and labor law, violation of consumers' rights, and the rise of public distrust in government. They suggest that the prompt development of new legal norms can minimize these risks. In conclusion, the article states that governments have already begun to realize that the negative impact of generative artificial intelligence on the creative industry must not be ignored, hence the development of similar legal regulations in states with completely different regimes. Scientific novelty: the article provides a comprehensive study of the impact of generative artificial intelligence on the creative industry from two perspectives: the perspective of law and the perspective of the industry. Its empirical basis consists of two international surveys and the expert opinion of a representative of the industry. This approach allowed the authors to improve the objectivity of their research and to obtain results that can be used to find a practical solution for the identified risks.
The problem of the ongoing development and popularization of generative artificial intelligence systems goes beyond the question "who is the author?"; it therefore needs to be solved by introducing mechanisms and regulations beyond those that already exist. This point of view is supported not only by the results of the surveys but also by the analysis of current lawsuits against developers of generative artificial intelligence systems. Practical significance: the obtained results can be used to hasten the development of universal legal rules, regulations, instruments, and standards, the current lack of which poses a threat not only to human rights, but also to several sectors within the creative industry and beyond.
APA, Harvard, Vancouver, ISO, and other styles
33

Lyubenko, R. Ya, and O. H. Puhachova. "RISKS OF MAINSTREAM GENERATIVE AI FOR SOCIAL NETWORKING SITES’ RESEARCH." СОЦІАЛЬНІ ТЕХНОЛОГІЇ: АКТУАЛЬНІ ПРОБЛЕМИ ТЕОРІЇ ТА ПРАКТИКИ, no. 100 (2023): 64–74. http://dx.doi.org/10.32782/2707-9147.2023.100.5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

周, 啸. "Research on Copyright Infringement Risks in Generative AI Data Use." Open Journal of Legal Science 12, no. 05 (2024): 3261–67. http://dx.doi.org/10.12677/ojls.2024.125463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Powell, Wardell, and Steven Courchesne. "Opportunities and risks involved in using ChatGPT to create first grade science lesson plans." PLOS ONE 19, no. 6 (June 17, 2024): e0305337. http://dx.doi.org/10.1371/journal.pone.0305337.

Full text
Abstract:
Generative AI can potentially support teachers in lesson planning by making the process of generating an outline more efficient. This qualitative study employed an exploratory case study design to examine a specific lesson design activity involving a series of prompts and responses from ChatGPT. The desired science lesson on heredity was aimed at first grade students. We analyzed the process’s efficiency, finding that within 30 minutes we could generate and substantially refine a lesson plan that accurately aligned with the desired curriculum framework and the 5E model of instruction. However, the iterations of the lesson plan included questionable components, missing details, and a fake resource. We discussed the implications of these findings for faculty looking to train pre-service teachers to appropriately use generative AI in lesson planning.
APA, Harvard, Vancouver, ISO, and other styles
36

Baha, Benson Yusuf, and Omachi Okolo. "Navigating the Ethical Dilemma of Generative AI in Higher Educational Institutions in Nigeria using the TOE Framework." European Journal of Computer Science and Information Technology, 12, no. 8 (August 15, 2024): 18–40. http://dx.doi.org/10.37745/ejcsit.2013/vol12n81840.

Full text
Abstract:
Generative AI tools stand at the threshold of innovation and the erosion of the long-standing values of creativity, critical thinking, authorship, and research in higher education. This research crafted a novel framework from the technology, organization, and environment (TOE) framework to guide higher educational institutions in Nigeria in navigating the ethical dilemma of generative AI. A questionnaire was used to collect data from lecturers, students, and researchers at twelve higher institutions across the six (6) geopolitical zones of Nigeria. Structural equation modeling was used to analyze the data using SPSS Amos version 23. The results revealed that factors such as perceived risks of generative AI, curriculum support, institutional policy, and perceived generative AI trends positively impact the need for a generative AI ethical framework in higher educational institutions in Nigeria. Furthermore, the study contributes to the adoption of theory to navigate the ethical dilemma in the use of generative AI tools in higher educational institutions in Nigeria. It also provides some practical implications that suggest the importance of incorporating ethical discussions into the curriculum as part of institutional policy to create awareness and guidance on the use of generative AI.
APA, Harvard, Vancouver, ISO, and other styles
37

Su, Hongxia, Huifen Liu, Shan Sun, and Mian Xu. "Application of Generative Artificial Intelligence Tool in Tourism in China." International Journal of Global Economics and Management 4, no. 3 (October 22, 2024): 122–30. http://dx.doi.org/10.62051/ijgem.v4n3.15.

Full text
Abstract:
With the rapid progress of artificial intelligence and natural language processing technology, large-scale language models such as ChatGPT are increasingly widely used in the tourism industry, profoundly changing the traditional mode of tourism services. This paper discusses the application of Generative AI tools in tourism and their challenges and risks from both the supply side and the demand side. The research points out that travel agencies face the loss of tourists, technical thresholds, and cost pressures, and that tourists also run certain risks when using Generative AI tools for travel planning and decision-making. In order to cope with these challenges and risks, this paper puts forward corresponding countermeasures and suggestions. Travel agencies should commit to improving information quality, strengthening technical updates, and ensuring information security. At the same time, tourists should be vigilant when using Generative AI tools and plan their travel itineraries properly. These strategies help travel agencies and tourists better adapt to the changes brought by new technologies and promote the sustainable development of the tourism industry.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Su-Yen. "Generative AI, learning and new literacies." Journal of Educational Technology Development and Exchange 16, no. 2 (2023): 1–19. http://dx.doi.org/10.18785/jetde.1602.01.

Full text
Abstract:
Launched in November 2022, OpenAI’s ChatGPT garnered over 100 million users within two months, sparking a surge in research and concern over potential risks of extensive AI experiments. The article, originating from a conference presentation by Tsinghua University and NTHU, Taiwan, provides a nuanced overview of Generative AI. It explores the classifications, applications, governance challenges, societal implications, and development trajectory of Generative AI, emphasizing its transformative role in employment and education. The piece highlights ChatGPT’s significant impact and the strategic adaptations required in various sectors, including medical education, engineering, information management, and distance education. Furthermore, it explores the opportunities and challenges associated with incorporating ChatGPT in educational settings, emphasizing its support in facilitating personalized learning, developing 21st-century competencies, fostering self-directed learning, and enhancing information accessibility. It also illustrates the integration of ChatGPT and text-to-image models in high school language courses through the lens of new literacies. The text uniquely integrates three layers of discourse: introductions to Generative AI by experts, scholarly debates on its merits and drawbacks, and practical classroom applications, offering a reflective snapshot of the current and potential states of Generative AI applications while emphasizing the interconnected discussions across various layers of discourse.
APA, Harvard, Vancouver, ISO, and other styles
39

Zainuddin, Nurkhamimi. "Does Artificial Intelligence Cause More Harm than Good in Schools?" International Journal of Language Education and Applied Linguistics 14, no. 1 (April 4, 2024): 1–3. http://dx.doi.org/10.15282/ijleal.v14i1.10432.

Full text
Abstract:
The integration of artificial intelligence (AI) in schools presents significant challenges and risks requiring responsible and ethical management. Despite warnings from tech leaders, major corporations push AI adoption in schools, leading to privacy violations, biased algorithms and curricular misinformation. Generative AI, though enhancing resources, risks disseminating false information. Biased AI models perpetuate inequalities, especially for marginalized groups. The financial burdens of AI implementation worsen budget constraints, and AI-driven surveillance raises privacy concerns. Governance must prioritize ethics and student rights, establishing transparent frameworks to prevent commercial interests from overshadowing educational goals. This editorial suggests halting AI adoption until comprehensive legislation safeguards against risks. Stakeholders should prioritize responsible AI development, stressing transparency and accountability. Collaboration between AI developers and educators is essential to ensuring AI serves students and society responsibly.
APA, Harvard, Vancouver, ISO, and other styles
40

Yang, Tianfang. "Legal Regulation of Generative Artificial Intelligence in China." Law & Digital Technologies 4, no. 1 (2024): 25. http://dx.doi.org/10.18254/s278229070031786-0.

Full text
Abstract:
The rapid development of generative artificial intelligence (AI), exemplified by technologies like ChatGPT, has prompted significant regulatory responses in China. This paper explores the legal framework established by China's Interim Measures for the Management of Generative Artificial Intelligence Services, highlighting its regulatory mechanisms and compliance obligations for AI service providers. The measures aim to address various risks associated with generative AI, such as data security, content management, and user protection, by implementing a dual registration system for algorithms and AI models. The Basic Safety Requirements for Generative Artificial Intelligence Services, published in 2024, provided detailed guidelines for ensuring the safety and legality of AI applications. This includes stringent assessments of data sources, content quality, and algorithm safety. By drawing comparisons with existing regulations like the Algorithmic Recommendations Regulation and the Deep Synthesis Regulation, this paper demonstrates China's consistent approach to AI governance, emphasizing the principles of promoting technological development while safeguarding public and individual interests. The findings suggest that China's regulatory framework for generative AI is designed to balance innovation with risk management, setting a precedent for comprehensive AI regulation.
APA, Harvard, Vancouver, ISO, and other styles
41

Dhagare, Mr Rahul Prabhakar. "Generative AI and Education: A Symbiotic Relationship." International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (November 30, 2024): 1042–45. http://dx.doi.org/10.22214/ijraset.2024.65279.

Full text
Abstract:
Generative AI, with its capacity to create diverse content formats, holds immense potential to revolutionize the educational landscape. This research paper delves into the multifaceted ways in which generative AI can enhance teaching and learning, fostering a symbiotic relationship between technology and human ingenuity. Personalized Learning: Generative AI can analyse vast datasets of student performance and engagement to create adaptive learning paths, tailoring instruction to individual needs and preferences. This personalized approach ensures that each learner receives the optimal level of support and challenge, maximizing their potential for growth and development. Engaging Learning Experiences: AI-powered tools can generate a wide range of interactive and immersive learning materials, such as simulations, virtual labs, and gamified experiences. These innovative resources can make education more engaging and enjoyable, capturing students' attention and motivating them to actively participate in the learning process. Streamlining Administrative Tasks: Generative AI can automate time-consuming administrative tasks, such as grading assignments, providing personalized feedback, and generating comprehensive reports on student progress. By freeing up valuable time and resources, educators can focus on more meaningful interactions with students, fostering a deeper connection and a more supportive learning environment. Ethical Considerations: While generative AI offers numerous benefits, it is essential to address potential challenges and ethical concerns. Data privacy, algorithmic bias, and the responsible use of AI are critical issues that must be carefully considered. By establishing clear guidelines, developing robust safeguards, and providing educators with the necessary training and support, we can mitigate risks and ensure that AI is used ethically and effectively in education.
APA, Harvard, Vancouver, ISO, and other styles
42

Machleidt, Petr, Jitka Mráčková, and Karel Mráček. "Perception of the risks inherent in new AI technologies." TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis 33, no. 2 (June 28, 2024): 42–48. http://dx.doi.org/10.14512/tatup.33.2.42.

Full text
Abstract:
Artificial intelligence (AI) has undergone rapid development and is becoming one of the major social issues. The advent of generative AI is not only associated with potential benefits but also poses a number of risks, such as increasing malicious misuse. This brings the issue of regulating new AI technologies into focus. In the search for solutions, the article revisits the problem of applying the precautionary principle. We critically evaluate regulatory approaches, particularly with regard to maintaining an innovation-friendly environment. A prudent approach to new AI technologies not only requires regulatory measures but also places new demands on the education system. We also discuss the regulation of AI from the perspective of Czech legislation in a broader international context. Current challenges in the field of AI competence are highlighted. To make the issue more tangible, we provide specific examples from the Czech Republic.
APA, Harvard, Vancouver, ISO, and other styles
43

Yan, Miao, and Longfei Wang. "Assessing the Copyright Infringement Risk of Generative AI Created Works." Lecture Notes in Education Psychology and Public Media 53, no. 1 (May 13, 2024): 99–107. http://dx.doi.org/10.54254/2753-7048/53/20240010.

Full text
Abstract:
The rapid evolution of the Internet and big data technology has ushered in significant advancements in data production and utilization. However, this progress has brought to the forefront the intricate issue of copyright protection. This is particularly evident with the widespread adoption of generative artificial intelligence (AI), where concerns regarding data source protection, data processing standards, and the boundaries of data application have garnered considerable attention from both academia and industry. Big data mining techniques coupled with generative AI algorithms offer robust support for data collection and processing, but they also introduce risks of data infringement and pose challenges regarding algorithmic compliance and ownership rights over generated works. Addressing these issues is imperative. This paper recommends enhancing legal compliance standards throughout each stage of the generative AI process, recalibrating the scope of acceptable copyright use, and establishing a regulatory framework for generative AI works. These measures are crucial for fostering the sustainable and orderly advancement of the generative AI industry.
APA, Harvard, Vancouver, ISO, and other styles
44

Yeshwanth Vasa. "ETHICAL IMPLICATIONS AND BIAS IN GENERATIVE AI." International Journal for Research Publication and Seminar 15, no. 3 (September 28, 2024): 500–511. http://dx.doi.org/10.36676/jrps.v15.i3.1541.

Full text
Abstract:
The rapid growth of generative AI, particularly in the content and services sectors, has created numerous opportunities. Although some progress has been made in meeting the resulting challenges, urgent ethical issues and biases remain. This paper discusses key ethical concerns around generative AI: privacy issues, the threat of misinformation, and questions of intellectual property ownership. It also demonstrates the existence of bias in these systems and advocates for the consideration of race, gender, and other societal factors. In addressing these concerns, the paper examines what risks generative AI systems pose, how those threats can be mitigated, and how equity in the use of AI can be enhanced, questions that remain open among scholars and practitioners.
APA, Harvard, Vancouver, ISO, and other styles
45

Yeshwanth Vasa. "ETHICAL IMPLICATIONS AND BIAS IN GENERATIVE AI." International Journal for Research Publication and Seminar 14, no. 5 (December 30, 2023): 500–511. http://dx.doi.org/10.36676/jrps.v14.i5.1541.

Full text
Abstract:
The rapid growth of generative AI, particularly in the content and services sectors, has created numerous opportunities. Although some progress has been made in meeting the resulting challenges, urgent ethical issues and biases remain. This paper discusses key ethical concerns around generative AI: privacy issues, the threat of misinformation, and questions of intellectual property ownership. It also demonstrates the existence of bias in these systems and advocates for the consideration of race, gender, and other societal factors. In addressing these concerns, the paper examines what risks generative AI systems pose, how those threats can be mitigated, and how equity in the use of AI can be enhanced, questions that remain open among scholars and practitioners.
APA, Harvard, Vancouver, ISO, and other styles
46

Bhattacharya, Biswajit. "Generative AI in Healthcare Industry: Implementations and Challenges." International Journal for Research in Applied Science and Engineering Technology 12, no. 8 (August 31, 2024): 1–5. http://dx.doi.org/10.22214/ijraset.2024.63815.

Full text
Abstract:
Generative AI (artificial intelligence) refers to algorithms and models that can be prompted to generate various types of content. Generative AI has quickly become a major factor in several industries, including health care. It has the potential to transform the sector, but we must understand how to use this technology in order to capitalize on its potential while avoiding the risks that may come with applying it to patient care. These models have played a crucial role in analyzing diverse forms of data, including medical imaging (encompassing image reconstruction, image-to-image translation, image generation, and image classification), clinical documentation, diagnostic assistance, clinical decision support, medical coding and billing, as well as software engineering, testing, and user data safety and security. In this review we briefly discuss some associated issues, such as trust, veracity, clinical safety and reliability, and privacy, as well as opportunities, e.g., AI-driven conversational user interfaces for friendlier human-computer interaction.
47

Kalia, Suman. "Potential Impact of Generative Artificial Intelligence(AI) on the Financial Industry." International Journal on Cybernetics & Informatics 12, no. 6 (October 7, 2023): 37–51. http://dx.doi.org/10.5121/ijci.2023.120604.

Full text
Abstract:
Presently, generative AI has taken center stage in the news media, educational institutions, and the world at large. Machine learning has been a decades-old phenomenon, with little exposure to the average person until very recently. In the natural world, the oldest and best example of a “generative” model is the human being: one can close one’s eyes and imagine several plausible different endings to one’s favorite TV show. This paper focuses on the impact of generative and machine learning AI on the financial industry. Although generative AI is an amazing tool for a discerning user, it also challenges us to think critically about the ethical implications and societal impact of these powerful technologies on the financial industry. Ethical considerations are required to guide decision-making, mitigate risks, and ensure that generative AI is developed and used in ways that align with ethical principles, social values, and the best interests of communities.
48

Lodge, Jason. "AI in the wild." Pacific Journal of Technology Enhanced Learning 6, no. 1 (April 19, 2024): 1. http://dx.doi.org/10.24135/pjtel.v6i1.176.

Full text
Abstract:
It has been well over a year since ChatGPT emerged and brought with it much commentary about challenges and opportunities for education. There has been considerable discussion about risks to academic integrity and the possibilities of generative AI for enhancing learning and teaching. As the dust settles, the hard work of determining how exactly generative AI will integrate into higher education begins. In this session, we will explore the current state of generative AI in student learning. While the integration of generative AI into formal coursework has been inconsistent, to say the least, many students are using these tools extensively as part of their studies. Drawing on in-depth interviews with 50 students across disciplines, a set of hypotheses about the impact of generative AI on student learning practices will be presented. A key component of the impact of these emerging technologies appears to be how familiar and confident students are in their understanding of their own learning. The implications of these findings will also be discussed.

Jason Lodge is Associate Professor of Educational Psychology and Director of the Learning, Instruction, and Technology Lab in the School of Education and is a Deputy Associate Dean (Academic) in the Faculty of Humanities, Arts and Social Sciences at The University of Queensland. Jason’s research with his lab focuses on the cognitive, metacognitive, and emotional mechanisms of learning, primarily in post-secondary settings and in digital learning environments. He currently serves as Lead Editor of Australasian Journal of Educational Technology and Editor of Student Success.
49

De Silva, Daswin, Shalinka Jayatilleke, Mona El-Ayoubi, Zafar Issadeen, Harsha Moraliyage, and Nishan Mills. "The Human-Centred Design of a Universal Module for Artificial Intelligence Literacy in Tertiary Education Institutions." Machine Learning and Knowledge Extraction 6, no. 2 (May 18, 2024): 1114–25. http://dx.doi.org/10.3390/make6020051.

Full text
Abstract:
Generative Artificial Intelligence (AI) is heralding a new era in AI for performing a spectrum of complex tasks that are indistinguishable from humans. Alongside language and text, Generative AI models have been built for all other modalities of digital data, image, video, audio, and code. The full extent of Generative AI and its opportunities, challenges, contributions, and risks are still being explored by academic researchers, industry practitioners, and government policymakers. While this deep understanding of Generative AI continues to evolve, the lack of fluency, literacy, and effective interaction with Generative and conventional AI technologies are common challenges across all domains. Tertiary education institutions are uniquely positioned to address this void. In this article, we present the human-centred design of a universal AI literacy module, followed by its four primary constructs that provide core competence in AI to coursework and research students and academic and professional staff in a tertiary education setting. In comparison to related work in AI literacy, our design is inclusive due to the collaborative approach between multiple stakeholder groups and is comprehensive given the descriptive formulation of the primary constructs of this module with exemplars of how they activate core operational competence across the four groups.
50

Howell, Michael D., Greg S. Corrado, and Karen B. DeSalvo. "Three Epochs of Artificial Intelligence in Health Care." JAMA 331, no. 3 (January 16, 2024): 242. http://dx.doi.org/10.1001/jama.2023.25057.

Full text
Abstract:
Importance: Interest in artificial intelligence (AI) has reached an all-time high, and health care leaders across the ecosystem are faced with questions about where, when, and how to deploy AI and how to understand its risks, problems, and possibilities. Observations: While AI as a concept has existed since the 1950s, all AI is not the same. Capabilities and risks of various kinds of AI differ markedly, and on examination 3 epochs of AI emerge. AI 1.0 includes symbolic AI, which attempts to encode human knowledge into computational rules, as well as probabilistic models. The era of AI 2.0 began with deep learning, in which models learn from examples labeled with ground truth. This era brought about many advances both in people’s daily lives and in health care. Deep learning models are task-specific, meaning they do one thing at a time, and they primarily focus on classification and prediction. AI 3.0 is the era of foundation models and generative AI. Models in AI 3.0 have fundamentally new (and potentially transformative) capabilities, as well as new kinds of risks, such as hallucinations. These models can do many different kinds of tasks without being retrained on a new dataset. For example, a simple text instruction will change the model’s behavior. Prompts such as “Write this note for a specialist consultant” and “Write this note for the patient’s mother” will produce markedly different content. Conclusions and Relevance: Foundation models and generative AI represent a major revolution in AI’s capabilities, offering tremendous potential to improve care. Health care leaders are making decisions about AI today. While any heuristic omits details and loses nuance, the framework of AI 1.0, 2.0, and 3.0 may be helpful to decision-makers because each epoch has fundamentally different capabilities and risks.
