A selection of scholarly literature on the topic "Responsible Artificial Intelligence"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Responsible Artificial Intelligence."

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work is formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever the relevant parameters are available in the item's metadata.

Journal articles on the topic "Responsible Artificial Intelligence"

1

Wang, Tawei. "Responsible Use of Artificial Intelligence". International Journal of Computer Auditing 4, No. 1 (December 2022): 001–3. http://dx.doi.org/10.53106/256299802022120401001.

Abstract:
Artificial intelligence (AI) has once again attracted the public's attention in the past decade. Riding on the wave of the big data revolution, the development of AI is much more promising than it was 50 years ago. One example is ChatGPT (https://openai.com/blog/chatgpt/), which has quickly become a hot topic in the past several months and has brought back all kinds of discussion about AI and decision making. In this editorial, I would like to highlight several perspectives that may help us rethink the implications of using AI for decision making, especially for audit professionals.
2

Teng, C. L., A. S. Bhullar, P. Jermain, D. Jordon, R. Nawfel, P. Patel, R. Sean, M. Shang and D. H. Wu. "Responsible Artificial Intelligence in Radiation Oncology". International Journal of Radiation Oncology*Biology*Physics 120, No. 2 (October 2024): e659. http://dx.doi.org/10.1016/j.ijrobp.2024.07.1446.

3

Gregor, Shirley. "Responsible Artificial Intelligence and Journal Publishing". Journal of the Association for Information Systems 25, No. 1 (2024): 48–60. http://dx.doi.org/10.17705/1jais.00863.

Abstract:
The aim of this opinion piece is to examine the responsible use of artificial intelligence (AI) in relation to academic journal publishing. The work discusses approaches to AI with particular attention to recent developments with generative AI. Consensus is noted around eight normative themes for principles for responsible AI and their associated risks. A framework from Shneiderman (2022) for human-centered AI is employed to consider journal publishing practices that can address the principles of responsible AI at different levels. The resultant AI principled governance matrix (AI-PGM) for journal publishing shows how countermeasures for risks can be employed at the levels of the author-researcher team, the organization, the industry, and by government regulation. The AI-PGM allows a structured approach to responsible AI and may be modified as developments with AI unfold. It shows how the whole publishing ecosystem should be considered when looking at the responsible use of AI—not just journal policy itself.
4

Haidar, Ahmad. "An Integrative Theoretical Framework for Responsible Artificial Intelligence". International Journal of Digital Strategy, Governance, and Business Transformation 13, No. 1 (December 15, 2023): 1–23. http://dx.doi.org/10.4018/ijdsgbt.334844.

Abstract:
The rapid integration of Artificial Intelligence (AI) into various sectors has yielded significant benefits, such as enhanced business efficiency and customer satisfaction, while posing challenges, including privacy concerns, algorithmic bias, and threats to autonomy. In response to these multifaceted issues, this study proposes a novel integrative theoretical framework for Responsible AI (RAI), which addresses four key dimensions: technical, sustainable development, responsible innovation management, and legislation. The responsible innovation management and the legal dimensions form the foundational layers of the framework. The first embeds elements like anticipation and reflexivity into corporate culture, and the latter examines AI-specific laws from the European Union and the United States, providing a comparative perspective on legal frameworks governing AI. The study's findings may be helpful for businesses seeking to responsibly integrate AI, developers who focus on creating responsibly compliant AI, and policymakers looking to foster awareness and develop guidelines for RAI.
5

Shneiderman, Ben. "Responsible AI". Communications of the ACM 64, No. 8 (August 2021): 32–35. http://dx.doi.org/10.1145/3445973.

6

Dignum, Virginia. "Responsible Artificial Intelligence --- From Principles to Practice". ACM SIGIR Forum 56, No. 1 (June 2022): 1–6. http://dx.doi.org/10.1145/3582524.3582529.

Abstract:
The impact of Artificial Intelligence does not depend only on fundamental research and technological developments, but for a large part on how these systems are introduced into society and used in everyday situations. AI is changing the way we work, live and solve challenges but concerns about fairness, transparency or privacy are also growing. Ensuring responsible, ethical AI is more than designing systems whose result can be trusted. It is about the way we design them, why we design them, and who is involved in designing them. In order to develop and use AI responsibly, we need to work towards technical, societal, institutional and legal methods and tools which provide concrete support to AI practitioners, as well as awareness and training to enable participation of all, to ensure the alignment of AI systems with our societies' principles and values. This paper is a curated version of my keynote at the Web Conference 2022.
7

Rodrigues, Rowena, Anais Resseguier and Nicole Santiago. "When Artificial Intelligence Fails". Public Governance, Administration and Finances Law Review 8, No. 2 (December 14, 2023): 17–28. http://dx.doi.org/10.53116/pgaflr.7030.

Abstract:
Diverse initiatives promote the responsible development, deployment and use of Artificial Intelligence (AI). AI incident databases have emerged as a valuable and timely learning resource and tool in AI governance. This article assesses the value of such databases and outlines how this value can be enhanced. It reviews four databases: the AI Incident Database, the AI, Algorithmic, and Automation Incidents and Controversies Repository, the AI Incident Tracker and Where in the World Is AI. The article provides a descriptive analysis of these databases, examines their objectives, and locates them within the landscape of initiatives that advance responsible AI. It reflects on their primary objective, i.e. learning from mistakes to avoid them in the future, and explores how they might benefit diverse stakeholders. The article supports the broader uptake of these databases and recommends four key actions to enhance their value.
8

VASYLKIVSKYI, Mikola, Ganna VARGATYUK and Olga BOLDYREVA. "INTELLIGENT RADIO INTERFACE WITH THE SUPPORT OF ARTIFICIAL INTELLIGENCE". Herald of Khmelnytskyi National University. Technical sciences 217, No. 1 (February 23, 2023): 26–32. http://dx.doi.org/10.31891/2307-5732-2023-317-1-26-32.

Abstract:
The peculiarities of the implementation of the 6G intelligent radio interface infrastructure, which will use an individual configuration for each subscriber application and flexible services with lower overhead costs, have been studied. A personalized infrastructure consisting of an AI-enabled intelligent physical layer, an intelligent MAC controller, and an intelligent protocol is considered, followed by a potentially novel AI-based end-to-end (E2E) device. The intelligent controller is investigated, in particular the intelligent functions at the MAC level, which may become key components of the intelligent controller in the future. The joint optimization of these components, which will provide better system performance, is considered. It was determined that instead of using a complex mathematical optimization method, it is possible to use machine learning, which has lower complexity and can adapt to network conditions. A 6G radio interface design based on a combination of model-driven and data-driven artificial intelligence is investigated and is expected to provide customized radio interface optimization from pre-configuration to self-learning. The specifics of configuring the network scheme and transmission parameters at the level of subscriber equipment and services using a personalized radio interface to maximize the individual user experience without compromising the throughput of the system as a whole are determined. Artificial intelligence is considered as a built-in function of the radio interface that creates an intelligent physical layer and is responsible for MAC access control, network management optimization (such as load balancing and power saving), replacing some non-linear or non-convex algorithms in receiver modules, and compensating for shortcomings in non-linear models.
Built-in intelligence has been studied, which will make the 6G physical layer more advanced and efficient, facilitate the optimization of structural elements of the physical layer and procedural design, including a possible change of the receiver architecture, and will help implement new detection and positioning capabilities, which, in turn, will significantly affect the design of radio interface components. The requirements for the 6G network are defined, which provide for the creation of a single network with scanning and communication functions, which must be integrated into a single structure at the stage of radio interface design. The specifics of carefully designing a communication and scanning network that will offer full scanning capabilities and more fully meet all key performance indicators in the communications industry are explored.
9

Germanov, Nikolai S. "The concept of responsible artificial intelligence as the future of artificial intelligence in medicine". Digital Diagnostics 4, No. 1S (June 26, 2023): 27–29. http://dx.doi.org/10.17816/dd430334.

Abstract:
Active deployment of artificial intelligence (AI) systems in medicine creates many challenges. Recently, the concept of responsible artificial intelligence (RAI), aimed at solving the inevitable ethical, legal, and social problems, has been widely discussed. The scientific literature was analyzed, and the possibility of applying the RAI concept to overcome the existing problems of AI in medicine was considered. Studies of possible AI applications in medicine showed that current algorithms are unable to meet the basic enduring needs of society, particularly fairness, transparency, and reliability. The RAI concept, built on the three principles of accountability, responsibility, and transparency (ART), was proposed to address these ethical issues. Without the development and application of the ART concept, the further use of AI in areas such as medicine and public administration becomes dangerous, if not impossible. The requirements for accountability and transparency of conclusions are based on the identified epistemological (erroneous, non-transparent, and incomplete conclusions) and regulatory (data confidentiality and discrimination against certain groups) problems of using AI in digital medicine [2]. Epistemological errors committed by AI are not limited to omissions related to the volume and representativeness of the original databases analyzed. They also include the well-known black box problem, i.e. the inability to look into the process by which AI forms its outputs when processing input data. Along with epistemological errors, normative problems inevitably arise, including patient confidentiality and discrimination against some social groups: the refusal of some patients to provide medical data for training algorithms and for the analyzed databases will lead to inaccurate AI conclusions for patients of certain genders, races, and ages.
Importantly, the methodology of the AI data analysis depends on the program code written by the programmer, whose epistemological and logical errors are projected onto the AI. Hence the problem of determining responsibility in the case of erroneous conclusions, i.e. how it should be distributed among the program itself, the developer, and the executor. Numerous professional associations design ethical standards for developers and a statutory framework to regulate responsibility among the parties described. However, the state must play the greatest role in the development and approval of such legislation. The use of AI in medicine, despite its advantages, is accompanied by many ethical, legal, and social challenges. The development of RAI has the potential both to solve these challenges and to further the active and secure deployment of AI systems in digital medicine and healthcare.
10

Tyrranen, V. A. "ARTIFICIAL INTELLIGENCE CRIMES". Territory Development, No. 3(17) (2019): 10–13. http://dx.doi.org/10.32324/2412-8945-2019-3-10-13.

Abstract:
The article is devoted to current threats to information security associated with the widespread dissemination of computer technology. The author considers one aspect of cybercrime, namely crime committed using artificial intelligence. The concept of artificial intelligence is analyzed, and a definition sufficient for effective enforcement is proposed. The article discusses the problems of criminalizing such offences and shows the difficulties of resolving the questions of the legal personality and delictual capacity of artificial intelligence. The author presents various cases explaining why difficulties arise in determining the person responsible for a crime, and gives an objective assessment of the possibility of criminal prosecution of the creators of software whose errors caused harm to the rights and legitimate interests protected by criminal law.

Dissertations on the topic "Responsible Artificial Intelligence"

1

Svedberg, Peter O. S. "Steps towards an empirically responsible AI: a methodological and theoretical framework". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-246.

Abstract:

Initially we pursue a minimal model of a cognitive system. This in turn forms the basis for the development of a methodological and theoretical framework. Two methodological requirements of the model are that explanation be from the perspective of the phenomena, and that we have structural determination. The minimal model is derived from the explanatory side of a biologically based cognitive science. Francisco Varela is our principal source for this part. The model defines the relationship between a formally defined autonomous system and an environment, in such a way as to generate the world of the system, its actual environment. The minimal model is a modular explanation in that we find it on different levels in bio-cognitive systems, from the cell to small social groups. For the latter, and for the role played by artefactual systems, we bring in Edwin Hutchins' observational study of a cognitive system in action. This necessitates the introduction of a complementary form of explanation. A key aspect of Hutchins' findings is the social domain as environment for humans. Aspects of human cognitive abilities usually attributed to the person are more properly attributed to the social system, including artefactual systems.

Developing the methodological and theoretical framework means making a transition from the bio-cognitive to the computational. The two complementary forms of explanation are important for the ability to develop a methodology that supports the construction of actual systems. This has to be able to handle the transition from external determination of a system in design to internal determination (autonomy) in operation.

Once developed, the combined framework is evaluated in an application area. This is done by comparing the standard conception of the Semantic Web with how this notion looks from the perspective of the framework. This includes the development of the methodological framework as a metalevel external knowledge representation. A key difference between the two approaches is the directness with which the semantics are approached. Our perspective puts the focus on interaction and the structural regularities this engenders in the external representation, regularities which in turn form the basis for machine processing. In this regard we see the relationship between representation and inference as analogous to the relationship between environment and system. Accordingly we have the social domain as environment for artefactual agents. For human-level cognitive abilities the social domain as environment is important. We argue that a reasonable shortcut to systems we can relate to, about that very domain, is for artefactual agents to have an external representation of the social domain as environment.

2

Ounissi, Mehdi. "Decoding the Black Box: Enhancing Interpretability and Trust in Artificial Intelligence for Biomedical Imaging - a Step Toward Responsible Artificial Intelligence". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS237.

Abstract:
In an era dominated by AI, its opaque decision-making --known as the "black box" problem-- poses significant challenges, especially in critical areas like biomedical imaging where accuracy and trust are crucial. Our research focuses on enhancing AI interpretability in biomedical applications. We have developed a framework for analyzing biomedical images that quantifies phagocytosis in neurodegenerative diseases using time-lapse phase-contrast video microscopy. Traditional methods often struggle with rapid cellular interactions and distinguishing cells from backgrounds, critical for studying conditions like frontotemporal dementia (FTD). Our scalable, real-time framework features an explainable cell segmentation module that simplifies deep learning algorithms, enhances interpretability, and maintains high performance by incorporating visual explanations and by model simplification. We also address issues in visual generative models, such as hallucinations in computational pathology, by using a unique encoder for Hematoxylin and Eosin staining coupled with multiple decoders. This method improves the accuracy and reliability of synthetic stain generation, employing innovative loss functions and regularization techniques that enhance performance and enable precise synthetic stains crucial for pathological analysis. Our methodologies have been validated against several public benchmarks, showing top-tier performance. Notably, our framework distinguished between mutant and control microglial cells in FTD, providing new biological insights into this unproven phenomenon. Additionally, we introduced a cloud-based system that integrates complex models and provides real-time feedback, facilitating broader adoption and iterative improvements through pathologist insights. 
The release of novel datasets, including video microscopy on microglial cell phagocytosis and a virtual staining dataset related to pediatric Crohn's disease, along with all source code, underscores our commitment to open, transparent scientific collaboration and advancement. Our research highlights the importance of interpretability in AI, advocating for technology that integrates seamlessly with user needs and ethical standards in healthcare. Enhanced interpretability allows researchers to better understand data and improve tool performance.
3

Haidar, Ahmad. "Responsible Artificial Intelligence: Designing Frameworks for Ethical, Sustainable, and Risk-Aware Practices". Electronic Thesis or Diss., université Paris-Saclay, 2024. https://www.biblio.univ-evry.fr/theses/2024/interne/2024UPASI008.pdf.

Abstract:
Artificial Intelligence (AI) is rapidly transforming the world, redefining the relationship between technology and society. This thesis investigates the critical need for responsible and sustainable development, governance, and usage of AI and Generative AI (GAI). The study addresses the ethical risks, regulatory gaps, and challenges associated with AI systems while proposing actionable frameworks for fostering Responsible Artificial Intelligence (RAI) and Responsible Digital Innovation (RDI).The thesis begins with a comprehensive review of 27 global AI ethical declarations to identify dominant principles such as transparency, fairness, accountability, and sustainability. Despite their significance, these principles often lack the necessary tools for practical implementation. To address this gap, the second study in the research presents an integrative framework for RAI based on four dimensions: technical, AI for sustainability, legal, and responsible innovation management.The third part of the thesis focuses on RDI through a qualitative study of 18 interviews with managers from diverse sectors. Five key dimensions are identified: strategy, digital-specific challenges, organizational KPIs, end-user impact, and catalysts. These dimensions enable companies to adopt sustainable and responsible innovation practices while overcoming obstacles in implementation.The fourth study analyzes emerging risks from GAI, such as misinformation, disinformation, bias, privacy breaches, environmental concerns, and job displacement. Using a dataset of 858 incidents, this research employs binary logistic regression to examine the societal impact of these risks. The results highlight the urgent need for stronger regulatory frameworks, corporate digital responsibility, and ethical AI governance. Thus, this thesis provides critical contributions to the fields of RDI and RAI by evaluating ethical principles, proposing integrative frameworks, and identifying emerging risks. 
It emphasizes the importance of aligning AI governance with international standards to ensure that AI technologies serve humanity sustainably and equitably.
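The fourth study summarized above examines the societal impact of GAI risks by fitting a binary logistic regression to a dataset of 858 incidents. As a minimal illustration of that technique only (not the thesis's actual code, features, or data), the sketch below fits a logistic model by batch gradient descent on a handful of invented "incident" records; the two feature names and the impact labels are purely hypothetical:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit binary logistic regression by batch gradient descent
    on the negative log-likelihood (no regularization)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of impact
            for j, xj in enumerate(xi):      # accumulate gradient (p - y) * x
                gw[j] += (p - yi) * xj
            gb += p - yi
        w = [wj - lr * gwj / len(X) for wj, gwj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict_proba(w, b, xi):
    """Probability that an incident with features xi has high impact."""
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical incident records: [involves_personal_data, generative_model]
X = [[1, 0], [1, 1], [0, 1], [0, 0], [1, 1], [0, 0], [1, 0], [0, 1]]
y = [1, 1, 0, 0, 1, 0, 1, 0]  # 1 = high societal impact (invented labels)

w, b = fit_logistic(X, y)
```

In the thesis itself the regression is run over the real 858-incident dataset with richer covariates; this toy version only shows the mechanics of the estimator.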
4

Sugianto, Nehemia. "Responsible AI for Automated Analysis of Integrated Video Surveillance in Public Spaces". Thesis, Griffith University, 2021. http://hdl.handle.net/10072/409586.

Abstract:
Understanding customer experience in real-time can potentially support people’s safety and comfort while in public spaces. Existing techniques, such as surveys and interviews, can only analyse data at specific times. Therefore, organisations that manage public spaces, such as local government or business entities, cannot respond immediately when urgent actions are needed. Manual monitoring through surveillance cameras can enable organisation personnel to observe people. However, fatigue and human distraction during constant observation cannot ensure reliable and timely analysis. Artificial intelligence (AI) can automate people observation and analyse their movement and any related properties in real-time. Analysing people’s facial expressions can provide insight into how comfortable they are in a certain area, while analysing crowd density can inform us of the area’s safety level. By observing the long-term patterns of crowd density, movement, and spatial data, the organisation can also gain insight to develop better strategies for improving people’s safety and comfort. There are three challenges to making an AI-enabled video surveillance system work well in public spaces. First is the readiness of AI models to be deployed in public space settings. Existing AI models are designed to work in generic/particular settings and will suffer performance degradation when deployed in a real-world setting. Therefore, the models require further development to tailor them for the specific environment of the targeted deployment setting. Second is the inclusion of AI continual learning capability to adapt the models to the environment. AI continual learning aims to learn from new data collected from cameras to adapt the models to constant visual changes introduced in the setting. Existing continuous learning approaches require long-term data retention and past data, which then raise data privacy issues. 
Third, most of the existing AI-enabled surveillance systems rely on centralised processing, meaning data are transmitted to a central/cloud machine for video analysis purposes. Such an approach involves data privacy and security risks. Serious data threats, such as data theft, eavesdropping or cyberattack, can potentially occur during data transmission. This study aims to develop an AI-enabled intelligent video surveillance system based on deep learning techniques for public spaces established on responsible AI principles. This study formulates three responsible AI criteria, which become the guidelines to design, develop, and evaluate the system. Based on the criteria, a framework is constructed to scale up the system over time to be readily deployed in a specific real-world environment while respecting people’s privacy. The framework incorporates three AI learning approaches to iteratively refine the AI models within the ethical use of data. First is the AI knowledge transfer approach to adapt existing AI models from generic deployment to specific real-world deployment with limited surveillance datasets. Second is the AI continuous learning approach to continuously adapt AI models to visual changes introduced by the environment without long-period data retention and the need for past data. Third is the AI federated learning approach to limit sensitive and identifiable data transmission by performing computation locally on edge devices rather than transmitting to the central machine. This thesis contributes to the study of responsible AI specifically in the video surveillance context from both technical and non-technical perspectives. It uses three use cases at an international airport as the application context to understand passenger experience in real-time to ensure people’s safety and comfort. A new video surveillance system is developed based on the framework to provide automated people observation in the application context. 
Based on real deployment using the airport’s selected cameras, the evaluation demonstrates that the system can provide real-time automated video analysis for the three use cases while respecting people’s privacy. Comprehensive experiments show that AI knowledge transfer can effectively address the issue of limited surveillance datasets by transferring knowledge from similar datasets rather than training from scratch on surveillance datasets; it can be further improved by incrementally transferring knowledge across multiple datasets with smaller gaps rather than in a one-stage process. Learning without Forgetting is a viable approach for AI continual learning in the video surveillance context: it consistently outperforms fine-tuning and joint-training approaches with lower data retention and without the need for past data. AI federated learning can be a feasible solution to allow continual learning in the video surveillance context without compromising model accuracy, obtaining comparable accuracy with quicker training time than joint training.
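The federated learning approach described in this abstract shares model updates rather than raw footage: each edge device trains locally, and only parameters travel to the coordinating machine. As a minimal illustrative sketch of the server-side aggregation step (federated averaging; the function and variable names below are hypothetical, not taken from the thesis):

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Combine per-device model parameters into a global model (FedAvg).

    client_params: one list of parameter arrays per edge device
    client_sizes:  number of local training samples on each device,
                   used to weight the average
    """
    total = float(sum(client_sizes))
    n_layers = len(client_params[0])
    global_params = []
    for layer in range(n_layers):
        # Weighted sum of this layer's parameters across all devices.
        acc = np.zeros_like(client_params[0][layer], dtype=float)
        for params, size in zip(client_params, client_sizes):
            acc += (size / total) * params[layer]
        global_params.append(acc)
    return global_params
```

Only these aggregated parameters leave the devices; the raw video frames used for local training never do, which is what limits the transmission of sensitive and identifiable data.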
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Department of Business Strategy and Innovation
Griffith Business School
5

Kessing, Maria. „Fairness in AI : Discussion of a Unified Approach to Ensure Responsible AI Development“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299936.

Annotation:
Besides entailing various benefits, AI technologies have also led to increased ethical concerns. Due to this growing attention, a large number of frameworks discussing responsible AI development have been released since 2016. This work analyzes some of these proposals to answer the question (1) “Which approaches can be found to ensure responsible AI development?” To this end, the theory section of the paper looks at various approaches, including (inter-)governmental regulations, research organizations, and private companies. Further, expert interviews were conducted to answer the second research question (2) “How can a unified solution be reached to ensure responsible AI development?” The results of the study identify governments as the main driver of this process. Overall, a detailed plan is necessary that brings together the public and private sectors as well as research organizations. The paper also points out the importance of education in making AI explainable and comprehensible for everyone.
6

Umurerwa, Janviere, and Maja Lesjak. „AI IMPLEMENTATION AND USAGE : A qualitative study of managerial challenges in implementation and use of AI solutions from the researchers’ perspective“. Thesis, Umeå universitet, Institutionen för informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-187810.

Annotation:
Artificial intelligence (AI) technologies are developing rapidly and causing radical changes at the organizational, company, societal, and individual levels. Managers are facing new challenges that they might not be prepared for. In this work, we explore the managerial challenges experienced while implementing and using AI technologies, from the researchers’ perspective. Moreover, we explore how appropriate ethical deliberations should be applied when using big data in connection with AI, and what it means to understand or define it. We describe qualitative research based on triangulation, which includes related literature, in-depth interviews with researchers working on related topics from various fields, and a focus group discussion. Our findings show that AI algorithms are not universal, objective, or neutral; researchers therefore believe managers need a solid understanding of the complexity of AI technologies and the nature of big data. These are necessary to develop sufficient purchasing capabilities and apply appropriate ethical considerations. Based on our results, we believe researchers are aware that these issues should be handled, but they have so far received too little attention. We therefore suggest further discussion and encourage research in this field.
7

„Responsible Governance of Artificial Intelligence: An Assessment, Theoretical Framework, and Exploration“. Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.55667.

Annotation:
While artificial intelligence (AI) has seen enormous technical progress in recent years, less progress has occurred in understanding the governance issues raised by AI. In this dissertation, I make four contributions to the study and practice of AI governance. First, I connect AI to the literature and practices of responsible research and innovation (RRI) and explore their applicability to AI governance. I focus in particular on AI’s status as a general purpose technology (GPT), and suggest some of the distinctive challenges for RRI in this context, such as the critical importance of publication norms in AI and the need for coordination. Second, I provide an assessment of existing AI governance efforts from an RRI perspective, synthesizing for the first time a wide range of literatures on AI governance and highlighting several limitations of extant efforts. This assessment helps identify areas for methodological exploration. Third, I explore, through several short case studies, the value of three different RRI-inspired methods for making AI governance more anticipatory and reflexive: expert elicitation, scenario planning, and formal modeling. In each case, I explain why these particular methods were deployed, what they produced, and what lessons can be learned for improving the governance of AI in the future. I find that RRI-inspired methods have substantial potential in the context of AI, and lend early utility to the GPT-oriented perspective on what RRI in AI entails. Finally, I describe several areas for future work that would put RRI in AI on a sounder footing.
Dissertation/Thesis
Doctoral Dissertation Human and Social Dimensions of Science and Technology 2019
8

Arienti, João Henrique Leal. „Time series forecasting applied to an energy management system ‐ A comparison between Deep Learning Models and other Machine Learning Models“. Master's thesis, 2020. http://hdl.handle.net/10362/108172.

Annotation:
Project Work presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics
A large share of the energy used worldwide goes to buildings’ energy consumption. HVAC (Heating, Ventilation, and Air Conditioning) systems are the biggest offenders when it comes to buildings’ energy consumption. It is important to provide environmental comfort in buildings, but indoor wellbeing is directly related to an increase in energy consumption. This dilemma creates a huge opportunity for a solution that balances occupant comfort and energy consumption. Within this context, the Ambiosensing project was launched to develop a complete energy management system that differentiates itself from existing commercial solutions by being an inexpensive and intelligent system. The Ambiosensing project focused on Time Series Forecasting to achieve the goal of creating predictive models that help the energy management system anticipate indoor environmental scenarios. A good approach for Time Series Forecasting problems is to apply Machine Learning, more specifically Deep Learning. This work project intends to investigate and develop Deep Learning and other Machine Learning models that can deal with multivariate Time Series Forecasting, to assess how well a Deep Learning approach can perform on a Time Series Forecasting problem, especially LSTM (Long Short-Term Memory) Recurrent Neural Networks (RNN), and to establish a comparison between Deep Learning and other Machine Learning models such as Linear Regression, Decision Trees, Random Forest, Gradient Boosting Machines, and others within this context.
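Whichever model family is compared, a multivariate series must first be framed as a supervised learning problem. A minimal sketch of this common windowing step (the function and variable names are illustrative, not taken from the thesis):

```python
import numpy as np

def make_windows(series, n_lags):
    """Frame a multivariate time series for supervised forecasting.

    series: array of shape (timesteps, features), e.g. temperature,
            humidity, and CO2 readings per time step
    n_lags: how many past time steps form each input window

    Returns X of shape (samples, n_lags * features), suitable for models
    such as linear regression or gradient boosting (an LSTM would instead
    keep the (samples, n_lags, features) shape), and y with the next
    step's values.
    """
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t].ravel())  # past n_lags observations
        y.append(series[t])                     # value to predict
    return np.array(X), np.array(y)
```

With this shared framing, deep and classical models can be trained and evaluated on identical inputs, which is what makes the comparison in the thesis meaningful.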
9

Voarino, Nathalie. „Systèmes d’intelligence artificielle et santé : les enjeux d’une innovation responsable“. Thesis, 2019. http://hdl.handle.net/1866/23526.

Annotation:
The use of artificial intelligence (AI) systems in health is part of the advent of a new "high definition" medicine that is predictive, preventive, and personalized, benefiting from the unprecedented amount of data available today. At the heart of digital health innovation, the development of AI systems promises to lead to an interconnected and self-learning healthcare system. AI systems could thus help to redefine the classification of diseases, generate new medical knowledge, or predict the health trajectories of individuals for prevention purposes. Today, various applications in healthcare are being considered, ranging from assistance with medical decision-making through expert systems to precision medicine (e.g. pharmacological targeting), as well as individualized prevention through health trajectories developed on the basis of biological markers. However, urgent ethical concerns emerge with the increasing use of algorithms to analyze a growing amount of health-related data (often personal and sensitive), as well as the reduction of human intervention in many automated processes. From the limitations of big data analysis, the need for data sharing, and the ‘opacity’ of algorithmic decisions stem various ethical concerns relating to the protection of privacy and intimacy, free and informed consent, social justice, dehumanization of care and patients, and security. To address these challenges, many initiatives have focused on defining and applying principles for an ethical governance of AI. However, the operationalization of these principles faces various difficulties inherent to applied ethics, which originate either from the scope (universal or plural) of these principles or from the way they are put into practice (inductive or deductive methods). These issues can be addressed with context-specific or bottom-up approaches to applied ethics. However, people who embrace these approaches still face several challenges.
From an analysis of citizens' fears and expectations emerging from the discussions that took place during the co-construction of the Montreal Declaration for a Responsible Development of AI, it is possible to get a sense of what these difficulties look like. From this analysis, three main challenges emerge: the incapacitation of health professionals and patients, the many-hands problem, and artificial agency. These challenges call for AI systems that empower people and that maintain human agency, in order to foster the development of (pragmatic) shared responsibility among the various stakeholders involved in the development of healthcare AI systems. Meeting these challenges is essential in order to adapt existing governance mechanisms and enable the development of responsible digital innovation in healthcare and research that keeps human beings at the center of its development.

Books on the topic "Responsible Artificial Intelligence"

1

Dignum, Virginia. Responsible Artificial Intelligence. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6.

2

Schmidpeter, René, and Reinhard Altenburger, eds. Responsible Artificial Intelligence. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09245-9.

3

Khoshnevisan, Mohammad. Artificial intelligence and responsive optimization. 2nd ed. Phoenix: Xiquan, 2003.

4

Khoshnevisan, Mohammad. Artificial intelligence and responsive optimization. Phoenix: Xiquan, 2003.

5

Khamparia, Aditya, Deepak Gupta, Ashish Khanna, and Valentina E. Balas, eds. Biomedical Data Analysis and Processing Using Explainable (XAI) and Responsive Artificial Intelligence (RAI). Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1476-8.

6

Responsible Artificial Intelligence. Springer, 2020.

7

Altenburger, Reinhard, and René Schmidpeter. Responsible Artificial Intelligence: Challenges for Sustainable Management. Springer International Publishing AG, 2022.

8

Responsible Artificial Intelligence: Challenges for Sustainable Management. Springer International Publishing AG, 2024.

9

Kaplan, Jerry. Artificial Intelligence. Oxford University Press, 2016. http://dx.doi.org/10.1093/wentk/9780190602383.001.0001.

Annotation:
Over the coming decades, Artificial Intelligence will profoundly impact the way we live, work, wage war, play, seek a mate, educate our young, and care for our elderly. It is likely to greatly increase our aggregate wealth, but it will also upend our labor markets, reshuffle our social order, and strain our private and public institutions. Eventually it may alter how we see our place in the universe, as machines pursue goals independent of their creators and outperform us in domains previously believed to be the sole dominion of humans. Whether we regard them as conscious or unwitting, revere them as a new form of life or dismiss them as mere clever appliances, is beside the point. They are likely to play an increasingly critical and intimate role in many aspects of our lives. The emergence of systems capable of independent reasoning and action raises serious questions about just whose interests they are permitted to serve, and what limits our society should place on their creation and use. Deep ethical questions that have bedeviled philosophers for ages will suddenly arrive on the steps of our courthouses. Can a machine be held accountable for its actions? Should intelligent systems enjoy independent rights and responsibilities, or are they simple property? Who should be held responsible when a self-driving car kills a pedestrian? Can your personal robot hold your place in line, or be compelled to testify against you? If it turns out to be possible to upload your mind into a machine, is that still you? The answers may surprise you.
10

Knowings, L. D. Ethical AI: Navigating the Future With Responsible Artificial Intelligence. Sandiver Publishing, 2024.


Book chapters on the topic "Responsible Artificial Intelligence"

1

Dignum, Virginia. „What Is Artificial Intelligence?“ In Responsible Artificial Intelligence, 9–34. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_2.

2

Leopold, Helmut. „Mastering Trustful Artificial Intelligence“. In Responsible Artificial Intelligence, 133–58. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09245-9_6.

3

Altenburger, Reinhard. „Artificial Intelligence: Management Challenges and Responsibility“. In Responsible Artificial Intelligence, 1–8. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09245-9_1.

4

Dignum, Virginia. „Introduction“. In Responsible Artificial Intelligence, 1–7. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_1.

5

Dignum, Virginia. „Ethical Decision-Making“. In Responsible Artificial Intelligence, 35–46. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_3.

6

Dignum, Virginia. „Taking Responsibility“. In Responsible Artificial Intelligence, 47–69. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_4.

7

Dignum, Virginia. „Can AI Systems Be Ethical?“ In Responsible Artificial Intelligence, 71–92. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_5.

8

Dignum, Virginia. „Ensuring Responsible AI in Practice“. In Responsible Artificial Intelligence, 93–105. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_6.

9

Dignum, Virginia. „Looking Further“. In Responsible Artificial Intelligence, 107–20. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_7.

10

Schindler, Matthias, und Frederik Schmihing. „Technology Serves People: Democratising Analytics and AI in the BMW Production System“. In Responsible Artificial Intelligence, 159–82. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09245-9_7.


Conference papers on the topic "Responsible Artificial Intelligence"

1

Herrera, Francisco. „Responsible Artificial Intelligence Systems: From Trustworthiness to Governance“. In 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1–2. IEEE, 2024. http://dx.doi.org/10.23919/date58400.2024.10546553.

2

R, Sukumar, Vinima Gambhir, and Jyoti Seth. „Investigating the Ethical Implications of Artificial Intelligence and Establishing Guidelines for Responsible AI Development“. In 2024 International Conference on Advances in Computing Research on Science Engineering and Technology (ACROSET), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/acroset62108.2024.10743915.

3

Yeasin, Mohammed. „Keynote Speaker ICOM'24: Perspective on Convergence of Mechatronics and Artificial Intelligence in Responsible Innovation“. In 2024 9th International Conference on Mechatronics Engineering (ICOM), XIV. IEEE, 2024. http://dx.doi.org/10.1109/icom61675.2024.10652385.

4

Dignum, Virginia. „Responsible Autonomy“. In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/655.

Annotation:
As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal, and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders and to make these explicit, leading to better understanding of, and trust in, artificial autonomous systems.
5

Wang, Yichuan, Mengran Xiong und Hossein Olya. „Toward an Understanding of Responsible Artificial Intelligence Practices“. In Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences, 2020. http://dx.doi.org/10.24251/hicss.2020.610.

6

Wang, Shoujin, Ninghao Liu, Xiuzhen Zhang, Yan Wang, Francesco Ricci, and Bamshad Mobasher. „Data Science and Artificial Intelligence for Responsible Recommendations“. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3534678.3542916.

7

Calvo, Albert, Nil Ortiz, Alejandro Espinosa, Aleksandar Dimitrievikj, Ignasi Oliva, Jordi Guijarro, and Shuaib Sidiqqi. „Safe AI: Ensuring Safe and Responsible Artificial Intelligence“. In 2023 JNIC Cybersecurity Conference (JNIC). IEEE, 2023. http://dx.doi.org/10.23919/jnic58574.2023.10205749.

8

Dong, Tian, Shaofeng Li, Guoxing Chen, Minhui Xue, Haojin Zhu, and Zhen Liu. „RAI2: Responsible Identity Audit Governing the Artificial Intelligence“. In Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2023. http://dx.doi.org/10.14722/ndss.2023.241012.

9

Tahaei, Mohammad, Marios Constantinides, Daniele Quercia, Sean Kennedy, Michael Muller, Simone Stumpf, Q. Vera Liao et al. „Human-Centered Responsible Artificial Intelligence: Current & Future Trends“. In CHI '23: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3544549.3583178.

10

Iliadis, Eduard. „AI-GFA: Applied Framework for Producing Responsible Artificial Intelligence“. In GoodIT '24: International Conference on Information Technology for Social Good, 93–99. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3677525.3678646.


Reports by organizations on the topic "Responsible Artificial Intelligence"

1

Stanley-Lockman, Zoe. Responsible and Ethical Military AI. Center for Security and Emerging Technology, August 2021. http://dx.doi.org/10.51593/20200091.

Annotation:
Allies of the United States have begun to develop their own policy approaches to responsible military use of artificial intelligence. This issue brief looks at key allies with articulated, emerging, and nascent views on how to manage ethical risk in adopting military AI. The report compares their convergences and divergences, offering pathways for the United States, its allies, and multilateral institutions to develop common approaches to responsible AI implementation.
2

Lehoux, Pascale, Hassane Alami, Carl Mörch, Lysanne Rivard, Robson Rocha, and Hudson Silva. Can we innovate responsibly during a pandemic? Artificial intelligence, digital solutions and SARS-CoV-2. Observatoire international sur les impacts sociétaux de l’intelligence artificielle et du numérique, June 2020. http://dx.doi.org/10.61737/ueti5496.

Annotation:
As part of the research project of the International Observatory on the societal impacts of AI and digital technology (OBVIA) on the societal effects of AI systems and digital tools deployed to combat the spread of COVID-19, supported by the Québec Research Funds (FRQ), the In Fieri research team, led by Professor Pascale Lehoux, has produced a policy brief for public decision-makers and developers of AI and digital solutions on responsible innovation during a pandemic: Can we innovate responsibly during a pandemic? Artificial intelligence, digital solutions and SARS-CoV-2.
3

Narayanan, Mina, and Christian Schoeberl. A Matrix for Selecting Responsible AI Frameworks. Center for Security and Emerging Technology, June 2023. http://dx.doi.org/10.51593/20220029.

Annotation:
Process frameworks provide a blueprint for organizations implementing responsible artificial intelligence (AI), but the sheer number of frameworks, along with their loosely specified audiences, can make it difficult for organizations to select ones that meet their needs. This report presents a matrix that organizes approximately 40 public process frameworks according to their areas of focus and the teams that can use them. Ultimately, the matrix helps organizations select the right resources for implementing responsible AI.
4

Burstein, Jill. Duolingo English Test Responsible AI Standards. Duolingo, March 2023. http://dx.doi.org/10.46999/vcae5025.

Annotation:
Artificial intelligence (AI) is now instantiated in digital learning and assessment platforms. Many sectors, including the tech, government, legal, and military sectors, have used formalized principles to develop responsible AI standards. While there is a substantial literature around responsible AI more generally (e.g., Fjeld et al., 2020; Gianni et al., 2022; and NIST, 2023), traditional validity frameworks (such as Xi, 2010a; Chapelle et al., 2008; Kunnan, 2000; and Kane, 1992) pre-date AI advances and do not provide formal standards for the use of AI in assessment. The AERA/APA/NCME Standards (2014) pre-date modern AI advances and include limited discussion of the use of AI and technology in educational measurement. Some research discusses AI application in terms of validity (such as Huggins-Manley et al., 2022; Williamson et al., 2012; and Xi, 2010b). In earlier work, Aiken and Epstein (2000) discuss ethical considerations for AI in education. More recently, Dignum (2021) proposed a high-level vision for responsible AI for education, and Dieterle et al. (2022) and OECD (2023) discuss guidelines and issues associated with AI in testing. The Duolingo English Test (DET)’s Responsible AI Standards were informed by the ATP (2021) and ITC-ATP (2022) guidelines, which provide comprehensive and relevant guidance on AI and technology use for assessment. New guidelines for responsible AI are continually being developed (Department for Science, Technology & Innovation, 2023).
5

Faveri, Benjamin, and Graeme Auld. Informing Possible Futures for the use of Third-Party Audits in AI Regulations. Regulatory Governance Initiative, Carleton University, November 2023. http://dx.doi.org/10.22215/sppa-rgi-nov2023.

The full text of the source
Annotation:
This background paper framed discussions at a workshop on AI regulation held at Carleton University on November 9, 2023. Themes discussed at the workshop were added to this final version. Funding for this work comes from a Connection Grant from the Social Sciences and Humanities Research Council of Canada (#611-2022-0314). The authors also thank Carleton University, the Regulatory Governance Initiative, and the Responsible Artificial Intelligence Institute for their support.
6

Goode, Kayla, Heeu Millie Kim, and Melissa Deng. Examining Singapore's AI Progress. Center for Security and Emerging Technology, March 2023. http://dx.doi.org/10.51593/2021ca014.

The full text of the source
Annotation:
Despite being a small city-state, Singapore continues to rise as an artificial intelligence hub, presenting significant opportunities for international collaboration. Initiatives such as fast-tracking patent approval, incentivizing private investment, and addressing talent shortfalls are making the country a rapidly growing global AI player. Such initiatives offer potential models for those seeking to leverage the technology, as well as opportunities for collaboration in AI education and talent exchanges, research and development, and governance. The United States and Singapore share similar goals regarding the development and use of trusted and responsible AI and should continue to foster greater collaboration among public and private sector entities.
7

Tabassi, Elham. AI Risk Management Framework. Gaithersburg, MD: National Institute of Standards and Technology, 2023. http://dx.doi.org/10.6028/nist.ai.100-1.

The full text of the source
Annotation:
As directed by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), the goal of the AI RMF is to offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The Framework is intended to be voluntary, rights-preserving, non-sector specific, and use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society to implement the approaches in the Framework. The AI RMF is intended to be practical, to adapt to the AI landscape as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms.
8

Toney, Autumn, and Emelia Probasco. Who Cares About Trust? Center for Security and Emerging Technology, July 2023. http://dx.doi.org/10.51593/20230014b.

The full text of the source
Annotation:
Artificial intelligence-enabled systems are transforming society and driving an intense focus on what policy and technical communities can do to ensure that those systems are trustworthy and used responsibly. This analysis draws on prior work about the use of trustworthy AI terms to identify 18 clusters of research papers that contribute to the development of trustworthy AI. In identifying these clusters, the analysis also reveals that some concepts, like "explainability," are forming distinct research areas, whereas other concepts, like "reliability," appear to be accepted as metrics and broadly applied.
9

Gautrais, Vincent, and Nicolas Aubin. Assessment Model of Factors Relating to Data Flow: Instrument for the Protection of Privacy as well as Rights and Freedoms in the Development and Use of Artificial Intelligence. Observatoire international sur les impacts sociétaux de l'intelligence artificielle et du numérique, March 2022. http://dx.doi.org/10.61737/haoj6662.

The full text of the source
Annotation:
This document proposes a model for assessing factors relating to data flow. To increase the diligence of the players involved, systematic use is made of internal policies in which these players spell out the guarantees they offer, whether in terms of privacy, transparency, security, fundamental freedoms, and so on. Based on the Guide des bonnes pratiques en intelligence artificielle (available only in French), this model seems to us to be a way for a provider to demonstrate its diligence and the efforts it intends to make to handle data responsibly.
10

Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.

The full text of the source
Annotation:
The report provides a review of how risk is conceived of, modelled, and mapped in studies of infectious water, sanitation, and hygiene (WASH) related diseases. It focuses on the spatial epidemiology of cholera, malaria, and dengue to offer recommendations for the field of WASH-related disease risk mapping. The report notes a lack of consensus on the definition of disease risk in the literature, which limits the interpretability of the resulting analyses and could affect the quality of the design and direction of public health interventions. In addition, existing risk frameworks that consider disease incidence separately from community vulnerability have conceptual overlap in their components and conflate the probability and severity of disease risk into a single component. The report identifies four methods used to develop risk maps: (i) observational, (ii) index-based, (iii) associative modelling, and (iv) mechanistic modelling. Observational methods are limited by a lack of historical data sets and their assumption that historical outcomes are representative of current and future risks. The more general index-based methods offer a highly flexible approach based on observed and modelled risks and can be used for partially qualitative or difficult-to-measure indicators, such as socioeconomic vulnerability. For multidimensional risk measures, indices representing different dimensions can be aggregated to form a composite index or be considered jointly without aggregation. The latter approach can distinguish between different types of disease risk, such as outbreaks of high frequency/low intensity and low frequency/high intensity. Associative models, including machine learning and artificial intelligence (AI), are commonly used to measure current risk, future risk (short-term, for early warning systems), or risk in areas with low data availability, but concerns about bias, privacy, trust, and accountability in algorithms can limit their application. In addition, they typically do not account for gender and demographic variables that allow risk analyses for different vulnerable groups. As an alternative, mechanistic models can be used for similar purposes as well as to create spatial measures of disease transmission efficiency or to model risk outcomes from hypothetical scenarios. Mechanistic models, however, are limited by their inability to capture locally specific transmission dynamics. The report recommends that future WASH-related disease risk mapping research:
- Conceptualise risk as a function of the probability and severity of a disease risk event. Probability and severity can be disaggregated into sub-components. For outbreak-prone diseases, probability can be represented by a likelihood component, while severity can be disaggregated into transmission and sensitivity sub-components, where sensitivity represents factors affecting health and socioeconomic outcomes of infection.
- Employ jointly considered unaggregated indices to map multidimensional risk. Individual indices representing multiple dimensions of risk should be developed using a range of methods to take advantage of their relative strengths.
- Develop and apply collaborative approaches with public health officials, development organizations, and relevant stakeholders to identify appropriate interventions and priority levels for different types of risk, while ensuring the needs and values of users are met in an ethical and socially responsible manner.
- Enhance identification of vulnerable populations by further disaggregating risk estimates, accounting for demographic and behavioural variables, and using novel data sources such as big data and citizen science.
This review is the first to focus solely on WASH-related disease risk mapping and modelling. The recommendations can be used as a guide for developing spatial epidemiology models in tandem with public health officials and to help detect and develop tailored responses to WASH-related disease outbreaks that meet the needs of vulnerable populations. The report's main target audience is modellers, public health authorities, and partners responsible for co-designing and implementing multi-sectoral health interventions, with a particular emphasis on facilitating the integration of health and WASH services delivery, contributing to Sustainable Development Goals (SDG) 3 (good health and well-being) and 6 (clean water and sanitation).