Table of Contents
A selection of scholarly literature on the topic "Responsible Artificial Intelligence"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Responsible Artificial Intelligence".
Next to every work in the bibliography, the "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Responsible Artificial Intelligence"
Wang, Tawei. „Responsible Use of Artificial Intelligence“. International Journal of Computer Auditing 4, No. 1 (December 2022): 001–3. http://dx.doi.org/10.53106/256299802022120401001.
Teng, C. L., A. S. Bhullar, P. Jermain, D. Jordon, R. Nawfel, P. Patel, R. Sean, M. Shang and D. H. Wu. „Responsible Artificial Intelligence in Radiation Oncology“. International Journal of Radiation Oncology*Biology*Physics 120, No. 2 (October 2024): e659. http://dx.doi.org/10.1016/j.ijrobp.2024.07.1446.
Gregor, Shirley. „Responsible Artificial Intelligence and Journal Publishing“. Journal of the Association for Information Systems 25, No. 1 (2024): 48–60. http://dx.doi.org/10.17705/1jais.00863.
Haidar, Ahmad. „An Integrative Theoretical Framework for Responsible Artificial Intelligence“. International Journal of Digital Strategy, Governance, and Business Transformation 13, No. 1 (December 15, 2023): 1–23. http://dx.doi.org/10.4018/ijdsgbt.334844.
Shneiderman, Ben. „Responsible AI“. Communications of the ACM 64, No. 8 (August 2021): 32–35. http://dx.doi.org/10.1145/3445973.
Dignum, Virginia. „Responsible Artificial Intelligence --- From Principles to Practice“. ACM SIGIR Forum 56, No. 1 (June 2022): 1–6. http://dx.doi.org/10.1145/3582524.3582529.
Rodrigues, Rowena, Anais Resseguier and Nicole Santiago. „When Artificial Intelligence Fails“. Public Governance, Administration and Finances Law Review 8, No. 2 (December 14, 2023): 17–28. http://dx.doi.org/10.53116/pgaflr.7030.
Vasylkivskyi, Mikola, Ganna Vargatyuk and Olga Boldyreva. „Intelligent Radio Interface with the Support of Artificial Intelligence“. Herald of Khmelnytskyi National University. Technical sciences 217, No. 1 (February 23, 2023): 26–32. http://dx.doi.org/10.31891/2307-5732-2023-317-1-26-32.
Germanov, Nikolai S. „The concept of responsible artificial intelligence as the future of artificial intelligence in medicine“. Digital Diagnostics 4, No. 1S (June 26, 2023): 27–29. http://dx.doi.org/10.17816/dd430334.
Tyrranen, V. A. „Artificial Intelligence Crimes“. Territory Development, No. 3(17) (2019): 10–13. http://dx.doi.org/10.32324/2412-8945-2019-3-10-13.
Der volle Inhalt der QuelleDissertationen zum Thema "Responsible Artificial Intelligence"
Svedberg, Peter O. S. „Steps towards an empirically responsible AI : a methodological and theoretical framework“. Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-246.
Initially we pursue a minimal model of a cognitive system. This in turn forms the basis for the development of a methodological and theoretical framework. Two methodological requirements of the model are that explanation be from the perspective of the phenomena, and that we have structural determination. The minimal model is derived from the explanatory side of a biologically based cognitive science. Francisco Varela is our principal source for this part. The model defines the relationship between a formally defined autonomous system and an environment, in such a way as to generate the world of the system, its actual environment. The minimal model is a modular explanation in that we find it on different levels in bio-cognitive systems, from the cell to small social groups. For the latter, and for the role played by artefactual systems, we bring in Edwin Hutchins' observational study of a cognitive system in action. This necessitates the introduction of a complementary form of explanation. A key aspect of Hutchins' findings is the social domain as environment for humans. Aspects of human cognitive abilities usually attributed to the person are more properly attributed to the social system, including artefactual systems.
Developing the methodological and theoretical framework means making a transition from the bio-cognitive to the computational. The two complementary forms of explanation are important for the ability to develop a methodology that supports the construction of actual systems. This has to be able to handle the transition from external determination of a system in design to internal determination (autonomy) in operation.
Once developed, the combined framework is evaluated in an application area. This is done by comparing the standard conception of the Semantic Web with how this notion looks from the perspective of the framework. This includes the development of the methodological framework as a metalevel external knowledge representation. A key difference between the two approaches is the directness by which the semantics is approached. Our perspective puts the focus on interaction and the structural regularities it engenders in the external representation, regularities which in turn form the basis for machine processing. In this regard we see the relationship between representation and inference as analogous to the relationship between environment and system. Accordingly, we have the social domain as environment for artefactual agents. For human-level cognitive abilities the social domain as environment is important. We argue that a reasonable shortcut to systems we can relate to, about that very domain, is for artefactual agents to have an external representation of the social domain as environment.
Ounissi, Mehdi. „Decoding the Black Box : Enhancing Interpretability and Trust in Artificial Intelligence for Biomedical Imaging - a Step Toward Responsible Artificial Intelligence“. Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS237.
In an era dominated by AI, its opaque decision-making, known as the "black box" problem, poses significant challenges, especially in critical areas like biomedical imaging where accuracy and trust are crucial. Our research focuses on enhancing AI interpretability in biomedical applications. We have developed a framework for analyzing biomedical images that quantifies phagocytosis in neurodegenerative diseases using time-lapse phase-contrast video microscopy. Traditional methods often struggle with rapid cellular interactions and with distinguishing cells from backgrounds, which is critical for studying conditions like frontotemporal dementia (FTD). Our scalable, real-time framework features an explainable cell segmentation module that simplifies deep learning algorithms, enhances interpretability, and maintains high performance by incorporating visual explanations and simplifying the model. We also address issues in visual generative models, such as hallucinations in computational pathology, by using a unique encoder for Hematoxylin and Eosin staining coupled with multiple decoders. This method improves the accuracy and reliability of synthetic stain generation, employing innovative loss functions and regularization techniques that enhance performance and enable the precise synthetic stains crucial for pathological analysis. Our methodologies have been validated against several public benchmarks, showing top-tier performance. Notably, our framework distinguished between mutant and control microglial cells in FTD, providing new biological insights into this unproven phenomenon. Additionally, we introduced a cloud-based system that integrates complex models and provides real-time feedback, facilitating broader adoption and iterative improvements through pathologist insights. The release of novel datasets, including video microscopy of microglial cell phagocytosis and a virtual staining dataset related to pediatric Crohn's disease, along with all source code, underscores our commitment to transparent, open scientific collaboration and advancement. Our research highlights the importance of interpretability in AI, advocating for technology that integrates seamlessly with user needs and ethical standards in healthcare. Enhanced interpretability allows researchers to better understand data and improve tool performance.
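As a purely illustrative aside, the one-encoder/multiple-decoder design mentioned in this abstract can be sketched as follows. This is not the thesis's actual implementation: the layer sizes, module names, stain labels, and the choice of PyTorch are all assumptions made for illustration.

```python
# Minimal sketch (illustrative only): one shared encoder for H&E input tiles
# feeding several stain-specific decoders, in the spirit of the abstract above.
# Channel widths, depth, stain names, and losses are assumptions.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class StainDecoder(nn.Module):
    def __init__(self, feat=64, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

class VirtualStainer(nn.Module):
    """One shared encoder, one decoder per synthetic stain (hypothetical names)."""
    def __init__(self, stains=("stain_a", "stain_b", "stain_c")):
        super().__init__()
        self.encoder = SharedEncoder()
        self.decoders = nn.ModuleDict({s: StainDecoder() for s in stains})
    def forward(self, he_image):
        z = self.encoder(he_image)                    # shared latent features
        return {s: dec(z) for s, dec in self.decoders.items()}

model = VirtualStainer()
dummy_he = torch.rand(1, 3, 256, 256)                 # stand-in H&E tile
outputs = model(dummy_he)                             # dict: stain name -> image
```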
Haidar, Ahmad. „Responsible Artificial Intelligence : Designing Frameworks for Ethical, Sustainable, and Risk-Aware Practices“. Electronic Thesis or Diss., université Paris-Saclay, 2024. https://www.biblio.univ-evry.fr/theses/2024/interne/2024UPASI008.pdf.
Artificial Intelligence (AI) is rapidly transforming the world, redefining the relationship between technology and society. This thesis investigates the critical need for responsible and sustainable development, governance, and usage of AI and Generative AI (GAI). The study addresses the ethical risks, regulatory gaps, and challenges associated with AI systems while proposing actionable frameworks for fostering Responsible Artificial Intelligence (RAI) and Responsible Digital Innovation (RDI). The thesis begins with a comprehensive review of 27 global AI ethical declarations to identify dominant principles such as transparency, fairness, accountability, and sustainability. Despite their significance, these principles often lack the necessary tools for practical implementation. To address this gap, the second study in the research presents an integrative framework for RAI based on four dimensions: technical, AI for sustainability, legal, and responsible innovation management. The third part of the thesis focuses on RDI through a qualitative study of 18 interviews with managers from diverse sectors. Five key dimensions are identified: strategy, digital-specific challenges, organizational KPIs, end-user impact, and catalysts. These dimensions enable companies to adopt sustainable and responsible innovation practices while overcoming obstacles in implementation. The fourth study analyzes emerging risks from GAI, such as misinformation, disinformation, bias, privacy breaches, environmental concerns, and job displacement. Using a dataset of 858 incidents, this research employs binary logistic regression to examine the societal impact of these risks. The results highlight the urgent need for stronger regulatory frameworks, corporate digital responsibility, and ethical AI governance. Thus, this thesis provides critical contributions to the fields of RDI and RAI by evaluating ethical principles, proposing integrative frameworks, and identifying emerging risks. It emphasizes the importance of aligning AI governance with international standards to ensure that AI technologies serve humanity sustainably and equitably.
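A minimal sketch of the kind of binary logistic regression analysis this abstract describes is shown below. The column names, the coding of the outcome, and the use of pandas/scikit-learn are assumptions for illustration, not the study's actual pipeline or data.

```python
# Illustrative sketch only: a binary logistic regression over GAI incident
# records with a binary societal-impact outcome, as described in the abstract.
# File name, column names, and library choices are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("gai_incidents.csv")            # hypothetical 858-row incident table
X = pd.get_dummies(df[["risk_type", "sector", "region"]], drop_first=True)
y = df["high_societal_impact"]                   # 1 = high impact, 0 = otherwise

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
# Exponentiated coefficients read as odds ratios, the usual way such models
# are interpreted when asking which risk factors matter most.
print(pd.Series(np.exp(model.coef_[0]), index=X.columns).round(3))
```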
Sugianto, Nehemia. „Responsible AI for Automated Analysis of Integrated Video Surveillance in Public Spaces“. Thesis, Griffith University, 2021. http://hdl.handle.net/10072/409586.
Der volle Inhalt der QuelleThesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Dept Bus Strategy & Innovation
Griffith Business School
Full Text
Kessing, Maria. „Fairness in AI : Discussion of a Unified Approach to Ensure Responsible AI Development“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299936.
Beyond the benefits that AI technologies have brought, ethical dilemmas and problems have also emerged. Owing to this increased attention, a large number of proposed frameworks and regulations addressing responsible AI development have been published since 2016. This report analyzes a selection of these proposals in order to answer the question (1) "Which approaches can ensure responsible AI development?" To explore this question, the report examines various methods and approaches from, among others, intergovernmental and governmental regulatory bodies, research groups, and private companies. In addition, expert interviews were conducted to answer the second research question (2) "How can we reach an overarching, common solution to ensure responsible AI development?" The report finds that governmental organizations and public authorities are the main driving force for this to happen. Furthermore, a detailed plan is needed that connects research groups with the public and private sectors. Finally, the report also concludes that further education is of great importance when it comes to making AI explainable and transparent for everyone.
Umurerwa, Janviere, und Maja Lesjak. „AI IMPLEMENTATION AND USAGE : A qualitative study of managerial challenges in implementation and use of AI solutions from the researchers’ perspective“. Thesis, Umeå universitet, Institutionen för informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-187810.
Der volle Inhalt der Quelle„Responsible Governance of Artificial Intelligence: An Assessment, Theoretical Framework, and Exploration“. Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.55667.
Doctoral dissertation, Human and Social Dimensions of Science and Technology, 2019.
Arienti, João Henrique Leal. „Time series forecasting applied to an energy management system ‐ A comparison between Deep Learning Models and other Machine Learning Models“. Master's thesis, 2020. http://hdl.handle.net/10362/108172.
A large amount of the energy used worldwide goes into buildings' energy consumption, and HVAC (Heating, Ventilation, and Air Conditioning) systems are the biggest offenders within that share. It is important to provide environmental comfort in buildings, but indoor wellbeing is directly related to an increase in energy consumption. This dilemma creates a huge opportunity for a solution that balances occupant comfort and energy consumption. Within this context, the Ambiosensing project was launched to develop a complete energy management system that differentiates itself from other existing commercial solutions by being an inexpensive and intelligent system. The Ambiosensing project focused on the topic of Time Series Forecasting to achieve the goal of creating predictive models that help the energy management system anticipate indoor environmental scenarios. A good approach for Time Series Forecasting problems is to apply Machine Learning, more specifically Deep Learning. This work intends to investigate and develop Deep Learning and other Machine Learning models that can deal with multivariate Time Series Forecasting, to assess how well a Deep Learning approach, especially LSTM (Long Short-Term Memory) Recurrent Neural Networks (RNNs), can perform on a Time Series Forecasting problem, and to establish a comparison between Deep Learning and other Machine Learning models such as Linear Regression, Decision Trees, Random Forest, Gradient Boosting Machines, and others within this context.
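A minimal sketch of the kind of comparison the abstract describes, an LSTM forecaster versus a simpler Machine Learning baseline on multivariate windows, is given below. The window length, layer sizes, random data stand-in, and the choice of Keras and scikit-learn are assumptions, not the thesis's actual setup.

```python
# Illustrative sketch only: LSTM vs. Random Forest on multivariate time-series
# windows, in the spirit of the thesis described above. All sizes and data
# are placeholders; real sensor data and tuning would replace them.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def make_windows(series, lookback=24):
    """Turn a (time, features) array into (samples, lookback, features) windows
    that predict the first feature one step ahead."""
    X, y = [], []
    for t in range(len(series) - lookback):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback, 0])
    return np.array(X), np.array(y)

data = np.random.rand(1000, 4)           # stand-in for indoor sensor readings
X, y = make_windows(data)
split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

lstm = Sequential([LSTM(32, input_shape=X.shape[1:]), Dense(1)])
lstm.compile(optimizer="adam", loss="mae")
lstm.fit(X_tr, y_tr, epochs=5, batch_size=32, verbose=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_tr.reshape(len(X_tr), -1), y_tr)    # flatten windows for the baseline

print("LSTM MAE:", mean_absolute_error(y_te, lstm.predict(X_te, verbose=0).ravel()))
print("RF   MAE:", mean_absolute_error(y_te, rf.predict(X_te.reshape(len(X_te), -1))))
```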
Voarino, Nathalie. „Systèmes d’intelligence artificielle et santé : les enjeux d’une innovation responsable“. Thesis, 2019. http://hdl.handle.net/1866/23526.
The use of artificial intelligence (AI) systems in health is part of the advent of a new "high definition" medicine that is predictive, preventive, and personalized, benefiting from the unprecedented amount of data that is available today. At the heart of digital health innovation, the development of AI systems promises to lead to an interconnected and self-learning healthcare system. AI systems could thus help to redefine the classification of diseases, generate new medical knowledge, or predict the health trajectories of individuals for prevention purposes. Today, various applications in healthcare are being considered, ranging from assistance to medical decision-making through expert systems to precision medicine (e.g. pharmacological targeting), as well as individualized prevention through health trajectories developed on the basis of biological markers. However, urgent ethical concerns emerge with the increasing use of algorithms to analyze a growing amount of data related to health (often personal and sensitive) as well as the reduction of human intervention in many automated processes. From the limitations of big data analysis, the need for data sharing, and the 'opacity' of algorithmic decisions stem various ethical concerns relating to the protection of privacy and intimacy, free and informed consent, social justice, dehumanization of care and patients, and/or security. To address these challenges, many initiatives have focused on defining and applying principles for an ethical governance of AI. However, the operationalization of these principles faces various difficulties inherent to applied ethics, which originate either from the scope (universal or plural) of these principles or from the way these principles are put into practice (inductive or deductive methods). These issues can be addressed with context-specific or bottom-up approaches to applied ethics. However, people who embrace these approaches still face several challenges. From an analysis of citizens' fears and expectations emerging from the discussions that took place during the co-construction of the Montreal Declaration for a Responsible Development of AI, it is possible to get a sense of what these difficulties look like. From this analysis, three main challenges emerge: the incapacitation of health professionals and patients, the many hands problem, and artificial agency. These challenges call for AI systems that empower people and that allow human agency to be maintained, in order to foster the development of (pragmatic) shared responsibility among the various stakeholders involved in the development of healthcare AI systems. Meeting these challenges is essential in order to adapt existing governance mechanisms and enable the development of responsible digital innovation in healthcare and research that allows human beings to remain at the center of its development.
Books on the topic "Responsible Artificial Intelligence"
Dignum, Virginia. Responsible Artificial Intelligence. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6.
Schmidpeter, René, and Reinhard Altenburger, eds. Responsible Artificial Intelligence. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09245-9.
Khoshnevisan, Mohammad. Artificial intelligence and responsive optimization. 2nd ed. Phoenix: Xiquan, 2003.
Khoshnevisan, Mohammad. Artificial intelligence and responsive optimization. Phoenix: Xiquan, 2003.
Khamparia, Aditya, Deepak Gupta, Ashish Khanna and Valentina E. Balas, eds. Biomedical Data Analysis and Processing Using Explainable (XAI) and Responsive Artificial Intelligence (RAI). Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1476-8.
Responsible Artificial Intelligence. Springer, 2020.
Altenburger, Reinhard, and René Schmidpeter. Responsible Artificial Intelligence: Challenges for Sustainable Management. Springer International Publishing AG, 2022.
Responsible Artificial Intelligence: Challenges for Sustainable Management. Springer International Publishing AG, 2024.
Kaplan, Jerry. Artificial Intelligence. Oxford University Press, 2016. http://dx.doi.org/10.1093/wentk/9780190602383.001.0001.
Knowings, L. D. Ethical AI: Navigating the Future With Responsible Artificial Intelligence. Sandiver Publishing, 2024.
Book chapters on the topic "Responsible Artificial Intelligence"
Dignum, Virginia. „What Is Artificial Intelligence?“ In Responsible Artificial Intelligence, 9–34. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_2.
Leopold, Helmut. „Mastering Trustful Artificial Intelligence“. In Responsible Artificial Intelligence, 133–58. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09245-9_6.
Altenburger, Reinhard. „Artificial Intelligence: Management Challenges and Responsibility“. In Responsible Artificial Intelligence, 1–8. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09245-9_1.
Dignum, Virginia. „Introduction“. In Responsible Artificial Intelligence, 1–7. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_1.
Dignum, Virginia. „Ethical Decision-Making“. In Responsible Artificial Intelligence, 35–46. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_3.
Dignum, Virginia. „Taking Responsibility“. In Responsible Artificial Intelligence, 47–69. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_4.
Dignum, Virginia. „Can AI Systems Be Ethical?“ In Responsible Artificial Intelligence, 71–92. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_5.
Dignum, Virginia. „Ensuring Responsible AI in Practice“. In Responsible Artificial Intelligence, 93–105. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_6.
Dignum, Virginia. „Looking Further“. In Responsible Artificial Intelligence, 107–20. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30371-6_7.
Schindler, Matthias, and Frederik Schmihing. „Technology Serves People: Democratising Analytics and AI in the BMW Production System“. In Responsible Artificial Intelligence, 159–82. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09245-9_7.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Responsible Artificial Intelligence"
Herrera, Francisco. „Responsible Artificial Intelligence Systems: From Trustworthiness to Governance“. In 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1–2. IEEE, 2024. http://dx.doi.org/10.23919/date58400.2024.10546553.
R, Sukumar, Vinima Gambhir and Jyoti Seth. „Investigating the Ethical Implications of Artificial Intelligence and Establishing Guidelines for Responsible AI Development“. In 2024 International Conference on Advances in Computing Research on Science Engineering and Technology (ACROSET), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/acroset62108.2024.10743915.
Yeasin, Mohammed. „Keynote Speaker ICOM'24: Perspective on Convergence of Mechatronics and Artificial Intelligence in Responsible Innovation“. In 2024 9th International Conference on Mechatronics Engineering (ICOM), XIV. IEEE, 2024. http://dx.doi.org/10.1109/icom61675.2024.10652385.
Dignum, Virginia. „Responsible Autonomy“. In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/655.
Wang, Yichuan, Mengran Xiong and Hossein Olya. „Toward an Understanding of Responsible Artificial Intelligence Practices“. In Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences, 2020. http://dx.doi.org/10.24251/hicss.2020.610.
Wang, Shoujin, Ninghao Liu, Xiuzhen Zhang, Yan Wang, Francesco Ricci and Bamshad Mobasher. „Data Science and Artificial Intelligence for Responsible Recommendations“. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3534678.3542916.
Calvo, Albert, Nil Ortiz, Alejandro Espinosa, Aleksandar Dimitrievikj, Ignasi Oliva, Jordi Guijarro and Shuaib Sidiqqi. „Safe AI: Ensuring Safe and Responsible Artificial Intelligence“. In 2023 JNIC Cybersecurity Conference (JNIC). IEEE, 2023. http://dx.doi.org/10.23919/jnic58574.2023.10205749.
Dong, Tian, Shaofeng Li, Guoxing Chen, Minhui Xue, Haojin Zhu and Zhen Liu. „RAI2: Responsible Identity Audit Governing the Artificial Intelligence“. In Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2023. http://dx.doi.org/10.14722/ndss.2023.241012.
Tahaei, Mohammad, Marios Constantinides, Daniele Quercia, Sean Kennedy, Michael Muller, Simone Stumpf, Q. Vera Liao et al. „Human-Centered Responsible Artificial Intelligence: Current & Future Trends“. In CHI '23: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3544549.3583178.
Iliadis, Eduard. „AI-GFA: Applied Framework for Producing Responsible Artificial Intelligence“. In GoodIT '24: International Conference on Information Technology for Social Good, 93–99. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3677525.3678646.
Organization reports on the topic "Responsible Artificial Intelligence"
Stanley-Lockman, Zoe. Responsible and Ethical Military AI. Center for Security and Emerging Technology, August 2021. http://dx.doi.org/10.51593/20200091.
Lehoux, Pascale, Hassane Alami, Carl Mörch, Lysanne Rivard, Robson Rocha and Hudson Silva. Can we innovate responsibly during a pandemic? Artificial intelligence, digital solutions and SARS-CoV-2. Observatoire international sur les impacts sociétaux de l’intelligence artificielle et du numérique, June 2020. http://dx.doi.org/10.61737/ueti5496.
Narayanan, Mina, and Christian Schoeberl. A Matrix for Selecting Responsible AI Frameworks. Center for Security and Emerging Technology, June 2023. http://dx.doi.org/10.51593/20220029.
Burstein, Jill. Duolingo English Test Responsible AI Standards. Duolingo, March 2023. http://dx.doi.org/10.46999/vcae5025.
Faveri, Benjamin, and Graeme Auld. Informing Possible Futures for the Use of Third-Party Audits in AI Regulations. Regulatory Governance Initiative, Carleton University, November 2023. http://dx.doi.org/10.22215/sppa-rgi-nov2023.
Goode, Kayla, Heeu Millie Kim and Melissa Deng. Examining Singapore’s AI Progress. Center for Security and Emerging Technology, March 2023. http://dx.doi.org/10.51593/2021ca014.
Tabassi, Elham. AI Risk Management Framework. Gaithersburg, MD: National Institute of Standards and Technology, 2023. http://dx.doi.org/10.6028/nist.ai.100-1.
Toney, Autumn, and Emelia Probasco. Who Cares About Trust? Center for Security and Emerging Technology, July 2023. http://dx.doi.org/10.51593/20230014b.
Gautrais, Vincent, and Nicolas Aubin. Assessment Model of Factors Relating to Data Flow: Instrument for the Protection of Privacy as well as Rights and Freedoms in the Development and Use of Artificial Intelligence. Observatoire international sur les impacts sociétaux de l'intelligence artificielle et du numérique, March 2022. http://dx.doi.org/10.61737/haoj6662.
Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.