Academic literature on the topic 'Representation learning (artificial intelligence)'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Representation learning (artificial intelligence).'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Representation learning (artificial intelligence)":
Hamilton, William L. "Graph Representation Learning." Synthesis Lectures on Artificial Intelligence and Machine Learning 14, no. 3 (September 15, 2020): 1–159. http://dx.doi.org/10.2200/s01045ed1v01y202009aim046.
Konidaris, George, Leslie Pack Kaelbling, and Tomas Lozano-Perez. "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning." Journal of Artificial Intelligence Research 61 (January 31, 2018): 215–89. http://dx.doi.org/10.1613/jair.5575.
Rezayi, Saed. "Learning Better Representations Using Auxiliary Knowledge." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16133–34. http://dx.doi.org/10.1609/aaai.v37i13.26927.
Frommberger, Lutz. "Learning to Behave in Space: A Qualitative Spatial Representation for Robot Navigation with Reinforcement Learning." International Journal on Artificial Intelligence Tools 17, no. 03 (June 2008): 465–82. http://dx.doi.org/10.1142/s021821300800400x.
Haghir Chehreghani, Morteza, and Mostafa Haghir Chehreghani. "Learning representations from dendrograms." Machine Learning 109, no. 9-10 (August 16, 2020): 1779–802. http://dx.doi.org/10.1007/s10994-020-05895-3.
Saitta, Lorenza. "Representation change in machine learning." AI Communications 9, no. 1 (1996): 14–20. http://dx.doi.org/10.3233/aic-1996-9102.
Rives, Alexander, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, et al. "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences." Proceedings of the National Academy of Sciences 118, no. 15 (April 5, 2021): e2016239118. http://dx.doi.org/10.1073/pnas.2016239118.
Kang, Zhao, Xiao Lu, Jian Liang, Kun Bai, and Zenglin Xu. "Relation-Guided Representation Learning." Neural Networks 131 (November 2020): 93–102. http://dx.doi.org/10.1016/j.neunet.2020.07.014.
Prorok, Máté. "Applications of artificial intelligence systems." Deliberationes 15, Különszám (2022): 76–88. http://dx.doi.org/10.54230/delib.2022.k.sz.76.
Mazoure, Bogdan, Thang Doan, Tianyu Li, Vladimir Makarenkov, Joelle Pineau, Doina Precup, and Guillaume Rabusseau. "Low-Rank Representation of Reinforcement Learning Policies." Journal of Artificial Intelligence Research 75 (October 27, 2022): 597–636. http://dx.doi.org/10.1613/jair.1.13854.
Dissertations / Theses on the topic "Representation learning (artificial intelligence)":
Li, Hao. "Towards Fast and Efficient Representation Learning." Thesis, University of Maryland, College Park, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10845690.
The success of deep learning and convolutional neural networks in many fields is accompanied by a significant increase in computation cost. With increasing model complexity and the pervasive use of deep neural networks, there is a surge of interest in fast and efficient model training and inference on both cloud and embedded devices. Meanwhile, understanding the reasons behind trainability and generalization is fundamental to the field's further development. This dissertation explores approaches for fast and efficient representation learning, together with a better understanding of trainability and generalization. In particular, we ask the following questions and provide our solutions: 1) How can the computation cost be reduced for fast inference? 2) How can low-precision models be trained on resource-constrained devices? 3) What does the loss surface look like for neural nets, and how does it affect generalization?
To reduce the computation cost for fast inference, we propose to prune filters from CNNs that are identified as having a small effect on prediction accuracy. By removing filters with small norms together with their connected feature maps, the computation cost can be reduced accordingly without special software or hardware. We show that this simple filter-pruning approach can reduce inference cost while, after retraining, recovering accuracy close to the original.
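The norm-based pruning criterion described in this abstract can be sketched in a few lines of NumPy; the function name, tensor shapes, and keep ratio below are illustrative assumptions, not the dissertation's actual code:

```python
import numpy as np

def prune_filters_by_norm(weights, keep_ratio=0.5):
    """Rank convolution filters by L1 norm and keep only the strongest.

    `weights` is assumed to be shaped (num_filters, in_channels, k, k),
    like a standard conv-layer weight tensor.
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    num_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    kept = np.sort(np.argsort(norms)[::-1][:num_keep])  # indices of surviving filters
    return weights[kept], kept

# Toy layer: 4 filters over 3 input channels with 3x3 kernels.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))
pruned, kept = prune_filters_by_norm(w, keep_ratio=0.5)
```

In a real CNN, the corresponding channels of the next layer's weights would be removed as well, which is what makes the saving "free" at inference time.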
To further reduce inference cost, quantizing model parameters into low-precision representations has shown significant speedups, especially on edge devices with limited computing resources, memory capacity, and power budgets. To enable on-device learning on low-power systems, the key challenge is removing the dependency on a full-precision model during training. We study various quantized training methods with the goal of understanding their differences in behavior and the reasons for their success or failure. We address the question of why algorithms that maintain floating-point representations work so well, while fully quantized training methods stall before training is complete. We show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training with low-precision arithmetic.
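The stalling phenomenon the abstract refers to can be illustrated with a toy fixed-grid quantizer (a deliberate simplification, not the dissertation's method): small gradient steps are rounded away when weights are quantized after every update, but survive when accumulated in a full-precision "shadow" copy.

```python
import numpy as np

def quantize(w, scale=0.125):
    """Uniform quantizer with a fixed grid spacing `scale`."""
    return np.round(np.asarray(w) / scale) * scale

# Fully quantized training: quantizing after every update rounds each
# tiny step back to the same grid point, so the weight never moves.
wq = np.array([0.5])
for _ in range(100):
    wq = quantize(wq - 0.001)

# High-precision shadow weights: the same 100 steps accumulate in float
# and survive a single final quantization.
ws = quantize(np.array([0.5]) - 100 * 0.001)
```

Here `wq` stays pinned at 0.5 while `ws` reaches a different grid point, mirroring why maintaining floating-point representations during training matters.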
Finally, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. We introduce a simple filter normalization method that helps us visualize loss function curvature, and make meaningful side-by-side comparisons between loss functions. The sharpness of minimizers correlates well with generalization error when this visualization is used. Then, using a variety of visualizations, we explore how training hyper-parameters affect the shape of minimizers, and how network architecture affects the loss landscape.
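The filter normalization idea described above — rescaling each filter of a random direction to match the norm of the corresponding weight filter before plotting a slice of the loss — can be sketched as follows; the quadratic stand-in loss and shapes are illustrative assumptions:

```python
import numpy as np

def filter_normalized_direction(weights, rng):
    """Random direction whose i-th filter is rescaled to match ||weights[i]||."""
    d = rng.normal(size=weights.shape)
    for i in range(weights.shape[0]):
        d[i] *= np.linalg.norm(weights[i]) / (np.linalg.norm(d[i]) + 1e-12)
    return d

def loss(w):
    return float(np.sum(w ** 2))  # quadratic stand-in for a network's loss

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 27))        # 4 "filters", each flattened to 27 weights
d = filter_normalized_direction(w, rng)
alphas = np.linspace(-1.0, 1.0, 5)
curve = [loss(w + a * d) for a in alphas]  # 1-D filter-normalized slice
```

Plotting `curve` against `alphas` (or a 2-D grid over two such directions) gives the kind of scale-invariant landscape visualization the dissertation uses for side-by-side comparisons.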
Denize, Julien. "Self-supervised representation learning and applications to image and video analysis." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMIR37.
In this thesis, we develop approaches for self-supervised learning for image and video analysis. Self-supervised representation learning pretrains neural networks to learn general concepts without labels, so that they can then specialize in downstream tasks faster and with fewer annotations. We present three contributions to self-supervised image and video representation learning. First, we introduce the theoretical paradigm of soft contrastive learning and its practical implementation, Similarity Contrastive Estimation (SCE), connecting contrastive and relational learning for image representation. Second, SCE is extended to global temporal video representation learning. Lastly, we propose COMEDIAN, a pipeline for local-temporal video representation learning with transformers. These contributions achieved state-of-the-art results on multiple benchmarks and led to several published academic and technical contributions.
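As background for the contrastive framing in this abstract: a standard InfoNCE-style contrastive loss, which SCE generalizes by softening the hard one-hot targets, looks roughly like the sketch below. This is not the thesis's SCE implementation, just the baseline it builds on.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss: z1[i] and z2[i] are two views of the same sample;
    every other pairing in the batch serves as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                        # cosine similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))       # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce(z, z)                        # identical views: low loss
loss_random = info_nce(z, rng.normal(size=(8, 16)))  # unrelated views: higher loss
```

Soft contrastive approaches such as SCE replace the implicit one-hot target on the diagonal with a distribution over batch similarities, blending contrastive and relational learning.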
Aboul-Enien, Hisham Abdel-Ghaffer. "Neural network learning and knowledge representation in a multi-agent system." Thesis, Imperial College London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252040.
Carvalho, Micael. "Deep representation spaces." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS292.
In recent years, Deep Learning techniques have swept the state of the art in many applications of Machine Learning, becoming the new standard approach. The architectures issuing from these techniques have been used for transfer learning, which extends the power of deep models to tasks that lack enough data to be fully trained from scratch. This thesis studies the representation spaces created by deep architectures. First, we study properties inherent to them, with particular interest in the dimensionality, redundancy, and precision of their features. Our findings reveal a strong degree of robustness, pointing the way to simple and powerful compression schemes. Then, we focus on refining these representations. We adopt a cross-modal multi-task problem and design a loss function capable of taking advantage of data coming from multiple modalities, while also taking into account different tasks associated with the same dataset. To correctly balance these losses, we also develop a new sampling scheme that only takes into account examples contributing to the learning phase, i.e. those having a positive loss. Finally, we test our approach on a large-scale dataset of cooking recipes and associated pictures. Our method achieves a 5-fold improvement over the state of the art, and we show that the multi-task aspect of our approach promotes a semantically meaningful organization of the representation space, allowing it to perform subtasks never seen during training, such as ingredient exclusion and selection. The results presented in this thesis open many possibilities, including feature compression for remote applications, robust multi-modal and multi-task learning, and feature-space refinement. For the cooking application in particular, many of our findings are directly applicable in a real-world context, especially for detecting allergens, finding alternative recipes under dietary restrictions, and menu planning.
Newman-Griffis, Denis R. "Capturing Domain Semantics with Representation Learning: Applications to Health and Function." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587658607378958.
Cao, Xi Hang. "On Leveraging Representation Learning Techniques for Data Analytics in Biomedical Informatics." Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/586006.
Representation learning is ubiquitous in state-of-the-art machine learning workflows, including data exploration/visualization, data preprocessing, model learning, and model interpretation. However, the majority of newly proposed representation learning methods are best suited to problems with a large amount of data; applying them to problems with limited data may lead to unsatisfactory performance. There is therefore a need for representation learning methods tailored to problems with "small data", such as clinical and biomedical data analytics. In this dissertation, we describe our studies tackling challenging clinical and biomedical data analytics problems from four perspectives: data preprocessing, temporal data representation learning, output representation learning, and joint input-output representation learning. Data scaling is an important component of data preprocessing. Its objective is to transform raw features into reasonable ranges so that each feature of an instance is equally exploited by the machine learning model. For example, in a credit fraud detection task, a model may use a person's credit score and annual income as features, but because the ranges of these two features differ, the model may weight one more heavily than the other. In this dissertation, I introduce the data scaling problem in detail and describe an approach that intrinsically handles outliers and leads to better model prediction performance. Learning new representations for data in unstandardized form is a common task in data analytics and data science applications. Usually, data come in tabular form: the data are represented by a table in which each row is the feature vector of an instance.
However, it is also common for data not to be in this form; for example, texts, images, and video/audio records. In this dissertation, I describe the challenge of analyzing imperfect multivariate time-series data in healthcare and biomedical research, and show that the proposed method can learn a powerful representation that handles various imperfections and improves prediction performance. Learning output representations is a newer aspect of representation learning, and its applications have shown promising results in complex tasks, including computer vision and recommendation systems. The main objective of an output representation algorithm is to explore the relationships among the target variables so that a prediction model can efficiently exploit their similarities and potentially improve prediction performance. In this dissertation, I describe a learning framework that incorporates output representation learning into time-to-event estimation. In particular, the approach learns the model parameters and time vectors simultaneously. Experimental results show not only the effectiveness of this approach but also its interpretability, via visualizations of the time vectors in 2-D space. Learning the input (feature) representation, the output representation, and the predictive model are closely related, so it is a natural extension of the state of the art to consider them together in a joint framework. In this dissertation, I describe a large-margin, ranking-based learning framework for time-to-event estimation with joint input embedding learning, output embedding learning, and model parameter learning. In this framework, I cast the functional learning problem as a kernel learning problem and, adopting theory from Multiple Kernel Learning, propose an efficient optimization algorithm. Empirical results also show its effectiveness on several benchmark datasets.
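The abstract does not spell out the dissertation's specific scaling method; a common outlier-robust alternative to min-max scaling, centering on the median and dividing by the interquartile range, illustrates the general idea (the data values below are made up for the credit example):

```python
import numpy as np

def robust_scale(X):
    """Center each feature at its median and scale by its interquartile
    range, so a single extreme value shifts the statistics far less than
    it would shift a mean, standard deviation, or min-max range."""
    med = np.median(X, axis=0)
    q75, q25 = np.percentile(X, [75, 25], axis=0)
    iqr = np.where(q75 - q25 == 0.0, 1.0, q75 - q25)  # guard constant features
    return (X - med) / iqr

# Columns: annual income and credit score, with one extreme income.
X = np.array([[30_000.0, 600.0],
              [45_000.0, 650.0],
              [50_000.0, 700.0],
              [950_000.0, 720.0]])
X_scaled = robust_scale(X)
```

After scaling, both features are centered and comparable in magnitude, so neither dominates a distance- or gradient-based model by virtue of its raw units.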
Panesar, Kulvinder. "Conversational artificial intelligence - demystifying statistical vs linguistic NLP solutions." Universitat Politècnica de València, 2020. http://hdl.handle.net/10454/18121.
This paper aims to demystify the hype and attention around chatbots and their association with conversational artificial intelligence. Both are slowly emerging as a real presence in our lives, driven by impressive technological developments in machine learning, deep learning, and natural language understanding. However, our question is what lies under the hood, and how far and to what extent chatbot/conversational-AI solutions can work. Natural language is the knowledge representation most easily understood by people, but certainly not the best for computers, because of its inherently ambiguous, complex, and dynamic nature. We critique the knowledge representation of heavily statistical chatbot solutions against linguistic alternatives. In order to react intelligently to the user, natural language solutions must critically consider other factors such as context, memory, intelligent understanding, previous experience, and personalized knowledge of the user. We delve into the spectrum of conversational interfaces and focus on a strong-AI concept, explored via a text-based conversational software agent with a deep strategic role: holding a conversation, enabling the mechanisms needed to plan and decide what to do next, and managing the dialogue to achieve a goal. To demonstrate this, a deep linguistically aware and knowledge-aware text-based conversational agent (LING-CSA) presents a proof of concept for a non-statistical conversational AI solution.
Tamaazousti, Youssef. "Vers l’universalité des représentations visuelle et multimodales." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC038/document.
Because of its key societal, economic, and cultural stakes, Artificial Intelligence (AI) is a hot topic. One of its main goals is to develop systems that facilitate the daily life of humans, with applications such as household robots, industrial robots, autonomous vehicles, and much more. The rise of AI is largely due to the emergence of tools based on deep neural networks, which make it possible to simultaneously learn the representation of the data (traditionally hand-crafted) and the task to solve (traditionally learned with statistical models). This resulted from the conjunction of theoretical advances, growing computational capacity, and the availability of abundant annotated data. A long-standing goal of AI is to design machines, inspired by humans, capable of perceiving the world and interacting with humans in an evolving way. In this thesis, we categorize work on AI into the two following learning approaches: (i) specialization: learn representations from a few specific tasks, with the goal of carrying out very specific tasks (specialized in a certain field) at a very good level of performance; (ii) universality: learn representations from several general tasks, with the goal of performing as many tasks as possible in different contexts. While specialization has been extensively explored by the deep learning community, only a few implicit attempts have been made towards universality. The goal of this thesis is thus to explicitly address the problem of improving universality with deep learning methods for image and text data. We address universality in two forms: through the implementation of methods to improve it ("universalizing methods"), and through the establishment of a protocol to quantify it.
Concerning universalizing methods, we propose three technical contributions: (i) in the context of large semantic representations, a method to reduce redundancy between detectors through adaptive thresholding and the relations between concepts; (ii) in the context of neural network representations, an approach that increases the number of detectors without increasing the amount of annotated data; (iii) in the context of multimodal representations, a method to preserve the semantics of unimodal representations within multimodal ones. Regarding the quantification of universality, we propose to evaluate universalizing methods in a transfer-learning scheme, which is well suited to assessing the universal ability of representations. This also leads us to propose a new framework as well as new quantitative evaluation criteria for universalizing methods.
Liu, Xudong. "Modeling, Learning and Reasoning about Preference Trees over Combinatorial Domains." UKnowledge, 2016. http://uknowledge.uky.edu/cs_etds/43.
Cleland, Benjamin George. "Reinforcement Learning for Racecar Control." The University of Waikato, 2006. http://hdl.handle.net/10289/2507.
Books on the topic "Representation learning (artificial intelligence)":
Pacific Rim International Conference on Artificial Intelligence (4th, 1996, Cairns, Qld.). PRICAI '96: Topics in artificial intelligence: 4th Pacific Rim International Conference on Artificial Intelligence, Cairns, Australia, August 26-30, 1996: proceedings. Berlin: Springer, 1996.
International Conference on Artificial Intelligence in Education (14th, 2009, Brighton, England). Artificial intelligence in education: Building learning systems that care: from knowledge representation to affective modelling. Amsterdam: IOS Press, 2009.
Benjamin, D. Paul, ed. Change of representation and inductive bias. Boston: Kluwer Academic, 1990.
Tiberghien, Andrée, Heinz Mandl, and NATO Advanced Research Workshop on Knowledge Acquisition in the Domain of Physics and Intelligent Learning Environments (1990: Lyon, France), eds. Intelligent learning environments and knowledge acquisition in physics. Berlin: Springer-Verlag, 1992.
Tiberghien, Andrée. Intelligent Learning Environments and Knowledge Acquisition in Physics. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992.
Workshop on Reasoning with Incomplete and Changing Information (1996, Cairns, Qld.). Learning and reasoning with complex representations: PRICAI '96 Workshops on Reasoning with Incomplete and Changing Information and on Inducing Complex Representations, Cairns, Australia, August 26-30, 1996: selected papers. Berlin: Springer, 1998.
Fisseler, Jens. Learning and modeling with probabilistic conditional logic. Heidelberg: IOS Press, 2010.
KR4HC 2009 (2009, Verona, Italy). Knowledge representation for health-care: Data, processes and guidelines: AIME 2009 workshop KR4HC 2009, Verona, Italy, July 19, 2009: revised selected papers. Berlin: Springer, 2010.
Riaño, David. Knowledge Representation for Health-Care: ECAI 2010 Workshop KR4HC 2010, Lisbon, Portugal, August 17, 2010, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.
United States National Aeronautics and Space Administration, ed. Instructable autonomous agents: CSE-TR-193-94. Washington, DC: National Aeronautics and Space Administration, 1994.
Book chapters on the topic "Representation learning (artificial intelligence)":
Li, Yifeng. "Sparse Representation for Machine Learning." In Advances in Artificial Intelligence, 352–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38457-8_38.
Bao, Feng. "Disentangled Variational Information Bottleneck for Multiview Representation Learning." In Artificial Intelligence, 91–102. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93049-3_8.
Mai, Gengchen, Ziyuan Li, and Ni Lao. "Spatial Representation Learning in GeoAI." In Handbook of Geospatial Artificial Intelligence, 99–120. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003308423-6.
Sharifirad, Sima, and Stan Matwin. "Deep Multi-cultural Graph Representation Learning." In Advances in Artificial Intelligence, 407–10. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57351-9_46.
Reynolds, Stuart I. "Adaptive Representation Methods for Reinforcement Learning." In Advances in Artificial Intelligence, 345–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45153-6_34.
Joshi, Ameet V. "Data Understanding, Representation, and Visualization." In Machine Learning and Artificial Intelligence, 21–29. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26622-6_3.
Joshi, Ameet V. "Data Understanding, Representation, and Visualization." In Machine Learning and Artificial Intelligence, 21–29. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12282-8_3.
He, Yiming, and Wei Hu. "3D Hand Pose Estimation via Regularized Graph Representation Learning." In Artificial Intelligence, 540–52. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93046-2_46.
Xiao, Chaojun, Zhiyuan Liu, Yankai Lin, and Maosong Sun. "Legal Knowledge Representation Learning." In Representation Learning for Natural Language Processing, 401–32. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_11.
Belle, Vaishak. "Representation Matters." In Synthesis Lectures on Artificial Intelligence and Machine Learning, 15–26. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-21003-7_2.
Conference papers on the topic "Representation learning (artificial intelligence)":
Xie, Ruobing, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. "Image-embodied Knowledge Representation Learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/438.
Qian, Sheng, Guanyue Li, Wen-Ming Cao, Cheng Liu, Si Wu, and Hau San Wong. "Improving representation learning in autoencoders via multidimensional interpolation and dual regularizations." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/453.
Li, Sheng, and Handong Zhao. "A Survey on Representation Learning for User Modeling." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/695.
Nozawa, Kento, and Issei Sato. "Evaluation Methods for Representation Learning: A Survey." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/776.
Bai, Yang, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan, Liqiang Nie, and Min Zhang. "RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/62.
Gao, Li, Hong Yang, Chuan Zhou, Jia Wu, Shirui Pan, and Yue Hu. "Active Discriminative Network Representation Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/296.
Dumancic, Sebastijan, and Hendrik Blockeel. "Clustering-Based Relational Unsupervised Representation Learning with an Explicit Distributed Representation." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/226.
Le, Lei, Raksha Kumaraswamy, and Martha White. "Learning Sparse Representations in Reinforcement Learning with Sparse Coding." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/287.
Wang, Pengyang, Yanjie Fu, Yuanchun Zhou, Kunpeng Liu, Xiaolin Li, and Kien Hua. "Exploiting Mutual Information for Substructure-aware Graph Representation Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/472.
Chu, Guanyi, Xiao Wang, Chuan Shi, and Xunqiang Jiang. "CuCo: Graph Representation with Curriculum Contrastive Learning." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/317.
Reports on the topic "Representation learning (artificial intelligence)":
Goodwin, Sarah, Yigal Attali, Geoffrey LaFlair, Yena Park, Andrew Runge, Alina von Davier, and Kevin Yancey. Duolingo English Test - Writing Construct. Duolingo, March 2023. http://dx.doi.org/10.46999/arxn5612.
Nguyen, Kim, and Jonathan Hambur. Adoption of Emerging Digital General-purpose Technologies: Determinants and Effects. Reserve Bank of Australia, December 2023. http://dx.doi.org/10.47688/rdp2023-10.
Varastehpour, Soheil, Hamid Sharifzadeh, and Iman Ardekani. A Comprehensive Review of Deep Learning Algorithms. Unitec ePress, 2021. http://dx.doi.org/10.34074/ocds.092.
Shukla, Indu, Rajeev Agrawal, Kelly Ervin, and Jonathan Boone. AI on digital twin of facility captured by reality scans. Engineer Research and Development Center (U.S.), November 2023. http://dx.doi.org/10.21079/11681/47850.
Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.