Academic literature on the topic 'Out-of-distribution generalization'


Journal articles on the topic "Out-of-distribution generalization"

1

Ye, Nanyang, Lin Zhu, Jia Wang, et al. "Certifiable Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10927–35. http://dx.doi.org/10.1609/aaai.v37i9.26295.

Abstract:
Machine learning methods suffer from test-time performance degeneration when faced with out-of-distribution (OoD) data whose distribution is not necessarily the same as training data distribution. Although a plethora of algorithms have been proposed to mitigate this issue, it has been demonstrated that achieving better performance than ERM simultaneously on different types of distributional shift datasets is challenging for existing approaches. Besides, it is unknown how and to what extent these methods work on any OoD datum without theoretical guarantees. In this paper, we propose a certifiable out-of-distribution generalization method that provides provable OoD generalization performance guarantees via a functional optimization framework leveraging random distributions and max-margin learning for each input datum. With this approach, the proposed algorithmic scheme can provide certified accuracy for each input datum's prediction on the semantic space and achieves better performance simultaneously on OoD datasets dominated by correlation shifts or diversity shifts. Our code is available at https://github.com/ZlatanWilliams/StochasticDisturbanceLearning.
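The per-datum certificate described above can be illustrated with a randomized-smoothing-style sketch: vote over predictions on randomly perturbed copies of an input and certify the label only when its lead is large. This is an illustrative analogue, not the paper's algorithm; the function name, Gaussian noise model, and margin threshold are all assumptions.

```python
import numpy as np

def certified_predict(model, x, n_samples=200, sigma=0.1, margin=0.2, seed=0):
    # Vote over predictions on randomly perturbed copies of x; certify the
    # top label only when its lead over the runner-up exceeds the margin.
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        y = model(x + rng.normal(0.0, sigma))
        votes[y] = votes.get(y, 0) + 1
    ranked = sorted(votes.items(), key=lambda kv: -kv[1])
    top_label, top_count = ranked[0]
    runner_count = ranked[1][1] if len(ranked) > 1 else 0
    return top_label, (top_count - runner_count) / n_samples >= margin

# toy 1-D "classifier": the sign of the input, queried far from its boundary
model = lambda z: int(z > 0)
label, certified = certified_predict(model, 5.0)
```

Far from the decision boundary the vote is unanimous and the prediction is certified; near the boundary the margin collapses and certification is refused.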
2

Liu, Bowen, Haoyang Li, Shuning Wang, Shuo Nie, and Shanghang Zhang. "Subgraph Aggregation for Out-of-Distribution Generalization on Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 18763–71. https://doi.org/10.1609/aaai.v39i18.34065.

Abstract:
Out-of-distribution (OOD) generalization in Graph Neural Networks (GNNs) has gained significant attention due to its critical importance in graph-based predictions in real-world scenarios. Existing methods primarily focus on extracting a single causal subgraph from the input graph to achieve generalizable predictions. However, relying on a single subgraph can lead to susceptibility to spurious correlations and is insufficient for learning invariant patterns behind graph data. Moreover, in many real-world applications, such as molecular property prediction, multiple critical subgraphs may influence the target label property. To address these challenges, we propose a novel framework, SubGraph Aggregation (SuGAr), designed to learn a diverse set of subgraphs that are crucial for OOD generalization on graphs. Specifically, SuGAr employs a tailored subgraph sampler and diversity regularizer to extract a diverse set of invariant subgraphs. These invariant subgraphs are then aggregated by averaging their representations, which enriches the subgraph signals and enhances coverage of the underlying causal structures, thereby improving OOD generalization. Extensive experiments on both synthetic and real-world datasets demonstrate that SuGAr outperforms state-of-the-art methods, achieving up to a 24% improvement in OOD generalization on graphs. To the best of our knowledge, this is the first work to study graph OOD generalization by learning multiple invariant subgraphs.
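The two mechanisms the abstract names, mean-aggregation of subgraph representations and a diversity regularizer, can be sketched generically. The shapes, function names, and the cosine-similarity penalty below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def aggregate_subgraph_representations(subgraph_embeddings):
    # Average per-subgraph embedding vectors into one graph-level representation.
    E = np.stack(subgraph_embeddings)   # (num_subgraphs, dim)
    return E.mean(axis=0)               # (dim,)

def diversity_penalty(subgraph_embeddings):
    # Encourage sampled subgraphs to differ: penalize the mean pairwise
    # cosine similarity between their (row-normalized) embeddings.
    E = np.stack(subgraph_embeddings)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sim = E @ E.T
    n = len(subgraph_embeddings)
    off_diag = sim.sum() - np.trace(sim)
    return off_diag / (n * (n - 1))

embs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
agg = aggregate_subgraph_representations(embs)  # midpoint of the two embeddings
pen = diversity_penalty(embs)                   # orthogonal embeddings: no penalty
```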
3

Yuan, Lingxiao, Harold S. Park, and Emma Lejeune. "Towards out of distribution generalization for problems in mechanics." Computer Methods in Applied Mechanics and Engineering 400 (October 2022): 115569. http://dx.doi.org/10.1016/j.cma.2022.115569.

4

Liu, Anji, Hongming Xu, Guy Van den Broeck, and Yitao Liang. "Out-of-Distribution Generalization by Neural-Symbolic Joint Training." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (2023): 12252–59. http://dx.doi.org/10.1609/aaai.v37i10.26444.

Abstract:
This paper develops a novel methodology to simultaneously learn a neural network and extract generalized logic rules. Different from prior neural-symbolic methods that require background knowledge and candidate logical rules to be provided, we aim to induce task semantics with minimal priors. This is achieved by a two-step learning framework that iterates between optimizing neural predictions of task labels and searching for a more accurate representation of the hidden task semantics. Notably, supervision works in both directions: (partially) induced task semantics guide the learning of the neural network and induced neural predictions admit an improved semantic representation. We demonstrate that our proposed framework is capable of achieving superior out-of-distribution generalization performance on two tasks: (i) learning multi-digit addition, where it is trained on short sequences of digits and tested on long sequences of digits; (ii) predicting the optimal action in the Tower of Hanoi, where the model is challenged to discover a policy independent of the number of disks in the puzzle.
5

Yu, Yemin, Luotian Yuan, Ying Wei, et al. "RetroOOD: Understanding Out-of-Distribution Generalization in Retrosynthesis Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (2024): 374–82. http://dx.doi.org/10.1609/aaai.v38i1.27791.

Abstract:
Machine learning-assisted retrosynthesis prediction models have been gaining widespread adoption, though their performances oftentimes degrade significantly when deployed in real-world applications embracing out-of-distribution (OOD) molecules or reactions. Despite steady progress on standard benchmarks, our understanding of existing retrosynthesis prediction models under the premise of distribution shifts remains stagnant. To this end, we first formally sort out two types of distribution shifts in retrosynthesis prediction and construct two groups of benchmark datasets. Next, through comprehensive experiments, we systematically compare state-of-the-art retrosynthesis prediction models on the two groups of benchmarks, revealing the limitations of previous in-distribution evaluation and re-examining the advantages of each model. More remarkably, we are motivated by the above empirical insights to propose two model-agnostic techniques that can improve the OOD generalization of arbitrary off-the-shelf retrosynthesis prediction algorithms. Our preliminary experiments show their high potential with an average performance improvement of 4.6%, and the established benchmarks serve as a foothold for further retrosynthesis prediction research towards OOD generalization.
6

Du, Hongyi, Xuewei Li, and Minglai Shao. "Graph out-of-distribution generalization through contrastive learning paradigm." Knowledge-Based Systems 315 (April 2025): 113316. https://doi.org/10.1016/j.knosys.2025.113316.

7

Xu, Yiming, Bin Shi, Zhen Peng, Huixiang Liu, Bo Dong, and Chen Chen. "Out-of-Distribution Generalization on Graphs via Progressive Inference." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 12963–71. https://doi.org/10.1609/aaai.v39i12.33414.

Abstract:
The development and evaluation of graph neural networks (GNNs) generally follow the independent and identically distributed (i.i.d.) assumption. Yet this assumption is often untenable in practice due to the uncontrollable data generation mechanism. In particular, when the data distribution shows a significant shift, most GNNs would fail to produce reliable predictions and may even make decisions randomly. One of the most promising solutions to improve the model generalization is to pick out causal invariant parts in the input graph. Nonetheless, we observe a significant distribution gap between the causal parts learned by existing methods and the ground-truth, leading to undesirable performance. In response to the above issues, this paper presents GPro, a model that learns graph causal invariance with progressive inference. Specifically, the complicated graph causal invariant learning is decomposed into multiple intermediate inference steps from easy to hard, and the perception of GPro is continuously strengthened through a progressive inference process to extract causal features that are stable to distribution shifts. We also enlarge the training distribution by creating counterfactual samples to enhance the capability of the GPro in capturing the causal invariant parts. Extensive experiments demonstrate that our proposed GPro outperforms the state-of-the-art methods by 4.91% on average. For datasets with more severe distribution shifts, the performance improvement can be up to 6.86%.
8

Zhu, Lin, Xinbing Wang, Chenghu Zhou, and Nanyang Ye. "Bayesian Cross-Modal Alignment Learning for Few-Shot Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 11461–69. http://dx.doi.org/10.1609/aaai.v37i9.26355.

Abstract:
Recent advances in large pre-trained models showed promising results in few-shot learning. However, their generalization ability on two-dimensional Out-of-Distribution (OoD) data, i.e., correlation shift and diversity shift, has not been thoroughly investigated. Research has shown that even with a significant amount of training data, few methods can achieve better performance than the standard empirical risk minimization method (ERM) in OoD generalization. This few-shot OoD generalization dilemma emerges as a challenging direction in deep neural network generalization research, where the performance suffers from overfitting on few-shot examples and OoD generalization errors. In this paper, leveraging a broader supervision source, we explore a novel Bayesian cross-modal image-text alignment learning method (Bayes-CAL) to address this issue. Specifically, the model is designed so that only text representations are fine-tuned, via a Bayesian modelling approach with gradient orthogonalization loss and invariant risk minimization (IRM) loss. The Bayesian approach is essentially introduced to avoid overfitting the base classes observed during training and improve generalization to broader unseen classes. The dedicated loss is introduced to achieve better image-text alignment by disentangling the causal and non-causal parts of image features. Numerical experiments demonstrate that Bayes-CAL achieves state-of-the-art OoD generalization performance on two-dimensional distribution shifts. Moreover, compared with CLIP-like models, Bayes-CAL yields more stable generalization performance on unseen classes. Our code is available at https://github.com/LinLLLL/BayesCAL.
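The two loss ingredients named in the abstract, invariant risk minimization and gradient orthogonalization, have standard generic forms that can be sketched in a few lines. This is a minimal sketch of the textbook IRMv1 penalty and a projection step, not the Bayes-CAL code; all names and the squared-error risk are assumptions.

```python
import numpy as np

def orthogonalize(g_task, g_spurious):
    # Gradient orthogonalization: remove from the task gradient its component
    # along a gradient direction attributed to spurious (non-causal) features.
    d = g_spurious / np.linalg.norm(g_spurious)
    return g_task - (g_task @ d) * d

def irmv1_penalty(scores, labels):
    # IRMv1 penalty for one environment with a fixed scalar classifier w = 1:
    # squared derivative of the squared-error risk R(w) = mean((w*s - y)^2)
    # with respect to w, evaluated at w = 1: dR/dw = mean(2*s*(s - y)).
    grad_w = np.mean(2.0 * scores * (scores - labels))
    return grad_w ** 2

g = orthogonalize(np.array([1.0, 1.0]), np.array([0.0, 1.0]))     # -> [1, 0]
p0 = irmv1_penalty(np.array([0.5, -0.5]), np.array([0.5, -0.5]))  # perfect fit -> 0
p1 = irmv1_penalty(np.array([1.0, 0.0]), np.array([0.0, 1.0]))    # misfit -> positive
```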
9

Lavda, Frantzeska, and Alexandros Kalousis. "Semi-Supervised Variational Autoencoders for Out-of-Distribution Generation." Entropy 25, no. 12 (2023): 1659. http://dx.doi.org/10.3390/e25121659.

Abstract:
Humans are able to quickly adapt to new situations, learn effectively with limited data, and create unique combinations of basic concepts. In contrast, generalizing out-of-distribution (OOD) data and achieving combinatorial generalizations are fundamental challenges for machine learning models. Moreover, obtaining high-quality labeled examples can be very time-consuming and expensive, particularly when specialized skills are required for labeling. To address these issues, we propose BtVAE, a method that utilizes conditional VAE models to achieve combinatorial generalization in certain scenarios and consequently to generate out-of-distribution (OOD) data in a semi-supervised manner. Unlike previous approaches that use new factors of variation during testing, our method uses only existing attributes from the training data but in ways that were not seen during training (e.g., small objects of a specific shape during training and large objects of the same shape during testing).
10

Zhang, Xiao, Sunhao Dai, Jun Xu, Yong Liu, and Zhenhua Dong. "AdaO2B: Adaptive Online to Batch Conversion for Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 21 (2025): 22596–604. https://doi.org/10.1609/aaai.v39i21.34418.

Abstract:
Online to batch conversion involves constructing a new batch learner by utilizing a series of models generated by an existing online learning algorithm, for achieving generalization guarantees under i.i.d assumption. However, when applied to real-world streaming applications such as streaming recommender systems, the data stream may be sampled from time-varying distributions instead of persistently being i.i.d. This poses a challenge in terms of out-of-distribution (OOD) generalization. Existing approaches employ fixed conversion mechanisms that are unable to adapt to novel testing distributions, hindering the testing accuracy of the batch learner. To address these issues, we propose AdaO2B, an adaptive online to batch conversion approach under the bandit setting. AdaO2B is designed to be aware of the distribution shifts in the testing data and achieves OOD generalization guarantees. Specifically, AdaO2B can dynamically combine the sequence of models learned by a contextual bandit algorithm and determine appropriate combination weights using a context-aware weighting function. This innovative approach allows for the conversion of a sequence of models into a batch learner that facilitates OOD generalization. Theoretical analysis provides justification for why and how the learned adaptive batch learner can achieve OOD generalization error guarantees. Experimental results have demonstrated that AdaO2B significantly outperforms state-of-the-art baselines on both synthetic and real-world recommendation datasets.
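The conversion mechanism described above, combining a sequence of online-learned models with context-dependent weights, can be sketched as follows. The softmax weighting, the toy snapshot models, and all names are assumptions for illustration, not the paper's method.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a weight vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def adaptive_batch_predict(models, weight_fn, x):
    # Online-to-batch conversion with context-aware weights: combine the
    # predictions of a sequence of online-learned model snapshots, weighting
    # each by a learned function of the current context x.
    w = softmax(np.array([weight_fn(x, i) for i in range(len(models))]))
    preds = np.array([m(x) for m in models])
    return float(w @ preds)

# two toy model "snapshots" and a weighting function that favors the later one
models = [lambda x: 0.0, lambda x: 1.0]
weight_fn = lambda x, i: 10.0 * i
y = adaptive_batch_predict(models, weight_fn, x=0.5)
```

A fixed conversion would average the snapshots uniformly; here the context-aware weights let the batch learner lean on whichever snapshot matches the test-time distribution.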

Dissertations / Theses on the topic "Out-of-distribution generalization"

1

Kirchmeyer, Matthieu. "Out-of-distribution Generalization in Deep Learning : Classification and Spatiotemporal Forecasting." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS080.

Abstract:
Deep learning has emerged as a powerful approach for modelling static data like images and more recently for modelling dynamical systems like those underlying time series, videos or physical phenomena. Yet, neural networks were observed to not generalize well outside the training distribution, in other words out-of-distribution. This lack of generalization limits the deployment of deep learning in autonomous systems or online production pipelines, which are faced with constantly evolving data. In this thesis, we design new strategies for out-of-distribution generalization. These strategies handle the specific challenges posed by two main application tasks, classification of static data and spatiotemporal dynamics forecasting. The first two parts of this thesis consider the classification problem. We first investigate how we can efficiently leverage some observed training data from a target domain for adaptation. We then explore how to generalize to unobserved domains without access to such data. The last part of this thesis handles various generalization problems specific to spatiotemporal forecasting.
2

Soum-Fontez, Louis. "LiDAR-based domain generalization and unknown 3D object detection." Electronic Thesis or Diss., Université Paris sciences et lettres, 2025. http://www.theses.fr/2025UPSLM002.

Abstract:
The development of robust perception systems is fundamental for the safe and efficient operation of autonomous vehicles. These systems perform essential tasks such as 3D object detection, enabling vehicles to identify and localize obstacles, including other vehicles, pedestrians, and various objects within their environment. Accurate detection is critical for effective decision-making and navigation through complex driving scenarios. However, 3D object detection presents significant challenges due to the diverse nature of real-world conditions, which encompass a broad range of sensor setups, geographic environments, and scene complexities. First, this thesis identifies and addresses the domain shifts that occur between different LiDAR datasets due to variations in sensor specifications, geographic environments, and dataset-specific attributes. These shifts often lead to significant performance gaps when models are transferred between datasets. To mitigate these issues, a multi-dataset training framework called MDT3D is introduced. MDT3D integrates data from various sources and employs novel augmentation and label harmonization techniques to create models that generalize effectively across different conditions. Second, the thesis presents ParisLuco3D, a dataset captured in urban areas around the Luxembourg Garden in Paris, designed to test model robustness in complex real-world scenarios. This provides a dedicated dataset for evaluating model generalization, benchmarked to highlight the poor performance of baseline generalization methods. Finally, the thesis turns to detecting novel unknown objects, reframing unknown-object detection as an out-of-distribution (OOD) problem so that models can differentiate between known and previously unseen objects without compromising accuracy on familiar categories.

Books on the topic "Out-of-distribution generalization"

1

Zabrodin, Anton. Financial applications of random matrix theory: a short review. Edited by Gernot Akemann, Jinho Baik, and Philippe Di Francesco. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198744191.013.40.

Abstract:
This article reviews some applications of random matrix theory (RMT) in the context of financial markets and econometric models, with emphasis on various theoretical results (for example, the Marčenko-Pastur spectrum and its various generalizations, random singular value decomposition, free matrices, largest eigenvalue statistics) as well as some concrete applications to portfolio optimization and out-of-sample risk estimation. The discussion begins with an overview of principal component analysis (PCA) of the correlation matrix, followed by an analysis of return statistics and portfolio theory. In particular, the article considers single asset returns, multivariate distribution of returns, risk and portfolio theory, and nonequal time correlations and more general rectangular correlation matrices. It also presents several RMT results on the bulk density of states that can be obtained using the concept of matrix freeness before concluding with a description of empirical correlation matrices of stock returns.
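The Marčenko-Pastur spectrum mentioned in the abstract has closed-form support edges, which a short sketch can compare against the eigenvalues of a pure-noise empirical correlation matrix. The variable names, sample sizes, and tolerance below are illustrative assumptions.

```python
import numpy as np

def marchenko_pastur_edges(q, sigma2=1.0):
    # Support edges of the Marčenko-Pastur spectrum for aspect ratio
    # q = N/T (variables over observations) and variance sigma2:
    # lambda_± = sigma2 * (1 ± sqrt(q))^2.
    return sigma2 * (1 - np.sqrt(q)) ** 2, sigma2 * (1 + np.sqrt(q)) ** 2

rng = np.random.default_rng(0)
N, T = 100, 400                 # q = 0.25
X = rng.standard_normal((T, N))
C = X.T @ X / T                 # empirical correlation matrix of pure noise
eigs = np.linalg.eigvalsh(C)
lo, hi = marchenko_pastur_edges(N / T)
# for pure noise, essentially all eigenvalues fall inside [lo, hi];
# eigenvalues escaping this band are candidates for genuine structure
inside = np.mean((eigs > lo - 0.1) & (eigs < hi + 0.1))
```

In portfolio applications this band is used as a noise filter: only eigenvalues outside it are treated as carrying real correlation structure.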
2

James, Philip. The Biology of Urban Environments. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198827238.001.0001.

Abstract:
Urban environments are characterized by the density of buildings and elements of a number of infrastructures that support urban residents in their daily life. These built elements and the activities that take place within towns and cities create a distinctive climate and increase air, water, and soil pollution. Within this context the elements of the natural environment that either are residual areas representative of the pre-urbanized area or are created by people contain distinctive floral and faunal communities that do not exist in the wild. The diverse prions, viruses, micro-organisms, plants, and animals that live there for all or part of their life cycle and their relationships with each other and with humans are illustrated with examples of diseases, parasites, and pests. Plants and animals are found inside as well as outside buildings. The roles of plants inside buildings and of domestic and companion animals are evaluated. Temporal and spatial distribution patterns of plants and animals living outside buildings are set out and generalizations are drawn, while exceptions are also discussed. The strategies used and adaptions (genotypic, phenotypic, and behavioural) adopted by plants and animals in face of the challenges presented by urban environments are explained. The final two chapters contain discussions of the impacts of urban environments on human biology and how humans might change these environments in order to address the illnesses that are characteristic of urbanites in the early twenty-first century.

Book chapters on the topic "Out-of-distribution generalization"

1

Chen, Zining, Weiqiu Wang, Zhicheng Zhao, Aidong Men, and Hong Chen. "Bag of Tricks for Out-of-Distribution Generalization." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_31.

2

Moruzzi, Caterina. "Toward Out-of-Distribution Generalization Through Inductive Biases." In Studies in Applied Philosophy, Epistemology and Rational Ethics. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09153-7_5.

3

Li, Dongqi, Zhu Teng, Qirui Li, and Ziyin Wang. "Sharpness-Aware Minimization for Out-of-Distribution Generalization." In Communications in Computer and Information Science. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8126-7_43.

4

Wang, Fawu, Kang Zhang, Zhengyu Liu, Xia Yuan, and Chunxia Zhao. "Deep Relevant Feature Focusing for Out-of-Distribution Generalization." In Pattern Recognition and Computer Vision. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18907-4_19.

5

Nguyen, Bac, Stefan Uhlich, Fabien Cardinaux, Lukas Mauch, Marzieh Edraki, and Aaron Courville. "SAFT: Towards Out-of-Distribution Generalization in Fine-Tuning." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-72890-7_9.

6

Teevno, Mansoor Ali, Gilberto Ochoa-Ruiz, and Sharib Ali. "Tackling Domain Generalization for Out-of-Distribution Endoscopic Imaging." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73290-4_5.

7

Jung, Yoon Gyo, Jaewoo Park, Xingbo Dong, Hojin Park, Andrew Beng Jin Teoh, and Octavia Camps. "Face Reconstruction Transfer Attack as Out-of-Distribution Generalization." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73226-3_23.

8

Wang, Yuqing, Xiangxian Li, Zhuang Qi, et al. "Meta-Causal Feature Learning for Out-of-Distribution Generalization." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_36.

9

Yu, Haoran, Baodi Liu, Yingjie Wang, Kai Zhang, Dapeng Tao, and Weifeng Liu. "A Stable Vision Transformer for Out-of-Distribution Generalization." In Pattern Recognition and Computer Vision. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8543-2_27.

10

Zhang, Xingxuan, Yue He, Tan Wang, et al. "NICO Challenge: Out-of-Distribution Generalization for Image Recognition Challenges." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_29.


Conference papers on the topic "Out-of-distribution generalization"

1

Cho, Daniel, Christopher Ebersole, and Edmund Zelnio. "Predictive measures of out-of-distribution generalization." In Algorithms for Synthetic Aperture Radar Imagery XXXII, edited by Edmund Zelnio and Frederick D. Garber. SPIE, 2025. https://doi.org/10.1117/12.3054360.

2

Wang, Song, Xiaodong Yang, Rashidul Islam, et al. "Enhancing Distribution and Label Consistency for Graph Out-of-Distribution Generalization." In 2024 IEEE International Conference on Data Mining (ICDM). IEEE, 2024. https://doi.org/10.1109/icdm59182.2024.00108.

3

Jin, Kaiyu, Chenwang Wu, and Defu Lian. "Out-of-Distribution Generalization via Style and Spuriousness Eliminating." In 2024 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2024. http://dx.doi.org/10.1109/icme57554.2024.10687911.

4

Liu, Wenliang, Guanding Yu, Lele Wang, and Renjie Liao. "An Information-Theoretic Framework for Out-of-Distribution Generalization." In 2024 IEEE International Symposium on Information Theory (ISIT). IEEE, 2024. http://dx.doi.org/10.1109/isit57864.2024.10619471.

5

Chen, Zining, Weiqiu Wang, Zhicheng Zhao, Fei Su, and Aidong Men. "Selective Cross-Correlation Consistency Loss for Out-of-Distribution Generalization." In 2024 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2024. http://dx.doi.org/10.1109/icme57554.2024.10688222.

6

Chowdhury, Jawad, and Gabriel Terejanu. "CGLearn: Consistent Gradient-Based Learning for Out-of-Distribution Generalization." In 14th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2025. https://doi.org/10.5220/0013260400003905.

7

Zhang, Min, Zifeng Zhuang, Zhitao Wang, and Donglin Wang. "RotoGBML: Towards Out-of-distribution Generalization for Gradient-based Meta-learning." In 2024 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2024. http://dx.doi.org/10.1109/icme57554.2024.10687395.

8

Qiao, Yi, Yang Liu, Qing He, and Xiang Ao. "Domain-aware Node Representation Learning for Graph Out-of-Distribution Generalization." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10889630.

9

Qi, Zhuang, Weihao He, Xiangxu Meng, and Lei Meng. "Attentive Modeling and Distillation for Out-of-Distribution Generalization of Federated Learning." In 2024 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2024. http://dx.doi.org/10.1109/icme57554.2024.10687423.

10

Petchhan, Jirayu, Muhammad Firdaus Alhakim, and Shun-Feng Su. "Out-of-Distribution Awareness via Domain Generalization for Industrial Production Surveillance Employment." In 2024 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan). IEEE, 2024. http://dx.doi.org/10.1109/icce-taiwan62264.2024.10674660.


Reports on the topic "Out-of-distribution generalization"

1

Caro, Matthias, Hsin-Yuan Huang, Lukasz Cincio, et al. Out-of-Distribution Generalization for Learning Quantum Dynamics. Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/2377336.
