Academic literature on the topic "Out-of-distribution generalization"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source:

Consult thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Out-of-distribution generalization".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Out-of-distribution generalization"

1. Ye, Nanyang, Lin Zhu, Jia Wang, Zhaoyu Zeng, Jiayao Shao, Chensheng Peng, Bikang Pan, Kaican Li, and Jun Zhu. "Certifiable Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10927–35. http://dx.doi.org/10.1609/aaai.v37i9.26295.

Abstract:
Machine learning methods suffer from test-time performance degradation when faced with out-of-distribution (OoD) data whose distribution is not necessarily the same as the training data distribution. Although a plethora of algorithms have been proposed to mitigate this issue, it has been demonstrated that achieving better performance than ERM simultaneously on different types of distributional-shift datasets is challenging for existing approaches. Moreover, without theoretical guarantees, it is unknown how and to what extent these methods work on any given OoD datum. In this paper, we propose a certifiable out-of-distribution generalization method that provides provable OoD generalization performance guarantees via a functional optimization framework leveraging random distributions and max-margin learning for each input datum. With this approach, the proposed algorithmic scheme can provide certified accuracy for each input datum's prediction on the semantic space and achieves better performance simultaneously on OoD datasets dominated by correlation shifts or diversity shifts. Our code is available at https://github.com/ZlatanWilliams/StochasticDisturbanceLearning.
2. Yuan, Lingxiao, Harold S. Park, and Emma Lejeune. "Towards out of distribution generalization for problems in mechanics." Computer Methods in Applied Mechanics and Engineering 400 (October 2022): 115569. http://dx.doi.org/10.1016/j.cma.2022.115569.

3. Liu, Anji, Hongming Xu, Guy Van den Broeck, and Yitao Liang. "Out-of-Distribution Generalization by Neural-Symbolic Joint Training." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 12252–59. http://dx.doi.org/10.1609/aaai.v37i10.26444.

Abstract:
This paper develops a novel methodology to simultaneously learn a neural network and extract generalized logic rules. Different from prior neural-symbolic methods that require background knowledge and candidate logical rules to be provided, we aim to induce task semantics with minimal priors. This is achieved by a two-step learning framework that iterates between optimizing neural predictions of task labels and searching for a more accurate representation of the hidden task semantics. Notably, supervision works in both directions: (partially) induced task semantics guide the learning of the neural network and induced neural predictions admit an improved semantic representation. We demonstrate that our proposed framework is capable of achieving superior out-of-distribution generalization performance on two tasks: (i) learning multi-digit addition, where it is trained on short sequences of digits and tested on long sequences of digits; (ii) predicting the optimal action in the Tower of Hanoi, where the model is challenged to discover a policy independent of the number of disks in the puzzle.
4. Yu, Yemin, Luotian Yuan, Ying Wei, Hanyu Gao, Fei Wu, Zhihua Wang, and Xinhai Ye. "RetroOOD: Understanding Out-of-Distribution Generalization in Retrosynthesis Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 374–82. http://dx.doi.org/10.1609/aaai.v38i1.27791.

Abstract:
Machine learning-assisted retrosynthesis prediction models have been gaining widespread adoption, though their performance oftentimes degrades significantly when deployed in real-world applications involving out-of-distribution (OOD) molecules or reactions. Despite steady progress on standard benchmarks, our understanding of existing retrosynthesis prediction models under distribution shifts remains limited. To this end, we first formally sort out two types of distribution shifts in retrosynthesis prediction and construct two groups of benchmark datasets. Next, through comprehensive experiments, we systematically compare state-of-the-art retrosynthesis prediction models on the two groups of benchmarks, revealing the limitations of previous in-distribution evaluation and re-examining the advantages of each model. Motivated by these empirical insights, we further propose two model-agnostic techniques that can improve the OOD generalization of arbitrary off-the-shelf retrosynthesis prediction algorithms. Our preliminary experiments show their high potential with an average performance improvement of 4.6%, and the established benchmarks serve as a foothold for further retrosynthesis prediction research towards OOD generalization.
5. Zhu, Lin, Xinbing Wang, Chenghu Zhou, and Nanyang Ye. "Bayesian Cross-Modal Alignment Learning for Few-Shot Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11461–69. http://dx.doi.org/10.1609/aaai.v37i9.26355.

Abstract:
Recent advances in large pre-trained models showed promising results in few-shot learning. However, their generalization ability on two-dimensional Out-of-Distribution (OoD) data, i.e., correlation shift and diversity shift, has not been thoroughly investigated. Research has shown that even with a significant amount of training data, few methods can achieve better performance than the standard empirical risk minimization (ERM) method in OoD generalization. This few-shot OoD generalization dilemma emerges as a challenging direction in deep neural network generalization research, where the performance suffers from overfitting on few-shot examples and OoD generalization errors. In this paper, leveraging a broader supervision source, we explore a novel Bayesian cross-modal image-text alignment learning method (Bayes-CAL) to address this issue. Specifically, the model is designed so that only text representations are fine-tuned, via a Bayesian modelling approach with a gradient orthogonalization loss and an invariant risk minimization (IRM) loss. The Bayesian approach is essentially introduced to avoid overfitting the base classes observed during training and to improve generalization to broader unseen classes. The dedicated loss is introduced to achieve better image-text alignment by disentangling the causal and non-causal parts of image features. Numerical experiments demonstrate that Bayes-CAL achieves state-of-the-art OoD generalization performance on two-dimensional distribution shifts. Moreover, compared with CLIP-like models, Bayes-CAL yields more stable generalization performance on unseen classes. Our code is available at https://github.com/LinLLLL/BayesCAL.
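As background for the gradient orthogonalization loss this abstract mentions, the core projection step can be sketched generically in a few lines (a minimal illustration under assumed names, not the Bayes-CAL implementation):

```python
import numpy as np

def orthogonalize(g_task, g_ref):
    """Remove from g_task its component along g_ref, so the returned
    update direction no longer interferes with the reference gradient."""
    denom = float(np.dot(g_ref, g_ref))
    if denom == 0.0:
        return g_task  # nothing to project against
    return g_task - (float(np.dot(g_task, g_ref)) / denom) * g_ref

g_task = np.array([1.0, 2.0, 3.0])   # hypothetical task-loss gradient
g_ref = np.array([0.0, 1.0, 0.0])    # hypothetical reference gradient
g_orth = orthogonalize(g_task, g_ref)
```

After the projection, the dot product of `g_orth` with `g_ref` is zero up to floating-point error, which is the property such losses penalize.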
6. Lavda, Frantzeska, and Alexandros Kalousis. "Semi-Supervised Variational Autoencoders for Out-of-Distribution Generation." Entropy 25, no. 12 (December 14, 2023): 1659. http://dx.doi.org/10.3390/e25121659.

Abstract:
Humans are able to quickly adapt to new situations, learn effectively with limited data, and create unique combinations of basic concepts. In contrast, generalizing to out-of-distribution (OOD) data and achieving combinatorial generalization are fundamental challenges for machine learning models. Moreover, obtaining high-quality labeled examples can be very time-consuming and expensive, particularly when specialized skills are required for labeling. To address these issues, we propose BtVAE, a method that utilizes conditional VAE models to achieve combinatorial generalization in certain scenarios and consequently to generate out-of-distribution (OOD) data in a semi-supervised manner. Unlike previous approaches that use new factors of variation during testing, our method uses only existing attributes from the training data, but in ways that were not seen during training (e.g., small objects of a specific shape during training and large objects of the same shape during testing).
7. Su, Hang, and Wei Wang. "An Out-of-Distribution Generalization Framework Based on Variational Backdoor Adjustment." Mathematics 12, no. 1 (December 26, 2023): 85. http://dx.doi.org/10.3390/math12010085.

Abstract:
In practical applications, models that perform well even when the data distribution differs from that of the training set are essential and meaningful. Such problems are often referred to as out-of-distribution (OOD) generalization problems. In this paper, we propose a method for OOD generalization based on causal inference. Unlike prevalent OOD generalization methods, our approach does not require the environment labels associated with the data in the training set. We analyze the causes of distributional shifts in data from a causal modeling perspective and then propose a backdoor adjustment method based on variational inference. Finally, we construct a dedicated network structure to simulate the variational inference process. The proposed variational backdoor adjustment (VBA) framework can be combined with any mainstream backbone network. In addition to the theoretical derivation, we conduct experiments on different datasets to demonstrate that our method performs well in terms of prediction accuracy and generalization gap. Furthermore, by comparing the VBA framework with other mainstream OOD methods, we show that VBA outperforms them.
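The backdoor adjustment that VBA builds on is the classical formula P(y | do(x)) = Σ_z P(z) P(y | x, z), which keeps the confounder Z at its prior instead of conditioning it on x. A minimal discrete sketch (toy numbers chosen purely for illustration, not the paper's variational network):

```python
import numpy as np

# Toy discrete causal model: confounder Z -> X and Z -> Y, plus X -> Y.
p_z = np.array([0.3, 0.7])                 # P(Z)
p_x_given_z = np.array([[0.9, 0.1],        # P(X | Z=0)
                        [0.2, 0.8]])       # P(X | Z=1)
p_y1_given_xz = np.array([[0.1, 0.3],      # P(Y=1 | X, Z=0)
                          [0.5, 0.7]])     # P(Y=1 | X, Z=1)

def observational_p_y1(x):
    """P(Y=1 | X=x): Z is inferred from x, so confounding leaks in."""
    joint_zx = p_z * p_x_given_z[:, x]      # P(Z, X=x)
    p_z_given_x = joint_zx / joint_zx.sum()
    return float(np.dot(p_z_given_x, p_y1_given_xz[:, x]))

def backdoor_p_y1(x):
    """P(Y=1 | do(X=x)) = sum_z P(z) * P(Y=1 | x, z): Z keeps its prior."""
    return float(np.dot(p_z, p_y1_given_xz[:, x]))
```

Here `backdoor_p_y1(1)` is 0.58 while the observational conditional is about 0.68: the gap is exactly the spurious association a model trained on observational data would absorb.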
8. Cao, Linfeng, Aofan Jiang, Wei Li, Huaying Wu, and Nanyang Ye. "OoDHDR-Codec: Out-of-Distribution Generalization for HDR Image Compression." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 158–66. http://dx.doi.org/10.1609/aaai.v36i1.19890.

Abstract:
Recently, deep learning has been proven to be a promising approach in standard dynamic range (SDR) image compression. However, due to the wide luminance distribution of high dynamic range (HDR) images and the lack of large standard datasets, developing a deep model for HDR image compression is much more challenging. To tackle this issue, we view HDR data as distributional shifts of SDR data, so that HDR image compression can be modeled as an out-of-distribution (OoD) generalization problem. Herein, we propose a novel out-of-distribution HDR image compression framework (OoDHDR-codec). It learns the general representation across HDR and SDR environments, and allows the model to be trained effectively using a large set of SDR datasets supplemented with far fewer HDR samples. Specifically, OoDHDR-codec consists of two branches to process the data from the two environments. The SDR branch is a standard blackbox network. For the HDR branch, we develop a hybrid system that models luminance masking and tone mapping with white-box modules and performs content compression with black-box neural networks. To improve the generalization from SDR training data to HDR data, we introduce an invariance regularization term to learn the common representation for both SDR and HDR compression. Extensive experimental results show that OoDHDR-codec achieves strongly competitive in-distribution performance and state-of-the-art OoD performance. To the best of our knowledge, our proposed approach is the first work to model HDR compression as an OoD generalization problem, and our OoD generalization algorithmic framework can be applied to any deep compression model beyond the network architectural choice demonstrated in the paper. Code available at https://github.com/caolinfeng/OoDHDR-codec.
9. Deng, Bin, and Kui Jia. "Counterfactual Supervision-Based Information Bottleneck for Out-of-Distribution Generalization." Entropy 25, no. 2 (January 18, 2023): 193. http://dx.doi.org/10.3390/e25020193.

Abstract:
Learning invariant (causal) features for out-of-distribution (OOD) generalization has attracted extensive attention recently, and among the proposals, invariant risk minimization (IRM) is a notable solution. In spite of its theoretical promise for linear regression, the challenges of using IRM in linear classification problems remain. By introducing the information bottleneck (IB) principle into the learning of IRM, the IB-IRM approach has demonstrated its power to solve these challenges. In this paper, we further improve IB-IRM in two aspects. First, we revisit the key assumption of support overlap of invariant features that IB-IRM uses to guarantee OOD generalization, and show that the optimal solution can still be achieved without this assumption. Second, we illustrate two failure modes in which IB-IRM (and IRM) can fail to learn the invariant features, and to address such failures, we propose a Counterfactual Supervision-based Information Bottleneck (CSIB) learning algorithm that recovers the invariant features. By requiring counterfactual inference, CSIB works even when accessing data from a single environment. Empirical experiments on several datasets verify our theoretical results.
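For readers unfamiliar with the IRM penalty that both IRM and IB-IRM build on, the IRMv1 formulation penalizes the squared gradient of each environment's risk with respect to a dummy classifier scale w, evaluated at w = 1. A toy sketch with a fixed linear classifier (illustrative data and names, not the CSIB algorithm itself):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def irmv1_penalty(theta, X, y):
    """IRMv1 penalty for one environment: squared gradient of the mean
    logistic loss w.r.t. a dummy scale w (at w=1) on classifier theta."""
    s = X @ theta                              # per-sample scores
    # d/dw of mean BCE(sigmoid(w*s), y) at w=1 equals mean((sigmoid(s)-y)*s)
    grad_w = np.mean((sigmoid(s) - y) * s)
    return grad_w ** 2

rng = np.random.default_rng(0)
theta = np.array([1.0, -0.5])                  # a fixed candidate classifier
envs = []
for shift in (0.0, 1.5):                       # two environments, shifted inputs
    X = rng.normal(shift, 1.0, size=(64, 2))
    y = (X[:, 0] > shift).astype(float)
    envs.append((X, y))
total_penalty = sum(irmv1_penalty(theta, X, y) for X, y in envs)
```

A classifier whose risk is simultaneously stationary in every environment drives this penalty to zero, which is the invariance condition IRM-style methods optimize toward.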
10. Ashok, Arjun, Chaitanya Devaguptapu, and Vineeth N. Balasubramanian. "Learning Modular Structures That Generalize Out-of-Distribution (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12905–6. http://dx.doi.org/10.1609/aaai.v36i11.21589.

Abstract:
Out-of-distribution (O.O.D.) generalization remains a key challenge for real-world machine learning systems. We describe a method for O.O.D. generalization that, through training, encourages models to preserve only those features that are reused well across multiple training domains. Our method combines two complementary neuron-level regularizers with a probabilistic differentiable binary mask over the network to extract a modular sub-network that achieves better O.O.D. performance than the original network. Preliminary evaluation on two benchmark datasets corroborates the promise of our method.
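A probabilistic differentiable binary mask of the kind described above is commonly implemented with a binary-Concrete (Gumbel-sigmoid) relaxation; the sketch below shows that generic relaxation, not the authors' exact regularizers:

```python
import numpy as np

def concrete_mask(logits, temperature, rng):
    """Sample a relaxed (differentiable) binary mask from per-weight
    logits via the binary-Concrete / Gumbel-sigmoid relaxation."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=logits.shape)
    noise = np.log(u) - np.log(1.0 - u)        # logistic noise
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temperature))

rng = np.random.default_rng(42)
weights = rng.normal(size=(4, 4))              # a toy weight matrix
logits = np.zeros_like(weights)                # mask parameters to be learned
mask = concrete_mask(logits, temperature=0.5, rng=rng)
masked_weights = weights * mask                # soft sub-network selection
```

As the temperature is annealed toward zero, the sampled mask values saturate toward 0 or 1, so the soft selection approaches a hard sub-network.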

Theses on the topic "Out-of-distribution generalization"

1. Kirchmeyer, Matthieu. "Out-of-distribution Generalization in Deep Learning: Classification and Spatiotemporal Forecasting." Electronic thesis or dissertation, Sorbonne Université, 2023. http://www.theses.fr/2023SORUS080.

Abstract:
Deep learning has emerged as a powerful approach for modelling static data such as images and, more recently, for modelling dynamical systems such as those underlying time series, videos, or physical phenomena. Yet neural networks have been observed not to generalize well outside the training distribution, in other words out-of-distribution. This lack of generalization limits the deployment of deep learning in autonomous systems or online production pipelines, which are faced with constantly evolving data. In this thesis, we design new strategies for out-of-distribution generalization. These strategies handle the specific challenges posed by two main application tasks: classification of static data and spatiotemporal dynamics forecasting. The first two parts of this thesis consider the classification problem. We first investigate how to efficiently leverage some observed training data from a target domain for adaptation. We then explore how to generalize to unobserved domains without access to such data. The last part of this thesis handles various generalization problems specific to spatiotemporal forecasting.

Books on the topic "Out-of-distribution generalization"

1. Zabrodin, Anton. Financial applications of random matrix theory: a short review. Edited by Gernot Akemann, Jinho Baik, and Philippe Di Francesco. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198744191.013.40.

Abstract:
This article reviews some applications of random matrix theory (RMT) in the context of financial markets and econometric models, with emphasis on various theoretical results (for example, the Marčenko-Pastur spectrum and its various generalizations, random singular value decomposition, free matrices, largest eigenvalue statistics) as well as some concrete applications to portfolio optimization and out-of-sample risk estimation. The discussion begins with an overview of principal component analysis (PCA) of the correlation matrix, followed by an analysis of return statistics and portfolio theory. In particular, the article considers single asset returns, multivariate distribution of returns, risk and portfolio theory, and nonequal time correlations and more general rectangular correlation matrices. It also presents several RMT results on the bulk density of states that can be obtained using the concept of matrix freeness before concluding with a description of empirical correlation matrices of stock returns.
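The Marčenko–Pastur spectrum this abstract mentions has a closed-form density whose support is [(1−√q)², (1+√q)²] for aspect ratio q = N/T; a brief numerical check on pure-noise data (illustrative parameters, not taken from the chapter):

```python
import numpy as np

# Eigenvalues of a pure-noise sample covariance matrix should fall
# inside the Marcenko-Pastur support, up to finite-size fluctuations.
rng = np.random.default_rng(0)
T, N = 4000, 400                     # observations x variables, q = N/T
q = N / T
X = rng.standard_normal((T, N))
C = X.T @ X / T                      # sample covariance of unit-variance noise
eigvals = np.linalg.eigvalsh(C)
lam_minus = (1 - np.sqrt(q)) ** 2    # lower edge of the MP spectrum
lam_plus = (1 + np.sqrt(q)) ** 2     # upper edge

def mp_density(lam, q):
    """Marcenko-Pastur density for unit variance and ratio q = N/T."""
    inside = np.clip((lam_plus - lam) * (lam - lam_minus), 0.0, None)
    return np.sqrt(inside) / (2 * np.pi * q * lam)
```

In portfolio applications, eigenvalues of an empirical correlation matrix that escape this noise band are the candidates for genuine correlation structure.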
2. James, Philip. The Biology of Urban Environments. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198827238.001.0001.

Abstract:
Urban environments are characterized by the density of buildings and elements of a number of infrastructures that support urban residents in their daily life. These built elements and the activities that take place within towns and cities create a distinctive climate and increase air, water, and soil pollution. Within this context the elements of the natural environment that either are residual areas representative of the pre-urbanized area or are created by people contain distinctive floral and faunal communities that do not exist in the wild. The diverse prions, viruses, micro-organisms, plants, and animals that live there for all or part of their life cycle and their relationships with each other and with humans are illustrated with examples of diseases, parasites, and pests. Plants and animals are found inside as well as outside buildings. The roles of plants inside buildings and of domestic and companion animals are evaluated. Temporal and spatial distribution patterns of plants and animals living outside buildings are set out and generalizations are drawn, while exceptions are also discussed. The strategies used and adaptions (genotypic, phenotypic, and behavioural) adopted by plants and animals in face of the challenges presented by urban environments are explained. The final two chapters contain discussions of the impacts of urban environments on human biology and how humans might change these environments in order to address the illnesses that are characteristic of urbanites in the early twenty-first century.

Book chapters on the topic "Out-of-distribution generalization"

1. Chen, Zining, Weiqiu Wang, Zhicheng Zhao, Aidong Men, and Hong Chen. "Bag of Tricks for Out-of-Distribution Generalization." In Lecture Notes in Computer Science, 465–76. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_31.

2. Moruzzi, Caterina. "Toward Out-of-Distribution Generalization Through Inductive Biases." In Studies in Applied Philosophy, Epistemology and Rational Ethics, 57–66. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09153-7_5.

3. Li, Dongqi, Zhu Teng, Qirui Li, and Ziyin Wang. "Sharpness-Aware Minimization for Out-of-Distribution Generalization." In Communications in Computer and Information Science, 555–67. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8126-7_43.

4. Wang, Fawu, Kang Zhang, Zhengyu Liu, Xia Yuan, and Chunxia Zhao. "Deep Relevant Feature Focusing for Out-of-Distribution Generalization." In Pattern Recognition and Computer Vision, 245–53. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18907-4_19.

5. Wang, Yuqing, Xiangxian Li, Zhuang Qi, Jingyu Li, Xuelong Li, Xiangxu Meng, and Lei Meng. "Meta-Causal Feature Learning for Out-of-Distribution Generalization." In Lecture Notes in Computer Science, 530–45. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_36.

6. Yu, Haoran, Baodi Liu, Yingjie Wang, Kai Zhang, Dapeng Tao, and Weifeng Liu. "A Stable Vision Transformer for Out-of-Distribution Generalization." In Pattern Recognition and Computer Vision, 328–39. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8543-2_27.

7. Zhang, Xingxuan, Yue He, Tan Wang, Jiaxin Qi, Han Yu, Zimu Wang, Jie Peng, et al. "NICO Challenge: Out-of-Distribution Generalization for Image Recognition Challenges." In Lecture Notes in Computer Science, 433–50. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_29.

8. Long, Xi, Ying Cheng, Xiao Mu, Lian Liu, and Jingxin Liu. "Domain Adaptive Cascade R-CNN for MItosis DOmain Generalization (MIDOG) Challenge." In Biomedical Image Registration, Domain Generalisation and Out-of-Distribution Analysis, 73–76. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97281-3_11.

9. Jahanifar, Mostafa, Adam Shepard, Neda Zamanitajeddin, R. M. Saad Bashir, Mohsin Bilal, Syed Ali Khurram, Fayyaz Minhas, and Nasir Rajpoot. "Stain-Robust Mitotic Figure Detection for the Mitosis Domain Generalization Challenge." In Biomedical Image Registration, Domain Generalisation and Out-of-Distribution Analysis, 48–52. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97281-3_6.

10. Wang, Jiahao, Hao Wang, Zhuojun Dong, Hua Yang, Yuting Yang, Qianyue Bao, Fang Liu, and LiCheng Jiao. "A Three-Stage Model Fusion Method for Out-of-Distribution Generalization." In Lecture Notes in Computer Science, 488–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_33.


Conference papers on the topic "Out-of-distribution generalization"

1. Wang, Fawu, Ruizhe Li, Kang Zhang, Xia Yuan, and Chunxia Zhao. "Data Distribution Transfer for Out Of Distribution Generalization." In 2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2022. http://dx.doi.org/10.1109/mmsp55362.2022.9949199.

2. Wang, Ruoyu, Mingyang Yi, Zhitang Chen, and Shengyu Zhu. "Out-of-distribution Generalization with Causal Invariant Transformations." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00047.

3. Deng, Xun, Wenjie Wang, Fuli Feng, Hanwang Zhang, Xiangnan He, and Yong Liao. "Counterfactual Active Learning for Out-of-Distribution Generalization." In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.acl-long.636.

4. Zhang, Xingxuan, Peng Cui, Renzhe Xu, Linjun Zhou, Yue He, and Zheyan Shen. "Deep Stable Learning for Out-Of-Distribution Generalization." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00533.

5. Wu, Qitian, Fan Nie, Chenxiao Yang, Tianyi Bao, and Junchi Yan. "Graph Out-of-Distribution Generalization via Causal Intervention." In WWW '24: The ACM Web Conference 2024. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3589334.3645604.

6. Kamani, Mohammad Mahdi, Sadegh Farhang, Mehrdad Mahdavi, and James Z. Wang. "Targeted Data-driven Regularization for Out-of-Distribution Generalization." In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394486.3403131.

7. Wang, Xin, Peng Cui, and Wenwu Zhu. "Out-of-distribution Generalization and Its Applications for Multimedia." In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3478876.

8. Miao, Qiaowei, Junkun Yuan, Shengyu Zhang, Fei Wu, and Kun Kuang. "Domaindiff: Boost Out-of-Distribution Generalization with Synthetic Data." In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10446788.

9. Bai, Haoyue, Fengwei Zhou, Lanqing Hong, Nanyang Ye, S. H. Gary Chan, and Zhenguo Li. "NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00821.

10. Sun, Yihong, Adam Kortylewski, and Alan Yuille. "Amodal Segmentation through Out-of-Task and Out-of-Distribution Generalization with a Bayesian Model." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00128.
