Selected scientific literature on the topic "Out-of-distribution generalization"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Out-of-distribution generalization".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is included in the metadata.

Journal articles on the topic "Out-of-distribution generalization"

1

Ye, Nanyang, Lin Zhu, Jia Wang, Zhaoyu Zeng, Jiayao Shao, Chensheng Peng, Bikang Pan, Kaican Li, and Jun Zhu. "Certifiable Out-of-Distribution Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10927–35. http://dx.doi.org/10.1609/aaai.v37i9.26295.

Abstract:
Machine learning methods suffer from test-time performance degradation when faced with out-of-distribution (OoD) data whose distribution is not necessarily the same as the training data distribution. Although a plethora of algorithms have been proposed to mitigate this issue, it has been demonstrated that achieving better performance than ERM simultaneously on different types of distributional shift datasets is challenging for existing approaches. Besides, it is unknown how and to what extent these methods work on any OoD datum without theoretical guarantees. In this paper, we propose a certifiable out-of-distribution generalization method that provides provable OoD generalization performance guarantees via a functional optimization framework leveraging random distributions and max-margin learning for each input datum. With this approach, the proposed algorithmic scheme can provide certified accuracy for each input datum's prediction in the semantic space and achieves better performance simultaneously on OoD datasets dominated by correlation shifts or diversity shifts. Our code is available at https://github.com/ZlatanWilliams/StochasticDisturbanceLearning.
2

Yuan, Lingxiao, Harold S. Park, and Emma Lejeune. "Towards out of distribution generalization for problems in mechanics". Computer Methods in Applied Mechanics and Engineering 400 (October 2022): 115569. http://dx.doi.org/10.1016/j.cma.2022.115569.

3

Liu, Anji, Hongming Xu, Guy Van den Broeck, and Yitao Liang. "Out-of-Distribution Generalization by Neural-Symbolic Joint Training". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 12252–59. http://dx.doi.org/10.1609/aaai.v37i10.26444.

Abstract:
This paper develops a novel methodology to simultaneously learn a neural network and extract generalized logic rules. Different from prior neural-symbolic methods that require background knowledge and candidate logical rules to be provided, we aim to induce task semantics with minimal priors. This is achieved by a two-step learning framework that iterates between optimizing neural predictions of task labels and searching for a more accurate representation of the hidden task semantics. Notably, supervision works in both directions: (partially) induced task semantics guide the learning of the neural network and induced neural predictions admit an improved semantic representation. We demonstrate that our proposed framework is capable of achieving superior out-of-distribution generalization performance on two tasks: (i) learning multi-digit addition, where it is trained on short sequences of digits and tested on long sequences of digits; (ii) predicting the optimal action in the Tower of Hanoi, where the model is challenged to discover a policy independent of the number of disks in the puzzle.
4

Yu, Yemin, Luotian Yuan, Ying Wei, Hanyu Gao, Fei Wu, Zhihua Wang, and Xinhai Ye. "RetroOOD: Understanding Out-of-Distribution Generalization in Retrosynthesis Prediction". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 374–82. http://dx.doi.org/10.1609/aaai.v38i1.27791.

Abstract:
Machine learning-assisted retrosynthesis prediction models have been gaining widespread adoption, though their performance often degrades significantly when deployed in real-world applications involving out-of-distribution (OOD) molecules or reactions. Despite steady progress on standard benchmarks, our understanding of existing retrosynthesis prediction models under distribution shifts remains limited. To this end, we first formally characterize two types of distribution shifts in retrosynthesis prediction and construct two groups of benchmark datasets. Next, through comprehensive experiments, we systematically compare state-of-the-art retrosynthesis prediction models on the two groups of benchmarks, revealing the limitations of previous in-distribution evaluation and re-examining the advantages of each model. More remarkably, motivated by these empirical insights, we propose two model-agnostic techniques that can improve the OOD generalization of arbitrary off-the-shelf retrosynthesis prediction algorithms. Our preliminary experiments show their high potential, with an average performance improvement of 4.6%, and the established benchmarks serve as a foothold for further retrosynthesis prediction research towards OOD generalization.
5

Zhu, Lin, Xinbing Wang, Chenghu Zhou, and Nanyang Ye. "Bayesian Cross-Modal Alignment Learning for Few-Shot Out-of-Distribution Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11461–69. http://dx.doi.org/10.1609/aaai.v37i9.26355.

Abstract:
Recent advances in large pre-trained models showed promising results in few-shot learning. However, their generalization ability on two-dimensional Out-of-Distribution (OoD) data, i.e., correlation shift and diversity shift, has not been thoroughly investigated. Research has shown that even with a significant amount of training data, few methods can achieve better performance than the standard empirical risk minimization method (ERM) in OoD generalization. This few-shot OoD generalization dilemma emerges as a challenging direction in deep neural network generalization research, where the performance suffers from overfitting on few-shot examples and OoD generalization errors. In this paper, leveraging a broader supervision source, we explore a novel Bayesian cross-modal image-text alignment learning method (Bayes-CAL) to address this issue. Specifically, the model is designed so that only text representations are fine-tuned, via a Bayesian modelling approach with gradient orthogonalization loss and invariant risk minimization (IRM) loss. The Bayesian approach is essentially introduced to avoid overfitting the base classes observed during training and improve generalization to broader unseen classes. The dedicated loss is introduced to achieve better image-text alignment by disentangling the causal and non-causal parts of image features. Numerical experiments demonstrate that Bayes-CAL achieves state-of-the-art OoD generalization performance on two-dimensional distribution shifts. Moreover, compared with CLIP-like models, Bayes-CAL yields more stable generalization performance on unseen classes. Our code is available at https://github.com/LinLLLL/BayesCAL.
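The gradient orthogonalization mentioned in this abstract is, at its core, a projection of one gradient onto the subspace orthogonal to another direction. A minimal NumPy sketch of that projection step (an illustration of the general operation only, not Bayes-CAL's actual loss; the vectors are made up):

```python
import numpy as np

def orthogonalize(g, h):
    """Project gradient g onto the subspace orthogonal to h
    (a single Gram-Schmidt step), so an update along the result
    has no component in h's direction."""
    h_norm_sq = np.dot(h, h)
    if h_norm_sq == 0.0:
        return g
    return g - (np.dot(g, h) / h_norm_sq) * h

g = np.array([3.0, 1.0])   # hypothetical task gradient
h = np.array([1.0, 0.0])   # hypothetical direction to stay orthogonal to
g_perp = orthogonalize(g, h)
print(g_perp)              # component along h removed
print(np.dot(g_perp, h))   # zero: the projection is orthogonal to h
```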
6

Lavda, Frantzeska, and Alexandros Kalousis. "Semi-Supervised Variational Autoencoders for Out-of-Distribution Generation". Entropy 25, no. 12 (December 14, 2023): 1659. http://dx.doi.org/10.3390/e25121659.

Abstract:
Humans are able to quickly adapt to new situations, learn effectively with limited data, and create unique combinations of basic concepts. In contrast, generalizing to out-of-distribution (OOD) data and achieving combinatorial generalization are fundamental challenges for machine learning models. Moreover, obtaining high-quality labeled examples can be very time-consuming and expensive, particularly when specialized skills are required for labeling. To address these issues, we propose BtVAE, a method that utilizes conditional VAE models to achieve combinatorial generalization in certain scenarios and consequently to generate OOD data in a semi-supervised manner. Unlike previous approaches that use new factors of variation during testing, our method uses only existing attributes from the training data, but in ways that were not seen during training (e.g., small objects of a specific shape during training and large objects of the same shape during testing).
7

Su, Hang, and Wei Wang. "An Out-of-Distribution Generalization Framework Based on Variational Backdoor Adjustment". Mathematics 12, no. 1 (December 26, 2023): 85. http://dx.doi.org/10.3390/math12010085.

Abstract:
In practical applications, learning models that perform well even when the data distribution differs from the training set is essential and meaningful. Such problems are often referred to as out-of-distribution (OOD) generalization problems. In this paper, we propose a method for OOD generalization based on causal inference. Unlike prevalent OOD generalization methods, our approach does not require the environment labels associated with the data in the training set. We analyze the causes of distributional shifts in data from a causal modeling perspective and then propose a backdoor adjustment method based on variational inference. Finally, we construct a dedicated network structure to simulate the variational inference process. The proposed variational backdoor adjustment (VBA) framework can be combined with any mainstream backbone network. In addition to theoretical derivation, we conduct experiments on different datasets to demonstrate that our method performs well in prediction accuracy and generalization gaps. Furthermore, by comparing the VBA framework with other mainstream OOD methods, we show that VBA performs better than mainstream methods.
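The classical backdoor adjustment behind this framework replaces the observational conditional with an interventional one by marginalizing over a confounder z: P(y | do(x)) = Σ_z P(z) P(y | x, z). A toy discrete sketch with made-up numbers (the paper approximates this sum variationally with a network rather than enumerating it):

```python
import numpy as np

# Backdoor adjustment with a single binary confounder z:
# P(y=1 | do(X=x)) = sum_z P(z) * P(y=1 | x, z)
p_z = np.array([0.7, 0.3])                # P(z=0), P(z=1)
p_y_given_xz = np.array([[0.2, 0.6],      # P(y=1 | x=0, z)
                         [0.5, 0.9]])     # P(y=1 | x=1, z)

def p_y_do_x(x):
    """Interventional probability P(y=1 | do(X=x)) by summing out z."""
    return float(np.sum(p_z * p_y_given_xz[x]))

print(p_y_do_x(0))  # 0.7*0.2 + 0.3*0.6 = 0.32
print(p_y_do_x(1))  # 0.7*0.5 + 0.3*0.9 = 0.62
```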
8

Cao, Linfeng, Aofan Jiang, Wei Li, Huaying Wu, and Nanyang Ye. "OoDHDR-Codec: Out-of-Distribution Generalization for HDR Image Compression". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 158–66. http://dx.doi.org/10.1609/aaai.v36i1.19890.

Abstract:
Recently, deep learning has been proven to be a promising approach in standard dynamic range (SDR) image compression. However, due to the wide luminance distribution of high dynamic range (HDR) images and the lack of large standard datasets, developing a deep model for HDR image compression is much more challenging. To tackle this issue, we view HDR data as distributional shifts of SDR data, so that HDR image compression can be modeled as an out-of-distribution generalization (OoD) problem. Herein, we propose a novel out-of-distribution (OoD) HDR image compression framework (OoDHDR-codec). It learns the general representation across HDR and SDR environments, and allows the model to be trained effectively using a large set of SDR datasets supplemented with much fewer HDR samples. Specifically, OoDHDR-codec consists of two branches to process the data from the two environments. The SDR branch is a standard blackbox network. For the HDR branch, we develop a hybrid system that models luminance masking and tone mapping with white-box modules and performs content compression with black-box neural networks. To improve the generalization from SDR training data to HDR data, we introduce an invariance regularization term to learn the common representation for both SDR and HDR compression. Extensive experimental results show that the OoDHDR codec achieves strongly competitive in-distribution performance and state-of-the-art OoD performance. To the best of our knowledge, our proposed approach is the first work to model HDR compression as an OoD generalization problem, and our OoD generalization algorithmic framework can be applied to any deep compression model in addition to the network architectural choice demonstrated in the paper. Code available at https://github.com/caolinfeng/OoDHDR-codec.
9

Deng, Bin, and Kui Jia. "Counterfactual Supervision-Based Information Bottleneck for Out-of-Distribution Generalization". Entropy 25, no. 2 (January 18, 2023): 193. http://dx.doi.org/10.3390/e25020193.

Abstract:
Learning invariant (causal) features for out-of-distribution (OOD) generalization has attracted extensive attention recently, and among the proposals, invariant risk minimization (IRM) is a notable solution. In spite of its theoretical promise for linear regression, the challenges of using IRM in linear classification problems remain. By introducing the information bottleneck (IB) principle into the learning of IRM, the IB-IRM approach has demonstrated its power to solve these challenges. In this paper, we further improve IB-IRM in two respects. First, we show that the key assumption of support overlap of invariant features used in IB-IRM guarantees OOD generalization, and that the optimal solution can still be achieved without it. Second, we illustrate two failure modes in which IB-IRM (and IRM) can fail to learn the invariant features, and to address such failures, we propose a Counterfactual Supervision-based Information Bottleneck (CSIB) learning algorithm that recovers the invariant features. By requiring counterfactual inference, CSIB works even when accessing data from a single environment. Empirical experiments on several datasets verify our theoretical results.
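The IRM objective that IB-IRM and CSIB build on penalizes predictors that are not simultaneously optimal in every environment. A minimal sketch of the IRMv1-style penalty for squared error, using the analytic gradient with respect to a dummy scale on the predictions so plain NumPy suffices (an illustration of IRM in general, not of CSIB itself; data are made up):

```python
import numpy as np

def irm_penalty(preds, targets):
    """IRMv1-style penalty for one environment (after Arjovsky et al., 2019):
    the squared gradient of the squared-error risk with respect to a scalar
    multiplier s on the predictions, evaluated at s = 1. A predictor that is
    simultaneously optimal in every environment drives this to zero.
    d/ds mean((s*preds - targets)^2) at s=1 equals mean(2*(preds-targets)*preds)."""
    grad = np.mean(2.0 * (preds - targets) * preds)
    return grad ** 2

# Two toy environments: the predictor is exactly right in env A, biased in env B.
preds_a, targets_a = np.array([1.0, -1.0]), np.array([1.0, -1.0])
preds_b, targets_b = np.array([1.0, -1.0]), np.array([0.5, -0.5])

print(irm_penalty(preds_a, targets_a))  # 0.0: already optimal in env A
print(irm_penalty(preds_b, targets_b))  # 1.0: rescaling preds would lower env B's risk
```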
10

Ashok, Arjun, Chaitanya Devaguptapu, and Vineeth N. Balasubramanian. "Learning Modular Structures That Generalize Out-of-Distribution (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12905–6. http://dx.doi.org/10.1609/aaai.v36i11.21589.

Abstract:
Out-of-distribution (O.O.D.) generalization remains a key challenge for real-world machine learning systems. We describe a method for O.O.D. generalization that, through training, encourages models to preserve only those features in the network that are reused well across multiple training domains. Our method combines two complementary neuron-level regularizers with a probabilistic differentiable binary mask over the network to extract a modular sub-network that achieves better O.O.D. performance than the original network. Preliminary evaluation on two benchmark datasets corroborates the promise of our method.
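A probabilistic differentiable binary mask of the kind described can be sketched as a sigmoid over per-weight logits, thresholded at evaluation time to extract the sub-network. The parameterization below is an assumption for illustration, not the authors' exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one learnable logit per weight. The sigmoid gives a
# keep-probability that stays differentiable during training; thresholding
# it yields the hard binary mask that defines the extracted sub-network.
logits = rng.normal(size=(4, 4))
weights = rng.normal(size=(4, 4))

soft_mask = 1.0 / (1.0 + np.exp(-logits))     # differentiable, values in (0, 1)
hard_mask = (soft_mask > 0.5).astype(float)   # sub-network extraction at eval time

masked_weights = hard_mask * weights          # pruned weight matrix
kept = hard_mask.sum() / hard_mask.size
print(f"fraction of weights kept: {kept:.2f}")
```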

Theses / dissertations on the topic "Out-of-distribution generalization"

1

Kirchmeyer, Matthieu. "Out-of-distribution Generalization in Deep Learning : Classification and Spatiotemporal Forecasting". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS080.

Abstract:
Deep learning has emerged as a powerful approach for modelling static data like images and, more recently, for modelling dynamical systems like those underlying time series, videos, or physical phenomena. Yet, neural networks were observed not to generalize well outside the training distribution, in other words out-of-distribution. This lack of generalization limits the deployment of deep learning in autonomous systems or online production pipelines, which are faced with constantly evolving data. In this thesis, we design new strategies for out-of-distribution generalization. These strategies handle the specific challenges posed by two main application tasks: classification of static data and spatiotemporal dynamics forecasting. The first two parts of this thesis consider the classification problem. We first investigate how we can efficiently leverage some observed training data from a target domain for adaptation. We then explore how to generalize to unobserved domains without access to such data. The last part of this thesis handles various generalization problems specific to spatiotemporal forecasting.

Books on the topic "Out-of-distribution generalization"

1

Zabrodin, Anton. Financial applications of random matrix theory: a short review. Edited by Gernot Akemann, Jinho Baik, and Philippe Di Francesco. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198744191.013.40.

Abstract:
This article reviews some applications of random matrix theory (RMT) in the context of financial markets and econometric models, with emphasis on various theoretical results (for example, the Marčenko-Pastur spectrum and its various generalizations, random singular value decomposition, free matrices, largest eigenvalue statistics) as well as some concrete applications to portfolio optimization and out-of-sample risk estimation. The discussion begins with an overview of principal component analysis (PCA) of the correlation matrix, followed by an analysis of return statistics and portfolio theory. In particular, the article considers single asset returns, multivariate distribution of returns, risk and portfolio theory, and nonequal time correlations and more general rectangular correlation matrices. It also presents several RMT results on the bulk density of states that can be obtained using the concept of matrix freeness before concluding with a description of empirical correlation matrices of stock returns.
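The Marčenko–Pastur spectrum mentioned in this abstract predicts that a pure-noise sample correlation matrix has eigenvalues confined to the bulk [(1 − √q)², (1 + √q)²] with q = N/T, so eigenvalues outside this band signal genuine structure. A quick NumPy check of the edges on simulated i.i.d. "returns" (toy sizes, assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 1000                  # assets x observations
q = N / T
lam_minus = (1 - np.sqrt(q)) ** 2 # lower bulk edge
lam_plus = (1 + np.sqrt(q)) ** 2  # upper bulk edge

X = rng.standard_normal((N, T))   # i.i.d. returns: no true correlations
C = X @ X.T / T                   # sample covariance/correlation matrix
eigs = np.linalg.eigvalsh(C)

print(f"MP support: [{lam_minus:.3f}, {lam_plus:.3f}]")
print(f"empirical eigenvalue range: [{eigs.min():.3f}, {eigs.max():.3f}]")
```

For finite N and T the extreme eigenvalues fluctuate slightly around the predicted edges, which is why empirical filters usually allow a small margin beyond λ±.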
2

James, Philip. The Biology of Urban Environments. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198827238.001.0001.

Abstract:
Urban environments are characterized by the density of buildings and elements of a number of infrastructures that support urban residents in their daily life. These built elements and the activities that take place within towns and cities create a distinctive climate and increase air, water, and soil pollution. Within this context the elements of the natural environment that either are residual areas representative of the pre-urbanized area or are created by people contain distinctive floral and faunal communities that do not exist in the wild. The diverse prions, viruses, micro-organisms, plants, and animals that live there for all or part of their life cycle and their relationships with each other and with humans are illustrated with examples of diseases, parasites, and pests. Plants and animals are found inside as well as outside buildings. The roles of plants inside buildings and of domestic and companion animals are evaluated. Temporal and spatial distribution patterns of plants and animals living outside buildings are set out and generalizations are drawn, while exceptions are also discussed. The strategies used and adaptions (genotypic, phenotypic, and behavioural) adopted by plants and animals in face of the challenges presented by urban environments are explained. The final two chapters contain discussions of the impacts of urban environments on human biology and how humans might change these environments in order to address the illnesses that are characteristic of urbanites in the early twenty-first century.

Book chapters on the topic "Out-of-distribution generalization"

1

Chen, Zining, Weiqiu Wang, Zhicheng Zhao, Aidong Men, and Hong Chen. "Bag of Tricks for Out-of-Distribution Generalization". In Lecture Notes in Computer Science, 465–76. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_31.

2

Moruzzi, Caterina. "Toward Out-of-Distribution Generalization Through Inductive Biases". In Studies in Applied Philosophy, Epistemology and Rational Ethics, 57–66. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09153-7_5.

3

Li, Dongqi, Zhu Teng, Qirui Li, and Ziyin Wang. "Sharpness-Aware Minimization for Out-of-Distribution Generalization". In Communications in Computer and Information Science, 555–67. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8126-7_43.

4

Wang, Fawu, Kang Zhang, Zhengyu Liu, Xia Yuan, and Chunxia Zhao. "Deep Relevant Feature Focusing for Out-of-Distribution Generalization". In Pattern Recognition and Computer Vision, 245–53. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18907-4_19.

5

Wang, Yuqing, Xiangxian Li, Zhuang Qi, Jingyu Li, Xuelong Li, Xiangxu Meng, and Lei Meng. "Meta-Causal Feature Learning for Out-of-Distribution Generalization". In Lecture Notes in Computer Science, 530–45. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_36.

6

Yu, Haoran, Baodi Liu, Yingjie Wang, Kai Zhang, Dapeng Tao, and Weifeng Liu. "A Stable Vision Transformer for Out-of-Distribution Generalization". In Pattern Recognition and Computer Vision, 328–39. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8543-2_27.

7

Zhang, Xingxuan, Yue He, Tan Wang, Jiaxin Qi, Han Yu, Zimu Wang, Jie Peng et al. "NICO Challenge: Out-of-Distribution Generalization for Image Recognition Challenges". In Lecture Notes in Computer Science, 433–50. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_29.

8

Long, Xi, Ying Cheng, Xiao Mu, Lian Liu, and Jingxin Liu. "Domain Adaptive Cascade R-CNN for MItosis DOmain Generalization (MIDOG) Challenge". In Biomedical Image Registration, Domain Generalisation and Out-of-Distribution Analysis, 73–76. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97281-3_11.

9

Jahanifar, Mostafa, Adam Shepard, Neda Zamanitajeddin, R. M. Saad Bashir, Mohsin Bilal, Syed Ali Khurram, Fayyaz Minhas, and Nasir Rajpoot. "Stain-Robust Mitotic Figure Detection for the Mitosis Domain Generalization Challenge". In Biomedical Image Registration, Domain Generalisation and Out-of-Distribution Analysis, 48–52. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97281-3_6.

10

Wang, Jiahao, Hao Wang, Zhuojun Dong, Hua Yang, Yuting Yang, Qianyue Bao, Fang Liu, and LiCheng Jiao. "A Three-Stage Model Fusion Method for Out-of-Distribution Generalization". In Lecture Notes in Computer Science, 488–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25075-0_33.


Conference papers on the topic "Out-of-distribution generalization"

1

Wang, Fawu, Ruizhe Li, Kang Zhang, Xia Yuan, and Chunxia Zhao. "Data Distribution Transfer for Out Of Distribution Generalization". In 2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2022. http://dx.doi.org/10.1109/mmsp55362.2022.9949199.

2

Wang, Ruoyu, Mingyang Yi, Zhitang Chen, and Shengyu Zhu. "Out-of-distribution Generalization with Causal Invariant Transformations". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00047.

3

Deng, Xun, Wenjie Wang, Fuli Feng, Hanwang Zhang, Xiangnan He, and Yong Liao. "Counterfactual Active Learning for Out-of-Distribution Generalization". In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.acl-long.636.

4

Zhang, Xingxuan, Peng Cui, Renzhe Xu, Linjun Zhou, Yue He, and Zheyan Shen. "Deep Stable Learning for Out-Of-Distribution Generalization". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00533.

5

Wu, Qitian, Fan Nie, Chenxiao Yang, Tianyi Bao, and Junchi Yan. "Graph Out-of-Distribution Generalization via Causal Intervention". In WWW '24: The ACM Web Conference 2024. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3589334.3645604.

6

Kamani, Mohammad Mahdi, Sadegh Farhang, Mehrdad Mahdavi, and James Z. Wang. "Targeted Data-driven Regularization for Out-of-Distribution Generalization". In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394486.3403131.

7

Wang, Xin, Peng Cui, and Wenwu Zhu. "Out-of-distribution Generalization and Its Applications for Multimedia". In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3478876.

8

Miao, Qiaowei, Junkun Yuan, Shengyu Zhang, Fei Wu, and Kun Kuang. "Domaindiff: Boost out-of-Distribution Generalization with Synthetic Data". In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10446788.

9

Bai, Haoyue, Fengwei Zhou, Lanqing Hong, Nanyang Ye, S. H. Gary Chan, and Zhenguo Li. "NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization". In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00821.

10

Sun, Yihong, Adam Kortylewski, and Alan Yuille. "Amodal Segmentation through Out-of-Task and Out-of-Distribution Generalization with a Bayesian Model". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00128.
