Selected scientific literature on the topic "OOD generalization"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the list of current articles, books, theses, conference papers, and other scholarly sources on the topic "OOD generalization".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "OOD generalization"

1

Ye, Nanyang, Lin Zhu, Jia Wang, Zhaoyu Zeng, Jiayao Shao, Chensheng Peng, Bikang Pan, Kaican Li, and Jun Zhu. "Certifiable Out-of-Distribution Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10927–35. http://dx.doi.org/10.1609/aaai.v37i9.26295.

Full text
Abstract:
Machine learning methods suffer from test-time performance degeneration when faced with out-of-distribution (OoD) data whose distribution is not necessarily the same as training data distribution. Although a plethora of algorithms have been proposed to mitigate this issue, it has been demonstrated that achieving better performance than ERM simultaneously on different types of distributional shift datasets is challenging for existing approaches. Besides, it is unknown how and to what extent these methods work on any OoD datum without theoretical guarantees. In this paper, we propose a certifiable out-of-distribution generalization method that provides provable OoD generalization performance guarantees via a functional optimization framework leveraging random distributions and max-margin learning for each input datum. With this approach, the proposed algorithmic scheme can provide certified accuracy for each input datum's prediction on the semantic space and achieves better performance simultaneously on OoD datasets dominated by correlation shifts or diversity shifts. Our code is available at https://github.com/ZlatanWilliams/StochasticDisturbanceLearning.
2

Gwon, Kyungpil, and Joonhyuk Yoo. "Out-of-Distribution (OOD) Detection and Generalization Improved by Augmenting Adversarial Mixup Samples". Electronics 12, no. 6 (March 16, 2023): 1421. http://dx.doi.org/10.3390/electronics12061421.

Full text
Abstract:
Deep neural network (DNN) models are usually built based on the i.i.d. (independent and identically distributed), also known as in-distribution (ID), assumption on the training samples and test data. However, when models are deployed in a real-world scenario with some distributional shifts, test data can be out-of-distribution (OOD) and both OOD detection and OOD generalization should be simultaneously addressed to ensure the reliability and safety of applied AI systems. Most existing OOD detectors pursue these two goals separately, and therefore, are sensitive to covariate shift rather than semantic shift. To alleviate this problem, this paper proposes a novel adversarial mixup (AM) training method which simply executes OOD data augmentation to synthesize differently distributed data and designs a new AM loss function to learn how to handle OOD data. The proposed AM generates OOD samples being significantly diverged from the support of training data distribution but not completely disjoint to increase the generalization capability of the OOD detector. In addition, the AM is combined with a distributional-distance-aware OOD detector at inference to detect semantic OOD samples more efficiently while being robust to covariate shift due to data tampering. Experimental evaluation validates that the designed AM is effective on both OOD detection and OOD generalization tasks compared to previous OOD detectors and data mixup methods.
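The mixup mechanism summarized above is easy to picture with a toy sketch. The Python fragment below (NumPy only) synthesizes pseudo-OOD samples by mixing pairs of differently labelled in-distribution samples; it illustrates the general idea only, not the paper's adversarial AM procedure or its loss, and the function name and Beta(alpha, alpha) sampling are assumptions.

import numpy as np

def mixup_pseudo_ood(x, y, alpha=0.4, rng=None):
    # x: (N, D) in-distribution features, y: (N,) integer labels
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.permutation(len(x))
    differ = y != y[idx]                      # mix only across different classes
    lam = rng.beta(alpha, alpha, size=int(differ.sum()))[:, None]
    x_mix = lam * x[differ] + (1.0 - lam) * x[idx][differ]
    return x_mix                              # feed to the detector as synthetic "OOD" samples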
3

Zhu, Lin, Xinbing Wang, Chenghu Zhou, and Nanyang Ye. "Bayesian Cross-Modal Alignment Learning for Few-Shot Out-of-Distribution Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11461–69. http://dx.doi.org/10.1609/aaai.v37i9.26355.

Full text
Abstract:
Recent advances in large pre-trained models showed promising results in few-shot learning. However, their generalization ability on two-dimensional Out-of-Distribution (OoD) data, i.e., correlation shift and diversity shift, has not been thoroughly investigated. Research has shown that even with a significant amount of training data, few methods can achieve better performance than the standard empirical risk minimization method (ERM) in OoD generalization. This few-shot OoD generalization dilemma emerges as a challenging direction in deep neural network generalization research, where the performance suffers from overfitting on few-shot examples and OoD generalization errors. In this paper, leveraging a broader supervision source, we explore a novel Bayesian cross-modal image-text alignment learning method (Bayes-CAL) to address this issue. Specifically, the model is designed so that only text representations are fine-tuned via a Bayesian modelling approach with gradient orthogonalization loss and invariant risk minimization (IRM) loss. The Bayesian approach is essentially introduced to avoid overfitting the base classes observed during training and improve generalization to broader unseen classes. The dedicated loss is introduced to achieve better image-text alignment by disentangling the causal and non-causal parts of image features. Numerical experiments demonstrate that Bayes-CAL achieved state-of-the-art OoD generalization performances on two-dimensional distribution shifts. Moreover, compared with CLIP-like models, Bayes-CAL yields more stable generalization performances on unseen classes. Our code is available at https://github.com/LinLLLL/BayesCAL.
4

Liao, Yufan, Qi Wu, and Xing Yan. "Invariant Random Forest: Tree-Based Model Solution for OOD Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13772–81. http://dx.doi.org/10.1609/aaai.v38i12.29283.

Full text
Abstract:
Out-Of-Distribution (OOD) generalization is an essential topic in machine learning. However, recent research is only focusing on the corresponding methods for neural networks. This paper introduces a novel and effective solution for OOD generalization of decision tree models, named Invariant Decision Tree (IDT). IDT enforces a penalty term with regard to the unstable/varying behavior of a split across different environments during the growth of the tree. Its ensemble version, the Invariant Random Forest (IRF), is constructed. Our proposed method is motivated by a theoretical result under mild conditions, and validated by numerical tests with both synthetic and real datasets. The superior performance compared to non-OOD tree models implies that considering OOD generalization for tree models is absolutely necessary and should be given more attention.
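To make the "penalize unstable splits" idea concrete, here is a small NumPy sketch that scores a candidate split by its pooled variance reduction minus a penalty on how much that reduction varies across training environments. It only illustrates the flavor of criterion described above; the IDT/IRF papers define their own penalty term and theoretical conditions, and every name and constant here is an assumption.

import numpy as np

def variance_reduction(y, left):
    # impurity gain of splitting target values y by the boolean mask 'left'
    if left.sum() == 0 or (~left).sum() == 0:
        return 0.0
    p = left.mean()
    return y.var() - (p * y[left].var() + (1 - p) * y[~left].var())

def penalized_split_score(feature, y, env, threshold, penalty=1.0):
    left = feature <= threshold
    pooled_gain = variance_reduction(y, left)
    env_gains = [variance_reduction(y[env == e], left[env == e]) for e in np.unique(env)]
    instability = np.var(env_gains)           # how inconsistent the split is across environments
    return pooled_gain - penalty * instability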
5

Bai, Haoyue, Rui Sun, Lanqing Hong, Fengwei Zhou, Nanyang Ye, Han-Jia Ye, S. H. Gary Chan, and Zhenguo Li. "DecAug: Out-of-Distribution Generalization via Decomposed Feature Representation and Semantic Augmentation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6705–13. http://dx.doi.org/10.1609/aaai.v35i8.16829.

Full text
Abstract:
While deep learning demonstrates its strong ability to handle independent and identically distributed (IID) data, it often suffers from out-of-distribution (OoD) generalization, where the test data come from another distribution (w.r.t. the training one). Designing a general OoD generalization framework for a wide range of applications is challenging, mainly due to different kinds of distribution shifts in the real world, such as the shift across domains or the extrapolation of correlation. Most of the previous approaches can only solve one specific distribution shift, leading to unsatisfactory performance when applied to various OoD benchmarks. In this work, we propose DecAug, a novel decomposed feature representation and semantic augmentation approach for OoD generalization. Specifically, DecAug disentangles the category-related and context-related features by orthogonalizing the two gradients (w.r.t. intermediate features) of losses for predicting category and context labels, where category-related features contain causal information of the target object, while context-related features cause distribution shifts between training and test data. Furthermore, we perform gradient-based augmentation on context-related features to improve the robustness of learned representations. Experimental results show that DecAug outperforms other state-of-the-art methods on various OoD datasets, which is among the very few methods that can deal with different types of OoD generalization challenges.
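The gradient-orthogonalization step at the heart of this description can be sketched in a few lines: remove from the context-loss gradient its component along the category-loss gradient, so the two supervision signals act on (locally) orthogonal directions of the intermediate features. This is a plain linear-algebra illustration under assumed flattened gradients, not the released DecAug code.

import numpy as np

def orthogonalize(grad_context, grad_category, eps=1e-12):
    # project grad_context onto the orthogonal complement of grad_category
    g_ctx = grad_context.ravel()
    g_cat = grad_category.ravel()
    proj = (g_ctx @ g_cat) / (g_cat @ g_cat + eps) * g_cat
    return (g_ctx - proj).reshape(grad_context.shape)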
6

Shao, Youjia, Shaohui Wang, and Wencang Zhao. "A Causality-Aware Perspective on Domain Generalization via Domain Intervention". Electronics 13, no. 10 (May 11, 2024): 1891. http://dx.doi.org/10.3390/electronics13101891.

Full text
Abstract:
Most mainstream statistical models will achieve poor performance in Out-Of-Distribution (OOD) generalization. This is because these models tend to learn the spurious correlation between data and will collapse when the domain shift exists. If we want artificial intelligence (AI) to make great strides in real life, the current focus needs to be shifted to the OOD problem of deep learning models to explore the generalization ability under unknown environments. Domain generalization (DG) focusing on OOD generalization is proposed, which is able to transfer the knowledge extracted from multiple source domains to the unseen target domain. We are inspired by intuitive thinking about human intelligence relying on causality. Unlike relying on plain probability correlations, we apply a novel causal perspective to DG, which can improve the OOD generalization ability of the trained model by mining the invariant causal mechanism. Firstly, we construct the inclusive causal graph for most DG tasks through stepwise causal analysis based on the data generation process in the natural environment and introduce the reasonable Structural Causal Model (SCM). Secondly, based on counterfactual inference, causal semantic representation learning with domain intervention (CSRDN) is proposed to train a robust model. In this regard, we generate counterfactual representations for different domain interventions, which can help the model learn causal semantics and develop generalization capacity. At the same time, we seek the Pareto optimal solution in the optimization process based on the loss function to obtain a more advanced training model. Extensive experimental results of Rotated MNIST and PACS as well as VLCS datasets verify the effectiveness of the proposed CSRDN. The proposed method can integrate causal inference into domain generalization by enhancing interpretability and applicability and brings a boost to challenging OOD generalization problems.
7

Su, Hang, and Wei Wang. "An Out-of-Distribution Generalization Framework Based on Variational Backdoor Adjustment". Mathematics 12, no. 1 (December 26, 2023): 85. http://dx.doi.org/10.3390/math12010085.

Full text
Abstract:
In practical applications, learning models that can perform well even when the data distribution is different from the training set are essential and meaningful. Such problems are often referred to as out-of-distribution (OOD) generalization problems. In this paper, we propose a method for OOD generalization based on causal inference. Unlike the prevalent OOD generalization methods, our approach does not require the environment labels associated with the data in the training set. We analyze the causes of distributional shifts in data from a causal modeling perspective and then propose a backdoor adjustment method based on variational inference. Finally, we constructed a unique network structure to simulate the variational inference process. The proposed variational backdoor adjustment (VBA) framework can be combined with any mainstream backbone network. In addition to theoretical derivation, we conduct experiments on different datasets to demonstrate that our method performs well in prediction accuracy and generalization gaps. Furthermore, by comparing the VBA framework with other mainstream OOD methods, we show that VBA performs better than mainstream methods.
8

Zhang, Lily H., and Rajesh Ranganath. "Robustness to Spurious Correlations Improves Semantic Out-of-Distribution Detection". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15305–12. http://dx.doi.org/10.1609/aaai.v37i12.26785.

Full text
Abstract:
Methods which utilize the outputs or feature representations of predictive models have emerged as promising approaches for out-of-distribution (OOD) detection of image inputs. However, as demonstrated in previous work, these methods struggle to detect OOD inputs that share nuisance values (e.g. background) with in-distribution inputs. The detection of shared-nuisance OOD (SN-OOD) inputs is particularly relevant in real-world applications, as anomalies and in-distribution inputs tend to be captured in the same settings during deployment. In this work, we provide a possible explanation for these failures and propose nuisance-aware OOD detection to address them. Nuisance-aware OOD detection substitutes a classifier trained via Empirical Risk Minimization (ERM) with one that 1. approximates a distribution where the nuisance-label relationship is broken and 2. yields representations that are independent of the nuisance under this distribution, both marginally and conditioned on the label. We can train a classifier to achieve these objectives using Nuisance-Randomized Distillation (NuRD), an algorithm developed for OOD generalization under spurious correlations. Output- and feature-based nuisance-aware OOD detection perform substantially better than their original counterparts, succeeding even when detection based on domain generalization algorithms fails to improve performance.
9

Yu, Runpeng, Hong Zhu, Kaican Li, Lanqing Hong, Rui Zhang, Nanyang Ye, Shao-Lun Huang, and Xiuqiang He. "Regularization Penalty Optimization for Addressing Data Quality Variance in OoD Algorithms". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8945–53. http://dx.doi.org/10.1609/aaai.v36i8.20877.

Full text
Abstract:
Due to the poor generalization performance of traditional empirical risk minimization (ERM) in the case of distributional shift, Out-of-Distribution (OoD) generalization algorithms receive increasing attention. However, OoD generalization algorithms overlook the great variance in the quality of training data, which significantly compromises the accuracy of these methods. In this paper, we theoretically reveal the relationship between training data quality and algorithm performance, and analyze the optimal regularization scheme for Lipschitz regularized invariant risk minimization. A novel algorithm is proposed based on the theoretical results to alleviate the influence of low quality data at both the sample level and the domain level. The experiments on both the regression and classification benchmarks validate the effectiveness of our method with statistical significance.
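For readers unfamiliar with the kind of regularization being optimized, the PyTorch sketch below shows the standard IRMv1 penalty (squared gradient of each environment's risk with respect to a dummy scale fixed at 1.0) combined with per-environment weights standing in for data-quality estimates. It is a generic illustration; the weighting scheme w_e and all names are assumptions, not the paper's actual algorithm.

import torch
import torch.nn.functional as F

def irmv1_penalty(logits, y):
    # squared gradient of the risk w.r.t. a dummy scale multiplier (IRMv1, Arjovsky et al.)
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    risk = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def weighted_objective(env_batches, model, lam=1.0):
    # env_batches: iterable of (x, y, w_e); w_e is an assumed per-environment quality weight
    total = 0.0
    for x, y, w_e in env_batches:
        logits = model(x)
        total = total + w_e * (F.cross_entropy(logits, y) + lam * irmv1_penalty(logits, y))
    return total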
10

Cao, Linfeng, Aofan Jiang, Wei Li, Huaying Wu, and Nanyang Ye. "OoDHDR-Codec: Out-of-Distribution Generalization for HDR Image Compression". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 158–66. http://dx.doi.org/10.1609/aaai.v36i1.19890.

Full text
Abstract:
Recently, deep learning has been proven to be a promising approach in standard dynamic range (SDR) image compression. However, due to the wide luminance distribution of high dynamic range (HDR) images and the lack of large standard datasets, developing a deep model for HDR image compression is much more challenging. To tackle this issue, we view HDR data as distributional shifts of SDR data and HDR image compression can be modeled as an out-of-distribution generalization (OoD) problem. Herein, we propose a novel out-of-distribution (OoD) HDR image compression framework (OoDHDR-codec). It learns the general representation across HDR and SDR environments, and allows the model to be trained effectively using a large set of SDR datasets supplemented with much fewer HDR samples. Specifically, OoDHDR-codec consists of two branches to process the data from two environments. The SDR branch is a standard black-box network. For the HDR branch, we develop a hybrid system that models luminance masking and tone mapping with white-box modules and performs content compression with black-box neural networks. To improve the generalization from SDR training data on HDR data, we introduce an invariance regularization term to learn the common representation for both SDR and HDR compression. Extensive experimental results show that the OoDHDR codec achieves strongly competitive in-distribution performance and state-of-the-art OoD performance. To the best of our knowledge, our proposed approach is the first work to model HDR compression as an OoD generalization problem, and our OoD generalization algorithmic framework can be applied to any deep compression model in addition to the network architectural choice demonstrated in the paper. Code available at https://github.com/caolinfeng/OoDHDR-codec.

Theses on the topic "OOD generalization"

1

Araujo, Cynthia Berenice. "The Effects of Sleep for Generalization in 12 Month-Old Infants". Thesis, The University of Arizona, 2014. http://hdl.handle.net/10150/555522.

Full text
Abstract:
Since infants over-specify acoustic details, segregate exemplars by talker voice, and need enough variation to generalize across exemplars, it has been questioned whether sleep would promote generalization in 12-month-old infants even after they have been exposed to multiple speakers. In order to investigate this question, we placed infants in either a nap or non-nap condition to test whether they were able to generalize only after napping. Sleep was expected to result in retention of the grammatical pattern over acoustic details such as talker voice. These results were not expected for infants who did not nap after being familiarized with a grammatical structure and who remained awake between training and testing for an equal amount of time as the infants who napped. The average looking times between grammatical structures were compared to determine the presence of any significant variation. The current data show nonsignificant generalization in both nap and no nap conditions. Even after outlier elimination the data still demonstrate non-significant results. Tasks completed during wake hours in both nap and no nap conditions are considered as limitations.
2

Abecidan, Rony. "Stratégies d'apprentissage robustes pour la détection de manipulation d'images". Electronic Thesis or Diss., Centrale Lille Institut, 2024. http://www.theses.fr/2024CLIL0025.

Full text
Abstract:
Today, it is easier than ever to manipulate images for unethical purposes. This practice is therefore increasingly prevalent in social networks and advertising. Malicious users can for instance generate convincing deep fakes in a few seconds to lure a naive public. Alternatively, they can also communicate secretly hiding illegal information into images. Such abilities raise significant security concerns regarding misinformation and clandestine communications. The Forensics community thus actively collaborates with Law Enforcement Agencies worldwide to detect image manipulations. The most effective methodologies for image forensics rely heavily on convolutional neural networks meticulously trained on controlled databases. These databases are actually curated by researchers to serve specific purposes, resulting in a great disparity from the real-world datasets encountered by forensic practitioners. This data shift poses a clear challenge for practitioners, hindering the effectiveness of standardized forensics models when applied in practical situations. Through this thesis, we aim to improve the efficiency of forensics models in practical settings, designing strategies to mitigate the impact of data shift. It starts by exploring literature on out-of-distribution generalization to find existing strategies already helping practitioners to make efficient forensic detectors in practice. Two main frameworks notably hold promise: the implementation of models inherently able to learn how to generalize on images coming from a new database, or the construction of a representative training base allowing forensics models to generalize effectively on scrutinized images. Both frameworks are covered in this manuscript. When faced with many unlabeled images to examine, domain adaptation strategies matching training and testing bases in latent spaces are designed to mitigate data shifts encountered by practitioners. Unfortunately, these strategies often fail in practice despite their theoretical efficiency, because they assume that scrutinized images are balanced, an assumption unrealistic for forensic analysts, as suspects might be for instance entirely innocent. Additionally, such strategies are typically tested assuming that an appropriate training set has been chosen from the beginning, to facilitate adaptation on the new distribution. Trying to generalize on a few images is more realistic but much more difficult by essence. We precisely deal with this scenario in the second part of this thesis, gaining a deeper understanding of data shifts in digital image forensics. Exploring the influence of traditional processing operations on the statistical properties of developed images, we formulate several strategies to select or create training databases relevant for a small amount of images under scrutiny. Our final contribution is a framework leveraging statistical properties of images to build relevant training sets for any testing set in image manipulation detection. This approach improves by far the generalization of classical steganalysis detectors on practical sets encountered by forensic analysts and can be extended to other forensic contexts.
3

Nachtigäller, Kerstin [Verfasser]. "Long-term word learning in 2-year-old children - How does narrative input about pictures and objects influence retention and generalization of newly acquired spatial prepositions? / Kerstin Nachtigäller". Bielefeld : Universitätsbibliothek Bielefeld, 2015. http://d-nb.info/1078112452/34.

Full text

Books on the topic "OOD generalization"

1

Klemenhagen, Kristen C., Franklin R. Schneier, Abby J. Fyer, H. Blair Simpson, and René Hen. Adult Hippocampal Neurogenesis, Pattern Separation, and Generalization. Edited by Israel Liberzon and Kerry J. Ressler. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780190215422.003.0006.

Full text
Abstract:
Almost one-third of adult Americans will have an anxiety disorder in their lifetime, with enormous personal, societal, and financial costs. Among the most disabling of these disorders are post-traumatic stress disorder (PTSD), obsessive-compulsive disorder (OCD), social anxiety disorder, generalized anxiety disorder, and panic disorder. Although there are evidence-based treatments for these disorders, as many as 50% of patients do not respond, and there is a considerable need for new therapies. This chapter proposes that the excessive generalization seen in patients with pathological anxiety is due to impaired hippocampal functioning, specifically a deficit in the neural process of pattern separation, which relies upon the dentate gyrus and is sensitive to neurogenesis. Preclinical findings indicate that stimulating DG neurogenesis improves pattern separation and reduces anxiety behaviors in mice. As a result the authors hypothesize that pharmacological or environmental manipulations aimed at stimulating neurogenesis will be beneficial for the treatment of anxiety disorders.
2

Speyer, Augustin, and Helmut Weiß. The prefield after the Old High German period. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198813545.003.0005.

Full text
Abstract:
The filling of the prefield in Modern German is determined by information-structural constraints such as scene-setting, contrastiveness, and topichood. While OHG does not yet show competition between these constraints, competition arises from MHG onward. This has to do with the generalization of the V2 constraint (i.e. the one-constituent property of the prefield) for declarative clauses, in which context the information-structural constraints are loosened. The syntactic change whose result eventually was the loss of multiple XP fronting comprised a change of the feature endowment of C because the fronting of expletive thô (roughly in the OHG of the ninth century) led to the reanalysis of XP fronting as a semantically vacuous movement whose only function is to check the EPP feature of C. Data from doubly filled prefields in ENHG and post-initial connectives indicate that an articulated split CP-structure, as proposed within the cartographic approach, is also at play in German.
3

Hegedűs, Veronika. Particle-verb order in Old Hungarian and complex predicates. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198747307.003.0006.

Full text
Abstract:
This chapter examines the distribution of verbal particles in Old Hungarian, and argues that despite the word order change from SOV to SVO in Hungarian, the particle-verb order did not change because the previous pre-verbal argument position was reanalysed as a pre-verbal predicative position where complex predicates are formed in overt syntax. Predicative constituents other than particles show significant word order variation in Old Hungarian, apparently due to optionality in predicate movement (while variation found with particle-verb orderings can be attributed to independent factors). It is proposed that after the basic word order was reanalysed as VO, internal arguments and secondary predicates could appear post-verbally and it was the still obligatory movement of particles that triggered the generalization of predicate movement, making all predicates pre-verbal in neutral sentences at later stages. This process involves a period of word order variation as predicate movement gradually generalizes to different types of predicates.
4

Prochazka, Stephan. The Northern Fertile Crescent. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198701378.003.0009.

Full text
Abstract:
This chapter attempts to reconstruct the linguistic history of the Arabic dialects spoken in south-eastern Turkey and the northern parts of Syria and Iraq. This area is characterized by religious pluralism and by a high linguistic diversity. It can be seen as a transitional zone between the archaic Iraqi-Anatolian dialects and the more innovative Syrian sedentary and Arabian bedouin dialects. The chapter discusses both common features, and striking innovations shared by all or most dialects of the region. The latter in particular may indicate that the sedentary dialects spoken at the northern edge of the Fertile Crescent may have a common origin. Many dialects of the region exhibit a high degree of both preservation and generalization of old features. The region also stands out because of contact-induced innovations that are partly the result of the significant influences that Aramaic, Kurdish, and Turkish had and still have on the local Arabic varieties.
5

Dutton, Denis. Aesthetics and Evolutionary Psychology. Edited by Jerrold Levinson. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199279456.003.0041.

Full text
Abstract:
The applications of the science of psychology to our understanding of the origins and nature of art is not a recent phenomenon; in fact, it is as old as the Greeks. Plato wrote of art not only from the standpoint of metaphysics, but also in terms of the psychic, especially emotional, dangers that art posed to individuals and society. It was Plato's psychology of art that resulted in his famous requirements in The Republic for social control of the forms and contents of art. Aristotle, on the other hand, approached the arts as philosopher more comfortably at home in experiencing the arts; his writings are to that extent more dispassionately descriptive of the psychological features he viewed as universal in what we would call ‘aesthetic experience’. Although Plato and Aristotle both described the arts in terms of generalizations implicitly applicable to all cultures, it was Aristotle who most self-consciously tied his art theory to a general psychology.

Book chapters on the topic "OOD generalization"

1

Bubboloni, Daniela, Pablo Spiga, and Thomas Stefan Weigel. "Odd Dimensional Orthogonal Groups". In Normal 2-Coverings of the Finite Simple Groups and their Generalizations, 87–99. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-62348-6_6.

Full text
2

Benczúr, András A., and Ottilia Fülöp. "Fast Algorithms for Even/Odd Minimum Cuts and Generalizations". In Algorithms - ESA 2000, 88–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45253-2_9.

Full text
3

Heismann, Olga, and Ralf Borndörfer. "A Generalization of Odd Set Inequalities for the Set Packing Problem". In Operations Research Proceedings 2013, 193–99. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07001-8_26.

Full text
4

Eriksson, Fredrik. "Military History and Military Theory". In Handbook of Military Sciences, 1–16. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-030-02866-4_90-1.

Full text
Abstract:
The purpose of this article is to discuss the relationship between military history and military theory through a chronological analysis. Military history in some form has always been used to formulate military theory, i.e. generalizations of historical experience to guide action in the present and in the future. History is however hard to interpret, and has served different purposes over time. In the ancient world, history was linked to morality, and historiography contained practical advice for generals. The scientific revolution saw the birth of scientific laws for warfare, inspired by natural sciences, i.e. codifying historical experience. The Napoleonic era saw the birth of modern warfare and the development of modern military theory. Jomini synthesized the Enlightenment with experiences of the Napoleonic wars into scientific principles of war. From a Romantic historical tradition came Clausewitz, a historicist general focused on understanding the nature of war. For Clausewitz history was about understanding, and could not be used for scientific principles. In the same era came Marxism – a materialist, deterministic theory of history, influencing for example Russian and Chinese military thinking as well as theories of guerilla war. Using military history to create military theory still revolves around the dialectic: will history repeat itself or not? If it does, then it can be used for formulating theory. If it doesn't, history can be used for understanding the past and as a guide. Every new generation of the military have reinvented and reinterpreted history. Most of the doctrines and theories of warfare today rest on a mixture of concepts from both Clausewitz and from Jomini – and in every case military history is the very foundation of both. The dialectic relationship between military history and military theory seems to be as old as the phenomenon of war itself.
5

Cao, Hongye, Shangdong Yang, Jing Huo, Xingguo Chen, and Yang Gao. "Enhancing OOD Generalization in Offline Reinforcement Learning with Energy-Based Policy Optimization". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230288.

Full text
Abstract:
Offline Reinforcement Learning (RL) is an important research domain for real-world applications because it can avert expensive and dangerous online exploration. Offline RL is prone to extrapolation errors caused by the distribution shift between offline datasets and states visited by behavior policy. Existing offline RL methods constrain the policy to offline behavior to prevent extrapolation errors. But these methods limit the generalization potential of agents in Out-Of-Distribution (OOD) regions and cannot effectively evaluate OOD generalization behavior. To improve the generalization of the policy in OOD regions while avoiding extrapolation errors, we propose an Energy-Based Policy Optimization (EBPO) method for OOD generalization. An energy function based on the distribution of offline data is proposed for the evaluation of OOD generalization behavior, instead of relying on model discrepancies to constrain the policy. The way of quantifying exploration behavior in terms of energy values can balance the return and risk. To improve the stability of generalization and solve the problem of sparse reward in complex environment, episodic memory is applied to store successful experiences that can improve sample efficiency. Extensive experiments on the D4RL datasets demonstrate that EBPO outperforms the state-of-the-art methods and achieves robust performance on challenging tasks that require OOD generalization.
6

Gu, Pengfei, and Daao Yu. "OOD Problem Research in Biochemistry Based on Backdoor Adjustment". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia231392.

Full text
Abstract:
Because they handle graph-structured data (social networks, molecular structures) well, graph neural networks (GNNs) have recently shown impressive capabilities in fields such as biology and chemistry, which has drawn considerable research attention to how GNN models operate. To apply graph neural networks in real environments, the problem of out-of-distribution generalization must be solved: the mismatch between the data distribution of the training environment and that of the real environment is a pressing issue. In existing work, from the data perspective, the input graph is processed to find its core part and filter out the noisy part; from the model perspective, the parts of the model relevant to the task (graph classification) are extracted, so as to improve the accuracy and efficiency of the model. However, while these methods are valid from a theoretical point of view, they are too simple to actually cut off the effects of non-causal components, and current model-side methods prune the network directly without considering the information transfer between model parameters. To solve these problems, the main contribution of this paper is a distance-based environment selection method, which enables backdoor adjustment to be implemented to the greatest possible extent and ensures the robustness of the model. At the same time, it proposes a way for the network to squeeze information during the clipping process, so that the model pruning algorithm can be optimized, reducing the complexity of the model and improving its generalization. The method presented in this paper has been validated on datasets from several biochemical fields, and the best performance has been achieved.
7

Angryk, Rafal, Roy Ladner, and Frederick E. Petry. "Generalization Data Mining in Fuzzy Object-Oriented Databases". In Data Warehousing and Mining, 2121–40. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-951-9.ch126.

Full text
Abstract:
In this chapter, we consider the application of generalization-based data mining to fuzzy similarity-based object-oriented databases (OODBs). Attribute generalization algorithms have been most commonly applied to relational databases, and we extend these approaches. A key aspect of generalization data mining is the use of a concept hierarchy. The objects of the database are generalized by replacing specific attribute values by the next higher-level term in the hierarchy. This will then eventually result in generalizations that represent a summarization of the information in the database. We focus on the generalization of similarity-based simple fuzzy attributes for an OODB using approaches to the fuzzy concept hierarchy developed from the given similarity relation of the database. Then consideration is given to applying this approach to complex structure-valued data in the fuzzy OODB.
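The attribute-generalization mechanism the chapter builds on can be illustrated with a crisp (non-fuzzy) toy pass over one attribute: each value is repeatedly replaced by its parent concept in a hierarchy until few enough distinct values remain to act as a summary. The hierarchy, names, and threshold below are invented for illustration; the chapter itself derives fuzzy hierarchies from the database's similarity relation.

hierarchy = {"salmon": "fish", "trout": "fish", "sparrow": "bird", "eagle": "bird",
             "fish": "animal", "bird": "animal"}

def generalize(values, hierarchy, max_distinct=2):
    vals = list(values)
    while len(set(vals)) > max_distinct:
        lifted = [hierarchy.get(v, v) for v in vals]   # climb one level where possible
        if lifted == vals:                             # nothing left to generalize
            break
        vals = lifted
    return vals

print(generalize(["salmon", "trout", "sparrow", "eagle"], hierarchy))
# -> ['fish', 'fish', 'bird', 'bird']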
8

"generalization, n." In Oxford English Dictionary. 3a ed. Oxford University Press, 2023. http://dx.doi.org/10.1093/oed/8342955435.

Full text
9

Williamson, John B., and Fred C. Pampel. "Toward an Empirical and Theoretical Synthesis". In Old-Age Security in Comparative Perspective, 207–27. New York, NY: Oxford University Press, 1993. http://dx.doi.org/10.1093/oso/9780195068597.003.0010.

Full text
Abstract:
While the seven historical case studies presented in the preceding chapters each involved some comparative analysis, the focus was on accounting for developments in a specific country, and relatively little effort was made to generalize about old-age security policy. In the previous chapter an effort was made to test several hypotheses derived in part from these historical case studies, but due to the constraints of available data, that analysis was limited to a few key issues. In this chapter our goal is to synthesize the findings of both the qualitative case studies and the quantitative analysis. While the emphasis will be on generalizations derived from the case studies, where appropriate these generalizations will be based on or qualified in light of our quantitative analysis.
10

Adeleke, S. A. "A generalization of Jordan groups". In Automorphisms of First-Order Structures, 233–40. Oxford: Oxford University Press, 1994. http://dx.doi.org/10.1093/oso/9780198534686.003.0010.

Full text
Abstract:
For Γ to be a Jordan set in a group (G, Ω) in the usual definition (see p. 73 in the article by Macpherson in this volume) the pointwise stabilizer, G(Ω∖Γ), must be transitive on Γ. In this paper we relax this condition and demand that the setwise stabilizer, G{Γ}, be transitive on Γ. But we also add as assumptions two properties which hold for Jordan sets in the usual definition; namely that the union of any chain of the (new) Jordan sets be a (new) Jordan set, and that the union of any two non-disjoint (new) Jordan sets be a (new) Jordan set. Although infinite simply primitive Jordan groups in the new sense lead to the same G-invariant structures as the old ones of Adeleke and Neumann (in press b), there are simply primitive groups which are Jordan in the new sense but not in the usual sense. Immediate examples are the pathological groups on linear orders described by Glass (1981, Chapter 6, Section 1.10). We make remarks in Section 4 below about the construction of such examples, and it follows easily that there are more examples of the same type built from semilinear orders and C-relations. Apart from including more groups, the result below should be useful in any eventual classification of general infinite simply primitive groups according to their invariant structures. The new Jordan sets and Jordan groups shall be called c-Jordan sets and c-Jordan groups respectively (‘c-Jordan’ as in ‘Camille Jordan’). The proofs of Adeleke and Neumann (in press b, to appear) need several adjustments to handle the present context.

Conference papers on the topic "OOD generalization"

1

Li, Limin, Kuo Yang, Wenjie Du, Zhongchao Yi, Zhengyang Zhou, and Yang Wang. "EMoNet: An environment causal learning for molecule OOD generalization". In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 1552–56. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10822221.

Full text
2

Wang, Haoliang, Chen Zhao, and Feng Chen. "Feature-Space Semantic Invariance: Enhanced OOD Detection for Open-Set Domain Generalization". In 2024 IEEE International Conference on Big Data (BigData), 8244–46. IEEE, 2024. https://doi.org/10.1109/bigdata62323.2024.10825325.

Full text
3

Xu, Xingcheng, Zihao Pan, Haipeng Zhang, and Yanqing Yang. "It Ain’t That Bad: Understanding the Mysterious Performance Drop in OOD Generalization for Generative Transformer Models". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/727.

Full text
Abstract:
Large language models (LLMs) have achieved remarkable proficiency on solving diverse problems. However, their generalization ability is not always satisfying and the generalization problem is common for generative transformer models in general. Researchers take basic mathematical tasks like n-digit addition or multiplication as important perspectives for investigating their generalization behaviors. It is observed that when training models on n-digit operations (e.g., additions) in which both input operands are n-digit in length, models generalize successfully on unseen n-digit inputs (in-distribution (ID) generalization), but fail miserably on longer, unseen cases (out-of-distribution (OOD) generalization). We bring this unexplained performance drop into attention and ask whether there is systematic OOD generalization. Towards understanding LLMs, we train various smaller language models which may share the same underlying mechanism. We discover that the strong ID generalization stems from structured representations, while behind the unsatisfying OOD performance, the models still exhibit clear learned algebraic structures. Specifically, these models map unseen OOD inputs to outputs with learned equivalence relations in the ID domain, which we call the equivalence generalization. These findings deepen our knowledge regarding the generalizability of generative models including LLMs, and provide insights into potential avenues for improvement.
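The length-split experiment described above is simple to reproduce in outline: train on n-digit additions and evaluate exact-match accuracy separately on unseen n-digit pairs (ID) and on longer pairs (OOD). The harness below is a minimal sketch of that evaluation protocol only; predict is a placeholder for querying whatever trained model is under study, and the digit lengths are arbitrary.

import random

def addition_examples(n_digits, k, seed=0):
    rng = random.Random(seed)
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    return [(a, b, a + b)
            for a, b in ((rng.randint(lo, hi), rng.randint(lo, hi)) for _ in range(k))]

def exact_match(predict, examples):
    # predict(a, b) -> int stands in for decoding the model's answer
    return sum(predict(a, b) == c for a, b, c in examples) / len(examples)

id_test = addition_examples(3, 1000)    # operand length seen in training: ID generalization
ood_test = addition_examples(5, 1000)   # longer operands: OOD generalization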
4

Yu, Junchi, Jian Liang, and Ran He. "Mind the Label Shift of Augmentation-based Graph OOD Generalization". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01118.

Full text
5

Bai, Haoyue, Fengwei Zhou, Lanqing Hong, Nanyang Ye, S. H. Gary Chan, and Zhenguo Li. "NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization". In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00821.

Full text
6

Ye, Nanyang, Kaican Li, Haoyue Bai, Runpeng Yu, Lanqing Hong, Fengwei Zhou, Zhenguo Li, and Jun Zhu. "OoD-Bench: Quantifying and Understanding Two Dimensions of Out-of-Distribution Generalization". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00779.

Full text
7

Zhang, Min, Junkun Yuan, Yue He, Wenbin Li, Zhengyu Chen, and Kun Kuang. "MAP: Towards Balanced Generalization of IID and OOD through Model-Agnostic Adapters". In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.01095.

Full text
8

Zhu, Yun, Haizhou Shi, Zhenshuo Zhang, and Siliang Tang. "MARIO: Model Agnostic Recipe for Improving OOD Generalization of Graph Contrastive Learning". In WWW '24: The ACM Web Conference 2024. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3589334.3645322.

Full text
9

Li, Wenjun, Pradeep Varakantham, and Dexun Li. "Generalization through Diversity: Improving Unsupervised Environment Design". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/601.

Full text
Abstract:
Agent decision making using Reinforcement Learning (RL) heavily relies on either a model or simulator of the environment (e.g., moving in an 8x8 maze with three rooms, playing Chess on an 8x8 board). Due to this dependence, small changes in the environment (e.g., positions of obstacles in the maze, size of the board) can severely affect the effectiveness of the policy learned by the agent. To that end, existing work has proposed training RL agents on an adaptive curriculum of environments (generated automatically) to improve performance on out-of-distribution (OOD) test scenarios. Specifically, existing research has employed the potential for the agent to learn in an environment (captured using Generalized Advantage Estimation, GAE) as the key factor to select the next environment(s) to train the agent. However, such a mechanism can select similar environments (with a high potential to learn) thereby making agent training redundant on all but one of those environments. To that end, we provide a principled approach to adaptively identify diverse environments based on a novel distance measure relevant to environment design. We empirically demonstrate the versatility and effectiveness of our method in comparison to multiple leading approaches for unsupervised environment design on three distinct benchmark problems used in literature.
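A greedy max-min selection is one simple way to operationalize "pick diverse environments": repeatedly add the candidate farthest from everything already chosen. The sketch below only illustrates that selection pattern; the distance callable stands in for the paper's environment-design distance measure, which is not reproduced here, and the function name is an assumption.

def select_diverse(candidates, distance, k):
    chosen = [candidates[0]]                       # seed with an arbitrary environment
    while len(chosen) < min(k, len(candidates)):
        remaining = [c for c in candidates if c not in chosen]
        best = max(remaining, key=lambda c: min(distance(c, s) for s in chosen))
        chosen.append(best)
    return chosen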
10

Teney, Damien, Ehsan Abbasnejad, Simon Lucey, and Anton Van den Hengel. "Evading the Simplicity Bias: Training a Diverse Set of Models Discovers Solutions with Superior OOD Generalization". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01626.

Full text