To see the other types of publications on this topic, follow the link: Spurious features.

Journal articles on the topic 'Spurious features'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Spurious features.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Kaitao, Shiliang Sun, and Jing Zhao. "CaMIL: Causal Multiple Instance Learning for Whole Slide Image Classification." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1120–28. http://dx.doi.org/10.1609/aaai.v38i2.27873.

Abstract:
Whole slide image (WSI) classification is a crucial component in automated pathology analysis. Due to the inherent challenges of high-resolution WSIs and the absence of patch-level labels, most of the proposed methods follow the multiple instance learning (MIL) formulation. While MIL has been equipped with excellent instance feature extractors and aggregators, it is prone to learn spurious associations that undermine the performance of the model. For example, relying solely on color features may lead to erroneous diagnoses due to spurious associations between the disease and the color of patches. To address this issue, we develop a causal MIL framework for WSI classification, effectively distinguishing between causal and spurious associations. Specifically, we use the expectation of the intervention P(Y | do(X)) for bag prediction rather than the traditional likelihood P(Y | X). By applying the front-door adjustment, the spurious association is effectively blocked, where the intervened mediator is aggregated from patch-level features. We evaluate our proposed method on two publicly available WSI datasets, Camelyon16 and TCGA-NSCLC. Our causal MIL framework shows outstanding performance and is plug-and-play, seamlessly integrating with various feature extractors and aggregators.
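The front-door adjustment the abstract invokes can be made concrete on a toy discrete model. The sketch below is ours, not the paper's code: a hidden confounder U affects both X and Y, while a mediator M intercepts every path from X to Y, so P(Y | do(X)) is recoverable from purely observational quantities and matches the ground truth computed directly from the causal model.

```python
from itertools import product

# Toy SCM with hidden confounder U: U -> X, U -> Y, X -> M -> Y.
# All variables are binary. We tabulate the joint P(u, x, m, y) and show that
# the front-door formula recovers P(y | do(x)) from observational terms alone.
p_u = {0: 0.6, 1: 0.4}
p_x_given_u = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}            # P(x | u)
p_m_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}            # P(m | x)
p_y_given_mu = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.6, (1, 1): 0.9} # P(y=1 | m, u)

joint = {}
for u, x, m, y in product([0, 1], repeat=4):
    py1 = p_y_given_mu[(m, u)]
    joint[(u, x, m, y)] = (p_u[u] * p_x_given_u[u][x] * p_m_given_x[x][m]
                           * (py1 if y == 1 else 1 - py1))

def p(**kw):
    """Observational probability of a partial assignment, summing out the rest."""
    return sum(v for k, v in joint.items()
               if all(k["uxmy".index(n)] == val for n, val in kw.items()))

def front_door(y, x):
    """P(y | do(x)) via front-door adjustment: sum_m P(m|x) sum_x' P(y|x',m) P(x')."""
    total = 0.0
    for m in (0, 1):
        pm_x = p(x=x, m=m) / p(x=x)
        for x2 in (0, 1):
            total += pm_x * (p(x=x2, m=m, y=y) / p(x=x2, m=m)) * p(x=x2)
    return total

def truth(y, x):
    """Ground-truth P(y | do(x)) computed directly from the SCM mechanisms."""
    total = 0.0
    for u in (0, 1):
        for m in (0, 1):
            py1 = p_y_given_mu[(m, u)]
            total += p_u[u] * p_m_given_x[x][m] * (py1 if y == 1 else 1 - py1)
    return total

for x in (0, 1):
    print(f"P(Y=1|do(X={x})): front-door={front_door(1, x):.4f}  truth={truth(1, x):.4f}")
```

Note that the naive conditional P(Y=1 | X=x) would differ from these values, because U opens a backdoor path; the front-door estimate blocks it using only the mediator, which is the role the aggregated patch-level features play in the paper.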
2

Su, Donglin, Qian Shi, Hui Xu, and Wang Wang. "Nonintrusive Load Monitoring Based on Complementary Features of Spurious Emissions." Electronics 8, no. 9 (September 7, 2019): 1002. http://dx.doi.org/10.3390/electronics8091002.

Abstract:
In this paper, a novel method that utilizes the fractional correlation-based algorithm and the B-spline curve fitting-based algorithm is proposed to extract the complementary features for detecting the operating states of appliances. The identification of appliance operating states is one of the key parts for nonintrusive load monitoring (NILM). Considering the individual spurious emissions generated because of nonlinear components in each electronic device, the spurious emissions from the power cord can be picked up to solve the problem of data storage. Five types of common household appliances are considered in this study. The fractional correlation-based algorithm and B-spline curve fitting-based algorithm are used to extract two groups of complementary features from the spurious emissions of those five types of appliances. The experimental results show that the feature vectors extracted using the proposed method are obviously distinguishable. In addition, the features extracted show a good long-time stability, which is verified through a five-day experiment. Finally, based on support vector machine (SVM) and Dempster–Shafer (D-S) evidence theory, the identification accuracy reaches 85.5% using a combining classifier incorporated with the features extracted from the proposed methods.
3

Karimi, Saeed, and Hamdi Dibeklioğlu. "Uncovering and mitigating spurious features in domain generalization." Turkish Journal of Electrical Engineering and Computer Sciences 32, no. 2 (March 14, 2024): 320–37. http://dx.doi.org/10.55730/1300-0632.4071.

4

Du, Mengnan, Ruixiang Tang, Weijie Fu, and Xia Hu. "Towards Debiasing DNN Models from Spurious Feature Influence." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9521–28. http://dx.doi.org/10.1609/aaai.v36i9.21185.

Abstract:
Recent studies indicate that deep neural networks (DNNs) are prone to showing discrimination towards certain demographic groups. We observe that algorithmic discrimination can be explained by the models' high reliance on fairness-sensitive features. Motivated by this observation, we propose to achieve fairness by suppressing the DNN model from capturing the spurious correlation between those fairness-sensitive features and the underlying task. Specifically, we first train a bias-only teacher model which is explicitly encouraged to maximally employ fairness-sensitive features for prediction. The teacher model then counter-teaches a debiased student model so that the interpretation of the student model is orthogonal to the interpretation of the teacher model. The key idea is that since the teacher model relies explicitly on fairness-sensitive features for prediction, the orthogonal interpretation loss forces the student network to reduce its reliance on sensitive features and instead capture more task-relevant features for prediction. Experimental analysis indicates that our framework substantially reduces the model's attention on fairness-sensitive features. Experimental results on four datasets further validate that our framework consistently improves fairness with respect to three group fairness metrics, with comparable or even better accuracy.
5

Ming, Yifei, Hang Yin, and Yixuan Li. "On the Impact of Spurious Correlation for Out-of-Distribution Detection." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10051–59. http://dx.doi.org/10.1609/aaai.v36i9.21244.

Abstract:
Modern neural networks can assign high confidence to inputs drawn from outside the training distribution, posing threats to models in real-world deployments. While much research attention has been placed on designing new out-of-distribution (OOD) detection methods, the precise definition of OOD is often left vague and falls short of the desired notion of OOD in reality. In this paper, we present a new formalization and model the data shifts by taking into account both the invariant and environmental (spurious) features. Under such formalization, we systematically investigate how spurious correlation in the training set impacts OOD detection. Our results suggest that detection performance is severely worsened when the correlation between spurious features and labels is increased in the training set. We further provide insights into detection methods that are more effective in reducing the impact of spurious correlation, and provide theoretical analysis on why reliance on environmental features leads to high OOD detection error. Our work aims to facilitate a better understanding of OOD samples and their formalization, as well as the exploration of methods that enhance OOD detection. Code is available at https://github.com/deeplearning-wisc/Spurious_OOD.
6

Popovic, Brankica, and Ljiljana Maskovic. "Fingerprint minutiae filtering based on multiscale directional information." Facta universitatis - series: Electronics and Energetics 20, no. 2 (2007): 233–44. http://dx.doi.org/10.2298/fuee0702233p.

Abstract:
Automatic identification of humans based on their fingerprints is still one of the most reliable identification methods in criminal and forensic applications, and is widely applied in civil applications as well. Most automatic systems available today use distinctive fingerprint features called minutiae for fingerprint comparison. Conventional feature extraction algorithms can produce a large number of spurious minutiae if the fingerprint pattern contains large regions of broken ridges (often called creases). This can drastically reduce the recognition rate in automatic fingerprint identification systems. For the performance of such systems, it is more important not to extract spurious (false) minutiae, even if this means some genuine ones are missed as well. In this paper, multiscale directional information obtained from the orientation field image is used to filter those spurious minutiae, resulting in a severalfold decrease in their number.
7

Artuso, Francesco, Francesco Fidecaro, Francesco D'Alessandro, Gino Iannace, Gaetano Licitra, Geremia Pompei, and Luca Fredianelli. "Identifying optimal feature sets for acoustic signal classification in environmental noise measurements." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 270, no. 4 (October 4, 2024): 7540–49. http://dx.doi.org/10.3397/in_2024_3974.

Abstract:
Whatever the sound source to be evaluated, spurious events or unwanted sounds will always be present in environmental noise measurements. Spurious events are not characteristic of standard residual noise and must be removed prior to subsequent analyses. Currently, the removal step is deferred solely to the objective evaluation of the sound pattern and/or spectrogram by an operator. This results in the loss of many man-hours. Machine learning can be used to develop a tool capable of recognizing and removing spurious events in noise measurements. The tool must be able to account for various sounds, whether human-made or animal, and must be applicable to any environmental scenario. This is not a straightforward task: while humans can easily distinguish between two sounds, such as a bird's chirps and a car passing by, based on prior experience, a machine may not be able to do so without training. Therefore, a learning methodology must be constructed for the machine by establishing recognizable patterns. The aim of this paper is to identify the feature sets which allow the algorithm to differentiate spurious sounds in the best way. These features will represent the semantic value of the signal.
8

Chen, Ziliang, Yongsen Zheng, Zhao-Rong Lai, Quanlong Guan, and Liang Lin. "Diagnosing and Rectifying Fake OOD Invariance: A Restructured Causal Approach." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11471–79. http://dx.doi.org/10.1609/aaai.v38i10.29028.

Abstract:
Invariant representation learning (IRL) encourages prediction from invariant causal features to labels deconfounded from the environments, advancing the technical roadmap of out-of-distribution (OOD) generalization. Despite the attention it has attracted, a recent theoretical result verified that some causal features recovered by IRL merely appear domain-invariant in the training environments but fail in unseen domains. This fake invariance severely endangers OOD generalization, since the objective cannot be diagnosed as trustworthy and existing causal remedies are invalid to rectify it. In this paper, we review an IRL family (InvRat) under the Partially and Fully Informative Invariant Feature Structural Causal Models (PIIF SCM / FIIF SCM), respectively, to certify their weaknesses in representing fake invariant features; we then unify their causal diagrams to propose the ReStructured SCM (RS-SCM). The RS-SCM can ideally rebuild the spurious and the fake invariant features simultaneously. Given this, we further develop an approach based on conditional mutual information with respect to the RS-SCM, which rigorously rectifies the spurious and fake invariant effects. It can be easily implemented by a small feature-selection subnet introduced in the IRL family, which is alternatively optimized to achieve our goal. Experiments verified the superiority of our approach in fighting the fake invariance issue across a variety of OOD generalization benchmarks.
9

Wang, Zhao, and Aron Culotta. "Robustness to Spurious Correlations in Text Classification via Automatically Generated Counterfactuals." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14024–31. http://dx.doi.org/10.1609/aaai.v35i16.17651.

Abstract:
Spurious correlations threaten the validity of statistical classifiers. While model accuracy may appear high when the test data is from the same distribution as the training data, it can quickly degrade when the test distribution changes. For example, it has been shown that classifiers perform poorly when humans make minor modifications to change the label of an example. One solution to increase model reliability and generalizability is to identify causal associations between features and classes. In this paper, we propose to train a robust text classifier by augmenting the training data with automatically generated counterfactual data. We first identify likely causal features using a statistical matching approach. Next, we generate counterfactual samples for the original training data by substituting causal features with their antonyms and then assigning opposite labels to the counterfactual samples. Finally, we combine the original data and counterfactual data to train a robust classifier. Experiments on two classification tasks show that a traditional classifier trained on the original data does very poorly on human-generated counterfactual samples (e.g., 10%-37% drop in accuracy). However, the classifier trained on the combined data is more robust and performs well on both the original test data and the counterfactual test data (e.g., 12%-25% increase in accuracy compared with the traditional classifier). Detailed analysis shows that the robust classifier makes meaningful and trustworthy predictions by emphasizing causal features and de-emphasizing non-causal features.
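The augmentation recipe this abstract describes can be sketched in a few lines. This is an illustrative toy version, not the authors' code: the antonym table and sentences below are made-up stand-ins for the statistically matched causal features the paper identifies.

```python
# Toy sketch: substitute likely-causal sentiment words with antonyms and flip
# the label, then train on the union of original and counterfactual examples.
ANTONYMS = {"good": "bad", "bad": "good",
            "great": "terrible", "terrible": "great",
            "love": "hate", "hate": "love"}

def make_counterfactual(text, label):
    """Swap causal words for antonyms; flip the label iff a swap happened."""
    words, swapped = [], False
    for w in text.lower().split():
        if w in ANTONYMS:
            words.append(ANTONYMS[w])
            swapped = True
        else:
            words.append(w)
    return (" ".join(words), 1 - label) if swapped else None

train = [("the plot was great and i love the cast", 1),
         ("a terrible script that i hate", 0),
         ("it screened on friday", 1)]   # no causal word: no counterfactual

augmented = list(train)
for text, label in train:
    cf = make_counterfactual(text, label)
    if cf is not None:
        augmented.append(cf)

for t, y in augmented:
    print(y, "|", t)
```

A classifier fit on `augmented` sees each causal word under both labels only when its context flips too, which is what pushes the model toward causal rather than spurious features.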
10

Smy, T., M. Salahuddin, S. K. Dew, and M. J. Brett. "Explanation of spurious features in tungsten deposition using an atomic momentum model." Journal of Applied Physics 78, no. 6 (September 15, 1995): 4157–63. http://dx.doi.org/10.1063/1.359875.

11

Schulze, Georg, Andrew Jirasek, Marcia M. L. Yu, Arnel Lim, Robin F. B. Turner, and Michael W. Blades. "Investigation of Selected Baseline Removal Techniques as Candidates for Automated Implementation." Applied Spectroscopy 59, no. 5 (May 2005): 545–74. http://dx.doi.org/10.1366/0003702053945985.

Abstract:
Observed spectra normally contain spurious features along with those of interest and it is common practice to employ one of several available algorithms to remove the unwanted components. Low frequency spurious components are often referred to as ‘baseline’, ‘background’, and/or ‘background noise’. Here we examine a cross-section of non-instrumental methods designed to remove background features from spectra; the particular methods considered here represent approaches with different theoretical underpinnings. We compare and evaluate their relative performance based on synthetic data sets designed to exemplify vibrational spectroscopic signals in realistic contexts and thereby assess their suitability for computer automation. Each method is presented in a modular format with a concise review of the underlying theory, along with a comparison and discussion of their strengths, weaknesses, and amenability to automation, in order to facilitate the selection of methods best suited to particular applications.
12

Parshutkin, Andrey, and Marina Neaskina. "Increasing the Security of Information from Leakage Through Side Electromagnetic Emissions." Voprosy kiberbezopasnosti, no. 3(49) (2022): 82–89. http://dx.doi.org/10.21681/2311-3456-2022-3-82-89.

Abstract:
The purpose of the article: development of software-implemented ways to increase the security of information from leakage through spurious electromagnetic radiation of a DVI video system. Research method: a search for brightness gradations of RGB primary colors that produce similar levels of spurious electromagnetic radiation, by enumeration of experimental data obtained from fragments of the reconstructed image. Results: the first part of the article provides a review and analysis of the literature on the features of the functioning of the DVI-standard video interface. The main characteristics of the TMDS coding algorithm that can affect the parameters of spurious electromagnetic radiation of the video interface of modern computer equipment are considered. The second part of the article presents an experimental setup for analyzing the relationship between the visual contrast of an image and changes in the intensity of spurious electromagnetic emissions from a DVI video system. Based on an experimental comparison of the intensity of spurious electromagnetic radiation for different color shades specified by RGB combinations, it is shown that it is possible to use pairs of color-tone combinations that, on the one hand, are acceptable for the operator's perception of information and, on the other hand, have practically indistinguishable levels of spurious electromagnetic radiation. The third part of the article presents a method developed by the authors to increase the security of information from leakage through spurious electromagnetic radiation of a DVI video system. The possibility of implementing the proposed method for reducing the information content of spurious electromagnetic radiation of a video system when displaying two-color images is shown.
13

Tsang, K. M. "Recognition of 2D Standalone and Occluded Objects Using Wavelet Transform." International Journal of Pattern Recognition and Artificial Intelligence 15, no. 04 (June 2001): 691–705. http://dx.doi.org/10.1142/s021800140100109x.

Abstract:
A planar curve descriptor which is invariant to translation, size, rotation, and starting point in tracing the boundary is developed based on the periodized wavelet transform. Coefficients obtained from the transform are divided into different bands, and feature vectors are extracted for the recognition of two-dimensional closed boundary curves. Weight vectors which include the width of different bands are also derived to differentiate spurious results arising from noisy samples. The technique is further extended to the recognition of occluded objects by incorporating local features into the feature vector to form a feature map. Matching the likeness of a part of the feature map with that of the reference feature maps indicates which class the occluded object belongs to. Experimental results were obtained to show the effectiveness of the proposed technique.
14

Purdie, Rhiannon. "Borrowed Feathers: The Spurious Older Scots Ending to Chaucer’s Parliament of Fowls in Bodleian Library MS Arch. Selden. B. 24." Chaucer Review 59, no. 2 (April 2024): 135–81. http://dx.doi.org/10.5325/chaucerrev.59.2.0135.

Abstract:
ABSTRACT Oxford, Bodleian Library MS Arch. Selden. B. 24 has attracted attention in recent years for the witness it bears to the reception of Chaucer in late medieval Scotland. Its most striking example is the spurious seventy-nine-line ending attached to the Parliament of Fowls, edited afresh for this article. This article teases apart the work of the spurious ending’s author from that of the Selden copyist and investigates how, when, and why this peculiar Scottish rewriting of the end of Chaucer’s Parliament came about. That the spurious ending is indeed a Scottish composition is demonstrated through its linguistic features and sly allusions to two fifteenth-century Scots poems, Henryson’s “The Cock and the Fox” and Richard Holland’s Buke of the Howlat. These, in combination with textual and codicological details explored here, suggest strongly that this ending was written in conscious defiance, rather than ignorance, of Chaucer’s conclusion to the Parliament.
15

Minghim, Rosane, Liz Huancapaza, Erasmo Artur, Guilherme P. Telles, and Ivar V. Belizario. "Graphs from Features: Tree-Based Graph Layout for Feature Analysis." Algorithms 13, no. 11 (November 18, 2020): 302. http://dx.doi.org/10.3390/a13110302.

Abstract:
Feature analysis has become a very critical task in data analysis and visualization. Graph structures are very flexible in terms of representation and may encode important information on features, but they are challenging with regard to producing layouts adequate for analysis tasks. In this study, we propose and develop similarity-based graph layouts with the purpose of locating relevant patterns in sets of features, thus supporting feature analysis and selection. We apply a tree layout in the first step of the strategy to accomplish node placement and overview based on feature similarity. By drawing the remainder of the graph edges on demand, further grouping and relationships among features are revealed. We evaluate those groups and relationships in terms of their effectiveness in exploring feature sets for data analysis. Correlation of features with a target categorical attribute and feature ranking are added to support the task. Multidimensional projections are employed to plot the dataset based on selected attributes to reveal the effectiveness of the feature set. Our results have shown that the tree-graph layout framework allows for a number of observations that are very important in user-centric feature selection and not easy to make with any other available tool. It provides a way of finding relevant and irrelevant features, spurious sets of noisy features, groups of similar features, and opposite features, all of which are essential tasks in different scenarios of data analysis. Case studies in application areas centered on documents, images, and sound data demonstrate the ability of the framework to quickly reach a satisfactory compact representation from a larger feature set.
16

Ding, Yujian, Xiaoxu Ma, and Bingxue Yang. "Research on Image Feature Extraction and Environment Inference Based on Invariant Learning." Applied Sciences 14, no. 23 (November 21, 2024): 10770. http://dx.doi.org/10.3390/app142310770.

Abstract:
As dataset environments evolve, the adaptability of deep models has weakened due to biases in training data collection. Consequently, a critical challenge has emerged: enabling models to effectively learn invariant features across diverse environments while ignoring spurious features introduced by environmental changes. This article proposes an image feature extraction algorithm based on invariant learning, which trains a ResNet18 model that can fully learn invariant features. On the basis of this model, the GRAD-CAM algorithm is used to extract environmental features of images. Based on this feature dataset, images are classified according to different environments through K-means clustering, achieving environment partitioning of mixed datasets. The results show that, on the test set, the IRM-ResNet18 network's prediction accuracy is 88.6%, and its accuracy and stability are significantly better than those of ResNet18; it can fully learn and extract invariant features from images. When images are partitioned based on the extracted environmental features, the IRM-ResNet18 network's overall environment-partitioning accuracy is 88.2%, which confirms the efficacy of the image environment-partitioning algorithm proposed in this paper.
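The partitioning step this abstract describes (cluster images by their environment-feature vectors with K-means) can be sketched as below. This is our toy illustration, not the paper's pipeline: real feature vectors would come from GRAD-CAM on the trained model, whereas here they are hand-made 2-D points, and the k-means is a bare-bones pure-Python version.

```python
def kmeans(points, k, iters=20):
    """Bare-bones k-means; initializes from the first k points for determinism."""
    centers = [tuple(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center (squared L2).
        for j, p in enumerate(points):
            assign[j] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        # Update step: each center moves to the mean of its members.
        for c in range(k):
            members = [points[j] for j in range(len(points)) if assign[j] == c]
            if members:
                centers[c] = tuple(sum(xs) / len(xs) for xs in zip(*members))
    return assign, centers

# Toy "environment feature" vectors: two well-separated environments.
feats = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),   # environment A
         (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]   # environment B
assign, centers = kmeans(feats, k=2)
print("environment labels:", assign)
```

Each cluster index then serves as an inferred environment label for downstream invariant-learning training.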
17

Nabavizadeh, Seyed, Mohsen Eshraghi, and Sergio Felicelli. "A Comparative Study of Multiphase Lattice Boltzmann Methods for Bubble-Dendrite Interaction during Solidification of Alloys." Applied Sciences 9, no. 1 (December 24, 2018): 57. http://dx.doi.org/10.3390/app9010057.

Abstract:
This paper presents a comparative study between the pseudopotential Shan-Chen model and the phase field multiphase lattice Boltzmann method for simulating bubble dynamics during dendritic solidification of binary alloys. The Shan-Chen method is an efficient lattice Boltzmann multiphase method despite having some limitations, including the generation of large spurious currents. The phase field model solves the Cahn-Hilliard equation in addition to the Navier-Stokes equation to track the interface between phases. The phase field method is more accurate than the Shan-Chen model for simulation of fluids with a high-density ratio since it generates an acceptable small spurious current, though at the expense of higher computational costs. For the simulations in this article, the multiphase lattice Boltzmann model was coupled with the cellular automata and finite difference methods to solve temperature and concentration fields. The simulated results were presented and compared regarding the ability of each model to simulate phenomena at a microscale resolution, such as Marangoni convection, the magnitude of spurious current, and the computational costs. It is shown that although Shan-Chen methods can replicate some qualitative features of bubble-dendrite interaction, the generated spurious current is unacceptably large, particularly for practical values of the density ratio between fluid and gas phases. This occurs even after implementation of several enhancements to the original Shan-Chen method. This serious limitation makes the Shan-Chen models unsuitable to simulate fluid flow phenomena, such as Marangoni convection, because the large spurious currents mask completely the physical flow.
18

Zhang, Shengyu, Xusheng Feng, Wenyan Fan, Wenjing Fang, Fuli Feng, Wei Ji, Shuo Li, et al. "Video-Audio Domain Generalization via Confounder Disentanglement." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15322–30. http://dx.doi.org/10.1609/aaai.v37i12.26787.

Abstract:
Existing video-audio understanding models are trained and evaluated in an intra-domain setting, facing performance degeneration in real-world applications where multiple domains and distribution shifts naturally exist. The key to video-audio domain generalization (VADG) lies in alleviating spurious correlations over multi-modal features. To achieve this goal, we resort to causal theory and attribute such correlation to confounders affecting both video-audio features and labels. We propose a DeVADG framework that conducts uni-modal and cross-modal deconfounding through back-door adjustment. DeVADG performs cross-modal disentanglement and obtains fine-grained confounders at both class-level and domain-level using half-sibling regression and unpaired domain transformation, which essentially identifies domain-variant factors and class-shared factors that cause spurious correlations between features and false labels. To promote VADG research, we collect a VADG-Action dataset for video-audio action recognition with over 5,000 video clips across four domains (e.g., cartoon and game) and ten action classes (e.g., cooking and riding). We conduct extensive experiments, i.e., multi-source DG, single-source DG, and qualitative analysis, validating the rationality of our causal analysis and the effectiveness of the DeVADG framework.
19

Wenger, Emily, Xiuyu Li, Ben Y. Zhao, and Vitaly Shmatikov. "Data Isotopes for Data Provenance in DNNs." Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 413–29. http://dx.doi.org/10.56553/popets-2024-0024.

Abstract:
Today, creators of data-hungry deep neural networks (DNNs) scour the Internet for training fodder, leaving users with little control over or knowledge of when their data, and in particular their images, are used to train models. To empower users to counteract unwanted use of their images, we design, implement, and evaluate a practical system that enables users to detect if their data was used to train a DNN model for image classification. We show how users can create special images we call isotopes, which introduce "spurious features" into DNNs during training. With only query access to a model and no knowledge of the model-training process, nor control of the data labels, a user can apply statistical hypothesis testing to detect if the model learned these spurious features by training on the user's images. Isotopes can be viewed as an application of a particular type of data poisoning. In contrast to backdoors and other poisoning attacks, our purpose is not to cause misclassification but rather to create tell-tale changes in confidence scores output by the model that reveal the presence of isotopes in the training data. Isotopes thus turn DNNs' vulnerability to memorization and spurious correlations into a tool for data provenance. Our results confirm efficacy in multiple image classification settings, detecting and distinguishing between hundreds of isotopes with high accuracy. We further show that our system works on public ML-as-a-service platforms and larger models such as ImageNet, can use physical objects in images instead of digital marks, and remains robust against several adaptive countermeasures.
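The hypothesis-testing step the abstract mentions can be illustrated with a toy permutation test. This sketch is ours, not the paper's method in detail: the confidence values are made-up stand-ins for real model outputs on marked (isotope) versus clean query images, and we test whether the marked group's mean confidence is significantly higher.

```python
import random

# Hypothetical confidence scores for the target class, queried from the model.
marked   = [0.81, 0.77, 0.85, 0.79, 0.83, 0.80]   # images carrying the isotope mark
controls = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53]   # unmarked control images

def perm_test(a, b, n_perm=5000, seed=0):
    """One-sided permutation p-value for mean(a) > mean(b)."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:len(a)]) / len(a) - sum(pooled[len(a):]) / len(b)
        if diff >= observed:
            hits += 1
    # Add-one smoothing keeps the p-value strictly positive.
    return (hits + 1) / (n_perm + 1)

p = perm_test(marked, controls)
print(f"p-value = {p:.4f} -> isotope {'detected' if p < 0.05 else 'not detected'}")
```

A small p-value indicates the model assigns systematically higher confidence to marked images, i.e., it likely learned the spurious isotope feature during training.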
20

Bobrovskikh, Aleksey, Alexander Gureev, Vladimir Los, and Aleksey Markov. "Consideration of Multipath in Modeling the Field of Secondary Electromagnetic Radiation on Informatization Objects." Computational Nanotechnology 9, no. 4 (December 28, 2022): 63–69. http://dx.doi.org/10.33693/2313-223x-2022-9-4-63-69.

Abstract:
The paper considers the features of the formation of the resulting field of spurious electromagnetic radiation at informatization objects, taking into account multipath propagation of radio waves, and evaluates the achievable error in determining the attenuation coefficient using known methods based on a deterministic model of radio wave propagation inside a building. It shows the need either for a significant refinement of the method for determining the attenuation coefficient, since in its known form it is suitable only for measurements in free space and gives very significant errors in urban conditions and inside buildings, or for the use of adaptive reception methods (for example, diversity reception), which can significantly reduce the level of interference dips in the resulting electromotive force.
21

Cai, Jie, Xin Wang, Haoyang Li, Ziwei Zhang, and Wenwu Zhu. "Multimodal Graph Neural Architecture Search under Distribution Shifts." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8227–35. http://dx.doi.org/10.1609/aaai.v38i8.28663.

Abstract:
Multimodal graph neural architecture search (MGNAS) has shown great success for automatically designing the optimal multimodal graph neural network (MGNN) architecture by leveraging multimodal representation, crossmodal information and graph structure in one unified framework. However, existing MGNAS fails to handle distribution shifts that naturally exist in multimodal graph data, since the searched architectures inevitably capture spurious statistical correlations under distribution shifts. To solve this problem, we propose a novel Out-of-distribution Generalized Multimodal Graph Neural Architecture Search (OMG-NAS) method which optimizes the MGNN architecture with respect to its performance on decorrelated OOD data. Specifically, we propose a multimodal graph representation decorrelation strategy, which encourages the searched MGNN model to output representations that eliminate spurious correlations through iteratively optimizing the feature weights and controller. In addition, we propose a global sample weight estimator that facilitates the sharing of optimal sample weights learned from existing architectures. This design promotes the effective estimation of the sample weights for candidate MGNN architectures to generate decorrelated multimodal graph representations, concentrating more on the truly predictive relations between invariant features and ground-truth labels. Extensive experiments on real-world multimodal graph datasets demonstrate the superiority of our proposed method over SOTA baselines.
APA, Harvard, Vancouver, ISO, and other styles
22

Fessler, Daniel M. T. "Contextual features of problem-solving and social learning give rise to spurious associations, the raw materials for the evolution of rituals." Behavioral and Brain Sciences 29, no. 6 (December 2006): 617–18. http://dx.doi.org/10.1017/s0140525x06009381.

Full text
Abstract:
If rituals persist in part because of their memory-taxing attributes, from whence do they arise? I suggest that magical practices form the core of rituals, and that many such practices derive from learned pseudo-causal associations. Spurious associations are likely to be acquired during problem-solving under conditions of ambiguity and danger, and are often a consequence of imitative social learning.
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Yingjie, Jiarui Zhang, Tao Wang, and Yun Liang. "Trend-Aware Supervision: On Learning Invariance for Semi-supervised Facial Action Unit Intensity Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 483–91. http://dx.doi.org/10.1609/aaai.v38i1.27803.

Full text
Abstract:
With the increasing need for facial behavior analysis, semi-supervised AU intensity estimation using only keyframe annotations has emerged as a practical and effective solution to relieve the burden of annotation. However, the lack of annotations makes the spurious correlation problem caused by AU co-occurrences and subject variation much more prominent, leading to non-robust intensity estimation that is entangled among AUs and biased among subjects. We observe that trend information inherent in keyframe annotations could act as extra supervision, and that raising awareness of AU-specific facial appearance changing trends during training is the key to learning invariant AU-specific features. To this end, we propose Trend-Aware Supervision (TAS), which pursues three kinds of trend awareness: intra-trend ranking awareness, intra-trend speed awareness, and inter-trend subject awareness. TAS alleviates the spurious correlation problem by raising trend awareness during training to learn AU-specific features that represent the corresponding facial appearance changes, achieving intensity estimation invariance. Experiments conducted on two commonly used AU benchmark datasets, BP4D and DISFA, show the effectiveness of each kind of awareness. Under trend-aware supervision, performance can be improved without extra computational or storage costs during inference.
APA, Harvard, Vancouver, ISO, and other styles
24

Wei, Yutao, Wenzheng Shu, Zhangtao Cheng, Wenxin Tai, Chunjing Xiao, and Ting Zhong. "Counterfactual Graph Learning for Anomaly Detection with Feature Disentanglement and Generation (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23682–83. http://dx.doi.org/10.1609/aaai.v38i21.30524.

Full text
Abstract:
Graph anomaly detection has received remarkable research interests, and various techniques have been employed for enhancing detection performance. However, existing models tend to learn dataset-specific spurious correlations based on statistical associations. A well-trained model might suffer from performance degradation when applied to newly observed nodes with different environments. To handle this situation, we propose CounterFactual Graph Anomaly Detection model, CFGAD. In this model, we design a gradient-based separator to disentangle node features into class features and environment features. Then, we present a weight-varying diffusion model to combine class features and environment features from different nodes to generate counterfactual samples. These counterfactual samples will be adopted to enhance model robustness. Comprehensive experiments demonstrate the effectiveness of our CFGAD.
APA, Harvard, Vancouver, ISO, and other styles
25

Ruiz, Oscar E., Camilo Cortes, Diego A. Acosta, and Mauricio Aristizabal. "Sensitivity analysis in optimized parametric curve fitting." Engineering Computations 32, no. 1 (March 2, 2015): 37–61. http://dx.doi.org/10.1108/ec-03-2013-0086.

Full text
Abstract:
Purpose – Curve fitting from unordered noisy point samples is needed for surface reconstruction in many applications. In the literature, several approaches have been proposed to solve this problem. However, previous works lack a formal characterization of the curve fitting problem and an assessment of the effect of several parameters (i.e. scalars that remain constant in the optimization problem), such as the number of control points (m), curve degree (b), knot vector composition (U), norm degree (k), and point sample size (r), on the optimized curve reconstruction measured by a penalty function (f). The paper aims to discuss these issues. Design/methodology/approach – A numerical sensitivity analysis of the effect of m, b, k and r on f and a characterization of the fitting procedure from the mathematical viewpoint are performed. Also, the spectral (frequency) analysis of the derivative of the angle of the fitted curve with respect to u is explored as a means to detect spurious curls and peaks. Findings – It is more effective to find optimum values for m than for k or b in order to obtain good results, because the topological faithfulness of the resulting curve strongly depends on m. Furthermore, when an excessive number of control points is used, the resulting curve presents spurious curls and peaks. The authors were able to detect the presence of such spurious features with spectral analysis. Also, the authors found that the method for curve fitting is robust to significant decimation of the point sample. Research limitations/implications – The authors have addressed important voids of previous works in this field. The authors determined which of the curve fitting parameters m, b and k most influenced the results, and how. Also, the authors performed a characterization of the curve fitting problem from the optimization perspective. Finally, the authors devised a method to detect spurious features in the fitted curve.
Practical implications – This paper provides a methodology to select the important tuning parameters in a formal manner. Originality/value – To the best of the authors' knowledge, no previous work has formally evaluated the sensitivity of the goodness of the curve fit with respect to the possible tuning parameters (curve degree, number of control points, norm degree, etc.).
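The spectral detection of spurious curls and peaks described in the Findings can be sketched as follows: compute the tangent angle of the sampled curve, differentiate it with respect to the parameter, and measure how much of that derivative's spectral energy sits at high frequencies. This is a minimal NumPy illustration, not the authors' implementation; the function name and the cutoff frequency are hypothetical choices.

```python
import numpy as np

def high_freq_energy_fraction(x, y, cutoff=0.25):
    """Fraction of the spectral energy of d(theta)/du above a cutoff.

    theta(u) is the tangent angle of the sampled curve (x(u), y(u)).
    A large high-frequency fraction suggests spurious curls or peaks.
    `cutoff` is expressed as a fraction of the Nyquist frequency
    (an illustrative choice, not taken from the paper).
    """
    dx, dy = np.gradient(x), np.gradient(y)
    theta = np.unwrap(np.arctan2(dy, dx))   # tangent angle along the curve
    dtheta = np.gradient(theta)             # derivative w.r.t. the parameter u
    spec = np.abs(np.fft.rfft(dtheta - dtheta.mean())) ** 2
    freqs = np.fft.rfftfreq(len(dtheta))    # 0 .. 0.5 cycles/sample
    total = spec.sum()
    if total == 0.0:                        # perfectly uniform turning rate
        return 0.0
    return float(spec[freqs > cutoff * 0.5].sum() / total)
```

A curve with high-frequency wiggles added to a smooth circle yields a markedly larger fraction than the circle itself, which is the kind of signature the authors use to flag an excessive number of control points.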
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Bo, Fangzhou Meng, Hongying Tang, and Guanjun Tong. "Two-Level Attention Module Based on Spurious-3D Residual Networks for Human Action Recognition." Sensors 23, no. 3 (February 3, 2023): 1707. http://dx.doi.org/10.3390/s23031707.

Full text
Abstract:
In recent years, deep learning techniques have excelled in video action recognition. However, currently commonly used video action recognition models minimize the importance of different video frames and spatial regions within some specific frames when performing action recognition, which makes it difficult for the models to adequately extract spatiotemporal features from the video data. In this paper, an action recognition method based on improved residual convolutional neural networks (CNNs) for video frames and spatial attention modules is proposed to address this problem. The network can guide what and where to emphasize or suppress with essentially little computational cost using the video frame attention module and the spatial attention module. It also employs a two-level attention module to emphasize feature information along the temporal and spatial dimensions, respectively, highlighting the more important frames in the overall video sequence and the more important spatial regions in some specific frames. Specifically, we create the video frame and spatial attention map by successively adding the video frame attention module and the spatial attention module to aggregate the spatial and temporal dimensions of the intermediate feature maps of the CNNs to obtain different feature descriptors, thus directing the network to focus more on important video frames and more contributing spatial regions. The experimental results further show that the network performs well on the UCF-101 and HMDB-51 datasets.
APA, Harvard, Vancouver, ISO, and other styles
27

Sridhar, A., V. G. Kouznetsova, and M. G. D. Geers. "Frequency domain boundary value problem analyses of acoustic metamaterials described by an emergent generalized continuum." Computational Mechanics 65, no. 3 (November 28, 2019): 789–805. http://dx.doi.org/10.1007/s00466-019-01795-z.

Full text
Abstract:
This paper presents a computational frequency-domain boundary value analysis of acoustic metamaterials and phononic crystals based on a general homogenization framework, which features a novel definition of the macro-scale fields based on the Floquet-Bloch average in combination with a family of characteristic projection functions leading to a generalized macro-scale continuum. Restricting to 1D elastodynamics and the frequency-domain response for the sake of compactness, the boundary value problem on the generalized macro-scale continuum is elaborated. Several challenges are identified, in particular the non-uniqueness in selection of the boundary conditions for the homogenized continuum and the presence of spurious short wave solutions. To this end, procedures for the determination of the homogenized boundary conditions and mitigation of the spurious solutions are proposed. The methodology is validated against the direct numerical simulation on an example periodic 2-phase composite structure.
APA, Harvard, Vancouver, ISO, and other styles
28

Santos, Augusto, Diogo Rente, Rui Seabra, and José M. F. Moura. "Learning the Causal Structure of Networked Dynamical Systems under Latent Nodes and Structured Noise." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14866–74. http://dx.doi.org/10.1609/aaai.v38i13.29406.

Full text
Abstract:
This paper considers learning the hidden causal network of a linear networked dynamical system (NDS) from the time series data at some of its nodes -- partial observability. The dynamics of the NDS are driven by colored noise that generates spurious associations across pairs of nodes, rendering the problem much harder. To address the challenge of noise correlation and partial observability, we assign to each pair of nodes a feature vector computed from the time series data of observed nodes. The feature embedding is engineered to yield structural consistency: there exists an affine hyperplane that consistently partitions the set of features, separating the feature vectors corresponding to connected pairs of nodes from those corresponding to disconnected pairs. The causal inference problem is thus addressed via clustering the designed features. We demonstrate with simple baseline supervised methods the competitive performance of the proposed causal inference mechanism under broad connectivity regimes and noise correlation levels, including a real world network. Further, we devise novel technical guarantees of structural consistency for linear NDS under the considered regime.
APA, Harvard, Vancouver, ISO, and other styles
29

Yoritomo, John Y., and Richard L. Weaver. "Fluctuations in the cross-correlation for fields lacking full diffusivity: The statistics of spurious features." Journal of the Acoustical Society of America 140, no. 1 (July 2016): 702–13. http://dx.doi.org/10.1121/1.4959002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Weaver, Richard, and John Y. Yoritomo. "Fluctuations in the cross-correlation for fields lacking full diffusivity: The statistics of spurious features." Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3472. http://dx.doi.org/10.1121/1.4987221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Zhang, Daiwei, Bo Xu, Han Hu, Qing Zhu, Qiang Wang, Xuming Ge, Min Chen, and Yan Zhou. "Spherical Hough Transform for Robust Line Detection Toward a 2D–3D Integrated Mobile Mapping System." Photogrammetric Engineering & Remote Sensing 89, no. 5 (May 1, 2023): 50–59. http://dx.doi.org/10.14358/pers.22-00112r2.

Full text
Abstract:
Line features are of great importance for the registration of the Vehicle-Borne Mobile Mapping System that contains both lidar and multiple-lens panoramic cameras. In this work, a spherical straight-line model is proposed to detect the unified line features on the panoramic imaging surface based on the Spherical Hough Transform. The local topological constraints and gradient image voting are also combined to register the line features between panoramic images and lidar point clouds within the Hough parameter space. Experimental results show that the proposed method can accurately extract the long strip targets on the panoramic images and avoid spurious or broken line segments. Meanwhile, the line matching precision between point clouds and panoramic images is also improved.
APA, Harvard, Vancouver, ISO, and other styles
32

Yadav, Amit, Abhijeet Agawal, Pramod Kumar, and Tejaswi Sachwani. "DESIGN AND ANALYSIS OF AN INTELLIGENT FIRE DETECTION SYSTEM FOR AIRCRAFT." International Journal of Engineering Technologies and Management Research 5, no. 2 (May 4, 2020): 260–73. http://dx.doi.org/10.29121/ijetmr.v5.i2.2018.656.

Full text
Abstract:
Fire detection and fire warning systems are design features of an aircraft. The fire detection system protects both the aircraft and passengers in case of an actual fire during flight, but spurious fire warnings during flight create panic among flight crews and passengers. The conventional fire alarm system of an aircraft can be triggered by false signals. An ANN-based fire detection system provides real observation of deployed zones. An intelligent fire detection system is developed based on an artificial neural network using three kinds of detection information: heat (temperature), smoke density, and CO gas. This information helps determine the probability of three representative fire conditions: fire, smoke, and no fire. The simulated MATLAB results show that the identification errors are very small. The neural-network-based fire detection system integrates different types of sensor data and improves the system's ability to correctly predict fires. It gives an early alarm when any kind of fire breaks out and helps decrease spurious warnings.
APA, Harvard, Vancouver, ISO, and other styles
33

YU, DONGGANG, and WEI LAI. "ANALYSIS AND RECOGNITION OF BROKEN HANDWRITTEN DIGITS BASED ON MORPHOLOGICAL STRUCTURE AND SKELETON." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 03 (May 2005): 271–96. http://dx.doi.org/10.1142/s0218001405004095.

Full text
Abstract:
This paper presents an efficient method of reconstructing and recognizing broken handwritten digits. Constrained dilation algorithms are used to bridge small gaps and smooth some spurious points. The contours of broken handwritten digits are smoothed and linearized, and a set of structural points of digits are detected along the outer contours of digits. These structural points are used to describe the morphological structure of broken digits. The broken digits are skeletonized with an improved thinning algorithm. Spurious segments introduced during the extraction of digit fields are detected and deleted based on the structure analysis of digit fields, segment recognition, segment extension, skeleton structure and geometrical features. The broken points of the digits are preselected based on the minimum distance between the "end" points of skeletons of two neighboring regions. The correction rules of the preselected broken points are also based on the structure analysis and comparison of broken digits. Experimental results showing the effectiveness of the method are given.
APA, Harvard, Vancouver, ISO, and other styles
34

Comite, Davide, Paolo Baccarelli, Paolo Burghignoli, and Alessandro Galli. "Wire-medium loaded planar structures: modal analysis, near fields, and radiation features." International Journal of Microwave and Wireless Technologies 8, no. 4-5 (April 12, 2016): 713–22. http://dx.doi.org/10.1017/s1759078716000490.

Full text
Abstract:
A novel transmission-line model is used for the analysis of planar structures, including wire-medium (WM) slabs with vertically aligned wires. The network formalism allows for an effective determination of the relevant spectral Green's functions, of the modal dispersion equation via transverse resonance, as well as of the far-field radiation pattern produced by simple sources via reciprocity, as opposed to the more cumbersome field-matching approach. Numerical results, validated also against state-of-the-art simulation software, confirm the accuracy and effectiveness of the proposed approach. In particular, modal and radiation features are presented for a class of leaky-wave antennas based on planar WM loaded configurations covered by partially reflecting screens, for which leaky unimodal regimes are achieved by minimizing spurious radiation from the quasi-transverse electromagnetic (TEM) mode.
APA, Harvard, Vancouver, ISO, and other styles
35

Ville-Ometz, Fabienne, Jean Royauté, and Alain Zasadzinski. "Enhancing in automatic recognition and extraction of term variants with linguistic features." Terminology 13, no. 1 (June 1, 2007): 35–59. http://dx.doi.org/10.1075/term.13.1.03vil.

Full text
Abstract:
The recognition and extraction of terms and their variants in texts are crucial processes in text mining. We use the ILC platform, an automatic controlled indexing platform, to perform these linguistic processes. We present a methodology for enhancing the recognition of syntactic term variation in English, using syntactic and morpho-syntactic features. Principal spurious variants of terms are ascribed to incorrect word dependencies. To overcome these problems, we consider each term variant as a window on the sentence and introduce two criteria: an internal syntactic criterion which checks that the dependencies between words in the window are respected, and an external criterion which defines boundaries, making it possible to ensure that the window is well positioned in the sentence. The use of these criteria improves filtering of the variants and assists the expert in validating the indexing.
APA, Harvard, Vancouver, ISO, and other styles
36

Franz, Karly S., Grace Reszetnik, and Tom Chau. "On the Need for Accurate Brushstroke Segmentation of Tablet-Acquired Kinematic and Pressure Data: The Case of Unconstrained Tracing." Algorithms 17, no. 3 (March 20, 2024): 128. http://dx.doi.org/10.3390/a17030128.

Full text
Abstract:
Brushstroke segmentation algorithms are critical in computer-based analysis of fine motor control via handwriting, drawing, or tracing tasks. Current segmentation approaches typically rely only on one type of feature, either spatial, temporal, kinematic, or pressure. We introduce a segmentation algorithm that leverages both spatiotemporal and pressure features to accurately identify brushstrokes during a tracing task. The algorithm was tested on both a clinical and validation dataset. Using validation trials with incorrectly identified brushstrokes, we evaluated the impact of segmentation errors on commonly derived biomechanical features used in the literature to detect graphomotor pathologies. The algorithm exhibited robust performance on validation and clinical datasets, effectively identifying brushstrokes while simultaneously eliminating spurious, noisy data. Spatial and temporal features were most affected by incorrect segmentation, particularly those related to the distance between brushstrokes and in-air time, which experienced propagated errors of 99% and 95%, respectively. In contrast, kinematic features, such as velocity and acceleration, were minimally affected, with propagated errors between 0 to 12%. The proposed algorithm may help improve brushstroke segmentation in future studies of handwriting, drawing, or tracing tasks. Spatial and temporal features derived from tablet-acquired data should be considered with caution, given their sensitivity to segmentation errors and instrumentation characteristics.
APA, Harvard, Vancouver, ISO, and other styles
37

Hayashi, Katsuhiko, Shuhei Kondo, and Yuji Matsumoto. "Efficient Stacked Dependency Parsing by Forest Reranking." Transactions of the Association for Computational Linguistics 1 (December 2013): 139–50. http://dx.doi.org/10.1162/tacl_a_00216.

Full text
Abstract:
This paper proposes a discriminative forest reranking algorithm for dependency parsing that can be seen as a form of efficient stacked parsing. A dynamic programming shift-reduce parser produces a packed derivation forest which is then scored by a discriminative reranker, using the 1-best tree output by the shift-reduce parser as guide features in addition to third-order graph-based features. To improve efficiency and accuracy, this paper also proposes a novel shift-reduce parser that eliminates the spurious ambiguity of arc-standard transition systems. Testing on the English Penn Treebank data, forest reranking gave a state-of-the-art unlabeled dependency accuracy of 93.12.
APA, Harvard, Vancouver, ISO, and other styles
38

West, Paul, and Natalia Starostina. "How to Recognize and Avoid AFM Image Artifacts." Microscopy Today 11, no. 3 (June 2003): 20–27. http://dx.doi.org/10.1017/s1551929500052639.

Full text
Abstract:
Images produced by an atomic force microscope (AFM) often contain spurious features and image distortion that can render accurate imaging and metrology suspect. These “artifacts” are caused by the manner in which the image is produced. Artifacts can originate from the probe tip geometry, scanner non-linearity, image processing software, vibration, sample contamination, electronic noise, and poor sample stability. This article describes and illustrates common AFM image artifacts and suggests means to eliminate or minimize them.
APA, Harvard, Vancouver, ISO, and other styles
39

Yu, M. L., F. X. Giraldo, M. Peng, and Z. J. Wang. "Localized Artificial Viscosity Stabilization of Discontinuous Galerkin Methods for Nonhydrostatic Mesoscale Atmospheric Modeling." Monthly Weather Review 143, no. 12 (November 24, 2015): 4823–45. http://dx.doi.org/10.1175/mwr-d-15-0134.1.

Full text
Abstract:
Gibbs oscillation can show up near flow regions with strong temperature gradients in the numerical simulation of nonhydrostatic mesoscale atmospheric flows when using the high-order discontinuous Galerkin (DG) method. The authors propose to incorporate flow-feature-based localized Laplacian artificial viscosity in the DG framework to suppress the spurious oscillation in the vicinity of sharp thermal fronts but not to contaminate the smooth flow features elsewhere. The parameters in the localized Laplacian artificial viscosity are modeled based on both physical criteria and numerical features of the DG discretization. The resulting numerical formulation is first validated on several shock-involved test cases, including a shock discontinuity problem with the one-dimensional Burgers' equation, shock–entropy wave interaction, and shock–vortex interaction. Then the efficacy of the developed numerical formulation on stabilizing thermal fronts in nonhydrostatic mesoscale atmospheric modeling is demonstrated by two benchmark test cases: the rising thermal bubble problem and the density current problem. The results indicate that the proposed flow-feature-based localized Laplacian artificial viscosity method can sharply detect the nonsmooth flow features, and stabilize the DG discretization nearby. Furthermore, the numerical stabilization method works robustly for a wide range of grid sizes and polynomial orders without parameter tuning in the localized Laplacian artificial viscosity.
APA, Harvard, Vancouver, ISO, and other styles
40

Zeng, Zengri, Wei Peng, and Baokang Zhao. "Improving the Accuracy of Network Intrusion Detection with Causal Machine Learning." Security and Communication Networks 2021 (November 3, 2021): 1–18. http://dx.doi.org/10.1155/2021/8986243.

Full text
Abstract:
In recent years, machine learning (ML) algorithms have proven effective in intrusion detection. However, as ML algorithms are mainly applied to evaluate the anomaly of the network, the detection accuracy for cyberattacks of multiple types cannot be fully guaranteed. Existing algorithms for network intrusion detection based on ML or feature selection rely on spurious correlations between features and cyberattacks, causing several misclassifications. To tackle the abovementioned problems, this research aimed to establish a novel network intrusion detection system (NIDS) based on causal ML. The proposed system starts with the identification of noisy features by causal intervention, preserving only the features that have a causal relationship with cyberattacks. Then, an ML algorithm is used to make a preliminary classification to select the most relevant types of cyberattacks, so that the unique labeled cyberattack can be detected by the counterfactual detection algorithm. In addition to a relatively stable accuracy, the complexity of cyberattack detection is also effectively reduced, with a reduction of up to 94% in the size of the training features. Moreover, when several types of cyberattacks are present, the detection accuracy is significantly improved compared with previous ML algorithms.
APA, Harvard, Vancouver, ISO, and other styles
41

Bishop, Craig H., and Daniel Hodyss. "Adaptive Ensemble Covariance Localization in Ensemble 4D-VAR State Estimation." Monthly Weather Review 139, no. 4 (April 1, 2011): 1241–55. http://dx.doi.org/10.1175/2010mwr3403.1.

Full text
Abstract:
An adaptive ensemble covariance localization technique, previously used in “local” forms of the ensemble Kalman filter, is extended to a global ensemble four-dimensional variational data assimilation (4D-VAR) scheme. The purely adaptive part of the localization matrix considered is given by the element-wise square of the correlation matrix of a smoothed ensemble of streamfunction perturbations. It is found that these purely adaptive localization functions have spurious far-field correlations as large as 0.1 with a 128-member ensemble. To attenuate the spurious features of the purely adaptive localization functions, the authors multiply the adaptive localization functions with very broadscale nonadaptive localization functions. Using the Navy’s operational ensemble forecasting system, it is shown that the covariance localization functions obtained by this approach adapt to spatially anisotropic aspects of the flow, move with the flow, and are free of far-field spurious correlations. The scheme is made computationally feasible by (i) a method for inexpensively generating the square root of an adaptively localized global four-dimensional error covariance model in terms of products or modulations of smoothed ensemble perturbations with themselves and with raw ensemble perturbations, and (ii) utilizing algorithms that speed ensemble covariance localization when localization functions are separable, variable-type independent, and/or large scale. In spite of the apparently useful characteristics of adaptive localization, single analysis/forecast experiments assimilating 583 200 observations over both 6- and 12-h data assimilation windows failed to identify any significant difference in the quality of the analyses and forecasts obtained using nonadaptive localization from that obtained using adaptive localization.
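The core construction in this abstract, the element-wise square of the ensemble correlation matrix tapered by a broad-scale nonadaptive localization matrix, can be sketched in a few lines. This is a hypothetical NumPy illustration of the stated formula, not the operational system's code; the smoothing of the perturbations and the broad-scale localization matrix are assumed to be supplied by the caller.

```python
import numpy as np

def adaptive_localization(ens_pert, broad_loc):
    """Element-wise square of the ensemble correlation matrix, tapered
    by a broad-scale nonadaptive localization matrix.

    ens_pert : (n_members, n_state) array of smoothed ensemble perturbations.
    broad_loc: (n_state, n_state) broad-scale localization matrix (assumed given).
    """
    # sample correlation matrix of the ensemble perturbations
    z = (ens_pert - ens_pert.mean(axis=0)) / ens_pert.std(axis=0, ddof=1)
    corr = z.T @ z / (ens_pert.shape[0] - 1)
    # squaring keeps the matrix positive semidefinite (Schur product theorem)
    # and damps weak, likely spurious, far-field correlations
    return corr**2 * broad_loc
```

With a trivial all-ones taper the result is symmetric, has unit diagonal, and lies in [0, 1], which is what makes it usable as a localization (Schur-product) weight.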
APA, Harvard, Vancouver, ISO, and other styles
42

Xin, Shiji, Yifei Wang, Jingtong Su, and Yisen Wang. "On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10519–27. http://dx.doi.org/10.1609/aaai.v37i9.26250.

Full text
Abstract:
Despite impressive success in many tasks, deep learning models are shown to rely on spurious features, which will catastrophically fail when generalized to out-of-distribution (OOD) data. Invariant Risk Minimization (IRM) is proposed to alleviate this issue by extracting domain-invariant features for OOD generalization. Nevertheless, recent work shows that IRM is only effective for a certain type of distribution shift (e.g., correlation shift) while it fails for other cases (e.g., diversity shift). Meanwhile, another thread of method, Adversarial Training (AT), has shown better domain transfer performance, suggesting that it has the potential to be an effective candidate for extracting domain-invariant features. This paper investigates this possibility by exploring the similarity between the IRM and AT objectives. Inspired by this connection, we propose Domain-wise Adversarial Training (DAT), an AT-inspired method for alleviating distribution shift by domain-specific perturbations. Extensive experiments show that our proposed DAT can effectively remove domain-varying features and improve OOD generalization under both correlation shift and diversity shift.
APA, Harvard, Vancouver, ISO, and other styles
43

Wu, Hongqiu, Ruixue Ding, Hai Zhao, Pengjun Xie, Fei Huang, and Min Zhang. "Adversarial Self-Attention for Language Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13727–35. http://dx.doi.org/10.1609/aaai.v37i11.26608.

Full text
Abstract:
Deep neural models (e.g. Transformer) naturally learn spurious features, which create a "shortcut" between the labels and inputs, thus impairing the generalization and robustness. This paper advances the self-attention mechanism to a robust variant for Transformer-based pre-trained language models (e.g. BERT). We propose the Adversarial Self-Attention mechanism (ASA), which adversarially biases the attentions to effectively suppress the model reliance on features (e.g. specific keywords) and encourage its exploration of broader semantics. We conduct comprehensive evaluation across a wide range of tasks for both pre-training and fine-tuning stages. For pre-training, ASA unfolds remarkable performance gain compared to naive training for longer steps. For fine-tuning, ASA-empowered models outweigh naive models by a large margin considering both generalization and robustness.
APA, Harvard, Vancouver, ISO, and other styles
44

Pike, J. "Notes on the structure of viscous and numerically-captured shocks." Aeronautical Journal 89, no. 889 (November 1985): 335–38. http://dx.doi.org/10.1017/s0001924000050843.

Full text
Abstract:
An exact expression for the flow variables through a viscous shock wave is obtained from the Navier-Stokes equations. The Prandtl number is taken to be ¾, which is close to the value for air, and the viscosity is assumed to be given by Sutherland's formula. By considering the limit as the viscosity tends to zero, it is shown that the solution to the Euler equations has an entropy spike at the shock wave. This explains certain features of shock waves captured by numerical solutions of the Euler equations that were hitherto considered spurious.
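Sutherland's formula mentioned in this summary relates the dynamic viscosity of a gas to its temperature. A minimal sketch, using standard reference constants for air (these values are textbook defaults, not taken from the cited paper):

```python
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity of air from Sutherland's formula.

    mu = mu_ref * (T / T_ref)**1.5 * (T_ref + S) / (T + S)

    T      : temperature in kelvin
    mu_ref : reference viscosity in Pa·s at T_ref (standard air value)
    S      : Sutherland constant for air, in kelvin
    """
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)
```

At the reference temperature the formula returns the reference viscosity exactly, and viscosity grows monotonically with temperature over the range of interest for air.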
APA, Harvard, Vancouver, ISO, and other styles
45

Zemel, Richard S., and Michael C. Mozer. "Localist Attractor Networks." Neural Computation 13, no. 5 (May 1, 2001): 1045–64. http://dx.doi.org/10.1162/08997660151134325.

Full text
Abstract:
Attractor networks, which map an input space to a discrete output space, are useful for pattern completion—cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have similar dynamics to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters. We present simulation experiments that explore the behavior of localist attractor networks, showing that they yield few spurious attractors, and they readily exhibit two desirable properties of psychological and neurobiological models: priming (faster convergence to an attractor if the attractor has been recently visited) and gang effects (in which the presence of an attractor enhances the attractor basins of neighboring attractors).
APA, Harvard, Vancouver, ISO, and other styles
46

Chang, Mingshan, Min Yang, Qingshan Jiang, and Ruifeng Xu. "Counterfactual-Enhanced Information Bottleneck for Aspect-Based Sentiment Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17736–44. http://dx.doi.org/10.1609/aaai.v38i16.29726.

Full text
Abstract:
Despite having achieved notable success for aspect-based sentiment analysis (ABSA), deep neural networks are susceptible to spurious correlations between input features and output labels, leading to poor robustness. In this paper, we propose a novel Counterfactual-Enhanced Information Bottleneck framework (called CEIB) to reduce spurious correlations for ABSA. CEIB extends the information bottleneck (IB) principle to a factual-counterfactual balancing setting by integrating augmented counterfactual data, with the goal of learning a robust ABSA model. Concretely, we first devise a multi-pattern prompting method, which utilizes the large language model (LLM) to generate high-quality counterfactual samples from the original samples. Then, we employ the information bottleneck principle and separate the mutual information into factual and counterfactual parts. In this way, we can learn effective and robust representations for the ABSA task by balancing the predictive information of these two parts. Extensive experiments on five benchmark ABSA datasets show that our CEIB approach achieves superior prediction performance and robustness over the state-of-the-art baselines. Code and data to reproduce the results in this paper are available at: https://github.com/shesshan/CEIB.
APA, Harvard, Vancouver, ISO, and other styles
47

Chang, Jing, Xiaohui He, Panle Li, Ting Tian, Xijie Cheng, Mengjia Qiao, Tao Zhou, Beibei Zhang, Ziqian Chang, and Tingwei Fan. "Multi-Scale Attention Network for Building Extraction from High-Resolution Remote Sensing Images." Sensors 24, no. 3 (February 4, 2024): 1010. http://dx.doi.org/10.3390/s24031010.

Full text
Abstract:
The precise building extraction from high-resolution remote sensing images holds significant application for urban planning, resource management, and environmental conservation. In recent years, deep neural networks (DNNs) have garnered substantial attention for their adeptness in learning and extracting features, becoming integral to building extraction methodologies and yielding noteworthy performance outcomes. Nonetheless, prevailing DNN-based models for building extraction often overlook spatial information during the feature extraction phase. Additionally, many existing models employ a simplistic and direct approach in the feature fusion stage, potentially leading to spurious target detection and the amplification of internal noise. To address these concerns, we present a multi-scale attention network (MSANet) tailored for building extraction from high-resolution remote sensing images. In our approach, we initially extracted multi-scale building feature information, leveraging the multi-scale channel attention mechanism and multi-scale spatial attention mechanism. Subsequently, we employed adaptive hierarchical weighting processes on the extracted building features. Concurrently, we introduced a gating mechanism to facilitate the effective fusion of multi-scale features. The efficacy of the proposed MSANet was evaluated using the WHU aerial image dataset and the WHU satellite image dataset. The experimental results demonstrate compelling performance metrics, with the F1 scores registering at 93.76% and 77.64% on the WHU aerial imagery dataset and WHU satellite dataset II, respectively. Furthermore, the intersection over union (IoU) values stood at 88.25% and 63.46%, surpassing benchmarks set by DeepLabV3 and GSMC.
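The evaluation metrics reported above (F1 and intersection over union) can be computed from a binary building mask and its ground truth as follows; this is the standard definition of the metrics, not code from the paper.

```python
import numpy as np

def iou_f1(pred, gt):
    """IoU and F1 for binary segmentation masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # building pixels correctly predicted
    fp = np.logical_and(pred, ~gt).sum()    # predicted building, actually background
    fn = np.logical_and(~pred, gt).sum()    # missed building pixels
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, f1

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(iou_f1(pred, gt))   # tp=2, fp=1, fn=1 -> IoU=0.5, F1~0.667
```
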
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Zhizhong, Xiaoran Shi, Xinyi Guo, and Feng Zhou. "TR-RAGCN-AFF-RESS: A Method for Radar Emitter Signal Sorting." Remote Sensing 16, no. 7 (March 22, 2024): 1121. http://dx.doi.org/10.3390/rs16071121.

Full text
Abstract:
Radar emitter signal sorting (RESS) is a crucial process in contemporary electronic battlefield situation awareness. Separating pulses belonging to the same radar emitter from interleaved radar pulse sequences with a lack of prior information, high density, strong overlap, and wide parameter distribution has attracted increasing attention. In order to improve the accuracy of RESS under scenarios with limited labeled samples, this paper proposes an RESS model called TR-RAGCN-AFF-RESS. This model transforms the RESS problem into a pulse-by-pulse classification task. Firstly, a novel weighted adjacency matrix construction method was proposed to characterize the structural relationships between pulse attribute parameters more accurately. Building upon this foundation, two networks were developed: a Transformer(TR)-based interleaved pulse sequence temporal feature extraction network and a residual attention graph convolutional network (RAGCN) for extracting the structural relationship features of attribute parameters. Finally, the attention feature fusion (AFF) algorithm was introduced to fully integrate the temporal features and attribute parameter structure relationship features, enhancing the richness of feature representation for the original pulses and achieving more accurate sorting results. Compared to existing deep learning-based RESS algorithms, this method does not require many labeled samples for training, making it better suited for scenarios with limited labeled sample availability. Experimental results and analysis confirm that even with only 10% of the training samples, this method achieves a sorting accuracy exceeding 93.91%, demonstrating high robustness against measurement errors, lost pulses, and spurious pulses in non-ideal conditions.
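The abstract does not give the paper's exact weighted-adjacency construction; a minimal sketch under the common assumption of a Gaussian similarity kernel over scale-normalized pulse attribute vectors (e.g. carrier frequency and pulse width, both hypothetical choices here) looks like this:

```python
import numpy as np

def weighted_adjacency(params, sigma=1.0):
    """Gaussian-kernel adjacency over pulse attribute vectors,
    one row per pulse; similar pulses get edge weights near 1."""
    # Normalize each attribute so no single parameter dominates the distance.
    params = (params - params.mean(axis=0)) / (params.std(axis=0) + 1e-9)
    d2 = ((params[:, None, :] - params[None, :, :]) ** 2).sum(axis=-1)
    A = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)   # no self-loops
    return A

# Two pulses with similar attributes and one distinct pulse.
pulses = np.array([[9.40, 1.00],
                   [9.41, 1.05],
                   [2.80, 20.0]])
A = weighted_adjacency(pulses)
```

The resulting matrix is symmetric, and the edge between the two similar pulses is much heavier than any edge to the distinct pulse, which is the structural signal a graph convolutional network exploits.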
APA, Harvard, Vancouver, ISO, and other styles
49

Honda, Ukyo, Tatsushi Oka, Peinan Zhang, and Masato Mita. "Not Eliminate but Aggregate: Post-Hoc Control over Mixture-of-Experts to Address Shortcut Shifts in Natural Language Understanding." Transactions of the Association for Computational Linguistics 12 (2024): 1268–89. http://dx.doi.org/10.1162/tacl_a_00701.

Full text
Abstract:
Recent models for natural language understanding are inclined to exploit simple patterns in datasets, commonly known as shortcuts. These shortcuts hinge on spurious correlations between labels and latent features existing in the training data. At inference time, shortcut-dependent models are likely to generate erroneous predictions under distribution shifts, particularly when some latent features are no longer correlated with the labels. To avoid this, previous studies have trained models to eliminate the reliance on shortcuts. In this study, we explore a different direction: pessimistically aggregating the predictions of a mixture-of-experts, assuming each expert captures relatively different latent features. The experimental results demonstrate that our post-hoc control over the experts significantly enhances the model’s robustness to the distribution shift in shortcuts. Additionally, we show that our approach has some practical advantages. We also analyze our model and provide results to support the assumption.
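A minimal sketch of pessimistic aggregation (the paper's actual post-hoc control may differ; min-pooling over expert probabilities is the simplest pessimistic rule): a label is scored by the lowest probability any expert assigns it, so it only wins when every expert, each relying on different latent features, supports it.

```python
import numpy as np

def pessimistic_aggregate(expert_probs):
    """expert_probs: (n_experts, n_labels) array of per-expert
    class probabilities. Returns the label whose worst-case
    (minimum over experts) probability is highest."""
    worst_case = expert_probs.min(axis=0)   # per-label minimum over experts
    return int(np.argmax(worst_case))

# Three experts over three labels; expert 2 relies on a different
# feature and disagrees on label 0, so the pessimistic vote is label 1.
probs = np.array([
    [0.70, 0.25, 0.05],
    [0.60, 0.35, 0.05],
    [0.10, 0.80, 0.10],
])
print(pessimistic_aggregate(probs))   # -> 1
```

A plain average of these experts would pick label 0; the pessimistic rule discards it because one expert, presumably the one not fooled by the shortcut, rejects it.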
APA, Harvard, Vancouver, ISO, and other styles
50

Portilla, Jesús, Francisco J. Ocampo-Torres, and Jaak Monbaliu. "Spectral Partitioning and Identification of Wind Sea and Swell." Journal of Atmospheric and Oceanic Technology 26, no. 1 (January 1, 2009): 107–22. http://dx.doi.org/10.1175/2008jtecho609.1.

Full text
Abstract:
In this paper, different partitioning techniques and methods to identify wind sea and swell are investigated, addressing both 1D and 2D schemes. Current partitioning techniques depend largely on arbitrary parameterizations to assess if wave systems are significant or spurious. This makes the implementation of automated procedures difficult, if not impossible, to calibrate. To avoid this limitation, for the 2D spectrum, the use of a digital filter is proposed to help the algorithm keep the important features of the spectrum and disregard the noise. For the 1D spectrum, a mechanism oriented to neglect the most likely spurious partitions was found sufficient for detecting relevant spectral features. Regarding the identification of wind sea and swell, it was found that customarily used methods sometimes largely differ from one another. Evidently, methods using 2D spectra and wind information are the most consistent. In reference to 1D identification methods, attention is given to two widely used methods, namely, the steepness method used operationally at the National Data Buoy Center (NDBC) and the Pierson–Moskowitz (PM) spectrum peak method. It was found that the steepness method systematically overestimates swell, while the PM method is more consistent, although it tends to underestimate swell. Consistent results were obtained looking at the ratio between the energy at the spectral peak of a partition and the energy at the peak of a PM spectrum with the same peak frequency. It is found that the use of partitioning gives more consistent identification results using both 1D and 2D spectra.
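The peak-energy ratio described above can be sketched as follows, using the standard Pierson–Moskowitz spectral form S(f) = αg²(2π)⁻⁴f⁻⁵exp(−1.25(f_p/f)⁴) with α = 0.0081; the threshold of 1.0 on the ratio is an illustrative assumption, not the paper's calibrated value.

```python
import numpy as np

ALPHA, G = 0.0081, 9.81   # Phillips constant, gravity (m/s^2)

def pm_peak_energy(fp):
    """Energy density at the peak (f = fp) of a fully developed
    Pierson-Moskowitz spectrum with peak frequency fp in Hz."""
    return ALPHA * G**2 * (2 * np.pi) ** -4 * fp**-5 * np.exp(-1.25)

def is_swell(partition_peak_energy, fp, threshold=1.0):
    """PM peak method: a partition whose peak energy exceeds that of
    a PM spectrum with the same peak frequency cannot be locally
    wind-generated, so it is flagged as swell."""
    return partition_peak_energy / pm_peak_energy(fp) > threshold

# A 0.1 Hz partition peaking above the PM reference is swell; one well
# below it is treated as wind sea.
print(is_swell(20.0, 0.1), is_swell(5.0, 0.1))
```
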
APA, Harvard, Vancouver, ISO, and other styles