Journal articles on the topic "Transformeur robuste"

To see the other types of publications on this topic, follow the link: Transformeur robuste.


Consult the top 50 journal articles for research on the topic "Transformeur robuste."

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Yang, Mingze, Hai Zhu, Runzhe Zhu, Fei Wu, Ling Yin, and Yuncheng Yang. "WiTransformer: A Novel Robust Gesture Recognition Sensing Model with WiFi". Sensors 23, no. 5 (February 27, 2023): 2612. http://dx.doi.org/10.3390/s23052612.

Abstract:
The past decade has demonstrated the potential of human activity recognition (HAR) with WiFi signals owing to non-invasiveness and ubiquity. Previous research has largely concentrated on enhancing precision through sophisticated models. However, the complexity of recognition tasks has been largely neglected. Thus, the performance of the HAR system is markedly diminished when tasked with increasing complexities, such as a larger classification number, the confusion of similar actions, and signal distortion. To address this issue, we eliminated conventional convolutional and recurrent backbones and proposed WiTransformer, a novel tactic based on pure Transformers. Nevertheless, Transformer-like models are typically suited to large-scale datasets as pretraining models, according to the experience of the Vision Transformer. Therefore, we adopted the Body-coordinate Velocity Profile, a cross-domain WiFi signal feature derived from the channel state information, to reduce the threshold of the Transformers. Based on this, we propose two modified transformer architectures, the united spatiotemporal Transformer (UST) and the separated spatiotemporal Transformer (SST), to realize WiFi-based human gesture recognition models with task robustness. SST extracts spatial and temporal data features with two separate encoders. By contrast, UST can extract the same three-dimensional features with only a one-dimensional encoder, owing to its well-designed structure. We evaluated SST and UST on four designed task datasets (TDSs) with varying task complexities. The experimental results demonstrate that UST achieved a recognition accuracy of 86.16% on the most complex task dataset, TDSs-22, outperforming the other popular backbones. Simultaneously, the accuracy decreases by at most 3.18% when the task complexity increases from TDSs-6 to TDSs-22, which is 0.14–0.2 times that of the others. However, as predicted and analyzed, SST fails because of an excessive lack of inductive bias and the limited scale of the training data.
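The separated (SST) design described in the abstract can be illustrated with a minimal factorized spatiotemporal self-attention sketch: attend within each time step over space, then within each spatial bin over time. The tensor shapes, the single head, and the omission of learned projections are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head self-attention over the first axis: (N, D) -> (N, D).
    # Learned Q/K/V projections are omitted for brevity.
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

def separated_spatiotemporal(x):
    # x: (T, S, D) feature map -- T time steps, S spatial bins, D channels.
    # Stage 1: spatial attention within each time step.
    spatial = np.stack([self_attention(x[t]) for t in range(x.shape[0])])
    # Stage 2: temporal attention within each spatial bin.
    return np.stack([self_attention(spatial[:, s])
                     for s in range(x.shape[1])], axis=1)

rng = np.random.default_rng(0)
out = separated_spatiotemporal(rng.standard_normal((8, 16, 32)))
print(out.shape)  # (8, 16, 32)
```

A united (UST-style) variant would instead flatten the spatial and temporal axes into one token sequence and run a single encoder over it.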
2

Santamaria-Bonfil, Guillermo, Gustavo Arroyo-Figueroa, Miguel A. Zuniga-Garcia, Carlos Gustavo Azcarraga Ramos, and Ali Bassam. "Power Transformer Fault Detection: A Comparison of Standard Machine Learning and autoML Approaches". Energies 17, no. 1 (December 22, 2023): 77. http://dx.doi.org/10.3390/en17010077.

Abstract:
A key component for the performance, availability, and reliability of power grids is the power transformer. Although power transformers are very reliable assets, the early detection of incipient degradation mechanisms is very important to prevent failures that may shorten their residual life. In this work, a comparative analysis of standard machine learning (ML) algorithms (such as single and ensemble classification algorithms) and automatic machine learning (autoML) classifiers is presented for the fault diagnosis of power transformers. The goal of this research is to determine whether fully automated ML approaches are better or worse than traditional ML frameworks that require a human in the loop (such as a data scientist) to identify transformer faults from dissolved gas analysis results. The methodology uses a transformer fault database (TDB) gathered from specialized databases and technical literature. Fault data were processed using the Duval pentagon diagnosis approach and user-expert knowledge. Parameters of both single and ensemble classifiers were optimized through standard machine learning procedures. The results showed that the best-suited algorithm to tackle the problem is a robust, automatic machine learning classifier model, followed by standard algorithms such as neural networks and stacking ensembles. These results highlight the ability of a robust, automatic machine learning model to handle unbalanced power transformer fault datasets with high accuracy, requiring minimal tuning effort by electrical experts. We also emphasize that identifying the most probable transformer fault condition will reduce the time required to find and solve a fault.
3

Wei, Jiangshu, Jinrong Chen, Yuchao Wang, Hao Luo, and Wujie Li. "Improved deep learning image classification algorithm based on Swin Transformer V2". PeerJ Computer Science 9 (October 30, 2023): e1665. http://dx.doi.org/10.7717/peerj-cs.1665.

Abstract:
While convolutional operations effectively extract local features, their limited receptive fields make it challenging to capture global dependencies. The Transformer, on the other hand, excels at global modeling and effectively captures global dependencies. However, the self-attention mechanism used in Transformers lacks a local mechanism for information exchange within specific regions. This article attempts to leverage the strengths of both Transformers and convolutional neural networks (CNNs) to enhance the Swin Transformer V2 model. By incorporating both convolutional operations and the self-attention mechanism, the enhanced model combines the local information-capturing capability of CNNs with the long-range dependency-capturing ability of Transformers. The improved model enhances the extraction of local information through the introduction of the Swin Transformer Stem, an inverted residual feed-forward network, and a Dual-Branch Downsampling structure. Subsequently, it models global dependencies using the improved self-attention mechanism. Additionally, downsampling is applied to the attention mechanism's Q and K to reduce computational and memory overhead. Under identical training conditions, the proposed method significantly improves classification accuracy on multiple image classification datasets, showcasing more robust generalization capabilities.
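The cost-saving step mentioned above, downsampling inside the attention computation, is in the same spirit as the spatial-reduction trick used by several efficient vision transformers. A minimal single-head sketch is given below; it pools the keys and values (a common variant, whereas the paper downsamples Q and K), and is an illustration rather than the authors' design:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pooled_attention(x, r=4):
    # x: (N, D) token sequence. Keys/values are average-pooled by factor r,
    # shrinking the N x N score matrix to N x (N/r).
    n, d = x.shape
    kv = x.reshape(n // r, r, d).mean(axis=1)   # (N/r, D) downsampled tokens
    scores = x @ kv.T / np.sqrt(d)              # (N, N/r) instead of (N, N)
    return softmax(scores) @ kv                 # (N, D)

rng = np.random.default_rng(1)
y = pooled_attention(rng.standard_normal((64, 32)), r=4)
print(y.shape)  # (64, 32)
```

The score matrix shrinks by the pooling factor, which is where the memory and compute savings come from.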
4

Ottele, Andy, and Rahmat Shoureshi. "Neural Network-Based Adaptive Monitoring System for Power Transformer". Journal of Dynamic Systems, Measurement, and Control 123, no. 3 (February 11, 1999): 512–17. http://dx.doi.org/10.1115/1.1387248.

Abstract:
Power transformers are major elements of the electric power transmission and distribution infrastructure. Transformer failure has severe economic impacts on both the utility industry and customers. This paper presents the analysis, design, development, and experimental evaluation of a robust failure diagnostic technique. Hopfield neural networks are used to identify variations in the physical parameters of the system in a systematic way and to adapt the transformer model based on the state of the system. In addition, the Hopfield network is used to design an observer which provides accurate estimates of the internal states of the transformer that cannot be accessed or measured during operation. Analytical and experimental results of this adaptive observer for power transformer diagnostics are presented.
5

Sai, K. N., A. Galodha, P. Jain, and D. Sharma. "DEEP AND MACHINE LEARNING FOR MONITORING GROUNDWATER STORAGE BASINS AND HYDROLOGICAL CHANGES USING THE GRAVITY RECOVERY AND CLIMATE EXPERIMENT (GRACE) SATELLITE MISSION AND SENTINEL-1 DATA FOR THE GANGA RIVER BASIN IN THE INDIAN REGION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 1265–70. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1265-2023.

Abstract:
Abstract. Accurate estimation of groundwater levels in river basins is paramount for hydro-geological research and sustainable water resource management. In this paper, we introduce a deep learning framework explicitly developed for precise groundwater level estimation in the Ganga River Basin. Leveraging the combined band information of Sentinel-1 synthetic aperture radar (SAR) and GRACE satellite data, our approach capitalizes on the transformative capabilities of Vision Transformers (ViT) and their variants, with a particular focus on the Swin-Transformer variant enriched with Normalization Attention Modules (NAMs). To address the unique challenges of the Ganga River Basin, we curated a comprehensive dataset, forming a robust foundation for training computer vision models tailored to this distinct geographical region. Through rigorous experiments, our state-of-the-art Vision Transformers demonstrated significant potential in groundwater level estimation, with the Swin-Transformer NAM-based model achieving an outstanding Mean Absolute Error (MAE) of 1.2. These remarkable results surpass conventional methodologies and underscore the substantial advancements achieved through advanced transformer-based architectures in this domain. Moreover, this research contributes a robust dataset for future endeavours, fostering further advancements in groundwater estimation and related fields. This study represents a substantial step towards advancing sustainable groundwater utilization practices in the Ganga River Basin and beyond.
6

Remigius Obinna Okeke, Akan Ime Ibokette, Onuh Matthew Ijiga, Lawrence Anebi Enyejo, Godslove Isenyo Ebiega, and Odeyemi Michael Olumubo. "THE RELIABILITY ASSESSMENT OF POWER TRANSFORMERS". Engineering Science & Technology Journal 5, no. 4 (April 3, 2024): 1149–72. http://dx.doi.org/10.51594/estj.v5i4.981.

Abstract:
This research investigated the assessment of power transformer reliability, with emphasis on the transmission network within Rivers State in Nigeria, focusing on the perspectives of electricity consumers, organizational personnel, and business operators using a descriptive survey. The study encompassed the entire population of 725,372 electricity consumers in the Port Harcourt Electricity Distribution Company (PEDC) in Rivers State, which included both households and business owners. To select a representative sample, the Convenience Sampling Technique was employed, resulting in the inclusion of 390 electricity consumers in Rivers State. Data collection utilized the Consumers Perception of Electricity Power Transformer Reliability (COPEPT) questionnaire and a Structured Interview, with the instrument's reliability established through the test-retest technique, yielding a reliability coefficient for each investigation. Research questions were addressed through weighted mean score (WMS) analysis. Key findings indicated dissatisfaction among electricity consumers in Rivers State, primarily attributed to factors such as transformer age, overall condition, uncontrolled overloading, adverse weather conditions, and inadequate transformer capacity to meet increasing demand. To address these issues, different approaches, including upgrading or replacing outdated transformers, implementing limits on transformer loading, introducing robust earthing systems, and increasing transformer capacity, were recommended to enhance consumer satisfaction and overall reliability. The study further revealed that persistent power transformer failures resulted in power outages, adversely impacting businesses, households, and communication, and contributing to reduced production and national income. Based on these findings, recommendations in the form of strategies were provided, emphasizing a comprehensive analysis of power surge control during adverse weather conditions, a plan for upgrading or replacing outdated transformers, an assessment of power transformer capacity needs, and collaboration with relevant stakeholders to develop strategies mitigating the negative economic impact and enhancing communication with consumers.
Keywords: Reliability Assessment, Power Transformers, Rivers State, Transmission Network.
7

Paul, Sayak, and Pin-Yu Chen. "Vision Transformers Are Robust Learners". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2071–81. http://dx.doi.org/10.1609/aaai.v36i2.20103.

Abstract:
Transformers, composed of multiple self-attention layers, hold strong promise as a generic learning primitive applicable to different data modalities, including the recent breakthroughs in computer vision achieving state-of-the-art (SOTA) standard accuracy. What remains largely unexplored is their robustness evaluation and attribution. In this work, we study the robustness of the Vision Transformer (ViT) (Dosovitskiy et al. 2021) against common corruptions and perturbations, distribution shifts, and natural adversarial examples. We use six diverse ImageNet datasets concerning robust classification to conduct a comprehensive performance comparison of ViT models and SOTA convolutional neural networks (CNNs), Big-Transfer (Kolesnikov et al. 2020). Through a series of six systematically designed experiments, we then present analyses that provide both quantitative and qualitative indications to explain why ViTs are indeed more robust learners. For example, with fewer parameters and similar dataset and pre-training combinations, ViT gives a top-1 accuracy of 28.10% on ImageNet-A, which is 4.3x higher than a comparable variant of BiT. Our analyses on image masking, Fourier spectrum sensitivity, and spread on the discrete cosine energy spectrum reveal intriguing properties of ViT attributing to improved robustness. Code for reproducing our experiments is available at https://git.io/J3VO0.
8

Jancarczyk, Daniel, Marcin Bernaś, and Tomasz Boczar. "Classification of Low Frequency Signals Emitted by Power Transformers Using Sensors and Machine Learning Methods". Sensors 19, no. 22 (November 10, 2019): 4909. http://dx.doi.org/10.3390/s19224909.

Abstract:
This paper proposes a method of automatically detecting and classifying low-frequency noise generated by power transformers using sensors and dedicated machine learning algorithms. The method applies the frequency spectra of sound pressure levels generated during operation by transformers in a real environment. The spectra's frequency interval and resolution are automatically optimized for the selected machine learning algorithm. Various machine learning algorithms, optimization techniques, and transformer types were researched: two indoor-type transformers from Schneider Electric and two overhead-type transformers manufactured by ABB. As a result, a method was proposed that can detect working transformers (against the background) and classify their type with an accuracy of over 97%, based on the generated low-frequency noise. The application of the proposed preprocessing stage increased the accuracy of this method by 10%. Additionally, machine learning algorithms were selected which offer robust solutions (with the highest accuracy) for noise classification.
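As a toy illustration of this kind of pipeline, the sketch below builds band-limited spectral features from synthetic transformer-hum signals and classifies them with a nearest-centroid rule. The sampling rate, frequency band, tones, and classifier are invented stand-ins, not the paper's optimized configuration:

```python
import numpy as np

def band_spectrum(signal, fs, f_lo, f_hi, n_bins):
    # Magnitude spectrum restricted to [f_lo, f_hi] Hz, averaged into n_bins bins.
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    sel = mag[(freqs >= f_lo) & (freqs <= f_hi)]
    return sel[: len(sel) // n_bins * n_bins].reshape(n_bins, -1).mean(axis=1)

# Synthetic stand-ins for two transformer types humming at different tones.
rng = np.random.default_rng(0)
t = np.arange(4096) / 8000.0

def sample(f0):
    return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

train = {0: band_spectrum(sample(100), 8000, 20, 400, 16),
         1: band_spectrum(sample(300), 8000, 20, 400, 16)}

def classify(sig):
    feat = band_spectrum(sig, 8000, 20, 400, 16)
    return min(train, key=lambda c: np.linalg.norm(feat - train[c]))

print(classify(sample(100)), classify(sample(300)))  # 0 1
```

In the paper's setting, the band edges and resolution would themselves be tuned for the chosen classifier; here they are fixed by hand.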
9

Cortés-Caicedo, Brandon, Oscar Danilo Montoya, and Andrés Arias-Londoño. "Application of the Hurricane Optimization Algorithm to Estimate Parameters in Single-Phase Transformers Considering Voltage and Current Measures". Computers 11, no. 4 (April 11, 2022): 55. http://dx.doi.org/10.3390/computers11040055.

Abstract:
In this research paper, a combinatorial optimization approach is proposed for parameter estimation in single-phase transformers considering voltage and current measurements at the transformer terminals. This problem is represented through a nonlinear programming (NLP) model, whose objective is to minimize the root mean square error between the measured voltage and current values and the values calculated from the equivalent model of the single-phase transformer. These values of voltage and current can be determined by applying Kirchhoff's laws to the T model of the transformer, where its parameters, the series resistance and reactance as well as the magnetization resistance and reactance, i.e., R1, R2′, X1, X2′, Rc, and Xm, are provided by the Hurricane Optimization Algorithm (HOA). The numerical results in the 4 kVA, 10 kVA, and 15 kVA single-phase test transformers demonstrate the applicability of the proposed method, since it allows the reduction of the average error between the measured and calculated electrical variables by 1000% compared to the methods reported in the specialized literature. This ensures that the parameters estimated by the proposed methodology, in each test transformer, are close to the real value with an accuracy error of less than 6%. Additionally, the computation times required by the algorithm to find the optimal solution are less than 1 second, which makes the proposed HOA robust, reliable, and efficient. All simulations were performed in the MATLAB programming environment.
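The core of such an estimation loop, proposing candidate parameter values and scoring them by the RMSE between measured and computed terminal quantities, can be sketched with a deliberately simplified series-impedance model and plain random search standing in for the Hurricane Optimization Algorithm (whose update rules are not reproduced here). All values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified model: V_load = V_source - I * (R + jX). True parameters to recover:
R_true, X_true = 0.5, 1.2
I = rng.uniform(1, 10, 20) * np.exp(1j * rng.uniform(-0.5, 0.5, 20))  # "measured" currents
V = 120.0 - I * (R_true + 1j * X_true)                                # "measured" voltages

def rmse(params):
    R, X = params
    V_calc = 120.0 - I * (R + 1j * X)
    return np.sqrt(np.mean(np.abs(V - V_calc) ** 2))

# Plain random search standing in for the metaheuristic.
best, best_err = None, np.inf
for _ in range(20000):
    cand = rng.uniform(0, 2, 2)
    err = rmse(cand)
    if err < best_err:
        best, best_err = cand, err

print(np.round(best, 2), round(best_err, 4))
```

A full reproduction would use the six-parameter T model and the HOA's own candidate-generation rule; only the propose-and-score structure is shown here.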
10

Xie, Fei, Dalong Zhang, and Chengming Liu. "Global–Local Self-Attention Based Transformer for Speaker Verification". Applied Sciences 12, no. 19 (October 10, 2022): 10154. http://dx.doi.org/10.3390/app121910154.

Abstract:
Transformer models are now widely used for speech processing tasks due to their powerful sequence modeling capabilities. Previous work determined an efficient way to model speaker embeddings using the Transformer model by combining transformers with convolutional networks. However, traditional global self-attention mechanisms lack the ability to capture local information. To alleviate these problems, we propose a novel global–local self-attention mechanism. Instead of using local or global multi-head attention alone, this method performs local and global attention in two parallel groups to enhance local modeling and reduce computational cost. To better handle local location information, we introduce locally enhanced location encoding in the speaker verification task. The experimental results on the VoxCeleb1 test set and the VoxCeleb2 dev set demonstrate the improved effect of our proposed global–local self-attention mechanism. Compared with the Transformer-based Robust Embedding Extractor Baseline System, the proposed speaker Transformer network exhibits better performance in the speaker verification task.
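The two parallel attention groups described above can be sketched as follows; the even channel split, the window size, and the absence of learned projections are simplifications for illustration, not the authors' configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def global_local_attention(x, window=8):
    # x: (N, D). Split channels into two groups processed in parallel:
    # one attends globally, the other only within fixed local windows.
    n, d = x.shape
    g, l = x[:, : d // 2], x[:, d // 2 :]
    out_g = attend(g, g, g)                           # global branch
    out_l = np.vstack([attend(w, w, w)                # local branch, per window
                       for w in np.split(l, n // window)])
    return np.hstack([out_g, out_l])

rng = np.random.default_rng(0)
y = global_local_attention(rng.standard_normal((32, 16)))
print(y.shape)  # (32, 16)
```

The local branch's windowed score matrices are much smaller than the global one, which is where the computational saving comes from.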
11

Li, Jiangtao, Wenzhong Chen, Jianhao Li, Weihua Jiang, Xu Zhong, and Yang Gou. "Robust Design for Linear Transformer Driver System". IEEE Transactions on Plasma Science 43, no. 10 (October 2015): 3406–11. http://dx.doi.org/10.1109/tps.2015.2429688.
12

Yang, Zhiqiang, Chong Xu, and Lei Li. "Landslide Detection Based on ResU-Net with Transformer and CBAM Embedded: Two Examples with Geologically Different Environments". Remote Sensing 14, no. 12 (June 16, 2022): 2885. http://dx.doi.org/10.3390/rs14122885.

Abstract:
An efficient method of landslide detection can provide basic scientific data for emergency command and landslide susceptibility mapping. Compared to traditional landslide detection approaches, convolutional neural networks (CNN) have been proven to have powerful capabilities in reducing the time consumed for selecting the appropriate features for landslides. Currently, the success of transformers in natural language processing (NLP) demonstrates the strength of self-attention in global semantic information acquisition. How to effectively integrate transformers into CNN, alleviate the limitation of the receptive field, and improve model generalization are hot topics in remote sensing image processing based on deep learning (DL). Inspired by the vision transformer (ViT), this paper first attempts to integrate a transformer into ResU-Net for landslide detection tasks with small datasets, aiming to enhance the network's ability to model the global context of feature maps and drive the model to recognize landslides with a small dataset. In addition, a spatial and channel attention module was introduced into the decoder to effectively suppress the noise in the feature maps from the convolution and transformer. By selecting two landslide datasets with different geological characteristics, the feasibility of the proposed model was validated. Finally, the standard ResU-Net was chosen as the benchmark to evaluate the rationality of the proposed model. The results indicated that the proposed model obtained the highest mIoU and F1-score in both datasets, demonstrating that the ResU-Net with a transformer embedded can be used as a robust landslide detection method and thus enable the generation of accurate regional landslide inventories and emergency rescue.
13

Xiong, Lingxin, Jicun Zhang, Xiaojia Zheng, and Yuxin Wang. "Context Transformer and Adaptive Method with Visual Transformer for Robust Facial Expression Recognition". Applied Sciences 14, no. 4 (February 14, 2024): 1535. http://dx.doi.org/10.3390/app14041535.

Abstract:
In real-world scenarios, the facial expression recognition task faces several challenges, including lighting variations, image noise, face occlusion, and other factors, which limit the performance of existing models in dealing with complex situations. To cope with these problems, we introduce the CoT module between the CNN and ViT frameworks, which improves the ability to perceive subtle differences by learning the correlations between local area features at a fine-grained level, helping to maintain the consistency between the local area features and the global expression, and making the model more adaptable to complex lighting conditions. Meanwhile, we adopt an adaptive learning method to effectively eliminate the interference of noise and occlusion by dynamically adjusting the parameters of the Transformer Encoder's self-attention weight matrix. Experiments on the Oulu-CASIA dataset show the accuracy of our CoT_AdaViT model to be 87.94% (NIR) and, under visible light, 89.47% (strong), 84.76% (weak), and 82.28% (dark illumination). In addition, the model achieved recognition results of 99.20%, 91.07%, and 90.57% on the CK+, RAF-DB, and FERPlus datasets, respectively; this excellent performance verifies that the model has strong recognition accuracy and robustness in complex scenes.
14

Sarraf, Saman, Arman Sarraf, Danielle D. DeSouza, John A. E. Anderson, and Milton Kabia. "OViTAD: Optimized Vision Transformer to Predict Various Stages of Alzheimer’s Disease Using Resting-State fMRI and Structural MRI Data". Brain Sciences 13, no. 2 (February 3, 2023): 260. http://dx.doi.org/10.3390/brainsci13020260.

Abstract:
Advances in applied machine learning techniques for neuroimaging have encouraged scientists to implement models to diagnose brain disorders such as Alzheimer’s disease at early stages. Predicting the exact stage of Alzheimer’s disease is challenging; however, complex deep learning techniques can precisely manage this. While successful, these complex architectures are difficult to interrogate and computationally expensive. Therefore, using novel, simpler architectures with more efficient pattern extraction capabilities, such as transformers, is of interest to neuroscientists. This study introduced an optimized vision transformer architecture to predict group membership by separating healthy adults, mild cognitive impairment, and Alzheimer’s brains within the same age group (>75 years) using resting-state functional (rs-fMRI) and structural magnetic resonance imaging (sMRI) data aggressively preprocessed by our pipeline. Our optimized architecture, known as OViTAD, is currently the sole vision transformer-based end-to-end pipeline and outperformed the existing transformer models and most state-of-the-art solutions. Our model achieved F1-scores of 97%±0.0 and 99.55%±0.39 on the testing sets for the rs-fMRI and sMRI modalities in the triple-class prediction experiments. Furthermore, our model reached these performances using 30% fewer parameters than a vanilla transformer. The model was also robust and repeatable, producing similar estimates across three runs with random data splits (we reported the averaged evaluation metrics). Finally, to challenge the model, we observed how it handled increasing noise levels by inserting varying numbers of healthy brains into the two dementia groups. Our findings suggest that optimized vision transformers are a promising and exciting new approach for neuroimaging applications, especially for Alzheimer’s disease prediction.
15

Kim, Mintai, and Sungju Lee. "Power Transformer Voltages Classification with Acoustic Signal in Various Noisy Environments". Sensors 22, no. 3 (February 7, 2022): 1248. http://dx.doi.org/10.3390/s22031248.

Abstract:
Checking the stable supply voltage of a power distribution transformer in operation is an important issue to prevent mechanical failure. The acoustic signal of the transformer contains sufficient information to analyze the transformer's condition. However, since transformers are often exposed to a variety of noise environments, acoustic signal-based methods should be designed to be robust against these various noises to provide high accuracy. In this study, we propose a method to classify the over-, normal-, and under-voltage levels supplied to the transformer using the acoustic signal of the transformer operating in various noise environments. The acoustic signal of the transformer was converted into a Mel Spectrogram (MS) and used to classify the voltage levels. The classification model was designed based on the U-Net encoder layers to extract and express the important features of the acoustic signal. The proposed approach achieves robustness against both known and unknown noise by using a noise rejection method with U-Net and an ensemble model with three datasets. In the experimental environment, testbeds were constructed using an oil-immersed power distribution transformer with a capacity of 150 kVA. Based on the experimental results, we confirm that the proposed method can improve the classification accuracy of the voltage levels from 72% (baseline) to 88% (noise rejection) and 94% (noise rejection + ensemble) in various noisy environments.
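The front end of such a pipeline, turning the microphone signal into a Mel Spectrogram before classification, can be sketched in plain NumPy. The sampling rate, FFT size, hop, and number of mel bands below are arbitrary illustration values, not the paper's settings:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(signal, fs=8000, n_fft=256, hop=128, n_mels=20):
    # Windowed STFT power, then triangular mel filters pooling the linear bins.
    frames = [signal[i : i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2        # (T, n_fft//2+1)
    edges = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * edges / fs).astype(int)
    fb = np.zeros((n_mels, power.shape[1]))
    for m in range(n_mels):
        lo, c, hi = bins[m], bins[m + 1], bins[m + 2]
        fb[m, lo:c] = np.linspace(0, 1, c - lo, endpoint=False)  # rising edge
        fb[m, c:hi] = np.linspace(1, 0, hi - c, endpoint=False)  # falling edge
    return np.log(power @ fb.T + 1e-10)                     # (T, n_mels)

sig = np.sin(2 * np.pi * 120 * np.arange(8000) / 8000)      # 120 Hz hum stand-in
ms = mel_spectrogram(sig)
print(ms.shape)  # (61, 20)
```

In the paper, an image like this would then be fed to the U-Net-encoder-based classifier; that part is not reproduced here.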
16

IKEO, Shigeru, Weidong MA, and Kazuhisa ITO. "Robust Position Control of Cylinder Using Hydraulic Transformer". TRANSACTIONS OF THE JAPAN FLUID POWER SYSTEM SOCIETY 36, no. 2 (2005): 45–50. http://dx.doi.org/10.5739/jfps.36.45.
17

Huang, Keju, Junan Yang, Hui Liu, and Pengjiang Hu. "Channel-Robust Specific Emitter Identification Based on Transformer". Highlights in Science, Engineering and Technology 7 (August 3, 2022): 71–76. http://dx.doi.org/10.54097/hset.v7i.1019.

Abstract:
Specific emitter identification (SEI) refers to the process of identifying emitter individuals based on the corresponding wireless signals. Although deep learning has been successfully applied to SEI, performance remains to be improved when the channel changes. In this paper, we suggest that a potential reason for this performance degradation is inadequate model capacity. Therefore, the Transformer, an advanced neural network architecture with large model capacity, is applied for channel-robust SEI. Experimental results show that the Transformer achieves better performance than conventional convolutional neural networks.
18

Rao, Bingbing, Ehsan Kazemi, Yifan Ding, Devu M. Shila, Frank M. Tucker, and Liqiang Wang. "CTIN: Robust Contextual Transformer Network for Inertial Navigation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5413–21. http://dx.doi.org/10.1609/aaai.v36i5.20479.

Abstract:
Recently, data-driven inertial navigation approaches have demonstrated their capability of using well-trained neural networks to obtain accurate position estimates from inertial measurement unit (IMU) measurements. In this paper, we propose a novel robust Contextual Transformer-based network for Inertial Navigation (CTIN) to accurately predict velocity and trajectory. To this end, we first design a ResNet-based encoder enhanced by local and global multi-head self-attention to capture spatial contextual information from IMU measurements. Then we fuse these spatial representations with temporal knowledge by leveraging multi-head attention in the Transformer decoder. Finally, multi-task learning with uncertainty reduction is leveraged to improve learning efficiency and the prediction accuracy of velocity and trajectory. Through extensive experiments over a wide range of inertial datasets (e.g., RIDI, OxIOD, RoNIN, IDOL, and our own), CTIN is very robust and outperforms state-of-the-art models.
19

Li, Wei, Zhixin Li, Xiwei Yang, and Huifang Ma. "Causal-ViT: Robust Vision Transformer by causal intervention". Engineering Applications of Artificial Intelligence 126 (November 2023): 107123. http://dx.doi.org/10.1016/j.engappai.2023.107123.
20

Wu, Zhiliang, Changchang Sun, Hanyu Xuan, Gaowen Liu, and Yan Yan. "WaveFormer: Wavelet Transformer for Noise-Robust Video Inpainting". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6180–88. http://dx.doi.org/10.1609/aaai.v38i6.28435.

Abstract:
Video inpainting aims to fill in the missing regions of video frames with plausible content. Benefiting from their outstanding long-range modeling capacity, transformer-based models have achieved unprecedented performance in inpainting quality. Essentially, coherent contents from all the frames along both spatial and temporal dimensions are considered by a patch-wise attention module, and the missing contents are then generated based on the attention-weighted summation. In this way, attention retrieval accuracy has become the main bottleneck to improving video inpainting performance, and the factors affecting attention calculation should be explored to maximize the advantages of the transformer. Towards this end, in this paper, we theoretically certify that noise is the culprit that entangles the process of attention calculation. Meanwhile, we propose a novel wavelet transformer network with noise robustness for video inpainting, named WaveFormer. Unlike existing transformer-based methods that utilize whole embeddings to calculate the attention, our WaveFormer first separates the noise existing in the embedding into high-frequency components by introducing the Discrete Wavelet Transform (DWT), and then adopts the clean low-frequency components to calculate the attention. In this way, the impact of noise on attention computation can be greatly mitigated, and the missing content for different frequencies can be generated by sharing the calculated attention. Extensive experiments validate the superior performance of our method over state-of-the-art baselines both qualitatively and quantitatively.
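The frequency-splitting idea, computing attention weights on the clean low-frequency sub-band and sharing them across bands, can be sketched for a 1D token sequence with a one-level Haar DWT. This is a schematic reading of the abstract, not the authors' network:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def haar_dwt(x):
    # One-level Haar DWT along the feature axis: x (N, D) -> low, high (N, D/2).
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    return lo, hi

def noise_robust_attention(x):
    # Attention weights come from the low-frequency sub-band only, then are
    # shared to aggregate both sub-bands (noise sits mostly in the high band).
    lo, hi = haar_dwt(x)
    w = softmax(lo @ lo.T / np.sqrt(lo.shape[-1]))   # weights from clean band
    return w @ lo, w @ hi                            # shared across frequencies

rng = np.random.default_rng(0)
out_lo, out_hi = noise_robust_attention(rng.standard_normal((10, 16)))
print(out_lo.shape, out_hi.shape)  # (10, 8) (10, 8)
```

In the paper this happens on video patch embeddings with a full transformer around it; the sketch isolates only the weight-sharing mechanism.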
21

BRESLIN, J. G., und W. G. HURLEY. „CRITICAL CHOICES IN A SYSTEM FOR OPTIMIZED DESIGN OF ARBITRARY WAVEFORM TRANSFORMERS“. Journal of Circuits, Systems and Computers 13, Nr. 04 (August 2004): 919–28. http://dx.doi.org/10.1142/s0218126604001751.

Annotation:
Traditionally, magnetic component design has been based on power frequency transformers with sinusoidal excitation. However, the movement towards higher density integrated circuits means that reductions in the size of magnetic components must be achieved by operating at higher frequencies, mainly through nonsinusoidal switching circuits. As this trend continues, computing tools are required to carry out designs of magnetic components that also allow evaluation of the high frequency losses in these components. A computer design package is described here that implements a robust transformer design methodology allowing customizable transformer geometries. The concept of a critical frequency is a vital part of this methodology. In addition, the winding choice at high frequencies is optimized to give the most accurate results for the best possible speed. This paper includes a description of the software design processes used and describes the main aspects that were incorporated into the system.
22

Shahzad, Ebrahim, Adnan Umar Khan, Muhammad Iqbal, Ahmad Saeed, Ghulam Hafeez, Athar Waseem, Fahad R. Albogamy und Zahid Ullah. „Sensor Fault-Tolerant Control of Microgrid Using Robust Sliding-Mode Observer“. Sensors 22, Nr. 7 (25.03.2022): 2524. http://dx.doi.org/10.3390/s22072524.

Annotation:
This work investigates sensor fault diagnostics and fault-tolerant control for a voltage-source-converter-based microgrid model using a sliding-mode observer. It aims to diagnose multiple faults (i.e., magnitude, phase, and harmonics) occurring simultaneously or individually in current/potential transformers. A modified algorithm based on convex optimization is used to determine the gains of the sliding-mode observer, which utilizes the feasibility optimization or trace minimization of a Riccati equation-based modification of H-Infinity (H∞)-constrained linear matrix inequalities. The fault and disturbance estimation method is modified and improved with some corrections to previous works. The stability and finite-time reachability of the observers are also presented for the considered faulty and perturbed microgrid system. A proportional-integral (PI) based control is utilized for the conventional regulation required for frequency and voltage sags occurring in a microgrid. The same control block, however, also features fault-tolerant control (FTC) functionality, attained by incorporating a sliding-mode observer to reconstruct the faults of sensors (transformers), which are fed to the control block after correction. Simulation-based analysis is performed by presenting the results of state/output estimation, state/output estimation errors, fault reconstruction, estimated disturbances, and fault-tolerant control performance. Simulations cover sinusoidal, constant, linearly increasing, intermittent, sawtooth, and random sensor faults of the kinds that often occur. However, this paper includes results only for the sinusoidal voltage/current sensor (transformer) fault and the linearly increasing type of fault, whereas the remaining results are part of the supplementary data file. A comparison is performed between the observer gains estimated by previously used techniques and those obtained with the proposed modified approach. It also compares the voltage-frequency control implemented with and without the incorporation of the observer-based fault estimation and correction in the control block. The faults here are considered for voltage/current sensor transformers, but the approach works for a wide range of sensors.
23

Huerta-Rosales, Jose R., David Granados-Lieberman, Juan P. Amezquita-Sanchez, David Camarena-Martinez und Martin Valtierra-Rodriguez. „Vibration Signal Processing-Based Detection of Short-Circuited Turns in Transformers: A Nonlinear Mode Decomposition Approach“. Mathematics 8, Nr. 4 (13.04.2020): 575. http://dx.doi.org/10.3390/math8040575.

Annotation:
Transformers are vital and indispensable elements in electrical systems, and therefore, their correct operation is fundamental; despite being robust electrical machines, they are susceptible to different types of faults during their service life. Among these, the fault of short-circuited turns (SCTs) has attracted the interest of many researchers around the world, since the windings are one of the most vulnerable parts of a transformer. In this regard, several works in the literature have analyzed the vibration signals that a transformer generates as a source of information to carry out fault diagnosis; however, this analysis is not an easy task, since the information associated with the fault is embedded in high levels of noise. The problem becomes more difficult when low levels of fault severity are considered. In this work, as the main contribution, the nonlinear mode decomposition (NMD) method is investigated as a potential signal processing technique to extract features from vibration signals, and thus detect SCTs in transformers, even in early stages, i.e., at low levels of fault severity. Also, the instantaneous root mean square (RMS) value computed using the Hilbert transform is proposed as a fault indicator, which proves sensitive to fault severity. Finally, a fuzzy logic system is developed for automatic fault diagnosis. To test the proposal, a modified transformer representing diverse levels of SCTs is used. These levels consist of 0 (healthy condition), 5, 10, 15, 20, and 25 SCTs. Results demonstrate the capability of the proposal to extract features from vibration signals and perform automatic fault diagnosis.
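The instantaneous-RMS fault indicator mentioned in this abstract can be sketched in a few lines: build the analytic signal with an FFT-based discrete Hilbert-transform construction, then scale its envelope by 1/sqrt(2) so that, for a sinusoid, it tracks the RMS value sample by sample. This is a toy illustration of the general technique under stated assumptions, not the authors' code.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies and
    double the positive ones (a discrete Hilbert-transform construction)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def instantaneous_rms(x):
    """Envelope of the analytic signal scaled by 1/sqrt(2): for a sinusoid
    of amplitude A this stays near A / sqrt(2), its RMS value."""
    return np.abs(analytic_signal(x)) / np.sqrt(2.0)
```

A growing fault that raises a vibration harmonic's amplitude shows up directly as a rising instantaneous-RMS curve, which is what makes it usable as a severity indicator.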
24

Richardson, Kyle, und Ashish Sabharwal. „Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability“. Proceedings of the AAAI Conference on Artificial Intelligence 36, Nr. 10 (28.06.2022): 11209–19. http://dx.doi.org/10.1609/aaai.v36i10.21371.

Annotation:
Investigating the reasoning abilities of transformer models, and discovering new challenging tasks for them, has been a topic of much interest. Recent studies have found these models to be surprisingly strong at performing deductive reasoning over formal logical theories expressed in natural language. A shortcoming of these studies, however, is that they do not take into account that logical theories, when sampled uniformly at random, do not necessarily lead to hard instances. We propose a new methodology for creating challenging algorithmic reasoning datasets that focus on natural language satisfiability (NLSat) problems. The key idea is to draw insights from empirical sampling of hard propositional SAT problems and from complexity-theoretic studies of language. This methodology allows us to distinguish easy from hard instances, and to systematically increase the complexity of existing reasoning benchmarks such as RuleTaker. We find that current transformers, given sufficient training data, are surprisingly robust at solving the resulting NLSat problems of substantially increased difficulty. They also exhibit some degree of scale-invariance—the ability to generalize to problems of larger size and scope. Our results, however, reveal important limitations too: careful sampling of training data is crucial for building models that generalize to larger problems, and transformer models’ limited scale-invariance suggests they are far from learning robust deductive reasoning algorithms.
25

Zhang, Jie, Fan Li, Xin Zhang, Yue Cheng und Xinhong Hei. „Multi-Task Mean Teacher Medical Image Segmentation Based on Swin Transformer“. Applied Sciences 14, Nr. 7 (02.04.2024): 2986. http://dx.doi.org/10.3390/app14072986.

Annotation:
Medical image segmentation is a crucial task for disease diagnosis, yet existing semi-supervised segmentation approaches process labeled and unlabeled data separately, ignoring the relationships between them and thereby limiting further performance improvements. In this work, we introduce a transformer-based multi-task framework that concurrently leverages both labeled and unlabeled volumes by encoding shared representation patterns. We first integrate transformers into YOLOv5 to enhance segmentation capabilities and adopt a multi-task approach spanning shadow region detection and boundary localization. Subsequently, we leverage the mean teacher model to simultaneously learn from labeled and unlabeled inputs alongside orthogonal view representations, enabling our approach to harness all available annotations. Our network improves the learning ability and attains superior performance. Extensive experiments demonstrate that the transformer-powered architecture encodes robust inter-sample relationships, unlocking substantial performance gains by capturing shared information between labeled and unlabeled data. By treating both data types concurrently and encoding their shared patterns, our framework addresses the limitations of existing semi-supervised approaches, leading to improved segmentation accuracy and robustness.
26

Li, Naihan, Yanqing Liu, Yu Wu, Shujie Liu, Sheng Zhao und Ming Liu. „RobuTrans: A Robust Transformer-Based Text-to-Speech Model“. Proceedings of the AAAI Conference on Artificial Intelligence 34, Nr. 05 (03.04.2020): 8228–35. http://dx.doi.org/10.1609/aaai.v34i05.6337.

Annotation:
Recently, neural network based speech synthesis has achieved outstanding results, by which the synthesized audios are of excellent quality and naturalness. However, current neural TTS models suffer from a robustness issue, which results in abnormal audios (bad cases), especially for unusual text (unseen context). To build a neural model that can synthesize both natural and stable audios, in this paper we make a deep analysis of why previous neural TTS models are not robust, based on which we propose RobuTrans (Robust Transformer), a robust neural TTS model based on Transformer. Compared to TransformerTTS, our model first converts input texts to linguistic features, including phonemic and prosodic features, and then feeds them to the encoder. In the decoder, the encoder-decoder attention is replaced with a duration-based hard attention mechanism, and the causal self-attention is replaced with a "pseudo non-causal attention" mechanism to model the holistic information of the input. Besides, the position embedding is replaced with a 1-D CNN, since the former constrains the maximum length of the synthesized audio. With these modifications, our model not only fixes the robustness problem but also achieves a MOS (4.36) on par with TransformerTTS (4.37) and Tacotron2 (4.37) on our general set.
27

Manzari, Omid Nejati, Hossein Kashiani, Hojat Asgarian Dehkordi und Shahriar B. Shokouhi. „Robust transformer with locality inductive bias and feature normalization“. Engineering Science and Technology, an International Journal 38 (Februar 2023): 101320. http://dx.doi.org/10.1016/j.jestch.2022.101320.

28

Enders, Johannes, Warren B. Powell und David M. Egan. „Robust policies for the transformer acquisition and allocation problem“. Energy Systems 1, Nr. 3 (01.04.2010): 245–72. http://dx.doi.org/10.1007/s12667-010-0011-8.

29

Xie, Yuanlun, Wenhong Tian und Zitong Yu. „Robust facial expression recognition with Transformer Block Enhancement Module“. Engineering Applications of Artificial Intelligence 126 (November 2023): 106795. http://dx.doi.org/10.1016/j.engappai.2023.106795.

30

Chen, Zhaohui, Elyas Asadi Shamsabadi, Sheng Jiang, Luming Shen und Daniel Dias-da-Costa. „An average pooling designed Transformer for robust crack segmentation“. Automation in Construction 162 (Juni 2024): 105367. http://dx.doi.org/10.1016/j.autcon.2024.105367.

31

Wu, Shibin, Ruxin Zhang, Jiayi Yan, Chengquan Li, Qicai Liu, Liyang Wang und Haoqian Wang. „High-Speed and Accurate Diagnosis of Gastrointestinal Disease: Learning on Endoscopy Images Using Lightweight Transformer with Local Feature Attention“. Bioengineering 10, Nr. 12 (13.12.2023): 1416. http://dx.doi.org/10.3390/bioengineering10121416.

Annotation:
In response to the pressing need for robust disease diagnosis from gastrointestinal tract (GIT) endoscopic images, we proposed FLATer, a fast, lightweight, and highly accurate transformer-based model. FLATer consists of a residual block, a vision transformer module, and a spatial attention block, which concurrently focuses on local features and global attention. It can leverage the capabilities of both convolutional neural networks (CNNs) and vision transformers (ViT). We decomposed the classification of endoscopic images into two subtasks: a binary classification to discern between normal and pathological images and a further multi-class classification to categorize images into specific diseases, namely ulcerative colitis, polyps, and esophagitis. FLATer has exhibited exceptional prowess in these tasks, achieving 96.4% accuracy in binary classification and 99.7% accuracy in ternary classification, surpassing most existing models. Notably, FLATer could maintain impressive performance when trained from scratch, underscoring its robustness. In addition to the high precision, FLATer boasted remarkable efficiency, reaching a notable throughput of 16.4k images per second, which positions FLATer as a compelling candidate for rapid disease identification in clinical practice.
32

Tafur Acenjo, Brenda Xiomara, Martin Alexis Tello Pariona und Edwin Jhonatan Escobedo Cárdenas. „Comparativa entre RESNET-50, VGG-16, Vision Transformer y Swin Transformer para el reconocimiento facial con oclusión de una mascarilla“. Interfases, Nr. 017 (23.06.2023): e6361. http://dx.doi.org/10.26439/interfases2023.n017.6361.

Annotation:
In the search for contact-free identity-verification solutions for enclosed spaces in the context of the SARS-CoV-2 pandemic, facial recognition has gained relevance. One of the challenges of facial recognition is occlusion by a face mask, which covers more than 50% of the face. This study evaluated four models pretrained via transfer learning: VGG-16, RESNET-50, vision transformer (ViT), and swin transformer, whose top layers were trained on our own dataset. Training without masks yielded accuracies of 24% (RESNET-50), 25% (VGG-16), 96% (ViT), and 91% (Swin), while with masks the accuracies were 32% (RESNET-50), 53% (VGG-16), 87% (ViT), and 61% (Swin). These testing accuracies indicate that more modern architectures such as Transformers give better results for masked-face recognition than CNNs (VGG-16 and RESNET-50). The contribution of this research lies in the experimentation with two types of architectures, CNNs and transformers, as well as the creation of a public dataset shared with the scientific community. This work strengthens the state of the art of computer vision in face recognition under mask occlusion by illustrating experimentally how accuracy varies across scenarios and architectures.
33

Olowolafe, Felix, und Kehinde Olukunmi Alawode. „Detection of Incipient Faults in Power Transformers using Fuzzy Logic and Decision Tree Models Based on Dissolved Gas Analysis“. ABUAD Journal of Engineering Research and Development (AJERD) 7, Nr. 1 (31.03.2024): 56–73. http://dx.doi.org/10.53982/ajerd.2024.0701.06-j.

Annotation:
This paper proposes an integrated approach utilizing Fuzzy Logic and Decision Tree algorithms to diagnose early-stage faults in power transformers based on Dissolved Gas Analysis (DGA) test results of transformer insulation oil. Overcoming limitations in conventional methods such as the Duval Triangle, Key Gas Analysis, Rogers Ratio, IEC Ratio, and Doernenburg Ratio, our Fuzzy Logic and Decision Tree models address issues such as inaccurate diagnosis, inconsistent diagnosis, lack of decisions or out-of-code results, and time-intensive manual calculations for large DGA datasets. The Decision Tree algorithm, a machine learning technique, is applied to categorize faults into thermal and electrical types. Trained with over 300 DGA samples from transformers with known faults, the models exhibit robust performance during testing with different datasets. Notably, the Duval Triangle decision tree model attains the highest accuracy among the ten developed models, achieving a 98% accuracy rate when tested with 50 samples with known faults. Moreover, the Decision Tree models for KGA, Doernenburg, Rogers, and IEC also demonstrate substantial prediction accuracy at 92%, 86%, 92%, and 90%, respectively, underscoring the efficacy of artificial intelligence methods over traditional approaches.
34

Zhou, Qian, Hua Zou und Huanhuan Wu. „LGViT: A Local and Global Vision Transformer with Dynamic Contextual Position Bias Using Overlapping Windows“. Applied Sciences 13, Nr. 3 (03.02.2023): 1993. http://dx.doi.org/10.3390/app13031993.

Annotation:
Vision Transformers (ViTs) have shown their superiority in various visual tasks owing to the capability of self-attention mechanisms to model long-range dependencies. Some recent works try to reduce the high cost of vision transformers by limiting the self-attention module to a local window. As a price, the adopted window-based self-attention also reduces the ability to capture long-range dependencies compared with the original self-attention in transformers. In this paper, we propose a Local and Global Vision Transformer (LGViT) that incorporates overlapping windows and multi-scale dilated pooling to make the self-attention robust both locally and globally. Our proposed self-attention mechanism is composed of a local self-attention module (LSA) and a global self-attention module (GSA), which are performed on overlapping windows partitioned from the input image. In LSA, the key and value sets are expanded by the surroundings of windows to increase the receptive field. For GSA, the key and value sets are expanded by multi-scale dilated pooling to promote global interactions. Moreover, a dynamic contextual positional encoding module is exploited to add positional information more efficiently and flexibly. We conduct extensive experiments on various visual tasks, and the experimental results strongly demonstrate that our proposed LGViT outperforms state-of-the-art approaches.
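As a rough illustration of the overlapping-window partitioning this abstract describes (not the LGViT code; the helper name and parameters are hypothetical), a feature map can be cut into windows whose stride is smaller than their size, so neighbouring windows share pixels and information can flow between them:

```python
import numpy as np

def overlapping_windows(fmap, win=4, stride=2):
    """Partition an (H, W, C) feature map into win x win windows taken every
    `stride` pixels, so neighbouring windows overlap by win - stride pixels.
    Returns an array of shape (nH, nW, C, win, win)."""
    view = np.lib.stride_tricks.sliding_window_view(fmap, (win, win), axis=(0, 1))
    return view[::stride, ::stride]
```

With `stride < win` each window sees part of its neighbours' content, which is the cheap mechanism for locally widening the receptive field before any attention is computed.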
35

Kwak, Young-Sang, Jiun Lee und Jaekwang Kim. „A Transformer-based Arrhythmia Detection Model Robust to ECG Noise“. Journal of Korean Institute of Intelligent Systems 32, Nr. 3 (30.06.2022): 185–92. http://dx.doi.org/10.5391/jkiis.2022.32.3.185.

36

Kasem, Hossam M., Kwok-Wai Hung und Jianmin Jiang. „Spatial Transformer Generative Adversarial Network for Robust Image Super-Resolution“. IEEE Access 7 (2019): 182993–3009. http://dx.doi.org/10.1109/access.2019.2959940.

37

Wang, Fei, Dan Guo, Kun Li und Meng Wang. „EulerMormer: Robust Eulerian Motion Magnification via Dynamic Filtering within Transformer“. Proceedings of the AAAI Conference on Artificial Intelligence 38, Nr. 6 (24.03.2024): 5345–53. http://dx.doi.org/10.1609/aaai.v38i6.28342.

Annotation:
Video Motion Magnification (VMM) aims to break the resolution limit of human visual perception capability and reveal the imperceptible minor motion that contains valuable information in the macroscopic domain. However, challenges arise in this task due to photon noise inevitably introduced by photographic devices and spatial inconsistency in amplification, leading to flickering artifacts in static fields and motion blur and distortion in dynamic fields in the video. Existing methods focus on explicit motion modeling without emphasizing prioritized denoising during the motion magnification process. This paper proposes a novel dynamic filtering strategy to achieve static-dynamic field adaptive denoising. Specifically, based on Eulerian theory, we separate texture and shape to extract motion representation through inter-frame shape differences, expecting to leverage these subdivided features to solve this task finely. Then, we introduce a novel dynamic filter that eliminates noise cues and preserves critical features in the motion magnification and amplification generation phases. Overall, our unified framework, EulerMormer, is a pioneering effort to equip learning-based VMM with a Transformer. The core of the dynamic filter lies in a global dynamic sparse cross-covariance attention mechanism that explicitly removes noise while preserving vital information, coupled with a multi-scale dual-path gating mechanism that selectively regulates the dependence on different frequency features to reduce spatial attenuation and complement motion boundaries. We demonstrate through extensive experiments that EulerMormer achieves more robust video motion magnification from the Eulerian perspective, significantly outperforming state-of-the-art methods. The source code is available at https://github.com/VUT-HFUT/EulerMormer.
38

Wang, Yingheng, Shufeng Kong, John M. Gregoire und Carla P. Gomes. „Conformal Crystal Graph Transformer with Robust Encoding of Periodic Invariance“. Proceedings of the AAAI Conference on Artificial Intelligence 38, Nr. 1 (24.03.2024): 283–91. http://dx.doi.org/10.1609/aaai.v38i1.27781.

Annotation:
Machine learning techniques, especially in the realm of materials design, hold immense promise in predicting the properties of crystal materials and aiding in the discovery of novel crystals with desirable traits. However, crystals possess unique geometric constraints—namely, E(3) invariance for primitive cell and periodic invariance—which need to be accurately reflected in crystal representations. Though past research has explored various construction techniques to preserve periodic invariance in crystal representations, their robustness remains inadequate. Furthermore, effectively capturing angular information within 3D crystal structures continues to pose a significant challenge for graph-based approaches. This study introduces novel solutions to these challenges. We first present a graph construction method that robustly encodes periodic invariance and a strategy to capture angular information in neural networks without compromising efficiency. We further introduce CrystalFormer, a pioneering graph transformer architecture that emphasizes angle preservation and enhances long-range information. Through comprehensive evaluation, we verify our model's superior performance in 5 crystal prediction tasks, reaffirming the efficiency of our proposed methods.
39

Chen, Hui, Zhenhai Wang, Hongyu Tian, Lutao Yuan, Xing Wang und Peng Leng. „A Robust Visual Tracking Method Based on Reconstruction Patch Transformer Tracking“. Sensors 22, Nr. 17 (31.08.2022): 6558. http://dx.doi.org/10.3390/s22176558.

Annotation:
Recently, the transformer model has progressed from the field of visual classification to target tracking. Its primary use is to replace the cross-correlation operation in the Siamese tracker, while the backbone of the network is still a convolutional neural network (CNN). However, existing transformer-based trackers simply deform the features extracted by the CNN into patches and feed them into the transformer encoder. Each patch contains a single element of the spatial dimension of the extracted features and is input into the transformer structure to use cross-attention instead of cross-correlation operations. This paper proposes a reconstruction patch strategy, which combines multiple elements of the spatial dimension of the extracted features into a new patch. The reconstruction operation has the following advantages: (1) the correlations between adjacent elements are combined well, and the features extracted by the CNN remain usable for classification and regression; (2) the performer operation reduces the amount of network computation and the dimension of the patch sent to the transformer, thereby sharply reducing the network parameters and improving the model-tracking speed.
40

Moghadam, M. Bameni, und M. Amani. „Design Optimization of Current Transformers Using Robust Design Methodology“. Quality & Quantity 39, Nr. 5 (Oktober 2005): 671–85. http://dx.doi.org/10.1007/s11135-005-1466-x.

41

Matsuoka, Ryo, Shunsuke Ono und Masahiro Okuda. „Transformed-Domain Robust Multiple-Exposure Blending With Huber Loss“. IEEE Access 7 (2019): 162282–96. http://dx.doi.org/10.1109/access.2019.2951817.

42

Scealy, J. L., Patrice de Caritat, Eric C. Grunsky, Michail T. Tsagris und A. H. Welsh. „Robust Principal Component Analysis for Power Transformed Compositional Data“. Journal of the American Statistical Association 110, Nr. 509 (02.01.2015): 136–48. http://dx.doi.org/10.1080/01621459.2014.990563.

43

Al-Thani, Mansoor G., Ziyu Sheng, Yuting Cao und Yin Yang. „Traffic Transformer: Transformer-based framework for temporal traffic accident prediction“. AIMS Mathematics 9, Nr. 5 (2024): 12610–29. http://dx.doi.org/10.3934/math.2024617.

Annotation:
Reliable prediction of traffic accidents is crucial for the identification of potential hazards in advance, formulation of effective preventative measures, and reduction of accident incidence. Existing neural network-based models generally suffer from a limited field of perception and poor long-term dependency capturing abilities, which severely restrict their performance. To address the inherent shortcomings of current traffic prediction models, we propose the Traffic Transformer for multidimensional, multi-step traffic accident prediction. Initially, raw datasets chronicling sporadic traffic accidents are transformed into multivariate, regularly sampled sequences that are amenable to sequential modeling through a temporal discretization process. Subsequently, Traffic Transformer captures and learns the hidden relationships between any elements of the input sequence, constructing accurate predictions for multiple forthcoming intervals of traffic accidents. Our proposed Traffic Transformer employs the sophisticated multi-head attention mechanism in lieu of the widely used recurrent architecture. This significant shift enhances the model's ability to capture long-range dependencies within time series data. Moreover, it facilitates a more flexible and comprehensive learning of diverse hidden patterns within the sequences. It also offers the versatility of convenient extension and transference to other diverse time series forecasting tasks, demonstrating robust potential for further development in this field. Extensive comparative experiments conducted on a real-world dataset from Qatar demonstrate that our proposed Traffic Transformer model significantly outperforms existing mainstream time series forecasting models across all evaluation metrics and forecast horizons. Notably, its Mean Absolute Percentage Error reaches a minimal value of only 4.43%, which is substantially lower than the error rates observed in other models. This remarkable performance underscores the Traffic Transformer's state-of-the-art predictive accuracy.
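The temporal discretization step this abstract mentions can be sketched as a simple binning of sporadic event timestamps into a regularly sampled count sequence, the kind of sequence a transformer can then model step by step. This is an illustrative toy under stated assumptions, not the paper's preprocessing pipeline.

```python
def discretize_events(timestamps, start, end, bin_width):
    """Bucket sporadic event times into regularly spaced count bins.
    Events outside [start, end) are ignored."""
    n_bins = int((end - start) / bin_width)
    counts = [0] * n_bins
    for t in timestamps:
        if start <= t < end:
            counts[int((t - start) // bin_width)] += 1
    return counts
```

For example, accident times `[0.5, 1.2, 1.9, 5.0]` over the window `[0, 4)` with unit bins yield the sequence `[1, 2, 0, 0]`, with the out-of-window event dropped.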
44

Yu, Zhongzhi, Yonggan Fu, Sicheng Li, Chaojian Li und Yingyan Lin. „MIA-Former: Efficient and Robust Vision Transformers via Multi-Grained Input-Adaptation“. Proceedings of the AAAI Conference on Artificial Intelligence 36, Nr. 8 (28.06.2022): 8962–70. http://dx.doi.org/10.1609/aaai.v36i8.20879.

Annotation:
Vision transformers have recently demonstrated great success in various computer vision tasks, motivating a tremendously increased interest in their deployment into many real-world IoT applications. However, powerful ViTs are often too computationally expensive to be fitted onto real-world resource-constrained platforms, due to (1) their quadratically increased complexity with the number of input tokens and (2) their overparameterized self-attention heads and model depth. In parallel, different images are of varied complexity and their different regions can contain various levels of visual information, e.g., a sky background is not as informative as a foreground object in object classification tasks, indicating that treating those regions equally in terms of model complexity is unnecessary while such opportunities for trimming down ViTs' complexity have not been fully exploited. To this end, we propose a Multi-grained Input-Adaptive Vision Transformer framework dubbed MIA-Former that can input-adaptively adjust the structure of ViTs at three coarse-to-fine-grained granularities (i.e., model depth and the number of model heads/tokens). In particular, our MIA-Former adopts a low-cost network trained with a hybrid supervised and reinforcement learning method to skip the unnecessary layers, heads, and tokens in an input adaptive manner, reducing the overall computational cost. Furthermore, an interesting side effect of our MIA-Former is that its resulting ViTs are naturally equipped with improved robustness against adversarial attacks over their static counterparts, because MIA-Former's multi-grained dynamic control improves the model diversity similar to the effect of ensemble and thus increases the difficulty of adversarial attacks against all its sub-models. 
Extensive experiments and ablation studies validate that the proposed MIA-Former framework can (1) effectively allocate adaptive computation budgets to the difficulty of input images, achieving state-of-the-art (SOTA) accuracy-efficiency trade-offs, e.g., up to 16.5% computation savings with the same or even a higher accuracy compared with the SOTA dynamic transformer models, and (2) boost ViTs' robustness accuracy under various adversarial attacks over their vanilla counterparts by 2.4% and 3.0%, respectively. Our code is available at https://github.com/RICE-EIC/MIA-Former.
45

Camarena-Martinez, David, Jose R. Huerta-Rosales, Juan P. Amezquita-Sanchez, David Granados-Lieberman, Juan C. Olivares-Galvan and Martin Valtierra-Rodriguez. "Variational Mode Decomposition-Based Processing for Detection of Short-Circuited Turns in Transformers Using Vibration Signals and Machine Learning". Electronics 13, no. 7 (March 26, 2024): 1215. http://dx.doi.org/10.3390/electronics13071215.

Annotation:
Transformers are key elements in electrical systems. Although they are robust machines, different faults can appear due to their inherent operating conditions, e.g., the presence of different electrical and mechanical stresses. Among the elements that compose a transformer, the winding is one of the most vulnerable parts, and turn-to-turn short circuits are among the most studied faults, since even low-level damage (i.e., a small number of short-circuited turns, SCTs) can lead to complete failure of the transformer; therefore, early fault detection has become a fundamental task. In this regard, this paper presents a machine learning-based method to diagnose SCTs in transformer windings by using their vibrational response. The vibration signals are first decomposed by means of the variational mode decomposition (VMD) method, and a comparison with the empirical mode decomposition (EMD) method and the ensemble empirical mode decomposition (EEMD) method is also carried out. Then, entropy, energy, and kurtosis indices are obtained from each decomposition as fault indicators, where both the combination of features and dimensionality reduction via principal component analysis (PCA) are analyzed to improve overall effectiveness and reduce the computational burden. Finally, a pattern recognition algorithm based on artificial neural networks (ANNs) is used for automatic fault detection. The obtained results show 100% effectiveness in detecting seven fault conditions, i.e., 0 (healthy), 5, 10, 15, 20, 25, and 30 SCTs.
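The three fault indicators the abstract names (energy, kurtosis, entropy) are standard signal statistics that can be computed per decomposed mode. A minimal NumPy sketch, assuming a decomposition (VMD/EMD/EEMD) has already produced a mode and using a histogram-based Shannon entropy; the function name and bin count are illustrative:

```python
import numpy as np

def mode_indicators(mode, bins=32):
    """Energy, kurtosis, and Shannon entropy of one decomposed mode."""
    energy = float(np.sum(mode ** 2))                 # signal energy
    centered = mode - mode.mean()
    kurt = float(np.mean(centered ** 4) / np.mean(centered ** 2) ** 2)
    hist, _ = np.histogram(mode, bins=bins)           # amplitude histogram
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))          # Shannon entropy (bits)
    return energy, kurt, entropy

# Stand-in for one VMD mode: a pure sinusoid sampled over full periods.
t = np.arange(1000)
mode = np.sin(2 * np.pi * 5 * t / 1000)
energy, kurt, entropy = mode_indicators(mode)
```

Stacking these three numbers per mode gives the feature vector that would then feed PCA and the ANN classifier described in the abstract.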
46

UENO, Tomohiro, Kazuhisa ITO, Weidong MA and Shigeru IKEO. "DESIGN OF ROBUST POSITION/PRESSURE CONTROLLER FOR CYLINDER USING HYDRAULIC TRANSFORMER". Proceedings of the JFPS International Symposium on Fluid Power 2005, no. 6 (2005): 414–19. http://dx.doi.org/10.5739/isfp.2005.414.

47

Jiang, Jianmin, Hossam M. Kasem and Kwok-Wai Hung. "A Very Deep Spatial Transformer Towards Robust Single Image Super-Resolution". IEEE Access 7 (2019): 45618–31. http://dx.doi.org/10.1109/access.2019.2908996.

48

Gu, Beom W., Jin S. Choi, Ho S. Son, Myoung K. Je, Hyung I. Yun and Chun T. Rim. "Temperature-Robust Air-Gapless EE-Type Transformer Rails for Sliding Doors". IEEE Transactions on Power Electronics 33, no. 9 (September 2018): 7841–57. http://dx.doi.org/10.1109/tpel.2017.2772926.

49

Umar, Buhari Ugbede, James Garba Ambafi, Olayemi Mikail Olaniyi, James Agajo and Omeiza Rabiu Isah. "Low-cost and Efficient Fault Detection and Protection System for Distribution Transformer". Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control, February 12, 2020, 79–86. http://dx.doi.org/10.22219/kinetik.v5i1.987.

Annotation:
Distribution transformers are a vital component of electrical power transmission and distribution systems. Monitoring transformer parameters before faults occur can help prevent failures that are expensive to repair and that result in a loss of energy and service. The present method of routine manual checks of transformer parameters by the electricity board has proven to be less effective. This research aims to develop a low-cost protection system for distribution transformers, making them safer and improving the reliability of service to users. Accordingly, this work investigated transformer fault types and developed a microcontroller-based fault detection and protection system that uses GSM (Global System for Mobile Communications) technology for fault reporting. The developed prototype was tested using voltage, current, and temperature measurements, with a voltage above 220 volts classified as overvoltage, a load above 200 watts as overload, and a temperature above 39 degrees Celsius as overtemperature. The results showed timely detection of transformer faults, fully functional protection circuits, and successful fault reporting via the GSM device, with an overall accuracy of 99%. The system can thus be recommended for use by electricity distribution companies to protect distribution transformers for optimal performance, as it makes the transformers more robust and intelligent. Hence, real-time distribution transformer fault monitoring and prevention is achieved, and the cost of transformer maintenance is reduced.
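The detection logic the abstract describes is simple threshold comparison on three measurements. A minimal sketch of that rule, using the thresholds reported in the abstract (220 V, 200 W, 39 °C); the function name and return format are illustrative, and the real system runs on a microcontroller with GSM reporting rather than in Python:

```python
def check_transformer(voltage_v, load_w, temp_c,
                      v_max=220.0, p_max=200.0, t_max=39.0):
    """Flag faults against the thresholds reported in the paper."""
    faults = []
    if voltage_v > v_max:
        faults.append("overvoltage")
    if load_w > p_max:
        faults.append("overload")
    if temp_c > t_max:
        faults.append("overtemperature")
    return faults

# Healthy reading vs. an overvoltage event
healthy = check_transformer(210, 150, 35)   # []
fault = check_transformer(230, 150, 35)     # ["overvoltage"]
```

On the actual device, a non-empty fault list would trip the protection circuit and trigger an SMS via the GSM module.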
50

Zhang, Hao, Yan Piao and Nan Qi. "STFT: Spatial and temporal feature fusion for transformer tracker". IET Computer Vision, August 31, 2023. http://dx.doi.org/10.1049/cvi2.12233.

Annotation:
Siamese-based trackers have demonstrated robust performance in object tracking, while Transformers have achieved widespread success in object detection. Currently, many researchers use a hybrid structure of convolutional neural networks and Transformers to design the backbone network of trackers, aiming to improve performance. However, this approach often underutilises the global feature extraction capability of Transformers. The authors propose a novel Transformer-based tracker that fuses spatial and temporal features. The tracker consists of a multilayer spatial feature fusion network (MSFFN), a temporal feature fusion network (TFFN), and a prediction head. The MSFFN includes two phases, feature extraction and feature fusion, and both phases are constructed with a Transformer. Compared with the hybrid structure of "CNNs + Transformer", the proposed method enhances the continuity of feature extraction and the ability of information interaction between features, enabling comprehensive feature extraction. Moreover, to consider the temporal dimension, the authors propose a TFFN for updating the template image. The network utilises the Transformer to fuse the tracking results of multiple frames with the initial frame, allowing the template image to continuously incorporate more information and maintain the accuracy of target features. Extensive experiments show that the STFT tracker achieves state-of-the-art results on multiple benchmarks (OTB100, VOT2018, LaSOT, GOT-10K, and UAV123). In particular, STFT achieves remarkable area-under-the-curve scores of 0.652 and 0.706 on the LaSOT and OTB100 benchmarks, respectively.
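The template-update step the abstract describes (fusing features from multiple tracked frames with the initial template) can be illustrated with a single-head cross-attention in NumPy. This is a toy sketch of the general mechanism, not STFT's actual TFFN: the function names, dimensions, and the single-head simplification are all assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_template(template, frame_feats):
    """Cross-attention: template tokens query features gathered from
    several frames, producing an updated template of the same shape."""
    keys = np.concatenate(frame_feats, axis=0)        # (n_total, d)
    d = template.shape[1]
    attn = softmax(template @ keys.T / np.sqrt(d))    # (n_tmpl, n_total)
    return attn @ keys                                # updated template

rng = np.random.default_rng(1)
template = rng.normal(size=(4, 16))                   # 4 template tokens, d=16
frames = [rng.normal(size=(4, 16)) for _ in range(3)] # features from 3 later frames
updated = fuse_template(template, [template] + frames)
```

Because the initial template is included among the keys, the updated template can keep leaning on the original target appearance while absorbing information from later frames, which mirrors the motivation stated in the abstract.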
