Academic literature on the topic 'Convolutional transformer'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Convolutional transformer.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Convolutional transformer"

1

Li, Pengfei, Peixiang Zhong, Kezhi Mao, Dongzhe Wang, Xuefeng Yang, Yunfeng Liu, Jianxiong Yin, and Simon See. "ACT: an Attentive Convolutional Transformer for Efficient Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13261–69. http://dx.doi.org/10.1609/aaai.v35i15.17566.

Full text
Abstract:
Recently, Transformer has been demonstrating promising performance in many NLP tasks and showing a trend of replacing Recurrent Neural Network (RNN). Meanwhile, less attention is drawn to Convolutional Neural Network (CNN) due to its weak ability in capturing sequential and long-distance dependencies, although it has excellent local feature extraction capability. In this paper, we introduce an Attentive Convolutional Transformer (ACT) that takes advantage of both Transformer and CNN for efficient text classification. Specifically, we propose a novel attentive convolution mechanism that utilizes the semantic meaning of convolutional filters attentively to transform text from the complex word space to a more informative convolutional filter space where important n-grams are captured. ACT is able to capture both local and global dependencies effectively while preserving sequential information. Experiments on various text classification tasks and detailed analyses show that ACT is a lightweight, fast, and effective universal text classifier, outperforming CNNs, RNNs, and attentive models including Transformer.
APA, Harvard, Vancouver, ISO, and other styles
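The core mechanism this abstract describes, scoring n-grams with convolutional filters and then attending over the resulting filter space, can be illustrated with a minimal numpy sketch. This is only an illustration of the general idea, not the authors' implementation; all shapes and weights here are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "sentence" of T word vectors and K convolutional bigram filters.
T, d, K = 6, 8, 4
X = rng.normal(size=(T, d))          # word embeddings
W = rng.normal(size=(K, 2 * d))      # K bigram filters

# Convolution: each filter responds to every bigram (an n-gram feature map).
bigrams = np.concatenate([X[:-1], X[1:]], axis=1)   # (T-1, 2d)
fmap = bigrams @ W.T                                # (T-1, K)

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Attention over the filter space: each filter's softmax column decides
# which n-grams it emphasises, and those n-grams are pooled per filter.
alpha = softmax(fmap, axis=0)        # attention weights, (T-1, K)
sent = (alpha[..., None] * bigrams[:, None, :]).sum(axis=0)  # (K, 2d)

print(sent.shape)   # one attended n-gram summary per filter
```

Because the pooling is attention-weighted rather than a hard max, the representation retains information about which positions contributed, which is the sequential-information property the abstract emphasises.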
2

He, Ping, Yong Li, Shoulong Chen, Hoghua Xu, Lei Zhu, and Lingyan Wang. "Core looseness fault identification model based on Mel spectrogram-CNN." Journal of Physics: Conference Series 2137, no. 1 (December 1, 2021): 012060. http://dx.doi.org/10.1088/1742-6596/2137/1/012060.

Full text
Abstract:
In order to realize transformer voiceprint recognition, a transformer voiceprint-recognition model based on a Mel-spectrogram convolutional neural network is proposed. First, the transformer core-looseness fault is simulated by setting different preloads, and the sound signals under the different preloads are collected. Second, each sound signal is converted into a spectrogram that a convolutional neural network can be trained on, and its dimension is reduced by a Mel filter bank to produce a Mel spectrogram; this makes it possible to generate spectrogram data sets under different preloads in batches. Finally, the data set is fed into the convolutional neural network for training to obtain the transformer voiceprint fault-recognition model. The results show that the training accuracy of the proposed Mel-spectrogram convolutional-neural-network transformer identification model is 99.91%, and that it identifies core-loosening faults well.
APA, Harvard, Vancouver, ISO, and other styles
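The Mel-spectrogram front end this abstract describes (a magnitude STFT followed by a triangular Mel filter bank) can be sketched in plain numpy. This is a generic, illustrative pipeline with assumed parameters (`n_fft`, `hop`, `n_mels` are made up), not the paper's configuration:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(y, sr, n_fft=256, hop=128, n_mels=20):
    """Power STFT followed by a triangular Mel filter bank."""
    # Frame the signal and take a windowed FFT of each frame.
    frames = np.lib.stride_tricks.sliding_window_view(y, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2

    # Triangular filters spaced evenly on the Mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)
    return spec @ fb.T        # (n_frames, n_mels)

sr = 8000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t)   # a 440 Hz test tone as stand-in audio
M = mel_spectrogram(y, sr)
print(M.shape)
```

In a pipeline like the paper's, images of `M` (typically log-scaled) recorded under different preloads would form the training set for the CNN classifier.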
3

Li, Xiaopeng, and Shuqin Li. "Transformer Help CNN See Better: A Lightweight Hybrid Apple Disease Identification Model Based on Transformers." Agriculture 12, no. 6 (June 19, 2022): 884. http://dx.doi.org/10.3390/agriculture12060884.

Full text
Abstract:
The complex backgrounds of crop disease images and the small contrast between the disease area and the background can easily cause confusion, which seriously affects the robustness and accuracy of apple disease-identification models. To solve the above problems, this paper proposes a Vision Transformer-based lightweight apple leaf disease-identification model, ConvViT, to extract effective features of crop disease spots to identify crop diseases. Our ConvViT includes convolutional structures and Transformer structures; the convolutional structure is used to extract the global features of the image, and the Transformer structure is used to obtain the local features of the disease region to help the CNN see better. The patch-embedding method is improved to retain more edge information of the image and promote information exchange between patches in the Transformer. The parameters and FLOPs (floating-point operations) of the model are significantly reduced by using depthwise separable convolution and linear-complexity multi-head attention operations. Experimental results on a self-built apple leaf disease dataset with complex backgrounds show that ConvViT achieves identification results (96.85%) comparable to the current state-of-the-art Swin-Tiny. Its parameters and FLOPs are only 32.7% and 21.7% of Swin-Tiny's, significantly ahead of MobilenetV3, Efficientnet-b0, and other models, which indicates that the proposed model is indeed an effective disease-identification model with practical application value.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhou, Li, Tongqin Shi, Songquan Huang, Fangchao Ke, Zhenxi Huang, Zhaoyang Zhang, and Jinzheng Liang. "Convolutional neural network for real-time main transformer detection." Journal of Physics: Conference Series 2229, no. 1 (March 1, 2022): 012021. http://dx.doi.org/10.1088/1742-6596/2229/1/012021.

Full text
Abstract:
For substation construction, the main transformer is the dominant piece of electrical equipment, and its arrival and operation directly affect the progress of the project. In the context of smart-grid construction, and in order to improve the efficiency of real-time main transformer detection, this paper proposes an identification and detection method based on the SSD algorithm. The SSD algorithm is able to extract the target device (such as the main transformer) accurately, and the Lenet algorithm module can analyse the features contained in the image. To improve the accuracy of the detection method, the image-migration algorithm of VGG-Net is used to expand the negative samples of main transformers and improve the generalisation of the algorithm. Finally, an image set collected in real substation projects is used for validation, and the results show that the method identifies main transformers accurately, with high effectiveness and feasibility.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Zhiwen, Teng Li, Xuebin Tang, Xiang Hu, and Yuanxi Peng. "CAEVT: Convolutional Autoencoder Meets Lightweight Vision Transformer for Hyperspectral Image Classification." Sensors 22, no. 10 (May 20, 2022): 3902. http://dx.doi.org/10.3390/s22103902.

Full text
Abstract:
Convolutional neural networks (CNNs) have been prominent in most hyperspectral image (HSI) processing applications due to their advantages in extracting local information. Despite their success, the locality of the convolutional layers within CNNs results in heavyweight models and time-consuming defects. In this study, inspired by the excellent performance of transformers that are used for long-range representation learning in computer vision tasks, we built a lightweight vision transformer for HSI classification that can extract local and global information simultaneously, thereby facilitating accurate classification. Moreover, as traditional dimensionality reduction methods are limited in their linear representation ability, a three-dimensional convolutional autoencoder was adopted to capture the nonlinear characteristics between spectral bands. Based on the aforementioned three-dimensional convolutional autoencoder and lightweight vision transformer, we designed an HSI classification network, namely the “convolutional autoencoder meets lightweight vision transformer” (CAEVT). Finally, we validated the performance of the proposed CAEVT network using four widely used hyperspectral datasets. Our approach showed superiority, especially in the absence of sufficient labeled samples, which demonstrates the effectiveness and efficiency of the CAEVT network.
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, Jun, Zi-Xuan Chen, Hao Luo, and Zhe-Ming Lu. "An Efficient Dehazing Algorithm Based on the Fusion of Transformer and Convolutional Neural Network." Sensors 23, no. 1 (December 21, 2022): 43. http://dx.doi.org/10.3390/s23010043.

Full text
Abstract:
The purpose of image dehazing is to remove the interference from weather factors in degraded images and enhance the clarity and color saturation of images to maximize the restoration of useful features. Single image dehazing is one of the most important tasks in the field of image restoration. In recent years, due to the progress of deep learning, single image dehazing has made great progress. With the success of Transformer in advanced computer vision tasks, some studies have also begun to apply Transformer to image dehazing tasks and obtained surprising results. However, convolutional-neural-network-based dehazing algorithms and Transformer-based dehazing algorithms each have their own pronounced advantages and disadvantages. Therefore, this paper proposes a novel Transformer–Convolution fusion dehazing network (TCFDN), which uses Transformer's global modeling ability and the convolutional neural network's local modeling ability to improve the dehazing ability. The Transformer–Convolution fusion dehazing network uses the classic autoencoder structure. This paper proposes a Transformer–Convolution hybrid layer, which uses an adaptive fusion strategy to make full use of the Swin-Transformer and the convolutional neural network to extract and reconstruct image features. Building on previous research, this layer further improves the ability of the network to remove haze. A series of contrast experiments and ablation experiments not only proved that the Transformer–Convolution fusion dehazing network proposed in this paper exceeds more advanced dehazing algorithms, but also provided solid and powerful evidence for the basic theory on which it depends.
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Liming, Yihang Yang, Jinghui Yang, Ningyuan Zhao, Ling Wu, Liguo Wang, and Tianrui Wang. "FusionNet: A Convolution–Transformer Fusion Network for Hyperspectral Image Classification." Remote Sensing 14, no. 16 (August 19, 2022): 4066. http://dx.doi.org/10.3390/rs14164066.

Full text
Abstract:
In recent years, deep-learning-based hyperspectral image (HSI) classification networks have become some of the most dominant implementations in HSI classification tasks. Among these networks, convolutional neural networks (CNNs) and attention-based networks have prevailed over other HSI classification networks. While convolutional neural networks with perceptual fields can effectively extract local features in the spatial dimension of HSI, they are poor at capturing the global and sequential features of spectral–spatial information; networks based on attention mechanisms, for example the Transformer, usually have a better ability to capture global features but are relatively weak at discriminating local features. This paper proposes a fusion network of convolution and Transformer for HSI classification, known as FusionNet, in which convolution and Transformer are fused in both serial and parallel mechanisms to achieve full utilization of HSI features. Experimental results demonstrate that the proposed network has superior classification results compared to previous similar networks, and performs relatively well even on a small amount of training data.
APA, Harvard, Vancouver, ISO, and other styles
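The parallel half of the convolution–Transformer fusion described above can be sketched in numpy: a local depthwise convolution and a global self-attention run over the same tokens, and their outputs are concatenated and projected. This is a toy illustration with made-up shapes and random weights, not the FusionNet architecture itself:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 10, 16                      # a sequence of T spectral tokens of width d
X = rng.normal(size=(T, d))

# Local branch: depthwise 1D convolution (kernel size 3) along the sequence.
k = rng.normal(size=(3, d))
pad = np.pad(X, ((1, 1), (0, 0)))
local = sum(pad[i:i + T] * k[i] for i in range(3))      # (T, d)

# Global branch: single-head self-attention over the same tokens.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv
S = (Q @ K.T) / np.sqrt(d)
A = np.exp(S - S.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)                       # softmax rows
globl = A @ V                                           # (T, d)

# Parallel fusion: concatenate both views and project back to width d.
Wf = rng.normal(size=(2 * d, d))
fused = np.concatenate([local, globl], axis=1) @ Wf     # (T, d)
print(fused.shape)
```

A serial fusion, by contrast, would feed the convolutional output into the attention branch rather than running the two side by side; the paper combines both arrangements.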
8

Ibrahem, Hatem, Ahmed Salem, and Hyun-Soo Kang. "RT-ViT: Real-Time Monocular Depth Estimation Using Lightweight Vision Transformers." Sensors 22, no. 10 (May 19, 2022): 3849. http://dx.doi.org/10.3390/s22103849.

Full text
Abstract:
The latest research in computer vision has highlighted the effectiveness of vision transformers (ViT) in performing several computer vision tasks; they can efficiently understand and process the image globally, unlike convolution, which processes the image locally. ViTs outperform convolutional neural networks in terms of accuracy in many computer vision tasks, but the speed of ViTs is still an issue due to the excessive use of transformer layers that include many fully connected layers. Therefore, we propose a real-time ViT-based monocular depth estimation (depth estimation from a single RGB image) method with encoder-decoder architectures for indoor and outdoor scenes. The main architecture of the proposed method consists of a vision transformer encoder and a convolutional neural network decoder. We started by training the base vision transformer (ViT-b16) with 12 transformer layers, then reduced the transformer layers to six layers, namely ViT-s16 (the Small ViT), and four layers, namely ViT-t16 (the Tiny ViT), to obtain real-time processing. We also tried four different configurations of the CNN decoder network. The proposed architectures can learn the task of depth estimation efficiently and can produce more accurate depth predictions than fully convolutional methods by taking advantage of the multi-head self-attention module. We train the proposed encoder-decoder architectures end-to-end on the challenging NYU-depthV2 and CITYSCAPES benchmarks, then evaluate the trained models on the validation and test sets of the same benchmarks, showing that they outperform many state-of-the-art methods on depth estimation while performing the task in real time (∼20 fps). We also present a fast 3D reconstruction (∼17 fps) experiment based on the depth estimated by our method, which is a real-world application of our method.
APA, Harvard, Vancouver, ISO, and other styles
9

Wu, Jiajing, Zhiqiang Wei, Jinpeng Zhang, Yushi Zhang, Dongning Jia, Bo Yin, and Yunchao Yu. "Full-Coupled Convolutional Transformer for Surface-Based Duct Refractivity Inversion." Remote Sensing 14, no. 17 (September 3, 2022): 4385. http://dx.doi.org/10.3390/rs14174385.

Full text
Abstract:
A surface-based duct (SBD) is an abnormal atmospheric structure with a low probability of occurrence but a strong ability to trap electromagnetic waves. However, the existing research is based on the assumption that the range direction of the surface duct is homogeneous, which leads to low productivity and large errors when applied in a real marine environment. To alleviate these issues, we propose a framework for the inversion of the inhomogeneous SBD M-profile based on a full-coupled convolutional Transformer (FCCT) deep learning network. We first designed a one-dimensional residual dilated causal convolution autoencoder to extract feature representations from the high-dimensional, range-direction-inhomogeneous M-profile. Second, to improve efficiency and precision, we proposed a full-coupled convolutional Transformer (FCCT) that incorporates dilated causal convolutional layers to gain exponential receptive-field growth over the M-profile and to help Transformer-like models improve the receptive field over each range-direction-inhomogeneous SBD M-profile. We tested the performance of our proposed method on two sets of simulated sea clutter power data, where the inversion of the simulated data reached 96.99% and 97.69%, outperforming the existing baseline methods.
APA, Harvard, Vancouver, ISO, and other styles
10

Sowndarya, S., and Sujatha Balaraman. "Diagnosis of Partial Discharge in Power Transformer using Convolutional Neural Network." Journal of Soft Computing Paradigm 4, no. 1 (April 30, 2022): 29–38. http://dx.doi.org/10.36548/jscp.2022.1.004.

Full text
Abstract:
In an electric power system, power transformers are essential. Transformer failures can degrade power quality and cause outages. Partial Discharges (PD) are a condition that, if not adequately monitored, can cause power transformer failures. This project addresses the diagnosis of PD in power transformers using the Phase Amplitude (PA) response of PRPD (Phase-Resolved Partial Discharge) patterns recorded with PD detectors, a widely used pattern for analysing partial discharge. A Convolutional Neural Network (CNN) is used to classify the type of PD defect. PRPD patterns of 240 PA sample images were taken from power transformers rated 132/11 kV and 132/25 kV for training and testing the network. Feature extraction was also done using the CNN. In this work, the classification of PD faults is done using a supervised machine learning technique: three classes of PD faults, Floating PD, Surface PD, and Void PD, are considered and predicted using a Support Vector Machine (SVM) classifier. A simulation study is carried out using MATLAB. Based on the results obtained, the CNN model achieves greater classification accuracy, thereby enhancing the life span of the power transformer.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Convolutional transformer"

1

Dronzeková, Michaela. "Analýza polygonálních modelů pomocí neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417253.

Full text
Abstract:
This thesis deals with rotation estimation of a 3D model of a human jaw. It describes and compares methods for direct analysis of 3D models as well as a method that analyzes the model using rasterization. To evaluate the performance of the proposed methods, a metric that counts the cases where the prediction was less than 30° from the ground truth is used. The proposed rasterization method takes three x-ray views of the model as input and processes them with a convolutional network; it achieves the best performance, 99% under the described metric. The method that directly analyzes the polygonal model as a sequence uses an attention mechanism and was inspired by the transformer architecture. A special pooling function was proposed for this network that decreases its memory requirements. This method achieves 88%; it is not as good as the rasterization method with the x-ray rendering, but it is better than the rasterization method without x-ray rendering, and it can process the polygonal model directly. The last method uses a graph representation of the mesh. The graph network had problems with overfitting, which is why it did not achieve good results; this method does not seem well suited to analyzing polygonal models.
APA, Harvard, Vancouver, ISO, and other styles
2

Domini, Davide. "Classificazione di Radiografie Toraciche per la Diagnosi del COVID-19 con Reti Convoluzionali e Vision Transformer." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24243/.

Full text
Abstract:
Over the last two years, because of the pandemic caused by the Covid-19 virus, life in every corner of our planet has changed drastically. To date, more than two hundred and twenty million people worldwide have contracted the virus, and almost five million have died. In some periods there have been as many as a million new infections per day, and on average, over the last six months, this figure has been more than half a million per day. Hospitals, especially in less developed countries, have been under great stress and have often lacked the resources to face this serious pandemic. For this reason, all research in this field becomes extremely important, especially research that, with the aid of artificial intelligence, can support physicians. Once developed and approved, these technologies can be distributed at very low cost and made accessible to everyone. In this thesis, two different approaches to the diagnosis of Covid-19 from patients' chest X-rays were tested and evaluated: the first method is based on transfer learning of a convolutional network originally designed for image classification; the second approach uses Vision Transformers (ViT), an architecture widespread in the field of Natural Language Processing and adapted to computer-vision tasks. The first solution achieved an accuracy of 0.85 and the second 0.92; these results, especially the second, are very encouraging, particularly given the minimal amount of training data required.
APA, Harvard, Vancouver, ISO, and other styles
3

Richtarik, Lukáš. "Rozpoznávání ručně psaného textu pomocí hlubokých neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-433517.

Full text
Abstract:
The work deals with the problem of handwritten text recognition using deep neural networks. It focuses on the sequence-to-sequence approach using an encoder-decoder model. It also includes the design of an encoder-decoder model for handwritten text recognition that uses a transformer instead of recurrent neurons, along with a set of experiments performed on it.
APA, Harvard, Vancouver, ISO, and other styles
4

Duhamel, Pierre. "Algorithmes de transformées discrètes rapides pour convolution cyclique et de convolution cyclique pour transformées rapides." Paris 11, 1986. http://www.theses.fr/1986PA112197.

Full text
Abstract:
First, we present the fast transforms to be considered in the following (Number Theoretic Transforms - NTT -, Fast Fourier Transforms - FFT -, Polynomial Transforms - PT -, Discrete Cosine Transforms - DCT -) in a unified manner. Then, we use the link between NTT and PT to propose a new family of NTT with 2 as a root of unity which includes classical ones (Fermat, Mersenne, etc.), but also includes new ones, which allow transforming longer sequences for a given dynamic range. We also establish a decomposition property of the arithmetic in this family of NTTs, and propose a set of NTT algorithms with a minimum number of shifts. Next, for the computation of FFTs, we introduced the "split-radix" algorithm. Application of this algorithm to complex, real, or real-symmetric data requires in each case the smallest known number of operations (multiplications and additions), with a regular structure. We also showed that any improvement of one of these algorithms would bring a corresponding improvement of the other ones, and that their structure is very similar to that of the optimum (minimum-multiplication) algorithms. The search for minimum-multiplication DCT algorithms enabled us to show the equivalence between a 2n DCT and a cyclic convolution. We used this equivalence to propose two new architectures for DCT computation: the first needs only one general multiplier per point, but requires modulo arithmetic; the second, based on distributed arithmetic, has a very simple structure that can also be used for other fast transforms (e.g., the FFT).
APA, Harvard, Vancouver, ISO, and other styles
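The equivalence this thesis exploits, computing convolutions through fast transforms and vice versa, rests on the convolution theorem: pointwise multiplication in the transform domain equals cyclic convolution in the signal domain. A short numpy check of that identity (using the complex FFT for simplicity, rather than the DCT or number-theoretic transforms the thesis studies):

```python
import numpy as np

def circular_convolution_direct(a, b):
    """O(n^2) definition: c[k] = sum_j a[j] * b[(k - j) mod n]."""
    n = len(a)
    return np.array([sum(a[j] * b[(k - j) % n] for j in range(n))
                     for k in range(n)])

def circular_convolution_fft(a, b):
    """Convolution theorem: transform, multiply pointwise, invert."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.5, -1.0, 0.0, 2.0])
print(circular_convolution_direct(a, b))
print(circular_convolution_fft(a, b))   # same values up to rounding
```

For a length-n sequence this replaces the O(n^2) sum with two O(n log n) transforms and one O(n) pointwise product, which is the speed-up behind all the transform families listed in the abstract.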
5

Kwok, Yien Chian. "Shah convolution Fourier transform detection." Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Westermark, Pontus. "Wavelets, Scattering transforms and Convolutional neural networks : Tools for image processing." Thesis, Uppsala universitet, Analys och sannolikhetsteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-337570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Highlander, Tyler. "Efficient Training of Small Kernel Convolutional Neural Networks using Fast Fourier Transform." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1432747175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Martucci, Stephen A. "Symmetric convolution and the discrete sine and cosine transforms : principles and applications." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/15038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dušanka, Perišić. "On Integral Transforms and Convolution Equations on the Spaces of Tempered Ultradistributions." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 1992. https://www.cris.uns.ac.rs/record.jsf?recordId=73337&source=NDLTD&language=en.

Full text
Abstract:
This thesis introduces and investigates spaces of Beurling-type and Roumieu-type tempered ultradistributions, which are natural generalizations of the space of Schwartz's tempered distributions in the Denjoy-Carleman-Komatsu theory of ultradistributions. It has been proved that the introduced spaces preserve all of the good properties the Schwartz space has, among them the remarkable property that the Fourier transform maps the spaces continuously into themselves. In the first chapter the necessary notation and notions are given. In the second chapter, the spaces of ultrarapidly decreasing ultradifferentiable functions and their duals, the spaces of Beurling and of Roumieu tempered ultradistributions, are introduced; their topological properties, their relations with the known distribution and ultradistribution spaces, and their structural properties are investigated; characterizations of the Hermite expansions and boundary-value representations of the elements of the spaces are given. The spaces of multipliers of the spaces of Beurling-type and Roumieu-type tempered ultradistributions are determined explicitly in the third chapter. The fourth chapter is devoted to the investigation of the Fourier, Wigner, Bargmann, and Hilbert transforms on the spaces of Beurling-type and Roumieu-type tempered ultradistributions and their test spaces. In the fifth chapter the equivalence of the classical definitions of the convolution of Beurling-type ultradistributions is proved, as well as the equivalence of the newly introduced definitions of ultratempered convolutions of Beurling-type ultradistributions. In the last chapter, a necessary and sufficient condition is given for a convolutor of a space of tempered ultradistributions to be hypoelliptic in a space of integrable ultradistributions, and hypoelliptic convolution equations are studied in these spaces. The bibliography has 70 items.
APA, Harvard, Vancouver, ISO, and other styles
10

Marir, F. "The application of number theoretic transforms to two dimensional convolutions and adaptive filtering." Thesis, University of Newcastle Upon Tyne, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.374136.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Convolutional transformer"

1

Yakubovich, S. B. The hypergeometric approach to integral transforms and convolutions. Dordrecht: Kluwer Academic, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Katz, Nicholas M. Convolution and equidistribution: Sato-Tate theorems for finite-field Mellin transforms. Princeton: Princeton University Press, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Laamri, El Haj. Mesures, intégration, convolution, et transformée de Fourier des fonctions. Dunod, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Convolutional transformer"

1

Durall, Ricard, Stanislav Frolov, Jörn Hees, Federico Raue, Franz-Josef Pfreundt, Andreas Dengel, and Janis Keuper. "Combining Transformer Generators with Convolutional Discriminators." In KI 2021: Advances in Artificial Intelligence, 67–79. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87626-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Cong, Hongmin Xu, Xiong Zhang, Li Wang, Zhitong Zheng, and Haifeng Liu. "Convolutional Embedding Makes Hierarchical Vision Transformer Stronger." In Lecture Notes in Computer Science, 739–56. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20044-1_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Jia, Hong-zhen Yang, Hong-fei Xu, Si-ri Pang, Huan-yuan Li, and Wang Luo. "Attribute Segmentation for Transformer Substation Using Convolutional Network." In Advances in Intelligent Systems and Computing, 679–86. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30874-6_63.

4

Estupiñán-Ojeda, Cristian, Cayetano Guerra-Artal, and Mario Hernández-Tejera. "Informer: An Efficient Transformer Architecture Using Convolutional Layers." In Lecture Notes in Computer Science, 208–17. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10161-8_11.

5

Liu, Yuqi, Yin Wang, Haikuan Du, and Shen Cai. "Spherical Transformer: Adapting Spherical Signal to Convolutional Networks." In Pattern Recognition and Computer Vision, 15–27. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18913-5_2.

6

Jain, Kushal, Fenil Doshi, and Lakshmi Kurup. "Stance Detection Using Transformer Architectures and Temporal Convolutional Networks." In Advances in Computer, Communication and Computational Sciences, 437–47. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4409-5_40.

7

Woon, Wei Lee, Zeyar Aung, and Ayman El-Hag. "Intelligent Monitoring of Transformer Insulation Using Convolutional Neural Networks." In Data Analytics for Renewable Energy Integration. Technologies, Systems and Society, 127–36. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04303-2_10.

8

Wu, Jia, Dan Su, Hong-fei Xu, Si-ri Pang, and Wang Luo. "Attribute Classification for Transformer Substation Based on Deep Convolutional Network." In Advances in Intelligent Systems and Computing, 669–77. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30874-6_62.

9

Lin, Ailiang, Jiayu Xu, Jinxing Li, and Guangming Lu. "ConTrans: Improving Transformer with Convolutional Attention for Medical Image Segmentation." In Lecture Notes in Computer Science, 297–307. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16443-9_29.

10

Yang, Zi-An, Nai-Rong Zheng, and Feng Wang. "SAR Image Classification by Combining Transformer and Convolutional Neural Networks." In Lecture Notes in Electrical Engineering, 193–200. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8202-6_18.

Conference papers on the topic "Convolutional transformer"

1

Zeng, Kungan, and Incheon Paik. "A Lightweight Transformer with Convolutional Attention." In 2020 11th International Conference on Awareness Science and Technology (iCAST). IEEE, 2020. http://dx.doi.org/10.1109/icast51195.2020.9319489.

2

Luo, Ge, Ping Wei, Shuwen Zhu, Xinpeng Zhang, Zhenxing Qian, and Sheng Li. "Image Steganalysis with Convolutional Vision Transformer." In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022. http://dx.doi.org/10.1109/icassp43922.2022.9747091.

3

Ke, Nan, Tong Lin, Zhouchen Lin, Xiao-Hua Zhou, and Taoyun Ji. "Convolutional Transformer Networks for Epileptic Seizure Detection." In CIKM '22: The 31st ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3511808.3557568.

4

Lee, Sihaeng, Eojindl Yi, Janghyeon Lee, Jinsu Yoo, Honglak Lee, and Seung Hwan Kim. "Fully Convolutional Transformer with Local-Global Attention." In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022. http://dx.doi.org/10.1109/iros47612.2022.9981339.

5

Huang, Chao, Jiashu Zhao, and Dawei Yin. "Purchase Intent Forecasting with Convolutional Hierarchical Transformer Networks." In 2021 IEEE 37th International Conference on Data Engineering (ICDE). IEEE, 2021. http://dx.doi.org/10.1109/icde51399.2021.00281.

6

Prayuda, Alim Wicaksono Hari, Heri Prasetyo, and Jing-Ming Guo. "AWGN-Based Image Denoiser using Convolutional Vision Transformer." In 2021 International Symposium on Electronics and Smart Devices (ISESD). IEEE, 2021. http://dx.doi.org/10.1109/isesd53023.2021.9501567.

7

Alam, Mohammad Mahabub, Gour Karmakar, Syed Islam, Joarder Kamruzzaman, Madhu Chetty, Suryani Lim, Gayan Appuhamillage, Gopi Chattopadhyay, Steve Wilcox, and Vincent Verheyen. "Assessing Transformer Oil Quality using Deep Convolutional Networks." In 2019 29th Australasian Universities Power Engineering Conference (AUPEC). IEEE, 2019. http://dx.doi.org/10.1109/aupec48547.2019.211896.

8

Ye, Longqing. "Dynamic Clone Transformer for Efficient Convolutional Neural Netwoks." In ICCAI '22: 2022 8th International Conference on Computing and Artificial Intelligence. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3532213.3532222.

9

Bai, Ruwen, Min Li, Bo Meng, Fengfa Li, Miao Jiang, Junxing Ren, and Degang Sun. "Hierarchical Graph Convolutional Skeleton Transformer for Action Recognition." In 2022 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2022. http://dx.doi.org/10.1109/icme52920.2022.9859781.

10

Wu, Zhenyu, Chaohui Song, and Haibin Yan. "TC-Net: Transformer-Convolutional Networks for Road Segmentation." In 2022 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2022. http://dx.doi.org/10.1109/icme52920.2022.9859734.

Reports on the topic "Convolutional transformer"

1

Foltz, Thomas M. Symmetric Convolution. Using Unitary Transform Matrices: A New Approach to Image Reconstruction. Fort Belvoir, VA: Defense Technical Information Center, April 1999. http://dx.doi.org/10.21236/ada389062.

2

Nuttall, Albert H. Two-Dimensional Convolutions, Correlations, and Fourier Transforms of Combinations of Wigner Distribution Functions and Complex Ambiguity Functions. Fort Belvoir, VA: Defense Technical Information Center, August 1990. http://dx.doi.org/10.21236/ada226852.
