Academic literature on the topic 'Convolution dilatée'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Convolution dilatée.'
Next to every source in the list of references there is an 'Add to bibliography' button: it generates a bibliographic reference for the chosen work in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Convolution dilatée"
Wang, Wei, Yiyang Hu, Ting Zou, Hongmei Liu, Jin Wang, and Xin Wang. "A New Image Classification Approach via Improved MobileNet Models with Local Receptive Field Expansion in Shallow Layers." Computational Intelligence and Neuroscience 2020 (August 1, 2020): 1–10. http://dx.doi.org/10.1155/2020/8817849.
Peng, Wenli, Shenglai Zhen, Xin Chen, Qianjing Xiong, and Benli Yu. "Study on convolutional recurrent neural networks for speech enhancement in fiber-optic microphones." Journal of Physics: Conference Series 2246, no. 1 (April 1, 2022): 012084. http://dx.doi.org/10.1088/1742-6596/2246/1/012084.
Chim, Seyha, Jin-Gu Lee, and Ho-Hyun Park. "Dilated Skip Convolution for Facial Landmark Detection." Sensors 19, no. 24 (December 4, 2019): 5350. http://dx.doi.org/10.3390/s19245350.
Zhao, Feng, Junjie Zhang, Zhe Meng, and Hanqiang Liu. "Densely Connected Pyramidal Dilated Convolutional Network for Hyperspectral Image Classification." Remote Sensing 13, no. 17 (August 26, 2021): 3396. http://dx.doi.org/10.3390/rs13173396.
Tang, Jingfan, Meijia Zhou, Pengfei Li, Min Zhang, and Ming Jiang. "Crowd Counting Based on Multiresolution Density Map and Parallel Dilated Convolution." Scientific Programming 2021 (January 20, 2021): 1–10. http://dx.doi.org/10.1155/2021/8831458.
Ma, Tian, Xinlei Zhou, Jiayi Yang, Boyang Meng, Jiali Qian, Jiehui Zhang, and Gang Ge. "Dental Lesion Segmentation Using an Improved ICNet Network with Attention." Micromachines 13, no. 11 (November 7, 2022): 1920. http://dx.doi.org/10.3390/mi13111920.
Song, Zhendong, Yupeng Ma, Fang Tan, and Xiaoyi Feng. "Hybrid Dilated and Recursive Recurrent Convolution Network for Time-Domain Speech Enhancement." Applied Sciences 12, no. 7 (March 29, 2022): 3461. http://dx.doi.org/10.3390/app12073461.
Viriyasaranon, Thanaporn, Seung-Hoon Chae, and Jang-Hwan Choi. "MFA-net: Object detection for complex X-ray cargo and baggage security imagery." PLOS ONE 17, no. 9 (September 1, 2022): e0272961. http://dx.doi.org/10.1371/journal.pone.0272961.
Zhang, Jianming, Chaoquan Lu, Jin Wang, Lei Wang, and Xiao-Guang Yue. "Concrete Cracks Detection Based on FCN with Dilated Convolution." Applied Sciences 9, no. 13 (July 1, 2019): 2686. http://dx.doi.org/10.3390/app9132686.
Rahman, Takowa, Md Saiful Islam, and Jia Uddin. "MRI-Based Brain Tumor Classification Using a Dilated Parallel Deep Convolutional Neural Network." Digital 4, no. 3 (June 28, 2024): 529–54. http://dx.doi.org/10.3390/digital4030027.
Dissertations / Theses on the topic "Convolution dilatée"
Khalfaoui, Hassani Ismail. "Convolution dilatée avec espacements apprenables." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES017.
In this thesis, we develop and study the Dilated Convolution with Learnable Spacings (DCLS) method. The DCLS method can be considered as an extension of the standard dilated convolution method, but in which the positions of the weights of a neural network are learned during training by the gradient backpropagation algorithm, thanks to an interpolation technique. We empirically demonstrate the effectiveness of the DCLS method by providing concrete evidence from numerous supervised learning experiments. These experiments are drawn from the fields of computer vision, audio, and speech processing, and all show that the DCLS method has a competitive advantage over standard convolution techniques, as well as over several advanced convolution methods. Our approach is structured in several steps, starting with an analysis of the literature and existing convolution techniques that preceded the development of the DCLS method. We were particularly interested in the methods that are closely related to our own and that remain essential to capture the nuances and uniqueness of our approach. The cornerstone of our study is the introduction and application of the DCLS method to convolutional neural networks (CNNs), as well as to hybrid architectures that rely on both convolutional and visual attention approaches. The DCLS method is particularly noteworthy for its capabilities in supervised computer vision tasks such as classification, semantic segmentation, and object detection, all of which are essential tasks in the field. Having originally developed the DCLS method with bilinear interpolation, we explored other interpolation methods that could replace the bilinear interpolation conventionally used in DCLS, and which aim to make the position parameters of the weights in the convolution kernel differentiable. Gaussian interpolation proved to be slightly better in terms of performance. Our research then led us to apply the DCLS method in the field of spiking neural networks (SNNs) to enable synaptic delay learning within a neural network that could eventually be transferred to so-called neuromorphic chips. The results show that the DCLS method stands out as a new state-of-the-art technique in SNN audio classification for certain benchmark tasks in this field. These tasks involve datasets with a high temporal component. In addition, we show that DCLS can significantly improve the accuracy of artificial neural networks for the multi-label audio classification task, a key achievement in one of the most important audio classification benchmarks. We conclude with a discussion of the chosen experimental setup, its limitations, the limitations of our method, and our results.
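To make the core mechanism described in this abstract concrete, here is a minimal, hypothetical PyTorch sketch of a DCLS-style layer: instead of placing kernel weights on a fixed dilation grid, each weight is given a continuous, learnable 2D position, and bilinear interpolation scatters it into a larger dense kernel so that gradients flow back to the positions. The class name `DCLSConv2d` and all sizes and defaults here are illustrative assumptions, not the thesis's reference implementation.

```python
# Hypothetical sketch of dilated convolution with learnable spacings:
# weight positions are continuous parameters, and bilinear interpolation
# keeps them differentiable. Illustrative only, not the thesis's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DCLSConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_count=9, dilated_size=7):
        super().__init__()
        self.dilated_size = dilated_size  # size of the dense kernel actually convolved
        # One weight per kernel element.
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_count) * 0.1)
        # Continuous (row, col) positions inside the dilated kernel, learned by backprop.
        self.pos = nn.Parameter(torch.rand(kernel_count, 2) * (dilated_size - 1))

    def build_kernel(self):
        out_ch, in_ch, k = self.weight.shape
        d = self.dilated_size
        kernel = self.weight.new_zeros(out_ch, in_ch, d, d)
        # Clamp so the four bilinear neighbours stay inside the kernel.
        pos = self.pos.clamp(0, d - 1 - 1e-4)
        p0 = pos.floor().long()   # integer corner, shape (k, 2)
        frac = pos - p0           # fractional part, shape (k, 2)
        for i in range(k):
            r, c = p0[i, 0], p0[i, 1]
            fr, fc = frac[i, 0], frac[i, 1]
            w = self.weight[:, :, i]
            # Bilinear scatter of weight i onto its four neighbouring taps.
            kernel[:, :, r,     c    ] += w * (1 - fr) * (1 - fc)
            kernel[:, :, r + 1, c    ] += w * fr * (1 - fc)
            kernel[:, :, r,     c + 1] += w * (1 - fr) * fc
            kernel[:, :, r + 1, c + 1] += w * fr * fc
        return kernel

    def forward(self, x):
        return F.conv2d(x, self.build_kernel(), padding=self.dilated_size // 2)


if __name__ == "__main__":
    layer = DCLSConv2d(3, 8)
    y = layer(torch.randn(1, 3, 32, 32))
    y.sum().backward()
    print(y.shape, layer.pos.grad.shape)  # the positions receive gradients
```

The thesis additionally studies Gaussian interpolation in place of the bilinear scatter shown here; only the interpolation step above would change.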
Börjesson, Lukas. "Forecasting Financial Time Series through Causal and Dilated Convolutional Neural Networks." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167331.
Highlander, Tyler Clayton. "Conditional Dilated Attention Tracking Model - C-DATM." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1564652134758139.
Yeh, Pin-Yi (葉品儀). "Multi-Scale Neural Network with Dilated Convolutions for Image Deblurring." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/vgs5cw.
National Taiwan University of Science and Technology (國立臺灣科技大學)
Department of Computer Science and Information Engineering (資訊工程系)
Academic year: 107 (2018–2019)
Several deep learning-based approaches are successful in single image deblurring, particularly convolutional neural networks (CNNs). Unlike traditional methods, which try to estimate the blur kernel to extract the latent sharp image, CNN-based methods can directly learn the mapping from the blurry input image to the latent sharp image. A CNN usually has many layers to represent complex spatial relationships, and down-sampling layers are used to reduce the number of parameters (e.g., in an encoder-decoder architecture). However, down-sampling causes some spatial information to be lost, and this information could be useful for deblurring large regions. The receptive field is the spatial coverage of each feature, and increasing it reduces the loss of spatial information. We use dilated convolution to increase the receptive field of the features without increasing the number of parameters. Furthermore, a "coarse-to-fine" strategy is applied in this thesis: the network is given the blurry input image at different scales. This strategy progressively improves the outputs and captures details from different scales without adding more parameters. We show that the proposed model not only achieves better results than the state of the art but also has a faster execution time.
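The trade-off this abstract leans on is easy to verify: dilation widens the receptive field while the parameter count stays the same as an undilated stack. The sketch below is my own illustration of that point, not code from the thesis; the channel widths and dilation rates are arbitrary.

```python
# Illustration only: a dilated 3x3 stack has the same number of parameters
# as a plain 3x3 stack but a much larger receptive field.
import torch
import torch.nn as nn


def conv_stack(dilations):
    layers = []
    for d in dilations:
        # padding = d keeps the spatial size constant for a 3x3 kernel
        layers += [nn.Conv2d(16, 16, kernel_size=3, padding=d, dilation=d), nn.ReLU()]
    return nn.Sequential(*layers)


def receptive_field(dilations, kernel_size=3):
    # Each stride-1 layer adds (kernel_size - 1) * dilation to the receptive field.
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf


for name, dils in [("plain", [1, 1, 1]), ("dilated", [1, 2, 4])]:
    net = conv_stack(dils)
    n_params = sum(p.numel() for p in net.parameters())
    out = net(torch.randn(1, 16, 64, 64))
    print(f"{name:8s} params={n_params} receptive_field={receptive_field(dils)} "
          f"out={tuple(out.shape)}")
```

Both stacks report identical parameter counts, while the dilated stack's receptive field grows from 7 to 15 pixels.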
Liu, Chien-Chung (劉建忠). "Improved Image Super Resolution Technology Based on Dilated Convolutional Neural Network." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/w6cn2k.
National Taichung University of Science and Technology (國立臺中科技大學)
Master's Program, Department of Computer Science and Information Engineering (資訊工程系碩士班)
Academic year: 106 (2017–2018)
Image super resolution is widely applied in image processing and computer vision. It is a challenging problem because the original high-resolution image cannot be recovered exactly, and pixel values are distorted after the image is enlarged. This thesis proposes two deep-learning architectures based on convolutional neural networks to carry out image super resolution; both estimate the pixels of the super-resolution image with the neurons of a convolutional neural network. The first architecture is a reduced dilated convolutional neural network. It reduces the dilated convolutional neural network to six convolutional layers, uses convolutions with a doubled dilation rate in the second through fourth layers, and concatenates the output of the first layer with that of the fourth layer and the output of the second layer with that of the third layer to enable deeper learning. The other is a wide dilated convolutional neural network. It passes the input through convolutions with different dilation rates and concatenates the two outputs as the input of the next layer, so the network learns from differently dilated convolutions at the same time; this allows more detailed feature extraction and achieves the effect of wide learning. The experiments use the same parameter settings as the dilated convolutional neural network architecture. The experimental parameters include the number of epochs, validation split, validation mode, sub-image size, number of sub-images, and batch size; the chosen settings are 500 epochs, a validation split of 0.2, a single randomly selected sub-image per image, a sub-image size of 41×41, 50 sub-images, and a batch size of 64. Experimental results show that the PSNR of the reduced dilated convolutional network is 0.13 dB higher than that of the dilated convolutional neural network, with a standard error smaller by 0.07 dB, and that the PSNR of the wide dilated convolutional network is 0.08 dB higher, with a standard error smaller by 0.09 dB. The experiments also cover different super-resolution scales and different types of datasets to compare the two proposed architectures. Finally, the proposed method is applied to a surveillance system. The results show that image super resolution can enhance part of the image features: noise is reduced, and the image texture is not blurred after super resolution.
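For readers skimming this abstract, the "wide" idea reduces to running parallel convolutions with different dilation rates and concatenating their outputs as the input of the next layer. Below is a minimal sketch of such a block under my own assumed layer sizes; it is not the thesis's actual network, and the 41×41 sub-image size is taken only from the experimental description above.

```python
# Minimal sketch of a "wide" dilated block: parallel convolutions with
# different dilation rates whose outputs are concatenated channel-wise.
# Channel widths and dilation rates are illustrative assumptions.
import torch
import torch.nn as nn


class WideDilatedBlock(nn.Module):
    def __init__(self, in_ch, branch_ch, dilations=(1, 2)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d)
             for d in dilations]
        )
        self.act = nn.ReLU()

    def forward(self, x):
        # Concatenate the differently dilated views along the channel axis.
        return self.act(torch.cat([branch(x) for branch in self.branches], dim=1))


if __name__ == "__main__":
    block = WideDilatedBlock(in_ch=1, branch_ch=32, dilations=(1, 2))
    sub_image = torch.randn(8, 1, 41, 41)   # 41x41 sub-images, as in the experiments
    features = block(sub_image)
    print(features.shape)  # torch.Size([8, 64, 41, 41])
```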
Book chapters on the topic "Convolution dilatée"
Guru Pradeep Reddy, T., Kandiraju Sai Ashritha, T. M. Prajwala, G. N. Girish, Abhishek R. Kothari, Shashidhar G. Koolagudi, and Jeny Rajan. "Retinal-Layer Segmentation Using Dilated Convolutions." In Proceedings of 3rd International Conference on Computer Vision and Image Processing, 279–92. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9088-4_24.
Sun, Wei, Xijie Zhou, Xiaorui Zhang, and Xiaozheng He. "A Lightweight Neural Network Combining Dilated Convolution and Depthwise Separable Convolution." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 210–20. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-48513-9_17.
Zhang, Jinglu, Yinyu Nie, Yao Lyu, Hailin Li, Jian Chang, Xiaosong Yang, and Jian Jun Zhang. "Symmetric Dilated Convolution for Surgical Gesture Recognition." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 409–18. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59716-0_39.
Shen, Falong, and Gang Zeng. "Gaussian Dilated Convolution for Semantic Image Segmentation." In Advances in Multimedia Information Processing – PCM 2018, 324–34. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00776-8_30.
Hu, Haigen, Chenghan Yu, Qianwei Zhou, Qiu Guan, and Qi Chen. "SAMDConv: Spatially Adaptive Multi-scale Dilated Convolution." In Pattern Recognition and Computer Vision, 460–72. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8543-2_37.
Gupta, Sachin, Priya Goyal, Bhuman Vyas, Mohammad Shabaz, Suchitra Bala, and Aws Zuhair Sameen. "Dilated convolution model for lightweight neural network." In Next Generation Computing and Information Systems, 119–26. London: CRC Press, 2024. http://dx.doi.org/10.1201/9781003466383-20.
Zhang, Jing, Haiguang Li, Chao Zhang, Yangbiao Wu, and Guiyi Liu. "Bearing Remaining Life Prediction Based on Temporal Convolutional Networks with Hybrid Dilated Convolutions." In Proceedings of the UNIfied Conference of DAMAS, IncoME and TEPEN Conferences (UNIfied 2023), 345–53. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49421-5_27.
Chen, Zhaokang, and Bertram E. Shi. "Appearance-Based Gaze Estimation Using Dilated-Convolutions." In Computer Vision – ACCV 2018, 309–24. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20876-9_20.
Maraci, Mohammad Ali, Weidi Xie, and J. Alison Noble. "Can Dilated Convolutions Capture Ultrasound Video Dynamics?" In Machine Learning in Medical Imaging, 116–24. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00919-9_14.
Xin, Bin, Yaning Yang, Dongqing Wei, and Shaoliang Peng. "CFCN: A Multi-scale Fully Convolutional Network with Dilated Convolution for Nuclei Classification and Localization." In Bioinformatics Research and Applications, 314–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-91415-8_27.
Conference papers on the topic "Convolution dilatée"
Liu, Jen-Yu, and Yi-Hsuan Yang. "Dilated Convolution with Dilated GRU for Music Source Separation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/655.
Wu, Lin (Yuanbo), Deyin Liu, Xiaojie Guo, Richang Hong, Liangchen Liu, and Rui Zhang. "Multi-scale Spatial Representation Learning via Recursive Hermite Polynomial Networks." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/204.
Nikzad, Mohammad, Yongsheng Gao, and Jun Zhou. "Attention-based Pyramid Dilated Lattice Network for Blind Image Denoising." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/129.
Stoller, Daniel, Mi Tian, Sebastian Ewert, and Simon Dixon. "Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/400.
Zhou, Shengwei, Caikou Chen, Guojiang Han, and Xielian Hou. "Deep Convolutional Neural Network with Dilated Convolution Using Small Size Dataset." In 2019 Chinese Control Conference (CCC). IEEE, 2019. http://dx.doi.org/10.23919/chicc.2019.8865226.
Yang, Yuting, Peisong Shen, and Chi Chen. "A Robust Iris Segmentation Using Fully Convolutional Network with Dilated Convolutions." In 2018 IEEE International Symposium on Multimedia (ISM). IEEE, 2018. http://dx.doi.org/10.1109/ism.2018.00010.
Gao, Jianqi, Xiangfeng Luo, Hao Wang, and Zijian Wang. "Causal Event Extraction using Iterated Dilated Convolutions with Semantic Convolutional Filters." In 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2021. http://dx.doi.org/10.1109/ictai52525.2021.00099.
Wu, Congzhong, Hao Dong, Xuan jie Lin, Han tong Jiang, Li quan Wang, Xin zhi Liu, and Wei kai Shi. "Adaptive Filtering Remote Sensing Image Segmentation Network based on Attention Mechanism." In 10th International Conference on Information Technology Convergence and Services (ITCSE 2021). AIRCC Publishing Corporation, 2021. http://dx.doi.org/10.5121/csit.2021.110903.
Vadakkeveedu, Arjun Menon, Debabrata Mandal, Pradeep Ramachandran, and Nitin Chandrachoodan. "Split-Knit Convolution: Enabling Dense Evaluation of Transpose and Dilated Convolutions on GPUs." In 2022 IEEE 29th International Conference on High Performance Computing, Data, and Analytics (HiPC). IEEE, 2022. http://dx.doi.org/10.1109/hipc56025.2022.00014.
Yang, Junyan, and Jie Jiang. "Dilated-CBAM: An Efficient Attention Network with Dilated Convolution." In 2021 IEEE International Conference on Unmanned Systems (ICUS). IEEE, 2021. http://dx.doi.org/10.1109/icus52573.2021.9641248.