Academic literature on the topic "Convolutive Neural Networks"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Convolutive Neural Networks".
Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Convolutive Neural Networks"
Kirei, B. S., M. D. Topa, I. Muresan, I. Homana, and N. Toma. "Blind Source Separation for Convolutive Mixtures with Neural Networks". Advances in Electrical and Computer Engineering 11, no. 1 (2011): 63–68. http://dx.doi.org/10.4316/aece.2011.01010.
Karhunen, J., A. Cichocki, W. Kasprzak, and P. Pajunen. "On Neural Blind Separation with Noise Suppression and Redundancy Reduction". International Journal of Neural Systems 08, no. 02 (April 1997): 219–37. http://dx.doi.org/10.1142/s0129065797000239.
Duan, Yunlong, Ziyu Han, and Zhening Tang. "A lightweight plant disease recognition network based on Resnet". Applied and Computational Engineering 5, no. 1 (June 14, 2023): 583–92. http://dx.doi.org/10.54254/2755-2721/5/20230651.
Tong, Lian, Lan Yang, Xuan Wang, and Li Liu. "Self-aware face emotion accelerated recognition algorithm: a novel neural network acceleration algorithm of emotion recognition for international students". PeerJ Computer Science 9 (September 26, 2023): e1611. http://dx.doi.org/10.7717/peerj-cs.1611.
Sineglazov, Victor, and Petro Chynnyk. "Quantum Convolution Neural Network". Electronics and Control Systems 2, no. 76 (June 23, 2023): 40–45. http://dx.doi.org/10.18372/1990-5548.76.17667.
Lü Benyuan, 吕本远, 禚真福 Zhuo Zhenfu, 韩永赛 Han Yongsai, and 张立朝 Zhang Lichao. "基于Faster区域卷积神经网络的目标检测". Laser & Optoelectronics Progress 58, no. 22 (2021): 2210017. http://dx.doi.org/10.3788/lop202158.2210017.
Anmin, Kong, and Zhao Bin. "A Parallel Loading Based Accelerator for Convolution Neural Network". International Journal of Machine Learning and Computing 10, no. 5 (October 5, 2020): 669–74. http://dx.doi.org/10.18178/ijmlc.2020.10.5.989.
Sharma, Himanshu, and Rohit Agarwal. "Channel Enhanced Deep Convolution Neural Network based Cancer Classification". Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (October 31, 2019): 610–17. http://dx.doi.org/10.5373/jardcs/v11sp10/20192849.
Anem, Smt Jayalaxmi, B. Dharani, K. Raveendra, CH Nikhil, and K. Akhil. "Leveraging Convolution Neural Network (CNN) for Skin Cancer Identification". International Journal of Research Publication and Reviews 5, no. 4 (April 2024): 2150–55. http://dx.doi.org/10.55248/gengpi.5.0424.0955.
Oh, Seokjin, Jiyong An, and Kyeong-Sik Min. "Area-Efficient Mapping of Convolutional Neural Networks to Memristor Crossbars Using Sub-Image Partitioning". Micromachines 14, no. 2 (January 25, 2023): 309. http://dx.doi.org/10.3390/mi14020309.
Theses on the topic "Convolutive Neural Networks"
Heuillet, Alexandre. "Exploring deep neural network differentiable architecture design". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG069.
Artificial Intelligence (AI) has gained significant popularity in recent years, primarily due to its successful applications in various domains, including textual data analysis, computer vision, and audio processing. The resurgence of deep learning techniques has played a central role in this success. The groundbreaking paper by Krizhevsky et al., AlexNet, narrowed the gap between human and machine performance in image classification tasks. Subsequent papers such as Xception and ResNet have further solidified deep learning as a leading technique, opening new horizons for the AI community. The success of deep learning lies in its architecture, which is manually designed with expert knowledge and empirical validation. However, these architectures lack the certainty of an optimal solution. To address this issue, recent papers introduced the concept of Neural Architecture Search (NAS), enabling the learning of deep architectures. However, most initial approaches focused on large architectures with specific targets (e.g., supervised learning) and relied on computationally expensive optimization techniques such as reinforcement learning and evolutionary algorithms. In this thesis, we further investigate this idea by exploring automatic deep architecture design, with a particular emphasis on differentiable NAS (DNAS), which represents the current trend in NAS due to its computational efficiency. While our primary focus is on Convolutional Neural Networks (CNNs), we also explore Vision Transformers (ViTs) with the goal of designing cost-effective architectures suitable for real-time applications.
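As an illustrative aside (not code from the thesis above), the core mechanism behind differentiable NAS that this abstract refers to is the DARTS-style "mixed operation": a softmax over learnable architecture parameters relaxes the discrete choice between candidate operations into a differentiable weighting, so the architecture can be optimized by gradient descent. The sketch below is a minimal, hypothetical example; the operation set and all names are assumptions, not the thesis implementation.

```python
# Minimal DARTS-style mixed operation (illustrative sketch, hypothetical names).
# Architecture parameters `alpha` are relaxed through a softmax so the choice
# of operation becomes differentiable and trainable alongside the weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # A small assumed set of candidate operations for one edge of a search cell.
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.Conv2d(channels, channels, 5, padding=2, bias=False),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        # One architecture parameter per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=0)   # continuous relaxation
        return sum(w * op(x) for w, op in zip(weights, self.ops))

x = torch.randn(2, 16, 32, 32)
y = MixedOp(channels=16)(x)                      # same spatial shape as x
print(y.shape)
# After bi-level training, the strongest alpha per edge is typically kept and
# the continuous cell is discretized into a fixed CNN architecture.
```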
Maragno, Alessandro. "Programmazione di Convolutional Neural Networks orientata all'accelerazione su FPGA". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12476/.
Abbasi, Mahdieh. "Toward robust deep neural networks". Doctoral thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/67766.
In this thesis, our goal is to develop robust and reliable yet accurate learning models, particularly Convolutional Neural Networks (CNNs), in the presence of adversarial examples and Out-of-Distribution (OOD) samples. As the first contribution, we propose to predict adversarial instances with high uncertainty by encouraging diversity in an ensemble of CNNs. To this end, we devise an ensemble of diverse specialists along with a simple and computationally efficient voting mechanism to predict the adversarial examples with low confidence while keeping the predictive confidence of the clean samples high. In the presence of high entropy in our ensemble, we prove that the predictive confidence can be upper-bounded, leading to a globally fixed threshold over the predictive confidence for identifying adversaries. We analytically justify the role of diversity in our ensemble in mitigating the risk of both black-box and white-box adversarial examples. Finally, we empirically assess the robustness of our ensemble to black-box and white-box attacks on several benchmark datasets. The second contribution aims to address the detection of OOD samples through an end-to-end model trained on an appropriate OOD set. To this end, we address the following central question: how can the many available OOD sets be differentiated with respect to a given in-distribution task so as to select the most appropriate one, which in turn induces a model with a high detection rate of unseen OOD sets? To answer this question, we hypothesize that the "protection" level of in-distribution sub-manifolds by each OOD set can be a good property for differentiating OOD sets. To measure the protection level, we design three novel, simple, and cost-effective metrics using a pre-trained vanilla CNN. In an extensive series of experiments on image and audio classification tasks, we empirically demonstrate the ability of an Augmented-CNN (A-CNN) and an explicitly-calibrated CNN to detect a significantly larger portion of unseen OOD samples if they are trained on the most protective OOD set. Interestingly, we also observe that the A-CNN trained on the most protective OOD set (called A-CNN) can also detect black-box Fast Gradient Sign (FGS) adversarial examples. As the third contribution, we investigate more closely the capacity of the A-CNN to detect wider types of black-box adversaries. To increase the capability of the A-CNN to detect a larger number of adversaries, we augment its OOD training set with some inter-class interpolated samples. We then demonstrate that the A-CNN trained on the most protective OOD set along with the interpolated samples has a consistent detection rate on all types of unseen adversarial examples, whereas training an A-CNN on Projected Gradient Descent (PGD) adversaries does not lead to a stable detection rate on all types of adversaries, particularly the unseen types. We also visually assess the feature space and the decision boundaries in the input space of a vanilla CNN and its augmented counterpart in the presence of adversarial and clean samples. With a properly trained A-CNN, we aim to take a step toward a unified and reliable end-to-end learning model with small risk rates on both clean samples and unusual ones, e.g. adversarial and OOD samples. The last contribution shows a use-case of the A-CNN for training a robust object detector on a partially-labeled dataset, particularly a merged dataset.
Merging various datasets from similar contexts but with different sets of Objects of Interest (OoI) is an inexpensive way to craft a large-scale dataset that covers a larger spectrum of OoIs. Moreover, merging datasets allows achieving a unified object detector, instead of having several separate ones, resulting in the reduction of computational and time costs. However, merging datasets, especially from a similar context, causes many missing-label instances. With the goal of training an integrated robust object detector on a partially-labeled but large-scale dataset, we propose a self-supervised training framework to overcome the issue of missing-label instances in the merged datasets. Our framework is evaluated on a merged dataset with a high missing-label rate. The empirical results confirm the viability of our generated pseudo-labels to enhance the performance of YOLO, as the current (to date) state-of-the-art object detector.
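As an illustrative aside (my own minimal sketch, not the thesis implementation), the detection-by-confidence idea in the abstract above can be pictured as follows: when ensemble members disagree on an input, the averaged predictive confidence drops below a global threshold and the input is flagged. The function name and the threshold value are hypothetical.

```python
# Rejecting inputs by ensemble confidence (illustrative sketch, hypothetical API).
import numpy as np

def predict_with_rejection(member_probs: np.ndarray, threshold: float = 0.5):
    """member_probs: shape (n_members, n_classes), class probabilities for one input."""
    avg = member_probs.mean(axis=0)              # ensemble's predictive distribution
    label, confidence = int(avg.argmax()), float(avg.max())
    return (label, confidence) if confidence >= threshold else (None, confidence)

# Clean-looking input: members agree, confidence stays high.
agree = np.array([[0.90, 0.05, 0.05], [0.85, 0.10, 0.05], [0.92, 0.04, 0.04]])
# Suspicious input: members vote for different classes, confidence collapses.
disagree = np.array([[0.90, 0.05, 0.05], [0.10, 0.80, 0.10], [0.05, 0.05, 0.90]])

print(predict_with_rejection(agree))      # (0, ~0.89) -> accepted
print(predict_with_rejection(disagree))   # (None, ~0.35) -> rejected as suspicious
```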
Kapoor, Rishika. "Malaria Detection Using Deep Convolution Neural Network". University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613749143868579.
Yu, Xiafei. "Wide Activated Separate 3D Convolution for Video Super-Resolution". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39974.
Messou, Ehounoud Joseph Christopher. "Handling Invalid Pixels in Convolutional Neural Networks". Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98619.
Master of Science
A module at the heart of deep neural networks built for Artificial Intelligence is the convolutional layer. When multiple convolutional layers are used together with other modules, a Convolutional Neural Network (CNN) is obtained. These CNNs can be used for tasks such as image classification where they tell if the object in an image is a chair or a car, for example. Most CNNs use a normal convolutional layer that assumes that all parts of the image fed to the network are valid. However, most models zero pad the image at the beginning to maintain a certain output shape. Zero padding is equivalent to adding a black frame around the image. These added pixels result in adding information that was not initially present. Therefore, this extra information can be considered invalid. Invalid pixels can also be inside the image where they are referred to as holes in completion tasks like image inpainting where the network is asked to fill these holes and give a realistic image. In this work, we look for a method that can handle both types of invalid pixels. We compare on the same test bench two methods previously used to handle invalid pixels outside the image (Partial and Edge convolutions) and one method that was designed for invalid pixels inside the image (Gated convolution). We show that Partial convolution performs the best in image classification while Gated convolution has the advantage on semantic segmentation. As for hotel recognition with masked regions, none of the methods seem appropriate to generate embeddings that leverage the masked regions.
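As an illustrative aside, one of the methods compared above, Partial convolution, can be sketched in a heavily simplified form: convolve only over valid pixels, renormalize by the fraction of valid positions under the kernel, and propagate an updated validity mask. The code below is my own single-mask simplification under those assumptions, not the code evaluated in the thesis.

```python
# Simplified partial convolution (illustrative sketch, not the thesis code).
import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight, bias=None, padding=1):
    """x: (N, C, H, W) input; mask: (N, 1, H, W) with 1 = valid, 0 = invalid."""
    kh, kw = weight.shape[2], weight.shape[3]
    ones = torch.ones(1, 1, kh, kw, device=x.device)

    # Convolve the masked input and count valid positions under each window.
    out = F.conv2d(x * mask, weight, bias=None, padding=padding)
    valid = F.conv2d(mask, ones, bias=None, padding=padding)

    # Renormalize where at least one valid pixel was seen; zero elsewhere.
    scale = (kh * kw) / valid.clamp(min=1e-8)
    out = torch.where(valid > 0, out * scale, torch.zeros_like(out))
    if bias is not None:
        out = out + bias.view(1, -1, 1, 1)

    # Any window that saw a valid pixel becomes valid for the next layer.
    return out, (valid > 0).float()

x = torch.randn(1, 1, 8, 8)
mask = torch.ones(1, 1, 8, 8)
mask[:, :, 2:5, 2:5] = 0.0                  # an invalid region ("hole")
w = torch.randn(4, 1, 3, 3)                 # 4 output channels, 3x3 kernel
y, new_mask = partial_conv2d(x, mask, w)
print(y.shape, new_mask.shape)              # (1, 4, 8, 8) and (1, 1, 8, 8)
```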
Ngo, Kalle. "FPGA Hardware Acceleration of Inception Style Parameter Reduced Convolution Neural Networks". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205026.
Pappone, Francesco. "Graph neural networks: theory and applications". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23893/.
Sung, Wei-Hong. "Investigating minimal Convolution Neural Networks (CNNs) for realtime embedded eye feature detection". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281338.
With the rapid rise of neural networks, many tasks that used to be difficult to solve with traditional methods can now be handled well, especially in the field of computer vision. But as the tasks we need to solve become more and more complex, the neural networks we use grow deeper and larger. Therefore, even though some embedded systems are powerful nowadays, most embedded systems still suffer from memory and computation limitations, which means it is difficult to deploy our large neural networks on these embedded devices. This project aims to explore different methods for compressing the original large model. That is, we first train a baseline model, YOLOv3[1], a well-known object detection network, and then use two methods to compress the baseline model. The first method is pruning by means of sparsity training, where we perform channel pruning according to the scaling-factor values obtained after sparsity training. Based on the idea of this method, we make three explorations. First, we adopt a union mask strategy to solve the dimension problem of the shortcut-related layers in YOLOv3[1]. Second, we try to absorb the information of the shift factors into subsequent layers. Finally, we implement layer pruning and combine it with channel pruning. The second method is pruning with NAS, which uses a deep reinforcement learning framework to automatically find the best compression ratio for each layer. At the end of this report, we analyze the most important results and conclusions of our experiments and point to future work that could potentially improve our project.
Wu, Jindong. "Pooling strategies for graph convolution neural networks and their effect on classification". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288953.
With the development of graph neural networks, this new type of neural network has been applied in various fields. One of the difficult problems for researchers in this area is how to choose a suitable pooling method for a specific research task from the multitude of existing pooling methods. In this work, building on the existing common graph pooling methods, we develop a benchmarking neural network framework that can be used to compare different graph pooling methods. Using this framework, we compare four widely used graph pooling methods and explore their characteristics. In addition, we extend two methods for explaining neural network decisions from convolutional neural networks to graph neural networks and compare them with the existing GNNExplainer. We run graph classification experiments under the benchmarking framework and find different characteristics for the different graph pooling methods. Furthermore, we verify the correctness of the explanation methods we developed and measure the agreement between them. Finally, we try to explore the characteristics of the different explanation methods and their significance for choosing pooling methods in graph neural networks.
Book chapters on the topic "Convolutive Neural Networks"
Mei, Tiemin, Fuliang Yin, Jiangtao Xi, and Joe F. Chicharo. "Cumulant-Based Blind Separation of Convolutive Mixtures". In Advances in Neural Networks – ISNN 2004, 672–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_110.
He, Zhaoshui, Shengli Xie, and Yuli Fu. "FIR Convolutive BSS Based on Sparse Representation". In Advances in Neural Networks – ISNN 2005, 532–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427445_87.
Zhang, Hua, and Dazhang Feng. "Convolutive Blind Separation of Non-white Broadband Signals Based on a Double-Iteration Method". In Advances in Neural Networks - ISNN 2006, 1195–201. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11759966_177.
Mei, Tiemin, Jiangtao Xi, Fuliang Yin, and Joe F. Chicharo. "Joint Diagonalization of Power Spectral Density Matrices for Blind Source Separation of Convolutive Mixtures". In Advances in Neural Networks – ISNN 2005, 520–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427445_85.
Peng, Chunyi, Xianda Zhang, and Qutang Cai. "A Block-Adaptive Subspace Method Using Oblique Projections for Blind Separation of Convolutive Mixtures". In Advances in Neural Networks – ISNN 2005, 526–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427445_86.
Dai, Qionghai, and Yue Gao. "Neural Networks on Hypergraph". In Artificial Intelligence: Foundations, Theory, and Algorithms, 121–43. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_7.
Tang, Lihe, Weidong Yang, Qiang Gao, Rui Xu, and Rongzhi Ye. "A Lightweight Verification Scheme Based on Dynamic Convolution". In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications, 778–87. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_78.
Tran, Hoang-Dung, Neelanjana Pal, Patrick Musau, Diego Manzanas Lopez, Nathaniel Hamilton, Xiaodong Yang, Stanley Bak, and Taylor T. Johnson. "Robustness Verification of Semantic Segmentation Neural Networks Using Relaxed Reachability". In Computer Aided Verification, 263–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_12.
Pattabiraman, V., and R. Maheswari. "Image to Text Processing Using Convolution Neural Networks". In Recurrent Neural Networks, 43–52. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-4.
Ma, Ruipeng, Di Wu, Tao Hu, Dong Yi, Yuqiao Zhang, and Jianxia Chen. "Automatic Modulation Classification Based on One-Dimensional Convolution Feature Fusion Network". In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications, 888–99. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_90.
Conference papers on the topic "Convolutive Neural Networks"
Acharyya, Ranjan, and Fredric M. Ham. "A New Approach for Blind Separation of Convolutive Mixtures". In 2007 International Joint Conference on Neural Networks. IEEE, 2007. http://dx.doi.org/10.1109/ijcnn.2007.4371278.
Wenwu Wang. "Convolutive non-negative sparse coding". In 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008 - Hong Kong). IEEE, 2008. http://dx.doi.org/10.1109/ijcnn.2008.4634325.
de Frein, Ruairi. "Learning convolutive features for storage and transmission between networked sensors". In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280827.
Wu, Wenyan, and Liming Zhang. "A New Method of Solving Permutation Problem in Blind Source Separation for Convolutive Acoustic Signals in Frequency-domain". In 2007 International Joint Conference on Neural Networks. IEEE, 2007. http://dx.doi.org/10.1109/ijcnn.2007.4371135.
Kong, Qiuqiang, Yong Xu, Philip J. B. Jackson, Wenwu Wang, and Mark D. Plumbley. "Single-Channel Signal Separation and Deconvolution with Generative Adversarial Networks". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/381.
Tong, Gan, and Libo Huang. "Fast Convolution based on Winograd Minimum Filtering: Introduction and Development". In 5th International Conference on Computer Science and Information Technology (COMIT 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111716.
Xie, Chunyu, Ce Li, Baochang Zhang, Chen Chen, Jungong Han, and Jianzhuang Liu. "Memory Attention Networks for Skeleton-based Action Recognition". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/227.
Lin, Luojun, Lingyu Liang, Lianwen Jin, and Weijie Chen. "Attribute-Aware Convolutional Neural Networks for Facial Beauty Prediction". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/119.
Peserico, Nicola, Hangbo Yang, Xiaoxuan Ma, Shurui Li, Jonathan K. George, Puneet Gupta, Chee Wei Wong, and Volker J. Sorger. "Design and Model of On-Chip Metalens for Silicon Photonics Convolutional Neural Network". In CLEO: Applications and Technology. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/cleo_at.2023.jw2a.77.
Zhu, Zulun, Jiaying Peng, Jintang Li, Liang Chen, Qi Yu, and Siqiang Luo. "Spiking Graph Convolutional Networks". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/338.