Table of Contents
A selection of scientific literature on the topic „Convolutive Neural Networks“
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, reports, and other scientific sources on the topic "Convolutive Neural Networks".
Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Convolutive Neural Networks"
KIREI, B. S., M. D. TOPA, I. MURESAN, I. HOMANA, and N. TOMA. „Blind Source Separation for Convolutive Mixtures with Neural Networks“. Advances in Electrical and Computer Engineering 11, no. 1 (2011): 63–68. http://dx.doi.org/10.4316/aece.2011.01010.
Karhunen, J., A. Cichocki, W. Kasprzak, and P. Pajunen. „On Neural Blind Separation with Noise Suppression and Redundancy Reduction“. International Journal of Neural Systems 08, no. 02 (April 1997): 219–37. http://dx.doi.org/10.1142/s0129065797000239.
Duan, Yunlong, Ziyu Han, and Zhening Tang. „A lightweight plant disease recognition network based on Resnet“. Applied and Computational Engineering 5, no. 1 (14.06.2023): 583–92. http://dx.doi.org/10.54254/2755-2721/5/20230651.
Tong, Lian, Lan Yang, Xuan Wang, and Li Liu. „Self-aware face emotion accelerated recognition algorithm: a novel neural network acceleration algorithm of emotion recognition for international students“. PeerJ Computer Science 9 (26.09.2023): e1611. http://dx.doi.org/10.7717/peerj-cs.1611.
Sineglazov, Victor, and Petro Chynnyk. „Quantum Convolution Neural Network“. Electronics and Control Systems 2, no. 76 (23.06.2023): 40–45. http://dx.doi.org/10.18372/1990-5548.76.17667.
Lü Benyuan, 吕本远, 禚真福 Zhuo Zhenfu, 韩永赛 Han Yongsai, and 张立朝 Zhang Lichao. „基于Faster区域卷积神经网络的目标检测“. Laser & Optoelectronics Progress 58, no. 22 (2021): 2210017. http://dx.doi.org/10.3788/lop202158.2210017.
Anmin, Kong, and Zhao Bin. „A Parallel Loading Based Accelerator for Convolution Neural Network“. International Journal of Machine Learning and Computing 10, no. 5 (05.10.2020): 669–74. http://dx.doi.org/10.18178/ijmlc.2020.10.5.989.
Sharma, Himanshu, and Rohit Agarwal. „Channel Enhanced Deep Convolution Neural Network based Cancer Classification“. Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (31.10.2019): 610–17. http://dx.doi.org/10.5373/jardcs/v11sp10/20192849.
Anem, Smt Jayalaxmi, B. Dharani, K. Raveendra, CH Nikhil, and K. Akhil. „Leveraging Convolution Neural Network (CNN) for Skin Cancer Identification“. International Journal of Research Publication and Reviews 5, no. 4 (April 2024): 2150–55. http://dx.doi.org/10.55248/gengpi.5.0424.0955.
Oh, Seokjin, Jiyong An, and Kyeong-Sik Min. „Area-Efficient Mapping of Convolutional Neural Networks to Memristor Crossbars Using Sub-Image Partitioning“. Micromachines 14, no. 2 (25.01.2023): 309. http://dx.doi.org/10.3390/mi14020309.
Der volle Inhalt der QuelleDissertationen zum Thema "Convolutive Neural Networks"
Heuillet, Alexandre. „Exploring deep neural network differentiable architecture design“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG069.
Artificial Intelligence (AI) has gained significant popularity in recent years, primarily due to its successful applications in various domains, including textual data analysis, computer vision, and audio processing. The resurgence of deep learning techniques has played a central role in this success. The groundbreaking paper by Krizhevsky et al., AlexNet, narrowed the gap between human and machine performance in image classification tasks. Subsequent papers such as Xception and ResNet have further solidified deep learning as a leading technique, opening new horizons for the AI community. The success of deep learning lies in its architecture, which is manually designed with expert knowledge and empirical validation. However, these architectures lack the certainty of an optimal solution. To address this issue, recent papers introduced the concept of Neural Architecture Search (NAS), enabling the learning of deep architectures. However, most initial approaches focused on large architectures with specific targets (e.g., supervised learning) and relied on computationally expensive optimization techniques such as reinforcement learning and evolutionary algorithms. In this thesis, we further investigate this idea by exploring automatic deep architecture design, with a particular emphasis on differentiable NAS (DNAS), which represents the current trend in NAS due to its computational efficiency. While our primary focus is on Convolutional Neural Networks (CNNs), we also explore Vision Transformers (ViTs) with the goal of designing cost-effective architectures suitable for real-time applications.
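The differentiable NAS approach highlighted in this abstract relaxes the discrete choice between candidate operations into a softmax-weighted mixture whose weights can be learned by gradient descent. A minimal pure-Python sketch of that continuous relaxation, with toy stand-in operations (an illustration of the idea, not code from the thesis):

```python
import math

def softmax(alphas):
    # Turn architecture parameters into mixing weights that sum to 1.
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    s = sum(exps)
    return [e / s for e in exps]

def mixed_op(x, alphas, ops):
    # DARTS-style mixed operation: the output is the softmax-weighted
    # sum of every candidate operation applied to the same input, so
    # the architecture choice becomes differentiable in alphas.
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, ops))

# Toy candidate operations standing in for conv / pooling / identity.
ops = [lambda x: x, lambda x: 2 * x, lambda x: 0.0]

# With equal architecture parameters, each op gets weight 1/3.
y = mixed_op(3.0, [0.0, 0.0, 0.0], ops)
```

After training, the operation with the largest weight is kept and the others are discarded, yielding a discrete architecture.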
Maragno, Alessandro. „Programmazione di Convolutional Neural Networks orientata all'accelerazione su FPGA“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12476/.
Abbasi, Mahdieh. „Toward robust deep neural networks“. Doctoral thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/67766.
In this thesis, our goal is to develop robust and reliable yet accurate learning models, particularly Convolutional Neural Networks (CNNs), in the presence of adversarial examples and Out-of-Distribution (OOD) samples. As the first contribution, we propose to predict adversarial instances with high uncertainty through encouraging diversity in an ensemble of CNNs. To this end, we devise an ensemble of diverse specialists along with a simple and computationally efficient voting mechanism to predict the adversarial examples with low confidence while keeping the predictive confidence of the clean samples high. In the presence of high entropy in our ensemble, we prove that the predictive confidence can be upper-bounded, leading to a globally fixed threshold over the predictive confidence for identifying adversaries. We analytically justify the role of diversity in our ensemble on mitigating the risk of both black-box and white-box adversarial examples. Finally, we empirically assess the robustness of our ensemble to the black-box and the white-box attacks on several benchmark datasets. The second contribution aims to address the detection of OOD samples through an end-to-end model trained on an appropriate OOD set. To this end, we address the following central question: how to differentiate many available OOD sets w.r.t. a given in-distribution task to select the most appropriate one, which in turn induces a model with a high detection rate of unseen OOD sets? To answer this question, we hypothesize that the "protection" level of in-distribution sub-manifolds by each OOD set can be a good possible property to differentiate OOD sets. To measure the protection level, we then design three novel, simple, and cost-effective metrics using a pre-trained vanilla CNN.
In an extensive series of experiments on image and audio classification tasks, we empirically demonstrate the ability of an Augmented-CNN (A-CNN) and an explicitly-calibrated CNN to detect a significantly larger portion of unseen OOD samples, if they are trained on the most protective OOD set. Interestingly, we also observe that the A-CNN trained on the most protective OOD set can also detect the black-box Fast Gradient Sign (FGS) adversarial examples. As the third contribution, we investigate more closely the capacity of the A-CNN on the detection of wider types of black-box adversaries. To increase the capability of the A-CNN to detect a larger number of adversaries, we augment its OOD training set with some inter-class interpolated samples. Then, we demonstrate that the A-CNN trained on the most protective OOD set along with the interpolated samples has a consistent detection rate on all types of unseen adversarial examples, whereas training an A-CNN on Projected Gradient Descent (PGD) adversaries does not lead to a stable detection rate on all types of adversaries, particularly the unseen types. We also visually assess the feature space and the decision boundaries in the input space of a vanilla CNN and its augmented counterpart in the presence of adversarial and clean samples. With a properly trained A-CNN, we aim to take a step toward a unified and reliable end-to-end learning model with small risk rates on both clean samples and unusual ones, e.g. adversarial and OOD samples. The last contribution is to show a use-case of the A-CNN for training a robust object detector on a partially-labeled dataset, particularly a merged dataset. Merging various datasets from similar contexts but with different sets of Objects of Interest (OoI) is an inexpensive way to craft a large-scale dataset which covers a larger spectrum of OoIs.
Moreover, merging datasets allows achieving a unified object detector, instead of having several separate ones, resulting in the reduction of computational and time costs. However, merging datasets, especially from a similar context, causes many missing-label instances. With the goal of training an integrated robust object detector on a partially-labeled but large-scale dataset, we propose a self-supervised training framework to overcome the issue of missing-label instances in the merged datasets. Our framework is evaluated on a merged dataset with a high missing-label rate. The empirical results confirm the viability of our generated pseudo-labels to enhance the performance of YOLO, as the current (to date) state-of-the-art object detector.
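The diverse-specialists voting mechanism described in the first contribution caps predictive confidence when ensemble members disagree, so a single global threshold can flag suspect inputs. A toy sketch of such a confidence-thresholded vote (hypothetical probabilities and threshold, not the thesis's exact mechanism):

```python
def ensemble_predict(member_probs, threshold=0.7):
    # Average the class distributions of all ensemble members.
    n_classes = len(member_probs[0])
    avg = [sum(p[c] for p in member_probs) / len(member_probs)
           for c in range(n_classes)]
    conf = max(avg)
    label = avg.index(conf)
    # Disagreement between members lowers the averaged confidence,
    # so a globally fixed threshold can reject likely adversarial
    # or OOD inputs instead of answering with low confidence.
    if conf < threshold:
        return "reject", conf
    return label, conf

# Members agree: the input is accepted with high confidence.
agree = ensemble_predict([[0.9, 0.1], [0.8, 0.2]])
# Members disagree: the averaged confidence drops below the threshold.
disagree = ensemble_predict([[0.8, 0.2], [0.1, 0.9]])
```

The same averaged distribution could feed an entropy check instead of a max-probability check; the thesis's upper bound on predictive confidence is what makes one global threshold viable.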
Kapoor, Rishika. „Malaria Detection Using Deep Convolution Neural Network“. University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613749143868579.
Yu, Xiafei. „Wide Activated Separate 3D Convolution for Video Super-Resolution“. Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39974.
Messou, Ehounoud Joseph Christopher. „Handling Invalid Pixels in Convolutional Neural Networks“. Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98619.
Master of Science
A module at the heart of deep neural networks built for Artificial Intelligence is the convolutional layer. When multiple convolutional layers are used together with other modules, a Convolutional Neural Network (CNN) is obtained. These CNNs can be used for tasks such as image classification where they tell if the object in an image is a chair or a car, for example. Most CNNs use a normal convolutional layer that assumes that all parts of the image fed to the network are valid. However, most models zero pad the image at the beginning to maintain a certain output shape. Zero padding is equivalent to adding a black frame around the image. These added pixels result in adding information that was not initially present. Therefore, this extra information can be considered invalid. Invalid pixels can also be inside the image where they are referred to as holes in completion tasks like image inpainting where the network is asked to fill these holes and give a realistic image. In this work, we look for a method that can handle both types of invalid pixels. We compare on the same test bench two methods previously used to handle invalid pixels outside the image (Partial and Edge convolutions) and one method that was designed for invalid pixels inside the image (Gated convolution). We show that Partial convolution performs the best in image classification while Gated convolution has the advantage on semantic segmentation. As for hotel recognition with masked regions, none of the methods seem appropriate to generate embeddings that leverage the masked regions.
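Partial convolution, one of the methods compared above, computes each output from valid pixels only, rescales by the fraction of valid pixels under the kernel, and marks the output position as valid if any input in the window was valid. A minimal 1-D sketch of that rule (an illustration of the idea, not the papers' implementation):

```python
def partial_conv1d(x, mask, kernel):
    # x: input signal; mask: 1 for valid samples, 0 for invalid (padding/hole).
    # Each output uses only valid samples and is rescaled by
    # kernel_size / (number of valid samples in the window).
    k = len(kernel)
    out, new_mask = [], []
    for i in range(len(x) - k + 1):
        window = x[i:i + k]
        mwin = mask[i:i + k]
        valid = sum(mwin)
        if valid == 0:
            out.append(0.0)    # window saw no valid input at all
            new_mask.append(0)
        else:
            s = sum(w * xi * mi for w, xi, mi in zip(kernel, window, mwin))
            out.append(s * k / valid)  # renormalize for missing samples
            new_mask.append(1)         # this position becomes valid
    return out, new_mask

# A hole at index 2 is compensated by the renormalization, so every
# window of the constant signal still produces the same output.
signal = [1.0, 1.0, 0.0, 1.0, 1.0]   # 0.0 is the hole's placeholder value
mask   = [1,   1,   0,   1,   1]
y, m = partial_conv1d(signal, mask, [1.0, 1.0, 1.0])
# y == [3.0, 3.0, 3.0]; the updated mask is all-valid.
```

Gated convolution replaces this hand-crafted mask update with a learned, soft gate per position, which is why it behaves differently on inpainting-style tasks.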
Ngo, Kalle. „FPGA Hardware Acceleration of Inception Style Parameter Reduced Convolution Neural Networks“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205026.
Pappone, Francesco. „Graph neural networks: theory and applications“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23893/.
Der volle Inhalt der QuelleSung, Wei-Hong. „Investigating minimal Convolution Neural Networks (CNNs) for realtime embedded eye feature detection“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281338.
With the rapid rise of neural networks, many tasks that used to be hard to solve with traditional methods can now be handled well, especially in the computer vision field. But as the tasks we need to solve have become more and more complex, the neural networks we use grow deeper and larger. Therefore, even though some embedded systems are powerful nowadays, most embedded systems still suffer from memory and computation constraints, which means it is difficult to deploy our large neural networks on these embedded devices. This project aims to explore different methods for compressing an original large model. That is, we first train a baseline model, YOLOv3[1], which is a well-known object detection network, and then we use two methods to compress the baseline model. The first method is pruning by means of sparsity training, followed by channel pruning according to the scaling-factor values obtained after sparsity training. Based on the idea of this method, we have made three explorations. First, we adopt a union-mask strategy to solve the dimension problem of the shortcut-related layers in YOLOv3[1]. Second, we try to absorb the information of the shifting factors into subsequent layers. Finally, we implement layer pruning and combine it with channel pruning. The second method is pruning with NAS, which uses a deep reinforcement learning framework to automatically find the best compression ratio for each layer. At the end of this report, we analyze the key results and conclusions of our experiments and point to future work that could potentially improve our project.
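The channel-pruning step sketched in this abstract ranks channels by the scaling factors learned during sparsity training and removes those below a global threshold derived from the desired pruning ratio. A toy pure-Python illustration of that selection rule (hypothetical factor values, not the thesis code):

```python
def prune_channels(scaling_factors, prune_ratio):
    # Network-slimming-style selection: sparsity training drives the
    # per-channel scaling factors of unimportant channels toward zero,
    # so a global percentile threshold over |gamma| picks the survivors.
    ranked = sorted(abs(g) for g in scaling_factors)
    cut = ranked[int(len(ranked) * prune_ratio)]
    return [i for i, g in enumerate(scaling_factors) if abs(g) >= cut]

# Hypothetical post-sparsity-training factors; prune half the channels.
gammas = [0.80, 0.01, 0.65, 0.02, 0.90, 0.03, 0.70, 0.05]
kept = prune_channels(gammas, prune_ratio=0.5)
# kept == [0, 2, 4, 6] — the indices of the four largest factors.
```

In a real network the kept indices would then be used to slice the convolution weights of the pruned layer and of the layers consuming its output, which is exactly where shortcut connections make the bookkeeping hard.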
Wu, Jindong. „Pooling strategies for graph convolution neural networks and their effect on classification“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288953.
With the development of graph neural networks, this new kind of neural network has been applied in a variety of areas. One of the difficult problems for researchers in this field is how to choose a suitable pooling method for a specific research task from the multitude of existing pooling methods. In this work, based on the existing common graph pooling methods, we develop a benchmarking neural network framework that can be used to compare different graph pooling methods. Using the framework, we compare four widely used graph pooling methods and explore their characteristics. In addition, we extend two methods for explaining neural network decisions from convolutional neural networks to graph neural networks and compare them with the existing GNNExplainer. We run experiments on graph classification tasks under the benchmarking framework and find different characteristics of the different graph pooling methods. Furthermore, we verify the correctness of the explanation methods we developed and measure the agreement between them. Finally, we try to explore the characteristics of the different methods for explaining neural network decisions and their importance for choosing pooling methods in graph neural networks.
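A graph pooling (readout) method reduces a variable number of node embeddings to one fixed-size graph embedding so that graphs of different sizes can be classified by the same head. The simplest baselines, mean and max readout, can be sketched as follows (an illustration only, not the thesis's benchmark code):

```python
def mean_pool(node_feats):
    # Dimension-wise average over all nodes: a size-invariant
    # graph-level embedding.
    n = len(node_feats)
    dim = len(node_feats[0])
    return [sum(f[d] for f in node_feats) / n for d in range(dim)]

def max_pool(node_feats):
    # Dimension-wise maximum over all nodes: emphasizes the single
    # strongest activation per feature instead of the average.
    dim = len(node_feats[0])
    return [max(f[d] for f in node_feats) for d in range(dim)]

# Three nodes with 2-dimensional features.
feats = [[1.0, 4.0], [3.0, 0.0], [2.0, 2.0]]
# mean_pool(feats) == [2.0, 2.0]; max_pool(feats) == [3.0, 4.0]
```

Hierarchical methods such as learned cluster-based pooling coarsen the graph in stages instead of collapsing it in one step, which is one of the axes such benchmarks compare.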
Book chapters on the topic "Convolutive Neural Networks"
Mei, Tiemin, Fuliang Yin, Jiangtao Xi, and Joe F. Chicharo. „Cumulant-Based Blind Separation of Convolutive Mixtures“. In Advances in Neural Networks – ISNN 2004, 672–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_110.
He, Zhaoshui, Shengli Xie, and Yuli Fu. „FIR Convolutive BSS Based on Sparse Representation“. In Advances in Neural Networks – ISNN 2005, 532–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427445_87.
Zhang, Hua, and Dazhang Feng. „Convolutive Blind Separation of Non-white Broadband Signals Based on a Double-Iteration Method“. In Advances in Neural Networks - ISNN 2006, 1195–201. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11759966_177.
Mei, Tiemin, Jiangtao Xi, Fuliang Yin, and Joe F. Chicharo. „Joint Diagonalization of Power Spectral Density Matrices for Blind Source Separation of Convolutive Mixtures“. In Advances in Neural Networks – ISNN 2005, 520–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427445_85.
Peng, Chunyi, Xianda Zhang, and Qutang Cai. „A Block-Adaptive Subspace Method Using Oblique Projections for Blind Separation of Convolutive Mixtures“. In Advances in Neural Networks – ISNN 2005, 526–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11427445_86.
Dai, Qionghai, and Yue Gao. „Neural Networks on Hypergraph“. In Artificial Intelligence: Foundations, Theory, and Algorithms, 121–43. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0185-2_7.
Tang, Lihe, Weidong Yang, Qiang Gao, Rui Xu, and Rongzhi Ye. „A Lightweight Verification Scheme Based on Dynamic Convolution“. In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications, 778–87. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_78.
Tran, Hoang-Dung, Neelanjana Pal, Patrick Musau, Diego Manzanas Lopez, Nathaniel Hamilton, Xiaodong Yang, Stanley Bak, and Taylor T. Johnson. „Robustness Verification of Semantic Segmentation Neural Networks Using Relaxed Reachability“. In Computer Aided Verification, 263–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_12.
Pattabiraman, V., and R. Maheswari. „Image to Text Processing Using Convolution Neural Networks“. In Recurrent Neural Networks, 43–52. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-4.
Ma, Ruipeng, Di Wu, Tao Hu, Dong Yi, Yuqiao Zhang, and Jianxia Chen. „Automatic Modulation Classification Based on One-Dimensional Convolution Feature Fusion Network“. In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications, 888–99. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_90.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Convolutive Neural Networks"
Acharyya, Ranjan, and Fredric M. Ham. „A New Approach for Blind Separation of Convolutive Mixtures“. In 2007 International Joint Conference on Neural Networks. IEEE, 2007. http://dx.doi.org/10.1109/ijcnn.2007.4371278.
Wenwu Wang. „Convolutive non-negative sparse coding“. In 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008 - Hong Kong). IEEE, 2008. http://dx.doi.org/10.1109/ijcnn.2008.4634325.
de Frein, Ruairi. „Learning convolutive features for storage and transmission between networked sensors“. In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280827.
Wu, Wenyan, and Liming Zhang. „A New Method of Solving Permutation Problem in Blind Source Separation for Convolutive Acoustic Signals in Frequency-domain“. In 2007 International Joint Conference on Neural Networks. IEEE, 2007. http://dx.doi.org/10.1109/ijcnn.2007.4371135.
Kong, Qiuqiang, Yong Xu, Philip J. B. Jackson, Wenwu Wang, and Mark D. Plumbley. „Single-Channel Signal Separation and Deconvolution with Generative Adversarial Networks“. In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/381.
Tong, Gan, and Libo Huang. „Fast Convolution based on Winograd Minimum Filtering: Introduction and Development“. In 5th International Conference on Computer Science and Information Technology (COMIT 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111716.
Xie, Chunyu, Ce Li, Baochang Zhang, Chen Chen, Jungong Han, and Jianzhuang Liu. „Memory Attention Networks for Skeleton-based Action Recognition“. In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/227.
Lin, Luojun, Lingyu Liang, Lianwen Jin, and Weijie Chen. „Attribute-Aware Convolutional Neural Networks for Facial Beauty Prediction“. In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/119.
Peserico, Nicola, Hangbo Yang, Xiaoxuan Ma, Shurui Li, Jonathan K. George, Puneet Gupta, Chee Wei Wong, and Volker J. Sorger. „Design and Model of On-Chip Metalens for Silicon Photonics Convolutional Neural Network“. In CLEO: Applications and Technology. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/cleo_at.2023.jw2a.77.
Zhu, Zulun, Jiaying Peng, Jintang Li, Liang Chen, Qi Yu, and Siqiang Luo. „Spiking Graph Convolutional Networks“. In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/338.