Ready-made bibliography on the topic "Deep Photonic Neural Networks"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Deep Photonic Neural Networks".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the work's metadata.
Journal articles on the topic "Deep Photonic Neural Networks"
Pai, Sunil, Zhanghao Sun, Tyler W. Hughes, Taewon Park, Ben Bartlett, Ian A. D. Williamson, Momchil Minkov, et al. "Experimentally realized in situ backpropagation for deep learning in photonic neural networks." Science 380, no. 6643 (April 28, 2023): 398–404. http://dx.doi.org/10.1126/science.ade8450.
Sheng, Huayi. "Review of Integrated Diffractive Deep Neural Networks." Highlights in Science, Engineering and Technology 24 (December 27, 2022): 264–78. http://dx.doi.org/10.54097/hset.v24i.3957.
Jiang, Jiaqi, and Jonathan A. Fan. "Multiobjective and categorical global optimization of photonic structures based on ResNet generative neural networks." Nanophotonics 10, no. 1 (September 22, 2020): 361–69. http://dx.doi.org/10.1515/nanoph-2020-0407.
Mao, Simei, Lirong Cheng, Caiyue Zhao, Faisal Nadeem Khan, Qian Li, and H. Y. Fu. "Inverse Design for Silicon Photonics: From Iterative Optimization Algorithms to Deep Neural Networks." Applied Sciences 11, no. 9 (April 23, 2021): 3822. http://dx.doi.org/10.3390/app11093822.
Dang, Dharanidhar, Sai Vineel Reddy Chittamuru, Sudeep Pasricha, Rabi Mahapatra, and Debashis Sahoo. "BPLight-CNN: A Photonics-Based Backpropagation Accelerator for Deep Learning." ACM Journal on Emerging Technologies in Computing Systems 17, no. 4 (October 31, 2021): 1–26. http://dx.doi.org/10.1145/3446212.
Ahmed, Moustafa, Yas Al-Hadeethi, Ahmed Bakry, Hamed Dalir, and Volker J. Sorger. "Integrated photonic FFT for photonic tensor operations towards efficient and high-speed neural networks." Nanophotonics 9, no. 13 (June 26, 2020): 4097–108. http://dx.doi.org/10.1515/nanoph-2020-0055.
Sun, Yichen, Mingli Dong, Mingxin Yu, Jiabin Xia, Xu Zhang, Yuchen Bai, Lidan Lu, and Lianqing Zhu. "Nonlinear All-Optical Diffractive Deep Neural Network with 10.6 μm Wavelength for Image Classification." International Journal of Optics 2021 (February 27, 2021): 1–16. http://dx.doi.org/10.1155/2021/6667495.
Ren, Yangming, Lingxuan Zhang, Weiqiang Wang, Xinyu Wang, Yufang Lei, Yulong Xue, Xiaochen Sun, and Wenfu Zhang. "Genetic-algorithm-based deep neural networks for highly efficient photonic device design." Photonics Research 9, no. 6 (May 24, 2021): B247. http://dx.doi.org/10.1364/prj.416294.
Asano, Takashi, and Susumu Noda. "Iterative optimization of photonic crystal nanocavity designs by using deep neural networks." Nanophotonics 8, no. 12 (November 16, 2019): 2243–56. http://dx.doi.org/10.1515/nanoph-2019-0308.
Li, Renjie, Xiaozhe Gu, Yuanwen Shen, Ke Li, Zhen Li, and Zhaoyu Zhang. "Smart and Rapid Design of Nanophotonic Structures by an Adaptive and Regularized Deep Neural Network." Nanomaterials 12, no. 8 (April 16, 2022): 1372. http://dx.doi.org/10.3390/nano12081372.
Doctoral dissertations on the topic "Deep Photonic Neural Networks"
Liu, Qian. "Deep spiking neural networks." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.
Squadrani, Lorenzo. "Deep neural networks and thermodynamics." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Mancevo del Castillo Ayala, Diego. "Compressing Deep Convolutional Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217316.
Abbasi, Mahdieh. "Toward robust deep neural networks." Doctoral thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/67766.
In this thesis, our goal is to develop robust and reliable yet accurate learning models, particularly Convolutional Neural Networks (CNNs), in the presence of adversarial examples and Out-of-Distribution (OOD) samples. As the first contribution, we propose to predict adversarial instances with high uncertainty by encouraging diversity in an ensemble of CNNs. To this end, we devise an ensemble of diverse specialists along with a simple and computationally efficient voting mechanism that predicts adversarial examples with low confidence while keeping the predictive confidence of clean samples high. In the presence of high entropy in our ensemble, we prove that the predictive confidence can be upper-bounded, leading to a globally fixed threshold over the predictive confidence for identifying adversaries. We analytically justify the role of diversity in our ensemble in mitigating the risk of both black-box and white-box adversarial examples. Finally, we empirically assess the robustness of our ensemble against black-box and white-box attacks on several benchmark datasets.

The second contribution addresses the detection of OOD samples through an end-to-end model trained on an appropriate OOD set. To this end, we address the following central question: how can the many available OOD sets be differentiated with respect to a given in-distribution task, so as to select the most appropriate one, which in turn induces a model with a high detection rate on unseen OOD sets? To answer this question, we hypothesize that the "protection" level of in-distribution sub-manifolds by each OOD set can be a good property for differentiating OOD sets. To measure the protection level, we design three novel, simple, and cost-effective metrics using a pre-trained vanilla CNN. In an extensive series of experiments on image and audio classification tasks, we empirically demonstrate the ability of an Augmented-CNN (A-CNN) and an explicitly calibrated CNN to detect a significantly larger portion of unseen OOD samples when they are trained on the most protective OOD set. Interestingly, we also observe that the A-CNN trained on the most protective OOD set can also detect black-box Fast Gradient Sign (FGS) adversarial examples.

As the third contribution, we investigate more closely the capacity of the A-CNN to detect a wider range of black-box adversaries. To increase its ability to detect a larger number of adversaries, we augment its OOD training set with inter-class interpolated samples. We then demonstrate that the A-CNN trained on the most protective OOD set along with the interpolated samples has a consistent detection rate on all types of unseen adversarial examples, whereas training an A-CNN on Projected Gradient Descent (PGD) adversaries does not lead to a stable detection rate across all types of adversaries, particularly the unseen ones. We also visually assess the feature space and the decision boundaries in the input space of a vanilla CNN and its augmented counterpart in the presence of adversarial and clean samples. With a properly trained A-CNN, we aim to take a step toward a unified and reliable end-to-end learning model with small risk rates on both clean samples and unusual ones, e.g. adversarial and OOD samples.

The last contribution shows a use case of the A-CNN for training a robust object detector on a partially labeled dataset, in particular a merged dataset. Merging various datasets from similar contexts but with different sets of Objects of Interest (OoI) is an inexpensive way to craft a large-scale dataset that covers a larger spectrum of OoIs. Moreover, merging datasets yields a single unified object detector instead of several separate ones, reducing computational and time costs. However, merging datasets, especially from similar contexts, causes many missing-label instances. With the goal of training an integrated robust object detector on a partially labeled but large-scale dataset, we propose a self-supervised training framework to overcome the issue of missing-label instances in merged datasets (a minimal sketch follows below). Our framework is evaluated on a merged dataset with a high missing-label rate. The empirical results confirm the viability of our generated pseudo-labels for enhancing the performance of YOLO, the state-of-the-art object detector at the time of writing.
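To make the self-supervised idea concrete, here is a minimal Python sketch of one pseudo-labeling round. It is illustrative only, not the thesis's actual code: the `detector.predict` interface, the `(label, score, box)` tuple format, and the 0.8 confidence threshold are all hypothetical assumptions.

```python
# Minimal sketch of pseudo-labeling for merged datasets with missing labels.
# `detector` is assumed to expose .predict(image) -> [(label, score, box)]
# and .train(samples); all names and the 0.8 threshold are illustrative.

def generate_pseudo_labels(detector, image, annotated_labels, threshold=0.8):
    """Keep confident predictions for classes absent from this image's annotations."""
    pseudo = []
    for label, score, box in detector.predict(image):
        missing = label not in annotated_labels  # class unlabeled in this source dataset
        if missing and score >= threshold:
            pseudo.append((label, box))
    return pseudo

def self_supervised_round(detector, merged_dataset):
    """One round: augment each sample's ground truth with pseudo-labels, then retrain."""
    augmented = []
    for image, boxes, annotated_labels in merged_dataset:
        boxes = boxes + generate_pseudo_labels(detector, image, annotated_labels)
        augmented.append((image, boxes))
    detector.train(augmented)  # retrain on the completed label set
    return detector
```

The key design choice sketched here is adding pseudo-labels only for classes missing from an image's source annotations, which keeps confident but unannotated objects from being treated as background during retraining.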
Lu, Yifei. "Deep neural networks and fraud detection." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-331833.
Kalogiras, Vasileios. "Sentiment Classification with Deep Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217858.
Sentiment analysis is a subfield of natural language processing (NLP) that attempts to analyze the sentiment of written text. It is a complex problem that entails several challenges, and for this reason it has been studied extensively. In past years, traditional machine learning algorithms and handcrafted methodologies provided state-of-the-art results. However, the recent deep learning renaissance has shifted interest toward end-to-end deep learning models. On the one hand this has resulted in more powerful models, but on the other hand clear mathematical reasoning or intuition behind individual models is still lacking. This thesis therefore attempts to shed some light on recently proposed deep learning architectures for sentiment classification. A study of their differences is performed, along with empirical results on how changes in the structure or capacity of a model can affect its accuracy and the way it represents and "comprehends" sentences.
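For readers unfamiliar with the end-to-end models the abstract refers to, the following is a generic minimal sketch of one common sentiment-classification architecture (embedding layer, LSTM, linear classifier) in PyTorch. It is not the thesis's model; the vocabulary size, dimensions, and two-class output are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    """Generic end-to-end sentiment classifier: token ids -> embedding -> LSTM -> logits."""

    def __init__(self, vocab_size: int = 10_000, embed_dim: int = 128,
                 hidden_dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)   # hidden: (1, batch, hidden_dim)
        return self.classifier(hidden[-1])     # (batch, num_classes)

model = SentimentLSTM()
batch = torch.randint(0, 10_000, (4, 20))      # 4 sentences of 20 token ids each
logits = model(batch)                          # shape (4, 2)
```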
Choi, Keunwoo. "Deep neural networks for music tagging." Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/46029.
Yin, Yonghua. "Random neural networks for deep learning." Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/64917.
Zagoruyko, Sergey. "Weight parameterizations in deep neural networks." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1129/document.
Multilayer neural networks were first proposed more than three decades ago, and various architectures and parameterizations have been explored since. Recently, graphics processing units have enabled very efficient neural network training, allowing much larger networks to be trained on larger datasets and dramatically improving performance on various supervised learning tasks. However, generalization is still far from human level, and it is difficult to understand what the decisions made are based on. To improve generalization and understanding, we revisit the problem of weight parameterization in deep neural networks. We identify what are, to our mind, the most important problems in modern architectures: network depth, parameter efficiency, and learning multiple tasks at the same time, and we try to address them in this thesis. We start with one of the core problems of computer vision, patch matching, and propose to solve it with convolutional neural networks of various architectures instead of manually hand-crafted descriptors. We then address the task of object detection, where a network should simultaneously learn to predict both the class of an object and its location. In both tasks we find that the number of parameters in the network is the major factor determining its performance, and we explore this phenomenon in residual networks. Our findings show that their original motivation, training deeper networks for better representations, does not fully hold: wider networks with fewer layers can be as effective as deeper ones with the same number of parameters (see the sketch below). Overall, we present an extensive study of architectures and weight parameterizations, and of ways of transferring knowledge between them.
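The claim that wider networks with fewer layers can match deeper ones at the same parameter budget is easy to make concrete with a quick count of convolution weights. The sketch below uses illustrative configurations, not the thesis's exact ones:

```python
def conv3x3_params(in_ch: int, out_ch: int) -> int:
    """Weights of a 3x3 convolution, ignoring biases and batch-norm."""
    return in_ch * out_ch * 3 * 3

def residual_stack_params(channels: int, num_blocks: int) -> int:
    """Each basic residual block holds two 3x3 convolutions at `channels` width."""
    return num_blocks * 2 * conv3x3_params(channels, channels)

deep_narrow = residual_stack_params(channels=64, num_blocks=16)   # deep and thin
wide_shallow = residual_stack_params(channels=256, num_blocks=1)  # shallow and wide
print(deep_narrow, wide_shallow)  # 1179648 vs 1179648 -- identical budgets
```

Both configurations hold about 1.18 million weights, so any accuracy difference between them comes from the depth/width trade-off rather than from raw capacity.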
Ioannou, Yani Andrew. "Structural priors in deep neural networks." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278976.
Books on the topic "Deep Photonic Neural Networks"
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29642-0.
Moolayil, Jojo. Learn Keras for Deep Neural Networks. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4240-7.
Caterini, Anthony L., and Dong Eui Chang. Deep Neural Networks in a Mathematical Framework. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1.
Razaghi, Hooshmand Shokri. Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis. [New York, N.Y.?]: [publisher not identified], 2020.
Fingscheidt, Tim, Hanno Gottschalk, and Sebastian Houben, eds. Deep Neural Networks and Data for Automated Driving. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4.
Modrzyk, Nicolas. Real-Time IoT Imaging with Deep Neural Networks. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5722-7.
Iba, Hitoshi. Evolutionary Approach to Machine Learning and Deep Neural Networks. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0200-8.
Lu, Le, Yefeng Zheng, Gustavo Carneiro, and Lin Yang, eds. Deep Learning and Convolutional Neural Networks for Medical Image Computing. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1.
Tetko, Igor V., Věra Kůrková, Pavel Karpov, and Fabian Theis, eds. Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30484-3.
Book chapters on the topic "Deep Photonic Neural Networks"
Sheu, Bing J., and Joongho Choi. "Photonic Neural Networks." In Neural Information Processing and VLSI, 369–96. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2247-8_13.
Calin, Ovidiu. "Neural Networks." In Deep Learning Architectures, 167–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_6.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Recurrent Neural Networks." In Deep Learning, 157–83. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-7.
Yu, Dong, and Li Deng. "Deep Neural Networks." In Automatic Speech Recognition, 57–77. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-5779-3_4.
Awad, Mariette, and Rahul Khanna. "Deep Neural Networks." In Efficient Learning Machines, 127–47. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-5990-9_7.
Sun, Yanan, Gary G. Yen, and Mengjie Zhang. "Deep Neural Networks." In Evolutionary Deep Neural Architecture Search: Fundamentals, Methods, and Recent Advances, 9–30. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16868-0_2.
Denuit, Michel, Donatien Hainaut, and Julien Trufin. "Deep Neural Networks." In Springer Actuarial, 63–82. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25827-6_3.
Hopgood, Adrian A. "Deep Neural Networks." In Intelligent Systems for Engineers and Scientists, 229–45. 4th ed. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003226277-9.
Xiong, Momiao. "Deep Neural Networks." In Artificial Intelligence and Causal Inference, 1–44. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003028543-1.
Wang, Liang, and Jianxin Zhao. "Deep Neural Networks." In Architecture of Advanced Numerical Analysis Systems, 121–47. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8853-5_5.
Conference abstracts on the topic "Deep Photonic Neural Networks"
Shastri, Bhavin J., Matthew J. Filipovich, Zhimu Guo, Paul R. Prucnal, Sudip Shekhar, and Volker J. Sorger. "Silicon Photonics Neural Networks for Training and Inference." In Photonic Networks and Devices. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/networks.2022.new2d.2.
Leelar, Bhawani Shankar, E. S. Shivaleela, and T. Srinivas. "Learning with Deep Photonic Neural Networks." In 2017 IEEE Workshop on Recent Advances in Photonics (WRAP). IEEE, 2017. http://dx.doi.org/10.1109/wrap.2017.8468594.
Shastri, Bhavin J., Matthew J. Filipovich, Zhimu Guo, Paul R. Prucnal, Sudip Shekhar, and Volker J. Sorger. "Silicon Photonics for Training Deep Neural Networks." In Conference on Lasers and Electro-Optics/Pacific Rim. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/cleopr.2022.ctha13b_02.
Ashtiani, Farshid, Mehmet Berkay On, David Sanchez-Jacome, Daniel Perez-Lopez, S. J. Ben Yoo, and Andrea Blanco-Redondo. "Photonic Max-Pooling for Deep Neural Networks Using a Programmable Photonic Platform." In Optical Fiber Communication Conference. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/ofc.2023.m1j.6.
Tanimura, Takahito, Yuichi Akiyama, and Takeshi Hoshida. "Physical-layer Visualization and Analysis toward Efficient Network Operation by Deep Neural Networks." In Photonic Networks and Devices. Washington, D.C.: OSA, 2019. http://dx.doi.org/10.1364/networks.2019.neth1d.2.
Picco, Enrico, and Serge Massar. "Real-Time Photonic Deep Reservoir Computing for Speech Recognition." In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191786.
Beyene, Yonatan, Nicola Peserico, Xiaoxuan Ma, and Volker J. Sorger. "Towards the full integration of Silicon Photonic Chip for Deep Neural Networks." In Bragg Gratings, Photosensitivity and Poling in Glass Waveguides and Materials. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/bgppm.2022.jw3a.31.
Ashtiani, Farshid, Mehmet Berkay On, David Sanchez-Jacome, Daniel Perez-Lopez, S. J. Ben Yoo, and Andrea Blanco-Redondo. "Photonic Max-Pooling for Deep Neural Networks Using a Programmable Photonic Platform." In 2023 Optical Fiber Communications Conference and Exhibition (OFC). IEEE, 2023. http://dx.doi.org/10.23919/ofc49934.2023.10116774.
Pankov, Artem V., Oleg S. Sidelnikov, Ilya D. Vatnik, Dmitry V. Churkin, and Andrey A. Sukhorukov. "Deep Neural Networks with Time-Domain Synthetic Photonic Lattices." In 2021 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC). IEEE, 2021. http://dx.doi.org/10.1109/cleo/europe-eqec52157.2021.9542271.
Shi, B., N. Calabretta, and R. Stabile. "SOA-Based Photonic Integrated Deep Neural Networks for Image Classification." In CLEO: Science and Innovations. Washington, D.C.: OSA, 2019. http://dx.doi.org/10.1364/cleo_si.2019.sf1n.5.
Organizational reports on the topic "Deep Photonic Neural Networks"
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.
Koh, Christopher Fu-Chai, and Sergey Igorevich Magedov. Bond Order Prediction Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557202.
Shevitski, Brian, Yijing Watkins, Nicole Man, and Michael Girard. Digital Signal Processing Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), April 2023. http://dx.doi.org/10.2172/1984848.
Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.
Armstrong, Derek Elswick, and Joseph Gabriel Gorka. Using Deep Neural Networks to Extract Fireball Parameters from Infrared Spectral Data. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1623398.
Thulasidasan, Sunil, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah E. Michalak. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2019. http://dx.doi.org/10.2172/1525811.
Ellis, John, Attila Cangi, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1677521.
Ellis, Austin, Lenz Fielder, Gabriel Popoola, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-Temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2021. http://dx.doi.org/10.2172/1817970.
Stevenson, G. Analysis of Pre-Trained Deep Neural Networks for Large-Vocabulary Automatic Speech Recognition. Office of Scientific and Technical Information (OSTI), July 2016. http://dx.doi.org/10.2172/1289367.
Chronopoulos, Ilias, Katerina Chrysikou, George Kapetanios, James Mitchell, and Aristeidis Raftapostolos. Deep Neural Network Estimation in Panel Data Models. Federal Reserve Bank of Cleveland, July 2023. http://dx.doi.org/10.26509/frbc-wp-202315.