A selection of scientific literature on the topic "Deep Photonic Neural Networks"
Format a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, theses, conference papers, and other scholarly sources on the topic "Deep Photonic Neural Networks".
Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication in .pdf format and read its abstract online, if the corresponding data are available in the metadata.
Journal articles on the topic "Deep Photonic Neural Networks"
Pai, Sunil, Zhanghao Sun, Tyler W. Hughes, Taewon Park, Ben Bartlett, Ian A. D. Williamson, Momchil Minkov, et al. "Experimentally realized in situ backpropagation for deep learning in photonic neural networks." Science 380, no. 6643 (April 28, 2023): 398–404. http://dx.doi.org/10.1126/science.ade8450.
Sheng, Huayi. "Review of Integrated Diffractive Deep Neural Networks." Highlights in Science, Engineering and Technology 24 (December 27, 2022): 264–78. http://dx.doi.org/10.54097/hset.v24i.3957.
Jiang, Jiaqi, and Jonathan A. Fan. "Multiobjective and categorical global optimization of photonic structures based on ResNet generative neural networks." Nanophotonics 10, no. 1 (September 22, 2020): 361–69. http://dx.doi.org/10.1515/nanoph-2020-0407.
Mao, Simei, Lirong Cheng, Caiyue Zhao, Faisal Nadeem Khan, Qian Li, and H. Y. Fu. "Inverse Design for Silicon Photonics: From Iterative Optimization Algorithms to Deep Neural Networks." Applied Sciences 11, no. 9 (April 23, 2021): 3822. http://dx.doi.org/10.3390/app11093822.
Dang, Dharanidhar, Sai Vineel Reddy Chittamuru, Sudeep Pasricha, Rabi Mahapatra, and Debashis Sahoo. "BPLight-CNN: A Photonics-Based Backpropagation Accelerator for Deep Learning." ACM Journal on Emerging Technologies in Computing Systems 17, no. 4 (October 31, 2021): 1–26. http://dx.doi.org/10.1145/3446212.
Ahmed, Moustafa, Yas Al-Hadeethi, Ahmed Bakry, Hamed Dalir, and Volker J. Sorger. "Integrated photonic FFT for photonic tensor operations towards efficient and high-speed neural networks." Nanophotonics 9, no. 13 (June 26, 2020): 4097–108. http://dx.doi.org/10.1515/nanoph-2020-0055.
Sun, Yichen, Mingli Dong, Mingxin Yu, Jiabin Xia, Xu Zhang, Yuchen Bai, Lidan Lu, and Lianqing Zhu. "Nonlinear All-Optical Diffractive Deep Neural Network with 10.6 μm Wavelength for Image Classification." International Journal of Optics 2021 (February 27, 2021): 1–16. http://dx.doi.org/10.1155/2021/6667495.
Ren, Yangming, Lingxuan Zhang, Weiqiang Wang, Xinyu Wang, Yufang Lei, Yulong Xue, Xiaochen Sun, and Wenfu Zhang. "Genetic-algorithm-based deep neural networks for highly efficient photonic device design." Photonics Research 9, no. 6 (May 24, 2021): B247. http://dx.doi.org/10.1364/prj.416294.
Asano, Takashi, and Susumu Noda. "Iterative optimization of photonic crystal nanocavity designs by using deep neural networks." Nanophotonics 8, no. 12 (November 16, 2019): 2243–56. http://dx.doi.org/10.1515/nanoph-2019-0308.
Li, Renjie, Xiaozhe Gu, Yuanwen Shen, Ke Li, Zhen Li, and Zhaoyu Zhang. "Smart and Rapid Design of Nanophotonic Structures by an Adaptive and Regularized Deep Neural Network." Nanomaterials 12, no. 8 (April 16, 2022): 1372. http://dx.doi.org/10.3390/nano12081372.
Theses on the topic "Deep Photonic Neural Networks"
Liu, Qian. "Deep spiking neural networks." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.
Squadrani, Lorenzo. "Deep neural networks and thermodynamics." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Mancevo del Castillo Ayala, Diego. "Compressing Deep Convolutional Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217316.
Abbasi, Mahdieh. "Toward robust deep neural networks." Doctoral thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/67766.
In this thesis, our goal is to develop robust and reliable yet accurate learning models, particularly Convolutional Neural Networks (CNNs), in the presence of adversarial examples and Out-of-Distribution (OOD) samples. As the first contribution, we propose to predict adversarial instances with high uncertainty by encouraging diversity in an ensemble of CNNs. To this end, we devise an ensemble of diverse specialists along with a simple and computationally efficient voting mechanism that predicts adversarial examples with low confidence while keeping the predictive confidence of clean samples high. We prove that, in the presence of high entropy in our ensemble, the predictive confidence can be upper-bounded, yielding a globally fixed threshold on predictive confidence for identifying adversaries. We analytically justify the role of diversity in our ensemble in mitigating the risk of both black-box and white-box adversarial examples. Finally, we empirically assess the robustness of our ensemble against black-box and white-box attacks on several benchmark datasets. The second contribution addresses the detection of OOD samples through an end-to-end model trained on an appropriate OOD set. To this end, we address the following central question: how can the many available OOD sets be differentiated with respect to a given in-distribution task, so as to select the most appropriate one, which in turn induces a model with a high detection rate on unseen OOD sets? To answer this question, we hypothesize that the "protection" level of in-distribution sub-manifolds by each OOD set is a good property for differentiating OOD sets. To measure the protection level, we design three novel, simple, and cost-effective metrics using a pre-trained vanilla CNN.
In an extensive series of experiments on image and audio classification tasks, we empirically demonstrate the ability of an Augmented-CNN (A-CNN) and an explicitly-calibrated CNN to detect a significantly larger portion of unseen OOD samples when they are trained on the most protective OOD set. Interestingly, we also observe that the A-CNN trained on the most protective OOD set can also detect black-box Fast Gradient Sign (FGS) adversarial examples. As the third contribution, we investigate more closely the capacity of the A-CNN to detect a wider range of black-box adversaries. To increase the capability of the A-CNN to detect a larger number of adversaries, we augment its OOD training set with inter-class interpolated samples. We then demonstrate that the A-CNN trained on the most protective OOD set along with the interpolated samples has a consistent detection rate on all types of unseen adversarial examples, whereas training an A-CNN on Projected Gradient Descent (PGD) adversaries does not lead to a stable detection rate on all types of adversaries, particularly the unseen types. We also visually assess the feature space and the input-space decision boundaries of a vanilla CNN and its augmented counterpart in the presence of adversarial and clean samples. With a properly trained A-CNN, we aim to take a step toward a unified and reliable end-to-end learning model with low risk on both clean samples and unusual ones, e.g. adversarial and OOD samples. The last contribution shows a use case of the A-CNN for training a robust object detector on a partially-labeled dataset, particularly a merged dataset. Merging various datasets from similar contexts but with different sets of Objects of Interest (OoI) is an inexpensive way to craft a large-scale dataset that covers a larger spectrum of OoIs.
Moreover, merging datasets yields a single unified object detector instead of several separate ones, resulting in reduced computational and time costs. However, merging datasets, especially from similar contexts, causes many missing-label instances. With the goal of training an integrated robust object detector on a partially-labeled but large-scale dataset, we propose a self-supervised training framework to overcome the issue of missing-label instances in merged datasets. Our framework is evaluated on a merged dataset with a high missing-label rate. The empirical results confirm the viability of our generated pseudo-labels in enhancing the performance of YOLO, the state-of-the-art object detector at the time of writing.
Lu, Yifei. "Deep neural networks and fraud detection." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-331833.
Kalogiras, Vasileios. "Sentiment Classification with Deep Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217858.
Sentiment analysis is a subfield of natural language processing (NLP) that attempts to analyze the sentiment of written text. It is a complex problem that entails several challenges and has therefore been studied extensively. In past years, traditional machine learning algorithms and handcrafted methodologies provided state-of-the-art results. However, the recent deep learning renaissance has shifted interest toward end-to-end deep learning models. On the one hand, this has resulted in more powerful models; on the other hand, clear mathematical reasoning or intuition behind distinct models is still lacking. In this thesis, an attempt is therefore made to shed some light on recently proposed deep learning architectures for sentiment classification. A study of their differences is performed, and empirical results are provided on how changes in the structure or capacity of a model can affect its accuracy and the way it represents and 'comprehends' sentences.
Choi, Keunwoo. "Deep neural networks for music tagging." Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/46029.
Yin, Yonghua. "Random neural networks for deep learning." Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/64917.
Zagoruyko, Sergey. "Weight parameterizations in deep neural networks." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1129/document.
Multilayer neural networks were first proposed more than three decades ago, and various architectures and parameterizations have been explored since. Recently, graphics processing units have enabled very efficient neural network training, allowing much larger networks to be trained on larger datasets and dramatically improving performance on various supervised learning tasks. However, generalization is still far from human level, and it is difficult to understand what the decisions made are based on. To improve generalization and understanding, we revisit the problem of weight parameterizations in deep neural networks. We identify what we consider the most important problems in modern architectures: network depth, parameter efficiency, and learning multiple tasks at the same time, and try to address them in this thesis. We start with one of the core problems of computer vision, patch matching, and propose to use convolutional neural networks of various architectures to solve it, instead of manually hand-crafted descriptors. We then address the task of object detection, where a network must simultaneously learn to predict both the class of the object and its location. In both tasks we find that the number of parameters in the network is the major factor determining its performance, and we explore this phenomenon in residual networks. Our findings show that their original motivation, training deeper networks for better representations, does not fully hold: wider networks with fewer layers can be as effective as deeper ones with the same number of parameters. Overall, we present an extensive study of architectures and weight parameterizations, and of ways of transferring knowledge between them.
Ioannou, Yani Andrew. "Structural priors in deep neural networks." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278976.
Books on the topic "Deep Photonic Neural Networks"
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29642-0.
Moolayil, Jojo. Learn Keras for Deep Neural Networks. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4240-7.
Caterini, Anthony L., and Dong Eui Chang. Deep Neural Networks in a Mathematical Framework. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1.
Razaghi, Hooshmand Shokri. Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis. [New York, N.Y.?]: [publisher not identified], 2020.
Fingscheidt, Tim, Hanno Gottschalk, and Sebastian Houben, eds. Deep Neural Networks and Data for Automated Driving. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4.
Modrzyk, Nicolas. Real-Time IoT Imaging with Deep Neural Networks. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5722-7.
Iba, Hitoshi. Evolutionary Approach to Machine Learning and Deep Neural Networks. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0200-8.
Lu, Le, Yefeng Zheng, Gustavo Carneiro, and Lin Yang, eds. Deep Learning and Convolutional Neural Networks for Medical Image Computing. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1.
Tetko, Igor V., Věra Kůrková, Pavel Karpov, and Fabian Theis, eds. Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30484-3.
Book chapters on the topic "Deep Photonic Neural Networks"
Sheu, Bing J., and Joongho Choi. "Photonic Neural Networks." In Neural Information Processing and VLSI, 369–96. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2247-8_13.
Calin, Ovidiu. "Neural Networks." In Deep Learning Architectures, 167–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_6.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Recurrent Neural Networks." In Deep Learning, 157–83. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-7.
Yu, Dong, and Li Deng. "Deep Neural Networks." In Automatic Speech Recognition, 57–77. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-5779-3_4.
Awad, Mariette, and Rahul Khanna. "Deep Neural Networks." In Efficient Learning Machines, 127–47. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-5990-9_7.
Sun, Yanan, Gary G. Yen, and Mengjie Zhang. "Deep Neural Networks." In Evolutionary Deep Neural Architecture Search: Fundamentals, Methods, and Recent Advances, 9–30. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16868-0_2.
Denuit, Michel, Donatien Hainaut, and Julien Trufin. "Deep Neural Networks." In Springer Actuarial, 63–82. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25827-6_3.
Hopgood, Adrian A. "Deep Neural Networks." In Intelligent Systems for Engineers and Scientists, 229–45. 4th ed. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003226277-9.
Xiong, Momiao. "Deep Neural Networks." In Artificial Intelligence and Causal Inference, 1–44. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003028543-1.
Wang, Liang, and Jianxin Zhao. "Deep Neural Networks." In Architecture of Advanced Numerical Analysis Systems, 121–47. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8853-5_5.
Повний текст джерелаТези доповідей конференцій з теми "Deep Photonic Neural Networks"
Shastri, Bhavin J., Matthew J. Filipovich, Zhimu Guo, Paul R. Prucnal, Sudip Shekhar, and Volker J. Sorger. "Silicon Photonics Neural Networks for Training and Inference." In Photonic Networks and Devices. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/networks.2022.new2d.2.
Leelar, Bhawani Shankar, E. S. Shivaleela, and T. Srinivas. "Learning with Deep Photonic Neural Networks." In 2017 IEEE Workshop on Recent Advances in Photonics (WRAP). IEEE, 2017. http://dx.doi.org/10.1109/wrap.2017.8468594.
Shastri, Bhavin J., Matthew J. Filipovich, Zhimu Guo, Paul R. Prucnal, Sudip Shekhar, and Volker J. Sorger. "Silicon Photonics for Training Deep Neural Networks." In Conference on Lasers and Electro-Optics/Pacific Rim. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/cleopr.2022.ctha13b_02.
Ashtiani, Farshid, Mehmet Berkay On, David Sanchez-Jacome, Daniel Perez-Lopez, S. J. Ben Yoo, and Andrea Blanco-Redondo. "Photonic Max-Pooling for Deep Neural Networks Using a Programmable Photonic Platform." In Optical Fiber Communication Conference. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/ofc.2023.m1j.6.
Tanimura, Takahito, Yuichi Akiyama, and Takeshi Hoshida. "Physical-layer Visualization and Analysis toward Efficient Network Operation by Deep Neural Networks." In Photonic Networks and Devices. Washington, D.C.: OSA, 2019. http://dx.doi.org/10.1364/networks.2019.neth1d.2.
Picco, Enrico, and Serge Massar. "Real-Time Photonic Deep Reservoir Computing for Speech Recognition." In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191786.
Beyene, Yonatan, Nicola Peserico, Xiaoxuan Ma, and Volker J. Sorger. "Towards the full integration of Silicon Photonic Chip for Deep Neural Networks." In Bragg Gratings, Photosensitivity and Poling in Glass Waveguides and Materials. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/bgppm.2022.jw3a.31.
Ashtiani, Farshid, Mehmet Berkay On, David Sanchez-Jacome, Daniel Perez-Lopez, S. J. Ben Yoo, and Andrea Blanco-Redondo. "Photonic Max-Pooling for Deep Neural Networks Using a Programmable Photonic Platform." In 2023 Optical Fiber Communications Conference and Exhibition (OFC). IEEE, 2023. http://dx.doi.org/10.23919/ofc49934.2023.10116774.
Pankov, Artem V., Oleg S. Sidelnikov, Ilya D. Vatnik, Dmitry V. Churkin, and Andrey A. Sukhorukov. "Deep Neural Networks with Time-Domain Synthetic Photonic Lattices." In 2021 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC). IEEE, 2021. http://dx.doi.org/10.1109/cleo/europe-eqec52157.2021.9542271.
Shi, B., N. Calabretta, and R. Stabile. "SOA-Based Photonic Integrated Deep Neural Networks for Image Classification." In CLEO: Science and Innovations. Washington, D.C.: OSA, 2019. http://dx.doi.org/10.1364/cleo_si.2019.sf1n.5.
Reports of organizations on the topic "Deep Photonic Neural Networks"
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.
Koh, Christopher Fu-Chai, and Sergey Igorevich Magedov. Bond Order Prediction Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557202.
Shevitski, Brian, Yijing Watkins, Nicole Man, and Michael Girard. Digital Signal Processing Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), April 2023. http://dx.doi.org/10.2172/1984848.
Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.
Armstrong, Derek Elswick, and Joseph Gabriel Gorka. Using Deep Neural Networks to Extract Fireball Parameters from Infrared Spectral Data. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1623398.
Thulasidasan, Sunil, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah E. Michalak. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2019. http://dx.doi.org/10.2172/1525811.
Ellis, John, Attila Cangi, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1677521.
Ellis, Austin, Lenz Fielder, Gabriel Popoola, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-Temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2021. http://dx.doi.org/10.2172/1817970.
Stevenson, G. Analysis of Pre-Trained Deep Neural Networks for Large-Vocabulary Automatic Speech Recognition. Office of Scientific and Technical Information (OSTI), July 2016. http://dx.doi.org/10.2172/1289367.
Chronopoulos, Ilias, Katerina Chrysikou, George Kapetanios, James Mitchell, and Aristeidis Raftapostolos. Deep Neural Network Estimation in Panel Data Models. Federal Reserve Bank of Cleveland, July 2023. http://dx.doi.org/10.26509/frbc-wp-202315.