Academic literature on the topic "Deep Photonic Neural Networks"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Deep Photonic Neural Networks".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Deep Photonic Neural Networks"
Pai, Sunil, Zhanghao Sun, Tyler W. Hughes, Taewon Park, Ben Bartlett, Ian A. D. Williamson, Momchil Minkov, et al. "Experimentally realized in situ backpropagation for deep learning in photonic neural networks". Science 380, no. 6643 (April 28, 2023): 398–404. http://dx.doi.org/10.1126/science.ade8450.
Sheng, Huayi. "Review of Integrated Diffractive Deep Neural Networks". Highlights in Science, Engineering and Technology 24 (December 27, 2022): 264–78. http://dx.doi.org/10.54097/hset.v24i.3957.
Jiang, Jiaqi, and Jonathan A. Fan. "Multiobjective and categorical global optimization of photonic structures based on ResNet generative neural networks". Nanophotonics 10, no. 1 (September 22, 2020): 361–69. http://dx.doi.org/10.1515/nanoph-2020-0407.
Mao, Simei, Lirong Cheng, Caiyue Zhao, Faisal Nadeem Khan, Qian Li, and H. Y. Fu. "Inverse Design for Silicon Photonics: From Iterative Optimization Algorithms to Deep Neural Networks". Applied Sciences 11, no. 9 (April 23, 2021): 3822. http://dx.doi.org/10.3390/app11093822.
Dang, Dharanidhar, Sai Vineel Reddy Chittamuru, Sudeep Pasricha, Rabi Mahapatra, and Debashis Sahoo. "BPLight-CNN: A Photonics-Based Backpropagation Accelerator for Deep Learning". ACM Journal on Emerging Technologies in Computing Systems 17, no. 4 (October 31, 2021): 1–26. http://dx.doi.org/10.1145/3446212.
Ahmed, Moustafa, Yas Al-Hadeethi, Ahmed Bakry, Hamed Dalir, and Volker J. Sorger. "Integrated photonic FFT for photonic tensor operations towards efficient and high-speed neural networks". Nanophotonics 9, no. 13 (June 26, 2020): 4097–108. http://dx.doi.org/10.1515/nanoph-2020-0055.
Sun, Yichen, Mingli Dong, Mingxin Yu, Jiabin Xia, Xu Zhang, Yuchen Bai, Lidan Lu, and Lianqing Zhu. "Nonlinear All-Optical Diffractive Deep Neural Network with 10.6 μm Wavelength for Image Classification". International Journal of Optics 2021 (February 27, 2021): 1–16. http://dx.doi.org/10.1155/2021/6667495.
Ren, Yangming, Lingxuan Zhang, Weiqiang Wang, Xinyu Wang, Yufang Lei, Yulong Xue, Xiaochen Sun, and Wenfu Zhang. "Genetic-algorithm-based deep neural networks for highly efficient photonic device design". Photonics Research 9, no. 6 (May 24, 2021): B247. http://dx.doi.org/10.1364/prj.416294.
Asano, Takashi, and Susumu Noda. "Iterative optimization of photonic crystal nanocavity designs by using deep neural networks". Nanophotonics 8, no. 12 (November 16, 2019): 2243–56. http://dx.doi.org/10.1515/nanoph-2019-0308.
Li, Renjie, Xiaozhe Gu, Yuanwen Shen, Ke Li, Zhen Li, and Zhaoyu Zhang. "Smart and Rapid Design of Nanophotonic Structures by an Adaptive and Regularized Deep Neural Network". Nanomaterials 12, no. 8 (April 16, 2022): 1372. http://dx.doi.org/10.3390/nano12081372.
Theses on the topic "Deep Photonic Neural Networks"
Liu, Qian. "Deep spiking neural networks". Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.
Texto completoSquadrani, Lorenzo. "Deep neural networks and thermodynamics". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Mancevo del Castillo Ayala, Diego. "Compressing Deep Convolutional Neural Networks". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217316.
Abbasi, Mahdieh. "Toward robust deep neural networks". Doctoral thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/67766.
Texto completoIn this thesis, our goal is to develop robust and reliable yet accurate learning models, particularly Convolutional Neural Networks (CNNs), in the presence of adversarial examples and Out-of-Distribution (OOD) samples. As the first contribution, we propose to predict adversarial instances with high uncertainty through encouraging diversity in an ensemble of CNNs. To this end, we devise an ensemble of diverse specialists along with a simple and computationally efficient voting mechanism to predict the adversarial examples with low confidence while keeping the predictive confidence of the clean samples high. In the presence of high entropy in our ensemble, we prove that the predictive confidence can be upper-bounded, leading to have a globally fixed threshold over the predictive confidence for identifying adversaries. We analytically justify the role of diversity in our ensemble on mitigating the risk of both black-box and white-box adversarial examples. Finally, we empirically assess the robustness of our ensemble to the black-box and the white-box attacks on several benchmark datasets.The second contribution aims to address the detection of OOD samples through an end-to-end model trained on an appropriate OOD set. To this end, we address the following central question: how to differentiate many available OOD sets w.r.t. a given in distribution task to select the most appropriate one, which in turn induces a model with a high detection rate of unseen OOD sets? To answer this question, we hypothesize that the “protection” level of in-distribution sub-manifolds by each OOD set can be a good possible property to differentiate OOD sets. To measure the protection level, we then design three novel, simple, and cost-effective metrics using a pre-trained vanilla CNN. 
In an extensive series of experiments on image and audio classification tasks, we empirically demonstrate the abilityof an Augmented-CNN (A-CNN) and an explicitly-calibrated CNN for detecting a significantly larger portion of unseen OOD samples, if they are trained on the most protective OOD set. Interestingly, we also observe that the A-CNN trained on the most protective OOD set (calledA-CNN) can also detect the black-box Fast Gradient Sign (FGS) adversarial examples. As the third contribution, we investigate more closely the capacity of the A-CNN on the detection of wider types of black-box adversaries. To increase the capability of A-CNN to detect a larger number of adversaries, we augment its OOD training set with some inter-class interpolated samples. Then, we demonstrate that the A-CNN trained on the most protective OOD set along with the interpolated samples has a consistent detection rate on all types of unseen adversarial examples. Where as training an A-CNN on Projected Gradient Descent (PGD) adversaries does not lead to a stable detection rate on all types of adversaries, particularly the unseen types. We also visually assess the feature space and the decision boundaries in the input space of a vanilla CNN and its augmented counterpart in the presence of adversaries and the clean ones. By a properly trained A-CNN, we aim to take a step toward a unified and reliable end-to-end learning model with small risk rates on both clean samples and the unusual ones, e.g. adversarial and OOD samples.The last contribution is to show a use-case of A-CNN for training a robust object detector on a partially-labeled dataset, particularly a merged dataset. Merging various datasets from similar contexts but with different sets of Object of Interest (OoI) is an inexpensive way to craft a large-scale dataset which covers a larger spectrum of OoIs. 
Moreover, merging datasets allows achieving a unified object detector, instead of having several separate ones, resultingin the reduction of computational and time costs. However, merging datasets, especially from a similar context, causes many missing-label instances. With the goal of training an integrated robust object detector on a partially-labeled but large-scale dataset, we propose a self-supervised training framework to overcome the issue of missing-label instances in the merged datasets. Our framework is evaluated on a merged dataset with a high missing-label rate. The empirical results confirm the viability of our generated pseudo-labels to enhance the performance of YOLO, as the current (to date) state-of-the-art object detector.
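The ensemble voting idea summarized in this abstract can be illustrated with a minimal sketch (the ensemble outputs below are hypothetical; the actual thesis uses an ensemble of specialist CNNs and an analytically derived threshold): average the softmax outputs of the ensemble members and flag an input as a likely adversary when the averaged prediction is high-entropy, i.e. low-confidence.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy (natural log) of a probability vector."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def flag_adversarial(member_probs, threshold):
    """Average the softmax outputs of ensemble members; flag the input
    as suspicious when the averaged prediction is high-entropy, i.e.
    the members disagree or are individually uncertain."""
    avg = np.mean(member_probs, axis=0)
    return predictive_entropy(avg) > threshold, avg

# Three hypothetical ensemble members on a 3-class problem.
clean = np.array([[0.97, 0.02, 0.01],
                  [0.95, 0.03, 0.02],
                  [0.96, 0.02, 0.02]])   # members agree -> low entropy
attack = np.array([[0.90, 0.05, 0.05],
                   [0.10, 0.80, 0.10],
                   [0.15, 0.10, 0.75]])  # members disagree -> high entropy

flag_clean, _ = flag_adversarial(clean, threshold=0.5)    # not flagged
flag_attack, _ = flag_adversarial(attack, threshold=0.5)  # flagged
```

Because the ensemble's predictive confidence is provably upper-bounded under high disagreement, a single global entropy (or confidence) threshold suffices, rather than a per-input or per-class one.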
Lu, Yifei. "Deep neural networks and fraud detection". Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-331833.
Kalogiras, Vasileios. "Sentiment Classification with Deep Neural Networks". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217858.
Texto completoSentiment analysis is a subfield of natural language processing (NLP) that attempts to analyze the sentiment of written text.It is is a complex problem that entails different challenges. For this reason, it has been studied extensively. In the past years traditional machine learning algorithms or handcrafted methodologies used to provide state of the art results. However, the recent deep learning renaissance shifted interest towards end to end deep learning models. On the one hand this resulted into more powerful models but on the other hand clear mathematical reasoning or intuition behind distinct models is still lacking. As a result, in this thesis, an attempt to shed some light on recently proposed deep learning architectures for sentiment classification is made.A study of their differences is performed as well as provide empirical results on how changes in the structure or capacity of a model can affect its accuracy and the way it represents and ''comprehends'' sentences.
Choi, Keunwoo. "Deep neural networks for music tagging". Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/46029.
Yin, Yonghua. "Random neural networks for deep learning". Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/64917.
Zagoruyko, Sergey. "Weight parameterizations in deep neural networks". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1129/document.
Texto completoMultilayer neural networks were first proposed more than three decades ago, and various architectures and parameterizations were explored since. Recently, graphics processing units enabled very efficient neural network training, and allowed training much larger networks on larger datasets, dramatically improving performance on various supervised learning tasks. However, the generalization is still far from human level, and it is difficult to understand on what the decisions made are based. To improve on generalization and understanding we revisit the problems of weight parameterizations in deep neural networks. We identify the most important, to our mind, problems in modern architectures: network depth, parameter efficiency, and learning multiple tasks at the same time, and try to address them in this thesis. We start with one of the core problems of computer vision, patch matching, and propose to use convolutional neural networks of various architectures to solve it, instead of manual hand-crafting descriptors. Then, we address the task of object detection, where a network should simultaneously learn to both predict class of the object and the location. In both tasks we find that the number of parameters in the network is the major factor determining it's performance, and explore this phenomena in residual networks. Our findings show that their original motivation, training deeper networks for better representations, does not fully hold, and wider networks with less layers can be as effective as deeper with the same number of parameters. Overall, we present an extensive study on architectures and weight parameterizations, and ways of transferring knowledge between them
Ioannou, Yani Andrew. "Structural priors in deep neural networks". Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278976.
Books on the topic "Deep Photonic Neural Networks"
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0.
Aggarwal, Charu C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29642-0.
Moolayil, Jojo. Learn Keras for Deep Neural Networks. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4240-7.
Caterini, Anthony L., and Dong Eui Chang. Deep Neural Networks in a Mathematical Framework. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1.
Razaghi, Hooshmand Shokri. Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis. [New York, N.Y.?]: [publisher not identified], 2020.
Fingscheidt, Tim, Hanno Gottschalk, and Sebastian Houben, eds. Deep Neural Networks and Data for Automated Driving. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4.
Modrzyk, Nicolas. Real-Time IoT Imaging with Deep Neural Networks. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5722-7.
Iba, Hitoshi. Evolutionary Approach to Machine Learning and Deep Neural Networks. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0200-8.
Lu, Le, Yefeng Zheng, Gustavo Carneiro, and Lin Yang, eds. Deep Learning and Convolutional Neural Networks for Medical Image Computing. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1.
Tetko, Igor V., Věra Kůrková, Pavel Karpov, and Fabian Theis, eds. Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30484-3.
Book chapters on the topic "Deep Photonic Neural Networks"
Sheu, Bing J., and Joongho Choi. "Photonic Neural Networks". In Neural Information Processing and VLSI, 369–96. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2247-8_13.
Calin, Ovidiu. "Neural Networks". In Deep Learning Architectures, 167–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_6.
Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "Recurrent Neural Networks". In Deep Learning, 157–83. New York: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-7.
Yu, Dong, and Li Deng. "Deep Neural Networks". In Automatic Speech Recognition, 57–77. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-5779-3_4.
Awad, Mariette, and Rahul Khanna. "Deep Neural Networks". In Efficient Learning Machines, 127–47. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-5990-9_7.
Sun, Yanan, Gary G. Yen, and Mengjie Zhang. "Deep Neural Networks". In Evolutionary Deep Neural Architecture Search: Fundamentals, Methods, and Recent Advances, 9–30. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16868-0_2.
Denuit, Michel, Donatien Hainaut, and Julien Trufin. "Deep Neural Networks". In Springer Actuarial, 63–82. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25827-6_3.
Hopgood, Adrian A. "Deep Neural Networks". In Intelligent Systems for Engineers and Scientists, 229–45. 4th ed. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003226277-9.
Xiong, Momiao. "Deep Neural Networks". In Artificial Intelligence and Causal Inference, 1–44. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003028543-1.
Wang, Liang, and Jianxin Zhao. "Deep Neural Networks". In Architecture of Advanced Numerical Analysis Systems, 121–47. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8853-5_5.
Conference proceedings on the topic "Deep Photonic Neural Networks"
Shastri, Bhavin J., Matthew J. Filipovich, Zhimu Guo, Paul R. Prucnal, Sudip Shekhar, and Volker J. Sorger. "Silicon Photonics Neural Networks for Training and Inference". In Photonic Networks and Devices. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/networks.2022.new2d.2.
Leelar, Bhawani Shankar, E. S. Shivaleela, and T. Srinivas. "Learning with Deep Photonic Neural Networks". In 2017 IEEE Workshop on Recent Advances in Photonics (WRAP). IEEE, 2017. http://dx.doi.org/10.1109/wrap.2017.8468594.
Shastri, Bhavin J., Matthew J. Filipovich, Zhimu Guo, Paul R. Prucnal, Sudip Shekhar, and Volker J. Sorger. "Silicon Photonics for Training Deep Neural Networks". In Conference on Lasers and Electro-Optics/Pacific Rim. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/cleopr.2022.ctha13b_02.
Ashtiani, Farshid, Mehmet Berkay On, David Sanchez-Jacome, Daniel Perez-Lopez, S. J. Ben Yoo, and Andrea Blanco-Redondo. "Photonic Max-Pooling for Deep Neural Networks Using a Programmable Photonic Platform". In Optical Fiber Communication Conference. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/ofc.2023.m1j.6.
Tanimura, Takahito, Yuichi Akiyama, and Takeshi Hoshida. "Physical-layer Visualization and Analysis toward Efficient Network Operation by Deep Neural Networks". In Photonic Networks and Devices. Washington, D.C.: OSA, 2019. http://dx.doi.org/10.1364/networks.2019.neth1d.2.
Picco, Enrico, and Serge Massar. "Real-Time Photonic Deep Reservoir Computing for Speech Recognition". In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191786.
Beyene, Yonatan, Nicola Peserico, Xiaoxuan Ma, and Volker J. Sorger. "Towards the full integration of Silicon Photonic Chip for Deep Neural Networks". In Bragg Gratings, Photosensitivity and Poling in Glass Waveguides and Materials. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/bgppm.2022.jw3a.31.
Ashtiani, Farshid, Mehmet Berkay On, David Sanchez-Jacome, Daniel Perez-Lopez, S. J. Ben Yoo, and Andrea Blanco-Redondo. "Photonic Max-Pooling for Deep Neural Networks Using a Programmable Photonic Platform". In 2023 Optical Fiber Communications Conference and Exhibition (OFC). IEEE, 2023. http://dx.doi.org/10.23919/ofc49934.2023.10116774.
Pankov, Artem V., Oleg S. Sidelnikov, Ilya D. Vatnik, Dmitry V. Churkin, and Andrey A. Sukhorukov. "Deep Neural Networks with Time-Domain Synthetic Photonic Lattices". In 2021 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC). IEEE, 2021. http://dx.doi.org/10.1109/cleo/europe-eqec52157.2021.9542271.
Shi, B., N. Calabretta, and R. Stabile. "SOA-Based Photonic Integrated Deep Neural Networks for Image Classification". In CLEO: Science and Innovations. Washington, D.C.: OSA, 2019. http://dx.doi.org/10.1364/cleo_si.2019.sf1n.5.
Reports on the topic "Deep Photonic Neural Networks"
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.
Koh, Christopher Fu-Chai, and Sergey Igorevich Magedov. Bond Order Prediction Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557202.
Shevitski, Brian, Yijing Watkins, Nicole Man, and Michael Girard. Digital Signal Processing Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), April 2023. http://dx.doi.org/10.2172/1984848.
Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.
Armstrong, Derek Elswick, and Joseph Gabriel Gorka. Using Deep Neural Networks to Extract Fireball Parameters from Infrared Spectral Data. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1623398.
Thulasidasan, Sunil, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah E. Michalak. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2019. http://dx.doi.org/10.2172/1525811.
Ellis, John, Attila Cangi, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1677521.
Ellis, Austin, Lenz Fielder, Gabriel Popoola, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-Temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), June 2021. http://dx.doi.org/10.2172/1817970.
Stevenson, G. Analysis of Pre-Trained Deep Neural Networks for Large-Vocabulary Automatic Speech Recognition. Office of Scientific and Technical Information (OSTI), July 2016. http://dx.doi.org/10.2172/1289367.
Chronopoulos, Ilias, Katerina Chrysikou, George Kapetanios, James Mitchell, and Aristeidis Raftapostolos. Deep Neural Network Estimation in Panel Data Models. Federal Reserve Bank of Cleveland, July 2023. http://dx.doi.org/10.26509/frbc-wp-202315.