Academic literature on the topic 'Deep neural networks (DNNs)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep neural networks (DNNs).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Deep neural networks (DNNs)"

1

Zhang, Lei, Shengyuan Zhou, Tian Zhi, Zidong Du, and Yunji Chen. "TDSNN: From Deep Neural Networks to Deep Spike Neural Networks with Temporal-Coding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1319–26. http://dx.doi.org/10.1609/aaai.v33i01.33011319.

Full text
Abstract:
Continuous-valued deep convolutional networks (DNNs) can be converted into accurate rate-coding based spike neural networks (SNNs). However, the substantial computational and energy costs caused by multiple spikes limit their use in mobile and embedded applications. Recent works have shown that the newly emerged temporal-coding based SNNs converted from DNNs can reduce the computational load effectively. In this paper, we propose a novel method to convert DNNs to temporal-coding SNNs, called TDSNN. Combined with the characteristic of the leaky integrate-and-fire (LIF) neuron model…
APA, Harvard, Vancouver, ISO, and other styles
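As an illustration of the temporal-coding idea described in the abstract above, the sketch below encodes activations as single spike times, with larger values firing earlier. It shows only the generic coding scheme that conversion methods such as TDSNN build on, not the paper's algorithm; the function name and the t_max window are illustrative assumptions.

```python
import numpy as np

def time_to_first_spike(activations, t_max=100.0):
    """Encode non-negative activations as single-spike times: the larger
    the activation, the earlier the spike. Zero activations stay silent
    (time = inf). A generic temporal-coding sketch, not TDSNN itself."""
    a = np.clip(np.asarray(activations, dtype=float), 0.0, None)
    peak = a.max()
    if peak == 0.0:
        return np.full(a.shape, np.inf)
    times = t_max * (1.0 - a / peak)   # the peak activation fires at t = 0
    times[a == 0.0] = np.inf           # silent neurons never spike
    return times

print(time_to_first_spike([0.0, 0.2, 0.9, 0.5]))
# -> [inf  77.78  0.  44.44] (approximately)
```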
2

Galván, Edgar. "Neuroevolution in deep neural networks." ACM SIGEVOlution 14, no. 1 (2021): 3–7. http://dx.doi.org/10.1145/3460310.3460311.

Full text
Abstract:
A variety of methods have been applied to the architectural configuration and learning or training of artificial deep neural networks (DNNs). These methods play a crucial role in the success or failure of the DNNs for most problems. Evolutionary Algorithms are gaining momentum as a computationally feasible method for the automated optimisation of DNNs. Neuroevolution is a term that describes these processes. This newsletter article summarises the full version available at https://arxiv.org/abs/2006.05415.
APA, Harvard, Vancouver, ISO, and other styles
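For readers unfamiliar with the term, the sketch below shows the bare skeleton of neuroevolution as summarized in the abstract above: a population of candidate architectures is scored, the fittest survive, and mutated copies replace the rest. All names and the toy fitness function are illustrative stand-ins; in practice, fitness means training each DNN and measuring validation performance.

```python
import random

def fitness(arch):
    """Stand-in for 'train this architecture and return validation
    accuracy'; a toy score that prefers depth 6 and width 128."""
    return -abs(arch["depth"] - 6) - abs(arch["width"] - 128) / 32

def mutate(arch):
    """Randomly perturb one architecture hyperparameter."""
    child = dict(arch)
    if random.random() < 0.5:
        child["depth"] = max(1, child["depth"] + random.choice([-1, 1]))
    else:
        child["width"] = max(8, child["width"] + random.choice([-16, 16]))
    return child

population = [{"depth": random.randint(1, 12),
               "width": random.choice([32, 64, 128, 256])} for _ in range(10)]
for _ in range(30):                        # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:5]               # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]

print(max(population, key=fitness))
```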
3

Saravanan, Kavya, and Abbas Z. Kouzani. "Advancements in On-Device Deep Neural Networks." Information 14, no. 8 (2023): 470. http://dx.doi.org/10.3390/info14080470.

Full text
Abstract:
In recent years, rapid advancements in both hardware and software technologies have resulted in the ability to execute artificial intelligence (AI) algorithms on low-resource devices. The combination of high-speed, low-power electronic hardware and efficient AI algorithms is driving the emergence of on-device AI. Deep neural networks (DNNs) are highly effective AI algorithms used for identifying patterns in complex data. DNNs, however, contain many parameters and operations that make them computationally intensive to execute. Accordingly, DNNs are usually executed on high-resource backend proc…
APA, Harvard, Vancouver, ISO, and other styles
4

Díaz-Vico, David, Jesús Prada, Adil Omari, and José Dorronsoro. "Deep support vector neural networks." Integrated Computer-Aided Engineering 27, no. 4 (2020): 389–402. http://dx.doi.org/10.3233/ica-200635.

Full text
Abstract:
Kernel-based Support Vector Machines (SVMs), among the most popular machine learning models, usually achieve top performance in two-class classification and regression problems. However, their training cost is at least quadratic in sample size, making them unsuitable for large-sample problems. Deep Neural Networks (DNNs), by contrast, with a cost linear in sample size, are able to solve big-data problems relatively easily. In this work we propose to combine the advanced representations that DNNs can achieve in their last hidden layers with the hinge and ϵ-insensitive losses that are used in t…
APA, Harvard, Vancouver, ISO, and other styles
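The combination the abstract describes can be pictured as follows: freeze a trained DNN, take its last-hidden-layer activations as features, and fit a linear head with the hinge loss. The subgradient sketch below is a minimal illustration under those assumptions, not the authors' solver; train_hinge_head and its parameters are hypothetical names.

```python
import numpy as np

def train_hinge_head(features, labels, epochs=200, lr=0.05, reg=1e-3):
    """Fit a linear classifier with the hinge loss on fixed deep features
    (e.g. a DNN's last hidden layer). features: (n, d); labels in {-1, +1}."""
    n, d = features.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = labels * (features @ w + b)
        active = margins < 1.0                    # margin violators
        if active.any():
            grad_w = reg * w - (labels[active][:, None] * features[active]).mean(axis=0)
            grad_b = -labels[active].mean()
        else:
            grad_w, grad_b = reg * w, 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage with random "features" standing in for DNN activations.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
w, b = train_hinge_head(X, y)
print((np.sign(X @ w + b) == y).mean())   # training accuracy
```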
5

Awan, Burhan Humayun. "Deep Learning Neural Networks in the Cloud." International Journal of Advanced Engineering, Management and Science 9, no. 10 (2023): 09–26. http://dx.doi.org/10.22161/ijaems.910.2.

Full text
Abstract:
Deep Neural Networks (DNNs) are machine learning models currently used in a wide range of critical real-world applications. Due to the high number of parameters that make up DNNs, learning and prediction tasks require millions of floating-point operations (FLOPs). Implementing DNNs in a cloud computing system with centralized servers and data storage sub-systems equipped with high-speed, high-performance computing capabilities is a more effective strategy. This research presents an updated analysis of the most recent DNNs used in cloud computing. It highlights the necessity of clou…
APA, Harvard, Vancouver, ISO, and other styles
6

Cai, Chenghao, Yanyan Xu, Dengfeng Ke, and Kaile Su. "Deep Neural Networks with Multistate Activation Functions." Computational Intelligence and Neuroscience 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/721367.

Full text
Abstract:
We propose multistate activation functions (MSAFs) for deep neural networks (DNNs). These MSAFs are new kinds of activation functions which are capable of representing more than two states, including the N-order MSAFs and the symmetrical MSAF. DNNs with these MSAFs can be trained via conventional Stochastic Gradient Descent (SGD) as well as mean-normalised SGD. We also discuss how these MSAFs perform when used to resolve classification problems. Experimental results on the TIMIT corpus reveal that, on speech recognition tasks, DNNs with MSAFs perform better than the conventional DNNs, getting a…
APA, Harvard, Vancouver, ISO, and other styles
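One plausible way to build an activation function with more than two states, in the spirit of the abstract above, is to sum shifted logistic sigmoids so the output plateaus at several levels while remaining differentiable (hence trainable with SGD). The exact N-order and symmetrical MSAF definitions may differ from this sketch; the thresholds chosen here are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def msaf(x, thresholds=(0.0, 4.0, 8.0)):
    """A multistate activation: a sum of shifted logistic sigmoids.
    With k thresholds the output plateaus near 0, 1, ..., k, so a unit
    can represent more than two states while staying differentiable."""
    return sum(sigmoid(x - t) for t in thresholds)

xs = np.linspace(-5, 13, 7)
print(np.round(msaf(xs), 2))  # values stepping from ~0 up to ~3
```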
7

Yu, Haichao, Haoxiang Li, Humphrey Shi, Thomas S. Huang, and Gang Hua. "Any-Precision Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10763–71. http://dx.doi.org/10.1609/aaai.v35i12.17286.

Full text
Abstract:
We present any-precision deep neural networks (DNNs), which are trained with a new method that allows the learned DNNs to be flexible in numerical precision during inference. The same model in runtime can be flexibly and directly set to different bit-widths, by truncating the least significant bits, to support dynamic speed and accuracy trade-off. When all layers are set to low-bits, we show that the model achieved accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-world applications, where i…
APA, Harvard, Vancouver, ISO, and other styles
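The bit-truncation mechanism the abstract mentions can be sketched directly: store weights once at high precision and derive lower bit-widths by dropping least significant bits. The helper names and the int8 base precision below are assumptions for illustration, not the paper's training method.

```python
import numpy as np

def quantize_int8(w):
    """Uniformly quantize float weights to int8 plus a scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def truncate_bits(q, scale, bits):
    """Derive a lower-precision copy by dropping least significant bits:
    one stored model, many runtime bit-widths."""
    shift = 8 - bits
    q_low = (q.astype(np.int32) >> shift).astype(np.float32)
    return q_low * (scale * (1 << shift))

w = np.random.randn(5).astype(np.float32)
q8, s = quantize_int8(w)
for bits in (8, 4, 2):
    print(bits, "bits:", np.round(truncate_bits(q8, s, bits), 3))
```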
8

Tao, Zhe, Stephanie Nawas, Jacqueline Mitchell, and Aditya V. Thakur. "Architecture-Preserving Provable Repair of Deep Neural Networks." Proceedings of the ACM on Programming Languages 7, PLDI (2023): 443–67. http://dx.doi.org/10.1145/3591238.

Full text
Abstract:
Deep neural networks (DNNs) are becoming increasingly important components of software, and are considered the state-of-the-art solution for a number of problems, such as image recognition. However, DNNs are far from infallible, and incorrect behavior of DNNs can have disastrous real-world consequences. This paper addresses the problem of architecture-preserving V-polytope provable repair of DNNs. A V-polytope defines a convex bounded polytope using its vertex representation. V-polytope provable repair guarantees that the repaired DNN satisfies the given specification on the infinite set of po…
APA, Harvard, Vancouver, ISO, and other styles
9

Verpoort, Philipp C., Alpha A. Lee, and David J. Wales. "Archetypal landscapes for deep neural networks." Proceedings of the National Academy of Sciences 117, no. 36 (2020): 21857–64. http://dx.doi.org/10.1073/pnas.1919995117.

Full text
Abstract:
The predictive capabilities of deep neural networks (DNNs) continue to evolve to increasingly impressive levels. However, it is still unclear how training procedures for DNNs succeed in finding parameters that produce good results for such high-dimensional and nonconvex loss functions. In particular, we wish to understand why simple optimization schemes, such as stochastic gradient descent, do not end up trapped in local minima with high loss values that would not yield useful predictions. We explain the optimizability of DNNs by characterizing the local minima and transition states of the los…
APA, Harvard, Vancouver, ISO, and other styles
10

Marrow, Scythia, Eric J. Michaud, and Erik Hoel. "Examining the Causal Structures of Deep Neural Networks Using Information Theory." Entropy 22, no. 12 (2020): 1429. http://dx.doi.org/10.3390/e22121429.

Full text
Abstract:
Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets. Yet DNNs can also be examined at the level of causation, exploring “what does what” within the layers of the network itself. Historically, analyzing the causal structure of DNNs has received less attention than understanding their responses to input. Yet definitionally, generalizability must be a function of a DNN’s causal structure as it reflects how the DNN responds to unseen or even not-yet-defined future inputs. Here, we introduce a s…
APA, Harvard, Vancouver, ISO, and other styles
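The "response to input" level of analysis the abstract contrasts with causal analysis can be made concrete with a generic mutual-information estimate between two nodes' activation traces, sketched below. This is a standard histogram estimator, not the paper's causal measures, which intervene on the network itself.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits between two activation traces
    recorded over a dataset. Generic; binning choice affects the value."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

a = np.random.randn(10_000)
print(mutual_information(a, np.tanh(a)))                # high: deterministic relation
print(mutual_information(a, np.random.randn(10_000)))   # near zero: independent
```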
More sources

Dissertations / Theses on the topic "Deep neural networks (DNNs)"

1

Michailoff, John. "Email Classification : An evaluation of Deep Neural Networks with Naive Bayes." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-37590.

Full text
Abstract:
Machine learning (ML) is an area of computer science that gives computers the ability to learn data patterns without prior programming for those patterns. Using neural networks in this area is based on simulating the biological functions of neurons in brains to learn patterns in data, giving computers a predictive ability to comprehend how data can be clustered. This research investigates the possibilities of using neural networks for classifying email, i.e. working as an email case manager. A Deep Neural Network (DNN) consists of multiple layers of neurons connected to each other by trainable weights…
APA, Harvard, Vancouver, ISO, and other styles
2

Tong, Zheng. "Evidential deep neural network in the framework of Dempster-Shafer theory." Thesis, Compiègne, 2022. http://www.theses.fr/2022COMP2661.

Full text
Abstract:
Deep neural networks (DNNs) have achieved remarkable success in many real-world applications (e.g., pattern recognition and semantic segmentation) but still face the problem of handling uncertainty. Dempster-Shafer theory (DST) provides a well-founded and elegant framework for representing and reasoning with uncertain information. In this thesis, we propose a new framework using DST and DNNs to solve problems of uncertainty. In the proposed framework, we first hybridize DST and DNNs by plugging in a…
APA, Harvard, Vancouver, ISO, and other styles
3

Wasnik, Sachinkumar. "Fatigue Detection in EEG Time Series Data Using Deep Learning." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/24917.

Full text
Abstract:
Fatigue has widespread effects on the brain’s executive function, reaction time, and information processing, causing a loss of alertness that affects safety and productivity. There are various subjective and behavioural methods to measure fatigue. However, none of them is precise. The work in this thesis employs physiological measures such as heart rate, blood pressure, and breathing, which are objective and quantitative indicators. These are thought to provide reliable measures of fatigue and may be easier to deploy in real-world scenarios, compared to the subjective or behavioural methods. In p…
APA, Harvard, Vancouver, ISO, and other styles
4

Buratti, Luca. "Visualisation of Convolutional Neural Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
Neural networks, and Convolutional Neural Networks in particular, have recently demonstrated extraordinary results in various fields. Unfortunately, however, there is still no clear understanding of why these architectures work so well, and above all it is difficult to explain their behaviour in case of failure. This lack of clarity is what keeps these models from being applied in concrete, critical real-life scenarios, such as healthcare or self-driving cars. For this reason, several studies have been carried out in recent years in order to…
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Qian. "Deep spiking neural networks." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.

Full text
Abstract:
Neuromorphic Engineering (NE) has led to the development of biologically-inspired computer architectures whose long-term goal is to approach the performance of the human brain in terms of energy efficiency and cognitive capabilities. Although there are a number of neuromorphic platforms available for large-scale Spiking Neural Network (SNN) simulations, the problem of programming these brain-like machines to be competent in cognitive applications still remains unsolved. On the other hand, Deep Learning has emerged in Artificial Neural Network (ANN) research to dominate state-of-the-art solutio…
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Dongfu. "Deep Neural Network Approach for Single Channel Speech Enhancement Processing." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34472.

Full text
Abstract:
Speech intelligibility represents how comprehensible speech is. It is more important than speech quality in some applications. Single-channel speech intelligibility enhancement is much more difficult than multi-channel intelligibility enhancement. It has recently been reported that training-based single-channel speech intelligibility enhancement algorithms perform better than Signal-to-Noise-Ratio (SNR) based algorithms. In this thesis, a training-based Deep Neural Network (DNN) is used to improve single-channel speech intelligibility. To increase the performance of the DNN, the Multi-Resolut…
APA, Harvard, Vancouver, ISO, and other styles
7

Shuvo, Md Kamruzzaman. "Hardware Efficient Deep Neural Network Implementation on FPGA." OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2792.

Full text
Abstract:
In recent years, there has been a significant push to implement Deep Neural Networks (DNNs) on edge devices, which requires power and hardware efficient circuits to carry out the intensive matrix-vector multiplication (MVM) operations. This work presents hardware efficient MVM implementation techniques using bit-serial arithmetic and a novel MSB first computation circuit. The proposed designs take advantage of the pre-trained network weight parameters, which are already known in the design stage. Thus, the partial computation results can be pre-computed and stored into look-up tables. Then the…
APA, Harvard, Vancouver, ISO, and other styles
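The precomputation trick in the abstract is essentially distributed arithmetic: because the trained weights are fixed, the per-bit partial sum over a small group of inputs depends only on that group's bit pattern and can be read from a look-up table. The Python model below shows the arithmetic for one small dot product (the table has 2^n entries, so hardware uses small weight groups); the FPGA circuit details, including MSB-first early termination, are beyond this sketch.

```python
import numpy as np
from itertools import product

def build_lut(weights):
    """Precompute sum(w_i for selected i) for every n-bit input pattern.
    Possible because the trained weights are fixed at design time."""
    n = len(weights)
    return {bits: sum(w for w, b in zip(weights, bits) if b)
            for bits in product((0, 1), repeat=n)}

def bit_serial_dot(weights, x, lut, n_bits=8):
    """Dot product of fixed weights with unsigned n_bits-bit inputs,
    consuming one bit of every input per step, MSB first."""
    acc = 0.0
    for b in range(n_bits - 1, -1, -1):          # MSB-first
        pattern = tuple((xi >> b) & 1 for xi in x)
        acc += lut[pattern] * (1 << b)           # scale by the bit's weight
    return acc

w = [0.5, -1.25, 2.0]
lut = build_lut(w)
x = [200, 13, 77]
print(bit_serial_dot(w, x, lut))   # matches the direct dot product:
print(np.dot(w, x))
```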
8

Squadrani, Lorenzo. "Deep neural networks and thermodynamics." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
Deep learning is the most effective and most widely used approach to artificial intelligence, and yet it is far from being properly understood. Understanding it is the way to further improve its effectiveness and, in the best case, to gain some understanding of "natural" intelligence. We attempt a step in this direction with the aid of physics. We describe a convolutional neural network for image classification (trained on CIFAR-10) within the descriptive framework of Thermodynamics. In particular, we define and study the temperature of each component of the network. Our results provide a n…
APA, Harvard, Vancouver, ISO, and other styles
9

Mancevo, del Castillo Ayala Diego. "Compressing Deep Convolutional Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217316.

Full text
Abstract:
Deep Convolutional Neural Networks and "deep learning" in general stand at the cutting edge on a range of applications, from image based recognition and classification to natural language processing, speech and speaker recognition and reinforcement learning. Very deep models however are often large, complex and computationally expensive to train and evaluate. Deep learning models are thus seldom deployed natively in environments where computational resources are scarce or expensive. To address this problem we turn our attention towards a range of techniques that we collectively refer to as "mo…
APA, Harvard, Vancouver, ISO, and other styles
10

Abbasi, Mahdieh. "Toward robust deep neural networks." Doctoral thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/67766.

Full text
Abstract:
In this thesis, our goal is to develop robust, reliable and yet accurate learning models, in particular Convolutional Neural Networks (CNNs), in the presence of anomalous examples such as adversarial examples and Out-of-Distribution (OOD) samples. As the first contribution, we propose to estimate calibrated confidence for adversarial examples by encouraging diversity in an ensemble of CNNs. To this end, we design an ensemble of diverse specialists with a simple and computationally efficient voting mechanism to…
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Deep neural networks (DNNs)"

1

Aggarwal, Charu C. Neural Networks and Deep Learning. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Aggarwal, Charu C. Neural Networks and Deep Learning. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-29642-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Moolayil, Jojo. Learn Keras for Deep Neural Networks. Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4240-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Caterini, Anthony L., and Dong Eui Chang. Deep Neural Networks in a Mathematical Framework. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75304-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Razaghi, Hooshmand Shokri. Statistical Machine Learning & Deep Neural Networks Applied to Neural Data Analysis. [publisher not identified], 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fingscheidt, Tim, Hanno Gottschalk, and Sebastian Houben, eds. Deep Neural Networks and Data for Automated Driving. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Modrzyk, Nicolas. Real-Time IoT Imaging with Deep Neural Networks. Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5722-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Iba, Hitoshi. Evolutionary Approach to Machine Learning and Deep Neural Networks. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0200-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lu, Le, Yefeng Zheng, Gustavo Carneiro, and Lin Yang, eds. Deep Learning and Convolutional Neural Networks for Medical Image Computing. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tetko, Igor V., Věra Kůrková, Pavel Karpov, and Fabian Theis, eds. Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30484-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Deep neural networks (DNNs)"

1

Sotoudeh, Matthew, and Aditya V. Thakur. "SyReNN: A Tool for Analyzing Deep Neural Networks." In Tools and Algorithms for the Construction and Analysis of Systems. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72013-1_15.

Full text
Abstract:
Deep Neural Networks (DNNs) are rapidly gaining popularity in a variety of important domains. Formally, DNNs are complicated vector-valued functions which come in a variety of sizes and applications. Unfortunately, modern DNNs have been shown to be vulnerable to a variety of attacks and buggy behavior. This has motivated recent work in formally analyzing the properties of such DNNs. This paper introduces SyReNN, a tool for understanding and analyzing a DNN by computing its symbolic representation. The key insight is to decompose the DNN into linear functions. Our tool is designed for a…
APA, Harvard, Vancouver, ISO, and other styles
2

Guo, Xingwu, Ziwei Zhou, Yueling Zhang, Guy Katz, and Min Zhang. "OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks." In Tools and Algorithms for the Construction and Analysis of Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30823-9_11.

Full text
Abstract:
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs). It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors. Therefore, DNNs planted in safety-critical systems should be verified to be robust against occlusions prior to deployment. However, most existing robustness verification approaches for DNNs are focused on non-semantic perturbations and are not suited to the occlusion case. In this paper, we propose the first efficient, SMT-based approach for formally verifying the occlus…
APA, Harvard, Vancouver, ISO, and other styles
3

Zhong, Ziyuan, Yuchi Tian, and Baishakhi Ray. "Understanding Local Robustness of Deep Neural Networks under Natural Variations." In Fundamental Approaches to Software Engineering. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71500-7_16.

Full text
Abstract:
Deep Neural Networks (DNNs) are being deployed in a wide range of settings today, from safety-critical applications like autonomous driving to commercial applications involving image classification. However, recent research has shown that DNNs can be brittle to even slight variations of the input data. Therefore, rigorous testing of DNNs has gained widespread attention. While DNN robustness under norm-bound perturbation has received significant attention over the past few years, our knowledge is still limited when it comes to natural variants of the input images. These natural variants, e.g., a rotate…
APA, Harvard, Vancouver, ISO, and other styles
4

Ghayoumi, Mehdi. "Deep Neural Networks (DNNs) for Images Analysis." In Deep Learning in Practice. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003025818-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ghayoumi, Mehdi. "Deep Neural Networks (DNNs) Fundamentals and Architectures." In Deep Learning in Practice. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003025818-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bartz-Beielstein, Thomas, Sowmya Chandrasekaran, and Frederik Rehbach. "Case Study III: Tuning of Deep Neural Networks." In Hyperparameter Tuning for Machine and Deep Learning with R. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-5170-1_10.

Full text
Abstract:
A surrogate-model-based Hyperparameter Tuning (HPT) approach for Deep Learning (DL) is presented. This chapter demonstrates how the architecture-level parameters (hyperparameters) of Deep Neural Networks (DNNs) can be optimized. The implementation of the tuning procedure is 100% accessible from R, the software environment for statistical computing. How the relevant software packages can be combined in a very efficient and effective manner will be exemplified in this chapter. The hyperparameters of a standard DNN are tuned. The performances of the six Machin…
APA, Harvard, Vancouver, ISO, and other styles
7

Ghayoumi, Mehdi. "Deep Neural Networks (DNNs) Fundamentals and Architectures." In Generative Adversarial Networks in Practice. Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003281344-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ghayoumi, Mehdi. "Deep Neural Networks (DNNs) for Virtual Assistant Robots." In Deep Learning in Practice. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003025818-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jyothsna, P. V., Greeshma Prabha, K. K. Shahina, and Anu Vazhayil. "Detecting DGA Using Deep Neural Networks (DNNs)." In Communications in Computer and Information Science. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-5826-5_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chan, Robin, Svenja Uhlemeyer, Matthias Rottmann, and Hanno Gottschalk. "Detecting and Learning the Unknown in Semantic Segmentation." In Deep Neural Networks and Data for Automated Driving. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4_10.

Full text
Abstract:
Semantic segmentation is a crucial component for perception in automated driving. Deep neural networks (DNNs) are commonly used for this task, and they are usually trained on a closed set of object classes appearing in a closed operational domain. However, this is in contrast to the open world assumption in automated driving that DNNs are deployed to. Therefore, DNNs necessarily face data that they have never encountered previously, also known as anomalies, which are extremely safety-critical to properly cope with. In this chapter, we first give an overview about anomalies from an info…
APA, Harvard, Vancouver, ISO, and other styles
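A common first signal for the anomalies discussed above is per-pixel softmax entropy: a closed-set segmentation model tends to be uncertain on objects outside its training classes. The sketch below computes such an entropy heatmap; it is one member of the method family the chapter surveys, not its specific approach, and the shapes used are illustrative.

```python
import numpy as np

def entropy_heatmap(logits):
    """Per-pixel softmax entropy for a segmentation output.
    logits: (classes, H, W). High entropy flags pixels the closed-set
    model is unsure about, a common first signal for anomalies."""
    z = logits - logits.max(axis=0, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=0)      # shape (H, W)

logits = np.random.randn(19, 64, 128)   # e.g. 19 Cityscapes-style classes
anomaly_score = entropy_heatmap(logits)
print(anomaly_score.shape, anomaly_score.max())
```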

Conference papers on the topic "Deep neural networks (DNNs)"

1

Sheng, Donghe, Zhe Han, and Huiping Tian. "High-precision Brillouin Curvature Sensors Based on Deep Neural Networks." In CLEO: Applications and Technology. Optica Publishing Group, 2024. http://dx.doi.org/10.1364/cleo_at.2024.atu3a.1.

Full text
Abstract:
We report a high-precision Brillouin curvature sensor assisted by deep neural networks (DNNs). The results show over an order of magnitude improvement in sensing accuracy using DNNs compared with conventional methods.
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Wencan, Yuyao Huang, Run Sun, Tingzhao Fu, and Hongwei Chen. "Diffraction-based on-chip optical neural network with high computational density." In JSAP-Optica Joint Symposia. Optica Publishing Group, 2024. https://doi.org/10.1364/jsapo.2024.17p_a25_6.

Full text
Abstract:
The rapid advancement of artificial intelligence has led to substantial progress in various fields with deep neural networks (DNNs). However, complex tasks often require increasing power consumption and greater electronic resources. On-chip optical neural networks (ONNs) are increasingly recognized for their power efficiency, wide bandwidth, and capability for light-speed parallel processing. In our previous work [1], we proposed on-chip diffractive optical neural networks (DONNs) to offer the potential to map a larger number of neurons and connections onto optics. To further improve the c…
APA, Harvard, Vancouver, ISO, and other styles
3

Ohno, Hiroshi. "Deep Neural Network 3D Reconstruction Using One-Shot Color Mapping of Reflectance Direction Fields." In JSAP-Optica Joint Symposia. Optica Publishing Group, 2024. https://doi.org/10.1364/jsapo.2024.17a_a37_1.

Full text
Abstract:
In many manufacturing processes, real-time inspection of microscale three-dimensional (3D) surfaces is crucial. Therefore, a method integrating deep neural networks (DNNs) has been proposed for obtaining a microscale 3D surface from a single image, or two images, captured by an imaging system referred to as the one-shot BRDF (Bidirectional Reflectance Distribution Function) system, equipped with a multicolor filter [1-4]. This system can acquire reflectance direction fields using one-shot color mapping that assigns light directions to specific colors. Assuming a smooth and continuous surface, …
APA, Harvard, Vancouver, ISO, and other styles
4

Teng, Liu, Xu Guoqiong, and Shi Kai. "Effective Radius Prediction Method for Gas Extraction Based on Adam+DNN." In 2024 International Conference on Artificial Intelligence, Deep Learning and Neural Networks (AIDLNN). IEEE, 2024. https://doi.org/10.1109/aidlnn65358.2024.00032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

FONTES, ALLYSON, and FARJAD SHADMEHRI. "FAILURE PREDICTION OF COMPOSITE MATERIALS USING DEEP NEURAL NETWORKS." In Thirty-sixth Technical Conference. Destech Publications, Inc., 2021. http://dx.doi.org/10.12783/asc36/35822.

Full text
Abstract:
Fiber-reinforced polymer (FRP) composite materials are increasingly used in engineering applications. However, an investigation into the precision of conventional failure criteria, known as the World-Wide Failure Exercise (WWFEI), revealed that current theories remain unable to predict failure within an acceptable degree of accuracy. Deep Neural Networks (DNN) are emerging as an alternate and time-efficient technique for predicting the failure strength of FRP composite materials. The present study examined the applicability of DNNs as a tool for creating a data-driven failure model for composi…
APA, Harvard, Vancouver, ISO, and other styles
6

La Malfa, Emanuele, Gabriele La Malfa, Giuseppe Nicosia, and Vito Latora. "Deep Neural Networks via Complex Network Theory: A Perspective." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/482.

Full text
Abstract:
Deep Neural Networks (DNNs) can be represented as graphs whose links and vertices iteratively process data and solve tasks sub-optimally. Complex Network Theory (CNT), merging statistical physics with graph theory, provides a method for interpreting neural networks by analysing their weights and neuron structures. However, classic works adapt CNT metrics that only permit a topological analysis as they do not account for the effect of the input data. In addition, CNT metrics have been applied to a limited range of architectures, mainly including Fully Connected neural networks. In this work, we…
APA, Harvard, Vancouver, ISO, and other styles
7

Sahoo, Doyen, Quang Pham, Jing Lu, and Steven C. H. Hoi. "Online Deep Learning: Learning Deep Neural Networks on the Fly." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/369.

Full text
Abstract:
Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch setting, requiring the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream. We aim to address an open challenge of "Online Deep Learning" (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is more challenging as the optimization ob…
APA, Harvard, Vancouver, ISO, and other styles
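The streaming setting the abstract describes boils down to updating the model one example at a time, never revisiting past data. The sketch below does this for a linear logistic model to keep it short; ODL's actual contribution, hedging across the depths of a DNN so the effective capacity adapts online, is not reproduced here.

```python
import numpy as np

def online_logistic_sgd(stream, dim, lr=0.1):
    """Update a linear logistic model one example at a time, as data
    arrives; no batch of training data is ever held in memory."""
    w = np.zeros(dim)
    for x, y in stream:                  # y in {0, 1}, arrives sequentially
        p = 1.0 / (1.0 + np.exp(-w @ x))
        w -= lr * (p - y) * x            # gradient of log loss on this example
    return w

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)
stream = ((x, float(x @ true_w > 0)) for x in rng.normal(size=(2000, 5)))
print(np.round(online_logistic_sgd(stream, dim=5), 2))
```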
8

Ruan, Wenjie, Xiaowei Huang, and Marta Kwiatkowska. "Reachability Analysis of Deep Neural Networks with Provable Guarantees." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/368.

Full text
Abstract:
Verifying correctness for deep neural networks (DNNs) is challenging. We study a generic reachability problem for feed-forward DNNs which, for a given set of inputs to the network and a Lipschitz-continuous function over its outputs, computes the lower and upper bounds on the function values. Because the network and the function are Lipschitz continuous, all values in the interval between the lower and upper bound are reachable. We show how to obtain the safety verification problem, the output range analysis problem and a robustness measure by instantiating the reachability problem. We present a…
APA, Harvard, Vancouver, ISO, and other styles
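To make "lower and upper bounds on outputs" concrete, the sketch below propagates an input box through a tiny ReLU network with interval arithmetic, a simpler classic bounding scheme that is sound but looser than the paper's Lipschitz-based analysis with provable guarantees.

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate an input box [l, u] through x -> W x + b exactly,
    using the center/radius form of interval arithmetic."""
    center, radius = (l + u) / 2.0, (u - l) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def interval_relu(l, u):
    """ReLU is monotone, so it maps bounds to bounds directly."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Sound (possibly loose) output bounds for a tiny 2-layer ReLU net.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)
l, u = -0.1 * np.ones(4), 0.1 * np.ones(4)
l, u = interval_relu(*interval_affine(l, u, W1, b1))
l, u = interval_affine(l, u, W2, b2)
print(np.round(l, 3), np.round(u, 3))
```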
9

Gu, Shuqin, Yuexian Hou, Lipeng Zhang, and Yazhou Zhang. "Regularizing Deep Neural Networks with an Ensemble-based Decorrelation Method." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/301.

Full text
Abstract:
Although Deep Neural Networks (DNNs) have achieved excellent performance in many tasks, improving the generalization capacity of DNNs still remains a challenge. In this work, we propose a novel regularizer named Ensemble-based Decorrelation Method (EDM), which is motivated by the idea of ensemble learning to improve the generalization capacity of DNNs. EDM can be applied to hidden layers in fully connected neural networks or convolutional neural networks. We treat each hidden layer as an ensemble of several base learners by dividing all the hidden units into several non-overlapping groups, an…
APA, Harvard, Vancouver, ISO, and other styles
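From the abstract one can guess at the regularizer's shape: split a layer's units into non-overlapping groups, treat each group as a base learner, and penalize correlation between the groups' responses. The sketch below is that guess, with squared off-diagonal covariance as the penalty; the paper's exact formulation may differ.

```python
import numpy as np

def decorrelation_penalty(activations, n_groups=4):
    """Penalize correlation between group-averaged hidden activations.
    activations: (batch, hidden), with hidden divisible by n_groups.
    Each group's mean response acts as one 'base learner'."""
    batch, hidden = activations.shape
    groups = activations.reshape(batch, n_groups, hidden // n_groups)
    member = groups.mean(axis=2)                 # (batch, n_groups)
    centered = member - member.mean(axis=0)
    cov = centered.T @ centered / batch          # (n_groups, n_groups)
    off_diag = cov - np.diag(np.diag(cov))
    return float((off_diag ** 2).sum())          # add to the training loss

h = np.random.randn(32, 64)
print(decorrelation_penalty(h))
```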
10

Liu, Yang, Rui Hu, and Prasanna Balaprakash. "Uncertainty Quantification of Deep Neural Network-Based Turbulence Model for Reactor Transient Analysis." In ASME 2021 Verification and Validation Symposium. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/vvs2021-65045.

Full text
Abstract:
Deep neural networks (DNNs) have demonstrated good performance in learning highly non-linear relationships in large datasets, and thus have been considered a promising surrogate modeling tool for parametric partial differential equations (PDEs). On the other hand, quantifying the predictive uncertainty in DNNs is still a challenging problem. The Bayesian neural network (BNN), a sophisticated method assuming the weights of the DNNs follow certain uncertainty distributions, is considered a state-of-the-art method for the UQ of DNNs. However, the method is too computationally expensive…
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Deep neural networks (DNNs)"

1

Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.

Full text
Abstract:
We present Any-Precision Deep Neural Networks (Any-Precision DNNs), which are trained with a new method that empowers learned DNNs to be flexible in any numerical precision during inference. The same model in runtime can be flexibly and directly set to different bit-widths, by truncating the least significant bits, to support dynamic speed and accuracy trade-off. When all layers are set to low-bits, we show that the model achieved accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-worl…
APA, Harvard, Vancouver, ISO, and other styles
2

Tayeb, Shahab. Taming the Data in the Internet of Vehicles. Mineta Transportation Institute, 2022. http://dx.doi.org/10.31979/mti.2022.2014.

Full text
Abstract:
As an emerging field, the Internet of Vehicles (IoV) has a myriad of security vulnerabilities that must be addressed to protect system integrity. To stay ahead of novel attacks, cybersecurity professionals are developing new software and systems using machine learning techniques. Neural network architectures improve such systems, including Intrusion Detection Systems (IDSs), by implementing anomaly detection, which differentiates benign data packets from malicious ones. For an IDS to best predict anomalies, the model is trained on data that is typically pre-processed through normalization and f…
APA, Harvard, Vancouver, ISO, and other styles
3

Idakwo, Gabriel, Sundar Thangapandian, Joseph Luttrell, Zhaoxian Zhou, Chaoyang Zhang, and Ping Gong. Deep learning-based structure-activity relationship modeling for multi-category toxicity classification : a case study of 10K Tox21 chemicals with high-throughput cell-based androgen receptor bioassay data. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/41302.

Full text
Abstract:
Deep learning (DL) has attracted the attention of computational toxicologists as it offers a potentially greater power for in silico predictive toxicology than existing shallow learning algorithms. However, contradicting reports have been documented. To further explore the advantages of DL over shallow learning, we conducted this case study using two cell-based androgen receptor (AR) activity datasets with 10K chemicals generated from the Tox21 program. A nested double-loop cross-validation approach was adopted along with a stratified sampling strategy for partitioning chemicals of multiple AR…
APA, Harvard, Vancouver, ISO, and other styles
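The validation protocol named in the abstract, nested double-loop cross-validation with stratified sampling, is sketched below with scikit-learn: the inner loop selects hyperparameters, the outer loop scores that selection on unseen folds. A shallow classifier stands in for the DL and shallow models compared in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, GridSearchCV, cross_val_score

# Nested double-loop CV with stratified splits: the inner loop picks
# hyperparameters, the outer loop scores the choice on folds it never saw.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.01, 0.1, 1.0, 10.0]}, cv=inner)
scores = cross_val_score(search, X, y, cv=outer)
print(scores.mean(), scores.std())
```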
4

Koh, Christopher Fu-Chai, and Sergey Igorevich Magedov. Bond Order Prediction Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1557202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shevitski, Brian, Yijing Watkins, Nicole Man, and Michael Girard. Digital Signal Processing Using Deep Neural Networks. Office of Scientific and Technical Information (OSTI), 2023. http://dx.doi.org/10.2172/1984848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Landon, Nicholas. A survey of repair strategies for deep neural networks. Iowa State University, 2022. http://dx.doi.org/10.31274/cc-20240624-93.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1366924.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Armstrong, Derek Elswick, and Joseph Gabriel Gorka. Using Deep Neural Networks to Extract Fireball Parameters from Infrared Spectral Data. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1623398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Thulasidasan, Sunil, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah E. Michalak. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1525811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ellis, John, Attila Cangi, Normand Modine, John Stephens, Aidan Thompson, and Sivasankaran Rajamanickam. Accelerating Finite-temperature Kohn-Sham Density Functional Theory\ with Deep Neural Networks. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1677521.

Full text
APA, Harvard, Vancouver, ISO, and other styles