Academic literature on the topic "Machine Learning, Deep Learning, Quantum Computing, Network Theory"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Machine Learning, Deep Learning, Quantum Computing, Network Theory".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Machine Learning, Deep Learning, Quantum Computing, Network Theory"

1

Wiebe, Nathan, Ashish Kapoor, and Krysta M. Svore. "Quantum deep learning." Quantum Information and Computation 16, no. 7&8 (May 2016): 541–87. http://dx.doi.org/10.26421/qic16.7-8-1.

Full text
Abstract
In recent years, deep learning has had a profound impact on machine learning and artificial intelligence. At the same time, algorithms for quantum computers have been shown to efficiently solve some problems that are intractable on conventional, classical computers. We show that quantum computing not only reduces the time required to train a deep restricted Boltzmann machine, but also provides a richer and more comprehensive framework for deep learning than classical computing and leads to significant improvements in the optimization of the underlying objective function. Our quantum methods also permit efficient training of multilayer and fully connected models.
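For context on what the quantum speedup targets, the classical baseline for training a restricted Boltzmann machine is contrastive divergence. Below is a minimal sketch of classical CD-1 training on toy data; the layer sizes, learning rate, and random data are illustrative assumptions, and nothing here reproduces the quantum algorithm of the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        """Bernoulli restricted Boltzmann machine trained with 1-step contrastive divergence."""

        def __init__(self, n_visible, n_hidden, lr=0.05):
            self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
            self.b_v = np.zeros(n_visible)  # visible biases
            self.b_h = np.zeros(n_hidden)   # hidden biases
            self.lr = lr

        def sample_h(self, v):
            p = sigmoid(v @ self.W + self.b_h)
            return p, (rng.random(p.shape) < p).astype(float)

        def sample_v(self, h):
            p = sigmoid(h @ self.W.T + self.b_v)
            return p, (rng.random(p.shape) < p).astype(float)

        def cd1_step(self, v0):
            ph0, h0 = self.sample_h(v0)   # positive phase: clamp the data
            _, v1 = self.sample_v(h0)     # negative phase: one Gibbs step
            ph1, _ = self.sample_h(v1)
            n = len(v0)
            self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
            self.b_v += self.lr * (v0 - v1).mean(axis=0)
            self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

    # Toy usage on random binary data.
    data = (rng.random((256, 16)) < 0.5).astype(float)
    rbm = RBM(n_visible=16, n_hidden=8)
    for _ in range(20):
        rbm.cd1_step(data)
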
2

Crawford, Daniel, Anna Levit, Navid Ghadermarzy, Jaspreet S. Oberoi, and Pooya Ronagh. "Reinforcement learning using quantum Boltzmann machines." Quantum Information and Computation 18, no. 1&2 (February 2018): 51–74. http://dx.doi.org/10.26421/qic18.1-2-3.

Full text
Abstract
We investigate whether quantum annealers with select chip layouts can outperform classical computers in reinforcement learning tasks. We associate a transverse field Ising spin Hamiltonian with a layout of qubits similar to that of a deep Boltzmann machine (DBM) and use simulated quantum annealing (SQA) to numerically simulate quantum sampling from this system. We design a reinforcement learning algorithm in which the set of visible nodes representing the states and actions of an optimal policy are the first and last layers of the deep network. In absence of a transverse field, our simulations show that DBMs are trained more effectively than restricted Boltzmann machines (RBM) with the same number of nodes. We then develop a framework for training the network as a quantum Boltzmann machine (QBM) in the presence of a significant transverse field for reinforcement learning. This method also outperforms the reinforcement learning method that uses RBMs.
3

Mercaldo, Francesco, Giovanni Ciaramella, Giacomo Iadarola, Marco Storto, Fabio Martinelli, and Antonella Santone. "Towards Explainable Quantum Machine Learning for Mobile Malware Detection and Classification." Applied Sciences 12, no. 23 (November 24, 2022): 12025. http://dx.doi.org/10.3390/app122312025.

Full text
Abstract
Through the years, the market for mobile devices has been rapidly increasing, and as a result of this trend, mobile malware has become sophisticated. Researchers are focused on the design and development of malware detection systems to strengthen the security and integrity of sensitive and private information. In this context, deep learning is exploited, also in cybersecurity, showing the ability to build models aimed at detecting whether an application is Trusted or malicious. Recently, with the introduction of quantum computing, we have been witnessing the introduction of quantum algorithms in Machine Learning. In this paper, we provide a comparison between five state-of-the-art Convolutional Neural Network models (i.e., AlexNet, MobileNet, EfficientNet, VGG16, and VGG19), one network developed by the authors (called Standard-CNN), and two quantum models (i.e., a hybrid quantum model and a fully quantum neural network) to classify malware. In addition to the classification, we provide explainability behind the model predictions, by adopting the Gradient-weighted Class Activation Mapping to highlight the areas of the image obtained from the application symptomatic of a certain prediction, to the convolutional and to the quantum models obtaining the best performances in Android malware detection. Real-world experiments were performed on a dataset composed of 8446 Android malicious and legitimate applications, obtaining interesting results.
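The explainability step mentioned above is Gradient-weighted Class Activation Mapping (Grad-CAM). Below is a minimal sketch of the general Grad-CAM computation on a tiny stand-in CNN; the architecture, input shape, and class count are assumptions for illustration, not the models benchmarked in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyCNN(nn.Module):
        """Small stand-in classifier; any CNN exposing its last conv activations works similarly."""

        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            feats = self.features(x)                              # last conv activations
            pooled = F.adaptive_avg_pool2d(feats, 1).flatten(1)
            return self.classifier(pooled), feats

    def grad_cam(model, image, class_idx=None):
        """Coarse heatmap of the image regions driving one class score (Grad-CAM)."""
        logits, feats = model(image)
        feats.retain_grad()                                       # keep d(score)/d(activations)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()
        weights = feats.grad[0].mean(dim=(1, 2))                  # channel importance
        cam = F.relu((weights[:, None, None] * feats[0].detach()).sum(dim=0))
        return cam / (cam.max() + 1e-8), class_idx

    # Toy usage on a random grayscale "image" standing in for a visualized application.
    model = TinyCNN()
    heatmap, predicted_class = grad_cam(model, torch.randn(1, 1, 64, 64))
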
4

Vijayasekaran, G., and M. Duraipandian. "Resource scheduling in edge computing IoT networks using hybrid deep learning algorithm." System Research and Information Technologies, no. 3 (October 30, 2022): 86–101. http://dx.doi.org/10.20535/srit.2308-8893.2022.3.06.

Full text
Abstract
The proliferation of the Internet of Things (IoT) and wireless sensor networks enhances data communication. The demand for data communication rapidly increases, which calls the emerging edge computing paradigm. Edge computing plays a major role in IoT networks and provides computing resources close to the users. Moving the services from the cloud to users increases the communication, storage, and network features of the users. However, massive IoT networks require a large spectrum of resources for their computations. In order to attain this, resource scheduling algorithms are employed in edge computing. Statistical and machine learning-based resource scheduling algorithms have evolved in the past decade, but the performance can be improved if resource requirements are analyzed further. A deep learning-based resource scheduling in edge computing IoT networks is presented in this research work using deep bidirectional recurrent neural network (BRNN) and convolutional neural network algorithms. Before scheduling, the IoT users are categorized into clusters using a spectral clustering algorithm. The proposed model simulation analysis verifies the performance in terms of delay, response time, execution time, and resource utilization. Existing resource scheduling algorithms like a genetic algorithm (GA), Improved Particle Swarm Optimization (IPSO), and LSTM-based models are compared with the proposed model to validate the superior performances.
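Before scheduling, the abstract describes grouping IoT users with spectral clustering. A minimal sketch of that pre-processing step is given below; the two synthetic user features (requested CPU share and bandwidth) and the number of clusters are assumptions for illustration.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(42)

    # Synthetic IoT users described by (requested CPU share, requested bandwidth).
    users = np.vstack([
        rng.normal([0.2, 0.3], 0.05, size=(50, 2)),   # light-load devices
        rng.normal([0.7, 0.8], 0.05, size=(50, 2)),   # heavy-load devices
    ])

    # Group users so that each cluster can then be scheduled onto an edge node.
    clustering = SpectralClustering(n_clusters=2, affinity="rbf", random_state=0)
    labels = clustering.fit_predict(users)

    for cluster_id in np.unique(labels):
        members = np.where(labels == cluster_id)[0]
        print(f"edge cluster {cluster_id}: {len(members)} users")
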
5

Gianani, Ilaria, and Claudia Benedetti. "Multiparameter estimation of continuous-time quantum walk Hamiltonians through machine learning." AVS Quantum Science 5, no. 1 (March 2023): 014405. http://dx.doi.org/10.1116/5.0137398.

Full text
Abstract
The characterization of the Hamiltonian parameters defining a quantum walk is of paramount importance when performing a variety of tasks, from quantum communication to computation. When dealing with physical implementations of quantum walks, the parameters themselves may not be directly accessible, and, thus, it is necessary to find alternative estimation strategies exploiting other observables. Here, we perform the multiparameter estimation of the Hamiltonian parameters characterizing a continuous-time quantum walk over a line graph with n-neighbor interactions using a deep neural network model fed with experimental probabilities at a given evolution time. We compare our results with the bounds derived from estimation theory and find that the neural network acts as a nearly optimal estimator both when the estimation of two or three parameters is performed.
6

Ding, Li, Haowen Wang, Yinuo Wang, and Shumei Wang. "Based on Quantum Topological Stabilizer Color Code Morphism Neural Network Decoder." Quantum Engineering 2022 (July 20, 2022): 1–8. http://dx.doi.org/10.1155/2022/9638108.

Full text
Abstract
Solving for quantum error correction remains one of the key challenges of quantum computing. Traditional decoding methods are limited by computing power and data scale, which restrict the decoding efficiency of color codes. There are many decoding methods that have been suggested to solve this problem. Machine learning is considered one of the most suitable solutions for decoding task of color code. We project the color code onto the surface code, use the deep Q network to iteratively train the decoding process of the color code and obtain the relationship between the inversion error rate and the logical error rate of the trained model and the performance of error correction. Our results show that through unsupervised learning, when iterative training is at least 300 times, a self-trained model can improve the error correction accuracy to 96.5%, and the error correction speed is about 13.8% higher than that of the traditional algorithm. We numerically show that our decoding method can achieve a fast prediction speed after training and a better error correction threshold.
7

Ghavasieh, A., and M. De Domenico. "Statistical physics of network structure and information dynamics." Journal of Physics: Complexity 3, no. 1 (January 26, 2022): 011001. http://dx.doi.org/10.1088/2632-072x/ac457a.

Full text
Abstract
In the last two decades, network science has proven to be an invaluable tool for the analysis of empirical systems across a wide spectrum of disciplines, with applications to data structures admitting a representation in terms of complex networks. On the one hand, especially in the last decade, an increasing number of applications based on geometric deep learning have been developed to exploit, at the same time, the rich information content of a complex network and the learning power of deep architectures, highlighting the potential of techniques at the edge between applied math and computer science. On the other hand, studies at the edge of network science and quantum physics are gaining increasing attention, e.g., because of the potential applications to quantum networks for communications, such as the quantum Internet. In this work, we briefly review a novel framework grounded on statistical physics and techniques inspired by quantum statistical mechanics which have been successfully used for the analysis of a variety of complex systems. The advantage of this framework is that it allows one to define a set of information-theoretic tools which find widely used counterparts in machine learning and quantum information science, while providing a grounded physical interpretation in terms of a statistical field theory of information dynamics. We discuss the most salient theoretical features of this framework and selected applications to protein–protein interaction networks, neuronal systems, social and transportation networks, as well as potential novel applications for quantum network science and machine learning.
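A central object in the reviewed framework is a network density matrix built from the graph Laplacian, rho(tau) = exp(-tau*L)/Tr[exp(-tau*L)], whose von Neumann entropy quantifies the diversity of information dynamics on the network. The sketch below computes this spectral entropy for a random graph; the example graph and the value of tau are assumptions chosen only for illustration.

    import numpy as np
    import networkx as nx
    from scipy.linalg import expm

    def density_matrix(G, tau=1.0):
        """Network density matrix rho(tau) = exp(-tau*L) / Tr[exp(-tau*L)]."""
        L = nx.laplacian_matrix(G).toarray().astype(float)
        K = expm(-tau * L)                 # propagator of a diffusion-like dynamics
        return K / np.trace(K)

    def von_neumann_entropy(rho):
        """S = -Tr[rho log rho], computed from the eigenvalues of the density matrix."""
        eigvals = np.linalg.eigvalsh(rho)
        eigvals = eigvals[eigvals > 1e-12]
        return float(-np.sum(eigvals * np.log(eigvals)))

    G = nx.erdos_renyi_graph(50, 0.1, seed=1)
    rho = density_matrix(G, tau=1.0)
    print("spectral entropy:", von_neumann_entropy(rho))
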
8

Okey, Ogobuchi Daniel, Siti Sarah Maidin, Renata Lopes Rosa, Waqas Tariq Toor, Dick Carrillo Melgarejo, Lunchakorn Wuttisittikulkij, Muhammad Saadi, and Demóstenes Zegarra Rodríguez. "Quantum Key Distribution Protocol Selector Based on Machine Learning for Next-Generation Networks." Sustainability 14, no. 23 (November 29, 2022): 15901. http://dx.doi.org/10.3390/su142315901.

Full text
Abstract
In next-generation networks, including the sixth generation (6G), a large number of computing devices can communicate with ultra-low latency. By implication, 6G capabilities present a massive benefit for the Internet of Things (IoT), considering a wide range of application domains. However, some security concerns in the IoT involving authentication and encryption protocols are currently under investigation. Thus, mechanisms implementing quantum communications in IoT devices have been explored to offer improved security. Algorithmic solutions that enable better quantum key distribution (QKD) selection for authentication and encryption have been developed, but having limited performance considering time requirements. Therefore, a new approach for selecting the best QKD protocol based on a Deep Convolutional Neural Network model, called Tree-CNN, is proposed using the Tanh Exponential Activation Function (TanhExp) that enables IoT devices to handle more secure quantum communications using the 6G network infrastructure. The proposed model is developed, and its performance is compared with classical Convolutional Neural Networks (CNN) and other machine learning methods. The results obtained are superior to the related works, with an Area Under the Curve (AUC) of 99.89% during testing and a time-cost performance of 0.65 s for predicting the best QKD protocol. In addition, we tested our proposal using different transmission distances and three QKD protocols to demonstrate that the prediction and actual results reached similar values. Hence, our proposed model obtained a fast, reliable, and precise solution to solve the challenges of performance and time consumption in selecting the best QKD protocol.
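The TanhExp activation referenced above has the closed form f(x) = x · tanh(exp(x)). Below is a minimal sketch of how such an activation might be defined and dropped into a small convolutional block; the surrounding layers and the three-way protocol-scoring head are assumptions for illustration, not the authors' Tree-CNN.

    import torch
    import torch.nn as nn

    class TanhExp(nn.Module):
        """TanhExp activation: f(x) = x * tanh(exp(x))."""

        def forward(self, x):
            return x * torch.tanh(torch.exp(x))

    # Toy usage inside a small convolutional block scoring three hypothetical QKD protocols.
    block = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        TanhExp(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 3),
    )
    scores = block(torch.randn(4, 1, 32, 32))
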
9

Okuboyejo, Damilola A., and Oludayo O. Olugbara. "Classification of Skin Lesions Using Weighted Majority Voting Ensemble Deep Learning." Algorithms 15, no. 12 (November 24, 2022): 443. http://dx.doi.org/10.3390/a15120443.

Full text
Abstract
The conventional dermatology practice of performing noninvasive screening tests to detect skin diseases is a source of escapable diagnostic inaccuracies. Literature suggests that automated diagnosis is essential for improving diagnostic accuracies in medical fields such as dermatology, mammography, and colonography. Classification is an essential component of an assisted automation process that is rapidly gaining attention in the discipline of artificial intelligence for successful diagnosis, treatment, and recovery of patients. However, classifying skin lesions into multiple classes is challenging for most machine learning algorithms, especially for extremely imbalanced training datasets. This study proposes a novel ensemble deep learning algorithm based on the residual network with the next dimension and the dual path network with confidence preservation to improve the classification performance of skin lesions. The distributed computing paradigm was applied in the proposed algorithm to speed up the inference process by a factor of 0.25 for a faster classification of skin lesions. The algorithm was experimentally compared with 16 deep learning and 12 ensemble deep learning algorithms to establish its discriminating prowess. The experimental comparison was based on dermoscopic images congregated from the publicly available international skin imaging collaboration databases. We propitiously recorded up to 82.52% average sensitivity, 99.00% average specificity, 98.54% average balanced accuracy, and 92.84% multiclass accuracy without prior segmentation of skin lesions to outstrip numerous state-of-the-art deep learning algorithms investigated.
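A minimal sketch of a weighted majority (soft) voting rule of the kind named in the title, combining per-model class probabilities with fixed weights; the number of base models, the four lesion classes, and the weights themselves are assumptions for illustration rather than the paper's configuration.

    import numpy as np

    def weighted_soft_vote(prob_list, weights):
        """Combine per-model class probabilities with a weighted vote.

        prob_list: list of arrays of shape (n_samples, n_classes)
        weights:   one non-negative weight per base model
        """
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()
        stacked = np.stack(prob_list)                      # (n_models, n_samples, n_classes)
        combined = np.tensordot(weights, stacked, axes=1)  # weighted average of probabilities
        return combined.argmax(axis=1)

    # Toy usage: three base models, five samples, four lesion classes.
    rng = np.random.default_rng(0)
    probs = [rng.dirichlet(np.ones(4), size=5) for _ in range(3)]
    predicted = weighted_soft_vote(probs, weights=[0.5, 0.3, 0.2])
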
10

Li, Jian, and Yongyan Zhao. "Construction of Innovation and Entrepreneurship Platform Based on Deep Learning Algorithm." Scientific Programming 2021 (December 9, 2021): 1–7. http://dx.doi.org/10.1155/2021/1833979.

Full text
Abstract
As the national economy has entered a stage of rapid development, the national economy and social development have also ushered in the “14th Five-Year Plan,” and the country has also issued support policies to encourage and guide college students to start their own businesses. Therefore, the establishment of an innovation and entrepreneurship platform has a significant impact on China’s economy. This gives college students great support and help in starting a business. The theory of deep learning algorithms originated from the development of artificial neural networks and is another important field of machine learning. As the computing power of computers has been greatly improved, especially the computing power of GPU can quickly train deep neural networks, deep learning algorithms have become an important research direction. The deep learning algorithm is a nonlinear network structure and a standard modeling method in the field of machine learning. After modeling various templates, they can be identified and implemented. This article uses a combination of theoretical research and empirical research, based on the views and research content of some scholars in recent years, and introduces the basic framework and research content of this article. Then, deep learning algorithms are used to analyze the experimental data. Data analysis is performed, and relevant concepts of deep learning algorithms are combined. This article focuses on exploring the construction of an IAE (innovation and entrepreneurship) education platform and making full use of the role of deep learning algorithms to realize the construction of innovation and entrepreneurship platforms. Traditional methods need to extract features through manual design, then perform feature classification, and finally realize the function of recognition. The deep learning algorithm has strong data image processing capabilities and can quickly process large-scale data. Research data show that 49.5% of college students and 35.2% of undergraduates expressed their interest in entrepreneurship. Entrepreneurship is a good choice to relieve employment pressure.

Theses on the topic "Machine Learning, Deep Learning, Quantum Computing, Network Theory"

1

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Full text
Abstract
Convolutional artificial neural networks can be applied for image-based object classification to inform automated actions, such as handling of objects on a production line. The present thesis describes theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it's important to use spatial variety dropout regularization for high resolution image inputs, and use an optimizer configuration with good convergence properties. The findings also demonstrate examples of ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogue arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data with statistics different than those of the dataset, results indicate that augmentation of the input data during classifier creation helps performance, but would, in the current case, likely need to be guided by information about the distribution shift to have sufficiently positive impact to enable a practical application. I suggest, for future development, updated architectures, automated hyperparameter search and leveraging the bountiful unlabeled data potentially available from production lines.
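One technique the thesis reports, consolidating an ensemble into a unified model, is knowledge distillation. A minimal sketch of a standard distillation loss follows; the temperature, the loss weighting, and the toy linear models are assumptions for illustration rather than the thesis's configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
        """Blend hard-label cross-entropy with soft-target KL divergence."""
        hard = F.cross_entropy(student_logits, targets)
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        return alpha * hard + (1.0 - alpha) * soft

    # Toy usage: the "teacher" logits stand in for the averaged output of an ensemble.
    student = nn.Linear(32, 10)
    teacher = nn.Linear(32, 10)
    x = torch.randn(8, 32)
    y = torch.randint(0, 10, (8,))
    with torch.no_grad():
        teacher_logits = teacher(x)
    loss = distillation_loss(student(x), teacher_logits, y)
    loss.backward()
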
2

Buffoni, Lorenzo. "Machine learning applications in science." Doctoral thesis, 2021. http://hdl.handle.net/2158/1227616.

Full text
Abstract
Machine learning is a broad field of study, with multifaceted applications of cross-disciplinary breadth that ultimately aims at developing computer algorithms that improve automatically through experience. The core idea of artificial intelligence technology is that systems can learn from data, so as to identify distinctive patterns and make consequently decisions, with minimal human intervention. The range of applications of these methodologies is already extremely vast, and still growing at a steady pace due to the pressing need to cope with the efficiently handling of big data. In parallel scientists have increasingly become interested in the potential of Machine Learning for fundamental research, for example in physics, biology and engineering. To some extent, this is not too surprising, since both Machine Learning algorithms and scientists share some of their methods as well as goals. The two fields are both concerned about the process of gathering and analyzing data to design models that can predict the behavior of complex systems. However, the fields prominently differ in the way their fundamental goals are realized. On the one hand, scientists use knowledge, intelligence and intuition to inform their models, on the other hand, Machine Learning models are agnostic and the machine provides the intelligence by extracting it from data often giving little to no insight on the knowledge gathered. Machine learning tools in science are therefore welcomed enthusiastically by some, while being eyed with suspicions by others, albeit producing surprisingly good results in some cases. In this thesis we will argue, using practical cases and applications from biology, network theory and quantum physics, that the communication between these two fields can be not only beneficial but also necessary for the progress of both fields.
3

Scellier, Benjamin. "A deep learning theory for neural networks grounded in physics." Thesis, 2020. http://hdl.handle.net/1866/25593.

Full text
Abstract
In the last decade, deep learning has become a major component of artificial intelligence, leading to a series of breakthroughs across a wide variety of domains. The workhorse of deep learning is the optimization of loss functions by stochastic gradient descent (SGD). Traditionally in deep learning, neural networks are differentiable mathematical functions, and the loss gradients required for SGD are computed with the backpropagation algorithm. However, the computer architectures on which these neural networks are implemented and trained suffer from speed and energy inefficiency issues, due to the separation of memory and processing in these architectures. To solve these problems, the field of neuromorphic computing aims at implementing neural networks on hardware architectures that merge memory and processing, just like brains do. In this thesis, we argue that building large, fast and efficient neural networks on neuromorphic architectures also requires rethinking the algorithms to implement and train them. We present an alternative mathematical framework, also compatible with SGD, which offers the possibility to design neural networks in substrates that directly exploit the laws of physics. Our framework applies to a very broad class of models, namely those whose state or dynamics are described by variational equations. This includes physical systems whose equilibrium state minimizes an energy function, and physical systems whose trajectory minimizes an action functional (principle of least action). We present a simple procedure to compute the loss gradients in such systems, called equilibrium propagation (EqProp), which requires solely locally available information for each trainable parameter. Since many models in physics and engineering can be described by variational principles, our framework has the potential to be applied to a broad variety of physical systems, whose applications extend to various fields of engineering, beyond neuromorphic computing.
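The gradient estimate at the core of equilibrium propagation can be stated compactly. With s^0 the free equilibrium that minimizes the energy E, and s^beta the nudged equilibrium obtained by minimizing E + beta*C for a small nudging factor beta, the loss gradient with respect to each parameter theta is approximated by the two-phase difference below (notation paraphrased from the equilibrium-propagation literature rather than quoted from the thesis):

    \frac{\partial C}{\partial \theta} \;\approx\; \frac{1}{\beta}\left[\frac{\partial E}{\partial \theta}\bigl(\theta, s^{\beta}\bigr) - \frac{\partial E}{\partial \theta}\bigl(\theta, s^{0}\bigr)\right]

so each trainable parameter needs only information available locally at the two equilibrium states, which is what makes the scheme attractive for neuromorphic hardware.
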

Book chapters on the topic "Machine Learning, Deep Learning, Quantum Computing, Network Theory"

1

S., Karthigai Selvi. "Structural and Functional Data Processing in Bio-Computing and Deep Learning." In Structural and Functional Aspects of Biocomputing Systems for Data Processing, 198–215. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6523-3.ch010.

Full text
Abstract
The goal of new biocomputing research is to comprehend bio molecules' structures and functions via the lens of biofuturistic technologies. The amount of data generated every day is tremendous, and data bases are growing exponentially. A majority of computational researchers have been using machine learning for the analysis of bio-informatics data sets. This chapter explores the relationship between deep learning algorithms and the fundamental biological concepts of protein structure, phenotypes and genotype, proteins and protein levels, and the similarities and differences between popular deep learning models. This chapter offers a useful outlook for further research into its theory, algorithms, and applications in computational biology and bioinformatics. Understanding the structural aspects of cellular contact networks helps to comprehend the interdependencies, causal chains, and fundamental functional capabilities that exist across the entire network.

Conference papers on the topic "Machine Learning, Deep Learning, Quantum Computing, Network Theory"

1

Buiu, Catalin, and Vladrares Danaila. "Data Science and Machine Learning Techniques for Case-Based Learning in Medical Bioengineering Education." In eLSE 2020. University Publishing House, 2020. http://dx.doi.org/10.12753/2066-026x-20-194.

Full text
Abstract
Data science and artificial intelligence (AI) are the main factors driving a new technological revolution. Just recently (November 2019), key U.S. policymakers have announced intentions to create an agency that would invest $100 billion over 5 years on basic research in AI, with a focus on quantum computing, robotics, cybersecurity, and synthetic biology. The need for well educated people in these areas is growing exponentially, and this is more stringent than ever for medical bioengineering professionals who are expected to play a leading role in the promotion of advanced algorithms and methods to advance health care in fields like diagnosis, monitoring, and therapy. In a recent study on the current research areas of big data analytics and AI in health care, the authors have performed a systematic review of literature and found that out the primary interest area proved to be medical image processing and analysis (587 entries out of 2421 articles analysed) followed by decision-support systems and text mining and analysis. Case-based learning is an instructional design model that is learner-centered and intensively used across a variety of disciplines. In this paper, we present a set of tools and a case study that would help medical bioengineering students to grasp both theoretical concepts (both medical, such as gynecological disorders and technological, such as deep learning, neural network architectures, learning algorithms) and delve into practical applications of these techniques in medical image processing. The case study concerns the automated diagnosis of cervigrams (also called cervicographic images), that are colposcopy images used by the gynecologist for cervical cancer diagnosis, study and training. The tools described in this paper are based on using PyTorch, Keras and Tensor Flow. They allow image segmentation, automated detection of cervix, and cervical cancer classification, while also sustaining an intense interaction between participants to the case study. Based on these tools (for which we describe their distinctive advantages and provide comparisons in terms of accuracy and speed), we describe in full details different teaching strategies.
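As a flavor of the hands-on material such a case study might begin with, the sketch below fine-tunes the classification head of a small torchvision backbone for a two-class image task; the backbone choice, the freezing strategy, the class count, and the dummy batch are assumptions for illustration and do not reproduce the cervigram tools described in the paper.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a standard backbone and replace the classification head.
    model = models.resnet18(weights=None)           # in practice, load pretrained weights here
    for param in model.parameters():
        param.requires_grad = False                 # freeze the feature extractor
    model.fc = nn.Linear(model.fc.in_features, 2)   # e.g., normal vs. suspicious

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative optimization step on a dummy batch standing in for real images.
    images = torch.randn(4, 3, 224, 224)
    labels = torch.randint(0, 2, (4,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
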
