Books on the topic 'Efficient Neural Networks'



Consult the top 23 books for your research on the topic 'Efficient Neural Networks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse books on a wide variety of disciplines and organise your bibliography correctly.

1

Approximation methods for efficient learning of Bayesian networks. Amsterdam: IOS Press, 2008.

2

Omohundro, Stephen M. Efficient algorithms with neural network behavior. Urbana, Il (1304 W. Springfield Ave., Urbana 61801): Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1987.

3

Costa, Álvaro. Evaluating public transport efficiency with neural network models. Loughborough: Loughborough University, Department of Economics, 1996.

4

Markellos, Raphael N. Robust estimation of nonlinear production frontiers and efficiency: A neural network approach. Loughborough: Loughborough University, Department of Economics, 1997.

5

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. Efficient Processing of Deep Neural Networks. Morgan & Claypool Publishers, 2020.

6

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. Efficient Processing of Deep Neural Networks. Springer International Publishing AG, 2020.

7

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. Efficient Processing of Deep Neural Networks. Morgan & Claypool Publishers, 2020.

8

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. Efficient Processing of Deep Neural Networks. Morgan & Claypool Publishers, 2020.

9

Ghotra, Manpreet Singh, and Rajdeep Dua. Neural Network Programming with TensorFlow: Unleash the power of TensorFlow to train efficient neural networks. Packt Publishing, 2017.

10

Takikawa, Masami. Representations and algorithms for efficient inference in Bayesian networks. 1998.

11

Karim, Samsul Ariffin Abdul. Intelligent Systems Modeling and Simulation II: Machine Learning, Neural Networks, Efficient Numerical Algorithm and Statistical Methods. Springer International Publishing AG, 2022.

12

Hands-On Mathematics for Deep Learning: Build a Solid Mathematical Foundation for Training Efficient Deep Neural Networks. Packt Publishing, Limited, 2020.

13

Ławryńczuk, Maciej. Computationally Efficient Model Predictive Control Algorithms: A Neural Network Approach. Springer, 2016.

14

Ławryńczuk, Maciej. Computationally Efficient Model Predictive Control Algorithms: A Neural Network Approach. Springer London, Limited, 2014.

15

Scerif, Gaia, and Rachel Wu. Developmental Disorders. Edited by Anna C. (Kia) Nobre and Sabine Kastner. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199675111.013.030.

Abstract:
Tracing the development of attentional deficits and their cascading effects in genetically and functionally defined disorders allows an understanding of intertwined developing systems on three levels. At the cognitive level, attention influences perception, learning, and memory. Attention and other cognitive processes interact to produce cascading effects across developmental time. At a systems neuroscience level, developmental disorders can reveal the systems and mechanisms necessary to attain adults’ efficient attentional processes. At the level of cellular neuroscience and functional genomics, disorders of known genetic aetiology provide inroads into cellular pathways and protein networks leading to attentional deficits across development. This chapter draws from both genetically defined and functionally defined disorders to delineate the complexities and necessity of studying attentional deficits and their neural correlates. Studying developmental disorders highlights the need to study attentional processes and other cognitive processes (e.g. memory and learning) in tandem, given their inseparable nature.
16

Raymont, Vanessa, and Robert D. Stevens. Cognitive Reserve. Oxford University Press, 2014. http://dx.doi.org/10.1093/med/9780199653461.003.0029.

Abstract:
The cognitive reserve hypothesis suggests that the structure and function of an individual’s brain can modulate the clinical expression of brain damage and illness. This chapter describes passive and active models of reserve, their impact on neurological illness, and how these effects can be assessed. Passive models focus on the protective potential of anatomical features, such as brain size, neural density, and synaptic connectivity, while active models emphasize the connectivity and efficiency of neural networks and active compensation by alternative networks. It is likely that both models represent features of a common biological substrate and could help in the development of strategies to improve outcome following critical illness.
17

Macnab, Christopher John Brent. Stable neural-network control of structurally flexible space manipulators: A novel approach featuring fast training and efficient memory. 1999.

18

Zamarian, L., and Margarete Delazer. Arithmetic Learning in Adults. Edited by Roi Cohen Kadosh and Ann Dowker. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199642342.013.007.

Abstract:
Neuroimaging has significantly contributed to our understanding of human learning by tracking the neural correlates underlying the acquisition of new expertise. Studies using functional magnetic resonance imaging (fMRI) suggest that the acquisition of arithmetic competence is reflected in a decrease of activation in frontal brain regions and a relative increase of activation in parietal brain regions that are important for arithmetic processing. Activation of the angular gyrus (AG) is related to fact learning, skilled retrieval, and level of automatization. fMRI investigations extend the findings of cognitive studies showing that behavioural differences between trained and untrained sets of items, between different arithmetic operations, and between different training strategies are reflected by specific activation patterns. fMRI studies also reveal inter-individual differences related to arithmetic competence, with low performing individuals showing lower AG activation when answering calculation problems. Importantly, training attenuates inter-individual differences in AG activation. Studies with calculation experts suggest that different strategies may be used to achieve extraordinary performance. While some experts recruit a more extended cerebral network compared with the average population, others use the same frontoparietal network, but more efficiently. In conclusion, brain imaging studies on arithmetic learning and expertise offer a promising view on the adaptivity of the human brain. Although evidence on functional or structural modifications following intervention in dyscalculic patients is still scarce, future studies may contribute to the development of more efficient and targeted rehabilitation programmes after brain damage or in cases of atypical numerical development.
19

Townley, Christopher, Mattia Guidi, and Mariana Tavares. The Law and Politics of Global Competition. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198859789.001.0001.

Abstract:
This book discusses the International Competition Network's (ICN) formally neutral structures, which provide powerful influence mechanisms for strong national competition authorities (NCAs), non-governmental actors (NGAs), and competition experts over wider state interests. It examines the ICN's legitimacy from a political and legal theory perspective and analyses its effectiveness and efficiency. It also describes the ICN as a transnational network set up by its members, without wider state input. The book points out how the ICN has sought to enrich its discussions and outputs through the inclusion of NGAs, principally large multinationals and the legal and economic professions. It reviews the ICN's mission: to advocate the adoption of superior standards and procedures in competition policy around the world, to formulate proposals for procedural and substantive convergence, and to facilitate effective international cooperation to the benefit of member agencies, consumers, and economies worldwide.
20

Allen, Michael P., and Dominic J. Tildesley. Some tricks of the trade. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198803195.003.0005.

Abstract:
This chapter concentrates on practical tips and tricks for improving the efficiency of computer simulation programs. This includes the effect of using truncated and shifted potentials, and the use of table look-up and neural networks for calculating potentials. Approaches for speeding up simulations, such as the Verlet neighbour list, linked-lists and multiple timestep methods are described. The chapter then proceeds to discuss the general structure of common simulation programs; in particular the choice of the starting configuration and the initial velocities of the particles. The chapter also contains details of the overall approach to organising runs, storing the data, and checking that the program is working correctly.
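The truncated-and-shifted potential the abstract mentions is easy to illustrate. The following is a minimal sketch in reduced Lennard-Jones units; the function names and default parameters are mine, not the book's:

```python
def lj(r, epsilon=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones pair potential (reduced units by default)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_truncated_shifted(r, rc=2.5, epsilon=1.0, sigma=1.0):
    """Truncate the potential at cutoff rc and shift it so it is exactly
    zero at rc, removing the energy discontinuity at the cutoff."""
    if r >= rc:
        return 0.0
    return lj(r, epsilon, sigma) - lj(rc, epsilon, sigma)
```

Shifting keeps the energy continuous at the cutoff, at the cost of slightly changing the well depth everywhere inside it.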
21

Machine Learning: The Ultimate Beginners Guide to Efficiently Learn and Understand Machine Learning, Artificial Neural Network and Data Mining from Beginners to Expert Concepts. Independently Published, 2019.

22

Hilgurt, S. Ya, and O. A. Chemerys. Reconfigurable signature-based information security tools of computer systems. PH “Akademperiodyka”, 2022. http://dx.doi.org/10.15407/akademperiodyka.458.297.

Abstract:
The book is devoted to the research and development of methods for combining computational structures in reconfigurable signature-based information protection tools for computer systems and networks, in order to increase their efficiency. Network security tools based on AI approaches such as deep neural networks, despite the great progress of recent years, still suffer from a nonzero recognition error probability, and even a low probability of such an error in a critical infrastructure can be disastrous. Therefore, signature-based recognition methods, with their theoretically exact matching, remain relevant when creating information security systems such as network intrusion detection, antivirus, anti-spam, and worm-containment systems. Real-time multi-pattern string matching has been a major performance bottleneck in such systems. To speed up recognition, developers use a reconfigurable hardware platform based on FPGA devices, which provides almost software-level flexibility together with near-ASIC performance. The most important component of a signature-based information security system, in terms of efficiency, is the recognition module, in which the multi-pattern matching task is directly solved. It must not only check each byte of input data, at speeds of tens or hundreds of gigabits per second, against hundreds of thousands or even millions of patterns in the signature database, but also change its structure every time a new signature appears or the operating conditions of the protected system change. Analysis of numerous examples of reconfigurable information security systems identified the three most promising approaches to building the hardware circuits of recognition modules: content-addressable memory based on digital comparators, Bloom filters, and Aho–Corasick finite automata.
A method for fast quantitative evaluation of the recognition module's components, and of the entire system, is proposed; it makes it possible to avoid resource-intensive FPGA synthesis procedures when building complex reconfigurable information security systems. To improve the efficiency of the systems under study, structural-level combining methods are proposed that merge several matching schemes, built on different approaches and their modifications, into a single recognition device in such a way that their advantages are enhanced and their disadvantages eliminated. Optimization methods are used to obtain the maximum benefit from combining. Methods of parallel combining, sequential cascading, and vertical junction have been formulated and investigated, along with the principle of multi-level combining. Algorithms implementing the proposed combining methods have been developed, and software has been created for experimenting with the resulting methods and tools. Quantitative estimates of the efficiency gains in constructing recognition modules are obtained. The optimization of reconfigurable devices described in hardware description languages is also considered, including a modification of the affine transformation method that parallelizes loops other methods cannot optimize. To facilitate practical application of the developed methods and tools, a web service based on grid and cloud computing is considered. The proposed methods for increasing the efficiency of the matching procedure can also be applied to important problems in other fields, such as data mining and the analysis of DNA molecules.
Keywords: information security, signature, multi-pattern matching, FPGA, structural combining, efficiency, optimization, hardware description language.
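The Aho–Corasick automaton named in the abstract is the classic answer to the multi-pattern matching task: all patterns are found in a single pass over the input. As a rough software illustration of the algorithm only (the book itself targets FPGA hardware; this sketch is mine, not taken from it):

```python
from collections import deque

def build_automaton(patterns):
    """Build an Aho-Corasick automaton: a trie with failure links and
    per-state output sets."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        state = 0
        for ch in p:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(p)
    # BFS to compute failure links; merge outputs along them.
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]
    return goto, fail, out

def search(text, patterns):
    """Return (start_index, pattern) for every occurrence of any pattern."""
    goto, fail, out = build_automaton(patterns)
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for p in out[state]:
            hits.append((i - len(p) + 1, p))
    return hits
```

The FPGA realizations the book studies trade this pointer-chasing structure for parallel hardware, but the automaton they implement is the same.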
23

Butz, Martin V., and Esther F. Kutter. Top-Down Predictions Determine Perceptions. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780198739692.003.0009.

Abstract:
While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, our brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with the available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstracted encodings to more concrete encodings. Bayesian information processing is the key to understanding how information integration must work computationally, at least in approximation, in the brain as well. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are well suited to integrating multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point even further. Finally, some well-known visual illusions are shown and the perceptions are explained by means of generative, information-integrating, perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available bottom-up visual information.
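The Bayesian fusion of a top-down prior with bottom-up evidence that the abstract describes reduces, in its simplest form, to Bayes' rule. A toy illustration with made-up hypothesis names, not a model from the book:

```python
def posterior(prior, likelihood):
    """Fuse a top-down prior with bottom-up sensory evidence via Bayes' rule.
    Both arguments map hypotheses to (relative) probabilities; the result
    is the normalized posterior distribution."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Ambiguous bottom-up evidence slightly favours 'cat', but a strong learned
# top-down expectation of 'dog' dominates: posterior for 'dog' is 6/7.
percept = posterior(prior={"dog": 0.9, "cat": 0.1},
                    likelihood={"dog": 0.4, "cat": 0.6})
```

Graphical models generalize this one-step fusion by factorizing the joint distribution over many such variables, which is what makes hierarchical generative models tractable.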