Academic literature on the topic 'Parsimonious Neural Networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parsimonious Neural Networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Parsimonious Neural Networks"

1

Valsecchi, Cecile, Viviana Consonni, Roberto Todeschini, Marco Emilio Orlandi, Fabio Gosetti, and Davide Ballabio. "Parsimonious Optimization of Multitask Neural Network Hyperparameters." Molecules 26, no. 23 (November 30, 2021): 7254. http://dx.doi.org/10.3390/molecules26237254.

Abstract:
Neural networks are rapidly gaining popularity in chemical modeling and Quantitative Structure–Activity Relationship (QSAR) thanks to their ability to handle multitask problems. However, the outcomes of neural networks depend on the tuning of several hyperparameters, whose small variations can often strongly affect performance. Hence, optimization is a fundamental step in training neural networks although, in many cases, it can be very expensive from a computational point of view. In this study, we compared four of the most widely used approaches for tuning hyperparameters, namely grid search, random search, tree-structured Parzen estimator, and genetic algorithms, on three multitask QSAR datasets. We focused on parsimonious optimization, taking into account not only the performance of the neural networks but also the computational time required. Furthermore, since the optimization approaches do not directly provide information about the influence of hyperparameters, we applied experimental design strategies to determine their effects on neural network performance. We found that genetic algorithms, tree-structured Parzen estimator, and random search require on average 0.08% of the hours required by grid search; in addition, tree-structured Parzen estimator and genetic algorithms provide better results than random search.
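To make the cost gap concrete, here is a minimal sketch of plain random search over a hyperparameter space, in the spirit of the methods the study compares; the search space, parameter names, and trial budget are illustrative assumptions, not the paper's setup.

```python
import random

# Hypothetical search space for a small multitask network; the names
# and ranges are illustrative, not taken from the paper.
SPACE = {
    "hidden_units": [16, 32, 64, 128],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout": [0.0, 0.2, 0.5],
}

def random_search(evaluate, space, n_trials=20, seed=0):
    """Evaluate a fixed budget of random configurations and keep the
    best one. The budget is independent of the grid size, which is why
    random search can cost a tiny fraction of an exhaustive grid search
    on large spaces. `evaluate` is user-supplied and returns a
    validation score to maximize."""
    rng = random.Random(seed)
    best_score, best_cfg = float("-inf"), None
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```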
2

Wang, Ning, Meng Joo Er, Xian-Yao Meng, and Xiang Li. "An Online Self-Organizing Scheme for Parsimonious and Accurate Fuzzy Neural Networks." International Journal of Neural Systems 20, no. 05 (October 2010): 389–403. http://dx.doi.org/10.1142/s0129065710002486.

Abstract:
In this paper, an online self-organizing scheme for Parsimonious and Accurate Fuzzy Neural Networks (PAFNN) and a novel structure learning algorithm that incorporates a pruning strategy into novel growth criteria are presented. The proposed growing procedure, which needs no separate pruning phase, not only simplifies the online learning process but also facilitates the formation of a more parsimonious fuzzy neural network. By virtue of optimal parameter identification, high performance and accuracy can be obtained. The learning phase of the PAFNN involves two stages, namely structure learning and parameter learning. In structure learning, the PAFNN starts with no hidden neurons and parsimoniously generates new hidden units according to the proposed growth criteria as learning proceeds. In parameter learning, the parameters in the premises and consequents of the fuzzy rules, regardless of whether they are newly created or already in existence, are updated by the extended Kalman filter (EKF) method and the linear least squares (LLS) algorithm, respectively. This parameter adjustment paradigm enables optimization of the parameters in each learning epoch so that high performance can be achieved. The effectiveness and superiority of the PAFNN paradigm are demonstrated by comparing the proposed method with state-of-the-art methods. Simulation results on various benchmark problems in the areas of function approximation, nonlinear dynamic system identification, and chaotic time-series prediction demonstrate that the proposed PAFNN algorithm achieves a more parsimonious network structure, higher approximation accuracy, and better generalization simultaneously.
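As a rough illustration of the kind of growth criteria such self-organizing schemes use, a new hidden neuron is allocated only when the current error is large and no existing rule covers the input; the thresholds and distance test below are generic stand-ins, not the PAFNN's actual criteria.

```python
import numpy as np

def should_grow(x, error, centers, e_threshold=0.1, d_threshold=0.5):
    """Allocate a new hidden neuron only if the current output error is
    large AND the input lies far from every existing rule center.
    Thresholds are generic placeholders, not the PAFNN's criteria."""
    if not centers:
        return True  # the network starts with no hidden neurons
    nearest = min(np.linalg.norm(x - c) for c in centers)
    return abs(error) > e_threshold and nearest > d_threshold
```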
3

Levi, Regev, Eytan Ruppin, Yossi Matias, and James A. Reggia. "Frequency-Spatial Transformation: A Proposal for Parsimonious Intra-Cortical Communication." International Journal of Neural Systems 07, no. 05 (November 1996): 591–98. http://dx.doi.org/10.1142/s0129065796000579.

Abstract:
This work examines a neural network model of a cortical module, where neurons are organized on a 2-dimensional sheet and are connected with higher probability to their spatial neighbors. Motivated by recent findings that cortical neurons have a resonant peak in their impedance magnitude function, we present a frequency-spatial transformation scheme that is schematically described as follows: An external input signal, applied to a small input subset of the neurons, spreads along the network. Due to a stochastic component in the dynamics of the neurons, the frequency of the spreading signal decreases as it propagates through the network. Depending on the input signal frequency, different neural assemblies will hence fire at their specific resonance frequency. We show analytically that the resulting frequency-spatial transformation is well-formed; an injective, fixed mapping is obtained. Extensive numerical simulations demonstrate that a homogeneous, well-formed transformation may also be obtained in neural networks with cortical-like "Mexican-hat" connectivity. We hypothesize that a frequency-spatial transformation may serve as a basis for parsimonious cortical communication.
4

Tian, Ye, Yue-Ping Xu, Zongliang Yang, Guoqing Wang, and Qian Zhu. "Integration of a Parsimonious Hydrological Model with Recurrent Neural Networks for Improved Streamflow Forecasting." Water 10, no. 11 (November 14, 2018): 1655. http://dx.doi.org/10.3390/w10111655.

Abstract:
This study applied the GR4J model in the Xiangjiang and Qujiang River basins for rainfall-runoff simulation. Four recurrent neural networks (RNNs)—the Elman recurrent neural network (ERNN), echo state network (ESN), nonlinear autoregressive exogenous inputs neural network (NARX), and long short-term memory (LSTM) network—were applied to predicting discharges. The performances of the models were compared and assessed, and the best two RNNs were selected and integrated with the lumped hydrological model GR4J to forecast discharges; meanwhile, the uncertainties of the simulated discharges were estimated. The generalized likelihood uncertainty estimation method was applied to quantify the uncertainties. The results show that the LSTM and NARX captured the time-series dynamics better than the other RNNs. The hybrid models improved the prediction of high, median, and low flows, particularly by reducing the underestimation bias for high flows in the Xiangjiang River basin. The hybrid models reduced the uncertainty intervals by more than 50% for median and low flows and increased the coverage ratios of the observations. Integrating a hydrological model with a recurrent neural network that considers long-term dependencies is recommended for discharge forecasting.
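One plausible way to wire such a hybrid, sketched below under assumptions of my own (the window length, layer sizes, and exact feature choice are not taken from the paper): feed the GR4J-simulated discharge alongside rainfall as input features to an LSTM that predicts the observed discharge.

```python
import numpy as np
import tensorflow as tf

# Hypothetical setup: 7-day input windows with two features per step,
# observed rainfall and the GR4J-simulated discharge. Window length and
# layer sizes are illustrative, not the paper's settings.
WINDOW, N_FEATURES = 7, 2

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(32),   # captures long-term dependencies
    tf.keras.layers.Dense(1),   # next-day discharge
])
model.compile(optimizer="adam", loss="mse")

# Placeholder data standing in for basin records.
x = np.random.rand(100, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(100, 1).astype("float32")
model.fit(x, y, epochs=5, verbose=0)
```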
5

Zhang, Byoung-Tak, Peter Ohm, and Heinz Mühlenbein. "Evolutionary Induction of Sparse Neural Trees." Evolutionary Computation 5, no. 2 (June 1997): 213–36. http://dx.doi.org/10.1162/evco.1997.5.2.213.

Abstract:
This paper is concerned with the automatic induction of parsimonious neural networks. In contrast to other program induction situations, network induction entails parametric learning as well as structural adaptation. We present a novel representation scheme called neural trees that allows efficient learning of both network architectures and parameters by genetic search. A hybrid evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle. The method is successfully applied to the induction of higher order neural trees while still keeping the resulting structures sparse to ensure good generalization performance. Empirical results are provided on two chaotic time series prediction problems of practical interest.
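The minimum description length principle referred to here can be captured, very schematically, as a fitness that adds a complexity penalty to the prediction error; the penalty form below is a simplified stand-in for the paper's actual coding scheme.

```python
def mdl_fitness(error_cost, n_weights, n_nodes, alpha=1.0, beta=1.0):
    """Minimum-description-length style fitness: the genetic search
    minimizes prediction error plus a complexity term, so sparser
    neural trees win ties. This complexity measure is a simplified
    stand-in for the paper's actual coding scheme."""
    complexity = alpha * n_weights + beta * n_nodes
    return error_cost + complexity
```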
6

Morchid, Mohamed. "Parsimonious memory unit for recurrent neural networks with application to natural language processing." Neurocomputing 314 (November 2018): 48–64. http://dx.doi.org/10.1016/j.neucom.2018.05.081.

7

Wang, Ning, Meng Joo Er, and Xianyao Meng. "A fast and accurate online self-organizing scheme for parsimonious fuzzy neural networks." Neurocomputing 72, no. 16-18 (October 2009): 3818–29. http://dx.doi.org/10.1016/j.neucom.2009.05.006.

8

Yang, Da Lin, Wei Dong Yang, and Zhu Zhang. "Online Adaptive Fuzzy Neural Identification of a Piezoelectric Tube Actuator System." Applied Mechanics and Materials 275-277 (January 2013): 915–24. http://dx.doi.org/10.4028/www.scientific.net/amm.275-277.915.

Abstract:
A coupled actuator-flap-circuit system model and its online identification are presented. The coupled system consists of a piezoelectric tube actuator, a trailing-edge flap, and a series R-L-C circuit. The properties of the coupled system are examined using a Mach-scaled rotor simulation in the hovering state. Given the highly nonlinear hysteretic characteristics of the coupled system, generalized dynamic fuzzy neural networks (GD-FNN), which implement Takagi-Sugeno-Kang (TSK) fuzzy systems based on an extended ellipsoidal radial basis function (EBF) neural network, are used to identify the coupled system. The structure and parameters are adaptively adjusted during the learning process and require little expert experience. Simulation studies show that the piezoelectric tube actuator has high authority over a broad frequency bandwidth and satisfies the requirements for helicopter vibration reduction; the GD-FNN learns quickly, and the final networks have a parsimonious structure and generalize well, giving the approach broad application prospects in helicopter vibration reduction.
9

Galvão, Roberto Kawakami Harrop, and Takashi Yoneyama. "Improving the Discriminatory Capabilities of a Neural Classifier by Using a Biased-Wavelet Layer." International Journal of Neural Systems 09, no. 03 (June 1999): 167–74. http://dx.doi.org/10.1142/s0129065799000150.

Abstract:
In the context of wavelet neural networks (WNNs), two modifications to the basic training algorithms are proposed, namely the introduction of a bias component in the wavelets and the adoption of a weight decay policy. A problem of ECG segment classification is used for illustration purposes. Results suggest that the bias improves the discriminatory capabilities of the WNN, which also compares favourably with a conventional perceptron classifier. The use of weight decay during training, followed by pruning, resulted in a more parsimonious network, which also turned out to be a more conservative classifier. The knowledge embedded in the wavelet layer is interpreted in terms of the concept of super-wavelets.
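A minimal sketch of the weight-decay-then-prune idea follows; the learning rate, decay coefficient, and pruning threshold are illustrative choices, not the paper's values.

```python
import numpy as np

def sgd_step_with_decay(w, grad, lr=0.01, decay=1e-4):
    """One gradient step with an L2 weight-decay term, which steadily
    shrinks weights that contribute little to reducing the error."""
    return w - lr * (grad + decay * w)

def prune(w, threshold=1e-3):
    """After training, zero out weights whose magnitude fell below the
    threshold, leaving a more parsimonious network. The threshold is
    an illustrative choice."""
    return np.where(np.abs(w) < threshold, 0.0, w)
```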
10

Gerber, B. S., T. G. Tape, R. S. Wigton, and P. S. Heckerling. "Selection of Predictor Variables for Pneumonia Using Neural Networks and Genetic Algorithms." Methods of Information in Medicine 44, no. 01 (2005): 89–97. http://dx.doi.org/10.1055/s-0038-1633927.

Abstract:
Background: Artificial neural networks (ANNs) can be used to select sets of predictor variables that incorporate nonlinear interactions between variables. We used a genetic algorithm, with selection based on maximizing network accuracy and minimizing network input-layer cardinality, to evolve parsimonious sets of variables for predicting community-acquired pneumonia among patients with respiratory complaints. Methods: ANNs were trained on data from 1044 patients in a training cohort and were applied to 116 patients in a testing cohort. Chromosomes with binary genes representing input-layer variables were operated on by crossover recombination, mutation, and probabilistic selection based on a fitness function incorporating both network accuracy and input-layer cardinality. Results: The genetic algorithm evolved best 10-variable sets that discriminated pneumonia in the training cohort (ROC areas, 0.838 for selection based on average cross entropy (ENT); 0.954 for selection based on ROC area (ROC)) and in the testing cohort (ROC areas, 0.847 for ENT selection; 0.963 for ROC selection), with no significant differences between cohorts. Best variable sets based on the genetic algorithm using ROC selection discriminated pneumonia more accurately than variable sets based on stepwise neural networks (ROC areas, 0.954 versus 0.879, p = 0.030) or stepwise logistic regression (ROC areas, 0.954 versus 0.830, p < 0.001). Variable sets of lower cardinalities were also evolved and also accurately discriminated pneumonia. Conclusion: Variable sets derived using a genetic algorithm for neural networks accurately discriminated pneumonia from other respiratory conditions, and in some cases did so with greater accuracy than variables derived using stepwise neural networks or logistic regression.
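Schematically, the trade-off described here can be written as a reward for discrimination minus a penalty on the number of selected inputs; the linear penalty and its weight in the sketch below are my own simplifications, not the paper's exact fitness function.

```python
import numpy as np

def fitness(chromosome, roc_area, penalty=0.01):
    """Score a binary chromosome whose genes switch input variables on
    or off: reward discrimination (ROC area of the network trained on
    the selected inputs) and penalize input-layer cardinality. The
    linear penalty and its weight are simplifying assumptions."""
    cardinality = int(np.sum(chromosome))
    return roc_area - penalty * cardinality
```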

Dissertations / Theses on the topic "Parsimonious Neural Networks"

1

Jang, Wen-Sheng (張文昇). "A Simplified and Parsimonious Type-2 Fuzzy Neural Network with Two-Stage Learning and FPGA Implementation." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/52624787906811863553.

Abstract:
Master's thesis, Department of Electrical Engineering, National Chung Hsing University, 2011 (ROC academic year 99).
This paper proposes a Simplified and Parsimonious Type-2 Fuzzy Neural Network with two-stage learning (SPT2FNN). The antecedent part of each fuzzy rule in the SPT2FNN uses interval type-2 fuzzy sets, and the consequent part is of Takagi-Sugeno-Kang (TSK) type. The SPT2FNN uses a simplified extended-output-calculation operation to reduce computation time and hardware implementation cost. The initial rule set in the SPT2FNN is empty. The SPT2FNN uses a two-stage learning algorithm to construct interval type-2 fuzzy rules by extending type-1 fuzzy rules. The objective of the first stage is to construct type-1 fuzzy rules via online structure learning and parameter learning. The second stage first extends the constructed type-1 fuzzy rules to interval type-2 fuzzy rules, where highly overlapped type-1 fuzzy sets are merged into interval type-2 fuzzy sets to reduce the total number of fuzzy sets. This stage then tunes the consequent and antecedent parameters of the type-2 fuzzy rules using a rule-ordered Kalman filter algorithm and a gradient descent algorithm, respectively. The SPT2FNN has been applied in simulations to system identification, stock price prediction, chaotic signal prediction, real time-series prediction, and robot arm mapping problems. Comparisons with several type-1 and type-2 fuzzy systems in these examples verify the effectiveness and efficiency of the SPT2FNN. A new hardware circuit is proposed to implement the learned SPT2FNN in an FPGA chip; the simplified operation in the SPT2FNN helps reduce hardware implementation cost.
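The merging step can be pictured as follows: when two type-1 Gaussian sets are sufficiently similar, they are replaced by one interval type-2 set whose footprint of uncertainty covers both. The similarity measure and threshold below are illustrative stand-ins for the thesis's actual merging rule.

```python
import numpy as np

def merge_to_interval_type2(m1, s1, m2, s2, threshold=0.8):
    """If two type-1 Gaussian sets (mean m, width s) overlap heavily,
    replace them with one interval type-2 set: the center becomes the
    interval [min(m), max(m)] and the width the larger of the two.
    The similarity measure and threshold are illustrative stand-ins
    for the thesis's actual merging rule."""
    similarity = float(np.exp(-((m1 - m2) ** 2) / (2.0 * (s1 + s2) ** 2)))
    if similarity < threshold:
        return None  # not similar enough; keep both type-1 sets
    return (min(m1, m2), max(m1, m2)), max(s1, s2)
```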

Book chapters on the topic "Parsimonious Neural Networks"

1

Sezener, Can Eren, and Erhan Oztop. "Algorithms for Obtaining Parsimonious Higher Order Neurons." In Artificial Neural Networks and Machine Learning – ICANN 2017, 146–54. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68600-4_18.

2

Zhao, Qijun, Hongtao Lu, and David Zhang. "Parsimonious Feature Extraction Based on Genetic Algorithms and Support Vector Machines." In Advances in Neural Networks - ISNN 2006, 1387–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11759966_206.

3

Bashtova, Kateryna, Mathieu Causse, Cameron James, Florent Masmoudi, Mohamed Masmoudi, Houcine Turki, and Joshua Wolff. "Application of the Topological Gradient to Parsimonious Neural Networks." In Intelligent Systems, Control and Automation: Science and Engineering, 47–61. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70787-3_5.

4

Bossley, K. M., D. J. Mills, M. Brown, and C. J. Harris. "Construction and Design of Parsimonious Neurofuzzy Systems." In Neural Network Engineering in Dynamic Control Systems, 153–77. London: Springer London, 1995. http://dx.doi.org/10.1007/978-1-4471-3066-6_8.

5

Tan, Shing Chiang, Chee Peng Lim, and Junzo Watada. "A Parsimonious Radial Basis Function-Based Neural Network for Data Classification." In Intelligent Decision Technology Support in Practice, 49–60. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21209-8_4.

6

Knowles, Adam, Abir Hussain, Wael El Deredy, Paulo G. J. Lisboa, and Christian L. Dunis. "Higher Order Neural Networks with Bayesian Confidence Measure for the Prediction of the EUR/USD Exchange Rate." In Artificial Higher Order Neural Networks for Economics and Business, 48–59. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-59904-897-0.ch002.

Abstract:
Multi-Layer Perceptrons (MLPs) are the most common type of neural network in use, and their ability to perform complex nonlinear mappings and their tolerance to noise in data are well documented. However, MLPs also suffer from long training times and often reach only local optima. Another type of network is the Higher Order Neural Network (HONN). These can be considered a 'stripped-down' version of MLPs, where joint activation terms are used, relieving the network of the task of learning the relationships between the inputs. The predictive performance of the network is tested on the EUR/USD exchange rate and evaluated using standard financial criteria including the annualized return on investment, showing an 8% increase in the return compared with the MLP. The output of the networks that give the highest annualized return in each category was subjected to a Bayesian-based confidence measure. This performance improvement may be explained by the explicit and parsimonious representation of high-order terms in Higher Order Neural Networks, which combines robustness against noise, typical of distributed models, with the ability to accurately model higher-order interactions for long-term forecasting. The effectiveness of the confidence measure is explained by examining the distribution of each network's output. We speculate that the distribution can be taken into account during training, thus enabling us to produce neural networks with the properties to take advantage of the confidence measure.
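The 'joint activation terms' that distinguish a HONN from an MLP can be made explicit by augmenting the input vector with pairwise products, as in this small sketch; this is a generic second-order construction, not the chapter's exact network.

```python
import numpy as np
from itertools import combinations

def second_order_inputs(x):
    """Augment an input vector with all pairwise products -- explicit
    'joint activation terms' -- so the network no longer has to learn
    these interactions itself (a generic second-order construction)."""
    pairs = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([x, np.asarray(pairs)])
```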

Conference papers on the topic "Parsimonious Neural Networks"

1

Chen, Yuting, and Meng Joo Er. "Biomedical diagnosis and prediction using parsimonious fuzzy neural networks." In IECON 2012 - 38th Annual Conference of IEEE Industrial Electronics. IEEE, 2012. http://dx.doi.org/10.1109/iecon.2012.6388524.

2

Sridhar, Shailesh, Snehanshu Saha, Azhar Shaikh, Rahul Yedida, and Sriparna Saha. "Parsimonious Computing: A Minority Training Regime for Effective Prediction in Large Microarray Expression Data Sets." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207083.

3

Maya, Haroldo C., and Guilherme A. Barreto. "A GA-Based Approach for Building Regularized Sparse Polynomial Models for Wind Turbine Power Curves." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4455.

Abstract:
In this paper, the classical polynomial model for wind turbine power curve estimation is revisited with an automatic and parsimonious design in mind. Using genetic algorithms, we introduce a methodology for estimating a suitable order for the polynomial as well as its relevant terms. The proposed methodology is compared with the state of the art in estimating the power curve of wind turbines, such as logistic models (with 4 and 5 parameters), artificial neural networks, and weighted polynomial regression. We also show that the proposed approach performs better than the standard LASSO approach for building regularized sparse models. The results indicate that the proposed methodology consistently outperforms all the evaluated alternative methods.
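A sketch of how a binary chromosome can select polynomial terms for a power-curve model, with least squares fitting the coefficients of the surviving terms; the encoding and degree bound are assumptions made for illustration, not the paper's exact setup.

```python
import numpy as np

def decode_and_fit(chromosome, x, y):
    """Genes switch powers of wind speed on or off; least squares then
    fits the coefficients of the selected terms only. Returns the
    coefficients, the selected powers, and the mean squared residual."""
    powers = [d for d, gene in enumerate(chromosome) if gene]
    if not powers:
        return None, [], float(np.mean(y**2))
    basis = np.column_stack([x**d for d in powers])
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    residual = y - basis @ coef
    return coef, powers, float(np.mean(residual**2))
```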
4

Khayat, Omid, Javad Razjouyan, Hadi ChahkandiNejad, Mahdi Mohammad Abadi, and Mohammad Mehdi Ebadzadeh. "Fast and parsimonious self-organizing fuzzy neural network." In 2009 14th International CSI Computer Conference (CSICC 2009) (Postponed from July 2009). IEEE, 2009. http://dx.doi.org/10.1109/csicc.2009.5349637.

5

Wang, Ning, Xianyao Meng, and Qingyang Xu. "A fast and parsimonious fuzzy neural network (FPFNN) for function approximation." In 2009 Joint 48th IEEE Conference on Decision and Control (CDC) and 28th Chinese Control Conference (CCC). IEEE, 2009. http://dx.doi.org/10.1109/cdc.2009.5400146.

6

Grobler, T. L., W. Kleynhans, and B. P. Salmon. "A Parsimonious Neural Network for the Classification of Modis Time-Series." In IGARSS 2021 - 2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021. http://dx.doi.org/10.1109/igarss47720.2021.9554069.
