Academic literature on the topic 'Bootstrapping neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bootstrapping neural networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Bootstrapping neural networks"

1. Franke, Jürgen, and Michael H. Neumann. "Bootstrapping Neural Networks." Neural Computation 12, no. 8 (August 1, 2000): 1929–49. http://dx.doi.org/10.1162/089976600300015204.

Abstract:
Knowledge about the distribution of a statistical estimator is important for various purposes, such as the construction of confidence intervals for model parameters or the determination of critical values of tests. A widely used method to estimate this distribution is the so-called bootstrap, which is based on an imitation of the probabilistic structure of the data-generating process on the basis of the information provided by a given set of random observations. In this article we investigate this classical method in the context of artificial neural networks used for estimating a mapping from input to output space. We establish consistency results for bootstrap estimates of the distribution of parameter estimates.
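The bootstrap principle summarised in this abstract can be sketched in a few lines. Below is a minimal illustration, with an ordinary least-squares slope standing in for the network's parameter estimate; the data-generating process, sample sizes, and all names are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data-generating process: y = 2x + noise (the "mapping" a network would learn).
n = 200
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.normal(0, 0.3, n)

def fit_slope(x, y):
    """Least-squares slope, standing in for a network's parameter estimate."""
    return float(np.sum(x * y) / np.sum(x * x))

theta_hat = fit_slope(x, y)

# Nonparametric bootstrap: resample (x, y) pairs with replacement and refit,
# imitating the probabilistic structure of the data-generating process
# from the observed sample alone.
B = 500
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = fit_slope(x[idx], y[idx])

# Percentile confidence interval for the parameter, one of the uses
# (confidence intervals, critical values) mentioned in the abstract.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate={theta_hat:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

The same resample-and-refit loop applies unchanged when `fit_slope` is replaced by a full network training run, which is the setting the paper analyses.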
2. Li, Xiangsheng, Yanghui Rao, Haoran Xie, Raymond Yiu Keung Lau, Jian Yin, and Fu Lee Wang. "Bootstrapping Social Emotion Classification with Semantically Rich Hybrid Neural Networks." IEEE Transactions on Affective Computing 8, no. 4 (October 1, 2017): 428–42. http://dx.doi.org/10.1109/taffc.2017.2716930.

3. Mistry, Sajib, Lie Qu, and Athman Bouguettaya. "Layer-based Composite Reputation Bootstrapping." ACM Transactions on Internet Technology 22, no. 1 (February 28, 2022): 1–28. http://dx.doi.org/10.1145/3448610.

Abstract:
We propose a novel generic reputation bootstrapping framework for composite services. Multiple reputation-related indicators are considered in a layer-based framework to implicitly reflect the reputation of the component services. The importance of an indicator on the future performance of a component service is learned using a modified Random Forest algorithm. We propose a topology-aware Forest Deep Neural Network (fDNN) to find the correlations between the reputation of a composite service and reputation indicators of component services. The trained fDNN model predicts the reputation of a new composite service with the confidence value. Experimental results with real-world dataset prove the efficiency of the proposed approach.
4. Álvarez-Aparicio, Claudia, Ángel Manuel Guerrero-Higueras, Luis V. Calderita, Francisco J. Rodríguez-Lera, Vicente Matellán, and Camino Fernández-Llamas. "Convolutional Neural Networks Refitting by Bootstrapping for Tracking People in a Mobile Robot." Applied Sciences 11, no. 21 (October 27, 2021): 10043. http://dx.doi.org/10.3390/app112110043.

Abstract:
Convolutional Neural Networks are usually fitted with manually labelled data. The labelling process is very time-consuming since large datasets are required. The use of external hardware may help in some cases, but it also introduces noise to the labelled data. In this paper, we pose a new data labelling approach by using bootstrapping to increase the accuracy of the PeTra tool. PeTra allows a mobile robot to estimate people’s location in its environment by using a LIDAR sensor and a Convolutional Neural Network. PeTra has some limitations in specific situations, such as scenarios where there are not any people. We propose to use the actual PeTra release to label the LIDAR data used to fit the Convolutional Neural Network. We have evaluated the resulting system by comparing it with the previous one—where LIDAR data were labelled with a Real Time Location System. The new release increases the MCC-score by 65.97%.
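The refit-by-bootstrapping idea above, using an existing model's own output to label fresh data, can be sketched as follows. Everything here is a hypothetical stand-in: a trivial threshold "model" plays the role of PeTra's CNN, and random vectors play the role of LIDAR scans:

```python
import numpy as np

rng = np.random.default_rng(1)

def current_model(scan):
    # Stand-in for the already-released model: flags "scans" whose mean
    # value exceeds 0.5. The real system uses a trained CNN.
    return int(scan.mean() > 0.5)

# Unlabelled data (random vectors standing in for LIDAR scans).
unlabelled = rng.uniform(0.0, 1.0, size=(1000, 16))

# Step 1: let the current model pseudo-label the new data.
pseudo_labels = np.array([current_model(s) for s in unlabelled])

# Step 2: keep only confident cases, i.e. far from the decision boundary --
# a common precaution when a model labels its own training data.
margin = np.abs(unlabelled.mean(axis=1) - 0.5)
confident = margin > 0.05
train_x, train_y = unlabelled[confident], pseudo_labels[confident]

# Step 3: "refit" a model on the pseudo-labelled set; here we simply search
# for the mean-threshold that best reproduces the pseudo-labels, where the
# real pipeline would retrain the CNN.
thresholds = np.linspace(0.3, 0.7, 41)
scores = [(np.mean((train_x.mean(axis=1) > t) == train_y), t) for t in thresholds]
best_acc, best_t = max(scores)
print(f"kept {confident.sum()} of {len(unlabelled)} scans, best threshold {best_t:.2f}")
```

The design choice in step 2 matters: filtering low-confidence pseudo-labels is what keeps the model's own errors from being amplified across refitting rounds.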
5. Yan, Yilin, Min Chen, Saad Sadiq, and Mei-Ling Shyu. "Efficient Imbalanced Multimedia Concept Retrieval by Deep Learning on Spark Clusters." International Journal of Multimedia Data Engineering and Management 8, no. 1 (January 2017): 1–20. http://dx.doi.org/10.4018/ijmdem.2017010101.

Abstract:
The classification of imbalanced datasets has recently attracted significant attention due to its implications in several real-world use cases. The classifiers developed on datasets with skewed distributions tend to favor the majority classes and are biased against the minority class. Despite extensive research interests, imbalanced data classification remains a challenge in data mining research, especially for multimedia data. Our attempt to overcome this hurdle is to develop a convolutional neural network (CNN) based deep learning solution integrated with a bootstrapping technique. Considering that convolutional neural networks are very computationally expensive coupled with big training datasets, we propose to extract features from pre-trained convolutional neural network models and feed those features to another fully connected neural network. Spark implementation shows promising performance of our model in handling big datasets with respect to feasibility and scalability.
6. Barth, R., J. IJsselmuiden, J. Hemming, and E. J. Van Henten. "Synthetic bootstrapping of convolutional neural networks for semantic plant part segmentation." Computers and Electronics in Agriculture 161 (June 2019): 291–304. http://dx.doi.org/10.1016/j.compag.2017.11.040.

7. Hsiao, Hsiao-Fen, Jiang-Chuan Huang, and Zheng-Wei Lin. "Portfolio construction using bootstrapping neural networks: evidence from global stock market." Review of Derivatives Research 23, no. 3 (July 25, 2019): 227–47. http://dx.doi.org/10.1007/s11147-019-09163-y.

8. Richman, Ronald, and Mario V. Wüthrich. "Nagging Predictors." Risks 8, no. 3 (August 4, 2020): 83. http://dx.doi.org/10.3390/risks8030083.

Abstract:
We define the nagging predictor, which, instead of using bootstrapping to produce a series of i.i.d. predictors, exploits the randomness of neural network calibrations to provide a more stable and accurate predictor than is available from a single neural network run. Convergence results for the family of Tweedie’s compound Poisson models, which are usually used for general insurance pricing, are provided. In the context of a French motor third-party liability insurance example, the nagging predictor achieves stability at portfolio level after about 20 runs. At an insurance policy level, we show that for some policies up to 400 neural network runs are required to achieve stability. Since working with 400 neural networks is impractical, we calibrate two meta models to the nagging predictor, one unweighted, and one using the coefficient of variation of the nagging predictor as a weight, finding that these latter meta networks can approximate the nagging predictor well, only with a small loss of accuracy.
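The core idea above, averaging over random network calibrations rather than bootstrap resamples, can be sketched with a toy network. The architecture, training loop, target function, and number of runs below are illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-1, 1, 64).reshape(-1, 1)
y = np.sin(2 * x)  # toy regression target

def train_net(seed, steps=300, lr=0.05, hidden=8):
    """One 'run': fit a tiny one-hidden-layer net from a random initialisation."""
    r = np.random.default_rng(seed)
    W1 = r.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)
        pred = h @ W2 + b2
        grad = 2 * (pred - y) / len(x)   # dMSE/dpred
        gW2 = h.T @ grad; gb2 = grad.sum(0)
        gz = (grad @ W2.T) * (1 - h ** 2)
        gW1 = x.T @ gz; gb1 = gz.sum(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Same data every time; only the random initialisation changes between runs.
runs = np.stack([train_net(seed) for seed in range(20)])
nagging = runs.mean(axis=0)  # the "nagging" (averaged) predictor

mse_single = np.mean([np.mean((r - y) ** 2) for r in runs])
mse_nagging = np.mean((nagging - y) ** 2)
print(f"mean single-run MSE={mse_single:.4f}, nagging MSE={mse_nagging:.4f}")
```

By Jensen's inequality the averaged predictor's MSE can never exceed the average single-run MSE, which is the stabilising effect the abstract describes at portfolio level.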
9. Kuan, Mei Ming, Chee Peng Lim, and Robert F. Harrison. "On Operating Strategies of the Fuzzy ARTMAP Neural Network: A Comparative Study." International Journal of Computational Intelligence and Applications 3, no. 1 (March 2003): 23–43. http://dx.doi.org/10.1142/s1469026803000847.

Abstract:
In this paper, the effectiveness of three different operating strategies applied to the Fuzzy ARTMAP (FAM) neural network in pattern classification tasks is analyzed and compared. Three types of FAM, namely average FAM, voting FAM, and ordered FAM, are formed for experimentation. In average FAM, a pool of the FAM networks is trained using random sequences of input patterns, and the performance metrics from multiple networks are averaged. In voting FAM, predictions from a number of FAM networks are combined using the majority-voting scheme to reach a final output. In ordered FAM, a pre-processing procedure known as the ordering algorithm is employed to identify a fixed sequence of input patterns for training the FAM network. Three medical data sets are employed to evaluate the performances of these three types of FAM. The results are analyzed and compared with those from other learning systems. Bootstrapping has also been used to analyze and quantify the results statistically.
10. Medina, Oded, Roi Yozevitch, and Nir Shvalb. "Synthetic Sensor Array Training Sets for Neural Networks." Journal of Sensors 2019 (September 10, 2019): 1–10. http://dx.doi.org/10.1155/2019/9254315.

Abstract:
It is often hard to relate the sensor’s electrical output to the physical scenario when a multidimensional measurement is of interest. An artificial neural network may be a solution. Nevertheless, if the training data set is extracted from a real experimental setup, it can become unreachable in terms of time resources. The same issue arises when the physical measurement is expected to extend across a wide range of values. This paper presents a novel method for overcoming the long training time in a physical experiment set up by bootstrapping a relatively small data set for generating a synthetic data set which can be used for training an artificial neural network. Such a method can be applied to various measurement systems that yield sensor output which combines simultaneous occurrences or wide-range values of physical phenomena of interest. We discuss to which systems our method may be applied. We exemplify our results on three study cases: a seismic sensor array, a linear array of strain gauges, and an optical sensor array. We present the experimental process, its results, and the resulting accuracies.

Dissertations / Theses on the topic "Bootstrapping neural networks"

1. Van Lierde, Boris. "Developing Box-Pushing Behaviours Using Evolutionary Robotics." Thesis, Högskolan Dalarna, Datateknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6250.

Abstract:
The context of this report and the IRIDIA laboratory are described in the preface. Evolutionary Robotics and the box-pushing task are presented in the introduction. The building of a test system supporting Evolutionary Robotics experiments is then detailed. This system is made of a robot simulator and a Genetic Algorithm. It is used to explore the possibility of evolving box-pushing behaviours. The bootstrapping problem is explained, and a novel approach for dealing with it is proposed, with results presented. Finally, ideas for extending this approach are presented in the conclusion.
2. Lin, Zheng-Wei (林政緯). "Portfolio Construction Using Bootstrapping Neural Networks." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/50166282657736133905.

Abstract:
Doctoral dissertation, National Yunlin University of Science and Technology, Doctoral Program, Graduate Institute of Management, academic year 99 (2010).
Despite having become firmly established as one of the major cornerstone principles of modern finance, traditional Markowitz mean-variance analysis has, nevertheless, failed to gain widespread acceptance as a practical tool for equity management. The Markowitz optimization enigma essentially centers on the severe estimation risk associated with the input parameters, as well as the resultant financially irrelevant or even false optimal portfolios and asset allocation proposals. We therefore propose a portfolio construction method in the present study which incorporates the adoption of bootstrapping neural network architecture. In specific terms, a residual bootstrapping sample, which is derived from multilayer feedforward neural networks, is incorporated into the estimation of the expected returns and the covariance matrix, which are then, in turn, integrated into the traditional Markowitz optimization procedure. The efficacy of our proposed approach is illustrated by comparing it with traditional Markowitz mean-variance analysis, as well as the James-Stein and minimum-variance estimators, with the empirical results indicating that this novel approach significantly outperforms the benchmark models, in terms of various risk-adjusted performance measures. The evidence provided by this study suggests that this new approach has significant promise with regard to the enhancement of the investment value of Markowitz mean-variance analysis.
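The residual-bootstrap step described in this abstract can be sketched as follows, with a simple per-asset mean model standing in for the multilayer feedforward network; all data, asset counts, and parameters are simulated for illustration only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated monthly returns for 3 assets (stand-ins for real market data).
T, n_assets = 120, 3
returns = rng.normal([0.01, 0.008, 0.012], [0.05, 0.04, 0.06], size=(T, n_assets))

# Step 1: fit a model per asset (here just the sample mean, standing in for
# the thesis's feedforward network) and collect its residuals.
fitted = returns.mean(axis=0)
residuals = returns - fitted

# Step 2: residual bootstrap -- resample residual rows with replacement,
# add them back to the fitted values, and re-estimate the Markowitz inputs.
B = 300
mu_boot = np.empty((B, n_assets))
cov_boot = np.empty((B, n_assets, n_assets))
for b in range(B):
    sample = fitted + residuals[rng.integers(0, T, T)]
    mu_boot[b] = sample.mean(axis=0)
    cov_boot[b] = np.cov(sample, rowvar=False)

# Step 3: bootstrap-smoothed expected returns and covariance matrix, which
# would then enter the usual mean-variance optimization.
mu_hat = mu_boot.mean(axis=0)
sigma_hat = cov_boot.mean(axis=0)
print("expected returns:", np.round(mu_hat, 4))
```

The point of the bootstrap layer is to dampen the estimation risk in the inputs before they reach the optimizer, since mean-variance weights are notoriously sensitive to errors in the expected-return vector.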

Book chapters on the topic "Bootstrapping neural networks"

1. Allende, Héctor, Ricardo Ñanculef, and Rodrigo Salas. "Robust Bootstrapping Neural Networks." In MICAI 2004: Advances in Artificial Intelligence, 813–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24694-7_84.

2. Zadel, Stefan. "An algorithm for bootstrapping the core of a biologically inspired motor control system." In Artificial Neural Networks — ICANN 96, 629–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61510-5_107.

3. Chillotti, Ilaria, Marc Joye, and Pascal Paillier. "Programmable Bootstrapping Enables Efficient Homomorphic Inference of Deep Neural Networks." In Lecture Notes in Computer Science, 1–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78086-9_1.

4. Ai, Na, Jinye Peng, Jun Wang, Lin Wang, and Jin Qi. "Single Image Super-Resolution by Learned Double Sparsity Dictionaries Combining Bootstrapping Method." In Artificial Neural Networks and Machine Learning – ICANN 2017, 565–73. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68612-7_64.

5. Yan, Yilin, Min Chen, Saad Sadiq, and Mei-Ling Shyu. "Efficient Imbalanced Multimedia Concept Retrieval by Deep Learning on Spark Clusters." In Deep Learning and Neural Networks, 274–94. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0414-7.ch017.

Abstract:
The classification of imbalanced datasets has recently attracted significant attention due to its implications in several real-world use cases. The classifiers developed on datasets with skewed distributions tend to favor the majority classes and are biased against the minority class. Despite extensive research interests, imbalanced data classification remains a challenge in data mining research, especially for multimedia data. Our attempt to overcome this hurdle is to develop a convolutional neural network (CNN) based deep learning solution integrated with a bootstrapping technique. Considering that convolutional neural networks are very computationally expensive coupled with big training datasets, we propose to extract features from pre-trained convolutional neural network models and feed those features to another fully connected neural network. Spark implementation shows promising performance of our model in handling big datasets with respect to feasibility and scalability.
6. Ak, R., V. Vitelli, and E. Zio. "Uncertainty modeling in wind power generation prediction by neural networks and bootstrapping." In Safety, Reliability and Risk Analysis, 3191–96. CRC Press, 2013. http://dx.doi.org/10.1201/b15938-483.

7. Li, Fangjun, and Gao Niu. "US Medical Expense Analysis Through Frequency and Severity Bootstrapping and Regression Model." In Biomedical and Business Applications Using Artificial Neural Networks and Machine Learning, 177–207. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8455-2.ch007.

Abstract:
For the purpose of controlling health expenditures, some papers have investigated the characteristics of patients who may incur high expenditures. However, fewer papers are based on overall medical conditions, so this chapter aims to find a relationship among the prevalence of medical conditions, utilization of healthcare services, and average expenses per person. The authors used bootstrapping simulation for data preprocessing and then used linear regression and random forest methods to train several models. The metrics root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE) all showed that the selected linear regression model performs slightly better than the selected random forest regression model; the linear model used medical conditions, type of services, and their interaction terms as predictors.

Conference papers on the topic "Bootstrapping neural networks"

1. Nguyen, Hung, Matthew Garratt, and Hussein Abbass. "Apprenticeship Bootstrapping." In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489064.

2. Albert, Paul, Diego Ortego, Eric Arazo, Noel O'Connor, and Kevin McGuinness. "ReLaB: Reliable Label Bootstrapping for Semi-Supervised Learning." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533616.

3. Kirstein, Stephan, Heiko Wersing, and Edgar Korner. "Towards autonomous bootstrapping for life-long learning categorization tasks." In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596344.

4. Chai, Sek, Kilho Son, and Jesse Hostetler. "Bootstrapping Deep Neural Networks from Approximate Image Processing Pipelines." In 2019 2nd Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications (EMC2). IEEE, 2019. http://dx.doi.org/10.1109/emc249363.2019.00009.

5. Anirudh, Rushil, and Jayaraman J. Thiagarajan. "Bootstrapping Graph Convolutional Neural Networks for Autism Spectrum Disorder Classification." In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683547.

6. Kottur, Satwik, Xiaoyu Wang, and Vitor Carvalho. "Exploring Personalized Neural Conversational Models." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/521.

Abstract:
Modeling dialog systems is currently one of the most active problems in Natural Language Processing. Recent advancement in Deep Learning has sparked an interest in the use of neural networks in modeling language, particularly for personalized conversational agents that can retain contextual information during dialog exchanges. This work carefully explores and compares several of the recently proposed neural conversation models, and carries out a detailed evaluation on the multiple factors that can significantly affect predictive performance, such as pretraining, embedding training, data cleaning, diversity reranking, evaluation setting, etc. Based on the tradeoffs of different models, we propose a new generative dialogue model conditioned on speakers as well as context history that outperforms all previous models on both retrieval and generative metrics. Our findings indicate that pretraining speaker embeddings on larger datasets, as well as bootstrapping word and speaker embeddings, can significantly improve performance (up to 3 points in perplexity), and that promoting diversity in using Mutual Information based techniques has a very strong effect in ranking metrics.
7. Yan, Lingyong, Xianpei Han, Ben He, and Le Sun. "Global Bootstrapping Neural Network for Entity Set Expansion." In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.331.

8. Vlaicu, Dan, and Mike Stojakovic. "Probabilistic Models to Approximate Highly Repetitive Linear and Nonlinear Finite Element Analyses of Nuclear Components." In ASME 2009 Pressure Vessels and Piping Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/pvp2009-77220.

Abstract:
In the development and technical support of nuclear plants, Engineers have to deal with highly repetitive finite element analyses that involve modeling of local variations of the initial design, local flaws due to corrosion-erosion effects, material properties degradation, and modifications of the loading conditions. This paper presents the development of generic models that emulate the behavior of a complex finite element model in a simplified form, with the statistical representation based on a sampling of base-model data for a variety of test cases. An improved Latin Hypercube algorithm is employed to generate the sampling points based on the number and the range of the variables that are considered in the design space. Four filling methods of the approximation models are discussed in this study: response surface, nonlinear, neural networks, and piecewise polynomial model. Furthermore, a bootstrapping procedure is employed to improve the confidence intervals of the original coefficients, and the single-factor or double-factor analysis of variance is applied to determine whether a significant influence exists between the investigated factors. Two numerical examples highlight the accuracy and efficiency of the methods. The first example is the linear elastic analysis of a pipe bend under pressure loading. The objective of the probabilistic assessment is to determine the relation between the loading conditions as well as the geometrical aspects of this elbow (pipe wall thickness, outside diameter, elbow radius, and maximum ovality tolerance) and the maximum stress in the elbow. The second example is an axisymmetric nozzle under primary and secondary cycling loads. Variations of the geometrical dimensions, nonlinear material properties, and cycling loading are taken as the input parameters, whereas the response variable is defined in terms of Melan’s theorem translated into the Nonlinear Superposition Method.
9. Grezl, Frantisek, and Martin Karafiat. "Semi-supervised bootstrapping approach for neural network feature extractor training." In 2013 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU). IEEE, 2013. http://dx.doi.org/10.1109/asru.2013.6707775.

10. de Peretti, Christian, and Carole Siani. "Bootstrapping tests for conditional heteroskedasticity based on artificial neural network." In Multiconference on "Computational Engineering in Systems Applications." IEEE, 2006. http://dx.doi.org/10.1109/cesa.2006.4281681.
