Journal articles on the topic "Non-parametric learning"

Below are the 50 best journal articles for research on the topic "Non-parametric learning".

1

Liu, Bing, Shi-Xiong Xia, and Yong Zhou. "Unsupervised non-parametric kernel learning algorithm". Knowledge-Based Systems 44 (May 2013): 1–9. http://dx.doi.org/10.1016/j.knosys.2012.12.008.

2

Esser, Pascal, Maximilian Fleissner, and Debarghya Ghoshdastidar. "Non-parametric Representation Learning with Kernels". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 11910–18. http://dx.doi.org/10.1609/aaai.v38i11.29077.

Abstract:
Unsupervised and self-supervised representation learning has become popular in recent years for learning useful features from unlabelled data. Representation learning has been mostly developed in the neural network literature, and other models for representation learning are surprisingly unexplored. In this work, we introduce and analyze several kernel-based representation learning approaches: Firstly, we define two kernel Self-Supervised Learning (SSL) models using contrastive loss functions and secondly, a Kernel Autoencoder (AE) model based on the idea of embedding and reconstructing data. We argue that the classical representer theorems for supervised kernel machines are not always applicable for (self-supervised) representation learning, and present new representer theorems, which show that the representations learned by our kernel models can be expressed in terms of kernel matrices. We further derive generalisation error bounds for representation learning with kernel SSL and AE, and empirically evaluate the performance of these methods in both small data regimes as well as in comparison with neural network based models.
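The kernel SSL and kernel autoencoder models above are specific to that paper, but the underlying idea of a representation expressed entirely through a kernel matrix can be illustrated with classical kernel PCA. The sketch below is a generic, hypothetical example (dataset, kernel, and parameters are arbitrary choices), not the authors' models.

```python
# Generic illustration of kernel-based (non-parametric) representation learning.
# This is plain kernel PCA, not the kernel SSL / kernel AE models of the paper.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# The learned embedding is expressed through the kernel matrix (representer-style),
# so no parametric encoder network is involved.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=15.0, fit_inverse_transform=True)
Z = kpca.fit_transform(X)          # non-parametric 2-D representation
X_rec = kpca.inverse_transform(Z)  # reconstruction, loosely analogous to an autoencoder

print("embedding shape:", Z.shape)
print("reconstruction error:", np.mean((X - X_rec) ** 2))
```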
3

Cruz, David Luviano, Francesco José García Luna, and Luis Asunción Pérez Domínguez. "Multiagent reinforcement learning using Non-Parametric Approximation". Respuestas 23, no. 2 (July 1, 2018): 53–61. http://dx.doi.org/10.22463/0122820x.1738.

Abstract:
This paper presents a hybrid control proposal for multi-agent systems in which the advantages of reinforcement learning and non-parametric functions are exploited. A modified version of the Q-learning algorithm is used to provide training data for a kernel; this approach yields a sub-optimal set of actions to be used by the agents. The proposed algorithm is experimentally tested in a path-generation task for mobile robots in an unknown environment.
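For readers unfamiliar with the building block, the snippet below shows a textbook tabular Q-learning update on a hypothetical one-dimensional grid world; the learned state-action values are the kind of training data that could later be fed to a kernel approximator. It is not the multi-agent algorithm of the paper.

```python
# Textbook tabular Q-learning on a toy 1-D grid world (goal = right-most state).
# The resulting Q-table is the sort of data a kernel regressor could be fitted to.
import numpy as np

n_states, n_actions = 10, 2                 # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(0)

for episode in range(2000):
    s = 0
    for _ in range(200):                    # step cap per episode
        a = int(rng.integers(n_actions))    # uniformly random behaviour policy
        s_next = min(n_states - 1, max(0, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == n_states - 1 else 0.0
        # off-policy Q-learning update; the greedy policy is learned anyway
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break

print("greedy policy (1 = move right):", Q.argmax(axis=1))
```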
4

Khadse, Vijay M., Parikshit Narendra Mahalle, and Gitanjali R. Shinde. "Statistical Study of Machine Learning Algorithms Using Parametric and Non-Parametric Tests". International Journal of Ambient Computing and Intelligence 11, no. 3 (July 2020): 80–105. http://dx.doi.org/10.4018/ijaci.2020070105.

Abstract:
The emerging area of the internet of things (IoT) generates a large amount of data from IoT applications such as health care, smart cities, etc. This data needs to be analyzed in order to derive useful inferences. Machine learning (ML) plays a significant role in analyzing such data. It becomes difficult to select optimal algorithm from the available set of algorithms/classifiers to obtain best results. The performance of algorithms differs when applied to datasets from different application domains. In learning, it is difficult to understand if the difference in performance is real or due to random variation in test data, training data, or internal randomness of the learning algorithms. This study takes into account these issues during a comparison of ML algorithms for binary and multivariate classification. It helps in providing guidelines for statistical validation of results. The results obtained show that the performance measure of accuracy for one algorithm differs by critical difference (CD) than others over binary and multivariate datasets obtained from different application domains.
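The kind of statistical validation the study advocates is commonly done with a Friedman test followed by a Nemenyi critical-difference comparison. The sketch below uses made-up accuracy scores and scipy/numpy; it illustrates the procedure generically rather than reproducing the paper's experiments.

```python
# Non-parametric comparison of classifiers across datasets:
# Friedman test + Nemenyi critical difference (accuracy values are invented).
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# rows = datasets, columns = algorithms A, B, C
acc = np.array([[0.91, 0.89, 0.85],
                [0.82, 0.84, 0.80],
                [0.77, 0.75, 0.70],
                [0.93, 0.92, 0.88],
                [0.68, 0.66, 0.61]])

stat, p = friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2])
print(f"Friedman chi2 = {stat:.3f}, p = {p:.4f}")

# Nemenyi critical difference: CD = q_alpha * sqrt(k(k+1) / (6N))
N, k = acc.shape
avg_ranks = rankdata(-acc, axis=1).mean(axis=0)   # rank 1 = best accuracy
q_alpha = 2.343                                   # q_0.05 for k = 3 (Demsar, 2006)
cd = q_alpha * np.sqrt(k * (k + 1) / (6 * N))
print("average ranks:", avg_ranks, "critical difference:", round(cd, 3))
```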
5

Yoa, Seungdong, Jinyoung Park, and Hyunwoo J. Kim. "Learning Non-Parametric Surrogate Losses With Correlated Gradients". IEEE Access 9 (2021): 141199–209. http://dx.doi.org/10.1109/access.2021.3120092.

6

Rutkowski, Leszek. "Non-parametric learning algorithms in time-varying environments". Signal Processing 18, no. 2 (October 1989): 129–37. http://dx.doi.org/10.1016/0165-1684(89)90045-5.

7

Liu, Mingming, Bing Liu, Chen Zhang, and Wei Sun. "Embedded non-parametric kernel learning for kernel clustering". Multidimensional Systems and Signal Processing 28, no. 4 (August 10, 2016): 1697–715. http://dx.doi.org/10.1007/s11045-016-0440-1.

8

Chen, Changyou, Junping Zhang, Xuefang He, and Zhi-Hua Zhou. "Non-Parametric Kernel Learning with robust pairwise constraints". International Journal of Machine Learning and Cybernetics 3, no. 2 (September 17, 2011): 83–96. http://dx.doi.org/10.1007/s13042-011-0048-6.

9

Kaur, Navdeep, Gautam Kunapuli, and Sriraam Natarajan. "Non-parametric learning of lifted Restricted Boltzmann Machines". International Journal of Approximate Reasoning 120 (May 2020): 33–47. http://dx.doi.org/10.1016/j.ijar.2020.01.003.

10

Wang, Mingyang, Zhenshan Bing, Xiangtong Yao, Shuai Wang, Huang Kai, Hang Su, Chenguang Yang, and Alois Knoll. "Meta-Reinforcement Learning Based on Self-Supervised Task Representation Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10157–65. http://dx.doi.org/10.1609/aaai.v37i8.26210.

Abstract:
Meta-reinforcement learning enables artificial agents to learn from related training tasks and adapt to new tasks efficiently with minimal interaction data. However, most existing research is still limited to narrow task distributions that are parametric and stationary, and does not consider out-of-distribution tasks during the evaluation, thus, restricting its application. In this paper, we propose MoSS, a context-based Meta-reinforcement learning algorithm based on Self-Supervised task representation learning to address this challenge. We extend meta-RL to broad non-parametric task distributions which have never been explored before, and also achieve state-of-the-art results in non-stationary and out-of-distribution tasks. Specifically, MoSS consists of a task inference module and a policy module. We utilize the Gaussian mixture model for task representation to imitate the parametric and non-parametric task variations. Additionally, our online adaptation strategy enables the agent to react at the first sight of a task change, thus being applicable in non-stationary tasks. MoSS also exhibits strong generalization robustness in out-of-distributions tasks which benefits from the reliable and robust task representation. The policy is built on top of an off-policy RL algorithm and the entire network is trained completely off-policy to ensure high sample efficiency. On MuJoCo and Meta-World benchmarks, MoSS outperforms prior works in terms of asymptotic performance, sample efficiency (3-50x faster), adaptation efficiency, and generalization robustness on broad and diverse task distributions.
11

Jung, Hyungjoo, and Kwanghoon Sohn. "Single Image Depth Estimation With Integration of Parametric Learning and Non-Parametric Sampling". Journal of Korea Multimedia Society 19, no. 9 (September 30, 2016): 1659–68. http://dx.doi.org/10.9717/kmms.2016.19.9.1659.

12

Tanwani, Ajay Kumar, and Sylvain Calinon. "Small-variance asymptotics for non-parametric online robot learning". International Journal of Robotics Research 38, no. 1 (December 11, 2018): 3–22. http://dx.doi.org/10.1177/0278364918816374.

Abstract:
Small-variance asymptotics is emerging as a useful technique for inference in large-scale Bayesian non-parametric mixture models. This paper analyzes the online learning of robot manipulation tasks with Bayesian non-parametric mixture models under small-variance asymptotics. The analysis yields a scalable online sequence clustering (SOSC) algorithm that is non-parametric in the number of clusters and the subspace dimension of each cluster. SOSC groups the new datapoint in low-dimensional subspaces by online inference in a non-parametric mixture of probabilistic principal component analyzers (MPPCA) based on a Dirichlet process, and captures the state transition and state duration information online in a hidden semi-Markov model (HSMM) based on a hierarchical Dirichlet process. A task-parameterized formulation of our approach autonomously adapts the model to changing environmental situations during manipulation. We apply the algorithm in a teleoperation setting to recognize the intention of the operator and remotely adjust the movement of the robot using the learned model. The generative model is used to synthesize both time-independent and time-dependent behaviors by relying on the principles of shared and autonomous control. Experiments with the Baxter robot yield parsimonious clusters that adapt online with new demonstrations and assist the operator in performing remote manipulation tasks.
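The small-variance asymptotics behind SOSC are easiest to see in their simplest form, the DP-means rule, in which a point opens a new cluster whenever it lies farther than a penalty lambda from every existing centroid. The sketch below is that minimal rule only (no subspaces, no hidden semi-Markov model), run on an arbitrary synthetic stream; it is a much-simplified cousin of the algorithm described above.

```python
# DP-means-style online clustering: the classic small-variance limit of a
# Dirichlet-process Gaussian mixture. lambda (lam) is a free penalty parameter.
import numpy as np

def dp_means_online(stream, lam):
    centroids, counts = [], []
    for x in stream:
        if not centroids:
            centroids.append(x.copy()); counts.append(1); continue
        d = [np.linalg.norm(x - c) for c in centroids]
        j = int(np.argmin(d))
        if d[j] ** 2 > lam:                 # too far from every cluster: open a new one
            centroids.append(x.copy()); counts.append(1)
        else:                               # otherwise update the nearest centroid online
            counts[j] += 1
            centroids[j] += (x - centroids[j]) / counts[j]
    return np.array(centroids)

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(m, 0.1, size=(50, 2)) for m in ([0, 0], [3, 3], [0, 3])])
rng.shuffle(data)
print(dp_means_online(data, lam=1.0))       # typically recovers ~3 centroids
```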
13

ZHANG, Chao, and Takuya AKASHI. "Two-Side Agreement Learning for Non-Parametric Template Matching". IEICE Transactions on Information and Systems E100.D, no. 1 (2017): 140–49. http://dx.doi.org/10.1587/transinf.2016edp7233.

14

Ma, Yuchao, and Hassan Ghasemzadeh. "LabelForest: Non-Parametric Semi-Supervised Learning for Activity Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4520–27. http://dx.doi.org/10.1609/aaai.v33i01.33014520.

Abstract:
Activity recognition is central to many motion analysis applications ranging from health assessment to gaming. However, the need for obtaining sufficiently large amounts of labeled data has limited the development of personalized activity recognition models. Semi-supervised learning has traditionally been a promising approach in many application domains to alleviate reliance on large amounts of labeled data by learning the label information from a small set of seed labels. Nonetheless, existing approaches perform poorly in highly dynamic settings, such as wearable systems, because some algorithms rely on predefined hyper-parameters or distribution models that needs to be tuned for each user or context. To address these challenges, we introduce LabelForest 1, a novel non-parametric semi-supervised learning framework for activity recognition. LabelForest has two algorithms at its core: (1) a spanning forest algorithm for sample selection and label inference; and (2) a silhouette-based filtering method to finalize label augmentation for machine learning model training. Our thorough analysis on three human activity datasets demonstrate that LabelForest achieves a labeling accuracy of 90.1% in presence of a skewed label distribution in the seed data. Compared to self-training and other sequential learning algorithms, LabelForest achieves up to 56.9% and 175.3% improvement in the accuracy on balanced and unbalanced seed data, respectively.
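The silhouette-based filtering step can be illustrated generically: keep a pseudo-labelled sample for model training only if its per-sample silhouette value is high. The sketch below uses k-means pseudo-labels on synthetic blobs and an arbitrary threshold; it is not LabelForest's spanning-forest inference.

```python
# Generic silhouette-based filtering of automatically inferred labels.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.5, random_state=0)
pseudo_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

sil = silhouette_samples(X, pseudo_labels)
keep = sil > 0.3                      # threshold is arbitrary for the example
X_train, y_train = X[keep], pseudo_labels[keep]
print(f"kept {keep.sum()} of {len(X)} pseudo-labelled samples for model training")
```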
15

Pareek, Parikshit, Chuan Wang, and Hung D. Nguyen. "Non-parametric probabilistic load flow using Gaussian process learning". Physica D: Nonlinear Phenomena 424 (October 2021): 132941. http://dx.doi.org/10.1016/j.physd.2021.132941.

16

Naeem, Muhammad, and Sohail Asghar. "Structure learning via non-parametric factorized joint likelihood function". Journal of Intelligent & Fuzzy Systems 27, no. 3 (2014): 1589–99. http://dx.doi.org/10.3233/ifs-141125.

17

Karumanchi, Sisir, Thomas Allen, Tim Bailey, and Steve Scheding. "Non-parametric Learning to Aid Path Planning over Slopes". International Journal of Robotics Research 29, no. 8 (May 4, 2010): 997–1018. http://dx.doi.org/10.1177/0278364910370241.

18

Dervilis, Nikolaos, Thomas E. Simpson, David J. Wagg, and Keith Worden. "Nonlinear modal analysis via non-parametric machine learning tools". Strain 55, no. 1 (October 15, 2018): e12297. http://dx.doi.org/10.1111/str.12297.

19

Barut, Emre, and Warren B. Powell. "Optimal learning for sequential sampling with non-parametric beliefs". Journal of Global Optimization 58, no. 3 (March 3, 2013): 517–43. http://dx.doi.org/10.1007/s10898-013-0050-5.

20

Lu, Zhong-Lin, Yukai Zhao, Jiajuan Liu, and Barbara Dosher. "Non-parametric Hierarchical Bayesian Modeling of the Learning Curve in Perceptual Learning". Journal of Vision 23, no. 9 (August 1, 2023): 5752. http://dx.doi.org/10.1167/jov.23.9.5752.

21

Gaviria-Chavarro, Javier, Isabel Cristina Rojas-Padilla, and Yury Vergara-López. "Virtual Learning Object (VLO) for Teaching and Learning Non-Parametric Statistical Methods". Tecné, Episteme y Didaxis: TED, no. 54 (July 1, 2023): 285–302. http://dx.doi.org/10.17227/ted.num54-14155.

Abstract:
Interpreting, understanding, and applying statistical knowledge presents, in many cases, some difficulties for students in the training process. For this reason, and thanks to the rise of information and communication technologies, a virtual object was developed for learning the statistical methods of Kruskal-Wallis, Mann-Whitney U and Wilcoxon, which are included in non-parametric statistics. The objective of this quasi-experimental design study was to apply the virtual object as a teaching-learning strategy for these three statistical methods after its creation and validation in order to support the training of students in biostatistics. The virtual learning object was evaluated by experts through the LORI instrument (a tool for evaluating learning objects based on nine variables), granting a quality level in the medium-high range according to the final weighting. The evaluation instrument and the comparative statistical analysis used in this process showed that the learning object is adequate for the purpose and objective set, concluding that there is a significant difference in the academic results of the students to whom this digital tool was applied.
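The three tests covered by the virtual learning object are available directly in scipy.stats; the snippet below runs them on small made-up samples.

```python
# Kruskal-Wallis, Mann-Whitney U, and Wilcoxon signed-rank tests on toy data.
from scipy.stats import kruskal, mannwhitneyu, wilcoxon

g1 = [12, 15, 14, 10, 13]
g2 = [22, 25, 19, 24, 21]
g3 = [16, 18, 17, 15, 19]

print("Kruskal-Wallis:", kruskal(g1, g2, g3))      # three or more independent groups
print("Mann-Whitney U:", mannwhitneyu(g1, g2))     # two independent groups
print("Wilcoxon signed-rank:", wilcoxon(g1, g3))   # two paired samples
```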
22

Deco, Gustavo, Ralph Neuneier, and Bernd Schümann. "Non-parametric Data Selection for Neural Learning in Non-stationary Time Series". Neural Networks 10, no. 3 (April 1997): 401–7. http://dx.doi.org/10.1016/s0893-6080(96)00108-6.

23

Pal, Dipan K., and Marios Savvides. "Non-Parametric Transformation Networks for Learning General Invariances from Data". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4667–74. http://dx.doi.org/10.1609/aaai.v33i01.33014667.

Abstract:
ConvNets, through their architecture, only enforce invariance to translation. In this paper, we introduce a new class of deep convolutional architectures called Non-Parametric Transformation Networks (NPTNs) which can learn general invariances and symmetries directly from data. NPTNs are a natural generalization of ConvNets and can be optimized directly using gradient descent. Unlike almost all previous works in deep architectures, they make no assumption regarding the structure of the invariances present in the data and in that aspect are flexible and powerful. We also model ConvNets and NPTNs under a unified framework called Transformation Networks (TN), which yields a better understanding of the connection between the two. We demonstrate the efficacy of NPTNs on data such as MNIST with extreme transformations and CIFAR10 where they outperform baselines, and further outperform several recent algorithms on ETH-80. They do so while having the same number of parameters. We also show that they are more effective than ConvNets in modelling symmetries and invariances from data, without the explicit knowledge of the added arbitrary nuisance transformations. Finally, we replace ConvNets with NPTNs within Capsule Networks and show that this enables Capsule Nets to perform even better.
24

Kardan, Ahmad Agha, and Samira Ghareh Gozlou. "A new non-parametric feature learning for supervised link prediction". International Journal of System Control and Information Processing 1, no. 4 (2015): 319. http://dx.doi.org/10.1504/ijscip.2015.075877.

25

Yang, Z., and C. W. Chan. "Learning control for non-parametric uncertainties with new convergence property". IET Control Theory & Applications 4, no. 10 (October 1, 2010): 2177–83. http://dx.doi.org/10.1049/iet-cta.2009.0458.

26

Wang, Yi, Bin Li, Yang Wang, Fang Chen, Bang Zhang, and Zhidong Li. "Robust Bayesian non-parametric dictionary learning with heterogeneous Gaussian noise". Computer Vision and Image Understanding 150 (September 2016): 31–43. http://dx.doi.org/10.1016/j.cviu.2016.05.015.

27

Li, Der-Chang, and Chun-Wu Yeh. "A non-parametric learning algorithm for small manufacturing data sets". Expert Systems with Applications 34, no. 1 (January 2008): 391–98. http://dx.doi.org/10.1016/j.eswa.2006.09.008.

28

Park, Yeonseok, Anthony Choi, and Keonwook Kim. "Parametric Estimations Based on Homomorphic Deconvolution for Time of Flight in Sound Source Localization System". Sensors 20, no. 3 (February 10, 2020): 925. http://dx.doi.org/10.3390/s20030925.

Abstract:
Vehicle-mounted sound source localization systems provide comprehensive information to improve driving conditions by monitoring the surroundings. The three-dimensional structure of vehicles hinders the omnidirectional sound localization system because of the long and uneven propagation. In the received signal, the flight times between microphones delivers the essential information to locate the sound source. This paper proposes a novel method to design a sound localization system based on the single analog microphone network. This article involves the flight time estimation for two microphones with non-parametric homomorphic deconvolution. The parametric methods are also suggested with Yule-walker, Prony, and Steiglitz-McBride algorithm to derive the coefficient values of the propagation model for flight time estimation. The non-parametric and Steiglitz-McBride method demonstrated significantly low bias and variance for 20 or higher ensemble average length. The Yule-walker and Prony algorithms showed gradually improved statistical performance for increased ensemble average length. Hence, the non-parametric and parametric homomorphic deconvolution well represent the flight time information. The derived non-parametric and parametric output with distinct length will serve as the featured information for a complete localization system based on machine learning or deep learning in future works.
29

Souaissi, Zina, Taha B. M. J. Ouarda, and André St-Hilaire. "Non-parametric, semi-parametric, and machine learning models for river temperature frequency analysis at ungauged basins". Ecological Informatics 75 (July 2023): 102107. http://dx.doi.org/10.1016/j.ecoinf.2023.102107.

30

Maddalena, Emilio T., and Colin N. Jones. "Learning Non-Parametric Models with Guarantees: A Smooth Lipschitz Regression Approach". IFAC-PapersOnLine 53, no. 2 (2020): 965–70. http://dx.doi.org/10.1016/j.ifacol.2020.12.1265.

31

Wang, Dongqi, Haoran Wei, Zhirui Zhang, Shujian Huang, Jun Xie, and Jiajun Chen. "Non-parametric Online Learning from Human Feedback for Neural Machine Translation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11431–39. http://dx.doi.org/10.1609/aaai.v36i10.21395.

Abstract:
We study the problem of online learning with human feedback in the human-in-the-loop machine translation, in which the human translators revise the machine-generated translations and then the corrected translations are used to improve the neural machine translation (NMT) system. However, previous methods require online model updating or additional translation memory networks to achieve high-quality performance, making them inflexible and inefficient in practice. In this paper, we propose a novel non-parametric online learning method without changing the model structure. This approach introduces two k-nearest-neighbor (KNN) modules: one module memorizes the human feedback, which is the correct sentences provided by human translators, while the other balances the usage of the history human feedback and original NMT models adaptively. Experiments conducted on EMEA and JRC-Acquis benchmarks demonstrate that our proposed method obtains substantial improvements on translation accuracy and achieves better adaptation performance with less repeating human correction operations.
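The core mechanism, a nearest-neighbour memory of human corrections queried at inference time, can be sketched independently of any NMT system. The toy example below uses a stand-in hashing embedder and invented sentence pairs; a real system would index the translation model's own context representations, as the paper's KNN modules do.

```python
# Toy nearest-neighbour "feedback memory": store vectors for source contexts together
# with human-corrected translations, and retrieve the closest correction at inference.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def embed(text, dim=64):
    """Stand-in hashing embedder; a real system would use the NMT encoder states."""
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.normal(size=dim)

memory_src = ["the patient receives one tablet daily",
              "store below 25 degrees celsius"]
memory_tgt = ["o paciente recebe um comprimido por dia",     # invented corrections
              "conservar abaixo de 25 graus celsius"]

index = NearestNeighbors(n_neighbors=1).fit(np.stack([embed(s) for s in memory_src]))

query = "the patient receives one tablet daily"
dist, idx = index.kneighbors(embed(query).reshape(1, -1))
print("retrieved correction:", memory_tgt[int(idx[0, 0])],
      "distance:", round(float(dist[0, 0]), 3))
```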
32

Tohill, C., L. Ferreira, C. J. Conselice, S. P. Bamford, and F. Ferrari. "Quantifying Non-parametric Structure of High-redshift Galaxies with Deep Learning". Astrophysical Journal 916, no. 1 (July 1, 2021): 4. http://dx.doi.org/10.3847/1538-4357/ac033c.

33

Wirayasa, I. Ketut Adi, Arko Djajadi, H. Andri Santoso, and Eko Indrajit. "Comparison Non-Parametric Machine Learning Algorithms for Prediction of Employee Talent". IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 15, no. 4 (October 31, 2021): 403. http://dx.doi.org/10.22146/ijccs.69366.

Abstract:
Classification of ordinal data is part of categorical data. Ordinal data consists of features with values based on order or ranking. The use of machine learning methods in Human Resources Management is intended to support decision-making based on objective data analysis, and not on subjective aspects. The purpose of this study is to analyze the relationship between features, and whether the features used as objective factors can classify, and predict certain talented employees or not. This study uses a public dataset provided by IBM analytics. Analysis of the dataset using statistical tests, and confirmatory factor analysis validity tests, intended to determine the relationship or correlation between features in formulating hypothesis testing before building a model by using a comparison of four algorithms, namely Support Vector Machine, K-Nearest Neighbor, Decision Tree, and Artificial Neural Networks. The test results are expressed in the Confusion Matrix, and report classification of each model. The best evaluation is produced by the SVM algorithm with the same Accuracy, Precision, and Recall values, which are 94.00%, Sensitivity 93.28%, False Positive rate 4.62%, False Negative rate 6.72%, and AUC-ROC curve value 0.97 with an excellent category in performing classification of the employee talent prediction model.
34

Singh, Sumeet, Jonathan Lacotte, Anirudha Majumdar, and Marco Pavone. "Risk-sensitive inverse reinforcement learning via semi- and non-parametric methods". International Journal of Robotics Research 37, no. 13-14 (May 22, 2018): 1713–40. http://dx.doi.org/10.1177/0278364918772017.

Abstract:
The literature on inverse reinforcement learning (IRL) typically assumes that humans take actions to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive (RS) IRL to explicitly account for a human’s risk sensitivity. To this end, we propose a flexible class of models based on coherent risk measures, which allow us to capture an entire spectrum of risk preferences from risk neutral to worst case. We propose efficient non-parametric algorithms based on linear programming and semi-parametric algorithms based on maximum likelihood for inferring a human’s underlying risk measure and cost function for a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with 10 human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles from highly risk averse to risk neutral in a data-efficient manner. Moreover, comparisons of the RS-IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior both qualitatively and quantitatively, especially in scenarios where catastrophic outcomes such as collisions can occur.
35

Syed, Zeeshan, Ilan Rubinfeld, Pat Patton, Jennifer Ritz, Jack Jordan, Andrea Doud, and Vic Velanovich. "Using diagnostic codes for risk adjustment: A non-parametric learning approach". Journal of the American College of Surgeons 211, no. 3 (September 2010): S99–S100. http://dx.doi.org/10.1016/j.jamcollsurg.2010.06.262.

36

Nesa, Nashreen, Tania Ghosh, and Indrajit Banerjee. "Non-parametric sequence-based learning approach for outlier detection in IoT". Future Generation Computer Systems 82 (May 2018): 412–21. http://dx.doi.org/10.1016/j.future.2017.11.021.

37

Nurul Amelina Nasharuddin and Nurul Shuhada Zamri. "Non-Parametric Machine Learning for Pollinator Image Classification: A Comparative Study". Journal of Advanced Research in Applied Sciences and Engineering Technology 34, no. 1 (November 23, 2023): 106–15. http://dx.doi.org/10.37934/araset.34.1.106115.

Abstract:
Pollinators play a crucial role in maintaining the health of our planet's ecosystems by aiding in plant reproduction. However, identifying and differentiating between different types of pollinators can be a difficult task, especially when they have similar appearances. This difficulty in identification can cause significant problems for conservation efforts, as effective conservation requires knowledge of the specific pollinator species present in an ecosystem. Thus, the aim of this study is to identify the most effective methods, features, and classifiers for developing a reliable pollinator classifier. Specifically, this initial study uses two primary features to differentiate between the pollinator types: shape and colour. To develop the pollinator classifiers, a dataset of 186 images of black ants, ladybirds, and yellow jacket wasps was collected. The dataset was then divided into training and testing sets, and four different non-parametric classifiers were used to train the extracted features. The classifiers used were the k-Nearest Neighbour, Decision Tree, Random Forest, and Support Vector Machine classifiers. The results showed that the Random Forest classifier was the most accurate, with a maximum accuracy of 92.11% when the dataset was partitioned into 80% training and 20% testing sets. By developing a reliable pollinator classifier, researchers and conservationists can better understand the roles of different pollinator species in maintaining ecosystem health. This understanding can lead to better conservation strategies to protect these important creatures, ultimately helping to preserve our planet's biodiversity.
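A minimal version of the reported pipeline (feature vectors, an 80/20 split, and a Random Forest) looks as follows; the feature matrix here is random stand-in data rather than real shape and colour descriptors extracted from the images.

```python
# Sketch of the comparison above: per-image feature vectors classified with a
# Random Forest under an 80/20 train-test split. Features are invented stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_per_class, n_features = 62, 8          # ~186 images; e.g. colour histogram + shape stats
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(3)])
y = np.repeat(["black ant", "ladybird", "yellow jacket wasp"], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, rf.predict(X_te)), 3))
```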
38

Herranz-Matey, Ivan, and Luis Ruiz-Garcia. "New Agricultural Tractor Manufacturer’s Suggested Retail Price (MSRP) Model in Europe". Agriculture 14, no. 3 (February 21, 2024): 342. http://dx.doi.org/10.3390/agriculture14030342.

Abstract:
Research investigating models for assessing new tractor pricing is notably scarce, despite its fundamental importance in conducting comprehensive cost analyses. This study aims to identify a model that is both user-friendly and robust, evaluating both parametric and Machine Learning-optimized non-parametric models. Among parametric models, the second-order polynomial model demonstrated superior performance in terms of R-squared (R2) of 0.97469 and a Root Mean Square Error (RMSE) of 15,633. Conversely, Machine Learning-optimized Gaussian Processes Regressions exhibited the most favorable overall R-squared (R2) of 0.99951 and a Root Mean Square Error (RMSE) of 2321. While the parametric polynomial model offers a solution with minimal mathematical and computational complexity, the non-parametric GPR model delivers highly robust outcomes, presenting stakeholders involved in new agriculture tractor transactions with superior data-driven decision-making capabilities.
39

Hakim, Abdul, Nurhikmah H. Nurhikmah, Nur Halisa, Farida Febriati, Latri Aras, and Lutfi B. Lutfi. "The Effect of Online Learning on Student Learning Outcomes in Indonesian Subjects". Journal of Innovation in Educational and Cultural Research 4, no. 1 (January 21, 2023): 133–40. http://dx.doi.org/10.46843/jiecr.v4i1.312.

Abstract:
This study employs a Pre-Experimental Design (Non-design) to examine whether online learning has any effect on learning outcomes in Indonesian subjects. The sample size for this study was 16 students, chosen at random. Data collection methods include observation, testing, and documentation. Observations were made by observing both teacher and student activities. The test consists of a pretest before implementing offline learning and a posttest after implementing online learning, as well as documentation for research purposes. The data was analyzed using descriptive statistics and the non-parametric Wilcoxon signed ranks test (Z). The average value of student learning outcomes in an Indonesian class after offline learning was higher than after online learning, according to the results of the non-parametric Wilcoxon signed ranks test (Z) processed using SPSS 22 for Windows. The findings reveal that the effect of online learning on learning outcomes in an Indonesian language subject after implementing offline learning is greater than the effect of online learning after implementing offline learning. It is possible to conclude that online learning has an effect on students' learning outcomes in Indonesian Class IV subjects at SD Negeri 1 Bonto-Bonto, Pangkep Regency.
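The same paired non-parametric test the study ran in SPSS can be reproduced with scipy; the pretest and posttest scores below are invented for illustration.

```python
# Wilcoxon signed-rank test on paired pretest/posttest scores for 16 students
# (scores are made up; the study used SPSS 22 on its own data).
from scipy.stats import wilcoxon

pretest  = [60, 55, 70, 65, 58, 62, 68, 72, 50, 66, 59, 63, 61, 57, 69, 64]
posttest = [75, 60, 72, 80, 66, 70, 74, 78, 58, 73, 65, 70, 68, 63, 77, 71]

stat, p = wilcoxon(pretest, posttest)
print(f"W = {stat}, p = {p:.4f}")
```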
40

Shi, Chao, and Yu Wang. "Non-parametric machine learning methods for interpolation of spatially varying non-stationary and non-Gaussian geotechnical properties". Geoscience Frontiers 12, no. 1 (January 2021): 339–50. http://dx.doi.org/10.1016/j.gsf.2020.01.011.

41

Yang, Z., and C. W. Chan. "Conditional iterative learning control for non-linear systems with non-parametric uncertainties under alignment condition". IET Control Theory & Applications 3, no. 11 (November 1, 2009): 1521–27. http://dx.doi.org/10.1049/iet-cta.2008.0532.

42

Huang, Lei, Yuqing Ma, and Xianglong Liu. "A general non-parametric active learning framework for classification on multiple manifolds". Pattern Recognition Letters 130 (February 2020): 250–58. http://dx.doi.org/10.1016/j.patrec.2019.01.013.

43

Shah, Sonali Rajesh, Abhishek Kaushik, Shubham Sharma, and Janice Shah. "Opinion-Mining on Marglish and Devanagari Comments of YouTube Cookery Channels Using Parametric and Non-Parametric Learning Models". Big Data and Cognitive Computing 4, no. 1 (March 17, 2020): 3. http://dx.doi.org/10.3390/bdcc4010003.

Abstract:
YouTube is a boon, and through it people can educate, entertain, and express themselves about various topics. YouTube India currently has millions of active users. As there are millions of active users it can be understood that the data present on the YouTube will be large. With India being a very diverse country, many people are multilingual. People express their opinions in a code-mix form. Code-mix form is the mixing of two or more languages. It has become a necessity to perform Sentiment Analysis on the code-mix languages as there is not much research on Indian code-mix language data. In this paper, Sentiment Analysis (SA) is carried out on the Marglish (Marathi + English) as well as Devanagari Marathi comments which are extracted from the YouTube API from top Marathi channels. Several machine-learning models are applied on the dataset along with 3 different vectorizing techniques. Multilayer Perceptron (MLP) with Count vectorizer provides the best accuracy of 62.68% on the Marglish dataset and Bernoulli Naïve Bayes along with the Count vectorizer, which gives accuracy of 60.60% on the Devanagari dataset. Multilayer Perceptron and Bernoulli Naïve Bayes are considered to be the best performing algorithms. 10-fold cross-validation and statistical testing was also carried out on the dataset to confirm the results.
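The best-performing combination reported above, a count vectorizer feeding a multilayer perceptron, is a two-step pipeline in scikit-learn. The comments below are invented toy examples, not the Marglish/Devanagari YouTube data.

```python
# Minimal count-vectorizer + MLP sentiment pipeline on invented code-mix comments.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

comments = ["recipe khup chaan hoti, loved it",
            "very nice video, mast explanation",
            "waste of time, ajibaat aavadla nahi",
            "bad audio, nothing useful"]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
model.fit(comments, labels)
print(model.predict(["chaan video, very useful"]))
```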
44

Avramidis, Athanassios N., and Arnoud V. den Boer. "Dynamic pricing with finite price sets: a non-parametric approach". Mathematical Methods of Operations Research 94, no. 1 (June 28, 2021): 1–34. http://dx.doi.org/10.1007/s00186-021-00744-y.

Abstract:
We study price optimization of perishable inventory over multiple, consecutive selling seasons in the presence of demand uncertainty. Each selling season consists of a finite number of discrete time periods, and demand per time period is Bernoulli distributed with price-dependent parameter. The set of feasible prices is finite, and the expected demand corresponding to each price is unknown to the seller, whose objective is to maximize cumulative expected revenue. We propose an algorithm that estimates the unknown parameters in a learning phase, and in each subsequent season applies a policy determined as the solution to a sample dynamic program, which modifies the underlying dynamic program by replacing the unknown parameters by the estimate. Revenue performance is measured by the regret: the expected revenue loss relative to the optimal attainable revenue under full information. For a given number of seasons n, we show that if the number of seasons allocated to learning is asymptotic to $(n^2 \log n)^{1/3}$, then the regret is of the same order, uniformly over all unknown demand parameters. An extensive numerical study that compares our algorithm to six benchmarks adapted from the literature demonstrates the effectiveness of our approach.
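The authors' algorithm solves a sample dynamic program over selling seasons; a greatly simplified explore-then-commit sketch still conveys the learning-phase idea of estimating each price's Bernoulli purchase probability and then committing to the empirically best price. Inventory constraints and the season structure are ignored here.

```python
# Explore-then-commit flavour of pricing with a finite price set and Bernoulli demand.
# True purchase probabilities are invented; the real algorithm is a sample dynamic program.
import numpy as np

rng = np.random.default_rng(0)
prices = np.array([4.0, 6.0, 8.0, 10.0])
true_p = np.array([0.9, 0.6, 0.35, 0.2])     # unknown purchase probabilities

# learning phase: try each price the same number of times and estimate demand
n_explore = 200
est_p = np.array([rng.binomial(n_explore, p) / n_explore for p in true_p])

best = int(np.argmax(prices * est_p))        # commit to the highest estimated expected revenue
print("estimated demand:", est_p.round(2))
print("committed price:", prices[best],
      "(true optimum:", prices[int(np.argmax(prices * true_p))], ")")
```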
45

Li, Wei-Ming, and Shi-Ju Ran. "Non-Parametric Semi-Supervised Learning in Many-Body Hilbert Space with Rescaled Logarithmic Fidelity". Mathematics 10, no. 6 (March 15, 2022): 940. http://dx.doi.org/10.3390/math10060940.

Abstract:
In quantum and quantum-inspired machine learning, a key step is to embed the data in the quantum space known as Hilbert space. Studying quantum kernel function, which defines the distances among the samples in the Hilbert space, belongs to the fundamental topics in this direction. In this work, we propose a tunable quantum-inspired kernel function (QIKF) named rescaled logarithmic fidelity (RLF) and a non-parametric algorithm for the semi-supervised learning in the quantum space. The rescaling takes advantage of the non-linearity of the kernel to tune the mutual distances of samples in the Hilbert space, and meanwhile avoids the exponentially-small fidelities between quantum many-qubit states. Being non-parametric excludes the possible effects from the variational parameters, and evidently demonstrates the properties of the kernel itself. Our results on the hand-written digits (MNIST dataset) and movie reviews (IMDb dataset) support the validity of our method, by comparing with the standard fidelity as the QIKF as well as several well-known non-parametric algorithms (naive Bayes classifiers, k-nearest neighbors, and spectral clustering). High accuracy is demonstrated, particularly for the unsupervised case with no labeled samples and the few-shot cases with small numbers of labeled samples. With the visualizations by t-stochastic neighbor embedding, our results imply that the machine learning in the Hilbert space complies with the principles of maximal coding rate reduction, where the low-dimensional data exhibit within-class compressibility, between-class discrimination, and overall diversity. The proposed QIKF and semi-supervised algorithm can be further combined with the parametric models such as tensor networks, quantum circuits, and quantum neural networks.
46

Lasserre, Marvin, Régis Lebrun, and Pierre-Henri Wuillemin. "Learning Continuous High-Dimensional Models using Mutual Information and Copula Bayesian Networks". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12139–46. http://dx.doi.org/10.1609/aaai.v35i13.17441.

Abstract:
We propose a new framework to learn non-parametric graphical models from continuous observational data. Our method is based on concepts from information theory in order to discover independences and causality between variables: the conditional and multivariate mutual information (such as (verny2017learning) for discrete models). To estimate these quantities, we propose non-parametric estimators relying on the Bernstein copula and that are constructed by exploiting the relation between the mutual information and the copula entropy (ma2011mutual; belalia2017testing). To our knowledge, this relation is only documented for the bivariate case and, for the need of our algorithms, is here extended to the conditional and multivariate mutual information. This framework leads to a new algorithm to learn continuous non-parametric Bayesian network. Moreover, we use this estimator to speed up the BIC algorithm proposed in (elidan2010copula) by taking advantage of the decomposition of the likelihood function in a sum of mutual information (koller2009probabilistic). Finally, our method is compared in terms of performances and complexity with other state of the art techniques to learn Copula Bayesian Networks and shows superior results. In particular, it needs less data to recover the true structure and generalizes better on data that are not sampled from Gaussian distributions.
47

Guo, Longwei, Hao Zhu, Yuanxun Lu, Menghua Wu, and Xun Cao. "RAFaRe: Learning Robust and Accurate Non-parametric 3D Face Reconstruction from Pseudo 2D&3D Pairs". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 719–27. http://dx.doi.org/10.1609/aaai.v37i1.25149.

Abstract:
We propose a robust and accurate non-parametric method for single-view 3D face reconstruction (SVFR). While tremendous efforts have been devoted to parametric SVFR, a visible gap still lies between the result 3D shape and the ground truth. We believe there are two major obstacles: 1) the representation of the parametric model is limited to a certain face database; 2) 2D images and 3D shapes in the fitted datasets are distinctly misaligned. To resolve these issues, a large-scale pseudo 2D&3D dataset is created by first rendering the detailed 3D faces, then swapping the face in the wild images with the rendered face. These pseudo 2D&3D pairs are created from publicly available datasets which eliminate the gaps between 2D and 3D data while covering diverse appearances, poses, scenes, and illumination. We further propose a non-parametric scheme to learn a well-generalized SVFR model from the created dataset, and the proposed hierarchical signed distance function turns out to be effective in predicting middle-scale and small-scale 3D facial geometry. Our model outperforms previous methods on FaceScape-wild/lab and MICC benchmarks and is well generalized to various appearances, poses, expressions, and in-the-wild environments. The code is released at https://github.com/zhuhao-nju/rafare.
48

Park, Yeonseok, Anthony Choi, and Keonwook Kim. "Single-Channel Multiple-Receiver Sound Source Localization System with Homomorphic Deconvolution and Linear Regression". Sensors 21, no. 3 (January 23, 2021): 760. http://dx.doi.org/10.3390/s21030760.

Abstract:
The conventional sound source localization systems require the significant complexity because of multiple synchronized analog-to-digital conversion channels as well as the scalable algorithms. This paper proposes a single-channel sound localization system for transport with multiple receivers. The individual receivers are connected by the single analog microphone network which provides the superimposed signal over simple connectivity based on asynchronized analog circuit. The proposed system consists of two computational stages as homomorphic deconvolution and machine learning stage. A previous study has verified the performance of time-of-flight estimation by utilizing the non-parametric and parametric homomorphic deconvolution algorithms. This paper employs the linear regression with supervised learning for angle-of-arrival prediction. Among the circular configurations of receiver positions, the optimal location is selected for three-receiver structure based on the extensive simulations. The non-parametric method presents the consistent performance and Yule–Walker parametric algorithm indicates the least accuracy. The Steiglitz–McBride parametric algorithm delivers the best predictions with reduced model order as well as other parameter values. The experiments in the anechoic chamber demonstrate the accurate predictions in proper ensemble length and model order.
49

Long, Alexander, Alan Blair, and Herke van Hoof. "Fast and Data Efficient Reinforcement Learning from Pixels via Non-parametric Value Approximation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7620–27. http://dx.doi.org/10.1609/aaai.v36i7.20728.

Abstract:
We present Nonparametric Approximation of Inter-Trace returns (NAIT), a Reinforcement Learning algorithm for discrete action, pixel-based environments that is both highly sample and computation efficient. NAIT is a lazy-learning approach with an update that is equivalent to episodic Monte-Carlo on episode completion, but that allows the stable incorporation of rewards while an episode is ongoing. We make use of a fixed domain-agnostic representation, simple distance based exploration and a proximity graph-based lookup to facilitate extremely fast execution. We empirically evaluate NAIT on both the 26 and 57 game variants of ATARI100k where, despite its simplicity, it achieves competitive performance in the online setting with greater than 100x speedup in wall-time.
50

Lee, SiHun, Kijoo Jang, Haeseong Cho, Haedong Kim, and SangJoon Shin. "Parametric non-intrusive model order reduction for flow-fields using unsupervised machine learning". Computer Methods in Applied Mechanics and Engineering 384 (October 2021): 113999. http://dx.doi.org/10.1016/j.cma.2021.113999.
