
Dissertations on the topic "Ranking learning"



Consult the top 50 dissertations for your research on the topic "Ranking learning".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Latham, Andrew C. "Multiple-Instance Feature Ranking." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1440642294.

2

Sinsel, Erik W. "Ensemble learning for ranking interesting attributes." Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4400.

Abstract:
Thesis (M.S.)--West Virginia University, 2005.
Title from document title page. Document formatted into pages; contains viii, 81 p. : ill. Includes abstract. Includes bibliographical references (p. 72-74).
3

Mattsson, Fredrik, and Anton Gustafsson. "Optimize Ranking System With Machine Learning." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-37431.

Abstract:
This thesis investigates how recommendation systems have been used, and can be used, with the help of different machine learning algorithms. The algorithms used and presented are decision trees, random forests and singular-value decomposition (SVD). Together with Tingstad, we tried to implement the SVD function in their recommendation engine in order to enhance the recommendations given. The thesis gives a brief presentation of how the algorithms work, general information about machine learning, and an account of how we tried to apply it to Tingstad's data. Implementations on Netflix's and MovieLens's open-source datasets were carried out and evaluated with RMSE and MAE.
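As an illustration of the SVD-based rating prediction the abstract describes, here is a minimal sketch on a tiny hypothetical ratings matrix (not the Netflix or MovieLens data); a rank-2 reconstruction is scored with RMSE and MAE, the two metrics named above:

```python
import numpy as np

# Tiny hypothetical user-by-item rating matrix (illustrative values only).
R = np.array([
    [5.0, 4.0, 1.0, 1.0],
    [4.0, 5.0, 1.0, 2.0],
    [1.0, 1.0, 5.0, 4.0],
    [1.0, 2.0, 4.0, 5.0],
])

# Rank-k truncated SVD: R is approximated by U_k diag(s_k) V_k^T.
k = 2
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Reconstruction error on the known ratings.
rmse = float(np.sqrt(np.mean((R - R_hat) ** 2)))
mae = float(np.mean(np.abs(R - R_hat)))
```

In a real recommender the low-rank factors would be fit only on observed entries and evaluated on held-out ones; the sketch above ignores missingness for brevity.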
4

Achab, Mastane. "Ranking and risk-aware reinforcement learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT020.

Abstract:
This thesis divides into two parts: the first part is on ranking and the second on risk-aware reinforcement learning. While binary classification is the flagship application of empirical risk minimization (ERM), the main paradigm of machine learning, more challenging problems such as bipartite ranking can also be expressed through that setup. In bipartite ranking, the goal is to order, by means of scoring methods, all the elements of some feature space based on a training dataset composed of feature vectors with their binary labels. This thesis extends this setting to the continuous ranking problem, a variant in which the labels take continuous values instead of being simply binary. The analysis of ranking data, initiated in the 18th century in the context of elections, has led to another ranking problem using ERM, namely ranking aggregation, and more precisely the Kemeny consensus approach. From a training dataset made of ranking data, such as permutations or pairwise comparisons, the goal is to find the single "median permutation" that best corresponds to a consensus order. We present a less drastic dimensionality-reduction approach where a distribution on rankings is approximated by a simpler distribution, which is not necessarily reduced to a Dirac mass as in ranking aggregation. For that purpose, we rely on mathematical tools from the theory of optimal transport, such as Wasserstein metrics. The second part of this thesis focuses on risk-aware versions of the stochastic multi-armed bandit problem and of reinforcement learning (RL), where an agent interacts with a dynamic environment by taking actions and receiving rewards, the objective being to maximize the total payoff. In particular, a novel atomic distributional RL approach is provided: the distribution of the total payoff is approximated by particles that correspond to trimmed means.
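The bipartite-ranking goal described above is usually measured by the AUC: the fraction of (positive, negative) pairs that a scoring rule orders correctly. A small illustrative sketch on made-up scores and labels:

```python
import numpy as np

def empirical_auc(scores, labels):
    """Fraction of (positive, negative) pairs ranked correctly; ties count half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Made-up scores for five feature vectors with binary labels.
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.1])
labels = np.array([1, 1, 0, 1, 0])
auc = empirical_auc(scores, labels)  # 5 of the 6 pos/neg pairs are ordered correctly
```

The continuous-ranking extension studied in the thesis generalizes exactly this pairwise criterion (via the IROC/IAUC metrics) to real-valued labels.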
5

Korba, Anna. "Learning from ranking data : theory and methods." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT009/document.

Abstract:
Ranking data, i.e., ordered lists of items, naturally appear in a wide variety of situations, especially when the data come from human activities (ballots in political elections, survey answers, competition results) or from modern applications of data processing (search engines, recommendation systems). The design of machine-learning algorithms tailored for these data is thus crucial. However, due to the absence of any vectorial structure of the space of rankings, and its explosive cardinality when the number of items increases, most of the classical methods from statistics and multivariate analysis cannot be applied in a direct manner. Hence, the vast majority of the literature relies on parametric models. In this thesis, we propose a non-parametric theory and methods for ranking data. Our analysis heavily relies on two main tricks. The first one is the extensive use of the Kendall tau distance, which decomposes rankings into pairwise comparisons. This enables us to analyze distributions over rankings through their pairwise marginals and through a specific assumption called transitivity, which prevents cycles in the preferences from happening. The second one is the extensive use of embeddings tailored to ranking data, mapping rankings to a vector space. Three different problems, unsupervised and supervised, have been addressed in this context: ranking aggregation, dimensionality reduction, and predicting rankings with features. The first part of this thesis focuses on the ranking aggregation problem, where the goal is to summarize a dataset of rankings by a consensus ranking. Among the many ways to state this problem, the Kemeny aggregation method stands out: its solutions have been shown to satisfy many desirable properties, but can be NP-hard to compute. In this work, we have investigated the hardness of this problem in two ways. Firstly, we proposed a method to upper bound the Kendall tau distance between any consensus candidate (typically the output of a tractable procedure) and a Kemeny consensus, on any dataset. Then, we cast the ranking aggregation problem in a rigorous statistical framework, reformulating it in terms of ranking distributions, and assessed the generalization ability of empirical Kemeny consensuses. The second part of this thesis is dedicated to machine learning problems which are shown to be closely related to ranking aggregation. The first one is dimensionality reduction for ranking data, for which we propose a mass-transportation approach to approximate any distribution on rankings by a distribution exhibiting a specific type of sparsity. The second one is the problem of predicting rankings with features, for which we investigated several methods. Our first proposal is to adapt piecewise constant methods to this problem, partitioning the feature space into regions and assigning a final label (a consensus ranking) to each region. Our second proposal is a structured prediction approach, relying on embedding maps for ranking data that enjoy theoretical and computational advantages.
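The two building blocks named in the abstract, the Kendall tau distance and the Kemeny consensus, can be sketched for tiny rankings (brute force over all permutations, so only feasible for a handful of items; the data are made up):

```python
from itertools import combinations, permutations

def kendall_tau_distance(a, b):
    """Number of item pairs on which the two rankings disagree (best item first)."""
    pos_a = {item: i for i, item in enumerate(a)}
    pos_b = {item: i for i, item in enumerate(b)}
    return sum(
        1 for x, y in combinations(a, 2)
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )

def kemeny_consensus(rankings):
    """Brute-force Kemeny median: the permutation minimizing the total distance."""
    items = rankings[0]
    return min(
        permutations(items),
        key=lambda p: sum(kendall_tau_distance(list(p), r) for r in rankings),
    )

d = kendall_tau_distance(["a", "b", "c", "d"], ["a", "c", "b", "d"])  # one swap
consensus = kemeny_consensus([["a", "b", "c"], ["a", "b", "c"], ["b", "a", "c"]])
```

The NP-hardness discussed in the thesis is visible here: the brute-force search grows factorially with the number of items, which is why tractable candidates and upper bounds on their distance to a Kemeny consensus matter.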
6

FILHO, FRANCISCO BENJAMIM. "RANKING OF WEB PAGES BY LEARNING MULTIPLE LATENT CATEGORIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=19540@1.

Abstract:
The rapid growth and generalized accessibility of the World Wide Web (WWW) have led to an increase in research in the field of information retrieval for Web pages. The WWW is an immense and prodigious environment in which Web pages resemble a huge community of elements. These elements are connected via hyperlinks on the basis of similarity between the content of the pages, the popularity of a given page, the extent to which the information provided is authoritative in relation to a given field, etc. In fact, when the author of a Web page links it to another, s/he is acknowledging the importance of the linked page to his/her information. As such, the hyperlink structure of the WWW significantly improves search performance beyond the use of simple text distribution statistics. To this effect, the HITS approach introduces two basic categories of Web pages, hubs and authorities, which uncover certain hidden semantic information using the hyperlink structure. In 2005, we made a first extension of HITS, called Extended Hyperlink Induced Topic Search (XHITS), which introduced two new categories of Web pages, namely novelties and portals. In this thesis, we revised XHITS, transforming it into a generalization of HITS, broadening the model from two categories to several, and presenting an efficient machine learning algorithm to calibrate the proposed model using multiple latent categories. The findings we set out here indicate that the new learning approach provides a more precise XHITS model. It is important to note, in closing, that experiments with the ClueWeb09 25TB collection of Web pages, downloaded in 2009, demonstrated that XHITS is capable of significantly improving Web search efficiency and producing results comparable to those of the TREC 2009/2010 Web Track.
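The hub/authority recursion underlying HITS can be sketched as a few lines of power iteration on a small hypothetical link graph (this is plain HITS, not the multi-category XHITS extension developed in the thesis):

```python
import numpy as np

# Hypothetical link graph: A[i, j] = 1 if page i links to page j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
], dtype=float)

hubs = np.ones(4)
auths = np.ones(4)
for _ in range(50):
    auths = A.T @ hubs        # good authorities are pointed to by good hubs
    hubs = A @ auths          # good hubs point to good authorities
    auths /= np.linalg.norm(auths)
    hubs /= np.linalg.norm(hubs)

# Page 2 is linked by all the others, so it emerges as the top authority;
# pages 0 and 3 link to the same targets and tie as the top hubs.
```

XHITS generalizes exactly this mutual-reinforcement update from two latent categories to several, with the category weights learned from data.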
7

Cheung, Chi-Wai. "Probabilistic rank aggregation for multiple SVM ranking /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20CHEUNG.

8

Vogel, Robin. "Similarity ranking for biometrics : theory and practice." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT031.

Abstract:
The rapid growth in population, combined with the increased mobility of people, has created a need for sophisticated identity management systems. For this purpose, biometrics refers to the identification of individuals using behavioral or biological characteristics. The most popular approaches, i.e. fingerprint, iris or face recognition, are all based on computer vision methods. The adoption of deep convolutional networks, enabled by general-purpose computing on graphics processing units, made the recent advances in computer vision possible. These advances have led to drastic improvements for conventional biometric methods, which boosted their adoption in practical settings and stirred up public debate about these technologies. In this respect, biometric systems providers face many challenges when learning those networks. In this thesis, we consider those challenges from the angle of statistical learning theory, which leads us to propose or sketch practical solutions. First, we answer the proliferation of papers on similarity learning for deep neural networks that optimize objective functions disconnected from the natural ranking aim sought in biometrics. Precisely, we introduce the notion of similarity ranking, by highlighting the relationship between bipartite ranking and the requirements for similarities that are well suited to biometric identification. We then extend the theory of bipartite ranking to this new problem, by adapting it to the specificities of pairwise learning, particularly those regarding its computational cost. Usual objective functions optimize for predictive performance, but recent work has underlined the necessity of considering other aspects when training a biometric system, such as dataset bias, prediction robustness or notions of fairness. The thesis tackles all three of these examples by proposing their careful statistical analysis, as well as practical methods that provide biometric systems manufacturers with the necessary tools to address those issues, without jeopardizing the performance of their algorithms.
9

Zacharia, Giorgos 1974. "Regularized algorithms for ranking, and manifold learning for related tasks." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/47753.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (leaves 119-127).
This thesis describes an investigation of regularized algorithms for ranking problems for user preferences and information retrieval problems. We utilize regularized manifold algorithms to appropriately incorporate data from related tasks. This investigation was inspired by personalization challenges in both user preference and information retrieval ranking problems. We formulate the ranking problem of related tasks as a special case of semi-supervised learning. We examine how to incorporate instances from related tasks, with the appropriate penalty in the loss function, to optimize performance on the hold-out sets. We present a regularized manifold approach that allows us to learn a distance metric for the different instances directly from the data. This approach allows the incorporation of information from related-task examples without prior estimation of cross-task coefficient covariances. We also present applications of ranking problems in two text analysis problems: a) supervised content-word learning, and b) company entity matching for record linkage problems.
10

Guo, Li Li. "Direct Optimization of Ranking Measures for Learning to Rank Models." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1341520987.

11

Ross, Jacob W. "Features for Ranking Tweets Based on Credibility and Newsworthiness." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1431037057.

12

Lyubchyk, Leonid, Oleksy Galuza, and Galina Grinberg. "Ranking Model Real-Time Adaptation via Preference Learning Based on Dynamic Clustering." Thesis, ННК "IПСА" НТУУ "КПI iм. Iгоря Сiкорського", 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/36819.

Abstract:
The proposed preference-learning-on-clusters method allows the advantages of the kernel-based approach to be fully realized, while the dimension of the model is determined by a pre-selected number of clusters and its complexity does not grow with the number of observations. Thus the real-time preference-function identification algorithm based on a training data stream includes successive estimation of cluster parameters, updating of average cluster ranks, and recurrent kernel-based nonparametric estimation of the preference model.
13

Matsubara, Edson Takashi. "Relações entre ranking, análise ROC e calibração em aprendizado de máquina." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-04032009-114050/.

Abstract:
Supervised learning has been used mostly for classification. In this work we show the benefits of a welcome shift in attention from classification to ranking. A ranker is an algorithm that sorts a set of instances from highest to lowest expectation that the instance is positive, and a ranking is the outcome of this sorting. Usually a ranking is obtained by sorting scores given by classifiers. In this work, we are concerned with novel approaches to promote the use of ranking. Therefore, we present the differences and relations between ranking and classification, followed by a proposal of a novel ranking algorithm called LEXRANK, whose rankings are derived not from scores, but from a simple ranking of attribute values obtained from the training data. One very important field which uses rankings as its main input is ROC analysis. The study of decision trees and ROC analysis suggested an interesting way to visualize the tree construction in ROC graphs, which has been implemented in a system called PROGROC. Focusing on ROC analysis, we observed that the slope of the segments obtained from the ROC convex hull is equivalent to the likelihood ratio, which can be converted into probabilities. Interestingly, this ROC convex hull calibration method is equivalent to Pool Adjacent Violators (PAV). Furthermore, the ROC convex hull calibration method optimizes the Brier score, and the exploration of this measure leads us to an interesting connection between the Brier score and ROC curves. Finally, we also investigate rankings built by the selection method that augments the labelled set of CO-TRAINING, a semi-supervised multi-view learning algorithm.
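The equivalence mentioned above, ROC-convex-hull calibration as Pool Adjacent Violators, can be sketched on a toy score/label sample: violating adjacent blocks are pooled until their empirical positive rates are non-decreasing, and the resulting block means are the calibrated probabilities, evaluated here with the Brier score (illustrative data only):

```python
def pav_calibrate(pairs):
    """Pool Adjacent Violators on (score, label) pairs.

    Returns one calibrated probability per example, in increasing score order:
    isotonic regression of the labels on the scores.
    """
    blocks = [[label, 1] for _, label in sorted(pairs)]  # [label sum, count]
    merged = [blocks[0]]
    for b in blocks[1:]:
        merged.append(b)
        # Pool while the previous block's mean is >= the current block's mean.
        while len(merged) > 1 and (
            merged[-2][0] * merged[-1][1] >= merged[-1][0] * merged[-2][1]
        ):
            s, n = merged.pop()
            merged[-1][0] += s
            merged[-1][1] += n
    probs = []
    for s, n in merged:
        probs.extend([s / n] * n)
    return probs

pairs = [(0.1, 0), (0.2, 1), (0.3, 0), (0.7, 1), (0.9, 1)]
probs = pav_calibrate(pairs)                      # [0.0, 0.5, 0.5, 1.0, 1.0]
labels = [y for _, y in sorted(pairs)]
brier = sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)
```

The out-of-order pair in the middle is pooled into a single 0.5 block, mirroring how the convex hull flattens a concave stretch of the ROC curve.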
14

Ibstedt, Julia, Elsa Rådahl, Erik Turesson, and Magdalena van de Voorde. "Application and Further Development of TrueSkill™ Ranking in Sports." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-384863.

Abstract:
The aim of this study was to explore the ranking model TrueSkill™ developed by Microsoft, applying it to various sports and constructing extensions to the model. Two different inference methods for TrueSkill were constructed, using Gibbs sampling and message passing. Additionally, the sequential method using Gibbs sampling was successfully extended into a batch method, in order to eliminate game-order dependency and create a fairer, although computationally heavier, ranking system. All methods were further implemented with extensions taking home-team advantage, score difference, and finally a combination of the two into consideration. The methods were applied to football (Premier League), ice hockey (NHL), and tennis (ATP Tour) and evaluated on the accuracy of their predictions before each game. On football, the extensions improved the prediction accuracy from 55.79% to 58.95% for the sequential methods, while the vanilla Gibbs batch method reached an accuracy of 57.37%. Altogether, the extensions improved the performance of the vanilla methods when applied to all data sets. The home-team advantage performed better than the score difference on both football and ice hockey, while the combination of the two reached the highest accuracy. The Gibbs batch method had the highest prediction accuracy of the vanilla models for all sports. The results of this study imply that TrueSkill could be considered a useful ranking model for other sports as well, especially if tuned and implemented with extensions suitable for the particular sport.
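For reference, the closed-form two-player TrueSkill mean/variance update, the quantity the Gibbs-sampling and message-passing methods above infer, can be sketched as follows. The constants μ = 25, σ = 25/3, β = 25/6 are the conventional defaults; this simplified sketch ignores draws and the dynamics noise τ:

```python
import math

def trueskill_update(mu_w, sigma_w, mu_l, sigma_l, beta=25 / 6):
    """Moment-matched skill update after the first player beats the second."""
    c = math.sqrt(2 * beta**2 + sigma_w**2 + sigma_l**2)
    t = (mu_w - mu_l) / c
    pdf = math.exp(-t * t / 2) / math.sqrt(2 * math.pi)    # standard normal pdf
    cdf = 0.5 * (1 + math.erf(t / math.sqrt(2)))           # standard normal cdf
    v = pdf / cdf                                          # additive correction
    w = v * (v + t)                                        # variance correction
    return (
        mu_w + (sigma_w**2 / c) * v,                       # winner's mean goes up
        sigma_w * math.sqrt(1 - (sigma_w**2 / c**2) * w),  # both variances shrink
        mu_l - (sigma_l**2 / c) * v,                       # loser's mean goes down
        sigma_l * math.sqrt(1 - (sigma_l**2 / c**2) * w),
    )

mu_w, sig_w, mu_l, sig_l = trueskill_update(25.0, 25 / 3, 25.0, 25 / 3)
```

The batch method studied in the thesis avoids the game-order dependence that this one-game-at-a-time update introduces.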
15

Ataman, Kaan. "Learning to rank by maximizing the AUC with linear programming for problems with binary output." Diss., University of Iowa, 2007. http://ir.uiowa.edu/etd/151.

16

Puthiya, Parambath Shameem Ahamed. "New methods for multi-objective learning." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2322/document.

Abstract:
Multi-objective problems arise in many real-world scenarios where one has to find an optimal solution considering the trade-off between different competing objectives. Typical examples of multi-objective problems arise in classification, information retrieval, dictionary learning, online learning, etc. In this thesis, we study and propose algorithms for multi-objective machine learning problems. We give many interesting examples of multi-objective learning problems which are actively pursued by the research community to motivate our work. The majority of the state-of-the-art algorithms proposed for multi-objective learning come under what is called the "scalarization method", an efficient approach for solving multi-objective optimization problems. Having motivated our work, we study two multi-objective learning tasks in detail. In the first task, we study the problem of finding the optimal classifier for multivariate performance measures. The problem is studied very actively, and recent papers have proposed many algorithms in different classification settings. We study the problem as finding an optimal trade-off between different classification errors, and propose an algorithm based on cost-sensitive classification. In the second task, we study the problem of diverse ranking in information retrieval tasks, in particular recommender systems. We propose an algorithm for diverse ranking that makes use of domain-specific information, formulating the problem as a submodular maximization problem for coverage maximization in a weighted similarity graph. Finally, we conclude that scalarization-based algorithms work well for multi-objective learning problems. But when considering algorithms for multi-objective learning problems, scalarization need not be the go-to approach. It is very important to consider the domain-specific information and objective functions.
We end this thesis by proposing some immediate future work, which is currently being experimented with, and some short-term future work which we plan to carry out.
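The "scalarization method" named above can be made concrete with a toy two-objective problem (the objectives and grid are invented for this sketch): each weight w turns the multi-objective problem into a single-objective one, and sweeping w traces out the trade-off between the competing objectives.

```python
def f1(x):
    """First toy objective, standing in for one kind of classification error."""
    return (x - 1.0) ** 2

def f2(x):
    """Second, competing toy objective."""
    return (x + 1.0) ** 2

def scalarized_argmin(w, grid):
    """Solve the scalarized problem min_x w*f1(x) + (1-w)*f2(x) over a grid."""
    return min(grid, key=lambda x: w * f1(x) + (1 - w) * f2(x))

grid = [i / 100.0 for i in range(-300, 301)]
# sweeping the weight traces out trade-off solutions (analytically x* = 2w - 1)
pareto_points = [scalarized_argmin(t / 10.0, grid) for t in range(11)]
```

Each weight produces one compromise solution; the full sweep moves monotonically from the minimizer of f2 to the minimizer of f1.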
17

Stojkovic, Ivan. "Functional Norm Regularization for Margin-Based Ranking on Temporal Data." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/522550.

Abstract:
Computer and Information Science
Ph.D.
Quantifying the properties of interest is an important problem in many domains, e.g., assessing the condition of a patient, estimating the risk of an investment, or the relevance of a search result. However, the properties of interest are often latent and hard to assess directly, making it difficult to obtain the classification or regression labels needed to learn predictive models from observable features. In such cases, it is typically much easier to obtain a relative comparison of two instances, i.e., to assess which one is more intense (with respect to the property of interest). One framework able to learn from this kind of supervised information is the ranking SVM, and it forms the basis of our approach. Applications on biomedical datasets typically have specific additional challenges. The first, and the major one, is the limited number of data examples, due to expensive measuring technology and/or the infrequency of the conditions of interest. Such a limited number of examples makes both the identification of patterns/models and their validation less reliable. Repeated samples from the same subject are collected on multiple occasions over time, which breaks the IID sample assumption and introduces a dependency structure that needs to be taken into account appropriately. Also, feature vectors are high-dimensional, typically of much higher cardinality than the number of samples, making models less useful and their learning less efficient. The hypothesis of this dissertation is that the use of functional norm regularization can help alleviate the mentioned challenges, by improving the generalization abilities and/or learning efficiency of predictive models, in this case specifically of approaches based on the ranking SVM framework. The temporal nature of the data was addressed with a loss that fosters temporal smoothness of the functional mapping, thus accounting for the assumption that temporally proximate samples are more correlated.
The large number of feature variables was handled using the sparsity-inducing L1 norm, such that most of the features have zero effect in the learned functional mapping. The proposed sparse (temporal) ranking objective is convex but non-differentiable; therefore, a smooth dual form is derived, taking the form of a quadratic function with box constraints, which allows efficient optimization. For the case where there are multiple similar tasks, a joint learning approach based on matrix norm regularization, using the trace norm L* and the sparse row L21 norm, was also proposed. An alternating minimization algorithm with proximal optimization was developed to solve the mentioned multi-task objective. The generalization potential of the proposed high-dimensional and multi-task ranking formulations was assessed in a series of evaluations on synthetically generated and real datasets. The high-dimensional approach was applied to disease severity score learning from gene expression data in human influenza cases, and compared against several alternative approaches. The application resulted in a scoring function with improved predictive performance, as measured by the fraction of correctly ordered testing pairs, and a set of selected features of high robustness, according to three similarity measures. The multi-task approach was applied to three human viral infection problems, and to learning exam scores in Math and English. The proposed formulation with a mixed matrix norm was overall more accurate than formulations with single-norm regularization.
Temple University--Theses
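A minimal sketch of two core ingredients described above (a ranking-SVM-style pairwise hinge loss plus a sparsity-inducing L1 penalty), optimized here by plain subgradient descent rather than the smooth dual derived in the dissertation; the toy data, step size, and regularization constant are illustrative assumptions.

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_sparse_rank(pairs, dim, lam=0.01, lr=0.05, epochs=200):
    """pairs: (a, b) feature-vector pairs meaning 'a should outrank b'."""
    w = [0.0] * dim
    for _ in range(epochs):
        for a, b in pairs:
            if dot(w, a) - dot(w, b) < 1.0:   # hinge active: push the pair apart
                for j in range(dim):
                    w[j] += lr * (a[j] - b[j])
            # L1 subgradient step shrinks every weight toward zero (sparsity)
            for j in range(dim):
                w[j] -= lr * lam * ((w[j] > 0) - (w[j] < 0))
    return w

# toy data: feature 0 carries the ranking signal, feature 1 is irrelevant
pairs = [([1.0, 0.0], [0.0, 0.0]), ([2.0, 0.0], [1.0, 0.0])]
w = train_sparse_rank(pairs, dim=2)
```

The L1 step is what produces the "most features have zero effect" behavior mentioned in the abstract: weights that receive no ranking signal stay at exactly zero.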
18

Alshehri, Adel. "A Machine Learning Approach to Predicting Community Engagement on Social Media During Disasters." Scholar Commons, 2019. https://scholarcommons.usf.edu/etd/7728.

Abstract:
The use of social media is expanding significantly and can serve a variety of purposes. Over the last few years, users of social media have played an increasing role in the dissemination of emergency and disaster information. It is becoming more common for affected populations and other stakeholders to turn to Twitter to gather information about a crisis when decisions need to be made and action is taken. However, social media platforms, especially Twitter, present some drawbacks when it comes to gathering information during disasters. These drawbacks include information overload, messages written in an informal format, and the presence of noise and irrelevant information. These factors make gathering accurate information online very challenging and confusing, which in turn may affect the ability of the public, communities, and organizations to prepare for, respond to, and recover from disasters. To address these challenges, we present an integrated three-part (clustering-classification-ranking) framework, which helps users sift through the masses of Twitter data to find useful information. In the first part, we build standard machine learning models to automatically extract and identify topics present in a text and to derive hidden patterns exhibited by a dataset. In the second part, we develop binary and multi-class classification models of Twitter data to categorize each tweet as relevant or irrelevant and to further classify relevant tweets into four types of community engagement: reporting information, expressing negative engagement, expressing positive engagement, and asking for information. In the third part, we propose a binary classification model to categorize the collected tweets into high- or low-priority tweets. We present an evaluation of the effectiveness of detecting events using a variety of features derived from Twitter posts, namely: textual content, term frequency-inverse document frequency, linguistic, sentiment, psychometric, temporal, and spatial features.
Our framework also provides insights for researchers and developers building more robust socio-technical systems for identifying types of online community engagement and ranking high-priority tweets in disaster situations.
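Among the features listed above, term frequency-inverse document frequency is easy to make concrete. A hedged sketch over a toy "tweet" corpus (the corpus and the whitespace tokenization are assumptions, not the dissertation's pipeline):

```python
import math
from collections import Counter

def tfidf(corpus):
    """Return one {term: tf-idf weight} dict per document."""
    n = len(corpus)
    docs = [Counter(doc.lower().split()) for doc in corpus]
    df = Counter()              # document frequency of each term
    for d in docs:
        df.update(d.keys())     # +1 per document containing the term
    vecs = []
    for d in docs:
        total = sum(d.values())
        vecs.append({t: (c / total) * math.log(n / df[t]) for t, c in d.items()})
    return vecs

corpus = ["flood alert downtown", "flood alert uptown", "concert downtown"]
vecs = tfidf(corpus)
```

Terms that occur in fewer documents get a higher idf, so rare, discriminative words dominate the feature vector while corpus-wide words are downweighted toward zero.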
19

Salomon, Sophie. "Bias Mitigation Techniques and a Cost-Aware Framework for Boosted Ranking Algorithms." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586450345426827.

20

Reed, Jeremy T. "Acoustic segment modeling and preference ranking for music information retrieval." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37189.

Abstract:
This dissertation focuses on improving content-based recommendation systems for music. Specifically, progress in the development of content-based music recommendation systems has stalled in recent years due to some faulty assumptions:

1. most acoustic content-based systems for music information retrieval (MIR) assume a bag-of-frames model, where it is assumed that a song contains a simplistic, global audio texture
2. genre, style, mood, and author are appropriate categories for machine-oriented recommendation
3. similarity is a universal construct and does not vary among different users

The main contribution of this dissertation is to address these faulty assumptions by describing a novel approach in MIR that provides user-centric, content-based recommendations based on statistics of acoustic sound elements. First, this dissertation presents the acoustic segment modeling framework, which describes a piece of music as a temporal sequence of acoustic segment models (ASMs) representing individual polyphonic sound elements. A dictionary of ASMs generated in an unsupervised process defines a vocabulary of acoustic tokens that is able to transcribe new musical pieces. Next, standard text-based information retrieval algorithms use statistics of ASM counts to perform various retrieval tasks. Despite a simple feature set compared to other content-based genre recommendation algorithms, the acoustic segment modeling approach is highly competitive on standard genre classification databases. Fundamental to the success of the acoustic segment modeling approach is the ability to model acoustical semantics in a musical piece, which is demonstrated by the detection of musical attributes from temporal characteristics. Further, it is shown that the acoustic segment modeling procedure is able to capture the inherent structure of melody by providing near state-of-the-art performance on an automatic chord recognition task.
This dissertation demonstrates that some classification tasks, such as genre, involve information that is not contained in the acoustic signal; therefore, attempts at modeling these categories using only the acoustic content are ill-fated. Further, notions of music similarity are personal in nature and are not derived from a universal ontology. Therefore, this dissertation addresses the second and third limitations of previous content-based retrieval approaches by presenting a user-centric preference rating algorithm. Individual users possess their own cognitive construct of similarity; therefore, retrieval algorithms must demonstrate this flexibility. The proposed rating algorithm is based on the principle of minimum classification error (MCE) training, which has been demonstrated to be robust against outliers and also minimizes the Parzen estimate of the theoretical classification risk. The outlier-immunity property limits the effect of labels that arise from non-content-based sources. The MCE-based algorithm performs better than a similar ratings prediction algorithm. Further, this dissertation discusses extensions and future work.
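The "statistics of ASM counts" idea above mirrors text retrieval: each song becomes a histogram over the acoustic-token vocabulary, and similar songs can be retrieved by cosine similarity. A toy sketch (the vocabulary size and the counts are invented):

```python
import math

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

# toy "songs", each a count histogram over a 4-token ASM vocabulary
songs = {
    "blues_1":  [9, 1, 0, 2],
    "blues_2":  [8, 2, 1, 1],
    "techno_1": [0, 1, 9, 8],
}

def most_similar(query, library):
    """Return the library song whose token histogram is closest to the query's."""
    return max((s for s in library if s != query),
               key=lambda s: cosine(library[query], library[s]))
```

Songs that share the same dominant acoustic tokens end up near each other, exactly as documents sharing terms do in text retrieval.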
21

Kim, Jinhan. "J-model : an open and social ensemble learning architecture for classification." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7672.

Abstract:
Ensemble learning is a promising direction of research in machine learning, in which an ensemble classifier gives better predictive and more robust performance for classification problems by combining other learners. Meanwhile agent-based systems provide frameworks to share knowledge from multiple agents in an open context. This thesis combines multi-agent knowledge sharing with ensemble methods to produce a new style of learning system for open environments. We now are surrounded by many smart objects such as wireless sensors, ambient communication devices, mobile medical devices and even information supplied via other humans. When we coordinate smart objects properly, we can produce a form of collective intelligence from their collaboration. Traditional ensemble methods and agent-based systems have complementary advantages and disadvantages in this context. Traditional ensemble methods show better classification performance, while agent-based systems might not guarantee their performance for classification. Traditional ensemble methods work as closed and centralised systems (so they cannot handle classifiers in an open context), while agent-based systems are natural vehicles for classifiers in an open context. We designed an open and social ensemble learning architecture, named J-model, to merge the conflicting benefits of the two research domains. The J-model architecture is based on a service choreography approach for coordinating classifiers. Coordination protocols are defined by interaction models that describe how classifiers will interact with one another in a peer-to-peer manner. The peer ranking algorithm recommends more appropriate classifiers to participate in an interaction model to boost the success rate of results of their interactions. Coordinated participant classifiers who are recommended by the peer ranking algorithm become an ensemble classifier within J-model. 
We evaluated J-model's classification performance with 13 UCI machine learning benchmark data sets and a virtual screening problem as a realistic classification problem. J-model showed better accuracy than 8 other representative traditional ensemble methods on 9 of the 13 benchmark data sets, and better specificity on 7 of them. In the virtual screening problem, J-model gave better results for 12 out of 16 bioassays than previously published results. We defined different interaction models for each specific classification task, and the peer ranking algorithm was used across all the interaction models. Our research contributions to knowledge are as follows. First, we showed that service choreography can be an effective ensemble coordination method for classifiers in an open context. Second, we used interaction models that implement task-specific coordination of classifiers to solve a variety of representative classification problems. Third, we designed the peer ranking algorithm, which is generally and independently applicable to the task of recommending appropriate member classifiers from a classifier pool, based on an open pool of interaction models and classifiers.
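The peer-ranking-plus-ensemble idea can be sketched in a few lines (illustrative only, not J-model's service-choreography implementation): classifiers in an open pool are ranked by their past success rate, and the top-k recommended ones vote.

```python
def peer_rank(pool, history):
    """Rank classifier names by their past interaction success rate."""
    def rate(name):
        return sum(history[name]) / len(history[name])
    return sorted(pool, key=rate, reverse=True)

def ensemble_predict(x, ranked_pool, classifiers, k=3):
    """Majority vote over the k best-ranked classifiers."""
    votes = [classifiers[name](x) for name in ranked_pool[:k]]
    return max(set(votes), key=votes.count)

# toy open pool: two reliable classifiers and one unreliable one (invented data)
classifiers = {
    "good_a": lambda x: x > 0,
    "good_b": lambda x: x >= 0,
    "bad":    lambda x: x < 0,
}
history = {"good_a": [1, 1, 1, 1], "good_b": [1, 1, 1, 0], "bad": [0, 0, 1, 0]}
ranked = peer_rank(list(classifiers), history)
```

Because ranking and voting are decoupled, new classifiers can join the pool at any time and are recommended only once their interaction history warrants it.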
22

Lipscombe, Trevor, and n/a. "Different teachers for different students? : The relationship between learning style, other student variables and students' ranking of teacher characteristics." University of Canberra. Education, 1989. http://erl.canberra.edu.au./public/adt-AUC20060817.141319.

Abstract:
This study examined the influence of selected student variables (learning style, age, sex, nationality (birthplace), academic achievement, and social class) on the ranking of twelve teacher characteristics. 246 ACT TAFE Associate Diploma in Business students formed the sample. Results were compared with a similar study by Travis (1987) of secondary students in Canada and the USA. The extent to which different groups of students prefer different teacher characteristics has important implications for the growing practice of student rating of teachers' effectiveness. This practice (operating under a psychometric paradigm) currently assumes that any differences of opinion between student raters are the result of student carelessness (random error) or bias (systematic error). The possibility that these differences of opinion are the result of systematic variation, based on differences between students, is not countenanced. This study demonstrated significant (p < 0.05) systematic variations on four of the six variables studied (age, academic achievement, nationality, and social class) in the way that respondents ranked one or more of the teacher characteristics. Comparisons with Travis's results showed marked differences both in the overall ranking of the twelve teacher characteristics and in the influence of student variables on the ranking of individual teacher characteristics. While Travis also showed that some student variables influenced the ranking of teacher characteristics, different relationships are evident. Travis's respondents emphasised the importance of good, supportive relationships with their teachers, while in this study, instrumental characteristics were preferred. This suggests a range of preferred characteristics across student populations. Within both studies there is a wide range of opinion as to the importance of all twelve teacher characteristics.
More than half of the present sample also suggested a range of additional characteristics which they believed influenced their learning. These findings support the view that different students prefer different teachers. They suggest that some student variables may have a greater influence than others (e.g. academic achievement level) and that there may similarly be more agreement on some teacher characteristics (e.g. knowledgeability) than others. Users of student ratings of teacher effectiveness should be aware of the paradigmatic limitations of aggregated student scores. Validity might be improved by using teacher characteristics which raters agree are important and by grouping raters for influential student variables.
23

Dokania, Puneet Kumar. "High-Order Inference, Ranking, and Regularization Path for Structured SVM." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC044/document.

Abstract:
Cette thèse présente de nouvelles méthodes pour l'application de la prédiction structurée en vision numérique et en imagerie médicale.Nos nouvelles contributions suivent quatre axes majeurs.La première partie de cette thèse étudie le problème d'inférence d'ordre supérieur.Nous présentons une nouvelle famille de problèmes de minimisation d'énergie discrète, l'étiquetage parcimonieux, encourageant la parcimonie des étiquettes.C'est une extension naturelle des problèmes connus d'étiquetage de métriques aux potentiels d'ordre élevé.Nous proposons par ailleurs une généralisation du modèle Pn-Potts, le modèle Pn-Potts hiérarchique.Enfin, nous proposons un algorithme parallélisable à proposition de mouvements avec de fortes bornes multiplicatives pour l'optimisation du modèle Pn-Potts hiérarchique et l'étiquetage parcimonieux.La seconde partie de cette thèse explore le problème de classement en utilisant de l'information d'ordre élevé.Nous introduisons deux cadres différents pour l'incorporation d'information d'ordre élevé dans le problème de classement.Le premier modèle, que nous nommons SVM binaire d'ordre supérieur (HOB-SVM), optimise une borne supérieure convexe sur l'erreur 0-1 pondérée tout en incorporant de l'information d'ordre supérieur en utilisant un vecteur de charactéristiques jointes.Le classement renvoyé par HOB-SVM est obtenu en ordonnant les exemples selon la différence entre la max-marginales de l'affectation d'un exemple à la classe associée et la max-marginale de son affectation à la classe complémentaire.Le second modèle, appelé AP-SVM d'ordre supérieur (HOAP-SVM), s'inspire d'AP-SVM et de notre premier modèle, HOB-SVM.Le modèle correspond à une optimisation d'une borne supérieure sur la précision moyenne, à l'instar d'AP-SVM, qu'il généralise en permettant également l'incorporation d'information d'ordre supérieur.Nous montrons comment un optimum local du problème d'apprentissage de HOAP-SVM peut être déterminé efficacement grâce à la procédure 
concave-convexe.En utilisant des jeux de données standards, nous montrons empiriquement que HOAP-SVM surpasse les modèles de référence en utilisant efficacement l'information d'ordre supérieur tout en optimisant directement la fonction d'erreur appropriée.Dans la troisième partie, nous proposons un nouvel algorithme, SSVM-RP, pour obtenir un chemin de régularisation epsilon-optimal pour les SVM structurés.Nous présentons également des variantes intuitives de l'algorithme Frank-Wolfe pour l'optimisation accélérée de SSVM-RP.De surcroît, nous proposons une approche systématique d'optimisation des SSVM avec des contraintes additionnelles de boîte en utilisant BCFW et ses variantes.Enfin, nous proposons un algorithme de chemin de régularisation pour SSVM avec des contraintes additionnelles de positivité/negativité.Dans la quatrième et dernière partie de la thèse, en appendice, nous montrons comment le cadre de l'apprentissage semi-supervisé des SVM à variables latentes peut être employé pour apprendre les paramètres d'un problème complexe de recalage déformable.Nous proposons un nouvel algorithme discriminatif semi-supervisé pour apprendre des métriques de recalage spécifiques au contexte comme une combinaison linéaire des métriques conventionnelles.Selon l'application, les métriques traditionnelles sont seulement partiellement sensibles aux propriétés anatomiques des tissus.Dans ce travail, nous cherchons à déterminer des métriques spécifiques à l'anatomie et aux tissus, par agrégation linéaire de métriques connues.Nous proposons un algorithme d'apprentissage semi-supervisé pour estimer ces paramètres conditionnellement aux classes sémantiques des données, en utilisant un jeu de données faiblement annoté.Nous démontrons l'efficacité de notre approche sur trois jeux de données particulièrement difficiles dans le domaine de l'imagerie médicale, variables en terme de structures anatomiques et de modalités d'imagerie
This thesis develops novel methods to enable the use of structured prediction in computer vision and medical imaging. Specifically, our contributions are fourfold. First, we propose a new family of high-order potentials that encourage parsimony in the labeling, and enable its use by designing an accurate graph-cuts-based algorithm to minimize the corresponding energy function. Second, we show how the average precision SVM formulation can be extended to incorporate high-order information for ranking. Third, we propose a novel regularization path algorithm for structured SVM. Fourth, we show how the weakly supervised framework of latent SVM can be employed to learn the parameters for the challenging deformable registration problem. In more detail, the first part of the thesis investigates the high-order inference problem. Specifically, we present a novel family of discrete energy minimization problems, which we call parsimonious labeling. It is a natural generalization of the well-known metric labeling problems to high-order potentials. In addition to this, we propose a generalization of the Pn-Potts model, which we call the hierarchical Pn-Potts model. In the end, we propose parallelizable move-making algorithms with very strong multiplicative bounds for the optimization of the hierarchical Pn-Potts model and parsimonious labeling. The second part of the thesis investigates the ranking problem while using high-order information. Specifically, we introduce two alternate frameworks to incorporate high-order information for ranking tasks. The first framework, which we call high-order binary SVM (HOB-SVM), optimizes a convex upper bound on the weighted 0-1 loss while incorporating high-order information using a joint feature map. The rank list for HOB-SVM is obtained by sorting samples using max-marginal-based scores. The second framework, which we call high-order AP-SVM (HOAP-SVM), takes its inspiration from AP-SVM and HOB-SVM (our first framework).
Similar to AP-SVM, it optimizes an upper bound on average precision. However, unlike AP-SVM and similar to HOB-SVM, it can also encode high-order information. The main disadvantage of HOAP-SVM is that estimating its parameters requires solving a difference-of-convex program. We show how a local optimum of the HOAP-SVM learning problem can be computed efficiently by the concave-convex procedure. Using standard datasets, we empirically demonstrate that HOAP-SVM outperforms the baselines by effectively utilizing high-order information while optimizing the correct loss function. In the third part of the thesis, we propose a new algorithm, SSVM-RP, to obtain an epsilon-optimal regularization path of structured SVM. We also propose intuitive variants of the block-coordinate Frank-Wolfe algorithm (BCFW) for faster optimization of the SSVM-RP algorithm. In addition to this, we propose a principled approach to optimize the SSVM with additional box constraints using BCFW and its variants. In the end, we propose a regularization path algorithm for SSVM with additional positivity/negativity constraints. In the fourth and last part of the thesis (Appendix), we propose a novel weakly supervised discriminative algorithm for learning context-specific registration metrics as a linear combination of conventional metrics. Conventional metrics can cope only partially - depending on the clinical context - with tissue anatomical properties. In this work we seek to determine anatomy/tissue-specific metrics as a context-specific aggregation/linear combination of known metrics. We propose a weakly supervised learning algorithm for estimating these parameters conditionally on the data's semantic classes, using a weak training dataset. We show the efficacy of our approach on three highly challenging datasets in the field of medical imaging, which vary in terms of anatomical structures and image modalities.
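The concave-convex procedure invoked above for the difference-of-convex HOAP-SVM objective has a simple skeleton: repeatedly linearize the concave part at the current point and minimize the resulting convex surrogate. A toy one-dimensional sketch (the quartic objective is an invented example, not the HOAP-SVM objective):

```python
def cccp(g_argmin_linear, h_grad, x0, iters=50):
    """Minimize g(x) - h(x) with both g and h convex: at each step,
    linearize the concave part -h at x_t and minimize the convex
    surrogate g(x) - h'(x_t) * x."""
    x = x0
    for _ in range(iters):
        x = g_argmin_linear(h_grad(x))
    return x

def quartic_argmin(slope):
    """argmin over x of g(x) - slope*x with g(x) = x**4 (set 4x^3 = slope)."""
    mag = (abs(slope) / 4.0) ** (1.0 / 3.0)
    return mag if slope >= 0 else -mag

# toy DC objective f(x) = x**4 - x**2, i.e. g(x) = x**4, h(x) = x**2;
# its minima sit at x = +/- 1/sqrt(2)
x_star = cccp(quartic_argmin, lambda x: 2.0 * x, x0=1.0)
```

Each iteration solves a convex problem and never increases the objective, which is why the procedure yields the local optimum mentioned in the abstract.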
24

Jiao, Jian. "A framework for finding and summarizing product defects, and ranking helpful threads from online customer forums through machine learning." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23159.

Abstract:
The Internet has revolutionized the way users share and acquire knowledge. As important and popular Web-based applications, online discussion forums provide interactive platforms for users to exchange information and report problems. With the rapid growth of social networks and an ever-increasing number of Internet users, online forums have accumulated a huge amount of valuable user-generated data and have accordingly become a major information source for business intelligence. This study focuses specifically on product defects, which are one of the central concerns of manufacturing companies and service providers, and proposes a machine learning method to automatically detect product defects in the context of online forums. To complement the detection of product defects, we also present a product feature extraction method to summarize defect threads and a thread ranking method to search for troubleshooting solutions. To this end, we collected different data sets to test these methods experimentally, and the results show that our methods are very promising: in fact, in most cases, they outperformed the current state-of-the-art methods.

Ph. D.
25

Safran, Mejdl Sultan. "EFFICIENT LEARNING-BASED RECOMMENDATION ALGORITHMS FOR TOP-N TASKS AND TOP-N WORKERS IN LARGE-SCALE CROWDSOURCING SYSTEMS." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1511.

Abstract:
A pressing need for efficient personalized recommendations has emerged in crowdsourcing systems. On the one hand, workers confront a flood of tasks, and they often spend too much time finding tasks matching their skills and interests. Thus, workers want effective recommendation of the most suitable tasks with regard to their skills and preferences. On the other hand, requesters sometimes receive low-quality results, since a less qualified worker may start working on a task before a better-skilled worker can get to it. Thus, requesters want reliable recommendation of the best workers for their tasks in terms of workers' qualifications and accountability. The task and worker recommendation problems in crowdsourcing systems have brought up unique characteristics that are not present in traditional recommendation scenarios, i.e., the huge flow of tasks with short lifespans, the importance of workers' capabilities, and the quality of the completed tasks. These unique features make traditional recommendation approaches (mostly developed for e-commerce markets) no longer satisfactory for task and worker recommendation in crowdsourcing systems. In this research, we reveal our insight into the essential difference between the tasks in crowdsourcing systems and the products/items in e-commerce markets, and the difference between buyers' interests in products/items and workers' interests in tasks. Our insight inspires us to bring up categories as a key mediation mechanism between workers and tasks. We propose a two-tier data representation scheme (defining a worker-category suitability score and a worker-task attractiveness score) to support personalized task and worker recommendation. We also extend two optimization methods, namely least mean square error (LMS) and Bayesian personalized ranking (BPR), in order to better fit the characteristics of task/worker recommendation in crowdsourcing systems.
We then integrate the proposed representation scheme and the extended optimization methods with two adapted popular learning models, i.e., matrix factorization and kNN, resulting in two lines of top-N recommendation algorithms for crowdsourcing systems: (1) Top-N-Tasks (TNT) recommendation algorithms for discovering the top-N most suitable tasks for a given worker, and (2) Top-N-Workers (TNW) recommendation algorithms for identifying the top-N best workers for a task requester. An extensive experimental study is conducted that validates the effectiveness and efficiency of a broad spectrum of algorithms, accompanied by our analysis and the insights gained.
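The Bayesian personalized ranking (BPR) criterion mentioned above optimizes pairwise preferences with matrix factorization; a minimal SGD sketch (the dimensions, rates, and tiny worker/task data are assumptions, not the dissertation's extended method):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bpr_train(triples, n_users, n_items, k=4, lr=0.05, reg=0.01,
              epochs=500, seed=0):
    """SGD on the BPR criterion: for each (u, i, j), push score(u, i)
    above score(u, j). Returns the learned scoring function."""
    rnd = random.Random(seed)
    U = [[rnd.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rnd.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]

    def score(u, i):
        return sum(a * b for a, b in zip(U[u], V[i]))

    for _ in range(epochs):
        for u, i, j in triples:        # worker u prefers task i over task j
            g = 1.0 - sigmoid(score(u, i) - score(u, j))
            for f in range(k):
                uf, vif, vjf = U[u][f], V[i][f], V[j][f]
                U[u][f] += lr * (g * (vif - vjf) - reg * uf)
                V[i][f] += lr * (g * uf - reg * vif)
                V[j][f] += lr * (-g * uf - reg * vjf)
    return score

# toy data: one worker who took up tasks 0 and 2 but skipped task 1
score = bpr_train([(0, 0, 1), (0, 2, 1)], n_users=1, n_items=3)
```

Sorting all candidate tasks by the learned score yields the top-N list for that worker, which is the shape of output the TNT algorithms above produce.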
26

Harrington, Edward, and edwardharrington@homemail com au. "Aspects of Online Learning." The Australian National University. Research School of Information Sciences and Engineering, 2004. http://thesis.anu.edu.au./public/adt-ANU20060328.160810.

Abstract:
Online learning algorithms have several key advantages compared to their batch learning counterparts: they are generally more memory efficient and computationally more efficient; they are simpler to implement; and they are able to adapt to changes where the learning model is time-varying. Online algorithms, because of their simplicity, are very appealing to practitioners. This thesis investigates several online learning algorithms and their application. An underlying theme of the thesis is the idea of combining several simple algorithms to give better performance. In this thesis we investigate: combining weights, combining hypotheses, and (a form of) hierarchical combining.

Firstly, we propose a new online variant of the Bayes point machine (BPM), called the online Bayes point machine (OBPM). We study the theoretical and empirical performance of the OBPM algorithm. We show that the empirical performance of the OBPM algorithm is comparable with other large-margin classifier methods such as the approximately large margin algorithm (ALMA) and methods which maximise the margin explicitly, like the support vector machine (SVM). The OBPM algorithm, when used with a parallel architecture, offers potential computational savings compared to ALMA. We compare the test error performance of the OBPM algorithm with other online algorithms: the Perceptron, the voted-Perceptron, and Bagging. We demonstrate that the combination of the voted-Perceptron algorithm and the OBPM algorithm, called the voted-OBPM algorithm, has better test error performance than the voted-Perceptron and Bagging algorithms. We investigate the use of various online voting methods on the problem of ranking and the problem of collaborative filtering of instances. We look at the application of the online Bagging and OBPM algorithms to the telecommunications problem of channel equalization. We show that both online methods were successful at reducing the effect on the test error of label flipping and additive noise.

Secondly, we introduce a new mixture-of-experts algorithm, the fixed-share hierarchy (FSH) algorithm. The FSH algorithm is able to track the mixture of experts when the switching rate between the best experts may not be constant. We study the theoretical aspects of the FSH algorithm and its practical application to adaptive equalization. Using simulations we show that the FSH algorithm is able to track the best expert, or mixture of experts, in both the case where the switching rate is constant and the case where the switching rate is time-varying.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Paris, Bruno Mendonça. "Learning to rank: combinação de algoritmos aplicando stacking e análise dos resultados." Universidade Presbiteriana Mackenzie, 2017. http://tede.mackenzie.br/jspui/handle/tede/3494.

Full text of the source
Abstract:
With the growth of the amount of information available in recent years, which will continue to grow due to the increase in users, devices and information shared over the internet, accessing the desired information should be quick, so that not too much time is spent looking for what one wants. In a search with engines such as Google, Yahoo or Bing, the first results are expected to bring the desired information. The area that aims to bring relevant documents to the user is known as Information Retrieval and can be aided by Learning to Rank algorithms, which apply machine learning to try to bring important documents to users in the best possible ordering. This work investigates a way to obtain an even better ordering of documents, using a technique for combining algorithms known as Stacking. To do so, it uses the RankLib tool, part of the Lemur Project, developed in Java, which contains several Learning to Rank algorithms, together with datasets from a base maintained by the Microsoft Research Group known as LETOR.
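The stacking idea described above can be sketched in a few lines: two base rankers each score the documents of a query, and a meta-step picks the convex combination of their scores that orders the training documents best. This is a hedged illustration only; the dissertation itself uses RankLib's algorithms on LETOR data, and the function names and the pairwise-count criterion here are simplifications of ours, not the author's method.

```python
def stack_scores(base_scores, weights):
    # Meta-level score: weighted sum of the base rankers' scores per document.
    return [sum(w * s for w, s in zip(weights, doc)) for doc in zip(*base_scores)]

def fit_convex_weight(scores_a, scores_b, relevance, grid=11):
    # Choose w in [0, 1] so that w*a + (1-w)*b orders the training documents
    # best, measured here by the number of correctly ordered pairs.
    def correct_pairs(scores):
        return sum(
            1
            for i in range(len(scores))
            for j in range(len(scores))
            if relevance[i] > relevance[j] and scores[i] > scores[j]
        )
    candidates = [k / (grid - 1) for k in range(grid)]
    return max(
        candidates,
        key=lambda w: correct_pairs(stack_scores([scores_a, scores_b], [w, 1 - w])),
    )
```

A real stacking setup would fit the combination weights on held-out queries rather than the training fold, to avoid favouring overfit base rankers.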
Styles APA, Harvard, Vancouver, ISO, etc.
28

Zhang, Ganqin. "Bipartite RankBoost+: An Improvement to Bipartite RankBoost." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case160767885657324.

Full text of the source
Styles APA, Harvard, Vancouver, ISO, etc.
29

Arvidsson, Simon, and Marcus Gullstrand. "Predicting forest strata from point clouds using geometric deep learning." Thesis, Jönköping University, JTH, Avdelningen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-54155.

Full text of the source
Abstract:
Introduction: Number of strata (NoS) is an informative descriptor of forest structure and is therefore useful in forest management. Collection of NoS, as well as other forest properties, is performed by fieldworkers and could benefit from automation. Objectives: This study investigates automated prediction of NoS from airborne laser scanned point clouds over Swedish forest plots. Methods: A previously suggested approach using vertical gap probability is compared through experimentation against the geometric neural network PointNet++ configured for ordinal prediction. For both approaches, the mean accuracy is measured for three datasets: coniferous forest, deciduous forest, and a combination of all forests. Results: PointNet++ displayed better performance for two out of three datasets, attaining a top mean accuracy of 46.2%. However, only the coniferous subset displayed a statistically significant superiority for PointNet++. Conclusion: This study demonstrates the potential of geometric neural networks for data mining of forest properties. The results show that impediments in the data may need to be addressed for further improvements.
Styles APA, Harvard, Vancouver, ISO, etc.
30

Mittal, Arpit. "Human layout estimation using structured output learning." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:bb290cfd-5216-42d7-b3d2-c2b4b01614bc.

Full text of the source
Abstract:
In this thesis, we investigate the problem of human layout estimation in unconstrained still images. This involves predicting the spatial configuration of body parts. We start our investigation with pictorial structure models and propose an efficient method of model fitting using skin regions. To detect the skin, we learn a colour model locally from the image by detecting the facial region. The resulting skin detections are also used for hand localisation. Our next contribution is a comprehensive dataset of 2D hand images. We collected this dataset from publicly available image sources, and annotated images with hand bounding boxes. The bounding boxes are not axis aligned, but are rather oriented with respect to the wrist. Our dataset is quite exhaustive as it includes images of different hand shapes and layout configurations. Using our dataset, we train a hand detector that is robust to background clutter and lighting variations. Our hand detector is implemented as a two-stage system. The first stage involves proposing hand hypotheses using complementary image features, which are then evaluated by the second stage classifier. This improves both precision and recall and results in a state-of-the-art hand detection method. In addition we develop a new method of non-maximum suppression based on super-pixels. We also contribute an efficient training algorithm for structured output ranking. In our algorithm, we reduce the time complexity of an expensive training component from quadratic to linear. This algorithm has a broad applicability and we use it for solving human layout estimation and taxonomic multiclass classification problems. For human layout, we use different body part detectors to propose part candidates. These candidates are then combined and scored using our ranking algorithm. By applying this bottom-up approach, we achieve accurate human layout estimation despite variations in viewpoint and layout configuration. 
In the multiclass classification problem, we define the misclassification error using a class taxonomy. The problem then reduces to a structured output ranking problem and we use our ranking method to optimise it. This allows inclusion of semantic knowledge about the classes and results in a more meaningful classification system. Lastly, we substantiate our ranking algorithm with theoretical proofs and derive the generalisation bounds for it. These bounds prove that the training error reduces to the lowest possible error asymptotically.
Styles APA, Harvard, Vancouver, ISO, etc.
31

Lin, Xiao. "Leveraging Multimodal Perspectives to Learn Common Sense for Vision and Language Tasks." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/79521.

Full text of the source
Abstract:
Learning and reasoning with common sense is a challenging problem in Artificial Intelligence (AI). Humans have the remarkable ability to interpret images and text from different perspectives in multiple modalities, and to use large amounts of commonsense knowledge while performing visual or textual tasks. Inspired by that ability, we approach commonsense learning as leveraging perspectives from multiple modalities for images and text in the context of vision and language tasks. Given a target task (e.g., textual reasoning, matching images with captions), our system first represents input images and text in multiple modalities (e.g., vision, text, abstract scenes and facts). Those modalities provide different perspectives to interpret the input images and text. And then based on those perspectives, the system performs reasoning to make a joint prediction for the target task. Surprisingly, we show that interpreting textual assertions and scene descriptions in the modality of abstract scenes improves performance on various textual reasoning tasks, and interpreting images in the modality of Visual Question Answering improves performance on caption retrieval, which is a visual reasoning task. With grounding, imagination and question-answering approaches to interpret images and text in different modalities, we show that learning commonsense knowledge from multiple modalities effectively improves the performance of downstream vision and language tasks, improves interpretability of the model and is able to make more efficient use of training data. Complementary to the model aspect, we also study the data aspect of commonsense learning in vision and language. We study active learning for Visual Question Answering (VQA) where a model iteratively grows its knowledge through querying informative questions about images for answers. 
Drawing analogies from human learning, we explore cramming (entropy), curiosity-driven (expected model change), and goal-driven (expected error reduction) active learning approaches, and propose a new goal-driven scoring function for deep VQA models under the Bayesian Neural Network framework. Once trained with a large initial training set, a deep VQA model is able to efficiently query informative question-image pairs for answers to improve itself through active learning, saving human effort on commonsense annotations.
Ph. D.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Bjöörn, Anton. "Employing a Transformer Language Model for Information Retrieval and Document Classification : Using OpenAI's generative pre-trained transformer, GPT-2." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281766.

Full text of the source
Abstract:
As the information flow on the Internet keeps growing, it becomes increasingly easy to miss important news that does not have mass appeal. Combating this problem calls for increasingly sophisticated information retrieval methods. Pre-trained transformer-based language models have shown great generalization performance on many natural language processing tasks. This work investigates how well such a language model, OpenAI's Generative Pre-trained Transformer 2 (GPT-2), generalizes to information retrieval and classification of online news articles written in English, with the purpose of comparing this approach with the more traditional method of Term Frequency-Inverse Document Frequency (TF-IDF) vectorization. The aim is to shed light on how useful state-of-the-art transformer-based language models are for the construction of personalized information retrieval systems. Using transfer learning, the smallest version of GPT-2 is trained to rank and classify news articles, achieving similar results to the purely TF-IDF based approach. While the average Normalized Discounted Cumulative Gain (NDCG) achieved by the GPT-2 based model was about 0.74 percentage points higher, the sample size was too small to give these results high statistical certainty.
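NDCG, the evaluation measure the abstract above compares models by, can be computed from the list of graded relevance labels in the order the ranker produced them. A minimal sketch (the helper names are illustrative, not from the thesis):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: relevance discounted by log2 of the rank
    # (rank 1 gets divisor log2(2) = 1, rank 2 gets log2(3), ...).
    return sum(rel / math.log2(pos + 2) for pos, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

Averaging `ndcg` over many queries gives the mean NDCG figure that the thesis reports for the GPT-2 and TF-IDF rankers.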
Styles APA, Harvard, Vancouver, ISO, etc.
33

Chen, Xi. "Learning with Sparcity: Structures, Optimization and Applications." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/228.

Full text of the source
Abstract:
The development of modern information technology has enabled collecting data of unprecedented size and complexity. Examples include web text data, microarray & proteomics, and data from scientific domains (e.g., meteorology). To learn from these high dimensional and complex data, traditional machine learning techniques often suffer from the curse of dimensionality and unaffordable computational cost. However, learning from large-scale high-dimensional data promises big payoffs in text mining, gene analysis, and numerous other consequential tasks. Recently developed sparse learning techniques provide us a suite of tools for understanding and exploring high dimensional data from many areas in science and engineering. By exploring sparsity, we can always learn a parsimonious and compact model which is more interpretable and computationally tractable at application time. When it is known that the underlying model is indeed sparse, sparse learning methods can provide us a more consistent model and much improved prediction performance. However, the existing methods are still insufficient for modeling complex or dynamic structures of the data, such as those evidenced in pathways of genomic data, gene regulatory network, and synonyms in text data. This thesis develops structured sparse learning methods along with scalable optimization algorithms to explore and predict high dimensional data with complex structures. In particular, we address three aspects of structured sparse learning: 1. Efficient and scalable optimization methods with fast convergence guarantees for a wide spectrum of high-dimensional learning tasks, including single or multi-task structured regression, canonical correlation analysis as well as online sparse learning. 2. Learning dynamic structures of different types of undirected graphical models, e.g., conditional Gaussian or conditional forest graphical models. 3. 
Demonstrating the usefulness of the proposed methods in various applications, e.g., computational genomics and spatial-temporal climatological data. In addition, we also design specialized sparse learning methods for text mining applications, including ranking and latent semantic analysis. In the last part of the thesis, we also present the future direction of the high-dimensional structured sparse learning from both computational and statistical aspects.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Blumm, Nicolas C. "On the Purpose & Ethics of Elite Higher Education." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1713.

Full text of the source
Abstract:
This thesis explores the fundamental ethics and purpose of elite higher education. Beginning with an inquiry into the history of American higher education, this work reveals that the U.S. News & World Report “Best College” and “Best University” ranking lists hold an increasingly important role in distinguishing institutions, particularly those within the elite tier. Following an examination of the U.S. News’ methodology, this analysis confronts concerns with individual access to elite institutions. Although there are potential changes to the U.S. News’ methodology that could improve institutional assessment, this thesis does not propose alternative rankings. Rather, it focuses on many institutions’ problematic choice to use the rankings as a guide for admissions and institutional practice. This work evaluates the potentially stratifying components of elite institutions and questions what American higher education inculcates in students. This endeavor concludes by providing suggestions for how to democratize elite institutions in order to realize their respective missions and improve access to educational opportunities.
Chapter I: Introduction & Motivation
Chapter II: History
Chapter III: The U.S. News & World Report Rankings
Chapter IV: The Current System of Higher Education
Chapter V: For Society’s Benefit
Styles APA, Harvard, Vancouver, ISO, etc.
35

Lee, Joonseok. "Local approaches for collaborative filtering." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53846.

Full text of the source
Abstract:
Recommendation systems are emerging as an important business application as the demand for personalized services in E-commerce increases. Collaborative filtering techniques are widely used for predicting a user's preference or generating a list of items to be recommended. In this thesis, we develop several new approaches for collaborative filtering based on model combination and kernel smoothing. Specifically, we start with an experimental study that compares a wide variety of CF methods under different conditions. Based on this study, we formulate a combination model similar to boosting but where the combination coefficients are functions rather than constant. In another contribution we formulate and analyze a local variation of matrix factorization. This formulation constructs multiple local matrix factorization models and then combines them into a global model. This formulation is based on the local low-rank assumption, a slightly different but more plausible assumption about the rating matrix. We apply this assumption to both rating prediction and ranking problems, with both empirical validations and theoretical analysis. We contribute with this thesis in four aspects. First, the local approaches we present significantly improve the accuracy of recommendations both in rating prediction and ranking problems. Second, with the more realistic local low-rank assumption, we fundamentally change the underlying assumption for matrix factorization-based recommendation systems. Third, we present highly efficient and scalable algorithms which take advantage of parallelism, suited for recent large scale datasets. Lastly, we provide an open source software implementing the local approaches in this thesis as well as many other recent recommendation algorithms, which can be used both in research and production.
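The local low-rank approach described above builds on ordinary matrix factorization. As a hedged sketch of that global building block only: a plain stochastic-gradient factorizer for a sparse list of (user, item, rating) triples. The thesis's local variant would fit several such models around anchor points and combine them; all names and hyperparameters here are illustrative, not the author's implementation.

```python
import random

def factorize(ratings, n_users, n_items, rank=2, lr=0.05, reg=0.01,
              epochs=500, seed=0):
    # SGD matrix factorization: predictions are inner products of the
    # learned user and item factor vectors.
    rng = random.Random(seed)
    U = [[rng.gauss(0.0, 0.5) for _ in range(rank)] for _ in range(n_users)]
    V = [[rng.gauss(0.0, 0.5) for _ in range(rank)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(U[u][f] * V[i][f] for f in range(rank))
            for f in range(rank):
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * (err * vf - reg * uf)
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V
```

Under the local low-rank assumption, several factorizations of this kind, each weighted by a kernel around an anchor (user, item) pair, are averaged to form the final prediction.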
Styles APA, Harvard, Vancouver, ISO, etc.
36

Ben, Qingyan. "Flight Sorting Algorithm Based on Users’ Behaviour." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294132.

Full text of the source
Abstract:
The model predicts the best flight ordering and recommends the best flights to users. The thesis can be divided into three parts: feature selection, data preprocessing, and experiments with various algorithms. For feature selection, besides the original information about the flight itself, we add the user's selection state to the model: the chosen flight class and whether the user travels with children. In the data preprocessing stage, data cleaning is used to handle incomplete and duplicated data, and a normalization method removes noise from the data. Among the balancing treatments tried, the class imbalance in the data is best corrected with the SMOTE method. Based on the available data, we choose classification models and a sequential ranking algorithm, using price, direct flight or not, travel time, etc. as features, and click or no click as the label. The classification algorithms used include Logistic Regression, Gradient Boosting, KNN, Decision Tree, Random Forest, Gaussian Process Classifier, Gaussian Naive Bayes and Quadratic Discriminant Analysis. In addition, a sequential ranking algorithm is adopted. The results show that Random Forest with SMOTE performs best, with ROC AUC = 0.94 and accuracy = 0.8998.
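SMOTE, the balancing method that performed best above, creates synthetic minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. A minimal pure-Python sketch (the function name and parameters are illustrative, not the thesis's implementation, which would use a library such as imbalanced-learn):

```python
import random

def smote(minority, n_synthetic, k=3, seed=0):
    # For each synthetic point: pick a minority sample, find its k nearest
    # minority neighbours, and interpolate toward a random one of them.
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic
```

Because each new point lies on a segment between two real minority samples, the oversampled class stays inside the convex hull of the observed data rather than duplicating points exactly.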
Styles APA, Harvard, Vancouver, ISO, etc.
37

Zeng, Kaiman. "Next Generation of Product Search and Discovery." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2312.

Full text of the source
Abstract:
Online shopping has become an important part of people’s daily life with the rapid development of e-commerce. In some domains such as books, electronics, and CD/DVDs, online shopping has surpassed or even replaced the traditional shopping method. Compared with traditional retailing, e-commerce is information intensive. One of the key factors to succeed in e-business is how to facilitate the consumers’ approaches to discover a product. Conventionally a product search engine based on a keyword search or category browser is provided to help users find the product information they need. The general goal of a product search system is to enable users to quickly locate information of interest and to minimize users’ efforts in search and navigation. In this process human factors play a significant role. Finding product information could be a tricky task and may require an intelligent use of search engines, and a non-trivial navigation of multilayer categories. Searching for useful product information can be frustrating for many users, especially those inexperienced users. This dissertation focuses on developing a new visual product search system that effectively extracts the properties of unstructured products, and presents the possible items of attraction to users so that the users can quickly locate the ones they would be most likely interested in. We designed and developed a feature extraction algorithm that retains product color and local pattern features, and the experimental evaluation on the benchmark dataset demonstrated that it is robust against common geometric and photometric visual distortions. Besides, instead of ignoring product text information, we investigated and developed a ranking model learned via a unified probabilistic hypergraph that is capable of capturing correlations among product visual content and textual content. 
Moreover, we proposed and designed a fuzzy hierarchical co-clustering algorithm for collaborative filtering product recommendation. Via this method, users can be automatically grouped into different interest communities based on their behaviors, and a customized recommendation can then be performed according to these implicitly detected relations. In summary, the developed search system performs much better in visual unstructured product search than state-of-the-art approaches. With the comprehensive ranking scheme and the collaborative filtering recommendation module, the user's overhead in locating information of value is reduced, and the user's experience of seeking useful product information is improved.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Alaya, Mili Nourhene. "Managing the empirical hardness of the ontology reasoning using the predictive modelling." Thesis, Paris 8, 2016. http://www.theses.fr/2016PA080062/document.

Full text of the source
Abstract:
Highly optimized reasoning algorithms have been developed to support inference tasks over expressive ontology languages such as OWL (DL). Nevertheless, reasoning remains challenging in practice: overall, a reasoner may be optimized for some, but not all, ontologies. Given these observations, the main purpose of this thesis is to investigate means of coping with the phenomenon of reasoner performance variability. We opted for supervised learning as the kernel theory to guide the design of our solution. Our main claim is that the output quality of a reasoner depends closely on the quality of the ontology. Accordingly, we first introduce a novel collection of features which characterise the design quality of an OWL ontology. Afterwards, we model a generic learning framework to help predict the overall empirical hardness of an ontology and to anticipate a reasoner's robustness under some online usage constraints. We then discuss the issue of automatic reasoner selection for ontology-based applications and introduce a novel reasoner ranking framework, with correctness and efficiency as our main ranking criteria. We propose two distinct methods: i) ranking based on single-label prediction, and ii) a multi-label ranking method. Finally, we suggest extracting the ontology sub-parts that are the most computationally demanding. Our method relies on the atomic decomposition and locality module extraction techniques and employs our predictive model of ontology hardness. Extensive experiments were carried out to demonstrate the merit of our approaches. All of our proposals are gathered in a user assistance system called "ADSOR".
Styles APA, Harvard, Vancouver, ISO, etc.
39

Clémençon, Stéphan. "Résumé des Travaux en Statistique et Applications des Statistiques." Habilitation à diriger des recherches, Université de Nanterre - Paris X, 2006. http://tel.archives-ouvertes.fr/tel-00138299.

Full text of the source
Abstract:
Ce rapport présente brièvement l'essentiel de mon activité de recherche depuis ma thèse de doctorat [53], laquelle visait principalement à étendre l'utilisation des progrès récents de l'Analyse Harmonique Algorithmique pour l'estimation non paramétrique adaptative dans le cadre d'observations i.i.d. (tels que l'analyse par ondelettes) à l'estimation statistique pour des données markoviennes. Ainsi qu'il est éxpliqué dans [123], des résultats relatifs aux propriétés de concentration de la mesure (i.e. des inégalités de probabilité et de moments sur certaines classes fonctionnelles, adaptées à l'approximation non linéaire) sont indispensables pour exploiter ces outils d'analyse dans un cadre probabiliste et obtenir des procédures d'estimation statistique dont les vitesses de convergence surpassent celles de méthodes antérieures. Dans [53] (voir également [54], [55] et [56]), une méthode d'analyse fondée sur le renouvellement, la méthode dite 'régénérative' (voir [185]), consistant à diviser les trajectoires d'une chaîne de Markov Harris récurrente en segments asymptotiquement i.i.d., a été largement utilisée pour établir les résultats probabilistes requis, le comportement à long terme des processus markoviens étant régi par des processus de renouvellement (définissant de façon aléatoire les segments de la trajectoire). Une fois l'estimateur construit, il importe alors de pouvoir quantifier l'incertitude inhérente à l'estimation fournie (mesurée par des quantiles spécifiques, la variance ou certaines fonctionnelles appropriées de la distribution de la statistique considérée). A cet égard et au delà de l'extrême simplicité de sa mise en oeuvre (puisqu'il s'agit simplement d'eectuer des tirages i.i.d. 
dans l'échantillon de départ et recalculer la statistique sur le nouvel échantillon, l'échantillon bootstrap), le bootstrap possède des avantages théoriques majeurs sur l'approximation asymptotique gaussienne (la distribution bootstrap approche automatiquement la structure du second ordre dans le développement d'Edegworth de la distribution de la statistique). Il m'est apparu naturel de considérer le problème de l'extension de la procédure traditionnelle de bootstrap aux données markoviennes. Au travers des travaux réalisés en collaboration avec Patrice Bertail, la méthode régénérative s'est avérée non seulement être un outil d'analyse puissant pour établir des théorèmes limites ou des inégalités, mais aussi pouvoir fournir des méthodes pratiques pour l'estimation statistique: la généralisation du bootstrap proposée consiste à ré-échantillonner un nombre aléatoire de segments de données régénératifs (ou d'approximations de ces derniers) de manière à imiter la structure de renouvellement sous-jacente aux données. Cette approche s'est révélée également pertinente pour de nombreux autres problèmes statistiques. Ainsi la première partie du rapport vise essentiellement à présenter le principe des méthodes statistiques fondées sur le renouvellement pour des chaînes de Markov Harris. La seconde partie du rapport est consacrée à la construction et à l'étude de méthodes statistiques pour apprendre à ordonner des objets, et non plus seulement à les classer (i.e. leur aecter un label), dans un cadre supervisé. Ce problème difficile est d'une importance cruciale dans de nombreux domaines d' application, allant de l'élaboration d'indicateurs pour le diagnostic médical à la recherche d'information (moteurs de recherche) et pose d'ambitieuses questions théoriques et algorithmiques, lesquelles ne sont pas encore résolues de manière satisfaisante. 
One possible approach consists in reducing the problem to the classification of pairs of observations, as suggested by a criterion widely used in the applications mentioned above (the AUC criterion) to assess the relevance of an ordering. In joint work with Gabor Lugosi and Nicolas Vayatis, several results were obtained in this direction, requiring the study of U-processes: the novel aspect of the problem lies in the fact that the natural risk estimator here takes the form of a U-statistic. However, in many applications such as information retrieval, only the ordering of the most relevant objects really matters, and the search for criteria corresponding to such problems (known as local ranking problems), and for algorithms building rules that produce rankings optimal with respect to them, is a crucial challenge in this field. Several developments along these lines were carried out in a series of works (still ongoing) in collaboration with Nicolas Vayatis. Finally, the third part of the report reflects my interest in applications of probabilistic concepts and statistical methods. Owing to my initial training, I was naturally led to consider applications in finance first. And although historical approaches are generally not popular in this field, I gradually became convinced of the important role that nonparametric statistical methods can play in analyzing the massive data sets (very high-dimensional and of 'high-frequency' nature) available in finance, in order to detect hidden structures and take advantage of them, for instance for market risk assessment or portfolio management. This viewpoint is illustrated by the brief presentation, in this third part, of the work carried out along these lines in collaboration with Skander Slim.
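The pairwise reduction behind the AUC criterion can be made concrete: the empirical AUC is a two-sample U-statistic over positive/negative score pairs. A minimal sketch (toy scores, not tied to any of the cited works):

```python
def empirical_auc(scores_pos, scores_neg):
    """Two-sample U-statistic: the fraction of positive/negative pairs
    ranked concordantly by the score (ties count one half)."""
    concordant = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                concordant += 1.0
            elif sp == sn:
                concordant += 0.5
    return concordant / (len(scores_pos) * len(scores_neg))
```

Maximizing this quantity over a class of scoring rules is exactly the pairwise classification problem mentioned above, which is why U-process tools are required for its analysis.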
In recent years, I have had the opportunity to meet applied mathematicians and scientists working in other fields that can also benefit from advances in probabilistic modeling and statistical methods. I was thus able to tackle applications in toxicology, more precisely the problem of assessing the risk of dietary contamination, during my year of secondment at the Institut National de la Recherche Agronomique within the Metarisk unit, a multidisciplinary unit entirely devoted to dietary risk analysis. For instance, in joint work with Patrice Bertail and Jessica Tressou, I was able to use my expertise in Markovian modeling to propose a stochastic model describing the temporal evolution of the quantity of contaminant present in the body (so as to account both for the accumulation due to successive intakes and for the contaminant-specific pharmacokinetics governing the elimination process), together with suitable statistical inference methods. This line of research is still ongoing and may eventually provide a basis for recommendations in public health. Moreover, I am currently fortunate to be working with Hector de Arazoza, Bertran Auvert, Patrice Bertail, Rachid Lounes and Viet-Chi Tran on the stochastic modeling of the HIV epidemic based on the epidemiological data collected on the Cuban population, which constitute one of the best-documented databases on the evolution of an epidemic of this type.
And although this project essentially aims at obtaining a numerical model (allowing short-term forecasts of the incidence of the epidemic, for instance so as to plan the production of the required quantity of antiretroviral drugs), it has led us to tackle ambitious theoretical questions, ranging from the existence of a quasi-stationary measure describing the long-term evolution of the epidemic to problems related to the incompleteness of the available epidemiological data. Unfortunately, I cannot discuss these questions here without the risk of misrepresenting them; the presentation of the mathematical problems encountered in this project would deserve a full report of its own.
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Peel, Thomas. "Algorithmes de poursuite stochastiques et inégalités de concentration empiriques pour l'apprentissage statistique." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4769/document.

Повний текст джерела
Анотація:
The first part of this thesis introduces new algorithms for the sparse decomposition of signals. Based on Matching Pursuit (MP), they address the following problem: how to reduce the computation time of the selection step of MP, which is often very costly. As an answer, we sub-sample the dictionary in rows and columns at each iteration. We show that this theoretically grounded approach performs well in practice. We then propose a block coordinate gradient descent algorithm for feature selection in the multiclass classification setting. Thanks to the use of error-correcting output codes, this task can be cast as a simultaneous sparse signal representation problem. The second part presents new empirical Bernstein-type concentration inequalities. The first ones concern the theory of U-statistics and are used to derive generalization bounds for ranking algorithms. These bounds take advantage of a variance estimator, for which we propose an efficient computation algorithm. We then present an empirical version of the Bernstein-type inequality for martingales proposed by Freedman [1975]. Again, the strength of our result lies in a variance estimator computable from the data. This allows us to derive generalization bounds for online learning algorithms that improve on the state of the art and pave the way for a new family of learning algorithms taking advantage of this empirical information.
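As an illustration of the selection-step idea (not the thesis code), here is a hedged sketch of Matching Pursuit in which each selection scans only random subsets of the dictionary's rows and columns, while the residual update still uses the full selected atom; all function and parameter names are invented for the example.

```python
import numpy as np

def subsampled_mp(signal, dictionary, n_iter=10, col_frac=0.5, row_frac=0.5, seed=0):
    """Matching Pursuit where the (costly) selection step only scans a random
    subset of rows and columns of the dictionary at each iteration."""
    rng = np.random.default_rng(seed)
    n, m = dictionary.shape
    residual = signal.astype(float).copy()
    coefs = np.zeros(m)
    for _ in range(n_iter):
        cols = rng.choice(m, max(1, int(col_frac * m)), replace=False)
        rows = rng.choice(n, max(1, int(row_frac * n)), replace=False)
        # selection step: correlations computed on the sub-sampled entries only
        corr = dictionary[np.ix_(rows, cols)].T @ residual[rows]
        j = cols[np.argmax(np.abs(corr))]
        atom = dictionary[:, j]
        # residual update: full inner product with the selected atom
        alpha = atom @ residual / (atom @ atom)
        coefs[j] += alpha
        residual -= alpha * atom
    return coefs, residual
```

The residual norm is non-increasing by construction; the gain is that the selection step costs only a fraction of the full dictionary scan.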
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Ouni, Zaïd. "Statistique pour l’anticipation des niveaux de sécurité secondaire des générations de véhicules." Thesis, Paris 10, 2016. http://www.theses.fr/2016PA100099/document.

Повний текст джерела
Анотація:
Road safety is a global, European and French priority. Because light vehicles (or simply "vehicles") are obviously one of the main actors of road activity, improving road safety necessarily requires analyzing their characteristics in terms of traffic road accidents (or simply "accidents"). If new vehicles are developed in engineering departments and validated in laboratory, it is the reality of real-life accidents that ultimately characterizes them in terms of secondary safety, i.e., that demonstrates what level of safety they offer to their occupants in case of an accident. This is why car makers want to rank generations of vehicles according to their real-life levels of safety. We address this problem by exploiting a French data set of accidents called BAAC (Bulletin d'Analyse d'Accident Corporel de la Circulation). In addition, fleet data are used to associate a generational class (GC) with each vehicle. We elaborate two methods for ranking GCs in terms of secondary safety. The first one yields contextual rankings, i.e., rankings of GCs in specified contexts of accident. The second one yields global rankings, i.e., rankings of GCs determined relative to a distribution of contexts of accident. For the contextual ranking, we proceed by "scoring": we look for a score function that associates a real number with any combination of GC and context of accident; the smaller this number, the safer the GC in the given context. The optimal score function is estimated by "ensemble learning", in the form of an optimal convex combination of scoring functions produced by a library of ranking-by-scoring algorithms. An oracle inequality illustrates the performance of the resulting meta-algorithm. The global ranking is also based on "scoring": we look for a scoring function that associates a real number with any GC; the smaller this number, the safer the GC. Causal arguments are used to adapt the above meta-algorithm by averaging out the context. The results of the two ranking procedures are in line with the experts' expectations.
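The "optimal convex combination of a library of scoring functions" idea can be illustrated on a toy two-function library. The actual meta-algorithm relies on cross-validation, a larger library, and an oracle inequality, so the grid search below is only a hedged sketch with invented names.

```python
def auc(scores, labels):
    """Empirical AUC of a score (labels are 1/0; ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    c = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return c / (len(pos) * len(neg))

def best_convex_combination(score_a, score_b, labels, grid=101):
    """Pick the convex weight w in [0, 1] maximizing validation AUC of
    w * score_a + (1 - w) * score_b (a two-function 'library')."""
    best_w, best_auc = 0.0, -1.0
    for k in range(grid):
        w = k / (grid - 1)
        combined = [w * a + (1 - w) * b for a, b in zip(score_a, score_b)]
        value = auc(combined, labels)
        if value > best_auc:
            best_w, best_auc = w, value
    return best_w, best_auc
```

With a real library, the weight vector lives on a simplex of higher dimension and is selected by cross-validation rather than an exhaustive grid.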
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Лозова, К. А. "Оцінювання якості професорсько-викладацького складу ВНЗ у дистанційній освіті". Thesis, Сумський державний університет, 2015. http://essuir.sumdu.edu.ua/handle/123456789/39684.

Повний текст джерела
Анотація:
The transition from classical forms of education to the more competitive forms of distance learning rests on changes both in the organization of the learning process and in the approaches to defining the quality of education. The priority given to individual indicators in the rating assessment of universities, faculties and lecturers is changing accordingly.
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Mehta, Khushang Samir. "Using Machine Learning for Incremental Aggregation of Collaborative Rankings." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613745957050039.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Depecker, Marine. "Méthodes d'apprentissage statistique pour le scoring." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00572421.

Повний текст джерела
Анотація:
This thesis deals with the development of a nonparametric method for the supervised learning of ranking rules from binary-labeled data. The method relies on recursive partitioning of the observation space and generalizes the notion of decision tree to the ranking problem, the resulting scoring rules being representable graphically by oriented binary trees. In order to provide a flexible learning method, we introduce a procedure allowing, at each iteration of the algorithm, the observation space to be split according to various adaptive and complex rules chosen according to the problem at hand. Moreover, to counter overfitting, we propose two model selection procedures based on maximizing the empirical AUC penalized by a measure of model complexity. Finally, in order to reduce the instability of ranking trees, inherent to the way they are built, we adapt two procedures for aggregating resampled prediction rules: bagging (Breiman, 1996) and Random Forests (Breiman, 2001). A comparative empirical study of several configurations of the algorithm and some state-of-the-art methods is presented, together with an application to the industrial problem of objectifying the performance of a motor vehicle. In addition, we exploit this scoring method to introduce a heuristic homogeneity test between two populations, generalizing rank tests to the multidimensional case.
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Guan, Wei. "New support vector machine formulations and algorithms with application to biomedical data analysis." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41126.

Повний текст джерела
Анотація:
The Support Vector Machine (SVM) classifier seeks the separating hyperplane w·x = r that maximizes the margin distance 1/||w||₂². It can be formalized as an optimization problem that minimizes the hinge loss Σᵢ (1 − yᵢ f(xᵢ))₊ plus the L₂-norm of the weight vector. SVM is now a mainstay method of machine learning. The goal of this dissertation work is to solve different biomedical data analysis problems efficiently using extensions of SVM, in which we augment the standard SVM formulation based on the application requirements. The biomedical applications we explore in this thesis include: cancer diagnosis, biomarker discovery, and energy function learning for protein structure prediction. Ovarian cancer diagnosis is problematic because the disease is typically asymptomatic, especially at early stages of progression and/or recurrence. We investigate a sample set consisting of 44 women diagnosed with serous papillary ovarian cancer and 50 healthy women or women with benign conditions. We profile the relative metabolite levels in the patient sera using a high-throughput ambient ionization mass spectrometry technique, Direct Analysis in Real Time (DART). We then reduce the diagnostic classification on these metabolic profiles to a functional classification problem and solve it with the functional Support Vector Machine (fSVM) method. The assay distinguished between the cancer and control groups with an unprecedented 99% accuracy (100% sensitivity, 98% specificity) under leave-one-out cross-validation. This approach has significant clinical potential as a cancer diagnostic tool. High-throughput technologies provide simultaneous evaluation of thousands of potential biomarkers to distinguish different patient groups. In order to assist biomarker discovery from these low-sample-size, high-dimensional cancer data, we first explore a convex relaxation of the L₀-SVM problem and solve it using mixed-integer programming techniques.
We further propose a more efficient L₀-SVM approximation, the fractional-norm SVM, by replacing the L₂-penalty with an Lq-penalty (q in (0,1)) in the optimization formulation. We solve it through the Difference of Convex functions (DC) programming technique. Empirical studies on synthetic data sets as well as real-world biomedical data sets support the effectiveness of our proposed L₀-SVM approximation methods over other commonly-used sparse SVM methods such as the L₁-SVM method. A critical open problem in ab initio protein folding is protein energy function design. We reduce the problem of learning an energy function for ab initio folding to a standard machine learning problem, learning-to-rank. Based on the application requirements, we constrain the reduced ranking problem with non-negative weights and develop two efficient algorithms for non-negativity constrained SVM optimization. We conduct an empirical study on an energy data set for random conformations of 171 proteins that fall into the ab initio folding class. We compare our approach with the optimization approach used in the protein structure prediction tool TASSER. Numerical results indicate that our approach was able to learn energy functions with improved rank statistics (evaluated by pairwise agreement) as well as improved correlation between the total energy and structural dissimilarity.
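Non-negativity constrained SVM optimization can be sketched with a simple projected subgradient method; this is an illustrative assumption on our part, not one of the two algorithms developed in the dissertation.

```python
import numpy as np

def nonneg_svm(X, y, lam=0.1, lr=0.05, n_epochs=200):
    """Projected subgradient descent on the L2-regularized hinge loss,
    projecting the weight vector onto the non-negative orthant after
    each step (the kind of constraint used for energy-function weights)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        margins = y * (X @ w)
        active = margins < 1                          # points violating the margin
        grad = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        w = np.maximum(w - lr * grad, 0.0)            # projection: clip at zero
    return w
```

The projection keeps every learned weight non-negative while the hinge-loss subgradient drives the classification error down.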
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Liu, Xialei. "Visual recognition in the wild: learning from rankings in small domains and continual learning in new domains." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/670154.

Повний текст джерела
Анотація:
Deep convolutional neural networks (CNNs) have achieved superior performance in many visual recognition applications, such as image classification, detection and segmentation. In this thesis we address two limitations of CNNs. Training deep CNNs requires huge amounts of labeled data, which is expensive and labor intensive to collect. Another limitation is that training CNNs in a continual learning setting is still an open research question. Catastrophic forgetting is very likely when adapting trained models to new environments or new tasks. Therefore, in this thesis, we aim to improve CNNs for applications with limited data and to adapt CNNs continually to new tasks. Self-supervised learning leverages unlabelled data by introducing an auxiliary task for which data is abundantly available. In the first part of the thesis, we show how rankings can be used as a proxy self-supervised task for regression problems. Then we propose an efficient backpropagation technique for Siamese networks which prevents the redundant computation introduced by the multi-branch network architecture. In addition, we show that measuring network uncertainty on the self-supervised proxy task is a good measure of the informativeness of unlabeled data. This can be used to drive an algorithm for active learning. We then apply our framework to two regression problems: Image Quality Assessment (IQA) and Crowd Counting. For both, we show how to automatically generate ranked image sets from unlabeled data. Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results. We further show that active learning using rankings can reduce labeling effort by up to 50% for both IQA and crowd counting. In the second part of the thesis, we propose two approaches to avoiding catastrophic forgetting in sequential task learning scenarios.
The first approach is derived from Elastic Weight Consolidation, which uses a diagonal Fisher Information Matrix (FIM) to measure the importance of the parameters of the network. However, the diagonal assumption is unrealistic. Therefore, we approximately diagonalize the FIM using a set of factorized rotation parameters. This leads to significantly better performance on continual learning of sequential tasks. For the second approach, we show that forgetting manifests differently at different layers in the network and propose a hybrid approach where distillation is used in the feature extractor and replay in the classifier via feature generation. Our method addresses the limitations of generative image replay and probability distillation (i.e. learning without forgetting) and can naturally aggregate new tasks in a single, well-calibrated classifier. Experiments confirm that our proposed approach outperforms the baselines and some state-of-the-art methods.
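The baseline that the first approach builds on, Elastic Weight Consolidation with a diagonal Fisher, can be sketched on a toy logistic model. This is a hedged illustration only; the thesis contribution (the factorized-rotation diagonalization) is not reproduced here, and all names are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, w=None, fisher=None, w_old=None, ewc_lam=0.0,
                 lr=0.1, n_epochs=300):
    """Gradient descent on the logistic loss, optionally adding the EWC
    quadratic penalty (ewc_lam / 2) * sum_i F_i (w_i - w_old_i)^2."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(n_epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        if fisher is not None:
            grad = grad + ewc_lam * fisher * (w - w_old)   # pull toward old task
        w -= lr * grad
    return w

def diagonal_fisher(X, y, w):
    """Diagonal of the empirical Fisher information: mean squared
    per-sample gradient of the log-likelihood at w."""
    per_sample = (sigmoid(X @ w) - y)[:, None] * X
    return (per_sample ** 2).mean(axis=0)
```

Parameters that were important for the first task (large Fisher entries) are anchored near their old values when training on the second task; unimportant ones remain free to move.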
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Prati, Ronaldo Cristiano. ""Novas abordagens em aprendizado de máquina para a geração de regras, classes desbalanceadas e ordenação de casos"." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-01092006-155445/.

Повний текст джерела
Анотація:
Machine learning algorithms are often the most appropriate for a great variety of data mining applications. However, most machine learning research to date has dealt with the well-circumscribed problem of finding a model (generally a classifier) given a single, small and relatively clean dataset in the attribute-value form, where the attributes have previously been chosen to facilitate learning. Furthermore, the end goal is simple and well-defined, such as accurate classifiers in the classification problem. Data mining opens up new directions for machine learning research and lends new urgency to others. With data mining, machine learning is now removing each of these constraints. Machine learning's many valuable contributions to data mining are therefore reciprocated by the latter's invigorating effect on it. In this thesis, we explore this interaction by proposing new solutions to some problems that arise from the application of machine learning algorithms to data mining. More specifically, we contribute to the following problems. New approaches to rule learning. In this category, we propose two new methods. The first is a method for finding exceptions to general rules. The second is a rule selection algorithm based on the ROC graph: rules come from an external, larger set of rules, and the algorithm performs a selection step based on the current convex hull in the ROC graph. Proportion of examples among classes. We investigated several aspects related to this issue. Firstly, we carried out a series of experiments on artificial data sets in order to verify our hypothesis that overlapping among classes is a complicating factor in highly skewed data sets. We also carried out a broad experimental analysis of several methods (some of them proposed by us) that artificially balance skewed datasets.
Our experiments show that, in general, over-sampling methods perform better than under-sampling methods. Finally, we investigated the relationship between class imbalance and small disjuncts, as well as the influence of the proportion of examples among classes on the process of labelling unlabelled cases in the semi-supervised learning algorithm Co-training. New method for combining rankings. We propose a new method, called BordaRanking, for constructing ensembles of rankings based on borda count voting, which can be applied whenever only the rankings are available. Results show an improvement over the individual base rankings, as well as performance comparable to the fusion of the continuous-valued scores output by the underlying classifiers.
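The borda count fusion the abstract describes can be illustrated with a short sketch (the function name and the toy rankings below are illustrative assumptions, not taken from the thesis):

```python
# Minimal sketch of borda-count rank fusion (illustrative, not the
# BordaRanking implementation). Each ranking lists item ids, best first;
# an item at position i of an n-item ranking earns n - i points, and
# items are re-ranked by their total points across all rankings.

def borda_fuse(rankings):
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - position)
    # Highest total first; ties broken by item id for determinism.
    return sorted(scores, key=lambda item: (-scores[item], item))

fused = borda_fuse([
    ["a", "b", "c", "d"],   # ranker 1
    ["b", "a", "d", "c"],   # ranker 2
    ["a", "c", "b", "d"],   # ranker 3
])
# fused -> ["a", "b", "c", "d"] (totals: a=11, b=9, c=6, d=4)
```

Note that only the orderings are needed, not the classifiers' scores, which is the point of the method.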
APA, Harvard, Vancouver, ISO, and other styles
48

Efes, Stergios. "Zero-shot, One Kill: BERT for Neural Information Retrieval." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444835.

Full text of the source
Abstract:
[Background]: The advent of bidirectional encoder representations from transformers (BERT) language models (Devlin et al., 2018) and of MS Marco, a large-scale, publicly available human-annotated dataset for machine reading comprehension (Bajaj et al., 2016), led the field of information retrieval (IR) to experience a revolution (Lin et al., 2020). The BERT-based retrieval model of Nogueira and Cho (2019) became, at the time of publication, the top entry on the MS Marco passage re-ranking leaderboard, surpassing the previous state of the art by 27% in MRR@10. However, training such neural IR models for domains other than MS Marco is still hard, because neural approaches often require a vast amount of training data to perform effectively, which is not always available. To address the shortage of labelled data, a new line of research emerged: training neural models with weak supervision. In weak supervision, given an unlabelled dataset, labels are generated automatically using an existing model, and a machine learning model is then trained on the artificial "weak" data. In the case of weak supervision for IR, the training dataset comes in the form of (query, passage) tuples. Dehghani et al. (2017) used the AOL query logs (Pass et al., 2006), a set of millions of real web queries, and BM25 to retrieve the relevant passages for each user query. A drawback of this approach is that it is hard to obtain query logs for every domain. [Objective]: This thesis proposes an intuitive approach to addressing the shortage of data in domains with limited or no data at all, through transfer learning in the context of IR. We leverage Wikipedia's structure to create a Wikipedia-based generic IR training dataset for zero-shot neural models.
[Method]: We create "pseudo-queries" by concatenating the titles of Wikipedia articles with each of their section titles, and we take the associated section's passage as the relevant passage for the pseudo-query. All of our experiments are evaluated on a standard collection: MS Marco, a large-scale web collection. For our zero-shot experiments, our proposed model, called "Wiki", is a BERT model trained on the artificial Wikipedia-based dataset, and the baseline is a default BERT model without any additional training. In our second line of experiments, we explore the benefits of pre-fine-tuning on the Wikipedia-based IR dataset followed by further fine-tuning on in-domain data. Our proposed model, "Wiki+Ma", is a BERT model pre-fine-tuned on the Wikipedia-based dataset and further fine-tuned on MS Marco, while the baseline is a BERT model fine-tuned only on MS Marco. [Results]: Our first experiments show that the "Wiki" model achieves 0.197 in MRR@10, about 10 points higher than a BERT model with default weights; in addition, results on the development set indicate that the "Wiki" model performs better than a BERT model trained on in-domain data when that data comprises between 10k and 50k instances. Our second line of experiments shows that pre-fine-tuning on the Wikipedia-based IR dataset benefits later fine-tuning on in-domain data in terms of stability. [Conclusion]: Our findings suggest that transfer learning for IR tasks by leveraging the generic knowledge incorporated in Wikipedia is possible, though more experimentation is needed to understand its limitations relative to traditional approaches such as BM25.
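The MRR@10 metric these results are reported in can be sketched as follows (illustrative code with toy judgments; names are assumptions, not from the thesis). For each query, the reciprocal of the rank of the first relevant passage within the top 10 is averaged over all queries:

```python
# Minimal sketch of MRR@10: mean reciprocal rank with a cutoff of 10.
# `results` holds, per query, the ranked passage ids and the set of
# relevant passage ids.

def mrr_at_10(results):
    total = 0.0
    for ranked, relevant in results:
        for rank, passage in enumerate(ranked[:10], start=1):
            if passage in relevant:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(results)

score = mrr_at_10([
    (["p3", "p7", "p1"], {"p7"}),   # first relevant at rank 2 -> 0.5
    (["p2", "p4", "p9"], {"p2"}),   # first relevant at rank 1 -> 1.0
    (["p5", "p6", "p8"], {"p1"}),   # no relevant in top 10    -> 0.0
])
# score -> 0.5
```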
APA, Harvard, Vancouver, ISO, and other styles
49

Gentile, Teresa Anna Rita, Ernesto De Nito, and Walter Vesperi. "A Survey on Knowledge Management in Universities in the QS Rankings: E-learning and MOOCs." TUDpress, 2016. https://tud.qucosa.de/id/qucosa%3A33980.

Full text of the source
Abstract:
Purpose – Many public organizations employ Information Technology (IT) in Knowledge Management (KM) (Silwattananusarn and Tuamsuk, 2012; Alavi and Leidner, 2001; Chatti et al., 2007). Within universities, the use of IT can enable the creation and development of knowledge (Joia, 2000; Garcia, 2007; Tian et al., 2009; Sandelands, 1997); improve knowledge sharing (Aurelie Bechina Arntzen et al., 2009; Alavi and Gallupe, 2003); and foster communities of practice (Adams and Freeman, 2000). In educational organizations, IT is also a tool for improving the quality of learning (EC, 2000). E-learning is based on digital technologies (Aspen Institute Italy, 2014) and multiple teaching methods (Derouin et al., 2005), serving as a tool for KM (Wild et al., 2002). The websites of some universities allow anyone to follow free lessons over the internet. These free online courses are known as Massive Open Online Courses (MOOCs) (EC, 2014; Sinclair et al., 2015). The purpose of this study is to identify the type of teaching adopted by European universities and to understand how training through e-learning can improve the processes of transmitting and sharing knowledge, allowing everyone, not only students, to take lessons through the web. Design/methodology/approach – The analysis collects data on universities by region through the study of the websites of the top 100 European universities in the Quacquarelli Symonds ranking, "QS World University Rankings 2015/16". Data were collected in a purpose-built database recording, for each university, its status (public/private), size, age, number of enrolled students, and website references. This spreadsheet also recorded the type of educational offer provided by each university, with particular reference to the provision of online courses and courses open to all.
Originality/value – The article provides a detailed study of the use of technology in the educational context. The exploration allows other, unranked universities to design online teaching styles for sharing knowledge. Practical implications – The survey is currently the first step of a larger project that aims to analyse the different types of e-learning platform used for online teaching by the 100 universities in the European QS ranking. The results of this first phase show that all the surveyed European universities provide training not only through classroom lessons but also through a variety of e-learning courses, some of them offered free through MOOCs.
APA, Harvard, Vancouver, ISO, and other styles
50

Coimbra, Andre Rodrigues. "Método automático para descoberta de funções de ordenação utilizando programação genética paralela em GPU." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4525.

Full text of the source
Abstract:
Ranking functions play a vital role in the performance of information retrieval systems, ensuring that the documents most related to the user's search need – represented as a query – appear in the top results, sparing the user from examining a range of documents that are not really relevant. This work therefore uses Genetic Programming (GP), an Evolutionary Computation technique, to find ranking functions automatically and systematically. Moreover, in this project the GP technique was developed following a strategy that exploits parallelism through graphics processing units. Other methods known in the information retrieval context, such as classification committees and the Lazy strategy, were combined with the proposed approach – called Finch. These combinations were only feasible due to the nature of GP and the use of parallelism. The experimental results with Finch, regarding the quality of the ranking functions, surpassed the results of several strategies known in the literature. Significant gains were also achieved in running time: the solution exploiting parallelism takes about one twentieth of the time of the solution using only the central processing unit.
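The core idea of evolving ranking functions – candidate functions represented as expression trees over per-document statistics and scored on a judged collection – can be sketched minimally (a toy illustration with assumed names and data; the actual Finch system is far more elaborate and GPU-parallel):

```python
# Toy sketch of GP-style search for a ranking function (illustrative only).
# Candidate functions are expression trees over tf and idf; fitness is the
# fraction of (relevant, irrelevant) document pairs ordered correctly.
import random

random.seed(0)

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def random_tree(depth=2):
    # Leaf: a document statistic; internal node: an operator over subtrees.
    if depth == 0 or random.random() < 0.3:
        return random.choice(["tf", "idf"])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, stats):
    if isinstance(tree, str):
        return stats[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, stats), evaluate(right, stats))

def fitness(tree, judged_docs):
    rel = [d for d, is_rel in judged_docs if is_rel]
    irr = [d for d, is_rel in judged_docs if not is_rel]
    correct = sum(1 for d_r in rel for d_i in irr
                  if evaluate(tree, d_r) > evaluate(tree, d_i))
    return correct / (len(rel) * len(irr))

docs = [({"tf": 3, "idf": 1.2}, True), ({"tf": 1, "idf": 0.4}, False),
        ({"tf": 2, "idf": 0.9}, True), ({"tf": 1, "idf": 0.2}, False)]
# Random search stands in for the evolutionary loop of a real GP system.
best = max((random_tree() for _ in range(50)), key=lambda t: fitness(t, docs))
```

A real GP system would add crossover, mutation, and generational selection on top of this fitness evaluation, which is the step Finch parallelizes on the GPU.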
APA, Harvard, Vancouver, ISO, and other styles