Dissertations on the topic "Online algorithm with advice"


Familiarize yourself with the top 50 dissertations for research on the topic "Online algorithm with advice".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Renault, Marc Paul. „Lower and upper bounds for online algorithms with advice“. Paris 7, 2014. http://www.theses.fr/2014PA077196.

Annotation:
Online algorithms operate in a setting where the input is revealed piece by piece; the pieces are called requests. After receiving each request, online algorithms must take an action before the next request is revealed, i.e., online algorithms must make irrevocable decisions based on the input revealed so far, without any knowledge of the future input. The goal is to optimize some cost function over the input. Competitive analysis is the standard method used to analyse the quality of online algorithms. The competitive ratio is the worst-case ratio, over all valid finite request sequences, of the online algorithm's performance against an optimal offline algorithm for the same request sequence. The competitive ratio compares the performance of an algorithm with no knowledge about the future against an algorithm with full knowledge about the future. Since the complete absence of future knowledge is often not a reasonable assumption, models, termed online algorithms with advice, have been proposed that give the online algorithm access to a quantified amount of future knowledge. The interest in this model lies in examining how the competitive ratio changes as a function of the amount of advice. In this thesis, we present upper and lower bounds in the advice model for classical online problems such as the k-server problem, the bin packing problem, the dual bin packing (multiple knapsack) problem, scheduling on m identical machines, the reordering buffer management problem and the list update problem.
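
The central quantity in this abstract can be stated compactly. The following is the textbook definition of the competitive ratio for a minimization problem, extended with an advice budget; the notation is generic, not quoted from the thesis itself:

```latex
\[
  c(\mathrm{ALG}) = \sup_{\sigma} \frac{\mathrm{ALG}(\sigma)}{\mathrm{OPT}(\sigma)},
  \qquad
  c(b) = \inf_{\substack{\mathrm{ALG}\ \text{reading at most}\\ b\ \text{advice bits}}}
         \; \sup_{\sigma} \frac{\mathrm{ALG}(\sigma)}{\mathrm{OPT}(\sigma)},
\]
% where \sigma ranges over all valid finite request sequences and the advice
% bits are written by an oracle that knows the entire input. Upper and lower
% bounds in the advice model describe how c(b) decreases as the budget b grows.
```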
2

Jin, Shendan. „Online computation beyond standard models“. Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS152.

Annotation:
In the standard setting of online computation, the input is not entirely available from the beginning, but is revealed incrementally, piece by piece, as a sequence of requests. Whenever a request arrives, the online algorithm has to make an immediate, irrevocable decision to serve the request, without knowledge of the future requests. The standard framework to evaluate the performance of online algorithms is competitive analysis, which compares the worst-case performance of an online algorithm to an offline optimal solution. In this thesis, we study some new ways of looking at online problems. First, we study online computation in the recourse model, in which the irrevocability of online decisions is relaxed. In other words, the online algorithm is allowed to go back and change previously made decisions. More precisely, we show how to identify the trade-off between the number of re-optimizations and the performance of online algorithms for the online maximum matching problem. Moreover, we study measures other than competitive analysis for evaluating the performance of online algorithms. We observe that sometimes competitive analysis cannot distinguish the performance of different algorithms due to the worst-case nature of the competitive ratio. We demonstrate that a similar situation arises in the linear search problem. More precisely, we revisit the linear search problem and introduce a measure that can be applied as a refinement of the competitive ratio. Last, we study online computation in the advice model, in which the algorithm receives as input not only a sequence of requests, but also some advice on the request sequence. Specifically, we study a recent model with untrusted advice, in which the advice can be either trusted or untrusted; in the latter case, the advice may be generated by a malicious source. We show how to identify a Pareto-optimal strategy for the online bidding problem in the untrusted advice model.
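
For context on the online bidding problem mentioned in the last sentence: a bidder submits increasing bids until one reaches an unknown target, and pays the sum of all bids submitted. The sketch below shows the classical doubling baseline, which is 4-competitive; it is not the Pareto-optimal strategy of the thesis, only the standard reference point:

```python
def doubling_bids(target: float) -> float:
    """Textbook doubling strategy for online bidding: bid 1, 2, 4, ...
    until a bid reaches the unknown target; the cost paid is the sum of all bids."""
    assert target >= 1
    bid, cost = 1.0, 0.0
    while True:
        cost += bid
        if bid >= target:      # the only feedback is whether the bid met the target
            return cost
        bid *= 2

# The offline optimum is a single bid equal to the target, so the ratio is cost/target.
for t in [1, 3, 7.5, 100, 1e6]:
    print(t, doubling_bids(t) / t)   # always below 4
```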
3

Cayuela, Rafols Marc. „Algorithmic Study on Prediction with Expert Advice : Study of 3 novel paradigms with Grouped Experts“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254344.

Annotation:
The main work for this thesis has been a thorough study of the novel Prediction with Partially Monitored Grouped Expert Advice and Side Information paradigm. This paradigm is newly proposed in this thesis, and it extends the widely studied Prediction with Expert Advice paradigm. The extension is based on two assumptions and one restriction that modify the original problem. The first assumption, Grouped, presumes that the experts are structured into groups. The second assumption, Side Information, introduces additional information that can be used to timely relate predictions with groups. Finally, the restriction, Partially Monitored, imposes that the groups' predictions are only known for one group at a time. The study of this paradigm includes the design of a complete prediction algorithm, the proof of a theoretical bound on the worst-case cumulative regret of that algorithm, and an experimental evaluation of the algorithm (proving the existence of cases where this paradigm outperforms Prediction with Expert Advice). Furthermore, since the development of the algorithm is constructive, it makes it easy to build two additional prediction algorithms for the Prediction with Grouped Expert Advice and the Prediction with Grouped Expert Advice and Side Information paradigms. Therefore, this thesis presents three novel prediction algorithms, with corresponding regret bounds, and a comparative experimental evaluation including the original Prediction with Expert Advice paradigm.
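
As background for the regret bounds mentioned above: the canonical algorithm for the original Prediction with Expert Advice paradigm is the exponentially weighted average forecaster (Hedge), whose cumulative regret grows as O(sqrt(T log n)). The thesis's grouped and partially monitored variants are not reproduced here; this sketch shows only the standard baseline, with toy losses:

```python
import numpy as np

def hedge(loss_matrix: np.ndarray, eta: float = 0.5):
    """Exponentially weighted average forecaster (Hedge) over T rounds.
    loss_matrix[t, i] is the loss of expert i at round t, assumed in [0, 1]."""
    T, n = loss_matrix.shape
    weights = np.ones(n)
    total_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()               # probability of following each expert
        total_loss += float(p @ loss_matrix[t])   # expected loss this round
        weights *= np.exp(-eta * loss_matrix[t])  # exponentially downweight bad experts
    regret = total_loss - loss_matrix.sum(axis=0).min()
    return total_loss, regret

# Toy run: 3 experts, 100 rounds of uniform random losses
rng = np.random.default_rng(0)
print(hedge(rng.random((100, 3))))
```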
4

Henke, Hans-Christian. „Online Advice : Konzeption eines ergebnisbasierten Simulationsansatzes /“. [S.l. : s.n.], 2003. http://www.gbv.de/dms/zbw/362397171.pdf.

5

Furkin, Jennifer D. „MOM TO MOM: ONLINE BREASTFEEDING ADVICE“. UKnowledge, 2018. https://uknowledge.uky.edu/comm_etds/64.

Annotation:
Exploring online support groups has gained more and more popularity in the last decade. Investigating the types of support messages users send each other has broadened the already extensive social support framework built over the last forty years. Mothers use online support for various topics, and a very common topic is breastfeeding. The perception of breastfeeding has changed throughout history with shifting beliefs and societal norms, coupled with solid facts about its importance in sustaining infants. Online breastfeeding support has previously been explored through the categorization of types of support and themes within the interactions. This study extended that work by investigating advice solicitation patterns and the directness of advice in greater depth. Results indicated that informational support was the type most commonly offered in responses to support seekers. Support seekers most often used the requesting-an-opinion-or-information solicitation type when posting to the discussion board. Mothers most commonly offered storytelling in response to posts and embedded advice within the stories.
6

Porter, Noriko. „Japanese and U. S. mother's concerns and experts' advice content analysis of mothers' questions on online message boards and experts' advice in parenting magazines /“. Diss., Columbia, Mo. : University of Missouri-Columbia, 2008. http://hdl.handle.net/10355/5517.

Annotation:
Thesis (Ph. D.)--University of Missouri-Columbia, 2008.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on June 15, 2009). Vita. Includes bibliographical references.
7

Fowler-Dawson, Amy E. „Expand your online reach with these 10 social media tips from the pros: An analysis of online social networking advice“. OpenSIUC, 2016. https://opensiuc.lib.siu.edu/theses/2047.

Annotation:
Researchers have suggested that social networking sites are especially suited to creating two-way communication with audiences, as described by Kent & Taylor's dialogic communication theory. However, researchers have also shown that most organizations fail to actually create this type of dialogue with their followers on SNS. This leads to the question: why are organizations failing to realize this potential? In this study, I consider one possible reason: that organizations are following advice offered online by self-appointed "experts" on SNS strategy, and that advice is not effective. I performed a content analysis of 29 websites that promise easy tips to increase social media engagement, identified by their placement at the top of Google search listings, then tested some of the most common advice from these sites on the Facebook and Twitter pages of a group of state-level advocacy organizations to see whether that advice is effective in increasing engagement or overall reach. I found many sites advising organizations to interact with followers, create engaging content and include visual elements in posts. However, the recommendations were often hedged with limitations, or backed up by unreliable statistics or anecdotal evidence. My own experiment showed that using a call to action increased engagement on Twitter and including a photo increased reach on Facebook, but no other test variable had an effect on impressions, reach or engagement on either site. This suggests that the advice offered online is not reliable, and organizations may fail to create dialogic communication with their followers because they are relying on faulty advice to build their SNS strategies.
8

Barbaro, Billy. „Tuning Hyperparameters for Online Learning“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1522419008006144.

9

Murphy, Nicholas John. „An online learning algorithm for technical trading“. Master's thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/31048.

Annotation:
We use an adversarial expert-based online learning algorithm to learn the optimal parameters required to maximise wealth when trading zero-cost portfolio strategies. The learning algorithm is used to determine the relative population dynamics of technical trading strategies that can survive historical back-testing, as well as to form an overall aggregated portfolio trading strategy from the set of underlying trading strategies, implemented on daily and intraday Johannesburg Stock Exchange data. The resulting population time-series are investigated using unsupervised learning for dimensionality reduction and visualisation. A key contribution is that the overall aggregated trading strategies are tested for statistical arbitrage using a novel hypothesis test proposed by Jarrow et al. [31] on both daily sampled and intraday time-scales. The (low-frequency) daily sampled strategies fail the arbitrage tests after costs, while the (high-frequency) intraday sampled strategies are not falsified as statistical arbitrages after costs. The estimates of trading strategy success, cost of trading and slippage are considered along with an offline benchmark portfolio algorithm for performance comparison. In addition, the algorithm's generalisation error is analysed by recovering a probability of back-test overfitting estimate using a nonparametric procedure introduced by Bailey et al. [19]. The work aims to explore and better understand the interplay between different technical trading strategies from a data-informed perspective.
10

Orlansky, Emily. „Beauty is in the mouth of the beholder advice networks at Haverford College /“. Diss., Connect to the thesis, 2009. http://hdl.handle.net/10066/3707.

11

Hiller, Benjamin [Verfasser]. „Online Optimization: Probabilistic Analysis and Algorithm Engineering / Benjamin Hiller“. München : Verlag Dr. Hut, 2012. http://d-nb.info/1025821319/34.

12

Laflamme, Simon M. Eng Massachusetts Institute of Technology. „Online learning algorithm for structural control using magnetorheological actuators“. Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39271.

Annotation:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2007.
Includes bibliographical references (p. 83-84).
Magnetorheological actuators are promising devices for mitigating vibrations because they only require a fraction of the energy for a performance similar to active control. Conversely, these semi-active devices have limited maximum forces and are hard to model due to the rheological properties of their fluid. When considering structural control, classical theories necessitate full knowledge of the structural dynamic states and properties, most of which can only be estimated when considering large-scale control, which may be difficult or inaccurate for complicated geometries due to the non-linear behaviour of structures. Additionally, most of these theories do not take into account the response delay of the actuators, which may result in structural instabilities. To address the problem, learning algorithms using offline learning have been proposed in order to have the structure learn its behaviour, but they can be perceived as unrealistic because earthquake data can hardly be produced to train these schemes. Here, an algorithm using online learning feedback is proposed to address this problem, where the structure observes, compares and adapts its performance at each time step, analogous to a child learning his or her motor functions. The algorithm uses a machine learning technique, Gaussian kernels, to prescribe forces upon structural states, where states are evaluated strictly based on displacement and acceleration feedback. The algorithm has been simulated and performances assessed by comparing it with two classical control theories: clipped-optimal and passive control. The proposed scheme is found to be stable and performs well in mitigating vibrations for a low energy input, but does not perform as well as the clipped-optimal case. This relative performance would be expected to be better for large-scale structures because of the adaptability of the proposed algorithm.
by Simon Laflamme.
M.Eng.
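
The phrase "Gaussian kernels to prescribe forces upon structural states" describes a radial-basis-function map from measured states to a control force. A minimal sketch of that idea follows; the centers, weights and bandwidth are hypothetical placeholders (in the thesis, the mapping is adapted online from displacement and acceleration feedback):

```python
import numpy as np

def rbf_control_force(state, centers, weights, sigma=1.0):
    """Map a measured state (e.g., [displacement, acceleration]) to a force
    as a weighted sum of Gaussian kernels centered on reference states."""
    sq_dist = np.sum((centers - state) ** 2, axis=1)
    return float(weights @ np.exp(-sq_dist / (2.0 * sigma**2)))

# Hypothetical example: 3 kernel centers in a 2-D state space
centers = np.array([[0.0, 0.0], [0.5, -0.2], [-0.3, 0.4]])
weights = np.array([10.0, -4.0, 2.5])   # would be learned online in the thesis
print(rbf_control_force(np.array([0.1, 0.0]), centers, weights))
```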
13

Bozorgmehr, Pouya. „An efficient online feature extraction algorithm for neural networks“. Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p1470604.

Annotation:
Thesis (M.S.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed January 13, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 61-63).
14

Vitiello, Thomas. „Peeking on the campaign : online Voting Advice Applications : challenges and prospects for electoral studies in the digital era“. Thesis, Paris, Institut d'études politiques, 2018. http://www.theses.fr/2018IEPP0001/document.

Annotation:
Online Voting Advice Applications (VAAs) are websites or online applications that show voters which party or candidate is closest to their own political ideas, based on how they mark their positions on an ample range of policy issues. In addition to providing voters with reliable information in a structured manner, VAAs are an innovative data-collection tool on issue positions and on a wide set of other indicators. The main scope of this dissertation is to use VAA-collected data to learn about online information exposure during campaigns across media systems. Building on the realistic view of the Web's political potential and its impact on the public, this dissertation tests the hypothesis that VAA use by different voter groups (partisan, doubting and undecided voters) varies across media systems. The analyses of VAA-collected data in seven electoral democracies across three different types of media systems (Democratic Corporatist, Liberal, and Polarized Pluralist) show that media systems are key mediators in explaining online information exposure. The second scope of this dissertation is to use VAA-collected data for electoral analysis, in particular to study issue voting and campaign dynamics. Several analyses are carried out using data collected by the French VAA La Boussole présidentielle. This dissertation shows that, despite being non-probabilistic, VAA samples can serve as a very informative tool for the study of political and communication processes during electoral campaigns if integrated within an appropriate research framework and with the use of proper statistical adjustment.
15

Kaplunovich, Petr A. (Petr Alexandrovich). „Efficient algorithm for online N - 2 power grid contingency selection“. Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/88388.

Annotation:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 55-57).
Multiple element outages (N - k contingencies) have caused some of the most massive blackouts and disturbances in the power grid. Such outages affect millions of people and cost the world economy billions of dollars annually. The impact of N - k contingencies is anticipated to grow as the electrical power grid becomes increasingly loaded. As a result, power system operators face the need for advanced techniques to select and mitigate high-order contingencies. This study presents a novel algorithm for fast N - 2 contingency selection to address this problem. The developed algorithm identifies all potentially dangerous contingencies with zero missing rate. The complexity of the algorithm is shown to be of the same order as the complexity of N - 1 contingency selection, which makes it much more efficient than brute-force enumeration. The study first derives the equations describing the set of dangerous N - 2 contingencies in symmetric form and presents an effective way to bound them. The derived bounding technique is then used to develop an iterative pruning algorithm. Next, the performance of the algorithm is validated using various grid cases under different load conditions. The efficiency of the algorithm is shown to be rather promising. For the summer Polish grid case with more than 3500 lines, it manages to reduce the size of the contingency candidate set by a factor of 1000 in just 2 iterations. Finally, the reasons behind the efficiency of the algorithm are discussed and intuition around the connection of its performance to the grid structure is provided.
by Petr A. Kaplunovich.
S.M.
16

Zhang, Xiaoyu. „Effective Search in Online Knowledge Communities: A Genetic Algorithm Approach“. Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/35059.

Annotation:
Online Knowledge Communities, also known as online forums, are popular web-based tools that allow members to seek and share knowledge. Documents answering a variety of questions are associated with the process of knowledge exchange. The social network of members in an Online Knowledge Community is an important factor for improving search precision. However, prior ranking functions do not handle this kind of document using this information. In this study, we try to resolve the problem of finding authoritative documents for a user query within an Online Knowledge Community. Unlike prior ranking functions, which consider either content-based, hyperlink-based, or document-structure-based features, we explored the Online Knowledge Community social network structure and members' social interaction activities to design features that can gauge the two major factors affecting a user's knowledge adoption decision: argument quality and source credibility. We then designed a customized Genetic Algorithm to adjust the weights of the new features we proposed. We compared the performance of our ranking strategy with several baselines on real-world data from www.vbcity.com/forums/. The evaluation results demonstrated that our method could improve user search satisfaction by a noticeable percentage. We conclude that our approach, based on the knowledge adoption model and a Genetic Algorithm, is a better ranking strategy in the Online Knowledge Community.
Master of Science
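
The abstract's core optimization step, a genetic algorithm tuning the weights of ranking features, looks roughly like the loop below. The fitness function here is a hypothetical stand-in (in the study it would score the weighted ranking function on real queries against relevance judgments):

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(weights: np.ndarray) -> float:
    """Hypothetical stand-in: score a weight vector by ranking quality.
    In the study this would evaluate the weighted ranking function
    (argument-quality and source-credibility features) on real queries."""
    target = np.array([0.5, 0.3, 0.1, 0.1])      # pretend-optimal weights
    return -np.sum((weights - target) ** 2)

def genetic_search(n_features=4, pop=30, gens=50, mut=0.1):
    population = rng.random((pop, n_features))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in population])
        parents = population[np.argsort(scores)[-pop // 2:]]   # selection: keep best half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0, mut, n_features)            # mutation
            children.append(np.clip(child, 0, None))
        population = np.vstack([parents, children])
    best = population[np.argmax([fitness(w) for w in population])]
    return best / best.sum()                                   # normalized feature weights

print(genetic_search())
```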
17

Lövgren, Tobias. „Kostråd på internet : En tvärsnittsstudie bland unga vuxna“. Thesis, Högskolan i Gävle, Avdelningen för arbets- och folkhälsovetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-19706.

Annotation:
Aim: The study aims were to identify young adults' sources of nutritional advice on the Internet and how they perceive the credibility of these sources. The study also aimed at exploring young adults' knowledge of the national recommendations issued by the National Food Agency. Method: A web-based survey was distributed on the social media platform Facebook. The questionnaire contained a total of 14 questions regarding online nutritional advice and its credibility. Finally, it asked about the respondent's knowledge, and perceived credibility, of the national nutritional recommendations. The survey quickly gained a large spread, and 302 respondents between 20 and 30 years old took part. Results: The study results showed that 59 percent of the respondents looked for nutritional advice on the Internet, with a larger proportion of women than men; in addition, the women searched more frequently. The main sources of nutritional advice were blogs, whose perceived credibility was relatively high. The primary purpose of searching for nutritional advice on the Internet was weight loss for women and muscle gain for men. The study also showed that 67 percent of the respondents were aware of the National Food Agency's nutritional recommendations, and that the perceived credibility of these recommendations was higher than that of the main online sources of nutritional advice. Conclusion: This study demonstrates that the Internet is a powerful tool in the formation of young adults' identity and in shaping their view of healthy eating habits. It is of significant importance for future public health that authorities understand and find their role in the modern information society.
18

Witney, Cynthia Ann. „Just a “Click” away from evidence-based online breast cancer information, advice and support provided by a specialist nurse: An ethnonetnographic study“. Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2015. https://ro.ecu.edu.au/theses/1679.

Annotation:
Breast cancer has had, and will continue to have, a devastating impact on the lives of many Australian women, their families, friends and the wider community. The concomitant treatment of this disease places a considerable burden on the health care system and on the supporters of the person diagnosed with the disease. While there are many government and non-government organisations that provide treatment and support services for the person with breast cancer, these services are usually provided in person, either in the home or at the organisation's offices. This study extended the information, advice and support aspects of these services to the online realm via the design and development of a breast cancer focused online support community, www.breastcancerclick.com.au, and explored the role of the expert nurse through the employment of a specialist breast care nurse as a member, moderator and health professional within this online community. The study used an ethnonetnographic approach, including online (on the Internet) and offline (face-to-face) methods, to explore the role of the specialist breast care nurse within the online breast cancer support community. The study comprised three phases. Phase One: the offline and online identification of the information, advice and support needs of Western Australian women with breast cancer and their Internet use, and the development of a website designed to meet those needs and to foster the development of an online support community. Phase Two: the employment and introduction of a specialist breast care nurse as a member and provider of evidence-based information, advice and support for online community members. Phase Three: the online and offline collection of data relevant to the role of the specialist breast care nurse within the online support community. The identification of the expert nurse as a linchpin in the patient's care and communication has implications for future nursing practice and curricula as well as for consumers of health care. Recommendations arose from the findings in relation to further research, nursing practice and education; these recommendations indicate an innovative extension to expert nursing practice and, together with elementary guidelines for health professionals developing an illness-specific online support community, foreshadow a future direction for nursing in line with the digital age.
19

Gagliolo, Matteo. „Online Dynamic Algorithm Portfolios: Minimizing the computational cost of problem solving“. Doctoral thesis, Università della Svizzera italiana, Lugano, Switzerland, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/250787.

Annotation:
This thesis presents methods for minimizing the computational effort of problem solving. Rather than looking at a particular algorithm, we consider the issue of computational complexity at a higher level, and propose techniques that, given a set of candidate algorithms of unknown performance, learn to use these algorithms while solving a sequence of problem instances, with the aim of solving all instances in a minimum time. An analogous meta-level approach to problem solving has been adopted in many different fields, with different aims and terminology. A widely accepted term to describe it is algorithm selection. Algorithm portfolios represent a more general framework, in which computation time is allocated to a set of algorithms running on one or more processors. Automating algorithm selection is an old dream of the AI community, which has been brought closer to reality in the last decade. Most available selection techniques are based on a model of algorithm performance, assumed to be available or learned during a separate offline training sequence, which is often prohibitively expensive. The model is used to perform a static allocation of resources, with no feedback from the actual execution of the algorithms. There is a trade-off between the performance of model-based selection and the cost of learning the model. In this thesis, we formulate this trade-off as a bandit problem. We propose GambleTA, a fully dynamic and online algorithm portfolio selection technique with no separate training phase: all candidate algorithms are run in parallel, while a model incrementally learns their runtime distributions. A redundant set of time allocators uses the partially trained model to optimize machine time shares for the algorithms, in order to minimize runtime. A bandit problem solver picks the allocator to use on each instance, gradually increasing the impact of the best time allocators as the model improves. A similar approach is adopted for learning restart strategies online (GambleR). In both cases, the runtime distributions are modeled using survival analysis techniques; unsuccessful runs are correctly treated as censored runtime observations, which saves further computation time. The proposed methods are validated with several experiments, mostly based on data from solver competitions, displaying robust performance in a variety of settings and showing that rough performance models already allow resources to be allocated efficiently, reducing the risk of wasting computation time.
Permanent URL: http://doc.rero.ch/record/20245
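
The sentence "a bandit problem solver picks the allocator to use on each instance" can be illustrated with the simplest such solver. The sketch below uses epsilon-greedy selection over two hypothetical time allocators, with negated runtime as reward; GambleTA's actual bandit solver and allocators are more sophisticated, so this only shows the shape of the selection loop:

```python
import random

def eps_greedy_allocator_choice(allocators, solve_instance, n_instances, eps=0.1):
    """Pick one time allocator per problem instance; reward is the negated
    runtime, so allocators that yield faster solves are chosen more often."""
    totals = {a: 0.0 for a in allocators}   # cumulative reward per allocator
    counts = {a: 0 for a in allocators}
    for _ in range(n_instances):
        if random.random() < eps or all(c == 0 for c in counts.values()):
            choice = random.choice(allocators)          # explore
        else:                                           # exploit best average reward
            choice = max(allocators, key=lambda a: totals[a] / max(counts[a], 1))
        runtime = solve_instance(choice)                # run the portfolio under this allocator
        totals[choice] += -runtime                      # smaller runtime = larger reward
        counts[choice] += 1
    return counts

# Hypothetical demo: the "model-based" allocator yields faster solves on average
speeds = {"uniform": 2.0, "model-based": 1.0}
print(eps_greedy_allocator_choice(
    ["uniform", "model-based"],
    lambda a: random.expovariate(1.0 / speeds[a]),
    n_instances=200))
```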
20

Jankovic, Anja. „Towards Online Landscape-Aware Algorithm Selection in Numerical Black-Box Optimization“. Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS302.

Annotation:
Black-box optimization algorithms (BBOAs) are conceived for settings in which exact problem formulations are non-existent, inaccessible, or too complex for an analytical solution. BBOAs are essentially the only means of finding a good solution to such problems. Due to their general applicability, BBOAs can exhibit different behaviors when optimizing different types of problems. This yields a meta-optimization problem of choosing the best suited algorithm for a particular problem, called the algorithm selection (AS) problem. By reason of inherent human bias and limited expert knowledge, the vision of automating the selection process has quickly gained traction in the community. One prominent way of doing so is via so-called landscape-aware AS, where the choice of the algorithm is based on predicting its performance by means of numerical problem instance representations called features. A key challenge that landscape-aware AS faces is the computational overhead of extracting the features, a step typically designed to precede the actual optimization. In this thesis, we propose a novel trajectory-based landscape-aware AS approach which incorporates the feature extraction step within the optimization process. We show that features computed using the search trajectory samples lead to robust and reliable predictions of algorithm performance, and to powerful algorithm selection models built on top of them. We also present several preparatory analyses, including a novel perspective of combining two complementary regression strategies that outperforms any of the classical single-regression models, amplifying the quality of the final selector.
21

Alon, Alexander Joel Dacara. „The AlgoViz Project: Building an Algorithm Visualization Web Community“. Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/34246.

Annotation:
Algorithm visualizations (AVs) have become a popular teaching aid in classes on algorithms and data structures. The AlgoViz Project attempts to provide an online venue for educators, students, developers, researchers, and other AV users. The Project comprises two websites. The first, the AlgoViz Portal, provides two major informational resources: an AV catalog that provides both descriptive and evaluative metadata of indexed visualizations, and an annotated bibliography of research literature. Both resources have over 500 entries and are actively updated by the AV community. The Portal also provides field reports, discussion forums, and other community-building mechanisms. The second website, OpenAlgoViz, is a SourceForge site intended to showcase exemplary AVs, as well as to provide logistical and hosting support to AV developers.
Master of Science
22

Goemans, Michel X., Maurice Queyranne, Andreas S. Schulz, Martin Skutella and Yaoguang Wang. „Single Machine Scheduling with Release Dates". Massachusetts Institute of Technology, Operations Research Center, 1999. http://hdl.handle.net/1721.1/5211.

Annotation:
We consider the scheduling problem of minimizing the average weighted completion time of n jobs with release dates on a single machine. We first study two linear programming relaxations of the problem, one based on a time-indexed formulation, the other on a completion-time formulation. We show their equivalence by proving that an O(n log n) greedy algorithm leads to optimal solutions to both relaxations. The proof relies on the notion of mean busy times of jobs, a concept which enhances our understanding of these LP relaxations. Based on the greedy solution, we describe two simple randomized approximation algorithms, which are guaranteed to deliver feasible schedules with expected objective value within factors of 1.7451 and 1.6853, respectively, of the optimum. They are based on the concepts of common and independent α-points, respectively. The analysis implies in particular that the worst-case relative error of the LP relaxations is at most 1.6853, and we provide instances showing that it is at least e/(e − 1) ≈ 1.5819. Both algorithms may be derandomized, their deterministic versions running in O(n²) time. The randomized algorithms also apply to the online setting, in which jobs arrive dynamically over time and one must decide which job to process without knowledge of jobs that will be released afterwards.
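
Two terms used in this abstract have compact standard definitions in the α-point scheduling literature, recalled here from general knowledge rather than quoted from the paper. Writing $I_j(t) \in \{0,1\}$ for whether job $j$ with processing time $p_j$ is being processed at time $t$ in the (LP) schedule:

```latex
\[
  M_j = \frac{1}{p_j} \int_0^{\infty} I_j(t)\, t \, dt
  \qquad\text{(mean busy time of job $j$)},
\]
\[
  t_j(\alpha) = \min\Bigl\{\, t \ \Bigm|\ \int_0^{t} I_j(s)\, ds \ge \alpha\, p_j \Bigr\}
  \qquad\text{($\alpha$-point: first time an $\alpha$-fraction of $j$ is done)}.
\]
```

Roughly, "common" α-points schedule jobs in order of $t_j(\alpha)$ for a single random $\alpha$, while "independent" α-points draw a separate $\alpha_j$ for each job.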
23

Boccalini, Gabriele. „An optical sensor for online hematocrit measurement: characterization and fitting algorithm development“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6932/.

Annotation:
The aim of this thesis work is the characterization of an optical sensor for reading hematocrit and the development of the device's calibration algorithm. In other words, using data obtained from an appropriately planned calibration session, the developed algorithm returns the interpolation curve of the data that characterizes the transducer. The main steps of the thesis work are summarized in the following points: 1) Planning of the calibration session needed for data collection, and subsequent construction of a black-box model. Output: the reading from the optical sensor (expressed in mV). Input: the hematocrit value expressed in percentage points (this quantity represents the true blood volume fraction and was obtained with a blood centrifugation device). 2) Development of the algorithm. The algorithm, developed and used offline, returns the regression curve of the data. At a high level, the code can be divided into two main parts: 1. acquisition of the data coming from the sensor and of the operating state of the two-phase pump; 2. normalization of the acquired data with respect to the sensor's reference value, and implementation of the regression algorithm. The data normalization step is a fundamental statistical tool for comparing quantities that are not uniform with one another. Moreover, existing studies show a morphological change of the red blood cell in response to mechanical stress. A further aspect treated in this work concerns the blood flow velocity determined by the pump and how this quantity can influence the hematocrit reading.
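
The calibration step described above (normalize sensor readings against a reference value, then fit a regression curve) can be sketched in a few lines. Everything here is a hypothetical stand-in: the polynomial degree, the reference value and the sample data are illustrative, not the thesis's actual calibration model:

```python
import numpy as np

# Hypothetical calibration data: sensor output (mV) vs. true hematocrit (%)
sensor_mv  = np.array([820.0, 790.0, 755.0, 730.0, 698.0, 671.0])
hematocrit = np.array([20.0,  25.0,  30.0,  35.0,  40.0,  45.0])

reference_mv = 850.0                  # sensor reading at a known reference
x = sensor_mv / reference_mv          # normalization against the reference value

# Fit a low-order polynomial regression curve (degree 2 chosen arbitrarily)
coeffs = np.polyfit(x, hematocrit, deg=2)
calibration_curve = np.poly1d(coeffs)

# Using the curve: convert a new normalized reading into a hematocrit estimate
print(calibration_curve(760.0 / reference_mv))
```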
24

Zubeir, Abdulghani Ismail. „OAP: An efficient online principal component analysis algorithm for streaming EEG data“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392403.

Annotation:
Data processing on streaming data poses computational as well as statistical challenges. Streaming data requires that data processing algorithms be able to process a new data point within microseconds. This is especially challenging for dimension reduction, where traditional methods such as Principal Component Analysis (PCA) require an eigenvector decomposition of a matrix based on the complete dataset. A proper online version of PCA should therefore avoid this computationally involved step in favor of a more efficient update rule. This is implemented by an algorithm named Online Angle Preservation (OAP), which is able to handle large dimensions within the required time limits. This project presents an application of OAP to the case of Electroencephalography (EEG). For this, an interface was coded from an openBCI EEG device, through a Java API, to a streaming environment called Stream Analyzer (sa.engine). The performance of this solution was compared to a standard windowised PCA solution, indicating its competitive performance. This report details the setup and the results.
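
Since OAP itself is the thesis's contribution and is not reproduced here, the sketch below shows the best-known generic online PCA update instead: Oja's rule, which incrementally tracks the leading principal component without any eigendecomposition. It stands in only to illustrate the class of update rules the abstract refers to:

```python
import numpy as np

def oja_leading_component(stream, dim, lr=0.01):
    """Oja's rule: incrementally estimate the first principal component
    of a (zero-mean) data stream, one sample at a time."""
    w = np.random.default_rng(0).normal(size=dim)
    w /= np.linalg.norm(w)
    for x in stream:
        y = w @ x
        w += lr * y * (x - y * w)       # Hebbian step with built-in decay
        w /= np.linalg.norm(w)          # keep the estimate on the unit sphere
    return w

# Toy stream: correlated 2-D samples whose top component is ~[1, 1]/sqrt(2)
rng = np.random.default_rng(1)
samples = (rng.normal(size=(5000, 1)) * np.array([1.0, 1.0])
           + 0.1 * rng.normal(size=(5000, 2)))
print(oja_leading_component(samples, dim=2))
```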
25

Knight, Melissa. „Accelerated Online and Hybrid RN-to-BSN Programs: A Predictive Retention Algorithm“. ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6345.

Annotation:
Predicting retention and time to graduation within accelerated online and hybrid RN-to-BSN programs is a significant element in leveraging the pipeline of qualified RNs with BSN degrees, but the literature lacks significant accounts of retention and time-to-graduation outcomes within these programs, and of predictive algorithm developments to offset high attrition rates. The purpose of this study was to quantitatively examine the relationships of pre-entry attributes, academic integration, and institutional characteristics with retention and time to graduation within accelerated online RN-to-BSN programs, in order to begin developing a global predictive retention algorithm. This study was guided by Tinto's theories of integration and student departure (1975, 1984, 1993) and Rovai's composite persistence model. Retrospective datasets from 390 student academic records were obtained. Findings of this study revealed that pre-entry GPA, number of education credits, enrollment status, 1st and 2nd course grades and GPA index scores, failed course type, size and geographic region, admission GPA standards, prerequisite criteria, academic support and retention methods were statistically significant predictors of retention and timely graduation (p < .05). A decision tree model was built in SPSS Modeler to compare multiple regression and binary logistic regression results, yielding a 96% accuracy rate on retention predictions and 46% on timely graduation predictions. A recommendation for future research is to examine other variables that may be associated with retention and time to graduation, so that the results can be used to redevelop accurate predictive retention models. Having accurate predictive retention models will effect positive social change, because RN-to-BSN students who successfully complete a BSN degree will impact the quality and safety of patient care.
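
The modeling step described above (a decision tree trained on pre-entry and academic-integration features to predict retention) can be sketched with scikit-learn as follows; the feature set, label rule and data here are entirely hypothetical, standing in for the study's 390 student records in SPSS Modeler:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical records: [pre-entry GPA, transfer credits, 1st-course grade, full-time flag]
rng = np.random.default_rng(7)
X = np.column_stack([
    rng.uniform(2.0, 4.0, 500),     # pre-entry GPA
    rng.integers(0, 60, 500),       # prior education credits
    rng.uniform(0.0, 4.0, 500),     # first-course grade points
    rng.integers(0, 2, 500),        # full-time enrollment flag
])
# Hypothetical label: retained when GPA-related signals are strong
y = ((0.5 * X[:, 0] + 0.6 * X[:, 2] + 0.3 * X[:, 3]) > 2.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("retention prediction accuracy:", tree.score(X_te, y_te))
```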
26

Stummer, Gudrun. „A reflexive action research project to investigate the development of an educational public health website with an integrated online advice service“. Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.500798.

27

Vlasenko, Anton. „Developing and Evaluating Web Marking Tools as a Complementary Service for Medical Telephone-Based Advice-Giving“. Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-69498.

Annotation:
This master's thesis reports on potentially useful applications of "the social layer concept", a combination of telephone-based health advice-giving and dynamic marking of shared web pages, with the aim of contributing to the online health counselling domain. An experimental user study was performed to test a web marking tool prototype. The experimental tool was shown to be useful in helping clients focus on relevant health information, and dynamic web marking does provide a useful complementary service to telephone-based advice-giving. It was considered most useful for complex health advice-giving issues.
28

Farghally, Mohammed Fawzi Seddik. „Visualizing Algorithm Analysis Topics“. Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73539.

Annotation:
Data Structures and Algorithms (DSA) courses are critical for any computer science curriculum. DSA courses emphasize concepts related to procedural dynamics and Algorithm Analysis (AA). These concepts are hard for students to grasp when conveyed using traditional textbook material relying on text and static images. Algorithm Visualizations (AVs) emerged as a technique for conveying DSA concepts using interactive visual representations. Historically, AVs have dealt with portraying algorithm dynamics, and the AV developer community has decades of successful experience with this. But there exist few visualizations to present algorithm analysis concepts. This content is typically still conveyed using text and static images. We have devised an approach that we term Algorithm Analysis Visualizations (AAVs), capable of conveying AA concepts visually. In AAVs, analysis is presented as a series of slides where each statement of the explanation is connected to visuals that support the sentence. We developed a pool of AAVs targeting the basic concepts of AA. We also developed AAVs for basic sorting algorithms, providing a concrete depiction about how the running time analysis of these algorithms can be calculated. To evaluate AAVs, we conducted a quasi-experiment across two offerings of CS3114 at Virginia Tech. By analyzing OpenDSA student interaction logs, we found that intervention group students spent significantly more time viewing the material as compared to control group students who used traditional textual content. Intervention group students gave positive feedback regarding the usefulness of AAVs to help them understand the AA concepts presented in the course. In addition, intervention group students demonstrated better performance than control group students on the AA part of the final exam. The final exam taken by both the control and intervention groups was based on a pilot version of the Algorithm Analysis Concept Inventory (AACI) that was developed to target fundamental AA concepts and probe students' misconceptions about these concepts. The pilot AACI was developed using a Delphi process involving a group of DSA instructors, and was shown to be a valid and reliable instrument to gauge students' understanding of the basic AA topics.
Ph. D.
29

Deane, Jason. „Scheduling online advertisements using information retrieval and neural network/genetic algorithm based metaheuristics“. [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0015400.

30

Provatas, Spyridon. „An Online Machine Learning Algorithm for Heat Load Forecasting in District Heating Systems“. Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3475.

Annotation:
Context. Heat load forecasting is an important part of district heating optimization. In particular, energy companies aim at minimizing peak boiler usage, optimizing combined heat and power generation and planning base production. To achieve resource efficiency, the energy companies need to estimate how much energy is required to satisfy the market demand. Objectives. We suggest an online machine learning algorithm for heat load forecasting. Online algorithms are increasingly used due to their computational efficiency and their ability to handle changes of the predictive target variable over time. We extend the implementation of online bagging to make it compatible with regression problems, and we use the Fast Incremental Model Trees with Drift Detection (FIMT-DD) as the base model. Finally, we implement and incorporate into the algorithm a mechanism that handles missing values, measurement errors and outliers. Methods. To conduct our experiments, we use two machine learning software applications, namely Waikato Environment for Knowledge Analysis (WEKA) and Massive Online Analysis (MOA). The predictive ability of the suggested algorithm is evaluated on operational data from a part of the Karlshamn District Heating network. We investigate two approaches for aggregating the data from the nodes of the network. The algorithm is evaluated on 100 runs using the repeated-measures experimental design. A paired t-test is run to test the hypothesis that the choice of approach does not have a significant effect on the predictive error of the algorithm. Results. The presented algorithm forecasts the heat load with a mean absolute percentage error of 4.77%. This means that there is a sufficiently accurate estimation of the actual values of the heat load, which can enable heat suppliers to plan and manage the heat production more effectively. Conclusions. Experimental results show that the presented algorithm can be a viable alternative to state-of-the-art algorithms that are used for heat load forecasting. In addition to its predictive ability, it is memory-efficient and can process data in real time. Robust heat load forecasting is an important part of increased system efficiency within district heating, and the presented algorithm provides a concrete foundation for operational usage of online machine learning algorithms within the domain.
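
The "online bagging" extension mentioned under Objectives builds on Oza and Russell's idea: instead of bootstrap-resampling a fixed dataset, each arriving example is given to each base model k times with k drawn from Poisson(1), which mimics the bootstrap in the streaming limit. A minimal regression-flavored sketch, with a trivial running-mean learner standing in for FIMT-DD:

```python
import math
import random

class RunningMean:
    """Trivial base regressor standing in for FIMT-DD."""
    def __init__(self):
        self.n, self.total = 0, 0.0
    def learn(self, x, y):
        self.n += 1
        self.total += y
    def predict(self, x):
        return self.total / self.n if self.n else 0.0

def poisson(lam=1.0):
    """Knuth's method for Poisson sampling; avoids external dependencies."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

class OnlineBaggingRegressor:
    def __init__(self, n_models=10):
        self.models = [RunningMean() for _ in range(n_models)]
    def learn(self, x, y):
        for m in self.models:
            for _ in range(poisson(1.0)):   # Poisson(1) replaces bootstrap weights
                m.learn(x, y)
    def predict(self, x):
        return sum(m.predict(x) for m in self.models) / len(self.models)

bag = OnlineBaggingRegressor()
for t in range(1000):
    bag.learn(t, 42.0 + random.gauss(0, 1))   # stream of noisy targets
print(bag.predict(None))                       # ~42
```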
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Kamath, Akash S. „An efficient algorithm for caching online analytical processing objects in a distributed environment“. Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174678903.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Withum, David Grant. „Serological testing algorithm for recent HIV 1 seroconversion (STARHS) : standardisation and online application“. Thesis, King's College London (University of London), 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249615.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Stark, Annegret, und Fritz Hoffmann. „Online-Vorbereitungskurse Mathematik und Physik: Fachlich gut gewappnet ins Studium starten“. TUDpress, 2020. https://tud.qucosa.de/id/qucosa%3A74306.

Der volle Inhalt der Quelle
Annotation:
A large number of the subject-related and teacher-training degree programmes at TU Dresden build on foundational knowledge in the natural sciences, which is assumed to be at Abitur level at the start of the degree. Feedback from lecturers as well as students shows, however, that there is a discrepancy between the knowledge taught at school and the knowledge expected at the start of university study. This is also reflected in the reasons for dropping out: on average, 30 percent of students who abandon their studies fail because of high academic demands or missing subject-specific prerequisites (Heublein et al., 2017). To prevent performance problems and to achieve as uniform an entry level as possible, many institutions offer preparatory courses in the subjects that, in experience, cause students the greatest difficulties in the first semesters. TU Dresden, too, has for many years offered subject-related bridging courses in mathematics, physics and chemistry, consisting of lectures and supplementary exercises. However, these measures sometimes fail to reach those they are aimed at. According to the 3rd Saxon Student Survey (Lenz, Winter, Stephan, Herklotz & Gaaw, 2018), only about a third of respondents made use of the study preparation offers. In addition, there are target groups for whom attending the on-site courses is not possible, whether because they are not (yet) at the place of study when the courses run or because they are enrolled in distance learning. This gave rise to the need to develop a dedicated online offering in addition to the face-to-face bridging courses, one that enables first-year students to determine their current level of knowledge and to close any gaps at an individually suitable learning pace, independent of place and time. Providing the preparatory courses on the web allows access to the content regardless of educational background, gender and origin, and without restrictions of place or time. Furthermore, care is taken in course creation to design the offering to be mobile-friendly and low-barrier, in order to ensure accessibility and usability for as many potential users as possible. [From the introduction]
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Nayyar, Krati. „Input Sensitive Analysis of a Minimum Metric Bipartite Matching Algorithm“. Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/86518.

Der volle Inhalt der Quelle
Annotation:
In various business and military settings, there is an expectation of on-demand delivery of supplies and services. Typically, several delivery vehicles (also called servers) carry these supplies. Requests arrive one at a time and when a request arrives, a server is assigned to this request at a cost that is proportional to the distance between the server and the request. Bad assignments will not only lead to larger costs but will also create bottlenecks by increasing delivery time. There is, therefore, a need to design decision-making algorithms that produce cost-effective assignments of servers to requests in real time. In this thesis, we consider the online bipartite matching problem where each server can serve exactly one request. In the online minimum metric bipartite matching problem, we are provided with a set of server locations in a metric space. Requests arrive one at a time and have to be immediately and irrevocably matched to a free server. The total cost of matching all the requests to servers, also known as the online matching, is the sum of the costs of all the edges in the matching. There are many well-studied models for request generation. We study the problem in the adversarial model, where an adversary who knows the decisions made by the algorithm generates a request sequence to maximize the ratio of the cost of the online matching to the minimum-cost matching (also called the competitive ratio). An algorithm is a-competitive if the cost of the online matching is at most 'a' times the minimum cost. A recently discovered robust and deterministic online algorithm (we refer to this as the robust matching or RM-Algorithm) was shown to have optimal competitive ratios in the adversarial model and a relatively weaker random arrival model. We extend the analysis of the RM-Algorithm in the adversarial model and show that the competitive ratio of the algorithm is sensitive to the input, i.e., for "nice" input metric spaces or "nice" server placements, the performance guarantees of the RM-Algorithm are significantly better. In fact, we show that the performance is almost optimal for any fixed metric space and server locations.
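A minimal sketch of the online setting and of how a competitive ratio is computed on one instance, using a plain nearest-free-server greedy rule rather than the RM-Algorithm analyzed in the thesis; the instance is random, not adversarial:

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(1)
    servers = rng.uniform(size=(20, 2))   # fixed server locations in the unit square
    requests = rng.uniform(size=(20, 2))  # requests revealed one at a time

    # online greedy: match each arriving request to the nearest free server
    free = list(range(len(servers)))
    online_cost = 0.0
    for r in requests:
        d = np.linalg.norm(servers[free] - r, axis=1)
        i = int(np.argmin(d))
        online_cost += d[i]
        free.pop(i)                       # the chosen server is no longer free

    # offline optimum: minimum-cost perfect matching on the full distance matrix
    cost = cdist(requests, servers)
    rows, cols = linear_sum_assignment(cost)
    opt_cost = cost[rows, cols].sum()

    print(f"greedy/optimal ratio on this instance: {online_cost / opt_cost:.2f}")

The worst-case competitive ratio is the supremum of this per-instance ratio over all request sequences, which an adversary tries to drive up.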
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Kuß, Julia, Anja Abdel-Haq, Anne Jacob und Theresia Zimmermann. „Entwicklung von Online-Self-Assessments für Studiengänge der Ingenieurwissenschaften an der TU Dresden“. TUDpress, 2020. https://tud.qucosa.de/id/qucosa%3A74310.

Der volle Inhalt der Quelle
Annotation:
An online self-assessment (OSA) for prospective students is a web-based self-evaluation test intended to give future students a realistic assessment of themselves and, building on that, a well-founded choice of degree programme. Such a test comprises tasks and questions (so-called items) that prospective students work through independently. The feedback on the completed test supports prospective students in their choice of study by giving them an assessment of their existing competences, abilities, interests and expectations in relation to the actual requirements, contents and framework conditions of the degree programme they favour. The OSA provides support during the study orientation phase and promotes a deliberate study choice at an early stage. [From the introduction]
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Chen, Jian. „Maintaining Stream Data Distribution Over Sliding Window“. Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-35321.

Der volle Inhalt der Quelle
Annotation:
In modern applications, analyzing order statistics over the most recent parts of high-volume, high-velocity stream data is a major challenge. There are online quantile algorithms that can keep a sketch of the data in the sliding window and answer quantile or rank queries in a very short time, but most of them use the GK algorithm as a subroutine, which is not known to be mergeable. In this paper, we propose another algorithm to keep a sketch that maintains the order statistics over sliding windows. For the fixed-size window, the existing algorithms cannot maintain correctness while the sliding window is being updated. Our algorithm not only maintains correctness but also achieves performance similar to that of the optimal algorithm: under the requirement of maintaining correctness, its insert time and query time are close to the best known results, while the other algorithms cannot maintain correctness. In addition to the fixed-size window algorithm, we also provide a time-based window algorithm in which the window size varies over time. Last but not least, we provide a window aggregation algorithm which helps extend our algorithm to a distributed system.
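For contrast with the sketch-based methods discussed above, an exact baseline can be written in a few lines of Python; it stores the whole window (so it is not a sublinear sketch, which is precisely what the thesis is about avoiding), but it makes the rank and quantile semantics concrete:

    import bisect
    from collections import deque

    class SlidingWindowQuantiles:
        # exact baseline: last `size` items in arrival order plus a sorted
        # copy, so rank/quantile queries cost O(log n)
        def __init__(self, size):
            self.size = size
            self.window = deque()
            self.sorted_items = []
        def insert(self, x):
            if len(self.window) == self.size:          # expire the oldest item
                old = self.window.popleft()
                self.sorted_items.pop(bisect.bisect_left(self.sorted_items, old))
            self.window.append(x)
            bisect.insort(self.sorted_items, x)
        def rank(self, x):                             # number of items <= x
            return bisect.bisect_right(self.sorted_items, x)
        def quantile(self, phi):                       # phi in (0, 1]
            idx = max(0, int(phi * len(self.sorted_items)) - 1)
            return self.sorted_items[idx]

    sw = SlidingWindowQuantiles(size=1000)
    for v in range(10_000):
        sw.insert(v)
    print(sw.quantile(0.5))  # median of the last 1000 items: 9499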
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Wedenberg, Kim, und Alexander Sjöberg. „Online inference of topics : Implementation of the topic model Latent Dirichlet Allocation using an online variational bayes inference algorithm to sort news articles“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-222429.

Der volle Inhalt der Quelle
Annotation:
The client of the project has problems with complex queries and noise when querying their stream of five million news articles per day. This results in much manual work when sorting and pruning the search results of their queries. Instead of using direct text matching, the approach of the project was to use a topic model to describe articles in terms of the topics covered and to use this new information to sort the articles. An online version of the topic model Latent Dirichlet Allocation was implemented using online variational Bayes inference to handle streamed data. Using 100 dimensions, topics such as sports and politics emerged during training on a simulated stream of 1.7 million articles. These topics were used to sort articles based on context. The implementation was found accurate enough to be useful for the client as well as fast and stable enough to be a feasible solution to the problem.
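A hedged sketch of the same idea using scikit-learn's online (mini-batch) variational Bayes implementation of LDA; the toy corpus and the topic count of 10 are placeholders (the thesis worked on a 1.7-million-article stream with 100 dimensions):

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    # toy stand-in for the news stream
    articles = ["stocks rally after strong earnings report",
                "team wins the cup after extra time",
                "parliament debates the new budget bill",
                "striker scores twice in derby match"] * 50

    # fixed vocabulary so mini-batches from the stream share one feature space
    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(articles)

    lda = LatentDirichletAllocation(n_components=10,
                                    learning_method="online",
                                    batch_size=64)
    for start in range(0, X.shape[0], 64):       # consume the stream in chunks
        lda.partial_fit(X[start:start + 64])

    # topic mixture of a new article, usable for sorting by dominant topic
    mix = lda.transform(vectorizer.transform(["goalkeeper saves penalty in final"]))
    print(mix.argmax())

In a production setting the vocabulary would be fitted on a seed sample and then frozen, since partial_fit requires a stable feature space across batches.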
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Jaradat, Shatha. „OLLDA: Dynamic and Scalable Topic Modelling for Twitter : AN ONLINE SUPERVISED LATENT DIRICHLET ALLOCATION ALGORITHM“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177535.

Der volle Inhalt der Quelle
Annotation:
Providing high-quality topic inference in today's large and dynamic corpora, such as Twitter, is a challenging task. This is especially challenging considering that the content in this environment consists of short texts and many abbreviations. This project proposes an improvement of a popular online topic modelling algorithm for Latent Dirichlet Allocation (LDA), incorporating supervision to make it suitable for the Twitter context. This improvement is motivated by the need for a single algorithm that achieves both objectives: analyzing huge amounts of documents, including new documents arriving in a stream, and, at the same time, achieving high-quality topic detection in special-case environments such as Twitter. The proposed algorithm is a combination of an online algorithm for LDA and a supervised variant of LDA - labeled LDA. The performance and quality of the proposed algorithm are compared with these two algorithms. The results demonstrate that the proposed algorithm shows better performance and quality when compared to the supervised variant of LDA, and it achieves better results in terms of quality in comparison to the online algorithm. These improvements make our algorithm an attractive option when applied to dynamic environments like Twitter. An environment for analyzing and labelling data was designed to prepare the dataset before executing the experiments. Possible application areas for the proposed algorithm are tweet recommendation and trend detection.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Breakiron, Daniel Aubrey. „Evaluating the Integration of Online, Interactive Tutorials into a Data Structures and Algorithms Course“. Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23107.

Der volle Inhalt der Quelle
Annotation:
OpenDSA is a collection of open source tutorials for teaching data structures and algorithms. It was created with the goals of visualizing complex, abstract topics; increasing the amount of practice material available to students; and providing immediate feedback and incremental assessment. In this thesis, I first describe aspects of the OpenDSA architecture relevant to collecting user interaction data. I then present an analysis of the interaction log data gathered from three classes during Spring 2013. The analysis focuses on determining the time distribution of student activity, determining the time required for assignment completion, and exploring "credit-seeking" behaviors and behavior related to non-required exercises. We identified clusters of students based on when they completed exercises, verified the reliability of estimated time requirements for exercises, provided evidence that a majority of students do not read the text, discovered a measurement that could be used to identify exercises that require additional development, and found evidence that students complete exercises after obtaining credit. Furthermore, we determined that slideshow usage was fairly high (even when credit was not offered), and that skipping to the end of slideshows was more common when credit was offered, although it also occurred when it was not.
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Kunwar, Rituraj. „Incremental / Online Learning and its Application to Handwritten Character Recognition“. Thesis, Griffith University, 2017. http://hdl.handle.net/10072/366964.

Der volle Inhalt der Quelle
Annotation:
In real-world scenarios where we use machine learning algorithms, we often have to deal with cases where the input data changes its nature with time. In order to maintain the accuracy of the learning algorithm, we frequently have to retrain our learning system, which makes the system inconvenient and unreliable. This problem can be solved by using learning algorithms which can learn continuously with time (incremental/online learning). Another common problem of real-world learning scenarios is that acquiring large amounts of labeled data is expensive and time consuming. Semi-supervised learning is the machine learning paradigm concerned with utilizing unlabeled data to improve the precision of a classifier or regressor. Unlabeled data is a powerful and easily available resource and it should be utilized to build an accurate learning system. It has often been observed that there is a vast amount of redundancy in any huge, real-time database, and it is not advisable to process every redundant sample to gain the same (already acquired) knowledge. Active learning is the learning setting which can handle this issue. Therefore, in this research we propose an online semi-supervised learning framework which can learn actively. We propose an "online semi-supervised Random Naive Bayes (RNB)" classifier which, as the name implies, can learn in an online manner and make use of both labeled and unlabeled data. In order to boost accuracy we improved the network structure of NB (using a Bayes net) to propose an Augmented Naive Bayes (ANB) classifier, and achieved a substantial jump in accuracy. In order to reduce the processing of redundant data and achieve faster convergence of learning, we propose to conduct the incremental semi-supervised learning in an active manner. We applied the proposed methods to the Tamil script handwritten character recognition problem and obtained favorable results. Experimental results show that our proposed online classifier does as well as, and sometimes better than, its batch-learning counterpart, and that active learning helps to achieve learning convergence with far fewer samples.
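A minimal sketch of the online semi-supervised idea, assuming a generic Gaussian Naive Bayes with a confidence gate in place of the thesis's Random/Augmented Naive Bayes and Tamil character features; data and thresholds are illustrative:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(2)
    clf = GaussianNB()

    # seed the incremental model with a small labeled batch
    X0 = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
    y0 = np.array([0] * 20 + [1] * 20)
    clf.partial_fit(X0, y0, classes=[0, 1])

    # stream of unlabeled points: self-train only on confident predictions,
    # an active-style filter that also skips redundant easy samples cheaply
    for _ in range(2000):
        true_class = rng.integers(2)
        x = rng.normal(3 * true_class, 1, (1, 2))
        proba = clf.predict_proba(x)[0]
        if proba.max() > 0.95:
            clf.partial_fit(x, [int(proba.argmax())])

    print(clf.predict([[0, 0], [3, 3]]))   # expect [0 1]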
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Harrington, Edward. „Aspects of Online Learning“. The Australian National University. Research School of Information Sciences and Engineering, 2004. http://thesis.anu.edu.au./public/adt-ANU20060328.160810.

Der volle Inhalt der Quelle
Annotation:
Online learning algorithms have several key advantages compared to their batch learning counterparts: they are generally more memory efficient and computationally more efficient; they are simpler to implement; and they are able to adapt to changes where the learning model is time varying. Because of their simplicity, online algorithms are very appealing to practitioners. This thesis investigates several online learning algorithms and their application. The thesis has an underlying theme of combining several simple algorithms to give better performance. In this thesis we investigate: combining weights, combining hypotheses, and (a form of) hierarchical combining.

Firstly, we propose a new online variant of the Bayes point machine (BPM), called the online Bayes point machine (OBPM). We study the theoretical and empirical performance of the OBPM algorithm. We show that the empirical performance of the OBPM algorithm is comparable with other large-margin classifier methods such as the approximately large margin algorithm (ALMA) and methods which maximise the margin explicitly, like the support vector machine (SVM). The OBPM algorithm, when used with a parallel architecture, offers potential computational savings compared to ALMA. We compare the test error performance of the OBPM algorithm with other online algorithms: the Perceptron, the voted-Perceptron, and Bagging. We demonstrate that the combination of the voted-Perceptron algorithm and the OBPM algorithm, called the voted-OBPM algorithm, has better test error performance than the voted-Perceptron and Bagging algorithms. We investigate the use of various online voting methods for the problem of ranking and the problem of collaborative filtering of instances. We look at the application of the online Bagging and OBPM algorithms to the telecommunications problem of channel equalization. We show that both online methods were successful at reducing the effect of label flipping and additive noise on the test error.

Secondly, we introduce a new mixture-of-experts algorithm, the fixed-share hierarchy (FSH) algorithm. The FSH algorithm is able to track the mixture of experts when the switching rate between the best experts may not be constant. We study the theoretical aspects of the FSH algorithm and its practical application to adaptive equalization. Using simulations we show that the FSH algorithm is able to track the best expert, or mixture of experts, both when the switching rate is constant and when it is time varying.
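The fixed-share forecaster underlying the FSH algorithm admits a compact sketch; the following shows the base Herbster-Warmuth update (exponential-weights loss step plus weight sharing), not the hierarchical variant proposed in the thesis, and the learning rates are illustrative:

    import numpy as np

    def fixed_share_update(w, losses, eta=0.5, alpha=0.05):
        # loss step: downweight experts in proportion to their loss
        v = w * np.exp(-eta * losses)
        pool = v.sum()
        # sharing step: each expert donates a fraction alpha of its weight,
        # redistributed evenly, so past losers can be re-adopted quickly
        w_new = (1 - alpha) * v + alpha * (pool - v) / (len(v) - 1)
        return w_new / w_new.sum()

    w = np.full(4, 0.25)
    for t in range(100):
        best = 0 if t < 50 else 2          # the best expert switches mid-stream
        losses = np.ones(4)
        losses[best] = 0.0
        w = fixed_share_update(w, losses)
    print(np.round(w, 3))                  # most weight has migrated to expert 2

Without the sharing step, an expert whose weight has collapsed can never recover quickly, which is why plain exponential weights fails to track switching experts.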
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Luo, Lingzhi. „Distributed Algorithm Design for Constrained Multi-robot Task Assignment“. Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/426.

Der volle Inhalt der Quelle
Annotation:
The task assignment problem is one of the fundamental combinatorial optimization problems. It has been extensively studied in operations research, management science, computer science and robotics. Task assignment problems arise in various applications of multi-robot systems (MRS), such as environmental monitoring, disaster response, extraterrestrial exploration, sensing data collection and collaborative autonomous manufacturing. In these MRS applications, there are realistic constraints on robots and tasks that must be taken into account both from the modeling perspective and the algorithmic perspective. From the modeling aspect, such constraints include (a) task group constraints: tasks form disjoint groups and each robot can be assigned to at most one task in each group. One example of the group constraints comes from tightly-coupled tasks, where multiple micro tasks form one tightly-coupled macro task and multiple robots must perform them simultaneously. (b) Task deadline constraints: tasks must be assigned so as to meet their deadlines. (c) Dynamically-arising tasks: tasks arrive dynamically and the payoffs of future tasks are unknown. Such tasks arise in scenarios like search and rescue, where new victims are found dynamically. (d) Robot budget constraints: the number of tasks each robot can perform is bounded according to the resource it possesses (e.g., energy). From the solution aspect, there is often a need for decentralized solutions that are implemented on individual robots, especially when no powerful centralized controller exists or when the system needs to avoid single-point failure or be adaptive to environmental changes. Most existing algorithms either do not consider the above constraints in problem modeling, are centralized or do not provide formal performance guarantees. In this thesis, I propose methods to address these issues for two classes of problems, namely, the constrained linear assignment problem and the constrained generalized assignment problem. The constrained linear assignment problem belongs to P, while the constrained generalized assignment problem is NP-hard. I develop decomposition-based distributed auction algorithms with performance guarantees for both problem classes. The multi-robot assignment problem is decomposed into an optimization problem for each robot, and each robot iteratively solving its own optimization problem leads to a provably good solution to the overall problem. For the constrained linear assignment problem, my approach provides an almost optimal solution. For the constrained generalized assignment problem, I present a distributed algorithm that provides a solution within a constant factor of the optimal solution. I also study the online version of the task allocation problem with task group constraints. For the online problem, I prove that a repeated greedy version of my algorithm gives a solution with a constant-factor competitive ratio. I include simulation results to evaluate the average-case performance of the proposed algorithms. I also include results on multi-robot cooperative package transport to illustrate the approach.
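The flavor of a decomposition-based auction can be conveyed by a sketch of the classical unconstrained forward auction (Bertsekas); the thesis's algorithms additionally handle group, deadline and budget constraints, which are omitted here:

    import numpy as np

    def auction_assignment(payoff, eps=0.01):
        # each unassigned robot bids its marginal gain plus eps for its best
        # task; repeated local bidding converges to a near-optimal assignment
        n = payoff.shape[0]
        prices = np.zeros(n)
        owner = [-1] * n                    # owner[j] = robot holding task j
        assigned = [-1] * n                 # assigned[i] = task held by robot i
        unassigned = list(range(n))
        while unassigned:
            i = unassigned.pop()
            net = payoff[i] - prices        # net value of each task for robot i
            j = int(np.argmax(net))
            second = np.partition(net, -2)[-2]
            prices[j] += net[j] - second + eps   # raise the price by the bid
            if owner[j] != -1:              # evict the previous owner
                assigned[owner[j]] = -1
                unassigned.append(owner[j])
            owner[j], assigned[i] = i, j
        return assigned

    payoff = np.array([[10, 2, 1], [2, 8, 3], [1, 3, 9]], float)
    print(auction_assignment(payoff))       # [0, 1, 2]

The price mechanism is what makes the method distributable: robots only need the current prices, not each other's internal state.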
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Alim, Sophia. „Vulnerability in online social network profiles : a framework for measuring consequences of information disclosure in online social networks“. Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5507.

Der volle Inhalt der Quelle
Annotation:
The increase in online social network (OSN) usage has led to personal details known as attributes being readily displayed in OSN profiles. This can leave profile owners vulnerable to privacy and social engineering attacks, including identity theft, stalking and re-identification by linking. Due to the need to address privacy in OSNs, this thesis presents a framework to quantify the vulnerability of a user's OSN profile. Vulnerability is defined as the likelihood that the personal details displayed on an OSN profile will spread due to the actions of the profile owner and their friends with regard to information disclosure. The vulnerability measure consists of three components. The individual vulnerability is calculated by allocating weights to the disclosed profile attribute values and neighbourhood features which may contribute towards the personal vulnerability of the profile user. The relative vulnerability is the collective vulnerability of the profile's friends. The absolute vulnerability is the overall profile vulnerability, which considers the individual and relative vulnerabilities. The first part of the framework details a data retrieval approach to extract MySpace profile data to test the vulnerability algorithm on real cases. The profile structure presented significant extraction problems because of the dynamic nature of the OSN. Issues concerning the usability of a standard dataset, including ethical concerns, are discussed. Application of the vulnerability measure on extracted data emphasised how so-called 'private profiles' are not immune to vulnerability issues, because some profile details can still be displayed on private profiles. The second part of the framework presents the normalisation of the measure in the context of a formal approach, which includes the development of axioms and validation of the measure on a larger dataset of profiles. The axioms highlight that changes in the presented list of profile attributes, and in the attributes' weights in making the profile vulnerable, affect the individual vulnerability of a profile. Validation of the measure showed that vulnerability involving OSN profiles does occur, and this provides a good basis for other researchers to build on the measure further. The novelty of this vulnerability measure is that it takes into account not just the attributes presented on each individual profile but also features of the profile's neighbourhood.
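A toy sketch of how such a measure could be composed; the attribute weights and the combination rule below are hypothetical placeholders, not the weighting scheme derived in the thesis:

    # hypothetical attribute weights; the thesis derives its own scheme
    ATTRIBUTE_WEIGHTS = {"full_name": 0.20, "birth_date": 0.25,
                         "home_town": 0.15, "employer": 0.10, "photo": 0.05}

    def individual_vulnerability(disclosed):
        # weighted share of risky attributes a profile actually displays
        return sum(w for attr, w in ATTRIBUTE_WEIGHTS.items() if attr in disclosed)

    def absolute_vulnerability(profile, friends, beta=0.5):
        # combine the profile's own score with the mean score of its friends
        # (the 'relative' component); beta balances the two parts
        own = individual_vulnerability(profile)
        relative = (sum(individual_vulnerability(f) for f in friends) / len(friends)
                    if friends else 0.0)
        return (1 - beta) * own + beta * relative

    me = {"full_name", "birth_date", "photo"}
    friends = [{"full_name", "home_town"}, {"employer", "birth_date", "photo"}]
    print(round(absolute_vulnerability(me, friends), 3))   # 0.438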
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Mahajan, Rutvij Sanjay. „Empirical Analysis of Algorithms for the k-Server and Online Bipartite Matching Problems“. Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/96725.

Der volle Inhalt der Quelle
Annotation:
The k-server problem is of significant importance to the theoretical computer science and the operations research communities. In this problem, we are given k servers, their initial locations and a sequence of n requests that arrive one at a time. All these locations are points from some metric space and the cost of serving a request is given by the distance between the location of the request and the current location of the server selected to process the request. We must immediately process the request by moving a server to the request location. The objective in this problem is to minimize the total distance traveled by the servers to process all the requests. In this thesis, we present an empirical analysis of a new online algorithm for the k-server problem. This algorithm maintains two solutions, an online solution and an approximately optimal offline solution. When a request arrives, we update the offline solution and use this update to inform the online assignment. This algorithm is motivated by the Robust-Matching Algorithm [RM-Algorithm, Raghvendra, APPROX 2016] for the closely related online bipartite matching problem. We then give a comprehensive experimental analysis of this algorithm and also provide a graphical user interface which can be used to visualize execution instances of the algorithm. We also consider these problems under a stochastic setting and implement a lookahead strategy on top of the new online algorithm.
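For orientation, the online k-server setting can be set up in a few lines; this sketch uses naive greedy (known to be non-competitive in general) on a line metric, not the RM-based algorithm evaluated in the thesis:

    import random

    random.seed(3)
    servers = [random.random() for _ in range(5)]   # k = 5 positions on a line
    total = 0.0
    for _ in range(200):                            # requests arrive online
        r = random.random()
        # greedy: always move the closest server; an adversary can exploit this
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total += abs(servers[i] - r)
        servers[i] = r
    print(f"total movement cost: {total:.2f}")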
MS
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Botha, Marlene. „Online traffic engineering for MPLS networks“. Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50049.

Der volle Inhalt der Quelle
Annotation:
Thesis (MSc) -- Stellenbosch University, 2004.
ENGLISH ABSTRACT: The Internet is fast evolving into a commercial platform that carries a mixture of narrow- and broadband applications such as voice, video, and data. Users expect a certain level of guaranteed service from their service providers and consequently the need exists for efficient Internet traffic engineering to enable better Quality of Service (QoS) capabilities. Multi-protocol Label Switching (MPLS) is a label switching protocol that has emerged as an enabling technology to achieve efficient traffic engineering for QoS management in IP networks. The ability of the MPLS protocol to create explicit virtual connections called Label Switched Paths (LSPs) to carry network traffic significantly enhances the traffic engineering capabilities of communication networks. The MPLS protocol supports two options for explicit LSP selection: offline LSP computation using an optimization method, and dynamic route selection where a single node makes use of currently available network state information in order to compute an explicit LSP online. This thesis investigates various methods for the selection of explicit bandwidth-guaranteed LSPs through dynamic route selection. We address the problem of computing a sequence of optimal LSPs where each LSP can carry a specific traffic demand, and we assume that no prior information regarding the future traffic demands is available and that the arrival sequence of LSP requests to the network is unknown. Furthermore, we investigate the rerouting abilities of the online LSP selection methods to perform MPLS failure restoration upon link failure. We propose a new online routing framework known as Least Interference Optimization (LIO) that utilizes the current bandwidth availability and traffic flow distribution to achieve efficient traffic engineering. We present the Least Interference Optimization Algorithm (LIOA) that reduces the interference among competing network flows by balancing the number and quantity of flows carried by a link for the setup of bandwidth-guaranteed LSPs in MPLS networks. The LIOA routing strategy is evaluated and compared against well-known routing strategies such as the Minimum Hop Algorithm (MHA), Minimum Interference Routing Algorithm (MIRA), Open Shortest Path First (OSPF) and Constraint Shortest Path First (CSPF) by means of simulation. Simulation results revealed that, for the network topologies under consideration, the routing strategies that employed dynamic network state information in their routing decisions (LIOA, CSPF and MIRA) generally outperformed the routing strategies that rely only on static network information (OSPF and MHA). In most simulation experiments the best performance was achieved by the LIOA routing strategy while the MHA performed the worst. Furthermore, we observed that the computational complexity of the MIRA routing strategy does not translate into equivalent performance gains. We employed the online routing strategies for MPLS failure recovery upon link failure. In particular, we investigated two aspects to determine the efficiency of the routing strategies for MPLS rerouting: the suitability of the LSP configuration that results from the establishment of LSPs prior to link failure, and the ability of the online routing strategy to reroute failed LSPs upon link failure. Simulation results revealed similar rerouting performance for all online routing strategies under investigation, but the LSP configuration most suitable for online rerouting was observed for the LIOA routing strategy.
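A sketch of the CSPF baseline against which LIOA is compared: prune links with insufficient residual bandwidth, route on what remains, then reserve. The LIOA interference balancing itself is not reproduced; the graph, costs and residuals are illustrative:

    import networkx as nx

    def cspf_route(g, src, dst, demand):
        # constrained shortest path: drop infeasible links, then run a
        # plain shortest-path computation on the pruned topology
        feasible = nx.Graph((u, v, d) for u, v, d in g.edges(data=True)
                            if d["residual"] >= demand)
        return nx.shortest_path(feasible, src, dst, weight="cost")

    g = nx.Graph()
    g.add_edge("A", "B", cost=1, residual=30)
    g.add_edge("B", "D", cost=1, residual=30)
    g.add_edge("A", "C", cost=2, residual=100)
    g.add_edge("C", "D", cost=2, residual=100)

    path = cspf_route(g, "A", "D", demand=50)   # forced onto A-C-D
    for u, v in zip(path, path[1:]):            # reserve bandwidth for the LSP
        g[u][v]["residual"] -= 50
    print(path)

An interference-aware strategy such as LIOA would additionally bias the edge costs by current load, so that flows competing for the same scarce links are spread apart.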
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Morimoto, Naoyuki. „Design and Analysis of Algorithms for Graph Exploration and Resource Allocation Problems and Their Application to Energy Management“. 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/189687.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Campanini, Alessandro. „Online Parameters Estimation in Battery Systems for EV and PHEV Applications“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Den vollen Inhalt der Quelle finden
Annotation:
The main target of this thesis is to assess whether two of the most advanced algorithms are able to perform online parameter estimation. Starting from a current profile generated by a real driving cycle and applied to an Electric Circuit Model (ECM) with known parameters, a voltage profile is generated. Then, the Extended Kalman Filter (EKF) and the Varied-Parameters Approach (VPA) are employed both on the known system and on a real battery cell profile with unknown parameters. The research has led to the result that even if the two algorithms present opposite characteristics in terms of accuracy and computational effort, there are some common results. Convergence and accuracy are strictly dependent on the prior knowledge of the ECM parameter curves and on the hypotheses made to simplify the model, such as variable dependences, circuit complexity, etc. Consequently, when applying the algorithms to a known system, perfect correspondence between estimated and real parameters is found, whereas when they are applied to an unknown system convergence is not reached. For future research it is therefore recommended to introduce temperature, current and aging dependence in the system model, as well as to generate voltage profiles from more complex ECMs and to perform simulations with the same ECM used in this thesis.
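A minimal numpy sketch of the EKF half of the study, assuming a deliberately simplified zeroth-order model v = OCV - R0·i with the two parameters treated as a random-walk state; the thesis's ECM and noise settings are richer than this:

    import numpy as np

    rng = np.random.default_rng(4)
    x = np.array([3.5, 0.05])             # state estimate: [OCV, R0]
    P = np.eye(2) * 0.1                   # state covariance
    Q = np.eye(2) * 1e-7                  # process noise: slow parameter drift
    R = 1e-4                              # measurement noise variance

    true_ocv, true_r0 = 3.7, 0.02
    for _ in range(500):
        i_k = rng.uniform(0, 5)           # applied current from a driving cycle
        v_k = true_ocv - true_r0 * i_k + rng.normal(0, 0.01)
        # predict: random-walk parameters, so x is unchanged and P grows
        P = P + Q
        # update: measurement model v = OCV - R0*i, Jacobian H = [1, -i]
        H = np.array([1.0, -i_k])
        S = H @ P @ H + R
        K = P @ H / S
        x = x + K * (v_k - (x[0] - x[1] * i_k))
        P = (np.eye(2) - np.outer(K, H)) @ P

    print(np.round(x, 3))                 # approaches [3.7, 0.02]

Identifiability here depends on the current actually varying; a constant current would make OCV and R0 indistinguishable, which mirrors the thesis's point about convergence depending on the excitation and the model hypotheses.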
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Raykhel, Ilya Igorevitch. „Real-Time Automatic Price Prediction for eBay Online Trading“. BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1631.

Der volle Inhalt der Quelle
Annotation:
While Machine Learning is one of the most popular research areas in Computer Science, there are still only a few deployed applications intended for use by the general public. We have developed an exemplary application that can be directly applied to eBay trading. Our system predicts how much an item would sell for on eBay based on that item's attributes. We ran our experiments on the eBay laptop category, with prior trades used as training data. The system implements a feature-weighted k-Nearest Neighbor algorithm, using genetic algorithms to determine feature weights. Our results demonstrate an average prediction error of 16%; we have also shown that this application greatly reduces the time a reseller would need to spend on trading activities, since the bulk of market research is now done automatically with the help of the learned model.
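A sketch of the core predictor, assuming fixed hand-set feature weights where the thesis uses genetic algorithms to learn them; the listing data is synthetic:

    import numpy as np

    def weighted_knn_price(train_X, train_y, weights, query, k=5):
        # feature-weighted k-NN regression: distance is scaled per feature,
        # and the prediction is the mean price of the k nearest listings
        d = np.sqrt(((train_X - query) ** 2 * weights).sum(axis=1))
        nearest = np.argsort(d)[:k]
        return train_y[nearest].mean()

    rng = np.random.default_rng(5)
    # toy laptop listings: [cpu GHz, ram GB, age years] -> sale price
    X = np.column_stack([rng.uniform(1, 4, 200),
                         rng.choice([4, 8, 16], 200),
                         rng.uniform(0, 6, 200)])
    y = 120 * X[:, 0] + 25 * X[:, 1] - 40 * X[:, 2] + rng.normal(0, 20, 200)

    w = np.array([4.0, 1.0, 2.0])          # hypothetical learned feature weights
    print(round(weighted_knn_price(X, y, w, np.array([2.5, 8.0, 1.0])), 2))

A genetic algorithm would search over the weight vector w, scoring each candidate by its prediction error on held-out trades, which is how the feature weights in the thesis are obtained.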
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Holmgren, Faghihi Josef, und Paul Gorgis. „Time efficiency and mistake rates for online learning algorithms : A comparison between Online Gradient Descent and Second Order Perceptron algorithm and their performance on two different data sets“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260087.

Der volle Inhalt der Quelle
Annotation:
This dissertation investigates the differences between two online learning algorithms, Online Gradient Descent (OGD) and the Second-Order Perceptron (SOP) algorithm, and how well they perform on different data sets in terms of mistake rate, time cost and number of updates. Studying different online learning algorithms and how they perform in different environments helps us understand and develop new strategies for further online learning tasks. The study includes two data sets, Pima Indians Diabetes and Mushroom, together with the LIBOL library for testing. The results in this dissertation show that Online Gradient Descent performs better overall on the tested data sets. On the first data set, Online Gradient Descent recorded a notably lower mistake rate. On the second data set, although it recorded a slightly higher mistake rate, the algorithm was remarkably more time efficient compared to the Second-Order Perceptron. Future work would include a wider range of testing with more, and different, data sets as well as other related algorithms. This would lead to better results and higher credibility.
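Of the two algorithms compared, OGD is the simpler to sketch; a minimal mistake-counting implementation on the logistic loss follows (the LIBOL evaluation loop is approximated here, not reproduced):

    import numpy as np

    def ogd_logistic(stream, n_features, eta=0.1):
        # Online Gradient Descent: one gradient step per example, counting
        # prediction mistakes in the LIBOL style
        w = np.zeros(n_features)
        mistakes = 0
        for x, y in stream:                # labels y in {-1, +1}
            if y * (w @ x) <= 0:
                mistakes += 1
            # gradient of log(1 + exp(-y * w.x))
            grad = -y * x / (1.0 + np.exp(y * (w @ x)))
            w -= eta * grad
        return w, mistakes

    rng = np.random.default_rng(6)
    w_true = rng.normal(size=10)
    stream = []
    for _ in range(5000):
        x = rng.normal(size=10)
        stream.append((x, 1 if w_true @ x > 0 else -1))

    w, mistakes = ogd_logistic(stream, 10)
    print(f"mistake rate: {mistakes / len(stream):.3f}")

SOP, by contrast, maintains a second-order correlation matrix of the seen instances, which is what buys it a lower mistake rate on some data at a higher per-update time cost, matching the trade-off reported above.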
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Holm, Raven R. „Natural language processing of online propaganda as a means of passively monitoring an adversarial ideology“. Thesis, Monterey, California: Naval Postgraduate School, 2017. http://hdl.handle.net/10945/52993.

Der volle Inhalt der Quelle
Annotation:
Approved for public release; distribution is unlimited
Reissued 30 May 2017 with Second Reader’s non-NPS affiliation added to title page.
Online propaganda embodies a potent new form of warfare, one that extends the strategic reach of our adversaries and overwhelms analysts. Foreign organizations have effectively leveraged an online presence to influence elections and recruit at a distance. The Islamic State has also shown proficiency in outsourcing violence, proving that propaganda can enable an organization to wage physical war at very little cost and without the resources traditionally required. To augment new counter-foreign-propaganda initiatives, this thesis presents a pipeline for defining, detecting and monitoring ideology in text. A corpus of 3,049 modern online texts was assembled and two classifiers were created: one for detecting authorship and another for detecting ideology. The classifiers demonstrated 92.70% recall and 95.84% precision in detecting authorship, and detected ideological content with 76.53% recall and 95.61% precision. Both classifiers were combined to simulate how an ideology can be detected and how its composition could be passively monitored across time. Implementation of such a system could conserve manpower in the intelligence community and add a new dimension to analysis. Although this pipeline makes assumptions about the quality and integrity of its input, it is a novel contribution to the fields of Natural Language Processing and Information Warfare.
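A hedged sketch of a text-classification pipeline of the kind the thesis describes, with a toy stand-in corpus; the actual features, models and 3,049-text corpus of the thesis are not reproduced here:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import precision_score, recall_score

    # toy stand-in corpus; labels: 1 = ideological content, 0 = neutral
    texts = ["join our cause and fight", "today the weather is mild",
             "the struggle demands sacrifice", "new cafe opens downtown"] * 25
    labels = [1, 0, 1, 0] * 25

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    pred = cross_val_predict(clf, texts, labels, cv=5)
    print(precision_score(labels, pred), recall_score(labels, pred))

Cross-validated precision and recall, as printed here, are the same quantities the thesis reports for its authorship and ideology detectors.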
Lieutenant, United States Coast Guard
APA, Harvard, Vancouver, ISO und andere Zitierweisen