
Dissertations / Theses on the topic 'Information filtering'

Consult the top 50 dissertations / theses for your research on the topic 'Information filtering.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Chambers, Brian D. "Adaptive Bayesian information filtering." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0007/MQ45945.pdf.

2

Webster, David Edward. "Realising context-oriented information filtering." Thesis, University of Hull, 2010. http://hydra.hull.ac.uk/resources/hull:2724.

Abstract:
The notion of information overload is an increasing factor in modern information service environments where information is ‘pushed’ to the user. As increasing volumes of information are presented to computing users in the form of email, web sites, instant messaging and news feeds, there is a growing need to filter and prioritise the importance of this information. ‘Information management’ needs to be undertaken in a manner that not only prioritises the information we do need, but also disposes of information that is sent to us and is of no (or little) use. The development of a model to aid information filtering in a context-aware way is an objective of this thesis. A key concern in the conceptualisation of a single concept is understanding the context under which that concept exists (or can exist). An example of a concept is a concrete object, for instance a book. This contextual understanding should provide us with clear conceptual identification of a concept, including implicit situational information and detail of surrounding concepts. Existing solutions to filtering information suffer from their own unique flaws: text-based filtering suffers from problems of inaccuracy; ontology-based solutions suffer from scalability challenges; taxonomies suffer from problems with collaboration. A major objective of this thesis is to explore the use of an evolving, community-maintained knowledge-base (that of Wikipedia) in order to populate the context model with concepts that are semantically relevant to the user’s interest space. Wikipedia can be classified as a weak knowledge-base due to its simple TBox schema and implicit predicates; therefore, part of this objective is to validate the claim that a weak knowledge-base is fit for this purpose. The proposed and developed solution therefore provides the benefits of high-recall filtering with low fallout and a dependency on a scalable and collaborative knowledge-base. A simple web feed aggregator, DAVe’s Rss Organisation System (DAVROS-2), has been built using the Java programming language as a testbed environment to demonstrate specific tests used within this investigation. The motivation behind the experiments is to demonstrate that the concept framework instantiated through Wikipedia can aid in concept comparison, and can therefore be used in a news filtering scenario as an example of information overload. In order to evaluate the effectiveness of the method, well-understood measures of information retrieval are used. This thesis demonstrates that the utilisation of the developed contextual concept expansion framework (instantiated using Wikipedia) improved the quality of concept filtering over a baseline based on string matching. This has been demonstrated through the analysis of recall and fallout measures.
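Recall and fallout, the two measures cited in this abstract, are simple set ratios; a minimal Python sketch follows (a generic illustration with invented document IDs, not code from the thesis):

```python
def recall_and_fallout(retrieved, relevant, collection):
    """Recall = |retrieved & relevant| / |relevant|;
    fallout = |retrieved & non-relevant| / |non-relevant|."""
    retrieved, relevant, collection = set(retrieved), set(relevant), set(collection)
    non_relevant = collection - relevant
    recall = len(retrieved & relevant) / len(relevant) if relevant else 0.0
    fallout = len(retrieved & non_relevant) / len(non_relevant) if non_relevant else 0.0
    return recall, fallout

# Toy example: 10-document collection, 4 relevant, the filter returns 5 documents.
collection = [f"d{i}" for i in range(10)]
relevant = ["d0", "d1", "d2", "d3"]
retrieved = ["d0", "d1", "d5", "d6", "d7"]
print(recall_and_fallout(retrieved, relevant, collection))  # (0.5, 0.5)
```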
3

Yu, Kai. "Statistical Learning Approaches to Information Filtering." Diss., lmu, 2004. http://nbn-resolving.de/urn:nbn:de:bvb:19-25120.

4

Dolbear, Catherine. "Personalised information filtering using event causality." Thesis, University of Oxford, 2004. http://ora.ox.ac.uk/objects/uuid:31e94de4-5dda-4312-968b-d0ef34dea8e2.

Abstract:
Previous research on multimedia information filtering has mainly concentrated on key frame identification and video skim generation for browsing purposes; however, applications requiring the generation of summaries as the final product for user consumption are of equal scientific and commercial interest. Recent advances in computer vision have enabled the extraction of semantic events from an audio-visual signal, so it can be assumed for our purposes that such semantic labels are already available for use. We concentrate instead on developing methods to prioritise these semantic elements for inclusion in a summary which can be personalised to meet a particular user's needs. Our work differentiates itself from that in the literature as it is driven by the results of a knowledge elicitation study with expert summarisers. The experts in our study believe that summaries structured as a narrative are better able to convey the content of the original data to a user. Motivated by the information filtering problem, the primary contribution of this thesis is the design and implementation of a system to summarise sequences of events by automatic modelling of the causal relationships between them. We show, by comparison against summaries generated by experts and with the introduction of a new coherence metric, that modelling the causal relationships between events increases the coherence and accuracy of summaries. We suggest that this claim is valid, not only in the domain of soccer highlights generation, in which we carry out the bulk of our experiments, but also in any other domain in which causal relationships can be identified between events. This proposal is tested by applying our summarisation system to another, significantly different domain, that of business meeting summarisation, using the soccer training set and a manually generated ontology mapping. We introduce the concept of a context-group of causally related events as a first step towards modelling narrative episodes and present a comparison between a case-based reasoning and a two-stage Markov model approach to summarisation. For both methods we show that by including entire context-groups in the summary, rather than single events in isolation, more accurate summaries can be generated. Our approach to personalisation biases a summary according to particular narrative plotlines using different subsets of the training data. Results show that the number of instances of certain event classes can be increased by biasing the training set appropriately. This method gives very similar results to a standard weighting method, while avoiding the need to tailor the weights to a particular application domain.
5

Shardanand, Upendra. "Social information filtering for music recommendation." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/11667.

6

Olsson, Tomas. "Information Filtering with Collaborative Interface Agents." Thesis, SICS, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-22235.

Abstract:
This report describes a distributed approach to social filtering based on the agent metaphor. First, previous approaches such as cognitive filtering and social filtering are described. A couple of previously implemented systems are then presented, and a new system design is proposed. The main goal is to give the requirements and design of an agent-based system that recommends web documents. The presented approach combines cognitive and social filtering to get the advantages of both techniques. Finally, a prototype implementation called WebCondor is described, and results of testing the system are reported and discussed.
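A minimal sketch of the kind of combination described above, blending a cognitive (content-overlap) score with a social (peer-rating) score for one document; the weighting scheme, function names and numbers are invented and are not taken from WebCondor:

```python
def hybrid_score(doc_terms, profile_terms, peer_ratings, alpha=0.5):
    """Blend a cognitive (content-overlap) score with a social
    (average peer rating) score for a single document."""
    content = len(set(doc_terms) & set(profile_terms)) / max(len(set(profile_terms)), 1)
    social = sum(peer_ratings) / len(peer_ratings) if peer_ratings else 0.0
    return alpha * content + (1 - alpha) * social

# A document sharing 2 of 4 profile terms, rated 0.8 and 0.6 by similar users.
print(hybrid_score(["agents", "filtering", "www"],
                   ["agents", "filtering", "recommender", "web"],
                   [0.8, 0.6]))  # 0.5*0.5 + 0.5*0.7 = 0.6
```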
7

Lanquillon, Carsten. "Enhancing text classification to improve information filtering." [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=963801805.

8

Khan, Imran. "Personal adaptive web agent for information filtering." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq23361.pdf.

9

Sheth, Beerud Dilip. "A learning approach to personalized information filtering." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/37998.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 96-100).
by Beerud Dilip Sheth.
M.S.
10

Akkapeddi, Raghu C. "Grouping annotating and filtering history information in VKB." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/227.

Abstract:
History mechanisms available in hypertext systems allow users access to past interactions with the system and help users incorporate those interactions into the current context. The history information can be useful to both the system and the user. The Visual Knowledge Builder (VKB) creates spatial hypertexts - visual workspaces for collecting, organizing, and sharing information. It is based on prior work on VIKI. VKB records all edit events and presents them in the form of a "navigable history" as end-users work within an information workspace. My thesis explores attaching user interpretations of history via the grouping and annotation of edit events. Annotations can take the form of a plain-text statement or one or more attribute/value pairs attached to individual events or groups of events in the list. Moreover, I explore the value of history event filtering, limiting the edits and groups presented to those that match user descriptions. My contribution in this thesis is the addition of mechanisms whereby users can cope with larger history records in VKB via the process of grouping, annotating and filtering history information.
11

Rydberg, Christoffer. "Time Efficiency of Information Retrieval with Geographic Filtering." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172918.

Abstract:
This study addresses the question of time efficiency of two major models within Information Retrieval (IR): the Extended Boolean Model (EBM) and the Vector Space Model (VSM). Both models use the same weighting scheme, based on term-frequency-inverse document frequency (tf-idf). The VSM uses a cosine score computation to rank the document-query similarity. In the EBM, P-norm scores are used, which rank documents not just by matching terms, but also by taking the Boolean interconnections between the terms in the query into account. Additionally, this study investigates how documents with a single geographic affiliation can be retrieved based on features such as the location and geometry of the geographic surface. Furthermore, we want to answer how to best integrate this geographic search with the two IR models previously described. From previous research we conclude that using an index based on Z-Space Filling Curves (Z-SFC) is the best approach for documents containing a single geographic affiliation. When documents are retrieved from the Z-SFC index, there are no guarantees that the retrieved documents are relevant for the search area. It is, however, guaranteed that only the retrieved documents can be relevant. Furthermore, the ranked output of the IR models gives a great advantage to the geographic search, namely that we can focus on documents with a high relevance. We intersect the results from one of the IR models with the results from the Z-SFC index and sort the resulting list of documents by relevance. At this point we can iterate over the list, check for intersections of each document's geometry and the search geometry, and only retrieve documents whose geometries are relevant for the search. Since the user is only interested in the top results we can stop as soon as a sufficient number of results has been obtained. The conclusion of this study is that the VSM is an easy-to-implement, time-efficient retrieval model. It is inferior to the EBM in the sense that it is a rather simple bag-of-words model, while the EBM allows the user to specify term conjunctions and disjunctions. The geographic search has been shown to be time efficient and independent of which of the two IR models is used. The gap in efficiency between the VSM and the EBM, however, drastically increases as the query gets longer and more results are obtained. Depending on the requirements of the user, the collection size, the length of queries, etc., the benefits of the EBM might outweigh the downside of performance. For search engines with a big document collection and many users, however, it is likely to be too slow.
This study addresses the time efficiency of two major models within information retrieval: the Extended Boolean Model (EBM) and the Vector Space Model (VSM). Both models use the same type of weighting scheme, based on term frequency-inverse document frequency (tf-idf). In the VSM, each document is ranked against a query string through a scalar product of the vector representations of the document and the query. In the EBM, so-called p-norm score functions are used, which rank documents not only by matching terms but by taking into account the Boolean connections between the query terms. In addition, the study investigates how documents with a geographic affiliation can be retrieved based on the position and geometry of the geographic surface. Furthermore, we want to answer how this geographic search can best be integrated with the two information retrieval models. From previous research it is concluded that the best approach for documents with only one geographic affiliation is to use an index based on Z-Space Filling Curves (Z-SFC). When documents are retrieved through the Z-SFC index, there are no guarantees that the retrieved documents are relevant to the search area. It is, however, guaranteed that only these documents can be relevant. Furthermore, the ranked output from the IR models is a great advantage for the geographic search, namely that we can focus on documents with high relevance. This is done by comparing the results from the chosen IR model with the results from the Z-SFC index and sorting the matching documents by relevance. We can then iterate over the list and compute which documents' geometries intersect the geometry of the search. Since the user is only interested in the highest-ranked documents, we can stop when enough search results have been obtained. The conclusion of the study is that the VSM is easy to implement and very time efficient compared with the EBM. The model is inferior to the EBM in the sense that it is a rather simple bag-of-words model, while the EBM allows conjunctions and disjunctions to be specified. The geographic search has been shown to be time efficient and independent of which of the two IR models is used. The difference in time efficiency between the VSM and the EBM, however, increases drastically as the query string becomes longer and more results are obtained. Depending on the user's requirements, the size of the document collection, the length of the query string, etc., the advantages of the EBM can sometimes outweigh the drawback of the lower performance. For search engines with large document collections and many users, however, the model is likely to be too slow.
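A condensed sketch of the retrieval pipeline both abstracts describe: score only the candidates returned by the geographic (Z-SFC) index with tf-idf/cosine ranking, then walk the ranked list doing the exact geometry test and stop once enough results are accepted. The function names, the toy bounding-box test and the simple query weighting are illustrative assumptions, not the thesis implementation:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: {doc_id: list of terms} -> {doc_id: {term: tf-idf weight}}."""
    n = len(docs)
    df = Counter(t for terms in docs.values() for t in set(terms))
    return {d: {t: tf * math.log(n / df[t]) for t, tf in Counter(terms).items()}
            for d, terms in docs.items()}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def boxes_intersect(a, b):
    """Axis-aligned bounding boxes given as (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def geo_filtered_search(query_terms, docs, zsfc_candidates, geometries, search_box, k=10):
    """Rank only the Z-SFC candidates by cosine similarity, then walk the ranked
    list doing the exact geometry test, stopping after k accepted results."""
    vecs = tfidf_vectors(docs)
    qvec = dict(Counter(query_terms))          # simple tf weighting for the query
    ranked = sorted(zsfc_candidates, key=lambda d: cosine(qvec, vecs[d]), reverse=True)
    hits = []
    for d in ranked:
        if boxes_intersect(geometries[d], search_box):
            hits.append(d)
            if len(hits) == k:
                break
    return hits
```

Deferring the exact (more expensive) geometry test to the ranked walk is what allows the search to terminate early once the top k results have been found.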
12

Tam, Ming-wai, and 譚銘威. "Scalable collaborative filtering using updatable indexing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40687351.

13

Tam, Ming-wai. "Scalable collaborative filtering using updatable indexing." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B40687351.

14

Bizer, Christian. "Quality-driven Information Filtering in the Context of web-based Information Systems." [S.l.] : [s.n.], 2007. http://www.diss.fu-berlin.de/2007/217/index.html.

15

Liang, Winnie H. (Winnie Hui-Ning). "Managing information overload on the Web with collaborative filtering." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/32181.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaves 102-103).
by Winnie H. Liang.
M.Eng.
16

Maltz, David A. (David Aaron). "Distributing information for collaborative filtering on Usenet net news." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/36464.

17

Sriram, Bharath. "Short Text Classification in Twitter to Improve Information Filtering." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1275406094.

18

Kapanipathi, Pavan. "Personalized and Adaptive Semantic Information Filtering for Social Media." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1464541093.

19

Clausznitzer, Diana. "Bacterial chemotaxis: sensory adaptation, noise filtering, and information transmission." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/6918.

Abstract:
Chemotaxis is a fundamental cellular process by which cells sense and navigate in their environment. The molecular signalling pathway in the bacterium Escherichia coli is experimentally well-characterised and, hence, ideal for quantitative analysis and modelling. Chemoreceptors sense gradients of a multitude of substances and regulate an intracellular signalling pathway, which modulates the swimming behaviour. We studied the chemotaxis pathway in E. coli (i) to quantitatively understand molecular interactions in the signalling network, (ii) to gain a systems view of the workings of the pathway, including the effects of noise generated by biomolecular reactions during signalling, and (iii) to understand general design principles relevant for many sensory systems. Specifically, we investigated the adaptation dynamics due to covalent chemoreceptor modification, which includes numerous layers of feedback regulation. In collaboration with an experimental group, we undertook quantitative experiments using wild-type cells and mutants for proteins involved in adaptation using in vivo fluorescence resonance transfer (FRET). We developed a dynamical model for chemotactic signalling based on cooperative chemoreceptors and adaptation of the sensory response. This model quantitatively explains an interesting asymmetry of the response to favourable and unfavourable stimuli observed in the experiments. In a whole-pathway description, we further studied the response to controlled concentration stimuli, as well as how fluctuations from the environment and due to intracellular signalling affect the detection of input signals. Finally, the chemotaxis pathway is characterised by high sensitivity, a wide dynamic range and the need for information transmission, properties shared with many other sensory systems. Based on FRET data, we investigated the emergence, limits and biological significance of Weber's law, which predicts that the system detects stimuli relative to the background stimulus. Furthermore, we studied the information transmission from input concentrations into intracellular signals. We connect Weber's law, as well as information transmission, to swimming bacteria and predict typically encountered chemical inputs.
20

Tong, Shan. "Dynamic physiological information recovery : a sampled-data filtering framework /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20TONG.

21

Nguyen, Tran Diem Hanh. "Semantic-based topic evaluation and application in information filtering." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/209882/1/Tran%20Diem%20Hanh_Nguyen_Thesis.pdf.

Abstract:
Topic modelling techniques are used to find the main themes in a collection of documents automatically. This thesis presents effective topic evaluation models to measure the quality of the discovered topics. The proposed techniques use human-defined knowledge to solve the problem of evaluating topics in terms of their semantic meaning. The thesis also proposes methods for modelling user interest based on the topic model generated from the user’s documents. The proposed techniques help to measure the quality of the topics and significantly improve the performance of text mining applications.
22

Gao, Yang. "Pattern-based topic modelling and its application for information filtering and information retrieval." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/83982/1/Yang_Gao_Thesis.pdf.

Abstract:
This thesis targets a challenging issue: enhancing users' experience of massive and overloaded web information. The novel pattern-based topic model proposed in this thesis can generate high-quality multi-topic user interest models by incorporating statistical topic modelling and pattern mining. We have successfully applied the pattern-based topic model to both the field of information filtering and that of information retrieval. The success of the proposed model in finding the information most relevant to users comes mainly from its precise semantic representations of documents and its accurate classification of topics at both the document level and the collection level.
23

Yang, Li. "Building an Intelligent Filtering System Using Idea Indexing." Thesis, University of North Texas, 2003. https://digital.library.unt.edu/ark:/67531/metadc4275/.

Abstract:
The widely used vector model maintains its popularity because of its simplicity, fast speed, and the appeal of using spatial proximity for semantic proximity. However, this model suffers from the vagueness that arises from overlapping keywords. Efforts have been made to improve the vector model. The research on improving document representation has focused on four areas, namely, statistical co-occurrence of related items, forming term phrases, grouping of related words, and representing the content of documents. In this thesis, we propose the idea-indexing model to improve document representation for the filtering task in IR. The idea-indexing model matches document terms with the ideas they express and indexes the document with these ideas. This indexing scheme represents the document with its semantics instead of a set of independent terms. We show in this thesis that indexing with ideas leads to better performance.
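A toy sketch of the indexing step described above; the term-to-idea mapping and the documents are invented for illustration, and the thesis's actual mapping and weighting will differ:

```python
# Map surface terms to the "ideas" they express, then index documents by ideas.
TERM_TO_IDEA = {
    "car": "vehicle", "automobile": "vehicle", "truck": "vehicle",
    "stock": "finance", "bond": "finance", "market": "finance",
}

def idea_index(doc_terms):
    """Replace each known term with its idea; unknown terms are kept as-is."""
    return {TERM_TO_IDEA.get(t, t) for t in doc_terms}

d1 = idea_index(["car", "market", "report"])   # {'vehicle', 'finance', 'report'}
d2 = idea_index(["automobile", "stock"])       # {'vehicle', 'finance'}
print(d1 & d2)  # overlap on ideas even though the documents share no raw keyword
```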
24

Widyantoro, Dwi Hendratmo. "Concept drift learning and its application to adaptive information filtering." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969.1/170.

Abstract:
Tracking the evolution of user interests is a problem instance of concept drift learning. Keeping track of multiple interest categories is a natural phenomenon as well as an interesting tracking problem because interests can emerge and diminish at different time frames. The first part of this dissertation presents a Multiple Three-Descriptor Representation (MTDR) algorithm, a novel algorithm for learning concept drift especially built for tracking the dynamics of multiple target concepts in the information filtering domain. The learning process of the algorithm combines the long-term and short-term interest (concept) models in an attempt to benefit from the strength of both models. The MTDR algorithm improves over existing concept drift learning algorithms in the domain. Being able to track multiple target concepts with a few examples poses an even more important and challenging problem because casual users tend to be reluctant to provide the examples needed, and learning from a few labeled data is generally difficult. The second part presents a computational Framework for Extending Incomplete Labeled Data Stream (FEILDS). The system modularly extends the capability of an existing concept drift learner in dealing with incomplete labeled data stream. It expands the learner's original input stream with relevant unlabeled data; the process generates a new stream with improved learnability. FEILDS employs a concept formation system for organizing its input stream into a concept (cluster) hierarchy. The system uses the concept and cluster hierarchy to identify the instance's concept and unlabeled data relevant to a concept. It also adopts the persistence assumption in temporal reasoning for inferring the relevance of concepts. Empirical evaluation indicates that FEILDS is able to improve the performance of existing learners particularly when learning from a stream with a few labeled data. Lastly, a new concept formation algorithm, one of the key components in the FEILDS architecture, is presented. The main idea is to discover intrinsic hierarchical structures regardless of the class distribution and the shape of the input stream. Experimental evaluation shows that the algorithm is relatively robust to input ordering, consistently producing a hierarchy structure of high quality.
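As a generic illustration of keeping separate long-term and short-term interest models and blending them when scoring new documents (this is not the MTDR algorithm itself; the decay rates and the mixing weight are made up):

```python
class DriftingProfile:
    """Long-term model adapts slowly; short-term model adapts quickly and is
    also forgotten quickly, so emerging interests show up early in scores."""
    def __init__(self, slow=0.05, fast=0.5):
        self.long_term, self.short_term = {}, {}
        self.slow, self.fast = slow, fast

    def update(self, term_weights):
        for model, rate in ((self.long_term, self.slow), (self.short_term, self.fast)):
            for t in set(model) | set(term_weights):
                old, new = model.get(t, 0.0), term_weights.get(t, 0.0)
                model[t] = (1 - rate) * old + rate * new   # exponential forgetting

    def score(self, term_weights, mix=0.5):
        def dot(m):
            return sum(w * m.get(t, 0.0) for t, w in term_weights.items())
        return mix * dot(self.long_term) + (1 - mix) * dot(self.short_term)
```

Because the short-term model both learns and forgets quickly, an emerging interest raises scores early, while the slowly adapting long-term model keeps established interests from being washed out.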
25

Moukas, Alexandros G. "Amalthaea--information filtering and discovery using a multiagent evolving system." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/62338.

26

Strunjas, Svetlana. "Algorithms and Models for Collaborative Filtering from Large Information Corpora." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1220001182.

27

Mohd, Azmi Nurulhuda Firdaus. "Artificial immune systems for information filtering : focusing on profile adaptation." Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/6695/.

Abstract:
The human immune system has characteristics such as self-organisation, robustness and adaptivity that may be useful in the development of adaptive systems. One suitable application area for adaptive systems is Information Filtering (IF). Within the context of IF, learning and adapting user profiles is an important research area. With an individual profile, an IF system has to rely on the ability of the user profile to maintain a satisfactory level of filtering accuracy for as long as it is being used. This thesis explores a possible way to enable Artificial Immune Systems (AIS) to filter information in the context of profile adaptation. Previous work has investigated this issue from the perspective of self-organisation based on Autopoietic Theory. In contrast, this current work approaches the problem from the perspective of diversity, inspired by the concepts of dynamic clonal selection and a gene library used to maintain sufficient diversity. An immune-inspired IF algorithm for profile adaptation is proposed and developed. This algorithm is demonstrated to detect relevant documents by using a single profile to recognize a user’s interests and to adapt to changes in them. We employed a virtual user, tested on a web document corpus, to evaluate the profile’s learning of an emerging new topic of interest and its forgetting of uninteresting topics. The results clearly indicate the profile’s ability to adapt to frequent variations and radical changes in user interest. This work has focused on textual information, but it may have the potential to be applied to other media such as audio and images, in which adaptivity to dynamic environments is crucial. These are all interesting future directions in which this work might develop.
28

Blankenburg, Sven. "Theoretical mechanisms of information filtering in stochastic single neuron models." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17577.

Abstract:
This thesis deals with mechanisms that can lead to frequency-dependent information transmission in single-cell models. To investigate this, methods from theoretical physics (statistical physics) and information theory are applied. Information filtering in several stochastic neuron models, in which different mechanisms can lead to information filtering, is examined numerically and, where possible, analytically. The range of models considered extends from reduced current-based integrate-and-fire (IF) models to biophysically more realistic conductance-based models. Numerical investigations show that many variants of the IF neuron models preferentially transmit information about slow components of a time-dependent input signal. The simplest representative of the above class of IF neuron models is extended to incorporate a concept of neuronal 'memory', by means of positive correlations between adjacent intervals of successive spikes. This model permits an analytical, perturbation-theoretic investigation of the effects of positive correlations on information filtering. To examine how so-called 'subthreshold resonances' affect signal transmission, neuron models with different nonlinearities are analysed by means of numerical computer simulations. Finally, signal transmission in a neuronal cascade system consisting of linear and nonlinear elements is considered. Neuronal nonlinearities give rise to a trade-off between qualitative, i.e. frequency-selective, and quantitative information transmission, which is discussed for all of the models I investigated. This work highlights the importance of nonlinearities in neuronal information filtering.
Neurons transmit information about time-dependent input signals via highly non-linear responses, so-called action potentials or spikes. This type of information transmission can be frequency-dependent and allows for preferences for certain stimulus components. A single neuron can transmit either slow components (low-pass filter), fast components (high-pass filter), or intermediate components (band-pass filter) of a time-dependent input signal. Using methods developed in theoretical physics (statistical physics) within the framework of information theory, in this thesis cell-intrinsic mechanisms are investigated that can lead to frequency selectivity on the level of information transmission. Various stochastic single neuron models are examined numerically and, where tractable, analytically. Ranging from simple spiking models to complex conductance-based models with and without nonlinearities, these models include integrator as well as resonator dynamics. First, the spectral information filtering characteristics of different types of stochastic current-based integrator neuron models are studied. Subsequently, the simple deterministic PIF model is extended with a stochastic spiking rule, leading to positive correlations between successive interspike intervals (ISIs). Thereafter, models which show subthreshold resonances (so-called resonator models) are examined and their effects on the spectral information filtering characteristics are investigated. Finally, the spectral information filtering properties of stochastic linear-nonlinear cascade neuron models are studied by employing different static nonlinearities (SNLs). The trade-off between frequency-dependent signal transmission and the total amount of transmitted information is demonstrated in all models and constitutes a direct consequence of the nonlinear formulation of the models.
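For reference, one standard way (not specific to this thesis) of making 'frequency-dependent information transmission' precise is through the spectral coherence between a stimulus s(t) and the spike response x(t), and the associated lower bound on the mutual information rate; here S_xs is the cross-spectrum and S_xx, S_ss are the power spectra:

```latex
% Coherence between stimulus s(t) and response x(t)
C(f) = \frac{|S_{xs}(f)|^2}{S_{xx}(f)\, S_{ss}(f)}, \qquad 0 \le C(f) \le 1,
% coherence-based lower bound on the mutual information rate (bits per second)
I_{LB} = -\int_0^{\infty} \log_2\!\left[\, 1 - C(f) \,\right] \mathrm{d}f .
```

A neuron then acts as a low-, high- or band-pass information filter depending on where C(f) is largest.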
29

Zhou, Xujuan. "Rough set-based reasoning and pattern mining for information filtering." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/29350/1/Xujuan_Zhou_Thesis.pdf.

Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users’ needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of the IF system. On the other hand, pattern discovery from large data streams is not computationally efficient. Also, these approaches had to deal with low-frequency pattern issues. The measures used by the data mining technique (for example, “support” and “confidence”) to learn the profile have turned out to be not suitable for filtering; they can lead to a mismatch problem. This thesis uses rough set-based reasoning (term-based) and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of the threshold setting have been developed by using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy. The most likely relevant documents were assigned higher scores by the ranking function. Because there is a relatively small number of documents left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on the well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely, the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both the term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model and the state-of-the-art term-based models, including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
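A very rough sketch of the two-stage idea, a cheap threshold-based topic filter followed by a finer re-ranking of the survivors; the threshold rule and the pattern scoring below are placeholders rather than the rough-set decision rules and pattern-taxonomy scoring developed in the thesis:

```python
def two_stage_filter(docs, profile_terms, patterns, threshold=0.2, top_k=5):
    """Stage 1: a cheap topic filter discards documents scoring below a threshold.
    Stage 2: the smaller surviving set is re-ranked with a costlier pattern score."""
    def topic_score(doc):
        terms = set(doc["terms"])
        return len(terms & profile_terms) / max(len(profile_terms), 1)

    def pattern_score(doc):
        # reward documents containing whole patterns (sets of co-occurring terms)
        terms = set(doc["terms"])
        return sum(weight for pattern, weight in patterns if pattern <= terms)

    survivors = [d for d in docs if topic_score(d) >= threshold]
    return sorted(survivors, key=pattern_score, reverse=True)[:top_k]

docs = [{"id": 1, "terms": ["rough", "set", "theory"]},
        {"id": 2, "terms": ["pattern", "mining", "stream"]},
        {"id": 3, "terms": ["football", "score"]}]
patterns = [(frozenset({"pattern", "mining"}), 2.0), (frozenset({"rough", "set"}), 1.5)]
print(two_stage_filter(docs, {"pattern", "mining", "set", "rough"}, patterns))
```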
30

Zhou, Xujuan. "Rough set-based reasoning and pattern mining for information filtering." Queensland University of Technology, 2008. http://eprints.qut.edu.au/29350/.

Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users’ needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of the IF system. On the other hand, pattern discovery from large data streams is not computationally efficient. Also, these approaches had to deal with low-frequency pattern issues. The measures used by the data mining technique (for example, “support” and “confidence”) to learn the profile have turned out to be not suitable for filtering; they can lead to a mismatch problem. This thesis uses rough set-based reasoning (term-based) and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of the threshold setting have been developed by using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy. The most likely relevant documents were assigned higher scores by the ranking function. Because there is a relatively small number of documents left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on the well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely, the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both the term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model and the state-of-the-art term-based models, including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
31

Bikdash, Marwan. "Analysis and filtering of time-varying signals." Thesis, Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/80015.

Abstract:
The characterization, analysis and filtering of a slowly time-varying (STV) deterministic signal are considered. An STV signal is characterized as a sophisticated signal whose windowed sections are elementary signals. Mixed time-frequency representations (MTFRs) such as the Wigner distribution (WD), the Pseudo-Wigner distribution (PWD), the Short-time Fourier transform (STFT) and the optimally smoothed Wigner distribution (OSWD) used in analyzing STV signals are analyzed and compared. The OSWD is shown to perform satisfactorily even if the signals are amplitude modulated. The OSWD is shown to yield the exact instantaneous frequency for STV signals having quadratic phase, and to have a minimal and meaningful bandwidth (BW) that does not depend on the slope of the instantaneous frequency curve in the time-frequency plane, unlike the BW of the spectrogram. We also present some contributions to the ongoing debate addressing the issue of choosing the MTFR that is best suited to the analysis of STV signals. Using analytical and experimental results, the performances of the different MTFRs are compared, and the conditions under which a given MTFR performs better are considered. The filtering of a signal from a noise-corrupted measurement, and the decomposition of an STV signal into its components in the presence of noise, are considered. These two related problems have been solved through masking the MTFRs of the measured signal. This approach has been successfully used in the case of the WD, PWD and the STFT. We propose extending the use of this approach to the OSWD. An equivalent time-domain implementation based on linear shift-variant (LSV) filters is derived and fully analyzed. It is based on the concept of local nonstationarity cancellation. The proposed filter is shown to have a superior performance when compared to the filter based on masking the STFT. The sensitivity of the filter is studied. The filter's ability to suppress white noise and to decompose an STV signal into its components is analyzed and illustrated.
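For orientation, the two representations named above are usually defined as follows (textbook forms, not necessarily the thesis's notation):

```latex
% Wigner distribution of a signal x(t)
W_x(t, f) = \int_{-\infty}^{\infty} x\!\left(t + \tfrac{\tau}{2}\right)
            x^{*}\!\left(t - \tfrac{\tau}{2}\right) e^{-j 2\pi f \tau}\, \mathrm{d}\tau ,
% Short-time Fourier transform with analysis window w(t)
X_w(t, f) = \int_{-\infty}^{\infty} x(\tau)\, w^{*}(\tau - t)\, e^{-j 2\pi f \tau}\, \mathrm{d}\tau .
```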
Master of Science
32

Turner, Brett Ronald. "An investigation into the efficacy of URL content filtering systems." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2409.

Abstract:
Content filters are used to restrict minors from accessing online content deemed inappropriate. While much research and evaluation has been done on the efficiency of content filters, there is little in the way of empirical research as to their efficacy. The accessing of inappropriate material by minors, and the role content filtering systems can play in preventing such access, is largely assumed with little or no evidence. This thesis investigates whether a content filter implemented with the stated aim of restricting specific Internet content from high school students achieved the goal of stopping students from accessing the identified material. The case is a high school in Western Australia, where the logs of a proxy content filter that included all Internet traffic requested by students were examined to determine the efficacy of the content filter. Using text extraction and pattern matching techniques to look for evidence of access to restricted content, this study demonstrates that the belief that content filtering systems reliably prevent access to restricted content is misplaced: there is direct evidence of circumvention of the content filter. This is a single case study in one school and, as such, the results are not generalisable to all schools or even to the subsequent systems that replaced the content filter examined in this study, but it does raise the issue of the ability of these content filter systems to restrict content from high school students. Further studies across multiple schools and more complex circumvention methods would be required to identify whether circumvention of content filters is a widespread issue.
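A bare-bones illustration of the kind of log analysis described, pattern matching over proxy log lines; the log format, the categories and the patterns are invented for the example:

```python
import re

# Hypothetical patterns for restricted categories; a real study would use far richer lists.
RESTRICTED = {
    "proxy-bypass": re.compile(r"(translate\.google|cachedview|\bproxy\b)", re.I),
    "gambling":     re.compile(r"(poker|casino|betting)", re.I),
}

def scan_proxy_log(lines):
    """Return {category: [matching log lines]} for lines like 'timestamp user URL status'."""
    hits = {cat: [] for cat in RESTRICTED}
    for line in lines:
        parts = line.split()
        url = parts[-2] if len(parts) >= 2 else line
        for cat, pattern in RESTRICTED.items():
            if pattern.search(url):
                hits[cat].append(line.strip())
    return hits

log = ["2021-03-01T09:12 s123 http://translate.google.com/translate?u=blocked.example 200",
       "2021-03-01T09:13 s123 http://school.example/homework 200"]
print(scan_proxy_log(log))
```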
33

Lau, Raymond Yiu Keung. "Belief revision for adaptive information agents." Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15789/1/Raymond_Lau_Thesis.pdf.

Abstract:
As the richness and diversity of information available to us in our everyday lives has expanded, so the need to manage this information grows. The lack of effective information management tools has given rise to what is colloquially known as the information overload problem. Intelligent agent technologies have been explored to develop personalised tools for autonomous information retrieval (IR). However, these so-called adaptive information agents are still primitive in terms of their learning autonomy, inference power, and explanatory capabilities. For instance, users often need to provide large amounts of direct relevance feedback to train the agents before these agents can acquire the users' specific information requirements. Existing information agents are also weak in dealing with the serendipity issue in IR because they cannot infer document relevance with respect to the possibly related IR contexts. This thesis exploits the theories and technologies from the fields of Information Retrieval (IR), Symbolic Artificial Intelligence and Intelligent Agents for the development of the next generation of adaptive information agents to alleviate the problem of information overload. In particular, the fundamental issues such as representation, learning, and classification (e.g., classifying documents as relevant or not) pertaining to these agents are examined. The design of the adaptive information agent model stems from a basic intuition in IR. By way of illustration, given the retrieval context involving a science student, and a query "Java", what information items should an intelligent information agent recommend to its user? The agent should recommend documents about "Computer Programming" if it believes that its user is a computer science student and every computer science student needs to learn programming. However, if the agent later discovers that its user is studying "volcanology", and the agent also believes that volcanologists are interested in the volcanos in Java, the agent may recommend documents about "Merapi" (a volcano in Java with a recent eruption in 1994). This scenario illustrates that a retrieval context is not only about a set of terms and their frequencies but also the relationships among terms (e.g., java ∧ science → computer, computer → programming, java ∧ science ∧ volcanology → merapi, etc.). In addition, retrieval contexts represented in information agents should be revised in accordance with the changing information requirements of the users. Therefore, to enhance the adaptive and proactive IR behaviour of information agents, an expressive representation language is needed to represent complex retrieval contexts and an effective learning mechanism is required to revise the agents' beliefs about the changing retrieval contexts. Moreover, a sound reasoning mechanism is essential for information agents to infer document relevance with respect to some retrieval contexts to enhance their proactiveness and learning autonomy. The theory of belief revision advocated by Alchourrón, Gärdenfors, and Makinson (AGM) provides a rigorous formal foundation to model evolving retrieval contexts in terms of changing epistemic states in adaptive information agents. The expressive power of the AGM framework allows sufficient details of retrieval contexts to be captured. Moreover, the AGM framework enforces the principles of minimal and consistent belief changes. These principles coincide with the requirements of modelling changing information retrieval contexts.
The AGM belief revision logic has a close connection with the Logical Uncertainty Principle, which describes the fundamental approach for logic-based IR models. Accordingly, the AGM belief functions are applied to develop the learning components of adaptive information agents. Expectation inference, which is characterised by axioms leading to conservatively monotonic IR behaviour, plays a significant role in developing the agents' classification components. Because of the direct connection between the AGM belief functions and the expectation inference relations, seamless integration of the information agents' learning and classification components is made possible. Essentially, the learning functions and the classification functions of adaptive information agents are conceptualised by and q d respectively. This conceptualisation can be interpreted as: (1) learning is the process of revising the representation K of a retrieval context with respect to a user's relevance feedback q, which can be seen as a refined query; (2) classification is the process of determining the degree of relevance of a document d with respect to the refined query q given the agent's expectation (i.e., beliefs) K about the retrieval context. At the computational level, how to induce the epistemic entrenchment which defines the AGM belief functions, and how to implement the AGM belief functions by means of an effective and efficient computational algorithm, are among the core research issues addressed. Automated methods of discovering context-sensitive term associations such as (computer → programming) and preclusion relations such as (volcanology ↛ programming) are explored. In addition, an effective classification method which is underpinned by expectation inference is developed for adaptive information agents. Last but not least, quantitative evaluations, which are based on well-known IR benchmarking processes, are applied to examine the performance of the prototype agent system. The performance of the belief revision based information agent system is compared with that of a vector space based agent system and other adaptive information filtering systems that participated in TREC-7. As a whole, encouraging results are obtained from our initial experiments.
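For background, in standard AGM notation (not necessarily the exact formulation used in the thesis), revision of a belief set K by new evidence φ can be defined from contraction and expansion via the Levi identity, and the learning/classification split sketched in the abstract can be read as revision by the refined query followed by expectation (nonmonotonic) inference about a document:

```latex
% Levi identity: revising K by \varphi = contracting by \neg\varphi, then expanding by \varphi
K * \varphi \;=\; (K \div \neg\varphi) + \varphi
% Learning: revise the retrieval-context representation K by the refined query q
K_{\mathrm{new}} \;=\; K * q
% Classification (one reading of the abstract): d is relevant if it is expected given K and q
K, q \;\mid\!\sim\; d
```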
34

Lau, Raymond Yiu Keung. "Belief Revision for Adaptive Information Agents." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15789/.

Abstract:
As the richness and diversity of information available to us in our everyday lives has expanded, so the need to manage this information grows. The lack of effective information management tools has given rise to what is colloquially known as the information overload problem. Intelligent agent technologies have been explored to develop personalised tools for autonomous information retrieval (IR). However, these so-called adaptive information agents are still primitive in terms of their learning autonomy, inference power, and explanatory capabilities. For instance, users often need to provide large amounts of direct relevance feedback to train the agents before these agents can acquire the users' specific information requirements. Existing information agents are also weak in dealing with the serendipity issue in IR because they cannot infer document relevance with respect to the possibly related IR contexts. This thesis exploits the theories and technologies from the fields of Information Retrieval (IR), Symbolic Artificial Intelligence and Intelligent Agents for the development of the next generation of adaptive information agents to alleviate the problem of information overload. In particular, the fundamental issues such as representation, learning, and classification (e.g., classifying documents as relevant or not) pertaining to these agents are examined. The design of the adaptive information agent model stems from a basic intuition in IR. By way of illustration, given the retrieval context involving a science student, and a query "Java", what information items should an intelligent information agent recommend to its user? The agent should recommend documents about "Computer Programming" if it believes that its user is a computer science student and every computer science student needs to learn programming. However, if the agent later discovers that its user is studying "volcanology", and the agent also believes that volcanologists are interested in the volcanos in Java, the agent may recommend documents about "Merapi" (a volcano in Java with a recent eruption in 1994). This scenario illustrates that a retrieval context is not only about a set of terms and their frequencies but also the relationships among terms (e.g., java ∧ science → computer, computer → programming, java ∧ science ∧ volcanology → merapi, etc.). In addition, retrieval contexts represented in information agents should be revised in accordance with the changing information requirements of the users. Therefore, to enhance the adaptive and proactive IR behaviour of information agents, an expressive representation language is needed to represent complex retrieval contexts and an effective learning mechanism is required to revise the agents' beliefs about the changing retrieval contexts. Moreover, a sound reasoning mechanism is essential for information agents to infer document relevance with respect to some retrieval contexts to enhance their proactiveness and learning autonomy. The theory of belief revision advocated by Alchourrón, Gärdenfors, and Makinson (AGM) provides a rigorous formal foundation to model evolving retrieval contexts in terms of changing epistemic states in adaptive information agents. The expressive power of the AGM framework allows sufficient details of retrieval contexts to be captured. Moreover, the AGM framework enforces the principles of minimal and consistent belief changes. These principles coincide with the requirements of modelling changing information retrieval contexts.
The AGM belief revision logic has a close connection with the Logical Uncertainty Principle, which describes the fundamental approach for logic-based IR models. Accordingly, the AGM belief functions are applied to develop the learning components of adaptive information agents. Expectation inference, which is characterised by axioms leading to conservatively monotonic IR behaviour, plays a significant role in developing the agents' classification components. Because of the direct connection between the AGM belief functions and the expectation inference relations, seamless integration of the information agents' learning and classification components is made possible. Essentially, the learning functions and the classification functions of adaptive information agents are conceptualised by and q d respectively. This conceptualisation can be interpreted as: (1) learning is the process of revising the representation K of a retrieval context with respect to a user's relevance feedback q, which can be seen as a refined query; (2) classification is the process of determining the degree of relevance of a document d with respect to the refined query q given the agent's expectation (i.e., beliefs) K about the retrieval context. At the computational level, how to induce the epistemic entrenchment which defines the AGM belief functions, and how to implement the AGM belief functions by means of an effective and efficient computational algorithm, are among the core research issues addressed. Automated methods of discovering context-sensitive term associations such as (computer → programming) and preclusion relations such as (volcanology ↛ programming) are explored. In addition, an effective classification method which is underpinned by expectation inference is developed for adaptive information agents. Last but not least, quantitative evaluations, which are based on well-known IR benchmarking processes, are applied to examine the performance of the prototype agent system. The performance of the belief revision based information agent system is compared with that of a vector space based agent system and other adaptive information filtering systems that participated in TREC-7. As a whole, encouraging results are obtained from our initial experiments.
35

Fletcher, Douglas Dwayne. "Adaptive filtering for extracting asymmetric rotating body information from measurement sensors." Thesis, Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/15661.

36

Thompson, Gordon A. (Gordon Alexander). "Inertial measurement unit calibration using Full Information Maximum Likelihood Optimal Filtering." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34136.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2005.
Includes bibliographical references (p. 105-108).
The robustness of Full Information Maximum Likelihood Optimal Filtering (FIMLOF) for inertial measurement unit (IMU) calibration in high-g centrifuge environments is considered. FIMLOF uses an approximate Newton's Method to identify Kalman Filter parameters such as process and measurement noise intensities. Normally, IMU process noise intensities and measurement standard deviations are determined by laboratory testing in a 1-g field. In this thesis, they are identified along with the calibration of the IMU during centrifuge testing. The partial derivatives of the Kalman Filter equations necessary to identify these parameters are developed. Using synthetic measurements, the sensitivity of FIMLOF to initial parameter estimates and filter suboptimality is investigated. The filter residuals, the FIMLOF parameters, and their associated statistics are examined. The results show that FIMLOF can be very successful at tuning suboptimal filter models. For systems with significant mismodeling, FIMLOF can substantially improve the IMU calibration and subsequent navigation performance. In addition, FIMLOF can be used to detect mismodeling in a system, through disparities between the laboratory-derived parameter estimates and the FIMLOF parameter estimates.
by Gordon A. Thompson.
S.M.
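The sketch below illustrates the general idea behind identifying Kalman filter noise parameters from measurement residuals. It is not the thesis's FIMLOF code: the scalar random-walk model, the synthetic noise values and scipy's BFGS optimiser are stand-ins for the full IMU model and the approximate Newton's method described in the abstract.

# A minimal sketch (not the thesis's FIMLOF code) of the underlying idea:
# tune Kalman-filter noise intensities by maximising the likelihood of the
# filter innovations.

import numpy as np
from scipy.optimize import minimize

def innovation_nll(log_params, z):
    """Negative log-likelihood of the measurements under a scalar
    random-walk Kalman filter with process noise q and measurement noise r."""
    q, r = np.exp(log_params)          # log-parameterised to keep q, r > 0
    x, p = 0.0, 1.0                    # state estimate and covariance
    nll = 0.0
    for zk in z:
        p = p + q                      # time update
        s = p + r                      # innovation covariance
        nu = zk - x                    # innovation (residual)
        nll += 0.5 * (np.log(2 * np.pi * s) + nu**2 / s)
        k = p / s                      # Kalman gain
        x = x + k * nu                 # measurement update
        p = (1 - k) * p
    return nll

# Synthetic measurements from a random walk with q = 0.04, r = 0.25.
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.2, 500))
z = truth + rng.normal(0, 0.5, 500)

res = minimize(innovation_nll, x0=np.log([1.0, 1.0]), args=(z,), method="BFGS")
print("identified q, r:", np.exp(res.x))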
APA, Harvard, Vancouver, ISO, and other styles
37

Almosallam, Ibrahim Ahmad Shang Yi. "A new adaptive framework for collaborative filtering prediction." Diss., Columbia, Mo. : University of Missouri-Columbia, 2008. http://hdl.handle.net/10355/5630.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2008.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on August 22, 2008). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
38

De la Rouviere, Simon. "Effectiveness of user-curated filtering as coping strategy for information overload on microblogging services." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86215.

Full text
Abstract:
Thesis (MA)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: We are living in an increasingly global and connected society with information creation increasing at exponential rates. The research sets out to help solve the problem of mitigating the effects of information overload in order to increase the novelty of our interactions in the digital age. Online social-networks and microblogging services allow people across the world to take part in a public conversation. These tools have inherent constraints on how much communication can feasibly occur. If users become too connected, they will receive too much information to reasonably process. On Twitter (a microblogging service), lists are a tool for users to create separate feeds. The research determines whether lists are an effective tool for coping with information overload (abundance of updates). Using models of sustainable online discourse and information overload on computer-mediated communication tools, the research found that lists are an effective tool to cope with information overload on microblogging services. Quantitatively, individuals who make use of lists follow more users, and when they start using lists they increase the amount of information resources (following other users) at a greater rate than those who do not use lists. Qualitatively, the research also provides insight into the reasons why people use lists. The research adds new academic relevance to ‘information overload’ and ‘online sustainability’ models not previously used in the context of feed-based online CMC tools, and deepens the understanding and importance of user-curated filtering as a way to reap the benefits from the increasing abundance of information in the digital age.
AFRIKAANS ABSTRACT: We live in an increasingly global and connected society in which information creation is growing at an exponential rate. This research aims to mitigate the side effects of the abundance of information so that more value can be drawn from our interactions in the digital age. Online social networks and microblogging services allow people worldwide to take part in a public conversation. These online tools, however, have inherent limits on how much communication is practically possible. When users become too connected, they receive too much information to be reasonably processed. On Twitter (a microblogging service), lists are a tool with which users can create separate streams of information. Using models of ‘sustainable online discourse’ and ‘information overload’, this research shows that lists are an effective tool for coping with the abundance of information on microblogging services. Quantitatively, users who use lists follow more users than those who do not. When they start using lists, they follow users at a higher rate than those who do not use lists. Qualitatively, the research also offers insight into the reasons for using lists. The research underscores the academic relevance of ‘information overload’ and ‘online sustainability’ models not previously used in the context of feed-based online tools, and deepens the understanding and importance of user-curated filtering as a way to draw benefits from the growing abundance of information in the digital age.
APA, Harvard, Vancouver, ISO, and other styles
39

Wei, Chen. "Multi-collaborative filtering trust network for online recommendation systems." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2550571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Reimer, James Allen. "On the recovery of images from partial information using [delta]²G filtering." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/29169.

Full text
Abstract:
This thesis considers the recovery of a sampled image from partial information, based on the 'edges' or zero crossings found in ∇²G filtered versions of the image. A scheme is presented for separating an image into a family of multiresolution images, using low pass filtering, subsampling, and ∇²G filtering. A scheme is also presented for merging this family of ∇²G filtered images to rebuild the original. The recovery of each of the ∇²G filtered images from their 'edges' or zero crossings is then considered. It has been suggested that ∇²G filtered images might be characterized by their zero crossing locations. It is shown that ∇²G filtered images, filtered in 1-D or 2-D are not, in general, uniquely given within a scalar by their zero crossing locations. Two theorems in support of such a suggestion are considered. The differences between the constraints of Logan's theorem and ∇²G filtering are considered, and it is shown that the zero crossings which result from these two situations differ significantly in number and location. Logan's theorem is therefore not applicable to ∇²G filtered images. A recent theorem by Curtis on the adequacy of zero crossings of 2-D functions is also considered. It is shown that the requirements of Curtis' theorem are not satisfied by all ∇²G filtered images. Further, it is shown that it is very difficult to establish if an image meets the requirements of Curtis' theorem. Examples of different ∇²G filtered images with the same zero crossings are also presented. While not all ∇²G filtered images are uniquely characterized by their zero crossing locations, the practical recovery of real camera images from this partial information is considered. An iterative scheme is developed for the reconstruction of a ∇²G filtered image from its sampled zero crossings. The zero crossing samples are localized to the original image sample grid. Experimental results are presented which show that the recovered images, while retaining many of the features of the original, suffer significant loss. It is shown that, in general, the full recovery of these images in a practical situation is not possible from this partial information. From this experimental experience, it is proposed that ∇²G filtered images might be practically recovered from their zero crossings, with some additional characterization of the image in the vicinity of each zero crossing point. A simple, non-iterative scheme is developed for extracting a characterization of the ∇²G filtered image, through the use of an image edge model and a local estimation of a contrast figure in the vicinity of each zero crossing sample. A redrawing algorithm is then used to recover an approximation of the ∇²G filtered image from its zero crossing locations and the extracted characterizations. This system is evaluated using natural scene and synthetic images. Resulting image quality is good, but is shown to vary depending on the nature of the image. The advantages and disadvantages of this technique are discussed. The primary shortcoming of the implemented local estimation technique is an assumption of edge independence. A second approach is developed for characterizing the ∇²G filtered image zero crossings, which eliminates this assumption. This method is based on 2-D filtering, and provides a new technique for the recovery of a ∇²G filtered image from its sampled zero crossings. The method does not involve iteration or the solution of simultaneous equations. 
Good image reconstruction is shown for natural scene images, with the ∇²G filtered image zero crossings localized only to the original image sample grid. The advantages and disadvantages of this technique are discussed. The application of this recovery from partial information technique is then considered for image compression. A simple coding scheme is developed for representing the zero crossing segments with linear vector segments. A comparative study is then considered, examining the tradeoffs between compression tuning parameters and the resulting recovered image quality.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
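As a concrete illustration of the ∇²G operation discussed in the abstract, the short sketch below (not taken from the thesis) filters an image with the Laplacian of Gaussian via scipy and marks the zero crossings that serve as the partial information for reconstruction; the sigma value and the synthetic step-edge image are arbitrary choices.

# A small sketch of ∇²G filtering and zero-crossing detection.

import numpy as np
from scipy import ndimage

def log_zero_crossings(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Return a boolean mask of zero crossings of the ∇²G-filtered image."""
    filtered = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    sign = filtered > 0
    # A pixel is a zero crossing if its sign differs from a right or lower neighbour.
    zc = np.zeros_like(sign)
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]
    return zc

# Example: a synthetic step edge produces a line of zero crossings at the edge.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
print(log_zero_crossings(img).sum())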
APA, Harvard, Vancouver, ISO, and other styles
41

Bergqvist, Martin, and Jim Glansk. "Fördelar med att applicera Collaborative Filtering på Steam : En utforskande studie." Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-14129.

Full text
Abstract:
Recommender systems are used everywhere. On popular platforms such as Netflix and Amazon, you are always given recommendations for the next suitable film or purchase, based on your personal profile. This is done by cross-referencing users and products to find probable patterns. The aim of the study has been to compare the two prevalent approaches to generating recommendations on a different kind of dataset, where "best practice" is not necessarily applicable. Consequently, the effectiveness of Content-based Filtering versus Collaborative Filtering was compared on the Steam gaming platform, in order to establish the potential for a better solution. This was approached by collecting data from Steam; building a Content-based Filtering engine as a baseline to represent Steam's current recommender system, and a corresponding Collaborative Filtering engine, based on a standard implementation, to compare against. During the course of the study it turned out that Content-based Filtering performance initially grew linearly as the player base of a given game increased. Collaborative Filtering, by contrast, had an exponential performance curve for games with few players, and then plateaued at a level that exceeded the comparison method. The practical significance of these results should justify a more widespread implementation of Collaborative Filtering even where it is normally foregone in favour of Content-based Filtering because the latter is simpler to implement and gives acceptable results. Since our results show such a large difference even with basic models, this is an attitude that may well change. Collaborative Filtering has been used sparingly on more multifaceted datasets, but our results show the potential to outperform Content-based Filtering with relatively little effort on such datasets as well. This can benefit all platforms that combine purchases and community, since the use of purchases can be monitored in real time, which allows adjustment of the factors that may turn out to cause misrepresentation.
The use of recommender systems is everywhere. On popular platforms such as Netflix and Amazon, you are always given new recommendations on what to consume next, based on your specific profile. This is done by cross-referencing users and products to find probable patterns. The aim of this study was to compare the two main ways of generating recommendations on an unorthodox dataset where "best practice" might not apply. Accordingly, recommendation efficiency was compared between Content-based Filtering and Collaborative Filtering on the Steam gaming platform, in order to establish whether there was potential for a better solution. We approached this by gathering data from Steam, building a baseline Content-based Filtering recommendation engine representative of what Steam currently uses, and building a competing Collaborative Filtering engine based on a standard implementation. In the course of this study, we found that while Content-based Filtering performance initially grew linearly as the player base of a game increased, Collaborative Filtering's performance grew exponentially from a small player base and then plateaued at a level exceeding the comparison method. The practical consequence of these findings is a justification for applying Collaborative Filtering even on smaller, more complex datasets than is normally done, where Content-based Filtering is usually preferred because it is easier to implement and yields decent results. With our findings showing such a large discrepancy even with basic models, this attitude might well change. Collaborative Filtering has been used only sparingly on more multifaceted datasets, but our results show that the potential to exceed Content-based Filtering is rather easily obtainable on such datasets as well. This potentially benefits all platforms that combine purchases and community, as purchase usage can be monitored online, allowing misrepresentative factors to be adjusted as they appear.
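For readers unfamiliar with the "standard implementation" of Collaborative Filtering referred to above, the sketch below shows one minimal, commonly used form of it: item-item cosine similarity over a binary user-by-game ownership matrix, with no content features at all. The data and function names are invented; the thesis's own engine may differ in its details.

# A minimal collaborative-filtering sketch under assumed toy data.

import numpy as np

def item_cosine_similarity(ownership: np.ndarray) -> np.ndarray:
    """ownership: binary matrix of shape (n_users, n_games)."""
    norms = np.linalg.norm(ownership, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    normalized = ownership / norms
    return normalized.T @ normalized          # (n_games, n_games)

def recommend(user_row: np.ndarray, sim: np.ndarray, k: int = 5) -> np.ndarray:
    """Score unowned games by summed similarity to the games the user owns."""
    scores = sim @ user_row
    scores[user_row > 0] = -np.inf            # do not re-recommend owned games
    return np.argsort(scores)[::-1][:k]

# Toy data: 4 users, 5 games (1 = owns/plays the game).
ownership = np.array([[1, 1, 0, 0, 1],
                      [1, 1, 1, 0, 0],
                      [0, 1, 1, 1, 0],
                      [1, 0, 0, 1, 1]], dtype=float)
sim = item_cosine_similarity(ownership)
print(recommend(ownership[0], sim, k=2))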
APA, Harvard, Vancouver, ISO, and other styles
42

Chotikakamthorn, Nopporn. "A pre-filtering maximum likelihood approach to multiple source direction estimation." Thesis, Imperial College London, 1996. http://hdl.handle.net/10044/1/8634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Fournier, Kevin L. "Revisiting organizations as information processors: organizational structure as a predictor of noise filtering." Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://handle.dtic.mil/100.2/ADA483732.

Full text
Abstract:
Thesis (M.S. in Systems Technology (Command, Control and Communications (C3)))--Naval Postgraduate School, June 2008.
Thesis Advisor(s): Pfeiffer, Karl. "June 2008." Description based on title screen as viewed on August 25, 2008. Includes bibliographical references (p. 49-52). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
44

Andersson, Morgan. "Personal news video recommendations based on implicit feedback : An evaluation of different recommender systems with sparse data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234137.

Full text
Abstract:
The amount of video content online will nearly triple by 2021 compared to 2016. The implementation of sophisticated filters is of paramount importance to manage this information flow. The research question of this thesis asks to what extent it is possible to generate personal recommendations based on the data that news videos imply. The objective is to evaluate how different recommender systems compare to completely random recommendations and to each other, and how they are received by users in a test environment. This study was performed during the spring of 2018 and explores four different algorithms. These recommender systems include a content-based model, a collaborative-filtering model, a hybrid model and a popularity model as a baseline. The dataset originates from a news media startup called Newstag, which provides video news on a global scale. The data is sparse and includes implicit feedback only. Three offline experiments and a user test were performed. The metric that guided the algorithms' offline performance was recall at 5 and 10, since the top of the recommendation list is of most interest. A comparison was done on different amounts of metadata included during training. Another test explored each algorithm's performance as the density of the data increased. In the user test, a mean opinion score was calculated based on the quality of recommendations that each of the algorithms generated for the test subjects. The user test also included randomly sampled news videos as a baseline for comparison. The results indicate that for this specific setting and dataset, the content-based recommender system performed best both in recall at five and ten and in the user test. All of the algorithms outperformed the random baseline.
The amount of video available on the internet is expected to triple by 2021 compared to 2016. This implies a need for sophisticated filters to manage this flow of information. This thesis aims to answer to what degree it is possible to generate personal recommendations based on the data that news video provides. The purpose is to evaluate and compare different recommender systems and how they fare in a user test. The study was carried out during the spring of 2018 and evaluates four different algorithms. These recommender systems include content-based, collaborative-filtering, and hybrid techniques, with a popularity model used as a baseline. The dataset used is sparse and contains only implicit attributes. Three experiments and a user test were performed. The measure of the algorithms' performance was recall at 5 and recall at 10, i.e. how well the algorithms manage to generate valuable recommendations in a top-5 and top-10 list of video clips, respectively. This is because it is of interest to have the most relevant videos at the top of the result list. A comparison was made between different amounts of metadata included during training. Another test explored how the algorithms perform as the dataset becomes less sparse. In the user test, an evaluation method called mean opinion score was used; it was calculated per algorithm by having test users rate each recommendation based on how interesting the video was to them. The user test also included randomly generated videos for comparison as a baseline. The results indicate, for this dataset, that the content-based algorithm performs best both with respect to recall at 5 and 10 and the total score in the user test. All algorithms performed better than random.
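The offline metric described above, recall at k, can be stated in a few lines; the following sketch is illustrative only, with made-up clip identifiers, and is not code from the thesis.

# Recall at k: the fraction of a user's held-out relevant items that
# appear in the top-k recommendations.

from typing import List, Set

def recall_at_k(recommended: List[str], relevant: Set[str], k: int) -> float:
    """Fraction of relevant items that were recommended in the top k."""
    if not relevant:
        return 0.0
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / len(relevant)

recommended = ["clip_3", "clip_7", "clip_1", "clip_9", "clip_4"]
relevant = {"clip_1", "clip_4", "clip_8"}
print(recall_at_k(recommended, relevant, 5))   # 2 of 3 relevant clips found -> 0.67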
APA, Harvard, Vancouver, ISO, and other styles
45

Cenek, Martin. "Information Processing in Two-Dimensional Cellular Automata." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/275.

Full text
Abstract:
Cellular automata (CA) have been widely used as idealized models of spatially-extended dynamical systems and as models of massively parallel distributed computation devices. Despite their wide range of applications and the fact that CA are capable of universal computation (under particular constraints), the full potential of these models is unrealized to date. This is for two reasons: (1) the absence of a programming paradigm to control these models to solve a given problem and (2) the lack of understanding of how these models compute a given task. This work addresses the notion of computation in two-dimensional cellular automata. Solutions using a decentralized parallel model of computation require information processing on a global level. CA have been used to solve the so-called density (or majority) classification task that requires a system-wide coordination of cells. To better understand and challenge the ability of CA to solve problems, I define, solve, and analyze novel tasks that require solutions with global information processing mechanisms. The ability of CA to perform parallel, collective computation is attributed to the complex pattern-forming system behavior. I further develop the computational mechanics framework to study the mechanism of collective computation in two-dimensional cellular automata. I define several approaches to automatically identify the spatiotemporal structures with information content. Finally, I demonstrate why an accurate model of information processing in two-dimensional cellular automata cannot be constructed from the space-time behavior of these structures.
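As a brief illustration of the kind of two-dimensional CA update the density classification task involves, the sketch below (not from the dissertation) applies a naive local majority rule on a periodic lattice; such purely local rules typically freeze into blocks rather than reaching consensus, which is precisely why the task demands the system-wide coordination discussed above.

# One synchronous update of a binary 2-D CA with the local majority rule.

import numpy as np

def majority_step(grid: np.ndarray) -> np.ndarray:
    """Each cell adopts the majority state of its 3x3 Moore neighbourhood."""
    total = np.zeros_like(grid)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            total = total + np.roll(np.roll(grid, dr, axis=0), dc, axis=1)
    return (total > 4).astype(grid.dtype)      # 9-cell neighbourhood, majority = 5+

rng = np.random.default_rng(1)
grid = (rng.random((32, 32)) < 0.55).astype(int)   # initial density 0.55
for _ in range(20):
    grid = majority_step(grid)
print("final density:", grid.mean())   # usually freezes short of all 1s, illustrating the need for global coordination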
APA, Harvard, Vancouver, ISO, and other styles
46

Ho, Yi Fong. "GA-based collaborative filtering for online recommendation." Thesis, University of Macau, 2007. http://umaclib3.umac.mo/record=b1684526.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Backlund, Alexander. "Switching hybrid recommender system to aid the knowledge seekers." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-414623.

Full text
Abstract:
In our daily life, time is of the essence. People do not have time to browse through hundreds of thousands of digital items every day to find the right item for them. This is where a recommendation system shines. Tigerhall is a company that distributes podcasts, ebooks and events to subscribers. They are expanding their digital content warehouse, which leads to more data for the users to filter. To make it easier for users to find the right podcast or the most exciting e-book or event, a recommendation system has been implemented. A recommender system can be implemented in many different ways. Content-based filtering methods focus on information about the items and try to find relevant items based on that. Another alternative is to use collaborative filtering methods, which use information about what the consumer has previously consumed in correlation with what other users have consumed to find relevant items. In this project, a hybrid recommender system that uses a k-nearest neighbors algorithm alongside a matrix factorization algorithm has been implemented. The k-nearest neighbors algorithm performed well despite the sparse data, while the matrix factorization algorithm performed worse overall; the matrix factorization algorithm performed well when the user had consumed plenty of items.
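A compact sketch of the switching idea described above follows; the threshold, the tiny SGD matrix factorization and the co-consumption neighbourhood scores are invented stand-ins rather than the thesis's implementation, but they show how a hybrid can route sparse users to one recommender and data-rich users to another.

# Switching hybrid: neighbourhood scores for sparse users, matrix
# factorization once enough interactions exist.

import numpy as np

def mf_scores(interactions, user, factors=8, epochs=50, lr=0.05, reg=0.01, seed=0):
    """Tiny SGD matrix factorization on a binary interaction matrix."""
    rng = np.random.default_rng(seed)
    n_users, n_items = interactions.shape
    P = rng.normal(0, 0.1, (n_users, factors))
    Q = rng.normal(0, 0.1, (n_items, factors))
    for _ in range(epochs):
        for u, i in zip(*np.nonzero(interactions)):
            pu = P[u].copy()
            err = 1.0 - pu @ Q[i]                 # implicit "rating" of 1
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return Q @ P[user]

def knn_scores(interactions, user):
    """Neighbourhood scores: items consumed by users similar to this user."""
    sims = interactions @ interactions[user]      # co-consumption counts
    sims[user] = 0
    return interactions.T @ sims

def switching_recommend(interactions, user, threshold=3, k=2):
    """Route the user to kNN or MF depending on how much they have consumed."""
    n_consumed = int(interactions[user].sum())
    scores = (knn_scores(interactions, user) if n_consumed < threshold
              else mf_scores(interactions, user)).astype(float)
    scores[interactions[user] > 0] = -np.inf      # hide already-consumed items
    return np.argsort(scores)[::-1][:k]

interactions = np.array([[1, 1, 0, 0, 1, 1],      # heavy user -> MF branch
                         [1, 0, 0, 0, 0, 0],      # sparse user -> kNN branch
                         [0, 1, 1, 0, 1, 0],
                         [1, 0, 1, 1, 0, 0]], dtype=float)
print(switching_recommend(interactions, user=0))
print(switching_recommend(interactions, user=1))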
APA, Harvard, Vancouver, ISO, and other styles
48

Olsson, Jakob, and Viktor Yberg. "Log data filtering in embedded sensor devices." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175367.

Full text
Abstract:
Data filtering is the disposal of unnecessary data in a data set, to save resources such as server capacity and bandwidth. The method is used to reduce the amount of stored data and thereby prevent valuable resources from processing insignificant information. The purpose of this thesis is to find algorithms for data filtering and to find out which algorithm gives the best effect in embedded devices with resource limitations. This means that the algorithm needs to be resource efficient in terms of memory usage and performance, while saving enough data points to avoid modification or loss of information. After an algorithm has been found it will also be implemented to fit the Exqbe system. The study has been done by researching previously done studies in line simplification algorithms and their applications. A comparison between several well-known and studied algorithms has been done to find which suits this thesis problem best. The comparison between the different line simplification algorithms resulted in an implementation of an extended version of the Ramer-Douglas-Peucker algorithm. The algorithm has been optimized and a new filter has been implemented in addition to the algorithm.
Data filtering is the removal of unnecessary data from a dataset in order to save resources such as server capacity and bandwidth. The method is used to reduce the amount of stored data and thereby prevent valuable resources from being spent processing insignificant information. The purpose of this thesis is to find algorithms for data filtering and to investigate which algorithm gives the best results in embedded systems with resource constraints. This means that the algorithm should be resource efficient in terms of memory usage and performance, while keeping enough data points to avoid modifying or losing information. Once an algorithm has been found, it will also be implemented to fit the Exqbe system. The study was conducted by reviewing previous studies of data filtering algorithms and their applications. Comparisons between several well-known algorithms were carried out to find which one suits this thesis best. The comparison between the different filtering algorithms resulted in an implementation of an extended version of the Ramer-Douglas-Peucker algorithm. The algorithm has been optimized and a new filter has been implemented in addition to the algorithm.
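The Ramer-Douglas-Peucker algorithm mentioned above can be sketched as follows; this is the textbook recursive form, not the extended variant implemented in the thesis, and the example trace and epsilon value are made up.

# Ramer-Douglas-Peucker line simplification: keep the point farthest from the
# first-last chord if it deviates more than epsilon, and recurse on both halves.

import math

def rdp(points, epsilon):
    """Simplify a polyline, keeping points that deviate more than epsilon."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy) or 1.0
    # Perpendicular distance from each interior point to the first-last chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / length for x, y in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > epsilon:
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right            # avoid duplicating the split point
    return [points[0], points[-1]]

# Example: a noisy log trace collapses to its salient corner points.
trace = [(0, 0.0), (1, 0.1), (2, -0.1), (3, 5.0), (4, 5.1), (5, 5.0)]
print(rdp(trace, epsilon=0.5))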
APA, Harvard, Vancouver, ISO, and other styles
49

Ercan, Eda. "Probabilistic Matrix Factorization Based Collaborative Filtering With Implicit Trust Derived From Review Ratings Information." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612529/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Kuropka, Dominik. "Modelle zur Repräsentation natürlichsprachlicher Dokumente : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken /." Berlin : Logos-Verl, 2004. http://www.gbv.de/dms/ilmenau/toc/385143648kurop.PDF.

Full text
APA, Harvard, Vancouver, ISO, and other styles