Dissertations / Theses on the topic 'Search engine'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Search engine.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Blaauw, Pieter. "Search engine poisoning and its prevalence in modern search engines." Thesis, Rhodes University, 2013. http://hdl.handle.net/10962/d1002037.

Full text
Abstract:
The prevalence of Search Engine Poisoning in trending topics and popular search terms on the web is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research done between February and August 2012, using both manual and automated techniques, shows how easily the criminal element manages to insert malicious content into web pages related to popular search terms within search engines. In order to provide the reader with a clear overview and understanding of the motives and methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is done, along with an examination of the motives for running these campaigns. Three high-profile case studies are examined, and the various Search Engine Poisoning campaigns associated with them are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google’s search engine along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results had been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high-profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack and the parties behind it.
APA, Harvard, Vancouver, ISO, and other styles
2

Fahlström, Kamilla, and Caroline Jensen. "Search Engine Marketing in SMEs : The motivations behind using search engine marketing." Thesis, Högskolan i Gävle, Avdelningen för ekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-21116.

Full text
Abstract:
Title: Search Engine Marketing in SMEs
Level: Final assignment for Bachelor Degree in Business Administration
Authors: Kamilla Fahlström & Caroline Jensen
Supervisor: Jens Eklinder Frick
Date: January 2016
Purpose: The purpose of this study is to use Expectancy theory to describe and analyze small company owners’ motivations for their usage of Search Engine Marketing, in terms of their perceived Valence, Expectancy and Instrumentality.
Method: To address the aim of this study a qualitative research approach was used. The empirical data was compiled through ten semi-structured interviews with a varied selection of Swedish companies in the service sector. The data was analyzed against previous research to create an understanding of the motivations for using Search Engine Marketing.
Conclusions: The result of this study, when analyzed alongside Expectancy theory, indicates that small business owners are motivated to use Search Engine Marketing. Furthermore, which Search Engine Marketing method the owners are motivated to use depends on their perceptions of the different methods.
Future research: Due to the lack of research into the attitudinal and psychological aspects of Search Engine Marketing and the limitations of this study, more research in this area would be valuable. For example, it would be interesting to study whether trust-based companies are motivated to use Search Engine Marketing, and whether demographics affect the motivations.
Contribution: This study contributes results on a previously unexplored area within the research field of Search Engine Marketing. The study also contributes practical information regarding small service company owners’ thoughts about their usage of Search Engine Marketing.
Key words: Search Engine Marketing, SMEs, Expectancy theory, Motivation, Website visibility
APA, Harvard, Vancouver, ISO, and other styles
3

Hurlock, Jonathan. "Twitter search : building a useful search engine." Thesis, Swansea University, 2015. https://cronfa.swan.ac.uk/Record/cronfa43037.

Full text
Abstract:
Millions of digital communications are posted over social media every day. Whilst some state that a large proportion of these posts are considered to be babble, we know that some of them actually contain useful information. In this thesis we specifically look at how we can identify what makes some of these communications useful or not useful to someone searching for information over social media. In particular we look at what makes messages (tweets) from the social network Twitter useful or not useful to users performing search over a corpus of tweets. We identify 16 features that help a tweet be deemed useful, and 17 features as to why a tweet may be deemed not useful to someone performing a search task. From these findings we describe a distributed architecture we have built to process large datasets and allow us to perform search over a corpus of tweets. Utilizing this architecture we are able to index tweets based on our findings, and we describe a crowdsourcing study we ran to help optimize weightings for these features via learning to rank, which quantifies how important each feature is in understanding what makes tweets useful or not for common search tasks performed over Twitter. We release a corpus of tweets for the purpose of evaluating other usefulness systems.
APA, Harvard, Vancouver, ISO, and other styles
4

Narayan, Nitesh. "Advanced Intranet Search Engine." Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-9408.

Full text
Abstract:

Information retrieval has been a pervasive part of human society since its existence. With the advent of the internet and the World Wide Web it became an extensive area of research and a major focus, which led to the development of various search engines to locate desired information, mostly for globally connected computer networks, i.e. the internet. But there is another major part of computer networking, the intranet, which has not seen much advancement in information retrieval approaches, in spite of being a major source of information within a large number of organizations. The most common technique for intranet-based search engines is still merely database-centric. Thus, in practice, intranets are unable to avail themselves of the benefits of the sophisticated techniques developed for internet-based search engines without exposing their data to commercial search engines. In this Master's thesis we propose a "state of the art architecture" for an advanced intranet search engine which is capable of dealing with the continuously growing size of an intranet's knowledge base. This search engine employs lexical processing of documents, where documents are indexed and searched based on standalone terms or keywords, along with semantic processing of the documents, where the context of the words and the relationships among them are given more importance. Combining lexical and semantic processing of the documents gives an effective approach to handling navigational queries along with research queries, in contrast to modern search engines which use either lexical processing or semantic processing (or one as the major component) of the documents. We give equal importance to both approaches in our design, taking the best of both worlds. This work also takes into account various widely acclaimed concepts like inference rules, ontologies and active feedback from the user community to continuously enhance and improve the quality of search results, along with the possibility to infer and deduce new knowledge from existing knowledge, while preparing for the advent of the semantic web.
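The equal weighting of lexical and semantic evidence described in this abstract can be pictured with a small Python sketch. The keyword-overlap score, the cosine similarity over concept vectors and the alpha = 0.5 weighting below are illustrative assumptions, not the architecture proposed in the thesis:

    import math
    from collections import Counter

    def lexical_score(query_terms, doc_terms):
        # Keyword overlap: fraction of query terms that occur in the document.
        doc = set(doc_terms)
        return sum(1 for t in query_terms if t in doc) / max(len(query_terms), 1)

    def semantic_score(query_concepts, doc_concepts):
        # Cosine similarity between concept-frequency vectors (e.g. ontology classes).
        q, d = Counter(query_concepts), Counter(doc_concepts)
        dot = sum(q[c] * d[c] for c in q)
        nq = math.sqrt(sum(v * v for v in q.values()))
        nd = math.sqrt(sum(v * v for v in d.values()))
        return dot / (nq * nd) if nq and nd else 0.0

    def combined_score(q_terms, q_concepts, d_terms, d_concepts, alpha=0.5):
        # Equal weight to lexical and semantic evidence.
        return alpha * lexical_score(q_terms, d_terms) + (1 - alpha) * semantic_score(q_concepts, d_concepts)

    # Hypothetical document matched both on keywords and on ontology concepts.
    print(combined_score(["holiday", "policy"], ["Leave", "HRPolicy"],
                         ["company", "holiday", "policy", "2009"], ["Leave", "HRPolicy", "Document"]))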

APA, Harvard, Vancouver, ISO, and other styles
5

King, John D. "Search engine content analysis." Queensland University of Technology, 2008. http://eprints.qut.edu.au/26241/.

Full text
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, in order to solve the above problem. Search engine content analysis is a new development of the traditional information retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collection or estimations of the size of the collections. Also, collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for the search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general purpose Internet search engines in use today. Instead of representing collections as a set of terms, which commonly occurs in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and a decrease of synonymy. The ontology based method was compared with ReDDE (Relevant Document Distribution Estimation method for resource selection) using the standard R-value metric, with encouraging results. ReDDE is the current state of the art collection selection method which relies on collection size estimation. The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo. In addition, several specialist search engines such as PubMed and the U.S. Department of Agriculture were analysed. In conclusion, this research shows that the ontology based method mitigates the need for collection size estimation.
APA, Harvard, Vancouver, ISO, and other styles
6

Edlund, Joakim. "Cognitive Search Engine Optimization." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281882.

Full text
Abstract:
The use of search engines is a common way to navigate through information today. The field of information retrieval is concerned with finding documents in large unstructured collections. Within this field there are widely researched baseline solutions to this problem. There are also more advanced techniques (often based on machine learning) to improve the relevance of results further. However, picking the right algorithm or technique when implementing a search engine is no trivial task, and deciding which performs better can be hard. This project takes a commonly used baseline search engine implementation (elasticsearch) and measures its relevance score using standard measurements within the field of information retrieval (precision, recall, f-measure). After establishing a baseline configuration, a query expansion algorithm (based on Word2Vec) is implemented in parallel with a recommendation algorithm (collaborative filtering) to compare against each other and the baseline configuration. Finally a combined model using both the query expansion algorithm and collaborative filtering is used to see if they can utilize each other’s strengths to make an even better setup. Findings show that both Word2Vec and collaborative filtering improve relevance on all three measurements (precision, recall, f-measure). These findings could also be confirmed as significant through statistical analysis. Collaborative filtering seems to perform better than Word2Vec for the topmost results, while Word2Vec improves more as the result set grows. The combined model showed a significant improvement on all measurements for result sets of sizes 3 and 5, but larger result sets show less of an improvement or even worse performance.
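The query-expansion step evaluated in the thesis can be sketched in a few lines of Python. The gensim library, the toy corpus and the two-neighbour expansion cut-off below are assumptions for illustration; the thesis's actual training data and parameters are not reproduced here:

    from gensim.models import Word2Vec

    # Toy corpus standing in for the real training data.
    corpus = [
        ["cheap", "flights", "to", "stockholm"],
        ["budget", "airline", "tickets", "stockholm"],
        ["hotel", "booking", "stockholm", "weekend"],
        ["affordable", "flights", "gothenburg"],
    ]
    model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50)

    def expand_query(terms, topn=2):
        # Append the nearest neighbours of each query term before sending the
        # query to the search engine (elasticsearch in the thesis).
        expanded = list(terms)
        for term in terms:
            if term in model.wv:
                expanded += [w for w, _ in model.wv.most_similar(term, topn=topn)]
        return expanded

    print(expand_query(["cheap", "flights"]))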
APA, Harvard, Vancouver, ISO, and other styles
7

King, John Douglas. "Search engine content analysis." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/26241/1/John_King_Thesis.pdf.

Full text
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, in order to solve the above problem. Search engine content analysis is a new development of the traditional information retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collection or estimations of the size of the collections. Also, collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for the search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general purpose Internet search engines in use today. Instead of representing collections as a set of terms, which commonly occurs in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and a decrease of synonymy. The ontology based method was compared with ReDDE (Relevant Document Distribution Estimation method for resource selection) using the standard R-value metric, with encouraging results. ReDDE is the current state of the art collection selection method which relies on collection size estimation. The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo. In addition, several specialist search engines such as PubMed and the U.S. Department of Agriculture were analysed. In conclusion, this research shows that the ontology based method mitigates the need for collection size estimation.
APA, Harvard, Vancouver, ISO, and other styles
8

Khan, Saiful. "Visualization assisted enterprise search engine." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:d1790b99-c30e-487b-b87e-98d4e3a8b2bb.

Full text
Abstract:
In most organizations, the number of files increases at a rate similar to the growth of data. As one of the big data challenges, many enterprises encounter a common difficulty in a routine operation, that is, finding files in a large-scale file system typically distributed across several physical sites and accessed by thousands of users. This thesis addresses a central question: whether or not visualization techniques can be used to improve the effectiveness and efficiency of performing numerous file searching operations at an industrial scale. All work conducted in this research was done in partnership with Laing O'Rourke as an industrial collaborator. The main technical approaches to support file searching operations include (a) the use of a database to manage searchable records of files and (b) the use of a search engine to aid the exploration of a less-structured file repository. With the rapid increase of files, the former approach incurs a huge cost on entering records of files into the database, while the latter suffers from unreliable search results (false positives and false negatives) and difficulties in collaborative search. This thesis focuses on the second approach, that is, to develop a visualization-assisted enterprise search engine. In this thesis, we propose two novel visualization techniques in conjunction with an experimental enterprise search engine. The first technique provides users with focus+context visualization of search results (focus) in relation to the search space (context). This assists users in identifying false positives rapidly, and helps users hypothesize potential false negatives and investigate them through the refinement of search criteria. A number of methods for depicting the multivariate information associated with search results were designed, implemented and compared. Empirical studies were conducted to discover the visual attributes for glyph-based and animation-based methods, and to evaluate different visual designs. The second technique provides users with support for search activities over a period of time and in collaboration. We developed the novel concept of the Search Provenance Graph (SPG), and a method for connecting semantically similar queries in SPGs. Methods and software for visualizing SPGs were designed and implemented, enabling users in collaboration to acquire provenance information efficiently and formulate/reformulate queries effectively. In conjunction with the research on visualization techniques, we developed an experimental enterprise search engine which allows visualization components to be integrated. The search engine is knowledge-based, and is supported by multiple ontologies and crawler agents for exploring the search space. We used query expansion and results ranking to reduce false positives and negatives, active learning to enable dynamic learning during search operations, and history-based indexing to facilitate real-time return of search results. This research is the first step towards the development of visualization-assisted enterprise search engines as a new technology that can address a major big data challenge in industry, and can bring a significant amount of cost-effectiveness to everyday operations.
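The Search Provenance Graph idea from this abstract can be sketched as a graph whose nodes are queries and whose edges link semantically similar ones. The Jaccard term-overlap measure, the 0.3 threshold and the example queries below are assumptions for illustration; the thesis's own similarity method is not reproduced here:

    import networkx as nx

    def jaccard(a, b):
        # Term-overlap similarity between two queries (an assumed stand-in).
        a, b = set(a.lower().split()), set(b.lower().split())
        return len(a & b) / len(a | b) if a | b else 0.0

    queries = [
        "bridge inspection report 2014",
        "bridge inspection checklist",
        "steel girder fatigue report",
        "site safety induction form",
    ]

    spg = nx.Graph()                      # nodes: queries issued during a session
    spg.add_nodes_from(queries)
    for i, q1 in enumerate(queries):
        for q2 in queries[i + 1:]:
            sim = jaccard(q1, q2)
            if sim >= 0.3:                # connect semantically similar queries
                spg.add_edge(q1, q2, weight=sim)

    print(spg.edges(data=True))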
APA, Harvard, Vancouver, ISO, and other styles
9

Slavík, Michal. "Search Engine Marketing neziskových organizací." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-124602.

Full text
Abstract:
The goal of this thesis is to design a methodics for Search Engine Marketing (SEM) in nonprofit organizations (NPOs) that takes advantage of their specifics. Other goals include a practical evaluation of the methodics and an analysis of the current state of NPOs' websites. The goals are reached by merging the theoretical background from relevant literature with the knowledge gained during field research and with the author's experience. The designed methodics is built on the following hypotheses: NPOs are able to negotiate better trade terms than trading companies, and NPOs can delegate some SEM activities to their volunteers. Field research confirmed both hypotheses. The hypothesis that NPOs' websites are static because NPOs see no profit in regular publishing was disproved. The methodics consists of four phases and also includes recommended tools, metrics, topics for publishing and a list of linkbaiting activities. The thesis consists of five chapters. The first chapter summarizes the necessary theoretical background, while the second chapter defines terms and premises. The main methodics can be found in chapter three. The fourth chapter contains a current-state analysis based on an examination of 31 websites. A comparison of the methodics' hypotheses and activities against the experience of 21 NPO representatives and 3 experts in the field of SEO is given in the last chapter. The opinions of the two groups of respondents are compared as well. Based on the respondents' judgments of the costs and utility of the methodics' activities, a ranking of these activities is finally created. The main contribution of this thesis is a conversion of universal SEM theory into the specific conditions and language of NPO practitioners, and an analysis of the current state in this field.
APA, Harvard, Vancouver, ISO, and other styles
10

Fister, Justin M. "Correlation Analysis of On-Page Attributes and Search Engine Rankings." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1178730597.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Henriksson, Adam. "Alternative Search : From efficiency to experience." Thesis, Umeå universitet, Institutionen Designhögskolan, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-97836.

Full text
Abstract:
Search engines of today focus on efficiently and accurately generating search results. Yet there is much to be explored in the way people interact with the applications and relate to the content. Individuals are unique, with complex preferences, motives and expectations. Not only is it important to be sensitive to these differences, but also to accommodate the extremes. Enhancing a search engine does not rely only on technological development, but on exploring potential user experiences in broader perspectives - which not only gratify the need for information, but support a diversity of journeys. The aim of the project is to develop an alternative search engine with different functionality, based on new values that reflect contemporary needs. The result, Exposeek, is an experiential prototype supporting exploratory browsing based on principles of distributed infrastructure, transparent computation and serendipitous information. Suggestive queries, legible algorithms and augmented results provide additional insights and present an alternative way to seek and peruse the Web.
Keywords: Search Engines, Interaction Design
APA, Harvard, Vancouver, ISO, and other styles
12

Aghajani, Nooshin. "Semoogle - An Ontology Based Search Engine." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-19086.

Full text
Abstract:
In this thesis, we present a search engine prototype to show how a semantic search application based on ontology techniques helps save time for the user and improves the quality of relevant search results compared to a traditional search engine. The system is built as a query improvement module, which uses an ontology and sorts the search results into four predefined categories. The first and most important part of the implementation of the search engine prototype is to apply the ontology to define the meaning of, and the relations between, the query terms in the default domain of the study. Next, categorization of the results is carried out in order to improve the presentation of the search results based on a categorization list. The ontology used in this search engine prototype includes a sample of terms in the safety and security domain; it can be modified within this domain or substituted by another ontology in other fields of study. The process continues by searching the enriched query on the Web using the Google search interface. The application uses ranking algorithms to categorize and organize the results of the Google search into four categories, i.e. History, Mechanism, Prevention, and Case study. The predefined categories can be substituted with other categories based on user preferences in other studies.
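A rough sketch of the two stages described above, query enrichment through an ontology followed by sorting results into the four predefined categories, might look as follows. The mini-ontology and the keyword cues are hypothetical examples, not the prototype's actual rules:

    # Hypothetical mini-ontology: each term maps to related terms used for enrichment.
    ONTOLOGY = {
        "fire": ["combustion", "ignition", "flame"],
        "safety": ["protection", "hazard prevention"],
    }

    # Keyword cues for the four predefined categories named in the abstract.
    CATEGORY_CUES = {
        "History": ["history", "origin", "timeline"],
        "Mechanism": ["mechanism", "combustion", "process"],
        "Prevention": ["prevention", "protection", "mitigation"],
        "Case study": ["case study", "incident", "accident"],
    }

    def enrich(query):
        terms = query.lower().split()
        extra = [related for t in terms for related in ONTOLOGY.get(t, [])]
        return " ".join(terms + extra)

    def categorize(result_titles):
        buckets = {category: [] for category in CATEGORY_CUES}
        for title in result_titles:
            for category, cues in CATEGORY_CUES.items():
                if any(cue in title.lower() for cue in cues):
                    buckets[category].append(title)
                    break
        return buckets

    print(enrich("fire safety"))
    print(categorize(["A short history of fire protection",
                      "Combustion mechanisms explained",
                      "Warehouse fire incident report"]))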
APA, Harvard, Vancouver, ISO, and other styles
13

Garcia, Steven. "Search Engine Optimisation Using Past Queries." RMIT University. Computer Science and Information Technology, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080501.093229.

Full text
Abstract:
World Wide Web search engines process millions of queries per day from users all over the world. Efficient query evaluation is achieved through the use of an inverted index, where, for each word in the collection, the index maintains a list of the documents in which the word occurs. Query processing may also require access to document specific statistics, such as document length; access to word statistics, such as the number of unique documents in which a word occurs; and collection specific statistics, such as the number of documents in the collection. The index maintains individual data structures for each of these sources of information, and repeatedly accesses each to process a query. A by-product of a web search engine is a list of all queries entered into the engine: a query log. Analyses of query logs have shown repetition of query terms in the requests made to the search system. In this work we explore techniques that take advantage of the repetition of user queries to improve the accuracy or efficiency of text search. We introduce an index organisation scheme that favours those documents that are most frequently requested by users and show that, in combination with early termination heuristics, query processing time can be dramatically reduced without reducing the accuracy of the search results. We examine the stability of such an ordering and show that an index based on as little as 100,000 training queries can support at least 20 million requests. We show the correlation between frequently accessed documents and relevance, and attempt to exploit the demonstrated relationship to improve search effectiveness. Finally, we deconstruct the search process to show that query time redundancy can be exploited at various levels of the search process. We develop a model that illustrates the improvements that can be achieved in query processing time by caching different components of a search system. This model is then validated by simulation using a document collection and query log. Results on our test data show that a well-designed cache can reduce disk activity by more than 30%, with a cache that is one tenth the size of the collection.
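A minimal Python sketch of the idea of ordering postings by past access frequency and terminating query evaluation early is given below. The documents, access counts and the cut-off of two postings per list are illustrative assumptions, not the thesis's index organisation or heuristics:

    from collections import defaultdict

    docs = {
        1: "web search engine query log analysis",
        2: "inverted index compression for web search",
        3: "query caching in search systems",
    }
    access_count = {1: 120, 2: 45, 3: 300}      # hypothetical past-query popularity

    index = defaultdict(list)
    for doc_id, text in docs.items():
        for term in set(text.split()):
            index[term].append(doc_id)

    # Reorder each postings list so the most frequently accessed documents come first.
    for term in index:
        index[term].sort(key=lambda d: access_count[d], reverse=True)

    def search(query, max_postings=2):
        # Score by term overlap, reading at most max_postings entries per list
        # (a crude stand-in for early-termination heuristics).
        scores = defaultdict(int)
        for term in query.split():
            for doc_id in index.get(term, [])[:max_postings]:
                scores[doc_id] += 1
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(search("web search query"))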
APA, Harvard, Vancouver, ISO, and other styles
14

Chen, Xue. "An Internet multiple-encoding search engine." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0033/MQ65479.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Wang, Edward M. 1976. "Supreme Court audio file search engine." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17997.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (leaves 73-74).
Search engines have evolved from simple text indexing to indexing other forms of media, such as audio and video. I have designed and implemented a web-based system that permits people to search the transcripts of selected Supreme Court cases and retrieve audio file clips relevant to the search terms. The system development compared two implementation approaches: one based on transcript-aligning technologies developed by Hewlett-Packard, the other a servlet-based search system designed to return pre-parsed audio file clips. While the first approach has the potential to revolutionize audio content search, it could not consistently deliver successively parsed audio file clips with the same user-friendly content and speed as the simpler second approach. This web service, implemented with the second approach, is currently deployed and publicly available at www.supremecourtaudio.net.
by Edward M. Wang.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
16

Chung, Jack V. (Jack Vinh) 1978. "Search engine for online physiologic databases." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86654.

Full text
Abstract:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2001.
Includes bibliographical references (leaf 40).
by Jack V. Chung.
M.Eng. and S.B.
APA, Harvard, Vancouver, ISO, and other styles
17

Wong, Brian Wai Fung. "Deep-web search engine ranking algorithms." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61246.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 79-80).
The deep web refers to content that is hidden behind HTML forms. The deep web contains a large collection of data that are unreachable by link-based search engines. A study conducted at the University of California, Berkeley estimated that the deep web consists of around 91,000 terabytes of data, whereas the surface web is only about 167 terabytes. To access this content, one must submit valid input values to the HTML form. Several researchers have studied methods for crawling deep web content. One of the most promising methods uses unique wrappers for HTML forms. User inputs are first filtered through the wrappers before being submitted to the forms. However, this method requires a new algorithm for ranking search results generated by the wrappers. In this paper, I explore methods for ranking search results returned from a wrapper-based deep web search engine.
by Brian Wai Fung Wong.
M.Eng.
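The wrapper idea described in the abstract above can be sketched as a small Python module that maps user input onto a form's fields before submission and then ranks the returned records. The URL, field names and term-overlap ranking are hypothetical placeholders, not the thesis's algorithms:

    import requests

    FORM_URL = "https://example.org/library/search"          # hypothetical deep-web form
    FIELD_MAP = {"title": "q_title", "author": "q_author"}   # wrapper's field mapping

    def wrapped_query(user_input):
        # Translate user input into the form's own field names and submit it.
        payload = {FIELD_MAP[k]: v for k, v in user_input.items() if k in FIELD_MAP}
        return requests.post(FORM_URL, data=payload, timeout=10).text

    def rank(records, query_terms):
        # Rank returned records by how many query terms they contain.
        def score(record):
            return sum(1 for t in query_terms if t.lower() in record.lower())
        return sorted(records, key=score, reverse=True)

    print(rank(["Deep Web Crawling Methods", "Surface Web Statistics"], ["deep", "web"]))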
APA, Harvard, Vancouver, ISO, and other styles
18

Malladi, Rajavardhan. "Recipe search engine using Yummly API." Kansas State University, 2016. http://hdl.handle.net/2097/32661.

Full text
Abstract:
Master of Science
Department of Computing and Information Sciences
Daniel A. Andresen
In this project I have built a web application, "Recipe Search Engine Using Yummly API". The application is a central information hub for the kitchen, connecting consumers with recipe ideas, ingredient lists, and cooking instructions. It serves best the people who use digital tools to plan their cooking, which these days is almost everyone. The features available to users in this application are as follows. Users can search for their favorite dishes. The search results contain information about the ingredient list, the total time needed for cooking, user ratings and cooking directions. Basic search filters are provided to narrow the search results, such as Breakfast, Lunch and Dinner recipes. The displayed results can be sorted according to rating or the total time required to prepare the dish. Users can create an account and build their own favorite recipe collection by liking the recipes displayed. Liked recipes are stored in the user's account, and the user can view, add and delete those recipes at any time from the recipe collection. Users can log into this application with their Facebook credentials or create a new account in the application. The application communicates with the Yummly API to consume data from it. The Yummly API is the largest recipe information aggregator, with data on over one million recipes.
APA, Harvard, Vancouver, ISO, and other styles
19

Watson, Veronica. "Basic system configuration in search engine." Xavier University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=xavier1545566567119888.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Na, Jin-Cheon, Christopher S. G. Khoo, and Syin Chan. "A sentiment-based meta search engine." School of Communication & Information, Nanyang Technological University, 2006. http://hdl.handle.net/10150/106241.

Full text
Abstract:
This study is in the area of sentiment classification: classifying online review documents according to the overall sentiment expressed in them. This paper presents a prototype sentiment-based meta search engine that has been developed to perform sentiment categorization of Web search results. It assists users to quickly focus on recommended or non-recommended information by classifying Web search results into four categories: positive, negative, neutral, and non-review documents. It does this by using an automatic classifier based on a supervised machine learning algorithm, Support Vector Machine (SVM). This paper also discusses various issues we have encountered during the prototype development, and presents our approaches for resolving them. A user evaluation of the prototype was carried out with positive responses from users.
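The classification step can be sketched with a modern library as a stand-in for the prototype's SVM classifier. The toy training snippets below are fabricated for illustration; only the four categories come from the abstract:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Fabricated training snippets, one per category from the abstract.
    train_texts = [
        "great camera, highly recommended",         # positive review
        "terrible battery life, do not buy",        # negative review
        "it works, nothing special to report",      # neutral review
        "official specification sheet and manual",  # non-review document
    ]
    train_labels = ["positive", "negative", "neutral", "non-review"]

    classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
    classifier.fit(train_texts, train_labels)

    # A search-result snippet is assigned to one of the four categories.
    print(classifier.predict(["awful screen, would not recommend this"]))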
APA, Harvard, Vancouver, ISO, and other styles
21

Johansson, Dennis. "Search Engine Optimization and the Long Tail of Web Search." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-296388.

Full text
Abstract:
In the subject of search engine optimization, many methods exist and many aspects are important to keep in mind. This thesis studies the relation between keywords and website ranking in Google Search, and how one can create the biggest positive impact. Keywords with smaller search volume are called "long tail" keywords, and they bear the potential to expand the visibility of the website to a larger crowd by increasing its rank for the large fraction of keywords that might not be as common on their own, but together make up a large share of total web searches. This thesis analyzes where on the web page these keywords should be placed, and a case study is performed in which the goal is to increase the rank of a website with knowledge from previous tests in mind.
APA, Harvard, Vancouver, ISO, and other styles
22

Nilsson, Rebecca, and Christa Alanko. "STREAMLINE THE SEARCH ENGINE MARKETING STRATEGY : Generational Driven Search Behavior on Google." Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-70149.

Full text
Abstract:
The expanded internet usage has resulted in increased activity on web-based search engines. Companies are therefore devoting a large portion of their online marketing budget to Search Engine Marketing (abbreviated SEM) in order to reach potential online consumers searching for products. SEM comprises Search Engine Advertising (SEA) and Search Engine Optimization (SEO), two dissimilar marketing tools companies can invest in to reach the desired customer segments. It is therefore of great interest for companies in different product markets to know which SEM strategy to utilize. This leads to the purpose of the thesis, which is to investigate which SEM strategy, SEA or SEO, is the most suitable for companies in different markets. The purpose is narrowed to the research problem: How does the search behavior of consumers differ between the two SEM tools, SEO and SEA? Initially, in order to answer the research problem, a theoretical framework consisting of theories from previous research was constructed. To collect primary data, observations of 60 test subjects were performed in accordance with the Experimental Vignette Methodology. The analysis consists of a comparison between the collected data and the theories included in the frame of reference, to identify similarities and differences. The SPSS analysis of the results revealed numerous findings, such as two-way interactions between the degree of involvement and the click rate of SEM, as well as between the choice of either a head or a tail keyword and the degree of involvement. The analysis further revealed a three-way interaction which suggests that the degree of involvement and the use of either a head or tail keyword affect the choice of SEM. Additionally, the results show that customers using brands as keywords are more likely to click on an organic link than on a paid ad. However, when adding the factor age to the analysis, the results become insignificant. As the search behavior of customers using search engines is a relatively unexplored scientific area, the thesis contributes knowledge useful for companies and marketing agencies, among others. However, due to the ongoing expansion of search engine usage, it is of great interest to conduct further research in the area to reveal additional findings.
APA, Harvard, Vancouver, ISO, and other styles
23

Ogbonna, Antoine I. "The Psychology of a Web Search Engine." Youngstown State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1328897147.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Høyum, Øystein. "Redistribution of Documents across Search Engine Clusters." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8967.

Full text
Abstract:

The goal of this master thesis has been to evaluate methods for redistribution of data on search engine clusters. For all of the methods the redistribution is done when the cluster changes size. Redistribution methods that are specifically designed for search engines are not common, so the methods compared in this thesis are based on other distributed settings, among others distributed database systems, distributed file systems and continuous media systems. The evaluation of the methods consists of two parts, a theoretical analysis and an implementation and testing of the methods. In the theoretical analysis the methods are compared by deriving expressions for their performance. In the practical approach the algorithms are implemented on a simplified search engine cluster of 6 computers. The methods have been evaluated using three criteria. The first criterion is how well the methods distribute documents across the cluster. In the theoretical analysis this also includes worst-case scenarios. The practical evaluation compares the distribution at the end of the tests. The second criterion is efficiency of document access. The theoretical approach focuses on the number of operations required, while the practical approach calculates indexing throughput. The last area of focus is the document volume transported during redistribution. For the final part of the comparison of the methods, some relevant scenarios are introduced. These scenarios focus on dynamic data sets with a high frequency of updates, often new documents and much searching. Using the scenarios and results from the method testing, we found some methods that performed better than others. It is worth noting that the conclusions hold for the given type of workload from the scenarios and the test setting. Given other situations, other methods might be more suitable. In conclusion we found that, for the given scenarios, the best distribution method was the distributed version of linear hashing (LH*). The method using hashing/range partitioning proved to be the least suitable, as a consequence of its high transport volume.
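The LH* method that performed best builds on the classic linear-hashing addressing rule, which can be sketched in a few lines. This is the textbook single-site rule with illustrative parameters; the distributed coordination details of LH* and the thesis's cluster setup are omitted:

    def bucket_for(doc_id: int, level: int, split_pointer: int) -> int:
        # Documents hash to 2**level base buckets; buckets below the split
        # pointer have already been split using the next hash level, so only
        # documents in the bucket currently being split ever move.
        bucket = doc_id % (2 ** level)
        if bucket < split_pointer:
            bucket = doc_id % (2 ** (level + 1))
        return bucket

    # Example: two base buckets, one of which has been split (three nodes in total).
    for doc in [10, 11, 12, 13, 14]:
        print(doc, "->", bucket_for(doc, level=1, split_pointer=1))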

APA, Harvard, Vancouver, ISO, and other styles
25

Deolikar, Piyush P. "Lecture Video Search Engine Using Hadoop MapReduce." Thesis, California State University, Long Beach, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10638908.

Full text
Abstract:

With the advent of the Internet and the ease of uploading video content to video libraries and social networking sites, the availability of video data increased very rapidly during this decade. Universities are uploading video tutorials for their online courses. Companies like Udemy, Coursera, Lynda, etc. have made video tutorials available over the Internet. We propose and implement a scalable solution which helps to find relevant videos with respect to a query provided by the user. Our solution maintains an updated list of the available videos on the web and assigns each a rank according to its relevance. The proposed solution consists of three main components that can mutually interact. The first component, called the crawler, continuously visits and locally stores the relevant information of all the webpages with videos available on the Internet. The crawler has several threads concurrently parsing webpages. The second component builds the inverted index of the web pages stored by the crawler. Given a query, the inverted index is used to obtain the videos that contain the words in the query. The third component computes the rank of each video. This rank is then used to display the results in order of relevance. We implement a scalable solution in the Apache Hadoop framework. Hadoop provides a distributed file system able to handle large files as well as distributed computation among the participants.
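The inverted-index component can be sketched in a MapReduce style: a mapper emits (word, page) pairs and a reducer groups them into postings lists. The sketch below runs standalone in plain Python and is illustrative only, not the project's Hadoop implementation:

    from itertools import groupby
    from operator import itemgetter

    def mapper(pages):
        # Emit a (word, page_id) pair for every distinct word on each crawled page.
        for page_id, text in pages:
            for word in set(text.lower().split()):
                yield word, page_id

    def reducer(mapped):
        # Group the pairs by word into postings lists.
        for word, group in groupby(sorted(mapped), key=itemgetter(0)):
            yield word, [page_id for _, page_id in group]

    pages = [
        ("video_001", "Hadoop MapReduce tutorial part one"),
        ("video_002", "Python tutorial for beginners"),
    ]
    for word, postings in reducer(mapper(pages)):
        print(word, postings)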

APA, Harvard, Vancouver, ISO, and other styles
26

Aly, Mazen. "Automated Bid Adjustments in Search Engine Advertising." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210651.

Full text
Abstract:
In digital advertising, major search engines allow advertisers to set bid adjustments on their ad campaigns in order to capture the valuation differences that are a function of query dimensions. In this thesis, a model that uses bid adjustments is developed in order to increase the number of conversions and decrease the cost per conversion. A statistical model is used to select campaigns and dimensions that need bid adjustments along with several techniques to determine their values since they can be between -90% and 900%. In addition, an evaluation procedure is developed that uses campaign historical data in order to evaluate the calculation methods as well as to validate different approaches. We study the problem of interactions between different adjustments and a solution is formulated. Real-time experiments showed that our bid adjustments model improved the performance of online advertising campaigns with statistical significance. It increased the number of conversions by 9%, and decreased the cost per conversion by 10%.
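How a percentage adjustment in the stated -90% to 900% range modifies a bid can be sketched as follows. The multiplicative stacking of adjustments across dimensions is an assumption for illustration, not necessarily the model developed in the thesis:

    def apply_adjustments(base_bid, adjustments_pct):
        # Each adjustment is a percentage in [-90, 900]; -90% scales the bid by
        # 0.1 and +900% by 10. Adjustments on several dimensions are stacked
        # multiplicatively here (an illustrative assumption).
        final = base_bid
        for pct in adjustments_pct:
            if not -90 <= pct <= 900:
                raise ValueError("bid adjustments must lie between -90% and 900%")
            final *= 1 + pct / 100
        return round(final, 2)

    # A 2.00 base bid, raised 20% for one dimension and lowered 30% for another.
    print(apply_adjustments(2.00, [20, -30]))    # 2.00 * 1.2 * 0.7 = 1.68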
APA, Harvard, Vancouver, ISO, and other styles
27

Robisch, Katherine A. "Search Engine Optimization: A New Literacy Practice." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1394533925.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Turchyn, Sergiy. "A Visual Search Engine for Gesture Annotation." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1499424165650622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Tångring, Anton. "Analysing Search Engine Trends related to Antibiotics." Thesis, Uppsala universitet, Institutionen för informatik och media, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-329105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Mätäsaho, T. (Timo). "Text search engine for digitized historical book." Master's thesis, University of Oulu, 2015. http://jultika.oulu.fi/Record/nbnfioulu-201505061448.

Full text
Abstract:
There is a need to digitize numerous historical books and texts and make it possible to read them electronically. It is also often desirable to preserve their original appearance, not just the text itself. For these operations there is a need for systems which understand the books and texts as they are and are able to distinguish the text information from other content. Traditional optical character recognition systems perform well when processing modern printed text, but they can face problems with old handwritten texts. These types of texts need to be analyzed with systems that can segment the text areas well from other, irrelevant information. That is why it is important that the document image segmentation works well. This thesis focuses on manual rectification, automatic segmentation and text line search on document images in the Orationes project. Once the document images are segmented and text lines found, information from the XML transcript is used to find characters and words in the segmented document images. The search engine was developed in the Python programming language, chosen to ensure a high degree of platform independence.
APA, Harvard, Vancouver, ISO, and other styles
31

Chiravirakul, Pawitra. "Search satisfaction : choice overload, variety seeking and serendipity in search engine use." Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.665389.

Full text
Abstract:
Users of current web search engines are often presented with a large number of returns after submitting a search term, and choosing from the list might lead to them suffering from the effect of "choice overload", as reported in earlier work. However, these search results are typically presented in an ordered list so as to simplify the search process, which may influence search behaviour and moderate the effect of the number of choices. In this thesis, the effects of the number of search returns and their ordering on user behaviour and satisfaction are explored. A mixed-methods approach combining multiple data collection and analysis techniques is employed in order to investigate these effects in terms of three specific issues, namely choice overload in search engine use, variety-seeking behaviour in situations where multiple aspects of search results are required, and the chance of encountering serendipity. The participants were given search tasks and asked to choose from sets of returns under experimental conditions. The results from the first three experiments revealed that large numbers of search results returned from a search engine tended to be associated with more satisfaction with the selected options when the decision was made without a time limit. When time was more strongly constrained, choices from a small number of returns led to relatively higher satisfaction than from a large number. Moreover, users' behaviour was strongly influenced by the ordering of options, in that they often looked at and selected options presented near the top of the result lists when they perceived the ranking as reliable. The next experiment further investigated this ranking-reliance behaviour when potentially useful search results were presented in supplementary lists. The findings showed that when users required a variety of options, they relied less on the ordering and tended to adapt their search strategies to seek variety by browsing more returns in the list, selecting options located further down, and/or choosing the supplementary web pages provided. Finally, with the aim of illustrating how chance encountering can be supported, a model of an automated synonym-enhanced search was developed and employed in a real-world literature search. The results showed that the synonym search was occasionally useful for providing a variety of search results, which in turn increased users' opportunity to come across serendipitous experiences.
APA, Harvard, Vancouver, ISO, and other styles
32

Kalinov, Pavel. "Intelligent Web Exploration." Thesis, Griffith University, 2012. http://hdl.handle.net/10072/365635.

Full text
Abstract:
The hyperlinked part of the internet known as "the Web" arose without much planning for a future of millions of publishers and countless pieces of online content. It has no in-built mechanism to find anything, so tools external to it were introduced: initially web directories and then search engines. Search engines are based on machine learning and have been extremely successful. However, they have some inherent limitations and cannot, by design, address some needs: they serve the "information locating" need only and not "information discovery". Search engine users have learned to accept them and in many cases do not realise how their search has been limited by shortcomings of the model. Before the advent of the search engine, web directories were the only information-finding tool on the web. They were manually built and could not compete economically with the efficiency of search engines. This led to their virtual extinction, with the effect that the "information discovery" need of users is no longer served by any major information provider. Furthermore, none of the dominant information-finding models account for the person of the user in any meaningful way controllable by (or even visible to) the user. This work proposes a method to combine a search engine, a web directory and a personal information management agent into an intelligent Web Exploration Engine in a way which bridges the gaps between these seemingly unrelated tools. Our hybrid, for which we have developed a proof-of-concept prototype [Kalinov et al., 2010b], allows users both to locate specific data and to discover new information. Information discovery is served by a web directory which is built with the assistance of a dynamic hierarchical classifier we developed [Kalinov et al., 2010a]. The category structure achieved by it is also the basis of a large number of nested search engines, allowing information locating both in general (similar to a "standard" search engine) and in a variety of contexts selectable by the user.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Science, Environment, Engineering and Technology
APA, Harvard, Vancouver, ISO, and other styles
33

Koval, Mariia <1989>. "Search Engine Dominance and Quality of Organic Search: Consequences for Online Advertising." Master's Degree Thesis, Università Ca' Foscari Venezia, 2013. http://hdl.handle.net/10579/3192.

Full text
Abstract:
In this dissertation we study how an Internet search engine can lower the quality (score) of organic links, and how this affects the probability that a high-quality website obtains a high position in the organic search results. We investigate the impact of such manipulation of organic search results on advertisers' strategies, the search engine's profits, and consumer welfare. We find that when organic link quality becomes too high, a search engine can face a cannibalisation problem: consumers prefer to click on organic links first and satisfy their needs there, rather than click on sponsored links, the revenue source for the search engine. Thus, the search engine starts reducing organic link quality, which is reflected in lower consumer surplus. According to the analysis, investment in paid placement guarantees that the website will be visited at least via the sponsored listing. Modelling the impact of the search engine's manipulation of organic search results on advertisers' optimal search engine marketing strategies reveals that, facing downward pressure on organic rank, at least one advertiser will invest in paid placement.
APA, Harvard, Vancouver, ISO, and other styles
34

Marshall, Oliver. "Search Engine Optimization and the connection with Knowledge Graphs." Thesis, Högskolan i Gävle, Företagsekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-35165.

Full text
Abstract:
Aim: The aim of this study is to analyze the usage of Search Engine Optimization and Knowledge Graphs and the connection between them to achieve profitable business visibility and reach.
Methods: Following a qualitative method together with an inductive approach, ten marketing professionals were interviewed via an online questionnaire. To conduct this study both primary and secondary data were utilized. Scientific theory and empirical findings were linked and discussed in the analysis chapter.
Findings: This study establishes current Search Engine Optimization utilization by businesses regarding common techniques and methods. We demonstrate their effectiveness on the Google Knowledge Graph and Google My Business, and the resulting positive business impact in increased visibility and reach. Difficulties remain in accurate tracking procedures to analyze quantifiable results.
Contribution of the thesis: This study contributes to the literature on both Search Engine Optimization and Knowledge Graphs by providing a new perspective on how these subjects have been utilized in modern marketing. In addition, this study provides an understanding of the benefits of SEO utilization on Knowledge Graphs.
Suggestions for further research: We suggest more extensive investigation of the elements and utilization of Knowledge Graphs: how the structure can be affected, which techniques are most effective on a bigger scale, and how effectively the benefits can be measured.
Key Words: Search Engine, Search Engine Optimization, SEO, Knowledge Graphs, Google My Business, Google Search Engine, Online Marketing.
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Zhongmiao. "A Domain Specific Search Engine WithExplicit Document Relations." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-141654.

Full text
Abstract:
The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. At Ericsson, similar problems occur. Massive numbers of documents are created with well-defined structures. Although these documents concern domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information, and annotate the documents with this data using formal markup languages. We propose this project to develop a domain-specific search engine that processes different documents and builds explicit relations between them. The project has three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations between documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
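As a hedged illustration of annotating extracted document metadata and explicit document relations with a formal markup language, here is a small Python sketch using rdflib. The namespace, document identifiers, properties and the "verifies" relation are assumptions made up for the example, not Ericsson's schema or the thesis's implementation.

    # Illustrative sketch only: annotating extracted document metadata as RDF so
    # that explicit relations between documents can be queried. The namespace,
    # document IDs and properties are assumptions, not the thesis's schema.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DCTERMS, RDF

    EX = Namespace("http://example.org/docs/")   # assumed namespace

    g = Graph()
    g.bind("ex", EX)
    g.bind("dcterms", DCTERMS)

    spec = EX["feature-spec-42"]
    test = EX["test-report-42"]

    # Metadata extracted from the structured documents (values are made up).
    g.add((spec, RDF.type, EX.FeatureSpecification))
    g.add((spec, DCTERMS.title, Literal("Feature specification 42")))
    g.add((test, RDF.type, EX.TestReport))
    g.add((test, DCTERMS.title, Literal("Test report for feature 42")))

    # An explicit, machine-readable relation between the two documents.
    g.add((test, EX.verifies, spec))

    print(g.serialize(format="turtle"))

    # A SPARQL query over the explicit relations: which reports verify which specs?
    for report, s in g.query(
        "SELECT ?r ?s WHERE { ?r <http://example.org/docs/verifies> ?s }"
    ):
        print(report, "verifies", s)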
APA, Harvard, Vancouver, ISO, and other styles
36

Aboulkhasam, Salaheldin Ali. "An intelligent voice-driven intranet search engine (AIVDISE)." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0028/MQ52023.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Poignant, Pierre 1975. "Peer-to-peer search engine : the Araignee Project." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/85752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lakshmi, Shriram. "Web-based search engine for Radiology Teaching File." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Lallali, Saliha. "A scalable search engine for the Personal Cloud." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV009.

Full text
Abstract:
A new embedded search engine designed for smart objects. Such devices are generally equipped with an extremely small amount of RAM and a large NAND Flash storage capacity. To tackle these conflicting hardware constraints, conventional search engines privilege either insertion scalability or query scalability, but cannot meet both requirements at the same time. Moreover, very few solutions support document deletions and updates in this context. We introduce three design principles, namely Write-Once Partitioning, Linear Pipelining and Background Linear Merging, and show how they can be combined to produce an embedded search engine reconciling a high insert/delete/update rate with query scalability. We have implemented our search engine on a development board with a hardware configuration representative of smart objects and have conducted extensive experiments using two representative datasets. The experimental results demonstrate the scalability of the approach and its superiority compared to state-of-the-art methods.
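A very rough Python sketch of the flavour of these design principles follows: new documents are buffered and flushed into small immutable partitions (never rewritten in place, which suits NAND Flash), queries scan the partitions linearly, and a merge step periodically folds partitions together. This is a simplification for illustration only, not the authors' engine.

    # Illustrative sketch only: a much-simplified picture of the "write-once
    # partitioning" and "background merging" ideas named in the abstract.
    from collections import defaultdict

    class Partition:
        def __init__(self, docs):
            self.postings = defaultdict(set)          # term -> set of doc ids
            for doc_id, text in docs:
                for term in text.lower().split():
                    self.postings[term].add(doc_id)   # written once, then read-only

    class TinyEngine:
        def __init__(self, partition_size=2):
            self.partition_size = partition_size
            self.buffer = []                          # small RAM write buffer
            self.partitions = []                      # immutable, Flash-resident parts

        def insert(self, doc_id, text):
            self.buffer.append((doc_id, text))
            if len(self.buffer) >= self.partition_size:
                self.partitions.append(Partition(self.buffer))   # flush as new partition
                self.buffer = []

        def merge(self):
            """Background merge: fold many small partitions into one larger one."""
            merged = defaultdict(set)
            for p in self.partitions:
                for term, ids in p.postings.items():
                    merged[term] |= ids
            new = Partition([])
            new.postings = merged
            self.partitions = [new]

        def search(self, term):
            term = term.lower()
            hits = set(i for i, t in self.buffer if term in t.lower().split())
            for p in self.partitions:                 # linear pass over partitions
                hits |= p.postings.get(term, set())
            return hits

    engine = TinyEngine()
    for i, text in enumerate(["smart objects store data", "flash storage is cheap",
                              "query the data quickly"]):
        engine.insert(i, text)
    print(engine.search("data"))    # found across buffer and partitions
    engine.merge()
    print(engine.search("data"))    # same result after merging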
APA, Harvard, Vancouver, ISO, and other styles
40

Vaziri, Farzad <1986>. "Discovering Single-Query Tasks from Search Engine Logs." Master's Degree Thesis, Università Ca' Foscari Venezia, 2014. http://hdl.handle.net/10579/5389.

Full text
Abstract:
When a user turns to a search engine to find information, the search engine returns a set of links for each query. Search engines provide these results to the user and, at the same time, log information about each user's queries. Through query-mining methods and algorithms, these logs are used to extract useful knowledge. Previous work defined the concept of a "task": a set of possibly non-contiguous queries that refer to the same information need. Clustering methods have been used to discover collective tasks by aggregating similar user tasks, possibly performed by distinct users. All of these studies considered only queries issued within a single session, until the concept of a "mission" was introduced, which tries to find tasks even across different sessions. Until now, however, all studies have tried to cluster and aggregate queries issued for the same information need, and none of them considers queries that are issued independently, without any relation to other queries: queries we can call singleton, or single-task, queries. We believe that identifying these queries could benefit a search engine, either by filtering them out or by studying them further and improving how search engines respond to them. Our contribution is to use classification methods to distinguish these single-task queries from multi-task ones. Based on the saved query-log information, we define features for a single query and use these features in classification algorithms to achieve our goal.
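A hedged sketch of the classification step might look as follows in Python with scikit-learn. The features (query length, time gap to the previous query, term overlap with the previous query) and the toy log entries are assumptions for the example; the thesis derives its own features from real query logs.

    # Illustrative sketch only: classifying queries as single-task vs. part of a
    # multi-query task. Features and toy data are assumptions for the example.
    from sklearn.ensemble import RandomForestClassifier

    def features(query, prev_query, gap_seconds):
        q, p = set(query.lower().split()), set(prev_query.lower().split())
        overlap = len(q & p) / max(len(q | p), 1)
        return [len(q), gap_seconds, overlap]

    # Hypothetical labelled log entries: (query, previous query, gap, is_single_task)
    log = [
        ("weather stockholm", "cheap flights rome", 3600, 1),
        ("flights rome july", "cheap flights rome", 45, 0),
        ("python list sort", "python sort stable", 30, 0),
        ("pizza near me", "python sort stable", 7200, 1),
    ]

    X = [features(q, p, g) for q, p, g, _ in log]
    y = [label for *_, label in log]

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict([features("hotel rome centre", "flights rome july", 60)]))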
APA, Harvard, Vancouver, ISO, and other styles
41

White, Stephanie. "SEARCH ENGINE UTILIZATION ANALYSIS EXPLORING LINKS BETWEEN PERSONALITY TRAITS AND INTERNET SEARCH BEHAVIOR." Thesis, The University of Arizona, 2009. http://hdl.handle.net/10150/193530.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Xian, Yikun, and Liu Zhang. "Semantic Search with Information Integration." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-13832.

Full text
Abstract:
Since the first search engine was released in 1993, development has never slowed down, and various search engines have emerged to vie for popularity. However, current traditional search engines like Google and Yahoo! are based on keywords, which leads to imprecise results and information redundancy. A new search engine with semantic analysis could be an alternative solution in the future: it is more intelligent and informative, and provides better interaction with users. This thesis discusses semantic search in detail, explains the advantages of semantic search over keyword-based search, and introduces how to integrate semantic analysis with common search engines. At the end of the thesis, there is an example implementation of a simple semantic search engine.
APA, Harvard, Vancouver, ISO, and other styles
43

Neethling, Riaan. "Search engine optimisation or paid placement systems: user preference." Thesis, [S.l. : s.n.], 2007. http://dk.cput.ac.za/cgi/viewcontent.cgi?article=1076&context=td_cput.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Al-Kamha, Reema. "Grouping Search-Engine Returned Citations for Person-Name Queries." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd472.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Movin, Maria. "Spelling Correction in a Music Entity Search Engine by Learning from Historical Search Queries." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229716.

Full text
Abstract:
Query spelling correction is an important component of modern search engines that can help users express their intent and thus improve search quality. In this study, we investigated with what accuracy a sequence-to-sequence recurrent neural network (RNN) can recognise and correct misspellings in a music search engine when the model is trained on historical search queries. A sequence-to-sequence RNN was chosen as the model since it has achieved state-of-the-art performance on similar tasks, such as machine translation and speech recognition. The findings imply that the model learns to correct and complete queries with higher accuracy than a baseline model that simply returns the input query. However, we suggest that more work is needed before the model would be good enough for production, especially on creating a cleaner, less biased training dataset. Nevertheless, our work strengthens the idea that sequence-to-sequence RNNs could be used as spelling correction systems in search engines.
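For readers unfamiliar with the model class, a minimal character-level encoder-decoder (sequence-to-sequence) RNN can be sketched in PyTorch as below. The vocabulary, layer sizes, example queries and the single training step are placeholders for illustration, not the thesis's architecture or data.

    # Illustrative sketch only: the skeleton of a character-level encoder-decoder
    # RNN of the kind described in the abstract. Sizes and data are placeholders.
    import torch
    import torch.nn as nn

    chars = "abcdefghijklmnopqrstuvwxyz "
    stoi = {c: i + 1 for i, c in enumerate(chars)}   # 0 is reserved for padding
    vocab = len(chars) + 1

    class Seq2Seq(nn.Module):
        def __init__(self, vocab, hidden=64):
            super().__init__()
            self.emb = nn.Embedding(vocab, hidden, padding_idx=0)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, misspelled, target_in):
            _, state = self.encoder(self.emb(misspelled))       # summarise input query
            dec, _ = self.decoder(self.emb(target_in), state)   # teacher-forced decoding
            return self.out(dec)                                 # per-position char logits

    def encode(s, length=12):
        ids = [stoi[c] for c in s][:length]
        return torch.tensor([ids + [0] * (length - len(ids))])

    model = Seq2Seq(vocab)
    logits = model(encode("beatels"), encode("beatles"))         # shape (1, 12, vocab)
    loss = nn.CrossEntropyLoss(ignore_index=0)(
        logits.view(-1, vocab), encode("beatles").view(-1)
    )
    loss.backward()                                              # one illustrative step
    print(float(loss))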
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Chaoyang, and Ke Liu. "Smart Search Engine : A Design and Test of Intelligent Search of News with Classification." Thesis, Högskolan Dalarna, Institutionen för information och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:du-37601.

Full text
Abstract:
Background: Google, Bing, and Baidu are the most commonly used search engines in the world, but they also have some problems. For example, when searching for "jaguar", most of the results are about cars, not the animal; this is the problem of polysemy. Search engines often provide the most popular rather than the most relevant results. Aim: We want to design and implement a search function and explore whether classifying news can improve precision when users search for news. Method: We collect data with a web crawler that crawls news articles from BBC News. We then use NLTK and an inverted index for data pre-processing, and BM25 for ranking. Results: Compared to a normal search function, our function has a lower recall rate and higher precision. Conclusions: This search function can improve precision when people search for news. Implications: The function is not limited to news and could be applied to other kinds of search, and it has potential for broader use in search engines. It could also be combined with machine learning to analyse users' search habits in order to search and classify more accurately.
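As a rough illustration of the inverted index and BM25 ranking named in this abstract, here is a minimal, self-contained Python sketch. The example documents, the tokenization and the parameter values k1=1.5 and b=0.75 are common defaults chosen for the example, not the authors' exact configuration.

    # Illustrative sketch only: a minimal BM25 ranker over a tiny in-memory corpus.
    import math
    from collections import Counter, defaultdict

    docs = [
        "jaguar unveils a new electric car model",
        "the jaguar is a big cat native to the americas",
        "football team wins the national championship",
    ]

    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(t) for t in tokenized) / N

    # Inverted index: term -> {doc id: term frequency in that doc}
    index = defaultdict(dict)
    for doc_id, terms in enumerate(tokenized):
        for term, tf in Counter(terms).items():
            index[term][doc_id] = tf

    def bm25(query, k1=1.5, b=0.75):
        scores = defaultdict(float)
        for term in query.lower().split():
            postings = index.get(term, {})
            if not postings:
                continue
            idf = math.log((N - len(postings) + 0.5) / (len(postings) + 0.5) + 1)
            for doc_id, tf in postings.items():
                dl = len(tokenized[doc_id])
                scores[doc_id] += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))
        return sorted(scores.items(), key=lambda x: -x[1])

    print(bm25("jaguar car"))   # doc 0 should outrank doc 1; doc 2 is not retrieved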
APA, Harvard, Vancouver, ISO, and other styles
47

Nardei, Stephanie A. "Search Engine Optimization." 2004. http://hdl.handle.net/10150/106179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Hung, Chia-Lien, and 洪佳蓮. "A Study Of The Search Engine Optimization On Search Engine Ranking." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/41853781632152770378.

Full text
Abstract:
Master's thesis
Feng Chia University
Executive Master's Program in Business Administration
Academic year 99 (ROC calendar, 2010-2011)
According to a survey conducted in 2008 by iProspect, an American search engine marketing company, 68% of search engine users click on the top 10 search results, 85% click on the top 20, and 92% click on the top 30. How to make a website appear in the first three pages of search results therefore becomes a crucial issue. The purpose of this thesis is to develop a system of search engine optimization by researching and analyzing the world's biggest search engine, Google, in the hope that it can serve as a useful reference for improving website rankings in the search results. For example, tweaking the structure and design of a website, one of the major SEO factors, can earn a better position in the search results and thereby increase exposure and traffic without spending a large amount of money on advertising. It also creates more marketing opportunities to establish company identity and brand awareness. The research elaborates the process of search engine optimization and analyzes the performance of optimized websites in the search results. We present website examples implementing several SEO factors, such as keyword targeting, construction of search-engine-friendly (barrier-free) sites, and link strategy. By analyzing and examining these examples, we learn what works effectively in increasing rankings, and further clarify the goals and objectives.
APA, Harvard, Vancouver, ISO, and other styles
49

Lin, Yel-Ku, and 林彥谷. "WWW Image Search Engine." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/89357424017909984168.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
Academic year 91 (ROC calendar, 2002-2003)
There are a large number of web pages and images on the WWW because of the rapid growth of the Internet. In some sense, the WWW is like a database containing a huge number of images. The main purpose of an image search engine is therefore to help users quickly and conveniently find the images they want. In this thesis, we focus on keyword-based image search methods, find words related to images by analysing web pages, and develop a search engine that lets users search for images with query words. In addition, before outputting the results, we propose techniques that improve the accuracy of the query results.
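One generic way to associate keywords with the images on a web page (alt text, file name and nearby caption text) can be sketched in Python with BeautifulSoup as below. These particular signals and the sample page are assumptions for the example, not necessarily the ones used in the thesis.

    # Illustrative sketch only: extracting keywords for images from alt text,
    # the image file name, and nearby caption text, then retrieving by keyword.
    from bs4 import BeautifulSoup

    html = """
    <html><body>
      <h1>Wildlife of Taiwan</h1>
      <img src="formosan-black-bear.jpg" alt="Formosan black bear in the forest">
      <p>The Formosan black bear is an endangered species.</p>
    </body></html>
    """

    soup = BeautifulSoup(html, "html.parser")
    image_index = {}

    for img in soup.find_all("img"):
        keywords = set()
        keywords.update((img.get("alt") or "").lower().split())
        keywords.update(img.get("src", "").rsplit(".", 1)[0].replace("-", " ").split())
        sibling = img.find_next_sibling("p")           # nearby text as extra evidence
        if sibling:
            keywords.update(sibling.get_text().lower().split())
        image_index[img.get("src")] = keywords

    def search_images(query):
        q = set(query.lower().split())
        return [src for src, kw in image_index.items() if q & kw]

    print(search_images("black bear"))   # ['formosan-black-bear.jpg']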
APA, Harvard, Vancouver, ISO, and other styles
50

Fang, Chuang-Hsiung, and 方壯雄. "Using Public Search Engine to Search Private Document." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/96685557489768725432.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 94 (ROC calendar, 2005-2006)
We believe the best encryption is one that does not let others realise the document has been encrypted at all. Using the characteristics and grammar of Chinese words, we seek a method whose output is still readable after encryption while its meaning is entirely different, thereby achieving the goal of encryption. This thesis designs and implements a system for Chinese information hiding. Encrypted documents can still be indexed and searched by public search engines, yet others cannot understand their original meaning. We can also use public storage space to hold these documents.
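The general idea of text that remains readable after "encryption" but carries a different meaning can be illustrated with a toy, reversible word-substitution table. The thesis works with the characteristics and grammar of Chinese; the English table below is a trivial stand-in invented for the example.

    # Illustrative sketch only: a toy, reversible word-substitution table conveying
    # the idea of "readable after encryption, but with a different meaning".
    SUBSTITUTION = {
        "meeting": "picnic",
        "contract": "recipe",
        "monday": "summer",
    }
    REVERSE = {v: k for k, v in SUBSTITUTION.items()}

    def hide(text):
        """Rewrite the secret text into an innocuous-looking, still readable text."""
        return " ".join(SUBSTITUTION.get(w, w) for w in text.lower().split())

    def reveal(text):
        """Recover the original text from the disguised version."""
        return " ".join(REVERSE.get(w, w) for w in text.lower().split())

    public = hide("the contract meeting is on monday")
    print(public)            # "the recipe picnic is on summer" (searchable, but misleading)
    print(reveal(public))    # "the contract meeting is on monday"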
APA, Harvard, Vancouver, ISO, and other styles