Dissertations / Theses on the topic 'Query formulation'

To see the other types of publications on this topic, follow the link: Query formulation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 17 dissertations / theses for your research on the topic 'Query formulation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

White, Barbara Jo. "Evaluating the impact of typical images for visual query formulation on search efficacy /." Full text available from ProQuest UM Digital Dissertations, 2005. http://0-proquest.umi.com.umiss.lib.olemiss.edu/pqdweb?index=0&did=1253473101&SrchMode=1&sid=3&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1193754304&clientId=22256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Munir, Kamran. "Ontology-Driven Relational Query Formulation Using the Semantic and Assertion Capabilities of OWL-DL." Thesis, University of the West of England, Bristol, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.524696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Maxwell, Kylie Tamsin. "Term selection in information retrieval." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20389.

Full text
Abstract:
Systems trained on linguistically annotated data achieve strong performance for many language processing tasks. This encourages the idea that annotations can improve any language processing task if applied in the right way. However, despite widespread acceptance and availability of highly accurate parsing software, it is not clear that ad hoc information retrieval (IR) techniques using annotated documents and requests consistently improve search performance compared to techniques that use no linguistic knowledge. In many cases, retrieval gains made using language processing components, such as part-of-speech tagging and head-dependent relations, are offset by significant negative effects. This results in a minimal positive, or even negative, overall impact for linguistically motivated approaches compared to approaches that do not use any syntactic or domain knowledge. In some cases, it may be that syntax does not reveal anything of practical importance about document relevance. Yet without a convincing explanation for why linguistic annotations fail in IR, the intuitive appeal of search systems that ‘understand’ text can result in the repeated application, and mis-application, of language processing to enhance search performance. This dissertation investigates whether linguistics can improve the selection of query terms by better modelling the alignment process between natural language requests and search queries. It is the most comprehensive work on the utility of linguistic methods in IR to date. Term selection in this work focuses on identification of informative query terms of 1-3 words that both represent the semantics of a request and discriminate between relevant and non-relevant documents. Approaches to word association are discussed with respect to linguistic principles, and evaluated with respect to semantic characterization and discriminative ability. 
Analysis is organised around three theories of language that emphasize different structures for the identification of terms: phrase structure theory, dependency theory and lexicalism. The structures identified by these theories play distinctive roles in the organisation of language. Evidence is presented regarding the value of different methods of word association based on these structures, and the effect of method and term combinations. Two highly effective, novel methods for the selection of terms from verbose queries are also proposed and evaluated. The first method focuses on the semantic phenomenon of ellipsis with a discriminative filter that leverages diverse text features. The second method exploits a term ranking algorithm, PhRank, that uses no linguistic information and relies on a network model of query context. The latter focuses queries so that 1-5 terms in an unweighted model achieve better retrieval effectiveness than weighted IR models that use up to 30 terms. In addition, unlike models that use a weighted distribution of terms or subqueries, the concise terms identified by PhRank are interpretable by users. Evaluation with newswire and web collections demonstrates that PhRank-based query reformulation significantly improves performance of verbose queries by up to 14% compared to highly competitive IR models, and is at least as good for short, keyword queries with the same models. Results illustrate that linguistic processing may help with the selection of word associations but does not necessarily translate into improved IR performance. Statistical methods are necessary to overcome the limits of syntactic parsing and word adjacency measures for ad hoc IR. As a result, probabilistic frameworks that discover, and make use of, many forms of linguistic evidence may deliver small improvements in IR effectiveness, but methods that use simple features can be substantially more efficient and equally, or more, effective.
Various explanations for this finding are suggested, including the probabilistic nature of grammatical categories, a lack of homomorphism between syntax and semantics, the impact of lexical relations, variability in collection data, and systemic effects in language systems.
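The abstract above describes PhRank's "network model of query context" only at a high level. As a loose illustration of the general idea, and not the thesis's actual algorithm, a graph-based term ranker can be sketched as a random walk over a word co-occurrence graph; the function names and toy snippets below are our own assumptions:

```python
from collections import defaultdict

def build_cooccurrence_graph(texts, window=2):
    """Build an undirected, weighted word co-occurrence graph
    from a set of text snippets (e.g. a verbose query plus
    pseudo-relevant context)."""
    graph = defaultdict(lambda: defaultdict(float))
    for text in texts:
        words = text.lower().split()
        for i, w in enumerate(words):
            for v in words[i + 1:i + 1 + window]:
                if v != w:
                    graph[w][v] += 1.0
                    graph[v][w] += 1.0
    return graph

def rank_terms(graph, damping=0.85, iterations=30):
    """Rank terms with a PageRank-style random walk: terms that are
    central in the query-context network score highest."""
    nodes = list(graph)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {}
        for n in nodes:
            incoming = 0.0
            for m in nodes:
                weight = graph[m].get(n, 0.0)
                total = sum(graph[m].values())
                if weight and total:
                    incoming += score[m] * weight / total
            new[n] = (1 - damping) / len(nodes) + damping * incoming
        score = new
    return sorted(score, key=score.get, reverse=True)
```

Picking the top 1-5 ranked terms as the reformulated query mirrors the "concise, interpretable terms" property the abstract reports, though the real PhRank features and weighting differ.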
APA, Harvard, Vancouver, ISO, and other styles
4

Phillips, Robert H. "The effect of denormalized schemas on ad-hoc query formulation: a human factors experiment in database design." Diss., Virginia Polytechnic Institute and State University, 1989. http://hdl.handle.net/10919/54262.

Full text
Abstract:
The information systems literature is rich with studies of database organization and its impact on machine, programmer, and administrative efficiency. Little attention, however, has been paid to the impact of database organization on end-user interactions with computer systems. This research effort addressed this increasingly important issue by examining the effects of database organization on the ability of end-users to locate and extract desired information. The study examined the impact of normalization levels of external relational database schema on end-user query success. It has been suggested in the literature that end-user query success might be improved by presenting external schema in lower level normal forms. This speculation is based on an analytical study of one particular class of query, queries involving join operations. The research presented here provides empirical support for this assertion. However, the implicit assumption that all other queries are neutral in their bias toward a particular level of normalization was found to be false. A class of queries requiring decomposition of prejoined relations was identified that strongly favors normalized relations. Thus, no particular normalization level was shown to dominate unless assumptions were made as to the class of query being formulated. Evidence from field research may be required to completely resolve the issue. The study also examined the interaction effects between normalization levels and other key variables known to impact query success. Significant interactions with user skill and the complexity of the query being made were found. The level of normalization did not impact high skilled users making easy queries or low skilled users making difficult queries.
The impact of these interactions, as well as the main effects of the related variables, on query syntax and logic errors holds important implications for database administrators as well as those involved with the development of database query languages.
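The trade-off the abstract describes can be made concrete with a minimal sqlite3 sketch (the table names and data are hypothetical, not from the dissertation): a join query is simpler for the user against a prejoined (denormalized) table, while a query that must decompose the prejoined relation, such as listing distinct departments, is simpler against the normalized design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized schema: each fact stored once (hypothetical tables).
cur.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, dept_name TEXT)")
cur.execute("CREATE TABLE emp (emp_id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER)")
cur.executemany("INSERT INTO dept VALUES (?, ?)", [(1, "Sales"), (2, "IT")])
cur.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [(10, "Ada", 1), (11, "Ben", 1), (12, "Cy", 2)])

# Denormalized schema: the same data prejoined into one table.
cur.execute("CREATE TABLE emp_dept (emp_id INTEGER, name TEXT, dept_name TEXT)")
cur.execute("""INSERT INTO emp_dept
               SELECT e.emp_id, e.name, d.dept_name
               FROM emp e JOIN dept d ON e.dept_id = d.dept_id""")

# A join query: the user must formulate an explicit join against the
# normalized schema, but only a simple filter against the prejoined one.
normalized = cur.execute(
    "SELECT d.dept_name, e.name FROM emp e "
    "JOIN dept d ON e.dept_id = d.dept_id "
    "WHERE d.dept_name = 'Sales'").fetchall()
denormalized = cur.execute(
    "SELECT dept_name, name FROM emp_dept "
    "WHERE dept_name = 'Sales'").fetchall()

# A decomposition query: listing departments requires DISTINCT against
# the prejoined table, which favors the normalized design.
depts = cur.execute("SELECT DISTINCT dept_name FROM emp_dept").fetchall()
```

Both formulations return the same rows; the point the study makes is about which one is easier for an end-user to write correctly, and that neither schema wins for every query class.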
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
5

Appleton, Elizabeth A. "Exploring the Use of Evidence Based Practice Questions to Improve the Search Process." Thesis, School of Information and Library Science, 2007. http://hdl.handle.net/1901/386.

Full text
Abstract:
Evidence Based Practice (EBP) is a relatively new approach that professionals are using to cope with the ever-growing body of literature in their fields. The goal of EBP is to use this body of literature effectively to improve professional practice, and thus the quality of services. A major component of EBP is asking a focused, well-built question, referred to in this paper as an Evidence Based Practice Question (EBPQ). This paper reports the findings of an exploratory study that examines the use of an EBPQ to respond to reference questions emailed to a university library reference desk. A purposive sample of 30 randomly selected reference emails was divided into two groups, the EBPQ group and the control group. The professional searcher who conducted the searches used the same approach in responding to each emailed reference question, except that the EBPQ group's searches were guided by EBPQs and the control group's were not. The results indicate that searches guided by EBPQs are more focused, apply more resources to the search process, and take less time than searches not guided by EBPQs. These conclusions suggest that EBPQs appear to be useful for improving the search process and that further research is warranted.
APA, Harvard, Vancouver, ISO, and other styles
6

Ziane, Mikal, and François Bouillé. "Optimisation de requêtes pour un système de gestion de bases de données parallèle." Paris 6, 1992. http://www.theses.fr/1992PA066689.

Full text
Abstract:
Within the framework of the ESPRIT II EDS project, we designed and implemented a physical optimizer for a parallel database management system. This optimizer takes into account several types of parallelism, parallel algorithms, and fragmentation strategies. We also identify which types of knowledge determine the extensibility and efficiency of an optimizer. Finally, we propose a new method for optimizing path traversal in object-oriented databases that improves on traditional methods.
APA, Harvard, Vancouver, ISO, and other styles
7

Webber, Carine Geltrudes. "O estudo e desenvolvimento do protótipo de uma ferramenta de apoio a formulação de consultas a bases de dados na área da saúde." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1997. http://hdl.handle.net/10183/18243.

Full text
Abstract:
The goal of this work is, through the study of several technologies, to develop the prototype of a tool able to support the user in formulating a query to MEDLINE (Medical Literature Analysis and Retrieval System On Line). MEDLINE is a bibliographical information retrieval system in the biomedical area developed by the National Library of Medicine. It is a tool whose use has grown in this area as health care professionals increasingly rely on electronically available literature. People, in general, look for information and expect to find it exactly in line with their expectations, quickly and using every information source available. It was with this purpose that the first Information Retrieval Systems (IRS) emerged: in simplified terms, a user builds a query that expresses an information need, the system processes it, and the results are returned to the user. Many users find it difficult to represent their information need in a way that yields satisfactory results from an IRS. The terms the user chooses to compose the query are not always the ones the system recognizes. To succeed in defining the terms of a query, the user should know the terminology employed in indexing the items to be retrieved, or be able to rely on an intermediary who has that knowledge. When neither is possible, resources that make a successful query feasible become necessary. This work first presents a general study of Information Retrieval Systems, covering the processes involved in storage, organization and retrieval. It then highlights aspects of the medical vocabularies and classifications currently in use, which help explain the difficulties users face when interacting with a system of this kind.
Finally, the prototype of the Query Formulation System for MEDLINE is presented, along with its components and functionality. The system was developed to allow the user to employ any term when formulating a query to MEDLINE. It integrates different medical terminologies drawn from vocabularies and classifications available in Portuguese and currently in use. This approach allows the creation of a more complete biomedical terminology in which each term maintains relationships with other terms that describe its semantics.
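The core idea of mapping a user's free terms onto the indexing vocabulary can be sketched as a thesaurus lookup; the entries, field names, and function below are our own hypothetical stand-ins, not the thesis's actual data model:

```python
# Hypothetical mapping from lay terms to controlled-vocabulary entries,
# with semantic relations ("broader") between terms.
THESAURUS = {
    "heart attack": {"preferred": "Myocardial Infarction",
                     "broader": ["Heart Diseases"]},
    "high blood pressure": {"preferred": "Hypertension",
                            "broader": ["Vascular Diseases"]},
}

def to_controlled_query(user_terms, thesaurus):
    """Rewrite free-text terms into indexing-vocabulary terms so a
    MEDLINE-style search matches how documents were indexed; terms
    unknown to the thesaurus pass through unchanged."""
    query = []
    for term in user_terms:
        entry = thesaurus.get(term.lower())
        query.append(entry["preferred"] if entry else term)
    return query
```

The "broader" relations hint at how such a terminology could also widen a query that returns too few results, which is the kind of support the prototype aims to give users who do not know the indexing vocabulary.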
APA, Harvard, Vancouver, ISO, and other styles
8

Latour, Marilyne. "Du besoin d'informations à la formulation des requêtes : étude des usages de différents types d'utilisateurs visant l'amélioration d'un système de recherche d'informations." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENL015/document.

Full text
Abstract:
Faced with massive and heterogeneous document collections, IR systems must now cope with user behaviours that are as varied as they are unpredictable. The goal of our work is to evaluate how the same user verbalizes the same information need both as a free-text statement (natural language) and as a keyword statement (query language). We worked in an applied context: users' complaints (refund requests) submitted to a search engine dedicated to economic studies in French. Through this engine we collected both types of statements over 5 consecutive years, yielding a corpus of 1,398 natural-language requests and 3,427 queries. We then compared how the information need itself was expressed and highlighted what the use of one language or the other contributes in terms of information and precision.
APA, Harvard, Vancouver, ISO, and other styles
9

Limbu, Dilip Kumar. "Contextual information retrieval from the WWW." Click here to access this resource online, 2008. http://hdl.handle.net/10292/450.

Full text
Abstract:
Contextual information retrieval (CIR) is a critical technique for today’s search engines in terms of facilitating queries and returning relevant information. Despite its importance, little progress has been made in its application, due to the difficulty of capturing and representing contextual information about users. This thesis details the development and evaluation of the contextual SERL search, designed to tackle some of the challenges associated with CIR from the World Wide Web. The contextual SERL search utilises a rich contextual model that exploits implicit and explicit data to modify queries to more accurately reflect the user’s interests as well as to continually build the user’s contextual profile and a shared contextual knowledge base. These profiles are used to filter results from a standard search engine to improve the relevance of the pages displayed to the user. The contextual SERL search has been tested in an observational study that has captured both qualitative and quantitative data about the ability of the framework to improve the user’s web search experience. A total of 30 subjects, with different levels of search experience, participated in the observational study experiment. The results demonstrate that when the contextual profile and the shared contextual knowledge base are used, the contextual SERL search improves search effectiveness, efficiency and subjective satisfaction. The effectiveness improves as subjects have actually entered fewer queries to reach the target information in comparison to the contemporary search engine. In the case of a particularly complex search task, the efficiency improves as subjects have browsed fewer hits, visited fewer URLs, made fewer clicks and have taken less time to reach the target information when compared to the contemporary search engine. 
Finally, subjects have expressed a higher degree of satisfaction on the quality of contextual support when using the shared contextual knowledge base in comparison to using their contextual profile. These results suggest that integration of a user’s contextual factors and information seeking behaviours are very important for successful development of the CIR framework. It is believed that this framework and other similar projects will help provide the basis for the next generation of contextual information retrieval from the Web.
APA, Harvard, Vancouver, ISO, and other styles
10

Wien, Sigurd. "Efficient Top-K Fuzzy Interactive Query Expansion While Formulating a Query : From a Performance Perspective." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-23010.

Full text
Abstract:
Interactive query expansion and fuzzy search are two efficient techniques for assisting a user in an information retrieval process. Interactive query expansion helps the user refine a query by suggesting how it might be extended to further specify the user's actual information need. Fuzzy search, on the other hand, supports the user by including results for terms that approximately equal the query string. This avoids reformulating queries with slight misspellings and retrieves results for indexed terms not spelled as expected. This study looks at the performance aspects of combining these concepts to give the user real-time suggestions on how to complete a query as it is formulated letter by letter. These suggestions are a set of terms from the index that are fuzzy matches of the query string terms, chosen based on the individual rank of each term, the semantic correlation between the terms, and the edit distance between the query and the suggestion. The combination of these techniques is challenging from a performance perspective because each of them requires a lot of computation, and these computations multiply when the techniques are combined. Giving suggestions letter by letter as the user types requires a lookup for each letter, and fuzzy search expands each of these lookups with the fuzzy matches of the prefix to match against the index. For each of the different completions of the fuzzy-matched prefixes, we need to calculate the semantic correlation with the previously matched terms. This study presents three algorithms to give top-k suggestions for the single-term case and then extends these in three ways to handle multi-term queries. The algorithms use a trie-based term index with some extensions to enable fast lookup of the top-k terms that match a given prefix and to assess the semantic correlation between the terms in a suggestion.
The performance review will demonstrate that our approach will be viable to use for presenting the user with suggestions in real time even with a fairly large number of terms.
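The combination of ranked completion and fuzzy matching can be illustrated with a naive sketch; the real thesis uses a trie index for efficiency, whereas this version simply scans a dictionary of (term, rank) pairs, and the scoring (rank minus an edit-distance penalty) is our own simplification:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def fuzzy_complete(prefix, index, k=3, max_dist=1):
    """Return the top-k indexed terms whose own prefix is within
    max_dist edits of the typed prefix, ordered by term rank minus
    a distance penalty. A trie would prune most candidates; here we
    scan the whole index for clarity."""
    candidates = []
    for term, rank in index.items():
        dist = edit_distance(prefix, term[:len(prefix)])
        if dist <= max_dist:
            candidates.append((rank - dist, term))
    candidates.sort(reverse=True)
    return [term for _, term in candidates[:k]]
```

Calling this once per typed letter reproduces the letter-by-letter interaction the abstract describes, and makes the multiplicative cost visible: every keystroke triggers an edit-distance computation against each candidate completion.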
APA, Harvard, Vancouver, ISO, and other styles
11

Schulte, Stefan. "Web Service Discovery Based on Semantic Information - Query Formulation and Adaptive Matchmaking." Phd thesis, 2010. https://tuprints.ulb.tu-darmstadt.de/2293/1/FB18-Stefan_Schulte-DrArbeit_Published.pdf.

Full text
Abstract:
Service-oriented Computing introduces a range of possible applications spanning from the combination of Web services in software mashups to the design and implementation of entire IT system landscapes following the paradigm of Service-oriented Architectures. The discovery of services which provide a desired capability is one of the basic operations in Service-oriented Computing and is deemed to be one of the grand challenges in Web service research. This applies in particular to scenarios with a large number of service offers, where it is desirable to automate the discovery process to some degree. Service discovery is based on the description of service components, e.g., interfaces or operations. As the syntactic description of a Web service is often imprecise, semantic Web services are considered to play a decisive role in the facilitation of service discovery. In this context, the application and utilization of semantic information in service discovery concerns the ability of service providers to describe services, the ability of requesters to specify requirements, and the effectiveness of the service matchmaker, i.e., an algorithm that takes into account a request and finds the best fitting services from a set of service offers. Matchmaking research focuses on the selection of the necessary elements from a service description, similarity metrics, and the combination of the resulting similarity values. This thesis provides several contributions to the improvement and ease of service discovery based on semantic information. The main contributions are made in the fields of service matchmaking and query formulation. Regarding the first-mentioned contribution, two approaches to matchmaking for semantic Web services are presented. The first, LOG4SWS.KOM, is based on "classical" subsumption matching and introduces an innovative way to weight and combine different matching degrees.
LOG4SWS.KOM is self-adaptive to different basic assumptions regarding the semantic concepts applied in a service description. This includes different presumptions regarding what a semantic annotation on a distinct service abstraction level actually denotes as well as the meaning of different subsumption relationships between semantic concepts. LOG4SWS.KOM is applied to different abstraction levels of a service description, which may not necessarily be completely described using semantic information. Hence, the matchmaker includes a linguistic-based fallback strategy, triggering the need to incorporate syntactic information. The second matchmaker, COV4SWS.KOM, deviates from logic-based similarity measurement and applies methods from the field of relatedness measurement of semantic concepts in ontologies. This way, COV4SWS.KOM allows more fine-grained relationships than conventional subsumption matching-based approaches. Additionally, COV4SWS.KOM introduces the adaptation to varying quality and usefulness of syntactic descriptions and semantic annotations at different abstraction levels of a service description. Both matchmakers are implemented for SAWSDL and provide, to the best of our knowledge, the best matchmaking results for this Web service standard regarding Information Retrieval metrics, so far. Regarding the second focus of this thesis - query formulation for semantic Web service discovery - an extensive analysis of requirements towards a unified service query formalism has been conducted. Based on this analysis, two different approaches to query formulation for semantic Web services have been designed, developed, and implemented. The first is a lightweight approach making use of already existing standards and technologies: Here, a slightly extended SPARQL syntax for SAWSDL-based service descriptions is integrated into UDDI. 
However, the usage of existing standards imposes some constraints, as especially SPARQL has not been explicitly designed for query formulation for semantic Web services. Hence, a second, more advanced approach, has been implemented, where a distinct, SPARQL-based query language is conceptualized and integrated in a service registry. This language - SWS2QL - allows a service requester to address different service abstraction levels, incorporate and parameterize matchmakers, define thresholds, etc., leading to a sophisticated, fine-grained definition of service requests. Even though the corresponding proof of concept implementation makes use of ebXML as service registry standard and SAWSDL as service formalism, results can be easily transferred to other registry and service technologies, as the approach is based on abstract service data and query models. This way, a unified service query formalism is provided. Apart from the main contributions, this thesis also provides a general framework based on ebXML, which features the integration of semantic Web service descriptions and different service matchmakers into this registry standard.
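For readers unfamiliar with the "classical" subsumption matching that LOG4SWS.KOM builds on, here is a minimal sketch of the standard exact / plug-in / subsumes / fail matching degrees over a toy concept hierarchy; the ontology, names, and the exact direction conventions below are illustrative assumptions, not the thesis's implementation:

```python
# Toy ontology as a child -> parent map (hypothetical concepts).
ONTOLOGY = {"SUV": "Car", "Sedan": "Car", "Car": "Vehicle"}

def ancestors(concept, ontology):
    """Walk the child -> parent chain up to the root."""
    out = []
    while concept in ontology:
        concept = ontology[concept]
        out.append(concept)
    return out

def degree_of_match(offered, requested, ontology):
    """Classic subsumption-based matching degrees. Conventions vary
    in the literature; here "plug-in" means the offer is more specific
    than the request, "subsumes" that it is more general."""
    if offered == requested:
        return "exact"
    if requested in ancestors(offered, ontology):
        return "plug-in"
    if offered in ancestors(requested, ontology):
        return "subsumes"
    return "fail"
```

The thesis goes beyond this discrete scheme: COV4SWS.KOM replaces the four coarse degrees with graded ontology-relatedness measures, precisely because siblings such as SUV and Sedan come out as a flat "fail" here despite being semantically close.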
APA, Harvard, Vancouver, ISO, and other styles
12

Yang, Ping Jing, and 楊平京. "Click-Search: Supporting Information Search with Interactive Image-to-Keyword Query Formulation." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/30063011799325044561.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 104 (ROC calendar, i.e. 2015)
Information search is a common yet important task in everyday work and life. How to help users search for information or things they do not necessarily know how to express in words remains a challenging design issue. Even when people do know how to express themselves, the cognitive cost of retrieving the concepts and formulating the queries can be excessive. In this paper, we present Click-Search, a search user interface that allows people to indicate their search intents by merely selecting and cropping segments of image contents. The system automatically converts selected image segments to keywords based on known associations between image pixels and semantic labels created by prior crowdsourced image tagging. Through a user study, we found that Click-Search can support a range of searching activities effectively. We discuss the implications of this new approach of searching through interactions with images.
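The image-to-keyword step can be sketched as a lookup over a pixel-label map: count the semantic labels inside the user's crop and emit the dominant ones as query terms. The grid, labels, and function below are hypothetical stand-ins for the crowdsourced tagging data the system actually uses:

```python
from collections import Counter

def crop_to_keywords(label_grid, top, left, bottom, right, k=2):
    """Convert a cropped region of a per-pixel semantic-label map
    into query keywords by majority vote over the labels inside
    the crop rectangle (rows top:bottom, columns left:right)."""
    counts = Counter(
        label
        for row in label_grid[top:bottom]
        for label in row[left:right]
        if label is not None)  # None marks untagged pixels
    return [label for label, _ in counts.most_common(k)]
```

In the real interface the selected segments come from image segmentation rather than an axis-aligned crop, but the principle of translating a visual selection into keywords is the same.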
APA, Harvard, Vancouver, ISO, and other styles
13

Schulte, Stefan [Verfasser]. "Web service discovery based on semantic information : query formulation and adaptive matchmaking / von Stefan Schulte." 2010. http://d-nb.info/1007432985/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Peixoto, Rui José Viegas. "A informática na educação." Master's thesis, 2005. http://hdl.handle.net/10400.2/561.

Full text
Abstract:
Master's dissertation in Science Teaching presented to the Universidade Aberta
The aim of this dissertation was to study the relationship between teachers' knowledge of computing, their use of computers in the classroom, and the way they approach methodological options that promote successful teaching and learning. To this end, we sought to determine: (1) the relationship between professional experience and the moment at which teachers of the various subjects became comfortable using computers; (2) which activities are most used at each level of education; (3) how many lessons are used for each piece of software; (4) how each teaching area uses a given piece of software and which are most used; (5) which objectives teachers in each teaching area aim to achieve when they use computers and which are most common; (6) which objectives teachers have when they use a given piece of software and which are most common. The methodology adopted was quantitative, and a survey was used for data collection. The study involved all secondary schools in the municipalities of Lisboa, Oeiras, Cascais, Amadora, Sintra, Odivelas and Loures, 17 schools in total. The population consisted of secondary school teachers from the 17 selected schools in the following subject areas: Portuguese, Mathematics, English, Biology and Geology, Physics and Chemistry, Descriptive Geometry, History, Geography, and Economics. In each school, three teachers from each subject area were asked to collaborate, corresponding to a total of 27 teachers per school. The total number of teachers involved in the study was 459. It was concluded that younger teachers with fewer years of service acquire computer skills earlier, while still in secondary school or as university students, and are the ones who use computers most. The vast majority of teachers have only recently become comfortable using computers.
When they use computers in classroom activities, approximately 63% of teachers ask students to work individually or to collaborate on project activities. The three types of software most used by teachers are: multimedia authoring (25.4%), WWW browsers and simulation/exploration (19%). Subject-specific software is used by 32.3% of Mathematics teachers. The three main objectives when teachers use computers or software in teaching are: improving learning, improving understanding, and searching for information and ideas. The study suggests that teachers use computers in extracurricular activities and that the vast majority do not use them in curricular activities. It also suggests a gap between the knowledge teachers report and how it is used in the classroom. It further suggests that the main causes of the failure of technologies in schools are: lack of computer equipment; teachers' insecurity in using technologies; software unsuited to curricular activities; and teachers' lack of training in software specific to the subject they teach.
L’étude du rapport entre les connaissances que les professeurs ont sur l’informatique, l’utilisation de l’ordinateur dans la salle de classe et la façon dont ils envisagent les options méthodologiques facilitant la réussite scolaire a été l’objectif de cette dissertation. Dans ce sens, on a cherché à savoir: (1) le rapport entre l’expérience professionnelle et le moment où les professeurs des différentes matières scolaires se sont sentis à l’aise dans l’utilisation des ordinateurs; (2) les activités les plus pratiquées dans chaque degré d’enseignement; (3) le nombre de cours utilisés pour chaque logiciel; (4) comment chaque domaine d’enseignement utilise un certain type de logiciel et ceux qui sont les plus utilisés; (5) les objectifs que chaque domaine d’enseignement prétend atteindre en utilisant les ordinateurs et quels ordinateurs sont utilisés; (6) les objectifs des professeurs quand ils utilisent un déterminé logiciel et ceux qui sont les plus utilisés. La méthodologie adoptée a été de type quantitatif et pour le recueil de données une enquête a été utilisée. L’étude réalisée a concerné toutes les écoles secondaires dês municipalités de Oeiras, Cascais, Amadora, Sintra, Odivelas et Loures, dans un total de 17 écoles. Cette étude s’est dirigée aux professeurs de l’enseignement secondaire des 17 écoles sélectionnées et des matières scolaires suivantes: Portugais, Mathématiques, Anglais, Biologie et Géologie, Physicochimie, Géométrie Descriptive, Histoire, Géographie et Économie. Dans chaque école, on a demandé la collaboration à trois professeurs de chaque matière, ce qui correspond à un total de 27 professeurs. Le nombre total de professeurs concernés par l’enquête a été de 459. On a conclu que les professeurs avec moins d’ancienneté et plus jeunes, acquièrent des connaissances dans l’utilisation des ordinateurs plus tôt lorqu’ils sont encore dans l’enseignement secondaire ou universitaire et ce sont eux qui utilisent le plus les ordinateurs. 
La grande majorité des professeurs s’est sentie à l’aise dans l’utilisation des ordinateurs récemment. Quand ils utilisent les ordinateurs dans des activités en salle de classe, à peu près 63% des professeurs demandent à leurs élèves de travailler individuellement ou de participer dans d’activités des projets. Les trios logiciels les plus utilisés par les professeurs sont: Applications Multimédia (25,4%), WWW Browser et Simulation/Exploitation (19%). Le logiciel didacticiel est utilisé par 32,3% des professeurs de Mathématiques. Quand ils utilisent les ordinateurs ou unlogiciel en salle de classe les trois principaux objectifs des professeurs sont: faciliter l’apprentissage, faciliter la compréhension, recherche d’informations et d’idées. L’étude suggère que les professeurs utilisent les ordinateurs en activités extrascolaires et la grande majorité ne les utilise pas en activités scolaires. Cette étude suggère aussi l’existence d’un déficit entre les connaissances manifestées et la façon dont celles-ci sont utilisées en classe. Il y est aussi suggéré que les principales causes de l’échec des technologies dans les écoles sont: le manque d’équipement informatique ; le manque d’assurance des professeurs dans l’utilisation des technologies; les logiciels inadaptés aux activités scolaires; le manque de formation des professeurs noté dans dês stages pédagogiques à propos de logiciels didacticiels de la matière qu’ils enseignent
The aim of this dissertation was to study the relationship between the knowledge teachers have of computing, the use they make of the computer in the classroom, and the way they regard methodological options that promote success in the teaching-learning process. To that end, we investigated: (1) the relationship between teaching experience and the moment at which teachers felt comfortable using computers; (2) which activities are most used at each school level; (3) how many lessons are devoted to each piece of software; (4) how each subject area uses a given piece of software and which are used the most; (5) which goals each subject area pursues when using computers and which are pursued the most; (6) which goals teachers have when using a given piece of software and which are the most common. We adopted a quantitative methodology and collected data with a questionnaire. The study covered all secondary schools in the municipalities of Lisbon, Oeiras, Cascais, Amadora, Sintra, Odivelas and Loures, a total of 17 schools. The population consisted of secondary-level teachers from the 17 selected schools in the following subject areas: Portuguese, Mathematics, English, Biology and Geology, Physics and Chemistry, Descriptive Geometry, History, Geography and Economics. In each school we asked for the cooperation of three teachers from each subject area, corresponding to 27 teachers per school; 459 teachers took part in the study. We concluded that younger teachers with less teaching experience acquire computer skills earlier in their lives, some while still in secondary school or at university, and that they are the ones who use computers the most. Most teachers became comfortable using computers only recently. When computers are used in classroom activities, around 63% of teachers ask their students to work individually or to collaborate in project activities.
The three types of software most used by teachers are Multimedia Authoring (25.4%), WWW browsers and Simulation/Exploration software (19%). Subject-specific software is used by 32.3% of Mathematics teachers. When using computers or software in classroom activities, teachers' three main goals are improving learning, improving understanding, and researching information and ideas. The study suggests that teachers use computers in extracurricular activities and that most do not use them in curricular activities. It also suggests a gap between the knowledge teachers have and the way this knowledge is used in the classroom. Finally, it suggests that the main causes for the lack of success of technology in the classroom are: a lack of computing equipment; teachers' insecurity about using computing technology; software that is not suited to curricular activities; and insufficient teacher training in software specific to the subject they teach.
APA, Harvard, Vancouver, ISO, and other styles
15

Peetz-Ullman, Juliane. "The difficult task of finding digitized music manuscripts in online library collection." Thesis, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-23676.

Full text
Abstract:
Music researchers are seldom at the center of attention as a user group within LIS, so investigations of search possibilities for digitized music manuscript collections from a user perspective are lacking. Here, three digitized music manuscript collections (the Schrank II collection in Dresden, the Utile Dulci collection in Stockholm, and the Düben collection in Uppsala) are examined with regard to the accessibility of their contents to the target user group, in two steps. First, music researchers are asked about their information-seeking process and queries; they are observed through surveys, interviews, and think-aloud protocols. Second, the three retrieval systems are subjected to a performance evaluation by means of precision, recall, and F1 measures. The results show that music researchers seek information through known-item searching, browsing, or subject search, the latter with considerably different subjects than, for example, in the domain of literature. In addition, while music researchers express satisfaction with the discovery systems, the protocol analysis and the performance evaluation show that all three systems have trouble retrieving relevant documents for music-specific queries.
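The evaluation measures named in the abstract can be sketched as follows. This is a generic illustration of per-query precision, recall, and F1, not code from the thesis; the document ids in the example are invented.

```python
def precision_recall_f1(retrieved, relevant):
    """Compute precision, recall, and F1 for one query.

    retrieved: ids of documents returned by the system
    relevant:  ids of documents judged relevant for the query
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: a music-specific query returns 4 documents, 3 of them relevant,
# out of 6 relevant documents in the collection.
p, r, f = precision_recall_f1({"d1", "d2", "d3", "d9"},
                              {"d1", "d2", "d3", "d4", "d5", "d6"})
# p = 0.75, r = 0.5, f = 0.6
```

A discovery system can then be compared across query types (known-item, browsing, subject search) by averaging these per-query scores.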
APA, Harvard, Vancouver, ISO, and other styles
16

Zhang, Hao. "Formulating Complex Queries Using Templates." Thesis, 2009. http://hdl.handle.net/10012/4248.

Full text
Abstract:
While many users have relatively general information needs, users who are familiar with a certain topic may have more specific or complex ones. Such users already have some knowledge of a subject and its concepts, and they need to find information on a specific aspect of a certain entity, such as its cause, its effect, or its relationships to other entities. To address this kind of complex information need, our study investigated the effectiveness of topic-independent query templates as a tool for helping users articulate their information needs. A set of query templates, written in a fill-in-the-blanks form, was designed to represent general semantic relationships between concepts, such as cause-effect and problem-solution. We designed a control interface with a single query textbox and an experimental interface with the query templates, and conducted a user study with 30 users. The Okapi information retrieval system was used to retrieve documents in response to the users' queries. The analysis indicates that although users found template-based query formulation less easy to use, queries written with templates performed better than queries written in the control interface's single textbox. Our analysis of a group of users and some specific topics shows that the experimental interface tended to help users create more detailed search queries: users were able to think about different aspects of their complex information needs and to fill in several templates. An interesting direction for future work would be to tune the templates, adapting them to users' specific query requests and avoiding non-relevant templates by automatically selecting related templates from a larger set.
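The fill-in-the-blanks idea described above can be sketched in a few lines. The template wording, the relation names, and the flat keyword query derived for the retrieval system are illustrative assumptions, not the templates used in the study.

```python
# Hypothetical topic-independent templates in the fill-in-the-blanks style:
# each blank holds a concept, and the template name encodes the semantic
# relationship between the concepts.
TEMPLATES = {
    "cause-effect":     "What is the effect of {cause} on {effect}?",
    "problem-solution": "How can {problem} be addressed by {solution}?",
}

def fill_template(name, **blanks):
    """Fill one template's blanks and derive a flat keyword query."""
    question = TEMPLATES[name].format(**blanks)
    # A simple keyword query for the retrieval system: the filled-in
    # concepts plus the relation's terms as context words.
    keywords = " ".join(list(blanks.values()) + name.split("-"))
    return question, keywords

q, kw = fill_template("cause-effect", cause="caffeine", effect="sleep quality")
# q  -> "What is the effect of caffeine on sleep quality?"
# kw -> "caffeine sleep quality cause effect"
```

The point of the design is that the user reasons in terms of the natural-language question while the system receives a query enriched with relation terms.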
APA, Harvard, Vancouver, ISO, and other styles
17

Ghorashi, Seyed Soroush. "Leyline : a provenance-based desktop search system using graphical sketchpad user interface." Thesis, 2011. http://hdl.handle.net/1957/28032.

Full text
Abstract:
While there are powerful keyword search systems that index all kinds of resources, including emails and web pages, people have trouble recalling the semantic facts, such as name, location, edit dates and keywords, that uniquely identify resources in their personal repositories. Reusing information exacerbates this problem. A rarely used approach is to leverage episodic memory of file provenance. Provenance is traditionally defined as "the history of ownership of a valued object". For documents, we consider not only ownership but also the operations performed on the document, especially those that relate it to other people, events, or resources. This thesis investigates the potential advantages of using provenance data in desktop search, and consists of two manuscripts. First, a numerical analysis using field data from a longitudinal study shows that provenance information can effectively identify files and resources in realistic repositories. We introduce the Leyline, the first provenance-based search system that supports dynamic relations between files and resources such as copy/paste, save-as, and file rename. The Leyline allows users to search by drawing queries as graphs in a sketchpad, and overlays provenance information that may help users identify targets or explore information flow. A limited controlled experiment showed that this approach is feasible in terms of time and effort. Second, we explore the design of the Leyline and compare it to previous provenance-based desktop search systems, including their underlying assumptions and focus, search coverage and flexibility, and features and limitations.
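The provenance relations described above can be modeled as a small directed graph and traversed to answer questions like "which files were derived from this one?". This is a minimal sketch, not the Leyline's implementation; the file names and the edge list are invented for illustration.

```python
from collections import deque

# Provenance recorded as (source, relation, target) triples, using the
# kinds of dynamic relations mentioned above: copy/paste, save-as, rename.
EDGES = [
    ("report_v1.doc", "save-as", "report_v2.doc"),
    ("report_v2.doc", "copy-paste", "summary.txt"),
    ("summary.txt", "rename", "abstract.txt"),
]

def provenance_descendants(start, edges):
    """Return every file derived, directly or transitively, from `start`."""
    graph = {}
    for src, _rel, dst in edges:
        graph.setdefault(src, []).append(dst)
    seen, queue = set(), deque([start])
    while queue:  # breadth-first traversal of the provenance graph
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# provenance_descendants("report_v1.doc", EDGES)
# -> {"report_v2.doc", "summary.txt", "abstract.txt"}
```

A sketchpad query graph, as in the Leyline, would then be matched against this relation structure rather than against keywords.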
Graduation date: 2012
APA, Harvard, Vancouver, ISO, and other styles
