Dissertations / Theses on the topic 'Search Log'

Consult the top 50 dissertations / theses for your research on the topic 'Search Log.'

1

Doversten, Martin. "Log Search : En form av datamining." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-159179.

Full text
Abstract:
This report examines the possibility of optimizing the troubleshooting of log files generated when constructing new Volvo trucks. Errors occur when CAD models are stored and managed by the versioning system PDMLink. By developing a new diagnostic tool, Log Search, the troubleshooting process is automated, thereby streamlining the current manual search.
APA, Harvard, Vancouver, ISO, and other styles
2

Mendoza, Rocha Marcelo Gabriel. "Query log mining in search engines." Tesis, Universidad de Chile, 2007. http://www.repositorio.uchile.cl/handle/2250/102877.

Full text
Abstract:
Doctor of Sciences, specialization in Computer Science
The Web is a vast information space where many resources such as documents, images and other multimedia content can be accessed. In this context, several information technologies have been developed to help users satisfy their search needs on the Web, the most widely used of which are search engines. Search engines allow users to find resources by formulating queries and reviewing a list of answers. One of the main challenges for the Web community is to design search engines that allow users to find resources semantically connected with their queries. The enormous size of the Web and the vagueness of the terms most commonly used in query formulation are major obstacles to achieving this goal. In this thesis we propose to explore the user selections recorded in search engine logs, both to learn how users search and to design algorithms that improve the precision of the answers recommended to users. We begin by exploring the properties of these data; this exploration reveals their sparse nature. We also present models that help us understand how users search in search engines. Next, we explore user selections to find useful associations between queries recorded in the logs, concentrating on the design of techniques that allow users to find better queries than their original one. As an application, we design query reformulation methods that help users find more useful terms, improving the representation of their needs. Using document terms, we build vector representations for queries, and by applying clustering techniques we determine groups of similar queries. Using these query groups, we introduce query and document recommendation methods that improve the precision of the recommendations. Finally, we design query classification techniques that find concepts semantically related to the original query. To achieve this, we classify user queries into Web directories. As an application, we introduce methods for the automatic maintenance of the directories.
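To make the clustering step concrete, here is a minimal sketch of the idea under stated assumptions: queries are represented by the terms of documents clicked for them, clustered, and recommendations drawn from the same cluster. The toy click log, the vectorizer and the cluster count are illustrative, not the thesis's actual data or settings.

```python
# Minimal sketch: represent queries by the terms of their clicked documents,
# cluster them, and recommend queries from the same cluster.
# The toy `click_log` data and all names here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# (query, text of a document clicked for that query)
click_log = [
    ("cheap flights", "airline tickets low cost flight deals"),
    ("flight deals", "discount airfare flight booking offers"),
    ("python tutorial", "learn python programming beginner guide"),
    ("learn python", "python course programming exercises"),
]

queries = sorted({q for q, _ in click_log})
# Query "document": concatenation of clicked-document terms for that query.
docs = [" ".join(d for q2, d in click_log if q2 == q) for q in queries]

vectors = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

def recommend(query: str) -> list[str]:
    """Recommend queries from the same cluster as `query`."""
    cluster = labels[queries.index(query)]
    return [q for q, c in zip(queries, labels) if c == cluster and q != query]

print(recommend("cheap flights"))  # e.g. ['flight deals']
```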
APA, Harvard, Vancouver, ISO, and other styles
3

Rajabli, Nijat. "Improving Biometric Log Detection with Partitioning and Filtering of the Search Space." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-101629.

Full text
Abstract:
Tracking tree logs from the harvesting site to the processing site is a legal requirement for timber-based industries, for social and economic reasons. Biometric tree log detection systems use images of the tree logs to track them, checking whether a given log image matches any of the logs registered in the system. However, as the number of registered tree logs in the database increases, the number of pairwise comparisons, and consequently the search time, increases proportionally. A growing search space degrades the accuracy and the response time of matching queries and slows down the tracking process, costing time and resources. This work introduces database filtering and partitioning approaches based on discriminative log-end features to reduce the search space of biometric log identification algorithms. In this study, 252 unique log images are used to train and test models for extracting features from the log images and for filtering and clustering a database of logs. Experiments are carried out to show the end-to-end accuracy and speed-up impact of the individual approaches as well as combinations thereof. The findings of this study indicate that the proposed approaches are suited to speeding up tree log identification systems and highlight further opportunities in this field.
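As an illustration of the partitioning idea, the following sketch clusters a gallery of log-end feature vectors offline and restricts matching to the probe's nearest clusters. The feature array, cluster count and dimensionality are assumptions; the thesis's actual feature extraction is not shown.

```python
# Minimal sketch of the partitioning idea: cluster the gallery of log-end
# feature vectors offline, then match a probe only against the logs in its
# nearest cluster(s) instead of the whole database. Feature extraction is
# abstracted away; `features` is an assumed (n_logs, dim) array.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))        # placeholder log-end descriptors
log_ids = np.arange(500)

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(features)

def candidate_logs(probe: np.ndarray, n_probe_clusters: int = 2) -> np.ndarray:
    """Return IDs of logs in the clusters closest to the probe descriptor."""
    dists = np.linalg.norm(kmeans.cluster_centers_ - probe, axis=1)
    nearest = np.argsort(dists)[:n_probe_clusters]
    mask = np.isin(kmeans.labels_, nearest)
    return log_ids[mask]          # pairwise matching now runs on this subset

probe = rng.normal(size=64)
print(f"search space reduced to {candidate_logs(probe).size} of {log_ids.size} logs")
```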
APA, Harvard, Vancouver, ISO, and other styles
4

Broccolo, Daniele <1984>. "Query log based techniques to improve the performance of a web search engine." Doctoral thesis, Università Ca' Foscari Venezia, 2014. http://hdl.handle.net/10579/4635.

Full text
Abstract:
Every user leaves traces of her/his behaviour when she/he surfs the Web. All the usage data generated by users is stored in the logs of several web applications, and such logs can be used to extract useful knowledge for enhancing and improving the performance of online services. Search Engines (SEs), too, store usage information in so-called query logs, which can be used in different ways to improve the SE user experience. In this thesis we focus on improving the performance of a SE, in particular its effectiveness and efficiency, through query log mining. We propose to enhance the performance of SEs by introducing a novel Query Recommender System. We prove that it is possible to decrease the length of a user's query session by unloading from the SE part of the queries that the user submits in order to refine his initial search. This approach helps the user find what she/he is searching for in a shorter period of time, while at the same time decreasing the number of queries that the SE must process, and thus the overall server load. We also discuss how to enhance SE efficiency by optimizing the use of its computational resources. The knowledge extracted from a query log is used to dynamically adjust the query processing method by adapting the pruning strategy to the SE load. In particular, query logs make it possible to build a regression model that predicts the response time of any query when different pruning strategies are applied during query processing. The prediction is used to ensure a minimum quality of service when the system is heavily loaded, by trying to process the enqueued queries by a given deadline. Our study also addresses the effectiveness of query results by comparing their quality when dynamic pruning is adopted to reduce query processing times. Finally, we study how response times and results vary when, in the presence of high load, processing is either interrupted after a fixed time threshold elapses or dropped completely. Moreover, we introduce a novel query dropping strategy based on the same query performance predictors discussed above.
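A minimal sketch of the response-time prediction step described above, assuming a few cheap pre-retrieval features; the feature set, synthetic data and linear model are illustrative stand-ins for the thesis's actual predictors.

```python
# Minimal sketch of query efficiency prediction: learn a regression model that
# maps cheap pre-retrieval features of a query (extracted from the query log
# and index statistics) to its processing time, so the engine can pick a
# pruning strategy under load. Features and data here are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
# Assumed features: number of terms, summed posting-list lengths, max IDF.
X = np.column_stack([
    rng.integers(1, 6, n),            # query length in terms
    rng.lognormal(10, 1, n),          # total posting-list length
    rng.uniform(1, 12, n),            # max inverse document frequency
])
y = 0.02 * X[:, 1] ** 0.5 + 5 * X[:, 0] + rng.normal(0, 5, n)  # synthetic ms

model = LinearRegression().fit(X, y)

def meets_deadline(query_features: np.ndarray, deadline_ms: float) -> bool:
    """Use the exact strategy only if the query is predicted to meet the
    deadline; otherwise fall back to a more aggressive pruning strategy."""
    return float(model.predict(query_features.reshape(1, -1))[0]) <= deadline_ms

print(meets_deadline(np.array([3, 40000.0, 7.5]), deadline_ms=50.0))
```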
APA, Harvard, Vancouver, ISO, and other styles
5

Jadhav, Ashutosh. "Knowledge Driven Search Intent Mining." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1464464707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dubuc, Clémence. "A Real-time Log Correlation System for Security Information and Event Management." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300452.

Full text
Abstract:
Correlating several events within a period of time is a necessity for a threat detection platform. In the case of multistep attacks (attacks characterized by a sequence of executed commands), it allows the different steps to be detected one by one and correlated to raise an alert. It also allows abnormal behaviour on the IT system to be detected, for example multiple suspicious actions performed by the same account. The correlation of security events increases the security of the system and reduces the number of false positives. Events are correlated by means of pre-existing correlation rules. The goal of this thesis is to evaluate the feasibility of using a correlation engine based on Apache Spark. The current correlation system needs to be replaced because it is not scalable, it cannot handle all the incoming data, and it cannot perform some types of correlation, such as aggregating events by attributes or counting cardinality. The novelty is the improvement of the performance and the correlation capabilities of the system. Two systems for correlating events are proposed in this project. The first is based on Apache Spark Structured Streaming and analyses the flow of security logs in real time. As its results were not satisfactory, a second system was implemented. It takes a more traditional approach, storing the logs in an Elasticsearch cluster and running correlation queries against it. In the end, both systems are able to correlate the logs of the platform. Nevertheless, the system based on Apache Spark uses too many resources per correlation rule, and it is too expensive to launch hundreds of correlation queries at the same time. For these reasons, the system based on Elasticsearch is preferred and is implemented in the workflow.
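As a rough illustration of the second, Elasticsearch-based approach, the sketch below runs a hypothetical correlation rule as an aggregation query: count failed logins per account over the last five minutes and flag outliers. The index name, field names and threshold are assumptions, not the thesis's actual rules.

```python
# Hypothetical correlation rule against an Elasticsearch cluster: count
# failed logins per account in the last 5 minutes and alert on outliers.
# Index and field names are illustrative assumptions.
import requests

query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"event.action": "login-failed"}},
                {"range": {"@timestamp": {"gte": "now-5m"}}},
            ]
        }
    },
    "aggs": {
        "per_account": {"terms": {"field": "user.name", "size": 100}}
    },
}

resp = requests.post(
    "http://localhost:9200/security-logs-*/_search", json=query, timeout=10
).json()

THRESHOLD = 10
for bucket in resp["aggregations"]["per_account"]["buckets"]:
    if bucket["doc_count"] >= THRESHOLD:
        print(f"ALERT: {bucket['key']} had {bucket['doc_count']} failed logins")
```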
APA, Harvard, Vancouver, ISO, and other styles
7

Ekman, Niklas. "Handling Big Data using a Distributed Search Engine : Preparing Log Data for On-Demand Analysis." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-222373.

Full text
Abstract:
Big data are datasets that are very large and computationally complex. With an increasing volume of data, even a trivial processing task can become challenging. Companies collect data at a fast rate, but knowing what to do with the data can be hard. A search engine is a system that indexes data, making it efficiently queryable by users. When a bug occurs in a computer system, log data is consulted in order to understand why, but processing big log data can take a long time. The purpose of this thesis is to investigate, compare and implement a distributed search engine that can prepare log data for analysis, which will make it easier for a developer to investigate bugs. There are three popular search engines: Apache Lucene, Elasticsearch and Apache Solr. Elasticsearch and Apache Solr are built as distributed systems, making them capable of handling big data. Requirements were established through interviews. Log data totalling 40 GB was provided to be indexed in the selected search engine. The log data was generated in a proprietary binary format and had to be decoded first. The distributed search engines were evaluated based on distributed architecture, text analysis, indexing and querying. Elasticsearch was selected for implementation. A cluster was set up on Amazon Web Services, and tests were executed to determine how different configurations performed. Indexing software was written to transfer data to the cluster. The results were verified through a case study with participants from the stakeholder.
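A minimal sketch of the kind of indexing software described, assuming the proprietary binary logs have already been decoded into JSON-like records; it streams them to Elasticsearch's _bulk endpoint. The record layout, index name and endpoint are assumptions.

```python
# Hypothetical indexer: send decoded log records to Elasticsearch in batches
# via the _bulk API. The decode step and field names are assumptions.
import json
import requests

def decoded_records():
    """Stand-in for decoding the proprietary binary log format."""
    yield {"@timestamp": "2017-01-01T12:00:00Z", "level": "ERROR", "msg": "boom"}
    yield {"@timestamp": "2017-01-01T12:00:01Z", "level": "INFO", "msg": "ok"}

def _flush(buf, url):
    body = "\n".join(buf) + "\n"          # NDJSON: action line, then document
    resp = requests.post(f"{url}/_bulk", data=body,
                         headers={"Content-Type": "application/x-ndjson"},
                         timeout=30)
    resp.raise_for_status()
    buf.clear()

def bulk_index(records, index="logs", url="http://localhost:9200", batch=500):
    buf = []
    for rec in records:
        buf.append(json.dumps({"index": {"_index": index}}))
        buf.append(json.dumps(rec))
        if len(buf) >= 2 * batch:
            _flush(buf, url)
    if buf:
        _flush(buf, url)

bulk_index(decoded_records())
```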
APA, Harvard, Vancouver, ISO, and other styles
8

Tolomei, Gabriele <1980>. "Enhancing web search user experience : from document retrieval to task recommendation." Doctoral thesis, Università Ca' Foscari Venezia, 2011. http://hdl.handle.net/10579/1231.

Full text
Abstract:
The World Wide Web is the biggest and most heterogeneous database that humans have ever built, making it the place of choice where people search for any sort of information through Web search engines. Indeed, users increasingly ask Web search engines to support their daily tasks (e.g., "planning holidays", "obtaining a visa", "organizing a birthday party", etc.), instead of simply looking for Web pages. In this Ph.D. dissertation, we sketch and address two core research challenges that we claim next-generation Web search engines should tackle to enhance the user search experience, namely Web task discovery and Web task recommendation. Both challenges rely on an actual understanding of user search behaviors, which can be achieved by mining knowledge from query logs. The search processes of many users are analyzed at a higher level of abstraction, namely from a "task-by-task" instead of a "query-by-query" perspective, thereby producing a model of user search tasks, which in turn can be used to support people during their daily "Web lives".
APA, Harvard, Vancouver, ISO, and other styles
9

Holm, Christer, and Andreas Larsson. "A Model for Multiperiod Route Planning and a Tabu Search Method for Daily Log Truck Scheduling." Thesis, Linköping University, Department of Mathematics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2147.

Full text
Abstract:

The transportation cost of logs from forest to customers is a large part of the overall cost for the Swedish forestry industry. Finding good routes from harvesting points to saw and pulp mills is a complex task, where the total number of feasible routes is extremely high. In this thesis we present two methods for log truck scheduling.

The first method finds, from a given set of routes, the most valuable subset that fulfils the customers' demand. We use a model similar to the set partitioning problem and a method referred to as composite pricing coupled with Branch and Bound. The composite pricing based method prices the routes (columns) and chooses the most valuable ones, which are then added to the LP relaxation. Once an LP optimum is found, the Branch and Bound method is used to find an integer optimal solution. We have tested this on a case of realistic size.
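For illustration, the underlying model can be sketched as a set partitioning problem; the notation below is ours, a generic sketch rather than the thesis's exact formulation, with R the set of candidate routes, c_j the value of route j, D the set of customer demands, and a_ij = 1 if route j serves demand i:

```latex
\max \sum_{j \in R} c_j x_j
\quad \text{s.t.} \quad
\sum_{j \in R} a_{ij} x_j = 1 \quad \forall i \in D,
\qquad x_j \in \{0, 1\} \quad \forall j \in R
```

Composite pricing supplies promising columns (routes) to the LP relaxation of this model; Branch and Bound then restores integrality.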

The second method is a tabu search heuristic. Here, the purpose is to create efficient, high-quality routes from a given number of trips (referred to as predefined trips). From a start solution, tabu search systematically generates new solutions. This method was tested on a small problem and on a five times larger problem to study how the size of the problem affected the result. It was also tested and compared on two cases in which the backhauling possibilities (i.e. instead of travelling empty, the truck picks up another load on the return trip) had and had not been considered. The composite pricing based method and the tabu search method proved to be very useful for this kind of scheduling.
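A generic tabu search skeleton of the kind described, reduced to its core loop; the toy neighbourhood and cost function are illustrative, not the thesis's route representation.

```python
# Generic tabu search skeleton: from a start solution, repeatedly move to the
# best admissible (non-tabu) neighbour, remembering recent solutions in a
# fixed-length tabu list. The toy problem below is purely illustrative.
from collections import deque

def tabu_search(start, neighbours, cost, iters=200, tenure=10):
    best = current = start
    tabu = deque([start], maxlen=tenure)     # recently visited solutions
    for _ in range(iters):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best non-tabu neighbour
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best

# Toy usage: minimise a one-dimensional cost over integer "solutions".
cost = lambda x: (x - 42) ** 2
neighbours = lambda x: [x - 3, x - 1, x + 1, x + 3]
print(tabu_search(start=0, neighbours=neighbours, cost=cost))  # -> 42
```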

APA, Harvard, Vancouver, ISO, and other styles
10

Kong, Wei. "EXPLORING HEALTH WEBSITE USERS BY WEB MINING." Thesis, Universal Access in Human-Computer Interaction. Applications and Services Lecture Notes in Computer Science, 2011, Volume 6768/2011, 376-383, DOI: 10.1007/978-3-642-21657-2_40, 2011. http://hdl.handle.net/1805/2810.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
With the continuous growth of health information on the Internet, providing user-oriented health services online has become a great challenge to health providers. Understanding the information needs of users is the first step to providing tailored health services. The purpose of this study is to examine the navigation behavior of different user groups by extracting their search terms, and to make suggestions for reconstructing the website for more customized Web service. This study analyzed five months of daily access weblog files from one local health provider's website, discovered the most popular general topics and health-related topics, and compared the information search strategies of the patient/consumer and doctor groups. Our findings show that users are not searching for health information as much as was thought. The top two health topics that patients are concerned about are children's health and occupational health. Another topic that both user groups are interested in is medical records. Also, patients and doctors have different search strategies when looking for information on this website: patients go back to the previous page more often, while doctors usually go to the final page directly and then leave without coming back. As a result, some suggestions to redesign and improve the website are discussed; a more intuitive portal and more customized links for both user groups are suggested.
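As a toy illustration of the weblog mining step, the sketch below extracts on-site search terms from Apache-style access logs and counts the most popular ones; the log format and the /search?q= URL scheme are assumptions, not this website's actual layout.

```python
# Toy weblog mining: pull on-site search terms out of Apache-style access
# logs and count the most popular topics. Log format and the /search?q=
# URL scheme are illustrative assumptions.
import re
from collections import Counter
from urllib.parse import parse_qs, urlparse

LINE = re.compile(r'"GET (?P<path>\S+) HTTP/[\d.]+"')

def search_terms(log_lines):
    for line in log_lines:
        m = LINE.search(line)
        if not m:
            continue
        url = urlparse(m.group("path"))
        if url.path == "/search":
            for q in parse_qs(url.query).get("q", []):
                yield q.lower().strip()

sample = [
    '1.2.3.4 - - [01/Jan/2011] "GET /search?q=children+health HTTP/1.1" 200 512',
    '1.2.3.4 - - [01/Jan/2011] "GET /index.html HTTP/1.1" 200 1024',
    '5.6.7.8 - - [01/Jan/2011] "GET /search?q=medical+records HTTP/1.1" 200 512',
]
print(Counter(search_terms(sample)).most_common(5))
```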
APA, Harvard, Vancouver, ISO, and other styles
11

Dantas, Geórgia Geogletti Cordeiro. "A busca e o uso da informação em rede : seguindo o trajeto do internauta em revista científica eletrônica." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/13797.

Full text
Abstract:
Electronic journals are gaining ever more credibility in the field of scientific information, and there are many initiatives for the creation of new journals. However, despite this growth in numbers, there are only a few studies on the information search and use performed in the electronic versions of these products, or on the users who access them. Log analysis is a method that aims to identify the actions of the users of a site through the investigation of the web server's log files. This type of analysis can help in obtaining information regarding the use of an electronic journal, but only a few Brazilian studies have adopted this method. Hence, it is also necessary to evaluate this method as applied to the study of journals. The focus of this research is to analyze information search and use behavior in scientific electronic journals through a log analysis of the journal Psicologia: reflexão e crítica. This work also aims to identify the different forms of access to the journal, to verify the number of accesses and their distribution across the national territory, to verify the frequency of use of the journal, to verify the types of users that visit the journal, and to identify the possible patterns of information search and use behavior. The theoretical basis for this research comprises the concepts of information search and use behavior, the scientific electronic journal, and visibility. The methodology used is the analysis of logs provided by the Scientific Electronic Library Online (SciELO), for gathering quantitative data, and interviews, for gathering qualitative data. Through the applied methodology we reached a general view of the search and use of the journal, determining the visiting frequency, who uses the journal, what their actions in the journal are, and where these users come from. We also identified eight patterns of informational behavior: the sequences of actions most likely to be performed by the users of the journal.
APA, Harvard, Vancouver, ISO, and other styles
12

Roscheck, Michael Thomas. "Detection Likelihood Maps for Wilderness Search and Rescue: Assisting Search by Utilizing Searcher GPS Track Logs." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3312.

Full text
Abstract:
Every year there are numerous cases of individuals becoming lost in remote wilderness environments. Principles of search theory have become a foundation for developing more efficient and successful search and rescue methods. Measurements can be taken that describe how easy a search object is to detect. These estimates allow the calculation of the probability of detection: the probability that an object would have been detected if it were in the area. This value only provides information about the search area as a whole; it does not provide details about which portions were searched more thoroughly than others. Ground searchers often carry portable GPS devices, and their resulting GPS track logs have recently been used to fill in part of this knowledge gap. We created a system that provides a detection likelihood map estimating the probability that each point in a search area was seen well enough to detect the search object if it was there. This map will be used to aid ground searchers as they search an assigned area, providing real-time feedback of what has been "seen." The maps will also assist incident commanders as they assess previous searches and plan future ones, by providing more detail than is available by viewing GPS track logs.
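A minimal sketch of how such a map can be built from track logs, under an assumed detection model: each GPS fix raises the detection probability of nearby grid cells, and repeated glimpses combine as independent events. Cell size, sweep radius and per-glimpse probability are illustrative.

```python
# Minimal sketch of a detection likelihood map: each GPS track point raises
# the detection probability of nearby grid cells, and independent glimpses
# combine as p = 1 - prod(1 - p_glimpse). The detection model (range and
# per-glimpse probability) is an illustrative assumption.
import numpy as np

CELL = 10.0          # grid cell size in metres
RANGE = 30.0         # assumed effective sweep radius in metres
P_GLIMPSE = 0.3      # assumed per-pass detection probability within range

grid = np.zeros((100, 100))                 # "probability seen" per cell
xs = (np.arange(100) + 0.5) * CELL          # cell-centre coordinates
X, Y = np.meshgrid(xs, xs, indexing="ij")

def add_track_point(px: float, py: float) -> None:
    """Fold one searcher GPS fix into the map."""
    dist = np.hypot(X - px, Y - py)
    p = np.where(dist <= RANGE, P_GLIMPSE, 0.0)
    grid[:] = 1.0 - (1.0 - grid) * (1.0 - p)  # combine independent glimpses

# Toy track: a straight pass through the area.
for t in np.linspace(100, 900, 50):
    add_track_point(t, 500.0)

print(f"mean detection likelihood: {grid.mean():.3f}")
```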
APA, Harvard, Vancouver, ISO, and other styles
13

Håkansson, Gunnar. "Applikation för sökning i databaslogg samt design av databas." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-23462.

Full text
Abstract:
This report considers a system where a database is used as the back-end storage for logging. A suitable method for extracting information from the logs was missing, and the database design needed improvement for log searching. An application for extracting and filtering the logs was created, and an evaluation of how the database could be improved was performed. Both parts were done in one project since they were closely connected: the application would use the database. Since I could not make arbitrary changes to the database, only relatively limited changes were made in practice; larger changes were evaluated theoretically. The application was made against the existing database design, with one exception: a view was added. The report covers indexes and other methods for speeding up database searches. A method for fetching data inside an interval in a database was developed and is described in the report. The method searches for all data where the value of a column falls inside an interval, given a database that is ordered, or almost ordered, on that column. The method gives inexact answers if the database is only almost ordered on that column, but it is faster than a corresponding exact search.
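The interval method can be sketched as follows, under assumptions: binary-search the bounds on the (almost) ordered column and scan only a padded slice, accepting that rows displaced beyond the slack are missed.

```python
# Sketch of the interval-search idea: if rows are (almost) ordered on a
# column, binary-search the interval bounds and scan only that slice instead
# of the whole table. On an almost-ordered column the slice may miss rows
# that drifted outside it, giving a fast but inexact answer. All data and
# the slack margin are illustrative assumptions.
import bisect

# Rows "almost ordered" by key (one out-of-order entry).
rows = [(1, "a"), (3, "b"), (2, "c"), (5, "d"), (8, "e"), (9, "f")]
keys = [k for k, _ in rows]

def interval_search(lo, hi, slack=1):
    """Rows with lo <= key <= hi, scanning a padded slice of the table."""
    start = max(bisect.bisect_left(keys, lo) - slack, 0)
    stop = min(bisect.bisect_right(keys, hi) + slack, len(rows))
    return [r for r in rows[start:stop] if lo <= r[0] <= hi]

print(interval_search(2, 5))   # may miss rows displaced beyond the slack
```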
APA, Harvard, Vancouver, ISO, and other styles
14

Valli, Paola. "Concordancing Software in Practice: An investigation of searches and translation problems across EU official languages." Doctoral thesis, Università degli studi di Trieste, 2013. http://hdl.handle.net/10077/8591.

Full text
Abstract:
2011/2012
The present work reports on an empirical study aimed at investigating translation problems across multiple language pairs. In particular, the analysis aims to develop a methodological approach to studying concordance search logs, taken as manifestations of translation problems and, in a wider perspective, information needs. As search logs are a relatively unexplored data type within translation process research, a controlled environment was needed in order to carry out this exploratory analysis without incurring additional problems caused by an excessive number of variables. The logs were collected at the European Commission and contain a large volume of searches from English into 20 EU languages that staff translators working for the EU translation services submitted to an internally available multilingual concordancer. The study attempts to (i) identify differences in the searches (i.e. problems) based on the language pairs; and (ii) group problems into types. Furthermore, the interactions between concordance users and the tool itself have been examined to provide a translation-oriented perspective on the domain of Human-Computer Interaction. The study draws on the literature on translation problems, Information Retrieval and Web search log analysis, starting from the assumption that, in the perspective of concordance searching, translation problems are best interpreted as information needs for which the concordancer is chosen as a form of external support. The structure of a concordance search is examined in all its parts and is eventually broken down into two main components: the 'Search Strategy' component and the 'Problem Unit' component. The former was analyzed using a mainly quantitative approach, whereas the latter was addressed from a more qualitative perspective. The analysis of the Problem Unit takes into account the length of the search strings as well as their content and linguistic form, each addressed with a different methodological approach. Based on the understanding of concordance searches as manifestations of translation problems, a user-centered classification of translation-oriented information needs is developed to account for as many "problem" scenarios as possible. The initial expectation that different languages would experience different problems could not be verified: the 20 language pairs considered in this study behaved consistently on many levels and, due to the specific research environment, no definite conclusions could be reached as regards the role of the language-family criterion in problem identification. The analysis of the 'Problem Unit' component has highlighted automated support for translating Named Entities as a possible area for further research in translation technology and the development of computer-based translation support tools. Finally, the study indicates (concordance) search logs as an additional data type to be used in experiments on the translation process and for triangulation purposes, while drawing attention to the concordancer as a type of translation aid to be further fine-tuned for the needs of professional translators.
XXV Cycle
APA, Harvard, Vancouver, ISO, and other styles
15

Ureten, Suzan. "Single and Multiple Emitter Localization in Cognitive Radio Networks." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35692.

Full text
Abstract:
Cognitive radio (CR) is often described as a context-intelligent radio, capable of changing its transmit parameters dynamically based on interaction with the environment in which it operates. The work in this thesis explores the problem of using received signal strength (RSS) measurements taken by a network of CR nodes to generate an interference map of a given geographical area and to estimate the locations of multiple primary transmitters operating simultaneously in the area. A probabilistic model of the problem is developed, and algorithms to address the location estimation challenges are proposed. Three approaches are proposed to solve the localization problem. The first is based on estimating the locations from the generated interference map when no information about the propagation model or any of its parameters is available. The second is based on approximating the maximum likelihood (ML) estimate of the transmitter locations with a grid search when the model is known and its parameters are available. The third approach also requires knowledge of the model parameters, but is based on generating samples from the joint posterior of the unknown location parameters with Markov chain Monte Carlo (MCMC) methods, as an alternative to the highly computationally complex grid search. For the RF cartography generation problem, we study global and local interpolation techniques, specifically Delaunay triangulation based techniques, as the use of an existing triangulation provides a computationally attractive solution. We present a comparative performance evaluation of these interpolation techniques in terms of RF field strength estimation and emitter localization. Even though the estimates obtained from the generated interference maps are less accurate than the ML estimates, the rough estimates can be used to initialize a more accurate algorithm such as the MCMC technique, reducing its complexity. The complexity of ML estimators based on a full grid search is also addressed by various types of iterative grid search methods. One challenge in applying the ML estimation algorithm to the multiple-emitter localization problem is that it requires a pdf approximation for sums of log-normal random variables in the likelihood calculation at each grid location. This motivated our investigation of the sum-of-log-normals approximations studied in the literature, in order to select the approximation best suited to our model assumptions. As a final extension of this work, we propose our own approximation, based on fitting a distribution to a set of simulated data, and compare it with the well-known Fenton-Wilkinson approximation, a simple and computationally efficient approach that fits a log-normal distribution to a sum of log-normals by matching the first and second central moments of the random variables. We demonstrate that the location estimation accuracy of the grid search technique obtained with our proposed approximation is higher than that obtained with Fenton-Wilkinson's in many different scenarios.
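For reference, a small sketch of the Fenton-Wilkinson step mentioned above: fit a single log-normal to a sum of independent log-normals by matching the mean and variance of the sum. The input parameters are illustrative.

```python
# Fenton-Wilkinson approximation: fit a single log-normal LN(mu_z, sigma_z^2)
# to the sum of independent log-normals LN(mu_i, sigma_i^2) by matching the
# mean and variance of the sum. Input parameters are illustrative.
import numpy as np

def fenton_wilkinson(mu, sigma):
    """Return (mu_z, sigma_z) of the fitted log-normal for sum_i X_i."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    mean = np.sum(np.exp(mu + sigma**2 / 2))                        # E[sum]
    var = np.sum((np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2))  # Var[sum]
    sigma_z2 = np.log(1 + var / mean**2)
    mu_z = np.log(mean) - sigma_z2 / 2
    return mu_z, np.sqrt(sigma_z2)

mu_z, sigma_z = fenton_wilkinson(mu=[0.0, 0.5, 1.0], sigma=[0.8, 0.8, 0.8])

# Sanity check against Monte Carlo.
rng = np.random.default_rng(0)
samples = rng.lognormal([0.0, 0.5, 1.0], 0.8, size=(100_000, 3)).sum(axis=1)
print(f"FW mean {np.exp(mu_z + sigma_z**2 / 2):.3f} vs MC {samples.mean():.3f}")
```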
APA, Harvard, Vancouver, ISO, and other styles
16

Dennis, Johansson. "Search Engine Optimization and the Long Tail of Web Search." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-296388.

Full text
Abstract:
In the subject of search engine optimization, many methods exist and many aspects are important to keep in mind. This thesis studies the relation between keywords and website ranking in Google Search, and how one can create the biggest positive impact. Keywords with smaller search volumes are called "long tail" keywords, and they bear the potential to expand the visibility of a website to a larger crowd by increasing its rank for the large fraction of keywords that may not be common on their own, but together make up a large share of total web searches. This thesis analyzes where on a web page these keywords should be placed, and a case study is performed in which the goal is to increase the rank of a website with knowledge from previous tests in mind.
APA, Harvard, Vancouver, ISO, and other styles
17

Vaziri, Farzad <1986>. "Discovering Single-Query Tasks from Search Engine Logs." Master's Degree Thesis, Università Ca' Foscari Venezia, 2014. http://hdl.handle.net/10579/5389.

Full text
Abstract:
When a user uses a search engine to find information, the engine returns a list of links for each query the user submits. Search engines provide these results to the user and, at the same time, save log information about each user's queries. This log information is processed with query mining methods and algorithms to extract useful knowledge. Previous work defined the concept of a "task": a set of possibly non-contiguous queries that refer to the same information need. Clustering methods have been used to discover collective tasks by aggregating similar user tasks, possibly performed by distinct users. All these studies took into account only queries issued within a single session, until the concept of a "mission" was introduced to find tasks spanning different sessions. Until now, however, all studies have tried to cluster and aggregate queries issued for the same information need, and none has considered queries issued independently, without any relation to other queries: queries we can call singleton, or single-task, queries. Finding these queries could bring further benefits for search engines, either by eliminating them from task-detection pipelines or by studying them further to improve how search engines respond to them. Our contribution is to use classification methods to separate single-task queries from multi-task ones. Based on the information saved in query logs, we define features for a single query and use these features in classification algorithms to achieve our goal.
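A minimal sketch of the classification step under stated assumptions: per-query features derived from the log feed a standard classifier that separates single-task queries from the rest. The features below (session length, time gaps, lexical overlap) and the synthetic labels are illustrative guesses, not the thesis's actual feature set.

```python
# Minimal sketch of the classification step: given per-query features derived
# from the log, train a classifier to label queries as single-task or not.
# Features and toy data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.integers(1, 20, n),        # queries in the same session
    rng.exponential(60, n),        # seconds since previous query
    rng.uniform(0, 1, n),          # term overlap with neighbouring queries
])
# Synthetic labels: isolated, low-overlap queries tend to be single-task.
y = ((X[:, 0] < 4) & (X[:, 2] < 0.3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```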
APA, Harvard, Vancouver, ISO, and other styles
18

Aydin, Mehmet. "An exploratory analysis of Village Search Operations /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FAydin.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Comandini, Ubaldo Visco. "Viral parameters influencing clinical long-term non progression in HIV-1 infected subjects /." Stockholm, 1998. http://diss.kib.ki.se/search/diss.se.cfm?19980925coma.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Tata, Ramarao. "SEARCH, CHARACTERIZATION, AND PROPERTIES OF BROWN DWARFS." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3439.

Full text
Abstract:
Brown dwarfs (BD) were mere theoretical astrophysical objects for more than three decades (Kumar (1962)) until their first observational detection in 1995 (Rebolo et al. (1995), Nakajima et al. (1995)). These objects are intermediate in mass between stars and planets. Since their observational discovery these objects have been studied thoroughly and holistically. Various methods for searching for and characterizing these objects in different regions of the sky have been put forward and tested with great success. Theoretical models describing their physical, atmospheric and chemical processes and properties have been proposed and validated with a large number of observational results. The work presented in this dissertation is a compilation of synoptic studies of ultracool dwarfs (UDs)¹:

- A search for wide binaries around solar-type stars in the Upper Scorpius OB association (Upper Sco) indicates (the survey is not yet complete) a deficit of BD binaries at these large separations (< 5 AU).
- Twenty-six new UDs were discovered at low galactic latitudes in our survey of archival data, using a novel technique based on reduced proper motion.
- Six field UDs were discovered by spectroscopic follow-up of candidates selected from a deep survey.
- Optical interferometry was used to independently determine the orbit of the companion of HD 33636, which had initially been determined using Hubble Space Telescope (HST) astrometry and radial velocity; some inconsistency was found in the HST-determined orbit and mass.
- Optical linear polarization in UDs was used to investigate the dust properties in their atmospheres. A trend in polarization as predicted by theoretical models was validated, and atmospheric dust grain sizes and projected rotational velocities for these objects were estimated.

Comprehensive studies of UDs are proving to be crucial not only for our understanding of UDs but also for star and planet formation, as brown dwarfs represent their lower and upper mass boundaries, respectively.

¹ We use the term "ultracool dwarfs" because the mass of most of the objects mentioned, which is required to classify an object as a brown dwarf, is unknown. We define objects later than M7 as ultracool dwarfs.
Ph.D.
Department of Physics
Sciences
Physics PhD
APA, Harvard, Vancouver, ISO, and other styles
21

Lafuente, Martinez Cristina. "Essays on long-term unemployment in Spain." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31085.

Full text
Abstract:
This thesis comprises three essays relating to long-term unemployment in Spain. The first chapter is a methodological analysis of the main dataset used throughout the thesis. The second and third chapters provide two applications of the dataset for the study of long-term unemployment. The methodology in these chapters can easily be adapted to study unemployment in other countries.

Chapter 1. On the use of administrative data for the study of unemployment. Social security administrative data are increasingly becoming available in many countries. These are very attractive data, as they have a long panel structure (large N, large T) and allow many different variables to be measured with higher precision. Because of their nature they can capture aspects that are usually hidden due to the design or timing of survey data. However, administrative data are not ready to be used for labour market research, especially studies involving unemployment. The main reason is that administrative data only capture registered unemployment, and in some cases only those receiving unemployment benefits. The gap between total unemployment and registered unemployment is constant neither across worker characteristics nor over time. In this paper I augment Spanish social security administrative data by adding missing unemployment spells using information from the institutional framework. I compare the resulting unemployment rate to that of the Labour Force Survey, showing that the two are comparable and thus that the administrative dataset is useful for labour market research. I also explore how the administrative data can be used to study some important aspects of the labour market that the Labour Force Survey cannot capture. Administrative data can also be used to overcome some of the problems of the Labour Force Survey, such as changes in the structure of the survey. This paper aims to provide a comprehensive guide on how to adapt administrative datasets to make them useful for studying unemployment.

Chapter 2. Unemployment duration variance decomposition à la ABS: evidence from Spain. Existing studies of unemployment duration typically use self-reported information from labour force surveys. We revisit this question using precise information on spells from administrative data. We follow the recent method proposed by Alvarez, Borovickova and Shimer (2015) for estimating the different components of the duration of unemployment using administrative data, which has been applied to Austria. In this paper we apply the same method (the ABS method hereafter) to Spain using Spanish social security data. Administrative data have many advantages compared to Labour Force Survey data, but we note some incompleteness that needs to be addressed before the data can be used for unemployment analysis (e.g., unemployed workers who run out of unemployment insurance have no labour market status in the data). The degree and nature of such incompleteness are country-specific and are particularly important in Spain. Following Chapter 1, we deal with these data issues in a systematic way by using information from the Spanish LFS data as well as institutional information. We hope that our approach will provide a useful way to apply the ABS method in other countries. Our findings are: (i) the unemployment decomposition is quite similar in Austria and Spain, especially when the effect of fixed-term contracts in Spain is minimized; (ii) the constant component is the most important one, while (total) heterogeneity and duration dependence are roughly comparable; and (iii) we do not find big differences in the contribution of the different components along the business cycle.

Chapter 3. Search capital and unemployment duration. I propose a novel mechanism called search capital to explain long-term unemployment patterns across different ages: workers who have been successful in finding jobs in the recent past become more efficient at finding jobs in the present. Search ability increases with search experience and depreciates with tenure if workers do not search often enough. This leaves young workers (who have not gained enough search experience) and older workers in a disadvantaged position, making them more likely to suffer long-term unemployment. I focus on the case of Spain, as its dual labour market structure favours the identification of search capital. I provide empirical evidence that search capital affects unemployment duration and wages at the individual level. I then propose a search model with search capital and calibrate it using Spanish administrative data. The addition of search capital helps the model match the dynamics of unemployment and job finding rates in the data, especially for younger workers.
APA, Harvard, Vancouver, ISO, and other styles
22

Richardson, James. "Targeted wage subsidies and long-term unemployment : theory and policy evaluation." Thesis, London School of Economics and Political Science (University of London), 1999. http://etheses.lse.ac.uk/1531/.

Full text
Abstract:
Prolonged experience of high and long-term unemployment has led many governments to a renewed interest in active labour market policies. In particular, targeted wage subsidies have been seen as a means both of directly getting long-term unemployed people into work and of improving their future prospects of finding and keeping jobs. We examine three issues. Firstly, we look at the macroeconomic theory of targeted wage subsidies and, to a lesser extent, job search assistance, within efficiency wage, union bargaining and search theoretic frameworks. Subsidies directly increase labour demand, but we also find that their effectiveness is enhanced by general equilibrium effects from targeting: wage pressure is reduced, and the average quality of the unemployed pool rises as long-term unemployed workers are removed from it, increasing the incentives for other firms to open vacancies. Secondly, we address the optimal degree of policy targeting, using an extension of the Mortensen-Pissarides job creation and destruction model. We argue that there are real gains to targeting the long-term unemployed, but also diminishing returns; hence, as the level of policy expenditure rises, the extent of targeting should fall. Simulating the model for the UK, we find that policy could have a significant impact on equilibrium unemployment, with more modest welfare gains. Finally, we look at longer-term employability effects by evaluating the Australian Special Youth Employment Training Program (SYETP). Controlling for selection bias using a bivariate probit, we find that participation increased the chances of having a job by 26% between 8 and 13 months after subsidy expiry, and 20% a year later. Much of this gain arose from retention of initially subsidised jobs, but even excluding this, participants were significantly more likely to be employed in subsequent years than if they had not gone on the programme.
APA, Harvard, Vancouver, ISO, and other styles
23

Timmons, Ashley. "Search for sterile neutrinos with the MINOS long-baseline experiment." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/search-for-sterile-neutrinos-with-the-minos-longbaseline-experiment(a4cf8449-b521-4fec-8750-464e41b4c2a8).html.

Full text
Abstract:
This thesis presents a search for sterile neutrinos using data taken with the MINOS experiment between 2005 and 2012. MINOS is a two-detector on-axis experiment based at Fermilab. The NuMI neutrino beam encounters the MINOS Near Detector 1 km downstream of the neutrino-production target before travelling a further 734 km through the Earth's crust to reach the Far Detector, located at the Soudan Underground Laboratory in northern Minnesota. By searching for oscillations driven by a large mass splitting, MINOS is sensitive to the existence of sterile neutrinos, looking for energy-dependent perturbations in a charged-current sample as well as for any relative deficit in neutral-current events between the Far and Near Detectors. This thesis discusses the novel analysis that enabled a search for sterile neutrinos covering five orders of magnitude in the mass splitting, setting limits in previously unexplored regions of the sterile-neutrino parameter space; a 3+1-flavour phenomenological model was used to extract the parameter limits. The results presented in this thesis are sensitive to the sterile-neutrino parameter space suggested by the LSND and MiniBooNE experiments.
APA, Harvard, Vancouver, ISO, and other styles
24

Love, Jeremy R. "A search for technicolor at the Large Hadron Collider." Thesis, Boston University, 2012. https://hdl.handle.net/2144/31586.

Full text
Abstract:
Thesis (Ph.D.)--Boston University
The Standard Model of particle physics provides an accurate description of all experimental data to date. The only unobserved piece of the Standard Model is the Higgs boson, a consequence of the spontaneous breaking of electroweak symmetry by the Higgs mechanism. An alternative to the Higgs mechanism is proposed by Technicolor theories, which break electroweak symmetry dynamically through a new force. Technicolor predicts many new particles, called Technihadrons, that could be observed by experiments at hadron colliders. This thesis presents a search for two of the lightest Technihadrons, the ρT and ωT. The Low-Scale Technicolor model predicts the phenomenology of these new states. The ρT and ωT are produced through quark-antiquark annihilation and couple to Standard Model fermions through the Drell-Yan process, which can result in the dimuon final state. The ρT and ωT preferentially decay to the πT and a Standard Model gauge boson if kinematically allowed; changing the mass of the πT relative to that of the ρT and ωT affects the cross section times branching fraction to dimuons. The ρT and ωT are expected to have masses below about 1 TeV. The Large Hadron Collider (LHC) at CERN outside of Geneva, Switzerland, produces proton-proton collisions with a center-of-mass energy of 7 TeV. The ATLAS detector, a general-purpose high energy physics detector, has been used in this analysis to search for Technihadrons decaying to two muons. We use the ATLAS detector to reconstruct the tracks of muons with high transverse momentum coming from these proton-proton collisions. The dimuon invariant mass spectrum is analyzed above 130 GeV to test the consistency of the observed data with the Standard Model prediction. We observe excellent agreement between our data and the background-only hypothesis, and proceed to set limits on the cross section times branching ratio of the ρT and ωT as a function of their mass using the Low-Scale Technicolor model. Combining the dielectron and dimuon channels, we exclude masses of the ρT and ωT between 130 GeV and 480 GeV at 95% Confidence Level for masses of the πT between 50 GeV and 480 GeV. In addition, for the parameter choice m(πT) = m(ρT/ωT) − 100 GeV, 95% Confidence Level limits are set excluding masses of the ρT and ωT below 470 GeV. This analysis represents the world's best limit on this model to date.
APA, Harvard, Vancouver, ISO, and other styles
25

Cloutier, Jacques. "The search for a new long-range force weaker than gravity /." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Chalmers, Kelsey. "Utilising Big Data in the search for low-value health care." Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/20574.

Full text
Abstract:
Low-value health care provides little benefit relative to its cost. Australian policy makers, health care payers and providers want to reduce its use due to unnecessary costs and harms. These decisions, however, need to be informed by the measurement of low-value care in the context of Australia's mixed public-private health care system. This thesis investigated low-value procedures using routinely collected data and direct measures. Direct measures use patient- or episode-level clinical information to distinguish low-value from appropriate care. Based on our review of the literature, we introduced a framework to classify direct measures as providing either a service-centric or a patient-centric result. The Choosing Wisely campaign publishes clinician-endorsed 'do-not-do' recommendations, and provides a source of potential direct measures. We screened 824 recommendations, and found only a small proportion to be measurable in a hospital-claims data set. We used these and other recommendations to develop 21 measures applicable to private health insurance claims from 376,354 patients (approximately 7% of the Australian privately insured population). There were 14,662 patients with at least one of the 21 procedures in 2014 (the service-centric result, according to our framework). Of these patients, 20.8% to 32.0% had a low-value procedure according to a narrow (more specific) and a broad (more sensitive) set of measures, respectively. We extended this investigation to all payer types using the New South Wales (NSW) Admitted Patient Data Collection, and generally found higher proportions and volumes of low-value procedures in the private sector. In 2014-15, 40.3% of all low-value procedures in NSW were for privately insured patients in private hospitals (relative to 35.6% of all procedures). Despite the limited scope of health care captured by these measures, the work in this thesis has already led to several policy-focussed projects informing governments and payers on low-value care.
APA, Harvard, Vancouver, ISO, and other styles
27

Krafft, Maria. "Non-fatal injuries to car occupants : injury assessment and analysis of impacts causing short- and long-term consequences with special reference to neck injuries /." Stockholm, 1998. http://diss.kib.ki.se/search/diss_se.cfm?19981016kraf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Al-Garni, Sareh D. "Search for long lived isomers in the neutron-rich mass 180 region." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/843292/.

Full text
Abstract:
Nuclei in the A ≈ 180 region were populated and investigated in a series of deep-inelastic reactions involving an 11.4 MeV per nucleon ¹³⁶Xe beam produced by the GSI UNILAC accelerator, impinging on a selection of tantalum, tungsten and rhenium targets. The reaction products were released from both thermal (TIS) and FEBIAD ion sources and subsequently mass-separated using the GSI on-line mass separator. This work concentrates on the observation of gamma rays associated with the decay of the well-known Kπ = 37/2⁻, t₁/₂ = 51.4 min isomer in ¹⁷⁷Hf. Due to the anomalous half-life characteristics and unexpectedly high yield of this decay, it is interpreted as being fed via the beta-decay of a high-K isomer in ¹⁷⁷Lu. By comparing the experimental findings with the results obtained from multi-quasiparticle blocked-BCS-Nilsson calculations (which predict a low-lying state with Kπ = 39/2⁻ in ¹⁷⁷Lu), the proposed decay is suggested to be an energetically favoured Kπ = 39/2⁻ five-quasiparticle state in ¹⁷⁷Lu. A half-life of 7.7 ± 3.0 min is determined for this previously unreported A = 177 beta-decay path, which also involves 89-keV and 1003-keV gamma-ray transitions in association with hafnium X-rays. In addition, two previously unreported transitions (2016 and 2114 keV) were assigned to ¹⁸²Hf as a result of their coincidence with Hf X-rays and the 98-keV 2⁺ → 0⁺ decay of that nucleus.
APA, Harvard, Vancouver, ISO, and other styles
29

Mori, Masamitsu. "Long time supernova simulation and search for supernovae in Super-Kamiokande IV." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Nolan-Miljevic, Jelena. "Long lost storylines : narrative inquiry into the search for a missing parent." Thesis, University of Bristol, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.686185.

Full text
Abstract:
This research explores the narratives and narrative resources connected to the search for a missing parent (SMP) undertaken by people not previously recognised as searchers. The methods used are autoethnography, friendship as inquiry, writing as inquiry and fictional representations. The main research questions are: How do people who have searched for a missing parent create and tell meaningful stories? What resources do they call upon? The findings identified several dominant narratives about the search for a missing parent: the narratives of search, bad place, missing piece, best interests of a child, happy ending and silence. These narratives sustain processes of marginalisation and stigmatisation of lived experience that does not fit within dominant narrative frameworks, which can have adverse effects on the searcher, as five stories of personal experience demonstrate. The inquiry into personal narratives identified that stories of lived experience critique and challenge the state of things offered by dominant narratives, and engage in resistance to and critique of available stories. The personal stories were also written to encourage the reader to think with them (Frank, 1994) and, through that process, to critically examine their own convictions about the SMP. Juxtaposition of the personal and dominant stories outlined the need for more narratives that would empower and support searchers; these new narratives were then written up. The original contributions to knowledge arising from this research are: challenging the concept of search as belonging exclusively to adoption studies; identifying processes of marginalisation and stigmatisation arising from dominant narratives, and offering these as alternative explanatory frameworks for searchers' behaviours; demonstrating how stories of lived experience critique the dominant narrative landscape; and providing new narratives of search, inspired by personal experiences, as a means to empower searchers. This research is most relevant to the fields of adoption studies, family studies and socio-psychological narrative inquiry.
APA, Harvard, Vancouver, ISO, and other styles
31

Pavawalla, Shital Prabodh. "Long-term retention of skilled visual search following severe closed-head injury." Online access for everyone, 2005. http://www.dissertations.wsu.edu/Thesis/Spring2005/s%5Fpavawalla%5F041805.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Pattie, Robert W. Jr. "Status of the Los Alamos Room Temperature Neutron Electric Dipole Moment Search." Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/etsu-works/5532.

Full text
Abstract:
A discovery of a permanent neutron electric dipole moment larger than the standard model prediction of dₙ ≈ 10⁻³¹ e·cm would signal a new source of CP violation and help explain the matter-antimatter asymmetry in the universe. Tightening the limits on dₙ constrains extensions to the standard model in a complementary fashion to the atomic and electron EDM searches. The recent upgrade of the Los Alamos ultracold neutron source makes possible a new room temperature search with the statistical reach to improve upon current limits by a factor of 10 or more. During the 2018 LANSCE cycle, a prototype apparatus was used to demonstrate the capability to transport and manipulate polarized neutrons and to perform Ramsey and Rabi sequence measurements. I will report on the measurements made over the last year, efforts underway to upgrade the prototype chamber, and possible future upgrades of the ultracold neutron source.
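To make the Ramsey technique concrete, the textbook extraction of the EDM (generic, not specific to the LANL apparatus) compares spin-precession frequencies with the electric field parallel and antiparallel to the magnetic field:

\[
h\nu_{\uparrow\uparrow} = |2\mu_n B + 2 d_n E|, \qquad
h\nu_{\uparrow\downarrow} = |2\mu_n B - 2 d_n E|,
\qquad\Rightarrow\qquad
d_n = \frac{h\,(\nu_{\uparrow\uparrow} - \nu_{\uparrow\downarrow})}{4E},
\]

so the statistical reach is set by how precisely the frequency difference can be measured, which is where the higher ultracold neutron flux from the upgraded source enters.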
APA, Harvard, Vancouver, ISO, and other styles
33

Deshpande, Rohit. "Search for gas giants around late-M dwarfs." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4640.

Full text
Abstract:
We carried out a near-infrared radial velocity search for Jupiter-mass planets around 36 late M dwarfs. This survey was the first of its kind undertaken to monitor the radial velocity variability of these faint dwarfs. For this unique survey we employed the 10-m Keck II telescope on Mauna Kea in Hawaii; with a resolution of 20,000 on the near-infrared spectrograph NIRSPEC, we monitored these stars over four epochs in 2007. In addition to measuring relative radial velocities, we established the physical properties of these stars: the identification of neutral atomic lines, the measurement of pseudo-equivalent widths, masses, surface gravity, effective temperature, absolute radial velocities, rotational velocities and rotation periods. The identification of neutral atomic lines was carried out using the Vienna Atomic Line Database; we were able to confirm lines that had previously been identified, and found that some lines observed in K-type stars, such as Mg I, though weak, still persist in late M dwarfs. Using measurements of the pseudo-equivalent widths (p-EW) of 13 neutral atomic lines, we established relations between p-EW and spectral type. Such relations serve as a tool for determining the spectral type of an unknown dwarf star from its measured p-EW. We employed the mass-luminosity relation to compute the masses of the M dwarfs; our calculations indicate these dwarfs lie in the range of 0.1 to 0.07 solar masses, suggesting that some of the late M dwarfs fall in the brown dwarf regime. Assuming radii of 0.1 solar radii, we calculated their surface gravity, with a mean of log g = 5.38. Their effective temperature was determined using the spectral-type-temperature relationship, giving values in the range of 3000 to 2300 K; comparison of these values with models in the literature shows good agreement. The absolute radial and rotational velocities of our targets were also calculated. The rotational velocities indicate that M dwarfs are, in general, slow rotators. Using our results and those from the literature, we extended the study of rotational velocities to L dwarfs; our observations show an increase in rotational velocity from late M to L dwarfs. We also find that the mean periods of M dwarfs are less than 10 hours. To improve our precision in measuring relative radial velocity (RV), we employed a deconvolution method, improving the relative RV precision from 300 m/s to 200 m/s: a substantial improvement in our ability to detect gas-giant planets. However, none of the 15 dwarfs we monitored indicates the presence of companions. This null result was then used to compute upper limits on the binary frequency and on the frequency of close-in Jupiter-mass planets: we find a binary frequency of 11% and a planetary frequency of 1.20%.
Thesis (Ph.D.)--University of Central Florida, 2010. Includes bibliographical references (p. 224-246).
Ph.D.
Doctorate
Department of Physics
Sciences
APA, Harvard, Vancouver, ISO, and other styles
34

Richmond, Richard Steven, II. "A Low-Power Design of Motion Estimation Blocks for Low Bit-Rate Wireless Video Communications." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/31458.

Full text
Abstract:
Motion estimation and motion compensation comprise one of the most important compression methods for video communications. We propose a low-power design of a motion estimation block for the low bit-rate video codec standard H.263. Since motion estimation is computationally intensive and results in large power consumption, a low-power design is essential for portable or mobile systems. Our block employs the Four-Step Search (4SS) method as its primary algorithm. The design and the algorithm have been optimized to provide adequate results for low-quality video at low power consumption. The model is developed in VHDL and synthesized using a 0.35 µm CMOS library. The power consumption of both gate-level circuits and memory accesses has been considered. Gate-level simulation shows the proposed design offers a 38% power reduction over a "baseline" implementation of a 4SS model and a 60% power reduction over a baseline Three-Step Search (TSS) model. Power savings through reduced memory access are 26% over the TSS model and 32% over the 4SS model. The total power consumption of the proposed motion estimation block ranges from 7 to 9 mW, depending on the type of video being motion-estimated.
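For readers unfamiliar with the Four-Step Search, the sketch below illustrates the algorithm in software (the thesis implements it in VHDL hardware; the 16x16 block size, SAD cost function and +/-7-pixel window here are common H.263-era defaults assumed for illustration, and each pass is slightly simplified to re-check all nine candidate points):

import numpy as np

def sad(cur, ref, bx, by, rx, ry, n=16):
    # Sum of absolute differences between the n x n block of `cur` at (bx, by)
    # and the candidate block of `ref` at (rx, ry).
    a = cur[by:by + n, bx:bx + n].astype(np.int32)
    b = ref[ry:ry + n, rx:rx + n].astype(np.int32)
    return int(np.abs(a - b).sum())

def four_step_search(cur, ref, bx, by, n=16):
    # Four-Step Search (Po & Ma, 1996): up to three passes on a 3x3 grid of
    # step 2, then a final 3x3 pass of step 1; displacement stays within +/-7.
    h, w = ref.shape

    def cost(dx, dy):
        rx, ry = bx + dx, by + dy
        if rx < 0 or ry < 0 or rx + n > w or ry + n > h:
            return float('inf')  # candidate block falls outside the frame
        return sad(cur, ref, bx, by, rx, ry, n)

    cx = cy = 0
    steps = (2, 2, 2, 1)
    i = 0
    while i < len(steps):
        step = steps[i]
        best = (cost(cx, cy), cx, cy)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                c = cost(cx + dx, cy + dy)
                if c < best[0]:
                    best = (c, cx + dx, cy + dy)
        if (best[1], best[2]) == (cx, cy) and step == 2:
            i = len(steps) - 1   # centre already best: jump to the final step-1 pass
        else:
            i += 1
        cx, cy = best[1], best[2]
    return cx, cy  # estimated motion vector for the block

The early exit to the final pass when the centre wins is what cuts comparisons, and hence memory accesses and power, relative to an exhaustive search; a call such as four_step_search(frame_t, frame_t_minus_1, 64, 48) returns the motion vector for one macroblock.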
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
35

Abbas, Mohamad. "A search for Long-Period Variable Stars in the Globular Cluster NGC 6496." Bowling Green State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1308597257.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Yamamoto, Shimpei. "Search for νμ → νe oscillation in a long-baseline accelerator experiment." 京都大学 (Kyoto University), 2006. http://hdl.handle.net/2433/136724.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Insaf, Zeenat. "A neighborhood that empowers women : in search of housing sustainability." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq64115.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Shorr, Emma. "Let's Not Eat Alone: A Search for Food Security Justice." Scholarship @ Claremont, 2014. http://scholarship.claremont.edu/pitzer_theses/51.

Full text
Abstract:
The food justice movement has taken off in recent years. Despite its call for justice in the food system, it has been critiqued as being inaccessible to the people who need food the most. The food system marginalizes women, minorities, and low-income people, making these groups the most at risk of food insecurity. Solutions to food insecurity come through both governmental and non-governmental avenues. This thesis calls for merging solutions to food insecurity with food justice, under the banner of food security justice, and assesses the ability of solutions to food insecurity to confront issues of injustice. Community-based solutions currently have the potential to address issues of justice, as well as providing the added benefits of promoting community cohesion and creating new economic spaces. Through a simulation of the SNAP budget and an exploration of the narrative linking gang violence and food insecurity in Los Angeles, the necessity for solutions to food insecurity to address justice is established.
APA, Harvard, Vancouver, ISO, and other styles
39

Lamborn, Peter C. "January : search based on social insect behavior /." Diss., Brigham Young University, 2005. http://contentdm.lib.byu.edu/ETD/image/etd801.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Jacmenovic, Dennis, and dennis_jacman@yahoo com au. "Optimisation of Active Microstrip Patch Antennas." RMIT University. Electrical and Computer Engineering, 2004. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20060307.144507.

Full text
Abstract:
This thesis presents a study of impedance optimisation of active microstrip patch antennas to multiple frequency points. A single-layered aperture coupled microstrip patch antenna has been optimised to match the source reflection coefficient of a transistor in designing an active antenna. The active aperture coupled microstrip patch antenna was optimised to satisfy Global Positioning System (GPS) frequency specifications. A rudimentary aperture coupled microstrip patch antenna consists of a rectangular antenna element etched on the top surface of two dielectric substrates. The substrates are separated by a ground plane and a microstrip feed is etched on the bottom surface. A rectangular aperture in the ground plane provides coupling between the feed and the antenna element. This type of antenna, which conveniently isolates any circuit at the feed from the antenna element, is suitable for integrated circuit design and is simple to fabricate. An active antenna design directly couples an antenna to an active device, therefore saving real estate and power. This thesis focuses on designing an aperture coupled patch antenna directly coupled to a low noise amplifier as part of the front end of a GPS receiver. In this work an in-house software package, dubbed ACP by its creator Dr Rod Waterhouse, for calculating aperture coupled microstrip patch antenna performance parameters was linked to HP-EEsof, a microwave computer aided design and simulation package by Hewlett-Packard. An ANSI C module in HP-EEsof was written to bind the two packages. This process affords the client the benefit of powerful analysis tools offered in HP-EEsof and the fast analysis of ACP for seamless system design. Moreover, the optimisation algorithms in HP-EEsof were employed to investigate which algorithms are best suited for optimising patch antennas. The active antenna design presented in this study avoids an input matching network, which is accomplished by designing the antenna to represent the desired source termination of a transistor. It has been demonstrated that a dual-band microstrip patch antenna can be successfully designed to match the source reflection coefficient, avoiding the need to insert a matching network. Maximum power transfer in electrical circuits is accomplished by matching the impedance between entities, which is generally achieved with the use of a matching network. Passive matching networks employed in amplifier design generally consist of discrete components up to the low GHz frequency range or distributed elements at greater frequencies. The source termination for a low noise amplifier will greatly influence its noise, gain and linearity, which are controlled by designing a suitable input matching network. Ten diverse search methods offered in HP-EEsof were used to optimise an active aperture coupled microstrip patch antenna. This study has shown that the algorithms based on randomised search techniques and the Genetic algorithm provide the most robust performance. The optimisation results were used to design an active dual-band antenna.
APA, Harvard, Vancouver, ISO, and other styles
41

Ribom, Dan. "In Search of Prognostic Factors in Grade 2 Gliomas." Doctoral thesis, Uppsala University, Neurology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-2789.

Full text
Abstract:
Grade 2 gliomas are malignant brain tumours affecting otherwise healthy adults. Although the long-term prognosis is poor, many patients are well and may have a high quality of life for several years. There is, however, a large variability in the natural course of the disease, which makes it essential to identify patients who might benefit from early surgery or radiotherapy. The aim of the present thesis was to define new and clinically useful prognostic markers that may assist in the initial treatment decision and in patient follow-up.

A retrospective study of 189 patients with gliomas WHO grade 2 showed no survival advantage of early tumour resection or radiotherapy, and confirmed that histological subtype and patient age are the most important predictors of survival (I). In 89 patients, the pre-treatment uptake of ¹¹C-methionine (MET) measured with positron emission tomography (PET) was identified as a prognostic marker for survival (II). At the time of tumour progression, irradiated tumours demonstrated signs of a residual radiotherapeutic effect that correlated with the pre-treatment uptake of MET (III). Pre-treatment uptake of MET may, therefore, be important both in predicting the natural course of the disease and the response after treatment. Immunohistochemical staining of 40 tumour samples showed an inverse association between the number of tumour cells expressing the platelet-derived growth factor alpha receptor (PDGFRα) and survival (IV). Also, a reduction was observed in the number of receptor-positive cells after malignant transformation, supporting the prognostic value of PDGFRα.

Lumbar puncture was performed in eight patients with newly diagnosed low-grade gliomas to identify three growth factors important in tumour development. Neither PDGF nor vascular endothelial growth factor (VEGF) was detected in the cerebrospinal fluid (CSF), and fibroblast growth factor 2 (FGF-2) was measurable at extremely low concentrations in two of the patients (V). A proteome screening of the CSF, using two-dimensional gel electrophoresis and mass spectrometry, detected α2-HS glycoprotein at significantly higher concentrations than in a control group (VI). This glycoprotein emerges as a novel substance in glioma research and may be of great interest because of its suggested involvement in the embryonic development of the neocortex.
APA, Harvard, Vancouver, ISO, and other styles
42

Moffett, Joe. "The search for origins in the twentieth-century long poem : Sumerian, Homeric, Anglo-Saxon /." Morgantown, W. Va. : West Virginia University Press, 2007. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=015671691&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Riedel, Curtis B. "The long search for democratic stability in El Salvador: implications for United States policy." Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8634.

Full text
Abstract:
Approved for public release; distribution is unlimited
From 1980 to 1992, the United States spent over 6 billion dollars to combat insurgency and bolster democracy in El Salvador, a nation of only 5.3 million people. In fact, El Salvador was the site of the United States' most prolonged - and, until the Persian Gulf War, most costly - military endeavor since Vietnam. While United States assistance did help the Salvadoran government combat the insurgents, this aid by most accounts acted to undermine rather than bolster the democratic stability of the country. The thesis examines the democratic experience of El Salvador, as a representative case study of a nation experiencing insurgency, to determine what changes are required in the formation of US foreign policy to help bolster democratic stability in countries challenged by insurgency. The thesis makes four key assertions. First, it is in the United States' self-interest to aid in the consolidation of democracy in El Salvador. Second, El Salvador is a nascent democracy, even after the Peace Accords of 1992 were signed, lacking democratic experience or stability, and thus requiring US assistance. Third, despite oligarchic resistance, the United States has the ability to successfully influence democratic reform. Fourth, the best way to define United States priorities for democratic assistance to El Salvador is through a comprehensive, empirically based assessment of causal factors. Utilizing the El Salvador case study and pre-existing theories, the thesis then presents and tests a new empirically based model for defining US priorities for providing democratic assistance to El Salvador or any other country under consideration. The research could potentially save the United States significant resources and time, while achieving the foreign policy goal of democratic enlargement.
APA, Harvard, Vancouver, ISO, and other styles
44

Clement, Emyr John. "Search for neutral long-lived particles that decay to dileptons with the CMS detector." Thesis, University of Bristol, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.685039.

Full text
Abstract:
This thesis presents a search for exotic, neutral, long-lived particles, which decay to final states that include a pair of electrons or a pair of muons. The search is performed with proton-proton collision data collected by the CMS detector at the LHC at a centre-of-mass energy √s = 8 TeV. The experimental signature consists of a pair of charged leptons originating from a vertex that is significantly displaced from the centre of the CMS detector. This is a very striking signature and would clearly indicate the presence of new physics beyond the Standard Model if observed. No significant excess of events was observed above the Standard Model expectations. Therefore, upper limits are placed on the production of long-lived particles in the context of two benchmark models, as a function of the lifetime of the long-lived particle.
APA, Harvard, Vancouver, ISO, and other styles
45

Ma, Hongyan. "User-system coordination in unified probabilistic retrieval : exploiting search logs to construct common ground /." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1581426061&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Muthuvelu, Sethumadhavan. "Simultaneous Lot Sizing and Lead-time Setting (SLLS) via Queuing Theory and Heuristic Search." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/9692.

Full text
Abstract:
Materials requirements planning (MRP) is a widely used method for production planning and scheduling. Planned lead-time (PLT) and lot size are two of the input parameters for MRP systems, which determine planned order release dates. Presently, planned lead-time and lot size are estimated using independent methodologies. No existing PLT estimation method considers factors such as machine breakdown, scrap rate, etc. Moreover, none considers the capacity of a shop, which changes dynamically because the available capacity at any given time is determined by the loading of the shop at that time. The absence of such factors from the calculations leads to a large difference between the actual lead-time and the PLT, i.e., lead-time error. Altering the size of a lot will affect not only the lead-time of that lot but also that of other lots. The current estimation of lot size and lead-time using independent methodologies therefore fails to fully capture the interdependent nature of lead-time and lot size. In this research, a lot-sizing model is modified so that it minimizes the combination of setup cost, holding cost and work-in-process (WIP) cost. The proposed approach embeds an optimization routine based on dynamic programming within a manufacturing system model based on open queuing network theory. It then optimizes lot size using realistic estimates of WIP and of the lead-times of different lots simultaneously, for single-product, single-level bills of material. Experiments are conducted to compare the performance of production plans generated by the conventional and the proposed methods. The results show that the proposed method has great potential: it can reduce total cost by up to 38% and lead-time error by up to 72%.
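To see why lot size and lead-time must be set together, consider a minimal sketch in which a single machine is approximated as an M/M/1 queue (the parameter values and the M/M/1 simplification are illustrative assumptions, not the thesis's open-queuing-network model):

D = 1000.0   # demand per period (assumed value)
K = 50.0     # setup cost per lot
h = 2.0      # finished-goods holding cost per unit per period
w = 5.0      # WIP cost per unit per period
s = 0.010    # setup time per lot, in periods
u = 0.0005   # processing time per unit, in periods

def total_cost(Q):
    lam = D / Q                        # lot arrival rate at the machine
    svc = s + u * Q                    # service time per lot (setup + processing)
    rho = lam * svc                    # machine utilisation
    if rho >= 1.0:
        return float('inf')            # queue unstable: lead-times explode
    lead_time = svc / (1.0 - rho)      # M/M/1 time in system per lot = realistic PLT
    wip_units = lam * lead_time * Q    # Little's law, converted to units
    return K * lam + h * Q / 2.0 + w * wip_units

best_Q = min(range(1, 501), key=total_cost)
print(best_Q, round(total_cost(best_Q), 1))

Because the service time per lot grows with Q while the arrival rate shrinks, the lead-time and WIP terms pull the optimum lot size well below the classical EOQ value, which is exactly the interaction the thesis's combined optimization exploits.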
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
47

Recoskie, Steven. "Autonomous Hybrid Powered Long Ranged Airship for Surveillance and Guidance." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31711.

Full text
Abstract:
With devastating natural disasters on the rise, technological improvements are needed in the field of search and rescue (SAR). Unmanned aerial vehicles (UAVs) would be ideal for the search function, so that manned vehicles can be prioritized for distributing first aid and ultimately saving lives. One of the major reasons that UAVs are underutilized in SAR is that they lack long flight endurance, which compromises their effectiveness. Dirigibles are well suited to SAR missions since they can hover and maintain lift without consuming energy, and can be easily deflated for packaging and transportation. This research focuses on extending the flight endurance of small-scale airship UAVs through improvements to the infrastructure design and to flight trajectory planning. In the first area, airship design methodologies are reviewed, leading to the development and experimental testing of two hybrid fuel-electric power plants. The prevailing hybrid power plant design consists of a 4-stroke 14 cc gasoline engine in line with a brushless DC motor/generator and a variable-pitch propeller. The results show that this design can produce enough mechanical and electrical power to support 72 hours of flight, compared to the 1-4 hours typical of purely electric designs. A power plant configuration comparison method was also developed to compare its performance and endurance to other power plant configurations that could be used in dirigible UAVs. Overall, the proposed hybrid power plant has a 600% increase in energy density over that of a purely electric configuration. In the second area, a comprehensive multi-objective cost function is developed using spatially variable wind vector fields generated from computational fluid dynamics analysis on digital elevation maps. The cost function is optimized for time, energy and collision avoidance using a wavefront expansion approach to produce feasible trajectories that obey the differential constraints of the airship platform. Simulated trajectories incorporating 1) variable vehicle velocity, 2) variable wind vector field (WVF) data, and 3) high grid resolutions were found to consume on average 50% less energy than trajectories planned without one of these three characteristics. In its entirety, this research addresses current UAV flight endurance limitations and provides a novel UAV solution to SAR surveillance.
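The wavefront-expansion step can be sketched as a Dijkstra-style sweep over a grid whose edge costs depend on the local wind vector (a toy version: the 4-connected grid, the simple ground-speed model and all values are assumptions for illustration, and the airship's differential constraints are ignored):

import heapq

def wavefront_plan(grid_w, grid_h, start, goal, wind, airspeed=5.0):
    # Wind-aware wavefront expansion on a 4-connected grid.
    # wind[y][x] is a (wx, wy) vector; moving into a headwind costs more time.
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float('inf')):
            continue                        # stale queue entry
        x, y = node
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < grid_w and 0 <= ny < grid_h):
                continue
            wx, wy = wind[ny][nx]
            ground_speed = airspeed + wx * dx + wy * dy  # tailwind helps, headwind hurts
            if ground_speed <= 0:
                continue                    # cannot make headway into this wind
            nd = d + 1.0 / ground_speed     # time (~energy) to cross one cell
            if nd < dist.get((nx, ny), float('inf')):
                dist[(nx, ny)] = nd
                prev[(nx, ny)] = node
                heapq.heappush(pq, (nd, (nx, ny)))
    path, node = [], goal                   # walk back from goal to recover the trajectory
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return path[::-1]

With a uniform wind field such as wind = [[(1.0, 0.0)] * 20 for _ in range(20)], a call like wavefront_plan(20, 20, (0, 0), (19, 19), wind) already prefers legs that ride the wind, which is the intuition behind the thesis's finding that ignoring the wind field wastes energy.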
APA, Harvard, Vancouver, ISO, and other styles
48

Pion, Sébastien. "Contribution à la modélisation des filarioses à "Onchocerca volvulus" et à "Loa loa" en Afrique centrale." Paris 12, 2004. https://athena.u-pec.fr/primo-explore/search?query=any,exact,990002140040204611&vid=upec.

Full text
Abstract:
Onchocerciasis, a parasitic disease caused by infection with the filaria Onchocerca volvulus, is transmitted by simuliid black flies, which breed in fast-flowing streams. It constitutes a major public health problem in Africa, where 18 million people are infected. Blindness, the most severe complication of the disease, affects about 300,000 individuals. Since the 1990s, onchocerciasis control has been based on annual large-scale treatment of populations with ivermectin, a drug that kills the embryonic stages of the parasite. The impact of the treatments can only be assessed with a good knowledge of both the population structure and the transmission dynamics of the parasite. Our work documents these issues, as well as the demographic burden of onchocerciasis in a focus in central Cameroon, and should contribute to the development of mathematical models predicting the long-term effects of control activities. In addition, because individuals infected with another filarial species, Loa loa, are at risk of developing encephalopathy after ivermectin intake, we also analysed the transmission dynamics of that parasite.
APA, Harvard, Vancouver, ISO, and other styles
49

Sales, Martha Jane. "Assessing the Efficacy of the Talent Search Program." TopSCHOLAR®, 2008. http://digitalcommons.wku.edu/theses/1/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Liu, Bingxuan. "Search for displaced leptons in the e-mu final state at the CMS experiment." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1476805042657329.

Full text
APA, Harvard, Vancouver, ISO, and other styles