Dissertations / Theses on the topic 'Modello search'

To see the other types of publications on this topic, follow the link: Modello search.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Modello search.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

MEMBRETTI, MARCO. "Firm size and the Macroeconomy." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/403956.

Full text
Abstract:
This thesis consists of two chapters on the dynamics of the firm distribution and aggregate shocks. Using a heterogeneous-firm model, it studies business cycle fluctuations driven by shocks to technology and to entry costs.
This dissertation collects two essays on firm size dynamics and aggregate shocks. Employing a model with heterogeneous firms, search frictions and endogenous entry/exit, we investigate the business cycle dynamics of the firm size distribution in response to entry cost and technology shocks. The thesis is divided into two chapters.

The first chapter explores how an increase in entry costs affects the size of new entrants and the concentration of employment by firm size, along with its effects on macro variables such as unemployment and the exit rate. To this aim we use a BVAR model to estimate the response of these variables to an entry cost shock; we then develop a heterogeneous-firm model with search frictions and endogenous entry/exit dynamics, calibrated on data from the Business Dynamics Statistics (BDS) database, to account for our empirical results. We find that positive entry cost shocks increase the average size of entrants and move employment shares toward the largest firms. These results reveal the role of fluctuations in entry costs in explaining the dynamics, at business cycle horizons, of both the firm and employment share distributions by size.

The second chapter perturbs the model with a technology shock to replicate the long-run differential in job destruction due to exit between small and large firms, as well as its empirical response to technology shocks (estimated by a BVAR). Contrary to frameworks with exogenous exit, the model is able to account for the volatility of exit and for the differential in job destruction due to exit between small and large firms conditional on the technology shock. Moreover, we find that not only entry but also exit is a viable amplification channel for the response of unemployment to the shock.
APA, Harvard, Vancouver, ISO, and other styles
2

Tosi, Mia. "Feasibility of the SM Higgs boson search in the channel H ->ZZ^*->mumubb via VBF at sqrt(s)=7TeV with the CMS experiment." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3427394.

Full text
Abstract:
One of the main goals of the now-running Large Hadron Collider (LHC) machine at CERN in Geneva is to elucidate the mechanism of electroweak symmetry breaking, and in particular to determine whether a Standard Model Higgs boson exists or not. For this aim, and in general to explore the high-energy frontier of particle physics, the LHC produces proton-proton collisions in the core of two multi-purpose experiments: ATLAS and CMS. In the Compact Muon Solenoid (CMS) experiment, one of the most promising discovery modes of the Higgs boson involves the decay into two Z bosons, with a subsequent decay of the Z pair into a fully leptonic final state. Among the various production mechanisms of the Higgs boson, vector-boson fusion (VBF) certainly offers one of the most distinctive and interesting signals. In this thesis I report an analysis of the feasibility of the search for Higgs decays in the $H\rightarrow ZZ\rightarrow l^+l^-jj$ decay channel with the CMS detector. Allowing one $Z$ boson to decay to a jet pair entails a large increase in backgrounds in exchange for a tenfold increase in the total branching ratio. The analysis of the di-lepton plus di-jet final state may furthermore provide ground for interesting additional searches and measurements. An optimization of the search for the Higgs decay signal has been performed using the key observables of the VBF production mechanism, applying the multivariate technique called 'boosted decision trees' for data selection in two successive stages. $b$-jet tagging is also applied to favorably select $b$-enriched final states, strongly suppressing the main background due to $Z$ production associated with light quarks, at the cost of a factor-of-4.5 reduction in selectable signal events. Results on the signal significance achievable with $30\ fb^{-1}$ of collisions with the optimized Higgs candidate selection are presented.
The LHC provided proton-proton collisions at a centre-of-mass energy of 7 TeV from March 30th to November 8th, 2010. During the 2010 run CMS collected an integrated luminosity of $43.2\ pb^{-1}$. Despite being utterly insufficient for a meaningful Higgs boson search, these data have been used to test the analysis strategy and the signal selection methodology. Limits on the ratio between the signal cross section and the SM-predicted cross section, as a function of the Higgs boson mass, have been obtained with the available data.
One of the main goals of the Large Hadron Collider (LHC), in operation at the CERN laboratory in Geneva, is to unveil the mechanism of electroweak symmetry breaking, and in particular to determine whether the Standard Model Higgs boson exists or not. To this end, and in general to explore the high-energy frontier of particle physics, the LHC produces proton-proton collisions at the centre of two multi-purpose experiments: ATLAS and CMS. At the CMS (Compact Muon Solenoid) experiment, one of the most promising ways to discover the Higgs boson is to study its decay into two Z bosons, with a subsequent decay of the Z pair into a fully leptonic final state. Among the various Higgs production mechanisms, vector-boson fusion (VBF) certainly offers one of the most interesting and distinctive signals [Cahn:1983ip, Kane:1984bb, Kleiss:1986xp]. This work describes the analysis of the feasibility of the search for Higgs boson decays in the $H\rightarrow ZZ\rightarrow l^+l^-jj$ decay channel with the CMS detector. Allowing one Z boson to decay into a jet pair brings a large increase in background processes in exchange for a tenfold increase in the total branching ratio. The analysis of the final state with dileptons and jet pairs may also create the conditions for interesting additional searches and measurements. An optimization of the search for the Higgs boson decay signal was obtained using the most significant observables of the VBF Higgs production mechanism, applying a multivariate technique called 'boosted decision trees' for data selection in two distinct stages.
The identification of jets from $b$-quarks is also used to try to favorably select $b$-enriched final states, strongly reducing the main background due to $Z$ bosons produced in association with jets from light quarks or gluons, at the cost of a factor-of-6 reduction in the number of selectable signal events. The thesis presents the results for the signal significance achievable with $30\ fb^{-1}$ of proton-proton collisions at 7 TeV, using the optimized selection of Higgs boson decay candidates. During 2010 the LHC delivered proton-proton collisions at a centre-of-mass energy of 7 TeV from March 30th to November 8th. In this run CMS collected an integrated luminosity of $43.2\ pb^{-1}$. Although these data are utterly insufficient for a Higgs boson search, they were used to test the analysis strategy and the signal selection methodology. Limits on the ratio between the signal cross section and the cross section predicted by the Standard Model, as a function of the Higgs boson mass, were obtained with the available data.
APA, Harvard, Vancouver, ISO, and other styles
3

Gonçalves, Solange Ledi. "Income shocks, household job search and labor supply." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/12/12138/tde-15082017-092759/.

Full text
Abstract:
Analyses of aggregate employment, unemployment, and inactivity rates frequently disregard the labor market trends of specific household members, which may explain some puzzles in the relationship between economic activity and labor market participation. The relevance of family approaches to labor supply goes beyond aggregate macroeconomic trends and extends to important micro-level analyses of members' behavior and intrahousehold decisions, with policy-relevant results. Despite the consensus that household members make labor supply decisions jointly, studies are typically at the individual level and disregard sons and daughters as decision-makers in a family. Therefore, in this thesis, we investigate these questions for Brazil in two studies. In the first study, we analyze the labor supply decisions of sons and daughters aged 14 to 24 living with their parents, in a reduced-form exercise. We contribute to the empirical literature on intrahousehold impacts of policies by testing whether the minimum wage, which affects the income of parents, impacts the final labor supply decisions of sons and daughters. We also verify whether the policy has distinct effects depending on whether the eligible person is the father or the mother, aiming to test the income-pooling hypothesis. Our identification strategy is based on an intention-to-treat approach and on a differences-in-differences estimator. Another innovation is the use of the PNADC (IBGE) for 2012-2016. We find that the direction and magnitude of the effects of the minimum wage affecting fathers and mothers on the labor supply of sons and daughters depend on who the eligible members are and how many there are in the household: the effect is negative when the eligible person is either the mother or the father, and positive when both are eligible. Therefore, our results strengthen the argument in favor of household approaches, since the income-pooling hypothesis does not seem to be valid in this context.
In the second study, we investigate how labor supply decisions could determine the aggregate unemployment and inactivity outcomes of secondary household earners. We develop and estimate a structural household job search model with on-the-job search. We extend Dey and Flinn (2008) to allow for unemployment and inactivity of mothers and of sons and daughters, who are subject to shocks to their own employment and to income shocks to fathers. These shocks may lead to different search behavior and job acceptance, depending on the other household member's labor market situation. The model is estimated using the PME (IBGE) for 2004-2014. We perform counterfactual simulations and verify that the declining unemployment rate of sons and daughters would not have changed between 2004 and 2014 had the labor market opportunities and conditions of these members remained the same. The unemployment rate of mothers does not change much in this period. The increasing trend in the inactivity of sons and daughters is mostly determined by a decreasing encouragement rate and the increasing dropout rate observed among these members in this period. These exogenous factors, which determine the move into or the permanence in inactivity, could be related to the lower cost of education. We conclude that using individual job search models to understand aggregate unemployment and inactivity can be misleading, since household search behavior matters in the labor supply decisions of secondary household earners.
Analyses of aggregate employment, unemployment, and inactivity rates frequently ignore the labor market dynamics of household members, which may explain puzzles in the relationship between economic activity and labor market participation. The relevance of family approaches to labor supply lies in macroeconomic analyses of aggregate trends, and also in microeconomic analyses of behavior, intrahousehold decisions, and policy outcomes. Despite the consensus on the joint labor supply decisions of household members, most studies take an individual approach and disregard young sons and daughters as decision-makers. In this thesis, organized in two studies, we investigate these questions for Brazil. In the first study, we analyze the labor supply decisions of young people aged 14 to 24 living with their parents, in a reduced-form exercise. The thesis contributes to the empirical literature on intrahousehold effects of policies by testing whether the minimum wage, which affects parents' income, impacts the final labor supply decisions of sons and daughters. We also test the income-pooling hypothesis by verifying whether the policy has distinct effects depending on whether the eligible person in the household is the mother or the father. The identification strategy is based on an intention-to-treat approach and on the use of a differences-in-differences estimator. Another innovation is the use of the PNADC (IBGE) for 2012-2016. We find that the direction and magnitude of the effects of the parents' minimum wage on the labor supply of sons and daughters depend on who the eligible members are and how many there are in the household: the effect is negative if the eligible person is the mother or the father, and positive if both are eligible. These results strengthen the argument in favor of intrahousehold approaches, since the income-pooling hypothesis does not seem to be valid in this context.
In the second study, we investigate how labor supply decisions could determine the aggregate unemployment and inactivity outcomes of secondary household members. We develop and estimate a structural household job search model with on-the-job search. We extend the model of Dey and Flinn (2008) to allow for unemployment and inactivity of mothers and of sons and daughters, who are subject to shocks to their own jobs and to shocks to fathers' income. These shocks may lead to different search behavior and job acceptance, depending on the labor market situation of the other household member. The model is estimated with the PME (IBGE) for 2004-2014. We perform counterfactual simulations and find that the unemployment rate of sons and daughters, which declined between 2004 and 2014, would not have changed over the period had their labor market conditions and opportunities remained the same as in 2004. The unemployment rate of mothers does not change much over the period. The increasing trend in the inactivity of sons and daughters is driven by a decreasing encouragement rate and an increasing dropout rate, which reflect exogenous factors that push young workers into inactivity. These exogenous factors may be related to the lower cost of education over the period. We conclude that the use of individual job search models to understand aggregate unemployment and inactivity should be discouraged, since household search behavior matters for the labor supply decisions of secondary household members.
APA, Harvard, Vancouver, ISO, and other styles
4

Yang, Fengyu, and 楊丰羽. "Machine-order search space for job-shop scheduling." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31246205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lima, Allan Diego Silva. "Modelo social de relevância para opiniões." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-20052015-155245/.

Full text
Abstract:
This thesis presents a generic, domain-independent opinion relevance model for Social Network users. The Social Opinion Relevance Model (SORM) can estimate the relevance of an opinion based on twelve distinct parameters. Compared with other models, SORM's main distinguishing feature is its ability to provide personalized opinion relevance results, according to the profile of the person for whom the relevance is being estimated. Due to the lack of opinion relevance corpora capable of properly testing SORM, a new corpus called the Social Opinion Relevance Corpus (SORC) had to be created. Using SORC, experiments were carried out in the electronic games domain that illustrate the importance of personalizing relevance to achieve better results, based on typical Information Retrieval metrics. A statistical significance test was also performed, reinforcing and confirming the advantages that SORM offers.
This thesis presents a generic and domain-independent opinion relevance model for Social Network users. The Social Opinion Relevance Model (SORM) is able to estimate an opinion's relevance based on twelve different parameters. Compared to other models, SORM's main distinction is its ability to provide customized results, according to whom the opinion relevance is being estimated for. Due to the lack of opinion relevance corpora able to properly test our model, we have created a new one called the Social Opinion Relevance Corpus (SORC). Using SORC, we carried out experiments in the electronic games domain that illustrate the importance of customizing opinion relevance in order to achieve better results, based on typical Information Retrieval metrics such as NDCG, Q-measure and MAP. We also performed a statistical significance test that reinforces and corroborates the advantages that SORM offers.
APA, Harvard, Vancouver, ISO, and other styles
6

Hood, Ben Andrew Ashcom. "Extrasolar planet search and characterisation." Thesis, St Andrews, 2007. http://hdl.handle.net/10023/359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pizzinelli, Carlo. "Essays on labor market dynamics with worker heterogeneity." Thesis, University of Oxford, 2018. http://ora.ox.ac.uk/objects/uuid:28323577-c33e-4df9-80ec-f2506e42b473.

Full text
Abstract:
This thesis comprises three chapters that discuss topics related to labor market dynamics from a macroeconomic perspective. Although each chapter is self-standing in terms of research question and methodology, they are united by a common interest in the macroeconomic implications of worker heterogeneity. The chapters vary with respect to the time horizon over which they study aggregate dynamics, covering business cycle frequencies, the economy's long-run steady state, and households' life cycle. Furthermore, they develop the concept of heterogeneity across different dimensions: stages of the life cycle, households' income and wealth, observed worker characteristics, and worker-firm productivity levels. The overall purpose of this thesis is therefore to contribute to the study of labor markets and labor policies through a multi-faceted approach.
APA, Harvard, Vancouver, ISO, and other styles
8

Straub, Sebastian. "Towards Collaborative Session-based Semantic Search." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-229549.

Full text
Abstract:
In recent years, the most popular web search engines have excelled in their ability to answer short queries that require clear, localized and personalized answers. When it comes to complex exploratory search tasks, however, the main challenge for the searcher remains the same as back in the 1990s: trying to formulate a single query that contains all the right keywords to produce at least some relevant results. In this work we investigate new ways to facilitate exploratory search by making use of context information from the user's entire search process. We present the concept of session-based semantic search, with an optional extension to collaborative search scenarios. To improve the relevance of search results, we expand queries with terms from the user's recent query history in the same search context (session-based search). We introduce a novel method for query classification based on statistical topic models, which allows us to track the most important topics in a search session so that we can suggest relevant documents that could not be found through keyword matching. To demonstrate the potential of these concepts, we have built the prototype of a session-based semantic search engine, which we release as free and open source software. In a qualitative user study that we conducted, this prototype showed promising results and was well received by the participants.
In recent years, the leading web search engines have outdone one another in offering easily understandable, localized and personalized answers to short queries. For complex exploratory search tasks, however, the greatest challenge for the user is still the same as in the 1990s: formulating a single query in such a way that it contains all the keywords needed to obtain at least a few relevant results. This work develops new methods to facilitate exploratory search by incorporating context information from the user's entire search process. We therefore present the concept of session-based semantic search, with an optional extension to collaborative search scenarios. To increase the relevance of search results, queries are enriched with terms from the user's recent queries issued in the same search context (session-based search). In addition, a novel approach to query classification based on statistical topic models is introduced, which allows us to identify the most important topics in a search session and thereby suggest further relevant documents that could not be found through keyword matching. To demonstrate the potential of these concepts, a prototype of a session-based semantic search engine was developed as part of this work and released as free software. In a qualitative user study, this prototype produced promising results and was well received by the participants.
APA, Harvard, Vancouver, ISO, and other styles
9

Pasalic, Zlatana. "Evaluation of search models for Molecular Replacement using MolRep." Thesis, University of Skövde, Department of Computer Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-709.

Full text
Abstract:

The aim of this study is to use several homology models of different completeness and accuracy and to evaluate them as search models for Molecular Replacement (MR). Three structural groups are evaluated: the α-, β- and α/β-groups. From every group, one template structure and a couple of search models are selected. The search models are manipulated and evaluated; B-factor manipulation, side chain removal and homology modelling are the ways in which the search models are manipulated. This work shows that B-factor manipulation does not improve the search models. It also shows that removing the side chains does not improve the search models. Finally, it shows that homology modelling did not produce better search models.

APA, Harvard, Vancouver, ISO, and other styles
10

Verbel, Irina. "Neuroninių tinklų architektūros parinkimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2006~D_20081203_184401-63341.

Full text
Abstract:
This work describes a model that uses activation functions with Gaussian kernels. In one case, activation functions maximizing Shannon entropy were used; in the other, functions maximizing Rényi entropy. Functions of this type are expected to be better suited for forecasting.
In this thesis a novel technique is used to construct a sparse generalized Gaussian kernel regression model, a so-called neural network. A kernel which maximizes Rényi entropy is used as well. Experimental results obtained using these models are promising.
APA, Harvard, Vancouver, ISO, and other styles
11

Fazi, Diego <1979&gt. "Development of a Physical-Template Search for Gravitational Waves from Spinning Compact-Object Binaries with LIGO." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/2211/1/PhD_thesis_final.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Fazi, Diego <1979&gt. "Development of a Physical-Template Search for Gravitational Waves from Spinning Compact-Object Binaries with LIGO." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/2211/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Lafuente, Martinez Cristina. "Essays on long-term unemployment in Spain." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31085.

Full text
Abstract:
This thesis comprises three essays relating to long-term unemployment in Spain. The first chapter is a methodological analysis of the main dataset that is used throughout the thesis. The second and third chapters provide two applications of the dataset for the study of long-term unemployment. The methodology in these chapters can be easily adapted to study unemployment in other countries. Chapter 1. On the use of administrative data for the study of unemployment. Social security administrative data are increasingly becoming available in many countries. These are very attractive data, as they have a long panel structure (large N, large T) and allow many different variables to be measured with higher precision. Because of their nature, they can capture aspects that are usually hidden by the design or timing of survey data. However, administrative data are not ready to be used for labour market research, especially for studies involving unemployment. The main reason is that administrative data only capture the registered unemployed, and in some cases only those receiving unemployment benefits. The gap between total unemployment and registered unemployment is constant neither across worker characteristics nor over time. In this paper I augment Spanish Social Security administrative data by adding missing unemployment spells using information from the institutional framework. I compare the resulting unemployment rate to that of the Labour Force Survey, showing that the two are comparable and thus that the administrative dataset is useful for labour market research. I also explore how the administrative data can be used to study some important aspects of the labour market that the Labour Force Survey cannot capture. Administrative data can also be used to overcome some of the problems of the Labour Force Survey, such as changes in the structure of the survey.
This paper aims to provide a comprehensive guide on how to adapt administrative datasets to make them useful for studying unemployment. Chapter 2. Unemployment Duration Variance Decomposition à la ABS: Evidence from Spain. Existing studies of unemployment duration typically use self-reported information from labour force surveys. We revisit this question using precise information on spells from administrative data. We follow the method recently proposed by Alvarez, Borovickova and Shimer (2015) for estimating the different components of the duration of unemployment using administrative data, which they applied to Austria. In this paper we apply the same method (hereafter the ABS method) to Spain using Spanish Social Security data. Administrative data have many advantages compared to Labour Force Survey data, but we note that some gaps need to be addressed before the data can be used for unemployment analysis (e.g., unemployed workers who run out of unemployment insurance have no labour market status in the data). The degree and nature of this incompleteness are country-specific and particularly important in Spain. Following Chapter 1, we deal with these data issues in a systematic way, using information from the Spanish LFS as well as institutional information. We hope that our approach will provide a useful way to apply the ABS method in other countries. Our findings are: (i) the unemployment decomposition is quite similar in Austria and Spain, especially when minimizing the effect of fixed-term contracts in Spain; (ii) the constant component is the most important one, while (total) heterogeneity and duration dependence are roughly comparable; (iii) we do not find big differences in the contribution of the different components along the business cycle. Chapter 3.
Search Capital and Unemployment Duration. I propose a novel mechanism called search capital to explain long-term unemployment patterns across different ages: workers who have been successful in finding jobs in the recent past become more efficient at finding jobs in the present. Search ability increases with search experience and depreciates with tenure if workers do not search often enough. This leaves young workers (who have not gained enough search experience) and older workers in a disadvantaged position, making them more likely to suffer long-term unemployment. I focus on the case of Spain, as its dual labour market structure favours the identification of search capital. I provide empirical evidence that search capital affects unemployment duration and wages at the individual level. I then propose a search model with search capital and calibrate it using Spanish administrative data. The addition of search capital helps the model match the dynamics of unemployment and job finding rates in the data, especially for younger workers.
APA, Harvard, Vancouver, ISO, and other styles
14

Westerdahl, Simon, and Larsson Fredrik Lemón. "Optimization for search engines based on external revision database." Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-21000.

Full text
Abstract:
The amount of data is continually growing, and the ability to search efficiently through vast amounts of data is almost always sought after. Many technologies and methods exist for finding data in a set efficiently, but all of them cost resources such as CPU cycles, memory and storage. In this study a search engine (SE) is optimized using several methods and techniques. The thesis looks into how to optimize an SE that is based on an external revision database. The optimized implementation is compared to a non-optimized implementation when executing a query. An artificial neural network (ANN) trained on a dataset containing three years of normal usage at a company is used to prioritize within the result set before returning the result to the caller. The new indexing algorithms improve the document space complexity by removing all duplicate documents that add no value. Machine learning (ML) has been used to analyze user behaviour in order to reduce the number of documents retrieved by a query.
APA, Harvard, Vancouver, ISO, and other styles
15

Bianchi, Riccardo Maria [Verfasser], and Gregor [Akademischer Betreuer] Herten. "A model-independent "General Search" for new physics with the ATLAS detector at LHC = Ein Modell-unabhängiger "General Search" nach neuer Physik mit dem ATLAS-Detektor am LHC." Freiburg : Universität, 2014. http://d-nb.info/1123479712/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Oliveira, Paulo Felipe Alencar de. "Estimação estrutural de um modelo de busca por emprego com dispersão de produtividade: uma análise para o Brasil." Universidade Federal do Ceará, 2011. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=5688.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
The aim of this work is to estimate an equilibrium job search model with heterogeneity in firm productivity using microdata from the 2009 Monthly Employment Survey (PME). The model is estimated for the six metropolitan regions covered by the PME. Moreover, two mechanisms of wage determination are considered: wage posting by monopsonistic firms and wages set as the outcome of Nash bilateral bargaining. To estimate the model, we use the non-parametric method developed by Bontemps, Robin and van den Berg (2000). There is significant heterogeneity among the structural parameters estimated for the six metropolitan regions. Furthermore, the model with wage determination through bargaining shows more plausible results that are not rejected by the data.
APA, Harvard, Vancouver, ISO, and other styles
17

Potter, Christopher Thomas. "A search for the rare decay B⁰ → τ⁺τ⁻ at the BaBar experiment." view abstract or download file of text, 2005. http://wwwlib.umi.com/cr/uoregon/fullcit?p3181121.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2005.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 219-223). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
18

Primavera, Federica <1985&gt. "Search for the MSSM Neutral Higgs Boson in the mu+mu- final state with the CMS experiment at LHC." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6215/1/Primavera_Federica_tesi.pdf.

Full text
Abstract:
In this thesis, my work in the Compact Muon Solenoid (CMS) experiment on the search for the neutral Minimal Supersymmetric Standard Model (MSSM) Higgs boson decaying into two muons is presented. The search is performed on the full data collected during 2011 and 2012 by CMS in proton-proton collisions at the CERN Large Hadron Collider (LHC). The MSSM is explored within the most conservative benchmark scenario, m_h^{max}, and within its modified versions, m_h^{mod+} and m_h^{mod-}. The search is sensitive to MSSM Higgs boson production in association with a b\bar{b} quark pair and to the gluon-gluon fusion process. In the m_h^{max} scenario, the results exclude values of tan β larger than 15 in the m_A range 115-200 GeV, and values of tan β greater than 30 in the m_A range up to 300 GeV. There are no significant differences among the results obtained within the three scenarios considered. Comparisons with other neutral MSSM Higgs searches are shown.
APA, Harvard, Vancouver, ISO, and other styles
20

Stefana, Janićijević. "Metode promena formulacija i okolina za problem maksimalne klike grafa." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2016. http://www.cris.uns.ac.rs/record.jsf?recordId=101446&source=NDLTD&language=en.

Full text
Abstract:
This Ph.D. thesis addresses approaches to solving NP-hard problems in combinatorial optimization, highlighting the maximum clique problem as a representative of certain structures in graphs. The maximum clique problem and problems related to it have been formulated as nonlinear functions and solved with the aim of finding new methods that produce good approximate solutions in reasonable time. Several different extensions of the Variable Neighborhood Search method have been proposed. Related graph problems can be applied to information retrieval, scheduling, signal processing, classification theory, coding theory, etc. The algorithms are implemented and successfully tested on various tasks.
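As a rough illustration of the scheme the abstract describes — shaking a solution and re-running a local search over growing neighborhoods — here is a minimal Variable Neighborhood Search sketch for maximum clique. The adjacency-dict graph representation, the greedy local search, and the parameters are illustrative assumptions, not the dissertation's actual formulations:

```python
import random

def greedy_extend(g, clique):
    """Local search: greedily grow a partial clique with compatible high-degree vertices."""
    clique = set(clique)
    candidates = set.intersection(*(set(g[v]) for v in clique)) if clique else set(g)
    while candidates:
        v = max(candidates, key=lambda u: len(g[u]))  # heuristic: prefer high degree
        clique.add(v)
        candidates &= set(g[v])
    return clique

def vns_max_clique(g, k_max=3, iters=100, seed=0):
    """Basic VNS: shake the incumbent by dropping k vertices, rebuild greedily,
    and return to the first neighborhood on improvement.
    g maps each vertex to its set of neighbors."""
    rng = random.Random(seed)
    best = greedy_extend(g, {max(g, key=lambda u: len(g[u]))})
    for _ in range(iters):
        k = 1
        while k <= k_max:
            keep = sorted(best)
            rng.shuffle(keep)
            shaken = greedy_extend(g, keep[: max(len(keep) - k, 0)])  # drop k vertices
            if len(shaken) > len(best):
                best, k = shaken, 1  # improvement found: restart the neighborhoods
            else:
                k += 1
    return best
```

On a small graph whose largest clique is {0, 1, 2, 3}, the sketch recovers it; real instances would need the metaheuristic refinements the thesis studies.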
APA, Harvard, Vancouver, ISO, and other styles
21

Nunes, Cecília. "Towards the improvement of decision tree learning: a perspective on search and evaluation." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/667879.

Full text
Abstract:
Data mining and machine learning (ML) are increasingly at the core of many aspects of modern life. With growing concerns about the impact of relying on predictions we cannot understand, there is widespread agreement regarding the need for reliable interpretable models. One of the areas where this is particularly important is clinical decision-making. Specifically, explainable models have the potential to facilitate the elaboration of clinical guidelines and related decision-support tools. The presented research focuses on the improvement of decision tree (DT) learning, one of the most popular interpretable models, motivated by the challenges posed by clinical data. One of the limitations of interpretable DT algorithms is that they involve decisions based on strict thresholds, which can impair performance in the presence of noisy measurements. In this regard, we proposed a probabilistic method that takes a model of the noise into account in the distinct learning phases. When considering this model during training, the method showed moderate improvements in accuracy compared to the standard approach, but significant reductions in the number of leaves. Standard DT algorithms follow a locally-optimal approach which, despite providing good performance at a low computational cost, does not guarantee optimal DTs. The second direction of research therefore concerned the development of a non-greedy DT learning approach that employs Monte Carlo tree search (MCTS) to heuristically explore the space of DTs. Experiments revealed that the algorithm improved the trade-off between performance and model complexity compared to locally-optimal learning. Moreover, dataset size and feature interactions played a role in the behavior of the method. Despite being used for their explainability, DTs are chiefly evaluated based on prediction performance. The need to compare the structure of DT models arises frequently in practice, and is usually dealt with by manually assessing a small number of models. We attempted to fill this gap by proposing a similarity measure to compare the structure of DTs. An evaluation of the proposed distance on a hierarchical forest of DTs indicates that it was able to capture structural similarity. Overall, the reported algorithms take a step toward improving the performance of DT algorithms, in particular as concerns model complexity and a more useful evaluation of such models. The analyses help improve the understanding of the effect of several data properties on DT learning, and illustrate the potential role of DT learning as an asset for clinical research and decision-making.
APA, Harvard, Vancouver, ISO, and other styles
22

Gu, Jian. "Multi-modal Neural Representations for Semantic Code Search." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279101.

Full text
Abstract:
In recent decades, various software systems have gradually become the basis of our society. Programmers search for existing code snippets from time to time in their daily work. It would be beneficial and meaningful to have better solutions for the task of semantic code search, which is to find the most semantically relevant code snippets for a given query. Our approach is to introduce tree representations through multi-modal learning. The core idea is to enrich the semantic information of code snippets by preparing data of different modalities while ignoring syntactic information. We design a novel tree structure named Simplified Semantic Tree and then extract RootPath representations from it. We use the RootPath representation to complement the conventional sequential representation, namely the token sequence of the code snippet. Our multi-modal model receives a code-query pair as input and computes a similarity score as output, following the pseudo-siamese architecture. For each pair, besides the ready-made code sequence and query sequence, we extract an extra tree sequence from the Simplified Semantic Tree. There are three encoders in our model, which respectively encode these three sequences as vectors of the same length. We then combine the code vector with the tree vector into one joint vector, still of the same length, as the multi-modal representation of the code snippet. We introduce a triplet loss to ensure that the vectors of the code and the query in the same pair are close in the shared vector space. We conduct experiments on a large-scale multi-language corpus, with comparisons against strong baseline models on specified performance metrics. Among the baseline models, the simplest Neural Bag-of-Words model gives the most satisfying performance, indicating that syntactic information is likely to distract complex models from critical semantic information. Results show that our multi-modal representation approach performs better, surpassing the baseline models by far in most cases. The key to our multi-modal model is that it deals entirely with semantic information and learns from data of multiple modalities.
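The triplet objective and joint-vector fusion described above can be sketched as a simple hinge loss over cosine similarities. This is a hedged illustration: the margin value, the cosine similarity, and averaging as the fusion of code and tree vectors are assumptions for the sketch, not the thesis's exact choices:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuse(code_vec, tree_vec):
    """Combine the code and tree encodings into one joint vector of the same
    length (element-wise averaging is an illustrative choice)."""
    return (code_vec + tree_vec) / 2.0

def triplet_loss(joint_code, query_pos, query_neg, margin=0.5):
    """Hinge triplet loss: the matching query should be at least `margin`
    more similar to the joint code vector than a non-matching query."""
    return max(0.0, margin - cosine(joint_code, query_pos) + cosine(joint_code, query_neg))
```

For a perfectly aligned positive query and an orthogonal negative, the loss is zero; swapping the two yields a positive penalty that training would push back down.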
APA, Harvard, Vancouver, ISO, and other styles
23

COIMBRA, Leandro Willer Pereira. "Discriminação salarial e diferenças na capacidade produtiva entre grupos no mercado de trabalho." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/17666.

Full text
Abstract:
FACEPE
This thesis is divided into three essays that complement each other on the same theme: wage discrimination. The objective is to investigate discriminatory behavior in the Brazilian labor market and to understand its motivations and effects from an economic perspective. First, the Propensity Score Matching method is used to compare workers with the same level of effort, social occupation, family background and other circumstance variables, highlighting wage discrimination based on skin color. The analysis finds a wage per hour worked about 14% lower for "non-white" workers. Moreover, an "elitist" tendency in discrimination is observed. Next, a model of the labor market is proposed based on Dale Mortensen's search model, characterized by a continuous distribution of wage offers. This model is modified to introduce workers heterogeneous in productive ability and a degree of information asymmetry between agents. It was observed that the level of information asymmetry in a market is not only a precursor of discrimination but also defines its magnitude. Finally, the focus turns to the evolution and survival of discriminatory behavior. For this, we used a model with an evolutionary dynamic equilibrium, divided into two different cases, in order to endogenize the level of information asymmetry and the productive capacity of workers. It was observed that the market admits different equilibria: in the first case, the percentage of high-skill workers is linked to firms' greater interest in screening, while in the other, a high percentage of high-skill workers in the market indicates less need for screening. In fact, the difference between the two cases comes down to the rewards and punishments associated with the possession of information by low-skill workers.
APA, Harvard, Vancouver, ISO, and other styles
24

Sinkus, Skirmantas. "Kinect įrenginiui skirtų gestų atpažinimo algoritmų tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20140806_143213-09689.

Full text
Abstract:
The Microsoft Kinect device was released in 2010. It was designed for the Microsoft Xbox 360 gaming console; later, in 2012, a Kinect for Windows personal computers was presented, so the device is relatively new and relevant today. Many games have been created for the Microsoft Kinect device, but it can be used much more widely than in games; one such area is sport, specifically training that can be performed at home. At this moment there is a huge variety of games, software and training programs in the world that allow the user to control the course of a training session by following whether a person properly performs the prescribed movements. Since similar software is not available in Lithuania, it is necessary to create software that would allow Lithuanian coaches to create training focused on the use of this device. The main goal of this work is to carry out research on gesture recognition algorithms for the Kinect device, studying exactly how accurately they can recognize a gesture or gestures. The work focuses mainly on this issue; criteria such as recognition time and difficulty of implementation are raised but not investigated. In this work, a program was created that recognizes movements and gestures using the golden section search algorithm. The algorithm compares two models or templates, and if it cannot find a match, the first template is slightly rotated and the comparison process is started again; a certain parameter also lets us adjust the algorithm's accuracy. For comparison, the hidden Markov model algorithm can also be used... [to full text]
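The matching procedure described — rotate the first template slightly and compare again, with a tunable accuracy parameter — maps naturally onto a one-dimensional golden-section search over the rotation angle. A minimal sketch under assumed representations (2-D point templates, squared-distance matching); this is an illustration, not the thesis's implementation:

```python
import math

INV_PHI = (math.sqrt(5) - 1) / 2  # 1/phi, the golden-section ratio ~= 0.618

def golden_section_minimize(f, lo, hi, tol=1e-5):
    """Shrink [lo, hi] around the minimum of a unimodal function f.
    `tol` plays the role of the accuracy-controlling variable."""
    c = hi - INV_PHI * (hi - lo)
    d = lo + INV_PHI * (hi - lo)
    while hi - lo > tol:
        if f(c) < f(d):
            hi, d = d, c
            c = hi - INV_PHI * (hi - lo)
        else:
            lo, c = c, d
            d = lo + INV_PHI * (hi - lo)
    return (lo + hi) / 2

def rotate(points, theta):
    """Rotate 2-D points (list of (x, y) tuples) by theta radians about the origin."""
    ct, st = math.cos(theta), math.sin(theta)
    return [(ct * x - st * y, st * x + ct * y) for x, y in points]

def template_distance(template, observed, theta):
    """Sum of squared distances between the rotated template and the observed gesture."""
    return sum((xr - xo) ** 2 + (yr - yo) ** 2
               for (xr, yr), (xo, yo) in zip(rotate(template, theta), observed))

# Find the rotation that best aligns a stored template with an observed gesture:
# best = golden_section_minimize(lambda t: template_distance(tpl, obs, t),
#                                -math.pi / 4, math.pi / 4)
```

Because the per-angle distance is unimodal around the true rotation, the interval narrows by a constant golden-ratio factor per step, so the accuracy/cost trade-off is controlled directly by `tol`.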
APA, Harvard, Vancouver, ISO, and other styles
25

Li, Liangda. "Influence modeling in behavioral data." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53879.

Full text
Abstract:
Understanding influence in behavioral data has become increasingly important in analyzing the causes and effects of human behaviors under various scenarios. Influence modeling enables us to learn not only how human behaviors drive the diffusion of memes spread in different kinds of networks, but also how chain reactions evolve in people's sequential behaviors. In this thesis, I investigate appropriate probabilistic models for efficiently and effectively modeling influence, together with applications and extensions of the proposed models to analyze behavioral data in computational sustainability and information search. One fundamental problem in influence modeling is learning the degree of influence between individuals, which we call social infectivity. In the first part of this work, we study how to efficiently and effectively learn social infectivity in diffusion phenomena in social networks and other applications. We replace the pairwise infectivity in multidimensional Hawkes processes with linear combinations of time-varying features, and optimize the associated coefficients with lasso regularization. In the second part of this work, we investigate the modeling of influence between marked events in the application of energy consumption, which tracks the diffusion of the mixed daily routines of household members. Specifically, we leverage temporal and energy-consumption information recorded by smart meters in households for influence modeling, through a novel probabilistic model that combines marked point processes with topic models. The learned influence is intended to reveal the sequential appliance-usage patterns of household members, and thereby helps address the problem of energy disaggregation. In the third part of this work, we investigate a complex influence-modeling scenario that requires simultaneous learning of both infectivity and influence existence. Specifically, we study the modeling of influence in search behaviors, where the influence tracks the diffusion of the mixed search intents of search engine users in information search. We leverage temporal and textual information in query logs for influence modeling, through a novel probabilistic model that combines point processes with topic models. The learned influence is intended to link queries that serve the same information need, and thereby helps address the problem of search task identification. Modeling influence with the Markov property also helps us understand the chain reaction in the interaction of search engine users with a query auto-completion (QAC) engine within each query session. The fourth part of this work studies how a user's present interaction with a QAC engine influences his or her interaction in the next step. We propose a novel probabilistic model based on Markov processes, which leverages such influence in predicting users' click choices among the suggested queries of QAC engines, and accordingly improves the suggestions to better satisfy users' search intents. In the fifth part of this work, we study the mutual influence between users' behaviors in QAC logs and normal click logs across different query sessions. We propose a probabilistic model to explore the correlation between users' behavior patterns in QAC and click logs, and expect to capture the mutual influence between users' behaviors in QAC and click sessions.
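The first building block the abstract names — pairwise infectivity expressed as a linear combination of time-varying features inside a multidimensional Hawkes intensity — can be sketched as below. The exponential kernel, the non-negativity clamp, and the feature interface are illustrative assumptions, not the dissertation's specification:

```python
import math

def hawkes_intensity(t, base_rate, events, features, coeffs, decay=1.0):
    """Conditional intensity of one node at time t in a Hawkes process where the
    infectivity of each past event is a linear combination of its features.
    events:  list of (event_time, source_node) pairs
    features(source, event_time): feature vector for that (source, time) pair
    coeffs:  weights that would be learned, e.g. with lasso regularization
    """
    intensity = base_rate
    for t_j, src in events:
        if t_j < t:  # only past events excite the process
            infectivity = sum(w * f for w, f in zip(coeffs, features(src, t_j)))
            intensity += max(infectivity, 0.0) * math.exp(-decay * (t - t_j))
    return intensity
```

With no past events the intensity reduces to the base rate; each past event adds a feature-weighted, exponentially decaying excitation, which is what makes the learned coefficients interpretable as degrees of social infectivity.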
APA, Harvard, Vancouver, ISO, and other styles
26

Johansson, Emelie. "Information seeking behavior hos ungdomar på Örnsköldsviks stadsbibliotek : En undersökning med stöd i Kuhlthaus Information Search Process -modell vid litteratursökning." Thesis, Umeå universitet, Sociologiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-129370.

Full text
Abstract:
This thesis examines the information search process of a group of selected young people at Örnsköldsvik city library. The aim has been to investigate and understand young library visitors' information seeking behavior within an unconventional shelf arrangement system. After the National Library of Sweden's 2008 decision to discontinue the SAB classification, Örnsköldsvik city library chose a relatively new way of placing and arranging its books. The new system, called Rainbow, guides the visitor through clear subject categories, plain-text labels and colours. With Carol C. Kuhlthau's six-stage ISP model as support, which describes thoughts, feelings and actions during an information search process, the study examines how the young visitors approached their information seeking at the Örnsköldsvik library. Through interviews and observations of the young library users, together with further interviews with the library's librarians, material and information were collected and analysed concerning how the young people's information search processes unfolded. The results of the study showed that the intentions the information seeker initially brings into the search process affect the search outcome. It also turned out that the feelings, thoughts and actions the respondents were expected to experience differed in part from the experiences described by Kuhlthau's respondents in their information searches, which form the basis of the ISP model. Since the study concerns public libraries and young people's leisure time, the respondents at the Örnsköldsvik library were also not as aware of their own search process as Kuhlthau's respondents had been, something that seemed clear from the start. The respondents in this study nevertheless stated that the shelf arrangement itself worked well for them when searching for information.
APA, Harvard, Vancouver, ISO, and other styles
27

Junior, Eder Lorenzato. "Desenvolvimento de uma plataforma modelo para a busca de ligantes com potencial leishmanicida baseado na inibição seletiva da enzima diidroorotato desidrogenase." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/60/60136/tde-07062016-094346/.

Full text
Abstract:
Neglected tropical diseases (NTDs) pose a devastating obstacle to health and remain a serious impediment to poverty reduction and socioeconomic development. Of the 17 diseases listed among NTDs, the leishmaniases, including cutaneous and mucocutaneous leishmaniasis, are characterized by a high number of cases and high treatment costs, with therapies that are usually of low efficacy and highly toxic. Even more problematic, the investment devoted to controlling and combating these diseases and to developing new therapies remains limited. Nowadays, academia has an important role in the fight against NTDs, contributing to the search for new therapeutic targets and to the development of lead compounds. Within this context, the present project aimed at the development of a pipeline for the identification of leishmanicidal compounds based on the selective inhibition of the enzyme dihydroorotate dehydrogenase from Leishmania Viannia braziliensis (LbDHODH). LbDHODH acts in the de novo biosynthetic pathway of pyrimidine nucleotides by catalysing the conversion of dihydroorotate to orotate, and has been considered an important macromolecular target against proliferative and parasitic diseases. LbDHODH was successfully cloned, expressed and purified. The target enzyme was characterized by kinetic studies and its structure was solved by X-ray crystallography. Inhibition assays were performed in the presence of orotate, the product of catalysis and a natural inhibitor of DHODH. Its inhibitory potential was evaluated by estimating the IC50, and the protein-ligand interaction was characterized by crystallographic studies. In silico and in vitro strategies were used in the search for LbDHODH inhibitors. Cross-validation studies were performed against the homologous human enzyme. The inhibitors displaying the highest selectivity indices were evaluated against both the promastigote and amastigote forms of Leishmania braziliensis.
Our work allowed the identification of a new class of compounds that selectively inhibit LbDHODH and show leishmanicidal activity. They will be used as prototypes for the development of new generations of LbDHODH inhibitors. Moreover, our pipeline can be used to screen large chemical libraries as a tool for the identification of different chemical entities that can contribute to the development of new therapeutic strategies for the treatment of leishmaniasis.
APA, Harvard, Vancouver, ISO, and other styles
28

Bornia, Poulsen Camilo José. "Desenvolvimento de um modelo para o School Timetabling Problem baseado na Meta-Heurística Simulated Annealing." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/39522.

Full text
Abstract:
At the beginning of every academic term, managers of educational institutions face a typical problem: building the class timetables according to the lesson demands of each subject, while respecting the availability constraints of everyone involved. Known in the literature as the School Timetabling Problem (STP), this classic combinatorial optimization problem is notoriously complex because of its large number of variables and constraints. Since it depends on the rules of each country's educational system, the STP has countless variants, each with its own particularities. This work proposes a model for the STP under the Brazilian educational system, allocating not only teachers but also determining which subject each teacher should teach and assigning the rooms where lessons take place. The proposed model, based on the simulated annealing metaheuristic, was designed so that each user institution is free to define the penalty for each possible type of nonconformity or constraint, allowing the algorithm to find a solution at the lowest possible cost.
Every beginning of term, educational institution managers face a typical problem: planning the classes' timetable according to the lesson demands for each subject, while also considering the schedule constraints of all actors. Known as the school timetabling problem (STP), this classic combinatorial optimization problem is remarkably complex due to its high number of variables and constraints. Owing to the rules of each country's educational system, the STP has countless variants, each with its own set of features. This dissertation seeks to offer a model for the STP tailored to the Brazilian educational system, allocating not only teachers but also determining which subject each teacher should teach and assigning classrooms, laboratories and the like. The proposed model, based on the simulated annealing metaheuristic, was conceived so that each educational institution using it is free to define the penalty applied to each possible kind of nonconformity or constraint, so that the algorithm can find a solution at the lowest possible cost.
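The simulated-annealing approach the abstract describes can be illustrated with a minimal, self-contained sketch. This is not the dissertation's implementation: the toy clash-counting cost function, the conflict pairs and all names are invented for illustration, and institution-defined penalty weights would enter through the cost function.

```python
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=10.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated-annealing loop: a worse neighbour is accepted with
    probability exp(-delta/T), so the search can escape local minima."""
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbour(current, rng)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= cooling  # geometric cooling schedule
    return best, best_cost

# Toy instance: assign 6 lessons to 3 time slots; conflicting pairs
# (e.g. lessons sharing a teacher) must not occupy the same slot.
conflicts = {(0, 1), (2, 3), (4, 5)}

def cost(assignment):
    # Number of violated conflicts; weighted penalties would be summed here.
    return sum(1 for a, b in conflicts if assignment[a] == assignment[b])

def neighbour(assignment, rng):
    # Move one random lesson to a random slot.
    new = list(assignment)
    new[rng.randrange(len(new))] = rng.randrange(3)
    return new

best, best_cost = simulated_annealing([0] * 6, cost, neighbour)
print(best_cost)
```

On this tiny instance the search reliably reaches a clash-free timetable; the point of the annealing step is that, on hard instances, occasionally accepting worse timetables avoids getting stuck in a local minimum.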
APA, Harvard, Vancouver, ISO, and other styles
29

Bego, Marcelo da Silva. "Three essays on agricultural markets." reponame:Repositório Institucional do FGV, 2017. http://hdl.handle.net/10438/18066.

Full text
Abstract:
This thesis presents three essays investigating three relevant questions about agricultural markets: farmers' hedging choices; optimal government taxation; and government reactions to price volatility. The first essay fills a theoretical gap by proving that wealthier farmers hedge more than less wealthy farmers. The second essay examines optimal government taxation and shows how Ramsey government policies compete with the financial market. The third essay shows the causal effect of price volatility on government subsidies using data from the United States wheat market. It also shows that the government reacts to price volatility mainly when prices are low enough, and that these reactions occur regardless of the farm program.
This dissertation presents three essays that investigate three relevant issues in agricultural markets: farmers' choice of hedge; optimal government taxation; and government farm program reactions to price volatility. The first essay fills a theoretical gap by showing that highly profitable farmers hedge more than less profitable ones. The second essay examines optimal government taxation and shows how Ramsey government policies compete with financial markets. The third essay establishes the causality from price volatility to government subsidies using US wheat market data. It also shows that the government reacts to price volatility mainly when prices are low enough, regardless of the farm program design.
APA, Harvard, Vancouver, ISO, and other styles
30

Garcia, Rodrigo Moreira [UNESP]. "Modelos de comportamento de busca de informação: contribuições para a organização da informação." Universidade Estadual Paulista (UNESP), 2007. http://hdl.handle.net/11449/93696.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Universidade Estadual Paulista (UNESP)
With the development of the Web and, consequently, of digital libraries, open archives, repositories and the like, new information systems and sources have been created, placing users in a new information search and retrieval environment characterized by an overabundance of informational resources arranged in hypertext format, significantly increasing the possibilities of access. In this digital information environment, one of the functions of indexing languages is to represent the information of the documents contained in these information systems so that the user is able to access that information for use, communication and, above all, for the generation of new knowledge. However, the key problem of information retrieval lies in the search for theoretical and methodological procedures for information organization and representation; that is, one of the main challenges is to find ways of improving methods of information organization at the conceptual level so as to increase accessibility for end users. Accordingly, this work proposes an exploratory study based on a theoretical analysis of selected information search behaviour models which, according to the specialized literature, have had the greatest research impact in the area. Such models can be incorporated into the processes of subject treatment of information, allowing librarians and indexers to become aware of the variables that interfere in users' information search and retrieval processes and, in this way, to understand the factors that affect information organization and representation. The results indicate that an approach that places the user at the centre of concerns in information organization must draw on the accumulated experience...
With the development of the Web and, consequently, of digital libraries, open archives, repositories and the like, new information systems and sources have been created, placing users in a new search and retrieval environment characterized by an abundance of informational resources in hypertext format, significantly increasing access possibilities. In this digital information environment, one of the functions of indexing languages is to represent the information of the documents contained in these systems so that users can access it for use, communication and, above all, for the generation of new knowledge. However, the key problem of information retrieval is the search for theoretical and methodological procedures for information organization and representation. In other words, the main challenge is to find ways of improving methods of information organization at the conceptual level so as to increase accessibility for end users. To this end, an exploratory study was conducted, consisting of a theoretical analysis of selected information search behaviour models which, according to the specialized literature, have had the greatest research impact in the area. Such models can be incorporated into the subject treatment of information, allowing librarians and indexers to take account of the variables that intervene in users' search and retrieval processes and thus to understand the factors that affect information organization and representation. The results suggest that an approach placing the user at the centre of concerns in information organization should draw on the accumulated experience of so-called user studies. It is concluded that studies of users' information behaviour...
APA, Harvard, Vancouver, ISO, and other styles
31

Stavrunova, Olena. "Labor market policies in an equilibrium matching model with heterogeneous agents and on-the-job search." Diss., University of Iowa, 2007. http://ir.uiowa.edu/etd/150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Gaspar, Manuel Augusto Ribeiro. "Automatic system for approximate and noncontiguous DNA sequences search." Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/23810.

Full text
Abstract:
Master's degree in Electronics and Telecommunications Engineering
The ability to search for DNA sequences similar to others contained in a larger sequence, such as a chromosome, plays a very important role in the study of organisms and of possible links between different species. Despite the existence of several techniques and algorithms designed for sequence search, this problem remains open to the development of new tools that improve on existing ones. This thesis presents a solution for sequence search based on data compression or, more specifically, on finite-context models, which yields a measure of similarity between a reference and a target. The method uses a finite-context-model approach to build a statistical model of the reference sequence and to obtain the estimated number of bits needed to encode the target sequence using the reference model. Throughout this work we studied the method described above, first under controlled conditions and, finally, in a study of regions of modern human genomic DNA that are not found in ancient DNA (or are found only with a high degree of dissimilarity).
The ability to search for DNA sequences similar to those contained in a larger sequence, such as a chromosome, plays a very important role in the study of organisms and of possible connections between different species. Even though several techniques and algorithms for sequence search already exist, this problem is still open to the development of new tools that improve on current ones. This thesis proposes a solution for sequence search based on data compression or, more specifically, on finite-context models, which yields a measure of similarity between a reference and a target. The method uses finite-context models to build a statistical model of the reference sequence and to obtain the estimated number of bits necessary to encode the target sequence using the reference model. In this work we studied the above-described method, initially under controlled conditions and, finally, in a study of DNA regions belonging to the modern human genome that cannot be found in ancient DNA (or can be found only with a high dissimilarity rate).
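The compression-based similarity idea can be sketched roughly as follows. This is a toy illustration, not the thesis's actual tool: the order of the model, the additive smoothing scheme and all function names are assumptions. An order-k finite-context model is trained on the reference, and the estimated number of bits needed to encode a target under that model serves as a (dis)similarity measure.

```python
import math
from collections import defaultdict

def train_fcm(reference, k):
    """Count (k-length context -> next symbol) occurrences in the reference."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(reference) - k):
        counts[reference[i:i + k]][reference[i + k]] += 1
    return counts

def estimated_bits(target, counts, k, alphabet="ACGT", alpha=1.0):
    """Estimated bits to encode the target under the reference model,
    with additive (Laplace) smoothing for unseen events."""
    total = 0.0
    for i in range(k, len(target)):
        ctx, sym = target[i - k:i], target[i]
        seen = counts.get(ctx, {})
        num = seen.get(sym, 0) + alpha
        den = sum(seen.values()) + alpha * len(alphabet)
        total += -math.log2(num / den)  # Shannon code length of this symbol
    return total

ref = "ACGTACGTACGT"
similar, dissimilar = "ACGTACGT", "GGGGCCCC"
k = 2
model = train_fcm(ref, k)
# A target similar to the reference compresses into fewer estimated bits.
print(estimated_bits(similar, model, k) < estimated_bits(dissimilar, model, k))
```

Real tools of this kind mix several context orders and use far longer references, but the principle is the same: fewer bits means the target looks statistically like the reference.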
APA, Harvard, Vancouver, ISO, and other styles
33

Potter, Charles D. "Search for evidence of fermi surface nesting in Bi₂Sr₂Ca₁Cu₂O₈." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/40088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Zaveh, Fakhraldin. "Essays on the labor market." Doctoral thesis, Universitat Autònoma de Barcelona, 2014. http://hdl.handle.net/10803/284145.

Full text
Abstract:
The labor market is an important and interesting subject for economic analysis. In this thesis I aim to marginally increase our knowledge of the labor market. In particular, I focus on unemployment and average labor productivity. In the first chapter, "Search, rigidities and unemployment dynamics", I study the sources of cross-country differences in unemployment dynamics. I argue that regulations can affect unemployment dynamics, as observed in the data. I introduce regulation into a standard labor search model; the model explains about half of the cross-country variation. In the second chapter, "Heterogeneous Workers, Firm Dynamics and the Countercyclicality of Productivity", I develop a labor search model with both firm and worker heterogeneity that can generate a rich set of job flows at the micro level. I use the model to study the possible origin of the decline in the cyclicality of productivity. Finally, in the third chapter, "Was it the Fed or the heterogeneity that changed the cyclical pattern of productivity?", I use dynamic factor analysis to study the behavior of productivity as well as unemployment.
The labor market is an important and interesting topic in economics. In this thesis I aim to marginally increase our knowledge of the labor market. In particular, I focus on unemployment and average labor productivity. In the first chapter, “Search, rigidities and unemployment dynamics”, I study the sources of cross-country differences in unemployment dynamics. I argue that regulations can affect the dynamics of unemployment, as we observe in the data. I introduce regulation into a standard model of labor search; the model can explain about half of the cross-country variation. In the second chapter, ”Heterogeneous Workers, Firm Dynamics and the Countercyclicality of Productivity”, I develop a labor search model with both firm and worker heterogeneity that is able to generate a rich set of employment flows at the micro level. I use the model to study the possible source of the decline in the cyclicality of productivity. Finally, in the third chapter, “Was it the Fed or the heterogeneity that changed the cyclical pattern of productivity?”, I use dynamic factor analysis to study the behavior of productivity as well as unemployment.
APA, Harvard, Vancouver, ISO, and other styles
35

Park, Yongmin. "Interactions between heterogeneity in nominal rigidities and search frictions in general equilibrium models." Thesis, University of Exeter, 2018. http://hdl.handle.net/10871/33065.

Full text
Abstract:
This dissertation consists of three chapters that aim to build a framework for studying interactions between the labour market and macroeconomic dynamics. To achieve this, we reformulate a standard New Keynesian dynamic stochastic general equilibrium (DSGE) model to include search and matching frictions in the labour market and heterogeneity in price and wage stickiness. The first chapter, coauthored with Professor Engin Kara, builds a real business cycle model with labour search frictions and heterogeneity in wage stickiness. Shimer’s (2005) critique of labour search models, namely that they cannot explain observed unemployment movements, reignited a long-standing debate on unemployment fluctuations and wage determination. Gertler and Trigari (2009) introduce wage stickiness into the model to match unemployment volatility, while Pissarides (2009) finds this modification unsatisfactory, citing evidence of high wage cyclicality. We find heterogeneity in wage stickiness in microdata on wages. Our model, which reflects this heterogeneity, matches the data better than its one-sector alternatives. The second chapter, coauthored with Professor Engin Kara, studies output dynamics in New Keynesian models with the standard labour market and heterogeneity in price stickiness. We show analytically and numerically that these models can reproduce a hump-shaped output response to persistent monetary shocks, a key feature of the monetary transmission mechanism. Versions of the models without heterogeneity cannot generate a hump. Flexible prices in models with heterogeneity play a crucial role by generating inertia in price-setting and output. The third chapter studies how labour search frictions affect output dynamics in New Keynesian models when combined with heterogeneity in nominal rigidities. The long-term employment relationships that arise under the search and matching framework make marginal costs history dependent.
We show that this history dependence generates inertia in the model. Heterogeneity in nominal rigidities significantly reinforces this inertia, resulting in a hump-shaped output response to persistent monetary shocks. When wages are sticky, the model without search frictions cannot replicate a hump even for persistent monetary shocks.
APA, Harvard, Vancouver, ISO, and other styles
36

Kermorvant, Patrice. "Tests de l'effet fixe dans les plans factoriels croisés : modèles linéaires mixtes et méthode bayesienne empirique,applications aux essais thérapeutiques multicentriques." Paris 5, 1994. http://www.theses.fr/1994PA11T042.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Guo, Chuanliang. "Effects of turbulence modelling on the analysis and optimisation of high-lift configurations." Thesis, Cranfield University, 2011. http://dspace.lib.cranfield.ac.uk/handle/1826/7220.

Full text
Abstract:
Because of their significant effect on the performance and competitiveness of aircraft, high-lift devices are of extreme importance in aircraft design. The flow physics of high-lift devices is so complex that traditional one-pass and multi-pass design approaches cannot reach the most optimised concept, and multi-objective design optimisation (MDO) methods are increasingly explored for this design task. The accuracy of the optimisation, however, depends on the accuracy of the underlying Computational Fluid Dynamics (CFD) solver. The complexity of the flow around high-lift configurations, namely transition and separation effects, leads to substantial uncertainty in CFD results; in particular, the uncertainty related to the turbulence modelling aspect of the CFD becomes important. Furthermore, employing full viscous flow solvers within MDO puts severe limitations on the density of computational meshes in order to achieve a computationally feasible solution, thereby adding to the uncertainty of the outcome. This thesis explores the effect of uncertainties in CFD modelling when detailed aerodynamic analysis is required in the computational design of aircraft configurations. For the purposes of this work, we select the benchmark NLR7301 multi-element airfoil (main wing and flap). The flow around this airfoil features all the challenges typical of high-lift configurations, while at the same time a wealth of experimental and computational data is available in the literature for this case. A benchmark bi-objective shape optimisation problem is formed by examining the trade-off between lift and drag coefficients at near-stall conditions. Following a detailed validation and grid convergence study, three widely used turbulence models are applied within the Reynolds-Averaged Navier-Stokes (RANS) approach: k-ε Realizable, k-ω SST and Spalart-Allmaras.
The results show that the different turbulence models behave differently in the optimisation environment and yield substantially different optimised shapes, while maintaining the overall optimisation trends (e.g. the tendency to maximise camber for increased lift). The differences between the models, however, exhibit systematic trends irrespective of the criteria used to select the target configuration in the Pareto front. An a-posteriori error analysis is also conducted for a wide range of configurations of interest resulting from the optimisation process. Whereas Spalart-Allmaras exhibits the best accuracy for the datum airfoil, the overall arrangement of the results obtained with the different models in the (lift, drag) plane is consistent across all optimisation scenarios, leading to increased confidence in the MDO/RANS CFD coupling.
APA, Harvard, Vancouver, ISO, and other styles
38

Sun, Qing. "Greedy Inference Algorithms for Structured and Neural Models." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/81860.

Full text
Abstract:
A number of problems in Computer Vision, Natural Language Processing, and Machine Learning produce structured outputs in high-dimensional spaces, which makes searching for the globally optimal solution extremely expensive. Thus, greedy algorithms, which trade precision for efficiency, are widely used. In this thesis, we prove that greedy algorithms are effective and efficient for finding multiple top-scoring hypotheses from structured (neural) models: 1) Entropy estimation: we aim to find deterministic samples that are representative of a Gibbs distribution via a greedy strategy. 2) Searching for a set of diverse and high-quality bounding boxes: we formulate this problem as the constrained maximization of a monotone submodular function, for which a greedy algorithm with a near-optimal guarantee exists. 3) Fill-in-the-blank: the goal is to generate missing words conditioned on the surrounding context, given an image. We extend beam search, a greedy algorithm applicable to unidirectional expansion, to bidirectional neural models in which both past and future information must be considered. We test our proposed approaches on a series of Computer Vision and Natural Language Processing benchmarks and show that they are effective and efficient.
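Of the greedy strategies listed, standard (unidirectional) beam search is the easiest to sketch. The following is a generic toy version, not the bidirectional variant the thesis develops; the vocabulary, scorer and step count are invented for illustration. At each step, only the `width` highest-scoring partial hypotheses are kept, trading global optimality for tractable search.

```python
import heapq

def beam_search(start, expand, score, width=2, steps=3):
    """Keep only the `width` best partial hypotheses per expansion step."""
    beam = [start]
    for _ in range(steps):
        # Expand every hypothesis in the beam by every candidate token ...
        candidates = [seq + [tok] for seq in beam for tok in expand(seq)]
        # ... then greedily prune back down to the beam width.
        beam = heapq.nlargest(width, candidates, key=score)
    return max(beam, key=score)

# Toy setup: at each step we may append token 0, 1 or 2, and the
# (hypothetical) scorer simply rewards large sums.
expand = lambda seq: [0, 1, 2]
score = lambda seq: sum(seq)
print(beam_search([], expand, score))  # [2, 2, 2]
```

With this trivially monotone scorer, greedy pruning happens to find the global optimum; with real neural scorers it generally does not, which is exactly the precision/efficiency trade-off the abstract refers to.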
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
39

Korger, Christina. "Clustering of Distributed Word Representations and its Applicability for Enterprise Search." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-208869.

Full text
Abstract:
Machine learning of distributed word representations with neural embeddings is a state-of-the-art approach to modelling semantic relationships hidden in natural language. The thesis “Clustering of Distributed Word Representations and its Applicability for Enterprise Search” covers different aspects of how such a model can be applied to knowledge management in enterprises. A review of distributed word representations and related language modelling techniques, combined with an overview of applicable clustering algorithms, constitutes the basis for the practical studies. The latter have two goals: firstly, they examine the quality of German embedding models trained with gensim under a selected choice of parameter configurations. Secondly, clusterings conducted on the resulting word representations are evaluated against the objective of retrieving immediate semantic relations for a given term. The application of the final results to company-wide knowledge management is subsequently outlined using the example of the platform intergator and conceptual extensions.
APA, Harvard, Vancouver, ISO, and other styles
40

Limbu, Dilip Kumar. "Contextual information retrieval from the WWW." Click here to access this resource online, 2008. http://hdl.handle.net/10292/450.

Full text
Abstract:
Contextual information retrieval (CIR) is a critical technique for today’s search engines in terms of facilitating queries and returning relevant information. Despite its importance, little progress has been made in its application, due to the difficulty of capturing and representing contextual information about users. This thesis details the development and evaluation of the contextual SERL search, designed to tackle some of the challenges associated with CIR from the World Wide Web. The contextual SERL search utilises a rich contextual model that exploits implicit and explicit data to modify queries to more accurately reflect the user’s interests as well as to continually build the user’s contextual profile and a shared contextual knowledge base. These profiles are used to filter results from a standard search engine to improve the relevance of the pages displayed to the user. The contextual SERL search has been tested in an observational study that has captured both qualitative and quantitative data about the ability of the framework to improve the user’s web search experience. A total of 30 subjects, with different levels of search experience, participated in the observational study experiment. The results demonstrate that when the contextual profile and the shared contextual knowledge base are used, the contextual SERL search improves search effectiveness, efficiency and subjective satisfaction. The effectiveness improves as subjects have actually entered fewer queries to reach the target information in comparison to the contemporary search engine. In the case of a particularly complex search task, the efficiency improves as subjects have browsed fewer hits, visited fewer URLs, made fewer clicks and have taken less time to reach the target information when compared to the contemporary search engine. 
Finally, subjects have expressed a higher degree of satisfaction on the quality of contextual support when using the shared contextual knowledge base in comparison to using their contextual profile. These results suggest that integration of a user’s contextual factors and information seeking behaviours are very important for successful development of the CIR framework. It is believed that this framework and other similar projects will help provide the basis for the next generation of contextual information retrieval from the Web.
APA, Harvard, Vancouver, ISO, and other styles
41

Freyberger, Jonathan E. "Using association rules to guide a search for best fitting transfer models of student learning." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0430104-014117/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: apriori; ASAS; association rules; logistic regression; transfer models; predicting performance. Includes bibliographical references (p. 50-51).
APA, Harvard, Vancouver, ISO, and other styles
42

Zhao, Ying, and ying zhao@rmit edu au. "Effective Authorship Attribution in Large Document Collections." RMIT University. Computer Science and Information Technology, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080730.162501.

Full text
Abstract:
Techniques that can effectively identify authors of texts are of great importance in scenarios such as detecting plagiarism, and identifying a source of information. A range of attribution approaches has been proposed in recent years, but none of these are particularly satisfactory; some of them are ad hoc and most have defects in terms of scalability, effectiveness, and computational cost. Good test collections are critical for evaluation of authorship attribution (AA) techniques. However, there are no standard benchmarks available in this area; it is almost always the case that researchers have their own test collections. Furthermore, collections that have been explored in AA are usually small, and thus whether the existing approaches are reliable or scalable is unclear. We develop several AA collections that are substantially larger than those in literature; machine learning methods are used to establish the value of using such corpora in AA. The results, also used as baseline results in this thesis, show that the developed text collections can be used as standard benchmarks, and are able to clearly distinguish between different approaches. One of the major contributions is that we propose use of the Kullback-Leibler divergence, a measure of how different two distributions are, to identify authors based on elements of writing style. The results show that our approach is at least as effective as, if not always better than, the best existing attribution methods-that is, support vector machines-for two-class AA, and is superior for multi-class AA. Moreover our proposed method has much lower computational cost and is cheaper to train. Style markers are the key elements of style analysis. We explore several approaches to tokenising documents to extract style markers, examining which marker type works the best. 
We also propose three systems that boost AA performance by combining evidence from various marker types, motivated by the observation that no single type of marker can satisfy all AA scenarios. To address the scalability of AA, we propose the novel task of authorship search (AS), inspired by document search and intended for large document collections. Our results show that AS is reasonably effective at finding documents by a particular author, even within a collection of half a million documents. Beyond search, we also propose an AS-based method to identify authorship. Our method is substantially more scalable than any method published in prior AA research, in terms of both collection size and the number of candidate authors; the discrimination scales up to several hundred authors.
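The KL-divergence idea in this abstract can be sketched over character n-grams; the marker type (trigrams), the smoothing scheme, and all function names below are illustrative assumptions, not the thesis implementation:

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-grams, one common family of style markers."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def kl_divergence(doc, profile, n=3, epsilon=1e-6):
    """Smoothed KL divergence D(doc || profile) over character n-grams.
    Epsilon smoothing keeps unseen n-grams from producing log(…/0)."""
    p = Counter(char_ngrams(doc, n))
    q = Counter(char_ngrams(profile, n))
    vocab = set(p) | set(q)
    p_total = sum(p.values()) + epsilon * len(vocab)
    q_total = sum(q.values()) + epsilon * len(vocab)
    return sum(
        ((p[g] + epsilon) / p_total)
        * math.log(((p[g] + epsilon) / p_total) / ((q[g] + epsilon) / q_total))
        for g in vocab
    )

def attribute(doc, author_profiles):
    """Attribute doc to the author whose profile minimises the divergence."""
    return min(author_profiles, key=lambda a: kl_divergence(doc, author_profiles[a]))
```

Because the candidate "model" is just a frequency profile, training amounts to counting, which is one way a divergence-based approach can be far cheaper than training an SVM per author.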
APA, Harvard, Vancouver, ISO, and other styles
43

Colomé, Rosa. "Consumer choice in competitive location models." Doctoral thesis, Universitat Pompeu Fabra, 2002. http://hdl.handle.net/10803/7330.

Full text
Abstract:
The main aim of this thesis is the introduction of consumer store-choice theories into discrete competitive location models with a maximum-capture objective function.

After the introduction and the literature review, chapter 3 analyses the importance of reflecting actual consumer behaviour with respect to distance for the optimality of the locations obtained by traditional discrete competitive location models.

Once the distance attribute has been analysed, chapter 4 presents a methodology for determining which store attributes (other than distance) should be included in a new version of the Maximum Capture discrete competitive location models for the retail sector, as well as how these parameters ought to be represented. The revealed-preference store-choice model used to define this methodology is the Multiplicative Competitive Interaction (MCI) model. The methodology is tested in the supermarket sector in two different scenarios: Barcelona and Milton Keynes.

Chapter 5 presents the "New Chance-Constrained Maximum Capture Location Problem", a maximum-capture model that takes the store-choice theories into account and adds a stochastic profitability-threshold constraint.

Finally, chapter 6 presents the algorithms developed to solve the models presented in this thesis (chapters 3 and 5). Specifically, it presents the formulation and the computational experience for two metaheuristics, based on the metaheuristics GRASP, Ant System and Tabu Search.
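The MCI model referenced above combines store attributes and distance multiplicatively into choice probabilities. A minimal sketch under assumed notation (the attribute layout, the sensitivity parameters `beta` and `lam`, and the function name are illustrative, not the thesis formulation):

```python
def mci_probability(attributes, beta, distances, lam):
    """MCI choice probabilities for one consumer over competing stores:
    utility_j = (prod_k attr_jk ** beta_k) * distance_j ** (-lam),
    P(j) = utility_j / sum of utilities over all stores."""
    utilities = []
    for attrs, d in zip(attributes, distances):
        u = d ** (-lam)               # distance deterrence
        for a, b in zip(attrs, beta): # other store attributes
            u *= a ** b
        utilities.append(u)
    total = sum(utilities)
    return [u / total for u in utilities]
```

With identical attributes, a store twice as far away (and lam = 1) captures half the utility, so the probabilities split 2/3 versus 1/3; a maximum-capture location model would sum such probabilities over demand points.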
APA, Harvard, Vancouver, ISO, and other styles
44

Mantani, Luca. "Simplified t-channel models for dark matter searches." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13444/.

Full text
Abstract:
An enormous amount of experimental evidence for the existence of a non-luminous form of matter in the Universe has accumulated over roughly a century. Clarifying its nature has become one of the most exciting and urgent challenges in the effort to understand our Universe. In this work I present a study of an approach to discovering Dark Matter interpreted as an elementary particle, and of the possibility of producing and detecting it at colliders. In the introductory part I present a brief history of the astrophysical and astronomical evidence that led to the Dark Matter hypothesis. Assuming that Dark Matter consists of an elementary particle beyond those predicted by the Standard Model, I then outline the three main detection methods currently used to identify it. In the second part I discuss how to construct theories in which current searches and their results can be interpreted. I compare different approaches, from complete models down to effective field theories. In particular, I discuss their strengths and weaknesses, motivating the use of an intermediate scheme, the so-called simplified-model approach, characterised by a limited number of new states and parameters, which overcomes the intrinsic limitations of effective theories in the context of collider searches. In the final part I provide an exhaustive classification of simplified models in the t-channel, which have not yet been systematically analysed in the literature. For each of them I present a possible UV completion and the most promising signals at the LHC. To this end, all the models considered have been implemented in Monte Carlo tools, validated against analytical results, studied in detail, and made ready for public release to the LHC phenomenology and experimental communities.
APA, Harvard, Vancouver, ISO, and other styles
45

Dimelow, David J. "Non-linear dynamics of an offshore mooring tower." Thesis, University of Aberdeen, 1997. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU092912.

Full text
Abstract:
Offshore mooring towers are one of a number of single-point mooring (SPM) systems which provide a berthing point for tankers, enabling the transfer of crude oil to or from the moored vessel. The periodic slackening of the mooring hawser between the vessel and the tower gives rise to a discontinuously non-linear restoring function. Hence, the wave-induced motions of the tower can be highly complex, with the possibility of large-amplitude, and potentially hazardous, motions. A large amount of work has been carried out on single-point mooring systems; however, much of it has focused on mooring forces and tanker motions, and few studies have looked in depth at the motions of the mooring structure itself. In this thesis, mooring tower motions have been studied in detail using three techniques: numerical analysis, approximate analytical methods, and experimental modelling. Each of these approaches has demonstrated that large-amplitude, and hence potentially hazardous, motions can occur. Numerical predictions of motion agreed very well with measured responses, particularly for synchronous motions; for more complex motions, however, such as subharmonic resonances, the agreement between measured and predicted results deteriorated. Approximate analytical methods did not perform so well: useful results were obtained only for the simplified single-degree-of-freedom symmetric model, highlighting the need for a more sophisticated method. This research has been successful in providing insight into the complex non-linear motions of an offshore mooring tower. The fundamental mechanisms and features of the system have been presented. The methodology was applied here to the specific case of an offshore mooring tower, but the general approach to investigating the non-linear motions of the structure is widely applicable in the field of offshore engineering.
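The discontinuously non-linear restoring function described above can be illustrated numerically. This is a minimal sketch, not the thesis model: a single-degree-of-freedom oscillator whose stiffness jumps when the hawser slackens, integrated with classical RK4 (all parameter values and names are assumptions):

```python
import math

def restoring(x, k_tower=1.0, k_hawser=4.0):
    """Discontinuous restoring force: the hawser adds stiffness
    only while taut (here, for displacement x > 0)."""
    return -(k_tower + (k_hawser if x > 0 else 0.0)) * x

def simulate(x0=0.0, v0=0.0, c=0.1, amp=0.5, omega=1.2, dt=1e-3, steps=50_000):
    """RK4 integration of x'' = restoring(x) - c*v + amp*cos(omega*t),
    a driven, damped bilinear oscillator."""
    def deriv(t, x, v):
        return v, restoring(x) - c * v + amp * math.cos(omega * t)
    x, v, t = x0, v0, 0.0
    xs = []
    for _ in range(steps):
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = deriv(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = deriv(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        xs.append(x)
    return xs
```

Because the stiffness differs on the two sides of equilibrium, the steady-state response is asymmetric and can exhibit the subharmonic behaviour the abstract mentions at suitable forcing frequencies.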
APA, Harvard, Vancouver, ISO, and other styles
46

Ceolin, Celina. "A equação unidimensional de difusão de nêutrons com modelo multigrupo de energia e meio heterogêneo : avaliação do fluxo para problemas estacionários e de cinética." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/96762.

Full text
Abstract:
In the present dissertation the one-dimensional neutron diffusion equation for stationary and kinetic problems in a multi-layer slab is solved considering the multi-group energy model. One of the objectives, and an innovation of this work, is to obtain an approximate solution with error estimation and accuracy control in the form of an analytical expression. With such a solution there is no need for the interpolation schemes usually required when the domain is discretised. The neutron flux is expanded in a Taylor series whose coefficients are found using the differential equation and the boundary and interface conditions. The domain is divided into several layers, whose size and polynomial order can be adjusted according to the required accuracy. To solve the eigenvalue problem the conventional power method is used. The methodology is applied to a benchmark problem consisting of the solution of the diffusion equation as an initial condition and the solution of kinetic problems for different transients. The results compare successfully with those in the literature. The convergence of the series is guaranteed by applying a Lipschitz-type criterion for continuous functions. Note that the solution obtained, together with the convergence analysis, shows the robustness and accuracy of this methodology.
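The eigenvalue step in this abstract uses the conventional power method. A generic sketch of power iteration (the operator, normalisation, and tolerances are illustrative; the dissertation applies the method to the multigroup diffusion operator):

```python
def power_method(matvec, n, tol=1e-10, max_iter=10_000):
    """Power iteration: returns the dominant eigenvalue and a normalised
    eigenvector of the linear operator `matvec` acting on length-n vectors.
    For criticality problems the operator is positive, so the dominant
    eigenvalue (k-effective) is real and positive."""
    x = [1.0] * n
    lam = 0.0
    for _ in range(max_iter):
        y = matvec(x)
        new_lam = max(abs(v) for v in y)   # infinity-norm estimate
        y = [v / new_lam for v in y]       # renormalise the flux iterate
        if abs(new_lam - lam) < tol:
            return new_lam, y
        lam, x = new_lam, y
    return lam, x
```

For example, on the symmetric operator `[[2, 1], [1, 2]]` the iteration converges to the dominant eigenvalue 3 with eigenvector proportional to (1, 1).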
APA, Harvard, Vancouver, ISO, and other styles
47

Souza, Thiago Silva de. "Ataîru: modelo ubíquo para o turismo com busca dinâmica de conteúdo baseado em dispositivos móveis." Universidade do Vale do Rio dos Sinos, 2016. http://www.repositorio.jesuita.org.br/handle/UNISINOS/5356.

Full text
Abstract:
The way people do tourism has changed considerably in recent decades. These behavioural changes were influenced, among other things, by technological factors, because tourists now have several support tools available from the moment they first think about travelling, during the trip itself, and even after it. Accordingly, much more is expected of these support tools in terms of usefulness, reliability and quality of service. Many studies have been conducted in this direction, several advances have been achieved, and there are already tools that meet many of tourists' current demands along their itineraries. However, most of them do not take into account some aspects that are important if the tourist is really to be helped, for example finding tourist information about cities considered small or far from major tourist sites. What is most noticeable in these current tools, even commercial ones, is that only large cities with a high flow of tourists are covered. The other cities, which also have their own tourist attractions, are set aside. Another common feature of today's tools is that the guides are pre-generated and deliver a great deal of static data to tourists. In the architectural model developed in this work, the collection of tourist information happens dynamically, through content searches on the web in several open, constantly updated knowledge bases. From there the information is organised and stored in an ontology, and finally delivered to the tourist, based on principles of ubiquitous computing. This ability to search the web means a larger number of cities can be found, yielding a good level of tourist satisfaction with the tool and raising the perceived usefulness of the proposed architecture. Therefore, the objective of this work is to develop a system architecture model for ubiquitous tourism, based on mobile devices, called Ataîru, which in the indigenous Tupi-Guarani language means "traveling companion".
The model takes into account data on the tourist's location and profile, as well as weather data and the date and time at the searched location, to provide personalised answers to tourists. To evaluate the proposed model, three techniques were used: scenario-based evaluation, performance evaluation and usability evaluation. The performance evaluation demonstrated the viability of the model through tests run in a cloud-computing environment. For the usability evaluation, volunteers were invited to use the Ataîru mobile client application, with the objective of assessing (1) perceived ease of use, (2) perceived usefulness of the application, and (3) perceived usefulness of the information presented; these achieved total-agreement rates of 72.5%, 82.5% and 67.5%, respectively, and partial-agreement rates of 27.5%, 15% and 30%, respectively. This shows that the model meets tourists' need for tourist information about cities considered small or far from major tourist sites, in both quantitative and qualitative terms.
APA, Harvard, Vancouver, ISO, and other styles
48

Chaves, Áquila Neves. "Proposta de modelo de veículos aéreos não tripulados (VANTs) cooperativos aplicados a operações de busca." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-26072013-115944/.

Full text
Abstract:
There is a growing body of research into UAVs (Unmanned Aerial Vehicles) in the literature. These robots are well suited to dull, dirty and dangerous missions. Thus, an important application of these vehicles is search operations involving multiple UAVs, in which there is a risk of collision among aircraft and flight time is limited by the maximum pilot working hours. However, despite the huge potential of UAVs, cooperative search operations with this kind of flying robot are not yet taking place. This research topic is a new, multidisciplinary area of study in its infancy, and there are several issues that can be investigated, such as centralised versus decentralised control, path planning for cooperative flights, agent reasoning for UAV tactical planning, safety assessment, reliability of automatic target recognition by cameras, agent coordination mechanisms applied to UAV cooperation, and the application itself. Different path-planning algorithms were studied with the aim of identifying the most suitable for these kinds of operations, and the conclusions are presented. In addition, official Search and Rescue documents were studied in order to learn the best practices already established for this kind of operation, and, finally, an overview of multi-agent coordination theory is presented and evaluated as a means of achieving UAV coordination. This work proposes a model that combines path-planning algorithms, search patterns and multi-agent coordination techniques to obtain a cooperative UAV model. The overarching goal of cooperation is for the performance of the group to exceed the sum of the individual performances in isolation. Then, in order to analyse the average percentage of object detection and the average search time, a simulator was developed and thousands of simulations were run.
It was observed that, using the proposed model, two cooperative UAVs can perform a search operation 57% faster than two non-cooperative UAVs, keeping the average probability of object detection close to 100% while flying over only 30% of the search space.
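Search patterns of the kind standardised by search-and-rescue authorities can be expressed as waypoint generators, and one naive form of cooperation is to partition the area so each UAV sweeps a disjoint band. This is a hypothetical illustration (function names and the band-partition rule are assumptions, not the thesis model):

```python
def parallel_sweep(x_min, x_max, y_min, y_max, track_spacing):
    """Waypoints for a parallel-track ('lawnmower') sweep of a rectangle;
    track_spacing is typically tied to the sensor footprint."""
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += track_spacing
    return waypoints

def split_area(x_min, x_max, y_min, y_max, n_uavs, spacing):
    """Partition the search area into horizontal bands, one sweep per UAV,
    so that no two aircraft fly the same track."""
    band = (y_max - y_min) / n_uavs
    return [
        parallel_sweep(x_min, x_max, y_min + i * band, y_min + (i + 1) * band, spacing)
        for i in range(n_uavs)
    ]
```

Real cooperation, as the abstract notes, also needs coordination mechanisms for collision avoidance and for reallocating bands when a UAV drops out; this sketch only shows the geometric decomposition.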
APA, Harvard, Vancouver, ISO, and other styles
49

Alves, Maria Bernardete Martins. "A percepção do processo de busca de informação em bibliotecas, dos estudantes do curso de Pedagogia de UFSC, à luz do Modelo ISP (Information Search Process)." Florianópolis, SC, 2001. http://repositorio.ufsc.br/xmlui/handle/123456789/80181.

Full text
Abstract:
Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Produção.
This study analysed the process of information search and use in a learning context, namely the preparation of an academic assignment. The study was based on the information-search model known as the ISP (Information Search Process), developed by Kuhlthau (1994), which understands information seeking as a process of meaningful learning from the user's perspective. The research was carried out with a group of thirty-three fifth-term students, in the second semester of 2000, of the Pedagogy programme of the Centro de Ciências da Educação - CED/UFSC, retrospectively capturing, by means of a questionnaire, these students' perception of the information-search process: how they choose a research topic, what role mediators play, what problems they encounter, and what they would like to change in their next research assignment. The results indicate that most students consider choosing a research topic an easy task when information is available, although they choose and delimit the topic before searching in the library. When they do search in the library, the catalogue, whether on cards or automated, is the first informational resource used; nevertheless, personal contact was the resource used most. As for mediators, formal and informal, most students pointed to their classmates as the main mediator. No student mentioned the librarian; they do not recognise the librarian as a partner in the teaching-learning process. Finally, this research highlighted the difficulty students have in describing the research process as a whole, even though they recognise its individual steps.
APA, Harvard, Vancouver, ISO, and other styles
50

Queiroz, Marco Aurélio Lima de. "Business competition dynamics: agent-based modeling simulations of firms in search of economic performance." reponame:Repositório Institucional do FGV, 2010. http://hdl.handle.net/10438/8170.

Full text
Abstract:
The intent of this work is to explore the dynamics of business competition through agent-based modeling simulations of firms searching for performance in markets configured as fitness landscapes. Building upon a growing number of studies in management science that utilise simulation methods and analogies to Kauffman's model of biological evolution, we developed a computer model to emulate competition and observe whether different search methods matter under varied conditions. This study also explores potential explanations for the persistence of above- and below-average performance of firms under competition.
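The fitness-landscape setting in this abstract builds on Kauffman's NK model. A minimal sketch, not the authors' simulation (the values of N and K, the seeding, and the one-bit local-search rule are illustrative assumptions), of a firm hill-climbing on an NK landscape:

```python
import random

def nk_landscape(n, k, seed=42):
    """Random NK fitness landscape: each of the N policy bits contributes
    a fitness drawn from a table indexed by itself and its K neighbours
    (circular neighbourhood); values are lazily drawn and cached."""
    rng = random.Random(seed)
    tables = [{} for _ in range(n)]
    def fitness(config):
        total = 0.0
        for i in range(n):
            key = tuple(config[(i + j) % n] for j in range(k + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / n
    return fitness

def local_search(fitness, n, steps=300, seed=0):
    """A firm's incremental search: flip one policy bit at a time and
    keep the change only if fitness improves."""
    rng = random.Random(seed)
    config = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(config)
    for _ in range(steps):
        i = rng.randrange(n)
        config[i] ^= 1
        f = fitness(config)
        if f > best:
            best = f
        else:
            config[i] ^= 1  # revert the unhelpful move
    return best
```

Higher K makes the landscape more rugged, so local searchers get trapped on different local peaks, which is one standard explanation for persistent performance differences among otherwise identical firms.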
APA, Harvard, Vancouver, ISO, and other styles