Dissertations / Theses on the topic 'Weighting technique'

To see the other types of publications on this topic, follow the link: Weighting technique.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 22 dissertations / theses for your research on the topic 'Weighting technique.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Zakos, John. "A Novel Concept and Context-Based Approach for Web Information Retrieval." Griffith University. School of Information and Communication Technology, 2005. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20060303.104937.

Full text
Abstract:
Web information retrieval is a relatively new research area that has attracted a significant amount of interest from researchers around the world since the emergence of the World Wide Web in the early 1990s. The problems facing successful web information retrieval are a combination of challenges that stem from traditional information retrieval and challenges characterised by the nature of the World Wide Web. The goal of any information retrieval system is to fulfil a user's information need. In a web setting, this means retrieving as many relevant web documents as possible in response to an input query that typically contains only a few terms expressive of the user's information need. This thesis is primarily concerned with, firstly, reviewing pertinent literature related to various aspects of web information retrieval research and, secondly, proposing and investigating a novel concept- and context-based approach. The approach consists of techniques that can be used together or independently and that aim to improve retrieval accuracy over other approaches. A novel concept-based term weighting technique is proposed as a new method of deriving query term significance from ontologies that can be used for the weighting of input queries. A technique that dynamically determines the significance of terms occurring in documents based on the matching of contexts is also proposed. Other contributions of this research include techniques for combining document and query term weights for the ranking of retrieved documents. All techniques were implemented and tested on benchmark data, which provides a basis for comparison with previous top-performing web information retrieval systems. High retrieval accuracy is reported as a result of utilising the proposed approach, supported by comprehensive experimental evidence and favourable comparisons against previously published results.
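The combination of document and query term weights for ranking, as described above, can be illustrated with a minimal sketch; the ontology-derived query weights and the simple product-sum combination rule below are assumptions for illustration, not the formulas developed in the thesis.

```python
# Minimal sketch of combining query-term and document-term weights for ranking.
# The ontology-derived query weights and the combination rule are illustrative
# assumptions, not the thesis's actual weighting scheme.
from collections import Counter

def rank_documents(query_weights, documents):
    """query_weights: {term: concept-based significance weight}
    documents: {doc_id: list of tokens}
    Returns doc_ids sorted by a combined query/document weight score."""
    scores = {}
    for doc_id, tokens in documents.items():
        tf = Counter(tokens)
        length = max(len(tokens), 1)
        # Combine: sum over query terms of (query weight) x (normalised term frequency)
        scores[doc_id] = sum(w * tf[t] / length for t, w in query_weights.items())
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    q = {"jaguar": 0.9, "speed": 0.4}          # assumed ontology-derived weights
    docs = {"d1": ["jaguar", "cat", "speed"], "d2": ["jaguar", "car"]}
    print(rank_documents(q, docs))
```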
APA, Harvard, Vancouver, ISO, and other styles
2

Zakos, John. "A Novel Concept and Context-Based Approach for Web Information Retrieval." Thesis, Griffith University, 2005. http://hdl.handle.net/10072/365878.

Full text
Abstract:
Web information retrieval is a relatively new research area that has attracted a significant amount of interest from researchers around the world since the emergence of the World Wide Web in the early 1990s. The problems facing successful web information retrieval are a combination of challenges that stem from traditional information retrieval and challenges characterised by the nature of the World Wide Web. The goal of any information retrieval system is to fulfil a user's information need. In a web setting, this means retrieving as many relevant web documents as possible in response to an input query that typically contains only a few terms expressive of the user's information need. This thesis is primarily concerned with, firstly, reviewing pertinent literature related to various aspects of web information retrieval research and, secondly, proposing and investigating a novel concept- and context-based approach. The approach consists of techniques that can be used together or independently and that aim to improve retrieval accuracy over other approaches. A novel concept-based term weighting technique is proposed as a new method of deriving query term significance from ontologies that can be used for the weighting of input queries. A technique that dynamically determines the significance of terms occurring in documents based on the matching of contexts is also proposed. Other contributions of this research include techniques for combining document and query term weights for the ranking of retrieved documents. All techniques were implemented and tested on benchmark data, which provides a basis for comparison with previous top-performing web information retrieval systems. High retrieval accuracy is reported as a result of utilising the proposed approach, supported by comprehensive experimental evidence and favourable comparisons against previously published results.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
3

Albqmi, Aisha Rashed M. "Integrating three-way decisions framework with multiple support vector machines for text classification." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/235898/7/Aisha_Rashed_Albqmi_Thesis_.pdf.

Full text
Abstract:
Identifying the boundary between relevant and irrelevant objects in text classification is a significant challenge due to the numerous uncertainties in text documents. Most existing binary text classifiers cannot deal effectively with this problem because of over-fitting. This thesis proposes a three-way decision model for dealing with the uncertain boundary to improve binary text classification performance, integrating the distinct aspects of three-way decision theory with the capabilities of Support Vector Machines. The experimental results show that the proposed models outperform baseline models on the RCV1, Reuters-21578, and R65CO datasets.
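A minimal sketch of the three-way decision idea applied to SVM outputs is given below; the acceptance/rejection thresholds and the scikit-learn pipeline are illustrative assumptions, not the integrated model proposed in the thesis.

```python
# Sketch of a three-way decision rule on SVM decision scores: documents whose
# scores fall between two thresholds are deferred to a boundary region instead
# of being forced into the positive or negative class. Thresholds are assumed.
import numpy as np
from sklearn.svm import SVC

def three_way_decide(scores, alpha=0.5, beta=-0.5):
    """Return +1 (accept), -1 (reject) or 0 (defer to the boundary region)."""
    return np.where(scores >= alpha, 1, np.where(scores <= beta, -1, 0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
    clf = SVC(kernel="linear").fit(X, y)            # first-stage binary SVM
    decisions = three_way_decide(clf.decision_function(X))
    print({d: int((decisions == d).sum()) for d in (-1, 0, 1)})
```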
APA, Harvard, Vancouver, ISO, and other styles
4

Finnerman, Erik, and Carl Robin Kirchmann. "Evaluation of Alternative Weighting Techniques on the Swedish Stock Market." Thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168294.

Full text
Abstract:
The aim of this thesis is to evaluate how the stock index SIX30RX compares against portfolios based on the same stock selection but with alternative weighting techniques. Eleven alternative weighting techniques are used, divided into three categories: heuristic, optimisation and momentum based. These are evaluated from 1990-01-01 until 2014-12-31. The results show that heuristic based weighting techniques outperform the SIX30RX index and show similar risk characteristics. Optimisation based weighting techniques show strong outperformance but have different risk characteristics, manifested in higher portfolio concentration and tracking error. Momentum based weighting techniques have slightly better performance and risk-adjusted performance, while their risk concentration and average annual turnover are higher than for all other techniques used. Minimum variance is the overall best performing weighting technique in terms of return and risk-adjusted return. Additionally, the equally weighted portfolio outperforms the SIX30RX index and has similar characteristics despite its simple heuristic approach. In conclusion, all studied alternative weighting techniques except the momentum based ones clearly outperform the SIX30RX index.
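Two of the weighting families compared above, equal weighting and minimum-variance weighting, can be sketched as follows; the unconstrained closed-form minimum-variance solution shown here is a simplification and omits any constraints the thesis may impose.

```python
# Sketch of two weighting families: equal weighting and unconstrained
# minimum-variance weighting from a sample covariance matrix. Long-only or
# concentration constraints are omitted for brevity.
import numpy as np

def equal_weights(n_assets):
    return np.full(n_assets, 1.0 / n_assets)

def min_variance_weights(returns):
    """returns: T x N matrix of asset returns. Solves w = C^-1 1 / (1' C^-1 1)."""
    cov = np.cov(returns, rowvar=False)
    inv_ones = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return inv_ones / inv_ones.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    monthly_returns = rng.normal(0.01, 0.05, size=(120, 30))  # 10 years, 30 stocks
    print(equal_weights(30)[:3], min_variance_weights(monthly_returns)[:3])
```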
APA, Harvard, Vancouver, ISO, and other styles
5

Boman, Trotte, and Samuel Jangenstål. "Beating the MSCI USA Index by Using Other Weighting Techniques." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209258.

Full text
Abstract:
In this thesis various portfolio weighting strategies are tested. Their performance is determined by their average annual return, Sharpe ratio, tracking error, information ratio and annual standard deviation. The data used is provided by Öhman from Bloomberg and consists of monthly data between 1996-2016 of all stocks that were in the MSCI USA Index at any time between 2002-2016. For any given month we use the last five years of data as a basis for the analysis. Each time the MSCI USA Index changes portfolio constituents we update which constituents are in our portfolio. The traditional weighting strategies used in this thesis are market capitalization, equal, risk-adjusted alpha, fundamental and minimum variance weighting. On top of that, the weighting strategies are used in a cluster framework where the clusters are constructed by applying K-means clustering to the stocks each month. The clusters are assigned equal weight and the traditional weighting strategies are then applied within each cluster. Additionally, a GARCH-estimated covariance matrix of the clusters is used to determine the minimum variance optimized weights of the clusters, where the constituents within each cluster are equally weighted. We conclude in this thesis that the market capitalization weighting strategy is the one that earns the least of all traditional strategies. From the results we can conclude that there are weighting strategies with higher Sharpe ratio and lower standard deviation. Risk-adjusted alpha in a traditional framework performed best out of all strategies. All cluster weighting strategies, with the exception of risk-adjusted alpha, outperform their traditional counterpart in terms of return.
In this report, various weighting strategies are tested with the goal of outperforming the market-capitalisation-weighted MSCI USA Index in terms of average annual return, Sharpe ratio, tracking error, information ratio and annual standard deviation. The report was written in cooperation with Öhman, and the data used comes from Bloomberg and consists of monthly data between 1996-2016 for all stocks that were in the MSCI USA Index at any time between 2002-2016. For any given month, the last five years of historical data are used in our analysis. Each time the MSCI USA Index changes its portfolio composition, we update which securities are included in our portfolio. The traditional weighting strategies used in this thesis are market-capitalisation, equal, risk-adjusted alpha, fundamental and minimum-variance weighting. The cluster-weighted strategies used in this thesis are constructed by applying K-means clustering to the stocks each month, assigning equal weight to each cluster and then applying the traditional weighting strategies within each cluster. In addition, a GARCH-estimated covariance matrix of the clusters is used to determine minimum-variance-optimised weights for each cluster, where every stock within each cluster is equally weighted. We find in this work that the market-capitalisation-weighted strategy has the lowest return of all weighting methods. From the results we can conclude that there are weighting methods with a higher Sharpe ratio and lower standard deviation. Risk-adjusted alpha weighting applied in the traditional way is the best-performing strategy of all methods. All cluster-weighted strategies, with the exception of risk-adjusted alpha weighting, perform better than their traditional counterparts in terms of return.
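The cluster framework described above can be sketched roughly as follows; the clustering features (mean and volatility of past returns) and the equal weighting inside clusters are assumptions for illustration, not a reproduction of the thesis's methodology.

```python
# Sketch of the cluster-weighting idea: cluster stocks, give each cluster an
# equal share of capital, then equal-weight the stocks inside each cluster.
# The clustering features (mean/volatility of past returns) are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def cluster_equal_weights(returns, n_clusters=5, seed=0):
    """returns: T x N matrix of historical returns; gives an N-vector of weights."""
    features = np.column_stack([returns.mean(axis=0), returns.std(axis=0)])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(features)
    weights = np.zeros(returns.shape[1])
    clusters = np.unique(labels)
    for c in clusters:
        members = np.flatnonzero(labels == c)
        weights[members] = (1.0 / len(clusters)) / len(members)  # equal cluster share, equal within
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    hist = rng.normal(0.01, 0.05, size=(60, 50))    # 5 years of monthly returns, 50 stocks
    print(round(cluster_equal_weights(hist).sum(), 6))   # weights sum to 1.0
```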
APA, Harvard, Vancouver, ISO, and other styles
6

Nilubol, Chanin. "Two-dimensional HMM classifier with density perturbation and data weighting techniques for pattern recognition problems." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/13538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shah, Kashif. "Model adaptation techniques in machine translation." Phd thesis, Université du Maine, 2012. http://tel.archives-ouvertes.fr/tel-00718226.

Full text
Abstract:
Nowadays several indicators suggest that the statistical approach to machine translation is the most promising. It allows fast development of systems for any language pair provided that sufficient training data is available. Statistical Machine Translation (SMT) systems use parallel texts, also called bitexts, as training material for creation of the translation model and monolingual corpora for target language modeling. The performance of an SMT system heavily depends upon the quality and quantity of available data. In order to train the translation model, the parallel texts are collected from various sources and domains. These corpora are usually concatenated, word alignments are calculated and phrases are extracted. However, parallel data is quite inhomogeneous in many practical applications with respect to several factors like data source, alignment quality, appropriateness to the task, etc. This means that the corpora are not weighted according to their importance to the domain of the translation task. Therefore, it is the domain of the training resources that influences the translations that are selected among several choices. This is in contrast to the training of the language model, for which well-known techniques are used to weight the various sources of texts. We have proposed novel methods to automatically weight the heterogeneous data to adapt the translation model. In a first approach, this is achieved with a resampling technique. A weight is assigned to each bitext to select the proportion of data from that corpus. The alignments coming from each bitext are resampled based on these weights. The weights of the corpora are directly optimized on the development data using a numerical method. Moreover, an alignment score of each aligned sentence pair is used as a confidence measurement. In an extended work, we obtain such a weighting by resampling alignments using weights that decrease with the temporal distance of bitexts to the test set. By these means, we can use all the available bitexts and still put an emphasis on the most recent ones. The main idea of our approach is to use a parametric form, or meta-weights, for the weighting of the different parts of the bitexts. This ensures that our approach has only a few parameters to optimize. In another work, we have proposed a generic framework which takes into account the corpus- and sentence-level "goodness scores" during the calculation of the phrase table, which results in a better distribution of probability mass over the individual phrase pairs.
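The resampling-based corpus weighting can be sketched as follows; the fixed corpus weights used here are placeholders, whereas the thesis optimises them on development data.

```python
# Sketch of corpus weighting by resampling: sentence pairs are drawn from each
# bitext in proportion to a corpus weight, so higher-weighted (e.g. in-domain
# or more recent) corpora contribute more alignments to translation-model
# training. The weights here are placeholders, not optimised values.
import random

def resample_bitexts(bitexts, weights, n_samples, seed=42):
    """bitexts: {name: list of (src, tgt) pairs}; weights: {name: float}."""
    rng = random.Random(seed)
    names = list(bitexts)
    probs = [weights[n] for n in names]
    sample = []
    for _ in range(n_samples):
        corpus = rng.choices(names, weights=probs, k=1)[0]
        sample.append(rng.choice(bitexts[corpus]))
    return sample

if __name__ == "__main__":
    data = {"news": [("maison", "house")] * 100, "web": [("chat", "cat")] * 500}
    print(len(resample_bitexts(data, {"news": 0.7, "web": 0.3}, 1000)))
```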
APA, Harvard, Vancouver, ISO, and other styles
8

Sigweni, Boyce B. "An investigation of feature weighting algorithms and validation techniques using blind analysis for analogy-based estimation." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/12797.

Full text
Abstract:
Context: Software effort estimation is a very important component of the software development life cycle. It underpins activities such as planning, maintenance and bidding. Therefore, it has triggered much research over the past four decades, including many machine learning approaches. One popular approach, which has the benefit of accessible reasoning, is analogy-based estimation. Machine learning, including analogy, is known to benefit significantly from feature selection/weighting. Unfortunately, feature weighting search is an NP-hard problem and therefore computationally very demanding, if not intractable. Objective: Therefore, one objective of this research is to develop an efficient and effective feature weighting algorithm for estimation by analogy. However, a major challenge for the effort estimation research community is that experimental results tend to be contradictory and also lack reliability. This has been paralleled by a recent awareness of how bias can impact research results. This is a contributory reason why software effort estimation is still an open problem. Consequently, the second objective is to investigate research methods that might lead to more reliable results, focusing on blinding methods to reduce researcher bias. Method: In order to build on the most promising feature weighting algorithms, I conduct a systematic literature review. From this I develop a novel and efficient feature weighting algorithm. This is experimentally evaluated, comparing three feature weighting approaches with a naive benchmark using two industrial data sets. Using these experiments, I explore blind analysis as a technique to reduce bias. Results: The systematic literature review identified 19 relevant primary studies. Results from the meta-analysis of selected studies using a one-sample sign test (p = 0.0003) show a positive effect for feature weighting compared with ordinary analogy-based estimation (ABE); that is, feature weighting is a worthwhile technique to improve ABE. Nevertheless, the results remain imperfect, so there is still much scope for improvement. My experience shows that blinding can be a relatively straightforward procedure. I also highlight various statistical analysis decisions which ought not to be guided by the hunt for statistical significance, and show that results can be inverted merely through a seemingly inconsequential statistical nicety. After analysing results from 483 software projects from two separate industrial data sets, I conclude that the proposed technique improves accuracy over standard feature subset selection (FSS) and traditional case-based reasoning (CBR) when using pseudo time-series validation. Interestingly, there is no strong evidence for superior performance of the new technique when traditional validation techniques (jackknifing) are used, although it is more efficient. Conclusion: There are two main findings: (i) Feature weighting techniques are promising for software effort estimation, but they need to be tailored to the target case for their potential to be adequately exploited. Although research findings show that allowing weights to differ in different parts of the instance space ('local' regions) may improve effort estimation results, the majority of studies in software effort estimation (SEE) do not take this into consideration, and the proposed technique therefore represents an improvement on methods that do not.
(ii) Whilst there are minor challenges and some limits to the degree of blinding possible, blind analysis is a very practical and easy-to-implement method that supports more objective analysis of experimental results. Therefore I argue that blind analysis should be the norm for analysing software engineering experiments.
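Feature-weighted analogy-based estimation can be sketched as a weighted-distance k-nearest-neighbour lookup; the feature weights and the value of k below are illustrative assumptions, not the algorithm developed in the thesis.

```python
# Sketch of feature-weighted analogy-based estimation: the effort of a new
# project is the mean effort of its k nearest past projects under a weighted
# Euclidean distance. The feature weights and k are illustrative assumptions.
import numpy as np

def abe_estimate(new_project, past_features, past_effort, weights, k=3):
    diffs = past_features - new_project
    dist = np.sqrt((weights * diffs ** 2).sum(axis=1))
    nearest = np.argsort(dist)[:k]
    return past_effort[nearest].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.uniform(size=(50, 4))                   # 50 past projects, 4 features
    effort = 100 + 500 * X[:, 0] + rng.normal(0, 20, 50)
    w = np.array([0.6, 0.2, 0.1, 0.1])              # assumed feature weights
    print(abe_estimate(np.array([0.5, 0.4, 0.3, 0.2]), X, effort, w))
```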
APA, Harvard, Vancouver, ISO, and other styles
9

Leary, Emily Vanessa. "A comparison of sampling, weighting, and variance estimation techniques for the Oklahoma oral health needs assessment." Oklahoma City : [s.n.], 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Örn, Henrik. "Accuracy and precision of bedrock surface prediction using geophysics and geostatistics." Thesis, KTH, Mark- och vattenteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-171859.

Full text
Abstract:
In underground construction and foundation engineering, uncertainties associated with subsurface properties are inevitable. Site investigations are expensive to perform, but a limited understanding of the subsurface may result in major problems, which often lead to an unexpected increase in the overall cost of the construction project. This study aims to optimize the pre-investigation program to extract as much correct information as possible from a limited input of resources, thus making it as cost effective as possible. To optimize site investigation using soil-rock sounding, three different sampling techniques, a varying number of sample points and two different interpolation methods (inverse distance weighting and point kriging) were tested on four modeled reference surfaces. The accuracy of rock surface predictions was evaluated using a 3D gridding and modeling computer software (Surfer 8.02®). Samples with continuously distributed data, resembling profile lines from geophysical surveys, were used to evaluate how this could improve the accuracy of the prediction compared to adding additional sampling points. The study explains the correlation between the number of sampling points and the accuracy of the prediction obtained using different interpolators. Most importantly, it shows how continuous data significantly improves the accuracy of the rock surface predictions, and it therefore concludes that geophysical measurements should be used in combination with traditional soil-rock sounding to optimize the pre-investigation program.
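Inverse distance weighting, one of the two interpolators compared in this study, admits a compact sketch; the power parameter p = 2 is a common default and an assumption here.

```python
# Sketch of inverse distance weighting (IDW): the predicted value at an
# unsampled point is a weighted average of known points, with weights 1/d^p.
# The power p = 2 is an assumption, not necessarily the study's setting.
import numpy as np

def idw_predict(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if d.min() < eps:                       # query coincides with a sample point
        return float(z_known[d.argmin()])
    w = 1.0 / d ** power
    return float((w * z_known).sum() / w.sum())

if __name__ == "__main__":
    pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    depth = np.array([3.2, 4.1, 2.8, 5.0])           # bedrock depth at sample points
    print(idw_predict(pts, depth, np.array([4.0, 6.0])))
```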
APA, Harvard, Vancouver, ISO, and other styles
11

Al-Nashashibi, May Y. A. "Arabic Language Processing for Text Classification. Contributions to Arabic Root Extraction Techniques, Building An Arabic Corpus, and to Arabic Text Classification Techniques." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/6326.

Full text
Abstract:
The impact and dynamics of Internet-based resources for Arabic-speaking users are increasing in significance, depth and breadth at a higher pace than ever, and thus require updated mechanisms for the computational processing of Arabic texts. Arabic is a complex language and as such requires in-depth investigation, both to analyse and improve available automatic processing techniques such as root extraction methods or text classification techniques, and to develop text collections that are already labeled, whether with single or multiple labels. This thesis proposes new ideas and methods to improve available automatic processing techniques for Arabic texts. Any automatic processing technique requires data in order to be used, critically reviewed and assessed, and here an attempt to develop a labeled Arabic corpus is also proposed. This thesis is composed of three parts: 1- Arabic corpus development, 2- proposing, improving and implementing root extraction techniques, and 3- proposing and investigating the effect of different pre-processing methods on single-labeled text classification methods for Arabic. The thesis first develops an Arabic corpus that is prepared to be used here for testing root extraction methods as well as single-label text classification techniques. It also enhances a rule-based root extraction method by handling irregular cases (which appear in about 34% of texts). It proposes and implements two expanded algorithms as well as an adjustment for a weight-based method, incorporates the irregular-case handling algorithm into all of them, and compares the performance of these proposed methods with the original ones. The thesis thus develops a root extraction system that handles foreign Arabized words by constructing a list of about 7,000 foreign words. The technique with the best accuracy in extracting the correct stem and root for the respective words in texts, an enhanced rule-based method, is used in the third part of this thesis. The thesis finally proposes and implements a variant term frequency inverse document frequency (TF-IDF) weighting method, and investigates the effect of using different choices of features in document representation on single-label text classification performance (words, stems or roots, as well as extending these choices with their respective phrases). Forty-seven classifiers are applied to all proposed representations and their performances compared. One challenge for researchers in Arabic text processing is that the root extraction techniques reported in the literature are either not accessible or require a long time to reproduce, while a labeled benchmark Arabic text corpus is not fully available online. Also, to date only a few machine learning techniques have been investigated for Arabic, using the usual preprocessing steps before classification. Such challenges are addressed in this thesis by developing a new labeled Arabic text corpus for extended applications of computational techniques. The results show that proposing and implementing an algorithm that handles irregular words in Arabic did improve the performance of all implemented root extraction techniques. The performance of the irregular-case handling algorithm is evaluated in terms of accuracy improvement and execution time. Its efficiency is investigated with different document lengths and is empirically found to be linear in time for document lengths of less than about 8,000.
The rule-based technique shows the greatest improvement among the implemented root extraction methods when the irregular-case handling algorithm is included. The thesis validates that choosing roots or stems instead of words in document representations indeed improves single-label classification performance significantly for most of the classifiers used. However, extending such representations with their respective phrases yields no significant improvement in single-label text classification performance. Many classifiers, such as the ripple-down rule classifier, had not yet been tested on Arabic. Comparing the classifiers' performances shows that the Bayesian network classifier is significantly the best in terms of accuracy, training time, and root mean square error for all proposed and implemented representations.
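The baseline term frequency-inverse document frequency weighting that the proposed variant builds on can be sketched as follows; the smoothing convention used below is a common choice, not necessarily the thesis's variant.

```python
# Sketch of baseline TF-IDF term weighting applied to pre-processed tokens
# (words, stems, or roots). The smoothed IDF used here is a common convention,
# not the specific variant proposed in the thesis.
import math
from collections import Counter

def tf_idf(documents):
    """documents: {doc_id: list of tokens}; returns {doc_id: {term: weight}}."""
    n_docs = len(documents)
    df = Counter()
    for tokens in documents.values():
        df.update(set(tokens))
    weights = {}
    for doc_id, tokens in documents.items():
        tf = Counter(tokens)
        weights[doc_id] = {
            t: (tf[t] / len(tokens)) * (math.log((1 + n_docs) / (1 + df[t])) + 1)
            for t in tf
        }
    return weights

if __name__ == "__main__":
    docs = {"d1": ["ktb", "drs", "ktb"], "d2": ["drs", "qra"]}   # root tokens
    print(tf_idf(docs)["d1"])
```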
Petra University, Amman (Jordan)
APA, Harvard, Vancouver, ISO, and other styles
12

Al-Nashashibi, May Yacoub Adib. "Arabic language processing for text classification : contributions to Arabic root extraction techniques, building an Arabic corpus, and to Arabic text classification techniques." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/6326.

Full text
Abstract:
The impact and dynamics of Internet-based resources for Arabic-speaking users are increasing in significance, depth and breadth at a higher pace than ever, and thus require updated mechanisms for the computational processing of Arabic texts. Arabic is a complex language and as such requires in-depth investigation, both to analyse and improve available automatic processing techniques such as root extraction methods or text classification techniques, and to develop text collections that are already labeled, whether with single or multiple labels. This thesis proposes new ideas and methods to improve available automatic processing techniques for Arabic texts. Any automatic processing technique requires data in order to be used, critically reviewed and assessed, and here an attempt to develop a labeled Arabic corpus is also proposed. This thesis is composed of three parts: 1- Arabic corpus development, 2- proposing, improving and implementing root extraction techniques, and 3- proposing and investigating the effect of different pre-processing methods on single-labeled text classification methods for Arabic. The thesis first develops an Arabic corpus that is prepared to be used here for testing root extraction methods as well as single-label text classification techniques. It also enhances a rule-based root extraction method by handling irregular cases (which appear in about 34% of texts). It proposes and implements two expanded algorithms as well as an adjustment for a weight-based method, incorporates the irregular-case handling algorithm into all of them, and compares the performance of these proposed methods with the original ones. The thesis thus develops a root extraction system that handles foreign Arabized words by constructing a list of about 7,000 foreign words. The technique with the best accuracy in extracting the correct stem and root for the respective words in texts, an enhanced rule-based method, is used in the third part of this thesis. The thesis finally proposes and implements a variant term frequency inverse document frequency (TF-IDF) weighting method, and investigates the effect of using different choices of features in document representation on single-label text classification performance (words, stems or roots, as well as extending these choices with their respective phrases). Forty-seven classifiers are applied to all proposed representations and their performances compared. One challenge for researchers in Arabic text processing is that the root extraction techniques reported in the literature are either not accessible or require a long time to reproduce, while a labeled benchmark Arabic text corpus is not fully available online. Also, to date only a few machine learning techniques have been investigated for Arabic, using the usual preprocessing steps before classification. Such challenges are addressed in this thesis by developing a new labeled Arabic text corpus for extended applications of computational techniques. The results show that proposing and implementing an algorithm that handles irregular words in Arabic did improve the performance of all implemented root extraction techniques. The performance of the irregular-case handling algorithm is evaluated in terms of accuracy improvement and execution time. Its efficiency is investigated with different document lengths and is empirically found to be linear in time for document lengths of less than about 8,000.
The rule-based technique shows the greatest improvement among the implemented root extraction methods when the irregular-case handling algorithm is included. The thesis validates that choosing roots or stems instead of words in document representations indeed improves single-label classification performance significantly for most of the classifiers used. However, extending such representations with their respective phrases yields no significant improvement in single-label text classification performance. Many classifiers, such as the ripple-down rule classifier, had not yet been tested on Arabic. Comparing the classifiers' performances shows that the Bayesian network classifier is significantly the best in terms of accuracy, training time, and root mean square error for all proposed and implemented representations.
APA, Harvard, Vancouver, ISO, and other styles
13

Steinhauer, Hans Walter [Verfasser], and Susanne [Akademischer Betreuer] Rässler. "Sampling techniques and weighting procedures for complex survey designs - The school cohorts of the National Educational Panel Study (NEPS) / Hans Walter Steinhauer. Betreuer: Susanne Rässler." Bamberg : Otto-Friedrich-Universität Bamberg, 2014. http://d-nb.info/1061022536/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Steinhauer, Hans Walter [Verfasser], and Susanne [Akademischer Betreuer] Rässler. "Sampling techniques and weighting procedures for complex survey designs - The school cohorts of the National Educational Panel Study (NEPS) / Hans Walter Steinhauer. Betreuer: Susanne Rässler." Bamberg : Otto-Friedrich-Universität Bamberg, 2014. http://d-nb.info/1061022536/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Sarmah, Dipsikha. "Evaluation of Spatial Interpolation Techniques Built in the Geostatistical Analyst Using Indoor Radon Data for Ohio, USA." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1350048688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Ganem, Bruna Ribeiro. "Incidência do IPTU sobre bens imóveis públicos ocupados por empresas privadas: uma análise crítica da materialidade constitucional do imposto e suas relações com a imunidade tributária recíproca (Tema 437 da Repercussão Geral do STF)." Universidade Católica de Brasília, 2015. https://bdtd.ucb.br:8443/jspui/handle/tede/2004.

Full text
Abstract:
This is a critical study of the incidence of the Real Estate Tax (IPTU), a municipal tax, in cases where immovable properties held by the Federal or State Governments are occupied by private persons as a result of onerous contracts granting the real right of use or authorized occupation, and of its connections with mutual tax immunity. To achieve these aims, the study addressed the following research problems: first, confirmation of the partial unconstitutionality of the National Tax Code in relation to the Constitution of 1946 and its partial non-reception by the current Federal Constitution (1988), together with the construction of the normative matrix rule of the studied tax, from the constitutional level, through the complementary legislation, down to the local legislation, a study that resulted in a doctrinal refinement of the analyzed institute, especially its material, personal and quantitative criteria; and, second, the implications of mutual tax immunity for the definition of the taxable person, considering the impossibility of transferring the payment responsibility to a non-taxable person such as the occupant of public property covered by a concession contract. Furthermore, this research analyzed mutual tax immunity under Article 150, VI, 'a', of the Federal Constitution, in order to identify whether it grants absolute protection against taxation or whether it can be set aside in cases where the public entity carries out remunerated economic activities. These subjects are under judgment by the Brazilian Supreme Court in Tema 437 of the general repercussion docket. Finally, drawing on Robert Alexy's theory of fundamental rights, balancing and weighing were applied to resolve the conflict between the constitutional principles involved: free competition as a key element of an open market economy, and mutual tax immunity as a guarantor of the federative principle.
The object of study of this work is the analysis of the incidence of the Urban Property and Land Tax (IPTU) on public properties occupied by private parties through onerous contracts granting the real right of use. In the first chapter, the author presents a historical analysis of the constitutional evolution of the tax and its relations with the National Tax Code (CTN), which resulted in the recognition of the material unconstitutionality of parts of Articles 32 and 34 of the CTN in view of the 1946 Constitution, as well as their partial non-reception by the Federal Constitution of 1988. In the following chapter, the IPTU tax norm is structured across its various normative levels, with the construction of its constitutional, complementary and local matrix rules, whose contents proved to be conflicting. The third chapter is directed towards a proposed solution for Tema 437 of the General Repercussion docket of the Supreme Federal Court, which concerns whether the mutual tax immunity of public entities can be maintained in situations where they transfer the possession and use of their immovable property to private parties in exchange for consideration. In this context, it was necessary to analyse whether the immunity in question can be set aside because of the onerous nature of the contract, and whether this activity of granting public immovable property can be regarded as capable of interfering with free competition in the local real-estate market. To resolve these questions, Robert Alexy's theory of fundamental rights was used, weighing and balancing the principle of mutual tax immunity against that of free competition, in order to determine which should prevail in the concrete case. Finally, a proposed solution for the leading case is presented, which aims both to harmonise the identified conflict of principles and to improve the structuring of the IPTU matrix rule, particularly regarding the definition of its material, personal and quantitative criteria.
APA, Harvard, Vancouver, ISO, and other styles
17

Lachat, Elise. "Relevé et consolidation de nuages de points issus de multiples capteurs pour la numérisation 3D du patrimoine." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAD012/document.

Full text
Abstract:
Three-dimensional digitization of built heritage is a process involved in numerous applications (documentation, visualization, etc.) and can take advantage of the diversity of available measurement techniques. In order to improve the completeness and quality of the deliverables, more and more digitization projects rely on the combination of point clouds from different sources. Knowledge of the performance specific to the different sensors, as well as of the quality of their measurements, is therefore desirable. Several directions can then be explored with a view to integrating heterogeneous point clouds within the same project, from their registration to the final modeling. An approach for the simultaneous registration of several point clouds is presented in this work. The handling of potential outliers among the observations, or of measurement noise inherent to certain surveying techniques, is addressed through the addition of robust estimators in the registration methodology.
Three-dimensional digitization of built heritage is involved in a wide range of applications (documentation, visualization, etc.), and may take advantage of the diversity of measurement techniques available. In order to improve the completeness as well as the quality of deliverables, more and more digitization projects rely on the combination of data coming from different sensors. To this end, knowledge of sensor performance along with the quality of the measurements they produce is recommended. Then, different solutions can be investigated to integrate heterogeneous point clouds within the same project, from their registration to the modeling steps. A global approach for the simultaneous registration of multiple point clouds is proposed in this work, where the introduction of individual weights for each dataset is foreseen. Moreover, robust estimators are introduced in the registration framework, in order to deal with potential outliers or measurement noise among the data.
APA, Harvard, Vancouver, ISO, and other styles
18

Gerchinovitz, Sébastien. "Prédiction de suites individuelles et cadre statistique classique : étude de quelques liens autour de la régression parcimonieuse et des techniques d'agrégation." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00653550.

Full text
Abstract:
This thesis falls within the field of statistical learning. The main setting is that of prediction of arbitrary deterministic sequences (or individual sequences), which covers sequential learning problems where one cannot, or does not want to, make stochastic assumptions about the sequence of data to be predicted. This leads to very robust methods. In this work, we study some close links between the theory of individual-sequence prediction and the classical statistical setting, notably the regression model with random or fixed design, where the data are modeled stochastically. The contributions between these two settings are mutual: some statistical methods can be adapted to the sequential setting to benefit from deterministic guarantees; conversely, individual-sequence techniques make it possible to calibrate statistical methods automatically so as to obtain bounds that adapt to the noise variance. We study such links on several related problems: sparse sequential linear regression in high dimension (with an application to the stochastic setting), sequential linear regression on L1 balls, and the aggregation of nonlinear models in a model-selection framework (regression with fixed design). Finally, stochastic techniques are used and developed to determine the minimax rates of various sequential performance criteria (in particular internal and swap regrets) in deterministic or stochastic environments.
APA, Harvard, Vancouver, ISO, and other styles
19

Hakala, Tim. "Settling-Time Improvements in Positioning Machines Subject to Nonlinear Friction Using Adaptive Impulse Control." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1061.

Full text
Abstract:
A new method of adaptive impulse control is developed to precisely and quickly control the position of machine components subject to friction. Friction dominates the forces affecting fine positioning dynamics. Friction can depend on payload, velocity, step size, path, initial position, temperature, and other variables. Control problems such as steady-state error and limit cycles often arise when applying conventional control techniques to the position control problem. Studies in the last few decades have shown that impulsive control can produce repeatable displacements as small as ten nanometers without limit cycles or steady-state error in machines subject to dry sliding friction. These displacements are achieved through the application of short duration, high intensity pulses. The relationship between pulse duration and displacement is seldom a simple function. The most dependable practical methods for control are self-tuning; they learn from online experience by adapting an internal control parameter until precise position control is achieved. To date, the best known adaptive pulse control methods adapt a single control parameter. While effective, the single parameter methods suffer from sub-optimal settling times and poor parameter convergence. To improve performance while maintaining the capacity for ultimate precision, a new control method referred to as Adaptive Impulse Control (AIC) has been developed. To better fit the nonlinear relationship between pulses and displacements, AIC adaptively tunes a set of parameters. Each parameter affects a different range of displacements. Online updates depend on the residual control error following each pulse, an estimate of pulse sensitivity, and a learning gain. After an update is calculated, it is distributed among the parameters that were used to calculate the most recent pulse. As the stored relationship converges to the actual relationship of the machine, pulses become more accurate and fewer pulses are needed to reach each desired destination. When fewer pulses are needed, settling time improves and efficiency increases. AIC is experimentally compared to conventional PID control and other adaptive pulse control methods on a rotary system with a position measurement resolution of 16000 encoder counts per revolution of the load wheel. The friction in the test system is nonlinear and irregular with a position dependent break-away torque that varies by a factor of more than 1.8 to 1. AIC is shown to improve settling times by as much as a factor of two when compared to other adaptive pulse control methods while maintaining precise control tolerances.
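The adaptive update described above can be sketched very roughly as a table of pulse-duration parameters, one per displacement range, corrected after each pulse; the update law, sensitivity estimate and learning gain below are illustrative assumptions, not the AIC algorithm itself.

```python
# Illustrative sketch (not the thesis's exact update law) of an adaptive pulse
# table: each displacement range has its own pulse-duration parameter, and the
# parameter used for the last pulse is corrected from the residual error, an
# assumed pulse-sensitivity estimate, and a learning gain.
import bisect

class AdaptivePulseTable:
    def __init__(self, range_edges, initial_durations, gain=0.5, sensitivity=1.0):
        self.edges = range_edges              # upper bounds of displacement ranges
        self.durations = list(initial_durations)
        self.gain = gain
        self.sensitivity = sensitivity        # assumed displacement per unit duration

    def pulse_for(self, target_displacement):
        idx = bisect.bisect_left(self.edges, abs(target_displacement))
        idx = min(idx, len(self.durations) - 1)
        return idx, self.durations[idx]

    def update(self, idx, residual_error):
        # Correct the parameter used for the last pulse in proportion to the error.
        self.durations[idx] += self.gain * residual_error / self.sensitivity

if __name__ == "__main__":
    table = AdaptivePulseTable([1, 10, 100], [0.2, 1.0, 5.0])
    idx, duration = table.pulse_for(7.5)      # request a 7.5-count move
    table.update(idx, residual_error=1.2)     # machine under-shot by 1.2 counts
    print(duration, table.durations)
```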
APA, Harvard, Vancouver, ISO, and other styles
20

Hsieh, Ji-Tao, and 謝季陶. "Designs of Efficient Sample Timing/Carrier Frequency Tracking Algorithms Based on a New Signal-Power Weighting Technique and an Improved Modified Golay Code Correlator for the Single-Carrier Block Transmission System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/74400597279925934337.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Electronics
99 (ROC academic year)
802.15.3c is a new indoor wireless communication standard. It operates in the 60 GHz unlicensed band and provides data rates of more than 1.5 Gbps to satisfy the demand for high-speed data transmission such as high-definition video streaming. There are two transmission modes in 802.15.3c: OFDM mode and SCBT mode. The sampling timing/carrier frequency tracking part of the SCBT mode is designed in this thesis according to the frame format and the characteristics of the channel environment at 60 GHz. The main contents of this thesis are carrier frequency offset (CFO) estimation, and sampling timing offset estimation and compensation. For CFO estimation, an effective method based on Golay codes is proposed to estimate the frequency offset. It has low complexity compared to the traditional correlation method. For the sampling timing and carrier frequency offset tracking problem, unlike conventional time-domain tracking methods, we propose frequency-domain timing offset estimation methods. The proposed methods use the data after equalization and decision to detect the phase difference, from which the sampling timing and carrier frequency offsets can be estimated. The offset estimation starts by using a threshold to select a number of frequency-domain signal samples that have higher SNR than most of the received signal samples. Second, we use the power of the selected signal samples as a weighting factor to enhance the estimation accuracy. The compensation is done by interpolating new data samples at the corrected sampling instants. We found by simulation that 8-times interpolation gives very good performance. We compare different interpolation structures, including 1-, 2- and 3-stage designs, and two design methods: least squares and equiripple. By adopting these interpolation designs in a real tracking application, one can select suitable designs for the 15.3c system. With the proposed timing offset estimation method, the timing offset can be estimated with high accuracy and compensated so as to maintain system performance.
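The signal-power weighting idea can be sketched as a power-weighted average of per-subcarrier phase differences between equalised symbols and their decisions; the threshold and the simple weighted mean below are illustrative assumptions, not the estimator designed in the thesis.

```python
# Sketch of signal-power weighting: per-subcarrier phase differences between
# equalised symbols and their hard decisions are combined with weights
# proportional to subcarrier power, after discarding low-power subcarriers.
import numpy as np

def power_weighted_phase_error(equalized, decided, power_threshold=0.5):
    power = np.abs(decided) ** 2
    keep = power >= power_threshold * power.max()     # keep only high-power subcarriers
    phase_diff = np.angle(equalized[keep] * np.conj(decided[keep]))
    weights = power[keep]
    return float((weights * phase_diff).sum() / weights.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)
    decided = qpsk * rng.choice([0.5, 1.0, 1.5], size=64)     # subcarriers of unequal power
    noise = 0.02 * (rng.normal(size=64) + 1j * rng.normal(size=64))
    received = decided * np.exp(1j * 0.05) + noise            # 0.05 rad residual phase rotation
    print(power_weighted_phase_error(received, decided))       # close to 0.05
```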
APA, Harvard, Vancouver, ISO, and other styles
21

LAI, JIAN-ZHANG, and 賴錦璋. "Weighting techniques on multi-objective linear programming." Thesis, 1988. http://ndltd.ncl.edu.tw/handle/88685652119513898238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Yi-Wen, and 陳意雯. "Super-Resolution Based on Advanced Weighting and Learning Techniques." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/dt3yzq.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
107 (ROC academic year)
Nowadays, digital images are easy to access, and high-resolution images are often required for later image processing and analysis. However, the spatial resolution of images captured by digital cameras is limited by principles of optics and the size of imaging sensors. While constructing optical components that can capture very high-resolution images is prohibitively expensive and impractical, image super-resolution (SR) provides a convenient and economical solution. Image super-resolution aims to generate a high-resolution (HR) image from a low-resolution (LR) input image. It is an essential task in image processing and can be utilized in many high-level computer vision applications, such as video surveillance, medical diagnosis and remote sensing. Super-resolution is an ill-posed problem since multiple HR images could correspond to the same LR image. In this thesis, we propose two algorithms for image super-resolution. The first one combines and takes advantage of different image super-resolution methods, while the second one is based on deep learning. Conventional image super-resolution methods, including bilinear interpolation and cubic convolution interpolation, are intuitive and simple to use. However, they often suffer from artifacts such as blurring and ringing. To deal with this problem, we propose a weighting-based algorithm that takes advantage of three different image super-resolution methods and generates the final result from the combination of these methods. We extract features of the input LR image and investigate the performance of the chosen methods under different features. Results from the candidate methods are combined using a weighted average based on the statistical values of the training data. With the development of convolutional neural networks and deep learning in recent years, models trained on large-scale datasets achieve favorable performance in many computer vision applications. In this thesis, we propose another deep learning-based approach for image super-resolution. We use the wavelet transform to separate the input image into four frequency bands and train a model for each sub-band. By processing information from different frequency bands via different CNN models, we can extract features more efficiently and learn better LR-to-HR mappings. In addition, we add dense connections to the model to make better use of the internal features in the CNN model. Furthermore, geometric self-ensemble is applied in the testing stage to maximize the potential performance.
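The weighting-based combination of super-resolution results can be sketched as follows; the edge-density feature and the weight table are placeholders, not the statistics learned from the thesis's training data.

```python
# Sketch of a weighting-based combination of super-resolution outputs: the
# results of several SR methods are blended with weights looked up from a
# per-feature-category table learned offline. The feature (edge density) and
# the weight table here are placeholders, not the thesis's learned statistics.
import numpy as np

WEIGHT_TABLE = {          # assumed weights per (low/high edge-density) category
    "low_edges": np.array([0.2, 0.3, 0.5]),
    "high_edges": np.array([0.1, 0.2, 0.7]),
}

def edge_category(lr_image, threshold=0.1):
    gy, gx = np.gradient(lr_image.astype(float))
    return "high_edges" if np.hypot(gx, gy).mean() > threshold else "low_edges"

def combine_sr(lr_image, sr_outputs):
    """sr_outputs: list of HR candidates (same shape) from different SR methods."""
    w = WEIGHT_TABLE[edge_category(lr_image)]
    return sum(wi * img for wi, img in zip(w, sr_outputs))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    lr = rng.random((32, 32))
    candidates = [rng.random((64, 64)) for _ in range(3)]   # stand-ins for SR outputs
    print(combine_sr(lr, candidates).shape)
```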
APA, Harvard, Vancouver, ISO, and other styles