
Doctoral dissertations on the topic "WEIGHTING METHOD"


Consult the top 50 doctoral dissertations on the topic "WEIGHTING METHOD".


1

Calandruccio, Lauren. "Spectral weighting strategies for sentences measured by a correlational method". Related electronic resource:, 2007. http://proquest.umi.com/pqdweb?did=1342726281&sid=1&Fmt=2&clientId=3739&RQT=309&VName=PQD.

2

Jeon, Byung Ho. "Proposed automobile steering wheel test method for vibration". Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4623.

Abstract:
This thesis proposes a test method for evaluating the perceived vibration which occurs at the driver's hand at the automotive steering wheel interface. The objective of the research was to develop frequency weightings for quantifying the human perception of steering wheel hand-arm vibration. A family of frequency weightings was developed from equal sensation curves obtained from psychophysical laboratory experiments. The previous literature suggests that the only internationally standardised frequency weighting, Wh, does not accurately predict human perception of steering wheel hand-arm vibration (Amman et al., 2005), because Wh was originally developed for health effects, not for human perception. In addition, most of the data on hand-arm vibration are based upon responses from male subjects (Neely and Burström, 2006), and previous studies were based only on sinusoidal stimuli. Further, researchers have repeatedly suggested (Gnanasekarna et al., 2006; Morioka and Griffin, 2006; Ajovalasit and Giacomin, 2009) that a single weighting is not optimal for estimating human perception at all vibration magnitudes. In order to address these problems, the effects of gender, body mass and signal type on the equal sensation curves were investigated by means of psychophysical laboratory tests. The test participants were seated on a steering wheel simulator which consists of a rigid frame, a rigid steering wheel, an automobile seat, an electrodynamic shaker unit, a power amplifier and a signal generator. The category-ratio Borg CR10 scale procedure was used to quantify the perceived vibration intensity. The same test protocol was used for each test and for each test subject. The first experiment was conducted to investigate the effect of gender using sinusoidal vibration with 40 test participants (20 males and 20 females). The results suggested that the male participants provided generally lower subjective ratings than the female participants. The second experiment was conducted using band-limited random vibration to investigate the effect of signal type (sinusoidal versus band-limited random vibration) with 30 test participants (15 males and 15 females). The results suggested that the equal sensation curves obtained using random vibration were generally steeper and deeper in shape than those obtained using sinusoidal vibration. These differences may be due to the characteristics of random vibration, which produces generally higher crest factors than sinusoidal vibration. The third experiment was conducted to investigate the effect of physical body mass with 40 test participants (20 light and 20 heavy participants) using sinusoidal vibration. The results suggested that the light participants produced generally higher subjective ratings than the heavy participants. From the results it can be suggested that the equal sensation curves for steering wheel rotational vibration differ mainly due to differences of body size rather than differences of gender. The final experiment was conducted using real road signals to quantify the human subjective response to representative driving conditions and to use the results to define a selection method for choosing adequate frequency weightings for the road signals by means of correlation analysis. The final experiment was performed with 40 test participants (20 light and 20 heavy participants) using 21 real road signals obtained from the road tests.
From the results the hypothesis was established that different amplitude groups may require different frequency weightings. Three amplitude groups were defined and frequency weightings were selected for each amplitude group. The following findings can be drawn from the research:
• the equal sensation curves suggest a nonlinear dependency on both the frequency and the amplitude;
• the subjective responses obtained from band-limited random stimuli were steeper and deeper in the shape of the equal sensation curves than those obtained using sinusoidal vibration stimuli;
• females provided higher perceived intensity values than males for the same physical stimulus at most frequencies;
• light test participants provided higher perceived intensity than heavy test participants for the same physical stimulus at most frequencies;
• the equal sensation curves for steering wheel rotational vibration differ mainly due to differences in body size, rather than differences of gender;
• at least three frequency weightings may be necessary to estimate the subjective intensity for road surface stimuli.
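As background to the frequency-weighting idea discussed above, the short Python sketch below shows the basic operation such weightings perform: an acceleration signal is weighted in the frequency domain and reduced to a single frequency-weighted r.m.s. value. The weighting curve, test signal and sampling rate are invented for illustration; they are not the Wh weighting or any of the weightings derived in the thesis.

```python
import numpy as np

def weighted_rms(accel, fs, weight_freqs, weight_gains):
    """Apply a frequency weighting to an acceleration signal and
    return the frequency-weighted r.m.s. value (illustrative only)."""
    n = len(accel)
    spectrum = np.fft.rfft(accel)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Interpolate the weighting curve onto the FFT bins (linear gain, not dB).
    gains = np.interp(freqs, weight_freqs, weight_gains, left=0.0, right=0.0)
    weighted = np.fft.irfft(spectrum * gains, n)
    return np.sqrt(np.mean(weighted ** 2))

if __name__ == "__main__":
    fs = 1000.0                       # sampling rate in Hz
    t = np.arange(0, 5.0, 1.0 / fs)
    accel = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
    # Hypothetical weighting curve: emphasis around 8-60 Hz, roll-off above.
    wf = [1, 8, 60, 200, 400]
    wg = [0.2, 1.0, 1.0, 0.4, 0.1]
    print("weighted r.m.s. =", weighted_rms(accel, fs, wf, wg))
```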
3

Chen, Ziyue. "Generalizing Results from Randomized Trials to Target Population via Weighting Methods Using Propensity Score". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1503007759352248.

4

Asgeirsson, David J. "Development of a Monte Carlo re-weighting method for data fitting and application to measurement of neutral B meson oscillations". Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/37021.

Abstract:
In experimental particle physics, researchers must often construct a mathematical model of the experiment that can be used in fits to extract parameter values. With very large data sets, the statistical precision of measurements improves, and the required level of detail of the model increases. It can be extremely difficult or impossible to write a sufficiently precise analytical model for modern particle physics experiments. To avoid this problem, we have developed a new method for estimating parameter values from experimental data, using a Maximum Likelihood fit which compares the data distribution with a “Monte Carlo Template”, rather than an analytical model. In this technique, we keep a large number of simulated events in computer memory, and for each iteration of the fit, we use the stored true event and the current guess at the parameters to re-weight the event based on the probability functions of the underlying physical models. The re-weighted Monte-Carlo (MC) events are then used to recalculate the template histogram, and the process is repeated until convergence is achieved. We use simple probability functions for the underlying physical processes, and the complicated experimental resolution is modeled by a highly detailed MC simulation, instead of trying to capture all the details in an analytical form. We derive and explain in detail the “Monte-Carlo Re-Weighting” (MCRW) fit technique, and then apply it to the problem of measuring the neutral B meson mixing frequency. In this thesis, the method is applied to simulated data, to demonstrate the technique, and to indicate the results that could be expected when this analysis is performed on real data in the future.
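To make the re-weighting idea concrete, here is a minimal Python sketch of a Monte-Carlo-template fit in the spirit described above, applied to a toy exponential-lifetime problem rather than to B-meson mixing: stored simulated events are re-weighted event by event using the ratio of the physics PDF at the trial parameter to the PDF at the generation parameter, the template histogram is rebuilt, and a Poisson likelihood is scanned. All distributions, resolutions and parameter values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "physics model": exponential decay-time PDF with lifetime tau.
def pdf(t, tau):
    return np.exp(-t / tau) / tau

# Monte Carlo sample generated at a fixed lifetime, with a crude
# detector resolution applied to obtain "reconstructed" values.
tau_gen = 1.5
t_true = rng.exponential(tau_gen, size=200_000)
t_reco = t_true + rng.normal(0.0, 0.1, size=t_true.size)

# Pretend "data" generated at the unknown true lifetime.
tau_data = 1.2
data = rng.exponential(tau_data, size=20_000) + rng.normal(0.0, 0.1, size=20_000)

bins = np.linspace(0.0, 10.0, 51)
data_counts, _ = np.histogram(data, bins=bins)

def neg_log_likelihood(tau):
    # Re-weight each stored MC event using its *true* value and the trial tau.
    w = pdf(t_true, tau) / pdf(t_true, tau_gen)
    template, _ = np.histogram(t_reco, bins=bins, weights=w)
    expected = template / template.sum() * data_counts.sum()
    expected = np.clip(expected, 1e-9, None)
    return -np.sum(data_counts * np.log(expected) - expected)  # Poisson terms

# Simple likelihood scan over candidate lifetimes.
taus = np.linspace(0.8, 2.0, 121)
best = taus[np.argmin([neg_log_likelihood(t) for t in taus])]
print("fitted lifetime ~", round(best, 3))
```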
5

De, la Rey Tanja. "Two statistical problems related to credit scoring / Tanja de la Rey". Thesis, North-West University, 2007. http://hdl.handle.net/10394/3689.

Abstract:
This thesis focuses on two statistical problems related to credit scoring. In credit scoring of individuals, two classes are distinguished, namely low and high risk individuals (the so-called "good" and "bad" risk classes). Firstly, we suggest a measure which may be used to study the nature of a classifier for distinguishing between the two risk classes. Secondly, we derive a new method, DOUW (detecting outliers using weights), which may be used to fit logistic regression models robustly and for the detection of outliers. In the first problem, the focus is on a measure which may be used to study the nature of a classifier. This measure transforms a random variable so that it has the same distribution as another random variable. Assuming a linear form of this measure, three methods for estimating the parameters (slope and intercept) and for constructing confidence bands are developed and compared by means of a Monte Carlo study. The application of these estimators is illustrated on a number of datasets. We also construct a statistical hypothesis test of this linearity assumption. In the second problem, the focus is on providing a robust logistic regression fit and the identification of outliers. It is well-known that maximum likelihood estimators of logistic regression parameters are adversely affected by outliers. We propose a robust approach that also serves as an outlier detection procedure and is called DOUW. The approach is based on associating high and low weights with the observations as a result of the likelihood maximization. It turns out that the outliers are those observations to which low weights are assigned. This procedure depends on two tuning constants. A simulation study is presented to show the effects of these constants on the performance of the proposed methodology. The results are presented in terms of four benchmark datasets as well as a large new dataset from the application area of retail marketing campaign analysis. In the last chapter we apply the techniques developed in this thesis to a practical credit scoring dataset. We show that the DOUW method improves the classifier performance and that the measure developed to study the nature of a classifier is useful in a credit scoring context and may be used for assessing whether the distribution of the good and the bad risk individuals is from the same translation-scale family.
Thesis (Ph.D. (Risk Analysis))--North-West University, Potchefstroom Campus, 2008.
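The abstract does not spell out the DOUW weighting scheme, so the Python sketch below only illustrates the general family of approaches it belongs to: a weighted logistic regression in which observations that fit poorly are iteratively down-weighted, and the observations that end up with low weights are flagged as potential outliers. The tuning constant c and the deviance-based down-weighting rule are hypothetical choices, not the DOUW procedure itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def robust_logistic(X, y, c=2.0, n_outer=20, n_irls=50):
    """Weighted logistic regression in which poorly fitted observations are
    down-weighted; low final weights flag potential outliers.
    Generic sketch only, not the DOUW algorithm itself."""
    n, p = X.shape
    Xb = np.column_stack([np.ones(n), X])
    beta = np.zeros(p + 1)
    obs_w = np.ones(n)
    for _ in range(n_outer):
        # Weighted maximum likelihood via Newton steps.
        for _ in range(n_irls):
            mu = sigmoid(Xb @ beta)
            W = obs_w * mu * (1 - mu)
            grad = Xb.T @ (obs_w * (y - mu))
            hess = Xb.T @ (Xb * W[:, None]) + 1e-8 * np.eye(p + 1)
            step = np.linalg.solve(hess, grad)
            beta += step
            if np.max(np.abs(step)) < 1e-8:
                break
        # Down-weight observations with large deviance residuals.
        mu = np.clip(sigmoid(Xb @ beta), 1e-12, 1 - 1e-12)
        dev = -2 * (y * np.log(mu) + (1 - y) * np.log(1 - mu))
        obs_w = np.where(dev > c, c / dev, 1.0)
    return beta, obs_w

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (rng.random(300) < sigmoid(X @ np.array([1.5, -1.0]))).astype(float)
y[:5] = 1 - y[:5]                      # contaminate a few labels
beta, w = robust_logistic(X, y)
print("coefficients:", np.round(beta, 2), "| lowest weights:", np.argsort(w)[:5])
```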
6

Яременко, Наталія Сергіївна, Наталья Сергеевна Яременко and Nataliia Serhiivna Yaremenko. "Метод рандомізованих зведених показників визначення вагових коефіцієнтів в таксономічних показниках". Thesis, Запорізький національний університет, 2015. http://essuir.sumdu.edu.ua/handle/123456789/60125.

Abstract:
The method of randomized summary (aggregate) indices is considered for finding weighting coefficients when standardized indicators need to be aggregated while taking into account their unequal contributions to the integral index.
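As a rough illustration of how randomized summary indices can turn ordinal importance information into numerical weights, the Python sketch below averages weight vectors sampled uniformly from the simplex, keeping only those consistent with an assumed importance ordering. The number of criteria, the ordering and the indicator values are hypothetical.

```python
import numpy as np

def randomized_weights(n_criteria, order=None, n_samples=100_000, seed=0):
    """Estimate weighting coefficients as the mean of weight vectors drawn
    uniformly from the simplex, keeping only those consistent with an
    assumed importance ordering (sketch of the randomized-indices idea)."""
    rng = np.random.default_rng(seed)
    w = rng.dirichlet(np.ones(n_criteria), size=n_samples)  # uniform on simplex
    if order is not None:
        # Keep samples where w[order[0]] >= w[order[1]] >= ... holds.
        ordered = w[:, order]
        mask = np.all(np.diff(ordered, axis=1) <= 0, axis=1)
        w = w[mask]
    return w.mean(axis=0)

# Hypothetical example: 4 standardized indicators, criterion 2 judged most
# important, then 0, then 1, then 3.
weights = randomized_weights(4, order=[2, 0, 1, 3])
indicators = np.array([0.7, 0.4, 0.9, 0.2])      # standardized scores
print("weights:", np.round(weights, 3), "integral index:", round(weights @ indicators, 3))
```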
7

Col, Juliana Sipoli. "Coerência, ponderação de princípios e vinculação à lei: métodos e modelos". Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/2/2139/tde-29082013-132628/.

Abstract:
O objeto da discussão é a racionalidade das decisões judiciais em casos em que se constata conflito de princípios ou entre princípios e regras, casos esses considerados difíceis, uma vez que não há no ordenamento jurídico solução predeterminada que permita mera subsunção dos fatos à norma. São examinados métodos alternativos ao de subsunção. O primeiro é o método da ponderação, difundido principalmente por Robert Alexy, com suas variantes. Entretanto, o problema que surge com a aplicação do método da ponderação é da imponderabilidade entre ponderação e vinculação à lei, ou seja, a escolha dos pesos dos princípios e sua potencial desvinculação da lei. O segundo modelo, chamado de coerentista, busca conferir alguma racionalidade e fornecer critérios que poderiam explicar escolhas entre valores conflitantes subjacentes à legislação e mesmo aos pesos do método de ponderação. Dentro do modelo coerentista, examina-se em particular a versão inferencial que explora a coerência entre regras e princípios pela inferência abdutiva dos princípios a partir das regras. A aplicação dos diferentes modelos é feita em duas decisões prolatadas pelo Supremo Tribunal Federal em casos de conflito de princípio, casos Ellwanger e de aborto de anencéfalos. O que não permite generalização, mas oferece ilustrações específicas das virtudes e vícios desses modelos de decisão.
The subject of this study is the rationality of judicial decisions in cases where there is a collision of principles or a conflict between principles and rules. These are hard cases, since there is no predetermined solution in the legal system that allows facts simply to be subsumed under the norm. Alternative methods are then examined. The first is the method of weighing and balancing, advanced mainly by Robert Alexy, together with its variants. However, the difficulty in applying such a method is the imponderability between weighing and adherence to the law, that is, the choice of the weights of the principles and their potential detachment from the law. The second model, called the coherence model, seeks to provide some rationality and to supply criteria that could explain choices between conflicting values underlying the legislation and even the weights used in the weighing and balancing method. Within the coherence model, the inferential version is examined in particular; it explores the coherence between rules and principles through the abductive inference of principles from rules. These models are applied to two decisions handed down by the Brazilian Supreme Court in cases of collision of principles, the Ellwanger and anencephalic abortion cases. This does not allow generalization, but it offers specific illustrations of the virtues and defects of these models of decision.
8

Gencturk, Bilgehan. "Nickel Resource Estimation And Reconciliation At Turkmencardagi Laterite Deposits". Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614978/index.pdf.

Abstract:
In recent years nickel is mostly produced from lateritic ore deposits such as nontronite, limonite, etc. Resource estimation is difficult for laterite deposits as they have a weak and heterogeneous form. 3D modeling software is rather suitable for deposits having tabular or vein type ores. In this study the most appropriate technique for resource estimation of nickel laterite deposits was investigated. One of the known nickel laterite deposits in Turkey is located in the Türkmençardagi - Gördes region. Since the nickel (Ni) grades recovered from drilling studies seemed to be very low, a reconciliation pit having dimensions of 40 m x 40 m x 15 m in the x-y-z directions was planned by Meta Nikel Kobalt Mining Company (META), the license owner of the mine, to produce nickel ore. 13 core drillholes, 13 reverse circulation (RC) drillholes and 26 column samplings adjacent to each drillhole were located in this area. Those three sampling results were compared to each other as well as to the actual production values obtained from the reconciliation pit. In addition, 3D computer modeling was used to model the nickel resource in the Türkmençardagi - Gördes laterites. The results obtained from both the inverse distance weighting and kriging methods were compared to the results of actual production to find out the applicability of 3D modeling to laterite deposits. Modeling results showed that, for the reconciliation pit in Türkmençardagi - Gördes and considering a 0.5% Ni cut-off value, the inverse distance weighting method estimates 622 tonnes at 0.553% Ni and the kriging method estimates 749 tonnes at 0.527% Ni from the drillhole data. The actual production pit provided 4,882 tonnes of nickel ore at a grade of 0.649% Ni. These results show that the grade values seem to be acceptable, but in terms of tonnage there are significant differences between the theoretically estimated values and the production values.
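For readers unfamiliar with inverse distance weighting, the minimal Python sketch below shows the estimator in its simplest form: a block grade is estimated as a distance-weighted average of nearby composites. The coordinates, grades and power parameter are invented for illustration and are not the thesis data.

```python
import numpy as np

def idw_estimate(sample_xyz, sample_grades, target_xyz, power=2.0, eps=1e-9):
    """Inverse distance weighting: grades are averaged with weights
    proportional to 1 / distance**power."""
    d = np.linalg.norm(sample_xyz - target_xyz, axis=1)
    if np.any(d < eps):                      # target coincides with a sample
        return float(sample_grades[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * sample_grades) / np.sum(w))

# Illustrative drillhole composites: (x, y, z) in metres and Ni grade in %.
xyz = np.array([[0, 0, 5], [40, 0, 5], [0, 40, 10], [40, 40, 10], [20, 25, 8]], float)
ni = np.array([0.55, 0.62, 0.48, 0.71, 0.53])
block_centroid = np.array([20.0, 20.0, 7.5])
print("IDW Ni estimate (%):", round(idw_estimate(xyz, ni, block_centroid), 3))
```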
9

Lu, Ling, and Bofeng Li. "Combining Different Feature Weighting Methods for Case Based Reasoning". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-26603.

10

Choo, Wei-Chong. "Volatility forecasting with exponential weighting, smooth transition and robust methods". Thesis, University of Oxford, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489421.

Abstract:
This thesis focuses on the forecasting of the volatility in financial returns. Our first main contribution is the introduction of two new approaches for combining volatility forecasts. One approach involves the use of discounted weighted least square. The second proposed approach is smooth transition (ST) combining, which allows the combining weights to change gradually and smoothly over time in response to changes in suitably chosen transition variables.
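As an illustration of the discounted weighted least squares idea mentioned above, the Python sketch below estimates combining weights for several volatility forecasts by regressing realised volatility on the individual forecasts with exponentially discounted observations. The discount factor, forecasts and "realised" series are synthetic and purely illustrative; the smooth transition combining scheme is not reproduced here.

```python
import numpy as np

def discounted_wls_weights(forecasts, realized, discount=0.97):
    """Combining weights from discounted weighted least squares:
    minimise sum_t discount**(T-t) * (realized_t - w . forecasts_t)**2."""
    T = len(realized)
    lam = discount ** np.arange(T - 1, -1, -1)       # heaviest weight on recent obs
    sw = np.sqrt(lam)
    A = forecasts * sw[:, None]
    b = realized * sw
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

rng = np.random.default_rng(3)
T = 500
true_vol = 0.01 + 0.005 * np.abs(np.sin(np.linspace(0, 20, T)))
# Two hypothetical forecasters with different error characteristics.
f1 = true_vol + rng.normal(0, 0.002, T)
f2 = true_vol + rng.normal(0, 0.004, T)
w = discounted_wls_weights(np.column_stack([f1, f2]), true_vol)
print("combining weights:", np.round(w, 3))
combined_forecast = np.column_stack([f1, f2]) @ w
```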
11

Alqasrawi, Yousef T. N. "Natural scene classification, annotation and retrieval : developing different approaches for semantic scene modelling based on Bag of Visual Words". Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5523.

Abstract:
With the availability of inexpensive hardware and software, digital imaging has become an important medium of communication in our daily lives. Huge numbers of digital images are being collected and made available through the internet, and stored in various settings such as personal image collections, medical imaging, digital arts, etc. Therefore, it is important to make sure that images are stored, searched and accessed in an efficient manner. The use of the bag of visual words (BOW) model for modelling images based on local invariant features computed at interest point locations has become a standard choice for many computer vision tasks. Based on this promising model, this thesis investigates three main problems: natural scene classification, annotation and retrieval. Given an image, the task is to design a system that can determine which class the image belongs to (classification), what semantic concepts it contains (annotation) and which images are most similar to it (retrieval). This thesis contributes to scene classification by proposing a weighting approach, named the keypoints density-based weighting method (KDW), to control the fusion of colour information and bag of visual words on a spatial pyramid layout in a unified framework. Different configurations of BOW, integrated visual vocabularies and multiple image descriptors are investigated and analyzed. The proposed approaches are extensively evaluated over three well-known scene classification datasets with 6, 8 and 15 scene categories using 10-fold cross validation. The second contribution in this thesis, the scene annotation task, is to explore whether the integrated visual vocabularies generated for scene classification can be used to model the local semantic information of natural scenes. In this direction, image annotation is considered as a classification problem where images are partitioned into a fixed 10x10 grid and each block, represented by BOW and different image descriptors, is classified into one of the predefined semantic classes. An image is then represented by counting the percentage of every semantic concept detected in the image. Experimental results on 6 scene categories demonstrate the effectiveness of the proposed approach. Finally, this thesis further explores, with extensive experimental work, the use of different configurations of the BOW for natural scene retrieval.
12

Ferreira, Junior Valnir. "Improvements to Clause Weighting Local Search for Propositional Satisfiability". Griffith University. Institute for Integrated and Intelligent Systems, 2007. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070823.123257.

Abstract:
The propositional satisfiability (SAT) problem is of considerable theoretical and practical relevance to the artificial intelligence (AI) community and has been used to model many pervasive AI tasks such as default reasoning, diagnosis, planning, image interpretation, and constraint satisfaction. Computational methods for SAT have historically fallen into two broad categories: complete search and local search. Within the local search category, clause weighting methods are amongst the best alternatives for SAT, becoming particularly attractive on problems where a complete search is impractical or where there is a need to find good candidate solutions within a short time. The thesis is concerned with the study of improvements to clause weighting local search methods for SAT. The main contributions are: A component-based framework for the functional analysis of local search methods. A clause weighting local search heuristic that exploits longer-term memory arising from clause weight manipulations. The approach first learns which clauses are globally hardest to satisfy and then uses this information to treat these clauses differentially during weight manipulation [Ferreira Jr and Thornton, 2004]. A study of heuristic tie breaking in the domain of additive clause weighting local search methods, and the introduction of a competitive method that uses heuristic tie breaking instead of the random tie breaking approach used in most existing methods [Ferreira Jr and Thornton, 2005]. An evaluation of backbone guidance for clause weighting local search, and the introduction of backbone guidance to three state-of-the-art clause weighting local search methods [Ferreira Jr, 2006]. A new clause weighting local search method for SAT that successfully exploits synergies between the longer-term memory and tie breaking heuristics developed in the thesis to significantly improve on the performance of current state-of-the-art local search methods for SAT-encoded instances containing identifiable CSP structure. Portions of this thesis have appeared in the following refereed publications: Longer-term memory in clause weighting local search for SAT. In Proceedings of the 17th Australian Joint Conference on Artificial Intelligence, volume 3339 of Lecture Notes in Artificial Intelligence, pages 730-741, Cairns, Australia, 2004. Tie breaking in clause weighting local search for SAT. In Proceedings of the 18th Australian Joint Conference on Artificial Intelligence, volume 3809 of Lecture Notes in Artificial Intelligence, pages 70–81, Sydney, Australia, 2005. Backbone guided dynamic local search for propositional satisfiability. In Proceedings of the Ninth International Symposium on Artificial Intelligence and Mathematics, AI&M, Fort Lauderdale, Florida, 2006.
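To give a concrete sense of what "clause weighting local search" means, the following minimal Python sketch implements the core additive scheme the abstract builds on: flip the variable that most reduces the total weight of unsatisfied clauses, and when no improving flip exists, additively increase the weights of the clauses that are currently unsatisfied. It deliberately omits the thesis's contributions (longer-term memory, heuristic tie breaking, backbone guidance) and is written for readability, not speed.

```python
import random

def clause_weighting_search(clauses, n_vars, max_flips=100_000, seed=0):
    """Basic additive clause-weighting local search for SAT.
    clauses: list of clauses, each a list of non-zero signed ints (DIMACS-style)."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]  # index 0 unused
    weights = [1] * len(clauses)

    def satisfied(c):
        return any((lit > 0) == assign[abs(lit)] for lit in c)

    def unsat_weight():
        return sum(w for c, w in zip(clauses, weights) if not satisfied(c))

    for _ in range(max_flips):
        current = unsat_weight()
        if current == 0:
            return assign                      # model found
        # Evaluate every possible single-variable flip.
        best_var, best_cost = None, current
        for v in range(1, n_vars + 1):
            assign[v] = not assign[v]
            cost = unsat_weight()
            assign[v] = not assign[v]
            if cost < best_cost:
                best_var, best_cost = v, cost
        if best_var is None:
            # Local minimum: additively increase weights of unsatisfied clauses.
            for i, c in enumerate(clauses):
                if not satisfied(c):
                    weights[i] += 1
        else:
            assign[best_var] = not assign[best_var]
    return None

# Tiny example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]
print("solution:", clause_weighting_search(cnf, 3))
```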
13

Ferreira, Junior Valnir. "Improvements to Clause Weighting Local Search for Propositional Satisfiability". Thesis, Griffith University, 2007. http://hdl.handle.net/10072/365857.

Abstract:
The propositional satisfiability (SAT) problem is of considerable theoretical and practical relevance to the artificial intelligence (AI) community and has been used to model many pervasive AI tasks such as default reasoning, diagnosis, planning, image interpretation, and constraint satisfaction. Computational methods for SAT have historically fallen into two broad categories: complete search and local search. Within the local search category, clause weighting methods are amongst the best alternatives for SAT, becoming particularly attractive on problems where a complete search is impractical or where there is a need to find good candidate solutions within a short time. The thesis is concerned with the study of improvements to clause weighting local search methods for SAT. The main contributions are: A component-based framework for the functional analysis of local search methods. A clause weighting local search heuristic that exploits longer-term memory arising from clause weight manipulations. The approach first learns which clauses are globally hardest to satisfy and then uses this information to treat these clauses differentially during weight manipulation [Ferreira Jr and Thornton, 2004]. A study of heuristic tie breaking in the domain of additive clause weighting local search methods, and the introduction of a competitive method that uses heuristic tie breaking instead of the random tie breaking approach used in most existing methods [Ferreira Jr and Thornton, 2005]. An evaluation of backbone guidance for clause weighting local search, and the introduction of backbone guidance to three state-of-the-art clause weighting local search methods [Ferreira Jr, 2006]. A new clause weighting local search method for SAT that successfully exploits synergies between the longer-term memory and tie breaking heuristics developed in the thesis to significantly improve on the performance of current state-of-the-art local search methods for SAT-encoded instances containing identifiable CSP structure. Portions of this thesis have appeared in the following refereed publications: Longer-term memory in clause weighting local search for SAT. In Proceedings of the 17th Australian Joint Conference on Artificial Intelligence, volume 3339 of Lecture Notes in Artificial Intelligence, pages 730-741, Cairns, Australia, 2004. Tie breaking in clause weighting local search for SAT. In Proceedings of the 18th Australian Joint Conference on Artificial Intelligence, volume 3809 of Lecture Notes in Artificial Intelligence, pages 70–81, Sydney, Australia, 2005. Backbone guided dynamic local search for propositional satisfiability. In Proceedings of the Ninth International Symposium on Artificial Intelligence and Mathematics, AI&M, Fort Lauderdale, Florida, 2006.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
14

Hawley, Kevin J. "A comparative analysis of areal interpolation methods". Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1139949635.

15

Černoch, Adam. "Vyhodnocování dopravního hluku a jeho modelování". Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2014. http://www.nusl.cz/ntk/nusl-226952.

Abstract:
The master's thesis provides an introduction to the problems of traffic noise, focusing on noise from road traffic. It describes what noise is, how it arises, what its sources are and how it is measured. The various noise reduction measures, such as noise barriers and low-noise pavements, are then presented. The main attention is devoted to the noise generated at the tire/road interface, which these pavements reduce. The practical part describes the measurements carried out on individual sections at various locations in our country. The measurements were carried out with a slightly modified CPX method, with a reference tire mounted directly on the vehicle. The main aim was to evaluate the measured data, to compare different low-noise surfaces with each other and with commonly used surfaces, to quantify the reduction in noise emission for a given section and to verify input data for noise modelling. In conclusion, the obtained results are summarized; they confirm the very good acoustic properties of the low-noise surfaces, and it is recommended to continue the measurements in the future.
16

Kaatz, Ewelina. "Development of benchmarks and weighting systems for building environmental assessment methods : opportunities of a participatory approach". Master's thesis, University of Cape Town, 2001. http://hdl.handle.net/11427/4767.

Abstract:
Bibliography: leaves 41-44.
Sustainable construction is a term that emerged with the introduction of the concept of sustainable development in construction. Therefore, sustainable construction embraces socio-economic, cultural, biophysical, technical and process-orientated aspects of construction practice and activities. The progress towards sustainability in construction may be assessed by the implementation of good practice in building developments. Therefore, building environmental assessment methods are valuable tools for indicating such progress as well as for promoting sustainable approaches in construction. An effective building environmental assessment method requires the definition of explicit benchmarks and weightings. These should take into account the environmental, social and economic contexts of building developments. As the existing building environmental assessment methods largely ignore socio-economic impacts of building developments, the implementation of a participatory approach in the development of benchmarks and weighting systems could greatly contribute to a more meaningful incorporation of social and economic aspects into the assessment process. Furthermore, the participation of stakeholders in establishing qualitative benchmarks and weights should increase the credibility of such a process. The participatory approach could allow for the education of all stakeholders about the potential environmental, social and economic consequences of their decisions and actions, which is vital for achieving their commitment to strive towards sustainable construction.
17

Varma, Krishnaraj M. "Fast Split Arithmetic Encoder Architectures and Perceptual Coding Methods for Enhanced JPEG2000 Performance". Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/26519.

Abstract:
JPEG2000 is a wavelet transform based image compression and coding standard. It provides superior rate-distortion performance when compared to the previous JPEG standard. In addition JPEG2000 provides four dimensions of scalability-distortion, resolution, spatial, and color. These superior features make JPEG2000 ideal for use in power and bandwidth limited mobile applications like urban search and rescue. Such applications require a fast, low power JPEG2000 encoder to be embedded on the mobile agent. This embedded encoder needs to also provide superior subjective quality to low bitrate images. This research addresses these two aspects of enhancing the performance of JPEG2000 encoders. The JPEG2000 standard includes a perceptual weighting method based on the contrast sensitivity function (CSF). Recent literature shows that perceptual methods based on subband standard deviation are also effective in image compression. This research presents two new perceptual weighting methods that combine information from both the human contrast sensitivity function as well as the standard deviation within a subband or code-block. These two new sets of perceptual weights are compared to the JPEG2000 CSF weights. The results indicate that our new weights performed better than the JPEG2000 CSF weights for high frequency images. Weights based solely on subband standard deviation are shown to perform worse than JPEG2000 CSF weights for all images at all compression ratios. Embedded block coding, EBCOT tier-1, is the most computationally intensive part of the JPEG2000 image coding standard. Past research on fast EBCOT tier-1 hardware implementations has concentrated on cycle efficient context formation. These pass-parallel architectures require that JPEG2000's three mode switches be turned on. While turning on the mode switches allows for arithmetic encoding from each coding pass to run independent of each other (and thus in parallel), it also disrupts the probability estimation engine of the arithmetic encoder, thus sacrificing coding efficiency for improved throughput. In this research a new fast EBCOT tier-1 design is presented: it is called the Split Arithmetic Encoder (SAE) process. The proposed process exploits concurrency to obtain improved throughput while preserving coding efficiency. The SAE process is evaluated using three methods: clock cycle estimation, multithreaded software implementation, a field programmable gate array (FPGA) hardware implementation. All three methods achieve throughput improvement; the hardware implementation exhibits the largest speedup, as expected. A high speed, task-parallel, multithreaded, software architecture for EBCOT tier-1 based on the SAE process is proposed. SAE was implemented in software on two shared-memory architectures: a PC using hyperthreading and a multi-processor non-uniform memory access (NUMA) machine. The implementation adopts appropriate synchronization mechanisms that preserve the algorithm's causality constraints. Tests show that the new architecture is capable of improving throughput as much as 50% on the NUMA machine and as much as 19% on a PC with two virtual processing units. A high speed, multirate, FPGA implementation of the SAE process is also proposed. The mismatch between the rate of production of data by the context formation (CF) module and the rate of consumption of data by the arithmetic encoder (AE) module is studied in detail. 
Appropriate choices for FIFO sizes and FIFO write and read capabilities are made based on the statistics obtained from test runs of the algorithm. Using a fast CF module, this implementation was able to achieve as much as 120% improvement in throughput.
Ph. D.
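The exact combination rules proposed in the thesis are not given in the abstract; the Python fragment below merely illustrates one plausible way of blending CSF-based subband weights with normalized subband standard deviations into a single set of perceptual weights. The subbands, CSF values, standard deviations and the blending exponent are all hypothetical.

```python
import numpy as np

def combined_perceptual_weights(csf_weights, subband_stds, alpha=0.5):
    """Blend CSF-based weights with normalized subband standard deviations.
    Purely illustrative; not the thesis's actual combination rules."""
    csf = np.asarray(csf_weights, float)
    std = np.asarray(subband_stds, float)
    std_norm = std / std.mean()
    w = (csf ** alpha) * (std_norm ** (1.0 - alpha))
    return w / w.max()              # scale so the largest weight is 1

# Hypothetical 5-subband example (e.g., LL, LH3, HL3, HL2, HH1).
csf = [1.0, 0.9, 0.9, 0.6, 0.3]     # higher = visually more important
stds = [25.0, 12.0, 10.0, 8.0, 15.0]
print(np.round(combined_perceptual_weights(csf, stds), 3))
```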
18

Wong, Mark. "Comparison of heat maps showing residence price generated using interpolation methods". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214110.

Abstract:
In this report we attempt to provide insights in how interpolation can be used for creating heat maps showing residence prices for different residence markets in Sweden. More specifically, three interpolation methods are implemented and are then used on three Swedish residence markets. These three residence markets are of varying characteristics such as size and residence type. Data of residence sales and the physical definitions of the residence markets were collected. As residence sales are never identical, residence sales were preprocessed to make them comparable. For comparison, a so-called external predictor was used as an extra parameter for the interpolation method. In this report, distance to nearest public transportation was used as an external predictor. The interpolated heat maps were compared and evaluated using both quantitative and qualitative approaches. Results show that each interpolation method has its own strengths and weaknesses, and that using an external predictor results in better heat maps compared to only using residence price as predictor. Kriging was found to be the most robust method and consistently resulted in the best interpolated heat maps for all residence markets. On the other hand, it was also the most time-consuming interpolation method.
Den här rapporten försöker ge insikter i hur interpolation kan användas för att skapa färgdiagram över bostadspriser för olika bostadsmarknader i Sverige. Mer specifikt implementeras tre interpolationsmetoder som sedan används på tre olika svenska bostadsmarknader. Dessa tre bostadsmarknader är av olika karaktär med hänsyn till storlek och bostadstyp. Bostadsförsäljningsdata och de fysiska definitionerna för bostadsmarknaderna samlades in. Eftersom bostadsförsäljningar aldrig är identiska, behandlas de först i syfte att göra dem jämförbara. En extern indikator, vilket är en extra parameter för interpolationsmetoder, undersöktes även. I den här rapporten användes avståndet till närmaste kollektiva transportmedel som extern indikator. De interpolerade färgdiagrammen jämfördes och utvärderades både med en kvantiativ och en kvalitativ metod. Resultaten visar att varje interpolationsmetod har sina styrkor och svagheter och att användandet av en extern indikator alltid renderade i ett bättre färgdiagram jämfört med att endast använda bostadspris som indikator. Kriging bedöms vara den mest robusta interpolationsmetoden och interpolerade även de bästa färgdiagrammen för alla bostadsmarknader. Samtidigt var det även den mest tidskrävande interpolationsmetoden.
19

Schmidl, Ricarda. "Empirical essays on job search behavior, active labor market policies, and propensity score balancing methods". Phd thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7114/.

Abstract:
In Chapter 1 of the dissertation, the role of social networks is analyzed as an important determinant in the search behavior of the unemployed. Based on the hypothesis that the unemployed generate information on vacancies through their social network, search theory predicts that individuals with large social networks should experience an increased productivity of informal search, and reduce their search in formal channels. Due to the higher productivity of search, unemployed with a larger network are also expected to have a higher reservation wage than unemployed with a small network. The model-theoretic predictions are tested and confirmed empirically. It is found that the search behavior of the unemployed is significantly affected by the presence of social contacts, with larger networks implying a stronger substitution away from formal search channels towards informal channels. The substitution is particularly pronounced for passive formal search methods, i.e., search methods that generate rather non-specific types of job offer information at low relative cost. We also find small but significant positive effects of an increase of the network size on the reservation wage. These results have important implications for the analysis of job search monitoring or counseling measures that are usually targeted at formal search only. Chapter 2 of the dissertation addresses the labor market effects of vacancy information during the early stages of unemployment. The outcomes considered are the speed of exit from unemployment, the effects on the quality of employment and the short- and medium-term effects on active labor market program (ALMP) participation. It is found that vacancy information significantly increases the speed of entry into employment; at the same time the probability of participating in ALMP is significantly reduced. Whereas the long-term reduction in ALMP participation arises as a consequence of the earlier exit from unemployment, we also observe a short-run decrease for some labor market groups, which suggests that caseworkers use high- and low-intensity activation measures interchangeably; this is clearly questionable from an efficiency point of view. For unemployed who find a job through vacancy information we observe a small negative effect on the weekly number of hours worked. In Chapter 3, the long-term effects of participation in ALMP are assessed for unemployed youth under 25 years of age. Complementary to the analysis in Chapter 2, the effects of participation in time- and cost-intensive measures of active labor market policies are examined. In particular we study the effects of job creation schemes, wage subsidies, short- and long-term training measures and measures to promote participation in vocational training. The outcome variables of interest are the probability of being in regular employment, and participation in further education during the 60 months following program entry. The analysis shows that all programs, except job creation schemes, have positive and long-term effects on the employment probability of youth. In the short run only short-term training measures generate positive effects, as long-term training programs and wage subsidies exhibit significant "locking-in" effects. Measures to promote vocational training are found to increase the probability of attending education and training significantly, whereas all other programs have either no or a negative effect on training participation.
Effect heterogeneity with respect to pre-treatment education shows that young people with higher pre-treatment educational levels benefit more from participation in most programs. However, for longer-term wage subsidies we also find strong positive effects for young people with low initial education levels. The relative benefit of training measures is higher in West than in East Germany. In the evaluation studies of Chapters 2 and 3, the semi-parametric balancing methods of Propensity Score Matching (PSM) and Inverse Probability Weighting (IPW) are used to eliminate the effects of confounding factors that influence both treatment participation and the outcome variable of interest, and to establish a causal relation between program participation and outcome differences. While PSM and IPW are intuitive and methodologically attractive as they do not require parametric assumptions, the practical implementation may become quite challenging due to their sensitivity to various data features. Given the importance of these methods in the evaluation literature, and the vast number of recent methodological contributions in this field, Chapter 4 aims to reduce the knowledge gap between the methodological and applied literature by summarizing new findings of the empirical and statistical literature and practical guidelines for future applied research. In contrast to previous publications this study does not only focus on the estimation of causal effects, but stresses that the balancing challenge can and should be discussed independently of the question of causal identification of treatment effects in most empirical applications. Following a brief outline of the practical implementation steps required for PSM and IPW, these steps are presented in detail chronologically, outlining practical advice for each step. Subsequently, the topics of effect estimation, inference, sensitivity analysis and the combination with parametric estimation methods are discussed. Finally, new extensions of the methodology and avenues for future research are presented.
In Kapitel 1 der Dissertation wird die Rolle von sozialen Netzwerken als Determinante im Suchverhalten von Arbeitslosen analysiert. Basierend auf der Hypothese, dass Arbeitslose durch ihr soziales Netzwerk Informationen über Stellenangebote generieren, sollten Personen mit großen sozialen Netzwerken eine erhöhte Produktivität ihrer informellen Suche erfahren, und ihre Suche in formellen Kanälen reduzieren. Durch die höhere Produktivität der Suche sollte für diese Personen zudem der Reservationslohn steigen. Die modelltheoretischen Vorhersagen werden empirisch getestet, wobei die Netzwerkinformationen durch die Anzahl guter Freunde, sowie Kontakthäufigkeit zu früheren Kollegen approximiert wird. Die Ergebnisse zeigen, dass das Suchverhalten der Arbeitslosen durch das Vorhandensein sozialer Kontakte signifikant beeinflusst wird. Insbesondere sinkt mit der Netzwerkgröße formelle Arbeitssuche - die Substitution ist besonders ausgeprägt für passive formelle Suchmethoden, d.h. Informationsquellen die eher unspezifische Arten von Jobangeboten bei niedrigen relativen Kosten erzeugen. Im Einklang mit den Vorhersagen des theoretischen Modells finden sich auch deutlich positive Auswirkungen einer Erhöhung der Netzwerkgröße auf den Reservationslohn. Kapitel 2 befasst sich mit den Arbeitsmarkteffekten von Vermittlungsangeboten (VI) in der frühzeitigen Aktivierungsphase von Arbeitslosen. Die Nutzung von VI könnte dabei eine „doppelte Dividende“ versprechen. Zum einen reduziert die frühe Aktivierung die Dauer der Arbeitslosigkeit, und somit auch die Notwendigkeit späterer Teilnahme in Arbeitsmarktprogrammen (ALMP). Zum anderen ist die Aktivierung durch Information mit geringeren locking-in‘‘ Effekten verbunden als die Teilnahme in ALMP. Ziel der Analyse ist es, die Effekte von frühen VI auf die Eingliederungsgeschwindigkeit, sowie die Teilnahmewahrscheinlichkeit in ALMP zu messen. Zudem werden mögliche Effekte auf die Qualität der Beschäftigung untersucht. Die Ergebnisse zeigen, dass VI die Beschäftigungswahrscheinlichkeit signifikant erhöhen, und dass gleichzeitig die Wahrscheinlichkeit in ALMP teilzunehmen signifikant reduziert wird. Für die meisten betrachteten Subgruppen ergibt sich die langfristige Reduktion der ALMP Teilnahme als Konsequenz der schnelleren Eingliederung. Für einzelne Arbeitsmarktgruppen ergibt sich zudem eine frühe und temporare Reduktion, was darauf hinweist, dass Maßnahmen mit hohen und geringen „locking-in“ Effekten aus Sicht der Sachbearbeiter austauschbar sind, was aus Effizienzgesichtspunkten fragwürdig ist. Es wird ein geringer negativer Effekt auf die wöchentliche Stundenanzahl in der ersten abhängigen Beschäftigung nach Arbeitslosigkeit beobachtet. In Kapitel 3 werden die Langzeiteffekte von ALMP für arbeitslose Jugendliche unter 25 Jahren ermittelt. Die untersuchten ALMP sind ABM-Maßnahmen, Lohnsubventionen, kurz-und langfristige Maßnahmen der beruflichen Bildung sowie Maßnahmen zur Förderung der Teilnahme an Berufsausbildung. Ab Eintritt in die Maßnahme werden Teilnehmer und Nicht-Teilnehmer für einen Zeitraum von sechs Jahren beobachtet. Als Zielvariable wird die Wahrscheinlichkeit regulärer Beschäftigung, sowie die Teilnahme in Ausbildung untersucht. Die Ergebnisse zeigen, dass alle Programme, bis auf ABM, positive und langfristige Effekte auf die Beschäftigungswahrscheinlichkeit von Jugendlichen haben. 
Kurzfristig finden wir jedoch nur für kurze Trainingsmaßnahmen positive Effekte, da lange Trainingsmaßnahmen und Lohnzuschüsse mit signifikanten locking-in‘‘ Effekten verbunden sind. Maßnahmen zur Förderung der Berufsausbildung erhöhen die Wahrscheinlichkeit der Teilnahme an einer Ausbildung, während alle anderen Programme keinen oder einen negativen Effekt auf die Ausbildungsteilnahme haben. Jugendliche mit höherem Ausbildungsniveau profitieren stärker von der Programmteilnahme. Jedoch zeigen sich für längerfristige Lohnsubventionen ebenfalls starke positive Effekte für Jugendliche mit geringer Vorbildung. Der relative Nutzen von Trainingsmaßnahmen ist höher in West- als in Ostdeutschland. In den Evaluationsstudien der Kapitel 2 und 3 werden die semi-parametrischen Gewichtungsverfahren Propensity Score Matching (PSM) und Inverse Probability Weighting (IPW) verwendet, um den Einfluss verzerrender Faktoren, die sowohl die Maßnahmenteilnahme als auch die Zielvariablen beeinflussen zu beseitigen, und kausale Effekte der Programmteilahme zu ermitteln. Während PSM and IPW intuitiv und methodisch sehr attraktiv sind, stellt die Implementierung der Methoden in der Praxis jedoch oft eine große Herausforderung dar. Das Ziel von Kapitel 4 ist es daher, praktische Hinweise zur Implementierung dieser Methoden zu geben. Zu diesem Zweck werden neue Erkenntnisse der empirischen und statistischen Literatur zusammengefasst und praxisbezogene Richtlinien für die angewandte Forschung abgeleitet. Basierend auf einer theoretischen Motivation und einer Skizzierung der praktischen Implementierungsschritte von PSM und IPW werden diese Schritte chronologisch dargestellt, wobei auch auf praxisrelevante Erkenntnisse aus der methodischen Forschung eingegangen wird. Im Anschluss werden die Themen Effektschätzung, Inferenz, Sensitivitätsanalyse und die Kombination von IPW und PSM mit anderen statistischen Methoden diskutiert. Abschließend werden neue Erweiterungen der Methodik aufgeführt.
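For readers unfamiliar with inverse probability weighting, the short Python sketch below shows the textbook version of the estimator referred to above: propensity scores from a logistic model, weights of 1/e(x) for the treated and 1/(1-e(x)) for the controls, and a weighted difference of outcome means. The simulated data, clipping threshold and scikit-learn model are illustrative choices and are not taken from the dissertation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treated, outcome, clip=0.01):
    """Inverse probability weighting estimate of the average treatment effect.
    Propensity scores come from a logistic model; extreme scores are clipped."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    ps = np.clip(ps, clip, 1 - clip)
    w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    mean_treated = np.average(outcome[treated == 1], weights=w[treated == 1])
    mean_control = np.average(outcome[treated == 0], weights=w[treated == 0])
    return mean_treated - mean_control

# Hypothetical confounded data: x raises both treatment take-up and the outcome.
rng = np.random.default_rng(7)
x = rng.normal(size=(5000, 1))
treated = (rng.random(5000) < 1 / (1 + np.exp(-x[:, 0]))).astype(int)
outcome = 2.0 * treated + 1.5 * x[:, 0] + rng.normal(size=5000)   # true effect = 2
print("IPW ATE estimate:", round(ipw_ate(x, treated, outcome), 2))
```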
20

Wensveen, Paul J. "Detecting, assessing, and mitigating the effects of naval sonar on cetaceans". Thesis, University of St Andrews, 2016. http://hdl.handle.net/10023/8684.

Abstract:
Effective management of the potential environmental impacts of naval sonar requires quantitative data on the behaviour and hearing physiology of cetaceans. Here, novel experimental and analytical methods were used to obtain such information and to test the effectiveness of an operational mitigation method for naval sonar. A Bayesian method was developed to estimate whale locations through time, integrating visual observations with measurements from on-animal inertial, acoustic, depth, and Fastloc-GPS sensors. The track reconstruction method was applied to 13 humpback whale (Megaptera novaeangliae) data sets collected during a multi-disciplinary behavioural response study in Norwegian waters. Thirty-one controlled exposure experiments with and without active transmissions of 1.3-2 kHz sounds were conducted using a moving vessel that towed a sonar source. Dose-response functions, representing the relationships between measured sonar dose and behavioural responses identified from the reconstructed tracks, predicted that 50% of the humpbacks would initiate avoidance at a relatively high received sound pressure level of 166 dB re 1 µPa. Very similar dose-response functions were obtained for cessation of feeding. In a laboratory study, behavioural reaction times of a harbour porpoise (Phocoena phocoena) to sonar-like sounds were measured using operant conditioning and a psychoacoustic method. Auditory weighting functions, which can be used to improve dose-response functions, were obtained for the porpoise based on the assumption that sounds of equal loudness elicit equal reaction time. Additional analyses of the humpback whale data set provided evidence that ramp-up of naval sonar mitigates harmful sound levels in responsive cetaceans located directly in the path of the source, and suggested that a subset of the humpback whale population, such as mother-calf pairs, and more responsive species would benefit from the use of sonar ramp-up. The findings in this thesis are intended to inform sound exposure criteria and mitigation guidelines for anthropogenic noise exposure to cetaceans.
21

Diop, Serigne Arona, and Serigne Arona Diop. "Comparing inverse probability of treatment weighting methods and optimal nonbipartite matching for estimating the causal effect of a multicategorical treatment". Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/34507.

Abstract:
Covariate imbalances between treatment groups are often present in observational studies and can bias comparisons between treatments. This bias can notably be corrected using weighting or matching methods. These correction methods have rarely been compared in a context with more than two (>2) treatment categories. We conducted a simulation study to compare an optimal nonbipartite matching method, inverse probability of treatment weighting, and a modified matching-like weighting (matching weights). These comparisons were carried out in a Monte Carlo simulation framework using an exposure variable with 3 groups. A simulation study using real data (plasmode) was also conducted, in which the treatment variable had 5 categories. Among all the methods compared, matching weights appeared to be the most robust according to the mean squared error criterion. The results also show that inverse probability of treatment weighting can sometimes be improved by truncation. Moreover, the performance of weighting depends on the degree of overlap between the different treatment groups. The performance of optimal nonbipartite matching, for its part, depends strongly on the maximum distance allowed for forming a pair (the caliper). However, choosing the optimal caliper is not easy and remains an open question. Furthermore, the results obtained with the plasmode simulation were positive, in that a substantial reduction of the bias was observed. All the methods were able to significantly reduce confounding bias. Before using inverse probability of treatment weighting, it is recommended to check for violations of the positivity assumption and for the existence of overlap regions between the different treatment groups.
22

Damesa, Tigist Mideksa [Verfasser], i Hans-Peter [Akademischer Betreuer] Piepho. "Weighting methods for variance heterogeneity in phenotypic and genomic data analysis for crop breeding / Tigist Mideksa Damesa ; Betreuer: Hans-Peter Piepho". Hohenheim : Kommunikations-, Informations- und Medienzentrum der Universität Hohenheim, 2019. http://d-nb.info/1199440035/34.

23

Enoch, John. "Application of Decision Analytic Methods to Cloud Adoption Decisions". Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-25560.

Abstract:
This thesis gives an example of how decision analytic methods can be applied to choices in the adoption of cloud computing. The lifecycle of IT systems from planning to retirement is rapidly changing. Making a technology decision that can be justified and explained in terms of outcomes and benefits can be increasingly challenging without a systematic approach underlying the decision making process. It is proposed that better, more informed cloud adoption decisions would be taken if organisations used a structured approach to frame the problem to be solved and then applied trade-offs using an additive utility model. The trade-offs that can be made in the context of cloud adoption decisions are typically complex and rarely intuitively obvious. A structured approach is beneficial in that it enables decision makers to define and seek outcomes that deliver optimum benefits, aligned with their risk profile. The case study demonstrated that proven decision tools are helpful to decision makers faced with a complex cloud adoption decision but are likely to be more suited to the more intractable decision situations.
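As an illustration of the additive utility model mentioned above, the small Python example below scores and ranks cloud adoption options as a weighted sum of normalized criterion scores. The criteria, weights, options and scores are entirely hypothetical and are not taken from the thesis's case study.

```python
# Additive utility model: overall utility = sum of weight_i * normalized score_i.
# Criteria, weights and scores below are hypothetical, for illustration only.
weights = {"cost": 0.35, "security": 0.25, "scalability": 0.20, "migration_effort": 0.20}

# Scores on a common 0-1 scale (1 = best) for each cloud adoption option.
options = {
    "stay_on_premises": {"cost": 0.4, "security": 0.9, "scalability": 0.3, "migration_effort": 1.0},
    "public_iaas":      {"cost": 0.7, "security": 0.6, "scalability": 0.9, "migration_effort": 0.5},
    "saas_replacement": {"cost": 0.8, "security": 0.5, "scalability": 0.8, "migration_effort": 0.3},
}

def utility(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(options, key=lambda o: utility(options[o], weights), reverse=True)
for name in ranked:
    print(f"{name}: {utility(options[name], weights):.3f}")
```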
Style APA, Harvard, Vancouver, ISO itp.
24

May, Michael. "Data analytics and methods for improved feature selection and matching". Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.

Pełny tekst źródła
Streszczenie:
This work focuses on analysing and improving feature detection and matching. After creating an initial framework of study, four main areas of work are researched. These areas make up the main chapters within this thesis and focus on using the Scale Invariant Feature Transform (SIFT). The preliminary analysis of the SIFT investigates how this algorithm functions. Included is an analysis of the SIFT feature descriptor space and an investigation into the noise properties of the SIFT. It introduces a novel use of the a contrario methodology and shows the success of this method as a way of discriminating between images which are likely to contain corresponding regions from images which do not. Parameter analysis of the SIFT uses both parameter sweeps and genetic algorithms as an intelligent means of setting the SIFT parameters for different image types utilising a GPGPU implementation of SIFT. The results have demonstrated which parameters are more important when optimising the algorithm and the areas within the parameter space to focus on when tuning the values. A multi-exposure, High Dynamic Range (HDR) feature-fusion process has been developed in which SIFT image features are matched within high contrast scenes. Bracketed exposure images are analysed and features are extracted and combined from different images to create a set of features which describe a larger dynamic range. They are shown to reduce the effects of noise and artefacts that are introduced when extracting features from HDR images directly and have a superior image matching performance. The final area is the development of a novel, 3D-based, SIFT weighting technique which utilises the 3D data from a pair of stereo images to cluster and classify matched SIFT features. Weightings are applied to the matches based on the 3D properties of the features and how they cluster in order to attempt to discriminate between correct and incorrect matches using the a contrario methodology. The results show that the technique provides a method for discriminating between correct and incorrect matches and that the a contrario methodology has potential for future investigation as a method for correct feature match prediction.
Style APA, Harvard, Vancouver, ISO itp.
25

Johansson, Sven. "Active Control of Propeller-Induced Noise in Aircraft : Algorithms & Methods". Doctoral thesis, Karlskrona, Ronneby : Blekinge Institute of Technology, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00171.

Pełny tekst źródła
Streszczenie:
In the last decade acoustic noise has become more and more regarded as a problem. In cars, boats, trains and aircraft, low-frequency noise reduces comfort. Lightweight materials and more powerful engines are used in high-speed vehicles, resulting in a general increase in interior noise levels. Low-frequency noise is annoying and during periods of long exposure it causes fatigue and discomfort. The masking effect which low-frequency noise has on speech reduces speech intelligibility. Low-frequency noise is sought to be attenuated in a wide range of applications in order to improve comfort and speech intelligibility. The use of conventional passive methods to attenuate low-frequency noise is often impractical since considerable bulk and weight are required; in transportation large weight is associated with high fuel consumption. In order to overcome the problems of ineffective passive suppression of low-frequency noise, the technique of active noise control has become of considerable interest. The fundamental principle of active noise control is based on secondary sources producing "anti-noise". Destructive interference between the generated and the primary sound fields results in noise attenuation. Active noise control systems significantly increase the capacity for attenuating low-frequency noise without major increase in volume and weight. This doctoral dissertation deals with the topic of active noise control within the passenger cabin in aircraft, and within headsets. The work focuses on methods, controller structures and adaptive algorithms for attenuating tonal low-frequency noise produced by synchronized or moderately synchronized propellers generating beating sound fields. The control algorithm is a central part of an active noise control system. A multiple-reference feedforward controller based on the novel actuator-individual normalized Filtered-X Least-Mean-Squares algorithm is introduced, yielding significant attenuation of such periodic noise. This algorithm is of the LMS-type, and owing to the novel normalization it can also be regarded as a Newton-type algorithm. The new algorithm combines low computational complexity with high performance. For that reason the algorithm is suitable for use in systems with a large number of control sources and control sensors in order to reduce the computational power required by the control system. The computational power of the DSP hardware is limited, and therefore algorithms with high computational complexity allow fewer control sources and sensors to be used, often with reduced noise attenuation as a result. In applications, such as controlling aircraft cabin noise, where a large multiple-channel system is needed to control the relatively complex interior sound field, it is of great importance to keep down the computational complexity of the algorithm so that a large number of loudspeakers and microphones can be used. The dissertation presents theoretical work, off-line computer experiments and practical real-time experiments using the actuator-individual normalized algorithm. The computer experiments are principally based on real-life cabin noise data recorded during flight in a twin-engine propeller aircraft and in a helicopter. The practical experiments were carried out in a full-scale fuselage section from a propeller aircraft.
Noise in our everyday environment can have a negative effect on our health. Low-frequency noise occurs in many settings, for example in cars, boats and aircraft. It is usually not harmful to hearing, but it is tiring and makes conversation difficult for people in the exposed environment; reducing the noise level therefore improves both speech intelligibility and comfort. Attenuating low-frequency noise with traditional passive methods, such as absorbers and reflectors, is usually ineffective: large, bulky absorbers are required, as well as heavy partition walls to keep the noise from being transmitted from one space to another. Active methods are better suited to low-frequency noise. They rely on superposition: a wave in antiphase with another cancels it, so noise reduction is obtained by generating a sound field that is as strong as the noise but opposite in phase. Active methods give effective attenuation of low-frequency noise while leaving the volume of the car or aircraft cabin essentially unchanged, and the vehicle weight can be reduced, which benefits fuel consumption. In most applications the character of the noise, its level and frequency content, varies, so an adaptive (self-tuning) control system is needed to steer the generation of the anti-noise. In propeller aircraft the dominant frequencies of the cabin noise are related to the propeller shaft speed, so the frequencies to be attenuated are known; a tachometer signal is used to generate reference signals at these frequencies, the controller processes them into loudspeaker signals that produce the anti-noise, and microphones placed in the cabin measure the residual noise so that the loudspeaker signals can be adjusted for effective attenuation. Effective noise reduction in a room such as an aircraft cabin requires several loudspeakers and microphones and hence an advanced control system. The dissertation treats different methods for reducing propeller-induced cabin noise and presents controller structures and tuning algorithms that give effective attenuation at low computational cost. Parts of the material were produced within an EU project on noise suppression in propeller aircraft in which several European aircraft manufacturers participated. The dissertation also covers active noise reduction in headsets used by helicopter pilots.
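The actuator-individual normalized Filtered-X LMS controller described in this abstract is multi-channel; purely as an editorial illustration of the underlying idea, the single-channel normalized FxLMS loop below adapts controller taps from a tachometer-derived reference signal. The signal names, the FIR secondary-path estimate s_hat and the step size are assumptions, not details from the dissertation.

```python
import numpy as np

def normalized_fxlms(x, d, s_hat, n_taps=32, mu=0.05, eps=1e-8):
    """Toy single-channel normalized Filtered-X LMS loop.

    x     : reference signal (e.g. synthesized from the propeller tachometer)
    d     : disturbance measured at the error microphone with the controller off
    s_hat : FIR estimate of the secondary path (loudspeaker -> microphone)
    """
    x_f = np.convolve(x, s_hat)[:len(x)]     # reference filtered through the secondary-path model
    w = np.zeros(n_taps)                     # adaptive controller taps
    x_buf = np.zeros(n_taps)                 # recent reference samples
    xf_buf = np.zeros(n_taps)                # recent filtered-reference samples
    y_buf = np.zeros(len(s_hat))             # recent anti-noise samples
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1);  x_buf[0] = x[n]
        xf_buf = np.roll(xf_buf, 1); xf_buf[0] = x_f[n]
        y = w @ x_buf                        # anti-noise sample sent to the loudspeaker
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e[n] = d[n] - s_hat @ y_buf          # residual at the error microphone
        w += mu * e[n] * xf_buf / (xf_buf @ xf_buf + eps)   # normalized FxLMS update
    return w, e
```

The normalization by the filtered-reference power is what keeps the step size well scaled across frequencies; the dissertation's contribution is an actuator-individual, multi-channel generalization of this idea.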
Style APA, Harvard, Vancouver, ISO itp.
26

Pagliarani, Andrea. "New markov chain based methods for single and cross-domain sentiment classification". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8445/.

Pełny tekst źródła
Streszczenie:
Nowadays communication is switching from a centralized scenario, where communication media like newspapers, radio and TV programs produce information and people are just consumers, to a completely different decentralized scenario, where everyone is potentially an information producer through the use of social networks, blogs and forums that allow a real-time worldwide information exchange. These new instruments, as a result of their widespread diffusion, have started playing an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information enterprises, political parties and other organizations can rely on. Analyzing data stored in servers all over the world is feasible by means of Text Mining techniques like Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This can help determine, for instance, the degree of user satisfaction with products, services, politicians and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov Chains. All these approaches rely on a Markov Chain based model, which is language independent and whose key features are simplicity and generality, making it interesting with respect to previous, more sophisticated techniques. Every discussed technique has been tested in both Single-Domain and Cross-Domain Sentiment Classification settings, comparing performance with that of two previous works. The performed analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, with reference to both single-domain and cross-domain tasks, in 2-class (i.e. positive and negative) Document Sentiment Classification. However, there is still room for improvement, because this work also indicates how performance could be enhanced: a good novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will involve validating these results in tasks with more than 2 classes.
Style APA, Harvard, Vancouver, ISO itp.
27

Al-Nashashibi, May Y. A. "Arabic Language Processing for Text Classification. Contributions to Arabic Root Extraction Techniques, Building An Arabic Corpus, and to Arabic Text Classification Techniques". Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/6326.

Pełny tekst źródła
Streszczenie:
The impact and dynamics of Internet-based resources for Arabic-speaking users are increasing in significance, depth and breadth at a higher pace than ever, and thus require updated mechanisms for computational processing of Arabic texts. Arabic is a complex language and as such requires in-depth investigation for analysis and improvement of available automatic processing techniques, such as root extraction methods or text classification techniques, and for developing text collections that are already labeled, whether with single or multiple labels. This thesis proposes new ideas and methods to improve available automatic processing techniques for Arabic texts. Any automatic processing technique requires data in order to be used, critically reviewed and assessed, and here an attempt to develop a labeled Arabic corpus is also proposed. This thesis is composed of three parts: 1- Arabic corpus development, 2- proposing, improving and implementing root extraction techniques, and 3- proposing and investigating the effect of different pre-processing methods on single-labeled text classification methods for Arabic. This thesis first develops an Arabic corpus that is prepared to be used here for testing root extraction methods as well as single-label text classification techniques. It also enhances a rule-based root extraction method by handling irregular cases (which appear in about 34% of texts). It proposes and implements two expanded algorithms as well as an adjustment for a weight-based method, incorporates the irregular-case handling algorithm into all of them, and compares the performance of these proposed methods with that of the original ones. This thesis thus develops a root extraction system that handles foreign Arabized words by constructing a list of about 7,000 foreign words. The technique with the best accuracy in extracting the correct stem and root for the respective words in texts, an enhanced rule-based method, is used in the third part of this thesis. This thesis finally proposes and implements a variant term frequency inverse document frequency weighting method, and investigates the effect of using different choices of features in document representation on single-label text classification performance (words, stems or roots, as well as these choices extended with their respective phrases). This thesis applies forty-seven classifiers to all proposed representations and compares their performances. One challenge for researchers in Arabic text processing is that reported root extraction techniques in the literature are either not accessible or require a long time to be reproduced, while a labeled benchmark Arabic text corpus is not fully available online. Also, to date few machine learning techniques have been investigated for Arabic, and usually with standard preprocessing steps before classification. Such challenges are addressed in this thesis by developing a new labeled Arabic text corpus for extended applications of computational techniques. The results of the investigated issues show that proposing and implementing an algorithm that handles irregular words in Arabic did improve the performance of all implemented root extraction techniques. The performance of the algorithm that handles such irregular cases is evaluated in terms of accuracy improvement and execution time. Its efficiency is investigated for different document lengths and is empirically found to be linear in time for document lengths less than about 8,000. 
The rule-based technique improves the most among the implemented root extraction methods when the irregular-case handling algorithm is included. This thesis validates that choosing roots or stems instead of words in document representations indeed improves single-label classification performance significantly for most of the classifiers used. However, extending such representations with their respective phrases shows no significant improvement in single-label text classification performance. Many classifiers, such as the ripple-down rule classifier, had not yet been tested on Arabic. Comparing the classifiers' performances shows that the Bayesian network classifier is significantly the best in terms of accuracy, training time and root mean square error values for all proposed and implemented representations.
Petra University, Amman (Jordan)
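The thesis above proposes a variant TF-IDF weighting; as a generic baseline illustration only (not the proposed variant), plain TF-IDF weighting of tokenized documents looks as follows, where in the Arabic setting the tokens would be words, stems or roots.

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Plain TF-IDF term weighting over tokenized documents (a generic baseline,
    not the specific variant proposed in the thesis). docs: list of token lists."""
    n_docs = len(docs)
    df = Counter(term for doc in docs for term in set(doc))   # document frequency per term
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        weighted.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])   # tf * idf
            for term, count in tf.items()
        })
    return weighted

# Example with toy token lists; stems or roots would replace raw words in practice.
print(tfidf_weights([["market", "weight", "weight"], ["market", "price"]]))
```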
Style APA, Harvard, Vancouver, ISO itp.
28

Kiichenko, Vladyslav Yuriiovych. "Use of surface interpolation methods for determination of dioxide hydroxide in the air of the city of Kyiv". Thesis, National Aviation University, 2021. https://er.nau.edu.ua/handle/NAU/50612.

Pełny tekst źródła
Streszczenie:
In geographic information systems, interpolation of surfaces by various methods is often used. Topics in this area are relevant today and promising for further study and practical research in the field of geoinformation using GIS technologies. The purpose of interpolation in GIS is to fill in the gaps between known measurement points and thus simulate a continuous distribution of a property (attribute). Interpolation is based on the assumption that spatially distributed objects are correlated in space, that is, adjacent objects have similar characteristics. Spatial interpolation of point data is based on the choice of analytical surface model.
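As an editorial illustration of one of the spatial interpolation methods named above, a minimal inverse distance weighting (IDW) routine is sketched below; it is generic and not the GIS toolchain used in the work.

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered measurements.

    points : (n, 2) known measurement coordinates
    values : (n,) measured values (e.g. pollutant concentrations)
    query  : (2,) coordinate at which to estimate the value
    """
    d = np.linalg.norm(np.asarray(points, dtype=float) - np.asarray(query, dtype=float), axis=1)
    if np.any(d < eps):                 # query coincides with a sample point
        return float(np.asarray(values)[np.argmin(d)])
    w = 1.0 / d**power                  # nearer stations get larger weights
    return float(np.sum(w * np.asarray(values)) / np.sum(w))

# Example: estimate a value halfway between three monitoring stations.
print(idw([(0, 0), (1, 0), (0, 1)], [1.0, 2.0, 4.0], (0.5, 0.5)))
```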
Style APA, Harvard, Vancouver, ISO itp.
29

Pehrson, Ida. "Integrating planetary boundaries into the life cycle assessment of electric vehicles : A case study on prioritising impact categories through environmental benchmarking in normalisation and weighting methods when assessing electric heavy-duty vehicles". Thesis, KTH, Hållbar utveckling, miljövetenskap och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281862.

Pełny tekst źródła
Streszczenie:
The transport sector is facing great challenges in achieving development within the Earth's boundaries. Currently, LCA studies on heavy- and medium-duty vehicles have mainly assessed the 'well-to-wheel' stage and the impact category climate change. To understand the full range of environmental impacts from a truck, a holistic view needs to be adopted that acknowledges several sustainability dimensions. With the development of new vehicle technologies, such as battery electric vehicles (BEV), the impact will mainly occur in the production and end-of-life stages, so it is crucial to adopt a cradle-to-grave approach in LCA. This thesis interprets Scania's current LCA results through normalization and weighting. The normalization and weighting methods used are based on the planetary boundaries (PBs) and other scientific thresholds of the Earth's carrying capacity. The normalised results show that for a heavy-duty truck running on diesel (B5), climate change is the major impact, but for a BEV with the EU electricity mix it is freshwater ecotoxicity, stratospheric ozone formation and climate change that are the main impacts to consider. For the BEV with wind electricity, freshwater ecotoxicity and climate change are the major impacts. According to the weighted results, the impacts on 'climate change' and 'fossil resource scarcity' are most important for diesel (B5); for the BEV with the EU mix the most important categories are 'climate change' and 'fossil resource depletion' followed by 'mineral resource scarcity', and for the BEV with wind electricity they are 'mineral resource scarcity' followed by 'climate change' and 'fossil resource scarcity'. The weighted results also show that the impact categories 'human toxicity cancer', 'freshwater ecotoxicity', 'particulate matter' and 'water resource scarcity' are important to consider in an LCA of a BEV. In conclusion, there is a need for future research on connecting the PBs with the LCA framework. Moreover, there is a need to develop normalisation references (NR) and weighting factors (WF) based on company- and sector-level allowances of the carrying capacity, in order to understand a product's or company's environmental impact in absolute terms.
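As an editorial illustration of the normalisation and weighting step described above (all numbers below are invented placeholders, not Scania's data or the thesis's normalisation references and weighting factors), each impact is divided by a normalisation reference and multiplied by a weighting factor before aggregation.

```python
# Illustrative normalisation and weighting step with hypothetical values.
impacts = {"climate change": 3.2e4, "freshwater ecotoxicity": 1.1e3, "mineral resource scarcity": 5.0e2}
normalisation_refs = {"climate change": 1.0e5, "freshwater ecotoxicity": 2.0e3, "mineral resource scarcity": 9.0e2}  # e.g. allocated share of a carrying-capacity threshold
weighting_factors = {"climate change": 0.5, "freshwater ecotoxicity": 0.3, "mineral resource scarcity": 0.2}

normalised = {c: impacts[c] / normalisation_refs[c] for c in impacts}      # dimensionless scores
weighted = {c: weighting_factors[c] * normalised[c] for c in impacts}      # single-point contributions
single_score = sum(weighted.values())
print(normalised, weighted, single_score)
```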
Style APA, Harvard, Vancouver, ISO itp.
30

Al-Nashashibi, May Yacoub Adib. "Arabic language processing for text classification : contributions to Arabic root extraction techniques, building an Arabic corpus, and to Arabic text classification techniques". Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/6326.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Sjöwall, Fredrik. "Alternative Methods for Value-at-Risk Estimation : A Study from a Regulatory Perspective Focused on the Swedish Market". Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146217.

Pełny tekst źródła
Streszczenie:
The importance of sound financial risk management has become increasingly emphasised in recent years, especially with the financial crisis of 2007-08. The Basel Committee sets the international standards and regulations for banks and financial institutions, and in particular under market risk, they prescribe the internal application of the measure Value-at-Risk. However, the most established non-parametric Value-at-Risk model, historical simulation, has been criticised for some of its unrealistic assumptions. This thesis investigates alternative approaches for estimating non-parametric Value-at-Risk, by examining and comparing the capability of three counterbalancing weighting methodologies for historical simulation: an exponentially decreasing time weighting approach, a volatility updating method and, lastly, a more general weighting approach that enables the specification of central moments of a return distribution. With real financial data, the models are evaluated from a performance based perspective, in terms of accuracy and capital efficiency, but also in terms of their regulatory suitability, with a particular focus on the Swedish market. The empirical study shows that the capability of historical simulation is improved significantly, from both performance perspectives, by the implementation of a weighting methodology. Furthermore, the results predominantly indicate that the volatility updating model with a 500-day historical observation window is the most adequate weighting methodology, in all incorporated aspects. The findings of this paper offer significant input both to existing research on Value-at-Risk as well as to the quality of the internal market risk management of banks and financial institutions.
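As an illustration of the exponentially decreasing time-weighting approach examined above (a sketch in the spirit of Boudoukh, Richardson and Whitelaw; the decay factor and confidence level are placeholders, not the thesis's calibration), each historical return receives a weight that decays with its age before the empirical quantile is read off.

```python
import numpy as np

def time_weighted_var(returns, alpha=0.99, lam=0.98):
    """Historical-simulation VaR with exponentially decaying time weights.

    returns : 1-D array of historical portfolio returns, oldest first
    alpha   : confidence level, e.g. 0.99
    lam     : decay factor; the most recent observation gets the largest weight
    """
    n = len(returns)
    w = lam ** np.arange(n - 1, -1, -1)          # weights lam^(n-1), ..., lam^1, lam^0
    w /= w.sum()
    order = np.argsort(returns)                  # sort returns from worst loss to best gain
    cum = np.cumsum(w[order])
    var_idx = order[np.searchsorted(cum, 1.0 - alpha)]
    return -returns[var_idx]                     # VaR reported as a positive loss

# Example with simulated daily returns over a 500-day window.
print(time_weighted_var(np.random.normal(0.0, 0.01, 500)))
```

Setting lam = 1 recovers plain (equally weighted) historical simulation, which is the baseline the thesis compares against.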
Style APA, Harvard, Vancouver, ISO itp.
32

Boutoux, Guillaume. "Sections efficaces neutroniques via la méthode de substitution". Phd thesis, Bordeaux 1, 2011. http://tel.archives-ouvertes.fr/tel-00654677.

Pełny tekst źródła
Streszczenie:
Neutron cross sections of short-lived nuclei are crucial data for fundamental and applied physics, in fields such as reactor physics or nuclear astrophysics. In general, the extreme radioactivity of these nuclei does not allow direct neutron-induced measurements. However, there is a surrogate method which makes it possible to determine these neutron cross sections through transfer reactions or inelastic scattering reactions. Its main interest is that less radioactive targets can be used, giving access to neutron cross sections that could not be measured directly. The method is based on the hypothesis of compound nucleus formation and on the fact that the decay depends essentially only on the excitation energy and on the spin and parity of the populated compound state. However, the angular momentum and parity distributions populated in transfer reactions and in neutron-induced reactions may differ. This work reviews the state of the art of the surrogate method and its validity. In general, the surrogate method works very well for extracting fission cross sections. In contrast, the surrogate method applied to radiative capture compares poorly with neutron-induced reactions. We performed an experiment to determine the gamma-decay probabilities of 176Lu and 173Yb from the surrogate reactions 174Yb(3He,p)176Lu* and 174Yb(3He,alpha)173Yb*, respectively, and compared them with the radiative capture probabilities of the corresponding, well-known reactions 175Lu(n,gamma) and 172Yb(n,gamma). This experiment made it possible to understand why, in the case of gamma decay, the surrogate method shows large deviations from the corresponding neutron-induced reaction. This work in the rare-earth region allowed us to assess to what extent the surrogate method can be applied to extract capture probabilities in the actinide region. Previous fission experiments could also be reinterpreted. This work therefore sheds new light on the surrogate method.
Style APA, Harvard, Vancouver, ISO itp.
33

Hakala, Tim. "Settling-Time Improvements in Positioning Machines Subject to Nonlinear Friction Using Adaptive Impulse Control". BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1061.

Pełny tekst źródła
Streszczenie:
A new method of adaptive impulse control is developed to precisely and quickly control the position of machine components subject to friction. Friction dominates the forces affecting fine positioning dynamics. Friction can depend on payload, velocity, step size, path, initial position, temperature, and other variables. Control problems such as steady-state error and limit cycles often arise when applying conventional control techniques to the position control problem. Studies in the last few decades have shown that impulsive control can produce repeatable displacements as small as ten nanometers without limit cycles or steady-state error in machines subject to dry sliding friction. These displacements are achieved through the application of short duration, high intensity pulses. The relationship between pulse duration and displacement is seldom a simple function. The most dependable practical methods for control are self-tuning; they learn from online experience by adapting an internal control parameter until precise position control is achieved. To date, the best known adaptive pulse control methods adapt a single control parameter. While effective, the single parameter methods suffer from sub-optimal settling times and poor parameter convergence. To improve performance while maintaining the capacity for ultimate precision, a new control method referred to as Adaptive Impulse Control (AIC) has been developed. To better fit the nonlinear relationship between pulses and displacements, AIC adaptively tunes a set of parameters. Each parameter affects a different range of displacements. Online updates depend on the residual control error following each pulse, an estimate of pulse sensitivity, and a learning gain. After an update is calculated, it is distributed among the parameters that were used to calculate the most recent pulse. As the stored relationship converges to the actual relationship of the machine, pulses become more accurate and fewer pulses are needed to reach each desired destination. When fewer pulses are needed, settling time improves and efficiency increases. AIC is experimentally compared to conventional PID control and other adaptive pulse control methods on a rotary system with a position measurement resolution of 16000 encoder counts per revolution of the load wheel. The friction in the test system is nonlinear and irregular with a position dependent break-away torque that varies by a factor of more than 1.8 to 1. AIC is shown to improve settling times by as much as a factor of two when compared to other adaptive pulse control methods while maintaining precise control tolerances.
Style APA, Harvard, Vancouver, ISO itp.
34

Chen, Shih-Keng, i 陳世耿. "Weighting Adjustment Method". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/7ack8j.

Pełny tekst źródła
Streszczenie:
Master's thesis
Chung Yuan Christian University
Institute of Applied Mathematics
92
The classical inference method for opinion surveys is generally inappropriate when a certain proportion of the sampled subjects refuse or are unable to provide responses. Based on the assumption that the missingness mechanism is random, differential weights derived from poststratification, which is closely related to the Horvitz-Thompson estimator, can be used to modify the estimation and adjust for bias. It is shown that under the missing-at-random assumption the estimator is unbiased. The method is primarily used to handle unit nonresponse, where a subset of sampled individuals do not complete the survey because of noncontact, refusal, or some other reason. We consider applications to three common sampling schemes, namely simple random sampling, stratified random sampling and cluster sampling. Monte Carlo simulations show that the weighting adjustments significantly improve the accuracy of the estimators in many cases.
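A minimal sketch of the poststratification-type weighting adjustment described above, assuming known population shares for the weighting classes (the column names and data are invented): respondents in class h receive a weight proportional to the population share of h over the respondent share of h.

```python
import pandas as pd

def poststratified_mean(sample, strata_col, y_col, population_shares):
    """Weighting-class (poststratification) adjustment for unit nonresponse.

    sample            : DataFrame with a stratum column and an outcome column (NaN = nonresponse)
    population_shares : dict mapping each stratum to its known population share
    """
    resp = sample.dropna(subset=[y_col])                         # respondents only
    resp_share = resp[strata_col].value_counts(normalize=True)   # respondent share per stratum
    weights = resp[strata_col].map(lambda h: population_shares[h] / resp_share[h])
    return (weights * resp[y_col]).sum() / weights.sum()

# Example: binary opinion variable with one missing response in stratum B.
df = pd.DataFrame({"stratum": ["A", "A", "B", "B", "B"], "y": [1, 0, 1, None, 1]})
print(poststratified_mean(df, "stratum", "y", {"A": 0.5, "B": 0.5}))
```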
Style APA, Harvard, Vancouver, ISO itp.
35

Chang, Yi-Te, i 張奕得. "A Dynamic Weighting Method and Analysis". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/52745826902692887589.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chiao Tung University
Institute of Statistics
104
The Markov Chain Monte Carlo method is a widely used method for numerical integration. In this thesis, we discuss the dynamic weighting MCMC proposed by Wong and Liang (1997), which makes the Markov chain converge faster. For decades the Metropolis-Hastings algorithm has been an important simulation method, but it still has some drawbacks. For example, the movement of the process can be hindered by states with tiny probability, and this phenomenon may directly affect our simulated estimates. Our main work is to review the weighted MCMC and give theoretical proofs in some special cases. In this manner, the MCMC method can be made more efficient.
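As a hedged sketch of the kind of transition used in dynamic weighting (this follows one common description of the R-type move of Wong and Liang, 1997, with a symmetric proposal and theta fixed at 1; it is not taken from the thesis), the chain carries an importance weight that is updated at every step so that weighted averages remain valid even when moves to low-probability states are forced through.

```python
import numpy as np

def dynamic_weighting_step(x, w, log_pi, propose, theta=1.0, rng=np.random):
    """One R-type dynamic-weighting move (illustrative sketch).

    x, w    : current state and its importance weight
    log_pi  : unnormalised log target density
    propose : function drawing a symmetric proposal from x
    """
    y = propose(x)
    r = np.exp(log_pi(y) - log_pi(x))        # Metropolis ratio for a symmetric proposal
    a = w * r / (w * r + theta)              # acceptance probability
    if rng.random() < a:
        return y, w * r + theta              # accepted: new weight is w*r/a
    return x, w * (w * r + theta) / theta    # rejected: new weight is w/(1 - a)

# Estimates of E[h(X)] are then computed as sum(w_i * h(x_i)) / sum(w_i) over the weighted chain.
```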
Style APA, Harvard, Vancouver, ISO itp.
36

CHOUHAN, CHETAN. "MULTIOBJECTIVE ECONOMIC LOAD DISPATCH USING WEIGHTING METHOD". Thesis, 2012. http://dspace.dtu.ac.in:8080/jspui/handle/repository/13939.

Pełny tekst źródła
Streszczenie:
M.TECH
In general, a large-scale power system has multiple objectives to be achieved. Ideal power system operation is achieved when various objectives such as cost of generation, system transmission loss, environmental pollution and security are simultaneously attained at their minimum values. Since these objectives are conflicting in nature, it is impossible to achieve this ideal operation. In this thesis, three objectives of the Multiobjective Economic Load Dispatch (MOELD) problem are considered: cost of generation, system transmission loss and environmental pollution. The MOELD problem is formulated as a multiobjective optimization problem using the weighting method, and a number of noninferior solutions are generated in 3D space. The optimal power system operation is then obtained by the Ideal Distance Minimization method. This method employs the concept of an 'Ideal Point' (IP) to scalarize problems with multiple objectives and minimizes the Euclidean distance between the IP and the set of noninferior solutions. The method has been applied to the IEEE 30-bus system.
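As an editorial sketch of the weighting-method scalarization described above, and not the thesis's actual formulation, the toy two-unit, two-objective dispatch below generates noninferior solutions by sweeping the objective weights; the cost and emission coefficients and the demand value are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quadratic cost and emission curves for two generating units.
def cost(p):     return 0.010 * p[0]**2 + 2.0 * p[0] + 0.012 * p[1]**2 + 1.8 * p[1]
def emission(p): return 0.004 * p[0]**2 + 0.006 * p[1]**2

def noninferior_point(w_cost, w_emis, demand=150.0):
    obj = lambda p: w_cost * cost(p) + w_emis * emission(p)          # weighted-sum scalarization
    cons = {"type": "eq", "fun": lambda p: p.sum() - demand}         # power balance constraint
    res = minimize(obj, x0=np.array([75.0, 75.0]), bounds=[(10, 120), (10, 120)], constraints=cons)
    return res.x, cost(res.x), emission(res.x)

# Sweeping the weights traces out noninferior solutions, from which the point
# closest to an ideal point could then be selected.
for w in (0.2, 0.5, 0.8):
    print(noninferior_point(w, 1 - w))
```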
Style APA, Harvard, Vancouver, ISO itp.
37

Huang, Chien-Chung, i 黃建中. "The Objective Weighting Method of Life Cycle Impact Assessment". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/74554777745416017027.

Pełny tekst źródła
Streszczenie:
Doctoral dissertation
National Taiwan University
Graduate Institute of Environmental Engineering
93
Life cycle assessment (LCA), a powerful environmental management tool for design for environment (DfE), can consider all stages and aspects of a specific product. Although LCA is widely recognized in environmental management, it still has some shortcomings that need to be overcome. The weighting process, a step that cannot be standardized, is one of them. In the past, the weighting process was usually implemented with subjective methodologies; however, subjective weights cannot be reused across different cases and the process of obtaining them is not transparent. The goal of this work is to improve the valuation stage in LCA by combining LCA with an environmental indicator system through factor analysis and the simple additive weighting method. The new weighting method, the "additive weighting method based on environmental indicators" (AWBEI), is expected to increase objectivity and reflect spatial variability, thereby enhancing the reliability of the assessment results. Two case studies are presented: one on the production of coffee machines in Taiwan, and the other on the global allocation of a green supply chain for LCD (Liquid Crystal Display) production. These two cases illustrate the process, the applications and even the limitations of AWBEI.
Style APA, Harvard, Vancouver, ISO itp.
38

WU, WEN-LONG, i 吳文龍. "Study of the weighting method for environmental impact assessment". Thesis, 1990. http://ndltd.ncl.edu.tw/handle/98597498098397710447.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Yi-HanLin i 林意涵. "An e-Journal Recommendation Method by Considering Time Weighting". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/83772003250925368690.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Cheng Kung University
Department of Industrial and Information Management
101
With the explosive growth in the popularity of the Internet, there is a trend towards publishing research papers online. The Institute for Scientific Information (ISI, now Thomson Reuters) maintains the most widely used online database (Journal Citation Reports, JCR) of the most valuable journals. The JCR Science edition contains data on over 8,000 journals in science and technology. However, the mass of content available on the Internet gives rise to a problem of information overload and disorientation. Currently, teachers usually look for research papers by typing keywords into an academic search engine, but the number of papers returned is still very large. Many teachers are also used to target-based searching for journal papers online; since the time-varying factor is not considered, it is difficult to find journal papers that match their current research. To overcome these problems, we propose a journal recommendation method based on a time-weighting parameter. First, we utilize N-grams and term frequency (TF) to classify and categorize words. Second, we identify keywords using the Java Wikimedia API. Generally, we judge newer papers to be more important than older ones; therefore, in order to extract suitable research topics, a time weighting is set in our method according to the publication time of the journal papers. Finally, we build a reference vector (RV) from the set of teachers' research topics and use it to set up binary vectors of the research topics of teachers and journals. To reduce the complexity of the proposed method, similarity matching is performed in the binary vector space. Experimental results show that the proposed time-aware journal recommendation method efficiently improves the accuracy of the recommendations.
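As an editorial sketch of the time-weighting idea described above (the exponential half-life decay, the profile construction and the cosine comparison are assumptions; the thesis's exact scheme and binary-vector matching may differ), keywords from newer papers contribute more to a profile than keywords from older papers.

```python
import numpy as np

def time_weighted_profile(keyword_sets, years, vocab, current_year=2013, half_life=3.0):
    """Build a keyword profile in which newer papers count more than older ones.

    keyword_sets : list of keyword sets, one per paper
    years        : publication year of each paper
    vocab        : fixed keyword vocabulary shared by all profiles
    """
    profile = np.zeros(len(vocab))
    index = {k: i for i, k in enumerate(vocab)}
    for kws, year in zip(keyword_sets, years):
        decay = 0.5 ** ((current_year - year) / half_life)   # newer paper -> larger weight
        for k in kws:
            if k in index:
                profile[index[k]] += decay
    return profile

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Example: compare a teacher profile against a journal profile over a shared vocabulary.
vocab = ["weighting", "recommendation", "tf", "ontology"]
teacher = time_weighted_profile([{"weighting", "recommendation"}, {"tf"}], [2012, 2008], vocab)
journal = time_weighted_profile([{"recommendation", "tf"}], [2011], vocab)
print(cosine(teacher, journal))
```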
Style APA, Harvard, Vancouver, ISO itp.
40

Su, Yung-Yu, i 蘇永裕. "Applying Entropy Weighting Method and Gray Theory to Product Design". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/vhf894.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Cheng Kung University
Department of Industrial Design
92
The purpose of this study is to construct a product development design process. The two major methods are entropy weighting and grey theory, with QFD, ACE and the Structure Variation Method as supporting methods. Starting from a questionnaire, the entropy weighting method evaluates the importance of each factor of the product. Next, the grey statistics method determines customers' preferences among the design factors. QFD is then used to evaluate the relationship between evaluation factors and design factors. These results are used to establish the essential design factors, and alternative designs are obtained through a web questionnaire. The Structure Variation Method is used to analyze the alternative designs, and 3D MAX is used to present the product model. This study establishes a product development principle and uses an MP3 player design as the example. The combination of entropy weighting and grey theory leads to a successful product design method.
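A minimal sketch of the entropy weighting step mentioned above, under the usual textbook formulation (the questionnaire scores are invented and the exact variant used in the thesis may differ): criteria whose scores vary more across alternatives carry more information and therefore receive larger weights.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting of evaluation factors.

    X : (alternatives x criteria) matrix of positive scores
    """
    m = X.shape[0]
    P = X / X.sum(axis=0)                         # share of each alternative per criterion
    plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)            # entropy of each criterion, in [0, 1]
    d = 1.0 - e                                   # degree of diversification
    return d / d.sum()                            # normalized weights

# Example: questionnaire scores for four product concepts on three evaluation factors.
scores = np.array([[7, 5, 9], [6, 5, 4], [8, 5, 6], [5, 5, 7]], dtype=float)
print(entropy_weights(scores))   # the constant second factor receives weight ~0
```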
Style APA, Harvard, Vancouver, ISO itp.
41

Yuan, Hui-Wei, i 袁輝偉. "Applying the keyword weighting method to measure SQL statement complexities". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/sjg875.

Pełny tekst źródła
Streszczenie:
Master's thesis
Chung Hua University
Department of Information Management (in-service master's program)
101
Databases are extensively applied in our daily life, and almost all modern databases employ SQL as the interface for retrieving or maintaining data. Coding SQL query statements efficiently therefore becomes a necessary skill for programmers. To explore how to help programmers acquire this coding skill, it is necessary to develop a method for measuring the complexity of coding SQL statements. Most of the existing research has used Halstead complexity to measure the complexity of coding SQL query statements. This study, however, finds that Halstead complexity cannot consistently measure the complexity of coding a SQL statement if the programmer uses different SQL statements to answer the same query. This study hence proposes the keyword weighting method instead of the Halstead complexity method to measure the complexity of coding SQL statements. The results of this study are as follows: (1) different keywords have different complexities, (2) the complexity of coding a SQL statement can be represented as a function of the complexities of the keywords within the statement, and (3) the complexity of coding a SQL statement correlates with accuracy and confidence. This study concludes that, compared with the Halstead complexity method, the keyword weighting method is better at measuring the complexity of coding SQL statements.
Style APA, Harvard, Vancouver, ISO itp.
42

Yeh, Tien-Yu, i 葉天煜. "Research of Reducing Bit Error Rate by Gray Level Weighting Method". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/82043063228214877496.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Central University
Institute of Optical Sciences
97
The purpose of this study is to reduce the error bits arising in a holographic data storage system, using Reed-Solomon (RS) error correction codes to handle random noise effectively. After applying the gray level weighting method, the RS code can correct all error bits of the reduced coding pages, yielding decoded images without error bits. The gray level weighting method decreases error bits effectively when the optical system suffers from defocus or more serious lens aberration, which shows that the method can relax the optical requirements of holographic data storage. The Gaussian cumulative probability is used to obtain the mean value and standard deviation describing the Gaussian distribution curve that matches the measured gray level distribution, from which the theoretical bit error rate of the reduced image is calculated. Using the Himax LCoS and a common lens, the optimal theoretical bit error rate obtained is 4.19×10^-10.
Style APA, Harvard, Vancouver, ISO itp.
43

YANG, JIN-DE, i 楊金德. "Dividing a rectangular land using numerical method weighting with land price". Thesis, 1990. http://ndltd.ncl.edu.tw/handle/45983060881380483961.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
44

Liu, Pin-Yen, i 劉品言. "An Edge Detection Method via the Tuning of RGB Weighting Ratio". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/bc99a5.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
45

Wu, Cheng-Mao, i 吳承懋. "Applying Entropy Weighting Method and Grey Theory to Optimize Multi-response Problems". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/42247124042956765273.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chiao Tung University
Department of Industrial Engineering and Management
103
Facing the sharp competition of the twenty-first century, advanced technology and sophisticated manufacturing processes are necessary for manufacturers to meet consumers' requirements. Developing innovative products, improving product quality and reducing production cost are effective ways to maintain market competitiveness. Therefore, finding the optimal factor-level combination in a multi-response process under restrictions on experimental cost, experimental time and machine feasibility becomes a very important issue for manufacturers. Design of Experiments (DOE) is often applied in industry to determine the optimal parameter setting of a process; however, DOE can only be used to optimize a single response. Although many studies have developed optimization procedures for multi-response problems, they still have some shortcomings. Therefore, the main purpose of this study is to develop a method for optimizing multiple responses simultaneously using Grey Relational Analysis (GRA), the Entropy Weight Method and Dual Response Surface Methodology (DRSM). Finally, a real case from a semiconductor factory in Taiwan is used to verify the effectiveness of the proposed procedure.
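As an editorial sketch of how grey relational analysis combines several responses into a single grade (the normalisation direction, the distinguishing coefficient zeta = 0.5 and equal weights are the usual textbook choices, not necessarily those of the thesis; entropy weights could be substituted for the equal weights):

```python
import numpy as np

def grey_relational_grade(X, weights=None, zeta=0.5, larger_is_better=True):
    """Grey relational analysis: one grade per experimental run.

    X : (runs x responses) matrix of measured responses
    """
    X = np.asarray(X, dtype=float)
    span = X.max(axis=0) - X.min(axis=0)
    if larger_is_better:
        norm = (X - X.min(axis=0)) / span
    else:
        norm = (X.max(axis=0) - X) / span
    delta = 1.0 - norm                                     # deviation from the ideal sequence (all ones)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    if weights is None:
        weights = np.full(X.shape[1], 1.0 / X.shape[1])    # equal weights; entropy weights could be used instead
    return coeff @ weights

# Example with two responses measured over three runs.
print(grey_relational_grade([[0.92, 12.1], [0.88, 10.4], [0.95, 13.0]]))
```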
Style APA, Harvard, Vancouver, ISO itp.
46

HUNG, CHANG-YI, i 洪長義. "Study on The Weighting Analysis of Theft by Using GM(0,N) Method". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/50362618602878734820.

Pełny tekst źródła
Streszczenie:
Master's thesis
Chienkuo Technology University
Department of Automation Engineering and Institute of Electro-Mechanical and Optical Systems
102
Under the impact of the global economic recession and the financial tsunami, Taiwan has not been spared from the effects of the overall environment. Because of the high unemployment rate, public security problems are becoming increasingly important and unsettling. In addition, there have been significant changes in people's interpersonal relationships and values, so the original function of social control is being disrupted and society is becoming more disordered and seemingly less safe. According to the statistics on major types of criminal cases compiled by the National Police Agency, covering nationwide counts of theft, fraud, breach of trust and drug offences as well as theft cases in Taipei, theft ranks at the top of criminal cases, which indicates the seriousness of the problem. Therefore, this paper takes the number of theft cases and applies the GM(0,N) model of grey system theory to find the weighting of the impact factors in theft cases, from which possible solutions and directions for improvement can be identified. This study can provide the police with another statistical method when they propose further policies.
Style APA, Harvard, Vancouver, ISO itp.
47

SHIH, MAN-YU, i 石曼郁. "Appling GM(h,N) Method to Analyze the Weighting ofInfluence Factor in Glasses Selling". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/02428583935677859704.

Pełny tekst źródła
Streszczenie:
Master's thesis
Chienkuo Technology University
Department and Graduate Institute of Electrical Engineering
104
With the progress of human civilization, the basic necessities of daily life have evolved from meeting basic requirements towards improving quality of life, convenience and recreation, and technology and living habits have changed greatly. World Health Organization (WHO) statistics indicate that about ten million people in Taiwan need to wear glasses to aid their vision, which reveals a huge market for the optical industry. According to past studies, the behaviour of the optical industry shows that quick-selling items are its major requirement; however, the weights of the impact factors were not revealed in those studies. Therefore, this paper adopts the GM(h,N) method of grey system theory to analyze the weights of the impact factors in the system. The Eight-Fold Glasses in the Feng-Chia night market is taken as the analyzed subject, with ten impact factors for selling items in total. The paper then calculates the weight of each impact factor with respect to the total amount of sales in order to find the significant factors that influence business operation. Analyzing the impact factors in this way can not only improve operational benefits but also help the optical industry achieve higher economic returns in its management.
Style APA, Harvard, Vancouver, ISO itp.
48

Wang, Han-Yang, i 王瀚陽. "Improvement of Auto Focus for Conventional Climbing Method by Using Adaptive Weighting Estimation Mechanism". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/88322594749244830788.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Central University
Department of Computer Science and Information Engineering (in-service program)
101
Webcams are actively used for social interaction. Accordingly, this sort of application has been successfully realized in many consumer electronics products, such as notebooks, tablets and smart TVs. In the context of user-driven processing operations, focusing is probably the most frequently used function developed to obtain a particular desired view of a scene. In the case of fixed-focus webcams, the captured scene will exhibit blurred content if it is out of focus, which drives the demand for achieving the desired clarity with an auto-focusing function. Almost all of the auto-focusing webcams on the market are based on a system-on-chip (SOC) framework, where an integrated camera module includes a lens, a complementary metal-oxide-semiconductor (CMOS) sensor, an image signal processor (ISP) and a voice coil motor (VCM). Due to cost considerations, many different auto-focusing approaches have been developed to overcome the limitations pertaining to the resources available to consumer webcams. Among these, the hill-climbing algorithm is the most widely adopted because of its simplicity and easy implementation in hardware for real-time applications. However, the traditional hill-climbing solution is limited to focusing on high-frequency blocks of an image scene, which results in mis-focusing when the region of interest (ROI) is much simpler than other contents of the scene. Motivated by this, we propose an improved hill-climbing algorithm using an adaptive weighting estimation mechanism in this thesis. Experimental results show that this new solution performs outstandingly well for various scenes and different light conditions without requiring high computational cost. Moreover, the developed scheme is well suited for implementation in low-cost consumer webcams.
Style APA, Harvard, Vancouver, ISO itp.
49

Wang, Haiou. "Logic sampling, likelihood weighting and AIS-BN : an exploration of importance sampling". Thesis, 2001. http://hdl.handle.net/1957/28769.

Pełny tekst źródła
Streszczenie:
Logic Sampling, Likelihood Weighting and AIS-BN are three variants of stochastic sampling, one class of approximate inference for Bayesian networks. We summarize the ideas underlying each algorithm and the relationship among them. The results from a set of empirical experiments comparing Logic Sampling, Likelihood Weighting and AIS-BN are presented. We also test the impact of each of the proposed heuristics and learning method separately and in combination in order to give a deeper look into AIS-BN, and see how the heuristics and learning method contribute to the power of the algorithm. Key words: belief network, probability inference, Logic Sampling, Likelihood Weighting, Importance Sampling, Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks(AIS-BN), Mean Percentage Error (MPE), Mean Square Error (MSE), Convergence Rate, heuristic, learning method.
Graduation date: 2002
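As an editorial illustration of the likelihood weighting algorithm compared in the thesis above (the network representation and function signatures below are assumptions, not the authors' implementation), each non-evidence variable is sampled from its conditional distribution, while evidence variables are fixed and contribute their likelihood to the sample weight.

```python
import random

def likelihood_weighting(bn_order, cpds, evidence, query, n_samples=10000, rng=random):
    """Likelihood weighting for a discrete Bayesian network (illustrative sketch).

    bn_order : variables in topological order
    cpds     : cpds[var](assignment) -> dict {value: probability} given the parents' values
    evidence : dict of observed variables
    query    : variable whose posterior distribution is estimated
    """
    totals = {}
    for _ in range(n_samples):
        sample, weight = {}, 1.0
        for var in bn_order:
            dist = cpds[var](sample)
            if var in evidence:
                sample[var] = evidence[var]
                weight *= dist[evidence[var]]        # weight by the likelihood of the evidence
            else:
                r, acc = rng.random(), 0.0
                for value, p in dist.items():        # sample from the conditional distribution
                    acc += p
                    if r <= acc:
                        break
                sample[var] = value
        totals[sample[query]] = totals.get(sample[query], 0.0) + weight
    z = sum(totals.values())
    return {v: t / z for v, t in totals.items()}
```

Logic sampling corresponds to sampling the evidence variables as well and discarding inconsistent samples, which is why likelihood weighting is usually far more sample-efficient when evidence is unlikely.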
Style APA, Harvard, Vancouver, ISO itp.
50

Liu, Jia-Rou, i 劉珈柔. "The Impact between Social Enterprise’s Business Model Combined with Corporate Social Responsibility Strategy and Stakeholder’s Weighting Method". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/71602924381724645956.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of International Business
102
Due to the shortage of donations faced by non-profit organizations and the recent lack of social responsibility among a minority of firms, social enterprises combining social value with economic profit have gained weight around the world. Forsyth (2012) suggested that a well-run social enterprise needs to balance its social and economic benefits. Research on social enterprises has addressed the relation between a social enterprise's business model and its stakeholders; however, research dealing empirically with the stakeholder weighting method is scant. Therefore, the aim of this paper is to explore the relation between the social enterprise's business model and the stakeholder weighting method. The research method used in this paper is a semi-structured interview concerning the business model and the stakeholder weighting method. Results show that social enterprises focus on social strategy, while two other factors, resource partners and value identification, are also important in the business model. Moreover, the stakeholder weighting method proposed by Mitchell et al. (1997) is not a suitable method for social enterprises. To conclude, the results shed light on the dynamic relationship between a social enterprise's business model and the stakeholder weighting method, and provide social entrepreneurs with a better understanding of how the two relate. Key Words: Social Enterprise; Corporate Social Responsibility; Business Model; Stakeholder's Weighting Method
Style APA, Harvard, Vancouver, ISO itp.