
Dissertations / Theses on the topic 'WEIGHING SCALE'

Consult the top 15 dissertations / theses for your research on the topic 'WEIGHING SCALE.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Wheeler, Travis John. "EFFICIENT CONSTRUCTION OF ACCURATE MULTIPLE ALIGNMENTS AND LARGE-SCALE PHYLOGENIES." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/195143.

Full text
Abstract:
A central focus of computational biology is to organize and make use of vast stores of molecular sequence data. Two of the most studied and fundamental problems in the field are sequence alignment and phylogeny inference. The problem of multiple sequence alignment is to take a set of DNA, RNA, or protein sequences and identify related segments of these sequences. Perhaps the most common use of alignments of multiple sequences is as input for methods designed to infer a phylogeny, or tree describing the evolutionary history of the sequences. The two problems are circularly related: standard phylogeny inference methods take a multiple sequence alignment as input, while computation of a rudimentary phylogeny is a step in the standard multiple sequence alignment method. Efficient computation of high-quality alignments, and of high-quality phylogenies based on those alignments, are both open problems in the field of computational biology. The first part of the dissertation gives details of my efforts to identify a best-of-breed method for each stage of the standard form-and-polish heuristic for aligning multiple sequences; the result of these efforts is a tool, called Opal, that achieves state-of-the-art 84.7% accuracy on the BAliBASE alignment benchmark. The second part of the dissertation describes a new algorithm that dramatically increases the speed and scalability of a common method for phylogeny inference called neighbor-joining; this algorithm is implemented in a new tool, called NINJA, which is more than an order of magnitude faster than a very fast implementation of the canonical algorithm, for example building a tree on 218,000 sequences in under 6 days on a single-processor computer.
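The abstract does not spell out the neighbor-joining step that NINJA accelerates; as an illustration only, one canonical join step (the Saitou-Nei Q-criterion, pair selection, and matrix reduction) might be sketched as follows, with a hypothetical toy distance matrix:

```python
import numpy as np

def q_matrix(d):
    """Saitou-Nei criterion: Q[i,j] = (n-2)*d[i,j] - r_i - r_j, with r_i the row sum."""
    n = d.shape[0]
    r = d.sum(axis=1)
    q = (n - 2) * d - r[:, None] - r[None, :]
    np.fill_diagonal(q, 0.0)
    return q

def nj_join_once(d):
    """One neighbor-joining step: join the Q-minimizing pair (i, j) into a new
    node u and return ((i, j), reduced distance matrix with u in the last row)."""
    n = d.shape[0]
    q = np.where(~np.eye(n, dtype=bool), q_matrix(d), np.inf)
    i, j = divmod(int(np.argmin(q)), n)
    # distance from the new node u to each remaining taxon k
    du = 0.5 * (d[i] + d[j] - d[i, j])
    keep = [k for k in range(n) if k not in (i, j)]
    new = np.empty((len(keep) + 1, len(keep) + 1))
    new[:-1, :-1] = d[np.ix_(keep, keep)]
    new[-1, :-1] = du[keep]
    new[:-1, -1] = du[keep]
    new[-1, -1] = 0.0
    return (i, j), new
```

Repeating this step until two nodes remain is quadratic per join; NINJA's contribution is avoiding the full Q-matrix scan at scale.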
APA, Harvard, Vancouver, ISO, and other styles
2

Tudor, Joshua. "Developing a national frame of reference on student achievement by weighting student records from a state assessment." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1779.

Full text
Abstract:
A fundamental issue in educational measurement is what frame of reference to use when interpreting students’ performance on an assessment. One frame of reference that is often used to enhance interpretations of test scores is normative, which adds meaning to test score interpretations by indicating the rank of an individual’s score within a distribution of test scores of a well-defined reference group. One of the most commonly used frames of reference on student achievement provided by test publishers of large-scale assessments is national norms, whereby students’ test scores are referenced to a distribution of scores of a nationally representative sample. A national probability sample can fail to fully represent the population because of student and school nonparticipation. In practice, this is remedied by weighting the sample so that it better represents the intended reference population. The focus of this study was on weighting and determining the extent to which weighting grade 4 and grade 8 student records that are not fully representative of the nation can recover distributions of reading and math scores in a national probability sample. Data from a statewide testing program were used to create six grade 4 and grade 8 datasets, each varying in its degree of representativeness of the nation, as well as in the proximity of its reading and math distributions to those of a national sample. The six datasets created for each grade were separately weighted to different population totals in two different weighting conditions using four different bivariate stratification designs. The weighted distributions were then smoothed and compared to smoothed distributions of the national sample in terms of descriptive statistics, maximum absolute differences between the relative cumulative frequency distributions, and chi-square effect sizes. The impact of using percentile ranks developed from the state data was also investigated. 
By and large, the smoothed distributions of the weighted datasets were able to recover the national distribution in each content area, grade, and weighting condition. Weighting the datasets to the nation was effective in making the state test score distributions more similar to the national distributions. Moreover, the stratification design that defined weighting cells by the joint distribution of median household income and ethnic composition of the school consistently produced desirable results for the six datasets used in each grade. Log-linear smoothing using a polynomial of degree 4 was effective in making the weighted distributions even more similar to those in the national sample. Investigation of the impact of using the percentile ranks derived from the state datasets revealed that the percentile ranks of the distributions that were most similar to the national distributions resulted in a high percentage of agreement when classifying student performance based on raw scores associated with the same percentile rank in each dataset. The utility of having a national frame of reference on student achievement, and the efficacy of estimating such a frame of reference from existing data are also discussed.
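The core weighting idea above can be reduced to post-stratification: each record receives the ratio of its cell's national total to its cell's sample count. The sketch below uses hypothetical cell labels, not the study's actual stratification designs:

```python
from collections import Counter

def poststratify(sample_cells, national_totals):
    """Weight each record so weighted cell counts match national totals.
    sample_cells: one cell label per student record (e.g. income x ethnicity).
    national_totals: dict mapping cell label -> national student count."""
    counts = Counter(sample_cells)
    # a record in cell c gets weight N_c / n_c
    return [national_totals[c] / counts[c] for c in sample_cells]
```

Summing the weights recovers the national population size, which is what makes the weighted state distribution comparable to the national one.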
3

Tarn, Yen-Huei Tony. "Re-weighting the Quality of Well-Being Scale and assessment of self-reported health status in Chinese Americans." Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186324.

Full text
Abstract:
Asian-Americans are the fastest growing ethnic minority group in the United States, followed by Hispanics. Little is known about their health state preferences or their health status. The purpose of this research was to determine whether a Chinese-American population has different preference values on four dimensions of health status than a general community sample in the United States. Also of interest was the self-reported health status of this sample of Chinese-Americans, using weights derived from Americans or Chinese-Americans, to see whether the resulting index scores were significantly different. The question is whether Quality of Well-Being (QWB) weights derived from preferences of the American sample were appropriate for scoring QWBs for the Chinese-Americans. This research was conducted on 383 Chinese-Americans living in the San Gabriel Valley area, east of Los Angeles, California. A model of deliberate sampling for heterogeneity and a snowball sampling strategy were used for subject selection into the study. Three instruments (a weighting booklet, the Quality of Well-Being Scale, and a demographic battery), each having an English and a Chinese version, were used. Results indicate that the reliability and validity of the booklet rating and QWB Scale were high in the Chinese-American sample. The preference weights derived from the sample of Chinese-Americans were different from those derived from the community sample of Americans. Although the weights cannot be compared individually due to the lack of variance associated with them, of the 48 levels on the symptom/problems scale, 28 of the Chinese-American weights were lower than the American weights, and for the 11 levels of the three functional scales, eight of the Chinese-American weights were higher than the American weights. The mean QWB scores calculated using Chinese-American weights were lower than those calculated using American weights.
Therefore, QWB weights derived from preferences of the American sample were not appropriate for scoring QWBs for the Chinese-Americans.
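The QWB index itself is a weighted deduction from full health, which is why the choice of preference weights changes the scores. A minimal sketch of the scoring rule follows; the level codes and weight values are invented for illustration, not the scale's published values:

```python
def qwb_score(weights, reported_levels):
    """Quality of Well-Being index: start at full health (1.0) and subtract
    the preference weight of each reported symptom/function level."""
    return 1.0 - sum(weights[level] for level in reported_levels)
```

Swapping in a different `weights` dictionary (American vs. Chinese-American preferences) is exactly the comparison the study performs.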
4

Cunha, Ana Torre do Valle de Arriaga e. "Cumulative prospect theory : a parametric analysis of the functional forms and applications." Master's thesis, Instituto Superior de Economia e Gestão, 2012. http://hdl.handle.net/10400.5/10990.

Full text
Abstract:
Mestrado em Finanças (Master's in Finance)
This work presents an empirical study of cumulative prospect theory using a Portuguese sample. We estimate the value function and the probability weighting function for positive and negative outcomes. The results confirm previous findings that the value function is concave in the gain domain and almost linear in the loss domain. Our results also show an inverse S-shape for the probability weighting function in both the loss and gain domains. We also examine the relation between the coefficients of these functions and demographic variables, concluding that males are more willing to take risks than females. Finally, using the estimated coefficients, we discuss the applicability of the results to financial markets. First, we establish a bridge between the loss aversion coefficient and the DOSPERT scale, which gives financial institutions an easier way to present the appropriate efficient portfolio for each individual. Second, we apply cumulative prospect theory to modern portfolio theory for the Portuguese market. This allows financial institutions to construct an efficient market portfolio that takes probability distortions into account.
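Studies of this kind usually estimate the Tversky-Kahneman (1992) functional forms; a sketch with their published median parameters follows (the thesis's own estimates for the Portuguese sample may differ):

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """CPT value function: concave for gains, convex and steeper for losses
    (lam > 1 encodes loss aversion)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def prob_weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities
    and underweights large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
```

The inverse-S shape reported in the abstract corresponds to `prob_weight(p) > p` for small `p` and `prob_weight(p) < p` for large `p`.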
5

Rai, Kurlethimar Yashas. "Visual attention for quality prediction at fine spatio-temporal scales : from perceptual weighting towards visual disruption modeling." Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4027/document.

Full text
Abstract:
This thesis revisits the relationship between visual attentional processes and the perception of quality. We mainly focus on the perception of degradations in video sequences and their overall impact on our perception of quality. Rather than a global approach, we work at a very localized spatio-temporal scale, more adapted to the decision process in video encoders. Two approaches linking visual attention and perceived quality are explored in the thesis. The first follows a classical approach of the distortion-weighting type, which is very useful in certain scenarios such as interactive streaming or the visualization of omnidirectional content. The second approach leads us to introduce the concept of visual disruption (DV) and to explore its relation to perceived quality. We first propose techniques for studying the saccades related to DV from experimental eye-tracking data. Then, a computational model for the prediction of DV is proposed. A new objective quality measure, which we call the "Disruption Metric", is thus introduced, allowing the evaluation of the local quality of videos. The results find applications in many fields, such as quality evaluation, compression, perceptually optimized transmission of visual content, and foveated rendering/transmission.
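The first (distortion-weighting) approach can be sketched generically as saliency-weighted pooling of a local distortion map; this is an illustration of the idea, not the thesis's actual model:

```python
import numpy as np

def saliency_weighted_quality(distortion_map, saliency_map, eps=1e-8):
    """Pool a per-pixel distortion map into one score, weighting each
    location by its predicted visual-attention (saliency) value."""
    d = np.asarray(distortion_map, dtype=float)
    s = np.asarray(saliency_map, dtype=float)
    return float((d * s).sum() / (s.sum() + eps))
```

With this pooling, the same distortion counts for more when it falls where viewers are predicted to look.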
6

Harris, Katherine S. "Investigating the efficacy of weighting the subscales of the Braden scale for predicting pressure sore risk to enhance its predictive validity." NSUWorks, 2009. http://nsuworks.nova.edu/hpd_pt_stuetd/27.

Full text
7

Aytekin, Caglar. "Geo-spatial Object Detection Using Local Descriptors." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613488/index.pdf.

Full text
Abstract:
There is an increasing trend towards object detection from aerial and satellite images. Most widely used object detection algorithms are based on local features. In such an approach, the local features are first detected and described in an image; a representation of the image is then formed using these local features for supervised learning, and these representations are used during classification. In this thesis, the Harris and SIFT algorithms are used as local feature detectors, and the SIFT approach is used as a local feature descriptor. Using these tools, the bag-of-visual-words algorithm is examined in order to represent an image by histograms of visual words. Finally, an SVM classifier is trained using positive and negative samples from a training set. In addition to the classical bag-of-visual-words approach, two novel extensions are proposed. First, the visual words are weighted in proportion to their importance for the positive samples; the important features are those occurring more in the object and less in the background. Second, principal component analysis is applied after forming the histograms in order to remove undesired redundancy and noise in the data and to reduce its dimensionality, yielding better classification performance. Based on the test results, it can be argued that the proposed approach is capable of detecting a number of geospatial objects, such as airplanes or ships, with reasonable performance.
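The first extension, weighting visual words by how strongly they indicate the object class, might be sketched as below. The specific weighting formula (positive-to-negative frequency ratio) is a plausible reading of the abstract, not the thesis's exact definition:

```python
import numpy as np

def word_weights(pos_hists, neg_hists, eps=1e-6):
    """Weight each visual word by how much more often it occurs in
    positive (object) images than in negative (background) images."""
    pos = np.asarray(pos_hists, dtype=float).sum(axis=0)
    neg = np.asarray(neg_hists, dtype=float).sum(axis=0)
    pos /= pos.sum() + eps
    neg /= neg.sum() + eps
    return pos / (neg + eps)

def weighted_bow(hist, weights):
    """Apply the word weights to a bag-of-visual-words histogram and
    L2-normalize, ready for an SVM."""
    h = np.asarray(hist, dtype=float) * weights
    n = np.linalg.norm(h)
    return h / n if n > 0 else h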
8

Seegmiller, Luke W. "Utah Commercial Motor Vehicle Weigh-in-Motion Data Analysis and Calibration Methodology." Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1616.pdf.

Full text
9

May, Michael. "Data analytics and methods for improved feature selection and matching." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.

Full text
Abstract:
This work focuses on analysing and improving feature detection and matching. After creating an initial framework of study, four main areas of work are researched. These areas make up the main chapters within this thesis and focus on using the Scale Invariant Feature Transform (SIFT). The preliminary analysis of the SIFT investigates how this algorithm functions. Included is an analysis of the SIFT feature descriptor space and an investigation into the noise properties of the SIFT. It introduces a novel use of the a contrario methodology and shows the success of this method as a way of discriminating between images which are likely to contain corresponding regions and images which do not. Parameter analysis of the SIFT uses both parameter sweeps and genetic algorithms as an intelligent means of setting the SIFT parameters for different image types, utilising a GPGPU implementation of SIFT. The results demonstrate which parameters are more important when optimising the algorithm and the areas within the parameter space to focus on when tuning the values. A multi-exposure High Dynamic Range (HDR) feature-fusion process has been developed in which SIFT image features are matched within high-contrast scenes. Bracketed exposure images are analysed, and features are extracted and combined from different images to create a set of features which describe a larger dynamic range. They are shown to reduce the effects of noise and artefacts that are introduced when extracting features from HDR images directly, and they have superior image matching performance. The final area is the development of a novel, 3D-based SIFT weighting technique which utilises the 3D data from a pair of stereo images to cluster and classify matched SIFT features. Weightings are applied to the matches based on the 3D properties of the features and how they cluster, in order to discriminate between correct and incorrect matches using the a contrario methodology.
The results show that the technique provides a method for discriminating between correct and incorrect matches and that the a contrario methodology has potential for future investigation as a method for correct feature match prediction.
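For orientation, the standard nearest-neighbour distance-ratio test that a contrario match discrimination refines can be sketched as follows (toy two-dimensional descriptors; the ratio threshold is an assumed value, and real SIFT descriptors are 128-dimensional):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep a match only when the nearest neighbour in B is clearly closer
    than the second nearest (Lowe's distance-ratio test)."""
    a = np.asarray(desc_a, dtype=float)
    b = np.asarray(desc_b, dtype=float)
    # pairwise Euclidean distances, shape (len(a), len(b))
    dist = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dist):
        j1, j2 = np.argsort(row)[:2]
        if row[j1] < ratio * row[j2]:
            matches.append((i, int(j1)))
    return matches
```

An a contrario step would go further, bounding the expected number of such matches arising by chance.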
10

SWAMI, SUNIL KUMAR. "IOT WEIGHING SCALE." Thesis, 2018. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16410.

Full text
Abstract:
Interest in monitoring the weight of honey-bee colonies is steadily increasing. This thesis discusses how the weight of a bee population kept for honey can be measured using two technologies, Bluetooth and Wi-Fi, together with a purpose-built weighing machine. A Bluetooth module and a Wi-Fi module are attached to the weighing machine, and both are controlled in software. Bluetooth reports the weight over a short range, while the Wi-Fi module works with a cloud service in which the weight is read every 15 seconds by a Python program. A database server stores all intermediate results, which can also be deleted. A load cell measures weights of up to 5 kg. The raw signal first goes to an ADC (analogue-to-digital converter) to produce a digital weight, which is then passed to a microcontroller that routes it to the LED display, the Wi-Fi module, and the Bluetooth module; the system is powered from a mains plug. Bluetooth shows only the current weight of the bee population, whereas the Wi-Fi service both shows the result and stores it in the database server every 15 seconds, so there is no need to visit the hive to check its weight.
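The load-cell-to-grams conversion in such a chain (load cell, ADC, microcontroller) reduces to a two-point calibration; a minimal sketch with made-up ADC counts:

```python
def calibrate(zero_counts, known_counts, known_grams):
    """Two-point calibration: record raw ADC counts at zero load and under a
    known reference mass, then derive offset and counts-per-gram scale."""
    offset = zero_counts
    scale = (known_counts - zero_counts) / known_grams
    return offset, scale

def counts_to_grams(counts, offset, scale):
    """Convert a raw ADC reading to grams using the calibration."""
    return (counts - offset) / scale
```

A monitoring loop would then call `counts_to_grams` on each sample and post the result to the database every 15 seconds.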
11

Lin, Chih-Kun, and 林志坤. "A Preliminary Study on Combination of Automatic PC Controlled Weighing Scale System." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/07892170742343929977.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Bio-Industrial Mechatronics Engineering
Academic year 89 (2000-2001)
The purpose of this study was to develop a combined automatic weighing system offering both high accuracy and rapid weighing. An automatic weighing system was built using a load cell, an amplifier, a low-pass filter, an FX-4AD module, and a PLC (programmable logic controller). The combined system for calculation and grading was built using Visual Basic 6.0 on a PC, based on a 1:1 ratio between weight and digital value, with a man-machine interface controlling the system. The performance was as follows: the worst accuracy of the dynamic combined automatic weighing system was 1.68%, while the best accuracy of the dynamic automatic weighing system was 2.32%, so the accuracy of the combined automatic weighing system was better than that of the dynamic automatic weighing system. In addition, the weighing time of the dynamic automatic weighing system was longer than that of the dynamic combined automatic weighing system. Therefore, in terms of high accuracy and rapid weighing, the combined automatic weighing system is more feasible than the plain automatic weighing system.
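The low-pass filtering stage in such a weighing chain is often, in digital form, just a moving average over recent samples; a minimal sketch (the window size is an assumed parameter):

```python
def moving_average(samples, window=5):
    """Simple digital low-pass filter: replace each sample with the mean of
    itself and up to window-1 preceding samples."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out
```

Smoothing like this trades a short settling delay for a steadier reading, which is the accuracy-versus-speed balance the abstract describes.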
12

Klier, Christine [Verfasser]. "Environmental fate of the herbicide glyphosate in the soil plant system : monitoring and modelling using large-scale weighing lysimeters / Christine Klier." 2007. http://d-nb.info/987955780/34.

Full text
13

Wang, Yu-Ting, and 王予廷. "Large-Scale One-Class Collaborative Filtering: The Impact of Weighting Schemes." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/09357425088653461186.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Information Management
Academic year 104 (2015-2016)
Recommendation systems have been widely used in e-commerce applications. With the development of information technology, users can easily access an enormous number of products, but they have limited ability to evaluate their choices. It is therefore important for content providers and e-retailers to recommend items that match users' tastes, enhancing user satisfaction and loyalty. Collaborative filtering is a popular way to implement a recommendation system: it analyzes the relationships between users and items through users' feedback, which reflects their preferences, and then recommends to each user a list of items ranked by predicted preference. This research focuses on the One-Class Collaborative Filtering (OCCF) approach. In OCCF, we only have positive examples of users' actions, and the data are ambiguous because unobserved data points can be interpreted as either missing or negative cases. In this study, we treat unknown examples as negative examples with a confidence score calculated by our weighting schemes. We apply our model to two large-scale movie rating datasets and implement OCCF with gap-weighting Alternating Least Squares (gALS). We then adjust the weighting schemes to observe their impact on the model. Our results show that gALS improves predictive performance; however, the choice of weighting scheme does not have a dramatic impact.
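The confidence-weighted ALS update underlying such schemes (in the spirit of the standard implicit-feedback model of Hu, Koren, and Volinsky; the abstract does not specify gALS's exact update) can be sketched as one half-sweep solving for user factors with item factors held fixed:

```python
import numpy as np

def update_user_factors(R, C, V, reg=0.1):
    """One half-sweep of weighted ALS: for each user u, solve
    (V^T C_u V + reg*I) x_u = V^T C_u r_u, where C_u holds the per-item
    confidence weights and r_u the binary preferences."""
    k = V.shape[1]
    U = np.zeros((R.shape[0], k))
    for u in range(R.shape[0]):
        Cu = np.diag(C[u])
        A = V.T @ Cu @ V + reg * np.eye(k)
        b = V.T @ Cu @ R[u]
        U[u] = np.linalg.solve(A, b)
    return U
```

The full algorithm alternates this update with the symmetric one for item factors until convergence; the weighting-scheme question is how to set `C` for unobserved entries.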
14

Pletts, T. R. "The feasibility of automatic on-board weighing systems in the South African sugarcane transport industry." Thesis, 2009. http://hdl.handle.net/10413/944.

Full text
15

Surve, Sachin Ramchandra. "Interferometric optical fibre sensor for highway pavements and civil structures." Thesis, 2003. https://vuir.vu.edu.au/15705/.

Full text
Abstract:
Optical fibres have been used to develop a variety of sensing configurations for monitoring a wide range of parameters. This thesis presents the design, construction, and characterisation of a new type of single-transducer optical fibre Weigh-In-Motion (WIM) sensor. The sensor is based on an extended (long) fibre-optic Fabry-Perot interferometer. The Fabry-Perot arrangement was chosen for its simplicity, sensitivity, low cost, and ease of installation.