
Dissertations / Theses on the topic 'Prediction of quality'



Consult the top 50 dissertations / theses for your research on the topic 'Prediction of quality.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Kunte, Deepti Shriram. "Sound Quality Prediction Using Neural Networks." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-283336.

Abstract:
Sound quality is an important measure of both the quality of a machine and the comfort of using it. However, because it is a subjective measure, it is difficult to capture ahead of time and requires time-consuming and costly jury testing. It is therefore worthwhile to be able to predict the results of a jury study from metrics that can be measured objectively. The aim of the thesis is twofold: first, to establish neural network models that predict subjective sound quality ratings from objective metrics and, second, to interpret the models to understand the relative importance of each objective metric for a specific subjective rating. Ultimately the thesis aims to pave the way for including sound quality metrics in the early design stages. The study showed that the neural networks' performance was at least equal to or better than that of linear or quadratic models. The connection weights method, the profile method, the perturbation method, the improved stepwise selection method and the linear regression method were the interpretation algorithms found to work well on all simulated data sets, and they gave comparable results on the real data sets. Neural networks were shown to have the potential to give low prediction errors while maintaining interpretability in sound quality applications. The data scarcity study gave an idea of the scale of performance enhancement that can be achieved with more data and can serve as a useful input when deciding on the number of data points.
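
As a concrete illustration of the perturbation-style interpretation methods this abstract names, here is a minimal sketch using scikit-learn's permutation importance on a small MLP. The feature names (loudness, sharpness, roughness) and the synthetic data are assumptions for illustration only, not the thesis's actual metrics, data or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for objective sound quality metrics
# (loudness, sharpness, roughness are assumed names).
n = 500
X = rng.uniform(0, 1, size=(n, 3))
# Assumed ground truth: the rating depends mostly on the first feature.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)

# Perturbation-style importance: shuffle one input at a time and measure
# the drop in R^2 -- one of the interpretation ideas the thesis compares.
result = permutation_importance(net, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(["loudness", "sharpness", "roughness"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```
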
2

Steel, Donald. "Software reliability prediction." Thesis, Abertay University, 1990. https://rke.abertay.ac.uk/en/studentTheses/4613ff72-9650-4fa1-95d1-1a9b7b772ee4.

Abstract:
The aim of the work described in this thesis was to improve NCR's decision making process for progressing software products through the development cycle. The first chapter briefly describes the software development process at NCR, detailing documentation review and software testing techniques. The objectives and reasons for investigating software reliability models as a tool in the decision making process are outlined. There follows a short review of software reliability models, with the Littlewood and Verrall Bayesian model considered in detail. The difficulties in using this model to obtain estimates for model parameters and time to next failure are described. These estimation difficulties exist using the model on good datasets, in this case simulated failure data, and the difficulties are compounded when used with real failure data. The problems of collecting and recording failure data are outlined, highlighting the inadequacies of these collected data, and real failure data are analysed. Software reliability models are used in an attempt to quantify the reliability of real software products. The thesis concludes by summarising the problems encountered when using reliability models to measure software products and suggests future research into metrics that are required in this area of software engineering.
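
For context on the Littlewood and Verrall model this abstract discusses, the sketch below simulates inter-failure times from the model's standard form: the i-th inter-failure time is exponential with rate lambda_i, where lambda_i is Gamma-distributed with a scale parameter psi(i) that grows with the failure index, encoding reliability growth. The parameter values are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Littlewood-Verrall: T_i ~ Exp(lambda_i), lambda_i ~ Gamma(alpha, scale=1/psi(i)),
# psi(i) = beta0 + beta1 * i. E[lambda_i] = alpha / psi(i) falls as i grows,
# so failures become rarer over time. Values below are illustrative only.
alpha, beta0, beta1 = 2.0, 10.0, 5.0

def simulate_interfailure_times(n_failures: int) -> np.ndarray:
    psi = beta0 + beta1 * np.arange(1, n_failures + 1)
    lam = rng.gamma(shape=alpha, scale=1.0 / psi)
    return rng.exponential(scale=1.0 / lam)

times = simulate_interfailure_times(50)
print("mean of first 10 gaps:", times[:10].mean())
print("mean of last 10 gaps :", times[-10:].mean())  # should be larger
```
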
3

Peng, Huiping. "Air quality prediction by machine learning methods." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/55069.

Abstract:
As air pollution is a complex mixture of toxic components with considerable impact on humans, forecasting air pollution concentrations is a priority for improving quality of life. In this study, air quality data (observational and numerical) were used to produce hourly spot concentration forecasts of ozone (O₃), fine particulate matter (PM₂.₅) and nitrogen dioxide (NO₂), up to 48 hours ahead, for six stations across Canada: Vancouver, Edmonton, Winnipeg, Toronto, Montreal and Halifax. Using numerical data from an air quality model (GEM-MACH15) as predictors, forecast models for pollutant concentrations were built using multiple linear regression (MLR) and multi-layer perceptron neural networks (MLP NN). A relatively new method, the extreme learning machine (ELM), was also used to overcome the limitations of linear methods as well as the large computational demand of MLP NN. In operational forecasting, the continuous arrival of new data means frequent updating of the models is needed. This type of learning, called online sequential learning, is straightforward for MLR and ELM but not for MLP NN. Forecast performance of the online sequential MLR (OSMLR) and online sequential ELM (OSELM), together with stepwise MLR, all updated daily, was compared with MLP NN updated seasonally and with the benchmark, updatable model output statistics (UMOS) from Environment Canada. Overall, OSELM tended to slightly outperform the other models including UMOS, being most successful with ozone forecasts and least with PM₂.₅ forecasts. MLP NN updated seasonally generally underperformed the linear models MLR and OSMLR, indicating the need to update a nonlinear model frequently.
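
To make the online sequential learning idea concrete, here is a minimal sketch of an OS-ELM: a fixed random hidden layer with a recursive-least-squares readout that is updated one observation at a time. The toy data stream and all dimensions are assumptions; this is not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

class OSELM:
    """Minimal online-sequential extreme learning machine sketch:
    a fixed random hidden layer plus a recursive-least-squares readout."""

    def __init__(self, n_in: int, n_hidden: int):
        self.W = rng.normal(size=(n_in, n_hidden))  # random input weights (never trained)
        self.b = rng.normal(size=n_hidden)
        self.beta = np.zeros(n_hidden)              # output weights (trained)
        self.P = np.eye(n_hidden) * 1e3             # inverse covariance for RLS

    def _hidden(self, x):
        return np.tanh(x @ self.W + self.b)

    def update(self, x, y):
        """Fold in one new (predictors, observed concentration) pair."""
        h = self._hidden(x)
        Ph = self.P @ h
        k = Ph / (1.0 + h @ Ph)            # RLS gain
        self.beta += k * (y - h @ self.beta)
        self.P -= np.outer(k, Ph)

    def predict(self, x):
        return self._hidden(x) @ self.beta

# Toy stream standing in for daily (NWP predictors -> ozone) pairs.
model = OSELM(n_in=4, n_hidden=30)
true_w = np.array([1.0, -0.5, 0.3, 0.0])
for _ in range(2000):
    x = rng.uniform(-1, 1, size=4)
    model.update(x, true_w @ x + 0.05 * rng.normal())
x_test = rng.uniform(-1, 1, size=4)
print(model.predict(x_test), true_w @ x_test)
```
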
Faculty of Science, Department of Earth, Ocean and Atmospheric Sciences.
4

Hollier, M. P. "Audio quality prediction for telecommunications speech systems." Thesis, University of Essex, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282496.

5

Mateus, Ana Teresa Moreirinha Vila Fernandes. "Quality management in laboratories: Efficiency prediction models." Doctoral thesis, Universidade de Évora, 2021. http://hdl.handle.net/10174/29338.

Abstract:
In recent years, the adoption of quality tools by laboratories has increased significantly. This has contributed to growing competitiveness, requiring a new organizational posture to adapt to new challenges. In order to obtain competitive advantages in their respective sectors of activity, laboratories have increasingly invested in innovation. In this context, the main objective of this study is to develop efficiency models for laboratories using tools from the scientific field of artificial intelligence. Throughout this work, different studies are presented, carried out in water analysis laboratories, stem cell cryopreservation laboratories and dialysis care clinics, in which innovative solutions and better resource control were sought without compromising quality and while promoting greater sustainability. This work can be seen as a research opportunity that can be applied not only in laboratories and clinics, but also in organizations from different sectors, in order to define prediction models that allow the anticipation of future scenarios and the evaluation of ways of acting. The results show the feasibility of applying the models and that the normative references applied to laboratories and clinics can be a basis for structuring the systems.
6

Taipale, T. (Taneli). "Improving software quality with software error prediction." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201512042251.

Abstract:
Today's agile software development can be a complicated process, especially when dealing with a large-scale project with demands for tight communication. The tools used in software development, while aiding the process itself, can also offer meaningful statistics. With the aid of machine learning, these statistics can be used to predict the behavior patterns of the development process. The starting point of this thesis is a software project developed to be a part of a large telecommunications network. On the one hand, this type of project demands expensive testing equipment, which, in turn, translates to costly testing time. On the other hand, unit testing and code reviewing are practices that improve the quality of software but require large amounts of time from software experts. Because errors are the unavoidable evil of the software process, the efficiency of the above-mentioned quality assurance tools is very important for a successful software project. The target of this thesis is to improve the efficiency of testing and other quality tools by using a machine learner. The machine learner is taught to predict errors using historical information about software errors made earlier in the project. The error predictions are used for prioritizing the test cases that are most likely to find an error. The result of the thesis is a predictor that is capable of estimating which file changes are most likely to cause an error. The prediction information is used for creating reports such as a ranking of the most probably error-causing commits. Furthermore, a line-wise map of the probability of an error for the whole project is created. Lastly, the information is used for creating a graph that combines organizational information with error data. The original goal of prioritizing test cases based on the error predictions was not achieved because of limited coverage data. This thesis brought important improvements in project practices into focus and gave new perspectives on the software development process.
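
A minimal sketch of the commit-ranking idea described above, using logistic regression as a stand-in for the thesis's learner. The per-commit features and the synthetic labels are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical per-commit features: lines changed, files touched,
# author's recent bug count. Labels: did the commit introduce an error?
n = 1000
X = np.column_stack([
    rng.poisson(40, n),   # lines changed
    rng.poisson(3, n),    # files touched
    rng.poisson(1, n),    # author's recent bug count
]).astype(float)
logit = -3.0 + 0.03 * X[:, 0] + 0.2 * X[:, 1] + 0.5 * X[:, 2]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

clf = LogisticRegression().fit(X, y)

# Rank new commits so review/testing effort goes to the riskiest first,
# in the spirit of the "most probably error-causing commits" report.
new_commits = X[:5]
risk = clf.predict_proba(new_commits)[:, 1]
for idx in np.argsort(risk)[::-1]:
    print(f"commit {idx}: predicted error probability {risk[idx]:.2f}")
```
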
7

Krishnamurthy, Janaki. "Quality Market: Design and Field Study of Prediction Market for Software Quality Control." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/352.

Abstract:
Given the increasing competition in the software industry and the critical consequences of software errors, it has become important for companies to achieve high levels of software quality. While cost reduction and timeliness of projects continue to be important measures, software companies are placing increasing attention on identifying the user needs and better defining software quality from a customer perspective. Software quality goes beyond just correcting the defects that arise from any deviations from the functional requirements. System engineers also have to focus on a large number of quality requirements such as security, availability, reliability, maintainability, performance and temporal correctness requirements. The fulfillment of these run-time observable quality requirements is important for customer satisfaction and project success. Generating early forecasts of potential quality problems can have significant benefits to quality improvement. One approach to better software quality is to improve the overall development cycle in order to prevent the introduction of defects and improve run-time quality factors. Many methods and techniques are available which can be used to forecast quality of an ongoing project such as statistical models, opinion polls, survey methods etc. These methods have known strengths and weaknesses and accurate forecasting is still a major issue. This research utilized a novel approach using prediction markets, which has proved useful in a variety of situations. In a prediction market for software quality, individual estimates from diverse project stakeholders such as project managers, developers, testers, and users were collected at various points in time during the project. Analogous to the financial futures markets, a security (or contract) was defined that represents the quality requirements and various stakeholders traded the securities using the prevailing market price and their private information. The equilibrium market price represents the best aggregate of diverse opinions. Among many software quality factors, this research focused on predicting the software correctness. The goal of the study was to evaluate if a suitably designed prediction market would generate a more accurate estimate of software quality than a survey method which polls subjects. Data were collected using a live software project in three stages: viz., the requirements phase, an early release phase and a final release phase. The efficacy of the market was tested with results from prediction markets by (i) comparing the market outcomes to final project outcome, and (ii) by comparing market outcomes to results of opinion poll. Analysis of data suggests that predictions generated using the prediction market are significantly different from those generated using polls at early release and final release stages. The prediction market estimates were also closer to the actual probability estimates for quality compared to the polls. Overall, the results suggest that suitably designed prediction markets provide better forecasts of potential quality problems than polls.
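
The abstract does not specify the market mechanism, but a standard automated market maker for such a two-outcome quality contract is Hanson's logarithmic market scoring rule (LMSR); the sketch below shows how trades move the price, which doubles as the market's aggregate probability estimate. The liquidity parameter is an assumption, and LMSR itself is an illustrative choice rather than the dissertation's confirmed design.

```python
import math

# LMSR market maker for a contract such as
# "the release meets its correctness target".
b = 50.0  # liquidity parameter (assumed)

def cost(q_yes: float, q_no: float) -> float:
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes: float, q_no: float) -> float:
    """Instantaneous YES price = the market's aggregate probability."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

q_yes = q_no = 0.0
for shares in [10, 25, -5, 40]:  # a few stakeholders trade YES shares
    pay = cost(q_yes + shares, q_no) - cost(q_yes, q_no)
    q_yes += shares
    print(f"trade {shares:+} YES costs {pay:.2f}, "
          f"price now {price_yes(q_yes, q_no):.3f}")
```
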
8

Wallner, Björn. "Protein Structure Prediction : Model Building and Quality Assessment." Doctoral thesis, Stockholm University, Department of Biochemistry and Biophysics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-649.

Abstract:

Proteins play a crucial role in all biological processes. The wide range of protein functions is made possible through the many different conformations that the protein chain can adopt. The structure of a protein is extremely important for its function, but determining the structure of a protein experimentally is both difficult and time consuming. In fact, with current methods it is not possible to study all the billions of proteins in the world experimentally. Hence, for the vast majority of proteins, the only way to get structural information is through the use of a method that predicts the structure of a protein based on its amino acid sequence.

This thesis focuses on improving current protein structure prediction methods by combining different prediction approaches with machine-learning techniques. This work has resulted in some of the best automatic servers in the world – Pcons and Pmodeller. As a part of the improvement of our automatic servers, I have also developed one of the best methods for predicting the quality of a protein model – ProQ. In addition, I have developed methods to predict the local quality of a protein, based on the structure – ProQres – and based on evolutionary information – ProQprof. Finally, I have performed the first large-scale benchmark of publicly available homology modeling programs.

9

Wallner, Björn. "Protein structure prediction : model building and quality assessment /." Stockholm : Stockholm Bioinformatics Center, Department of Biochemistry and Biophysics, Stockholm University, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-649.

10

Brun, Daniel, and Colin Lawless. "Quality Prediction in Jet Printing Using Neural Networks." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278882.

Abstract:
Surface mount technology is widely used in the manufacturing of commercial electronics, and the demands on the machines increase as the complexity of the electronics increases and the size of the components decreases. Mycronic is a company that focuses on addressing those demands with their high-technology jet printing and pick-and-place machines. This master's thesis has been performed at Mycronic and has focused on the MY700 jet printer. Due to unknown factors, the quality of the ejected solder paste droplets from the machine can vary over time. It was therefore of interest to monitor variables of the MY700 in order to gain more knowledge about the cause of the varying quality, and also to be able to detect substantial changes in deposit quality. In this project, the temperature has been measured at three key locations on the ejector, as well as the current going through the piezoelectric actuator. This data was fed to a neural network in order to make quality predictions with respect to the diameter of the solder paste deposits. Different combinations of sensor data were used to evaluate how the different sensors affected the performance of the neural network. Thereby, a better understanding of how big an impact the different variables had on the quality of the deposits could be achieved. The results indicate that the current was more significant than the temperature for making quality predictions. Using only the temperature data, the neural network was not able to accurately predict quality deviations, whereas with the piezo current data, or both combined, better predictions could be made. The current data also significantly improved the performance of the neural network when printing jobs with varying diameter were used. The conclusion is that none of the three temperature sensors significantly improved the performance, and there were no considerable differences between them, while the current did improve it.
11

Kothawade, Rohan Dilip. "Wine quality prediction model using machine learning techniques." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20009.

Abstract:
The quality of a wine is important for consumers as well as for the wine industry. The traditional (expert-based) way of measuring wine quality is time-consuming. Nowadays, machine learning models are important tools for replacing such human tasks. There are several features that can be used to predict wine quality, but not all of them are relevant for better prediction, so this thesis focuses on which wine features are important for obtaining promising results. For the purpose of building the classification model and evaluating the relevant features, we used three algorithms, namely support vector machine (SVM), naïve Bayes (NB), and artificial neural network (ANN). In this study, we used two wine quality datasets, red wine and white wine. To evaluate feature importance we used the Pearson correlation coefficient, and we used performance measurement metrics such as accuracy, recall, precision, and F1 score to compare the machine learning algorithms. A grid search algorithm was applied to improve model accuracy. Finally, we found that the artificial neural network (ANN) algorithm achieved better prediction results than the support vector machine (SVM) and naïve Bayes (NB) algorithms for both the red wine and white wine datasets.
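
A hedged sketch of the pipeline this abstract describes: a Pearson-correlation feature filter, then a grid search over a neural network classifier. It assumes the UCI red wine file winequality-red.csv is available locally (semicolon-separated, with a 'quality' column); the correlation threshold and parameter grid are illustrative, not the thesis's settings.

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# UCI red wine file: semicolon-separated, one 'quality' score per sample.
df = pd.read_csv("winequality-red.csv", sep=";")
X, y = df.drop(columns="quality"), df["quality"]

# Pearson filter: keep only features with |correlation| above a threshold.
corr = df.corr()["quality"].abs().drop("quality")
X = X[corr[corr > 0.1].index]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(max_iter=2000, random_state=0))
grid = GridSearchCV(
    pipe,
    {"mlpclassifier__hidden_layer_sizes": [(32,), (64,), (64, 32)],
     "mlpclassifier__alpha": [1e-4, 1e-3, 1e-2]},
    cv=5, scoring="f1_weighted",
)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```
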
12

Nébouy, David. "Printing quality assessment by image processing and color prediction models." Thesis, Saint-Etienne, 2015. http://www.theses.fr/2015STET4018/document.

Abstract:
Printing, though an old technique for surface coloration, has progressed considerably over the last decades, especially thanks to the digital revolution. Professionals who want to meet their clients' demands regarding the quality of the visual rendering thus want to know to what extent human observers are sensitive to the degradation of an image. Such questions regarding the perceived quality of a reproduced image can be split into two different topics: the printing quality, i.e. the capacity of a printing system to accurately reproduce an original digital image, and the printed image quality, which results from both the reproduction quality and the quality of the original image itself. The first concept relies on a physical analysis of the way the original image is deteriorated when transferred onto the support, and we propose to couple it with a sensory analysis, which aims at assessing perceptual attributes by giving them a value on a certain scale, determined with respect to reference samples classified by a set of observers. The second concept includes the degradation due to printing plus the perceived quality of the original image, which is not in the scope of this work. In this report, we focus on the printing quality concept. Our approach first consists in the definition of several printing quality indices, based on measurable criteria, using assessment tools based on 'objective' image processing algorithms and optical models applied to a printed-then-scanned image. PhD work carried out in the Hubert Curien Laboratory.
13

Wang, Rui. "Site-specific prediction and measurement of cotton fiber quality." Diss., Mississippi State : Mississippi State University, 2004. http://library.msstate.edu/etd/show.asp?etd=etd-10122004-220250.

14

Kunta, Karika. "Effects of geographic information quality on soil erosion prediction." [S.l.]: [s.n.], 2009. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=18136.

15

Cheng, Shuiyuan. "Multi-dimensional multi-box models for air quality prediction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0017/NQ54669.pdf.

16

Afzal, Wasif. "Search-Based Prediction of Software Quality : Evaluations and Comparisons." Doctoral thesis, Karlskrona : Blekinge Institute of Technology, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00490.

Abstract:
Software verification and validation (V&V) activities are critical for achieving software quality; however, these activities also constitute a large part of the costs when developing software. Therefore, efficient and effective software V&V activities are both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions that affect software quality, e.g., how to allocate testing resources, develop testing schedules and decide when to stop testing, need to be as stable and accurate as possible. The objective of this thesis is to investigate how search-based techniques can support decision-making and help control variation in software V&V activities, thereby indirectly improving software quality. Several themes in providing this support are investigated: predicting the reliability of future software versions based on fault history; fault prediction to improve test phase efficiency; assignment of resources to fixing faults; and distinguishing fault-prone software modules from non-faulty ones. A common element in these investigations is the use of search-based techniques, often also called metaheuristic techniques, for supporting the V&V decision-making processes. Search-based techniques are promising since, like many real-world problems, software V&V can be formulated as an optimization problem where near-optimal solutions are often good enough. Moreover, these techniques are general optimization solutions that can potentially be applied across a larger variety of decision-making situations than other existing alternatives. Apart from presenting the current state of the art, in the form of a systematic literature review, and carrying out comparative evaluations of a variety of metaheuristic techniques on large-scale projects (both industrial and open-source), this thesis also presents methodological investigations using search-based techniques that are relevant to the task of software quality measurement and prediction. The results of applying search-based techniques in large-scale projects, while investigating a variety of research themes, show that they consistently give competitive results in comparison with existing techniques. Based on the research findings, we conclude that search-based techniques are viable for supporting the decision-making processes within software V&V activities. The accuracy and consistency of these techniques make them important tools when developing future decision support for effective management of software V&V activities.
17

Sun, Lingfen. "Speech quality prediction for voice over Internet protocol networks." Thesis, University of Plymouth, 2004. http://hdl.handle.net/10026.1/870.

Abstract:
IP networks are on a steep slope of innovation that will make them the long-term carrier of all types of traffic, including voice. However, such networks are not designed to support real-time voice communication, because their variable characteristics (e.g. delay, delay variation and packet loss) lead to a deterioration in voice quality. A major challenge in such networks is how to measure or predict voice quality accurately and efficiently for QoS monitoring and/or control purposes, to ensure that technical and commercial requirements are met. Voice quality can be measured using either subjective or objective methods. Subjective measurement (e.g. MOS) is the benchmark for objective methods, but it is slow, time consuming and expensive. Objective measurement can be intrusive or non-intrusive. Intrusive methods (e.g. ITU PESQ) are more accurate, but are normally unsuitable for monitoring live traffic because of the need for reference data and to utilise the network. This makes non-intrusive methods (e.g. the ITU E-model) more attractive for monitoring voice quality degradation arising from IP network impairments. However, current non-intrusive methods rely on subjective tests to derive model parameters and as a result are limited and do not meet the needs of new and emerging applications. The main goal of the project is to develop novel and efficient models for non-intrusive speech quality prediction, to overcome the disadvantages of current subjective-based methods and to demonstrate their usefulness in new and emerging VoIP applications. The main contributions of the thesis are fourfold: (1) a detailed understanding of the relationships between voice quality, IP network impairments (e.g. packet loss, jitter and delay) and relevant parameters associated with speech (e.g. codec type, gender and language) is provided. An understanding of the perceptual effects of these key parameters on voice quality is important as it provides a basis for the development of non-intrusive voice quality prediction models. A fundamental investigation of the impact of the parameters on perceived voice quality was carried out using the latest ITU algorithm for perceptual evaluation of speech quality, PESQ, and by exploiting the ITU E-model to obtain an objective measure of voice quality. (2) A new methodology to predict voice quality non-intrusively was developed. The method exploits the intrusive algorithm, PESQ, and a combined PESQ/E-model structure to provide a perceptually accurate prediction of both listening and conversational voice quality non-intrusively. This avoids time-consuming subjective tests and so removes one of the major obstacles in the development of models for voice quality prediction. The method is generic and as such has wide applicability in multimedia applications. Efficient regression-based models and robust artificial neural network-based learning models were developed for predicting voice quality non-intrusively for VoIP applications. (3) Three applications of the new models were investigated: voice quality monitoring/prediction for real Internet VoIP traces, perceived-quality-driven playout buffer optimization and perceived-quality-driven QoS control. The neural network and regression models were both used to predict voice quality for real Internet VoIP traces based on international links. A new adaptive playout buffer algorithm and a perceptual optimization playout buffer algorithm are presented. A QoS control scheme that combines the strengths of rate-adaptive and priority marking control schemes to provide superior QoS control in terms of measured perceived voice quality is also provided. (4) A new methodology for Internet-based subjective speech quality measurement, which allows rapid assessment of voice quality for VoIP applications, is proposed and assessed using both objective and traditional MOS test methods.
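
For reference, the non-intrusive E-model mentioned above maps network impairments to an R-value. The sketch below uses the usual ITU-T G.107/G.113 default constants to turn packet loss into an effective equipment impairment; treat the exact constants as assumptions rather than this thesis's fitted models.

```python
# Hedged sketch of the E-model pieces most relevant here: packet loss ->
# effective equipment impairment Ie_eff -> R-value. Constants are the
# common G.107/G.113 defaults, used here as assumptions.

def r_value(ppl: float, ie: float = 0.0, bpl: float = 25.1,
            burst_r: float = 1.0, idd: float = 0.0) -> float:
    """ppl: packet loss %; ie/bpl: codec impairment/robustness (G.113,
    bpl=25.1 is a G.711-with-PLC-like value); burst_r: loss burstiness
    (1 = random loss); idd: delay impairment."""
    ie_eff = ie + (95.0 - ie) * ppl / (ppl / burst_r + bpl)
    r_default = 93.2  # default R with all impairments at zero
    return r_default - idd - ie_eff

for loss in [0.0, 1.0, 3.0, 5.0]:
    print(f"{loss:.0f}% random loss -> R = {r_value(loss):.1f}")
```
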
18

Lepard, Robert F. (Robert Frederick). "Power quality prediction based on determination of supply impedance." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40171.

19

Hellberg, Johan, and Kasper Johansson. "Building Models for Prediction and Forecasting of Service Quality." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-295617.

Abstract:
In networked systems engineering, operational data gathered from sensors or logs can be used to build data-driven functions for performance prediction, anomaly detection, and other operational tasks [1]. Future telecom services will share a common communication and processing infrastructure in order to achieve cost-efficient and robust operation. A critical issue will be to ensure service quality, whereby different services have very different requirements. Thanks to recent advances in computing and networking technologies we are able to collect and process measurements from networking and computing devices, in order to predict and forecast certain service qualities, such as video streaming or data stores. In this paper we examine these techniques, which are based on statistical learning methods. In particular we will analyze traces from testbed measurements and build predictive models. A detailed description of the testbed, which is localized at KTH, is given in Section II, as well as in [2].
Bachelor's thesis in electrical engineering, 2020, KTH, Stockholm.
20

Fotio Tiotsop, Lohic. "Optimizing Perceptual Quality Prediction Models for Multimedia Processing Systems." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2970982.

21

Cairns, Stefan H. 1949. "Eutrophication Monitoring and Prediction." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc277850/.

Abstract:
Changes in trophic status are often related to increases or decreases in the allochthonous inputs of nutrients from changes in land use and management practices. Lake and reservoir managers are continually faced with the questions of what to monitor, how to monitor it, and how much change is necessary to be considered significant. This study is a compilation of four manuscripts addressing these questions, using data from six reservoirs in Texas.
22

Wu, Guoli. "Accruals Quality and the Prediction of Earnings and Cash Flows." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511863.

23

Ackerman, Mattheus Johannes. "Steel slab surface quality prediction using neural networks." Thesis, North-West University, 2003. http://hdl.handle.net/10394/382.

Abstract:
Columbus Stainless grinds the majority of the steel slabs that are produced to improve the surface quality. However, the surface quality of some slabs is good enough not to be ground. If a reliable method can be found to identify these slabs, the production costs associated with grinding can be saved. Initially slabs were selected manually based on knowledge of the process parameters that affect the steel surface quality. This was not successful and may have been due to the interaction between variables and non-linear effects that were not taken into account. A neural network approach was therefore considered. A multilayer perceptron neural network was used for defect prediction. The neural network is trained by repeatedly attempting to match input data to the corresponding output data. Linear regression and decision tree models were also trained for comparison. The neural networks performed the best. The effectiveness of the models was tested using a test data set (data not used during the training of the model) and the neural networks gave high levels of accuracy (greater than 75% for both defect and no-defect cases). A committee of models was also trained, but this did not improve the prediction accuracy. Neural networks provided a powerful tool to predict the slab surface quality. This has enabled Columbus Stainless to limit the deterioration in the steel quality associated with non-grinding of slabs.
Thesis (M.Ing. (Mechanical Engineering))--North-West University, Potchefstroom Campus, 2004.
24

Yusof, Norzan Mohd. "Environmental load versus concrete quality : prediction of structure's design life." Thesis, University of Birmingham, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323997.

25

Zhang, Yangyue. "Water quality prediction for recreational use of Kranji Reservoir, Singapore." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66848.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2011.
Singapore has been making efforts to relieve its water shortage problems and has made great progress through its holistic water management. Via the Active, Beautiful, Clean Waters (ABC Waters) Programme, Singapore's Public Utilities Board (PUB) is now aiming to open Kranji Reservoir for recreation. Considering the potential contamination of freshwater, particularly by fecal coliform, which threatens public health by causing water-borne diseases, a practical microbial water quality prediction program has been built to evaluate the safety of recreational use of Kranji Reservoir. E. coli bacteria concentrations within the reservoir were adopted as an indicator of recreational water quality. Dynamic fate-and-transport modeling of E. coli concentrations along the reservoir was carried out using the Water Quality Analysis Simulation Program (WASP). The model was constructed by specifying basic hydraulic parameters. E. coli loadings were indexed to the various land uses within the Kranji Catchment, and the effective E. coli bacterial decay rates were derived from theoretical equations and verified by on-site attenuation studies carried out in Singapore. Simulation results from the WASP model are consistent with samples collected and analyzed for E. coli concentration in Kranji Reservoir in January 2011. The simulation results indicate a potentially high risk in using the reservoir's three tributaries for water-contact recreation. The model also shows advective flow through the reservoir to be a major contributor to the concentration changes along the reservoir. A prototype of a practical early warning system for recreational use of Kranji Reservoir has been designed based on the implementation of the model.
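
A minimal stand-in for one step of the fate-and-transport idea: a single well-mixed reservoir segment with advective flushing and first-order E. coli die-off. All numbers are invented for illustration and are not the Kranji calibration, nor is this WASP itself.

```python
# One well-mixed segment: dC/dt = (Q/V) * (C_in - C) - k * C
V = 2.0e6      # segment volume, m^3 (assumed)
Q = 2.0e5      # through-flow, m^3/day (assumed)
k = 0.8        # first-order decay rate, 1/day (assumed)
C_in = 5000.0  # inflow concentration, CFU/100 mL (assumed)

dt, days = 0.1, 30.0
C = 100.0      # initial in-reservoir concentration (assumed)
for _ in range(int(days / dt)):
    # forward-Euler step of the mass balance above
    C += dt * ((Q / V) * (C_in - C) - k * C)

steady = (Q / V) * C_in / (Q / V + k)
print(f"after {days:.0f} d: {C:.0f} CFU/100mL "
      f"(analytic steady state {steady:.0f})")
```
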
26

Ekholmer, Henrik. "Prediction and Optimization of Paper Quality Properties in Paper Manufacturing." Thesis, KTH, Optimeringslära och systemteori, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-200383.

Abstract:
A problem in the paper industry is that most paper quality properties can only be measured in the lab after a full tambour of paper has been produced. A tambour is normally about 20-40 km long and takes about one hour to produce. This means an hour of production and several kilometres of paper may be wasted due to poor paper quality. To reduce this problem, prediction models can be used to estimate the paper quality properties on-line. By using these models, a control strategy can be developed which ensures that the paper quality properties are fulfilled. Optimization has here been used to find a control strategy that minimizes the cost of producing paper with the desired quality properties. In this thesis, the focus has been on finding models for the prediction of paper quality properties, which includes synchronizing data from different parts of the paper machine and the lab, variable selection, and filtering. Focus has also been on minimizing production cost by utilizing the models of the paper quality properties. A sensitivity analysis has been done for a number of variables in order to increase the understanding of the optimization.
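
The control strategy described above can be sketched as a constrained optimization: minimize production cost subject to a fitted quality model meeting its target. The surrogate model, actuator names and coefficients below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy version of the control idea: given a fitted model predicting a paper
# quality property from two actuators (refining energy and additive dosage
# are assumed names), find the cheapest settings that meet the target.

def predicted_quality(u):   # linear surrogate for the fitted model
    return 20.0 + 4.0 * u[0] + 2.5 * u[1]

def cost(u):                # production cost per tonne (invented)
    return 3.0 * u[0] + 1.0 * u[1]

target = 40.0
res = minimize(
    cost,
    x0=np.array([2.0, 2.0]),
    constraints=[{"type": "ineq",
                  "fun": lambda u: predicted_quality(u) - target}],
    bounds=[(0.0, 10.0), (0.0, 10.0)],
)
print(res.x, cost(res.x), predicted_quality(res.x))
```
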
27

Soltani, Behdad. "Model Based Quality Prediction in Fluidised-Bed Dryers for Yeast." Thesis, The University of Sydney, 2019. https://hdl.handle.net/2123/21373.

Abstract:
The key aims of this work were to 1) develop a numerical model that would be capable of predicting fluidised-bed dryer operating parameters (including bed temperature, air humidity, and solids moisture content), 2) find the key parameters affecting the viability of yeast, and 3) predict the viability of yeast throughout the drying process. The numerical model developed uses the reaction engineering approach to estimate the drying kinetics and has a strong physical basis with the only fitted parameters being those used for the GAB isotherm. Agreement between model predictions and experimental data was excellent, the maximum root-mean-square errors were 3.1 °C, 1.8 g of water per kg of dry air, and 2.3% in the bed temperature, air humidity and the final moisture content on a wet-basis, respectively. In the second portion of this study, the effect of drying conditions on the viability of yeast cells was studied to gain an improved understanding of the mechanisms affecting yeast viability during fluidised-bed drying. The results of this research showed the major viability losses (dead cells) only occurred when the moisture content on a wet-basis was below 15%. It was also found lower bed temperatures (30-40 °C) resulted in fewer compromised cells than higher bed temperatures (above 40 °C) for moisture contents below 15%. In the final chapter, a response surface model has been fitted using IBM SPSS® (V.24) to predict the viability as a function of both temperature and moisture content, wet-basis. The response surface model was combined with the numerical model to give predictions of the viability as a function of the operating conditions. The model predictions and experimental observations showed good agreement, with the root-mean-square error in the viability predictions being less than 3%.
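
Since the GAB isotherm supplies the dryer model's only fitted parameters, a small sketch of that isotherm may help; the parameter values below are placeholders, not the thesis's fitted values for yeast.

```python
# GAB sorption isotherm: equilibrium moisture content as a function of
# water activity aw. X_m, C and K are placeholder parameters (assumed).

def gab_moisture(aw: float, X_m: float = 0.06,
                 C: float = 10.0, K: float = 0.95) -> float:
    """Equilibrium moisture (kg water per kg dry solid) for activity aw."""
    return X_m * C * K * aw / ((1.0 - K * aw) * (1.0 - K * aw + C * K * aw))

for aw in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"aw = {aw:.1f} -> X_eq = {gab_moisture(aw):.4f} kg/kg")
```
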
28

Andersson, Martin. "Parametric Prediction Model for Perceived Voice Quality in Secure VoIP." Thesis, Linköpings universitet, Informationskodning, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-127402.

Abstract:
More and more sensitive information is communicated digitally, and with that comes the demand for security and privacy in the services being used. An accurate QoS metric for these services is of interest both for the customer and the service provider. This thesis has investigated the impact of different parameters on the perceived voice quality for encrypted VoIP, using a PESQ score as the reference value. Based on this investigation, a parametric prediction model has been developed which outputs an R-value comparable to that of the widely used E-model from the ITU. This thesis can further be seen as a template for how to construct models for equipment or codecs other than those evaluated here, since they affect the result but are hard to parametrise. The results of the investigation are consistent with previous studies regarding the impact of packet loss; the impact of jitter is shown to be significant above 40 ms. The results from three different packetizers are presented, which illustrates the need to take such aspects into consideration when constructing a model to predict voice quality. The model derived from the investigation performs well, with no mean error and a standard deviation of the error of a mere 1.45 R-value units when validated under conditions to be expected in GSM networks. When validated against an emulated 3G network the standard deviation is even lower.
29

Magwaza, Lembe Samukelo. "Non-destructive prediction and monitoring of postharvest quality of citrus fruit." Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85578.

Abstract:
Thesis (PhD(Agric))--Stellenbosch University, 2013.
The aim of this study was to develop non-destructive methods to predict the external and internal quality of citrus fruit. A critical review of the literature identified presymptomatic biochemical markers associated with non-chilling rind physiological disorders. The prospects for the use of visible to near infrared spectroscopy (Vis/NIRS) as a non-destructive technology to sort affected fruit were also reviewed. Initial studies were conducted to determine the optimum conditions for NIRS measurements and to evaluate the accuracy of this technique and the associated chemometric analysis. It was found that emission head spectroscopy in diffuse reflectance mode could predict fruit mass, colour index, total soluble solids, and vitamin C with high accuracy. Vis/NIRS was used to predict postharvest rind physico-chemical properties related to rind quality and the susceptibility of 'Nules Clementine' to rind breakdown disorder (RBD). Partial least squares (PLS) statistics demonstrated that rind colour index, dry matter (DM) content, total carbohydrates, and water loss were predicted accurately. Chemometric analysis showed that optimal PLS model performance for DM, sucrose, glucose, and fructose was obtained using models based on multiplicative scatter correction (MSC) spectral pre-processing. The critical step in evaluating the feasibility of Vis/NIRS was to test the robustness of the calibration models across orchards from four growing regions in South Africa over two seasons. Studies on the effects of microclimatic conditions predisposing fruit to RBD showed that fruit inside the canopy, especially artificially bagged fruit, had lower DM, higher mass loss, and were more susceptible to RBD. The study suggested that variations in microclimatic conditions between seasons, as well as within the tree canopy, affect the biochemical profile of the rind, which in turn influences the fruit's response to postharvest stresses associated with senescence and susceptibility to RBD. Principal component analysis (PCA) and PLS discriminant analysis (PLS-DA) models were applied to distinguish between fruit from inside and outside the tree canopy using the Vis/NIRS signal, suggesting the possibility of using this technology to discriminate between fruit based on their susceptibility to RBD. Results from the application of optical coherence tomography (OCT), a novel non-destructive technology for imaging histological changes in biological tissues, showed promise as a potential technique for immediate, real-time acquisition of images of the rind's anatomical features in citrus fruit. The study also demonstrated the potential of Vis/NIRS as a non-destructive tool for sorting citrus fruit based on external and internal quality.
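
A hedged sketch of the preprocessing/model pairing the study found optimal: multiplicative scatter correction followed by PLS regression, here applied to synthetic spectra standing in for Vis/NIR measurements of rind dry matter; everything about the data is an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)

# Synthetic spectra: a chemistry signal scaled by dry matter, plus an
# additive/multiplicative scatter term and noise (all invented).
n_samples, n_wl = 120, 200
base = np.sin(np.linspace(0, 6, n_wl))
dm = rng.uniform(15, 30, n_samples)                       # dry matter %
spectra = (np.outer(dm / 20.0, base)
           + rng.uniform(0.8, 1.2, (n_samples, 1)) * 0.5  # scatter
           + rng.normal(0, 0.01, (n_samples, n_wl)))

def msc(X: np.ndarray) -> np.ndarray:
    """MSC: regress each spectrum on the mean spectrum, then remove the
    fitted offset and slope."""
    ref = X.mean(axis=0)
    out = np.empty_like(X)
    for i, row in enumerate(X):
        slope, intercept = np.polyfit(ref, row, 1)
        out[i] = (row - intercept) / slope
    return out

pls = PLSRegression(n_components=5)
pls.fit(msc(spectra), dm)
pred = pls.predict(msc(spectra)).ravel()
print("calibration RMSE:", np.sqrt(np.mean((pred - dm) ** 2)))
```
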
APA, Harvard, Vancouver, ISO, and other styles
30

Pham, Van Tan. "Prediction of Change in Quality of 'Cripps Pink' Apples during Storage." Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/5133.

Full text
Abstract:
The goal of this research was to investigate changes in the physiological properties, including firmness, stiffness, weight, background colour, ethylene production and respiration, of ‘Cripps Pink’ apple stored under different temperature and atmosphere conditions. This research also sought to establish mathematical models for the prediction of changes in firmness and stiffness of the apple during normal atmosphere (NA) storage. Experiments were conducted to determine the quality changes in ‘Cripps Pink’ apple under three sets of storage conditions. The first set consisted of NA storage at 0°C, 2.5°C, 5°C, 10°C, 20°C and 30°C. In the second set the apples were placed in NA cold storage at 0°C for 61 days, followed by NA storage at the aforementioned six temperatures. The third set consisted of controlled atmosphere (CA) storage (2 kPa O2 : 1 kPa CO2) at 0°C for 102 days, followed by NA storage at the six temperatures mentioned previously. The firmness, stiffness, weight loss, skin colour, ethylene and carbon dioxide production of the apples were monitored at specific time intervals during storage. Firmness was measured using a HortPlus Quick Measure Penetrometer (HortPlus Ltd, Hawke's Bay, New Zealand); stiffness was measured using a commercial acoustic firmness sensor, the AFS (AWETA, Nootdorp, The Netherlands). Experimental data analysis was performed using the GraphPad Prism 4.03 (2005) software package. The least-squares method and iterative non-linear regression were used to model and simulate changes in firmness and stiffness in GraphPad Prism 4.03 (2005) and DataFit 8.1 (2005). The experimental results indicated that the firmness and stiffness of ‘Cripps Pink’ apple stored in NA decreased with increases in temperature and time. Under NA, the softening pattern was tri-phasic for apples stored at 0°C, 2.5°C and 5°C for firmness, and at 0°C and 2.5°C for stiffness. However, there were only two softening phases for apples stored at higher temperatures. NA at 0°C, 2.5°C and 5°C improved skin background colour and extended the storage ability of apples compared to higher temperatures. CA during the first stage of storage better maintained the firmness and stiffness of the apples; however, it reduced subsequent ethylene and carbon dioxide (CO2) production after removal from storage. Steep increases in ethylene and CO2 production coincided with rapid softening of the fruit flesh and yellowing of the skin background colour under NA conditions. The exponential decay model was the best model for predicting changes in the firmness, stiffness and keeping quality of the apples: it satisfied the biochemical theory of softening in the apple and gave the best fit to the experimental data collected over the wide range of temperatures. The softening rate increased exponentially with storage temperature, in agreement with the Arrhenius equation. A combination of the exponential decay model with the Arrhenius equation was therefore found to best characterise the softening process and to predict changes in the firmness and stiffness of apples stored at different temperatures under NA conditions.
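The abstract's closing point, an exponential decay model whose rate constant follows the Arrhenius equation, can be written as F(t) = F_inf + (F0 - F_inf) * exp(-k(T) * t), with k(T) = k_ref * exp(-(Ea/R) * (1/T - 1/T_ref)). The Python sketch below fits such a model on synthetic data; the reference temperature and parameter values are assumptions for illustration, not the thesis's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J mol^-1 K^-1

def firmness(X, F0, Finf, kref, Ea):
    """Exponential decay with an Arrhenius-type rate constant.
    X = (t, T): storage time (days) and temperature (K)."""
    t, T = X
    Tref = 273.15  # assumed reference temperature (0 degrees C)
    k = kref * np.exp(-Ea / R * (1.0 / T - 1.0 / Tref))
    return Finf + (F0 - Finf) * np.exp(-k * t)

# illustrative (time, temperature) measurements with noise
t = np.array([0, 20, 40, 60, 80] * 2, dtype=float)
T = np.array([273.15] * 5 + [293.15] * 5)
rng = np.random.default_rng(1)
F = firmness((t, T), 80.0, 30.0, 0.02, 60e3) + rng.normal(0, 1, t.size)

popt, _ = curve_fit(firmness, (t, T), F, p0=[80, 30, 0.02, 50e3])
print(dict(zip(["F0", "Finf", "kref", "Ea"], popt)))
```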
APA, Harvard, Vancouver, ISO, and other styles
31

Pham, Van Tan. "Prediction of Change in Quality of 'Cripps Pink' Apples during Storage." University of Sydney, 2008. http://hdl.handle.net/2123/5133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

TANNEEDI, NAREN NAGA PAVAN PRITHVI. "Customer Churn Prediction Using Big Data Analytics." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13518.

Full text
Abstract:
Customer churn is always a grievous issue for the telecom industry, as customers do not hesitate to leave if they don’t find what they are looking for. They want competitive pricing, value for money and, above all, high quality service. Customer churn is directly related to customer satisfaction, and since the cost of customer acquisition is far greater than the cost of customer retention, retention is a crucial business priority. There is no standard model that accurately addresses the churn of global telecom service providers. Big Data analytics with machine learning was found to be an efficient way of identifying churn. This thesis aims to predict customer churn using Big Data analytics, namely a J48 decision tree in WEKA, a Java-based benchmark tool. Three different datasets from various sources were considered: the first includes a telecom operator’s six-month aggregate data usage volumes for active and churned users; the second includes globally surveyed data; and the third comprises individual weekly data usage of 22 Android customers, along with their average quality, annoyance and churn scores from accompanying theses. Statistical analyses and J48 decision trees were produced for the three datasets. In the statistics of normalized volumes, autocorrelations were small with reliable confidence intervals, but the confidence intervals were overlapping and close together, so little significance and no strong trends could be observed. The decision trees achieved accuracies of 52%, 70% and 95% for the three data sources, respectively. Data preprocessing, data normalization and feature selection were shown to be markedly influential. Monthly data volumes did not show much decision power. Average quality, churn risk and, to some extent, annoyance scores may point out a probable churner. Weekly data volumes with the customer’s recent history and attributes such as age, gender, tenure, bill, contract and data plan are pivotal for churn prediction.
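J48 is WEKA's implementation of the C4.5 decision tree. As a rough analogue of the classification step described above (scikit-learn's entropy-based CART rather than C4.5 itself, on invented data; the feature names are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# illustrative features: weekly data volume, tenure, quality score, annoyance score
X = rng.random((500, 4))
y = (X[:, 0] < 0.3).astype(int)            # placeholder churn label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5)
tree.fit(X_tr, y_tr)
print(f"accuracy: {tree.score(X_te, y_te):.2f}")
```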
APA, Harvard, Vancouver, ISO, and other styles
33

Yu, Libo. "Consensus Fold Recognition by Predicted Model Quality." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1124.

Full text
Abstract:
Protein structure prediction has been a fundamental challenge in the biological field. In this post-genomic era, the need for automated protein structure prediction has never been more evident, and researchers are now focusing on developing computational techniques to predict three-dimensional structures with high throughput. Consensus-based protein structure prediction methods are state-of-the-art in automatic protein structure prediction. A consensus-based server combines the outputs of several individual servers and tends to generate better predictions than any individual server. Consensus-based methods have proved successful in recent rounds of CASP (Critical Assessment of Structure Prediction). In this thesis, a Support Vector Machine (SVM) regression-based consensus method is proposed for protein fold recognition, a key component of high-throughput protein structure prediction and protein function annotation. The SVM first extracts the features of a structural model by comparing the model to the other models produced by all the individual servers. Then, the SVM predicts the quality of each model. The experimental results from several LiveBench data sets confirm that our proposed consensus method, SVM regression, consistently performs better than any individual server. Based on this method, we developed a meta server, the Alignment by Consensus Estimation (ACE).
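A hedged sketch of the consensus idea: each candidate structural model is described by features computed from its similarity to the models returned by the other servers, and an SVM regressor scores its quality. The feature names and data here are illustrative, not those of the thesis.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# assumed features per model: mean and max pairwise structural similarity
# to the other servers' models, and fraction of servers agreeing on the fold
X = rng.random((200, 3))
quality = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.05, 200)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, quality)
scores = svr.predict(X[:5])                 # score five candidate models
print("selected model:", int(np.argmax(scores)))
```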
APA, Harvard, Vancouver, ISO, and other styles
34

Hebert, Courtney L. "Leveraging the electronic problem list for public health research and quality improvement." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1385129530.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Khan, Asiya. "Video quality prediction for video over wireless access networks (UMTS and WLAN)." Thesis, University of Plymouth, 2011. http://hdl.handle.net/10026.1/893.

Full text
Abstract:
Transmission of video content over wireless access networks (in particular, Wireless Local Area Networks (WLAN) and Third Generation Universal Mobile Telecommunication System (3G UMTS)) is growing exponentially and gaining popularity, and is predicted to open new revenue streams for mobile network operators. However, the success of these video applications over wireless access networks depends very much on meeting the user’s Quality of Service (QoS) requirements. Thus, it is highly desirable to be able to predict and, if appropriate, to control video quality to meet the user’s QoS requirements. Video quality is affected by distortions caused by the encoder and the wireless access network. The impact of these distortions is content dependent, but this feature has not been widely used in existing video quality prediction models. The main aim of the project is the development of novel and efficient models for non-intrusive video quality prediction for low bitrate and low resolution videos, and the demonstration of their application in QoS-driven adaptation schemes for mobile video streaming applications. This led to five main contributions: (1) A thorough understanding of the relationships between video quality, wireless access network (UMTS and WLAN) parameters (e.g. packet/block loss, mean burst length and link bandwidth), encoder parameters (e.g. sender bitrate, frame rate) and content type. An understanding of these relationships and interactions and their impact on video quality is important, as it provides a basis for the development of non-intrusive video quality prediction models. (2) A new content classification method based on statistical tools, since content type was found to be the most important parameter. (3) Efficient regression-based and artificial neural network-based learning models for video quality prediction over WLAN and UMTS access networks. The models are lightweight (they can be implemented in real-time monitoring) and provide a measure of user-perceived quality without time-consuming subjective tests. The models have potential applications in several other areas, including QoS control and optimization in network planning and content provisioning for network/service providers. (4) An investigation of the applications of the proposed regression-based models in (i) the optimization of content provisioning and network resource utilization and (ii) a new fuzzy sender-bitrate adaptation scheme at the sender side over WLAN and UMTS access networks. (5) Finally, Internet-based subjective tests that capture distortions caused by the encoder and the wireless access network for different types of content. The database of subjective results has been made available to the research community, as there is a lack of subjective video quality assessment databases.
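As a hedged illustration of contribution (3), the sketch below trains a small neural network regressor mapping encoder and network parameters to a MOS-like quality score. The parameter ranges and data are invented, not the thesis's datasets.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# columns: sender bitrate (kbps), frame rate (fps), packet loss (%), mean burst length
X = rng.random((300, 4)) * [512, 30, 20, 5]
mos = np.clip(4.5 - 0.12 * X[:, 2] + 0.002 * X[:, 0]
              + rng.normal(0, 0.1, 300), 1, 5)   # synthetic MOS target

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,),
                                   max_iter=2000, random_state=0))
model.fit(X, mos)
print(model.predict([[256, 25, 5, 2]]))          # predicted MOS for one condition
```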
APA, Harvard, Vancouver, ISO, and other styles
36

Stineburg, Jeffrey. "Software reliability prediction based on design metrics." Virtual Press, 1999. http://liblink.bsu.edu/uhtbin/catkey/1154775.

Full text
Abstract:
This study presents a new model for predicting software reliability based on design metrics. An introduction to the problem of software reliability is followed by a brief overview of software reliability models, including a discussion of some of the issues associated with them. The intractability of validating life-critical software is presented: such validation is shown to require extended periods of test time that are impractical in real-world situations. This problem is also inherent in fault-tolerant software systems of the type currently being implemented in critical applications. The design metric developed at Ball State University is proposed as the basis of a new model for predicting software reliability from information available during the design phase of development. The thesis investigates the proposition that a relationship exists between the design metric D(G) and the errors that are found in the field. A study, performed on a subset of a large defense software system, discovered evidence to support the proposition.
Department of Computer Science
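The core proposition, a relationship between D(G) and field errors, amounts at its simplest to a correlation test. A minimal sketch with invented per-module data (not the thesis's defense-system data):

```python
import numpy as np
from scipy.stats import pearsonr

dg = np.array([2.1, 3.4, 1.2, 4.8, 2.9, 3.7, 1.5, 4.1])  # D(G) per module
field_errors = np.array([1, 3, 0, 6, 2, 4, 1, 5])         # errors found in the field

r, p = pearsonr(dg, field_errors)
print(f"r = {r:.2f}, p = {p:.3f}")  # a small p supports the proposed relationship
```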
APA, Harvard, Vancouver, ISO, and other styles
37

Campean, Ioan Felician. "Product reliability analysis and prediction : applications to mechanical systems." Thesis, Bucks New University, 1998. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.714448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Steele, Clint. "The prediction and management of the variability of manufacturing operations." Australasian Digital Theses Program, 2005. http://adt.lib.swin.edu.au/public/adt-VSWT20060815.151147.

Full text
Abstract:
Thesis (PhD) - Swinburne University of Technology, 2005.
Submitted in fulfillment of the requirements for the degree of Doctor of Philosophy, Swinburne University of Technology - 2005. Typescript. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
39

Cencerrado, Barraqué Andrés. "Methodology for time response and quality assessment in natural hazards evolution prediction." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/284023.

Full text
Abstract:
This thesis describes a methodology for time response and quality assessment in natural hazards evolution prediction. This work has been focused on the specific case of forest fires as an important and worrisome catastrophe, but it can easily be extrapolated to all other kinds of natural hazards. There exist many prediction frameworks based on the use of simulators of the evolution of the hazard. Given the increasing computing capabilities allowed by new computing advances such as multicore and manycore architectures, and even distributed-computing paradigms, such as Grid and Cloud Computing, the need arises to be able to properly exploit the computational power they offer. This goal is fulfilled by introducing the capability to assess in advance how the present constraints at the time of attending to an ongoing forest fire will affect the results obtained from them, both in terms of quality (accuracy) obtained and time needed to make a decision, and therefore being able to select the most suitable configuration of both the prediction strategy and computational resources to be used. As a consequence, the framework derived from the application of this methodology is not supposed to be a new Decision Support System (DSS) for fire departments and Civil Protection agencies, but a tool from which most of forest fire (and other kinds of natural hazards) DSSs could benefit notably. The problem has been tackled by means of characterizing the behavior of these two factors during the prediction process. For this purpose, a two-stage prediction framework is presented and considered as a suitable and powerful strategy to enhance the quality of the predictions. This methodology involves dealing with Artificial Intelligence techniques, such as Genetic Algorithms and Decision Trees and also relies on a strong statistical study from training databases, composed of the results of thousands of different simulations. The results obtained in this long-term research work are fully satisfactory, and give rise to several new challenges. Moreover, the flexibility offered by the methodology allows it to be applied to other kinds of emergency contexts, which turns it into an outstanding and very useful tool in fighting against these catastrophes.
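A heavily simplified sketch of the two-stage scheme described above: a calibration stage searches (here with a toy genetic algorithm) for simulator inputs that best reproduce the observed past evolution, and the winning individual drives the prediction stage. The simulator below is a stand-in, not a real fire-spread model, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(params, hours):
    """Stand-in for a fire-spread simulator; params = (wind, moisture)."""
    wind, moisture = params
    return hours * wind * (1.0 - moisture)      # toy spread area

observed = simulate((12.0, 0.3), hours=2.0)     # "real" past evolution

pop = rng.uniform([0, 0], [20, 1], size=(30, 2))
for _ in range(40):                             # calibration stage
    err = np.abs([simulate(p, 2.0) - observed for p in pop])
    parents = pop[np.argsort(err)[:10]]         # elitist selection
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.2, (20, 2))
    pop = np.vstack([parents, np.clip(children, [0, 0], [20, 1])])

best = pop[np.argmin([abs(simulate(p, 2.0) - observed) for p in pop])]
print("predicted spread at +4 h:", simulate(best, 4.0))  # prediction stage
```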
APA, Harvard, Vancouver, ISO, and other styles
40

Hansen, Martin. "Assessment and prediction of speech transmission quality with an auditory processing model." [S.l. : s.n.], 1998. http://deposit.ddb.de/cgi-bin/dokserv?idn=958448523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Hong, Jeong Jin. "Multivariate statistical modelling for fault analysis and quality prediction in batch processes." Thesis, University of Newcastle Upon Tyne, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.576960.

Full text
Abstract:
Multivariate statistical process control (MSPC) has emerged as an effective technique for monitoring processes with a large number of correlated process variables. MSPC techniques use principal component analysis (PCA) and partial least squares (PLS) to project the high-dimensional correlated process variables onto a low-dimensional principal component or latent variable space, and process monitoring is carried out in this low-dimensional space. This study is focused on developing enhanced MSPC techniques for fault diagnosis and quality prediction in batch processes. A progressive modelling method is developed to facilitate fault analysis and fault localisation. A PCA model is developed from normal process operation data and is used for on-line process monitoring. Once a fault is detected by the PCA model, process variables that are related to the fault are identified using contribution analysis. The time at which abnormalities occurred in these variables is identified from a time-series plot of the squared prediction errors (SPE) of these variables. These variables are then removed and another PCA model is developed using the remaining variables. If the faulty batch cannot be detected by the new PCA model, the remaining variables are not related to the fault. If the faulty batch can still be detected, further variables associated with the fault are identified from SPE contribution analysis. The procedure is repeated until the faulty batch can no longer be detected using the remaining variables. Multi-block methods are then applied with the progressive modelling scheme to enhance fault analysis and localisation efficiency. The methods are tested on a benchmark simulated penicillin production process and real industrial data. An enhanced multi-block PLS predictive modelling method is also developed, based on the hypothesis that meaningful variable selection can lead to better prediction performance. A data partitioning method for enhanced predictive process modelling is proposed, which enables data to be separated into blocks by different measuring times. Model parameters can be used to express contributions
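The detection-plus-contribution step described above can be sketched in a few lines: fit PCA on normal operation data, compute the squared prediction error (SPE, also called the Q statistic) of a new observation, and rank per-variable contributions. The data below are synthetic; a real application would use unfolded batch trajectories.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X_normal = rng.normal(size=(100, 10))          # normal operation data
pca = PCA(n_components=3).fit(X_normal)

x_new = rng.normal(size=10)
x_new[6] += 4.0                                # inject a fault in variable 6

reconstructed = pca.inverse_transform(pca.transform(x_new[None, :]))[0]
residual = x_new - reconstructed
spe = float(np.sum(residual**2))               # SPE / Q statistic
contrib = residual**2                          # per-variable SPE contribution
print(f"SPE = {spe:.2f}, top contributor: variable {int(np.argmax(contrib))}")
```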
APA, Harvard, Vancouver, ISO, and other styles
42

Simfukwe, Paul. "Role of conventional soil classification in the prediction of soil quality indicators." Thesis, Bangor University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.529749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Jolley, Bianca. "Development of quality control tools and a taste prediction model for rooibos." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95991.

Full text
Abstract:
Thesis (MScFoodSc)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: In this study quality control tools were developed for the rooibos industry, primarily to determine the quality of rooibos infusions. Considerable variation between samples of the same quality grade has been noted, and as there are no guidelines or procedures in place to help minimise this inconsistency, it was important to develop quality control tools that could address this problem. Both the sensory characteristics and the phenolic composition of rooibos infusions were analysed in order to create and validate these quality control tools. Descriptive sensory analysis was used to develop a targeted sensory wheel and sensory lexicon, to be used as quality control tools by the rooibos industry, and to validate the major rooibos sensory profiles. To ensure that all possible variation was taken into account, 230 fermented rooibos samples were sourced from the Northern Cape and Western Cape areas of South Africa over a 3-year period (2011-2013). The aroma, flavour, taste and mouthfeel attributes found to be associated with rooibos sensory quality were validated and assembled into a rooibos sensory wheel, which included the average intensity, as well as the percentage occurrence, of each attribute. Two major characteristic sensory profiles prevalent within rooibos, namely the primary and secondary profiles, were identified. Both profiles had a sweet taste and an astringent mouthfeel; however, the primary sensory profile is predominantly made up of “rooibos-woody”, “fynbos-floral” and “honey” aroma notes, while “fruity-sweet”, “caramel” and “apricot” aroma notes are the predominant sensory attributes of the secondary profile. The predictive value of the phenolic compounds of the infusions for the taste and mouthfeel attributes (“sweet”, “sour”, “bitter” and “astringent”) was examined using different regression analyses, namely Pearson’s correlation, partial least squares (PLS) regression and step-wise regression. Correlations between individual phenolic compounds and the taste and mouthfeel attributes were found to be significant, but low. Although a large sample set (N = 260) spanning 5 years (2009-2013) and two production areas (Western Cape and Northern Cape, South Africa) was used, no individual phenolic compound could be singled out as being responsible for a specific taste or mouthfeel attribute. Furthermore, no difference was found between the phenolic compositions of the infusions based on production area, a trend that was also seen for the sensory characterisation of rooibos infusions. Sorting, a rapid sensory profiling method, was evaluated for its potential use as a quality control tool for the rooibos industry. Instructed sorting successfully determined rooibos sensory quality, especially based on the aroma quality of the infusions. However, determining the quality of the infusion based on flavour quality was more difficult, possibly due to the low sensory attribute intensities. Categorisation of rooibos samples based on the two major aroma profiles, i.e. the primary and secondary characteristic profiles, was achieved with uninstructed sorting. The potential of sorting as a rapid technique to determine both quality and characteristic aroma profiles was therefore demonstrated, indicating its relevance as a further quality control tool for the rooibos industry.
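A hedged sketch of the PLS step relating phenolic composition to a taste attribute. The compound levels and the attribute values are simulated, and the number of latent variables is an assumption, not the thesis's choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.random((260, 15))                      # phenolic compound levels
astringency = X @ rng.random(15) * 0.2 + rng.normal(0, 0.5, 260)

pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, X, astringency, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")  # a low R^2 mirrors the finding
```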
APA, Harvard, Vancouver, ISO, and other styles
44

Qiu, D. (Daoying). "Evaluation and prediction of content quality in stack overflow with logistic regression." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201510282094.

Full text
Abstract:
Collaborative questioning and answering (CQA) sites such as Stack Overflow are platforms in which community members can ask and answer questions as well as interact with questions and answers. A question may receive multiple answers, but only one may be selected as the best answer, meaning that this answer is the most suitable for the given question. For the purpose of effective information retrieval, it is beneficial to automatically predict and select the best answer. This thesis presents a study that evaluates content quality in a CQA site using logistic regression and features extracted from questions, answers and user information. By reviewing previous research, all features that could be used to evaluate and predict the quality of content in the research case were identified. Stack Overflow was chosen as the research case and a sample of questions and answers was extracted for further analysis. Human-rated question scores were obtained with the assistance of three people working in the field of information technology. Various features from questions, answers and owners’ information were used to train classifiers to choose the best answer or identify high quality questions. The results indicate that the models built in this research for evaluating answer quality have high predictive ability and strong robustness, while the models for evaluating question quality have low predictive ability. In addition, it is demonstrated that several features from questions, answers and owners’ information, such as an owner’s reputation points and question or answer scores, can be valuable components in evaluating and predicting content quality, but the human-rated question score has no significant influence on evaluating answer quality. This research contributes to science and has implications for practice; for example, based on the models built in this study, CQA sites can automatically suggest the best answers to their users, a time-saving solution for users looking for help on CQA sites.
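A minimal sketch of the classification setup: logistic regression predicting answer acceptance from answer and owner features. The features and data below are illustrative, not the thesis's exact feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
# assumed columns: answer score, answerer reputation, answer length, code present
X = rng.random((1000, 4)) * [50, 10000, 3000, 1]
y = (X[:, 0] + X[:, 1] / 500 + rng.normal(0, 5, 1000) > 30).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")
```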
APA, Harvard, Vancouver, ISO, and other styles
45

Alreshoodi, Mohammed A. M. "Prediction of quality of experience for video streaming using raw QoS parameters." Thesis, University of Essex, 2016. http://repository.essex.ac.uk/16566/.

Full text
Abstract:
Along with the rapid growth in consumer adoption of modern portable devices, video streaming is expected to dominate a large share of global Internet traffic in the near future. Today, user experience is becoming a reliable indicator for video service providers and telecommunication operators of overall end-to-end system functioning. Towards this, there is a profound need for efficient Quality of Experience (QoE) monitoring and prediction. QoE is a subjective metric that deals with user perception and can vary with user expectation and context. However, available QoE measurement techniques that adopt a full-reference method are impractical in real-time transmission, since they require the original video sequence to be available at the receiver’s end. QoE prediction, in turn, requires a firm understanding of those Quality of Service (QoS) factors that are the most influential on QoE. The main aim of this thesis is the development of novel and efficient models for predicting video quality in a non-intrusive way, and the demonstration of their application in QoE-enabled optimisation schemes for video delivery. In this thesis, the correlation between QoS and QoE is utilized to objectively estimate QoE. For this, both objective and subjective methods were used to create datasets that represent the correlation between QoS parameters and measured QoE. Firstly, the impact of selected QoS parameters from both the encoding and network levels on video QoE is investigated, and the obtained QoS/QoE correlation is backed by thorough statistical analysis. Secondly, two novel hybrid no-reference models for predicting video quality are developed using fuzzy logic inference systems (FIS) as a learning-based technique. Finally, attention moves to demonstrating two applications of the developed FIS prediction model, showing how QoE is used to optimise video delivery.
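A hand-rolled, minimal sketch of a fuzzy inference mapping from QoS to QoE. The membership functions, rules and consequent values below are invented for illustration; they are not the thesis's calibrated FIS.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def predict_qoe(loss_pct, bitrate_kbps):
    loss_low = tri(loss_pct, -1, 0, 5)
    loss_high = tri(loss_pct, 2, 10, 20)
    br_low = tri(bitrate_kbps, 0, 64, 256)
    br_high = tri(bitrate_kbps, 128, 512, 1024)
    # rule 1: low loss AND high bitrate -> good QoE (4.5)
    # rule 2: high loss OR low bitrate  -> poor QoE (1.5)
    w_good = min(loss_low, br_high)
    w_bad = max(loss_high, br_low)
    # weighted-average defuzzification
    return (4.5 * w_good + 1.5 * w_bad) / max(w_good + w_bad, 1e-9)

print(f"predicted MOS: {predict_qoe(1.0, 600):.2f}")
```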
APA, Harvard, Vancouver, ISO, and other styles
46

Shukla, Sunil Ravindra. "Improving High Quality Concatenative Text-to-Speech Using the Circular Linear Prediction Model." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14481.

Full text
Abstract:
Current high quality text-to-speech (TTS) systems are based on unit selection from a large database that is both contextually and prosodically rich. These systems, albeit capable of natural voice quality, are computationally expensive and require a very large footprint. Their success is attributed to the dramatic reduction of storage costs in recent times. However, for many TTS applications a smaller footprint is becoming a standard requirement. This thesis presents a new method for representing speech segments that can improve the quality and/or reduce the footprint of current concatenative TTS systems. The circular linear prediction (CLP) model is revisited and combined with the constant pitch transform (CPT) to provide a robust representation of speech signals that allows for limited prosodic movements without a perceivable loss in quality. The CLP model assumes that each frame of voiced speech is an infinitely periodic signal. This assumption allows for LPC modeling using the covariance method, with the efficiency of the autocorrelation method. The CPT is combined with this model to provide a database that is uniform in pitch for matching the target prosody during synthesis. With this representation, limited prosody modifications and unit concatenation can be performed without causing audible artifacts. For resolving artifacts caused by pitch modifications in voicing transitions, a method is introduced for reducing peakiness in the LP spectra by constraining the line spectral frequencies. Two experiments were conducted to demonstrate the capabilities of the CLP/CPT method. The first is a listening test to determine the ability of this model to realize prosody modifications without perceivable degradation: utterances are resynthesized using the CLP/CPT method with emphasized prosodics to increase intelligibility in harsh environments. The second experiment compares the quality of utterances synthesized by unit-selection-based limited-domain TTS against the CLP/CPT method. The results demonstrate that the CLP/CPT representation, applied to current concatenative TTS systems, can reduce the size of the database and increase prosodic richness without noticeable degradation in voice quality.
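The CLP assumption, that a voiced frame is one period of an infinitely periodic signal, makes the covariance-method normal equations Toeplitz, built from the circular autocorrelation. A hedged sketch of that reduction (frame, order and the test signal are illustrative; a real system would work pitch-synchronously):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def clp_coefficients(frame, order=10):
    """LP coefficients under the periodic-frame (CLP) assumption."""
    n = len(frame)
    # circular autocorrelation: r[k] = sum_t x[t] * x[(t + k) mod n]
    r = np.array([np.dot(frame, np.roll(frame, -k)) for k in range(order + 1)])
    # Toeplitz normal equations R a = r[1:], solved in O(order^2)
    return solve_toeplitz(r[:order], r[1:order + 1])

rng = np.random.default_rng(7)
t = np.arange(80)
frame = np.sin(2 * np.pi * 4 * t / 80) + 0.01 * rng.standard_normal(80)
print(clp_coefficients(frame, order=4))
```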
APA, Harvard, Vancouver, ISO, and other styles
47

May, Laura Anne. "Measurement and prediction of quality of life of persons with spinal cord injury." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0031/NQ46884.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Walker, Alan. "The carbon texture of metallurgical coke and its bearing on coke quality prediction." Thesis, Loughborough University, 1988. https://dspace.lboro.ac.uk/2134/10950.

Full text
Abstract:
The carbon in metallurgical coke is composed of textural units varying in size and shape depending on the rank of the coal carbonized. These units give coke surfaces a characteristic texture. This thesis describes a study of the bearing of this texture on coke strength, with particular emphasis on investigating the feasibility of using textural composition data, determined by either scanning electron microscopy (SEM) of etched surfaces or polarized-light microscopy (PLM) of polished coke surfaces, as a basis for predicting the tensile strength of cokes produced from blended-coal charges from the behaviour of individual blend components. Scanning electron microscopy of fractured coke surfaces revealed differences in the mode of fracture of textural components, implying variations in their contribution to coke strength. The tensile strengths of pilot-oven cokes produced from blended-coal charges could be related to their measured PLM textural compositions using equations derived from simple models of intergranular and transgranular fracture. The coke strengths could also be related, with greater precision, to textural data calculated from the coal blend composition and either the SEM or the PLM textural data for the cokes from the individual blend components. It was further found that the strengths of blended-coal cokes were additively related to the blend composition and the tensile strengths of the single-coal cokes. Such relationships are useful, at the very least, for predicting the strength of cokes from other blends of the same coals carbonized under similar conditions. The various approaches to coke strength prediction have potential value in different situations.
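The additivity finding suggests a simple first-order predictor: the blend coke's tensile strength as the mass-weighted sum of the single-coal coke strengths. A short sketch with invented numbers (not the thesis's measurements):

```python
import numpy as np

single_coal_strength = np.array([6.2, 4.8, 5.5])   # MPa, one coke per coal
blend_fractions = np.array([0.5, 0.3, 0.2])        # mass fractions, sum to 1

# additive model: blend strength = sum_i (fraction_i * single-coal strength_i)
predicted = float(blend_fractions @ single_coal_strength)
print(f"predicted blend tensile strength: {predicted:.2f} MPa")
```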
APA, Harvard, Vancouver, ISO, and other styles
49

Maritz, Gert Stephanus Herman. "A network traffic analysis tool for the prediction of perceived VoIP call quality." Thesis, Stellenbosch : University of Stellenbosch, 2011. http://hdl.handle.net/10019.1/17897.

Full text
Abstract:
Thesis (MScEng)--University of Stellenbosch, 2011.
ENGLISH ABSTRACT: The perceived quality of Voice over Internet Protocol (VoIP) communication relies on the network used to transport voice packets between the end points. Variable network characteristics such as bandwidth, delay and loss are critical for real-time voice traffic and are not always guaranteed by networks. It is important for network service providers to determine the Quality of Service (QoS) they provide to their customers. The solution proposed here is to predict the perceived quality of a VoIP call in real time using network statistics. The main objective of this thesis is to develop a network analysis tool that gathers meaningful statistics from network traffic. These statistics are then used to predict the perceived quality of a VoIP call. This study includes the investigation and deployment of two main components. Firstly, to determine call quality, it is necessary to extract the voice streams from captured network traffic. The extracted sound files can then be analysed by various VoIP quality models to determine the perceived quality of a VoIP call. The second component is the analysis of network characteristics. Loss, delay and jitter are all known to influence perceived call quality. These characteristics are, therefore, determined from the captured network traffic and compared with the call quality. Using the statistics obtained by the repeated comparison of call quality and network characteristics, a network-specific algorithm is generated. This Non-Intrusive Quality Prediction Algorithm (NIQPA) uses basic characteristics such as time of day, delay, loss and jitter to predict the quality of a real-time VoIP call quickly and non-intrusively. The realised algorithm will differ for each network, because every network is different. Prediction results can then be used to adapt either the network (more bandwidth, packet prioritisation) or the voice stream (error correction, changing VoIP codecs) to assure QoS.
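One of the raw statistics such a tool must extract is jitter. A sketch of the running interarrival-jitter estimate defined in RFC 3550, J = J + (|D| - J)/16, applied to invented packet times (the thesis's own extraction pipeline is not shown here):

```python
def rtp_jitter(send_times, recv_times):
    """Running interarrival jitter per RFC 3550 (times in seconds)."""
    j = 0.0
    for i in range(1, len(send_times)):
        # D: difference in packet spacing at receiver vs. sender
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        j += (abs(d) - j) / 16.0
    return j

send = [0.00, 0.02, 0.04, 0.06, 0.08]       # 20 ms packetisation
recv = [0.10, 0.121, 0.139, 0.162, 0.180]   # arrival times with jitter
print(f"jitter estimate: {rtp_jitter(send, recv) * 1000:.2f} ms")
```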
APA, Harvard, Vancouver, ISO, and other styles
50

López, del Río Ángela. "Data preprocessing and quality diagnosis in deep learning-based in silico bioactivity prediction." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/672385.

Full text
Abstract:
Drug discovery is a time- and resource-consuming process involving the identification of a target and the exploration of suitable drug candidates for it. To streamline drug discovery, computational techniques help identify molecular candidates with desirable properties by modeling their interactions with the target. These techniques are in constant improvement thanks to the development of algorithms, increasing computational power and the growth of public molecular databases. Specifically, machine learning approaches provide predictive models of biochemical properties and target-ligand binding activity. Deep learning is a machine learning approach that automatically extracts multiple levels of representation of the data. Within the last ten years, deep learning has outperformed classical prediction models in most domains, including drug discovery. Common use cases encompass molecular property prediction, de novo compound generation, protein secondary structure prediction and target-compound binding prediction. However, studies point out that the reported performance of deep learning bioactivity prediction models could be a consequence of data bias rather than generalization capability. Efforts are being put into addressing this problem, but it is still present in the state of the art, which rewards novelty over critical assessment. Moreover, the flexibility of deep learning leads to a lack of consensus on how to represent the input spaces, making it difficult to compare models in a common benchmark. Bioactivity data has limited availability because of its associated costs and is often imbalanced, hampering the model learning process. The diagnosis of these problems is not straightforward, since deep learning models are considered black boxes, hindering their adoption as the de facto solution in computer-aided drug discovery. The present thesis aims to improve deep learning models for computational drug discovery, focusing on the input representation, the control of data bias, the correction of data imbalance and model diagnosis. First, this thesis assesses the effect that different validation strategies have on binding classification models, aiming to find the most realistic performance estimates. The strategy based on clustering molecules to avoid having similar compounds in the training and test sets was shown to be the most similar to a prospective validation, and thus more consistent than random cross-validation (over-optimistic) or an external test set from another database (over-pessimistic). Second, this thesis focuses on the padding of sequential inputs. Padding is necessary to establish a common sequence length by adding zeros to each sequence; these are usually added at the end of the sequence, without formal justification. Here, classical and novel padding strategies were compared in an enzyme classification task. Results showed that the padding position has an effect on the performance of deep learning models, so it should be tuned as an additional hyperparameter. Third, this thesis studies the effect of data imbalance in protein-compound activity classification models and its mitigation through resampling techniques. Model performance was assessed for different combinations of oversampling of the minority class and clustering. Results showed that the proportion of actives predicted by the model was explained by the actual data balance in the test set. Data clustering, followed by data resampling in the training and validation sets, stood out as the best-performing strategy without altering the test set. To accomplish the three points above, this thesis provides a systematic way to diagnose deep learning models, identifying the factors that govern model predictions and performance. Specifically, explanatory linear models enabled informed, quantitative decisions regarding input preprocessing. This ultimately leads to more consistent deep learning target-compound binding prediction models.
Biomedical Engineering
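The padding-position experiment is easy to sketch: the same sequences zero-padded at the end ("post", the common default) versus the start ("pre"). The thesis's result is that this choice behaves like a hyperparameter; the data below are placeholders.

```python
import numpy as np

def pad(seqs, length, position="post"):
    """Zero-pad variable-length integer sequences to a common length."""
    out = np.zeros((len(seqs), length), dtype=int)
    for i, s in enumerate(seqs):
        s = s[:length]                          # truncate if too long
        if position == "post":
            out[i, :len(s)] = s                 # zeros at the end
        else:
            out[i, length - len(s):] = s        # zeros at the start ("pre")
    return out

seqs = [[5, 7, 2], [9, 1], [3, 3, 3, 3]]        # e.g. encoded enzyme sequences
print(pad(seqs, 6, "post"))
print(pad(seqs, 6, "pre"))
```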
APA, Harvard, Vancouver, ISO, and other styles