Doctoral dissertations on the topic "Data quality and noise"
Create a correct citation in APA, MLA, Chicago, Harvard, and many other styles
Browse the top 50 doctoral dissertations on the topic "Data quality and noise".
An "Add to bibliography" button is available next to each work. Use it, and we will automatically generate a bibliographic citation for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.
Browse doctoral dissertations from many disciplines and compile an accurate bibliography.
Alkharboush, Nawaf Abdullah H. "A data mining approach to improve the automated quality of data". Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/65641/1/Nawaf%20Abdullah%20H_Alkharboush_Thesis.pdf.
Lie, Chin Cheong Patrick. "Iterative algorithms for fast, signal-to-noise ratio insensitive image restoration". Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63767.
Al Jurdi, Wissam. "Towards next generation recommender systems through generic data quality". Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCD005.
Recommender systems are essential for filtering online information and delivering personalized content, thereby reducing the effort users need to find relevant information. They can be content-based, collaborative, or hybrid, each with a unique recommendation approach. These systems are crucial in various fields, including e-commerce, where they help customers find pertinent products, enhancing user experience and increasing sales. A significant aspect of these systems is the concept of unexpectedness, which involves discovering new and surprising items. This feature, while improving user engagement and experience, is complex and subjective, requiring a deep understanding of serendipitous recommendations for its measurement and optimization. Natural noise, an unpredictable data variation, can influence serendipity in recommender systems. It can introduce diversity and unexpectedness in recommendations, leading to pleasant surprises. However, it can also reduce recommendation relevance, causing user frustration. Therefore, it is crucial to design systems that balance natural noise and serendipity. Inconsistent user information due to natural noise can negatively impact recommender systems, leading to lower-quality recommendations. Current evaluation methods often overlook critical user-oriented factors, making noise detection a challenge. To provide powerful recommendations, it is important to consider diverse user profiles, eliminate noise in datasets, and effectively present users with relevant content from vast data catalogs. This thesis emphasizes the role of serendipity in enhancing recommender systems and preventing filter bubbles. It proposes serendipity-aware techniques to manage noise, identifies algorithm flaws, suggests a user-centric evaluation method, and proposes a community-based architecture for improved performance.
It highlights the need for a system that balances serendipity and considers natural noise and other performance factors. The objectives, experiments, and tests aim to refine recommender systems and offer a versatile assessment approach.
Sorensen, Thomas J. "Inverse Scattering Image Quality with Noisy Forward Data". Diss., Brigham Young University, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2541.pdf.
Demiroglu, Cenk. "Multisensor Segmentation-based Noise Suppression for Intelligibility Improvement in MELP Coders". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10455.
Correia, Fábio Gonçalves. "Quality control of ultra high resolution seismic data acquisition in real-time". Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/22007.
The acquisition of larger volumes of seismic data during a survey necessarily requires more time for quality control (QC). Despite this, QC cannot be extended due to operational time constraints and must be done faster, compromising its efficiency and consequently the data quality. The alternative, allocating more people and resources to QC to improve efficiency, leads to prohibitively higher costs and larger vessel requirements. Moreover, traditional QC methods for large data volumes require extended standby times after data acquisition before the vessel can be demobilized, increasing the cost of the survey. The solution tested here consisted of the development of an efficient Real-Time QC based on Spectral Comparison and the Signal-to-Noise Ratio Attribute (tools developed for the SPW seismic processing software). The detection and identification of bad data by the automatic QC tools was carried out, and the parameters were adapted to include at least all manual QC flags. The detection and identification of common problems during acquisition, such as strong wave motion and its direction, strong propeller wash, trouser's effect, and malfunctioning sources or receivers, were also carried out. Early detection of these problems allows them to be solved soon enough not to compromise the data acquisition. Several problem reports from beta tests of SPW were sent to the Parallel Geoscience team and used as a reference to update the software and fulfil the Real-Time QC requirements. These updates brought the correct mapping of data headers in files; optimization of data-analysis speed, along with multi-thread processing debugging, to ensure the tools run fast enough to avoid delays between acquisition and Real-Time QC; support for reading a variable number of source signatures; optimization of graphic memory limits; and debugging of anomalous spectral semblance values.
Some updates resulted from a data acquisition simulation set up in the office, allowing adjustments to be made and later tested on an upcoming survey. The parameterization of these tools was finally achieved, assuring the correct detection of all major issues found during the survey, which should reduce the time needed for the QC stage on board and improve its efficiency.
Hardwick, Jonathan Robert. "Synthesis of Noise from Flyover Data". Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/50531.
Master of Science
Durand, Philippe. "Traitement des donnees radar varan et estimation de qualites en geologie, geomorphologie et occupation des sols". Paris 7, 1988. http://www.theses.fr/1988PA077183.
Grillo, Aderibigbe. "Developing a data quality scorecard that measures data quality in a data warehouse". Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17137.
Stone, Ian. "The effect of noise on image quality". Thesis, University of Westminster, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.283456.
Sýkorová, Veronika. "Data Quality Metrics". Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-2815.
Crozier, Philip Mark. "Enhancement techniques for noise affected telephone quality speech". Thesis, University of Liverpool, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321115.
Peralta, Veronika. "Data Quality Evaluation in Data Integration Systems". PhD thesis, Université de Versailles-Saint Quentin en Yvelines, 2006. http://tel.archives-ouvertes.fr/tel-00325139.
Peralta Costabel, Veronika del Carmen. "Data quality evaluation in data integration systems". Versailles-St Quentin en Yvelines, 2006. http://www.theses.fr/2006VERS0020.
This thesis addresses data quality in Data Integration Systems (DIS). More precisely, we are interested in the problems of evaluating the quality of the data delivered to users in response to their queries and of satisfying users' quality requirements. We also analyze the use of quality measures to improve the design of the DIS and, consequently, the quality of the data. Our approach consists of studying one quality factor at a time, analyzing its relationship with the DIS, proposing techniques for its evaluation, and proposing actions for its improvement. Among the quality factors that have been proposed, this thesis analyzes two: data freshness and data accuracy.
Deb, Rupam. "Data Quality Enhancement for Traffic Accident Data". Thesis, Griffith University, 2017. http://hdl.handle.net/10072/367725.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Science, Environment, Engineering and Technology
Powers, John W. "Neural networks : an application to electrochemical noise data". Virtual Press, 1997. http://liblink.bsu.edu/uhtbin/catkey/1045629.
Department of Mathematical Sciences
Cousins, John David. "CEAREX ambient noise data measured northeast of Svalbard". Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28023.
Sampson, Aaron (Aaron Lee Kasey). "An analysis of noise in the CoRoT data". Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61265.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 57).
In this thesis, publicly available data from the French/ESA satellite mission CoRoT, designed to seek out extrasolar planets, was analyzed using MATLAB. CoRoT attempts to observe the transits of these planets across their parent stars. CoRoT occupies an orbit which periodically carries it through the Van Allen belts, resulting in a very high level of high outliers in the flux data. Known systematics and outliers were removed from the data, and the remaining scatter was evaluated using the median of absolute deviations from the median (MAD), a measure of scatter which is robust to outliers. The level of scatter (evaluated with MAD) present in this data is indicative of the lower limits on the size of planets detectable by CoRoT or a similar satellite. The MAD for CoRoT stars is correlated with stellar magnitude. The brightest stars observed by CoRoT display scatter of approximately 0.02 percent, while the median value for all stars is 0.16 percent.
by Aaron Sampson.
S.B.
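The robust scatter measure used in the abstract above, the median of absolute deviations from the median (MAD), is easy to sketch. A minimal illustration in Python (the thesis itself used MATLAB, and the synthetic light curve here is invented purely for demonstration):

```python
import numpy as np

def mad(x):
    """Median of absolute deviations from the median: a scatter
    measure that is robust to outliers."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x)))

# Synthetic light curve: ~0.02 % Gaussian scatter plus sparse high
# outliers, loosely mimicking proton hits in the Van Allen belts.
rng = np.random.default_rng(0)
flux = 1.0 + 0.0002 * rng.standard_normal(10_000)
flux[::500] += 0.05

print(mad(flux))     # barely affected by the outliers
print(np.std(flux))  # strongly inflated by the outliers
```

Unlike the standard deviation, the MAD of this series stays close to the underlying Gaussian scatter even with the injected outliers present.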
Fisher, Robert W. H. "Exploring Weakly Labeled Data Across the Noise-Bias Spectrum". Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/786.
Wang, Tianmiao. "Non-parametric regression for data with correlated noise". Thesis, University of Bristol, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.730888.
He, Ying. "Spatial data quality management". PhD thesis, Surveying & Spatial Information Systems, Faculty of Engineering, University of New South Wales, 2008. http://handle.unsw.edu.au/1959.4/43323.
Yoo, Seungyup. "Field effect transistor noise model analysis and low noise amplifier design for wireless data communications". Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/13024.
Bringle, Per. "Data Quality in Data Warehouses: a Case Study". Thesis, University of Skövde, Department of Computer Science, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-404.
Companies today experience problems with poor data quality in their systems. Because of the enormous amount of data held by companies, the data has to be of good quality if companies want to take advantage of it. Since the purpose of a data warehouse is to gather information from several databases for decision support, it is vital that its data be of good quality. There exist several ways of determining or classifying data quality in databases. In this work, the data quality management in a large Swedish company's data warehouse is examined through a case study, using a framework specialized for data warehouses. The quality of data is examined from syntactic, semantic, and pragmatic points of view. The results of the examination are then compared with a similar case study conducted previously in order to find differences and similarities.
Redgert, Rebecca. "Evaluating Data Quality in a Data Warehouse Environment". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208766.
The amount of data accumulated by organizations has increased significantly in recent years, which has raised the importance of data quality. Ensuring data quality for large amounts of data is a complicated task, but crucial for subsequent analysis. This study investigates how to maintain and improve data quality in a data warehouse. A case study of errors in a data warehouse at the Swedish company Kaplan was carried out and resulted in guidelines for how data quality can be improved. The investigation was conducted by manually comparing data from the source systems with the data integrated into the data warehouse, and by applying a quality framework based on semiotic theory to identify errors. The three main guidelines given are to (1) implement a standardized format for the source data, (2) carry out a pre-integration check in which the source data is reviewed and corrected where necessary, and (3) create and implement specific database integrity rules. Further research is encouraged to create a guide for the framework on how best to compare data through manual inspection, and on quality assurance of source data.
Li, Lin. "Data quality and data cleaning in database applications". Thesis, Edinburgh Napier University, 2012. http://researchrepository.napier.ac.uk/Output/5788.
Konaté, Cheick Mohamed. "Enhancing speech coder quality: improved noise estimation for postfilters". Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104578.
ITU-T G.711.1 is a multirate wideband extension of the widely deployed ITU-T G.711 audio compression standard, interoperable with the original narrowband version. When the legacy G.711 coder is used to encode a speech signal and G.711.1 is used to decode it, quantization noise can be audible. For this case, the standard proposes an optional postfilter, which requires an estimate of the quantization noise; the accuracy of that estimate affects the postfilter's performance. In this thesis, we propose an improved quantization-noise estimator for the postfilter proposed for the G.711.1 codec and evaluate its performance. The proposed estimator gives a more accurate estimate of the quantization noise with the same complexity.
Johansson, Magnus. "On noise and hearing loss : Prevalence and reference data". Doctoral thesis, Linköping : Univ, 2003. http://www.ep.liu.se/diss/science_technology/07/97/index.html.
Jeatrakul, Piyasak. "Enhancing classification performance over noise and imbalanced data problems". PhD thesis, Murdoch University, 2012. https://researchrepository.murdoch.edu.au/id/eprint/10044/.
Hammond, Patrick Douglas. "Deep Synthetic Noise Generation for RGB-D Data Augmentation". BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7516.
Cooley, Daniel Warren. "Data acquisition unit for low-noise, continuous glucose monitoring". Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/2844.
Tcheheumeni Djanni, Axel Laurel. "Identification and quantification of noise sources in marine towed active electromagnetic data". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28914.
Wedin, Jonas. "Replicating noise in video : a comparison between physics-based and deep learning models for simulating noise". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272110.
Algorithms used to track objects in video that follow Newtonian motion can often be affected by noise. Some of this noise can be difficult and expensive to record, so being able to augment or generate new data representing a particular type of noise can be very useful. Research on unsupervised training of deep-learning models using Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) combined with convolutions (ConvLSTM) suggests that a deep-learning model trained on a certain type of data can reproduce it without copying the original data. This thesis uses two datasets representing noise (rain and flying insects) and attempts to imitate them. For comparison, two models are created for each type of noise: one built by defining a physics-based model of the noise, which is then used to generate data, and one deep-learning model trained on real data. Sequences generated from these models are then evaluated with different techniques. Established techniques such as the Fréchet Inception Distance (FID) are used, and others are devised to show statistical differences between the models. The results show that it is difficult to measure such sparse data with existing techniques. The FID scores of the insect models compared against a validation set are almost equal (103 ≈ 107), which does not agree with a visual inspection of the data, where the deep-learning model performs worse. Similar results are seen for the rain data, making the FID scores hard to interpret since they do not match what the data shows. New measurement techniques show that the physics-based models outperform the deep-learning models, but the usefulness of those techniques is questionable. The conclusion is that the physics-based models perform better than the deep-learning models, but that they do not generalize as well and require considerable effort to produce.
Yu, Wenyuan. "Improving data quality : data consistency, deduplication, currency and accuracy". Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8899.
Barker, James M. "Data governance: The missing approach to improving data quality". Thesis, University of Phoenix, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10248424.
In an environment where individuals use applications to drive activities, from what book to purchase, to what film to view, to what temperature to heat a home, data is the critical element. To make things work, data must be correct, complete, and accurate. Many firms view data governance as a panacea for the ills of systems and organizational challenges, while other firms struggle to generate value from these programs. This paper documents a study executed to understand what firms are doing in the data governance space, and why. The conceptual framework established from the literature was a set of six areas that a data governance program should address: data governance councils; data quality; master data management; data security; policies and procedures; and data architecture. There is a wide range of experiences and ways to address data quality, and the focus needs to be on execution. This explanatory case study examined the experiences of 100 professionals at 41 firms to understand what is being done and why professionals undertake such an endeavor. The outcome is that firms need to address data quality, data security, and operational standards in a manner organized around business value, including strong business-leader sponsorship and a documented, dynamic business case. This study provides a foundation for data governance program success and a guide to getting started.
Wolf, Hilke. "Data Quality Bench-Marking for High Resolution Bragg Data". Doctoral thesis, Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2014. http://hdl.handle.net/11858/00-1735-0000-0022-5DE2-A.
Swapna, B., and R. VijayaPrakash. "Privacy Preserving Data Mining Operations without Disrupting Data Quality". International Journal of Computer Science and Network (IJCSN), 2012. http://hdl.handle.net/10150/271473.
Data mining operations help discover business intelligence from historical data. The extracted business intelligence, or actionable knowledge, helps in making well-informed decisions that lead to profit for the organization that makes use of it. While performing mining, the privacy of data has to be given the utmost importance. To achieve this, PPDM (Privacy Preserving Data Mining) came into existence, sanitizing the database to prevent the discovery of association rules. However, this leads to modification of the data and thus disrupts its quality. This paper proposes a new technique and algorithms that can perform privacy-preserving data mining operations while ensuring that data quality is not lost. The empirical results revealed that the proposed technique is useful and can be used in real-world applications.
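The sanitization idea this abstract refers to, suppressing items so that a sensitive association rule can no longer be discovered, can be sketched in a few lines. This is a generic illustration, not the authors' algorithm; the transactions and the 0.5 confidence threshold are invented:

```python
# Toy transactions; the "sensitive" rule to hide is {bread} -> {beer}.
transactions = [
    {"bread", "beer"}, {"bread", "beer"}, {"bread", "beer", "milk"},
    {"bread", "milk"}, {"beer"}, {"milk"},
]

def confidence(txns, lhs, rhs):
    """Confidence of the association rule lhs -> rhs over txns."""
    lhs_n = sum(1 for t in txns if lhs <= t)
    both_n = sum(1 for t in txns if (lhs | rhs) <= t)
    return both_n / lhs_n if lhs_n else 0.0

def sanitize(txns, lhs, rhs, max_conf):
    """Distortion-style sanitization: drop the consequent from supporting
    transactions until the rule's confidence falls below max_conf."""
    txns = [set(t) for t in txns]
    for t in txns:
        if confidence(txns, lhs, rhs) < max_conf:
            break
        if (lhs | rhs) <= t:
            t -= rhs  # suppress the sensitive item in this transaction
    return txns

clean = sanitize(transactions, {"bread"}, {"beer"}, max_conf=0.5)
print(confidence(transactions, {"bread"}, {"beer"}))  # 0.75 before
print(confidence(clean, {"bread"}, {"beer"}))         # below 0.5 after
```

The trade-off the paper targets is visible even here: hiding the rule required distorting two transactions, which is exactly the data-quality loss the proposed technique aims to avoid.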
Pedroza, Moises. "TRACKING RECEIVER NOISE BANDWIDTH SELECTION". International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/607591.
The selection of the Intermediate Frequency (IF) bandwidth filter for a data receiver processing PCM data is based on using a peak deviation of 0.35 times the bit rate. The optimum IF bandwidth filter is equal to the bit rate. An IF bandwidth filter of 1.5 times the bit rate degrades the data by approximately 0.7 dB. The selection of the IF bandwidth filter for tracking receivers is based on the narrowest "noise bandwidth" that will yield the best system sensitivity. In some cases the noise bandwidth of the tracking receiver is the same as the IF bandwidth of the data receiver, because it is the same receiver. If this is the case, the PCM bit rate determines the IF bandwidth and establishes the system sensitivity. With increasing bit rates and improved transmitter stability characteristics, the IF bandwidth filter selection criteria for a tracking receiver must include system sensitivity considerations. The tracking receiver IF bandwidth filter selection criteria should also be based on the narrowest IF bandwidth that will not cause the tracking errors to be masked by high bit rates or alter the pedestal dynamic response. This paper describes selection criteria for a tracking receiver IF bandwidth filter based on measurements of the tracking error signals versus antenna pedestal dynamic response. Different IF bandwidth filters for low and high bit rates were used.
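The rules of thumb quoted in this abstract translate directly into numbers. A small sketch (the 5 Mbit/s bit rate is an arbitrary example, not from the paper):

```python
def if_bandwidth_plan(bit_rate_hz):
    """Rule-of-thumb figures from the abstract above: peak deviation of
    0.35 x bit rate, optimum IF bandwidth equal to the bit rate, and a
    wider 1.5 x bit rate filter costing roughly 0.7 dB of data quality."""
    return {
        "peak_deviation_hz": 0.35 * bit_rate_hz,
        "optimum_if_bw_hz": 1.0 * bit_rate_hz,
        "wide_if_bw_hz": 1.5 * bit_rate_hz,  # ~0.7 dB data degradation
    }

plan = if_bandwidth_plan(5e6)  # a hypothetical 5 Mbit/s PCM stream
print(plan)
```

For a 5 Mbit/s stream this gives a 1.75 MHz peak deviation and a 5 MHz optimum IF bandwidth, illustrating how quickly high bit rates widen the noise bandwidth the tracking receiver must accept.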
López, Martinez Carlos. "Multidimensional speckle noise. Modelling and filtering related to sar data". Doctoral thesis, Universitat Politècnica de Catalunya, 2003. http://hdl.handle.net/10803/6921.
Arizaleta, Mikel. "Structured data extraction: separating content from noise on news websites". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9898.
In this thesis, we have treated the problem of separating content from noise on news websites. We have approached this problem by using TiMBL, a memory-based learning software package. We have studied the relevance of similarity in the training data and the effect of data size on the performance of the extractions.
Kawaguchi, Hirokazu. "Signal Extraction and Noise Removal Methods for Multichannel Electroencephalographic Data". 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188593.
Shahbazian, Mehdi. "Multiresolution denoising for arbitrarily spaced data contaminated with arbitrary noise". Thesis, University of Surrey, 2005. http://epubs.surrey.ac.uk/843064/.
López, Martinez Carlos. "Multidimensional speckle noise, modelling and filtering related to SAR data /". Köln : DLR, Bibliotheks- und Informationswesen, 2004. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=015380575&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
De Stefano, Antonio. "Wavelet-based reduction of spatial video noise". Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342855.
Angeles, Maria del Pilar. "Management of data quality when integrating data with known provenance". Thesis, Heriot-Watt University, 2007. http://hdl.handle.net/10399/64.
Diallo, Thierno Mahamoudou. "Discovering data quality rules in a master data management context". Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0067.
Dirty data continues to be an important issue for companies. The Data Warehousing Institute [Eckerson, 2002], [Rockwell, 2012] stated that poor data costs US businesses $611 billion annually and that erroneously priced data in retail databases costs US customers $2.5 billion each year. Data quality is becoming more and more critical. The database community pays particular attention to this subject, where a variety of integrity constraints such as Conditional Functional Dependencies (CFDs) have been studied for data cleaning. Repair techniques based on these constraints are precise in catching inconsistencies but are limited in how to exactly correct the data. Master data brings a new alternative for data cleaning with respect to its quality. Thanks to the growing importance of Master Data Management (MDM), a new class of data quality rule known as Editing Rules (ERs) tells how to fix errors, pointing out which attributes are wrong and what values they should take. The intuition is to correct dirty data using high-quality data from the master. However, finding data quality rules is an expensive process that involves intensive manual effort; it remains unrealistic to rely on human designers. In this thesis, we develop pattern mining techniques for discovering ERs from existing source relations with respect to master relations. In this setting, we propose a new semantics of ERs taking advantage of both source and master data. Thanks to the proposed semantics, expressed in terms of satisfaction, the discovery problem of ERs turns out to be strongly related to the discovery of both CFDs and one-to-one correspondences between source and target attributes. We first attack the problem of discovering CFDs. We concentrate on the particular class of constant CFDs, known to be very expressive for detecting inconsistencies. We extend some well-known concepts introduced for traditional functional dependencies to solve the discovery problem of CFDs.
Secondly, we propose a method based on inclusion dependencies to extract one-to-one correspondences from source to master attributes before automatically building ERs. Finally, we propose some heuristics for applying ERs to clean data. We have implemented and evaluated our techniques on both real-life and synthetic databases. Experiments show the feasibility, scalability, and robustness of our proposal.
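Constant CFDs, the constraint class this abstract concentrates on, are straightforward to check once discovered: a pattern of constants on the left-hand side forces a constant on the right-hand side. A toy violation check (the country/currency rule and the rows are invented for illustration, not taken from the thesis):

```python
# A constant CFD: whenever a tuple matches the left-hand-side constants,
# the right-hand-side attribute must take the stated constant.
# Hypothetical rule: country "UK" forces currency "GBP".
cfd = ({"country": "UK"}, {"currency": "GBP"})

rows = [
    {"country": "UK", "currency": "GBP"},
    {"country": "UK", "currency": "EUR"},  # violates the CFD
    {"country": "FR", "currency": "EUR"},  # pattern does not apply
]

def violations(rows, cfd):
    """Return tuples matching the LHS pattern but breaking the RHS constant."""
    lhs, rhs = cfd
    return [
        r for r in rows
        if all(r.get(a) == v for a, v in lhs.items())
        and any(r.get(a) != v for a, v in rhs.items())
    ]

print(violations(rows, cfd))  # only the UK/EUR row
```

An editing rule would go one step further than this detector: it would overwrite the offending `currency` value with the constant taken from a matching master tuple.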
Gredmaier, Ludwig Konrad. "The effect of probe tone duration on psychoacoustic frequency selectivity". Thesis, University of Southampton, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396142.
Wilson, James Harris. "Development and validation of a laminate flooring system sound quality test method". Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29660.
Committee Chair: Cunefare, Kenneth A.; Committee Member: Qu, Jianmin; Committee Member: Ryherd, Erica. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Gens, Rüdiger. "Quality assessment of SAR interferometric data". Hannover : Fachrichtung Vermessungswesen der Univ, 1998. http://deposit.ddb.de/cgi-bin/dokserv?idn=95607121X.
Berg, Marcus. "Evaluating Quality of Online Behavior Data". Thesis, Stockholms universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-97524.
Ma, Shuai. "Extending dependencies for improving data quality". Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5045.