To view other types of publications on this topic, follow the link: Coal Analysis Data processing.

Dissertations on the topic "Coal Analysis Data processing"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 dissertations for your research on the topic "Coal Analysis Data processing".

Next to every entry in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, when these items are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

CRAESMEYER, GABRIEL R. "Tratamento de efluente contendo urânio com zeólita magnética." reponame:Repositório Institucional do IPEN, 2013. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10578.

Full text of the source
Abstract:
Dissertation (Master's)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
Styles: APA, Harvard, Vancouver, ISO, and others.
2

Leggett, Miles. "Crosshole seismic processing of physical model and coal measures data." Thesis, Durham University, 1992. http://etheses.dur.ac.uk/5623/.

Full text of the source
Abstract:
Crosshole seismic techniques can be used to gain a large amount of information about the properties of the rock mass between two or more boreholes. The bulk of this thesis is concerned with two crosshole seismic processing techniques and their application to real data. The first part of this thesis describes the application of traveltime and amplitude tomographic processing in the monitoring of a simulated EOR project. Two physical models were made, designed to simulate 'pre-flood' and 'post-flood' stages in an EOR project. The results of the tomography work indicate that it is beneficial to perform amplitude tomographic processing of cross-well data, as a complement to traveltime inversion, because of the different response of velocity and absorption to changes in liquid/gas saturations for real reservoir rocks. The velocity tomograms image the flood zone quite accurately. Amplitude tomography shows the flood zone as an area of higher absorption but does not image its boundaries as precisely, because multi-pathing and diffraction effects are not accounted for by the ray-based techniques used. Part two is concerned with the crosshole seismic reflection technique, using data acquired from a site in northern England. The processing of these data is complex and includes deconvolution, wavefield separation and migration to a depth section. The two surveys fail to pin-point accurately the position of a large fault; the disappointing results, compared to earlier work in Yorkshire, are attributed to poorer generation of compressional body waves in harder Coal Measures strata. The final part of this thesis describes the results from a pilot seismic reflection test over the Tertiary igneous centre on the Isle of Skye, Scotland. The results indicate that the base of a large granite body consists of interlayered granites and basic rocks between 2.1 and 2.4km below mean sea level.
Styles: APA, Harvard, Vancouver, ISO, and others.
3

Irick, Nancy. "Post Processing Data Analysis." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606091.

Full text of the source
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Once the test is complete, the job of the Data Analyst begins. Files from the various acquisition systems are collected. It is the job of the analyst to put together these files in a readable format so that the success or failure of the test can be determined. This paper will discuss the process of breaking down these files, comparing data from different systems, and methods of presenting the data.
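The collect-align-compare workflow sketched in this abstract can be illustrated with a few lines of pandas. The file names, column names and matching tolerance below are hypothetical and stand in for whatever the acquisition systems actually export, so this is only a sketch of time-aligned merging, not the author's tool.

import pandas as pd

# Hypothetical CSV exports from two acquisition systems, each with a "time" column in seconds.
pcm = pd.read_csv("pcm_system.csv")        # e.g. columns: time, altitude
analog = pd.read_csv("analog_system.csv")  # e.g. columns: time, pressure

# Sort by time and join each PCM sample to the nearest analog sample within 10 ms.
merged = pd.merge_asof(pcm.sort_values("time"),
                       analog.sort_values("time"),
                       on="time", tolerance=0.010, direction="nearest")

# Compare the two systems: summary statistics and the correlation of two channels.
print(merged[["altitude", "pressure"]].describe())
print(merged["altitude"].corr(merged["pressure"]))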
Styles: APA, Harvard, Vancouver, ISO, and others.
4

Jifon, Francis. "Processing and modelling of seismic reflection data acquired off the Durham coast." Thesis, Durham University, 1985. http://etheses.dur.ac.uk/9315/.

Full text of the source
Abstract:
Off the Durham coast, the Permian succession above the Coal Measures contains limestones and anhydrite bands with high seismic velocities and reflection coefficients. The consequent reduction in penetration of seismic energy makes it difficult to determine Coal Measures structure by the seismic reflection method. Seismic data sets acquired from this region by the National Coal Board in 1979 and 1982 are used to illustrate that satisfactory results are difficult to achieve. Synthetic seismograms, generated for a simplified geological section of the region, are also used to study various aspects of the overall problem of applying the seismic technique in the area. Standard and non-standard processing sequences are applied to the seismic data to enhance the quality of the stacked sections and the results are discussed. This processing showed that in the 1979 survey, in which a watergun source and a 600m streamer were used, some penetration was achieved but Coal Measures resolution on the final sections is poor. The 1982 data set, shot along a segment of the 1979 line using a sleeve exploder source and a 150m streamer, showed no Coal Measures after processing. Synthetic seismograms, generated using the reflectivity method and a broadband source wavelet, are processed to confirm that a streamer with a length of 360 to 400m towed at a depth of 5-7.5m will be optimal for future data acquisition in the area. It is also shown that the erosion of the surface of the limestone lowers the horizontal resolution of the Coal Measures. Scattering
Styles: APA, Harvard, Vancouver, ISO, and others.
5

Bilalli, Besim. "Learning the impact of data pre-processing in data analysis." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/587221.

Full text of the source
Abstract:
There is a clear correlation between data availability and data analytics, and hence with the increase of data availability --- unavoidable according to Moore's law, the need for data analytics increases too. This certainly engages many more people, not necessarily experts, to perform analytics tasks. However, the different, challenging, and time consuming steps of the data analytics process, overwhelm non-experts and they require support (e.g., through automation or recommendations). A very important and time consuming step that marks itself out of the rest, is the data pre-processing step. Data pre-processing is challenging but at the same time has a heavy impact on the overall analysis. In this regard, previous works have focused on providing user assistance in data pre-processing but without being concerned on its impact on the analysis. Hence, the goal has generally been to enable analysis through data pre-processing and not to improve it. In contrast, this thesis aims at developing methods that provide assistance in data pre-processing with the only goal of improving (e.g., increasing the predictive accuracy of a classifier) the result of the overall analysis. To this end, we propose a method and define an architecture that leverages ideas from meta-learning to learn the relationship between transformations (i.e., pre-processing operators) and mining algorithms (i.e., classification algorithms). This eventually enables ranking and recommending transformations according to their potential impact on the analysis. To reach this goal, we first study the currently available methods and systems that provide user assistance, either for the individual steps of data analytics or for the whole process altogether. Next, we classify the metadata these different systems use and then specifically focus on the metadata used in meta-learning. We apply a method to study the predictive power of these metadata and we extract and select the metadata that are most relevant. Finally, we focus on the user assistance in the pre-processing step. We devise an architecture and build a tool, PRESISTANT, that given a classification algorithm is able to recommend pre-processing operators that once applied, positively impact the final results (e.g., increase the predictive accuracy). Our results show that providing assistance in data pre-processing with the goal of improving the result of the analysis is feasible and also very useful for non-experts. Furthermore, this thesis is a step towards demystifying the non-trivial task of pre-processing that is an exclusive asset in the hands of experts.
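A minimal sketch of the core idea, scoring candidate pre-processing operators by their measured effect on a classifier's predictive accuracy, is shown below using scikit-learn. It is not the PRESISTANT tool and it performs no meta-learning; it simply cross-validates each transformation and ranks the results.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler, PowerTransformer, FunctionTransformer
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)

candidates = {
    "none": FunctionTransformer(),   # identity: no pre-processing
    "standardize": StandardScaler(),
    "min-max": MinMaxScaler(),
    "power": PowerTransformer(),
}

# Score each operator by the cross-validated accuracy of the downstream classifier.
scores = {name: cross_val_score(make_pipeline(op, DecisionTreeClassifier(random_state=0)),
                                X, y, cv=5).mean()
          for name, op in candidates.items()}

# Rank the transformations by their impact on the analysis.
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {acc:.3f}")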
Styles: APA, Harvard, Vancouver, ISO, and others.
6

Chen, Chuan. "Numerical algorithms for data processing and analysis." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/277.

Full text of the source
Abstract:
Magnetic nanoparticles (NPs) with sizes ranging from 2 to 20 nm in diameter represent an important class of artificial nanostructured materials, since the NP size is comparable to the size of a magnetic domain. They have potential applications in data storage, catalysis, permanent magnetic nanocomposites, and biomedicine. To begin with, a brief overview on the background of Fe-based bimetallic NPs and their applications for data-storage and catalysis was presented in Chapter 1. In Chapter 2, L10-ordered FePt NPs with high coercivity were directly prepared from a novel bimetallic acetylenic alternating copolymer P3 by a one-step pyrolysis method without post-thermal annealing. The chemical ordering, morphology and magnetic properties were studied. Magnetic measurements showed that a record coercivity of 3.6 T (1 T = 10 kOe) was obtained in FePt NPs. By comparison of the resultant FePt NPs synthesized under Ar and Ar/H2, the characterization proved that the incorporation of H2 would affect the nucleation and promote the growth of FePt NPs. The L10 FePt NPs were also successfully patterned on Si substrate by nanoimprinting lihthography (NIL). The highly ordered ferromagnetic arrays on a desired substrate for bit-patterned media (BPM) were studied and promised bright prospects for the progress of data-storage. Furthermore, we also reported a new FePt-containing metallopolymer P4 as the single-source precursor for metal alloy NPs synthesis, where the metal fractions were on the side chain and the ratio could be easily controlled. This polymer was synthesized from random copolymer poly(styrene-4-ethynylstyrene) PES-PS and bimetallic precursor TPy-FePt ([Pt(4’-ferrocenyl-(N^N^N))Cl]Cl) by Sonogashira coupling reaction. After pyrolysis of P4, the stoichiometry of Fe and Pt atoms in the synthesized NPs (NPs) is nearly close to 1:1, which is more precise than using TPy-FePt as precursor. Polymer P4 was also more favorable for patterning by high throughout NIL as compared to TPy-FePt. Ferromagnetic nanolines, potentially as bit-patterned magnetic recording media, were successfully fabricated from P4 and fully characterized. In Chapter 3, a novel organometallic compound TPy-FePd-1 [4’-ferrocenyl-(N^N^N)PdOCOCH3] was synthesized and structurally characterized, whose crystal structure showed a coplanar Pd center and Pd-Pd distance (3.17 Å). Two metals Fe and Pd were evenly embedded in the molecular dimension and remained tightly coupled between each other benefiting to the metalmetal (Pd-Pd) and ligand ππ stacking interactions, all of which made it facilitate the nucleation without sintering during preparing the FePd NPs. Ferromagnetic FePd NPs of ca. 16.2 nm in diameter were synthesized by one-pot pyrolysis of the single-source precursor TPy-FePd-1 under getter gas with metal-ion reduction and minimal nanoparticle coalescence, which have a nearly equal atomic ratio (Fe/Pd = 49/51) and exhibited coercivity of 4.9 kOe at 300 K. By imprinting the mixed chloroform solution of TPy-FePd-1 and polystyrene (PS) on Si, reproducible patterning of nanochains was formed due to the excellent self-assembly properties and the incompatibility between TPy-FePd-1 and PS under the slow evaporation of the solvents. The FePd nanochains with average length of ca. 260 nm were evenly dispersed around the PS nanosphere by self-assembly of TPy-FePd-1. In addition, the orientation of the FePd nanochains could also be controlled by tuning the morphology of PS, and the length was shorter in confined space of PS. 
The organic skeletons in TPy-FePd-1 and PS were carbonized and removed by pyrolysis under Ar/H2 (5 wt%) and only magnetic FePd alloy nanochains with domain structure were left. Besides, a bimetallic complex TPy-FePd-2 was prepared and used as a single-source precursor to synthesize ferromagnetic FePd NPs by one-pot pyrolysis. The resultant FePd NPs have a mean size of 19.8 nm and show the coercivity of 1.02 kOe. In addition, the functional group (-NCMe) in TPy-FePd-2 was easily substituted by a pyridyl group. A random copolymer PS-P4VP was used to coordinate with TPy-FePd-2, and the as-synthesized polymer made the metal fraction disperse evenly along the flexible chain. Fabrication of FePd NPs from the polymers was also investigated, and the size could be easily controlled by tuning the metal fraction in polymer. FePd NPs with the mean size of 10.9, 14.2 and 17.9 nm were prepared from the metallopolymer with 5 wt%, 10 wt% and 20 wt% of metal fractions, respectively. In Chapter 4, molybdenum disulfide (MoS2) monolayers decorated with ferromagnetic FeCo NPs on the edges were synthesized through a one-step pyrolysis of precursor molecules in an argon atmosphere. The FeCo precursor was spin coated on the MoS2 monolayer grown on Si/SiO2 substrate. Highly-ordered body-centered cubic (bcc) FeCo NPs were revealed under optimized pyrolysis conditions, possessing coercivity up to 1000 Oe at room temperature. The FeCo NPs were well-positioned along the edge sites of MoS2 monolayers. The vibration modes of Mo and S atoms were confined after FeCo NPs decoration, as characterized by Raman shift spectroscopy. These MoS2 monolayers decorated with ferromagnetic FeCo NPs can be used for novel catalytic materials with magnetic recycling capabilities. The sizes of NPs grown on MoS2 monolayers are more uniform than from other preparation routines. Finally, the optimized pyrolysis temperature and conditions provide recipes for decorating related noble catalytic materials. Finally, Chapters 5 and 6 present the concluding remarks and the experimental details of the work described in Chapters 2-4.
Styles: APA, Harvard, Vancouver, ISO, and others.
7

Chen, George C. M. "Strategic analysis of a data processing company /." Burnaby B.C. : Simon Fraser University, 2005. http://ir.lib.sfu.ca/handle/1892/3624.

Full text of the source
Abstract:
Research Project (M.B.A.) - Simon Fraser University, 2005.
Research Project (Faculty of Business Administration) / Simon Fraser University. Senior supervisor: Dr. Ed Bukszar. EMBA Program. Also issued in digital format and available on the World Wide Web.
Styles: APA, Harvard, Vancouver, ISO, and others.
8

Purahoo, K. "Maximum entropy data analysis." Thesis, Cranfield University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260038.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, and others.
9

Aygar, Alper. "Doppler Radar Data Processing And Classification." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609890/index.pdf.

Full text of the source
Abstract:
In this thesis, improving the performance of the automatic recognition of Doppler radar targets is studied. The radar used in this study is a ground-surveillance Doppler radar. Target types are car, truck, bus, tank, helicopter, moving man and running man. The input of this thesis is the output of the real Doppler radar signals which are normalized and preprocessed (TRP vectors: Target Recognition Pattern vectors) in the doctorate thesis by Erdogan (2002). TRP vectors are normalized and homogenized Doppler radar target signals with respect to target speed, target aspect angle and target range. Some target classes have repetitions in time in their TRPs. By the use of these repetitions, improvement of the target type classification performance is studied. K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms are used for Doppler radar target classification and the results are evaluated. Before classification, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), NMF (Nonnegative Matrix Factorization) and ICA (Independent Component Analysis) are implemented and applied to normalized Doppler radar signals for feature extraction and dimension reduction in an efficient way. These techniques transform the input vectors, which are the normalized Doppler radar signals, to another space. The effects of the implementation of these feature extraction algorithms and the use of the repetitions in Doppler radar target signals on the Doppler radar target classification performance are studied.
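The processing chain described here (feature extraction and dimension reduction followed by KNN or SVM classification) can be sketched with scikit-learn. The random vectors below merely stand in for the TRP vectors; they are not the radar data used in the thesis.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-ins for normalized target-signature vectors (7 classes, 40 samples each).
rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 7, 40, 128
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Dimension reduction (PCA) followed by a KNN or SVM classifier, as in the abstract above.
for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)), ("SVM", SVC(kernel="rbf"))]:
    model = make_pipeline(PCA(n_components=20), clf)
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))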
Styles: APA, Harvard, Vancouver, ISO, and others.
10

Roberts, G. "Some aspects seismic signal processing and analysis." Thesis, Bangor University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379692.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, and others.
11

Weisenburger, Kenneth William. "Reflection seismic data acquisition and processing for enhanced interpretation of high resolution objectives." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/74518.

Full text of the source
Abstract:
Reflection seismic data were acquired (by CONOCO, Inc.) which targeted known channel interruption of an upper Pennsylvanian coal seam (Herrin #6) in the Illinois basin. The data were reprocessed and interpreted by the Regional Geophysics Laboratory, Virginia Tech. Conventional geophysical techniques involving field acquisition and data processing were modified to enhance and maintain high frequency content in the signal bandwidth. Single sweep processing was employed to increase spatial sampling density and reduce low pass filtering associated with the array response. Whitening of the signal bandwidth was accomplished using Vibroseis whitening (VSW) and stretched automatic gain control (SAGC). A zero-phase wavelet-shaping filter was used to optimize the waveform length allowing a thinner depositional sequence to be resolved. The high resolution data acquisition and processing led to an interpreted section which shows cyclic deposition in a deltaic environment. Complex channel development interrupted underlying sediments including the Herrin coal seam complex. Contrary to previous interpretations of channel development in the study area by Chapman and others (1981), and Nelson (1983), the channel has been interpreted as having bimodal structure leaving an "island" of undisturbed deposits. Channel activity affects the younger Pennsylvanian sediments and also the unconsolidated Pleistocene till. A limit to the eastern migration of channel development affecting the Pennsylvanian sediments considered in this study can be identified by the abrupt change in event characteristics.
Master of Science
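Automatic gain control of the general kind mentioned in the abstract can be sketched as a sliding-window amplitude normalization. This is a plain AGC on a synthetic trace with an arbitrary window length; the stretched variant (SAGC) and the Vibroseis whitening used in the thesis are not reproduced here.

import numpy as np

def agc(trace, window=51, eps=1e-12):
    # Divide each sample by the mean absolute amplitude in a centred window (plain AGC sketch).
    kernel = np.ones(window) / window
    envelope = np.convolve(np.abs(trace), kernel, mode="same")
    return trace / (envelope + eps)

# Synthetic decaying reflection trace: gain control restores the weak late arrivals.
t = np.linspace(0.0, 1.0, 1000)
trace = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 60.0 * t)
balanced = agc(trace)
print(np.ptp(trace[-10:]), np.ptp(balanced[-10:]))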
Styles: APA, Harvard, Vancouver, ISO, and others.
12

Scheuer, Timothy Ellis. "Complex seismic data : aspects of processing, analysis, and inversion." Thesis, University of British Columbia, 1989. http://hdl.handle.net/2429/29283.

Full text of the source
Abstract:
A common theme in this thesis is the use of complex signal attributes to facilitate the processing, analysis, and inversion of seismic data. Complex data are formed from real data by removing the negative frequency portion of the Fourier transform and doubling the positive frequency portion. Removing the dual nature of frequency components in real data allows the development of efficient algorithms for time-variant filtering, local phase velocity estimation, and subsequent velocity-depth inversion of shot gathers using Snell traces. For 1-D seismic data, I develop a computationally efficient time-variant filter with an idealized boxcar-like frequency spectrum. The filter can either be zero phase, or it can effect a time-variant phase rotation of the input data. The instantaneous low-frequency and high-frequency cutoffs are independent continuous time functions that are specified by the user. Frequency-domain rolloff characteristics of the filter can be prescribed, but the achieved spectrum depends on the length of the applied filter and the instantaneous frequency cutoffs used. The primary derivation of this theory depends upon the properties of the complex signal and the complex delta function. This formulation is particularly insightful because of the geometrical interpretation it offers in the frequency domain. Basically, a high-pass filter, can be implemented by shifting the Fourier transform of the complex signal towards the negative frequency band, annihilating that portion of the signal that lies to the left of the origin, and then shifting the truncated spectrum back to the right. This geometrical insight permits inference of the mathematical form of a general time-variant band-pass filter. In addition, I show that the time-variant filter reduces to a Hilbert transform filter when the derivation is constrained to include real signal input. Application of the procedure to a spectral function permits frequency-variant windowing of an input time signal. For 2-D arid 3-D seismic data, I propose a new method that uses the concepts of complex trace analysis for the automatic estimation of local phase velocity. A complex seismic record is obtained from a real seismic record by extending complex trace analysis into higher dimensions. Phase velocities are estimated from the complex data by finding trajectories of constant phase. In 2-D, phase velocity calculation reduces to a ratio of instantaneous frequency and wavenumber, and thus provides a measure of the dominant plane-wave component at each point in the seismic record. The algorithm is simple to implement and computational requirements are small; this is partly due to a new method for computing instantaneous frequency and wavenumber which greatly simplifies these calculations for 2-D and 3-D complex records. In addition, this approach has the advantage that no a priori velocity input is needed; however, optimum stability is achieved when a limited range of dipping events is considered. Preconditioning the record with an appropriate velocity filter helps reduce the detrimental effects of crossing events, spatial aliasing, and random noise contamination. Accurate recovery of local phase velocity information about underlying seismic events allows the rapid evaluation of seismic attributes such as rms velocity and maximum depth of ray penetration. I utilize local phase velocity data from a shot gather for the estimation and inversion of Snell traces. 
The primary Snell trace corresponding to a 1-D velocity model locates all primary reflection energy corresponding to a fixed emergence angle. Constraints on interval velocity and thickness obtained from several estimated Snell trajectories are inverted using SVD to provide a least squares velocity-depth model. The estimation and inversion are efficiently carried out on an interactive workstation utilizing constraints from a hyperbolic velocity analysis. Finally, Snell trace inversion is extended to an inhomogeneous medium. When dips are small, averaging Snell traces of a common phase velocity from forward and reversed shot gathers approximately removes the effects of planar dip. This allows recovery of velocity and depth vertically beneath the midpoint of the source locations used to obtain the reversed information.
Science, Faculty of
Earth, Ocean and Atmospheric Sciences, Department of
Graduate
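The complex (analytic) signal underlying this work can be formed with a Hilbert transform, after which instantaneous amplitude and frequency follow from its envelope and phase. The sketch below covers only the 1-D case on a synthetic chirp; the thesis' local phase-velocity estimate additionally requires the instantaneous wavenumber of a 2-D complex record.

import numpy as np
from scipy.signal import hilbert

fs = 1000.0                           # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * (20 * t + 15 * t**2))      # chirp whose frequency rises from 20 to 50 Hz

z = hilbert(x)                        # analytic (complex) signal: negative frequencies removed
amplitude = np.abs(z)                 # instantaneous amplitude (envelope)
phase = np.unwrap(np.angle(z))        # instantaneous phase
inst_freq = np.gradient(phase, t) / (2 * np.pi)   # instantaneous frequency in Hz

print(inst_freq[100], inst_freq[-100])   # roughly 23 Hz early on, roughly 47 Hz near the end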
Styles: APA, Harvard, Vancouver, ISO, and others.
13

Nedstrand, Paul, and Razmus Lindgren. "Test Data Post-Processing and Analysis of Link Adaptation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121589.

Full text of the source
Abstract:
Analysing the performance of cell phones and other devices wirelessly connected to mobile networks is key when validating whether the standard of the system is achieved. This justifies having testing tools that can produce a good overview of the data exchanged between base stations and cell phones to see the performance of the cell phone. This master thesis involves developing a tool that produces graphs with statistics from the traffic data in the communication link between a connected mobile device and a base station. The statistics will be the correlation between two parameters in the traffic data in the channel (e.g. throughput over the channel condition). The tool is oriented towards analysis of link adaptation, and from the produced graphs the testing personnel at Ericsson will be able to analyse the performance of one or several mobile devices. We performed our own analysis of link adaptation using the tool to show that this type of analysis is possible with it. To show that the tool is useful for Ericsson, we let test personnel answer a survey on its usability and user friendliness.
Styles: APA, Harvard, Vancouver, ISO, and others.
14

Dickinson, Keith William. "Traffic data capture and analysis using video image processing." Thesis, University of Sheffield, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306374.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, and others.
15

Valdivia, Paola Tatiana Llerena. "Graph signal processing for visual analysis and data exploration." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-15102018-165426/.

Full text of the source
Abstract:
Signal processing is used in a wide variety of applications, ranging from digital image processing to biomedicine. Recently, some tools from signal processing have been extended to the context of graphs, allowing its use on irregular domains. Among others, the Fourier Transform and the Wavelet Transform have been adapted to such context. Graph signal processing (GSP) is a new field with many potential applications on data exploration. In this dissertation we show how tools from graph signal processing can be used for visual analysis. Specifically, we proposed a data filtering method, based on spectral graph filtering, that led to high quality visualizations which were attested qualitatively and quantitatively. On the other hand, we relied on the graph wavelet transform to enable the visual analysis of massive time-varying data revealing interesting phenomena and events. The proposed applications of GSP to visually analyze data are a first step towards incorporating the use of this theory into information visualization methods. Many possibilities from GSP can be explored by improving the understanding of static and time-varying phenomena that are yet to be uncovered.
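A minimal sketch of spectral graph filtering, the mechanism behind the data-filtering method described above: expand a graph signal in the Laplacian eigenbasis (the graph Fourier transform), attenuate the high graph-frequency components, and transform back. The small path graph, the signal and the cutoff are arbitrary illustrations, not the dissertation's data.

import numpy as np

# Path graph with 8 nodes: adjacency matrix and combinatorial Laplacian.
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis: Laplacian eigenvectors ordered by eigenvalue ("graph frequency").
eigvals, U = np.linalg.eigh(L)

signal = np.array([0., 1., 0., 1., 5., 1., 0., 1.])   # spiky signal on the graph
coeffs = U.T @ signal                                  # graph Fourier transform
coeffs[eigvals > 1.0] = 0.0                            # ideal low-pass filter on graph frequencies
smoothed = U @ coeffs                                  # inverse transform
print(np.round(smoothed, 2))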
Styles: APA, Harvard, Vancouver, ISO, and others.
16

Ao, Sio-iong, and 區小勇. "Data mining algorithms for genomic analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38319822.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, and others.
17

Lu, Feng. "Big data scalability for high throughput processing and analysis of vehicle engineering data." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207084.

Full text of the source
Abstract:
"Sympathy for Data" is a platform used for Big Data automation analytics. It is based on a visual interface and workflow configurations. The main purpose of the platform is to reuse parts of code for structured analysis of vehicle engineering data. However, there are performance issues when processing a large amount of data in Sympathy for Data on a single machine, and disk and CPU I/O become bottlenecks when the data are too large to fit comfortably in memory. In addition, for data at the TB or PB level, Sympathy for Data needs separate functionality for efficient, simultaneous processing and must scale to distributed computation. This paper explores the possibilities and limitations of using the Sympathy for Data platform in various data analytic scenarios within the Volvo Cars vision and strategy. The project rewrites a CDE workflow of over 300 nodes into pure Python script code and makes it executable on Apache Spark and Dask infrastructure. We explore and compare both distributed computing frameworks deployed on Amazon Web Services EC2, using four machines of a 4x instance type for the distributed cluster measurements. The benchmark results show that Spark is superior to Dask from a performance perspective. Apache Spark and Dask are combined with the Sympathy for Data products into a Big Data processing engine that optimizes system disk and CPU I/O utilization. There are several challenges when using Spark and Dask to analyze large-scale scientific data: for instance, parallel file systems are shared among all computing machines, in contrast to shared-nothing architectures, and accessing data stored in commonly used scientific data formats such as HDF5 is not natively supported in Spark. This report presents research carried out on the next generation of Big Data platforms in the automotive industry, called "Sympathy for Data". The research questions focus on improving I/O performance and scalable distributed functionality to promote Big Data analytics. During this project, we used the Dask.Array parallelism features to interpret the data sources as a raster shown in table format, and Apache Spark as the data processing engine to load data sources into memory in parallel and improve big data computation capacity. The experiments chapter demonstrates a 640 GB engineering-data benchmark in single-node and distributed computation modes to evaluate Sympathy for Data's disk, CPU and memory metrics. Finally, the outcome of this project is a six-fold performance improvement over the original Sympathy for Data, achieved by developing a middleware, SparkImporter, which connects Sympathy for Data to Apache Spark for distributed computation and maximizes the utilization of system resources. This improves throughput, scalability, and performance, and increases the capacity of Sympathy for Data to process Big Data without requiring a dedicated big data cluster infrastructure.
Styles: APA, Harvard, Vancouver, ISO, and others.
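The central point of the abstract in entry 17, chunked and parallel evaluation so that the data never has to fit in memory at once, can be illustrated with a few lines of Dask. The array shape and chunk size are arbitrary, and this is not the SparkImporter middleware developed in the thesis.

import dask.array as da

# A roughly 16 GB array declared lazily in 128 MB chunks; nothing is materialized yet.
x = da.random.random((2_000_000, 1_000), chunks=(16_000, 1_000))

# Per-channel statistics are built as a task graph and executed chunk by chunk, in parallel.
channel_mean = x.mean(axis=0)
channel_std = x.std(axis=0)
mean_vals, std_vals = da.compute(channel_mean, channel_std)
print(mean_vals[:5], std_vals[:5])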
18

Park, Joonam. "A visualization system for nonlinear frame analysis." Thesis, Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/19172.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, and others.
19

Jones, Jonathan A. "Nuclear magnetic resonance data processing methods." Thesis, University of Oxford, 1992. http://ora.ox.ac.uk/objects/uuid:7df97c9a-4e65-4c10-83eb-dfaccfdccefe.

Full text of the source
Abstract:
This thesis describes the application of a wide variety of data processing methods, in particular the Maximum Entropy Method (MEM), to data from Nuclear Magnetic Resonance (NMR) experiments. Chapter 1 provides a brief introduction to NMR and to data processing, which is developed in chapter 2. NMR is described in terms of the classical model due to Bloch, and the principles of conventional (Fourier transform) data processing developed. This is followed by a description of less conventional techniques. The MEM is derived on several grounds, and related to both Bayesian reasoning and Shannon information theory. Chapter 3 describes several methods of evaluating the quality of NMR spectra obtained by a variety of data processing techniques; the simple criterion of spectral appearance is shown to be completely unsatisfactory. A Monte Carlo method is described which allows several different techniques to be compared, and the relative advantages of Fourier transformation and the MEM are assessed. Chapter 4 describes in vivo NMR, particularly the application of the MEM to data from Phase Modulated Rotating Frame Imaging (PMRFI) experiments. In this case the conventional data processing is highly unsatisfactory, and MEM processing results in much clearer spectra. Chapter 5 describes the application of a range of techniques to the estimation and removal of splittings from NMR spectra. The various techniques are discussed using simple examples, and then applied to data from the amino acid iso-leucine. The thesis ends with five appendices which contain historical and philosophical notes, detailed calculations pertaining to PMRFI spectra, and a listing of the MEM computer program.
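For contrast with the maximum-entropy processing discussed in the abstract, the conventional Fourier-transform route can be sketched in a few lines: a synthetic free-induction decay is apodized, zero-filled and transformed to a spectrum. The frequencies, decay rates and noise level are invented for illustration.

import numpy as np

fs = 2000.0                          # spectral width, Hz
t = np.arange(0, 1.0, 1.0 / fs)      # acquisition times
rng = np.random.default_rng(1)

# Synthetic FID: two decaying complex sinusoids plus noise.
fid = (np.exp(1j * 2 * np.pi * 120 * t) + 0.5 * np.exp(1j * 2 * np.pi * -340 * t)) * np.exp(-t / 0.2)
fid += 0.05 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))

# Exponential apodization (line broadening), zero filling, Fourier transform.
apodized = fid * np.exp(-t / 0.5)
spectrum = np.fft.fftshift(np.fft.fft(apodized, n=4 * t.size))
freqs = np.fft.fftshift(np.fft.fftfreq(4 * t.size, d=1.0 / fs))
print(freqs[np.argmax(np.abs(spectrum))])   # recovers the dominant line near +120 Hz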
Styles: APA, Harvard, Vancouver, ISO, and others.
20

Kelso, Janet. "The development and application of informatics-based systems for the analysis of the human transcriptome." Thesis, University of the Western Cape, 2003. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_5101_1185442672.

Full text of the source
Abstract:

Despite the fact that the sequence of the human genome is now complete it has become clear that the elucidation of the transcriptome is more complicated than previously expected. There is mounting evidence for unexpected and previously underestimated phenomena such as alternative splicing in the transcriptome. As a result, the identification of novel transcripts arising from the genome continues. Furthermore, as the volume of transcript data grows it is becoming increasingly difficult to integrate expression information which is from different sources, is stored in disparate locations, and is described using differing terminologies. Determining the function of translated transcripts also remains a complex task. Information about the expression profile – the location and timing of transcript expression – provides evidence that can be used in understanding the role of the expressed transcript in the organ or tissue under study, or in developmental pathways or disease phenotype observed.

In this dissertation I present novel computational approaches with direct biological applications to two distinct but increasingly important areas of research in gene expression research. The first addresses detection and characterisation of alternatively spliced transcripts. The second is the construction of an hierarchical controlled vocabulary for gene expression data and the annotation of expression libraries with controlled terms from the hierarchies. In the final chapter the biological questions that can be approached, and the discoveries that can be made using these systems are illustrated with a view to demonstrating how the application of informatics can both enable and accelerate biological insight into the human transcriptome.

Styles: APA, Harvard, Vancouver, ISO, and others.
21

Zhang, Yiqun. "Advances in categorical data clustering." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/658.

Full text of the source
Abstract:
Categorical data are common in various research areas, and clustering is a prevalent technique used for analyse them. However, two challenging problems are encountered in categorical data clustering analysis. The first is that most categorical data distance metrics were actually proposed for nominal data (i.e., a categorical data set that comprises only nominal attributes), ignoring the fact that ordinal attributes are also common in various categorical data sets. As a result, these nominal data distance metrics cannot account for the order information of ordinal attributes and may thus inappropriately measure the distances for ordinal data (i.e., a categorical data set that comprises only ordinal attributes) and mixed categorical data (i.e., a categorical data set that comprises both ordinal and nominal attributes). The second problem is that most hierarchical clustering approaches were actually designed for numerical data and have very high computation costs; that is, with time complexity O(N2) for a data set with N data objects. These issues have presented huge obstacles to the clustering analysis of categorical data. To address the ordinal data distance measurement problem, we studied the characteristics of ordered possible values (also called 'categories' interchangeably in this thesis) of ordinal attributes and propose a novel ordinal data distance metric, which we call the Entropy-Based Distance Metric (EBDM), to quantify the distances between ordinal categories. The EBDM adopts cumulative entropy as a measure to indicate the amount of information in the ordinal categories and simulates the thinking process of changing one's mind between two ordered choices to quantify the distances according to the amount of information in the ordinal categories. The order relationship and the statistical information of the ordinal categories are both considered by the EBDM for more appropriate distance measurement. Experimental results illustrate the superiority of the proposed EBDM in ordinal data clustering. In addition to designing an ordinal data distance metric, we further propose a unified categorical data distance metric that is suitable for distance measurement of all three types of categorical data (i.e., ordinal data, nominal data, and mixed categorical data). The extended version uniformly defines distances and attribute weights for both ordinal and nominal attributes, by which the distances measured for the two types of attributes of a mixed categorical data can be directly combined to obtain the overall distances between data objects with no information loss. Extensive experiments on all three types of categorical data sets demonstrate the effectiveness of the unified distance metric in clustering analysis of categorical data. To address the hierarchical clustering problem of large-scale categorical data, we propose a fast hierarchical clustering framework called the Growing Multi-layer Topology Training (GMTT). The most significant merit of this framework is its ability to reduce the time complexity of most existing hierarchical clustering frameworks (i.e., O(N2)) to O(N1.5) without sacrificing the quality (i.e., clustering accuracy and hierarchical details) of the constructed hierarchy. According to our design, the GMTT framework is applicable to categorical data clustering simply by adopting a categorical data distance metric. 
To make the GMTT framework suitable for the processing of streaming categorical data, we also provide an incremental version of GMTT that can dynamically adopt new inputs into the hierarchy via local updating. Theoretical analysis proves that the GMTT frameworks have time complexity O(N1.5). Extensive experiments show the efficacy of the GMTT frameworks and demonstrate that they achieve more competitive categorical data clustering performance by adopting the proposed unified distance metric.
Styles: APA, Harvard, Vancouver, ISO, and others.
22

Du, Toit André Johan. "Preslab - micro-computer analysis and design of prestressed concrete slabs." Master's thesis, University of Cape Town, 1988. http://hdl.handle.net/11427/17057.

Full text of the source
Abstract:
Bibliography: pages 128-132.
A micro-computer based package for the analysis and design of prestressed flat slabs is presented. The constant strain triangle and the discrete Kirchhoff plate bending triangle are combined to provide an efficient "shell" element. These triangles are used for the finite element analysis of prestressed flat slabs. An efficient out-of-core solver for sets of linear simultaneous equations is presented. This solver was developed especially for micro-computers. Subroutines for the design of prestressed flat slabs include the principal stresses in the top and bottom fibres of the plate, Wood/Armer moments and untensioned steel areas calculated according to Clark's recommendations. Extensive pre- and post-processing facilities are presented. Several plotting routines were developed to aid the user in his understanding of the behaviour of the structure under load and prestressing.
Styles: APA, Harvard, Vancouver, ISO, and others.
23

Apte, Madhav Vasudeo 1958. "Software modification and implementation for, and analysis of, lidar data." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276696.

Full text of the source
Abstract:
The software system to process integrated slant path lidar data has been debugged, modified, documented, and improved in reliability and user-friendliness. The substantial data set acquired since 1979 has been processed and a large body of results has been generated. A database has been implemented to store, organize, and access the results. The lidar data set results--the S ratios, the optical depths, and the mixing layer heights are presented. The seasonal dependence of the lidar solution parameters has been explored. The assumptions made in the lidar solution procedure are investigated. The sensitivity of the S ratio and the particulate extinction coefficient to the system calibration constant is examined. The reliability of the calibration constant is demonstrated by examining the particulate to Rayleigh extinction ratio values above the mixing layer.
Styles: APA, Harvard, Vancouver, ISO, and others.
24

Pan, Jian Jia. "EMD/BEMD improvements and their applications in texture and signal analysis." HKBU Institutional Repository, 2013. https://repository.hkbu.edu.hk/etd_oa/75.

Full text of the source
Abstract:
The combination of the well-known Hilbert spectral analysis (HAS) and the recently developed Empirical Mode Decomposition (EMD) designated as the Hilbert-Huang Transform (HHT) by Huang in 1998, represents a paradigm shift of data analysis methodology. The HHT is designed specifically for analyzing nonlinear and nonstationary data. The key part of HHT is EMD with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs). For two dimension, bidimensional IMFs (BIMFs) is decomposed by use of bidimensional EMD (BEMD). However, the HHT has some limitations in signal processing and image processing. This thesis addresses the problems of using HHT for signal and image processing. To reduce end effect in EMD, we propose a boundary extend method for EMD. A linear prediction based method combined with boundary extrema points information is employed to extend the signal, which reduces the end effect in EMD sifting process. It is a simple and effective method. In the EMD decomposition, interpolation method is another key point to get ideal components. The envelope mean in EMD is computed from the upper and lower envelopes by cubic spline interpolation, which has overshooting problem and is time-consuming. Based on the linear interpolation (straight line) method, we propose using the extrema points information to get the mean envelope, which is Extrema Mean Empirical Mode Decomposition (EMEMD). The mean envelope taken by EMEMD is smoother than EMD and the undershooting and overshooting problems in cubic spline are reduced compared with EMD. EMEMD also reduces the computation complex. Experimental results show the IMFs of EMEMD present more and clearer time-frequency information than EMD. Hilbert spectral of EMEMD is also clearer and more meaningful than EMD. Furthermore, based on the procedure of EMEMD method, a fast method to detect the frequency change location information of the piecewise stationary signal is also proposed, which is Extrema Points Empirical Mode Decomposition (EPEMD). Later, two applications based on the improved EMD/BEMD methods are proposed. One application is texture classification in image processing. A saddle points added BEMD is developed to supply multi-scale components (BIMFs) and Riesz transform is used to get the frequency domain characters of these BIMFs. Based on local descriptor Local Binary Pattern (LBP), two new features (based on BIMFs and based on Monogenic-BIMFs signals) are developed. In these new multi-scale components and frequency domain components, the LBP descriptor can achieve better performance than in original image. Experimental results show the texture images recognition rate based on our methods are better than other texture features methods. Another application is signal forecasting in one dimensional time series. EMEMD combined with Local Linear Wavelet Neural Network (LLWNN) for signal forecasting is proposed. The architecture is a decomposition-trend detection-forecasting-ensemble methodology. The EMEMD based decomposition forecasting method decomposed the time series into its basic components, and more accurate forecasts are obtained. In short, the main contributions of this thesis are summarized as following: 1. A boundary extension method is developed for one dimensional EMD. This extension method is based on linear prediction and end points adjusting. This extension method can reduce the end effect in EMD. 2. 
A saddle points added BEMD is developed to analysis and classify the texture images. This new BEMD detected more high oscillation in BIMFs and contributed for texture analysis. 3. A new texture analysis and classification method is proposed, which is based on BEMD (no/with saddle points), LBP and Riesz transform. The texture features based on BIMFs and BIMFs’ frequency domain 2D monogenic phase are developed. The performances and comparisons on the Brodatz, KTH-TIPS2a, CURet and Outex databases are reported. 4. An improved EMD method, EMEMD, is proposed to overcome the shortcoming in interpolation. EMEMD can provide more meaningful IMFs and it is also a fast decomposition method. The decomposition result and analysis in simulation temperature signal compare with Fourier transform, Wavelet transform are reported. 5. A forecasting methodology based on EMEMD and LLWNN is proposed. The architecture is a decomposition-trend detection-forecasting-ensemble methodology. The predicted results of Hong Kong Hang Seng Index and Global Land-Ocean Temperature Index are reported.
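A single sifting pass of the EMD procedure discussed above can be written compactly: locate the extrema, interpolate upper and lower envelopes, and subtract their mean. This sketch uses the cubic-spline envelopes that the thesis' EMEMD variant replaces with extrema means, and it ignores the boundary treatment that the thesis improves.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    # One EMD sifting iteration: remove the mean of the upper/lower cubic-spline envelopes.
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if maxima.size < 2 or minima.size < 2:
        return x                      # too few extrema: x is already a residual / trend
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - 0.5 * (upper + lower)

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)   # fast + slow component
h = sift_once(x, t)   # repeated sifting drives h towards the fast intrinsic mode function
print(np.corrcoef(h, np.sin(2 * np.pi * 25 * t))[0, 1])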
Styles: APA, Harvard, Vancouver, ISO, and others.
25

高銘謙 and Ming-him Ko. "A multi-agent model for DNA analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31222778.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, and others.
26

Tao, Yufei. "Indexing and query processing of spatio-temporal data /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20TAO.

Full text of the source
Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 208-215). Also available in electronic version. Access restricted to campus users.
Styles: APA, Harvard, Vancouver, ISO, and others.
27

Greenaway, Richard Scott. "Image processing and data analysis algorithm for application in haemocytometry." Thesis, University of Hertfordshire, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263063.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, and others.
28

Rivetti, di Val Cervo Nicolo. "Efficient Stream Analysis and its Application to Big Data Processing." Thesis, Nantes, 2016. http://www.theses.fr/2016NANT4046/document.

Full text of the source
Abstract:
Nowadays stream analysis is used in many context where the amount of data and/or the rate at which it is generated rules out other approaches (e.g., batch processing). The data streaming model provides randomized and/or approximated solutions to compute specific functions over (distributed) stream(s) of data-items in worst case scenarios, while striving for small resources usage. In particular, we look into two classical and related data streaming problems: frequency estimation and (distributed) heavy hitters. A less common field of application is stream processing which is somehow complementary and more practical, providing efficient and highly scalable frameworks to perform soft real-time generic computation on streams, relying on cloud computing. This duality allows us to apply data streaming solutions to optimize stream processing systems. In this thesis, we provide a novel algorithm to track heavy hitters in distributed streams and two extensions of a well-known algorithm to estimate the frequencies of data items. We also tackle two related problems and their solution: provide even partitioning of the item universe based on their weights and provide an estimation of the values carried by the items of the stream. We then apply these results to both network monitoring and stream processing. In particular, we leverage these solutions to perform load shedding as well as to load balance parallelized operators in stream processing systems
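As a concrete illustration of frequency estimation in the data-streaming model, here is a textbook Count-Min sketch. It is not one of the thesis' own algorithms, which extend and adapt this family of techniques to distributed streams and heavy-hitter tracking.

import random

class CountMinSketch:
    # Textbook Count-Min sketch: biased (over-)estimates of item frequencies in bounded memory.
    def __init__(self, width=2048, depth=4, seed=42):
        rnd = random.Random(seed)
        self.width, self.depth = width, depth
        self.salts = [rnd.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        for row, salt in enumerate(self.salts):
            yield row, hash((salt, item)) % self.width

    def add(self, item, count=1):
        for row, col in self._cells(item):
            self.table[row][col] += count

    def estimate(self, item):
        return min(self.table[row][col] for row, col in self._cells(item))

cms = CountMinSketch()
stream = ["a"] * 1000 + ["b"] * 10 + ["c"] * 3
for item in stream:
    cms.add(item)
print(cms.estimate("a"), cms.estimate("b"), cms.estimate("c"))   # estimates never undercount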
Styles: APA, Harvard, Vancouver, ISO, and others.
29

Patel, Ankur. "3D morphable models : data pre-processing, statistical analysis and fitting." Thesis, University of York, 2011. http://etheses.whiterose.ac.uk/1576/.

Full text of the source
Abstract:
This thesis presents research aimed at using a 3D linear statistical model (known as a 3D morphable model) of an object class (which could be faces, bodies, cars, etc) for robust shape recovery. Our aim is to use this recovered information for the purposes of potentially useful applications like recognition and synthesis. With a 3D morphable model as its central theme, this thesis includes: a framework for the groupwise processing of a set of meshes in dense correspondence; a new method for model construction; a new interpretation of the statistical constraints afforded by the model and addressing of some key limitations associated with using such models in real world applications. In Chapter 1 we introduce 3D morphable models, touch on the current state-of-the-art and emphasise why these models are an interesting and important research tool in the computer vision and graphics community. We then talk about the limitations of using such models and use these limitations as a motivation for some of the contributions made in this thesis. Chapter 2 presents an end-to-end system for obtaining a single (possibly symmetric) low resolution mesh topology and texture parameterisation which are optimal with respect to a set of high resolution input meshes in dense correspondence. These methods result in data which can be used to build 3D morphable models (at any resolution). In Chapter 3 we show how the tools of thin-plate spline warping and Procrustes analysis can be used to construct a morphable model as a shape space. We observe that the distribution of parameter vector lengths follows a chi-square distribution and discuss how the parameters of this distribution can be used as a regularisation constraint on the length of parameter vectors. In Chapter 4 we take the idea introduced in Chapter 3 further by enforcing a hard constraint which restricts faces to points on a hyperspherical manifold within the parameter space of a linear statistical model. We introduce tools from differential geometry (log and exponential maps for a hyperspherical manifold) which are necessary for developing our methodology and provide empirical validation to justify our choice of manifold. Finally, we show how to use these tools to perform model fitting, warping and averaging operations on the surface of this manifold. Chapter 5 presents a method to simplify a 3D morphable model without requiring knowledge of the training meshes used to build the model. This extends the simplification ideas in Chapter 2 into a statistical setting. The proposed method is based on iterative edge collapse and we show that the expected value of the Quadric Error Metric can be computed in closed form for a linear deformable model. The simplified models can used to achieve efficient multiscale fitting and super-resolution. In Chapter 6 we consider the problem of model dominance and show how shading constraints can be used to refine morphable model shape estimates, offering the possibility of exceeding the maximum possible accuracy of the model. We present an optimisation scheme based on surface normal error as opposed to image error. This ensures the fullest possible use of the information conveyed by the shading in an image. In addition, our framework allows non-model based estimation of per-vertex bump and albedo maps. This means the recovered model is capable of describing shape and reflectance phenomena not present in the training set. We explore the use of the recovered shape and reflectance information for face recognition and synthesis. 
Finally, in Chapter 7 we provide concluding remarks and discuss directions for future research.
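The log and exponential maps mentioned for Chapter 4 can be sketched compactly. The following is a minimal illustration assuming a unit-radius hypersphere and NumPy parameter vectors; the function names and the simple intrinsic-mean loop are our own, not the thesis implementation.

    import numpy as np

    def exp_map(mu, v):
        # Map a tangent vector v at base point mu (a unit vector) back onto the unit hypersphere.
        norm_v = np.linalg.norm(v)
        if norm_v < 1e-12:
            return mu.copy()
        return np.cos(norm_v) * mu + np.sin(norm_v) * (v / norm_v)

    def log_map(mu, x):
        # Map a point x on the unit hypersphere into the tangent space at mu.
        cos_theta = np.clip(np.dot(mu, x), -1.0, 1.0)
        theta = np.arccos(cos_theta)
        proj = x - cos_theta * mu
        if theta < 1e-12 or np.linalg.norm(proj) < 1e-12:
            return np.zeros_like(x)
        return theta * proj / np.linalg.norm(proj)

    def spherical_mean(points, iters=20):
        # Intrinsic (Karcher) mean: average in the tangent space, then map back to the sphere.
        mu = points[0] / np.linalg.norm(points[0])
        for _ in range(iters):
            tangent = np.mean([log_map(mu, p) for p in points], axis=0)
            mu = exp_map(mu, tangent)
        return mu

Averaging or warping shape parameter vectors on the manifold, rather than in the ambient linear space, keeps the results at a plausible distance from the mean shape.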
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Abu, Salih Bilal Ahmad Abdal Rahman. "Trustworthiness in Social Big Data Incorporating Semantic Analysis, Machine Learning and Distributed Data Processing." Thesis, Curtin University, 2018. http://hdl.handle.net/20.500.11937/70285.

Повний текст джерела
Анотація:
This thesis presents several state-of-the-art approaches constructed for the purpose of (i) studying the trustworthiness of users in Online Social Network platforms, (ii) deriving concealed knowledge from their textual content, and (iii) classifying and predicting the domain knowledge of users and their content. The developed approaches are refined through proof-of-concept experiments, several benchmark comparisons, and appropriate and rigorous evaluation metrics to verify and validate their effectiveness and efficiency, and hence, those of the applied frameworks.
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Yang, Hsueh-szu, Nathan Sadia, and Benjamin Kupferschmidt. "Designing an Object-Oriented Data Processing Network." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606204.

Повний текст джерела
Анотація:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
There are many challenging aspects to processing data from a modern high-performance data acquisition system. The sheer diversity of data formats and protocols makes it very difficult to create a data processing application that can properly decode and display all types of data. Many different tools need to be harnessed to process and display all types of data. Each type of data needs to be displayed on the correct type of display. In particular, it is very hard to synchronize the display of different types of data. This tends to be an error prone, complex and very time-consuming process. This paper discusses a solution to the problem of decoding and displaying many different types of data in the same system. This solution is based on the concept of a linked network of data processing nodes. Each node performs a particular task in the data decoding and/or analysis process. By chaining these nodes together in the proper sequence, we can define a complex decoder from a set of simple building blocks. This greatly increases the flexibility of the data visualization system while allowing for extensive code reuse.
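The linked-network idea can be illustrated in a few lines of code. This is a hedged Python sketch rather than the authors' implementation; the node classes and placeholder processing steps are invented for illustration.

    from abc import ABC, abstractmethod

    class Node(ABC):
        # A processing node transforms data and forwards the result to downstream nodes.
        def __init__(self):
            self.outputs = []

        def link_to(self, node):
            self.outputs.append(node)
            return node                       # returning the child allows chained linking

        def push(self, data):
            result = self.process(data)
            for node in self.outputs:
                node.push(result)

        @abstractmethod
        def process(self, data):
            ...

    class FrameSync(Node):
        def process(self, data):
            return data.strip()               # placeholder: locate frame boundaries

    class Decommutator(Node):
        def process(self, data):
            return data.split(",")            # placeholder: split a frame into parameters

    class Display(Node):
        def process(self, data):
            print("display:", data)
            return data

    # Chain simple building blocks into a decoder: sync -> decommutate -> display.
    source = FrameSync()
    source.link_to(Decommutator()).link_to(Display())
    source.push(" 101,17,42 ")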
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Neukirch, Maik. "Non Stationary Magnetotelluric Data Processing." Doctoral thesis, Universitat de Barcelona, 2014. http://hdl.handle.net/10803/284932.

Повний текст джерела
Анотація:
Studies have proven that the desired signal for Magnetotellurics (MT) in the electromagnetic (EM) field can be regarded as 'quasi stationary' (i.e. sufficiently stationary to apply a windowed Fourier transform). However, measured time series often contain environmental noise. Hence, they may not fulfill the stationarity requirement for the application of the Fourier Transform (FT) and therefore may lead to false or unreliable results under methods that rely on the FT. In light of the paucity of algorithms for MT data processing in the presence of non stationary noise, it is the goal of this thesis to elaborate a robust, non stationary algorithm, which can compete with sophisticated, state-of-the-art algorithms in terms of accuracy and precision. In addition, I prove mathematically the algorithm's viability and validate its superiority to other codes processing non stationary, synthetic and real MT data. Non stationary EM data may affect the computation of Fourier spectra in unforeseeable manners and consequently, the traditional estimation of the MT transfer functions (TF). The TF estimation scheme developed in this work is based on an emerging nonlinear, non stationary time series analysis tool, called Empirical Mode Decomposition (EMD). EMD decomposes time series into Intrinsic Mode Functions (IMF) in the time-frequency domain, which can be represented by the instantaneous parameters amplitude, phase and frequency. In the first part of my thesis, I show that time slices of well-defined IMFs equal time slices of Fourier Series, where the instantaneous parameters of the IMF define amplitude and phase of the Fourier Series parameters. Based on these findings I formulate the theorem that non stationary convolution of an IMF with a general time domain response function translates into a multiplication of the IMF with the respective spectral domain response function, which is explicitly permitted to vary over time. Further, I employ real-world MT data to illustrate that a de-trended signal's IMFs can be convolved independently and then be used for further time-frequency analysis as done for MT processing. In the second part of my thesis, I apply the newly formulated theorem to the MT method. The MT method analyses the correlation between the electric and magnetic field due to the conductivity structure of the subsurface. For sufficiently low frequencies (i.e. when the EM field interacts diffusively), the conductive body of the Earth acts as an inductive system response, which convolves with magnetic field variations and results in electric field variations. The frequency representation of this system response is commonly referred to as MT TF and its estimation from measured electric and magnetic time series is summarized as MT processing. The main contribution in this thesis is the design of the MT TF estimation algorithm based on EMD. In contrast to previous works that employ EMD for MT data processing, I (i) point out the advantages of a multivariate decomposition, (ii) highlight the possibility to use instantaneous parameters, and (iii) define the homogenization of frequency discrepancies between data channels. In addition, my algorithm estimates the transfer functions using robust statistical methods such as (i) robust principal component analysis and (ii) iteratively re-weighted least squares regression with a Huber weight function. Finally, TF uncertainties are estimated by iterating the complete robust regression, including the robust weight computation, by means of a bootstrap routine.
The proposed methodology is applied to synthetic and real data with and without non stationary character and the results are compared with other processing techniques. I conclude that non stationary noise can heavily affect Fourier based MT data processing but the presented non stationary approach is nonetheless able to extract the impedances correctly even when the other methods fail.
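The robust regression step named in the abstract (iteratively re-weighted least squares with a Huber weight function) can be sketched as follows. This is a simplified, real-valued illustration under our own assumptions; MT transfer function estimation actually works with complex spectra, and the function names are ours.

    import numpy as np

    def huber_weights(residuals, k=1.345):
        # Huber weight function: weight 1 inside the threshold, k/|r| outside.
        scale = 1.4826 * np.median(np.abs(residuals)) + 1e-12
        r = residuals / scale
        w = np.ones_like(r)
        outliers = np.abs(r) > k
        w[outliers] = k / np.abs(r[outliers])
        return w

    def irls(A, b, iters=10):
        # Iteratively re-weighted least squares: start from ordinary least squares,
        # then down-weight large residuals on each pass.
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        for _ in range(iters):
            w = huber_weights(b - A @ x)
            Aw = A * w[:, None]
            x = np.linalg.solve(Aw.T @ A, Aw.T @ b)
        return x

    rng = np.random.default_rng(1)
    A = np.column_stack([np.ones(50), rng.normal(size=50)])
    b = A @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=50)
    b[:3] += 10.0                              # gross outliers
    print(irls(A, b))                          # close to [2, -1] despite the outliers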
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Ren, Chenghui, and 任成會. "Algorithms for evolving graph analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/197105.

Повний текст джерела
Анотація:
In many applications, entities and their relationships are represented by graphs. Examples include social networks (users and friendship), the WWW (web pages and hyperlinks) and bibliographic networks (authors and co-authorship). In a dynamic world, information changes and so the graphs representing the information evolve with time. For example, a Facebook link between two friends is established, or a hyperlink is added to a web page. We propose that historical graph-structured data be archived for analytical processing. We call a historical evolving graph sequence an EGS. We study the problem of efficient query processing on an EGS, which has many applications in evolving graph analysis. To solve the problem, we propose a solution framework called FVF and a cluster-based LU decomposition algorithm called CLUDE, which can evaluate queries efficiently to support EGS analysis. The Find-Verify-and-Fix (FVF) framework applies to a wide range of queries. We demonstrate how some important graph measures, including shortest-path distance, closeness centrality and graph centrality, can be efficiently computed from EGSs using FVF. Since an EGS generally contains numerous large graphs, we also discuss several compact storage models that support our FVF framework. Through extensive experiments on both real and synthetic datasets, we show that our FVF framework is highly efficient in EGS query processing. A graph can be conveniently modeled by a matrix, from which various quantitative measures, such as PageRank, SALSA, Personalized PageRank and Random Walk with Restart, are derived. To compute these measures, linear systems of the form Ax = b, where A is a matrix that captures a graph's structure, need to be solved. To facilitate solving the linear system, the matrix A is often decomposed into two triangular matrices (L and U). In a dynamic world, the graph that models it changes with time, and so does the matrix A that represents the graph. We consider a sequence of evolving graphs and its associated sequence of evolving matrices. We study how LU-decomposition should be done over the sequence so that (1) the decomposition is efficient and (2) the resulting LU matrices best preserve the sparsity of the matrices A (i.e., the number of extra non-zero entries introduced in L and U is minimized). We propose a cluster-based algorithm CLUDE for solving the problem. Through an experimental study, we show that CLUDE is about an order of magnitude faster than the traditional incremental update algorithm. The number of extra non-zero entries introduced by CLUDE is also about an order of magnitude fewer than that of the traditional algorithm. CLUDE is thus an efficient algorithm for LU decomposition that produces high-quality LU matrices over an evolving matrix sequence.
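As a small illustration of why LU factors are useful for graph measures of the kind listed above, the sketch below computes Random Walk with Restart scores by factorising the system matrix once and re-solving for different restart vectors. It is illustrative only and is not the CLUDE algorithm, which additionally maintains the factors incrementally over the evolving matrix sequence.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def rwr_scores(adj, seed, c=0.85):
        # Random Walk with Restart: solve (I - c * W^T) x = (1 - c) * e_seed via LU.
        n = adj.shape[0]
        deg = adj.sum(axis=1)
        deg[deg == 0] = 1.0
        W = adj / deg[:, None]                 # row-stochastic transition matrix
        A = np.eye(n) - c * W.T                # system matrix capturing the graph structure
        lu, piv = lu_factor(A)                 # decompose once ...
        b = np.zeros(n)
        b[seed] = 1.0 - c
        return lu_solve((lu, piv), b)          # ... then solving for any seed vector is cheap

    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    print(rwr_scores(adj, seed=0))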
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Morgan, Clifford Owen. "Development of computer aided analysis and design software for studying dynamic process operability." Thesis, Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/10187.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Chen, Liang. "Performance analysis and improvement of parallel simulation." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/25477.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Yang, Bin, and 杨彬. "A novel framework for binning environmental genomic fragments." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45789344.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Gu, Lifang. "Video analysis in MPEG compressed domain." University of Western Australia. School of Computer Science and Software Engineering, 2003. http://theses.library.uwa.edu.au/adt-WU2003.0016.

Повний текст джерела
Анотація:
The amount of digital video has been increasing dramatically due to technological advances in video capturing, storage, and compression. The usefulness of vast repositories of digital information is limited by the effectiveness of the access methods, as shown by the Web explosion. The key issues in addressing the access methods are those of content description and of information space navigation. While textual documents in digital form are somewhat self-describing (i.e., they provide explicit indices, such as words and sentences that can be directly used to categorise and access them), digital video does not provide such an explicit content description. In order to access video material in an effective way, without looking at the material in its entirety, it is therefore necessary to analyse and annotate video sequences, and provide an explicit content description targeted to the user needs. Digital video is a very rich medium, and the characteristics in which users may be interested are quite diverse, ranging from the structure of the video to the identity of the people who appear in it, their movements and dialogues and the accompanying music and audio effects. Indexing digital video, based on its content, can be carried out at several levels of abstraction, beginning with indices like the video program name and name of subject, to much lower level aspects of video like the location of edits and motion properties of video. Manual video indexing requires the sequential examination of the entire video clip. This is a time-consuming, subjective, and expensive process. As a result, there is an urgent need for tools to automate the indexing process. In response to such needs, various video analysis techniques from the research fields of image processing and computer vision have been proposed to parse, index and annotate the massive amount of digital video data. However, most of these video analysis techniques have been developed for uncompressed video. Since most video data are stored in compressed formats for efficiency of storage and transmission, it is necessary to perform decompression on compressed video before such analysis techniques can be applied. Two consequences of having to first decompress before processing are incurring computation time for decompression and requiring extra auxiliary storage. To save on the computational cost of decompression and lower the overall size of the data which must be processed, this study attempts to make use of features available in compressed video data and proposes several video processing techniques operating directly on compressed video data. Specifically, techniques of processing MPEG-1 and MPEG-2 compressed data have been developed to help automate the video indexing process. This includes the tasks of video segmentation (shot boundary detection), camera motion characterisation, and highlights extraction (detection of skin-colour regions, text regions, moving objects and replays) in MPEG compressed video sequences. The approach of performing analysis on the compressed data has the advantages of dealing with a much reduced data size and is therefore suitable for computationally-intensive low-level operations. Experimental results show that most analysis tasks for video indexing can be carried out efficiently in the compressed domain.
Once intermediate results, which are dramatically reduced in size, are obtained from the compressed domain analysis, partial decompression can be applied to enable high resolution processing to extract high level semantic information.
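One of the compressed-domain tasks mentioned above, shot boundary detection, can be approximated from the DC-coefficient 'thumbnail' images alone. The sketch below assumes the DC images have already been extracted from the MPEG stream; the histogram distance and threshold are our own choices, not those of the thesis.

    import numpy as np

    def shot_boundaries(dc_frames, bins=32, threshold=0.4):
        # Detect abrupt cuts by comparing normalised grey-level histograms of
        # consecutive DC-coefficient images.
        cuts, prev_hist = [], None
        for i, dc in enumerate(dc_frames):
            hist, _ = np.histogram(dc, bins=bins, range=(0, 255))
            hist = hist / hist.sum()
            if prev_hist is not None:
                d = 0.5 * np.abs(hist - prev_hist).sum()   # L1 distance in [0, 1]
                if d > threshold:
                    cuts.append(i)
            prev_hist = hist
        return cuts

    # Toy example: ten dark frames followed by ten bright frames -> one cut at index 10.
    frames = [np.full((18, 22), 40.0)] * 10 + [np.full((18, 22), 200.0)] * 10
    print(shot_boundaries(frames))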
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Bhatt, Mittal Gopalbhai. "Detecting glaucoma in biomedical data using image processing /." Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/939.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
39

DeLongchamp, Sarah R. "Bioinformatics analysis of predicted S/MARS and associated stowaway transposon locations in the Gramineae." Virtual Press, 2007. http://liblink.bsu.edu/uhtbin/catkey/1380099.

Повний текст джерела
Анотація:
Scaffold/matrix attachment regions (S/MARs) are sequences of DNA that anchor chromatin to the nuclear matrix and function in gene expression, chromatin organization, and conformation. Current identification tools in eukaryotes rely on a small population of known S/MARs as search criteria. This study presents a bioinformatics prediction of S/MARs across various genomes using the program Basic Local Alignment Search Tool (BLAST), providing an opportunity to identify putative S/MARs for further characterization and a novel application of BLAST for S/MAR identification. Two wheat S/MARs were used to identify homologous sequences within the true grasses (Gramineae). The evidence suggests that S/MARs are prolific in Gramineae species, specifically in the related tribe Triticeae. In addition, stowaway-like sequences are present in association with predicted S/MARs within Gramineae species; they are proposed to be the product of an unknown duplication mechanism and bear no significant association with S/MARs.
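The search strategy described above can be reproduced, in outline, with the standard BLAST+ command line driven from Python. The sketch below is an assumption-laden illustration: the FASTA file of wheat S/MAR queries and the pre-built Gramineae database named here are hypothetical, and blastn must be installed and on the PATH.

    import csv
    import subprocess

    QUERY = "wheat_smars.fasta"        # hypothetical query file with the two wheat S/MARs
    DB = "gramineae_genome_db"         # hypothetical pre-built BLAST nucleotide database

    def run_blast(query=QUERY, db=DB, evalue=1e-5):
        # Run blastn with tabular output (outfmt 6) and return the hits as dictionaries.
        result = subprocess.run(
            ["blastn", "-query", query, "-db", db, "-evalue", str(evalue), "-outfmt", "6"],
            capture_output=True, text=True, check=True)
        hits = []
        for row in csv.reader(result.stdout.splitlines(), delimiter="\t"):
            hits.append({"query": row[0], "subject": row[1], "identity": float(row[2]),
                         "sstart": int(row[8]), "send": int(row[9]), "evalue": float(row[10])})
        return hits

    if __name__ == "__main__":
        for hit in run_blast():
            print(hit["subject"], hit["sstart"], hit["send"], hit["evalue"])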
Department of Biology
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Abdalla, Taysir. "Performance analysis of disk mirroring techniques." FIU Digital Commons, 1994. http://digitalcommons.fiu.edu/etd/1061.

Повний текст джерела
Анотація:
Unequal improvements in processor and I/O speeds have made many applications, such as databases and operating systems, increasingly I/O bound. Many schemes such as disk caching and disk mirroring have been proposed to address the problem. In this thesis we focus only on disk mirroring. In disk mirroring, a logical disk image is maintained on two physical disks, allowing a single disk failure to be transparent to application programs. Although disk mirroring improves data availability and reliability, it has two major drawbacks. First, writes are expensive because both disks must be updated. Second, load balancing during failure mode operation is poor because all requests are serviced by the surviving disk. Distorted mirrors was proposed to address the write problem and interleaved declustering to address the load balancing problem. In this thesis we perform a comparative study of these two schemes under various operating modes. In addition we also study traditional mirroring to provide a common basis for comparison.
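The two drawbacks named in the abstract can be made tangible with a toy simulation: every write costs work on both disks, and after a failure all reads fall on the surviving disk. The request mix and counts below are arbitrary, and queueing effects are ignored.

    import random

    def simulate(requests=100_000, read_fraction=0.7, failed=False):
        # Count how many I/Os each physical disk in a mirrored pair services.
        load = [0, 0]
        for _ in range(requests):
            if random.random() < read_fraction:
                disk = 0 if failed else random.randint(0, 1)   # reads balance across both disks
                load[disk] += 1
            else:
                load[0] += 1                                    # writes must update every copy
                if not failed:
                    load[1] += 1
        return load

    print("normal mode :", simulate())
    print("failure mode:", simulate(failed=True))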
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Landström, Anders. "Adaptive tensor-based morphological filtering and analysis of 3D profile data." Licentiate thesis, Luleå tekniska universitet, Signaler och system, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26510.

Повний текст джерела
Анотація:
Image analysis methods for processing 3D profile data have been investigated and developed. These methods include: image reconstruction by prioritized incremental normalized convolution; morphology-based crack detection for steel slabs; and adaptive morphology based on the local structure tensor. The methods have been applied to a number of industrial applications. An issue with 3D profile data captured by laser triangulation is occlusion, which occurs when the line-of-sight between the projected laser light and the camera sensor is obstructed. To overcome this problem, interpolation of missing surface data in rock piles has been investigated, and a novel interpolation method has been developed for filling in missing pixel values iteratively from the edges of the reliable data using normalized convolution. 3D profile data of the steel surface has been used to detect longitudinal cracks in cast steel slabs. Segmentation of the data is done using mathematical morphology, and the resulting connected regions are assigned a crack probability estimate based on a statistical logistic regression model. More specifically, the morphological filtering locates trenches in the data, excludes scale regions from further analysis, and finally links crack segments together in order to obtain a segmented region which receives a crack probability based on its depth and length. Also suggested is a novel method for adaptive mathematical morphology intended to improve crack segment linking, i.e. for bridging gaps in the crack signature in order to increase the length of potential crack segments. Standard morphology operations rely on a predefined structuring element which is repeatedly used for each pixel in the image. The outline of a crack, however, can range from a straight line to a zig-zag pattern. A more adaptive method for linking regions with a large enough estimated crack depth would therefore be beneficial. More advanced morphological approaches, such as morphological amoebas and path openings, adapt better to curvature in the image. For our purpose, however, we investigate how the local structure tensor can be used to adaptively assign to each pixel an elliptical structuring element based on the local orientation within the image. The information from the local structure tensor directly defines the shape of the elliptical structuring element, and the resulting morphological filtering successfully enhances crack signatures in the data.
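The step from the local structure tensor to an orientation-adapted elliptical structuring element can be sketched as follows. This is our own minimal illustration, not the thesis implementation; note that the tensor's dominant eigenvector gives the gradient direction, so a crack-linking element would be elongated perpendicular to it.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_orientation(image, sigma=2.0):
        # Structure tensor from smoothed gradient products; returns the dominant
        # gradient orientation (radians) and an anisotropy measure in [0, 1] per pixel.
        gy, gx = np.gradient(image.astype(float))
        Jxx = gaussian_filter(gx * gx, sigma)
        Jxy = gaussian_filter(gx * gy, sigma)
        Jyy = gaussian_filter(gy * gy, sigma)
        theta = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)
        root = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
        lam1, lam2 = 0.5 * (Jxx + Jyy + root), 0.5 * (Jxx + Jyy - root)
        anisotropy = (lam1 - lam2) / (lam1 + lam2 + 1e-12)
        return theta, anisotropy

    def elliptical_se(theta, long_axis=9, short_axis=3):
        # Binary elliptical structuring element elongated along orientation theta.
        r = long_axis // 2
        yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
        xr = xx * np.cos(theta) + yy * np.sin(theta)
        yr = -xx * np.sin(theta) + yy * np.cos(theta)
        return (xr / (long_axis / 2.0)) ** 2 + (yr / (short_axis / 2.0)) ** 2 <= 1.0

In use, one would build the element with theta + pi/2 at pixels whose anisotropy is high, so that closing operations bridge gaps along the crack rather than across it.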
Approved; 2012; 20121017 (andlan); LICENTIATE SEMINAR Subject: Signal Processing Examiner: Senior Lecturer Matthew Thurley, Institutionen för system- och rymdteknik, Luleå tekniska universitet Discussant: Associate Professor Cris Luengo, Centre for Image Analysis, Uppsala Date: Wednesday, 21 November 2012, 12:30 Venue: A1545, Luleå tekniska universitet
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Roberts, J. (Juho). "Iterative root cause analysis using data mining in software testing processes." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201604271548.

Повний текст джерела
Анотація:
In order to remain competitive, companies need to be constantly vigilant and aware of the current trends in the industry in which they operate. The terms big data and data mining have exploded in popularity in recent years, and will continue to do so with the launch of the internet of things (IoT) and the 5th generation of mobile networks (5G) in the next decade. Companies need to recognize the value of the big data they are generating in their day-to-day operations, and learn how and why to exploit data mining techniques to extract the most knowledge out of the data their customers and the company itself are generating. The root cause analysis of faults uncovered during base station system testing is a difficult process due to the profound complexity caused by the multi-disciplinary nature of a base station system, and the sheer volume of log data output by the numerous system components. The goal of this research is to investigate whether data mining can be exploited to conduct root cause analysis. The research took the form of action research and was conducted in industry at an organisational unit responsible for the research and development of mobile base station equipment. In this thesis, we survey existing literature on how data mining has been used to address root cause analysis. We then propose a novel, iterative approach to the root cause analysis process based on data mining. We use the data mining tool Splunk in this thesis as an example; however, the practices presented in this research can be applied to other similar tools. We conduct root cause analysis by mining system logs generated by mobile base stations, to investigate which system component is causing the base station to fall short of its performance specifications. We then evaluate and validate our hypotheses by conducting a training session for the test engineers to collect feedback on the suitability of data mining in their work. The results from the evaluation show that, amongst other benefits, data mining makes root cause analysis more efficient and also makes bug reporting in the target organisation more complete. We conclude that data mining techniques can be a significant asset in root cause analysis. The efficiency gains are significant in comparison to the manual root cause analysis currently being conducted at the target organisation.
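The grouping step at the heart of this kind of log mining can be shown in plain Python (the thesis itself uses Splunk). The log-line layout, file name and severity labels below are hypothetical; the point is only the aggregation of fault indications per software component.

    import re
    from collections import Counter

    # Hypothetical layout: "<timestamp> <severity> <component>: <message>".
    LINE_RE = re.compile(r"^\S+\s+(ERROR|WARN)\s+(\S+):")

    def component_error_counts(path):
        # Aggregate ERROR/WARN lines per component so the root cause analysis can start
        # from the component that dominates the failure signature.
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as handle:
            for line in handle:
                match = LINE_RE.match(line)
                if match:
                    counts[match.group(2)] += 1
        return counts.most_common(10)

    if __name__ == "__main__":
        for component, n in component_error_counts("bts_system.log"):
            print(f"{component:30s} {n}")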
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Forshed, Jenny. "Processing and analysis of NMR data : Impurity determination and metabolic profiling." Doctoral thesis, Stockholm : Dept. of analytical chemistry, Stockholm university, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-712.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Frogner, Gary Russell. "Monitoring of global acoustic transmissions : signal processing and preliminary data analysis." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28379.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Hitchcock, Jonathan James. "Automated processing and analysis of gas chromatography/mass spectrometry screening data." Thesis, University of Bedfordshire, 2009. http://hdl.handle.net/10547/134940.

Повний текст джерела
Анотація:
The work presented is a substantial addition to the established methods of analysing the data generated by gas chromatography and low-resolution mass spectrometry. It has applications where these techniques are used on a large scale for screening complex mixtures, including urine samples for sports drug surveillance. The analysis of such data is usually automated to detect peaks in the chromatograms and to search a library of mass spectra of banned or unwanted substances. The mass spectra are usually not exactly the same as those in the library, so to avoid false negatives the search must report many doubtful matches. Nearly all the samples in this type of screening are actually negative, so the process of checking the results is tedious and time-consuming. A novel method, called scaled subtraction, takes each scan from the test sample and subtracts a mass spectrum taken from a second similar sample. The aim is that the signal from any substance common to the two samples will be eliminated. Provided that the second sample does not contain the specified substances, any which are present in the first sample can be more easily detected in the subtracted data. The spectrum being subtracted is automatically scaled to allow for compounds that are common to both samples but with different concentrations. Scaled subtraction is implemented as part of a systematic approach to preprocessing the data. This includes a new spectrum-based alignment method that is able to precisely adjust the retention times so that corresponding scans of the second sample can be chosen for the subtraction. This approach includes the selection of samples based on their chromatograms. For this, new measures of similarity or dissimilarity are defined. The thesis presents the theoretical foundation for such measures based on mass spectral similarity. A new type of difference plot can highlight significant differences. The approach has been tested, with the encouraging result that there are less than half as many false matches compared with when the library search is applied to the original data. True matches of compounds of interest are still reported by the library search of the subtracted data.
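The core operation, scaled subtraction of one scan from another, can be sketched in a few lines. The least-squares choice of scale factor below is our own illustration; the thesis derives its own automatic scaling and couples it with retention-time alignment.

    import numpy as np

    def scaled_subtraction(test_scan, reference_scan):
        # Subtract a reference mass spectrum from a test-sample scan after scaling the
        # reference so that components common to both samples cancel as far as possible.
        # Scale s chosen by least squares: argmin_s || test - s * reference ||^2.
        denom = np.dot(reference_scan, reference_scan)
        s = np.dot(test_scan, reference_scan) / denom if denom > 0 else 0.0
        residual = test_scan - s * reference_scan
        return np.clip(residual, 0.0, None), s        # negative intensities are not meaningful

    # Toy spectra on a common m/z axis: the test sample has an extra peak at index 6.
    reference = np.array([0, 40, 5, 0, 120, 10, 0, 0, 30, 0], dtype=float)
    test = 1.8 * reference
    test[6] += 55.0
    clean, scale = scaled_subtraction(test, reference)
    print(scale, clean.round(1))                      # scale ~1.8; only the extra peak remains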
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Vitale, Raffaele. "Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90442.

Повний текст джерела
Анотація:
The present Ph.D. thesis, primarily conceived to support and reinforce the relation between academic and industrial worlds, was developed in collaboration with Shell Global Solutions (Amsterdam, The Netherlands) in the endeavour of applying and possibly extending well-established latent variable-based approaches (i.e. Principal Component Analysis - PCA - Partial Least Squares regression - PLS - or Partial Least Squares Discriminant Analysis - PLSDA) for complex problem solving not only in the fields of manufacturing troubleshooting and optimisation, but also in the wider environment of multivariate data analysis. To this end, novel efficient algorithmic solutions are proposed throughout all chapters to address very disparate tasks, from calibration transfer in spectroscopy to real-time modelling of streaming flows of data. The manuscript is divided into the following six parts, focused on various topics of interest: Part I - Preface, where an overview of this research work, its main aims and justification is given together with a brief introduction on PCA, PLS and PLSDA; Part II - On kernel-based extensions of PCA, PLS and PLSDA, where the potential of kernel techniques, possibly coupled to specific variants of the recently rediscovered pseudo-sample projection, formulated by the English statistician John C. Gower, is explored and their performance compared to that of more classical methodologies in four different applications scenarios: segmentation of Red-Green-Blue (RGB) images, discrimination of on-/off-specification batch runs, monitoring of batch processes and analysis of mixture designs of experiments; Part III - On the selection of the number of factors in PCA by permutation testing, where an extensive guideline on how to accomplish the selection of PCA components by permutation testing is provided through the comprehensive illustration of an original algorithmic procedure implemented for such a purpose; Part IV - On modelling common and distinctive sources of variability in multi-set data analysis, where several practical aspects of two-block common and distinctive component analysis (carried out by methods like Simultaneous Component Analysis - SCA - DIStinctive and COmmon Simultaneous Component Analysis - DISCO-SCA - Adapted Generalised Singular Value Decomposition - Adapted GSVD - ECO-POWER, Canonical Correlation Analysis - CCA - and 2-block Orthogonal Projections to Latent Structures - O2PLS) are discussed, a new computational strategy for determining the number of common factors underlying two data matrices sharing the same row- or column-dimension is described, and two innovative approaches for calibration transfer between near-infrared spectrometers are presented; Part V - On the on-the-fly processing and modelling of continuous high-dimensional data streams, where a novel software system for rational handling of multi-channel measurements recorded in real time, the On-The-Fly Processing (OTFP) tool, is designed; Part VI - Epilogue, where final conclusions are drawn, future perspectives are delineated, and annexes are included.
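Part III concerns choosing the number of PCA components by permutation testing. The sketch below shows one simple variant of that idea, namely comparing each singular value with a null distribution obtained from column-wise permutations; it is our own simplified scheme, not the specific algorithm developed in the thesis.

    import numpy as np

    def pca_rank_by_permutation(X, n_perm=100, alpha=0.05, max_rank=None, seed=0):
        # Keep adding components while the observed singular value is larger than what
        # column-permuted (i.e. structure-free) data would produce by chance.
        rng = np.random.default_rng(seed)
        Xc = X - X.mean(axis=0)
        sv = np.linalg.svd(Xc, compute_uv=False)
        max_rank = max_rank or len(sv)
        rank = 0
        for k in range(max_rank):
            null = np.empty(n_perm)
            for p in range(n_perm):
                Xp = np.column_stack([rng.permutation(Xc[:, j]) for j in range(Xc.shape[1])])
                null[p] = np.linalg.svd(Xp, compute_uv=False)[k]
            p_value = (np.sum(null >= sv[k]) + 1) / (n_perm + 1)
            if p_value > alpha:
                break
            rank += 1
        return rank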
Vitale, R. (2017). Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90442
TESIS
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Lynch, Thomas J. III, Thomas E. Fortmann, Howard Briscoe, and Sanford Fidell. "MULTIPROCESSOR-BASED DATA ACQUISITION AND ANALYSIS." International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614478.

Повний текст джерела
Анотація:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California
Multiprocessing computer systems offer several attractive advantages for telemetry-related data acquisition and processing applications. These include: (1) high-bandwidth, fail-soft operation with convenient, low-cost growth paths, (2) cost-effective integration and clustering of data acquisition, decommutation, monitoring, archiving, analysis, and display processing, and (3) support for modern telemetry system architectures that allow concurrent network access to test data (for both real-time and post-test analyses) by multiple analysts. This paper asserts that today’s general-purpose hardware and software offer viable platforms for these applications. One such system, currently under development, closely couples VME data buses and other off-the-shelf components, parallel processing computers, and commercial data analysis packages to acquire, process, display, and analyze telemetry and other data from a major weapon system. This approach blurs the formerly clear architectural distinction in telemetry data processing systems between special-purpose, front-end, preprocessing hardware and general-purpose, back-end, host computers used for further processing and display.
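The clustering of acquisition and analysis stages described here can be caricatured, on modern commodity hardware, as a multi-process pipeline connected by queues. The sketch below is a deliberately small stand-in for the idea of concurrent front-end and back-end processing, not a model of the VME-based system in the paper.

    import multiprocessing as mp

    def acquire(out_q, n_frames=100):
        # Front-end stage: emit raw "telemetry frames" (here just numbered payloads).
        for i in range(n_frames):
            out_q.put({"frame": i, "raw": list(range(8))})
        out_q.put(None)                        # sentinel: no more data

    def analyse(in_q, out_q):
        # Back-end stage: process each frame independently of acquisition.
        while (item := in_q.get()) is not None:
            item["mean"] = sum(item["raw"]) / len(item["raw"])
            out_q.put(item)
        out_q.put(None)

    if __name__ == "__main__":
        raw_q, result_q = mp.Queue(), mp.Queue()
        stages = [mp.Process(target=acquire, args=(raw_q,)),
                  mp.Process(target=analyse, args=(raw_q, result_q))]
        for s in stages:
            s.start()
        while (result := result_q.get()) is not None:
            pass                               # a real system would archive or display here
        for s in stages:
            s.join()
        print("pipeline complete")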
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Windhorst, Anita Cornelia [Verfasser]. "Transcriptome analysis in preterm infants developing bronchopulmonary dysplasia : data processing and statistical analysis of microarray data / Anita Cornelia Windhorst." Gießen : Universitätsbibliothek, 2015. http://d-nb.info/1078220395/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Williamson, Lance K. "ROPES : an expert system for condition analysis of winder ropes." Master's thesis, University of Cape Town, 1990. http://hdl.handle.net/11427/15982.

Повний текст джерела
Анотація:
Includes bibliographical references.
This project was commissioned in order to provide engineers with the necessary knowledge of steel wire winder ropes so that they may make accurate decisions as to when a rope is near the end of its useful life. For this purpose, a knowledge base was compiled from the experience of experts in the field in order to create an expert system to aid the engineer in his task. The EXSYS expert system shell was used to construct a rule-based program which would be run on a personal computer. The program derived in this thesis is named ROPES, and provides information as to the forms of damage that may be present in a rope and the effect of any defects on rope strength and rope life. Advice is given as to the procedures that should be followed when damage is detected as well as the conditions which would necessitate rope discard and the urgency with which the replacement should take place. The expert system program will provide engineers with the necessary expertise and experience to assess, more accurately than at present, the condition of a winder rope. This should lead to longer rope life and improved safety with the associated cost savings. Rope assessment will also be more uniform with changes to policy being able to be implemented quickly and on an ongoing basis as technology and experience improves. The program ROPES, although compiled from expert knowledge, still requires the further input of personal opinions and inferences to some extent. For this reason, the program cannot be assumed infallible and must be used as an aid only.
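A rule-based condition assessment of this kind reduces, at its simplest, to ordered if-then rules over inspection findings. The sketch below is purely illustrative: the indicators, thresholds and advice texts are invented and do not come from the ROPES knowledge base or any discard standard.

    # Ordered rules: the first one whose condition fires determines the recommendation.
    RULES = [
        {"when": lambda f: f["broken_wires_per_lay"] >= 6,
         "then": ("discard", "Broken-wire count exceeds the allowable limit.")},
        {"when": lambda f: f["diameter_loss_pct"] >= 10,
         "then": ("discard", "Diameter reduction indicates severe wear or core damage.")},
        {"when": lambda f: f["corrosion"] == "severe",
         "then": ("discard", "Severe corrosion: replace the rope promptly.")},
        {"when": lambda f: f["broken_wires_per_lay"] >= 3,
         "then": ("monitor", "Increase inspection frequency; damage is progressing.")},
    ]

    def assess(findings):
        for rule in RULES:
            if rule["when"](findings):
                return rule["then"]
        return ("continue", "No discard criteria met; continue normal inspections.")

    print(assess({"broken_wires_per_lay": 4, "diameter_loss_pct": 3, "corrosion": "light"}))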
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Noras, James M., Steven M. R. Jones, Haile S. Rajamani, Simon J. Shepherd, and Eetvelt Peter Van. "Improvements in and Relating to Processing Apparatus & Method." European Patent Office, 2004. http://hdl.handle.net/10454/2742.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.