Ready-made bibliography on the topic "Data analysis and interpretation techniques"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles

Select a source type:

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Data analysis and interpretation techniques".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read its abstract online, whenever the corresponding parameters are available in the metadata.

Journal articles on the topic "Data analysis and interpretation techniques"

1

Wu, Yanping, and Md Habibur Rahman. "Analysis of Structured Data in Biomedicine Using Soft Computing Techniques and Computational Analysis". Computational Intelligence and Neuroscience 2022 (October 10, 2022): 1–11. http://dx.doi.org/10.1155/2022/4711244.

Full text source
Abstract:
In the field of biomedicine, enormous data are generated in a structured and unstructured form every day. Soft computing techniques play a major role in the interpretation and classification of the data to make appropriate decisions for making policies. The field of medical science and biomedicine needs efficient soft computing-based methods which can process all kinds of data, such as structured, categorical, and unstructured data, to generate meaningful outcomes for decision-making. Soft computing methods allow clustering of similar data, classification of data, predictions from big-data analysis, and decision-making on the basis of analysis of data. A novel method is proposed in the paper using soft computing methods where clustering mechanisms and classification mechanisms are used to process the biomedicine data for productive outcomes. Fuzzy logic and C-means clustering are devised as a collaborative approach to analyze the biomedicine data by reducing the time and space complexity of the clustering solutions. This research work considers categorical, numeric, and structured data for the interpretation of data to make further decisions. Timely decisions are especially important in the field of biomedicine because human health and human lives are involved, and delays in decision-making may threaten human lives. The COVID-19 situation was a recent example where timely diagnosis and interpretations played significant roles in saving the lives of people. Therefore, this research work has attempted to use soft computing techniques for the successful clustering of similar medical data and for quicker interpretation of data to support the decision-making processes related to medical fields.
APA, Harvard, Vancouver, ISO, and other styles
2

Fisher, M., and E. Hunter. "Digital imaging techniques in otolith data capture, analysis and interpretation". Marine Ecology Progress Series 598 (June 28, 2018): 213–31. http://dx.doi.org/10.3354/meps12531.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Dobson, Scott, Jennifer Dabelstein, Anita Bagley, and Jon Davids. "Interpretation of kinematic data: Visual vs. computer-based analysis techniques". Gait & Posture 7, no. 2 (March 1998): 182–83. http://dx.doi.org/10.1016/s0966-6362(98)90277-6.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Bornik, Alexander, and Wolfgang Neubauer. "3D Visualization Techniques for Analysis and Archaeological Interpretation of GPR Data". Remote Sensing 14, no. 7 (April 1, 2022): 1709. http://dx.doi.org/10.3390/rs14071709.

Full text source
Abstract:
The non-invasive detection and digital documentation of buried archaeological heritage by means of geophysical prospection is increasingly gaining importance in modern field archaeology and archaeological heritage management. It frequently provides the detailed information required for heritage protection or targeted further archaeological research. High-resolution magnetometry and ground-penetrating radar (GPR) became invaluable tools for the efficient and comprehensive non-invasive exploration of complete archaeological sites and archaeological landscapes. The analysis and detailed archaeological interpretation of the resulting large 2D and 3D datasets, and related data from aerial archaeology or airborne remote sensing, etc., is a time-consuming and complex process, which requires the integration of all data at hand, respective three-dimensional imagination, and a broad understanding of the archaeological problem; therefore, informative 3D visualizations supporting the exploration of complex 3D datasets and supporting the interpretative process are in great demand. This paper presents a novel integrated 3D GPR interpretation approach, centered around the flexible 3D visualization of heterogeneous data, which supports conjoint visualization of scenes composed of GPR volumes, 2D prospection imagery, and 3D interpretative models. We found that the flexible visual combination of the original 3D GPR datasets and images derived from the data applying post-processing techniques inspired by medical image analysis and seismic data processing contribute to the perceptibility of archaeologically relevant features and their respective context within a stratified volume. Moreover, such visualizations support the interpreting archaeologists in their development of a deeper understanding of the complex datasets as a starting point for and throughout the implemented interactive interpretative process.
APA, Harvard, Vancouver, ISO, and other styles
5

Thomas, Sabu K., and K. T. Thomachen. "Biodiversity Studies and Multicollinearity in Multivariate Data Analysis". Mapana - Journal of Sciences 6, no. 1 (May 31, 2007): 27–35. http://dx.doi.org/10.12723/mjs.10.2.

Full text source
Abstract:
Multicollinearity of explanatory variables often threatens statistical interpretation of ecological data analysis in biodiversity studies. Using litter ants as an example, the impact of multicollinearity on ecological multiple regression and the complications arising from collinearity are explained. We list the various statistical techniques available for enhancing the reliability and interpretation of ecological multiple regressions in the presence of multicollinearity.
APA, Harvard, Vancouver, ISO, and other styles
6

Razminia, K., A. Hashemi, A. Razminia, and D. Baleanu. "Explicit Deconvolution of Well Test Data Dominated by Wellbore Storage". Abstract and Applied Analysis 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/912395.

Full text source
Abstract:
This paper addresses some methods for interpretation of oil and gas well test data distorted by wellbore storage effects. Using these techniques, we can deconvolve pressure and rate data from drawdown and buildup tests dominated by wellbore storage. Some of these methods have the advantage of deconvolving the pressure data without rate measurement. The two important methods that are applied in this study are an explicit deconvolution method and a modification of material balance deconvolution method. In cases with no rate measurements, we use a blind deconvolution method to restore the pressure response free of wellbore storage effects. Our techniques detect the afterflow/unloading rate function with explicit deconvolution of the observed pressure data. The presented techniques can unveil the early time behavior of a reservoir system masked by wellbore storage effects and thus provide powerful tools to improve pressure transient test interpretation. Each method has been validated using both synthetic data and field cases and each method should be considered valid for practical applications.
APA, Harvard, Vancouver, ISO, and other styles
7

Yamada, Ryo, Daigo Okada, Juan Wang, Tapati Basak, and Satoshi Koyama. "Interpretation of omics data analyses". Journal of Human Genetics 66, no. 1 (May 8, 2020): 93–102. http://dx.doi.org/10.1038/s10038-020-0763-5.

Full text source
Abstract:
Omics studies attempt to extract meaningful messages from large-scale and high-dimensional data sets by treating the data sets as a whole. The concept of treating data sets as a whole is important in every step of the data-handling procedures: the pre-processing step of data records, the step of statistical analyses and machine learning, translation of the outputs into human natural perceptions, and acceptance of the messages with uncertainty. In the pre-processing, the methods by which to control the data quality and batch effects are discussed. For the main analyses, the approaches are divided into two types and their basic concepts are discussed. The first type is the evaluation of many items individually, followed by interpretation of individual items in the context of multiple testing and combination. The second type is the extraction of fewer important aspects from the whole data records. The outputs of the main analyses are translated into natural languages with techniques such as annotation and ontology. The other technique for making the outputs perceptible is visualization. At the end of this review, one of the most important issues in the interpretation of omics data analyses is discussed. Omics studies have a large amount of information in their data sets, and every approach reveals only a very restricted aspect of the whole data sets. The understandable messages from these studies have unavoidable uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
8

Pavlopoulos, Sotiris, Trias Thireou, George Kontaxakis, and Andres Santos. "Analysis and interpretation of dynamic FDG PET oncological studies using data reduction techniques". BioMedical Engineering OnLine 6, no. 1 (2007): 36. http://dx.doi.org/10.1186/1475-925x-6-36.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
9

Kendrick, Sarah K., Qi Zheng, Nichola C. Garbett, and Guy N. Brock. "Application and interpretation of functional data analysis techniques to differential scanning calorimetry data from lupus patients". PLOS ONE 12, no. 11 (November 9, 2017): e0186232. http://dx.doi.org/10.1371/journal.pone.0186232.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
10

Renqi, Jiang, John P. Castagna, and Wu Jian. "Applications of high-resolution seismic frequency and phase attribute analysis techniques". Earth sciences and subsoil use 45, no. 4 (January 8, 2023): 324–44. http://dx.doi.org/10.21285/2686-9993-2022-45-4-324-344.

Full text source
Abstract:
Seismic prospecting for oil and gas exploration and development is limited by seismic data resolution. Improving the accuracy of quantitative interpretation of seismic data in thin layers, thereby identifying effective reservoirs and delineating favorable areas, can be a key factor for successful exploration and development. Historically, the limit of seismic resolution is usually assumed to be about 1/4 wavelength of the dominant frequency of the data in the formation of interest. Constrained seismic reflectivity inversion can resolve thinner layers than this assumed limit. This has led to the development of a series of high-resolution quantitative interpretation methods and techniques. Case studies in carbonate, clastic, and unconventional reservoirs indicate that the application of quantitative interpretation techniques such as high-resolution seismic frequency and phase attribute analysis can resolve, and/or allow quantitative estimation of, rock and fluid properties in such seismically thin layers. Band recovery using high-resolution seismic processing technology can greatly improve the ability to recognize geological details such as thin layers, faults, and karst caves. Multiscale fault detection technology can effectively detect small-scale faults in addition to more readily recognized large-scale faults. Based on traditional seismic amplitude information, high-resolution spectral decomposition and phase decomposition technology expands seismic attribute analysis to the frequency and phase dimensions, boosting the interpretable geological information content of the seismic data, including subsurface geological characteristics and hydrocarbon potential, and thereby improving the reliability of seismic interpretation. These technologies, based on high-resolution quantitative interpretation techniques, make the identification of effective reservoirs more efficient and accurate.
APA, Harvard, Vancouver, ISO, and other styles

Doctoral dissertations on the topic "Data analysis and interpretation techniques"

1

Vitale, Raffaele. "Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation". Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90442.

Full text source
Abstract:
The present Ph.D. thesis, primarily conceived to support and reinforce the relation between academic and industrial worlds, was developed in collaboration with Shell Global Solutions (Amsterdam, The Netherlands) in the endeavour of applying and possibly extending well-established latent variable-based approaches (i.e. Principal Component Analysis - PCA - Partial Least Squares regression - PLS - or Partial Least Squares Discriminant Analysis - PLSDA) for complex problem solving not only in the fields of manufacturing troubleshooting and optimisation, but also in the wider environment of multivariate data analysis. To this end, novel efficient algorithmic solutions are proposed throughout all chapters to address very disparate tasks, from calibration transfer in spectroscopy to real-time modelling of streaming flows of data. The manuscript is divided into the following six parts, focused on various topics of interest: Part I - Preface, where an overview of this research work, its main aims and justification is given together with a brief introduction on PCA, PLS and PLSDA; Part II - On kernel-based extensions of PCA, PLS and PLSDA, where the potential of kernel techniques, possibly coupled to specific variants of the recently rediscovered pseudo-sample projection, formulated by the English statistician John C. Gower, is explored and their performance compared to that of more classical methodologies in four different applications scenarios: segmentation of Red-Green-Blue (RGB) images, discrimination of on-/off-specification batch runs, monitoring of batch processes and analysis of mixture designs of experiments; Part III - On the selection of the number of factors in PCA by permutation testing, where an extensive guideline on how to accomplish the selection of PCA components by permutation testing is provided through the comprehensive illustration of an original algorithmic procedure implemented for such a purpose; Part IV - On modelling common and distinctive sources of variability in multi-set data analysis, where several practical aspects of two-block common and distinctive component analysis (carried out by methods like Simultaneous Component Analysis - SCA - DIStinctive and COmmon Simultaneous Component Analysis - DISCO-SCA - Adapted Generalised Singular Value Decomposition - Adapted GSVD - ECO-POWER, Canonical Correlation Analysis - CCA - and 2-block Orthogonal Projections to Latent Structures - O2PLS) are discussed, a new computational strategy for determining the number of common factors underlying two data matrices sharing the same row- or column-dimension is described, and two innovative approaches for calibration transfer between near-infrared spectrometers are presented; Part V - On the on-the-fly processing and modelling of continuous high-dimensional data streams, where a novel software system for rational handling of multi-channel measurements recorded in real time, the On-The-Fly Processing (OTFP) tool, is designed; Part VI - Epilogue, where final conclusions are drawn, future perspectives are delineated, and annexes are included.
Vitale, R. (2017). Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90442
APA, Harvard, Vancouver, ISO, and other styles
2

Smith, Eugene Herbie. "An analytical framework for monitoring and optimizing bank branch network efficiency / E.H. Smith". Thesis, North-West University, 2009. http://hdl.handle.net/10394/5029.

Full text source
Abstract:
Financial institutions make use of a variety of delivery channels for servicing their customers. The primary channel utilised as a means of acquiring new customers and increasing market share is the retail branch network. The 1990s saw the Internet explosion and, with it, a threat to branches. The relatively low cost associated with virtual delivery channels made it inevitable for financial institutions to direct their focus towards such new and more cost-efficient technologies. By the beginning of the 21st century, and with increasing limitations identified in alternative virtual delivery channels, the financial industry returned to a more balanced view, which may be seen as the revival of branch networks. The main purpose of this study is to provide a roadmap for financial institutions in managing their branch network. A three-step methodology, representative of data mining and management science techniques, will be used to explain relative branch efficiency. The methodology consists of clustering analysis (CA), data envelopment analysis (DEA) and decision tree induction (DTI). CA is applied to data internal to the financial institution for increasing the discriminatory power of DEA. DEA is used to calculate the relevant operating efficiencies of branches deemed homogeneous during CA. Finally, DTI is used to interpret the DEA results and additional data describing the market environment the branch operates in, as well as inquiring into the nature of the relative efficiency of the branch.
Thesis (M.Com. (Computer Science))--North-West University, Potchefstroom Campus, 2010.
APA, Harvard, Vancouver, ISO, and other styles
3

Carter, Duane B. "Analysis of Multiresolution Data fusion Techniques". Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36609.

Full text source
Abstract:
In recent years, as the availability of remote sensing imagery of varying resolution has increased, merging images of differing spatial resolution has become a significant operation in the field of digital remote sensing. This practice, known as data fusion, is designed to enhance the spatial resolution of multispectral images by merging a relatively coarse-resolution image with a higher resolution panchromatic image of the same geographic area. This study examines properties of fused images and their ability to preserve the spectral integrity of the original image. It analyzes five current data fusion techniques for three complex scenes to assess their performance. The five data fusion models used include one spatial domain model (High-Pass Filter), two algebraic models (Multiplicative and Brovey Transform), and two spectral domain models (Principal Components Transform and Intensity-Hue-Saturation). SPOT data were chosen for both the panchromatic and multispectral data sets. These data sets were chosen for the high spatial resolution of the panchromatic (10 meters) data, the relatively high spectral resolution of the multispectral data, and the low spatial resolution ratio of two to one (2:1). After the application of the data fusion techniques, each merged image was analyzed statistically, graphically, and for increased photointerpretive potential as compared with the original multispectral images. While all of the data fusion models distorted the original multispectral imagery to an extent, both the Intensity-Hue-Saturation Model and the High-Pass Filter model maintained the original qualities of the multispectral imagery to an acceptable level. The High-Pass Filter model, designed to highlight the high frequency spatial information, provided the most noticeable increase in spatial resolution.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
4

Astbury, S. "Analysis and interpretation of full waveform sonic data". Thesis, University of Oxford, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371535.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Gimblett, Brian James. "The application of artificial intelligence techniques to data interpretation in analytical chemistry". Thesis, University of Salford, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395862.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Lahouar, Samer. "Development of Data Analysis Algorithms for Interpretation of Ground Penetrating Radar Data". Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/11051.

Full text source
Abstract:
According to a 1999 Federal Highway Administration statistic, the U.S. has around 8.2 million lane-miles of roadways that need to be maintained and rehabilitated periodically. Therefore, in order to reduce rehabilitation costs, pavement engineers need to optimize the rehabilitation procedure, which is achieved by accurately knowing the existing pavement layer thicknesses and localization of subsurface defects. Currently, the majority of departments of transportation (DOTs) rely on coring as a means to estimate pavement thicknesses, instead of using other nondestructive techniques, such as Ground Penetrating Radar (GPR). The use of GPR as a nondestructive pavement assessment tool is limited mainly due to the difficulty of GPR data interpretation, which requires experienced operators. Therefore, GPR results are usually subjective and inaccurate. Moreover, GPR data interpretation is very time-consuming because of the huge amount of data collected during a survey and the lack of reliable GPR data-interpretation software. This research effort attempts to overcome these problems by developing new GPR data analysis techniques that allow thickness estimation and subsurface defect detection from GPR data without operator intervention. The data analysis techniques are based on an accurate modeling of the propagation of the GPR electromagnetic waves through the pavement dielectric materials while traveling from the GPR transmitter to the receiver. Image-processing techniques are also applied to detect layer boundaries and subsurface defects. The developed data analysis techniques were validated utilizing data collected from an experimental pavement system: the Virginia Smart Road. The layer thickness error achieved by the developed system was around 3%. The conditions needed to achieve reliable and accurate results from GPR testing were also established.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
7

Pinpart, Tanya. "Techniques for analysis and interpretation of UHF partial discharge signals". Thesis, University of Strathclyde, 2010. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=12830.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Deng, Xinping. "Texture analysis and physical interpretation of polarimetric SAR data". Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/396607.

Full text source
Abstract:
This thesis is dedicated to the study of texture analysis and physical interpretation of PolSAR data. As the starting point, a complete survey of the statistical models for PolSAR data is conducted. All the models are classified into three categories: Gaussian distributions, texture models, and finite mixture models. The texture models, which assume that the randomness of the SAR data is due to two unrelated factors, texture and speckle, are the main subject of this study. The PDFs of the scattering vector and the sample covariance matrix in different models are reviewed. Since many models have been proposed, choosing the most accurate one for a test dataset is a big challenge. Methods which analyze different polarimetric channels separately or require a filtering of the data are limited in many cases, especially when it comes to high-resolution data. In this thesis, the L2-norms of the scattering vectors are studied, and they are found to be advantageous for extracting statistical information from PolSAR data. Statistics based on the L2-norms can be utilized to determine what distribution the data actually follow. A number of models have been suggested to model the texture of PolSAR data, and some are very complex, but most of them lack a physical explanation. The random walk model, which can be interpreted as a discrete analog of the SAR data focusing process, is studied with the objective of understanding the data statistics from the point of view of the scattering process. A simulator based on the random walk model is developed, where different variations in the scatterer types and scatterer numbers are considered. It builds a bridge between the mathematical models and underlying physical mechanisms. It is found that both the mixture and the texture could give the same statistics, such as log-cumulants of the second and third order. The two concepts, texture and mixture, represent two quite different scenarios. A further study was carried out to see if it is possible to distinguish them, and higher-order statistics are demonstrated to be favorable in this task. They can be physically interpreted to distinguish the scattering from a single type of target from that of a mixture of targets.
APA, Harvard, Vancouver, ISO, and other styles
9

Fitzgerald, Tomas W. "Data analysis methods for copy number discovery and interpretation". Thesis, Cranfield University, 2014. http://dspace.lib.cranfield.ac.uk/handle/1826/10002.

Full text source
Abstract:
Copy number variation (CNV) is an important type of genetic variation that can give rise to a wide variety of phenotypic traits. Differences in copy number are thought to play major roles in processes that involve dosage-sensitive genes, providing beneficial, deleterious or neutral modifications to individual phenotypes. Copy number analysis has long been a standard in clinical cytogenetic laboratories. Gene deletions and duplications can often be linked with genetic syndromes such as: the 7q11.23 deletion of Williams-Beuren syndrome, the 22q11 deletion of DiGeorge syndrome and the 17q11.2 duplication of Potocki-Lupski syndrome. Interestingly, copy number based genomic disorders often display reciprocal deletion/duplication syndromes, with the latter frequently exhibiting milder symptoms. Moreover, the study of chromosomal imbalances plays a key role in cancer research. The datasets used for the development of analysis methods during this project are generated as part of the cutting-edge translational project, Deciphering Developmental Disorders (DDD). This project, the DDD, is the first of its kind and will directly apply state-of-the-art technologies, in the form of ultra-high-resolution microarray and next generation sequencing (NGS), to real-time genetic clinical practice. It is a collaboration between the Wellcome Trust Sanger Institute (WTSI) and the National Health Service (NHS) involving the 24 regional genetic services across the UK and Ireland. Although the application of DNA microarrays for the detection of CNVs is well established, individual change point detection algorithms often display variable performances. The definition of an optimal set of parameters for achieving a certain level of performance is rarely straightforward, especially where data qualities vary.
APA, Harvard, Vancouver, ISO, and other styles
10

Venugopal, Niveditha. "Annotation-Enabled Interpretation and Analysis of Time-Series Data". PDXScholar, 2018. https://pdxscholar.library.pdx.edu/open_access_etds/4708.

Full text source
Abstract:
As we continue to produce large amounts of time-series data, the need for data analysis is growing rapidly to help gain insights from this data. These insights form the foundation of data-driven decisions in various aspects of life. Data annotations are information about the data such as comments, errors and provenance, which provide context to the underlying data and aid in meaningful data analysis in domains such as scientific research, genomics and ECG analysis. Storing such annotations in the database along with the data makes them available to help with analysis of the data. In this thesis, I propose a user-friendly technique for Annotation-Enabled Analysis through which a user can employ annotations to help query and analyze data without having prior knowledge of the details of the database schema or any kind of database programming language. The proposed technique receives the request for analysis as a high-level specification, hiding the details of the schema, joins, etc., and parses it, validates the input and converts it into SQL. This SQL query can then be executed in a relational database and the result of the query returned to the user. I evaluate this technique by providing real-world data from a building-data platform containing data about Portland State University buildings such as room temperature, air volume and CO2 level. This data is annotated with information such as class schedules, power outages and control modes (for example, day or night mode). I test my technique with three increasingly sophisticated levels of use cases drawn from this building science domain. (1) Retrieve data with include or exclude annotation selection (2) Correlate data with include or exclude annotation selection (3) Align data based on include annotation selection to support aggregation over multiple periods. I evaluate the technique by performing two kinds of tests: (1) To validate correctness, I generate synthetic datasets for which I know the expected result of these annotation-enabled analyses and compare the expected results with the results generated from my technique (2) I evaluate the performance of the queries generated by this service with respect to execution time in the database by comparing them with alternative SQL translations that I developed.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Data analysis and interpretation techniques"

1

Taylor, John K. Statistical techniques for data analysis. Chelsea, Mich: Lewis Publishers, 1990.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Cihon, Cheryl, ed. Statistical techniques for data analysis. 2nd ed. Boca Raton: Chapman & Hall/CRC, 2004.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Guy, Christopher S., Michael L. Brown, and American Fisheries Society, eds. Analysis and interpretation of freshwater fisheries data. Bethesda, Md.: American Fisheries Society, 2007.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Matrix population models: Construction, analysis, and interpretation. Sunderland, Mass: Sinauer Associates, 1989.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Digital analysis of remotely sensed imagery. New York: McGraw-Hill, 2009.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Dua, Sumeet. Computational analysis of the human eye with applications. Singapore: World Scientific, 2011.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
7

Cressie, Noel A. C. Statistics for spatial data. New York: Wiley, 1991.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Cressie, Noel A. C. Statistics for spatial data. New York: Wiley, 1993.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
9

Keren, Gideon, and Charles Lewis, eds. A handbook for data analysis in the behavioral sciences: Methodological issues. Hillsdale, N.J.: L. Erlbaum Associates, 1993.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
10

Data analysis techniques. Harlow: Pearson Education Limited, 2006.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Data analysis and interpretation techniques"

1

Ogiela, Lidia, and Marek R. Ogiela. "Understanding-based image analysis systems". In Cognitive Techniques in Visual Data Interpretation, 75–78. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02693-5_6.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Sucharita, V., P. Venkateswara Rao, and Pellakuri Vidyullatha. "Big Data Analysis, Interpretation, and Management for Secured Smart Health Care". In Big Data Analytics and Intelligent Techniques for Smart Cities, 73–91. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003187356-4.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Sreenivasarao, Vuda, and Venkata Subbareddy Pallamreddy. "Advanced Data Warehousing Techniques for Analysis, Interpretation and Decision Support of Scientific Data". In Advances in Computing and Information Technology, 162–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22555-0_18.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Rawi, Norkhairani Abdul, Norhasiza Mat Jusoh, Mohd Nordin Abdul Rahman, Abd Rasid Mamat, and Mokhairi Makhtar. "Image Segmentation Techniques to Support Manual Chest X-Ray Interpretation". In Digital Economy, Business Analytics, and Big Data Analytics Applications, 11–20. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05258-3_2.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Jenkins, L. M. "Data analysis". In Numerical Techniques, 62–97. London: CRC Press, 2023. http://dx.doi.org/10.1201/9781003422013-4.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Molinero-Parejo, Ramón. "Geographically Weighted Methods to Validate Land Use Cover Maps". In Land Use Cover Datasets and Validation Tools, 255–65. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90998-7_13.

Full text source
Abstract:
Among the most commonly used techniques for validating Land Use Cover (LUC) maps are the accuracy assessment statistics derived from the cross-tabulation matrix. However, although these accuracy metrics are applied to spatial data, this does not mean that they produce spatial results. The overall, user’s and producer’s accuracy metrics provide global information for the entire area analysed, but shed no light on possible variations in accuracy at different points within this area, a shortcoming that has been widely criticized. To address this issue, a series of techniques have been developed to integrate a spatial component into these accuracy assessment statistics for the analysis and validation of LUC maps. Geographically Weighted Regression (GWR) is a local technique for estimating the relationship of a dependent variable with respect to one or more independent variables or explanatory factors. However, unlike traditional regression techniques, it considers the distance between data points when estimating the coefficients of the regression points using a moving window. Hence, it assumes that geographic data are non-stationary, i.e., they vary over space. Geographically weighted methods provide a non-stationary analysis, which can reveal the spatial relationships between reference data obtained from a LUC map and classified data. Specifically, logistic GWR is used in this chapter to estimate the accuracy of each LUC data point, so allowing us to observe the spatial variation in overall, user’s and producer’s accuracies. A specific tool (Local accuracy assessment statistics) was specially developed for this practical exercise, aimed at validating a Land Use Cover map. The Marqués de Comillas region was selected as the study area for implementing this tool and demonstrating its applicability. For the calculation of the user’s and producer’s accuracy metrics, we selected the tropical rain forest category [50] as an example. Furthermore, a series of maps were obtained by interpolating the results of the tool, so enabling a visual interpretation and a description of the spatial distribution of error and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
7

Owen, Gwilym, Yu Chen, Gwilym Pryce, Tim Birabi, Hui Song, and Bifeng Wang. "Deprivation Indices in China: Establishing Principles for Application and Interpretation". In The Urban Book Series, 305–27. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74544-8_14.

Full text source
Abstract:
Indices of Multiple Deprivation (IMDs) aim to measure living standards at the small area level. These indices were originally developed in the United Kingdom, but there is a growing interest in adapting them for use in China. However, due to data limitations, Chinese deprivation indices sometimes diverge considerably in approaches and are not always connected with the underlying concepts within UK analysis. In this paper, we seek to bring direction and conceptual rigour to this nascent literature by establishing a set of core principles for IMD estimation that are relevant and feasible in the Chinese context. These principles are based on specifying deprivation domains from theory, selecting the most appropriate measurements for these domains, and then applying rigorous statistical techniques to combine them into an IMD. We apply these principles to create an IMD for Shijiazhuang, the capital city of Hebei Province. We use this to investigate the spatial patterns of deprivation in Shijiazhuang, focussing on clustering and centralisation of deprivation as well as exploring different deprivation typologies. We highlight two distinct types of deprived areas. One is clustered in industrial areas on the edge of the city, while the second is found more centrally and contains high proportions of low-skilled service workers.
APA, Harvard, Vancouver, ISO, and other styles
8

Akram, Jubran. "Microseismic Data Interpretation". In Understanding Downhole Microseismic Data Analysis, 153–79. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-34017-9_5.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
9

Amjath-Babu, T. S., Santiago Lopez Riadura, and Timothy J. Krupnik. "Agriculture, Food and Nutrition Security: Concept, Datasets and Opportunities for Computational Social Science Applications". In Handbook of Computational Social Science for Policy, 215–29. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-16624-2_11.

Full text source
Abstract:
Ensuring food and nutritional security requires effective policy actions that consider the multitude of direct and indirect drivers. The limitations of data and tools to unravel complex impact pathways to nutritional outcomes have constrained efficient policy actions in both developed and developing countries. Novel digital data sources and innovations in computational social science have resulted in new opportunities for understanding complex challenges and deriving policy outcomes. The current chapter discusses the major issues in the agriculture and nutrition data interface and provides a conceptual overview of analytical possibilities for deriving policy insights. The chapter also discusses emerging digital data sources, modelling approaches, machine learning and deep learning techniques that can potentially revolutionize the analysis and interpretation of nutritional outcomes in relation to food production, supply chains, food environment, individual behaviour and external drivers. An integrated data platform for digital diet data and nutritional information is required for realizing the presented possibilities.
APA, Harvard, Vancouver, ISO, and other styles
10

Raal, J. David, and Andreas L. Mühlbauer. "Techniques for HPVLE Data Interpretation". In Phase Equilibria, 343–52. Boca Raton: Routledge, 2023. http://dx.doi.org/10.1201/9780203743621-18.

Full text source
APA, Harvard, Vancouver, ISO, and other styles

Conference abstracts on the topic "Data analysis and interpretation techniques"

1

Loiselet, M. "SAR images interpretation using data analysis techniques". In IEE Colloquium on Polarisation in Radar. IEE, 1996. http://dx.doi.org/10.1049/ic:19960434.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Rajendran, Shobana. "Image Retrieval Techniques, Analysis and Interpretation for Leukemia Data Sets". In Distributed Computing. IEEE, 2011. http://dx.doi.org/10.1109/snpd.2011.46.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Cooper, Gordon R. J. "A new semiautomatic interpretation technique for aeromagnetic data". In Proceedings of the International Conference on Numerical Analysis and Applied Mathematics 2014 (ICNAAM-2014). AIP Publishing LLC, 2015. http://dx.doi.org/10.1063/1.4913063.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

El Amine Senoussaoui, Mohammed, Issouf Fofana, and Mostefa Brahami. "Influence of Oil Quality on the Interpretation of Dissolved Gas Analysis Data". In 2021 IEEE 5th International Conference on Condition Assessment Techniques in Electrical Systems (CATCON). IEEE, 2021. http://dx.doi.org/10.1109/catcon52335.2021.9670513.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Michael, Nikolaos A., Christian Scheibe, and Neil W. Craigie. "Automations in Chemostratigraphy: Toward Robust Chemical Data Analysis and Interpretation". In SPE Middle East Oil & Gas Show and Conference. SPE, 2021. http://dx.doi.org/10.2118/204892-ms.

Full text source
Abstract:
Elemental chemostratigraphy has become an established stratigraphic correlation technique over the last 15 years. Geochemical data are generated from rock samples (e.g., ditch cuttings, cores or hand specimens) for up to c. 50 elements in the range Na-U in the periodic table using various analytical techniques. The data are commonly displayed and interpreted as ratios, indices and proxy values in profile form against depth. The large number of possible combinations between the determined elements (more than a thousand) makes it a time-consuming effort to identify the meaningful variations that result in correlative chemostratigraphic boundaries and zones between wells. The large number of combinations also means that 30-40% of the information, which may be crucial to understanding the geological processes, is not used for the correlations. Automation and artificial intelligence (AI) are envisaged as likely solutions to this challenge. Statistical and machine learning techniques are tested as a first step to automate and establish a workflow to define (chemo-) stratigraphic boundaries, and to identify geological formations. The workflow commences with a quality check of the input data and then with principal component analysis (PCA) as a multivariate statistical method. PCA is used to minimize the number of elements/ratios plotted in profile form, whilst simultaneously identifying multidimensional relationships between them. A statistical boundary picking method is then applied to define chemostratigraphic zones, for which reliability is determined utilizing quartile analysis, which tests the overlap of chemical signals across these statistical boundaries. Machine learning via discriminant function analysis (DFA) has been developed to predict the placement of correlative boundaries between adjacent sections/wells. The proposed workflow has been tested on various geological formations and areas in Saudi Arabia. The chemostratigraphic correlations proposed using this workflow broadly correspond to those defined in the standard workflow by experienced chemostratigraphers, while interpretation times and subjectivity are reduced. While machine learning via DFA is currently being researched further, early results of the workflow are very encouraging. A user-friendly software application with workflows and algorithms ultimately leading to automation of the processes is under development.
APA, Harvard, Vancouver, ISO, and other styles
6

Ore, Tobi, Davud Davudov, Anton Malkov, Ashwin Venkatraman, Talal Al-Aulaqi, Gurpreet Singh, Birol Dindoruk, et al. "A Comprehensive Analysis of Data Driven Techniques to Quantify Injector Producer Relationships". In Gas & Oil Technology Showcase and Conference. SPE, 2023. http://dx.doi.org/10.2118/214199-ms.

Full text source
Abstract:
Water flooding is an established method of secondary recovery to increase oil production in conventional reservoirs. Analytical models such as capacitance resistance models (CRM) have been used to understand the connectivity between injectors and producers to drive optimization. However, these methods are not applicable to waterflood fields at the initial stage of life with limited data (less than 2 years of injection history). In this work, a novel approach is presented that combines analytics and machine learning to process data and hence quantify connectivity for optimization strategies. A combination of statistical (cross-correlation, mutual information) and machine learning (linear regression, random forest) methods are used to understand the relationship between measured injection and production data from wells. This workflow is first validated using synthetic simulation data with known reservoir heterogeneities as well as known connectivity between wells. Each of the four methods is validated by comparing the result with the CRM results, and it was found that each method provides specific insights and has its associated limitations, making it necessary to combine these results for a successful interpretation of connectivity. The proposed workflow is applied to a complex offshore Caspian Sea field with 49 production wells and 8 injection wells. It was observed that implementing the diffusivity filter in the models, while computationally expensive, offers additional insights into the transmissibility between injector-producer pairs. The machine learning approach addresses injection time delay through feature engineering, and applying a diffusive filter determines effective injection rates as a function of dissipation through the reservoir. Hence, the combined interpretation of connectivity from the different methods resulted in a better understanding of the field. The presented approach can be extended to similar waterflood systems, helping companies realize the benefits of digitization, in not just accessing data, but also using data through such novel workflows that can help evaluate and continuously optimize injection processes.
APA, Harvard, Vancouver, ISO, and other styles
7

Urazov, Ruslan Rubikovich, Alfred Yadgarovich Davletbaev, Alexey Igorevich Sinitskiy, Ilnur Anifovich Zarafutdinov, Artur Khamitovich Nuriev, Veronika Vladimirovna Sarapulova, and Oxana Evgenievna Nosova. "The Interpretation Technique of Rate Transient Analysis Data in Fractured Horizontal Wells". In SPE Russian Petroleum Technology Conference. SPE, 2021. http://dx.doi.org/10.2118/206484-ms.

Full text source
Abstract:
This research presents a modified approach to the interpretation of Rate Transient Analysis (RTA) data in hydraulically fractured horizontal wells. The results of testing the data interpretation technique, which takes into account the flow allocation in the borehole according to well logging and the outcomes of injection tests carried out during hydraulic fracturing, are given. In the course of the interpretation of the field data, the parameters of each hydraulic fracture were selected with control against the results of well logging (WL) defining the fluid influx into the borehole.
APA, Harvard, Vancouver, ISO, and other styles
8

Li, R., T. H. Hyde, W. Sun, and B. Dogan. "Modelling and Data Interpretation of Small Punch Creep Testing". In ASME 2011 Pressure Vessels and Piping Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/pvp2011-57204.

Full text source
Abstract:
The small punch testing (SPT) technique has been proposed for use in determining the creep properties of materials for which only a very small volume of material is available. A draft code of practice on SPT has been produced. However it is not, as yet, generally accepted that the data obtained from small punch tests can be directly related to those which would be obtained from conventional uniaxial creep tests. For this reason, the development of techniques suitable for the interpretation of SPT data has become very important. In this paper, a set of uniaxial creep test data has been characterised in such a way as to gain an improved understanding of the correlation between the data from small punch tests and corresponding uniaxial creep tests. Finite element (FE) analyses of small punch creep tests, using a damage mechanics based creep model, have been performed. The effect of large deformation on the determination of material properties for a creep damage model, has been investigated to take into account the large deformation nature of small punch tests. An equivalent stress, σeq, proposed by the draft code, was used to relate the SPT results to the corresponding uniaxial creep test results. A preliminary assessment of the use of small punch test results, in determining creep properties, has been presented, which includes comparisons of the failure life and equivalent minimum strain rate results obtained from SPTs with the corresponding uniaxial creep test data. Future work related to the interpretation of SPT is briefly addressed.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Silva, Bruno, and Marjory Da Costa-Abreu. "An empirical analysis of Brazilian courts law documents using learning techniques". In VIII Workshop de Forense Computacional. Sociedade Brasileira de Computação, 2019. http://dx.doi.org/10.5753/wfc.2019.14019.

Full text source
Abstract:
This paper describes a study investigating judicial data to find patterns and relations between crime attributes and the corresponding decisions made by courts, aiming to identify important directions that interpretation of the law might be taking. We developed an initial methodology and experiments to look for behaviour patterns in the construction of judicial sentences within the scope of Brazilian criminal courts, and achieved results related to important trends in decision making. Neural-network techniques were applied for classification and pattern recognition, based on Multi-Layer Perceptrons and Radial-Basis Functions, associated with data organisation techniques and behavioural modality extraction.
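As a hedged illustration of the kind of classifier involved, the sketch below trains a Multi-Layer Perceptron on synthetic stand-ins for encoded crime attributes using scikit-learn; the features and labels are invented, and the paper's actual data organisation is not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical encoded crime attributes (e.g., offence type, priors, region)
# and a binary sentencing outcome; real features would come from court records.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```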
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Johnson, W. Douglas, Gary B. Gustafson, Jeanne M. Sparrow, Ronald G. Isaacs, David B. Hogan, Gail M. Binge and Kristo Miettinen. "Effect of data-compression techniques on a meteorological satellite image test suite". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/oam.1990.mi8.

Full text source
Abstract:
We report the results of a comparative study of the effect of several lossy compression techniques on satellite-based meteorological imagery. Three algorithms, implemented by participants in this effort, were tested on a variety of satellite data products at several compression ratios, ranging to as much as a factor of 20. The compression techniques included a scene-adaptive discrete-cosine-transform technique, an adaptive differential pulse-code-modulation technique, and a vector quantization algorithm. A variety of quantitative measures were applied in the evaluation of these lossy image-compression algorithms, including the mean-square error, the mean absolute error, and error histograms. We also evaluated the effect on interpretation by a meteorologist trained in the use of satellite imagery for synoptic forecasting. Finally, we applied an automated cloud-fraction-analysis routine to the data in an effort to determine the effect on its performance. Additional efforts similar to these should provide useful measures of the operation of proposed image-compression techniques. In particular, additional assessment of automated image-characterization algorithms and the inclusion of input data with more variation are expected to result in a more thorough understanding of the effects of these compression techniques.
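The quantitative measures named here are straightforward to compute. The sketch below illustrates MSE, MAE and an error histogram on a synthetic image pair; the noisy reconstruction merely stands in for a decompressed image and is not one of the study's algorithms.

```python
import numpy as np

def error_metrics(original, compressed):
    """Mean-square error, mean absolute error, and an error histogram
    between an original image and its lossy-compressed reconstruction."""
    err = original.astype(np.float64) - compressed.astype(np.float64)
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    hist, edges = np.histogram(err, bins=21)
    return mse, mae, hist, edges

# Hypothetical 8-bit satellite image and a noisy stand-in for a decompressed copy.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
rec = np.clip(img + rng.normal(0, 3, img.shape), 0, 255).astype(np.uint8)

mse, mae, hist, _ = error_metrics(img, rec)
print(f"MSE = {mse:.2f}, MAE = {mae:.2f}")
```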
Styles: APA, Harvard, Vancouver, ISO, etc.

Organizational reports on the topic "Data analysis and interpretation techniques"

1

McKean, Adam P., Zachary W. Anderson, Donald L. Clark, Diego Fernandez, Christopher R. Anderson, Tiffany A. Rivera and Taylor K. McCombs. Detrital Zircon U-Pb Geochronology Results for the Bountiful Peak, Coalville, James Peak, Mount Pisgah, Paradise, and Payson Lakes 7.5' Quadrangles, Utah. Utah Geological Survey, May 2022. http://dx.doi.org/10.34191/ofr-743.

Full text source
Abstract:
This Open-File Report makes available raw analytical data from laboratory analysis of U-Pb ages of zircon grains from samples collected during geologic mapping funded by the U.S. Geological Survey (USGS) National Cooperative Geologic Mapping Program (STATEMAP) and the Utah Geological Survey (UGS). The references listed in table 1 provide additional information such as sample location, geologic setting, and interpretation of the samples in the context of the area where they were collected. The data were prepared by the University of Utah Earth Core Facility (Diego Fernandez, Director), under contract to the UGS. These data are highly technical in nature and proper interpretation requires considerable training in the applicable geochronologic techniques.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Tarko, Andrew P., Mario A. Romero, Vamsi Krishna Bandaru and Cristhian Lizarazo. TScan–Stationary LiDAR for Traffic and Safety Applications: Vehicle Interpretation and Tracking. Purdue University, 2022. http://dx.doi.org/10.5703/1288284317402.

Full text source
Abstract:
To improve traffic performance and safety, the ability to measure traffic accurately and effectively at road intersections, including motorists and other vulnerable road users, is needed. A past study conducted by the Center for Road Safety demonstrated that it is feasible to detect and track various types of road users using a LiDAR-based system called TScan. This project aimed to progress towards a real-world implementation of TScan by building two trailer-based prototypes with full end-user documentation. The previously developed detection and tracking algorithms were modified and converted from research code to an implementation version written in the C++ programming language. Two trailer-based TScan units were built. The design of the prototype was iterated multiple times to account for component placement, ease of maintenance, and similar concerns. The expansion of the TScan system from a single-sensor unit to multiple units with multiple LiDAR sensors necessitated transforming all measurements into a common spatial and temporal reference frame. Engineering applications for performing traffic counts, analyzing speeds at intersections, and visualizing pedestrian presence data were developed. The limitations of the existing SSAM for traffic-conflict analysis with computer simulation prompted the research team to develop and implement their own traffic-conflict detection and analysis technique that is applicable to real-world data. Efficient use of the developed system requires proper training of its end users. An INDOT-CRS collaborative process was developed and its execution planned to gradually transfer the two TScan prototypes to INDOT's full control. This period will also be an opportunity for collecting feedback from the end user and making limited modifications to the system and documentation as needed.
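A minimal sketch of the spatial registration such a multi-sensor deployment requires: applying a rigid yaw-plus-translation transform to bring one sensor's points into a common site frame. The pose parameters below are hypothetical; in practice they would come from sensor calibration, and temporal alignment would additionally offset each unit's timestamps.

```python
import numpy as np

def to_common_frame(points, yaw_deg, translation):
    """Rotate points (N x 3) about the vertical axis by `yaw_deg`
    and translate them into the common site frame."""
    a = np.radians(yaw_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return points @ R.T + np.asarray(translation)

# Hypothetical points from one sensor, plus its calibrated pose in the site frame.
pts = np.array([[10.0, 2.0, 0.5], [12.5, -1.0, 0.4]])
print(to_common_frame(pts, yaw_deg=30.0, translation=[5.0, -3.0, 0.0]))
```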
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Worcester, Peter F., James A. Mercer and Robert C. Spindel. Ocean Acoustic Observatories: Data Analysis and Interpretation. Fort Belvoir, VA: Defense Technical Information Center, September 1997. http://dx.doi.org/10.21236/ada628417.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Worcester, Peter F., James A. Mercer and Robert C. Spindel. Ocean Acoustic Observatories: Data Analysis and Interpretation. Fort Belvoir, VA: Defense Technical Information Center, September 1999. http://dx.doi.org/10.21236/ada629597.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

WANG, MIN, Sheng Chen, Changqing Zhong, Tao Zhang, Yongxing Xu, Hongyuan Guo, Xiaoying Wang, Shuai Zhang, Yan Chen and Lianyong Li. Diagnosis using artificial intelligence based on the endocytoscopic observation of the gastrointestinal tumours: a systematic review and meta-analysis. InPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2023. http://dx.doi.org/10.37766/inplasy2023.2.0096.

Full text source
Abstract:
Review question / Objective: With the development of endoscopic techniques, several diagnostic endoscopy methods are available for the diagnosis of malignant lesions, including magnified pigmented endoscopy and narrow-band imaging (NBI). The main goal of endoscopy is real-time diagnostic evaluation of the tissue, allowing an accurate assessment, comparable to histopathological diagnosis, based on structural and cellular heterogeneity, and thereby significantly improving the diagnostic rate for cancerous tissues. Endocytoscopy (ECS) is based on ultrahigh-magnification endoscopy and has been applied to achieve microscopic observation of gastrointestinal (GI) cells through tissue staining, thus allowing the differentiation of cancerous and noncancerous tissues in real time. To date, ECS observation has been applied to the diagnosis of oesophageal, gastric and colorectal tumours and has shown high sensitivity and specificity. Despite the highly accurate diagnostic capability of this method, the interpretation of the results is highly dependent on the operator's skill level, and it is difficult to train all endoscopists to master all methods quickly. Artificial intelligence (AI)-assisted diagnostic systems have been widely recognized for their high sensitivity and specificity in the diagnosis of GI tumours under general endoscopy. Few studies have explored ECS for endoscopic tumour identification, and even fewer have explored ECS-based AI in the endoscopic identification of GI tumours, and those have reached different conclusions. Therefore, we aimed to investigate the value of ECS-based AI in detecting GI tumours, to provide evidence for its clinical application.
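As a hedged illustration of how per-study accuracy figures are pooled in such a meta-analysis, the sketch below applies DerSimonian-Laird random-effects pooling of sensitivities on the logit scale; the study counts are invented, and this is not necessarily the protocol's specified method.

```python
import numpy as np

def pooled_logit(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions
    (e.g., per-study sensitivity) on the logit scale."""
    p = events / totals
    theta = np.log(p / (1 - p))                   # logit-transformed proportions
    var = 1.0 / events + 1.0 / (totals - events)  # approximate within-study variance
    w = 1.0 / var
    theta_fixed = np.sum(w * theta) / np.sum(w)
    Q = np.sum(w * (theta - theta_fixed) ** 2)    # heterogeneity statistic
    k = len(events)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (var + tau2)                   # random-effects weights
    pooled = np.sum(w_star * theta) / np.sum(w_star)
    return 1.0 / (1.0 + np.exp(-pooled))          # back-transform to a proportion

# Hypothetical per-study true positives and diseased-case counts.
tp = np.array([45.0, 88.0, 30.0, 120.0])
n = np.array([50.0, 100.0, 36.0, 130.0])
print(f"pooled sensitivity ~ {pooled_logit(tp, n):.3f}")
```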
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Kiefner, J. F., J. M. Tuten and T. A. Wall. L51516 Preventing Pipeline Failure in Areas of Soil Movement - Part 1. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 1987. http://dx.doi.org/10.55274/r0010303.

Full text source
Abstract:
Ordinarily, buried pipelines undergo little or no movement in service. In a stable soil environment the longitudinal stress in a pipeline seldom approaches the limiting design value set by applicable codes and regulations, and pipeline serviceability under such conditions is seldom, if ever, threatened by the degree of longitudinal stress. In contrast, localized areas may exist along a pipeline where soils and/or slopes are unstable or where subsidence or differential settlement can occur. In these areas, longitudinal stresses may become severe enough to cause a failure. Over the years, various techniques have been developed to monitor the status of pipelines in unstable areas, and various remedial techniques have been attempted. More recently, with the advent of Arctic and offshore pipelining, such potential movements of pipelines are being taken into account in initial designs. In any case, there is a continuing need to develop better monitoring and remedial techniques to prevent pipeline failures in unstable soil areas. The objectives of this project are to develop a versatile and reliable prototype strain-monitoring system, to demonstrate its applicability on an actual pipeline, and to establish allowable limits on strains due to soil movement or subsidence. The scope of the project includes: (1) review of previous or ongoing monitoring efforts by others; (2) analysis of strains and development of models to predict strain behavior; (3) calculations to establish limits on strains; (4) the design and construction of a microprocessor-controlled automatic monitoring system; (5) the implementation of the system on an actual pipeline; and (6) the collection, analysis and interpretation of strain data from the system.
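As a hedged illustration of the strain arithmetic such a monitoring system performs, the sketch below decomposes readings from three circumferential strain gauges into axial and peak bending components and compares the total with an allowable limit; the gauge layout, readings and limit are all hypothetical.

```python
import numpy as np

# Hypothetical microstrain readings from three gauges at 0, 120 and 240 degrees
# around the pipe circumference.
angles = np.radians([0.0, 120.0, 240.0])
readings = np.array([310.0, -140.0, 70.0])

# Model: eps(theta) = eps_axial + b*cos(theta) + c*sin(theta); three gauges
# give a 3x3 linear system for the axial and bending terms.
A = np.column_stack([np.ones_like(angles), np.cos(angles), np.sin(angles)])
eps_axial, b, c = np.linalg.solve(A, readings)
eps_bending = np.hypot(b, c)  # peak bending strain magnitude

allowable = 2000.0  # hypothetical allowable total longitudinal strain, microstrain
total = abs(eps_axial) + eps_bending
print(f"axial = {eps_axial:.0f}, bending = {eps_bending:.0f}, "
      f"utilization = {total / allowable:.0%}")
```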
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Hoaglin, David C., and Frederick Mosteller. Robust/Resistant Techniques of Data Analysis. Fort Belvoir, VA: Defense Technical Information Center, October 1985. http://dx.doi.org/10.21236/ada163972.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Beam, Craig A., Emily F. Conant, Harold L. Kundel, Ji-Hyun Lee, Patricia A. Romily and Edward A. Sickles. Time-Series Analysis of Human Interpretation Data in Mammography. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada434583.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Kraus, Nicholas C., and Julie D. Rosati. Interpretation of Shoreline-Position Data for Coastal Engineering Analysis. Fort Belvoir, VA: Defense Technical Information Center, December 1997. http://dx.doi.org/10.21236/ada591274.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Venugopal, Niveditha. Annotation-Enabled Interpretation and Analysis of Time-Series Data. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6592.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
