Academic literature on the topic 'Partial Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Partial Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Partial Data processing"

1

Wu, Xiaoying, Stefanos Souldatos, Dimitri Theodoratos, Theodore Dalamagas, Yannis Vassiliou, and Timos Sellis. "Processing and Evaluating Partial Tree Pattern Queries on XML Data." IEEE Transactions on Knowledge and Data Engineering 24, no. 12 (December 2012): 2244–59. http://dx.doi.org/10.1109/tkde.2011.137.

2

Zhang, Xiaoming, and Paul L. Rosin. "Superellipse fitting to partial data." Pattern Recognition 36, no. 3 (March 2003): 743–52. http://dx.doi.org/10.1016/s0031-3203(02)00088-2.

3

Gregor, Jiří, and František Pastuszek. "Novel Approach to Well Tests Data Processing." Environment and Natural Resources Research 12, no. 1 (May 27, 2022): 80. http://dx.doi.org/10.5539/enrr.v12n1p80.

Abstract:
A new bounded well function is suggested for processing well-test data. A solution of the basic partial differential equation with physically meaningful initial and boundary conditions is given using its Laplace transform, together with a proof of its uniqueness. A model for the distance dependence of drawdown is suggested. The results reveal the link between the unsteady and steady states of pumping. Related computational problems are discussed. Examples of processing actual data using these results are presented; they illustrate the high accuracy of the results and the considerable increase in information obtainable from a well test.
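For orientation, the "basic partial differential equation" of well-test analysis is, in the standard radial-flow setting (an assumption here; the paper's exact formulation may differ), the diffusion equation for the drawdown s(r, t):

\[
\frac{\partial^2 s}{\partial r^2} + \frac{1}{r}\,\frac{\partial s}{\partial r} \;=\; \frac{S}{T}\,\frac{\partial s}{\partial t},
\]

where S is storativity and T transmissivity. Taking the Laplace transform in t, as the abstract describes, replaces the time derivative with multiplication by the transform variable and reduces the problem to an ordinary (modified Bessel) equation in r that can be solved under the stated boundary conditions.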
4

Patapoutian, A. "Data-dependent synchronization in partial-response systems." IEEE Transactions on Signal Processing 54, no. 4 (April 2006): 1494–1503. http://dx.doi.org/10.1109/tsp.2006.870587.

5

Kelly, S. I., C. Du, G. Rilling, and M. E. Davies. "Advanced image formation and processing of partial synthetic aperture radar data." IET Signal Processing 6, no. 5 (2012): 511. http://dx.doi.org/10.1049/iet-spr.2011.0073.

6

Veidenbergs, I., D. Blumberga, C. Rochas, F. Romagnoli, A. Blumberga, and M. Rošā. "Small-Scale Cogeneration Plant Data Processing and Analysis." Latvian Journal of Physics and Technical Sciences 45, no. 3 (September 1, 2008): 25–33. http://dx.doi.org/10.2478/v10047-008-0009-3.

Abstract:
In the article, the operational data on electricity and heat energy generation in a small-scale cogeneration plant are analysed. Different measurements made in the plant formed the basis for estimating and evaluating the savings of primary energy in comparison with distributed energy production. The authors analyse the efficiency values for heat and electricity production in the cogeneration regime and the savings of primary energy when the cogeneration plant works at partial load.
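For reference (this is the standard definition, not necessarily the paper's own notation), the primary energy savings of cogeneration relative to separate production are commonly computed as

\[
\mathrm{PES} \;=\; \left( 1 - \frac{1}{\dfrac{\eta_{h}}{\eta_{h}^{\mathrm{ref}}} + \dfrac{\eta_{e}}{\eta_{e}^{\mathrm{ref}}}} \right) \times 100\,\%,
\]

where \(\eta_{h}\) and \(\eta_{e}\) are the measured heat and electrical efficiencies of the plant and the reference values are the efficiencies of separate heat-only and power-only production.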
7

White, Thomas A., Anton Barty, Francesco Stellato, James M. Holton, Richard A. Kirian, Nadia A. Zatsepin, and Henry N. Chapman. "Crystallographic data processing for free-electron laser sources." Acta Crystallographica Section D Biological Crystallography 69, no. 7 (June 15, 2013): 1231–40. http://dx.doi.org/10.1107/s0907444913013620.

Abstract:
A processing pipeline for diffraction data acquired using the 'serial crystallography' methodology with a free-electron laser source is described with reference to the crystallographic analysis suite CrystFEL and the pre-processing program Cheetah. A detailed analysis of the nature and impact of indexing ambiguities is presented. Simulations of the Monte Carlo integration scheme, which accounts for the partially recorded nature of the diffraction intensities, are presented and show that the integration of partial reflections could be made to converge more quickly if the bandwidth of the X-rays were to be increased by a small amount or if a slight convergence angle were introduced into the incident beam.
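The Monte Carlo integration scheme mentioned here amounts, in its simplest form, to averaging many partial observations of each reflection so that partiality averages out over snapshots. A toy sketch of that idea (the data layout is hypothetical; this is not CrystFEL's actual code or file format):

```python
from collections import defaultdict

def monte_carlo_merge(observations):
    """Merge snapshot intensities by plain averaging.

    observations: iterable of (hkl, intensity) pairs, where each
    intensity is a partially recorded measurement of reflection hkl.
    Returns a dict mapping hkl -> merged (averaged) intensity.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hkl, intensity in observations:
        sums[hkl] += intensity
        counts[hkl] += 1
    # With enough snapshots per reflection, the mean converges to a
    # value proportional to the true integrated intensity.
    return {hkl: sums[hkl] / counts[hkl] for hkl in sums}

# Example: three partial observations of (1, 0, 0), two of (0, 1, 1).
obs = [((1, 0, 0), 120.0), ((1, 0, 0), 80.0), ((1, 0, 0), 100.0),
       ((0, 1, 1), 40.0), ((0, 1, 1), 60.0)]
print(monte_carlo_merge(obs))  # {(1, 0, 0): 100.0, (0, 1, 1): 50.0}
```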
8

Lin, Yan, Shu Wen Guo, and Jing Hua Lin. "Improved Method of Data Processing for Flocculating Sedimentation Experiment." Advanced Materials Research 599 (November 2012): 340–43. http://dx.doi.org/10.4028/www.scientific.net/amr.599.340.

Abstract:
The shortcomings of the present method of data processing for flocculating sedimentation are pointed out and analyzed in this study. A new method of data processing is put forward and its advantages are discussed. Compared to the conventional method, the isolines of partial removal efficiency need not be drawn when the new method is employed; the new method is straightforward, practical, and accurate, and is worth popularizing.
9

Wei, Wei, Chongshi Gu, and Xiao Fu. "Processing Method of Missing Data in Dam Safety Monitoring." Mathematical Problems in Engineering 2021 (July 2, 2021): 1–12. http://dx.doi.org/10.1155/2021/9950874.

Abstract:
A large amount of data obtained by dam safety monitoring provides the basis to evaluate the dam operation state. Due to the interference caused by equipment failure and human error, it is common or even inevitable to suffer the loss of measurement data. Most of the traditional data processing methods for dam monitoring ignore the actual correlation between different measurement points, which brings difficulties to the objective diagnosis of dam safety and even leads to misdiagnosis. Therefore, it is necessary to conduct further study on how to process the missing data in dam safety monitoring. In this study, a data processing method based on partial distance combining fuzzy C-means with long short-term memory (PDS-FCM-LSTM) was proposed to deal with the data missing from dam monitoring. Based on the fuzzy clustering performed for the measurement points of the same category deployed on the dam, the membership degree of each measurement point to cluster center was described by using the fuzzy C-means clustering algorithm based on partial distance (PDS-FCM), so as to determine the clustering results and preprocess the missing data of corresponding measurement points. Then, the bidirectional long short-term memory (LSTM) network was applied to explore the pattern of changes of measurement values under identical clustering conditions, thus processing the data missing from monitoring effectively.
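The "partial distance" at the heart of PDS-FCM is a standard device for clustering incomplete data: distances are computed over the observed components only and rescaled by the fraction observed. A minimal sketch of that computation, assuming NaN marks a missing reading (an illustration of the general technique, not the paper's code):

```python
import numpy as np

def partial_distance(x, v):
    """Squared partial distance between a possibly incomplete vector x
    and a cluster center v: sum squared differences over the observed
    components, then rescale to the full dimension."""
    mask = ~np.isnan(x)                # observed components of x
    if not mask.any():
        raise ValueError("x has no observed components")
    d2 = ((x[mask] - v[mask]) ** 2).sum()
    return d2 * len(x) / mask.sum()    # rescale by p / (number observed)

x = np.array([1.0, np.nan, 3.0])       # one missing reading
v = np.array([0.0, 2.0, 2.0])
print(partial_distance(x, v))          # 3.0: (1 + 1) * 3 / 2
```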
10

Van Gennip, Yves, and Carola-Bibiane Schönlieb. "Introduction: Big data and partial differential equations." European Journal of Applied Mathematics 28, no. 6 (November 7, 2017): 877–85. http://dx.doi.org/10.1017/s0956792517000304.

Abstract:
Partial differential equations (PDEs) are expressions involving an unknown function in many independent variables and their partial derivatives up to a certain order. Since PDEs express continuous change, they have long been used to formulate a myriad of dynamical physical and biological phenomena: heat flow, optics, electrostatics and electrodynamics, elasticity, fluid flow and many more. Many of these PDEs can be derived in a variational way, i.e., via minimization of an 'energy' functional. In this globalised and technologically advanced age, PDEs are also extensively used for modelling social situations (e.g. models for opinion formation, mathematical finance, crowd motion) and tasks in engineering (such as models for semiconductors, networks, and signal and image processing tasks). In particular, in recent years, there has been increasing interest from applied analysts in applying the models and techniques from variational methods and PDEs to tackle problems in data science. This issue of the European Journal of Applied Mathematics highlights some recent developments in this young and growing area. It gives a taste of endeavours in this realm in two exemplary contributions on PDEs on graphs [1, 2] and one on probabilistic domain decomposition for numerically solving large-scale PDEs [3].
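A concrete instance of the connection sketched here: the heat equation, which governs diffusion on a continuous domain, has a direct analogue on a weighted graph with Laplacian L = D - W (degree matrix minus weight matrix),

\[
\frac{\partial u}{\partial t} = \Delta u
\qquad\longrightarrow\qquad
\frac{\mathrm{d}u}{\mathrm{d}t} = -L\,u,
\]

and both arise variationally (up to constant factors) from a Dirichlet energy, \(\tfrac{1}{2}\sum_{i,j} w_{ij}(u_i - u_j)^2\) in the graph case; this is the sense in which variational and PDE techniques carry over to data posed on graphs [1, 2].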

Dissertations / Theses on the topic "Partial Data processing"

1

Wu, Qinyi. "Partial persistent sequences and their applications to collaborative text document editing and processing." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44916.

Abstract:
In a variety of text document editing and processing applications, it is necessary to keep track of the revision history of text documents by recording changes and the metadata of those changes (e.g., user names and modification timestamps). The recent Web 2.0 document editing and processing applications, such as real-time collaborative note taking and wikis, require fine-grained shared access to collaborative text documents as well as efficient retrieval of metadata associated with different parts of collaborative text documents. Current revision control techniques only support coarse-grained shared access and are inefficient to retrieve metadata of changes at the sub-document granularity. In this dissertation, we design and implement partial persistent sequences (PPSs) to support real-time collaborations and manage metadata of changes at fine granularities for collaborative text document editing and processing applications. As a persistent data structure, PPSs have two important features. First, items in the data structure are never removed. We maintain necessary timestamp information to keep track of both inserted and deleted items and use the timestamp information to reconstruct the state of a document at any point in time. Second, PPSs create unique, persistent, and ordered identifiers for items of a document at fine granularities (e.g., a word or a sentence). As a result, we are able to support consistent and fine-grained shared access to collaborative text documents by detecting and resolving editing conflicts based on the revision history as well as to efficiently index and retrieve metadata associated with different parts of collaborative text documents. We demonstrate the capabilities of PPSs through two important problems in collaborative text document editing and processing applications: data consistency control and fine-grained document provenance management. The first problem studies how to detect and resolve editing conflicts in collaborative text document editing systems. We approach this problem in two steps. In the first step, we use PPSs to capture data dependencies between different editing operations and define a consistency model more suitable for real-time collaborative editing systems. In the second step, we extend our work to the entire spectrum of collaborations and adapt transactional techniques to build a flexible framework for the development of various collaborative editing systems. The generality of this framework is demonstrated by its capabilities to specify three different types of collaborations as exemplified in the systems of RCS, MediaWiki, and Google Docs respectively. We precisely specify the programming interfaces of this framework and describe a prototype implementation over Oracle Berkeley DB High Availability, a replicated database management engine. The second problem of fine-grained document provenance management studies how to efficiently index and retrieve fine-grained metadata for different parts of collaborative text documents. We use PPSs to design both disk-economic and computation-efficient techniques to index provenance data for millions of Wikipedia articles. Our approach is disk economic because we only save a few full versions of a document and only keep delta changes between those full versions. Our approach is also computation-efficient because we avoid the necessity of parsing the revision history of collaborative documents to retrieve fine-grained metadata. 
Compared to MediaWiki, the revision control system for Wikipedia, our system uses less than 10% of disk space and achieves at least an order of magnitude speed-up to retrieve fine-grained metadata for documents with thousands of revisions.
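The two defining features quoted above (items are never physically removed; they carry persistent timestamps that let any past state be reconstructed) can be reduced to a short sketch. The class below is an illustrative toy, not the dissertation's actual data structure, and the position-resolution scheme is an assumption:

```python
import math

class PartialPersistentSequence:
    """Items carry (insert_time, delete_time) instead of being removed."""

    def __init__(self):
        self._items = []  # list of [value, t_insert, t_delete]

    def insert(self, pos, value, t):
        # Positions are resolved against the items alive at time t.
        alive = [i for i, it in enumerate(self._items)
                 if it[1] <= t < it[2]]
        idx = alive[pos] if pos < len(alive) else len(self._items)
        self._items.insert(idx, [value, t, math.inf])

    def delete(self, pos, t):
        alive = [i for i, it in enumerate(self._items)
                 if it[1] <= t < it[2]]
        self._items[alive[pos]][2] = t  # mark deleted, keep the item

    def state_at(self, t):
        """Reconstruct the document as it looked at time t."""
        return [v for v, ti, td in self._items if ti <= t < td]

s = PartialPersistentSequence()
s.insert(0, "hello", t=1)
s.insert(1, "world", t=2)
s.delete(0, t=3)
print(s.state_at(2))  # ['hello', 'world']
print(s.state_at(3))  # ['world']
```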
2

Vitale, Raffaele. "Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90442.

Abstract:
The present Ph.D. thesis, primarily conceived to support and reinforce the relation between academic and industrial worlds, was developed in collaboration with Shell Global Solutions (Amsterdam, The Netherlands) in the endeavour of applying and possibly extending well-established latent variable-based approaches (i.e. Principal Component Analysis - PCA - Partial Least Squares regression - PLS - or Partial Least Squares Discriminant Analysis - PLSDA) for complex problem solving not only in the fields of manufacturing troubleshooting and optimisation, but also in the wider environment of multivariate data analysis. To this end, novel efficient algorithmic solutions are proposed throughout all chapters to address very disparate tasks, from calibration transfer in spectroscopy to real-time modelling of streaming flows of data. The manuscript is divided into the following six parts, focused on various topics of interest: Part I - Preface, where an overview of this research work, its main aims and justification is given together with a brief introduction on PCA, PLS and PLSDA; Part II - On kernel-based extensions of PCA, PLS and PLSDA, where the potential of kernel techniques, possibly coupled to specific variants of the recently rediscovered pseudo-sample projection, formulated by the English statistician John C. Gower, is explored and their performance compared to that of more classical methodologies in four different applications scenarios: segmentation of Red-Green-Blue (RGB) images, discrimination of on-/off-specification batch runs, monitoring of batch processes and analysis of mixture designs of experiments; Part III - On the selection of the number of factors in PCA by permutation testing, where an extensive guideline on how to accomplish the selection of PCA components by permutation testing is provided through the comprehensive illustration of an original algorithmic procedure implemented for such a purpose; Part IV - On modelling common and distinctive sources of variability in multi-set data analysis, where several practical aspects of two-block common and distinctive component analysis (carried out by methods like Simultaneous Component Analysis - SCA - DIStinctive and COmmon Simultaneous Component Analysis - DISCO-SCA - Adapted Generalised Singular Value Decomposition - Adapted GSVD - ECO-POWER, Canonical Correlation Analysis - CCA - and 2-block Orthogonal Projections to Latent Structures - O2PLS) are discussed, a new computational strategy for determining the number of common factors underlying two data matrices sharing the same row- or column-dimension is described, and two innovative approaches for calibration transfer between near-infrared spectrometers are presented; Part V - On the on-the-fly processing and modelling of continuous high-dimensional data streams, where a novel software system for rational handling of multi-channel measurements recorded in real time, the On-The-Fly Processing (OTFP) tool, is designed; Part VI - Epilogue, where final conclusions are drawn, future perspectives are delineated, and annexes are included.
Vitale, R. (2017). Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90442
3

Karasev, Peter A. "Feedback augmentation of pde-based image segmentation algorithms using application-specific exogenous data." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50257.

Abstract:
This thesis is divided into five chapters. The scope of problems considered is defined in Chapter I. Next, Chapter II provides background material on image processing with partial differential equations and a review of prior work in the field. Chapter III covers the medical imaging portion of the research; the key contribution is a control-based algorithm for interactive image segmentation. Applications of the feedback-augmented level set method to fracture reconstruction and surgical planning are shown. Problems in vision-based control are considered in Chapters IV and V. A method of improving performance in closed-loop target tracking using level set segmentation is developed, with unmanned aerial vehicle and next-generation missile guidance being the primary applications of interest. Throughout this thesis, the two application types are connected into a unified viewpoint of open-loop systems that are augmented by exogenous data.
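For readers outside the field, the level-set methods referred to evolve a contour implicitly as the zero set of a function \(\phi\) under a speed law of the general form

\[
\frac{\partial \phi}{\partial t} + F\,\lvert \nabla \phi \rvert = 0,
\]

so the feedback augmentation described here can be read as user input modulating the speed \(F\) during the evolution (a paraphrase of the abstract, not necessarily the thesis's exact formulation).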
4

Kalyon, Gabriel. "Supervisory control of infinite state systems under partial observation." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210032.

Abstract:
A discrete event system is a system whose state space is given by a discrete set and whose state transition mechanism is event-driven, i.e., its state evolution depends only on the occurrence of discrete events over time. These systems are used in many fields of application (telecommunication networks, aeronautics, aerospace, etc.). The validity of these systems is then an important issue, and to ensure it we can use supervisory control methods. These methods consist in imposing a given specification on a system by means of a controller which runs in parallel with the original system and restricts its behavior. In this thesis, we develop supervisory control methods where the system can have an infinite state space and the controller has a partial observation of the system (this implies that the controller must define its control policy from an imperfect knowledge of the system). Unfortunately, this problem is generally undecidable. To overcome this negative result, we use abstract interpretation techniques which ensure the termination of our algorithms at the cost of overapproximating some computations. The aim of this thesis is to provide the most complete contribution that can be brought to this topic. Hence, we consider more and more realistic problems. More precisely, we start our work by considering a centralized framework (i.e., the system is controlled by a single controller) and by synthesizing memoryless controllers (i.e., controllers that define their control policy from the current observation received from the system). Next, to obtain better solutions, we consider the synthesis of controllers that record part or all of the execution of the system and use this information to define the control policy. Unfortunately, these methods cannot be used to control an interesting class of systems: distributed systems. We have therefore defined methods that make it possible to control distributed systems with synchronous communications (decentralized and modular methods) and with asynchronous communications (distributed method). Moreover, we have implemented some of our algorithms to experimentally evaluate the quality of the synthesized controllers.

5

Jett, David B. "Selection of flip-flops for partial scan paths by use of a statistical testability measure." Thesis, Virginia Tech, 1992. http://scholar.lib.vt.edu/theses/available/etd-12302008-063234/.

6

Lalevée, André. "Towards highly flexible hardware architectures for high-speed data processing : a 100 Gbps network case study." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0054/document.

Abstract:
The increase in both the size of modern networks and the diversity of the applications that use them is pushing traditional computing architectures to their limits. Indeed, purely software architectures cannot sustain typical throughputs, while purely hardware ones severely lack the flexibility needed to adapt to the diversity of applications. Thus, programmable hardware, such as Field Programmable Gate Arrays (FPGAs), has been investigated. These architectures are usually considered a good tradeoff between performance and flexibility, mainly thanks to Dynamic Partial Reconfiguration (DPR), which allows part of the design to be reconfigured at run-time. However, this technique can have several drawbacks, especially regarding the storage of the configuration files, called bitstreams. To solve this issue, bitstream relocation can be deployed, which decreases the number of configuration files required. However, this technique is long, error-prone, and requires specific knowledge of FPGAs. A fully automated design flow has been developed to ease the use of this technique. In order to provide flexibility regarding the sequence of treatments to be performed on our architecture, a flexible, high-throughput communication structure is also required. Thus, a Network-on-Chip study and characterization has been carried out according to network processing and bitstream relocation properties. Finally, a case study has been developed in order to validate our approach.
7

He, Chuan. "Numerical solutions of differential equations on FPGA-enhanced computers." College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1248.

8

Lazcano, Vanel. "Some problems in depth enhanced video processing." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/373917.

Abstract:
In this thesis we tackle two problems, namely, the data interpolation problem in the context of depth computation both for images and for videos, and the problem of the estimation of the apparent movement of objects in image sequences. The first problem deals with completion of depth data in a region of an image or video where data are missing due to occlusions, unreliable data, damage or loss of data during acquisition. In this thesis we tackle it in two ways. First, we propose a non-local gradient-based energy which is able to complete planes locally. We consider this model as an extension of the bilateral filter to the gradient domain. We have successfully evaluated our model to complete synthetic depth images and also incomplete depth maps provided by a Kinect sensor. The second approach to tackle the problem is an experimental study of the biased Absolutely Minimizing Lipschitz Extension (biased AMLE in short) for anisotropic interpolation of depth data to big empty regions without information. The AMLE operator is a cone interpolator, but the biased AMLE is an exponential cone interpolator, which makes it better adapted to depth maps of real scenes that usually present soft convex or concave surfaces. Moreover, the biased AMLE operator is able to expand depth data to huge regions. By considering the image domain endowed with an anisotropic metric, the proposed method is able to take into account the underlying geometric information in order not to interpolate across the boundary of objects at different depths. We have proposed a numerical model to compute the solution of the biased AMLE which is based on the eikonal operators. Additionally, we have extended the proposed numerical model to video sequences. The second problem deals with the motion estimation of objects in a video sequence. This problem is known as the optical flow computation. The optical flow problem is one of the most challenging problems in computer vision. Traditional models to estimate it fail in the presence of occlusions and non-uniform illumination. To tackle these problems we propose a variational model to jointly estimate optical flow and occlusions. Moreover, the proposed model is able to deal with the usual drawback of variational methods in dealing with fast displacements of objects in the scene which are larger than the object itself. The addition of a term that balances gradient and intensities increases the robustness of the proposed model to illumination changes. The inclusion of supplementary matches given by exhaustive search in specific locations helps to follow large displacements.
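For context, the AMLE interpolant named here is characterized (in the unbiased case) by the infinity-Laplace equation, while the biased variant adds a gradient term; schematically,

\[
\Delta_\infty u \;=\; \sum_{i,j}\frac{\partial u}{\partial x_i}\,\frac{\partial u}{\partial x_j}\,\frac{\partial^2 u}{\partial x_i\,\partial x_j} \;=\; 0
\qquad \text{versus} \qquad
\Delta_\infty u + \lambda\,\lvert\nabla u\rvert \;=\; 0,
\]

where the bias parameter \(\lambda\) is what produces the exponential cones mentioned in the abstract; the exact formulation and the eikonal-operator numerics used in the thesis may differ in detail.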
9

Nakanishi, Rafael Umino. "Recuperação de objetos tridimensionais utilizando características de séries temporais." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-23112016-112604/.

Abstract:
With the increasing data storage capacity of databases and personal computers arises the need for computer algorithms capable of automatically processing and retrieving these data. This fact is no different for three-dimensional objects stored as files. In this Master's thesis we studied new techniques for processing such objects using an approach unusual in the geometric processing area: techniques for analyzing time series, such as scattering wavelets and recurrence plots. For the shape retrieval problem, i.e., given a three-dimensional mesh, finding other meshes that are visually similar, our method extracts a single feature (Gaussian curvature or surface variation, for example) and orders it as a series using the information provided by the Fiedler vector. The resulting series is then processed using scattering wavelets, a technique capable of analyzing the temporal behavior of serial data. For this problem, the results obtained are comparable with other approaches reported in the literature that use multiple features to find a matching mesh. In the case of partial retrieval of objects, in which only a part of an object is given as the search parameter, it is necessary to segment the meshes in order to find other parts that are visually similar to the query. By using a recurrence plot to analyze the objects, our method can find not only the most similar region within the same (or another) mesh, but also all the regions that are similar to the search parameter.
10

Pike, Scott Mason. "Distributed resource allocation with scalable crash containment." Diss., Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1092857584.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xiv, 154 p.; also includes graphics, map. Includes bibliographical references (p. 148-154). Available online via OhioLINK's ETD Center

Books on the topic "Partial Data processing"

1

Peled, Doron, Vaughan R. Pratt, Gerard J. Holzmann, DIMACS (Group), and Workshop on Partial Order Methods in Verification (1996: Princeton University), eds. Partial order methods in verification: DIMACS workshop, July 24–26, 1996. Providence, R.I.: American Mathematical Society, 1997.

2

Flaherty, J. E., and Workshop on Adaptive Computational Methods for Partial Differential Equations (1988: Rensselaer Polytechnic Institute), eds. Adaptive methods for partial differential equations. Philadelphia: Society for Industrial and Applied Mathematics, 1989.

3

Introduction to numerical ordinary and partial differential equations using MATLAB. Hoboken, N.J: Wiley-Interscience, 2005.

4

Partial differential equations with Mathematica. Wokingham, England: Addison-Wesley Pub. Co., 1993.

5

Computational partial differential equations: Numerical methods and Diffpack programming. Berlin: Springer, 1999.

6

Reed, Daniel A. Stencils and problem partitionings: Their influence on the performance of multiple processor systems. Urbana, Ill: Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1986.

7

Schiesser, W. E., ed. Ordinary and partial differential equation routines in C, C++, Fortran, Java, Maple, and MATLAB. Boca Raton: Chapman & Hall/CRC, 2004.

8

Involution: The formal theory of differential equations and its applications in computer algebra. New York, NY: Springer Berlin Heidelberg, 2009.

9

Vorozhtsov, E. V., ed. Numerical solutions for partial differential equations: Problem solving using Mathematica. Boca Raton, Fla.: CRC Press, 1996.

10

Aubert, Gilles. Mathematical problems in image processing: Partial differential equations and the calculus of variations. New York: Springer, 2002.


Book chapters on the topic "Partial Data processing"

1

Dellmann, F. "Processing Partial Information in Decision Support Systems." In From Data to Knowledge, 423–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-642-79999-0_44.

2

Chowdhury, Ahsan Raja, Madhu Chetty, and Xuan Nguyen Vinh. "On the Reconstruction of Genetic Network from Partial Microarray Data." In Neural Information Processing, 689–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34475-6_83.

3

Villegas, Rossmary, Oliver Dorn, Miguel Moscoso, and Manuel Kindelan. "Shape Reconstruction from Two-Phase Incompressible Flow Data using Level Sets." In Image Processing Based on Partial Differential Equations, 381–401. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-33267-1_21.

4

Berrocal, Eduardo, Leonardo Bautista-Gomez, Sheng Di, Zhiling Lan, and Franck Cappello. "Exploring Partial Replication to Improve Lightweight Silent Data Corruption Detection for HPC Applications." In Euro-Par 2016: Parallel Processing, 419–30. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43659-3_31.

5

Ta, Vinh-Thong, Abderrahim Elmoataz, and Olivier Lézoray. "Partial Difference Equations over Graphs: Morphological Processing of Arbitrary Discrete Data." In Lecture Notes in Computer Science, 668–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88690-7_50.

6

Acevedo, Liesner, Victor M. Garcia, Antonio M. Vidal, and Pedro Alonso. "Partial Data Replication as a Strategy for Parallel Computing of the Multilevel Discrete Wavelet Transform." In Parallel Processing and Applied Mathematics, 51–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14390-8_6.

7

Blocher, Hannah, Georg Schollmeyer, and Christoph Jansen. "Statistical Models for Partial Orders Based on Data Depth and Formal Concept Analysis." In Information Processing and Management of Uncertainty in Knowledge-Based Systems, 17–30. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08974-9_2.

8

Qi, Xinxin, Juan Chen, and Lin Deng. "CP³: Hierarchical Cross-Platform Power/Performance Prediction Using a Transfer Learning Approach." In Algorithms and Architectures for Parallel Processing, 117–38. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-22677-9_7.

Abstract:
Cross-platform power/performance prediction is becoming increasingly important due to the rapid development and variety of software and hardware architectures in an era of heterogeneous multi-core. However, accurate power/performance prediction faces an obstacle caused by the large gap between architectures, which is often overcome by laborious and time-consuming fine-grained program profiling on the target platform. To overcome these problems, this paper introduces CP³, a hierarchical Cross-platform Power/Performance Prediction framework, which focuses on utilizing architecture differences to migrate built models to target platforms. The core of CP³ is the three-step hierarchical transfer learning approach: hierarchical division, partial transfer learning, and model fusion, respectively. CP³ firstly builds a power/performance model on the source platform, then rebuilds it with the reduced training data on the target platform, and finally obtains a cross-platform model. We validate the effectiveness of CP³ using a group of benchmarks on X86- and ARM-based platforms that use three different types of commonly used processors. Evaluation results show that when applying CP³, only 1% of the baseline training data is required to achieve high cross-platform prediction accuracy, with power prediction error being only 0.65% and performance prediction error being only 4.64%.
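The three-step idea (fit on the source platform, transfer, rebuild with a small target sample, then fuse) can be illustrated with a generic residual fine-tuning loop. This is a schematic analogue of CP³, not the authors' code; the model choice and data are arbitrary:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Source platform: plenty of (hardware counters -> power) samples.
X_src = rng.normal(size=(5000, 8))
y_src = X_src @ rng.normal(size=8) + 0.1 * rng.normal(size=5000)

# Target platform: related but shifted response, few labeled samples
# (the paper reports needing only ~1% of the baseline training data).
X_tgt = rng.normal(size=(50, 8))
y_tgt = X_tgt @ rng.normal(size=8) + 0.1 * rng.normal(size=50)

src_model = Ridge(alpha=1.0).fit(X_src, y_src)

# Transfer step: model the residual between the source model's
# prediction and the target behavior, using the small target set.
residual = y_tgt - src_model.predict(X_tgt)
delta_model = Ridge(alpha=1.0).fit(X_tgt, residual)

def predict_target(X):
    # Fused cross-platform model: source knowledge + learned correction.
    return src_model.predict(X) + delta_model.predict(X)

X_test = rng.normal(size=(5, 8))
print(predict_target(X_test))
```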
9

Raffel, Markus, Christian E. Willert, Fulvio Scarano, Christian J. Kähler, Steven T. Wereley, and Jürgen Kompenhans. "Post-processing of PIV Data." In Particle Image Velocimetry, 243–83. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-68852-7_7.

10

Raffel, Markus, Christian E. Willert, and Jürgen Kompenhans. "Post-processing of PIV data." In Particle Image Velocimetry, 147–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-662-03637-2_6.


Conference papers on the topic "Partial Data processing"

1

Slo, Ahmad, Sukanya Bhowmik, Albert Flaig, and Kurt Rothermel. "pSPICE: Partial Match Shedding for Complex Event Processing." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006436.

2

Nir, Guy, and Allen Tannenbaum. "Temporal registration of partial data using particle filtering." In 2011 18th IEEE International Conference on Image Processing (ICIP 2011). IEEE, 2011. http://dx.doi.org/10.1109/icip.2011.6116064.

3

Sakhaee, Elham, and Alireza Entezari. "Sparse partial derivatives and reconstruction from partial Fourier data." In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178646.

4

Klyuvak, Andriy, Oksana Kliuva, and Ruslan Skrynkovskyy. "Partial Motion Blur Removal." In 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP). IEEE, 2018. http://dx.doi.org/10.1109/dsmp.2018.8478595.

5

Kim, Hyeokman, Sung-Joon Park, Jinho Lee, Woonkyung M. Kim, and Samuel M. Song. "Processing of partial video data for detection of wipes." In Electronic Imaging '99, edited by Minerva M. Yeung, Boon-Lock Yeo, and Charles A. Bouman. SPIE, 1998. http://dx.doi.org/10.1117/12.333847.

6

Cideciyan, Roy D., Robert Hutchins, Thomas Mittelholzer, and Sedat Olcer. "Partial reverse concatenation for data storage." In 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE, 2014. http://dx.doi.org/10.1109/apsipa.2014.7041670.

7

Takdir, Hiroyuki Kitagawa, and Toshiyuki Amagasa. "Region-based Sub-Snapshot (RegSnap): Enhanced Fault Tolerance in Distributed Stream Processing with Partial Snapshot." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020607.

8

Zhang, Y., and R. Gupta. "Enabling partial cache line prefetching through data compression." In 2003 International Conference on Parallel Processing, 2003. Proceedings. IEEE, 2003. http://dx.doi.org/10.1109/icpp.2003.1240590.

9

Karandaeva, O. I., I. M. Yachikov, E. A. Khramshina, and N. N. Druzhinin. "Partial Discharge Monitoring Data: Statistical Processing to Assessment Transformer Condition." In 2019 IEEE Russian Workshop on Power Engineering and Automation of Metallurgy Industry: Research & Practice (PEAMI). IEEE, 2019. http://dx.doi.org/10.1109/peami.2019.8915407.

10

Diniz, Paulo S. R., Guilherme O. Pinto, and Are Hjorungnes. "Data selective partial-update affine projection algorithm." In ICASSP 2008 - 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/icassp.2008.4518489.


Reports on the topic "Partial Data processing"

1

Liu, Jianshen, Carlos Maltzahn, Matthew Curry, and Craig Ulmer. Processing Particle Data Flows with SmartNICs. Office of Scientific and Technical Information (OSTI), October 2022. http://dx.doi.org/10.2172/1892372.

2

Morley, Steven, John Sullivan, Richard Schirato, and James Terry. Data Processing for Energetic Particle Measurements from the Global Positioning System (GPS) constellation. Office of Scientific and Technical Information (OSTI), November 2014. http://dx.doi.org/10.2172/1164428.

3

Salter, R., Quyen Dong, Cody Coleman, Maria Seale, Alicia Ruvinsky, LaKenya Walker, and W. Bond. Data Lake Ecosystem Workflow. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40203.

Abstract:
The Engineer Research and Development Center, Information Technology Laboratory’s (ERDC-ITL’s) Big Data Analytics team specializes in the analysis of large-scale datasets with capabilities across four research areas that require vast amounts of data to inform and drive analysis: large-scale data governance, deep learning and machine learning, natural language processing, and automated data labeling. Unfortunately, data transfer between government organizations is a complex and time-consuming process requiring coordination of multiple parties across multiple offices and organizations. Past successes in large-scale data analytics have placed a significant demand on ERDC-ITL researchers, highlighting that few individuals fully understand how to successfully transfer data between government organizations; future project success therefore depends on a small group of individuals to efficiently execute a complicated process. The Big Data Analytics team set out to develop a standardized workflow for the transfer of large-scale datasets to ERDC-ITL, in part to educate peers and future collaborators on the process required to transfer datasets between government organizations. Researchers also aim to increase workflow efficiency while protecting data integrity. This report provides an overview of the created Data Lake Ecosystem Workflow by focusing on the six phases required to efficiently transfer large datasets to supercomputing resources located at ERDC-ITL.
4

Neeley, Aimee, Stace E. Beaulieu, Chris Proctor, Ivona Cetinić, Joe Futrelle, Inia Soto Ramos, Heidi M. Sosik, et al. Standards and practices for reporting plankton and other particle observations from images. Woods Hole Oceanographic Institution, July 2021. http://dx.doi.org/10.1575/1912/27377.

Abstract:
This technical manual guides the user through the process of creating a data table for the submission of taxonomic and morphological information for plankton and other particles from images to a repository. Guidance is provided to produce documentation that should accompany the submission of plankton and other particle data to a repository, describes data collection and processing techniques, and outlines the creation of a data file. Field names include scientificName, which represents the lowest-level taxonomic classification (e.g., genus if not certain of species, family if not certain of genus), and scientificNameID, the unique identifier from a reference database such as the World Register of Marine Species or AlgaeBase. The data table described here includes the field names associatedMedia, scientificName/scientificNameID for both automated and manual identification, biovolume, area_cross_section, length_representation and width_representation. Additional steps that instruct the user on how to format their data for a submission to the Ocean Biodiversity Information System (OBIS) are also included. Examples of documentation and data files are provided for the user to follow. The documentation requirements and data table format are approved by both NASA's SeaWiFS Bio-optical Archive and Storage System (SeaBASS) and the National Science Foundation's Biological and Chemical Oceanography Data Management Office (BCO-DMO).
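To make the field layout concrete, a hypothetical two-row extract of such a data table might look as follows (file names, taxa, identifiers, and values are invented placeholders for illustration; consult the manual itself for the authoritative format):

```
associatedMedia, scientificName, scientificNameID, biovolume, area_cross_section
img_0001.png, Thalassiosira, urn:lsid:marinespecies.org:taxname:000001, 5200.4, 310.2
img_0002.png, Bacillariophyceae, urn:lsid:marinespecies.org:taxname:000002, 880.7, 95.1
```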
APA, Harvard, Vancouver, ISO, and other styles
5

Dudley, J. P., and S. V. Samsonov. Système de traitement automatisé du gouvernement canadien pour la détection des variations et l'analyse des déformations du sol à partir des données de radar à synthèse d'ouverture de RADARSAT-2 et de la mission de la Constellation RADARSAT : description et guide de l'utilisateur [Government of Canada automated processing system for change detection and ground-deformation analysis from RADARSAT-2 and RADARSAT Constellation Mission synthetic aperture radar data: description and user guide]. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/329134.

Full text
Abstract:
Remote sensing using Synthetic Aperture Radar (SAR) offers powerful methods for monitoring ground deformation from both natural and anthropogenic sources. Advanced analysis techniques such as Differential Interferometric Synthetic Aperture Radar (DInSAR), change detection, and Speckle Offset Tracking (SPO) provide sensitive measures of ground movement. With both the RADARSAT-2 and RADARSAT Constellation Mission (RCM) SAR satellites, Canada has access to a significant catalogue of SAR data. To make use of this data, the Canada Centre for Mapping and Earth Observation (CCMEO) has developed an automated system for generating standard and advanced deformation products from SAR data using both DInSAR and SPO methods. This document provides a user guide for this automated processing system.
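As a hedged illustration of the DInSAR principle the system builds on (this numpy sketch is not code from the CCMEO system; the array names and toy workflow are assumptions of this example), the wrapped differential phase of two co-registered single-look-complex (SLC) acquisitions can be formed as follows:

```python
import numpy as np

def differential_interferogram(slc1: np.ndarray,
                               slc2: np.ndarray,
                               topo_phase: np.ndarray) -> np.ndarray:
    """Toy DInSAR step: wrapped differential phase from two co-registered
    complex SLC images, after removing a simulated topographic phase.
    Operational systems add co-registration, filtering, phase unwrapping,
    and geocoding around this core."""
    interferogram = slc1 * np.conj(slc2)             # complex conjugate product
    diff = interferogram * np.exp(-1j * topo_phase)  # remove topographic contribution
    return np.angle(diff)                            # wrapped phase in (-pi, pi]

# Line-of-sight displacement relates to the unwrapped phase roughly as
# d = -(wavelength / (4 * pi)) * phase; the sign convention varies by processor.
```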
APA, Harvard, Vancouver, ISO, and other styles
6

Modlo, Yevhenii O., Serhiy O. Semerikov, Stanislav L. Bondarevskyi, Stanislav T. Tolmachev, Oksana M. Markova, and Pavlo P. Nechypurenko. Methods of using mobile Internet devices in the formation of the general scientific component of bachelor in electromechanics competency in modeling of technical objects. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3677.

Full text
Abstract:
An analysis of the experience of the professional training of bachelors of electromechanics in Ukraine and abroad made it possible to determine that one of the leading trends in its modernization is the synergistic integration of various engineering branches (mechanical, electrical, and electronic engineering and automation) in mechatronics for the design, manufacture, operation, and maintenance of electromechanical equipment. Teaching mechatronics calls for the meaningful integration of the various disciplines of professional and practical training of bachelors of electromechanics based on the concept of modeling, and for the technological integration of various organizational forms and teaching methods based on the concept of mobility. Within this approach, the leading learning tools for bachelors of electromechanics are mobile Internet devices (MID): multimedia mobile devices that provide wireless access to information and communication Internet services for collecting, organizing, storing, processing, transmitting, and presenting all kinds of messages and data. The authors outline the main possibilities of using MID in learning: ensuring equal access to education, personalizing learning, providing instant feedback and evaluation of learning outcomes, enabling mobile learning and the productive use of classroom time, creating mobile learning communities, supporting situated learning and the development of continuous seamless learning, bridging the gap between formal and informal learning, minimizing educational disruption in conflict and disaster areas, assisting learners with disabilities, improving the quality of communication and of institutional management, and maximizing cost-efficiency. A bachelor of electromechanics' competency in the modeling of technical objects is a personal and vocational ability that includes a system of knowledge, skills, and experience in learning and research activities on modeling mechatronic systems, together with a positive value attitude towards it; the bachelor of electromechanics should be ready and able to use methods and software/hardware modeling tools for process analysis, system synthesis, and the evaluation of their reliability and effectiveness in solving practical problems in the professional field. The competency structure of the bachelor of electromechanics in the modeling of technical objects comprises three groups of competencies: general scientific, general professional, and specialized professional. The technique of using MID in teaching bachelors of electromechanics to model technical objects is implemented through a corresponding methodology, whose components, partial methods for using MID in forming the general scientific component of this competency, are illustrated with the academic disciplines "Higher mathematics", "Computers and programming", "Engineering mechanics", and "Electrical machines". The leading tools for forming the general scientific component of the competency are mobile augmented reality tools (to visualize the structure of objects and modeling results), mobile computer mathematical systems (universal tools used at all stages of learning modeling), cloud-based spreadsheets (as modeling tools) and text editors (to write the program description of a model), mobile computer-aided design systems (to create and view the physical properties of models of technical objects), and mobile communication tools (to organize joint modeling activity).
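To make the kind of modeling of technical objects described above concrete, a minimal sketch of the sort of exercise a student might run in a mobile computer mathematical system or spreadsheet is given below; the first-order DC-motor model and every parameter value are invented for illustration and are not taken from the paper.

```python
# Hypothetical student exercise: angular speed of a DC motor under constant voltage.
# Model: J * dw/dt = (Kt / R) * (V - Ke * w) - b * w   (all parameters illustrative)

J, b = 0.01, 0.001        # rotor inertia [kg*m^2], viscous friction [N*m*s]
Kt = Ke = 0.05            # torque and back-EMF constants
R, V = 1.0, 12.0          # armature resistance [ohm], supply voltage [V]

dt, w = 1e-3, 0.0         # time step [s], initial speed [rad/s]
for step in range(30_000):           # simulate 30 seconds
    torque = (Kt / R) * (V - Ke * w) - b * w
    w += dt * torque / J             # explicit Euler update
print(f"speed after 30 s: {w:.1f} rad/s")  # approaches ~171 rad/s for these values
```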
APA, Harvard, Vancouver, ISO, and other styles
7

Ron, Eliora, and Eugene Eugene Nester. Global functional genomics of plant cell transformation by agrobacterium. United States Department of Agriculture, March 2009. http://dx.doi.org/10.32747/2009.7695860.bard.

Full text
Abstract:
The aim of this study was to carry out a global functional genomics analysis of plant cell transformation by Agrobacterium in order to define and characterize the physiology of Agrobacterium in the acidic environment of a wounded plant. We planned to study the proteome and transcriptome of Agrobacterium in response to a change in pH from 7.2 to 5.5 and to identify genes and circuits directly involved in this change. Bacteria-plant interactions involve a large number of global regulatory systems, which are essential for protection against new stressful conditions. The interaction of bacteria with their hosts had previously been studied by genetic-physiological methods. We wanted to make use of new capabilities to study these interactions on a global scale, using transcription analysis (transcriptomics, microarrays) and proteomics (2D gel electrophoresis and mass spectrometry). The results provided extensive data on the functional genomics under conditions that partially mimic plant infection and, in addition, revealed some surprising and significant findings. Thus, we identified the genes whose expression is modulated when Agrobacterium is grown under the acidic conditions found in the rhizosphere (pH 5.5), an essential environmental factor in Agrobacterium-plant interactions that is required for induction of the virulence program by plant signal molecules. Among the 45 genes whose expression was significantly elevated, of special interest is the chromosomally encoded two-component system ChvG/I, which is involved in regulating acid-inducible genes. A second exciting system under acid and ChvG/I control is a protein secretion system, T6SS, encoded by 14 genes, which appears to be important for Rhizobium leguminosarum nodule formation and nitrogen fixation and for the virulence of Agrobacterium. The proteome analysis revealed that gamma-aminobutyric acid (GABA), a metabolite secreted by wounded plants, induces the synthesis of an Agrobacterium lactonase which degrades the quorum-sensing signal N-acyl homoserine lactone (AHL), resulting in attenuation of virulence. In addition, through a transcriptomic analysis of Agrobacterium growing at the pH of the rhizosphere (pH 5.5), we demonstrated that salicylic acid (SA), a well-studied plant signal molecule important in plant defense, attenuates Agrobacterium virulence in two distinct ways: by down-regulating the synthesis of the virulence (vir) genes required for the processing and transfer of the T-DNA, and by inducing the same lactonase, which in turn degrades the AHL. Thus GABA and SA, despite their different molecular structures, induce the expression of the same genes. The identification of genes whose expression is modulated by conditions that mimic plant infection, as well as the identification of regulatory molecules that help control the early stages of infection, advances our understanding of this complex bacterial-plant interaction and has immediate potential applications to modify it. We expect that the data generated by our research will be used to develop novel strategies for the control of crown gall disease. Moreover, these results will also provide the basis for future biotechnological approaches that use genetic manipulations to improve bacterial-plant interactions, leading to more efficient DNA transfer to recalcitrant plants and robust symbiosis. These advances will, in turn, contribute to plant protection by introducing genes for resistance against other bacteria, pests, and environmental stress.
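As a generic, hedged illustration of how "significantly elevated" genes are typically selected in such transcriptomic comparisons (a standard fold-change and p-value filter, not the authors' actual pipeline; every number below is invented):

```python
import math

# Toy expression table: gene -> (mean signal at pH 7.2, mean signal at pH 5.5, p-value).
# Gene names echo the abstract; all values are illustrative, and real analyses
# use replicate-aware statistics with multiple-testing correction.
expression = {
    "chvG":  (210.0, 980.0, 0.001),
    "chvI":  (180.0, 760.0, 0.004),
    "geneX": (500.0, 520.0, 0.600),
}

def elevated_at_low_ph(table, min_log2_fc=1.0, alpha=0.05):
    """Keep genes at least 2-fold up at pH 5.5 (log2 FC >= 1) with p < alpha."""
    hits = []
    for gene, (ph72, ph55, p) in table.items():
        log2_fc = math.log2(ph55 / ph72)
        if log2_fc >= min_log2_fc and p < alpha:
            hits.append((gene, round(log2_fc, 2), p))
    return hits

print(elevated_at_low_ph(expression))  # [('chvG', 2.22, 0.001), ('chvI', 2.08, 0.004)]
```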
APA, Harvard, Vancouver, ISO, and other styles
8

Burks, Thomas F., Victor Alchanatis, and Warren Dixon. Enhancement of Sensing Technologies for Selective Tree Fruit Identification and Targeting in Robotic Harvesting Systems. United States Department of Agriculture, October 2009. http://dx.doi.org/10.32747/2009.7591739.bard.

Full text
Abstract:
The proposed project aims to enhance tree fruit identification and targeting for robotic harvesting through the selection of appropriate sensor technology, sensor fusion, and visual servo-control approaches. These technologies will be applicable to apple, orange, and grapefruit harvest, although specific sensor wavelengths may vary. The primary challenges are fruit occlusion, light variability, peel color variation with maturity, range to target, and the computational requirements of image processing algorithms. There were four major development tasks in the original three-year proposed study. First, spectral characteristics in the VIS/NIR range (0.4-1.0 micron) will be used in conjunction with thermal data to provide accurate and robust detection of fruit in the tree canopy; hyperspectral image pairs will be combined to provide automatic stereo matching for accurate 3D position. Second, VIS/NIR/FIR (0.4-15.0 micron) spectral sensor technology will be evaluated for its potential for in-field, on-the-tree grading of surface defects, maturity, and size for selective fruit harvest. Third, new adaptive Lyapunov-based HBVS (homography-based visual servo) methods that compensate for camera uncertainty and distortion effects and provide range to target from a single camera will be developed, simulated, and implemented on a camera testbed to prove the concept; HBVS methods coupled with image-space navigation will be implemented to provide robust target tracking. Finally, harvesting tests will be conducted on the developed technologies using the University of Florida harvesting manipulator test bed. During the course of the project it was determined that the second objective was overly ambitious for the project period, and effort was directed toward the other objectives. The results reflect the synergistic efforts of the three principals: the USA team focused on citrus-based approaches while the Israeli counterpart focused on apples. The USA team improved visual servo control through the use of a statistics-based range estimate and homography; the results have been promising as long as the target is visible. In addition, the USA team developed improved fruit detection algorithms that are robust under light variation and can localize fruit centers for partially occluded fruit. Algorithms were also developed to fuse thermal and visible-spectrum images prior to segmentation in order to evaluate the potential improvements in fruit detection. Lastly, the USA team developed a multispectral detection approach that demonstrated detection of more than 90% of non-occluded fruit. The Israeli team focused on image registration and statistics-based fruit detection with post-segmentation fusion. The results of all programs show significant progress, with increased levels of fruit detection over the prior art.
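As a hedged sketch of the general idea of fusing thermal and visible imagery prior to segmentation (the blending weight and threshold are invented, and this is not the teams' published method), a minimal numpy version might look like:

```python
import numpy as np

def fuse_and_segment(visible_gray: np.ndarray,
                     thermal: np.ndarray,
                     w_vis: float = 0.6,
                     threshold: float = 0.5) -> np.ndarray:
    """Toy pre-segmentation fusion: normalize both channels to [0, 1],
    blend them with a fixed weight, and threshold to a binary fruit mask.
    A real pipeline would first register the two images and would learn
    or calibrate the weight and threshold rather than fixing them."""
    def normalize(img):
        img = img.astype(np.float64)
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)

    fused = w_vis * normalize(visible_gray) + (1.0 - w_vis) * normalize(thermal)
    return fused > threshold  # boolean mask of candidate fruit pixels
```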
APA, Harvard, Vancouver, ISO, and other styles