Academic literature on the topic "Integration of peptidomic data"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Integration of peptidomic data".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Integration of peptidomic data"

1. Fang, Hai Yan, Guo Ping Zhang, Feng Gao, Xiao Ping Zhao, Peng Shen, and Shu Fang Wang. "Comparison of Auto and Manual Integration for Peptidomics Data Based on High Performance Liquid Chromatography Coupled with Mass Spectrometry". Advanced Materials Research 340 (September 2011): 266–72. http://dx.doi.org/10.4028/www.scientific.net/amr.340.266.

Abstract:
A growing body of literature points to the need for data-processing methods for peptidome profiling and analysis. Although some methods have been established, many of them focus on the development and application of automatic integration software. In this work, we compared automatic integration by software with manual integration for peptidomics data based on high-performance liquid chromatography coupled with mass spectrometry (HPLC-MS). Two data-processing procedures, automatic integration by XCMS and manual integration, were applied to HPLC-MS peptidomics data from blood samples of cerebral infarction and breast cancer patients, respectively. Almost all peaks contained in the chromatograms could be picked out by XCMS, but their areas differed greatly from those obtained by manual integration. Furthermore, the two-tailed t-test results of the two data-processing procedures also differed, and different potential biomarkers were obtained. The results of this work provide a helpful reference for data processing in peptidomics research.
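The core of the comparison described in this abstract is a per-feature, two-tailed t-test on integrated peak areas. A minimal sketch of that step in Python, assuming the areas from both procedures have already been exported as aligned arrays (the variable names and numbers are illustrative, not data from the paper):

```python
import numpy as np
from scipy import stats

# Peak areas for one feature across the same samples, as produced by
# XCMS-style automatic integration and by manual integration (toy values).
auto_areas = np.array([1.2e5, 9.8e4, 1.5e5, 1.1e5, 1.3e5])
manual_areas = np.array([9.0e4, 7.5e4, 1.2e5, 8.8e4, 1.0e5])

# Paired two-tailed t-test: do the two procedures give systematically
# different areas for the same samples?
t_stat, p_value = stats.ttest_rel(auto_areas, manual_areas)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.3g}")

# The same kind of test, applied per feature between patient groups, is what
# drives the biomarker comparison reported in the abstract.
```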
2. Fortier, Marie-Hélène, Étienne Caron, Marie-Pierre Hardy, Grégory Voisin, Sébastien Lemieux, Claude Perreault, and Pierre Thibault. "The MHC class I peptide repertoire is molded by the transcriptome". Journal of Experimental Medicine 205, no. 3 (February 25, 2008): 595–610. http://dx.doi.org/10.1084/jem.20071985.

Abstract:
Under steady-state conditions, major histocompatibility complex (MHC) I molecules are associated with self-peptides that are collectively referred to as the MHC class I peptide (MIP) repertoire. Very little is known about the genesis and molecular composition of the MIP repertoire. We developed a novel high-throughput mass spectrometry approach that yields an accurate definition of the nature and relative abundance of unlabeled peptides presented by MHC I molecules. We identified 189 and 196 MHC I–associated peptides from normal and neoplastic mouse thymocytes, respectively. By integrating our peptidomic data with global profiling of the transcriptome, we reached two conclusions. The MIP repertoire of primary mouse thymocytes is biased toward peptides derived from highly abundant transcripts and is enriched in peptides derived from cyclins/cyclin-dependent kinases and helicases. Furthermore, we found that ∼25% of MHC I–associated peptides were differentially expressed on normal versus neoplastic thymocytes. Approximately half of those peptides are derived from molecules directly implicated in neoplastic transformation (e.g., components of the PI3K–AKT–mTOR pathway). In most cases, overexpression of MHC I peptides on cancer cells entailed posttranscriptional mechanisms. Our results show that high-throughput analysis and sequencing of MHC I–associated peptides yields unique insights into the genesis of the MIP repertoire in normal and neoplastic cells.
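The integration step alluded to above, relating each MHC I-associated peptide to the abundance of its source transcript, amounts to a table join. A minimal sketch with pandas, using made-up gene names and expression values:

```python
import pandas as pd

# MHC I-associated peptides with their source gene (toy rows).
peptides = pd.DataFrame({
    "peptide": ["SIINFEKL", "KAVYNFATM"],
    "gene": ["Ova", "Ccnd1"],
})

# Transcript abundance from expression profiling of the same cells.
transcripts = pd.DataFrame({
    "gene": ["Ova", "Ccnd1", "Actb"],
    "mrna_level": [12.4, 310.0, 8900.0],
})

# One row per peptide, annotated with the abundance of its source transcript;
# ranking by mrna_level shows whether the repertoire is biased toward
# highly abundant transcripts, as the abstract concludes.
merged = peptides.merge(transcripts, on="gene", how="left")
print(merged.sort_values("mrna_level", ascending=False))
```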
3. Labas, Valérie, Lucie Spina, Clémence Belleannee, Ana-Paula Teixeira-Gomes, Audrey Gargaros, Françoise Dacheux, and Jean-Louis Dacheux. "Data in support of peptidomic analysis of spermatozoa during epididymal maturation". Data in Brief 1 (December 2014): 79–84. http://dx.doi.org/10.1016/j.dib.2014.10.003.
4. Schrader, Michael, and Hartmut Selle. "The Process Chain for Peptidomic Biomarker Discovery". Disease Markers 22, no. 1-2 (2006): 27–37. http://dx.doi.org/10.1155/2006/174849.

Abstract:
Over the last few years the interest in diagnostic markers for specific diseases has increased continuously. It is expected that they not only improve a patient's medical treatment but also contribute to accelerating the process of drug development. This demand for new biomarkers is caused by a lack of specific and sensitive diagnosis in many diseases. Moreover, diseases usually occur in different types or stages which may need different diagnostic and therapeutic measures. Their differentiation has to be considered in clinical studies as well. Therefore, it is important to translate a macroscopic pathological or physiological finding into a microscopic view of molecular processes and vice versa, though it is a difficult and tedious task. Peptides play a central role in many physiological processes and are of importance in several areas of drug research. Exploration of endogenous peptides in biologically relevant sources may directly lead to new drug substances, serve as key information on a new target and can as well result in relevant biomarker candidates. A comprehensive analysis of peptides and small proteins of a biological system corresponding to the respective genomic information (peptidomics® methods) was a missing link in proteomics. A new peptidomic technology platform addressing peptides was recently presented, developed by adaptation of the striving proteomic technologies. Here, concepts of using peptidomics technologies for biomarker discovery are presented and illustrated with examples. It is discussed how the biological hypothesis and sample quality determine the result of the study. A detailed study design, appropriate choice and application of technology as well as thorough data interpretation can lead to significant results which have to be interpreted in the context of the underlying disease. The identified biomarker candidates will be characterised in validation studies before use. This approach for discovery of peptide biomarkers has potential for improving clinical studies.
5. Abdelati, Abeer A., Rehab A. Elnemr, Noha S. Kandil, Fatma I. Dwedar, and Rasha A. Ghazala. "Serum Peptidomic Profile as a Novel Biomarker for Rheumatoid Arthritis". International Journal of Rheumatology 2020 (August 3, 2020): 1–10. http://dx.doi.org/10.1155/2020/6069484.

Abstract:
Over the last decades, there has been an increasing need to discover new diagnostic RA biomarkers, other than the current serologic biomarkers, which can assist early diagnosis and response to treatment. The purpose of this study was to analyze the serum peptidomic profile in patients with rheumatoid arthritis (RA) by using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). The study included 35 patients with rheumatoid arthritis (RA), 35 patients with primary osteoarthritis (OA) as the disease control (DC), and 35 healthy controls (HC). All participants were subjected to serum peptidomic profile analysis using magnetic bead (MB) separation (MALDI-TOF-MS). The trial showed 113 peaks that discriminated RA from OA and 101 peaks that discriminated RA from HC. Moreover, 95 peaks were identified and discriminated OA from HC; 38 were significant (p<0.05) and 57 nonsignificant. The genetic algorithm (GA) model showed the best sensitivity and specificity in the three trials (RA versus HC, OA versus HC, and RA versus OA). The present data suggested that the peptidomic pattern is of value for differentiating individuals with RA from OA and healthy controls. We concluded that MALDI-TOF-MS combined with MB is an effective technique to identify novel serum protein biomarkers related to RA.
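The discriminating power reported above is ultimately summarized as the sensitivity and specificity of a classification model built on peak intensities. The study used a genetic-algorithm model in the MALDI profiling software; the sketch below substitutes a plain logistic regression purely to illustrate how those two figures are computed (all data are synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Toy peak-intensity matrix (rows: sera, columns: discriminating m/z peaks)
# and labels (1 = RA, 0 = OA).
X = np.random.default_rng(0).normal(size=(20, 5))
y = np.array([1] * 10 + [0] * 10)

model = LogisticRegression().fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate for RA
specificity = tn / (tn + fp)   # true-negative rate for OA
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```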
6. Fortier, Marie-Hélène, Etienne Caron, Marie-Pierre Hardy, Grégory Voisin, Sébastien Lemieux, Claude Perreault, and Pierre Thibault. "The MHC I Immunopeptidome Is Moulded by the Transcriptome and Conceals a Tissue-Specific Signature." Blood 110, no. 11 (November 16, 2007): 1327. http://dx.doi.org/10.1182/blood.v110.11.1327.1327.

Abstract:
Background: Cell surface MHC I molecules are associated with self peptides that are collectively referred to as the self MHC I immunopeptidome (sMII). The sMII plays vital roles: it shapes the repertoire of developing thymocytes, transmits survival signals to mature CD8 T cells, amplifies responses against intracellular pathogens, allows immunosurveillance of neoplastic cells, and influences mating preferences in mice. Despite the tremendous importance of the sMII, very little is known about its genesis and molecular composition. Methodology/Principal Findings: We developed a novel high-throughput mass spectrometry approach that yields an accurate definition of the nature and relative abundance of unlabeled peptides presented by MHC I molecules. Two major points emerged from a comprehensive analysis of the sMII of primary mouse thymocytes: the sMII is enriched in peptides derived from highly abundant transcripts, and the sMII conceals a tissue-specific signature that emanates from about 17% of the genes represented in the sMII. We found that about 25% of MHC I-associated peptides were differentially expressed on normal versus neoplastic thymocytes. Remarkably, about half of those peptides derived from molecules implicated in neoplastic transformation. Integration of peptidomic and transcriptomic data unveiled that, in most cases, overexpression of MHC I peptides on cancer cells entailed posttranscriptional mechanisms. Finally, mice immunized against peptides overexpressed by 10 to ≥ 85 fold on cancer cells generated specific cytotoxic T-cell responses against malignant cells endogenously expressing the target epitope. Conclusion: High-throughput analysis and sequencing of MHC I-associated peptides yields unique insights into the genesis of the sMII in normal and neoplastic cells, and can be used to discover peptide targets for cancer immunotherapy. Furthermore, global portrayal of the sMII offers a novel perspective into how neoplastic transformation affects protein metabolism.

Figure 1 (caption): Relative quantification of differentially expressed MHC I peptides and source mRNAs from thymocytes and EL4 cells. (A) Volcano plot of MHC I peptides reproducibly detected across biological replicates (n = 3); peptides over- and underexpressed on EL4 cells relative to thymocytes (p-values ≤ 0.05; fold change ≥ 2.5) are highlighted in blue and red, respectively. MS-MS spectra of circled peptides are shown in B and C. (B) Scatter plot of the correlation between relative mRNA expression and relative MHC I peptide expression: expression ratios for source mRNA (x axis) and MHC I peptide (y axis) between EL4 cells and thymocytes are plotted on a log2 scale for 47 pairs, and a Spearman correlation coefficient is calculated from the linear regression. MHC I peptides overexpressed in EL4 cells or normal thymocytes are highlighted in blue and red, respectively; peptides that were not differentially expressed are in grey. The dashed box includes peptides whose overexpression on EL4 cells did not correlate with increased mRNA levels of their source protein.
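The selection and correlation steps summarized in the figure caption are straightforward to express in code. A minimal sketch, assuming per-peptide log2 expression ratios and p-values have already been computed (array names and data are illustrative; the cutoffs mirror the caption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Per-peptide log2 expression ratios (EL4 vs thymocytes) and t-test p-values.
peptide_log2fc = rng.normal(size=47)
p_values = rng.uniform(0.001, 0.2, size=47)
mrna_log2fc = peptide_log2fc + rng.normal(scale=0.8, size=47)

# Volcano-plot style selection: p <= 0.05 and fold change >= 2.5 (log2 ≈ 1.32).
differential = (p_values <= 0.05) & (np.abs(peptide_log2fc) >= np.log2(2.5))
print(f"{differential.sum()} peptides pass the cutoff")

# Spearman correlation between peptide and source-mRNA expression ratios.
rho, p = stats.spearmanr(mrna_log2fc, peptide_log2fc)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```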
7. Miralles, B., J. Sanchón, L. Sánchez-Rivera, D. Martínez-Maqueda, Y. Le Gouar, D. Dupont, L. Amigo, and I. Recio. "Peptidomic data in porcine duodenal effluents after oral administration of micellar casein". Data in Brief 38 (October 2021): 107326. http://dx.doi.org/10.1016/j.dib.2021.107326.
8. Sheng, Pijie, Minyan Xu, Zhenzhen Zheng, Xiaojing Liu, Wanlu Ma, Ting Ding, Chenchen Zhang et al. "Peptidome and Transcriptome Analysis of Plant Peptides Involved in Bipolaris maydis Infection of Maize". Plants 12, no. 6 (March 14, 2023): 1307. http://dx.doi.org/10.3390/plants12061307.

Abstract:
Southern corn leaf blight (SCLB) caused by Bipolaris maydis threatens maize growth and yield worldwide. In this study, TMT-labeled comparative peptidomic analysis was established between infected and uninfected maize leaf samples using liquid-chromatography-coupled tandem mass spectrometry. The results were further compared and integrated with transcriptome data under the same experimental conditions. Plant peptidomic analysis identified 455 and 502 differentially expressed peptides (DEPs) in infected maize leaves on day 1 and day 5, respectively. A total of 262 common DEPs were identified in both cases. Bioinformatic analysis indicated that the precursor proteins of DEPs are associated with many pathways generated by SCLB-induced pathological changes. The expression profiles of plant peptides and genes in maize plants were considerably altered after B. maydis infection. These findings provide new insights into the molecular mechanisms of SCLB pathogenesis and offer a basis for the development of maize genotypes with SCLB resistance.
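The day-1/day-5 comparison above reduces to set operations over peptide identifiers. A trivial sketch (the identifiers are placeholders, not sequences from the study):

```python
# Differentially expressed peptides (DEPs) identified at two time points.
deps_day1 = {"PEP001", "PEP002", "PEP003", "PEP004"}
deps_day5 = {"PEP002", "PEP003", "PEP005"}

# Peptides responding at both time points, analogous to the 262 common DEPs
# reported in the abstract.
common = deps_day1 & deps_day5
print(f"{len(common)} common DEPs: {sorted(common)}")
```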
9. Santos-Hernández, Marta, Beatriz Miralles, Lourdes Amigo, and Isidra Recio. "Peptidomic data of egg white gastrointestinal digests prepared using the Infogest Harmonized Protocol". Data in Brief 31 (August 2020): 105932. http://dx.doi.org/10.1016/j.dib.2020.105932.
10. Mostovenko, Ekaterina, Samantha Saunders, Pretal P. Muldoon, Lindsey Bishop, Matthew J. Campen, Aaron Erdely, and Andrew K. Ottens. "Carbon Nanotube Exposure Triggers a Cerebral Peptidomic Response: Barrier Compromise, Neuroinflammation, and a Hyperexcited State". Toxicological Sciences 182, no. 1 (April 21, 2021): 107–19. http://dx.doi.org/10.1093/toxsci/kfab042.

Abstract:
The unique physicochemical properties of carbon nanomaterials and their ever-growing utilization generate a serious concern for occupational risk. Pulmonary exposure to these nanoparticles induces local and systemic inflammation, cardiovascular dysfunction, and even cognitive deficits. Although multiple routes of extrapulmonary toxicity have been proposed, the mechanism for and manner of neurologic effects remain minimally understood. Here, we examine the cerebral spinal fluid (CSF)-derived peptidomic fraction as a reflection of neuropathological alterations induced by pulmonary carbon nanomaterial exposure. Male C57BL/6 mice were exposed to 10 or 40 µg of multiwalled carbon nanotubes (MWCNT) by oropharyngeal aspiration. Serum and CSF were collected 4 h post exposure. An enriched peptide fraction of both biofluids was analyzed using ion mobility-enabled data-independent mass spectrometry for label-free quantification. MWCNT exposure induced a prominent peptidomic response in the blood and CSF; however, correlation between fluids was limited. Instead, we determined that a MWCNT-induced peptidomic shift occurred specific to the CSF, with 292 significant responses found that were not in serum. Identified MWCNT-responsive peptides depicted a mechanism involving aberrant fibrinolysis (fibrinopeptide A), blood-brain barrier permeation (homeobox protein A4), neuroinflammation (transmembrane protein 131L) with reactivity by astrocytes and microglia, and a pro-degradative (signal transducing adapter molecule, phosphoglycerate kinase), antiplastic (AF4/FMR2 family member 1, vacuolar protein sorting-associated protein 18) state with the excitation-inhibition balance shifted to a hyperexcited (microtubule-associated protein 1B) phenotype. Overall, the significant pathologic changes observed were consistent with early neurodegenerative disease and were diagnostically reflected in the CSF peptidome.

Theses on the topic "Integration of peptidomic data"

1. Suwareh, Ousmane. "Modélisation de la pepsinolyse in vitro en conditions gastriques et inférence de réseaux de filiation de peptides à partir de données de peptidomique". Electronic Thesis or Diss., Rennes, Agrocampus Ouest, 2022. https://tel.archives-ouvertes.fr/tel-04059711.

Abstract:
Addressing the current demographic challenges, "civilization diseases", and the possible depletion of food resources requires optimizing the effective use of food and adapting food design to the specific needs of each target population. This in turn requires a better understanding of the different stages of the digestion process. In particular, how proteins are hydrolyzed is a major issue, due to their crucial role in human nutrition. However, the probabilistic laws governing the action of pepsin, the first protease to act in the gastrointestinal tract, are still unclear. In a first approach based on peptidomic data, we demonstrate that the hydrolysis by pepsin of a peptide bond depends on the nature of the amino acid residues in its large neighborhood, but also on physicochemical and structural variables describing its environment. In a second step, considering the physicochemical environment at the peptide level, we propose a nonparametric model of the hydrolysis of these peptides by pepsin and an Expectation-Maximization type estimation algorithm, offering novel perspectives for the valorization of peptidomic data. In this dynamic approach, we integrate the peptide kinship network into the estimation procedure, which leads to a more parsimonious model that is also more relevant with regard to biological interpretations.
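The first finding above, that cleavage probability depends on the residues flanking a peptide bond, can be explored by mapping observed peptides back to their parent protein and tallying the flanking residue pairs at cleaved bonds. A rough counting sketch under that assumption (the protein and peptides are toy data, and this is far simpler than the nonparametric, EM-based model the thesis proposes):

```python
from collections import Counter

# Toy parent protein and peptides observed after in vitro pepsinolysis.
protein = "MKFLVLLFNILCLFPVLAADNHGVGPQGAS"
peptides = ["MKFLVLL", "FNILCLF", "PVLAADNHGVGPQGAS"]

cleavage_pairs = Counter()
for pep in peptides:
    start = protein.find(pep)
    end = start + len(pep)
    # Residues flanking the N-terminal cleavage site (P1 | P1'), if any.
    if start > 0:
        cleavage_pairs[(protein[start - 1], protein[start])] += 1
    # Residues flanking the C-terminal cleavage site.
    if end < len(protein):
        cleavage_pairs[(protein[end - 1], protein[end])] += 1

# Relative frequency of each flanking pair: a crude, purely illustrative
# stand-in for neighborhood-dependent cleavage probabilities.
total = sum(cleavage_pairs.values())
for pair, n in cleavage_pairs.most_common():
    print(pair, n / total)
```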
2. Nadal, Francesch Sergi. "Metadata-driven data integration". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/666947.

Abstract:
Data has an undoubtable impact on society. Storing and processing large amounts of available data is currently one of the key success factors for an organization. Nonetheless, we are recently witnessing a change represented by huge and heterogeneous amounts of data. Indeed, 90% of the data in the world has been generated in the last two years. Thus, in order to carry out these data exploitation tasks, organizations must first perform data integration, combining data from multiple sources to yield a unified view over them. Yet, the integration of massive and heterogeneous amounts of data requires revisiting the traditional integration assumptions to cope with the new requirements posed by such data-intensive settings. This PhD thesis aims to provide a novel framework for data integration in the context of data-intensive ecosystems, which entails dealing with vast amounts of heterogeneous data, from multiple sources and in their original format. To this end, we advocate for an integration process consisting of sequential activities governed by a semantic layer, implemented via a shared repository of metadata. From a stewardship perspective, these activities are the deployment of a data integration architecture, followed by the population of such shared metadata. From a data consumption perspective, the activities are virtual and materialized data integration, the former an exploratory task and the latter a consolidation one. Following the proposed framework, we focus on providing contributions to each of the four activities. We begin by proposing a software reference architecture for semantic-aware data-intensive systems. Such an architecture serves as a blueprint to deploy a stack of systems, its core being the metadata repository. Next, we propose a graph-based metadata model as formalism for metadata management. We focus on supporting schema and data source evolution, a predominant factor in the heterogeneous sources at hand. For virtual integration, we propose query rewriting algorithms that rely on the previously proposed metadata model. We additionally consider semantic heterogeneities in the data sources, which the proposed algorithms are capable of automatically resolving. Finally, the thesis focuses on the materialized integration activity and, to this end, proposes a method to select intermediate results to materialize in data-intensive flows. Overall, the results of this thesis serve as a contribution to the field of data integration in contemporary data-intensive ecosystems.
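The shared metadata repository described above can be pictured as a labelled graph over sources, schema versions, and global concepts. A small illustrative sketch with networkx; the node kinds and relation names are invented for the example, not taken from the thesis:

```python
import networkx as nx

# Shared metadata repository as a labelled directed graph.
g = nx.MultiDiGraph()
g.add_node("sales_api_v1", kind="data_source", format="JSON")
g.add_node("sales_api_v2", kind="data_source", format="JSON")
g.add_node("Sale", kind="global_concept")

# Schema evolution and source-to-concept mappings as typed edges.
g.add_edge("sales_api_v1", "sales_api_v2", relation="evolves_to")
g.add_edge("sales_api_v1", "Sale", relation="maps_to", attribute="amount")
g.add_edge("sales_api_v2", "Sale", relation="maps_to", attribute="total_amount")

# A virtual-integration query over "Sale" can be rewritten by following the
# maps_to edges to find every source (and attribute) that provides it.
providers = [(u, d["attribute"]) for u, v, d in g.in_edges("Sale", data=True)]
print(providers)
```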
3. Jakonienė, Vaida. "Integration of biological data". Linköping: Linköpings universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7484.
4. Akeel, Fatmah Y. "Secure data integration systems". Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/415716/.

Abstract:
As the web moves increasingly towards publishing data, a significant challenge arises when integrating data from diverse sources that have heterogeneous security and privacy policies and requirements. Data Integration Systems (DIS) are concerned with integrating data from multiple data sources to resolve users' queries. DIS are prone to data leakage threats, e.g. unauthorised disclosure or secondary use of the data, that compromise the data's confidentiality and privacy. We claim that these threats are caused by the failure to implement or correctly employ confidentiality and privacy techniques, and by the failure to consider the trust levels of system entities, from the very start of system development. Data leakage also results from a failure to capture or implement the security policies imposed by the data providers on the collection, processing, and disclosure of personal and sensitive data. This research proposes a novel framework, called SecureDIS, to mitigate data leakage threats in DIS. Unlike existing approaches that secure such systems, SecureDIS helps software engineers to lessen data leakage threats during the early phases of DIS development. It comprises six components that represent a conceptualised DIS architecture: data and data sources, security policies, integration approach, integration location, data consumers, and System Security Management (SSM). Each component contains a set of informal guidelines written in natural language to be used by software engineers who build and design a DIS that handles sensitive and personal data. SecureDIS has undergone two rounds of review by experts to confirm its validity, resulting in the guidelines being evaluated and extended. Two approaches were adopted to ensure that SecureDIS is suitable for software engineers. The first was to formalise the guidelines by modelling a DIS with the SecureDIS security policies using Event-B formal methods. This verified the correctness and consistency of the model. The second approach assessed SecureDIS's applicability to a real data integration project by using a case study. The case study addressed the experts' concerns regarding the ability to apply the proposed guidelines in practice.
5. Eberius, Julian. "Query-Time Data Integration". Doctoral thesis, Saechsische Landesbibliothek - Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-191560.

Abstract:
Today, data is collected in ever increasing scale and variety, opening up enormous potential for new insights and data-centric products. However, in many cases the volume and heterogeneity of new data sources precludes up-front integration using traditional ETL processes and data warehouses. In some cases, it is even unclear if and in what context the collected data will be utilized. Therefore, there is a need for agile methods that defer the effort of integration until the usage context is established. This thesis introduces Query-Time Data Integration as an alternative concept to traditional up-front integration. It aims at enabling users to issue ad-hoc queries on their own data as if all potential other data sources were already integrated, without declaring specific sources and mappings to use. Automated data search and integration methods are then coupled directly with query processing on the available data. The ambiguity and uncertainty introduced through fully automated retrieval and mapping methods is compensated by answering those queries with ranked lists of alternative results. Each result is then based on different data sources or query interpretations, allowing users to pick the result most suitable to their information need. To this end, this thesis makes three main contributions. Firstly, we introduce a novel method for Top-k Entity Augmentation, which is able to construct a top-k list of consistent integration results from a large corpus of heterogeneous data sources. It improves on the state-of-the-art by producing a set of individually consistent but mutually diverse alternative solutions, while minimizing the number of data sources used. Secondly, based on this novel augmentation method, we introduce the DrillBeyond system, which is able to process Open World SQL queries, i.e., queries referencing arbitrary attributes not defined in the queried database. The original database is then augmented at query time with Web data sources providing those attributes. Its hybrid augmentation/relational query processing enables the use of ad-hoc data search and integration in data analysis queries, and improves both performance and quality when compared to using separate systems for the two tasks. Finally, we studied the management of large-scale dataset corpora such as data lakes or Open Data platforms, which are used as data sources for our augmentation methods. We introduce Publish-time Data Integration as a new technique for data curation systems managing such corpora, which aims at improving the individual reusability of datasets without requiring up-front global integration. This is achieved by automatically generating metadata and format recommendations, allowing publishers to enhance their datasets with minimal effort. Collectively, these three contributions are the foundation of a Query-time Data Integration architecture that enables ad-hoc data search and integration queries over large heterogeneous dataset collections.
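The entity-augmentation step described above has to fill a missing attribute for a set of query entities from as few Web sources as possible. A greedy set-cover style sketch of one such result (the sources are toy data; the actual method additionally enforces consistency and diversity across the top-k alternatives):

```python
# Entities whose missing attribute we want to fill, and toy Web data sources
# mapping entity -> value for that attribute.
entities = {"Germany", "France", "Italy", "Spain"}
sources = {
    "tableA": {"Germany": 83.2, "France": 67.8},
    "tableB": {"Italy": 59.0, "Spain": 47.4, "France": 67.9},
    "tableC": {"Spain": 47.4},
}

# Greedy cover: repeatedly take the source that fills the most still-missing
# entities, so a single augmentation result uses as few sources as possible.
missing, chosen = set(entities), []
while missing:
    best = max(sources, key=lambda s: len(missing & sources[s].keys()))
    if not missing & sources[best].keys():
        break  # no source can fill the remaining entities
    chosen.append(best)
    missing -= sources[best].keys()

result = {e: v for s in chosen for e, v in sources[s].items() if e in entities}
print(chosen, result)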
6. Jakonienė, Vaida. "Integration of Biological Data". Doctoral thesis, Linköpings universitet, IISLAB - Laboratoriet för intelligenta informationssystem, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7484.

Abstract:
Data integration is an important procedure underlying many research tasks in the life sciences, as often multiple data sources have to be accessed to collect the relevant data. The data sources vary in content, data format, and access methods, which often vastly complicates the data retrieval process. As a result, the task of retrieving data requires a great deal of effort and expertise on the part of the user. To alleviate these difficulties, various information integration systems have been proposed in the area. However, a number of issues remain unsolved and new integration solutions are needed. The work presented in this thesis considers data integration at three different levels. 1) Integration of biological data sources deals with integrating multiple data sources from an information integration system point of view. We study properties of biological data sources and existing integration systems. Based on the study, we formulate requirements for systems integrating biological data sources. Then, we define a query language that supports queries commonly used by biologists. Also, we propose a high-level architecture for an information integration system that meets a selected set of requirements and that supports the specified query language. 2) Integration of ontologies deals with finding overlapping information between ontologies. We develop and evaluate algorithms that use life science literature and take the structure of the ontologies into account. 3) Grouping of biological data entries deals with organizing data entries into groups based on the computation of similarity values between the data entries. We propose a method that covers the main steps and components involved in similarity-based grouping procedures. The applicability of the method is illustrated by a number of test cases. Further, we develop an environment that supports comparison and evaluation of different grouping strategies. The work is supported by the implementation of: 1) a prototype for a system integrating biological data sources, called BioTRIFU, 2) algorithms for ontology alignment, and 3) an environment for evaluating strategies for similarity-based grouping of biological data, called KitEGA.
7. Peralta, Veronika. "Data Quality Evaluation in Data Integration Systems". PhD thesis, Université de Versailles-Saint Quentin en Yvelines, 2006. http://tel.archives-ouvertes.fr/tel-00325139.

Abstract:
The need for uniform access to multiple data sources grows stronger every day, particularly in decision-support systems that require a comprehensive analysis of the data. With the development of Data Integration Systems (DIS), information quality has become a first-class property increasingly demanded by users. This thesis deals with data quality in DIS. More precisely, we are interested in the problems of evaluating the quality of the data delivered to users in response to their queries and of satisfying users' quality requirements. We also analyze how quality measures can be used to improve the design of the DIS and the quality of the data. Our approach consists of studying one quality factor at a time, analyzing its relationship with the DIS, proposing techniques for its evaluation, and proposing actions for its improvement. Among the quality factors that have been proposed, this thesis analyzes two: data freshness and data accuracy. We analyze the different definitions and measures that have been proposed for data freshness and data accuracy, and we bring out the properties of the DIS that have a major impact on their evaluation. We summarize the analysis of each factor by means of a taxonomy, which serves to compare existing work and to highlight open problems. We propose a framework that models the different elements involved in quality evaluation, such as data sources, user queries, the integration processes of the DIS, DIS properties, quality measures, and quality evaluation algorithms. In particular, we model the integration processes of the DIS as workflow processes, in which the activities carry out the tasks that extract, integrate, and deliver data to users. Our reasoning support for quality evaluation is a directed acyclic graph, called the quality graph, which has the same structure as the DIS and carries, as labels, the DIS properties that are relevant to quality evaluation. We develop evaluation algorithms that take as input the quality values of the source data and the properties of the DIS, and combine these values to qualify the data delivered by the DIS. They are based on the graph representation and combine property values while traversing the graph. The evaluation algorithms can be specialized to take into account the properties that influence quality in a concrete application. The idea behind the framework is to define a flexible context that allows evaluation algorithms to be specialized to specific application scenarios. The quality values obtained during evaluation are compared with those expected by users; improvement actions can be carried out if the quality requirements are not satisfied. We suggest elementary improvement actions that can be composed to improve quality in a concrete DIS. Our approach to improving data freshness consists of analyzing the DIS at different levels of abstraction in order to identify its critical points and to target the application of improvement actions at those points. Our approach to improving data accuracy consists of partitioning query results into portions (certain attributes, certain tuples) with homogeneous accuracy. This allows user applications to display only the most accurate data, to filter out data that does not satisfy the accuracy requirements, or to display the data in slices according to their accuracy. Compared with existing source-selection approaches, our proposal makes it possible to select the most accurate portions instead of filtering out entire sources. The main contributions of this thesis are: (1) a detailed analysis of the freshness and accuracy quality factors; (2) techniques and algorithms for evaluating and improving data freshness and accuracy; and (3) a quality evaluation prototype usable in DIS design.
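The evaluation algorithms sketched in this abstract propagate source quality values through a quality graph that mirrors the DIS. A toy illustration for freshness, where each derived node's value is the worst input freshness plus that node's processing delay (node names and delays are invented, and real freshness metrics are richer than this):

```python
import networkx as nx

# Quality graph: sources feed integration activities, which feed the user query.
g = nx.DiGraph()
g.add_edge("source_A", "merge", delay=5)    # minutes of processing delay
g.add_edge("source_B", "merge", delay=5)
g.add_edge("merge", "report", delay=10)

source_freshness = {"source_A": 30, "source_B": 120}  # age of data at the sources

freshness = dict(source_freshness)
for node in nx.topological_sort(g):
    preds = list(g.predecessors(node))
    if preds:  # derived node: worst input age plus this step's delay
        freshness[node] = max(freshness[p] + g[p][node]["delay"] for p in preds)

print(freshness["report"])  # 135, driven by the stalest source
```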
8. Peralta, Costabel Veronika del Carmen. "Data quality evaluation in data integration systems". Versailles-St Quentin en Yvelines, 2006. http://www.theses.fr/2006VERS0020.

Abstract:
This thesis deals with data quality evaluation in Data Integration Systems (DIS). Specifically, we address the problems of evaluating the quality of the data conveyed to users in response to their queries and verifying if users’ quality expectations can be achieved. We also analyze how quality measures can be used for improving the DIS and enforcing data quality. Our approach consists in studying one quality factor at a time, analyzing its impact within a DIS, proposing techniques for its evaluation and proposing improvement actions for its enforcement. Among the quality factors that have been proposed, this thesis analyzes two of the most used ones: data freshness and data accuracy
9. Neumaier, Sebastian, Axel Polleres, Simon Steyskal, and Jürgen Umbrich. "Data Integration for Open Data on the Web". Springer International Publishing AG, 2017. http://dx.doi.org/10.1007/978-3-319-61033-7_1.

Abstract:
In this lecture we will discuss and introduce the challenges of integrating openly available Web data and how to solve them. Firstly, while we will address this topic from the viewpoint of Semantic Web research, not all data is readily available as RDF or Linked Data, so we will give an introduction to different data formats prevalent on the Web, namely, standard formats for publishing and exchanging tabular, tree-shaped, and graph data. Secondly, not all Open Data is really completely open, so we will discuss and address issues around licences and terms of usage associated with Open Data, as well as documentation of data provenance. Thirdly, we will discuss (meta-)data quality issues associated with Open Data on the Web and how Semantic Web techniques and vocabularies can be used to describe and remedy them. Fourth, we will address issues of searchability and integration of Open Data and discuss to what extent semantic search can help to overcome these. We close by briefly summarizing further issues not covered explicitly herein, such as multi-linguality, temporal aspects (archiving, evolution, temporal querying), as well as how and whether OWL and RDFS reasoning on top of integrated open data could help.
10. Cheng, Hui. "Data integration and visualization for systems biology data". Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/77250.

Abstract:
Systems biology aims to understand cellular behavior in terms of the spatiotemporal interactions among cellular components, such as genes, proteins and metabolites. Comprehensive visualization tools for exploring multivariate data are needed to gain insight into the physiological processes reflected in these molecular profiles. Data fusion methods are required to integratively study high-throughput transcriptomics, metabolomics and proteomics data before systems biology can live up to its potential. In this work I explored mathematical and statistical methods and visualization tools to resolve the prominent issues in the nature of systems biology data fusion and to gain insight into these comprehensive data. In order to choose and apply multivariate methods, it is important to know the distribution of the experimental data. Chi-square Q-Q plots and violin plots were applied to all M. truncatula and V. vinifera data, and most distributions were found to be right-skewed (Chapter 2). The biplot display provides an effective tool for reducing the dimensionality of systems biology data and displaying the molecules and time points jointly on the same plot. The biplot of M. truncatula data revealed the overall system behavior, including unidentified compounds of interest and the dynamics of the highly responsive molecules (Chapter 3). The phase spectrum computed from the Fast Fourier transform of time course data has been found to play a more important role than the amplitude in signal reconstruction. Phase spectrum analyses on in silico data created with two artificial biochemical networks, the Claytor model and the AB2 model, proved that the phase spectrum is indeed an effective tool in systems biology data fusion despite the heterogeneity of the data (Chapter 4). The difference between data integration and data fusion is further discussed. Biplot analysis of scaled data was applied to integrate transcriptome, metabolome and proteome data from the V. vinifera project. The phase spectrum combined with k-means clustering was used in integrative analyses of the transcriptome and metabolome of the M. truncatula yeast elicitation data and of the transcriptome, metabolome and proteome of the V. vinifera salinity stress data. The phase spectrum analysis was compared with the biplot display as effective tools in data fusion (Chapter 5). The results suggest that the phase spectrum may perform better than the biplot. This work was funded by the National Science Foundation Plant Genome Program, grant DBI-0109732, and by the Virginia Bioinformatics Institute.
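The phase-spectrum idea summarized above, using the phase of the Fourier transform of each molecule's time course as the common currency for fusing heterogeneous omics profiles, can be sketched in a few lines of numpy (the time courses are toy data):

```python
import numpy as np

# Toy time courses for one transcript and one metabolite over 8 time points.
transcript = np.array([0.1, 0.8, 1.5, 1.2, 0.6, 0.3, 0.2, 0.1])
metabolite = np.array([0.0, 0.3, 1.0, 1.6, 1.3, 0.7, 0.3, 0.1])

# Phase spectrum of each profile: the argument of the FFT coefficients.
phase_t = np.angle(np.fft.rfft(transcript))
phase_m = np.angle(np.fft.rfft(metabolite))

# Molecules with similar phase spectra respond with similar timing regardless
# of scale, so a phase distance can feed clustering (e.g., k-means) across omics layers.
phase_distance = np.linalg.norm(phase_t - phase_m)
print(phase_distance)
```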

Books on the topic "Integration of peptidomic data"

1. Genesereth, Michael. Data Integration. Cham: Springer International Publishing, 2010. http://dx.doi.org/10.1007/978-3-031-01550-2.
2. Dyché, Jill, and Evan Levy, eds. Customer Data Integration. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2012. http://dx.doi.org/10.1002/9781119202127.
3. Majkić, Zoran. Big Data Integration Theory. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-04156-8.
4. Doan, AnHai. Principles of data integration. Waltham, MA: Morgan Kaufmann, 2012.
5. Goldfedder, Jarrett. Building a Data Integration Team. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5653-4.
6. Davino, Cristina, and Luigi Fabbris, eds. Survey Data Collection and Integration. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-21308-3.
7. Viola de Azevedo Cunha, Mario. Market Integration Through Data Protection. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-94-007-6085-1.
8. Kerr, W. Scott. Data integration using virtual repositories. [Toronto]: Kerr, 1999.
9. Ning, Kang, ed. Methodologies of Multi-Omics Data Integration and Data Mining. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8210-1.
10. United States Department of the Interior, Bureau of Land Management, ALMRS Project Office. Bureau of Land Management data integration. [Denver, Colo.]: Bureau of Land Management, ALMRS Project Office, 1985.

Book chapters on the topic "Integration of peptidomic data"

1. Curtis, Bobby. "Data Integration". In Pro Oracle GoldenGate for the DBA, 217–24. Berkeley, CA: Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1179-3_9.
2. Revesz, Peter. "Data Integration". In Texts in Computer Science, 417–34. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84996-095-3_17.
3. Shekhar, Shashi, and Hui Xiong. "Data Integration". In Encyclopedia of GIS, 215. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-35973-1_244.
4. Fait, Aaron, and Alisdair R. Fernie. "Data Integration". In Plant Metabolic Networks, 151–71. New York, NY: Springer New York, 2008. http://dx.doi.org/10.1007/978-0-387-78745-9_6.
5. Bergamaschi, Sonia, Domenico Beneventano, Francesco Guerra, and Mirko Orsini. "Data Integration". In Handbook of Conceptual Modeling, 441–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-15865-0_14.
6. Alfieri, Roberta, and Luciano Milanesi. "Data Integration". In Encyclopedia of Systems Biology, 519. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_1072.
7. Papotti, Paolo, and Donatello Santoro. "Data Integration". In Encyclopedia of Big Data Technologies, 1–6. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63962-8_6-1.
8. Kim, Jae Kwang, and Jun Shao. "Data Integration". In Statistical Methods for Handling Incomplete Data, 299–322. 2nd ed. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9780429321740-11.
9. Kadadi, Anirudh, and Rajeev Agrawal. "Data Integration". In Encyclopedia of Big Data, 290–94. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-319-32010-6_54.
10. Sarferaz, Siar. "Data Integration". In Compendium on Enterprise Resource Planning, 419–34. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93856-7_27.

Conference proceedings on the topic "Integration of peptidomic data"

1. Lenzerini, Maurizio. "Data integration". In the twenty-first ACM SIGMOD-SIGACT-SIGART symposium. New York, New York, USA: ACM Press, 2002. http://dx.doi.org/10.1145/543613.543644.
2. Golshan, Behzad, Alon Halevy, George Mihaila, and Wang-Chiew Tan. "Data Integration". In SIGMOD/PODS'17: International Conference on Management of Data. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3034786.3056124.
3. Dong, X. L., and D. Srivastava. "Big data integration". In 2013 29th IEEE International Conference on Data Engineering (ICDE 2013). IEEE, 2013. http://dx.doi.org/10.1109/icde.2013.6544914.
4. Ala'i, Riaz. "Borehole data integration". In SEG Technical Program Expanded Abstracts 1998. Society of Exploration Geophysicists, 1998. http://dx.doi.org/10.1190/1.1820493.
5. Tan, Wang-Chiew. "Deep Data Integration". In SIGMOD/PODS '21: International Conference on Management of Data. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3448016.3460534.
6. Cudre-Mauroux, Philippe. "Big Data Integration". In 2017 14th International Conference on Telecommunications (ConTEL). IEEE, 2017. http://dx.doi.org/10.23919/contel.2017.8000011.
7. Saito, Toru, and Jinsong Ouyang. "Client-side data visualization". In Integration (IRI). IEEE, 2009. http://dx.doi.org/10.1109/iri.2009.5211550.
8. Sheng, Hao, Huajun Chen, Tong Yu, and Yelei Feng. "Linked data based semantic similarity and data mining". In Integration (2010 IRI). IEEE, 2010. http://dx.doi.org/10.1109/iri.2010.5558957.
9. Brodie, Michael L. "Data Integration at Scale: From Relational Data Integration to Information Ecosystems". In 2010 24th IEEE International Conference on Advanced Information Networking and Applications. IEEE, 2010. http://dx.doi.org/10.1109/aina.2010.184.
10. Mi, Tian, Robert Aseltine, and Sanguthevar Rajasekaran. "Data Integration on Multiple Data Sets". In 2008 IEEE International Conference on Bioinformatics and Biomedicine. IEEE, 2008. http://dx.doi.org/10.1109/bibm.2008.48.

Reports on the topic "Integration of peptidomic data"

1. Critchlow, T., B. Ludaescher, M. Vouk, and C. Pu. Distributed Data Integration Infrastructure. Office of Scientific and Technical Information (OSTI), February 2003. http://dx.doi.org/10.2172/15003342.
2. Critchlow, T. J., L. Liu, C. Pu, A. Gupta, B. Ludaescher, I. Altintas, M. Vouk, D. Bitzer, M. Singh, and D. Rosnick. Scientific Data Management Center Scientific Data Integration. Office of Scientific and Technical Information (OSTI), January 2003. http://dx.doi.org/10.2172/15003250.
3. Bray, O. H. Information integration for data fusion. Office of Scientific and Technical Information (OSTI), January 1997. http://dx.doi.org/10.2172/444047.
4. Miller, R. Allen. Castability Assessment and Data Integration. Office of Scientific and Technical Information (OSTI), March 2005. http://dx.doi.org/10.2172/859291.
5. Sturdy, James T. Military Data Link Integration Application. Fort Belvoir, VA: Defense Technical Information Center, June 2004. http://dx.doi.org/10.21236/ada465745.
6. Musick, R., T. Critchlow, M. Ganesh, Z. Fidelis, A. Zemla, and T. Slezak. Data Foundry: Data Warehousing and Integration for Scientific Data Management. Office of Scientific and Technical Information (OSTI), February 2000. http://dx.doi.org/10.2172/793555.
7. Swinhoe, Martyn Thomas. Model development and data uncertainty integration. Office of Scientific and Technical Information (OSTI), December 2015. http://dx.doi.org/10.2172/1227409.
8. Swinhoe, Martyn Thomas. Model development and data uncertainty integration. Office of Scientific and Technical Information (OSTI), December 2015. http://dx.doi.org/10.2172/1227933.
9. Williams, D. N., G. Palanisamy, and K. K. van Dam. Working Group on Virtual Data Integration. Office of Scientific and Technical Information (OSTI), February 2016. http://dx.doi.org/10.2172/1239196.
10. Williams, Dean N. Working Group on Virtual Data Integration. Office of Scientific and Technical Information (OSTI), March 2016. http://dx.doi.org/10.2172/1253674.