Selected scientific literature on the topic "Incremental Schema Discovery"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Incremental Schema Discovery".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Journal articles on the topic "Incremental Schema Discovery"

1

Fotiadis, D. I., A. Likas, and K. Blekas. "A Sequential Method for Discovering Probabilistic Motifs in Proteins". Methods of Information in Medicine 43, no. 01 (2004): 9–12. http://dx.doi.org/10.1055/s-0038-1633414.

Abstract:
Objectives: This paper proposes a greedy algorithm for learning a mixture of motifs model through likelihood maximization, in order to discover common substrings, known as motifs, from a given collection of related biosequences. Methods: The approach sequentially adds a new motif component to a mixture model by performing a combined scheme of global and local search for appropriately initializing the component parameters. A hierarchical clustering scheme is also applied initially, which leads to the identification of candidate motif models and speeds up the global searching procedure. Results: The performance of the proposed algorithm has been studied in both artificial and real biological datasets. In comparison with the well-known MEME approach, the algorithm is advantageous since it identifies motifs with significant conservation and produces larger protein fingerprints. Conclusion: The proposed greedy algorithm constitutes a promising approach for discovering multiple probabilistic motifs in biological sequences. By using an effective incremental mixture modeling strategy, our technique successfully overcomes the limitation of the MEME scheme, which erases motif occurrences each time a new motif is discovered.
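The sequential component-addition strategy described above can be illustrated with a toy one-dimensional Gaussian mixture (a hypothetical sketch, not the authors' protein-motif model): each new component is seeded at the data point least explained by the current mixture, then refined locally.

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def greedy_mixture(data, k):
    # Start with a single component at the global mean, then sequentially add
    # components: each new one is seeded at the point least explained by the
    # current mixture (the "global search" step), followed by a few
    # k-means-style refinement passes (the "local search" step).
    components = [sum(data) / len(data)]
    while len(components) < k:
        worst = min(data, key=lambda x: sum(normal_pdf(x, mu) for mu in components))
        components.append(worst)
        for _ in range(10):
            sums = [0.0] * len(components)
            counts = [0] * len(components)
            for x in data:
                j = min(range(len(components)), key=lambda j: abs(x - components[j]))
                sums[j] += x
                counts[j] += 1
            components = [sums[j] / counts[j] if counts[j] else components[j]
                          for j in range(len(components))]
    return sorted(components)
```

On two well-separated clumps, the second component is seeded in the clump the initial mean explains worst, and refinement pulls both means to the clump centers.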
2

Carpenter, Chris. "Study Examines Incremental Method vs. Split Conditions in Reserves Booking". Journal of Petroleum Technology 73, no. 12 (December 1, 2021): 35–36. http://dx.doi.org/10.2118/1221-0035-jpt.

Abstract:
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 206104, “Incremental Method vs. Split Conditions: Discussing the Similarities Between Reserves Evaluation and a Madoff Scheme,” by Dominique Salacz, SPE, and Farid Allam, SPE, ADNOC, and Imre Szilagyi, Eötvös Loránd University, et al., prepared for the 2021 SPE Annual Technical Conference and Exhibition, Dubai, 21–23 September. The paper has not been peer reviewed. After the oil-price crashes of 2014 and 2020, several merger and acquisition deals ended in litigation because operators canceled major projects or infill wells that were booked in “probable” reserves only. In the complete paper, the authors challenge the compatibility between the deterministic incremental reserve assessment method and the concept of split condition, which is not allowed for reserves booking under the Petroleum Resources Management System (PRMS). With some examples, the authors argue that the incremental method may mislead investors if used wrongly. The Two Approaches: PRMS is a simple, fit-for-purpose system. However, it still allows for reporting volumes under slightly different methods that may lead to very different results based on the definition adopted for the “project” by the evaluator. The text of the PRMS is quoted extensively in the complete paper in defining these methods. The authors write that the deterministic incremental method seems to be favored by evaluators educated under, or heavily exposed to, pre-2010 US Securities and Exchange Commission regulations, under which only proved reserves were disclosed. Under this approach, the term “proven” generally refers to recoverable volumes expected from the partition of the reservoir with a high degree of confidence related to the high level of geological knowledge.
The authors also write that the deterministic scenario method, on the other hand, evidently is favored by evaluators and engineers with less exposure to the pre-2010 guidelines. This method provides three estimates (low, best, and high) for every individual project. Comparing the Two Methods: The study-case accumulation is defined by a simple structural trap controlled by a top seal and a fault on the western side of the asset (Fig. 2 of the complete paper). Oil has been discovered by Well W1, which hit the permeable top at 9,695 ft near the structure’s crest. For mechanical reasons, drilling operations were stopped, and the lowest known hydrocarbon sets the oil-down-to (ODT) depth at 9,785 ft, providing a discovered oil column of 90 ft. Horizontal Well WH2 was drilled into the discovered reservoir partition. Today, Well WH2 is the only producing well of the accumulation. Well W1 will be reperforated during a workover to produce the attic oil of the reservoir. Because the well did not reach the aquifer, a major uncertainty for further development is assigned to the oil/water contact (OWC) position. At the current stage of resource maturation, up to 11 additional wells are considered.
3

Kluger, Avraham N., Limor Borut, Michal Lehmann, Tal Nir, Ella Azoulay, Ofri Einy, and Galit Gordoni. "A New Measure of the Rogerian Schema of the Good Listener". Sustainability 14, no. 19 (October 9, 2022): 12893. http://dx.doi.org/10.3390/su141912893.

Abstract:
Sustainable social relationships can be produced by good listening. Good listening may be exhibited by people who endorse Carl Rogers’s schema of good listening; a set of beliefs about what constitutes high-quality listening. To measure it, in Study One, we constructed 46 items. In Study Two, we administered them to 476 participants and discovered three factors: belief that listening can help the speaker, trusting the ability of the speaker to benefit from listening, and endorsing behaviors constituting good listening. These results suggested a reduced 27-item scale. In Study Three, we translated the items to Hebrew and probed some difficulties found in the last factor. In Study Four, we administered this scale in Hebrew to a sample of 50 romantic couples, replicated the factorial structure found in Study Two, and showed that it predicts the partner’s listening experience. In Study Five, we administered this scale to 190 romantic couples, replicated Study Four, and obtained evidence for test–retest reliability and construct validity. In Study Six, we obtained, from the same couples of Study Five, eight months after measuring their listening schema, measures of relationship sustainability—commitment, trust, and resilience. We found that the listening schema of one romantic partner predicts the relationship sustainability reported by the other romantic partner and showed incremental validity over the listener’s self-reported listening. This work contributes to understanding the essence of good listening, its measurement, and its implications for sustainable relationships.
4

Han, Zhiyan, and Jian Wang. "A Fault Diagnosis Method Based on Active Example Selection". Journal of Circuits, Systems and Computers 27, no. 01 (August 23, 2017): 1850013. http://dx.doi.org/10.1142/s0218126618500135.

Abstract:
The fault diagnosis in the real world is often complicated. It is due to the fact that not all relevant fault information is available directly. In many fault diagnosis situations, it is impossible or inconvenient to find all fault information before establishing a fault diagnosis model. To deal with this issue, a method named active example selection (AES) is proposed for the fault diagnosis. AES could actively discover unseen faults and choose useful samples to improve the fault detection accuracy. AES consists of three key components: (1) a fusion model of combining the advantage of the unsupervised and supervised fault diagnosis methods, where the unsupervised fault diagnosis methods could discover unseen faults and the supervised fault diagnosis methods could provide better fault detection accuracy on seen faults, (2) an active learning algorithm to help the supervised fault diagnosis methods actively discover unseen faults and choose useful samples to improve the fault detection accuracy, and (3) an incremental learning scheme to speed up the iterative training procedure for AES. The proposed method was evaluated on the benchmark Tennessee Eastman Process data. The proposed method performed better on both unseen and seen faults than the stand-alone unsupervised, supervised fault diagnosis methods, their joint and referenced support vector machines based on active learning.
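The fusion of unsupervised novelty detection with a supervised classifier can be sketched minimally (a hypothetical nearest-centroid stand-in, not the paper's actual model): samples far from every known fault centroid are flagged as unseen, and the model is updated incrementally rather than retrained from scratch.

```python
class ActiveFaultDiagnoser:
    """Toy sketch: a nearest-centroid supervised classifier plus an
    unsupervised distance test that flags, and then learns, unseen faults."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.centroids = {}  # label -> (running sum vector, sample count)

    def _dist(self, x, c):
        s, n = c
        return sum((xi - si / n) ** 2 for xi, si in zip(x, s)) ** 0.5

    def fit(self, x, label):
        # incremental update: running sums per class, no full retraining
        if label not in self.centroids:
            self.centroids[label] = ([0.0] * len(x), 0)
        s, n = self.centroids[label]
        self.centroids[label] = ([a + b for a, b in zip(s, x)], n + 1)

    def predict(self, x):
        if not self.centroids:
            return "unseen"
        label = min(self.centroids, key=lambda l: self._dist(x, self.centroids[l]))
        if self._dist(x, self.centroids[label]) > self.threshold:
            return "unseen"  # unsupervised part: too far from every known fault
        return label
```

A sample flagged as "unseen" can then be labeled and fed back through `fit`, mirroring the active-learning loop the abstract describes.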
5

Venkata Sailaja, N., L. Padmasree, and N. Mangathayaru. "Incremental learning for text categorization using rough set boundary based optimized Support Vector Neural Network". Data Technologies and Applications 54, no. 5 (July 3, 2020): 585–601. http://dx.doi.org/10.1108/dta-03-2020-0071.

Abstract:
Purpose: Text mining has been used for various knowledge discovery based applications, and thus a lot of research has been contributed towards it. The latest trending research in text mining adopts incremental learning of data, as it is economical while dealing with large volumes of information.
Design/methodology/approach: The primary intention of this research is to design and develop a technique for incremental text categorization using an optimized Support Vector Neural Network (SVNN). The proposed technique involves four major steps: pre-processing, feature extraction, feature selection, and classification. Initially, the data is pre-processed based on stop word removal and stemming. Then, feature extraction is done by extracting semantic word-based features and Term Frequency and Inverse Document Frequency (TF-IDF). From the extracted features, the important features are selected using the Bhattacharya distance measure, and these features are given as input to the proposed classifier. The proposed classifier performs incremental learning using SVNN, wherein the weights are bounded within a limit using rough set theory. Moreover, for the optimal selection of weights in SVNN, the Moth Search (MS) algorithm is used. Thus, the proposed classifier, named Rough set MS-SVNN, performs text categorization for the incremental data given as input.
Findings: For the experimentation, the 20 Newsgroups dataset and the Reuters dataset are used. Simulation results indicate that the proposed Rough set based MS-SVNN achieved 0.7743, 0.7774, and 0.7745 for precision, recall, and F-measure, respectively.
Originality/value: In this paper, an online incremental learner is developed for text categorization. The text categorization is done by developing the Rough set MS-SVNN classifier, which classifies the incoming texts based on the boundary condition evaluated by rough set theory and the optimal weights from MS. The proposed online text categorization scheme has the basic steps of pre-processing, feature extraction, feature selection, and classification. The pre-processing is carried out to identify the unique words from the dataset, and features such as semantic word-based features and TF-IDF are obtained from the keyword set. Feature selection is done by setting a minimum Bhattacharya distance measure, and the selected features are provided to the proposed Rough set MS-SVNN for classification.
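The incremental-learning idea behind this pipeline can be sketched with a much simpler online text classifier (a hypothetical naive-Bayes stand-in for the paper's Rough set MS-SVNN; the stop-word list is illustrative): each incoming document updates word counts in place, so no retraining over past data is needed.

```python
import math
import re
from collections import defaultdict

STOPWORDS = {"the", "a", "is", "of", "and", "to"}  # illustrative stop-word list

class IncrementalTextClassifier:
    """Toy online learner: per-class word counts are updated one document
    at a time (the incremental step); classification is naive Bayes."""

    def __init__(self):
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.class_totals = defaultdict(int)
        self.doc_counts = defaultdict(int)

    def _tokens(self, text):
        # pre-processing: lowercase, tokenize, drop stop words
        return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

    def learn(self, text, label):
        # one incremental update; no pass over previously seen documents
        for w in self._tokens(text):
            self.word_counts[label][w] += 1
            self.class_totals[label] += 1
        self.doc_counts[label] += 1

    def classify(self, text):
        def score(label):
            s = math.log(self.doc_counts[label])
            for w in self._tokens(text):  # Laplace-smoothed log-probabilities
                s += math.log((self.word_counts[label][w] + 1)
                              / (self.class_totals[label] + 1000))
            return s
        return max(self.doc_counts, key=score)
```

The paper's TF-IDF weighting and Bhattacharya-distance feature selection would slot in between tokenization and scoring; they are omitted here to keep the incremental skeleton visible.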
6

Gutierrez, Dubert, Vinodh Kumar, Robert G. Moore, and Sudarshan A. Mehta. "Air Injection and Waterflood Performance Comparison of Two Adjacent Units in the Buffalo Field". SPE Reservoir Evaluation & Engineering 11, no. 05 (October 1, 2008): 848–57. http://dx.doi.org/10.2118/104479-pa.

Abstract:
Summary Buffalo field covers a large area on the southwestern flank of the Williston basin, in the northwest corner of South Dakota. In 1987, 8,000 acres of the field were divided into two units to initiate improved-oil-recovery (IOR) operations with two different methods: air injection and waterflooding. After collecting 19 years of production history, a technical and economic comparison has been made between the two projects to determine the relative success of both units. The technical performance was evaluated in terms of incremental oil recovery, ultimate recovery, and incremental recovery per volumes of fluid injected. Ultimate primary recovery was estimated using conventional decline-curve analysis on individual wells. Ultimate recovery was estimated by extrapolation of the current performance of the units, assuming the same actual development scheme and operating strategies. The economic comparison was performed in terms of net present value, incremental rate of return, and payout time. A sensitivity analysis on some of the key drivers of the project economics--specifically, oil price, operating cost, and capital investment--was also performed. Throughout the years, the west Buffalo Red River unit (WBRRU) under high-pressure air injection (HPAI) has technically outperformed its "twin," west Buffalo "B" Red River unit (WBBRRU), which is under waterflooding. Nevertheless, the waterflood project has shown greater economic benefit, which results primarily from the low oil prices (less than USD 20/bbl) experienced during most of their operating lives. This case study shows that for an air-injection project to be successful not only technically but also economically, a sufficiently high oil price (i.e., greater than USD 25/bbl) is needed, mainly because of the high operating costs and capital investment. 
Introduction Producing from thin, low-permeability oil reservoirs can be a very challenging issue, particularly when an efficient driving mechanism is lacking originally. Rapid depressurization makes primary production a very inefficient process; and low capacities limit the injectivities for potential IOR operations. This challenge was faced by several operators in Buffalo field since its discovery in 1954. During the early 1960s, it was recognized from the fast reservoir depletion that primary-recovery efficiency in the field would be very low, and water-injectivity tests were discouraging for future waterflood operations. During the late 1970s Koch Exploration Company (Koch) conducted an air-injectivity test and developed a pilot under HPAI. Because the pilot results were promising, the Buffalo Red River unit (BRRU) was formed (Fassihi et al. 1987; Erickson et al. 1993; SDDENR 2005). On the basis of the success of the BRRU air-injection project, another HPAI project was started in the early 1980s in the southern part of the field and was called the south Buffalo Red River unit (SBRRU) (Erickson et al. 1993; SDDENR 2005). Late in 1987, the western area of the field was divided into two parts to carry out two different IOR projects: an HPAI project in the WBRRU and a waterflood in the WBBRRU located to the west of the HPAI project in WBRRU, both of which are the subject of this paper.
7

Maw, Aye Aye, Maxim Tyan, Tuan Anh Nguyen, and Jae-Woo Lee. "iADA*-RL: Anytime Graph-Based Path Planning with Deep Reinforcement Learning for an Autonomous UAV". Applied Sciences 11, no. 9 (April 27, 2021): 3948. http://dx.doi.org/10.3390/app11093948.

Abstract:
Path planning algorithms are of paramount importance in guidance and collision avoidance systems to provide trustworthiness and safety for operations of autonomous unmanned aerial vehicles (UAVs). Previous works showed different approaches mostly focusing on shortest path discovery without sufficient consideration of local planning and collision avoidance. In this paper, we propose a hybrid path planning algorithm that uses an anytime graph-based path planning algorithm for global planning and deep reinforcement learning for local planning, applied to a real-time mission planning system of an autonomous UAV. In particular, we aim to achieve a highly autonomous UAV mission planning system that is adaptive to real-world environments consisting of both static and moving obstacles, with collision avoidance capabilities. To achieve adaptive behavior for real-world problems, a simulator is required that can imitate real environments for learning. For this reason, the simulator must be sufficiently flexible to allow the UAV to learn about the environment and to adapt to real-world conditions. In our scheme, the UAV first learns about the environment via a simulator, and only then is it applied to the real world. The proposed system is divided into two main parts: optimal flight path generation and collision avoidance. A hybrid path planning approach is developed by combining a graph-based path planning algorithm with a learning-based algorithm for local planning to allow the UAV to avoid a collision in real time. The global path planning problem is solved in the first stage using a novel anytime incremental search algorithm called improved Anytime Dynamic A* (iADA*). A reinforcement learning method is used to carry out local planning between waypoints, to avoid any obstacles within the environment. The developed hybrid path planning system was investigated and validated in an AirSim environment.
A number of different simulations and experiments were performed using AirSim platform in order to demonstrate the effectiveness of the proposed system for an autonomous UAV. This study helps expand the existing research area in designing efficient and safe path planning algorithms for UAVs.
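The global-planning core that iADA* extends is ordinary A* search. A minimal grid version (a sketch of plain A* with a Manhattan heuristic, without the anytime or incremental extensions the paper adds) looks like:

```python
import heapq

def a_star(grid, start, goal):
    # Plain A* on a 4-connected grid: cells with value 1 are obstacles.
    # iADA* layers anytime refinement and incremental replanning on top
    # of this same expand-cheapest-node loop.
    def h(p):  # Manhattan-distance heuristic (admissible on a 4-grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path so far)
    seen = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                heapq.heappush(open_heap,
                               (g + 1 + h((nx, ny)), g + 1, (nx, ny), path + [(nx, ny)]))
    return None  # goal unreachable
```

On a 3x3 grid with one central obstacle, the search detours around the blocked cell and still returns a shortest (5-node) path.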
8

Leake, Bernard Elgey. "Mechanism of emplacement and crystallisation history of the northern margin and centre of the Galway Granite, western Ireland". Transactions of the Royal Society of Edinburgh: Earth Sciences 97, no. 1 (March 2006): 1–23. http://dx.doi.org/10.1017/s0263593300001371.

Abstract:
The main phase (∼400 Ma) emplacement of the central and northern part of the reversely zoned Galway Granite was incremental by progressive northward marginal dyke injection and stoping of the 470–467 Ma Connemara metagabbro-gneiss country rock. The space was provided by the synchronous ESE-opening, along the strike of the country rocks, of extensional fractures generated successively northward by a releasing bend in the sinistrally moving Skird Rocks Fault or an equivalent Galway Bay Fault. This fault is a prolongation of the Antrim–Galway (a splay off the Highland Boundary Fault) and Southern Upland Faults. The ESE-strike of the spalled-off rocks controlled the resultant ESE-elongated shape of the batholith. The magma pulses (∼5–30 m in thickness) were progressively more fractionated towards the northern margin so that the coarse Porphyritic (or Megacrystic) Granite (GP; technically granodiorite) in the centre was followed outwards by finer grained, drier and more siliceous granite, until the movements opening the fractures ceased and the magma became too viscous to intrude. ‘Out-of-sequence’ pulses of more basic diorite-granodiorite (including the Mingling–Mixing Zone) and late main phase, more acid, coarse but Aphyric Granite, into the centre of the batholith, complicated the outward fractionation scheme. The outward expansion, caused by the intrusions into the centre, caused a foliation and flattening of cognate xenoliths within the partly crystallised northern marginal granite and in the Mingling–Mixing Zone to the south. Late phase (∼380 Ma) central intrusions of the newly-discovered aphyric Shannapheasteen Finegrained Granite (technically granodiorite), the Knock, the Lurgan and the Costello Murvey Granites, all more siliceous and less dense than the GP, were emplaced by pushing up the already solid and jointed GP along marginal faults.
This concentration of lighter granites plus compression shown in thrusting, caused overall fault uplift of the Central Block of the Galway batholith so that the originally deepest part of the GP is exposed where there is the most late phase granite. Chemical analyses show the main and late phase magmas, including late dykes, were very similar, with repetition of the same fractionation except that the late phase magmas were drier and more quickly cooled, giving finer grained rocks.
9

Hagen, Christoph, Christian Weinert, Christoph Sendner, Alexandra Dmitrienko, and Thomas Schneider. "Contact Discovery in Mobile Messengers: Low-cost Attacks, Quantitative Analyses, and Efficient Mitigations". ACM Transactions on Privacy and Security, June 30, 2022. http://dx.doi.org/10.1145/3546191.

Abstract:
Contact discovery allows users of mobile messengers to conveniently connect with people in their address book. In this work, we demonstrate that severe privacy issues exist in currently deployed contact discovery methods and propose suitable mitigations. Our study of three popular messengers (WhatsApp, Signal, and Telegram) shows that large-scale crawling attacks are (still) possible. Using an accurate database of mobile phone number prefixes and very few resources, we queried 10% of US mobile phone numbers for WhatsApp and 100% for Signal. For Telegram we find that its API exposes a wide range of sensitive information, even about numbers not registered with the service. We present interesting (cross-messenger) usage statistics, which also reveal that very few users change the default privacy settings. Furthermore, we demonstrate that currently deployed hashing-based contact discovery protocols are severely broken by comparing three methods for efficient hash reversal. Most notably, we show that with the password cracking tool “JTR” we can iterate through the entire world-wide mobile phone number space in < 150s on a consumer-grade GPU. We also propose a significantly improved rainbow table construction for non-uniformly distributed input domains that is of independent interest. Regarding mitigations, we most notably propose two novel rate-limiting schemes: our incremental contact discovery for services without server-side contact storage strictly improves over Signal’s current approach while being compatible with private set intersection, whereas our differential scheme allows even stricter rate limits at the overhead for service providers to store a small constant-size state that does not reveal any contact information.
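The core weakness the paper exploits, that the phone-number space behind a given prefix is small enough to enumerate, can be demonstrated in a few lines (a hypothetical SHA-256 sketch; deployed messengers use different hash functions and protocols):

```python
import hashlib

def hash_number(number):
    # what a hashing-based contact discovery client would upload
    # instead of the raw phone number
    return hashlib.sha256(number.encode()).hexdigest()

def reverse_hash(target, prefix, digits):
    # Brute force: behind one mobile prefix there are only 10**digits
    # candidate numbers, so the "protected" hash can simply be enumerated
    # and inverted -- the attack the paper scales to the whole number space.
    for i in range(10 ** digits):
        candidate = prefix + str(i).zfill(digits)
        if hash_number(candidate) == target:
            return candidate
    return None
```

Enumerating 10**5 candidates takes well under a second on commodity hardware; the paper scales the same idea to the world-wide number space with GPU crackers and rainbow tables, which is why it argues hashing alone is no protection.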
10

Pratap, Aditya, Arpita Das, Shiv Kumar, and Sanjeev Gupta. "Current Perspectives on Introgression Breeding in Food Legumes". Frontiers in Plant Science 11 (January 21, 2021). http://dx.doi.org/10.3389/fpls.2020.589189.

Abstract:
Food legumes are important for defeating malnutrition and sustaining agri-food systems globally. Breeding efforts in legume crops have been largely confined to the exploitation of genetic variation available within the primary genepool, resulting in narrow genetic base. Introgression as a breeding scheme has been remarkably successful for an array of inheritance and molecular studies in food legumes. Crop wild relatives (CWRs), landraces, and exotic germplasm offer great potential for introgression of novel variation not only to widen the genetic base of the elite genepool for continuous incremental gains over breeding cycles but also to discover the cryptic genetic variation hitherto unexpressed. CWRs also harbor positive quantitative trait loci (QTLs) for improving agronomic traits. However, for transferring polygenic traits, “specialized population concept” has been advocated for transferring QTLs from CWR into elite backgrounds. Recently, introgression breeding has been successful in developing improved cultivars in chickpea (Cicer arietinum), pigeonpea (Cajanus cajan), peanut (Arachis hypogaea), lentil (Lens culinaris), mungbean (Vigna radiata), urdbean (Vigna mungo), and common bean (Phaseolus vulgaris). Successful examples indicated that the usable genetic variation could be exploited by unleashing new gene recombination and hidden variability even in late filial generations. In mungbean alone, distant hybridization has been deployed to develop seven improved commercial cultivars, whereas in urdbean, three such cultivars have been reported. Similarly, in chickpea, three superior cultivars have been developed from crosses between C. arietinum and Cicer reticulatum. Pigeonpea has benefited the most where different cytoplasmic male sterility genes have been transferred from CWRs, whereas a number of disease-resistant germplasm have also been developed in Phaseolus. 
As vertical gene transfer has resulted in most of the useful gene introgressions of practical importance in food legumes, the horizontal gene transfer through transgenic technology, somatic hybridization, and, more recently, intragenesis also offer promise. The gains through introgression breeding are significant and underline the need of bringing it in the purview of mainstream breeding while deploying tools and techniques to increase the recombination rate in wide crosses and reduce the linkage drag. The resurgence of interest in introgression breeding needs to be capitalized for development of commercial food legume cultivars.

Theses / dissertations on the topic "Incremental Schema Discovery"

1

Bouhamoum, Redouane. "Découverte automatique de schéma pour les données irrégulières et massives". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG081.

Abstract:
The web of data is a huge global data space, relying on semantic web technologies, where a high number of sources are published and interlinked. This data space provides an unprecedented amount of knowledge available for novel applications, but the meaningful usage of its sources is often difficult due to the lack of a schema describing the content of these data sources. Several automatic schema discovery approaches have been proposed, but while they provide good quality schemas, their use for massive data sources is a challenge as they rely on costly algorithms. In our work, we are interested in both the scalability and the incrementality of schema discovery approaches for RDF data sources where the schema is incomplete or missing. Furthermore, we extend schema discovery to take into account not only the explicit information provided by a data source, but also the implicit information which can be inferred. Our first contribution consists of a scalable schema discovery approach which extracts the classes describing the content of a massive RDF data source. We have proposed to extract a condensed representation of the source, which is used as an input to the schema discovery process in order to improve its performance. This representation is a set of patterns, each one representing a combination of properties describing some entities in the dataset. We have also proposed a scalable schema discovery approach relying on a distributed clustering algorithm that forms groups of structurally similar entities representing the classes of the schema. Our second contribution aims at maintaining the generated schema consistent with the data source it describes, as the latter may evolve over time. We propose an incremental schema discovery approach that modifies the set of extracted classes by propagating the changes occurring at the source, in order to keep the schema consistent with its evolutions. Finally, the goal of our third contribution is to extend schema discovery to consider the whole semantics expressed by a data source, which is represented not only by the explicitly declared triples, but also by the ones which can be inferred through reasoning. We propose an extension allowing to take into account all the properties of an entity during schema discovery, represented either by explicit or by implicit triples, which improves the quality of the generated schema.
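The pipeline the thesis describes, condensing entities into property-set patterns, clustering structurally similar patterns into classes, then propagating source changes incrementally, can be sketched at toy scale (a single-machine, greedy stand-in for the thesis' distributed algorithm; the Jaccard measure and the 0.5 threshold are illustrative choices):

```python
from collections import defaultdict

def extract_patterns(triples):
    # condensed representation: one property-set "pattern" per entity
    props = defaultdict(set)
    for subject, predicate, _obj in triples:
        props[subject].add(predicate)
    return props

def jaccard(a, b):
    return len(a & b) / len(a | b)

def discover_classes(patterns, threshold=0.5):
    # greedy clustering stand-in for the distributed algorithm:
    # each cluster of similar patterns becomes one candidate class
    classes = []  # list of [representative property set, member entities]
    for entity, props in patterns.items():
        for cls in classes:
            if jaccard(props, cls[0]) >= threshold:
                cls[0] |= props
                cls[1].append(entity)
                break
        else:
            classes.append([set(props), [entity]])
    return classes

def update_incrementally(classes, patterns, new_triples, threshold=0.5):
    # incremental step: only re-place the entities touched by the new
    # triples, instead of re-running discovery over the whole source
    touched = extract_patterns(new_triples)
    for entity, props in touched.items():
        patterns[entity] = patterns.get(entity, set()) | props
        for cls in classes:
            if entity in cls[1]:
                cls[1].remove(entity)
        for cls in classes:
            if jaccard(patterns[entity], cls[0]) >= threshold:
                cls[0] |= patterns[entity]
                cls[1].append(entity)
                break
        else:
            classes.append([set(patterns[entity]), [entity]])
    return classes
```

For example, two "person" entities sharing `name`/`age` and one "book" entity with `title`/`author` yield two classes, and a later `b2` entity described by `title`/`author` joins the book class without reclustering the person entities.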

Book chapters on the topic "Incremental Schema Discovery"

1

Fan, Hao. "Using Schema Transformation Pathways for Incremental View Maintenance". In Data Warehousing and Knowledge Discovery, 126–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11546849_13.

2

Zhou, Aoying, Jinwen, Zhou Shuigeng, and Zenping Tian. "Incremental Mining of Schema for Semistructured Data". In Methodologies for Knowledge Discovery and Data Mining, 159–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48912-6_22.

3

Bouhamoum, Redouane, Zoubida Kedad e Stéphane Lopes. "Incremental Schema Discovery at Scale for RDF Data". In The Semantic Web, 195–211. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77385-4_12.


Conference papers on the topic "Incremental Schema Discovery"

1

Chih-Ho Chen, Yung Ting e Jia-Sheng Heh. "Low Overhead Incremental Checkpointing and Rollback Recovery Scheme on Windows Operating System". In 2010 3rd International Conference on Knowledge Discovery and Data Mining (WKDD 2010). IEEE, 2010. http://dx.doi.org/10.1109/wkdd.2010.135.

2

Azzarone, Eleonora, Roberto Rossi, Giovanni Cirillo, and Giacomo Micheletti. "An Enhanced Integrated Asset Model for Offshore Field Development Strategy". In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211476-ms.

Abstract:
The derivation of an optimised field development strategy is one of the most important tasks in the oil and gas industry. Modern workflows apply complex integrated asset models (IAM) where reservoir and production network simulators are coupled to provide a single model of the entire asset. In this paper we present the solutions adopted to build an IAM for an asset in the current producing scenario, but also modelling future incremental surface facilities’ upgrades, tasks found to be beyond the capabilities of current commercial IAM products. The asset consists of a number of oil and gas fields producing into a common network. A variable gas offtake through existing and future facilities is expected in the field development. Commercial IAM solutions cannot model this or update the constraint on the total production based on the changing offtake volumes. These limitations have been solved by a set of custom scripts embedded in the simulators. The scripts manage the inline separator efficiency both in the current scenario and also when a volume of the produced gas will be sent to the LNG plant planned in the asset future development. The two configurations, i.e., the current producing scenario and the future development one, are based on the same set of reservoir models with different network schemes and facilities’ constraints. The integration of the reservoir and the network enables the determination of the impact of the facilities’ upgrades, namely the installation of a process unit close to the oil production platform and the routing of a fraction of the gas to feed an LNG plant under construction. The total production is increased because of the reduction of the backpressure due to the new location of the process unit and the reduction of the gas flowing to the current treatment terminal. The average gain both in the oil and gas production rates is around 20% for 5 years.
Moreover, due to the unique flexibility of the IAM solution, additional projects (such as further developments to the existing producing reservoirs or tie-in of other fields already discovered) can be easily added to the model to simulate the future asset production. For the particular asset being studied, a solution to the challenging reservoir-network coupling issue was developed employing a set of simulation scripts (including Python ones) that enable the modelling of the network configuration in a rigorous way. This approach overcame the current commercial application limitations and expanded the functionalities of each simulator included in the IAM.
