Academic literature on the topic "Data / knowledge partitioning and distribution"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Browse the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Data / knowledge partitioning and distribution".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Data / knowledge partitioning and distribution"

1

Rota, Jadranka, Tobias Malm, Nicolas Chazot, Carlos Peña, and Niklas Wahlberg. "A simple method for data partitioning based on relative evolutionary rates". PeerJ 6 (August 28, 2018): e5498. http://dx.doi.org/10.7717/peerj.5498.

Full text
Abstract
Background: Multiple studies have demonstrated that partitioning of molecular datasets is important in model-based phylogenetic analyses. Commonly, partitioning is done a priori based on some known properties of sequence evolution, e.g. differences in rate of evolution among codon positions of a protein-coding gene. Here we propose a new method for data partitioning based on relative evolutionary rates of the sites in the alignment of the dataset being analysed. The rates are inferred using the previously published Tree Independent Generation of Evolutionary Rates (TIGER), and the partitioning is conducted using our novel python script RatePartitions. We conducted simulations to assess the performance of our new method, and we applied it to eight published multi-locus phylogenetic datasets, representing different taxonomic ranks within the insect order Lepidoptera (butterflies and moths) and one phylogenomic dataset, which included ultra-conserved elements as well as introns. Methods: We used TIGER-rates to generate relative evolutionary rates for all sites in the alignments. Then, using RatePartitions, we partitioned the data into partitions based on their relative evolutionary rate. RatePartitions applies a simple formula that ensures a distribution of sites into partitions following the distribution of rates of the characters from the full dataset. This ensures that the invariable sites are placed in a partition with slowly evolving sites, avoiding the pitfalls of previously used methods, such as k-means. Different partitioning strategies were evaluated using BIC scores as calculated by PartitionFinder. Results: Simulations did not highlight any misbehaviour of our partitioning approach, even under difficult parameter conditions or missing data. In all eight phylogenetic datasets, partitioning using TIGER-rates and RatePartitions was significantly better as measured by the BIC scores than other partitioning strategies, such as the commonly used partitioning by gene and codon position. We compared the resulting topologies and node support for these eight datasets as well as for the phylogenomic dataset. Discussion: We developed a new method of partitioning phylogenetic datasets without using any prior knowledge (e.g. DNA sequence evolution). This method is entirely based on the properties of the data being analysed and can be applied to DNA sequences (protein-coding, introns, ultra-conserved elements), protein sequences, as well as morphological characters. A likely explanation for why our method performs better than other tested partitioning strategies is that it accounts for the heterogeneity in the data to a much greater extent than when data are simply subdivided based on prior knowledge.
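To make the rate-based idea concrete, here is a minimal Python sketch that groups alignment sites into partitions by their relative rates. The equal-count binning is a simplified stand-in for the published RatePartitions formula, and the rate values are invented for illustration.

```python
import numpy as np

def rate_based_partitions(rates, n_partitions):
    """Group alignment sites into partitions by relative evolutionary rate.

    Quantile-style binning stands in for the RatePartitions formula;
    invariant (rate ~ 0) sites land in the slowest-evolving bin.
    """
    order = np.argsort(np.asarray(rates, dtype=float))  # slowest sites first
    bins = np.array_split(order, n_partitions)          # equal-count rate classes
    return [sorted(b.tolist()) for b in bins]

# Toy example: 12 sites with TIGER-like relative rates in [0, 1].
rates = [0.0, 0.02, 0.05, 0.1, 0.15, 0.2, 0.4, 0.45, 0.6, 0.7, 0.9, 1.0]
for i, part in enumerate(rate_based_partitions(rates, 3), start=1):
    print(f"partition {i}: sites {part}")
```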
2

Shaikh, M. Bilal, M. Abdul Rehman, and Attaullah Sahito. "Optimizing Distributed Machine Learning for Large Scale EEG Data Set". Sukkur IBA Journal of Computing and Mathematical Sciences 1, no. 1 (June 30, 2017): 114. http://dx.doi.org/10.30537/sjcms.v1i1.14.

Full text
Abstract
Distributed Machine Learning (DML) has gained more importance than ever in this era of Big Data. There are many challenges in scaling machine learning techniques on distributed platforms. When it comes to scalability, improving processor technology for high-level computation is at its limit, whereas increasing the number of machine nodes and distributing data along with computation looks like a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated random distribution of datasets, which misses the power of user-defined intelligent data partitioning based on domain knowledge. We conducted an empirical study using an EEG dataset collected through the P300 Speller component of an ERP (Event Related Potential), which is widely used in BCI problems; it helps in translating the intention of a subject while performing a cognitive task. EEG data contains noise due to waves generated by other activities in the brain, which contaminates the true P300 Speller signal. Machine learning techniques could help in detecting errors made by the P300 Speller. We solve this classification problem by partitioning the data into different chunks and preparing distributed models using an Elastic CV classifier. To present a case of optimizing distributed machine learning, we propose an intelligent user-defined data partitioning approach that can affect the average accuracy of distributed machine learners. Our results show a better average AUC compared to the average AUC obtained after applying random data partitioning, which gives the user no control over data partitioning. Domain-specific intelligent partitioning by the user improves the average accuracy of the distributed learner. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random/uncontrolled data distribution records 0.63 AUC.
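The contrast the abstract draws between random and domain-informed partitioning can be sketched as follows. This is an illustrative Python mock-up on synthetic data: LogisticRegression stands in for the paper's Elastic CV classifier, and session labels play the role of the domain knowledge.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def mean_auc_over_chunks(X, y, chunk_ids):
    """Train one learner per data chunk and average the held-out AUC."""
    aucs = []
    for c in np.unique(chunk_ids):
        mask = chunk_ids == c
        clf = LogisticRegression(max_iter=1000).fit(X[~mask], y[~mask])
        aucs.append(roc_auc_score(y[mask], clf.predict_proba(X[mask])[:, 1]))
    return float(np.mean(aucs))

# Synthetic stand-in for EEG features; one informative dimension plus noise.
X = rng.normal(size=(600, 20))
y = (X[:, 0] + rng.normal(scale=2.0, size=600) > 0).astype(int)
sessions = np.repeat(np.arange(6), 100)        # domain-informed chunks
random_chunks = rng.permutation(sessions)      # uncontrolled partitioning
print("session-based AUC:", mean_auc_over_chunks(X, y, sessions))
print("random AUC:       ", mean_auc_over_chunks(X, y, random_chunks))
```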
3

Liu, Richen, Liming Shen, Xueyi Chen, Genlin Ji, Bin Zhao, Chao Tan, and Mingjun Su. "Sketch-Based Slice Interpretative Visualization for Stratigraphic Data". Journal of Imaging Science and Technology 63, no. 6 (November 1, 2019): 60505–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2019.63.6.060505.

Full text
Abstract
In this article, the authors propose a stratigraphic slice interpretative visualization system, namely slice analyzer. It enables the domain experts, i.e., geologists and oil/gas exploration experts, to interactively interpret the slices with domain knowledge, which helps them get a better understanding of stratigraphic structures and the distribution of the geological materials, e.g., underground flow path (UFP), river delta, floodplain, slump fan, etc. In addition to some domain-specific slice edit manipulations, a sketch-based sub-region partitioning approach is further presented to help users divide the slice into individual sub-regions with homologous characteristics according to their domain knowledge. Consequently, the geological materials they are interested in can be extracted automatically and visualized by the proposed geological symbol definition algorithm. Feedback from domain experts suggests that the proposed system is capable of interpreting the stratigraphic slice, compared with their currently used tools.
4

Zhu, Zichen, Xiao Hu, and Manos Athanassoulis. "NOCAP: Near-Optimal Correlation-Aware Partitioning Joins". Proceedings of the ACM on Management of Data 1, no. 4 (December 8, 2023): 1–27. http://dx.doi.org/10.1145/3626739.

Full text
Abstract
Storage-based joins are still commonly used today because the memory budget does not always scale with the data size. One of the many join algorithms developed that has been widely deployed and proven to be efficient is the Hybrid Hash Join (HHJ), which is designed to exploit any available memory to maximize the data that is joined directly in memory. However, HHJ cannot fully exploit detailed knowledge of the join attribute correlation distribution. In this paper, we show that given a correlation skew in the join attributes, HHJ partitions data in a suboptimal way. To do that, we derive the optimal partitioning using a new cost-based analysis of partitioning-based joins that is tailored for primary key - foreign key (PK-FK) joins, one of the most common join types. This optimal partitioning strategy has a high memory cost, thus, we further derive an approximate algorithm that has tunable memory cost and leads to near-optimal results. Our algorithm, termed NOCAP (Near-Optimal Correlation-Aware Partitioning) join, outperforms the state of the art for skewed correlations by up to 30%, and the textbook Grace Hash Join by up to 4×. Further, for a limited memory budget, NOCAP outperforms HHJ by up to 10%, even for uniform correlation. Overall, NOCAP dominates state-of-the-art algorithms and mimics the best algorithm for a memory budget varying from below √||relation|| to more than ||relation||.
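The partition-then-join skeleton that HHJ and NOCAP build on can be sketched in a few lines. The Python below shows plain Grace-style hash partitioning only; NOCAP's correlation-aware splitting and HHJ's in-memory hybrid portion are not reproduced.

```python
from collections import defaultdict

def grace_partition(rows, key, n_partitions):
    """Split rows into hash partitions on the join key (Grace-style)."""
    parts = defaultdict(list)
    for row in rows:
        parts[hash(row[key]) % n_partitions].append(row)
    return parts

def partitioned_join(r_rows, s_rows, key, n_partitions=4):
    """Join matching partitions pairwise with an in-memory hash table."""
    r_parts = grace_partition(r_rows, key, n_partitions)
    s_parts = grace_partition(s_rows, key, n_partitions)
    out = []
    for p in range(n_partitions):
        table = defaultdict(list)
        for row in r_parts.get(p, []):         # build side (PK relation)
            table[row[key]].append(row)
        for row in s_parts.get(p, []):         # probe side (FK relation)
            for match in table[row[key]]:
                out.append({**match, **row})
    return out

r = [{"k": i, "a": i * 10} for i in range(8)]
s = [{"k": i % 8, "b": i} for i in range(20)]
print(len(partitioned_join(r, s, "k")))        # 20 matches for a PK-FK join
```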
5

Sineglazov, Victor, Olena Chumachenko, and Eduard Heilyk. "Semi-controlled Learning in Information Processing Problems". Electronics and Control Systems 4, no. 70 (January 4, 2022): 37–43. http://dx.doi.org/10.18372/1990-5548.70.16754.

Full text
Abstract
The article substantiates the need for further research into known methods and the development of new methods of machine learning – semi-supervised learning. It is shown that knowledge of the probability distribution density of the initial data obtained using unlabeled data should carry information useful for deriving the conditional probability distribution density of labels and input data. If this is not the case, semi-supervised learning will not provide any improvement over supervised learning. It may even happen that the use of unlabeled data reduces the accuracy of the prediction. For semi-supervised learning to work, certain assumptions must hold, namely: the semi-supervised smoothness assumption, the clustering assumption (low-density partitioning), and the manifold assumption. A new hybrid semi-supervised learning algorithm using the label propagation method has been developed. An example of using the proposed algorithm is given.
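For a runnable example of the label propagation method the abstract mentions, scikit-learn's implementation can be used on data satisfying the cluster and manifold assumptions; the dataset and parameter values below are illustrative, not those of the paper.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelPropagation

# Two-moon data satisfies the cluster/manifold assumptions listed above;
# most labels are hidden (-1) to mimic the semi-supervised setting.
X, y_true = make_moons(n_samples=200, noise=0.08, random_state=0)
y = np.full_like(y_true, -1)
rng = np.random.default_rng(0)
labeled = rng.choice(len(y), size=10, replace=False)
y[labeled] = y_true[labeled]                   # keep only 10 known labels

model = LabelPropagation(kernel="rbf", gamma=20).fit(X, y)
accuracy = (model.transduction_ == y_true).mean()
print(f"accuracy with 10 labels propagated to 200 points: {accuracy:.2f}")
```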
6

Sirbiladze, Gia, Bidzina Matsaberidze, Bezhan Ghvaberidze, Bidzina Midodashvili, and David Mikadze. "Fuzzy TOPSIS based selection index in the planning of emergency service facilities locations and goods transportation". Journal of Intelligent & Fuzzy Systems 41, no. 1 (August 11, 2021): 1949–62. http://dx.doi.org/10.3233/jifs-210636.

Full text
Abstract
The attributes influencing the decision-making process in planning the transportation of goods from selected facility locations in disaster zones are considered. Experts evaluate each candidate for humanitarian aid distribution centers (HADCs) (service centers) against each uncertainty factor in q-rung orthopair fuzzy sets (q-ROFS). For the representation of experts' knowledge in the input data for planning emergency service facility locations, a q-rung orthopair fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) approach is developed. Based on the offered fuzzy TOPSIS aggregation, a new objective function is introduced which maximizes a candidate HADC's selection index and reduces HADC opening risks in disaster zones. The HADC location and goods transportation problem is reduced to the bi-criteria problem of partitioning the set of customers by the set of service centers: 1) minimization of the total costs of opened HADCs and goods transportation; 2) maximization of the HADC selection index. Partitioning-type transportation constraints are also constructed. Our approach for solving the constructed bi-criteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a new matrix whose columns allow finding all possible partitionings of the demand points with the opened HADCs. In the second phase, using the generated matrix and our exact algorithm, we find the partitionings – allocations of the demand points to the HADCs – corresponding to the Pareto-optimal solutions. The constructed model is illustrated with a numerical example.
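A minimal crisp TOPSIS scorer illustrates the closeness index that the selection criterion builds on; the paper's q-rung orthopair fuzzy machinery is not reproduced here, and the candidate matrix below is invented.

```python
import numpy as np

def topsis_scores(matrix, weights, benefit):
    """Classic crisp TOPSIS closeness index for candidate ranking.

    Rows are candidate facility locations, columns are evaluation
    attributes; `benefit` flags whether higher is better per column.
    """
    m = np.asarray(matrix, dtype=float)
    m = m / np.linalg.norm(m, axis=0)          # vector-normalize columns
    v = m * np.asarray(weights, dtype=float)   # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # higher = closer to ideal

# Three candidate centers scored on cost (lower better) and coverage.
scores = topsis_scores(
    [[120, 0.8], [100, 0.6], [150, 0.9]],
    weights=[0.5, 0.5],
    benefit=[False, True],
)
print(scores.argsort()[::-1])                  # ranking, best first
```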
7

Smith, Bruce R., Christophe M. Herbinger, and Heather R. Merry. "Accurate Partition of Individuals Into Full-Sib Families From Genetic Data Without Parental Information". Genetics 158, no. 3 (July 1, 2001): 1329–38. http://dx.doi.org/10.1093/genetics/158.3.1329.

Full text
Abstract
Two Markov chain Monte Carlo algorithms are proposed that allow the partitioning of individuals into full-sib groups using single-locus genetic marker data when no parental information is available. These algorithms present a method of moving through the sibship configuration space and locating the configuration that maximizes an overall score on the basis of pairwise likelihood ratios of being full-sib or unrelated or maximizes the full joint likelihood of the proposed family structure. Using these methods, up to 757 out of 759 Atlantic salmon were correctly classified into 12 full-sib families of unequal size using four microsatellite markers. Large-scale simulations were performed to assess the sensitivity of the procedures to the number of loci and number of alleles per locus, the allelic distribution type, the distribution of families, and the independent knowledge of population allelic frequencies. The number of loci and the number of alleles per locus had the most impact on accuracy. Very good accuracy can be obtained with as few as four loci when they have at least eight alleles. Accuracy decreases when using allelic frequencies estimated in small target samples with skewed family distributions with the pairwise likelihood approach. We present an iterative approach that partly corrects that problem. The full likelihood approach is less sensitive to the precision of allelic frequencies estimates but did not perform as well with the large data set or when little information was available (e.g., four loci with four alleles).
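The partition-search idea can be sketched with a greedy stand-in for the paper's MCMC moves: score a sibship configuration by summing pairwise full-sib/unrelated log-likelihood ratios within families, and keep single-individual reassignments that improve the score. The pairwise scores below are toy values, not genetic likelihoods.

```python
import random
from collections import defaultdict

def partition_score(labels, pair_llr):
    """Sum pairwise full-sib/unrelated log-likelihood ratios within families."""
    groups = defaultdict(list)
    for ind, fam in enumerate(labels):
        groups[fam].append(ind)
    total = 0.0
    for members in groups.values():
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                total += pair_llr[(members[a], members[b])]
    return total

def hill_climb(n, pair_llr, steps=2000, seed=0):
    """Greedy walk through sibship space: try moving one individual at a time."""
    rng = random.Random(seed)
    labels = list(range(n))                    # start from singletons
    best = partition_score(labels, pair_llr)
    for _ in range(steps):
        ind = rng.randrange(n)
        old = labels[ind]
        labels[ind] = rng.randrange(n + 1)     # label n opens a new family
        score = partition_score(labels, pair_llr)
        if score >= best:
            best = score
        else:
            labels[ind] = old                  # undo a worsening move
    return labels, best

# Toy scores: positive LLR within true families {0,1,2} and {3,4}.
true = [0, 0, 0, 1, 1]
pair_llr = {(i, j): (1.0 if true[i] == true[j] else -1.0)
            for i in range(5) for j in range(i + 1, 5)}
print(hill_climb(5, pair_llr))
```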
8

Grard, Aline, and Jean-François Deliège. "Characterizing Trace Metal Contamination and Partitioning in the Rivers and Sediments of Western Europe Watersheds". Hydrology 10, no. 2 (February 16, 2023): 51. http://dx.doi.org/10.3390/hydrology10020051.

Full text
Abstract
Adsorption and desorption processes occurring on suspended and bed sediments were studied in two datasets from western Europe watersheds (Meuse and Mosel). Copper and zinc dissolved and total concentrations, total suspended sediment concentrations, mass concentrations, and grain sizes were analyzed. Four classes of mineral particle size were determined. Grain size distribution had to be considered in order to assess the trace metal particulate phase in the water column. The partitioning coefficients of trace metals between the dissolved and particulate phases were calculated. The objective of this study was to improve the description of the processes involved in the transportation and fate of trace metals in river aquatic ecosystems. Useful data for future modelling, management and contamination assessment of river sediments were provided. As confirmed by a literature review, the copper and zinc partitioning coefficients calculated in this study are reliable. The knowledge related to copper and zinc (e.g., partitioning coefficients) will allow us to begin investigations into environmental modelling. This modelling will allow us to consider new sorption processes and better describe trace metal and sediment fates as well as pressure–impact relationships.
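The partitioning coefficient itself is a short computation. The sketch below uses the common definition Kd = (Cp / TSS) / Cd; the units and example values are illustrative, not taken from the paper.

```python
def partition_coefficient(total_ug_per_l, dissolved_ug_per_l, tss_mg_per_l):
    """Kd (L/kg): particulate-bound metal per kg of suspended sediment,
    divided by the dissolved concentration."""
    particulate_ug_per_l = total_ug_per_l - dissolved_ug_per_l
    tss_kg_per_l = tss_mg_per_l * 1e-6         # mg/L -> kg/L
    return (particulate_ug_per_l / tss_kg_per_l) / dissolved_ug_per_l

# Example: 5 ug/L total Cu, 3 ug/L dissolved, 20 mg/L suspended sediment.
print(f"Kd = {partition_coefficient(5.0, 3.0, 20.0):.2e} L/kg")
```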
9

McDonald, H. Gregory. "Yukon to the Yucatan: Habitat partitioning in North American Late Pleistocene ground sloths (Xenarthra, Pilosa)". Journal of Palaeosciences 70, no. 1–2 (September 10, 2021): 237–52. http://dx.doi.org/10.54991/jop.2021.17.

Full text
Abstract
The late Pleistocene mammalian fauna of North America included seven genera of ground sloth, representing four families. This cohort of megaherbivores had an extensive geographic range in North America, from the Yukon in Canada to the Yucatan Peninsula in Mexico, and inhabited a variety of biomes. Within this latitudinal range there are taxa with a distribution limited to temperate latitudes while others have a distribution restricted to tropical latitudes. Some taxa are better documented than others and more is known about their palaeoecology and habitat preferences, while our knowledge of the palaeoecology of more recently discovered taxa remains limited. More information is needed in order to better understand what aspects of their palaeoecology allowed their dispersal from South America and their long-term success in North America, and ultimately the underlying causes of their extinction at the end of the Pleistocene. A summary overview of the differences in the palaeoecology of the late Pleistocene sloths in North America and their preferred habitats is presented, based on different data sources.
10

Dalton, Lori A., and Mohammadmahdi R. Yousefi. "Data Requirements for Model-Based Cancer Prognosis Prediction". Cancer Informatics 14s5 (January 2015): CIN.S30801. http://dx.doi.org/10.4137/cin.s30801.

Full text
Abstract
Cancer prognosis prediction is typically carried out without integrating scientific knowledge available on genomic pathways, the effect of drugs on cell dynamics, or modeling mutations in the population. Recent work addresses some of these problems by formulating an uncertainty class of Boolean regulatory models for abnormal gene regulation, assigning prognosis scores to each network based on intervention outcomes, and partitioning networks in the uncertainty class into prognosis classes based on these scores. For a new patient, the probability distribution of the prognosis class was evaluated using optimal Bayesian classification, given patient data. It was assumed that (1) disease is the result of several mutations of a known healthy network and that these mutations and their probability distribution in the population are known and (2) only a single snapshot of the patient's gene activity profile is observed. It was shown that, even in ideal settings where cancer in the population and the effect of a drug are fully modeled, a single static measurement is typically not sufficient. Here, we study what measurements are sufficient to predict prognosis. In particular, we relax assumption (1) by addressing how population data may be used to estimate network probabilities, and extend assumption (2) to include static and time-series measurements of both population and patient data. Furthermore, we extend the prediction of prognosis classes to optimal Bayesian regression of prognosis metrics. Even when time-series data is preferable to infer a stochastic dynamical network, we show that static data can be superior for prognosis prediction when constrained to small samples. Furthermore, although population data is helpful, performance is not sensitive to inaccuracies in the estimated network probabilities.
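The classification step the abstract describes reduces to a small Bayesian computation: the posterior of a prognosis class sums the posterior mass of the networks belonging to it. The likelihood functions below are hypothetical placeholders for the paper's Boolean regulatory models.

```python
import numpy as np

def prognosis_class_posterior(x, networks, net_prior, class_of, n_classes):
    """Posterior over prognosis classes for one patient snapshot:
    P(class | x) is proportional to the sum over networks in the class
    of P(x | net) * P(net)."""
    posterior = np.zeros(n_classes)
    for net, prior in zip(networks, net_prior):
        posterior[class_of[net]] += net(x) * prior
    return posterior / posterior.sum()

# Two toy "networks" scoring a 3-gene activity profile, one per class.
net_a = lambda x: float(np.all(x == [1, 0, 1]))   # favorable dynamics
net_b = lambda x: 0.25                            # unfavorable, diffuse
print(prognosis_class_posterior(np.array([1, 0, 1]),
                                [net_a, net_b], [0.5, 0.5],
                                {net_a: 0, net_b: 1}, 2))
```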

Theses on the topic "Data / knowledge partitioning and distribution"

1

De Oliveira, Joffrey. "Gestion de graphes de connaissances dans l'informatique en périphérie : gestion de flux, autonomie et adaptabilité". Electronic thesis or dissertation, Université Gustave Eiffel, 2023. http://www.theses.fr/2023UEFL2069.

Full text
Abstract
The research work carried out as part of this PhD thesis lies at the interface between the Semantic Web, databases and edge computing. Indeed, our objective is to design, develop and evaluate a database management system (DBMS) based on the W3C Resource Description Framework (RDF) data model, which must be adapted to the terminals found in Edge computing. The possible applications of such a system are numerous and cover a wide range of sectors such as industry, finance and medicine, to name but a few. As proof of this, the subject of this thesis was defined with the team from the Computer Science and Artificial Intelligence Laboratory (CSAI) at ENGIE Lab CRIGEN. The latter is ENGIE's research and development centre dedicated to green gases (hydrogen, biogas and liquefied gases), new uses of energy in cities and buildings, industry and emerging technologies (digital and artificial intelligence, drones and robots, nanotechnologies and sensors). CSAI financed this thesis as part of a CIFRE-type collaboration. The functionalities of a system satisfying these characteristics must enable anomalies and exceptional situations to be detected in a relevant and effective way from measurements taken by sensors and/or actuators. In an industrial context, this could mean detecting excessively high measurements, for example of pressure or flow rate in a gas distribution network, which could potentially compromise infrastructure or even the safety of individuals. This detection must be carried out using a user-friendly approach to enable as many users as possible, including non-programmers, to describe risk situations. The approach must therefore be declarative, not procedural, and must be based on a query language, such as SPARQL. We believe that Semantic Web technologies can make a major contribution in this context. Indeed, the ability to infer implicit consequences from explicit data and knowledge is a means of creating new services that are distinguished by their ability to adjust to the circumstances encountered and to make autonomous decisions. This can be achieved by generating new queries in certain alarming situations, or by defining a minimal sub-graph of knowledge that an instance of our DBMS needs in order to respond to all of its queries. The design of such a DBMS must also take into account the inherent constraints of Edge computing, i.e. the limits in terms of computing capacity, storage, bandwidth and sometimes energy (when the terminal is powered by a solar panel or a battery). Architectural and technological choices must therefore be made to meet these limitations. With regard to the representation of data and knowledge, our design choice fell on succinct data structures (SDS), which offer, among other advantages, the fact that they are very compact and do not require decompression during querying. Similarly, it was necessary to integrate data flow management within our DBMS, for example with support for windowing in continuous SPARQL queries, and for the various services supported by our system. Finally, as anomaly detection is an area where knowledge can evolve, we have integrated support for modifications to the knowledge graphs stored on the client instances of our DBMS. This support translates into an extension of certain SDS structures used in our prototype.
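The declarative detection style described above can be illustrated with a plain (non-continuous) SPARQL query in Python via rdflib; the sensor vocabulary is hypothetical, and the thesis's windowed continuous-query extension is not modeled.

```python
from rdflib import Graph, Literal, Namespace

# Hypothetical gas-network vocabulary with three sensor readings.
EX = Namespace("http://example.org/gas#")
g = Graph()
for sensor, pressure in [("s1", 4.2), ("s2", 9.8), ("s3", 5.1)]:
    g.add((EX[sensor], EX.pressureBar, Literal(pressure)))

# Declarative anomaly rule: report any sensor above a safety threshold.
query = """
PREFIX ex: <http://example.org/gas#>
SELECT ?sensor ?p WHERE {
    ?sensor ex:pressureBar ?p .
    FILTER (?p > 8.0)
}
"""
for sensor, p in g.query(query):
    print(f"ALERT: {sensor} reports {p} bar")
```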
2

He, Aijing. "Unsupervised Data Mining by Recursive Partitioning". University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1026406153.

Full text
3

Eberhagen, Niclas. "An investigation of emerging knowledge distribution means and their characterization". Licentiate thesis, Department of Computer and Systems Sciences, Stockholm University, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-8262.

Full text
Abstract
This work investigates emerging knowledge distribution means through a descriptive study. Despite the amount of attention that processes and structures for knowledge management have received within research during the last decade, little attention has been directed towards the actual means used for the distribution of knowledge by individuals. In this respect, the aim of the study is to contribute knowledge regarding knowledge distribution means. The study consists of a survey of emerging electronically mediated distribution means, followed by a characterization and analysis. For the characterization and analysis, a framework for interpretation of the different distribution means was created, based on the constructs of organizational learning and the levels of knowledge system interpretation. Within the framework, characteristics and concepts were identified and then used for the analysis of the knowledge distribution means. The characterization of the different knowledge distribution means may itself be used as an instrument for evaluation, since it is generalizable to other means of knowledge distribution. The results of the study show that knowledge distribution is not an isolated event. It takes place in a larger context, such as organizational learning, since it touches upon other activities or phenomena such as knowledge acquisition, knowledge interpretation, and organizational memory. The concept of genre of knowledge distribution was found to be a viable concept on which to base exploration and development of support for knowledge distribution. The investigated distribution means only partly support a model for knowledge representation that captures both the problem-solution pair and an understanding of their relationship. In this respect, existing distribution means must be enhanced, or new ones developed, if we wish to endorse such a representational model.

Licentiate thesis in partial fulfillment of the Licentiate of Philosophy degree in Computer and Systems Sciences, Stockholm University

4

George, Chadrick Hendrik. "Knowledge management infrastructure and knowledge sharing: The case of a large fast moving consumer goods distribution centre in the Western Cape". Thesis, University of the Western Cape, 2014. http://hdl.handle.net/11394/3943.

Full text
Abstract
Magister Commercii - MCom
The aim of this study is to understand how knowledge is created, shared and used within a fast moving consumer goods distribution centre in the Western Cape (WC). It also aims to understand knowledge sharing between individuals in the organisation. A literature review was conducted in order to answer the research questions; it covered the background of knowledge management (KM) and knowledge sharing (KS) and their current status, with particular reference to South Africa's private sector. The study found that technological KM infrastructure, cultural KM infrastructure and organisational KM infrastructure are important enablers of KS. A conceptual model was developed around these concepts. In order to answer the research questions, the study identified an FMCG DC in the WC where KS is practised.
5

Arres, Billel. "Optimisation des performances dans les entrepôts distribués avec Mapreduce : traitement des problèmes de partionnement et de distribution des données". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE2012.

Full text
Abstract
In this manuscript, we address the problems of data partitioning and distribution for large-scale data warehouses distributed with MapReduce. First, we address the problem of data distribution. In this case, we propose a strategy to optimize data placement on distributed systems, based on the collocation principle. The objective is to optimize query performance through the definition of an intentional data distribution schema that reduces the amount of data transferred between nodes during treatments, specifically during MapReduce's shuffling phase. Secondly, we propose a new approach to improve data partitioning and placement in distributed file systems, especially Hadoop-based systems, which is the standard implementation of the MapReduce paradigm. The aim is to overcome the default data partitioning and placement policies, which do not take any relational data characteristics into account. Our proposal proceeds in two steps. Based on the query workload, it defines an efficient partitioning schema. After that, the system defines a data distribution schema that best meets users' needs by collocating data blocks on the same or the closest nodes. The objective in this case is to optimize query execution and parallel processing performance by improving data access. Our third proposal addresses the problem of workload dynamicity, since users' analytical needs evolve over time. In this case, we propose the use of multi-agent systems (MAS) as an extension of our data partitioning and placement approach. Through the autonomy and self-control that characterize MAS, we developed a platform that automatically defines new distribution schemas as new queries arrive in the system, and applies data rebalancing according to the new schema. This relieves the system administrator of the burden of managing load balance, besides improving query performance through careful data partitioning and placement policies. Finally, to validate our contributions, we conduct a set of experiments to evaluate the different approaches proposed in this manuscript. We study the impact of intentional data partitioning and distribution on the data warehouse loading phase, the execution of analytical queries, OLAP cube construction, as well as load balancing. We also define a cost model that allows us to evaluate and validate the partitioning strategy proposed in this work.
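The affinity idea, grouping warehouse attributes that the query workload accesses together, can be sketched as follows; the greedy grouping and the toy workload are illustrative simplifications of the thesis's approach.

```python
from collections import Counter
from itertools import combinations

def affinity_groups(workload, threshold=2):
    """Group attributes whose co-access count in the workload reaches
    the threshold; each group suggests one vertical fragment."""
    co_access = Counter()
    for query_attrs in workload:
        for a, b in combinations(sorted(set(query_attrs)), 2):
            co_access[(a, b)] += 1

    groups = []                                # greedy union of affine pairs
    for (a, b), count in co_access.most_common():
        if count < threshold:
            break
        ga = next((g for g in groups if a in g), None)
        gb = next((g for g in groups if b in g), None)
        if ga is None and gb is None:
            groups.append({a, b})
        elif gb is None:
            ga.add(b)
        elif ga is None:
            gb.add(a)
        elif ga is not gb:
            ga |= gb
            groups.remove(gb)
    return groups

workload = [("date", "store", "sales"), ("date", "sales"),
            ("customer", "city"), ("customer", "city", "sales")]
print(affinity_groups(workload))   # e.g. [{'date', 'sales'}, {'city', 'customer'}]
```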
6

Antoine, Emilien. "Distributed data management with a declarative rule-based language webdamlog". PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00933808.

Full text
Abstract
Our goal is to enable a Web user to easily specify distributed data management tasks in place, i.e. without centralizing the data to a single provider. Our system is therefore not a replacement for Facebook, or any centralized system, but an alternative that allows users to launch their own peers on their machines, processing their own local personal data, and possibly collaborating with Web services. We introduce Webdamlog, a datalog-style language for managing distributed data and knowledge. The language extends datalog in a number of ways, notably with a novel feature, namely delegation, allowing peers to exchange not only facts but also rules. We present a user study that demonstrates the usability of the language. We describe a Webdamlog engine that extends a distributed datalog engine, namely Bud, with the support of delegation and of a number of other novelties of Webdamlog such as the possibility to have variables denoting peers or relations. We mention novel optimization techniques, notably one based on the provenance of facts and rules. We exhibit experiments that demonstrate that the rich features of Webdamlog can be supported at reasonable cost and that the engine scales to large volumes of data. Finally, we discuss the implementation of a Webdamlog peer system that provides an environment for the engine. In particular, a peer supports wrappers to exchange Webdamlog data with non-Webdamlog peers. We illustrate these peers by presenting a picture management application that we used for demonstration purposes.
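As background for the rule style Webdamlog extends, here is a minimal naive bottom-up Datalog evaluator in Python; peers and the delegation feature itself are not modeled, only plain fact-and-rule evaluation.

```python
def naive_datalog(facts, rules):
    """Naive bottom-up evaluation: apply every rule until no new facts.

    Atoms are tuples ("pred", term, ...); variables are strings
    starting with '?'. Rules are (head, [body atoms]) pairs.
    """
    facts = set(facts)
    while True:
        new = set()
        for head, body in rules:
            for binding in matches(body, facts, {}):
                fact = substitute(head, binding)
                if fact not in facts:
                    new.add(fact)
        if not new:
            return facts
        facts |= new

def matches(body, facts, binding):
    """Yield every variable binding under which all body atoms hold."""
    if not body:
        yield binding
        return
    for fact in facts:
        extended = unify(body[0], fact, dict(binding))
        if extended is not None:
            yield from matches(body[1:], facts, extended)

def unify(atom, fact, binding):
    """Extend the binding so atom matches fact, or return None."""
    if len(atom) != len(fact) or atom[0] != fact[0]:
        return None
    for term, value in zip(atom[1:], fact[1:]):
        if term.startswith("?"):
            if binding.setdefault(term, value) != value:
                return None
        elif term != value:
            return None
    return binding

def substitute(atom, binding):
    return tuple(binding.get(term, term) for term in atom)

# Transitive closure over link facts, the classic recursive Datalog example.
facts = [("link", "a", "b"), ("link", "b", "c")]
rules = [
    (("reach", "?x", "?y"), [("link", "?x", "?y")]),
    (("reach", "?x", "?z"), [("reach", "?x", "?y"), ("link", "?y", "?z")]),
]
print(sorted(f for f in naive_datalog(facts, rules) if f[0] == "reach"))
```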
7

Galicia Auyón, Jorge Armando. "Revisiting Data Partitioning for Scalable RDF Graph Processing. Combining Graph Exploration and Fragmentation for RDF Processing. Query Optimization for Large Scale Clustered RDF Data. RDFPart-Suite: Bridging Physical and Logical RDF Partitioning. Reverse Partitioning for SPARQL Queries: Principles and Performance Analysis. Should We Be Afraid of Querying Billions of Triples in a Graph-Based Centralized System? EXGRAF: Exploration et Fragmentation de Graphes au Service du Traitement Scalable de Requêtes RDF". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2021. http://www.theses.fr/2021ESMA0001.

Full text
Abstract
The Resource Description Framework (RDF) and SPARQL are very popular graph-based standards initially designed to represent and query information on the Web. The flexibility offered by RDF motivated its use in other domains, and today RDF datasets are great information sources. They gather billions of triples in Knowledge Graphs that must be stored and efficiently exploited. The first generation of RDF systems was built on top of traditional relational databases. Unfortunately, the performance of these systems degrades rapidly, as the relational model is not suitable for handling RDF data inherently represented as a graph. Native and distributed RDF systems seek to overcome this limitation. The former mainly use indexing as an optimization strategy to speed up queries. Distributed and parallel RDF systems resort to data partitioning. The logical representation of the database is crucial to designing data partitions in the relational model. The logical layer defining the explicit schema of the database provides a degree of comfort to database designers. It lets them choose manually or automatically (through advisors) the tables and attributes to be partitioned. Besides, it allows the core partitioning concepts to remain constant regardless of the database management system. This design scheme is no longer valid for RDF databases, essentially because the RDF model does not explicitly enforce a schema, since RDF data is mostly implicitly structured. Thus, the logical layer is inexistent and data partitioning depends strongly on the physical implementation of the triples on disk. This situation contributes to having different partitioning logics depending on the target system, which is quite different from the relational model's perspective. In this thesis, we promote the novel idea of performing data partitioning at the logical level in RDF databases. Thereby, we first process the RDF data graph to support logical entity-based partitioning. After this preparation, we present a partitioning framework built upon these logical structures. This framework is accompanied by data fragmentation, allocation, and distribution procedures. The framework was incorporated into a centralized (RDF_QDAG) and a distributed (gStoreD) triple store. We conducted several experiments that confirmed the feasibility of integrating our framework into existing systems, improving their performance for certain queries. Finally, we design a set of RDF data partitioning management tools including a data definition language (DDL) and an automatic partitioning wizard.
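The logical entity-based partitioning idea can be sketched by grouping triples on the declared type of their subject; this is a schematic illustration only, and the thesis's full framework additionally covers allocation and distribution.

```python
from collections import defaultdict

RDF_TYPE = "rdf:type"

def fragment_by_entity_type(triples):
    """Fragment an RDF graph by the logical type of each triple's subject;
    triples whose subject has no declared type fall into 'untyped'."""
    type_of = {s: o for s, p, o in triples if p == RDF_TYPE}
    fragments = defaultdict(list)
    for s, p, o in triples:
        fragments[type_of.get(s, "untyped")].append((s, p, o))
    return fragments

triples = [
    ("ex:alice", RDF_TYPE, "ex:Person"),
    ("ex:alice", "ex:worksFor", "ex:acme"),
    ("ex:acme", RDF_TYPE, "ex:Company"),
    ("ex:acme", "ex:locatedIn", "ex:Paris"),
]
for entity_type, frag in fragment_by_entity_type(triples).items():
    print(entity_type, len(frag))
```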
8

Meiring, Linda. "A distribution model for the assessment of database systems knowledge and skills among second-year university students". Thesis, [Bloemfontein?] : Central University of Technology, Free State, 2009. http://hdl.handle.net/11462/44.

Full text
9

Dasgupta, Arghya. "How can the 'Zeigarnik effect' be combined with analogical reasoning in order to enhance understanding of complex knowledge related to computer science?" Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-143636.

Full text
Abstract
Many people face difficulties in remembering knowledge that is complex and abstract. This is especially important when descriptions of knowledge are to be stored in searchable knowledge bases. But if complex knowledge can be transferred through real-life stories, it is more understandable and easier to retrieve for the knowledge acceptor. Moreover, if the stories follow a certain pattern, like 'intentional suspense', it may be more useful. This study investigates how far a story with intentional interruption is helpful in transferring complex computer science knowledge through processing of information that compares similarities between new and well-understood concepts. The data collection was done by applying a framework analysis approach through interviews with 40 students of Stockholm University. The results of this study are assumed to help organizations design, store and retrieve complex knowledge structures in knowledge bases by using a specific pattern of the stories used in narrative pedagogy known as the 'Zeigarnik effect', which is a form of creating suspense. Interviews with managers showed that they are positive towards using the type of knowledge transfer proposed in the results of this thesis. Transcribed interviews with students show that the students appreciate and understand the use of analogies in combination with the 'Zeigarnik effect' as described in the result of this thesis. After analysis of the data collected from the experiments, it was confirmed that the 'Zeigarnik effect' has a small positive effect for a group of people, as better results were found most of the time when the 'Zeigarnik effect' was used compared to when it was not. The participants that experienced the 'Zeigarnik effect' answered in a better way, indicating that their understanding and memory regarding the subject were enhanced by it.
10

Coullon, Hélène. "Modélisation et implémentation de parallélisme implicite pour les simulations scientifiques basées sur des maillages". Thesis, Orléans, 2014. http://www.theses.fr/2014ORLE2029/document.

Full text
Abstract
Parallel scientific computing is an expanding domain of computer science which increases the speed of calculations and offers a way to deal with heavier or more accurate calculations. Thus, the interest of scientific computing increases, with more precise results and bigger physical domains to study. In the particular case of scientific numerical simulations, solving partial differential equations (PDEs) is an especially heavy calculation and a perfect candidate for parallel computation. On the one hand, it is easier and easier to get access to very powerful parallel machines and clusters, but on the other hand parallel programming is hard to democratize, and most scientists are not able to use these machines. As a result, high-level programming models, frameworks, libraries, languages, etc. have been proposed to hide the technical details of parallel programming. However, in this "implicit parallelism" field, it is difficult to find the right abstraction level while keeping a low programming effort. This thesis proposes a model for writing implicit parallelism solutions for numerical simulations such as mesh-based PDE computations. This model is called "Structured Implicit Parallelism for scientific SIMulations" (SIPSim), and proposes an approach at the crossroads of existing solutions, taking advantage of each one. A first implementation of this model is proposed, as a C++ library called SkelGIS, for two-dimensional Cartesian meshes. A second implementation of the model, and an extension of SkelGIS, proposes an implicit parallelism solution for network simulations (which deal with simulations of multiple physical phenomena), and is studied in detail. A performance analysis of both these implementations is given on real-case simulations, and it demonstrates that the SIPSim model can be implemented efficiently.
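The kind of mesh-based PDE kernel such libraries abstract can be shown with an explicit finite-difference step of the 2D heat equation on a Cartesian mesh; this NumPy sketch is sequential and purely illustrative of the computation SkelGIS-style skeletons parallelize.

```python
import numpy as np

def heat_step(u, alpha=0.1):
    """One explicit finite-difference step of the 2D heat equation.

    Assumes unit grid spacing, the time step folded into alpha, and
    fixed (Dirichlet) boundary values.
    """
    new = u.copy()
    new[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
        u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        - 4.0 * u[1:-1, 1:-1]
    )
    return new

u = np.zeros((64, 64))
u[32, 32] = 100.0                  # point heat source
for _ in range(50):
    u = heat_step(u)
print(f"peak temperature after 50 steps: {u.max():.2f}")
```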

Books on the topic "Data / knowledge partitioning and distribution"

1

Kjaerulff, Uffe B. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis. 2nd ed. New York, NY: Springer New York, 2013.

Search full text
2

Petchey, Owen L., Andrew P. Beckerman, Natalie Cooper, and Dylan Z. Childs. Insights from Data with R. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198849810.001.0001.

Full text
Abstract
Knowledge of how to get useful information from data is essential in the life and environmental sciences. This book provides learners with knowledge, experience, and confidence about how to efficiently and reliably discover useful information from data. The content is developed from first- and second-year undergraduate-level courses taught by the authors. It charts the journey from question, to raw data, to clean and tidy data, to visualizations that provide insights. This journey is presented as a repeatable workflow fit for use with many types of question, study, and data. Readers discover how to use R and RStudio, and learn key concepts for drawing appropriate conclusions from patterns in data. The book focuses on providing learners with a solid foundation of skills for working with data, and for getting useful information from data summaries and visualizations. It focuses on the strength of patterns (i.e. effect sizes) and their meaning (e.g. correlation or causation). It purposefully stays away from statistical tests and p-values. Concepts covered include distribution, sample, population, mean, median, mode, variance, standard deviation, correlation, interactions, and non-independence. The journey from data to insight is illustrated by one workflow demonstration in the book, and three online. Each involves data collected in a real study. Readers can follow along by downloading the data, and learning from the descriptions of each step in the journey from the raw data to visualizations that show the answers to the questions posed in the original studies.
3

Madsen, Anders L., and Uffe B. B. Kjærulff. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis. Springer, 2014.

Search full text
4

Förster, Michael, and Brian Nolan. Inequality and Living Standards. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198807032.003.0002.

Full text
Abstract
This chapter provides an overview of how inequality and living standards have evolved across the rich countries of the OECD in recent decades, of the factors driving income inequality upwards in many of them, and of the channels through which this may undermine real income growth and opportunity for households across the middle and lower parts of the income distribution. It presents an overview of key trends drawing on comparative data from the OECD’s Income Distribution Database. It reviews existing evidence on the drivers of income inequality and on how inequality may affect income growth around the middle. It highlights key gaps in knowledge, to be addressed by the in-depth examination of the varying experiences of a range of rich countries in this book.
5

Ferreira, Eliel Alves, and João Vicente Zamperion. Excel: Uma ferramenta estatística. Brazil Publishing, 2021. http://dx.doi.org/10.31012/978-65-5861-400-5.

Full text
Abstract
This study presents the concepts and methods of statistical analysis using Excel in a simple way, aiming to make them easier to understand for students, both undergraduate and graduate, from different areas of knowledge. It mainly uses Excel's Data Analysis Tools. For better understanding, the book offers many practical examples applying these tools, together with their interpretations, which are of paramount importance. The first chapter deals with introductory concepts, such as an introduction to Excel, the importance of statistics, and basic concepts and definitions, covering population and sample, types of data, and their levels of measurement. It then presents a detailed study of descriptive statistics, covering percentages, the construction of graphs, frequency distributions, measures of central tendency, and measures of dispersion. In the third chapter, notions of probability and the binomial and normal probability distributions are studied. The last chapter addresses inferential statistics, starting with the confidence interval, moving through hypothesis tests (F, Z and t tests), and ending with the statistical study of correlation between variables and simple linear regression. It is worth mentioning that the statistical knowledge covered in this book can be useful not only for students but also for professionals who want to improve their knowledge of statistics using Excel.
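As a sketch of the inferential steps listed above (outside Excel, using Python's scipy for brevity), here is a 95% confidence interval for a mean and a two-sample t-test on illustrative data.

```python
import numpy as np
from scipy import stats

# Illustrative samples only; in the book these steps are done in Excel.
rng = np.random.default_rng(1)
a = rng.normal(loc=10.0, scale=2.0, size=30)
b = rng.normal(loc=11.2, scale=2.0, size=30)

# 95% confidence interval for the mean of sample a.
ci = stats.t.interval(0.95, df=len(a) - 1,
                      loc=a.mean(), scale=stats.sem(a))
# Two-sample t-test comparing the means of a and b.
t_stat, p_value = stats.ttest_ind(a, b)

print(f"95% CI for mean of a: ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```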
6

Tebaldi, Claudia, and Richard Smith. Indirect elicitation from ecological experts: From methods and software to habitat modelling and rock-wallabies. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.19.

Full text
Abstract
This article focuses on techniques for eliciting expert judgement about complex uncertainties, and more specifically the habitat of the Australian brush-tailed rock-wallaby. Modelling wildlife habitat requirements is important for mapping the distribution of the rock-wallaby, a threatened species, and therefore informing conservation and management. The Bayesian statistical modelling framework provides a useful ‘bridge’, from purely expert-defined models, to statistical models allowing survey data and expert knowledge to be ‘viewed as complementary, rather than alternative or competing, information sources’. The article describes the use of a rigorously designed and implemented expert elicitation for multiple experts, as well as a software tool for streamlining, automating and facilitating an indirect approach to elicitation. This approach makes it possible to infer the relationship between probability of occurrence and the environmental variables and demonstrates how expert knowledge can contribute to habitat modelling.
7

O'Donoghue, Cathal. Practical Microsimulation Modelling. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198852872.001.0001.

Full text
Abstract
The purpose of this book is to bring together, for the first time, a description with examples of the main methods used in microsimulation modelling in the field of income-distribution analysis. The book provides a practical complement to the Handbook of Microsimulation Modelling, published in 2014. It is structured to develop and use the different types of models used in the field, with a focus on household-targeted policy. The book aims to fill a gap in the literature by providing a greater degree of codified knowledge through a practical guide to developing and using microsimulation models. At present, the training of researchers and analysts who use and develop microsimulation models is done on a relatively ad hoc basis through occasional training programmes and lecture series built around lecture notes. This book enables a more formalized and organized approach. Each chapter addresses a separate modelling approach in a similar, consistent way, describing in practical terms the key methodological skills for each approach:
· It provides some policy context for each modelling approach, so as to understand the modelling choices made and structures developed.
· As microsimulation is a very data-intensive modelling approach, each chapter describes key data-analysis and data-preparation methods.
· As microsimulation is used extensively for deciding policy, often involving huge budgets, validation is key; each chapter describes an approach to validating the model.
· Depending upon the policy context, the analysis is assessed in different ways; each chapter contains a section devoted to measurement issues and tabulating output from the models.
· Last, each chapter contains an example simulation of a policy analysis using the chapter's methodological approach.
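For orientation, a static tax-benefit microsimulation in miniature might look like the sketch below: policy rules are applied household by household to microdata and a simple distributional output is tabulated. All rules and figures are invented for illustration and are not drawn from the book.

```python
# Minimal sketch of a static tax-benefit microsimulation, with hypothetical
# household microdata and a hypothetical reform (flat 20% tax above a 10,000
# allowance, plus a 2,000 child benefit); not the book's models or code.
from dataclasses import dataclass

@dataclass
class Household:
    gross_income: float
    n_children: int

def simulate(hh: Household, tax_rate=0.20, allowance=10_000, child_benefit=2_000) -> float:
    """Return disposable income under the hypothetical policy rules."""
    tax = max(hh.gross_income - allowance, 0.0) * tax_rate
    benefits = child_benefit * hh.n_children
    return hh.gross_income - tax + benefits

households = [Household(8_000, 2), Household(25_000, 1), Household(60_000, 0)]
disposable = [simulate(h) for h in households]

# Tabulate a simple distributional output, as a microsimulation model would
for h, d in zip(households, disposable):
    print(f"gross={h.gross_income:>7,.0f}  disposable={d:>9,.2f}")
```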
8

Ashby, F. Gregory and Fabian A. Soto. Multidimensional Signal Detection Theory. Edited by Jerome R. Busemeyer, Zheng Wang, James T. Townsend and Ami Eidels. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199957996.013.2.

Full text
Abstract
Multidimensional signal detection theory is a multivariate extension of signal detection theory that makes two fundamental assumptions, namely that every mental state is noisy and that every action requires a decision. The most widely studied version is known as general recognition theory (GRT). General recognition theory assumes that the percept on each trial can be modeled as a random sample from a multivariate probability distribution defined over the perceptual space. Decision bounds divide this space into regions that are each associated with a response alternative. General recognition theory rigorously defines and tests a number of important perceptual and cognitive conditions, including perceptual and decisional separability and perceptual independence. General recognition theory has been used to analyze data from identification experiments in two ways: (1) fitting and comparing models that make different assumptions about perceptual and decisional processing, and (2) testing assumptions by computing summary statistics and checking whether these satisfy certain conditions. Much has been learned recently about the neural networks that mediate the perceptual and decisional processing modeled by GRT, and this knowledge can be used to improve the design of experiments where a GRT analysis is anticipated.
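The core GRT assumptions described above (noisy percepts sampled from stimulus-specific multivariate normal distributions, plus a decision bound that partitions the perceptual space into response regions) can be sketched in a few lines of Python. The means, covariance, and bound below are invented for illustration.

```python
# Toy sketch of the GRT setup: percepts are random samples from
# stimulus-specific bivariate normal distributions, and a linear decision
# bound partitions the perceptual space into response regions.
import numpy as np

rng = np.random.default_rng(0)
means = {"A": np.array([0.0, 0.0]), "B": np.array([1.5, 1.0])}
cov = np.array([[1.0, 0.3], [0.3, 1.0]])  # shared perceptual noise covariance

def respond(percept):
    # Decision bound x + y = 1.75: percepts above it yield response "B"
    return "B" if percept.sum() > 1.75 else "A"

confusions = {(s, r): 0 for s in means for r in means}
for stim, mu in means.items():
    for percept in rng.multivariate_normal(mu, cov, size=1000):
        confusions[(stim, respond(percept))] += 1

print(confusions)  # identification confusion matrix (stimulus, response) -> count
```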
9

Massimi, Michela. Perspectival Realism. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780197555620.001.0001.

Full text
Abstract
What does it mean to be a realist about science if one takes seriously the view that scientific knowledge is always perspectival, namely historically and culturally situated? In Perspectival Realism, Michela Massimi articulates an original answer to this question. The result is a philosophical view that goes under the name of ‘perspectival realism’ and it offers a new lens for thinking about scientific knowledge, realism, and pluralism in science. Perspectival Realism begins with an exploration of how epistemic communities often resort to several models and a plurality of practices in some areas of inquiry, drawing on examples from nuclear physics, climate science, and developmental psychology. Taking this plurality in science as a starting point, Massimi explains the perspectival nature of scientific representation, the role of scientific models as inferential blueprints, and the variety of realism that naturally accompanies such a view. Perspectival realism is realism about phenomena (rather than about theories or unobservable entities). The result of this novel view is a portrait of scientific knowledge as a collaborative inquiry, where the reliability of science is made possible by a plurality of historically and culturally situated scientific perspectives. Along the way, Massimi offers insights into the nature of scientific modelling, scientific knowledge qua modal knowledge, data-to-phenomena inferences, and natural kinds as sortal concepts. Perspectival realism offers a realist view that takes the multicultural roots of science seriously and couples it with cosmopolitan duties about how one ought to think about scientific knowledge and the distribution of benefits gained from scientific advancements.
10

Garnett, Stephen, Judit Szabo and Guy Dutson. Action Plan for Australian Birds 2010. CSIRO Publishing, 2011. http://dx.doi.org/10.1071/9780643103696.

Full text
Abstract
The Action Plan for Australian Birds 2010 is the third in a series of action plans that have been produced at the start of each decade. The book analyses the International Union for Conservation of Nature (IUCN) status of all the species and subspecies of Australia's birds, including those of the offshore territories. For each bird, the size of and trend in its population and distribution have been analysed using the latest iteration of the IUCN Red List Criteria to determine its risk of extinction. The book also provides an account of all those species and subspecies that are, or are likely to be, extinct. The result is the most authoritative account yet of the status of Australia's birds. In this completely revised edition, each account covers not only the 2010 status but also provides a retrospective assessment of the status in 1990 and 2000 based on current knowledge, taxonomic revisions and changes to the IUCN criteria, and then gives reasons why the status of some taxa has changed over the last two decades. Maps have been created specifically for the Action Plan based on vetted data drawn from the records of Birds Australia, its members and its partners in many government departments. This is not a book of lost causes. It is a call for action to keep the extraordinary biodiversity we have inherited and pass the legacy on to our children. 2012 Whitley Award Commendation for Zoological Resource.

Book chapters on the topic "Data / knowledge partitioning and distribution"

1

Tsai, Kao-Tai. "Examining Data Distribution". In Machine Learning for Knowledge Discovery with R, 9–28. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003205685-2.

Full text
2

Aslam, Adeel, Giovanni Simonini, Luca Gagliardelli, Angelo Mozzillo and Sonia Bergamaschi. "HKS: Efficient Data Partitioning for Stateful Streaming". In Big Data Analytics and Knowledge Discovery, 386–91. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39831-5_35.

Full text
3

Galicia, Jorge, Amin Mesmoudi and Ladjel Bellatreche. "RDFPartSuite: Bridging Physical and Logical RDF Partitioning". In Big Data Analytics and Knowledge Discovery, 136–50. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27520-4_10.

Full text
4

Bae, Jinuk and Sukho Lee. "Partitioning Algorithms for the Computation of Average Iceberg Queries". In Data Warehousing and Knowledge Discovery, 276–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44466-1_27.

Full text
5

Bellatreche, Ladjel, Kamel Boukhalfa and Pascal Richard. "Data Partitioning in Data Warehouses: Hardness Study, Heuristics and ORACLE Validation". In Data Warehousing and Knowledge Discovery, 87–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-85836-2_9.

Full text
6

Bodra, Jay, Soumyava Das, Abhishek Santra and Sharma Chakravarthy. "Query Processing on Large Graphs: Scalability Through Partitioning". In Big Data Analytics and Knowledge Discovery, 271–88. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-98539-8_21.

Full text
7

Li, Haoran, Li Xiong, Zhanglong Ji and Xiaoqian Jiang. "Partitioning-Based Mechanisms Under Personalized Differential Privacy". In Advances in Knowledge Discovery and Data Mining, 615–27. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57454-7_48.

Full text
8

Jiang, Hansi and Carl Meyer. "Relations Between Adjacency and Modularity Graph Partitioning". In Advances in Knowledge Discovery and Data Mining, 189–200. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33377-4_15.

Full text
9

Bauer, H. H., M. Staat and M. Hammerschmidt. "Value Based Benchmarking and Market Partitioning". In Studies in Classification, Data Analysis, and Knowledge Organization, 422–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-642-55721-7_43.

Full text
10

Tsuchiya, Takahiro. "Homogeneity Analysis for Partitioning Qualitative Variables". In Studies in Classification, Data Analysis, and Knowledge Organization, 452–59. Tokyo: Springer Japan, 1998. http://dx.doi.org/10.1007/978-4-431-65950-1_50.

Full text

Conference papers on the topic "Data / knowledge partitioning and distribution"

1

Nishimura, Joel and Johan Ugander. "Restreaming graph partitioning". In KDD '13: The 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2013. http://dx.doi.org/10.1145/2487575.2487696.

Full text
2

Fetai, Ilir, Damian Murezzan and Heiko Schuldt. "Workload-driven adaptive data partitioning and distribution — The Cumulus approach". In 2015 IEEE International Conference on Big Data (Big Data). IEEE, 2015. http://dx.doi.org/10.1109/bigdata.2015.7363940.

Full text
3

Pacaci, Anil and M. Tamer Özsu. "Distribution-Aware Stream Partitioning for Distributed Stream Processing Systems". In SIGMOD/PODS '18: International Conference on Management of Data. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3206333.3206338.

Full text
4

Higham, Catherine F., Desmond J. Higham and Francesco Tudisco. "Core-periphery Partitioning and Quantum Annealing". In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3534678.3539261.

Full text
5

Zhang, Chenzi, Fan Wei, Qin Liu, Zhihao Gavin Tang and Zhenguo Li. "Graph Edge Partitioning via Neighborhood Heuristic". In KDD '17: The 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3097983.3098033.

Full text
6

Awadelkarim, Amel and Johan Ugander. "Prioritized Restreaming Algorithms for Balanced Graph Partitioning". In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394486.3403239.

Full text
7

Otgonbayar, Ankhbayar, Zeeshan Pervez and Keshav Dahal. "Partitioning based incremental marginalization algorithm for anonymizing missing data streams". In 2019 13th International Conference on Software, Knowledge, Information Management and Applications (SKIMA). IEEE, 2019. http://dx.doi.org/10.1109/skima47702.2019.8982399.

Full text
8

Xie, Xiao-Min and Yun Li. "Bisecting data partitioning methods for Min-Max Modular Support Vector Machine". In 2011 Eighth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2011). IEEE, 2011. http://dx.doi.org/10.1109/fskd.2011.6019750.

Full text
9

Kor, Yashar, Liang Tan, Marek Z. Reformat and Petr Musilek. "GridKG: Knowledge Graph Representation of Distribution Grid Data". In 2020 IEEE Electric Power and Energy Conference (EPEC). IEEE, 2020. http://dx.doi.org/10.1109/epec48502.2020.9320066.

Full text
10

Gupta, Gaurav, Tharun Medini, Anshumali Shrivastava and Alexander J. Smola. "BLISS: A Billion scale Index using Iterative Re-partitioning". In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3534678.3539414.

Full text

Reports on the topic "Data / knowledge partitioning and distribution"

1

Schoen, Robert, Xiaotong Yang and Gizem Solmaz. Psychometric Report for the 2019 Knowledge for Teaching Early Elementary Mathematics (K-TEEM) Test. Florida State University Libraries, May 2021. http://dx.doi.org/10.33009/lsi.1620243057.

Full text
Abstract
The 2019 Knowledge for Teaching Early Elementary Mathematics (2019 K-TEEM) test measures teachers’ mathematical knowledge for teaching early elementary mathematics. This report presents information about a large-scale field test of the 2019 K-TEEM test with 649 practicing educators. The report contains information about the development process used for the test; a description of the sample; descriptions of the procedures used for data entry, scoring of responses, and analysis of data; recommended scoring procedures; and findings regarding the distribution of test scores, standard error of measurement, and marginal reliability. The intended use of the data from the 2019 K-TEEM test is to serve as a measure of teacher knowledge that will be used in a randomized controlled trial to investigate the impact—and variation in impact—of a teacher professional-development program for early elementary teachers.
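As a point of orientation only (not the report's computation): in classical test theory, the standard error of measurement follows from score variability and reliability as SEM = SD * sqrt(1 - reliability). The Python snippet below illustrates the formula with invented numbers; the report's own estimates are not reproduced here.

```python
# Hedged illustration with invented numbers: classical-test-theory standard
# error of measurement from score SD and reliability.
import math

score_sd = 12.0     # hypothetical standard deviation of test scores
reliability = 0.90  # hypothetical marginal reliability

sem = score_sd * math.sqrt(1.0 - reliability)
print(f"SEM = {sem:.2f} score points")  # about 3.79 for these inputs
```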
2

Wolf, Shmuel and William J. Lucas. Involvement of the TMV-MP in the Control of Carbon Metabolism and Partitioning in Transgenic Plants. United States Department of Agriculture, October 1999. http://dx.doi.org/10.32747/1999.7570560.bard.

Full text
Abstract
The function of the 30-kilodalton movement protein (MP) of tobacco mosaic virus (TMV) is to facilitate cell-to-cell movement of viral progeny in infected plants. Our earlier findings have indicated that this protein has a direct effect on plasmodesmal function. In addition, these studies demonstrated that constitutive expression of the TMV MP gene (under the control of the CaMV 35S promoter) in transgenic tobacco plants significantly affects carbon metabolism in source leaves and alters the biomass distribution between the various plant organs. The long-term goal of the proposed research was to better understand the factors controlling carbon translocation in plants. The specific objectives were: A) To introduce into tobacco and potato plants a virally-encoded (TMV-MP) gene that affects plasmodesmal functioning and photosynthate partitioning under tissue-specific promoters. B) To introduce into tobacco and potato plants the TMV-MP gene under the control of promoters which are tightly repressed by the Tn10-encoded Tet repressor, to enable the expression of the protein by external application of tetracycline. C) To explore the mechanism by which the TMV-MP interacts with the endogenous control of carbon allocation. Data obtained in our previous project together with the results of this current study established that the TMV-MP has pleiotropic effects when expressed in transgenic tobacco plants. In addition to its ability to increase the plasmodesmal size exclusion limit, it alters carbohydrate metabolism in source leaves and dry matter partitioning between the various plant organs. Expression of the TMV-MP in various tissues of transgenic potato plants indicated that sugar and starch levels in source leaves are reduced below those of control plants when the TMV-MP is expressed in green tissue only. However, when the TMV-MP was expressed predominantly in PP and CC, sugar and starch levels were raised above those of control plants. Perhaps the most significant result obtained from experiments performed on transgenic potato plants was the discovery that the influence of the TMV-MP on carbohydrate allocation within source leaves was under developmental control and was exerted only during tuber development. The complexity of the mode by which the TMV-MP exerts its effect on the process of carbohydrate allocation was further demonstrated when transgenic tobacco plants were subjected to environmental stresses such as drought stress and nutrient deficiencies. Collectively, these studies indicated that the influence of the TMV-MP on carbon allocation is the result of protein-protein interaction within the source tissue. Based on these results, together with the findings that plasmodesmata potentiate the cell-to-cell trafficking of viral and endogenous proteins and nucleoprotein complexes, we developed the theme that, at the whole-plant level, the phloem serves as an information superhighway. Such a long-distance communication system may utilize a new class of signaling molecules (proteins and/or RNA) to co-ordinate photosynthesis and carbon/nitrogen metabolism in source leaves with the complex growth requirements of the plant under the prevailing environmental conditions.
The discovery that expression of viral MP in plants can induce precise changes in carbon metabolism and photoassimilate allocation, now provide a conceptual foundation for future studies aimed at elucidating the communication network responsible for integrating photosynthetic productivity with resource allocation at the whole-plant level. Such information will surely provide an understanding of how plants coordinate the essential physiological functions performed by distantly-separated organs. Identification of the proteins involved in mediating and controlling cell-to-cell transport, especially at the companion cell-sieve element boundary, will provide an important first step towards achieving this goal.
3

Idakwo, Gabriel, Sundar Thangapandian, Joseph Luttrell, Zhaoxian Zhou, Chaoyang Zhang and Ping Gong. Deep learning-based structure-activity relationship modeling for multi-category toxicity classification: a case study of 10K Tox21 chemicals with high-throughput cell-based androgen receptor bioassay data. Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41302.

Full text
Abstract
Deep learning (DL) has attracted the attention of computational toxicologists as it offers a potentially greater power for in silico predictive toxicology than existing shallow learning algorithms. However, contradicting reports have been documented. To further explore the advantages of DL over shallow learning, we conducted this case study using two cell-based androgen receptor (AR) activity datasets with 10K chemicals generated from the Tox21 program. A nested double-loop cross-validation approach was adopted along with a stratified sampling strategy for partitioning chemicals of multiple AR activity classes (i.e., agonist, antagonist, inactive, and inconclusive) at the same distribution rates amongst the training, validation and test subsets. Deep neural networks (DNN) and random forest (RF), representing deep and shallow learning algorithms, respectively, were chosen to carry out structure-activity relationship-based chemical toxicity prediction. Results suggest that DNN significantly outperformed RF (p < 0.001, ANOVA) by 22–27% for four metrics (precision, recall, F-measure, and AUPRC) and by 11% for another (AUROC). Further in-depth analyses of chemical scaffolding shed insights on structural alerts for AR agonists/antagonists and inactive/inconclusive compounds, which may aid in future drug discovery and improvement of toxicity prediction modeling.
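The stratified partitioning strategy described above (keeping the four AR activity classes at the same rates in the training, validation and test subsets) can be sketched with scikit-learn as follows. The data and split sizes are placeholders, and the sketch shows a single split rather than the study's nested double-loop cross-validation.

```python
# Sketch, assuming placeholder data: stratified train/validation/test split
# that preserves the multi-class label distribution across all subsets.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(10_000, 100)  # placeholder chemical descriptors
y = np.random.choice(["agonist", "antagonist", "inactive", "inconclusive"], size=10_000)

# Carve off a 20% test set, then split the remainder 75/25 into train and
# validation (60/20/20 overall), stratifying on the label both times.
X_tmp, X_test, y_tmp, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.25, stratify=y_tmp, random_state=0)

for name, labels in [("train", y_train), ("val", y_val), ("test", y_test)]:
    _, counts = np.unique(labels, return_counts=True)
    print(name, np.round(counts / counts.sum(), 3))  # near-identical class rates
```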
4

Mudge, Christopher, Glenn Suir and Benjamin Sperry. Unmanned aircraft systems and tracer dyes: potential for monitoring herbicide spray distribution. Engineer Research and Development Center (U.S.), October 2023. http://dx.doi.org/10.21079/11681/47705.

Full text
Abstract
Chemical control of nuisance aquatic vegetation has long been the most widely utilized management tool due to its high level of efficacy, limited environmental impacts, and relatively low cost. However, imprecise application of herbicides can lead to uncontrolled invasive plants and unintended management costs. Therefore, precision herbicide delivery techniques are being developed to improve invasive plant control and minimize impacts to non-target plants. These technological advancements have the potential to enhance aquatic ecosystem protection from invasive species while reducing associated management costs. Despite the benefits of using registered herbicides for aquatic plant control in efforts to restore aquatic habitats, their use is often misunderstood and opposed by public stakeholders. This can lead to significant challenges related to chemical control of nuisance aquatic vegetation. Thus, US Army Corps of Engineers (USACE) Districts seek improved methods to monitor and quantify the distribution (i.e., the amount of herbicide retained on plant foliage compared to that deposited into the water column) of herbicides applied in aquatic systems. Monitoring herbicide movement in aquatic systems can be tedious and costly using standard analytical methods. However, since the inert fluorescent tracer dye Rhodamine WT (RWT) closely mimics product movement in the aquatic environment, it has been used as a cost-effective surrogate for herbicide tracing. The use of RWT (or other inert tracer dyes) can be an efficient way to quantify herbicide retention and deposition following foliar treatments. However, the collection of operational spray deposition data in large populations of invasive floating and emergent plant stands is labor intensive and costly. One proposed solution is the use of remote sensing methods as an alternative to traditional in situ sampling. Specifically, using unmanned aircraft systems (UAS) in conjunction with RWT could provide more efficient monitoring and quantification of herbicide spray distribution and in-water concentrations when RWT is used in combination with herbicides. A better understanding of UAS capabilities and limitations is key as this technology is being explored for improved and integrated management of aquatic plants in the U.S. This technical note (TN) reviews the literature to assess the state of knowledge and technologies that can assist USACE Districts and partners with tracking herbicide movement (using RWT as a surrogate or additive), which could improve operational monitoring, thereby reducing the level of uncertainty related to chemical applications and non-target impacts and improving management in aquatic systems.
5

Bédard, K., A. Marsh, M. Hillier and Y. Music. 3D geological model of the Western Canadian Sedimentary Basin in Saskatchewan, Canada. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/331747.

Full text
Abstract
The Western Canadian Sedimentary Basin (WCSB) covers a large part of southern Saskatchewan and hosts many resources, such as critical mineral deposits (e.g., potash, helium and lithium) as well as oil and gas reservoirs, and is also targeted for deep CO2 storage projects. There is also growing interest in the groundwater resources, the geothermal potential and the hydrogen recovery potential. These applications require knowledge of the subsurface geology, such as formation thickness and depth, relationships with adjacent formations or unconformities and, ultimately, the distribution of physical properties within the basin. 3D geological models can provide this knowledge since they characterize the geometry of subsurface geological features. In addition, they can be used as a framework for fluid-flow simulation and for estimating the distribution of a variety of properties. The 3D geological model presented in this report consists of 51 geological units, of which 49 are stratigraphic units spanning from the Cambrian Deadwood Formation at the base of the sequence to the Upper Cretaceous Belly River Formation at the top, plus the undivided Precambrian and a preliminary Quaternary unit. The model is cut by 7 major regional unconformities, including the base of the Quaternary sediments. The regional model was constrained using oil and gas well data interpretations, provincial-scale bedrock geology maps and knowledge from the previously interpreted areal extent of the Phanerozoic strata. A hybrid explicit-implicit modelling approach was employed to produce the 3D geological model of the WCSB in Saskatchewan using Gocad/SKUA™ geomodelling software.
6

Baker, Michael. DTRS56-02-D-70036-16 Mechanical Damage. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), April 2009. http://dx.doi.org/10.55274/r0011844.

Full text
Abstract
This report reviews and summarizes the current state of knowledge and practice related to mechanical damage in natural gas and hazardous liquid steel pipelines, with a particular focus on transmission pipelines. Comprehensive voluntary interviews were conducted with 10 pipeline operators who represent a diverse cross-section of industry professionals in the United States, Canada, and Europe. The interviews, which focused on operator practices for detection, characterization, and mitigation of mechanical damage on both gas and liquid transmission and gas distribution pipelines (the latter examined for comparison purposes), provided an invaluable source of data for the development of this report. Operator practices associated with the prevention of mechanical damage primarily resulting from excavation damage were also extensively covered in the interviews. The inquiry primarily included pipelines that comprise transmission systems, but gas distribution companies also reported on their experience with distribution systems consisting of both steel and plastic pipe, the latter reviewed for a comprehensive discussion of the operator's damage prevention programs and issues. Pipeline geographic locations included remote and rugged terrain, rural areas, and constrained urban environments.
7

McMartin, I., D. E. Kerr, M. B. McClenaghan, A. Duk-Rodkin, T. Tremblay, M. Parent and J. M. Rice. Introduction and Summary. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/331419.

Full text
Abstract
This bulletin summarizes surficial geology knowledge and data produced by the Geo-mapping for Energy and Minerals (GEM) program in the last decade and provides an updated understanding of the nature, distribution, and history of surficial deposits in various glacial terrain types of Canada's North. The advancement in various aspects of surficial geology and the evolution of certain concepts and methods form the subject of the papers that make up this bulletin. Specifically, the papers discuss the status of surficial geology mapping in northern Canada and the development of standards to facilitate map release; highlights from selected GEM surficial geochemical and indicator mineral surveys and the establishment of protocols for drift prospecting; and the revised glacial histories and surficial geology in various regions, from the Mackenzie Mountains to the Labrador coast. This introductory paper to Bulletin 611 describes the scope of the publication and provides a summary of major surficial geology contributions to the GEM program in northern Canada. Remaining knowledge gaps and outstanding issues suggest ideas for future research topics and regions of interest that could inform decisions on mineral exploration and land-use management.
8

McMartin, I., D. E. Kerr, M. B. McClenaghan, A. Duk-Rodkin, T. Tremblay, M. Parent and J. M. Rice. Introduction et Sommaire. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/331427.

Full text
Abstract
This Bulletin summarizes surficial geology knowledge and data produced by the Geo-mapping for Energy and Minerals (GEM) program in the last decade, and provides an updated understanding of the nature, distribution, and history of surficial deposits in various glacial terrain types of Canada's North. The advancement in various aspects of surficial geology and the evolution of certain concepts and methods form the subject of the papers that make up this bulletin. Specifically, the status of surficial geology mapping in northern Canada and the development of standards to facilitate map release; highlights from selected GEM surficial geochemical and indicator mineral surveys and the establishment of protocols for drift prospecting; and the revised glacial histories and surficial geology in various regions, from the Mackenzie Mountains to the Labrador coast, are discussed. This introductory paper to Bulletin 611 describes the scope of the publication and provides a summary of major surficial geology contributions to the GEM program in northern Canada. Remaining knowledge gaps and outstanding issues suggest ideas for future research topics and regions of interest that could inform decisions on mineral exploration and land-use management.
9

Gantzer, Clark J., Shmuel Assouline and Stephen H. Anderson. Synchrotron CMT-measured soil physical properties influenced by soil compaction. United States Department of Agriculture, February 2006. http://dx.doi.org/10.32747/2006.7587242.bard.

Full text
Abstract
Methods were developed to quantify soil pore connectivity, tortuosity, and pore size as altered by compaction. Air-dry soil cores were scanned for x-ray computed microtomography at the GeoSoilEnviroCARS sector of the Advanced Photon Source at the Argonne facility. Data were collected on the APS bending magnet Sector 13. Soil sample cores of 5 by 5 mm were studied. Skeletonization algorithms in the 3DMA-Rock software of Lindquist et al. were used to extract pore structure. We numerically investigated the spatial distribution of six geometrical characteristics of the pore structure of repacked Hamra soil from three-dimensional synchrotron computed microtomography (CMT) images. We analyzed images representing core volumes of 58.3 mm³ with average porosities of 0.44, 0.35, and 0.33. Cores were packed with <2 mm and <0.5 mm sieved soil. The core samples were imaged at 9.61 µm resolution. Spatial distributions were obtained for pore path length and coordination number, pore-throat size, and nodal pore volume. The spatial distributions were computed using a three-dimensional medial axis analysis of the void space in the image. We used a newly developed aggressive throat computation to find the throat and pore partitioning needed for higher-porosity media such as soil. Results show that the coordination number distribution measured from the medial axis was reasonably fit by an exponential relation P(C) = 10^(-C/C0). Data for the characteristic area were also reasonably well fit by the relation P(A) = 10^(-A/A0). Results indicate that compression preferentially affects the largest pores, reducing them in size. When compaction reduced porosity from 44% to 33%, the average pore volume was reduced by 30% and the average pore-throat area by 26%. Compaction increased the shortest-path interface tortuosity by about 2%. Soil structure alterations induced by compaction, assessed using quantitative morphology, show that the resolution is sufficient to discriminate soil cores. This study shows that analysis of CMT can provide information to assist in the assessment of soil management to ameliorate soil compaction.
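A hedged sketch of how such an exponential relation can be fitted: since P(C) = 10^(-C/C0) is linear in C on a log10 scale, C0 falls out of a least-squares line fit. The coordination-number frequencies below are invented for illustration, not the study's data.

```python
# Illustrative fit of P(C) = 10^(-C/C0) with invented frequencies: take
# log10 of P, fit a line in C, and recover C0 from the slope.
import numpy as np

C = np.array([3, 4, 5, 6, 7, 8, 9])  # coordination numbers
P = np.array([0.32, 0.17, 0.09, 0.047, 0.025, 0.013, 0.007])  # relative frequencies

slope, intercept = np.polyfit(C, np.log10(P), 1)  # log10 P ~ -C/C0 + const
C0 = -1.0 / slope
print(f"fitted C0 = {C0:.2f}")
```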
10

Chopra, Deepta, Kas Sempere and Meenakshi Krishnan. Assessing Unpaid Care Work: A Participatory Toolkit. Institute of Development Studies, March 2021. http://dx.doi.org/10.19088/ids.2021.016.

Full text
Abstract
This is a participatory toolkit for understanding unpaid care work and its distribution within local communities and families. Together, these tools provide a way of ascertaining and capturing research participants’ understanding of women’s unpaid care work – giving special attention to the lived experiences of carrying out unpaid care work and receiving care. Please note that these tools were developed and used in a pre-Covid-19 era and that they are designed to be implemented through face-to-face interactions rather than online means. We developed the first iteration of these tools in our ‘Balancing Care Work and Paid Work’ project as part of the Growth of Economic Opportunities for Women (GrOW) programme. The mixed-methods project sought to collect data across four countries – India, Nepal, Tanzania, and Rwanda – with data collected in four sites in each country (16 sites in total). The participatory tools were developed with two main intentions: (1) as a data collection tool to gain a broader understanding of the social norms and perspectives of the wider community in each of the 16 sites; and (2) to be implemented with our local partners as a sensitisation tool for the community regarding women’s unpaid care work burdens. While it is not essential to apply these tools in the order that they are presented, or even all of them, we would suggest that this toolkit be used in its entirety, to gather in-depth knowledge of social norms around the distribution of unpaid care, and the impacts that these have on care providers’ lives and livelihoods.