A selection of scholarly literature on the topic "ANALYZE BIG DATA"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "ANALYZE BIG DATA".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the online abstract of the work, if these details are available in the metadata.

Journal articles on the topic "ANALYZE BIG DATA"

1

Dhar, Vasant. "Can Big Data Machines Analyze Stock Market Sentiment?" Big Data 2, no. 4 (December 2014): 177–81. http://dx.doi.org/10.1089/big.2014.1528.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Venkateswara Reddy, R., and D. Murali. "Analyzing Indian healthcare data with big data." International Journal of Engineering & Technology 7, no. 3.29 (August 24, 2018): 88. http://dx.doi.org/10.14419/ijet.v7i3.29.18467.

Full text of the source
Abstract:
Big Data refers to the enormous volumes of data being generated today. Organizations use Big Data to analyze the past and predict the future in order to make profits and gain a competitive edge in the market. Big Data analytics has been adopted in almost every field: retail, banking, governance, and healthcare. Big Data can be used to analyze healthcare data for better planning and better decision making, leading to improved healthcare standards. In this paper, Indian health data from 1950 to 2015 are analyzed using various queries. Healthcare generates a considerable amount of heterogeneous data, but without the right methods of analysis these data are useless. Big Data analysis with Hadoop plays an active role in performing significant real-time analyses of enormous amounts of data and can predict emergency situations before they happen.
APA, Harvard, Vancouver, ISO, and other styles
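The query-driven Hadoop workflow this abstract describes, loading a historical table and running aggregate queries over it, can be sketched in a few lines of PySpark. The sketch below is illustrative only: the file path and the column names (year, infant_mortality) are assumptions, not details from the paper.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("india-health-queries").getOrCreate()

# Load a (hypothetical) CSV of yearly health indicators from HDFS.
df = spark.read.csv("hdfs:///data/india_health_1950_2015.csv",
                    header=True, inferSchema=True)

# One example query: average of an indicator per decade.
per_decade = (
    df.withColumn("decade", (F.col("year") / 10).cast("int") * 10)
      .groupBy("decade")
      .agg(F.avg("infant_mortality").alias("avg_infant_mortality"))
      .orderBy("decade")
)
per_decade.show()
```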
3

Li, Ruowang, Dokyoon Kim, and Marylyn D. Ritchie. "Methods to analyze big data in pharmacogenomics research." Pharmacogenomics 18, no. 8 (June 2017): 807–20. http://dx.doi.org/10.2217/pgs-2016-0152.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Ahmed, Waseem, and Lisa Fan. "Analyze Physical Design Process Using Big Data Tool." International Journal of Software Science and Computational Intelligence 7, no. 2 (April 2015): 31–49. http://dx.doi.org/10.4018/ijssci.2015040102.

Full text of the source
Abstract:
The Physical Design (PD) Data tool is designed mainly to help ASIC design engineers achieve chip design process quality, optimization, and performance targets. The tool uses data mining techniques to handle an existing unstructured data repository: it extracts the relevant data and loads it into a well-structured database. A data archive mechanism initially creates, and then updates on a daily basis, an archive repository. The log information provided to the PD tool is in a completely unstructured format, which is parsed by a regular expression (regex) based data extraction methodology that converts the input into structured tables. The data undergo a cleansing process before being fed into the operational database. The PD tool also ensures data integrity and data validity. It helps design engineers compare, correlate, and inter-relate the results of their current work with work done in the past, giving them a clear picture of the progress made and of the deviations that occurred. Data analysis can be performed using various features offered by the tool, such as graphical and statistical representations.
APA, Harvard, Vancouver, ISO, and other styles
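The regex-based extraction step described above follows a common pattern: match each unstructured log line against a named-group pattern and emit structured rows. A minimal Python sketch of that pattern follows; the log format, field names, and file paths are hypothetical, not taken from the tool.

```python
import re
import csv

# Illustrative record shape: "<timestamp> <stage> cell=<name> slack=<ps>"
LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+"
    r"(?P<stage>\w+)\s+cell=(?P<cell>\S+)\s+slack=(?P<slack>-?\d+(\.\d+)?)"
)

def parse_log(path: str, out_path: str) -> None:
    """Extract structured rows from an unstructured log file into CSV."""
    with open(path) as src, open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["ts", "stage", "cell", "slack"])
        for line in src:
            m = LINE_RE.search(line)
            if m:  # keep only lines matching the expected record shape
                writer.writerow([m["ts"], m["stage"], m["cell"], m["slack"]])
```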
5

Zhang, Yucheng Eason, Siqi Liu, Shan Xu, Miles M. Yang, and Jian Zhang. "Integrating the Split/Analyze/Meta-Analyze (SAM) Approach and a Multilevel Framework to Advance Big Data Research in Psychology." Zeitschrift für Psychologie 226, no. 4 (October 2018): 274–83. http://dx.doi.org/10.1027/2151-2604/a000345.

Full text of the source
Abstract:
Though big data research has undergone dramatic developments in recent decades, it has mainly been applied in disciplines such as computer science and business. Psychology research that applies big data to examine research issues in psychology is largely lacking. One of the major challenges regarding the use of big data in psychology is that many researchers in the field may not have sufficient knowledge of big data analytical techniques that are rooted in computer science. This paper integrates the split/analyze/meta-analyze (SAM) approach and a multilevel framework to illustrate how to use the SAM approach to address multilevel research questions with big data. Specifically, we first introduce the SAM approach and then illustrate how to implement it to integrate two big datasets at the firm and country levels. Finally, we discuss theoretical and practical implications, proposing future research directions for psychology scholars.
APA, Harvard, Vancouver, ISO, and other styles
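The split/analyze/meta-analyze idea can be sketched compactly: partition the data, fit the same model on each shard, and pool the shard estimates with inverse-variance weights. The Python sketch below uses a single OLS slope as a stand-in for whatever model is of interest; the column names x and y are assumptions, and this is not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def sam_slope(df: pd.DataFrame, n_splits: int = 10):
    """Split/Analyze/Meta-analyze: estimate the x->y slope on shards, then pool."""
    # Split: partition the rows into shards small enough to analyze separately.
    parts = np.array_split(np.arange(len(df)), n_splits)
    estimates, variances = [], []
    for idx in parts:
        shard = df.iloc[idx]
        # Analyze: fit the same model independently on each shard.
        fit = sm.OLS(shard["y"], sm.add_constant(shard[["x"]])).fit()
        estimates.append(fit.params["x"])
        variances.append(fit.bse["x"] ** 2)
    # Meta-analyze: fixed-effect pooling with inverse-variance weights.
    w = 1.0 / np.asarray(variances)
    pooled = float(np.sum(w * estimates)) / w.sum()
    return pooled, float(np.sqrt(1.0 / w.sum()))
```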
6

Syedibrahim, S. P. "Big Data Analytics Framework to Analyze Student’s Performance." International Journal of Computational Complexity and Intelligent Algorithms 1, no. 1 (2018): 1. http://dx.doi.org/10.1504/ijccia.2018.10021266.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Gogołek, Włodzimierz. "Refining Big Data." Bulletin of Science, Technology & Society 37, no. 4 (December 2017): 212–17. http://dx.doi.org/10.1177/0270467619864012.

Full text of the source
Abstract:
Refining big data is a new multipurpose way to find, collect, and analyze information obtained from the web and off-line information sources about any research subject. It gives the opportunity to investigate (with an assumed level of statistical significance) the past and current status of information on a subject, and it can even predict the future. The refining of big data makes it possible to quantitatively investigate a wide spectrum of raw information on significant human issues—social, scientific, political, business, and others. Refining creates a space for new, rich sources of information and opens innovative ways for research. The article describes a procedure for refining big data and gives examples of its use.
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Xin Xing, Xing Wu, and Shu Ji Dai. "The Paradoxes of Big Data." Applied Mechanics and Materials 743 (March 2015): 603–6. http://dx.doi.org/10.4028/www.scientific.net/amm.743.603.

Full text of the source
Abstract:
The era of Big Data poses a big challenge to our way of living and thinking. Big Data refers to things that can be done at a large scale but cannot be done at a smaller one. There are many paradoxes of Big Data: in this new world far more data can be analyzed, and although using all the data can make a dataset messy and cost some accuracy, it can sometimes lead to better conclusions. As massive quantities of information produced by and about people and their interactions are exposed on the Internet, will large-scale search and analysis of data help people create better services, goods, and tools, or will it just lead to privacy incursions and invasive marketing? In this article, we offer three main provocations and, based on our analysis, construct models to help explain the striking contradictions in Big Data.
APA, Harvard, Vancouver, ISO, and other styles
9

Valdez, Alicia, Griselda Cortes, Laura Vazquez, Adriana Martinez, and Gerardo Haces. "Big Data Analysis Proposal for Manufacturing Firm." European Journal of Electrical Engineering and Computer Science 5, no. 1 (February 15, 2021): 68–75. http://dx.doi.org/10.24018/ejece.2021.5.1.298.

Full text of the source
Abstract:
The analysis of large volumes of data is an important activity in manufacturing companies, since it improves the decision-making process. Data analysis has made it possible to personalize services and products and to see how product consumption evolves, producing results that add value to companies in real time. In this case study, carried out at a large manufacturer of electronic components such as robots and AC motors, a strategy is proposed for analyzing large volumes of data to support the decision-making process. The activities proposed in the strategy include: analysis of the technological architecture, selection of the business processes to be analyzed, installation and configuration of the Hadoop software, ETL activities, and data analysis and visualization of the results. With the proposed strategy, data on nine production factors of the motor PCI boards, the factors with the greatest incidence in the rejection of components, were analyzed; based on the analysis a solution was implemented, which has allowed a 28.2% decrease in the rejection percentage.
APA, Harvard, Vancouver, ISO, and other styles
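A small part of the described analysis, ranking production factors by their incidence on rejection, can be sketched with pandas. The file and column names below are invented for illustration; the case study itself ran on Hadoop.

```python
import pandas as pd

# Hypothetical layout: one row per inspected PCI board, a boolean `rejected`
# column, plus one column per production factor (solder_temp_band, line, ...).
df = pd.read_csv("pci_inspections.csv")

factors = ["solder_temp_band", "line", "shift"]  # illustrative factor names
for f in factors:
    # Mean of a boolean column = rejection rate within each factor level.
    rate = df.groupby(f)["rejected"].mean().sort_values(ascending=False)
    print(f"rejection rate by {f}:\n{rate}\n")
```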
10

Raich, Vivek, and Pankaj Maurya. "Analytical Study on Big Data." International Journal of Advanced Research in Computer Science and Software Engineering 8, no. 5 (June 2, 2018): 75. http://dx.doi.org/10.23956/ijarcsse.v8i5.668.

Full text of the source
Abstract:
In the age of information technology, big data stores keep growing. Huge amounts of data are available to decision makers, a result of the progress of information technology and its wide growth in many areas of business, engineering, medical, and scientific studies. Big data does not just mean data that is bigger in size; there are several types that are not easy to handle, and technology is required to manage them. Because data keeps increasing in this way, it is important to study and manage these datasets in line with requirements so that the necessary information can be obtained. The aim of this paper is to analyze some of the analytic methods and tools that can be applied to large data. In addition, applications of big data are analyzed, in which decision makers work on big data and use the resulting insights for different purposes.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "ANALYZE BIG DATA"

1

SHARMA, DIVYA. "APPLICATION OF ML TO MAKE SENCE OF BIOLOGICAL BIG DATA IN DRUG DISCOVERY PROCESS." Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18378.

Full text of the source
Abstract:
Scientists have been working for years to assemble and accumulate data from biological sources to answer many fundamental questions. A tremendous amount of data has been collected and is still increasing at an exponential rate, so it has become unachievable for a human being alone to handle or analyze it. Most data collection and maintenance is now done in digital format, and organizations therefore need better data management and analysis to convert vast data resources into insights and achieve their objectives. The continuous explosion of information from both biomedical and healthcare sources calls for urgent solutions. Healthcare data needs to be closely combined with biomedical research data to make it more effective in providing personalized medicine and better treatment procedures. Big data analytics can therefore help integrate large data sets for proper management, decision-making, and cost-effectiveness in any medical/healthcare organization. The scope of the thesis is to highlight the need for big data analytics in healthcare, explain the data processing pipeline, and describe the machine learning used to analyze big data.
APA, Harvard, Vancouver, ISO, and other styles
2

Uřídil, Martin. "Big data - použití v bankovní sféře." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-149908.

Full text of the source
Abstract:
The volume of global data is growing, offering new possibilities to those market participants who know how to take advantage of it. Data, information, and knowledge are a new, highly regarded commodity, especially in the banking industry. Traditional data analytics is intended for processing data with known structure and meaning, but how can we gain knowledge from data with no such structure? The thesis focuses on Big Data analytics and its use in the banking and financial industry. Its main goals are to define specific applications in this area and to describe the benefits for international and Czech banking institutions. The thesis is divided into four parts. The first part defines the Big Data trend, and the second specifies activities and tools in banking. The purpose of the third part is to apply Big Data analytics to those activities and show its possible benefits. The last part focuses on the particularities of Czech banking and shows the actual situation of Big Data in Czech banks. The thesis gives a complex description of the possibilities of using Big Data analytics; I see my personal contribution in the detailed characterization of its application to real banking activities.
APA, Harvard, Vancouver, ISO, and other styles
3

Flike, Felix, and Markus Gervard. "BIG DATA-ANALYS INOM FOTBOLLSORGANISATIONER En studie om big data-analys och värdeskapande." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20117.

Full text of the source
Abstract:
Big data is a relatively new concept, but the phenomenon has existed for a long time. It can be described in terms of five Vs: volume, veracity, variety, velocity, and value. The analysis of Big Data has proven valuable to organizations in decision making, in generating measurable economic benefits, and in improving operations. In sports, this took off in earnest in the early 2000s with the baseball organization Oakland Athletics, which began recruiting players based on their statistics rather than on scouts' assessments of their ability, with great success. Other organizations followed, and before long Big Data analysis was being used in all major sports to gain advantages over competitors. In the Swedish context, the use of these tools is still relatively new, and many organizations may have moved too quickly in implementing them. Based on a case analysis, this study examines how football organizations work with Big Data analysis related to their players. The results show that both organizations studied create value from their investments that helps them reach their strategic goals, but they do so in different ways; which approach is most effective in terms of value creation cannot be answered by this study.
APA, Harvard, Vancouver, ISO, and other styles
4

Šoltýs, Matej. "Big Data v technológiách IBM." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-193914.

Full text of the source
Abstract:
This diploma thesis presents Big Data technologies and their possible use cases and applications. The theoretical part first focuses on the definition of the term Big Data and then on Big Data technology, particularly the Hadoop framework. The principles of Hadoop, such as distributed storage and data processing, and its individual components are described. The largest vendors of Big Data technologies are also presented. The end of this part describes possible use cases of Big Data technologies together with some case studies. The practical part describes the implementation of a demo example of Big Data technologies and is divided into two chapters. The first chapter deals with the conceptual design of the demo example, the products used, and the architecture of the solution. The second chapter describes the implementation of the demo example, from the preparation of the demo environment to the creation of applications. The goals of this thesis are the description and characterization of Big Data, a presentation of the largest vendors and their Big Data products, a description of possible use cases of Big Data technologies, and especially the implementation of a demo example in Big Data tools from IBM.
APA, Harvard, Vancouver, ISO, and other styles
5

Åkestrand, Victoria, and My Wisen. "Big Data-analyser och beslutsfattande i svenska myndigheter." Thesis, Högskolan i Halmstad, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-34752.

Full text of the source
Abstract:
There is a great deal of data that can be collected about people, and the amount that can be collected is growing. More and more organizations are stepping into Big Data use, and Swedish government agencies are among them. Analyzing Big Data can produce a better basis for decisions, but there are problems in how the collected data should be analyzed and used in the decision-making process. The results of the study show that Swedish agencies cannot use existing decision models for decisions based on a Big Data analysis. The results also show that Swedish agencies do not follow given steps in the decision process; rather, making a decision is mostly a matter of identifying the content of the Big Data analysis. Since decisions are based on what the Big Data analysis indicates, the surrounding activities, such as data collection, data quality assurance, data analysis, and data visualization, become all the more essential.
APA, Harvard, Vancouver, ISO, and other styles
6

Kleisarchaki, Sofia. "Analyse des différences dans le Big Data : Exploration, Explication, Évolution." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM055/document.

Full text of the source
Abstract:
Variability in Big Data refers to data whose meaning changes continuously. For instance, data derived from social platforms and from monitoring applications exhibits great variability. This variability is essentially the result of changes in the underlying data distributions of attributes of interest, such as user opinions/ratings, computer network measurements, etc. Difference Analysis aims to study variability in Big Data. To achieve that goal, data scientists need: (a) measures to compare data in various dimensions, such as age for users or topic for network traffic, and (b) efficient algorithms to detect changes in massive data. In this thesis, we identify and study three novel analytical tasks to capture data variability: Difference Exploration, Difference Explanation, and Difference Evolution. Difference Exploration is concerned with extracting the opinion of different user segments (e.g., on a movie rating website). We propose appropriate measures for comparing user opinions in the form of rating distributions, and efficient algorithms that, given an opinion of interest in the form of a rating histogram, discover agreeing and disagreeing populations. Difference Explanation tackles the question of providing a succinct explanation of differences between two datasets of interest (e.g., buying habits of two sets of customers). We propose scoring functions designed to rank explanations, and algorithms that guarantee explanation conciseness and informativeness. Finally, Difference Evolution tracks change in an input dataset over time and summarizes change at multiple time granularities. We propose a query-based approach that uses similarity measures to compare consecutive clusters over time. Our indexes and algorithms for Difference Evolution are designed to capture different data arrival rates (e.g., low, high) and different types of change (e.g., sudden, incremental). The utility and scalability of all our algorithms rely on hierarchies inherent in data (e.g., time, demographic). We run extensive experiments on real and synthetic datasets to validate the usefulness of the three analytical tasks and the scalability of our algorithms. We show that Difference Exploration guides end-users and data scientists in uncovering the opinion of different user segments in a scalable way. Difference Explanation reveals the need to parsimoniously summarize differences between two datasets and shows that parsimony can be achieved by exploiting hierarchy in data. Finally, our study on Difference Evolution provides strong evidence that a query-based approach is well-suited to tracking change in datasets with varying arrival rates and at multiple time granularities. Similarly, we show that different clustering approaches can be used to capture different types of change.
APA, Harvard, Vancouver, ISO, and other styles
7

Nováková, Martina. "Analýza Big Data v oblasti zdravotnictví." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-201737.

Full text of the source
Abstract:
This thesis deals with the analysis of Big Data in healthcare. Its aims are to define the term Big Data, to acquaint the reader with data growth in the world and in the health sector, to explain the concept of a data expert, and to define the members of a data expert team. The following chapters define the phases of Big Data analysis according to the methodology of the EMC2 company and describe the basic technologies for analyzing Big Data. A beneficial and interesting part deals with the tasks in which Big Data technologies are already used in healthcare. In the practical part, I perform a Big Data analysis task focusing on meteorotropic diseases, using real medical and meteorological data. The reader is acquainted not only with one of the recommended methods of analysis and the statistical models used, but also with terms from the fields of biometeorology and healthcare. An integral part of the analysis is information about its limitations, the consultation of the results, and the conclusions of experts in meteorology and healthcare.
APA, Harvard, Vancouver, ISO, and other styles
8

El, alaoui Imane. "Transformer les big social data en prévisions - méthodes et technologies : Application à l'analyse de sentiments." Thesis, Angers, 2018. http://www.theses.fr/2018ANGE0011/document.

Full text of the source
Abstract:
Extracting public opinion by analyzing Big Social Data has grown substantially due to its interactive, real-time nature. Our actions on social media generate digital traces that are closely related to our personal lives and can be used to accompany major events by analyzing people's behavior. It is in this context that we are particularly interested in Big Data analysis methods. The volume of these daily-generated traces increases exponentially, creating massive loads of information known as big data. Such volumes of information can neither be stored nor handled with conventional tools, so new tools have emerged to help us cope with the big data challenges. The aim of the first part of this manuscript is therefore to go through the pros and cons of these tools, compare their respective performance, and highlight some of their interrelated applications such as health, marketing, and politics. We also introduce the general context of big data, Hadoop, and its different distributions, and provide a comprehensive overview of big data tools and their related applications. The main contribution of this PhD thesis is to propose a generic analysis approach to automatically detect opinion trends on given topics from big social data. Given a very small set of manually annotated hashtags, the proposed approach transfers information from the hashtags' known sentiments (positive or negative) to individual words. The resulting lexical resource is a large-scale polarity lexicon whose efficiency is measured against different sentiment analysis tasks. The comparison of our method with different paradigms in the literature confirms the benefit of our method in designing accurate sentiment analysis systems. Indeed, our model reaches an overall accuracy of 90.21%, significantly exceeding the current models on social sentiment analysis.
APA, Harvard, Vancouver, ISO, and other styles
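The thesis's core idea, transferring sentiment from a few annotated hashtags to the words that co-occur with them, can be sketched as follows. This is a deliberately simplified Python illustration with invented seed hashtags, not the author's method in full.

```python
from collections import defaultdict

# Seed polarity for a handful of manually annotated hashtags (illustrative).
SEEDS = {"#love": 1.0, "#win": 1.0, "#fail": -1.0, "#worst": -1.0}

def build_lexicon(tweets):
    """Average the known hashtag polarity over the words co-occurring with it."""
    totals, counts = defaultdict(float), defaultdict(int)
    for tweet in tweets:
        tokens = tweet.lower().split()
        seed_scores = [SEEDS[t] for t in tokens if t in SEEDS]
        if not seed_scores:
            continue  # no annotated hashtag in this tweet
        polarity = sum(seed_scores) / len(seed_scores)
        for tok in tokens:
            if tok not in SEEDS:
                totals[tok] += polarity
                counts[tok] += 1
    return {w: totals[w] / counts[w] for w in totals}

lexicon = build_lexicon(["what a #win today", "traffic was the #worst again"])
print(lexicon)  # each word gets the mean polarity of its seed contexts
```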
9

Pragarauskaitė, Julija. "Frequent pattern analysis for decision making in big data." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130701_092451-80961.

Full text of the source
Abstract:
Huge amounts of digital information are stored in the world today, and the amount is increasing by quintillions of bytes every day. Approximate data mining algorithms are very important for dealing efficiently with such amounts of data, given the computation speed required by various real-world applications, whereas exact data mining methods tend to be slow and are best employed where precise results are of the highest importance. This thesis focuses on several data mining tasks related to the analysis of big data: frequent pattern mining and visual representation. For mining frequent patterns in big data, three novel approximate methods are proposed and evaluated on real and artificial databases: • The Random Sampling Method (RSM) creates a random sample of the original database and classifies sequences as frequent or rare based on the analysis of the random sample. A significant benefit is a theoretical estimate of the classification errors made by this method, using standard statistical methods. • The Multiple Re-sampling Method (MRM) is an improved version of RSM, with a re-sampling strategy that decreases the probability of incorrectly classifying sequences as frequent or rare. • The Markov Property Based Method (MPBM) relies upon the Markov property. MPBM requires reading the original database several times (the number equals the order of the Markov process) and then calculates the empirical frequencies using the Markov property. For visual representation... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
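The Random Sampling Method described above can be sketched for the simpler case of frequent items (rather than sequences): draw a random sample, count supports in the sample, and classify items against the support threshold. The Python sketch below is an illustration under those simplifying assumptions, not the thesis's implementation.

```python
import random

def rsm_frequent_items(db, min_support=0.1, sample_frac=0.01, seed=7):
    """Estimate the frequent items from a random sample of the database.

    Classifying an item as frequent/rare from a sample is approximate; the
    error probability can be bounded with standard binomial confidence
    intervals, which is the theoretical benefit RSM offers.
    """
    n = max(1, int(len(db) * sample_frac))
    sample = random.Random(seed).sample(db, n)
    counts = {}
    for transaction in sample:
        for item in set(transaction):
            counts[item] = counts.get(item, 0) + 1
    return {item for item, c in counts.items() if c / n >= min_support}

# Tiny usage example (sample_frac=1.0 degenerates to an exact count).
print(rsm_frequent_items([["a", "b"], ["a"], ["a", "c"]],
                         min_support=0.5, sample_frac=1.0))  # {'a'}
```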
10

Landelius, Cecilia. "Data governance in big data : How to improve data quality in a decentralized organization." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301258.

Full text of the source
Abstract:
The use of the internet has increased the amount of data available and gathered. Companies are investing in big data analytics to gain insights from this data. However, the value of the analysis, and of the decisions made based on it, depends on the quality of the underlying data. For this reason, data quality has become a prevalent issue for organizations. Additionally, failures in data quality management are often due to organizational aspects. Given the growing popularity of decentralized organizational structures, there is a need to understand how a decentralized organization can improve data quality. This thesis conducts a qualitative single case study of an organization in the logistics industry that is currently shifting towards becoming data driven and struggling to maintain data quality. The purpose of the thesis is to answer the questions: • RQ1: What is data quality in the context of logistics data? • RQ2: What are the obstacles to improving data quality in a decentralized organization? • RQ3: How can these obstacles be overcome? Several data quality dimensions were identified and categorized as critical issues, issues, and non-issues. From the gathered data, the dimensions completeness, accuracy, and consistency were found to be critical data quality issues. The three most prevalent obstacles to improving data quality were data ownership, data standardization, and understanding the importance of data quality. To overcome these obstacles, the most important measures are creating data ownership structures, implementing data quality practices, and changing the mindset of the employees to a data-driven mindset. The generalizability of a single case study is low; however, insights and trends can be derived from the results of this thesis and used for further studies and by companies undergoing similar transformations.
APA, Harvard, Vancouver, ISO, and other styles
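The three critical dimensions identified in the thesis (completeness, accuracy, consistency) can each be given a rough quantitative proxy. A minimal pandas sketch follows; the metrics chosen here are common illustrations, not the definitions used in the thesis, and a real accuracy check would need a reference ("gold") source.

```python
import pandas as pd

def quality_report(df: pd.DataFrame, dup_keys: list[str]) -> pd.Series:
    """Rough per-dataset proxy scores for three data quality dimensions."""
    completeness = 1 - df.isna().mean().mean()        # share of non-missing cells
    consistency = 1 - df.duplicated(dup_keys).mean()  # uniqueness of key columns,
                                                      # a simple consistency proxy
    # Accuracy needs a reference source; here only a placeholder sanity rule
    # (no negative values in numeric columns) stands in for it.
    accuracy = (df.select_dtypes("number") >= 0).mean().mean()
    return pd.Series({"completeness": completeness,
                      "consistency": consistency,
                      "accuracy_proxy": accuracy})
```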

Books on the topic "ANALYZE BIG DATA"

1

Cutt, Shannon, ed. Practical Statistics for Data Scientists: 50 Essential Concepts. Beijing: O’Reilly Media, 2017.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Baxevanis, Andreas D., and B. F. Francis Ouellette, eds. Bioinformatics: A practical guide to the analysis of genes and proteins. 2nd ed. New York, NY: Wiley-Interscience, 2001.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Python for Finance: Analyze Big Financial Data. O'Reilly Media, Incorporated, 2014.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Python for Finance: Analyze Big Financial Data. O'Reilly Media, 2014.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Python for Finance: Analyze Big Financial Data. O'Reilly Media, Incorporated, 2014.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Vanaria, Von. Big Data Solutions: Guides for Beginners to Analyze Big Data Using Python and C++ Programming: C++ Programming Language. Independently Published, 2021.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Pasupuleti, Pradeep, and Beulah Salome Purra. Data Lake Development with Big Data: Explore Architectural Approaches to Building Data Lakes That Ingest, Index, Manage, and Analyze Massive Amounts of Data Using Big Data Technologies. Packt Publishing, Limited, 2015.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Shilpi and Sumit Gupta. Real-Time Big Data Analytics: Design, Process, and Analyze Large Sets of Complex Data in Real Time. Packt Publishing, Limited, 2016.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Lai, Rudy, and Bartłomiej Potaczek. Hands-On Big Data Analytics with Pyspark: Analyze Large Datasets and Discover Techniques for Testing, Immunizing, and Parallelizing Spark Jobs. Packt Publishing, Limited, 2019.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Kearn, Marvin. Great Apartment Buildings: Learn the Tricks and Tips on How to Analyze Big Apartment Buildings: How to Find the Data for Big Apartment Buildings. Independently Published, 2021.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "ANALYZE BIG DATA"

1

Patel, Pragneshkumar, Sanjay Chaudhary, and Hasit Parmar. "Analyze the Impact of Weather Parameters for Crop Yield Prediction Using Deep Learning." In Big Data Analytics, 249–59. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-24094-2_17.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Zhou, Zhou, and XuJia Yao. "Analyze and Evaluate Database-Backed Web Applications with WTool." In Web and Big Data, 110–24. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85896-4_9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Ma, Xiaobin, Zhihui Du, Yankui Sun, Andrei Tchernykh, Chao Wu, and Jianyan Wei. "An Efficient Parallel Framework to Analyze Astronomical Sky Survey Data." In Big Scientific Data Management, 67–77. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28061-1_8.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Xuan Do, Canh, and Makoto Tsukai. "Exploring Potential Use of Mobile Phone Data Resource to Analyze Inter-regional Travel Patterns in Japan." In Data Mining and Big Data, 314–25. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61845-6_32.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Habyarimana, Ephrem, and Sofia Michailidou. "Genomics Data." In Big Data in Bioeconomy, 69–76. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71069-9_6.

Full text of the source
Abstract:
In silico prediction of plant performance is gaining increasing breeders' attention. Several statistical, mathematical, and machine learning methodologies for the analysis of phenotypic, omics, and environmental data typically use individual or a few data layers. Genomic selection is one of the applications where heterogeneous data, such as those from omics technologies, are handled, accommodating several genetic models of inheritance. There are many new high-throughput Next Generation Sequencing (NGS) platforms on the market producing whole-genome data at a low cost. Hence, large-scale genomic data can be produced and analyzed, enabling intercrosses and fast-paced recurrent selection. The offspring properties can be predicted instead of manually evaluated in the field. Breeders have a short time window to make decisions by the time they receive data, which is one of the major challenges in commercial breeding. To implement genomic selection routinely as part of breeding programs, data management systems and analytics capacity therefore have to be in place. Traditional relational database management systems (RDBMS), which are designed to store, manage, and analyze large-scale data, offer appealing characteristics, particularly when they are upgraded with capabilities for working with binary large objects. In addition, NoSQL systems are considered effective tools for managing high-dimensional genomic data. The MongoDB system, a document-based NoSQL database, has been used effectively to develop web-based tools for visualizing and exploring genotypic information. The Hierarchical Data Format (HDF5), a member of the high-performance distributed file systems family, has demonstrated superior performance with high-dimensional and highly structured data such as genomic sequencing data.
APA, Harvard, Vancouver, ISO, and other styles
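The chapter's point about HDF5's fit for large genotype matrices can be illustrated with h5py: store a markers-by-samples matrix chunked and compressed, then read chunk-aligned blocks. The layout, dataset name, and sizes below are assumptions made for illustration.

```python
import h5py
import numpy as np

# Illustrative layout: a markers x samples genotype matrix stored as int8,
# with 0/1/2 coding the count of the alternate allele.
with h5py.File("genotypes.h5", "w") as f:
    geno = np.random.randint(0, 3, size=(10_000, 500), dtype=np.int8)
    f.create_dataset("geno", data=geno, chunks=(1_000, 500),
                     compression="gzip")

with h5py.File("genotypes.h5", "r") as f:
    block = f["geno"][:1_000, :]    # read one chunk-aligned block, not the file
    freq = block.mean(axis=1) / 2   # alternate-allele frequency per marker
    print(freq[:5])
```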
6

Cano, Luis, Erick Hein, Mauricio Rada-Orellana, and Claudio Ortega. "A Case Study of Library Data Management: A New Method to Analyze Borrowing Behavior." In Information Management and Big Data, 112–20. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11680-4_12.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Huang, Qinglv, and Liang Yan. "Analyze Ming and Qing Literature Under Big Data Technology." In 2020 International Conference on Data Processing Techniques and Applications for Cyber-Physical Systems, 367–74. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1726-3_45.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Zhilyaeva, Irina A., Stanislav V. Suvorov, Natalia I. Tsarkova, and Anastasia D. Perekatova. "Application of Big Data to Analyze Illegal Passenger Transportation Offenses." In Cooperation and Sustainable Development, 3–8. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77000-6_1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Bian, Zhenxing. "Using Computer Blockchain Technology to Analyze the Development Trend of China's Modern Financial Industry." In Artificial Intelligence and Big Data for Financial Risk Management, 160–68. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003144410-10.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Din, Sadia, Awais Ahmad, Anand Paul, and Gwanggil Jeon. "Software-Defined Internet of Things to Analyze Big Data in Smart Cities." In Edge Computing, 91–106. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99061-3_6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "ANALYZE BIG DATA"

1

Dang, Xuan-Hong, Raji Akella, Somaieh Bahrami, Vadim Sheinin, and Petros Zerfos. "Unsupervised Threshold Autoencoder to Analyze and Understand Sentence Elements." In 2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018. http://dx.doi.org/10.1109/bigdata.2018.8622379.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Taeb, Maryam, Hongmei Chi, and Jie Yan. "Applying Machine Learning to Analyze Anti-Vaccination on Tweets." In 2021 IEEE International Conference on Big Data (Big Data). IEEE, 2021. http://dx.doi.org/10.1109/bigdata52589.2021.9671647.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Workman, T. Elizabeth, Michael Hirezi, Eduardo Trujillo-Rivera, Anita K. Patel, Julia A. Heneghan, James E. Bost, Qing Zeng-Treitler, and Murray Pollack. "A Novel Deep Learning Pipeline to Analyze Temporal Clinical Data." In 2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018. http://dx.doi.org/10.1109/bigdata.2018.8622099.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Dindokar, Ravikant, Neel Choudhury, and Yogesh Simmhan. "A meta-graph approach to analyze subgraph-centric distributed programming models." In 2016 IEEE International Conference on Big Data (Big Data). IEEE, 2016. http://dx.doi.org/10.1109/bigdata.2016.7840587.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Xiao, Wei. "Novel Online Algorithms for Nonparametric Correlations with Application to Analyze Sensor Data." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006483.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Munasinghe, Thilanka, Evan W. Patton, and Oshani Seneviratne. "IoT Application Development Using MIT App Inventor to Collect and Analyze Sensor Data." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006203.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Dhiman, Aarzoo, and Durga Toshniwal. "An Unsupervised Misinformation Detection Framework to Analyze the Users using COVID-19 Twitter Data." In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. http://dx.doi.org/10.1109/bigdata50022.2020.9378250.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Ordonez, Carlos. "Can we analyze big data inside a DBMS?" In Proceedings of the Sixteenth International Workshop on Data Warehousing and OLAP (DOLAP '13). New York, NY, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2513190.2513198.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Vuppalapati, Chandrasekar, Anitha Ilapakurti, Sandhya Vissapragada, Vanaja Mamaidi, Sharat Kedari, Raja Vuppalapati, Santosh Kedari, and Jaya Vuppalapati. "Application of Machine Learning and Government Finance Statistics for macroeconomic signal mining to analyze recessionary trends and score policy effectiveness." In 2021 IEEE International Conference on Big Data (Big Data). IEEE, 2021. http://dx.doi.org/10.1109/bigdata52589.2021.9672025.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Yin, Zhanyuan, Lizhou Fan, Huizi Yu, and Anne J. Gilliland. "Using a Three-step Social Media Similarity (TSMS) Mapping Method to Analyze Controversial Speech Relating to COVID-19 in Twitter Collections." In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. http://dx.doi.org/10.1109/bigdata50022.2020.9377930.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Reports of organizations on the topic "ANALYZE BIG DATA"

1

Alonso-Robisco, Andrés, and José Manuel Carbó. Machine Learning methods in climate finance: a systematic review. Madrid: Banco de España, February 2023. http://dx.doi.org/10.53479/29594.

Full text of the source
Abstract:
Preventing the materialization of climate change is one of the main challenges of our time. The involvement of the financial sector is a fundamental pillar in this task, which has led to the emergence of a new field in the literature, climate finance. In turn, the use of Machine Learning (ML) as a tool to analyze climate finance is on the rise, due to the need to use big data to collect new climate-related information and to model complex non-linear relationships. Considering the proliferation of articles in this field, and the potential of ML, we propose a review of the academic literature to assess how ML is enabling climate finance to scale up. The main contribution of this paper is to provide a structure of application domains in a highly fragmented research field, aiming to spur further innovative work from ML experts. To pursue this objective, we first perform a systematic search of three scientific databases to assemble a corpus of relevant studies. Using topic modeling (Latent Dirichlet Allocation), we uncover representative thematic clusters. This allows us to statistically identify seven granular areas where ML is playing a significant role in the climate finance literature: natural hazards, biodiversity, agricultural risk, carbon markets, energy economics, ESG factors & investing, and climate data. Second, we perform an analysis highlighting publication trends; and third, we show a breakdown of ML methods applied by research area.
APA, Harvard, Vancouver, ISO, and other styles
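The topic-modeling step the authors describe (LDA over a corpus of abstracts) looks roughly like the following scikit-learn sketch. The three toy documents and the number of topics are placeholders; the paper fits far larger models to its corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["flood risk insurance pricing", "carbon credit market design",
        "crop yield weather derivatives"]          # stand-ins for abstracts

counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)                         # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top terms per topic, the usual way LDA clusters are inspected.
terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:]]
    print(f"topic {k}: {top}")
```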
2

Mazorchuk, Mariia S., Tetyana S. Vakulenko, Anna O. Bychko, Olena H. Kuzminska, and Oleksandr V. Prokhorov. Cloud technologies and learning analytics: web application for PISA results analysis and visualization. N.p., June 2021. http://dx.doi.org/10.31812/123456789/4451.

Full text of the source
Abstract:
This article analyzes how Learning Analytics, Cloud Technologies, and Big Data are applied in education at the international level. It provides examples of international analytical studies and of the cloud technologies used to process their results. It considers the PISA research methodology and related tools, including the IDB Analyzer application, the free R intsvy environment for processing statistical data, and the cloud-based web application PISA Data Explorer. The paper justifies the necessity of creating a stand-alone web application that supports Ukrainian localization and provides Ukrainian researchers with rapid access to well-structured PISA data. In particular, such an application should provide data across the factorial features and indicators applied at the country level and demonstrate Ukrainian indicators in comparison with other countries' results. The paper describes the application's core functionality, architecture, and the technologies used for its development. The proposed solution leverages the shiny package available in the R environment, which allows implementing both the UI and the server side of the application. The technical implementation is a proven solution that simplifies access to PISA data for Ukrainian researchers and helps them utilize the calculation results on the key features without having to apply separate tools for processing statistical data.
APA, Harvard, Vancouver, ISO, and other styles
3

García, Gustavo A., Mónica Calijuri, Juan José Bravo, and José Elías Feres de Almeida. Documentos tributarios electrónicos y big data económica para el control tributario y aduanero: big data estructurada para el control tributario y aduanero y la generación de estadísticas económicas en America Latina y el Caribe: Tomo 4. Edited by Gustavo A. García. Banco Interamericano de Desarrollo, July 2023. http://dx.doi.org/10.18235/0005001.

Full text of the source
Abstract:
The fourth volume in the series Documentos tributarios electrónicos y big data económica para el control tributario y aduanero analyzes the potential benefits of the current technologies underlying electronic tax documents (DT-e) for capturing and structuring tax, economic, and international trade data. It also discusses some of the statistics and indicators that can be built on large volumes of data with high coverage, frequency, quality, security, and periodicity, much of which is captured in real time. The volume proposes a methodology for capturing, coding, and structuring the economic and tax information generated by DT-e, and discusses how these databases should be structured using various international classifiers. It also sets out a methodology for organizing the accounting and financial information from firms' electronic financial statements (EF-e) and other DT-e to obtain a 360-degree view of all their transactions, and proposes tax control measures to reduce gaps in economic and tax information and improve fiscal and customs traceability. Finally, the volume shows how the resulting micro- and macroeconomic big data can be used for multiple statistical purposes beyond tax and customs control.
APA, Harvard, Vancouver, ISO, and other styles
4

Goodwin, Katy, and Alan Kirschbaum. Acoustic monitoring for bats at Indiana Dunes National Park: Data summary report for 2016–2019. National Park Service, February 2022. http://dx.doi.org/10.36967/nrds-2290144.

Full text of the source
Abstract:
The Great Lakes Inventory and Monitoring Network initiated an acoustic monitoring program for bats in 2015. At Indiana Dunes National Park, monitoring began in 2016. This report presents results for the 2016–2019 surveys. Acoustic recordings were analyzed using the software program Kaleidoscope Pro and a subset of files were manually reviewed to confirm species identifications. Seven of the eight bat species previously documented at the park were reconfirmed. These include big brown bat, eastern red bat, hoary bat, silver-haired bat, evening bat, little brown bat, and tricolored bat. In addition, the Kaleidoscope software classified some acoustic files to northern long-eared bat and Indiana bat, however none of these recordings were verified through manual vetting. Activity levels for six of the nine species (big brown bat, hoary bat, silver-haired bat, evening bat, northern long-eared bat, and Indiana bat) appeared to be stable or slightly increasing. For northern long-eared and Indiana bats, observed activity levels were very low in all four years, so we may not have adequate data to assess trends for those species. The remaining three species (eastern red bat, little brown bat, and tricolored bat) showed slightly decreasing trends.
APA, Harvard, Vancouver, ISO, and other styles
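Trend statements like "stable or slightly increasing" can be checked with a simple linear fit of an activity index against year. The sketch below uses invented activity values purely for illustration; they are not data from the report.

```python
import numpy as np

years = np.array([2016, 2017, 2018, 2019])
passes_per_night = np.array([14.2, 15.1, 15.8, 16.4])  # invented example values

# Least-squares line: the sign and size of the slope summarize the trend.
slope, intercept = np.polyfit(years, passes_per_night, 1)
print(f"estimated change: {slope:+.2f} passes/night per year")
```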
5

Wiegel, J., M. M. C. Holstege, M. Kluivers-Poodt, and M. H. Bokma-Bakker. Verdiepende data-analyse naar succesfactoren voor een laag antibioticumgebruik bij vleeskuikens : aanvullend rapport van het project Kritische Succesfactoren Pluimvee (KSF Pluimvee). Wageningen: Wageningen Livestock Research, 2020. http://dx.doi.org/10.18174/518636.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Valko, Nataliia V., Nataliya O. Kushnir, and Viacheslav V. Osadchyi. Cloud technologies for STEM education. N.p., July 2020. http://dx.doi.org/10.31812/123456789/3882.

Full text of the source
Abstract:
This article highlights the cloud technologies used in STEM education to support the study of robotics. Cloud robotic systems under development have not yet been used to their fullest degree in education and are applied only by a limited number of specialists. The advantages offered by cloud robotics (access to big data, open systems, open development environments) are leading to improvements in the interfaces of these systems, making them more accessible. The potential of these technologies makes them worth presenting to the majority of teachers. The benefits of cloud technologies for robotics and automation systems are defined. An integrated approach to the assimilation of knowledge is the basis of STEM education. The stages required for developing a robotics system are shown, and the cloud resources that could be used are analyzed in this article.
APA, Harvard, Vancouver, ISO, and other styles
7

Calijuri, Mónica, Gustavo A. García, Juan José Bravo, and José Elías Feres de Almeida. Documentos tributarios electrónicos y big data económica para el control tributario y aduanero: utilización y codificación de los estados financieros electrónicos para control fiscal y datos económico en América Latina y el Caribe: Tomo 3. Banco Interamericano de Desarrollo, July 2023. http://dx.doi.org/10.18235/0005000.

Full text of the source
Abstract:
The third volume of the series Documentos tributarios electrónicos y big data económica para el control tributario y aduanero analyzes the implementation of electronic financial statements (EF-e) in the context of the International Financial Reporting Standards (IFRS) and their impact on tax control by tax administrations (TAs). It also presents frameworks to guide initiatives and plans by TAs and governments to implement electronic accounting information. The implementation of IFRS has standardized accounting records, which has facilitated their digitization and has had a positive effect in terms of cost reduction and improved tax transparency. The volume proposes using eXtensible Business Reporting Language (XBRL) to standardize corporate financial data: this electronic language makes it possible to prepare, extract, and publish financial statements, enabling not only their integration by TAs but also the exchange of this information among TAs. In addition, implementing EF-e under IFRS makes it possible to produce up-to-date, detailed statistics and indicators of economic activity, which can contribute to the timely design of public policies for the benefit of citizens.
APA, Harvard, Vancouver, ISO and other styles
8

Shamblin, Robert, Kevin Whelan, Mario Londono, and Judd Patterson. South Florida/Caribbean Network early detection protocol for exotic plants: Corridors of invasiveness. National Park Service, July 2022. http://dx.doi.org/10.36967/nrr-2293364.

Full text of the source
Abstract:
Exotic plant populations can be potentially catastrophic to the natural communities of South Florida. Aggressive exotics such as Brazilian Pepper (Schinus terebinthifolius) and Melaleuca (Melaleuca quinquenervia) have displaced native habitats and formed monocultures of exotic stands (Dalrymple et al. 2003). Nearby plant nurseries, especially those outside the boundaries of Biscayne National Park (BISC) and Everglades National Park (EVER), are a continuous source of new exotic species that may become established within South Florida's national parks. Early detection of and rapid response to these new exotic plant species are important to maintaining the integrity of the parks' natural habitats and are a cost-effective approach to management. The South Florida/Caribbean Network (SFCN) developed the South Florida/Caribbean Network Early Detection Protocol for Exotic Plants to target early detection of these potential invaders. Three national parks of South Florida are monitored for invasive, exotic plants using this protocol: Big Cypress National Preserve (BICY), Biscayne National Park (BISC), and Everglades National Park (EVER). These national parks include some 2,411,000 acres (3,767.2 square miles [mi²]) that encompass a variety of habitat types. Monitoring the entire area for new species would not be feasible; therefore, the basic approach of this protocol is to scan major "corridors of invasiveness," e.g., paved and unpaved roads, trails, trailheads, off-road vehicle (ORV) trails, boat ramps, canals, and campgrounds, for exotic plant species new to the national parks of South Florida. Sampling is optimized using a two- to three-person crew: a trained botanist, a certified herbicide applicator, and optionally an SFCN (or IPMT [Invasive Plant Management Team]) staff member or park staff member to take photographs and help with data collection. If infestations are small, they are treated immediately by the herbicide applicator. If large, they are reported to park staff and the Invasive Plant Management Team. The sampling domain is partitioned into five regions, with one region sampled per year. Regions include the terrestrial habitats of Biscayne National Park, the eastern region of Everglades National Park, the western region of Everglades National Park, the northern region of Big Cypress National Preserve, and the southern region of Big Cypress National Preserve. Monitoring of roads, trails, and canals occurs while traveling into and through the parks (i.e., traveling at 2–10 mph) using motorized vehicles, airboats, and/or hiking. Campgrounds, boat launches, trailheads, and similar areas involve complete searches. When an exotic plant is observed, its GPS coordinates are recorded. Photographs are not taken for every exotic plant encountered, but they are taken for new and unusual species (for example, a coastal exotic found in inland habitats). Information recorded at each location includes the species name, size of infestation, abundance, cover class, any treatment/control action taken, and relevant notes. During the surveys, a GPS "track" is also recorded to document the areas surveyed, and the field of view is estimated. Field notes, pictures, and GPS data are compiled, entered, and analyzed in a Microsoft Access database. Resource briefs (and optional data summary reports) and associated shapefiles and data are then produced and sent to contacts within the corresponding national parks.
APA, Harvard, Vancouver, ISO and other styles
9

Scholz, Florian. Sedimentary fluxes of trace metals, radioisotopes and greenhouse gases in the southwestern Baltic Sea Cruise No. AL543, 23.08.2020 – 28.08.2020, Kiel – Kiel - SEDITRACE. GEOMAR Helmholtz Centre for Ocean Research Kiel, November 2020. http://dx.doi.org/10.3289/cr_al543.

Full text of the source
Abstract:
R/V Alkor Cruise AL543 was planned as a six-day cruise with a program of water column and sediment sampling in Kiel Bight and the western Baltic Sea. Due to restrictions related to the Covid-19 pandemic, the original plan had to be changed and the cruise was realized as six one-day cruises with sampling in Kiel Bight exclusively. The first day was dedicated to water column and sediment sampling for radionuclide analyses at Boknis Eck and Mittelgrund in Eckernförde Bay. On the remaining five days, water column, bottom water, sediment, and pore water samples were collected at eleven stations covering different types of seafloor environment (grain size, redox conditions) in western Kiel Bight. The data and samples obtained on cruise AL543 will be used to (i) investigate the sedimentary cycling of bio-essential metals (e.g., nickel, zinc, and their isotopes) as a function of variable redox conditions, (ii) assess the impact of submarine groundwater discharge and diffusive benthic fluxes on the distribution of radium and radon as well as greenhouse gases (methane and nitrous oxide) in the water column, and (iii) characterize and quantify the impact of coastal erosion on sedimentary iron, phosphorus, and rare earth element cycling in Kiel Bight.
APA, Harvard, Vancouver, ISO and other styles
10

Tóth, Z., B. Dubé, B. Lafrance, V. Bécu, K. Lauzière, and P. Mercier-Langevin. Whole-rock lithogeochemistry of the banded iron-formation-hosted gold mineralization in the Geraldton area, northwestern Ontario. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/331919.

Full text of the source
Abstract:
This report releases 235 runs of whole-rock geochemical and assay results for 235 samples/subsamples from the Archean banded iron formation-hosted gold mineralization in the Geraldton area, eastern Wabigoon subprovince, northwestern Ontario. The samples were collected during the 2012, 2013, 2014, and 2016 field seasons as part of a PhD study by the senior author (Tóth, 2018) at Laurentian University in Sudbury. Geochemical analyses for the 2012 and 2013 samples were funded in part by the GSC, while analyses for the 2014 and 2016 samples were funded by the second author, Bruno Lafrance, and by Greenstone Gold Mines, respectively. Research on gold mineralization hosted in banded iron formation (BIF) was conducted under the Lode Gold project of TGI4. The geochemical data are presented in a format easily importable into a geographic information system (GIS). Samples were collected from drill core and outcrops to document the host units, the alteration halo, and the mineralized zones. Preliminary interpretations of the auriferous mineralization and its geological setting are presented in Lafrance et al. (2012) and in Tóth et al. (2013a, 2013b, 2014, 2015a, 2015b). The final interpretation of the geological setting of the gold mineralization was published in Tóth and others (2022, 2023). Sample information and geochemical results are presented in Appendices 1 and 2 (worksheet "Results"), respectively. The results worksheet combines five reports produced between 2012 and 2016.
APA, Harvard, Vancouver, ISO and other styles