Click this link to see other types of publications on this topic: Genomics Big Data Engineering.

Journal articles on the topic "Genomics Big Data Engineering"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Check out the 50 best scholarly journal articles on the topic "Genomics Big Data Engineering".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, provided the relevant parameters are available in the metadata.

Browse journal articles from many different disciplines and compile your bibliography correctly.

1

Lekić, Matea, Kristijan Rogić, Adrienn Boldizsár, Máté Zöldy, and Ádám Török. "Big Data in Logistics". Periodica Polytechnica Transportation Engineering 49, no. 1 (December 17, 2019): 60–65. http://dx.doi.org/10.3311/pptr.14589.

Full text of the source
Abstract:
We can say with certainty that we are in the midst of a new big revolution that has its own name: Big Data. Though the term was coined by scientists in fields such as astronomy and genomics, Big Data is everywhere. It is both a resource and a tool whose main task is to provide information. Yet however much it can help us better understand the world around us, depending on how it is managed and who controls it, it can also take us in quite a different direction. Although the figures attached to Big Data may seem enormous at present, we must be aware that what we can collect and process is always just a fraction of the information that really exists in the world (and around it). Still, we have to start somewhere!
APA, Harvard, Vancouver, ISO, and other styles
2

Radha, K., and B. Thirumala Rao. "A Study on Big Data Techniques and Applications". International Journal of Advances in Applied Sciences 5, no. 2 (June 1, 2016): 101. http://dx.doi.org/10.11591/ijaas.v5.i2.pp101-108.

Full text of the source
Abstract:
We are living in an on-demand digital universe, with data spread by users and organizations at a very high rate. This data is categorized as Big Data because of its variety, velocity, veracity, and volume, and it is further classified into unstructured, semi-structured, and structured data. Large datasets require special processing systems and pose a unique challenge for academics and researchers. MapReduce jobs use efficient data processing techniques that are applied in every phase of MapReduce, such as mapping, combining, shuffling, indexing, grouping, and reducing. Big Data has essential characteristics such as variety, volume, velocity, viscosity, and virality, and it is one of the current and future research frontiers. Big Data is changing many areas, such as public administration, scientific research, business, the financial services industry, the automotive industry, supply chains, logistics and industrial engineering, retail, and entertainment. Other Big Data applications exist in atmospheric science, astronomy, medicine, biology, biogeochemistry, genomics, and interdisciplinary and complex research. This paper presents the essential characteristics of Big Data applications and state-of-the-art tools and techniques for handling data-intensive applications; it also builds an index for web pages available online and shows how Map and Reduce functions can be executed by taking a set of documents as input.
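The closing point of this abstract, that Map and Reduce functions can run over a set of documents to build an index, can be illustrated with a small sketch. The Python example below is not taken from the paper; the document contents and function names are invented, and the shuffle step is simulated in memory rather than performed by Hadoop.

```python
# Minimal sketch (not from the paper): an inverted index built with explicit
# map, shuffle, and reduce phases over a small set of documents.
from collections import defaultdict

def map_phase(doc_id, text):
    # Emit (word, doc_id) pairs for every token in the document.
    for word in text.lower().split():
        yield word, doc_id

def shuffle(pairs):
    # Group values by key, as the framework would between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped.items()

def reduce_phase(word, doc_ids):
    # Deduplicate and sort the posting list for each word.
    return word, sorted(set(doc_ids))

documents = {
    "doc1": "big data techniques and applications",
    "doc2": "big data in genomics and astronomy",
}

pairs = [pair for doc_id, text in documents.items() for pair in map_phase(doc_id, text)]
index = dict(reduce_phase(word, ids) for word, ids in shuffle(pairs))
print(index["big"])  # ['doc1', 'doc2']
```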
APA, Harvard, Vancouver, ISO, and other styles
3

Gut, Philipp, Sven Reischauer, Didier Y. R. Stainier, and Rima Arnaout. "Little Fish, Big Data: Zebrafish as a Model for Cardiovascular and Metabolic Disease". Physiological Reviews 97, no. 3 (July 1, 2017): 889–938. http://dx.doi.org/10.1152/physrev.00038.2016.

Full text of the source
Abstract:
The burden of cardiovascular and metabolic diseases worldwide is staggering. The emergence of systems approaches in biology promises new therapies, faster and cheaper diagnostics, and personalized medicine. However, a profound understanding of pathogenic mechanisms at the cellular and molecular levels remains a fundamental requirement for discovery and therapeutics. Animal models of human disease are cornerstones of drug discovery as they allow identification of novel pharmacological targets by linking gene function with pathogenesis. The zebrafish model has been used for decades to study development and pathophysiology. More than ever, the specific strengths of the zebrafish model make it a prime partner in an age of discovery transformed by big-data approaches to genomics and disease. Zebrafish share a largely conserved physiology and anatomy with mammals. They allow a wide range of genetic manipulations, including the latest genome engineering approaches. They can be bred and studied with remarkable speed, enabling a range of large-scale phenotypic screens. Finally, zebrafish demonstrate an impressive regenerative capacity scientists hope to unlock in humans. Here, we provide a comprehensive guide on applications of zebrafish to investigate cardiovascular and metabolic diseases. We delineate advantages and limitations of zebrafish models of human disease and summarize their most significant contributions to understanding disease progression to date.
APA, Harvard, Vancouver, ISO, and other styles
4

Kennedy, Paul J., Daniel R. Catchpoole, Siamak Tafavogh, Bronwyn L. Harvey, and Ahmad A. Aloqaily. "Feature prioritisation on big genomic data for analysing gene-gene interactions". International Journal of Bioinformatics Research and Applications 17, no. 2 (2021): 158. http://dx.doi.org/10.1504/ijbra.2021.10037182.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Aloqaily, Ahmad A., Siamak Tafavogh, Bronwyn L. Harvey, Daniel R. Catchpoole, and Paul J. Kennedy. "Feature prioritisation on big genomic data for analysing gene-gene interactions". International Journal of Bioinformatics Research and Applications 17, no. 2 (2021): 158. http://dx.doi.org/10.1504/ijbra.2021.114420.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Chan, Jireh Yi-Le, Steven Mun Hong Leow, Khean Thye Bea, Wai Khuen Cheng, Seuk Wai Phoong, Zeng-Wei Hong, and Yen-Lin Chen. "Mitigating the Multicollinearity Problem and Its Machine Learning Approach: A Review". Mathematics 10, no. 8 (April 12, 2022): 1283. http://dx.doi.org/10.3390/math10081283.

Full text of the source
Abstract:
Technologies have driven big data collection across many fields, such as genomics and business intelligence. This results in a significant increase in variables and data points (observations) collected and stored. Although this presents opportunities to better model the relationship between predictors and the response variables, this also causes serious problems during data analysis, one of which is the multicollinearity problem. The two main approaches used to mitigate multicollinearity are variable selection methods and modified estimator methods. However, variable selection methods may negate efforts to collect more data as new data may eventually be dropped from modeling, while recent studies suggest that optimization approaches via machine learning handle data with multicollinearity better than statistical estimators. Therefore, this study details the chronological developments to mitigate the effects of multicollinearity and up-to-date recommendations to better mitigate multicollinearity.
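The multicollinearity problem the review addresses is commonly diagnosed with variance inflation factors before choosing between variable selection and modified estimators. The sketch below is a generic illustration, not code from the review; the simulated predictors and the pure-NumPy regression are assumptions made for the example.

```python
# Generic illustration (not from the review): detecting multicollinearity with
# variance inflation factors, VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from
# regressing predictor j on the remaining predictors.
import numpy as np

def vif(X):
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    scores = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        scores.append(1.0 / (1.0 - r2))
    return scores

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.05, size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)
print(vif(np.column_stack([x1, x2, x3])))    # large VIFs flag x1 and x2
```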
APA, Harvard, Vancouver, ISO, and other styles
7

Yan, Hong. "Coclustering of Multidimensional Big Data: A Useful Tool for Genomic, Financial, and Other Data Analysis". IEEE Systems, Man, and Cybernetics Magazine 3, no. 2 (April 2017): 23–30. http://dx.doi.org/10.1109/msmc.2017.2664218.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Shandilya, Shishir K., S. Sountharrajan, Smita Shandilya, and E. Suganya. "Big Data Analytics Framework for Real-Time Genome Analysis: A Comprehensive Approach". Journal of Computational and Theoretical Nanoscience 16, no. 8 (August 1, 2019): 3419–27. http://dx.doi.org/10.1166/jctn.2019.8302.

Full text of the source
Abstract:
Big data technologies have become well accepted in recent years in biomedical and genome informatics. They are capable of processing gigantic and heterogeneous genome information with good precision and recall. With the rapid advancement of computation and storage technologies, the cost of acquiring and processing genomic data has decreased significantly. Upcoming sequencing platforms will produce vast amounts of data, which will inevitably require high-performance systems for on-demand analysis with time-bound efficiency. Recent bioinformatics tools are capable of utilizing the novel features of Hadoop in a more flexible way. In particular, big data technologies such as MapReduce and Hive can provide a high-speed computational environment for the analysis of petabyte-scale datasets. This has drawn the attention of bio-scientists to big data applications for automating the entire genome analysis. The proposed framework is designed over MapReduce and Java on an extended Hadoop platform to achieve parallelism in big data analysis. It will assist the bioinformatics community by providing a comprehensive solution for descriptive, comparative, exploratory, inferential, predictive, and causal analysis of genome data. The proposed framework is user-friendly, fully customizable, scalable, and fit for comprehensive real-time genome analysis from data acquisition through predictive sequence analysis.
APA, Harvard, Vancouver, ISO, and other styles
9

Shi, Daoyuan, and Lynn Kuo. "VARIABLE SELECTION FOR BAYESIAN SURVIVAL MODELS USING BREGMAN DIVERGENCE MEASURE". Probability in the Engineering and Informational Sciences 34, no. 3 (June 22, 2018): 364–80. http://dx.doi.org/10.1017/s0269964818000190.

Full text of the source
Abstract:
Variable selection has been an important topic in regression and Bayesian survival analysis. In the era of rapid development in genomics and precision medicine, the topic is becoming more important and challenging. In addition to the challenges of handling censored data in survival analysis, we face an increasing demand for handling big data with many predictors, most of which may not be relevant to predicting the survival outcome. With the aim of improving the accuracy of prediction, we explore the Bregman divergence criterion for selecting predictive models. We develop sparse Bayesian formulations for parametric and semiparametric regression models and demonstrate how variable selection is done using the predictive approach. Model selection for a simulated data set and two real data sets (one from a kidney transplant study and the other from a breast cancer microarray study at the Memorial Sloan-Kettering Cancer Center) is carried out to illustrate our methods.
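For reference, the Bregman divergence used as the selection criterion is the standard one generated by a strictly convex, differentiable function φ; the display below states that general definition rather than the authors' specific choice of φ.

```latex
% Bregman divergence generated by a strictly convex, differentiable \phi;
% squared Euclidean distance (\phi(x) = \lVert x \rVert^2) and the
% Kullback-Leibler divergence arise as special cases of this general form.
D_{\phi}(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle
```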
APA, Harvard, Vancouver, ISO, and other styles
10

Ullah, Mohammad Asad, Muhammad-Redha Abdullah-Zawawi, Rabiatul-Adawiah Zainal-Abidin, Noor Liyana Sukiran, Md Imtiaz Uddin, and Zamri Zainal. "A Review of Integrative Omic Approaches for Understanding Rice Salt Response Mechanisms". Plants 11, no. 11 (May 27, 2022): 1430. http://dx.doi.org/10.3390/plants11111430.

Full text of the source
Abstract:
Soil salinity is one of the most serious environmental challenges, posing a growing threat to agriculture across the world. Soil salinity has a significant impact on rice growth, development, and production. Hence, improving rice varieties’ resistance to salt stress is a viable solution for meeting global food demand. Adaptation to salt stress is a multifaceted process that involves interacting physiological traits, biochemical or metabolic pathways, and molecular mechanisms. The integration of multi-omics approaches contributes to a better understanding of molecular mechanisms as well as the improvement of salt-resistant and tolerant rice varieties. Firstly, we present a thorough review of current knowledge about salt stress effects on rice and mechanisms behind rice salt tolerance and salt stress signalling. This review focuses on the use of multi-omics approaches to improve next-generation rice breeding for salinity resistance and tolerance, including genomics, transcriptomics, proteomics, metabolomics and phenomics. Integrating multi-omics data effectively is critical to gaining a more comprehensive and in-depth understanding of the molecular pathways, enzyme activity and interacting networks of genes controlling salinity tolerance in rice. Key data mining strategies within artificial intelligence for analysing big and complex data sets will allow more accurate prediction of outcomes, modernise traditional breeding programmes, and expedite precision rice breeding approaches such as genetic engineering and genome editing.
APA, Harvard, Vancouver, ISO, and other styles
11

Sheldon, Roger A. "Biocatalysis and biomass conversion: enabling a circular economy". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378, no. 2176 (July 6, 2020): 20190274. http://dx.doi.org/10.1098/rsta.2019.0274.

Full text of the source
Abstract:
This paper is based on a lecture presented to the Royal Society in London on 24 June 2019. Two of the grand societal and technological challenges of the twenty-first century are the ‘greening' of chemicals manufacture and the ongoing transition to a sustainable, carbon neutral economy based on renewable biomass as the raw material, a so-called bio-based economy. These challenges are motivated by the need to eliminate environmental degradation and mitigate climate change. In a bio-based economy, ideally waste biomass, particularly agricultural and forestry residues and food supply chain waste, are converted to liquid fuels, commodity chemicals and biopolymers using clean, catalytic processes. Biocatalysis has the right credentials to achieve this goal. Enzymes are biocompatible, biodegradable and essentially non-hazardous. Additionally, they are derived from inexpensive renewable resources which are readily available and not subject to the large price fluctuations which undermine the long-term commercial viability of scarce precious metal catalysts. Thanks to spectacular advances in molecular biology the landscape of biocatalysis has dramatically changed in the last two decades. Developments in (meta)genomics in combination with ‘big data’ analysis have revolutionized new enzyme discovery and developments in protein engineering by directed evolution have enabled dramatic improvements in their performance. These developments have their confluence in the bio-based circular economy. This article is part of a discussion meeting issue ‘Science to enable the circular economy'.
APA, Harvard, Vancouver, ISO, and other styles
12

Carter, Tonia C., and Max M. He. "Challenges of Identifying Clinically Actionable Genetic Variants for Precision Medicine". Journal of Healthcare Engineering 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/3617572.

Full text of the source
Abstract:
Advances in genomic medicine have the potential to change the way we treat human disease, but translating these advances into reality for improving healthcare outcomes depends essentially on our ability to discover disease- and/or drug-associated clinically actionable genetic mutations. Integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a big data infrastructure can provide an efficient and effective way to identify clinically actionable genetic variants for personalized treatments and reduce healthcare costs. We review bioinformatics processing of next-generation sequencing (NGS) data, bioinformatics infrastructures for implementing precision medicine, and bioinformatics approaches for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs.
APA, Harvard, Vancouver, ISO, and other styles
13

Metz, Sebastián, Juan Manuel Cabrera, Eva Rueda, Federico Giri, and Patricia Amavet. "FullSSR: Microsatellite Finder and Primer Designer". Advances in Bioinformatics 2016 (June 6, 2016): 1–4. http://dx.doi.org/10.1155/2016/6040124.

Full text of the source
Abstract:
Microsatellites are genomic sequences composed of tandem repeats of short nucleotide motifs that are widely used as molecular markers in population genetics. FullSSR is a new bioinformatic tool for microsatellite (SSR) locus detection and primer design using genomic data from NGS assays. The software was tested with 2000 sequences from the Oryza sativa shotgun sequencing project in the National Center for Biotechnology Information Trace Archive and with partial genome sequencing with ROCHE 454® from Caiman latirostris, Salvator merianae, Aegla platensis, and Zilchiopsis collastinensis. FullSSR performance was compared against other similar SSR search programs. The results of this kind of approach depend on the parameters set by the user. In addition, results can be affected by the analyzed sequences because of differences among the genomes. FullSSR simplifies the detection of SSRs and primer design on a big data set. The command-line interface of FullSSR was intended to be used as part of a genomic analysis pipeline; however, it can be used as a stand-alone program because the results are easily interpreted even by a nonexpert user.
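As a rough illustration of what an SSR finder does (this is not the FullSSR implementation; the motif length and copy-number thresholds are arbitrary choices made for the example), tandem repeats of short motifs can be located with a single regular expression:

```python
# Generic illustration of SSR (microsatellite) detection, not FullSSR itself:
# find tandem repeats of 2-6 bp motifs occurring at least 4 times in a row.
import re

SSR_PATTERN = re.compile(r"([ACGT]{2,6}?)\1{3,}")

def find_ssrs(sequence):
    """Return (motif, copies, start, end) for each tandem repeat found."""
    hits = []
    for match in SSR_PATTERN.finditer(sequence.upper()):
        motif = match.group(1)
        copies = len(match.group(0)) // len(motif)
        hits.append((motif, copies, match.start(), match.end()))
    return hits

print(find_ssrs("ttgACACACACACccTAGTAGTAGTAGtt"))
# [('AC', 5, 3, 13), ('TAG', 4, 15, 27)]
```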
APA, Harvard, Vancouver, ISO, and other styles
14

Maia, Ana-Teresa, Stephen-John Sammut, Ana Jacinta-Fernandes, and Suet-Feung Chin. "Big data in cancer genomics". Current Opinion in Systems Biology 4 (August 2017): 78–84. http://dx.doi.org/10.1016/j.coisb.2017.07.007.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Ponting, Chris P. "Big knowledge from big data in functional genomics". Emerging Topics in Life Sciences 1, no. 3 (November 14, 2017): 245–48. http://dx.doi.org/10.1042/etls20170129.

Full text of the source
Abstract:
With so much genomics data being produced, it might be wise to pause and consider what purpose this data can or should serve. Some improve annotations, others predict molecular interactions, but few add directly to existing knowledge. This is because sequence annotations do not always implicate function, and molecular interactions are often irrelevant to a cell's or organism's survival or propagation. Merely correlative relationships found in big data fail to provide answers to the Why questions of human biology. Instead, those answers are expected from methods that causally link DNA changes to downstream effects without being confounded by reverse causation. These approaches require the controlled measurement of the consequences of DNA variants, for example, either those introduced in single cells using CRISPR/Cas9 genome editing or that are already present across the human population. Inferred causal relationships between genetic variation and cellular phenotypes or disease show promise to rapidly grow and underpin our knowledge base.
APA, Harvard, Vancouver, ISO, and other styles
16

Venkateswarlu, P., and E. G. Rajan. "Spectral domain characterization of genome sequences". International Journal of Engineering & Technology 7, no. 2.12 (April 3, 2018): 189. http://dx.doi.org/10.14419/ijet.v7i2.12.11277.

Full text of the source
Abstract:
Genome sequencing has become an important research area for understanding the order of DNA and discovering the genetic secrets of humans. Fortunately, voluminous data in this area are available for the study of genome sequences. Characterization of genome sequences is a non-trivial and tedious task; nevertheless, algorithms for studying them can be found in the literature. As genome sequence data have the characteristics of big data, we propose a technique based on the MapReduce programming paradigm to attempt spectral characterization of genome sequences. A machine learning approach is used to discover trends in the genome sequences. The rationale behind using MapReduce, a distributed programming framework, is its support for parallel processing and the use of more powerful Graphics Processing Units (GPUs). Moreover, the datasets can be maintained in the cloud so that they can be handled with ease. We built a prototype application as a proof of concept. Our empirical results reveal encouraging observations in this genomic study.
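The spectral characterization idea can be illustrated independently of the MapReduce pipeline described in the paper. The sketch below is a generic example, assuming the common Voss (binary indicator) encoding and a synthetic periodic sequence; it is not the authors' code.

```python
# Generic illustration (not the paper's pipeline): spectral characterization of
# a DNA sequence via Voss encoding, where each base becomes a binary indicator
# sequence and the power spectra of the four indicators are summed.
import numpy as np

def power_spectrum(sequence):
    seq = sequence.upper()
    n = len(seq)
    total = np.zeros(n)
    for base in "ACGT":
        indicator = np.array([1.0 if b == base else 0.0 for b in seq])
        total += np.abs(np.fft.fft(indicator)) ** 2
    return total

# A synthetic sequence with a strong period-3 component produces a spectral
# peak at frequency index n/3, the classic signature of coding regions.
seq = "ATG" * 120
spectrum = power_spectrum(seq)
n = len(seq)
print(int(np.argmax(spectrum[1 : n // 2])) + 1, n // 3)  # both print 120
```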
APA, Harvard, Vancouver, ISO, and other styles
17

Wang, Xin, Fangfang Li, and Weiwei Zhao. "Evaluation of Fundus Blood Flow Perfusion in Patients with Diabetic Retinopathy after PPV with Fundus Color Doppler Based on Big Data Mining". Journal of Healthcare Engineering 2022 (February 16, 2022): 1–12. http://dx.doi.org/10.1155/2022/7414165.

Full text of the source
Abstract:
In this paper, we have carefully investigated the clinical phenotype and genotype of patients with Johanson-Blizzard syndrome (JBS) with diabetes mellitus as the main manifestation. Retinal vessel segmentation is an important tool for the detection of many eye diseases and plays an important role in the automated screening system for retinal diseases. A segmentation algorithm based on a multiscale attentional resolution network is proposed to address the problem of insufficient segmentation of small vessels and pathological missegmentation in existing methods. The network is based on the encoder-decoder architecture, and the attention residual block is introduced in the submodule to enhance the feature propagation ability and reduce the impact of uneven illumination and low contrast on the model. The jump connection is added between the encoder and decoder, and the traditional pooling layer is removed to retain sufficient vascular detail information. Two multiscale feature fusion methods, parallel multibranch structure, and spatial pyramid pooling are used to achieve feature extraction under different sensory fields. We collected the clinical data, laboratory tests, and imaging examinations of JBS patients, extracted the genomic DNA of relevant family members, and validated them by whole-exome sequencing and Sanger sequencing. The patient had diabetes mellitus as the main manifestation, with widened eye spacing, low flat nasal root, hypoplastic nasal wing, and low hairline deformities. Genetic testing confirmed the presence of a c.4463 T > C (p.Ile1488Thr) pure missense mutation in the UBR1 gene, which was a novel mutation locus, and pathogenicity analysis indicated that the locus was pathogenic. This patient carries a new UBR1 gene c.4463 T > C pure mutation, which improves the clinical understanding of the clinical phenotypic spectrum of JBS and broadens the genetic spectrum of the UBR1 gene. The experimental results showed that the method achieved 83.26% and 82.56% F1 values on CHASEDB1 and STARE standard sets, respectively, and 83.51% and 81.20% sensitivity, respectively, and its performance was better than the current mainstream methods.
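The attention residual block described in this abstract can be sketched in a few lines. The example below is an assumed, minimal PyTorch version; the channel count, the 1×1 sigmoid-gated attention map, and the placement of the skip connection are illustrative choices, not the authors' exact architecture.

```python
# Minimal sketch (assumed details, not the authors' exact network) of an
# attention residual block: two conv layers with a sigmoid-gated attention map
# applied to the residual branch before the skip connection is added back.
import torch
import torch.nn as nn

class AttentionResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # 1x1 conv producing a per-pixel attention map in [0, 1].
        self.attention = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        features = self.body(x)
        gated = features * self.attention(features)
        return self.relu(x + gated)   # skip connection

block = AttentionResidualBlock(channels=16)
print(block(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```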
APA, Harvard, Vancouver, ISO, and other styles
18

Hien, Le Thi Thu, Nguyen Tuong Van, Kim Thi Phuong Oanh, Nguyen Dang Ton, Huynh Thi Thu Hue, Nguyen Thuy Duong, Pham Le Bich Hang, and Nguyen Hai Ha. "Genomics and big data: Research, development and applications". Vietnam Journal of Biotechnology 19, no. 3 (October 13, 2021): 393–410. http://dx.doi.org/10.15625/1811-4989/16158.

Full text of the source
Abstract:
In recent years, genomics and big data analytics have been widely applied and have had significant impacts on various important areas of social life worldwide. The development of next-generation sequencing (NGS) technologies, such as whole-genome sequencing (WGS), whole-exome sequencing (WES), transcriptome sequencing, and/or targeted sequencing, has made it possible to quickly generate the genomes of organisms of interest. Around the world, many nations have invested in and promoted the development of genomics and big data analytics. A number of well-established projects on the sequencing of human, animal, plant, and microorganism genomes to generate vast amounts of genomic data have been conducted independently or as collaborative efforts by national or international research networks of scientists specializing in different technical fields of genomics, bioinformatics, computational and statistical biology, automation, artificial intelligence, etc. Complicated and large genomic datasets have been effectively established, stored, managed, and used. Vietnam supports this new field of study by setting up governmentally authorized institutions and conducting genomic research projects on humans and other endemic organisms. In this paper, the research, development, and applications of genomic big data are reviewed with a focus on: (i) available sequencing technologies for generating genomic datasets; (ii) genomics and big data initiatives worldwide; (iii) genomics and big data analytics in selected countries and Vietnam; and (iv) genomic data applications in key areas including medicine for human health care, agriculture and forestry, food safety, and the environment.
APA, Harvard, Vancouver, ISO, and other styles
19

Fromer, Menachem. "BIG DATA IN PSYCHIATRY: GENETICS, GENOMICS, AND BEYOND". European Neuropsychopharmacology 27 (2017): S431–S432. http://dx.doi.org/10.1016/j.euroneuro.2016.09.488.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
20

O’Driscoll, Aisling, Jurate Daugelaite, and Roy D. Sleator. "‘Big data’, Hadoop and cloud computing in genomics". Journal of Biomedical Informatics 46, no. 5 (October 2013): 774–81. http://dx.doi.org/10.1016/j.jbi.2013.07.001.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
21

Mukerji, Mitali, and Michael Sagner. "Genomics and Big Data Analytics in Ayurvedic Medicine". Progress in Preventive Medicine 4, no. 1 (April 2019): e0021. http://dx.doi.org/10.1097/pp9.0000000000000021.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Cheng-Gang, and Bor-Sen Chen. "Multiple-Molecule Drug Repositioning for Disrupting Progression of SARS-CoV-2 Infection by Utilizing the Systems Biology Method through Host–Pathogen–Interactive Time Profile Data and DNN-Based DTI Model with Drug Design Specifications". Stresses 2, no. 4 (November 3, 2022): 405–36. http://dx.doi.org/10.3390/stresses2040029.

Full text of the source
Abstract:
The coronavirus disease 2019 (COVID-19) pandemic has claimed many lives since it was first reported in late December 2019. However, there is still no drug proven to be effective against the virus. In this study, a candidate host–pathogen–interactive (HPI) genome-wide genetic and epigenetic network (HPI-GWGEN) was constructed via big data mining. The reverse engineering method was applied to investigate the pathogenesis of SARS-CoV-2 infection by pruning the false positives in candidate HPI-GWGEN through the HPI RNA-seq time profile data. Subsequently, using the principal network projection (PNP) method and the annotations of the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway, we identified the significant biomarkers usable as drug targets for destroying favorable environments for the replication of SARS-CoV-2 or enhancing the defense of host cells against it. To discover multiple-molecule drugs that target the significant biomarkers (as drug targets), a deep neural network (DNN)-based drug–target interaction (DTI) model was trained by DTI databases to predict candidate molecular drugs for these drug targets. Using the DNN-based DTI model, we predicted the candidate drugs targeting the significant biomarkers (drug targets). After screening candidate drugs with drug design specifications, we finally proposed the combination of bosutinib, erlotinib, and 17-beta-estradiol as a multiple-molecule drug for the treatment of the amplification stage of SARS-CoV-2 infection and the combination of erlotinib, 17-beta-estradiol, and sertraline as a multiple-molecule drug for the treatment of saturation stage of mild-to-moderate SARS-CoV-2 infection.
APA, Harvard, Vancouver, ISO, and other styles
23

Waltz, Emily. "Plant genomics land big prizes". Nature Biotechnology 27, no. 1 (January 2009): 5. http://dx.doi.org/10.1038/nbt0109-5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
24

H. Mohamad, Ummul, Mohamad T. Ijab, and Rabiah A. Kadir. "Genomics big data hybrid depositories architecture to unlock precision medicine: a conceptual framework". International Journal of Engineering & Technology 7, no. 4 (September 24, 2018): 2585. http://dx.doi.org/10.14419/ijet.v7i4.16893.

Full text of the source
Abstract:
As the cost of genome sequencing becomes more affordable, genomics studies are being carried out extensively to support the ultimate healthcare goal, precision medicine. By tailoring medical treatment to each individual, precision medicine can potentially lead to nearly zero occurrence of drug side effects and treatment complications. Unfortunately, the complexity of genomics data has been one of the bottlenecks that deter the advance of healthcare practice towards precision medicine. Therefore, based on an extensive literature review of the data-driven genomics challenges on the way to precision medicine, this paper proposes two new contributions to the field: a conceptual framework for genomics-based precision medicine and an architectural design for the development of hybrid depositories as an initial step towards bridging the gap. The genomics big data hybrid depositories architecture is composed of a few components: a storage layer and a service layer of interconnected systems, such as visualization, data protection modeling, an event processing engine, and decision support, whose purpose is to merge genomics data with healthcare data.
APA, Harvard, Vancouver, ISO, and other styles
25

Shi, Lizhen, and Zhong Wang. "Computational Strategies for Scalable Genomics Analysis". Genes 10, no. 12 (December 6, 2019): 1017. http://dx.doi.org/10.3390/genes10121017.

Full text of the source
Abstract:
The revolution in next-generation DNA sequencing technologies is leading to explosive data growth in genomics, posing a significant challenge to the computing infrastructure and software algorithms for genomics analysis. Various big data technologies have been explored to scale up/out current bioinformatics solutions to mine the big genomics data. In this review, we survey some of these exciting developments in the applications of parallel distributed computing and special hardware to genomics. We comment on the pros and cons of each strategy in the context of ease of development, robustness, scalability, and efficiency. Although this review is written for an audience from the genomics and bioinformatics fields, it may also be informative for the audience of computer science with interests in genomics applications.
APA, Harvard, Vancouver, ISO, and other styles
26

Bicudo, Edison. "‘Big data’ or ‘big knowledge’? Brazilian genomics and the process of academic marketization". BioSocieties 13, no. 1 (March 15, 2017): 1–20. http://dx.doi.org/10.1057/s41292-017-0037-4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
27

Verspoor, K., and F. Martin-Sanchez. "Big Data in Medicine Is Driving Big Changes". Yearbook of Medical Informatics 23, no. 01 (August 2014): 14–20. http://dx.doi.org/10.15265/iy-2014-0020.

Full text of the source
Abstract:
Summary Objectives: To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies.
APA, Harvard, Vancouver, ISO, and other styles
28

Ma, Yue, Hongbo Zhu, Zhuo Yang, and Danbo Wang. "Optimizing the Prognostic Model of Cervical Cancer Based on Artificial Intelligence Algorithm and Data Mining Technology". Wireless Communications and Mobile Computing 2022 (August 25, 2022): 1–10. http://dx.doi.org/10.1155/2022/5908686.

Full text of the source
Abstract:
With the accumulation and development of medical multimodal data, as well as breakthroughs in the theory and practice of artificial neural networks and deep learning algorithms, the deep integration of multimodal data and artificial intelligence based on the Internet has become an important goal of the Internet of Medical Things. The deep application of the latest technologies in the medical field, such as artificial intelligence, machine learning, multimodal data, and advanced sensors, has a profound impact on the development of medical research. Artificial intelligence can achieve low-consumption and high-efficiency screening of specific markers owing to its powerful data integration and processing capabilities, and its advantages are fully demonstrated in the construction of disease-related risk prediction models. In this study, multi-type cloud data were used as research objects to explore potential alternative CpG sites and establish a high-quality prognostic model from cervical cancer DNA methylation big data. A total of 14,419 strictly differentially methylated CpG sites (DMCs) were identified by ChAMP methylation analysis, and their distributions across different genomic regions and in relation to CpG islands are presented. Further, rbsurv and Cox regression analyses were performed to construct a prognostic model integrating four methylated CpG sites that could adequately predict the survival of patients (AUC = 0.833, P < 0.001). The low- and high-risk patient groups, divided by risk score, showed significantly different overall survival (OS) in both the training (P < 0.001) and validation datasets (P < 0.005). Moreover, the model has independent predictive value with respect to FIGO stage and age and is more suitable for predicting survival time in patients with squamous cell carcinoma (SCC) histology and histologic grade G2/G3. Finally, the model exhibited much higher predictive accuracy than other known models and than the corresponding gene expression. The proposed model provides a novel signature for predicting prognosis, which can serve as a useful guide for increasing the accuracy of predicting the overall survival of cervical cancer patients.
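The risk-score construction described here, a linear combination of methylation levels at selected CpG sites thresholded at the median, can be illustrated with a toy example. The coefficients and beta values below are made up for the sketch and are not the paper's fitted model.

```python
# Toy illustration (made-up coefficients and methylation values, not the
# paper's fitted model): a prognostic risk score formed as a linear combination
# of CpG methylation levels, then split at the median into low/high risk.
import numpy as np

rng = np.random.default_rng(42)
cpg_beta_values = rng.uniform(0.0, 1.0, size=(100, 4))   # 100 patients, 4 CpG sites
cox_coefficients = np.array([1.8, -0.9, 1.2, -0.5])      # hypothetical log-hazard ratios

risk_score = cpg_beta_values @ cox_coefficients
high_risk = risk_score > np.median(risk_score)

print(f"high-risk patients: {high_risk.sum()}, low-risk patients: {(~high_risk).sum()}")
```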
APA, Harvard, Vancouver, ISO, and other styles
29

Paten, Benedict, Mark Diekhans, Brian J. Druker, Stephen Friend, Justin Guinney, Nadine Gassner, Mitchell Guttman, et al. "The NIH BD2K center for big data in translational genomics". Journal of the American Medical Informatics Association 22, no. 6 (July 14, 2015): 1143–47. http://dx.doi.org/10.1093/jamia/ocv047.

Full text of the source
Abstract:
Abstract The world’s genomics data will never be stored in a single repository – rather, it will be distributed among many sites in many countries. No one site will have enough data to explain genotype to phenotype relationships in rare diseases; therefore, sites must share data. To accomplish this, the genetics community must forge common standards and protocols to make sharing and computing data among many sites a seamless activity. Through the Global Alliance for Genomics and Health, we are pioneering the development of shared application programming interfaces (APIs) to connect the world’s genome repositories. In parallel, we are developing an open source software stack (ADAM) that uses these APIs. This combination will create a cohesive genome informatics ecosystem. Using containers, we are facilitating the deployment of this software in a diverse array of environments. Through benchmarking efforts and big data driver projects, we are ensuring ADAM’s performance and utility.
APA, Harvard, Vancouver, ISO, and other styles
30

Yokoyama, Shigeyuki, and Kei Yura. "Special issue: big data analyses in structural and functional genomics". Journal of Structural and Functional Genomics 17, no. 4 (December 2016): 67. http://dx.doi.org/10.1007/s10969-016-9213-1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Belle, Ashwin, Raghuram Thiagarajan, S. M. Reza Soroushmehr, Fatemeh Navidi, Daniel A. Beard, and Kayvan Najarian. "Big Data Analytics in Healthcare". BioMed Research International 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/370194.

Full text of the source
Abstract:
The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.
APA, Harvard, Vancouver, ISO, and other styles
32

Sinha, Saurabh, Jun Song, Richard Weinshilboum, Victor Jongeneel, and Jiawei Han. "KnowEnG: a knowledge engine for genomics". Journal of the American Medical Informatics Association 22, no. 6 (July 22, 2015): 1115–19. http://dx.doi.org/10.1093/jamia/ocv090.

Full text of the source
Abstract:
Abstract We describe here the vision, motivations, and research plans of the National Institutes of Health Center for Excellence in Big Data Computing at the University of Illinois, Urbana-Champaign. The Center is organized around the construction of “Knowledge Engine for Genomics” (KnowEnG), an E-science framework for genomics where biomedical scientists will have access to powerful methods of data mining, network mining, and machine learning to extract knowledge out of genomics data. The scientist will come to KnowEnG with their own data sets in the form of spreadsheets and ask KnowEnG to analyze those data sets in the light of a massive knowledge base of community data sets called the “Knowledge Network” that will be at the heart of the system. The Center is undertaking discovery projects aimed at testing the utility of KnowEnG for transforming big data to knowledge. These projects span a broad range of biological enquiry, from pharmacogenomics (in collaboration with Mayo Clinic) to transcriptomics of human behavior.
APA, Harvard, Vancouver, ISO, and other styles
33

Notredame, Cedric. "Editorial: NAR Genomics and Bioinformatics: a new journal for reproducible genomics in the Big Data era". NAR Genomics and Bioinformatics 1, no. 1 (April 1, 2019): e1–e1. http://dx.doi.org/10.1093/nargab/lqz001.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
34

Jablonka, Kevin Maik, Daniele Ongari, Seyed Mohamad Moosavi, and Berend Smit. "Big-Data Science in Porous Materials: Materials Genomics and Machine Learning". Chemical Reviews 120, no. 16 (June 10, 2020): 8066–129. http://dx.doi.org/10.1021/acs.chemrev.0c00004.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
35

Hassan, Mubashir, Faryal Mehwish Awan, Anam Naz, Enrique J. deAndrés-Galiana, Oscar Alvarez, Ana Cernea, Lucas Fernández-Brillet, Juan Luis Fernández-Martínez, and Andrzej Kloczkowski. "Innovations in Genomics and Big Data Analytics for Personalized Medicine and Health Care: A Review". International Journal of Molecular Sciences 23, no. 9 (April 22, 2022): 4645. http://dx.doi.org/10.3390/ijms23094645.

Full text of the source
Abstract:
Big data in health care is a fast-growing field and a new paradigm that is transforming case-based studies to large-scale, data-driven research. As big data is dependent on the advancement of new data standards, technology, and relevant research, the future development of big data applications holds foreseeable promise in the modern day health care revolution. Enormously large, rapidly growing collections of biomedical omics-data (genomics, proteomics, transcriptomics, metabolomics, glycomics, etc.) and clinical data create major challenges and opportunities for their analysis and interpretation and open new computational gateways to address these issues. The design of new robust algorithms that are most suitable to properly analyze this big data by taking into account individual variability in genes has enabled the creation of precision (personalized) medicine. We reviewed and highlighted the significance of big data analytics for personalized medicine and health care by focusing mostly on machine learning perspectives on personalized medicine, genomic data models with respect to personalized medicine, the application of data mining algorithms for personalized medicine as well as the challenges we are facing right now in big data analytics.
APA, Harvard, Vancouver, ISO, and other styles
36

Spalding, B. J. "Roche and SKB sink big bucks into genomics". Nature Biotechnology 12, no. 6 (June 1994): 559–60. http://dx.doi.org/10.1038/nbt0694-559.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
37

Thiessard, F., and V. Koutkias. "Big Data - Smart Health Strategies". Yearbook of Medical Informatics 23, no. 01 (August 2014): 48–51. http://dx.doi.org/10.15265/iy-2014-0031.

Full text of the source
Abstract:
Summary Objectives: To select best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. Methods: A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. Results: The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions: The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but still the promises exceed the current outcomes. As we are getting closer towards a solid foundation with respect to common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate to reach the potential that big data offer for personalized medicine and smart health strategies in the near future.
APA, Harvard, Vancouver, ISO, and other styles
38

Williams, Anna Marie, Yong Liu, Kevin R. Regner, Fabrice Jotterand, Pengyuan Liu, and Mingyu Liang. "Artificial intelligence, physiological genomics, and precision medicine". Physiological Genomics 50, no. 4 (April 1, 2018): 237–43. http://dx.doi.org/10.1152/physiolgenomics.00119.2017.

Full text of the source
Abstract:
Big data are a major driver in the development of precision medicine. Efficient analysis methods are needed to transform big data into clinically-actionable knowledge. To accomplish this, many researchers are turning toward machine learning (ML), an approach of artificial intelligence (AI) that utilizes modern algorithms to give computers the ability to learn. Much of the effort to advance ML for precision medicine has been focused on the development and implementation of algorithms and the generation of ever larger quantities of genomic sequence data and electronic health records. However, relevance and accuracy of the data are as important as quantity of data in the advancement of ML for precision medicine. For common diseases, physiological genomic readouts in disease-applicable tissues may be an effective surrogate to measure the effect of genetic and environmental factors and their interactions that underlie disease development and progression. Disease-applicable tissue may be difficult to obtain, but there are important exceptions such as kidney needle biopsy specimens. As AI continues to advance, new analytical approaches, including those that go beyond data correlation, need to be developed and ethical issues of AI need to be addressed. Physiological genomic readouts in disease-relevant tissues, combined with advanced AI, can be a powerful approach for precision medicine for common diseases.
APA, Harvard, Vancouver, ISO, and other styles
39

Musa, Aliyu, Matthias Dehmer, Olli Yli-Harja, and Frank Emmert-Streib. "Exploiting Genomic Relations in Big Data Repositories by Graph-Based Search Methods". Machine Learning and Knowledge Extraction 1, no. 1 (November 22, 2018): 205–10. http://dx.doi.org/10.3390/make1010012.

Full text of the source
Abstract:
We are living at a time that allows the generation of mass data in almost any field of science. For instance, in pharmacogenomics there exist a number of big data repositories, e.g., the Library of Integrated Network-based Cellular Signatures (LINCS), that provide millions of measurements at the genomics level. However, to translate these data into meaningful information, the data need to be analyzable. The first step of such an analysis is the deliberate selection of subsets of raw data for studying dedicated research questions. Unfortunately, this is a non-trivial problem when millions of individual data files are available with an intricate connection structure induced by experimental dependencies. In this paper, we argue for the need to introduce such search capabilities for big genomics data repositories, with a specific discussion of LINCS. Specifically, we suggest the introduction of smart interfaces that exploit the connections among individual raw data files, which give rise to a network structure, by means of graph-based searches.
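The graph-based search idea can be sketched with a toy example: raw data files become nodes, shared experimental conditions become edges, and the connected component around a seed file yields an analyzable subset. The metadata and connection rule below are invented for illustration and do not model the actual LINCS interfaces.

```python
# Toy sketch (invented metadata, not an actual LINCS interface): represent raw
# data files as graph nodes joined by shared experimental conditions, then use
# graph traversal to pull out one connected "experimental context".
import networkx as nx

files = {
    "expA_rep1": {"cell_line": "MCF7", "perturbagen": "drugX"},
    "expA_rep2": {"cell_line": "MCF7", "perturbagen": "drugX"},
    "expB_rep1": {"cell_line": "MCF7", "perturbagen": "drugY"},
    "expC_rep1": {"cell_line": "PC3",  "perturbagen": "drugZ"},
}

G = nx.Graph()
G.add_nodes_from(files)
for a in files:
    for b in files:
        if a < b and files[a]["cell_line"] == files[b]["cell_line"]:
            G.add_edge(a, b, reason="same cell line")

# All files reachable from a seed file form the subset selected for analysis.
print(sorted(nx.node_connected_component(G, "expA_rep1")))
# ['expA_rep1', 'expA_rep2', 'expB_rep1']
```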
APA, Harvard, Vancouver, ISO, and other styles
40

SANTIYA, P., V. DHANAKOTI, and B. MUTHUSENTHIL. "BIG DATA ENGINEERING". i-manager’s Journal on Cloud Computing 8, no. 1 (2021): 35. http://dx.doi.org/10.26634/jcc.8.1.18456.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
41

Briscoe, James, and Oscar Marín. "Looking at neurodevelopment through a big data lens". Science 369, no. 6510 (September 17, 2020): eaaz8627. http://dx.doi.org/10.1126/science.aaz8627.

Full text of the source
Abstract:
The formation of the human brain, which contains nearly 100 billion neurons making an average of 1000 connections each, represents an astonishing feat of self-organization. Despite impressive progress, our understanding of how neurons form the nervous system and enable function is very fragmentary, especially for the human brain. New technologies that produce large volumes of high-resolution measurements—big data—are now being brought to bear on this problem. Single-cell molecular profiling methods allow the exploration of neural diversity with increasing spatial and temporal resolution. Advances in human genetics are shedding light on the genetic architecture of neurodevelopmental disorders, and new approaches are revealing plausible neurobiological mechanisms underlying these conditions. Here, we review the opportunities and challenges of integrating large-scale genomics and genetics for the study of brain development.
APA, Harvard, Vancouver, ISO, and other styles
42

Feltus, Frank A., Joseph R. Breen, Juan Deng, Ryan S. Izard, Christopher A. Konger, Walter B. Ligon, Don Preuss, and Kuang-Ching Wang. "The Widening Gulf between Genomics Data Generation and Consumption: A Practical Guide to Big Data Transfer Technology". Bioinformatics and Biology Insights 9s1 (January 2015): BBI.S28988. http://dx.doi.org/10.4137/bbi.s28988.

Full text of the source
Abstract:
In the last decade, high-throughput DNA sequencing has become a disruptive technology and pushed the life sciences into a distributed ecosystem of sequence data producers and consumers. Given the power of genomics and declining sequencing costs, biology is an emerging “Big Data” discipline that will soon enter the exabyte data range when all subdisciplines are combined. These datasets must be transferred across commercial and research networks in creative ways since sending data without thought can have serious consequences on data processing time frames. Thus, it is imperative that biologists, bioinformaticians, and information technology engineers recalibrate data processing paradigms to fit this emerging reality. This review attempts to provide a snapshot of Big Data transfer across networks, which is often overlooked by many biologists. Specifically, we discuss four key areas: 1) data transfer networks, protocols, and applications; 2) data transfer security including encryption, access, firewalls, and the Science DMZ; 3) data flow control with software-defined networking; and 4) data storage, staging, archiving and access. A primary intention of this article is to orient the biologist in key aspects of the data transfer process in order to frame their genomics-oriented needs to enterprise IT professionals.
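A back-of-envelope calculation makes the review's point about transfer planning concrete, and a chunked checksum is a common way to verify integrity once a transfer completes. The figures and helper functions below are illustrative assumptions, not taken from the article.

```python
# Back-of-envelope illustration (not from the review): estimated transfer time
# for a genomics dataset at a given effective bandwidth, plus a chunked SHA-256
# checksum of the kind commonly used to verify integrity after transfer.
import hashlib

def transfer_hours(dataset_bytes, effective_gbit_per_s):
    return dataset_bytes * 8 / (effective_gbit_per_s * 1e9) / 3600

# 100 TB of sequence data over a well-tuned 10 Gb/s path vs. a congested 1 Gb/s link.
print(round(transfer_hours(100e12, 10), 1), "h at 10 Gb/s")   # ~22.2 h
print(round(transfer_hours(100e12, 1), 1), "h at 1 Gb/s")     # ~222.2 h

def sha256_of_file(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so arbitrarily large files fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```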
APA, Harvard, Vancouver, ISO, and other styles
43

Xin, Zhou. "Understanding biodiversity using genomics: Hooke’s microscope in the era of big data". Biodiversity Science 27, no. 5 (2019): 475–79. http://dx.doi.org/10.17520/biods.2019161.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
44

Huang, Dandan, Xianfu Yi, Yao Zhou, Hongcheng Yao, Hang Xu, Jianhua Wang, Shijie Zhang, et al. "Ultrafast and scalable variant annotation and prioritization with big functional genomics data". Genome Research 30, no. 12 (October 15, 2020): 1789–801. http://dx.doi.org/10.1101/gr.267997.120.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
45

Khomtchouk, Bohdan B., James R. Hennessy, and Claes Wahlestedt. "shinyheatmap: Ultra fast low memory heatmap web interface for big data genomics". PLOS ONE 12, no. 5 (May 11, 2017): e0176334. http://dx.doi.org/10.1371/journal.pone.0176334.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
46

Pralle, R. S., and H. M. White. "Symposium review: Big data, big predictions: Utilizing milk Fourier-transform infrared and genomics to improve hyperketonemia management". Journal of Dairy Science 103, no. 4 (April 2020): 3867–73. http://dx.doi.org/10.3168/jds.2019-17379.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
47

Zelco, Aura, Pattama Wapeesittipan, and Anagha Joshi. "Insights into Sex and Gender Differences in Brain and Psychopathologies Using Big Data". Life 13, no. 8 (August 2, 2023): 1676. http://dx.doi.org/10.3390/life13081676.

Full text of the source
Abstract:
The societal implications of sex and gender (SG) differences in the brain are profound, as they influence brain development, behavior, and, importantly, the presentation, prevalence, and therapeutic response of diseases. Technological advances have sped up the identification and characterization of SG differences during development and in psychopathologies. The main aim of this review is to elaborate on how new technological advancements, such as genomics, imaging, and emerging biobanks, coupled with bioinformatics analyses of the data generated by these technologies, have facilitated the identification and characterization of SG differences in the human brain through development and psychopathologies. First, a brief explanation of SG concepts is provided, along with a developmental and evolutionary context. We then describe physiological SG differences in brain activity and function, and in psychopathologies identified through imaging techniques. We further provide an overview of insights into SG differences obtained using genomics, specifically taking advantage of large cohorts and biobanks. We finally emphasize how bioinformatics analyses of big data generated by emerging technologies provide new opportunities to reduce SG disparities in health outcomes, and we discuss the major challenges involved.
APA, Harvard, Vancouver, ISO, and other styles
48

Erdeniz, Seda Polat, Andreas Menychtas, Ilias Maglogiannis, Alexander Felfernig, and Thi Ngoc Trang Tran. "Recommender systems for IoT enabled quantified-self applications". Evolving Systems 11, no. 2 (October 30, 2019): 291–304. http://dx.doi.org/10.1007/s12530-019-09302-8.

Full text of the source
Abstract:
Abstract As an emerging trend in big data science, applications based on the Quantified-Self (QS) engage individuals in the self-tracking of any kind of biological, physical, behavioral, or environmental information as individuals or groups. There are new needs and opportunities for recommender systems to develop new models/approaches to support QS application users. Recommender systems can help to more easily identify relevant artifacts for users and thus improve user experiences. Currently recommender systems are widely and effectively used in the e-commerce domain (e.g., online music services, online bookstores). Next-generation QS applications could include more recommender tools for assisting the users of QS systems based on their personal self-tracking data streams from wearable electronics, biosensors, mobile phones, genomic data, and cloud-based services. In this paper, we propose three new recommendation approaches for QS applications: Virtual Coach, Virtual Nurse, and Virtual Sleep Regulator which help QS users to improve their health conditions. Virtual Coach works like a real fitness coach to recommend personalized work-out plans whereas Virtual Nurse considers the medical history and health targets of a user to recommend a suitable physical activity plan. Virtual Sleep Regulator is specifically designed for insomnia (a kind of sleep disorder) patients to improve their sleep quality with the help of recommended physical activity and sleep plans. We explain how these proposed recommender technologies can be applied on the basis of the collected QS data to create qualitative recommendations for user needs. We present example recommendation results of Virtual Sleep Regulator on the basis of the dataset from a real world QS application.
APA, Harvard, Vancouver, ISO, and other styles
49

Noor, Ahmed. "Big Data". Mechanical Engineering 135, no. 10 (October 1, 2013): 32–37. http://dx.doi.org/10.1115/1.2013-oct-1.

Full text of the source
Abstract:
This article reviews the benefits of Big Data in the manufacturing industry as more sophisticated and automated data analytics technologies are being developed. The challenge of Big Data is that it requires management tools to make sense of large sets of heterogeneous information. A new wave of inexpensive electronic sensors, microprocessors, and other components enables more automation in factories, and vast amounts of data to be collected along the way. In automated manufacturing, Big Data can help reduce defects and control costs of products. Smart manufacturing is expected to evolve into the new paradigm of cognitive manufacturing, in which machining and measurements are merged to form more flexible and controlled environments. The article also suggests that the emerging tools being developed to process and manage the Big Data generated by myriads of sensors and other devices can lead to the next scientific, technological, and management revolutions. The revolutions will enable an interconnected, efficient global industrial ecosystem that will fundamentally change how products are invented, manufactured, shipped, and serviced.
APA, Harvard, Vancouver, ISO, and other styles
50

Lesk, Michael. "Big Data, Big Brother, Big Money". IEEE Security & Privacy 11, no. 4 (July 2013): 85–89. http://dx.doi.org/10.1109/msp.2013.81.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles