Journal articles on the topic "Non-structured data"

Follow this link to see other types of publications on the topic: Non-structured data.

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the top 50 journal articles for your research on the topic "Non-structured data".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if it is included in the metadata.

Browse journal articles from many scientific areas and compile an accurate bibliography.

1

Paradis, Rosemary D., Daniel Davenport, David Menaker, and Sarah M. Taylor. "Detection of Groups in Non-Structured Data". Procedia Computer Science 12 (2012): 412–17. http://dx.doi.org/10.1016/j.procs.2012.09.095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Genzel, Martin, and Peter Jung. "Recovering Structured Data From Superimposed Non-Linear Measurements". IEEE Transactions on Information Theory 66, no. 1 (January 2020): 453–77. http://dx.doi.org/10.1109/tit.2019.2932426.

3

Cai, Ting, and Xuemei Yang. "Non-structured Data Integration Access Policy Using Hadoop". Wireless Personal Communications 102, no. 2 (December 13, 2017): 895–908. http://dx.doi.org/10.1007/s11277-017-5112-4.

4

Luo, Wen Hua. "The Processing and Analyzing of Non-Structured Data in Digital Investigation". Advanced Materials Research 774-776 (September 2013): 1807–11. http://dx.doi.org/10.4028/www.scientific.net/amr.774-776.1807.

Abstract:
Non-structured data accounts for a much larger share of total data than structured data, yet research on methods for processing and analyzing non-structured data is not as deep as that on structured data. This paper illustrates the importance of research on non-structured data and then, from the perspective of digital investigation, describes the key techniques for processing and analyzing it. Drawing on the self-developed Intelligent Analyzing System of Mass Case Information and the background of handling online ball gambling in mainland China, it illustrates in detail the specific application of non-structured data processing and analysis in digital investigation.
5

Deng, Song. "Dynamic Non-Cooperative Structured Deep Web Selection". Applied Mechanics and Materials 644-650 (September 2014): 2911–14. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.2911.

Abstract:
Most structured deep web data sources are non-cooperative, so building an accurate content summary of each data source by sampling is the core technology of data source selection. The content of a deep web data source is updated from time to time, yet existing efficient methods for non-cooperative structured data source selection do not consider the summary-update problem. A stale data source summary cannot accurately characterize the content of the data source, which has a large impact on data source selection. Based on this, we propose a dynamic data source selection method for non-cooperative structured deep web sources that combines subject-heading sampling with subject-heading extension. Experimental results show that our dynamic structured data source selection method achieves good recall and precision while remaining efficient.
6

Silva, Carlos Anderson Oliveira, Rafael Gonzalez-Otero, Michel Bessani, Liliana Otero Mendoza, and Cristiano L. de Castro. "Interpretable risk models for Sleep Apnea and Coronary diseases from structured and non-structured data". Expert Systems with Applications 200 (August 2022): 116955. http://dx.doi.org/10.1016/j.eswa.2022.116955.

7

Hu, Changjun, Chunping Ouyang, Jinbin Wu, Xiaoming Zhang, and Chongchong Zhao. "Non-Structured Materials Science Data Sharing Based on Semantic Annotation". Data Science Journal 8 (2009): 52–61. http://dx.doi.org/10.2481/dsj.007-042.

8

Gibiino, Fabio, Vincenzo Positano, Florian Wiesinger, Giulio Giovannetti, Luigi Landini, and Maria Filomena Santarelli. "Structured errors in reconstruction methods for Non-Cartesian MR data". Computers in Biology and Medicine 43, no. 12 (December 2013): 2256–62. http://dx.doi.org/10.1016/j.compbiomed.2013.10.013.

9

Xin, Rui, Tinghua Ai, Ruoxin Zhu, Bo Ai, Min Yang, and Liqiu Meng. "A Multi-Scale Virtual Terrain for Hierarchically Structured Non-Location Data". ISPRS International Journal of Geo-Information 10, no. 6 (June 3, 2021): 379. http://dx.doi.org/10.3390/ijgi10060379.

Abstract:
Metaphors are commonly used rhetorical devices in linguistics. Among the various types, spatial metaphors are relatively common because of their intuitive and sensible nature. Many studies in the field of visualization also use spatial metaphors to express non-location data. For instance, virtual terrains can be built using computer technologies and visualization methods. In a virtual terrain, the originally abstract data acquires specific positions, shapes, colors, etc., so that people's visual and imagistic thinking can play a role. In addition, the theories and methods of the spatial field can be applied to help people observe and analyze abstract data. However, current research makes limited use of these spatial theories and methods; for instance, many existing map theories and methods are not well integrated. It is also difficult to fully display data in virtual terrains, such as showing structure and relationships at the same time. Facing these problems, this study takes hierarchical data as the research object and expresses both the data structure and the data relationships from a spatial perspective. First, high-dimensional non-location data is converted into two-dimensional discrete points by a dimensionality reduction algorithm to reflect the data relationships. On this basis, kernel density estimation interpolation and fractal noise algorithms are used to construct terrain features in the virtual terrain. Under the control of the kernel density search radius and the noise proportion, a multi-scale terrain model is built with the help of level-of-detail (LOD) technology to express the hierarchical structure and support multi-scale analysis of the data. Finally, experiments with real data are carried out to verify the proposed method.
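The core of the pipeline in this abstract (2-D points from dimensionality reduction, raised into a height field by kernel density estimation and mixed with a controlled noise proportion) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the sample points, grid size, bandwidths, and noise ratio are all invented for the example.

```python
import numpy as np

def kde_heightfield(points, grid_n=64, bandwidth=0.1, noise_ratio=0.1, seed=0):
    """Build a toy 'virtual terrain' from 2-D points: a Gaussian kernel
    density surface plus a controlled proportion of random noise."""
    xs = np.linspace(0.0, 1.0, grid_n)
    gx, gy = np.meshgrid(xs, xs)                       # grid coordinates
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)  # (grid_n^2, 2)
    # Gaussian KDE: sum of kernels centred on the data points.
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    density = np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(axis=1)
    density /= density.max()                           # normalise to [0, 1]
    rng = np.random.default_rng(seed)
    noise = rng.random(density.shape)
    height = (1.0 - noise_ratio) * density + noise_ratio * noise
    return height.reshape(grid_n, grid_n)

# Two point clusters become two 'hills'; a larger bandwidth merges them
# into one coarser terrain feature (the multi-scale effect).
pts = np.array([[0.25, 0.25], [0.3, 0.3], [0.75, 0.75], [0.7, 0.7]])
fine = kde_heightfield(pts, bandwidth=0.05, noise_ratio=0.0)
coarse = kde_heightfield(pts, bandwidth=0.5, noise_ratio=0.0)
```

Widening the kernel bandwidth merges the point clusters into a single coarse terrain feature, which is the mechanism behind the multi-scale (LOD) models the abstract describes.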
10

Fan, Jianqing, and Donggyu Kim. "Structured volatility matrix estimation for non-synchronized high-frequency financial data". Journal of Econometrics 209, no. 1 (March 2019): 61–78. http://dx.doi.org/10.1016/j.jeconom.2018.12.019.

11

Tekli, Gilbert. "A survey on semi-structured web data manipulations by non-expert users". Computer Science Review 40 (May 2021): 100367. http://dx.doi.org/10.1016/j.cosrev.2021.100367.

12

Dong, Yafei, Pei Shi, Qi Lv, Panpan Li, and Gang Fang. "Use of Structured Query Language to Simplify and Analyze Non-Redundant Data". Journal of Computational and Theoretical Nanoscience 14, no. 8 (August 1, 2017): 3741–46. http://dx.doi.org/10.1166/jctn.2017.6667.

13

Zorrilla, M. E., E. Mora, and J. L. Crespo. "Non-Structured Data Management by Means of Object Relational Database Management Systems". Systems Analysis Modelling Simulation 43, no. 9 (September 2003): 1173–87. http://dx.doi.org/10.1080/02329290310001600264.

14

Calzolari, Nicoletta. "II. Non-Language Oriented Reports: Structured Data Bases: Report on two Workshops on Lexical Data Bases". Literary and Linguistic Computing 2, no. 1 (1987): 49–50. http://dx.doi.org/10.1093/llc/2.1.49.

15

Kaplan, Adam, Eric F. Lock, and Mark Fiecas. "Bayesian GWAS with Structured and Non-Local Priors". Bioinformatics 36, no. 1 (June 22, 2019): 17–25. http://dx.doi.org/10.1093/bioinformatics/btz518.

Abstract:
Motivation: The flexibility of a Bayesian framework is promising for GWAS, but current approaches can benefit from more informative prior models. We introduce a novel Bayesian approach to GWAS, called Structured and Non-Local Priors (SNLPs) GWAS, that improves over existing methods in two important ways. First, we describe a model that allows for a marker's gene-parent membership and other characteristics to influence its probability of association with an outcome. Second, we describe a non-local alternative model for differential minor allele rates at each marker, in which the null and alternative hypotheses have no common support. Results: We employ a non-parametric model that allows for clustering of the genes in tandem with a regression model for marker-level covariates, and demonstrate how incorporating these additional characteristics can improve power. We further demonstrate that our non-local alternative model gives symmetric rates of convergence for the null and alternative hypotheses, whereas commonly used local alternative models have asymptotic rates that favor the alternative hypothesis over the null. We demonstrate the robustness and flexibility of our structured and non-local model for different data-generating scenarios and signal-to-noise ratios. We apply our Bayesian GWAS method to single nucleotide polymorphism data collected from a pool of Alzheimer's disease and cognitively normal patients from the Alzheimer's Disease Neuroimaging Initiative. Availability and implementation: R code to perform the SNLPs method is available at https://github.com/lockEF/BayesianScreening.
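The defining property of a non-local alternative (the null θ = 0 and the alternative share no common support at the null) is easy to see with a first-order product-moment (pMOM) prior, a standard non-local prior family. A hedged numpy sketch, not the SNLPs code from the linked repository; the τ and σ² values here are arbitrary:

```python
import numpy as np

def pmom_density(theta, tau=1.0, sigma2=1.0):
    """First-order product-moment (pMOM) non-local prior density:
    a normal density multiplied by theta^2 / (tau * sigma2), so it
    vanishes exactly at theta = 0."""
    norm = np.exp(-theta ** 2 / (2 * tau * sigma2)) / np.sqrt(2 * np.pi * tau * sigma2)
    return (theta ** 2 / (tau * sigma2)) * norm

def normal_density(theta, tau=1.0):
    """A conventional 'local' normal prior, positive at theta = 0."""
    return np.exp(-theta ** 2 / (2 * tau)) / np.sqrt(2 * np.pi * tau)

# The non-local prior is exactly zero at the null value, while the
# local normal prior is not: pmom_density(0.0) == 0.0,
# normal_density(0.0) = 1 / sqrt(2*pi) ≈ 0.399.
```

Because E[θ²] = τσ² under N(0, τσ²), the θ²/(τσ²) factor leaves the density properly normalized, which the test below checks numerically.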
16

Weichselbraun, Albert, Gerhard Wohlgenannt, and Arno Scharl. "Refining non-taxonomic relation labels with external structured data to support ontology learning". Data & Knowledge Engineering 69, no. 8 (August 2010): 763–78. http://dx.doi.org/10.1016/j.datak.2010.02.010.

17

Kempaiah, Prakasha, Claudia R. Libertin, Rohit A. Chitale, Islam Naeyma, Vasili Pleqi, Johnathan M. Sheele, Michelle J. Iandiorio, Almira L. Hoogesteijn, Thomas R. Caulfield, and Ariel L. Rivas. "Decoding Immuno-Competence: A Novel Analysis of Complete Blood Cell Count Data in COVID-19 Outcomes". Biomedicines 12, no. 4 (April 15, 2024): 871. http://dx.doi.org/10.3390/biomedicines12040871.

Abstract:
Background: While 'immuno-competence' is a well-known term, it lacks an operational definition. To address this omission, this study explored whether the temporal and structured data of the complete blood cell count (CBC) can rapidly estimate immuno-competence. To this end, one or more ratios that included data on all monocytes, lymphocytes and neutrophils were investigated. Materials and methods: Longitudinal CBC data collected from 101 COVID-19 patients (291 observations) were analyzed. Dynamics were estimated with several approaches, which included non-structured (the classic CBC format) and structured data. Structured data were assessed as complex ratios that capture multicellular interactions among leukocytes. In comparing survivors with non-survivors, the hypothesis that immuno-competence may exhibit feedback-like (oscillatory or cyclic) responses was tested. Results: While non-structured data did not distinguish survivors from non-survivors, structured data revealed immunological and statistical differences between outcomes: while survivors exhibited oscillatory data patterns, non-survivors did not. In survivors, many variables (including IL-6, hemoglobin and several complex indicators) showed values above or below the levels observed on day 1 of the hospitalization period, displaying L-shaped data distributions (positive kurtosis). In contrast, non-survivors did not exhibit kurtosis. Three immunologically defined data subsets included only survivors. Because information was based on visual patterns generated in real time, this method can, potentially, provide information rapidly. Discussion: The hypothesis that immuno-competence expresses feedback-like loops when immunological data are structured was not rejected. This function seemed to be impaired in immuno-suppressed individuals. While this method rapidly informs, it is only a guide that, to be confirmed, requires additional tests. Despite this limitation, the fact that three protective (survival-associated) immunological data subsets were observed from day 1 supports many clinical decisions, including early and personalized prognosis and the identification of targets that immunomodulatory therapies could pursue. Because it extracts more information from the same data, structured data may replace the century-old format of the CBC.
18

Peng, Wei Ping, Yuan Hua Zhong, Zhao Liu, Jing Li, and Rong Gao. "Research on Data Processing in PLM Product Based on Cloud Platform". Advanced Materials Research 1046 (October 2014): 469–76. http://dx.doi.org/10.4028/www.scientific.net/amr.1046.469.

Abstract:
To solve the problem of massive PLM product data analysis, a PLM product data analysis system based on the OpenStack cloud platform is proposed. It includes an analysis method for structured product data based on a data warehouse and an analysis method for non-structured product data based on Hadoop. In the former, product data is first filtered, transformed, and loaded into the warehouse; the required data cube is then extracted; finally, the structured product data is analyzed with the warehouse's analysis tools. In the latter, product data is first loaded into the distributed file system, and the massive non-structured PLM product data is analyzed with a data mining algorithm programmed in Java on MapReduce. Applying these methods to massive PLM product data analysis shows that they achieve higher efficiency.
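The Hadoop-based branch of the system rests on the MapReduce pattern: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group. A toy, pure-Python sketch of that pattern follows; the "PLM log" records and the per-part counting job are invented for illustration and are not from the paper:

```python
from collections import defaultdict
from itertools import chain

# Hypothetical PLM event records: "<part_id> <event>" lines to be
# counted per part, standing in for the non-structured product data.
records = [
    "P-100 checkout", "P-200 checkout", "P-100 revise",
    "P-100 release", "P-200 revise",
]

def map_phase(line):
    part, _event = line.split()
    yield (part, 1)                      # emit one key/value pair per record

def shuffle(pairs):
    groups = defaultdict(list)           # group values by key, as Hadoop does
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(vals) for key, vals in groups.items()}

counts = reduce_phase(shuffle(chain.from_iterable(map_phase(r) for r in records)))
```

In a real Hadoop job the shuffle is performed by the framework across machines; here it is a single in-memory dictionary, which is enough to show the data flow.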
19

Bhuvan, Ruby, Manimala Puri, and Umesh Jain. "A Robust Approach to Secure Structured Sensitive Data using Non-Deterministic Random Replacement Algorithm". International Journal of Computer Applications 179, no. 50 (June 15, 2018): 17–21. http://dx.doi.org/10.5120/ijca2018917306.

20

Rios, Katrina Caridad, Arpitha Thakkalapally, Jacob Koskimaki, Mark Riffon, Robert S. Miller, George Anthony Komatsoulis, and Danielle Potter. "Impact of curated data on electronic quality measure capture rates within CancerLinQ". Journal of Clinical Oncology 38, no. 29_suppl (October 10, 2020): 307. http://dx.doi.org/10.1200/jco.2020.38.29_suppl.307.

Abstract:
307 Background: Accurate calculation of key quality measures is critical for informing high-quality, value-based cancer care that is consistent with clinical guidelines. The American Society of Clinical Oncology (ASCO)'s CancerLinQ enables oncology organizations around the US to view near-real-time quality measure dashboards sourced from structured electronic medical record (EMR) data; however, use of structured data in key fields is highly variable. Unstructured content, such as progress notes, contains important clinical information on treatment and disease status, which can then undergo curation. This process involves trained data abstractors searching for key data elements through a combination of manual review and natural language processing (NLP) to extract structured data from unstructured content. We hypothesize that inclusion of curated data substantially augments structured data alone by more accurately representing the patient journey, thus improving the validity of quality measures across EMRs. Methods: A total of 96,399 records across 57,232 patients from 4 EMR vendors were analyzed from 2018-2019 across structured EMR and curated data. Each record represents 1 of 7 key data elements used to calculate the Staging Documented within One Month of First Office Visit quality measure. Structured documentation of these data elements determines if a patient is concordant with the measure, meaning they were staged within 31 days of their first visit after diagnosis, or non-concordant, meaning they were not staged within the appropriate window. Results: More than a quarter of records from patients concordant or non-concordant with the measure (28.85%) had key data elements sourced from curation. In total, 33% of all records among concordant patients were sourced from curation. Relying on structured data alone would show only 67% concordance versus 97.5% concordance among curated records. This demonstrates that appropriate care may often be delivered but documentation may be missing in a significant fraction of structured EMR data, thus limiting accurate reporting capabilities. Conclusions: NLP-assisted curation can meaningfully supplement structured EMR data by providing a more accurate picture of care rendered, which can have substantial impacts on clinical care, quality reporting, and business operations. [Table: see text]
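The curation step this abstract describes, pulling a structured data element such as cancer stage out of free-text notes and checking it against the 31-day window, can be mimicked with a small rule-based extractor. This is a hypothetical sketch: the note texts, the regex, and the helper function are invented for illustration and are not CancerLinQ's actual NLP pipeline.

```python
import re
from datetime import date

# Hypothetical progress-note snippets; the stage lives only in free text.
notes = [
    (date(2019, 3, 2), "Assessment: cT2N0M0, stage IIA adenocarcinoma."),
    (date(2019, 3, 9), "Plan unchanged; no new staging information."),
]

# Matches e.g. "stage IIA", "Stage IV" (roman numeral plus optional letter).
STAGE_RE = re.compile(r"\bstage\s+(0|I{1,3}V?|IV)([A-C])?\b", re.IGNORECASE)

def curate_stage(notes, first_visit, window_days=31):
    """Return the first stage mention documented within the window after
    the first visit, mimicking NLP-assisted chart abstraction."""
    for note_date, text in notes:
        match = STAGE_RE.search(text)
        if match and (note_date - first_visit).days <= window_days:
            return "".join(part for part in match.groups() if part).upper()
    return None
```

A note dated inside the window yields the stage ("IIA" here); if the first visit were months earlier, the same notes would yield nothing and the patient would count as non-concordant on structured data alone.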
21

Dubey, Abhishek. "BIG DATA". International Journal of Engineering Technologies and Management Research 5, no. 2 (April 25, 2020): 9–12. http://dx.doi.org/10.29121/ijetmr.v5.i2.2018.606.

Abstract:
The term 'Big Data' describes innovative techniques and technologies to capture, store, distribute, manage, and analyze petabyte-scale or larger datasets arriving at high velocity and in diverse structures. Big data can be structured, non-structured, or semi-structured, making routine data-management techniques inadequate. Data is generated from many different sources and can arrive in the system at varying rates. To handle this volume of data in an economical and efficient way, parallelism is used. Big Data is data whose scale, diversity, and complexity require new architectures, techniques, algorithms, and analytics to manage it and extract value and hidden knowledge from it. Hadoop is the core platform for structuring Big Data and solves the problem of making it useful for analytics. Hadoop is an open-source software project that enables the distributed processing of huge data sets across clusters of commodity servers. It is designed to scale up from a single server to thousands of machines, with a high degree of fault tolerance.
22

Zhuo, Ming, Yunzhuo Liu, Leyuan Liu, and Shijie Zhou. "Local Cluster-Aware Attention for Non-Euclidean Structure Data". Symmetry 15, no. 4 (March 31, 2023): 837. http://dx.doi.org/10.3390/sym15040837.

Abstract:
Meaningful representation of large-scale non-Euclidean structured data, especially in complex domains like network security and IoT systems, is one of the critical problems of contemporary machine learning and deep learning. Many successful graph-based models and algorithms deal with non-Euclidean structured data. However, it is often undesirable to derive node representations by walking through the complete topology of a system or network (graph) when it has a very large or complicated structure. An important issue is using neighborhood knowledge to deduce the topology of the symmetric network or graph. First, traditional approaches to the graph representation learning problem are surveyed from machine learning and deep learning perspectives. Second, locally encoded neighborhood information is included in the attention mechanism to define node solidarity and enhance node capture and interactions. The performance of the proposed model is then assessed on transduction and induction tasks, including downstream node classification. According to experiments, the cluster-aware attention model equals or reaches the state-of-the-art performance of several well-established node classification benchmarks and does not depend on prior knowledge of the complete network structure. Following a summary of the research, we discuss problems and difficulties that must be addressed in developing future graph signal processing algorithms and graph deep learning models, such as the interpretability and adversarial resilience of graph embeddings. At the same time, this work has a very positive impact on network security and artificial intelligence security.
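The key idea, restricting attention to a node's local neighbourhood so that no prior knowledge of the complete network structure is needed, can be sketched as masked dot-product attention over a small graph. This is generic neighbourhood-masked attention under invented data, not the paper's cluster-aware model:

```python
import numpy as np

def local_attention(features, adj):
    """Dot-product attention restricted to each node's local neighbourhood:
    scores for non-neighbours are masked out before the softmax, so a node
    attends only to itself and adjacent nodes (no global graph knowledge)."""
    scores = features @ features.T                  # pairwise similarities
    mask = adj + np.eye(adj.shape[0])               # neighbours + self-loops
    scores = np.where(mask > 0, scores, -np.inf)    # hide non-neighbours
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ features                       # aggregated node features

# A 4-node path graph: node 0 is only linked to node 1.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.eye(4)
out = local_attention(feats, adj)
```

With one-hot features the output rows are exactly the attention weights, so node 0 puts zero weight on the non-adjacent nodes 2 and 3 by construction.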
23

Ding, Kaize, Zhe Xu, Hanghang Tong, and Huan Liu. "Data Augmentation for Deep Graph Learning". ACM SIGKDD Explorations Newsletter 24, no. 2 (November 29, 2022): 61–77. http://dx.doi.org/10.1145/3575637.3575646.

Abstract:
Graph neural networks, a powerful deep learning tool to model graph-structured data, have demonstrated remarkable performance on numerous graph learning tasks. To address the data noise and data scarcity issues in deep graph learning, the research on graph data augmentation has intensified lately. However, conventional data augmentation methods can hardly handle graph-structured data, which is defined in non-Euclidean space with multi-modality. In this survey, we formally formulate the problem of graph data augmentation and further review the representative techniques and their applications in different deep graph learning problems. Specifically, we first propose a taxonomy for graph data augmentation techniques and then provide a structured review by categorizing the related work based on the augmented information modalities. Moreover, we summarize the applications of graph data augmentation in two representative problems in data-centric deep graph learning: (1) reliable graph learning, which focuses on enhancing the utility of the input graph as well as the model capacity via graph data augmentation; and (2) low-resource graph learning, which targets enlarging the labeled training data scale through graph data augmentation. For each problem, we also provide a hierarchical problem taxonomy and review the existing literature related to graph data augmentation. Finally, we point out promising research directions and the challenges in future research.
24

Fawzia Rahim, Umme, and Hiroshi Mineno. "Data augmentation method for strawberry flower detection in non-structured environment using convolutional object detection networks". Journal of Agricultural and Crop Research 8, no. 11 (November 4, 2020): 260–71. http://dx.doi.org/10.33495/jacr_v8i11.20.180.

Abstract:
Deep learning has demonstrated significant capabilities for learning image features and presents many opportunities for agricultural automation. Deep neural networks typically require large and diverse training datasets to learn generalizable models. However, this requirement is challenging for applications in agricultural automation systems, since collecting and annotating large numbers of training samples from field crops and greenhouses is an expensive and complicated process owing to the large diversity of crops, growth seasons, and climate changes. This research proposes a new method for augmenting a training dataset using synthesized images that preserve the background context and texture of the data object. A synthetic dataset of 1800 images was generated from a reference dataset using image processing techniques. As the reference dataset, 100 real images of strawberry flowers were collected in greenhouses, and a further 230 were collected for evaluating detection performance. Experimental results demonstrate that the suggested method improves performance when applied to state-of-the-art convolutional object detectors, including Faster R-CNN, SSD, YOLOv3, and CenterNet, for the task of strawberry flower detection in a non-structured environment. The YOLOv3 w/darknet53 model achieved a 46.84% boost in performance, with average precision (AP) improving from 39.20% to 86.04% when augmentation with the synthetic dataset was applied. The AP of the Faster R-CNN w/resnet50, SSD w/resnet50 and FPN, and CenterNet w/hourglass52 models improved by 15.71, 18.42, and 22.24%, respectively. The Faster R-CNN w/resnet50 model provided the most significant strawberry flower detection performance with an AP of 90.84%, higher than the SSD w/resnet50 and FPN, YOLOv3 w/darknet53, and CenterNet w/hourglass52 models (88.56%, 86.04%, and 83.82%, respectively). Keywords: flower detection, deep convolutional neural network, data augmentation, synthetic dataset.
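The synthesis idea, pasting an object patch into a background crop so that context is preserved and the bounding-box annotation comes for free, can be sketched with plain numpy. This is a toy illustration: the 8×8 white "flower" patch, the image sizes, and the flip probability are invented, not the paper's procedure.

```python
import numpy as np

def synthesize(background, patch, top, left, rng):
    """Paste an object patch (here a stand-in 'flower') onto a background
    crop, with a random horizontal flip, preserving surrounding context."""
    img = background.copy()
    if rng.random() < 0.5:
        patch = patch[:, ::-1]                     # random horizontal flip
    h, w = patch.shape[:2]
    img[top:top + h, left:left + w] = patch        # paste onto the crop
    bbox = (top, left, top + h, left + w)          # annotation comes for free
    return img, bbox

rng = np.random.default_rng(42)
background = np.zeros((64, 64, 3), dtype=np.uint8)    # dark greenhouse crop
flower = np.full((8, 8, 3), 255, dtype=np.uint8)      # stand-in white blob
aug, bbox = synthesize(background, flower, top=10, left=20, rng=rng)
```

Every synthesized image is born with an exact bounding box, which is what makes this kind of augmentation cheap compared with manually annotating new greenhouse photos.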
25

Luijtens, K., F. Symons, and M. Vuylsteke-Wauters. "Linear and non-linear canonical correlation analysis: an exploratory tool for the analysis of group-structured data". Journal of Applied Statistics 21, no. 3 (January 1994): 43–61. http://dx.doi.org/10.1080/757583648.

26

Seong, Seung H. "Spectral method for non-Gaussian data generation by phase modeling: White noise phase versus structured phase". Engineering Structures 87 (March 2015): 105–15. http://dx.doi.org/10.1016/j.engstruct.2015.01.020.

27

Kshyvetskyy, Bogdan, Diana Kindzera, Yaroslav Sokolovskyy, Halyna Somar, and Ihor Sokolovskyi. "Prediction of the Strength of Oakwood Adhesive Joints Bonded with Thermoplastic Polyvinyl Acetate Adhesives". Chemistry & Chemical Technology 17, no. 1 (March 27, 2023): 110–17. http://dx.doi.org/10.23939/chcht17.01.110.

Abstract:
Among the several kinds of thermoplastic adhesives, structured and non-structured polyvinyl acetate (PVA) adhesives have a rather wide application and are currently used for forming adhesive joints from different wood species, especially oakwood. To ensure proper conditions for the use of oakwood adhesive joints, it is important to have fast and accurate methods of predicting their strength and durability. The strength changes of oakwood adhesive joints bonded with structured and non-structured PVA adhesives have been investigated in long-term experiments. Based on a generalization of the experimental data and theoretical predictions regarding the mechanism of adhesive seam formation, equations that allow the strength of oakwood adhesive joints bonded with non-structured and structured PVA adhesives to be calculated theoretically have been proposed. The proposed equations reproduce the experimental data with a sufficient accuracy of ±3.5% within the temperature range from 251 K to 306 K and the humidity range from 40% to 100%, and are therefore recommended for practical use.
28

Bhatewara, Ankita, and Kalyani Waghmare. "Highly Scalable Network Management Solution Using Cassandra". International Journal of Computers & Technology 13, no. 10 (October 30, 2014): 5085–89. http://dx.doi.org/10.24297/ijct.v13i10.2330.

Abstract:
With the current emphasis on Big Data, NoSQL databases have surged in popularity and are claimed to perform better than SQL databases. Traditional databases are designed for structured data and complex queries. In the cloud environment, data volumes are very large, the data is non-structured, and requests for data are dynamic; these characteristics raise new challenges for data storage and administration, and it is in this context that NoSQL databases come into the picture. This paper discusses some non-structured databases. It also shows how Cassandra can be used to improve network scalability compared to an RDBMS.
29

Di Berardino, Daniela, and Simone Vona. "Discovering the Relationship Between Big Data, Big Data Analytics, and Decision Making: A Structured Literature Review". European Scientific Journal, ESJ 19, no. 19 (July 31, 2023): 1. http://dx.doi.org/10.19044/esj.2023.v19n19p1.

Abstract:
This paper focuses on providing a structured literature review on the role of Big Data (BD) and Big Data Analytics (BDA) in supporting decision making. The study aims to systematize the knowledge, the primary results, and research gaps related to BD and BDA in strategic management and in decision making by providing a future research agenda. Adopting the methodology of Massaro et al. (2015), the structured literature review investigates this phenomenon analyzing a sample of 97 articles published in high-level scientific journals ranked in ABS list in the Marketing, Strategic Management, Ethics, Gender, and Social Responsibility area. Bibliometric analysis, content analysis, and the PRISMA protocol have been used for the review. The study unveils the subject of decisions, factors influencing good decisions, and the main effects of using BD and BDA in decision making. New organizational factors, data chain dynamics, and inhibitors should be explored to remove the obstacles in decision making. The relationship between BD/BDA and decision making remains underexplored in public organizations, non-profit organizations, and small and medium-sized firms.
30

Zhong, Yongmin. "Processing of 3D Unstructured Measurement Data for Reverse Engineering". International Journal of Intelligent Mechatronics and Robotics 1, no. 2 (April 2011): 42–51. http://dx.doi.org/10.4018/ijimr.2011040104.

Abstract:
One of the most difficult problems in reverse engineering is the processing of unstructured data. NURBS (Non-Uniform Rational B-Spline) surfaces are a popular tool for surface modeling. However, they cannot be directly created from unstructured data, as they are defined on a four-sided domain with explicit parametric directions. Therefore, in reverse engineering, it is necessary to process unstructured data into structured data from which NURBS surfaces can be created. This paper presents a methodology for processing unstructured data into structured data for creating NURBS surfaces. A projection-based method is established for constructing a 3D triangulation from unstructured data. An optimization method is also established to optimize the 3D triangulation to ensure that the resulting NURBS surfaces have a better form. A triangular surface interpolation method is established for constructing triangular surfaces from the triangulation; this method creates five-degree triangular surfaces with C1 continuity. A series of segment data is obtained by cutting the triangular surfaces with a series of parallel planes. Finally, the structured data is obtained by deleting repetitive data points in each segment. Results demonstrate the efficacy of the proposed methodology.
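The last two steps of the methodology, cutting with a series of parallel planes and deleting repetitive points in each segment, can be approximated on a raw point cloud with a short numpy sketch. The plane spacing, rounding tolerance, and sample cloud below are invented for illustration; the paper cuts interpolated triangular surfaces, not the raw points.

```python
import numpy as np

def structure_points(points, n_planes=4, decimals=3):
    """Bin scattered 3-D points into slabs between parallel z-planes and
    drop duplicate points within each slab, yielding per-segment data
    ordered along one parametric direction."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_planes + 1)
    segments = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        inside = points[(z >= lo) & (z <= hi)]
        rounded = np.round(inside, decimals)          # merge near-duplicates
        segments.append(np.unique(rounded, axis=0))   # dedupe per segment
    return segments

cloud = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0],   # duplicated point
                  [1.0, 0.0, 0.5], [0.0, 1.0, 1.0]])
segs = structure_points(cloud, n_planes=2)
```

Each segment then plays the role of one row of structured data from which a four-sided NURBS patch could be fitted.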
31

Chaudhary, Renu, and Gagangeet Singh. "A NOVEL TECHNIQUE IN NoSQL DATA EXTRACTION". International Journal of Research - GRANTHAALAYAH 1, no. 1 (August 31, 2014): 51–58. http://dx.doi.org/10.29121/granthaalayah.v1.i1.2014.3086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
NoSQL databases (commonly interpreted by developers as 'not only SQL' rather than 'no SQL') are an emerging alternative to the most widely used relational databases. As the name suggests, NoSQL does not completely replace SQL but complements it in such a way that the two can co-exist. In this paper we discuss the NoSQL data model, types of NoSQL data stores, the characteristics and features of each data store, query languages used in NoSQL, advantages and disadvantages of NoSQL over RDBMS, and the future prospects of NoSQL. Motivation/Background: NoSQL systems can store and index arbitrarily big data sets while serving a large number of concurrent user requests. Method: Many people think NoSQL is a derogatory term created to poke at SQL; in reality, the term means Not Only SQL, and the idea is that both technologies can coexist, each in its place. Results: Large-scale data processing (parallel processing over distributed systems); embedded IR (basic machine-to-machine information look-up and retrieval); exploratory analytics on semi-structured data (expert level); large-volume data storage (unstructured, semi-structured, small-packet structured). Conclusions: This study aims to provide an independent understanding of the strengths and weaknesses of various NoSQL database approaches to supporting applications that process huge volumes of data, as well as a global overview of these non-relational NoSQL databases.
32

Xiong, Zhong Kan, Pei Zhen Wan, and Jiu Ping Cai. "Study on E-Commerce Platform Operation Mechanism in Big Data Environment". Applied Mechanics and Materials 687-691 (November 2014): 2776–79. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.2776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Big data is one of the important development directions of modern information technology; realizing the sharing and analysis of large data will bring immeasurable economic value and also play a tremendous role in promoting society. In the age of big data, unified data representation and the processing, querying, analysis, and visualization of large data are key problems to be solved urgently. In order to provide a standardized framework for constructing a large data service platform, this paper designs a service-oriented architecture for large data centered on user experience. In terms of the data model, to deliver high-quality data services for non-structured data, a non-structured data model based on subject behavior is designed. For the large data service model, an algebraic model of large data services and their composition is established using process algebra. On the application side, retrieval, process analysis, and visualization services are detailed, and the data services are optimized by improving both retrieval accuracy and service efficiency.
33

Acharya, Biswaranjan, Ajaya Kumar Jena, Jyotir Moy Chatterjee, Raghvendra Kumar, and Dac-Nhuong Le. "NoSQL Database Classification". International Journal of Knowledge-Based Organizations 9, no. 1 (January 2019): 50–65. http://dx.doi.org/10.4018/ijkbo.2019010105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The digital world is growing exponentially to accommodate the huge amounts of structured, semi-structured, unstructured, and hybrid data received from different sources. Using conventional data management tools, it is quite impossible to manage this semi-structured and unstructured data, for which non-relational database management systems such as NoSQL and NewSQL are used instead. These types of semi-structured and structured data are generally considered 'Big Data.' This article describes the basic characteristics, background, and models of NoSQL used for big data applications. In this work, the authors survey the NoSQL characteristics studied by researchers and compare the strengths and weaknesses of different NoSQL databases.
34

Zhang, Kainan, Zhipeng Cai, and Daehee Seo. "Privacy-Preserving Federated Graph Neural Network Learning on Non-IID Graph Data". Wireless Communications and Mobile Computing 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/8545101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Since the concept of federated learning (FL) was proposed by Google in 2017, many applications have been combined with FL technology due to its outstanding performance in data integration, computing performance, privacy protection, etc. However, most traditional federated learning-based applications focus on image processing and natural language processing, with few achievements in graph neural networks due to the non-independent and identically distributed (non-IID) nature of graph data. Representation learning on graph-structured data generates graph embeddings, which help machines understand graphs effectively. Meanwhile, privacy protection plays an even more meaningful role in analyzing graph-structured data such as social networks. Hence, this paper proposes PPFL-GNN, a novel privacy-preserving federated graph neural network framework for node representation learning, a pioneering work for graph neural network-based federated learning. In PPFL-GNN, clients utilize a local graph dataset to generate graph embeddings and integrate information from other collaborative clients via federated learning to produce more accurate representation results. More importantly, by integrating embedding alignment techniques in PPFL-GNN, we overcome the obstacles of federated learning on non-IID graph data and can further reduce privacy exposure by sharing only preferred information.
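The embedding-alignment idea can be illustrated with a minimal orthogonal-Procrustes sketch; aligning each client's embeddings to a shared reference is an assumption made here for illustration, not necessarily the paper's exact mechanism:

```python
import numpy as np

def align_embeddings(local_emb, ref_emb):
    """Orthogonal Procrustes alignment: find the rotation R minimizing
    ||local_emb @ R - ref_emb||_F and return the aligned local embeddings."""
    u, _, vt = np.linalg.svd(local_emb.T @ ref_emb)
    return local_emb @ (u @ vt)
```

Because each client's embedding space is only defined up to rotation, aligning spaces before any federated averaging keeps dimensions comparable across non-IID clients.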
35

Orike, Sunny, and Daboso Brown. "Big Data Management". International Journal of Interdisciplinary Telecommunications and Networking 8, no. 4 (October 2016): 34–50. http://dx.doi.org/10.4018/ijitn.2016100104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Organizations and governments leverage the potential of data to plan and compete globally. Data from various sources are continually mined, stored in databases, and utilized in a manner that improves processes and products and ensures steady profitability. Traditional relational database management systems are unable to cope with these new forms of data: the velocity, volume, and variety with which data are generated qualify them as "Big Data". Cloud platforms scale up the storage and processing capacity available to client organizations, allowing them to focus on their core areas of expertise. This paper investigates the relevance of aggregating big data and managing it efficiently in a cloud computing environment. It adopts a prototyping design and a case study approach based on Couchbase Server, a non-structured query language (NoSQL) document database, and a cloud computing platform. The accruable benefits of this work are organizational cost-effectiveness, timeliness in decision making, and higher profitability.
36

TIAN, JIAN-WEI, WEN-HUI QI, and XIAO-XIAO LIU. "RETRIEVING DEEP WEB DATA THROUGH MULTI-ATTRIBUTES INTERFACES WITH STRUCTURED QUERIES". International Journal of Software Engineering and Knowledge Engineering 21, no. 04 (June 2011): 523–42. http://dx.doi.org/10.1142/s0218194011005396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A great deal of data on the Web lies in hidden databases, the so-called deep Web. Most deep Web data is not directly available and can only be accessed through query interfaces. Current research on deep Web search has focused on crawling deep Web data via Web interfaces with keyword queries. However, these keyword-based methods have inherent limitations because of the multi-attribute and top-k features of the deep Web. In this paper we propose a novel approach for siphoning structured data with structured queries. First, in order to retrieve all the data in hidden databases without repetition, we model the hidden database as a hierarchy tree. Under this theoretical framework, data retrieval is transformed into a tree-traversal problem. We also propose techniques to narrow the query space by using a heuristic rule, based on mutual information, to guide the traversal process. We conduct extensive experiments over real deep Web sites and controlled databases to illustrate the coverage and efficiency of our techniques.
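The tree-traversal view of structured query generation can be sketched as a depth-first enumeration in which each tree level fixes one interface attribute and each leaf is one complete query; attribute names and domains below are made up for illustration:

```python
def traverse(attrs, domains, partial=None):
    """Depth-first traversal of the query hierarchy tree: each level fixes one
    attribute, each leaf is a complete structured query, visited exactly once."""
    partial = dict(partial or {})
    if len(partial) == len(attrs):
        yield partial
        return
    attr = attrs[len(partial)]
    for value in domains[attr]:
        yield from traverse(attrs, domains, {**partial, attr: value})
```

A mutual-information heuristic like the paper's would reorder or prune the branches visited at each level rather than change this basic leaf-per-query structure.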
37

Xiao, Qin. "Resource Classification and Knowledge Aggregation of Library and Information Based on Data Mining". Ingénierie des systèmes d'information 25, no. 5 (November 10, 2020): 645–53. http://dx.doi.org/10.18280/isi.250512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The traditional knowledge service systems have nonuniform data structures. Some data are structured, while some are semi-structured and even non-structured. Big data technology helps to optimize the integration and retrieval of the massive data on library and information (L&I), making it possible to classify the resources and optimize the configuration of L&I resource platforms according to user demand. Therefore, this paper introduces the new information service model of big data resources and knowledge services to the processing of L&I data. Firstly, the data storage structure and relationship model of the L&I resource platform were established, and used to sample and integrate the keywords of resource retrieval. Next, an L&I resource classification model was constructed based on support vector machine (SVM), and applied to extract and quantify the attributes of the keywords of resource retrieval. After that, a knowledge aggregation model was developed for a complex network of multiple L&I resource platforms. Experimental results demonstrate the effectiveness of the proposed knowledge aggregation model. The research findings provide a reference for the application of data mining in resource classification.
38

Azizi, Ilia, and Iegor Rudnytskyi. "Improving Real Estate Rental Estimations with Visual Data". Big Data and Cognitive Computing 6, no. 3 (September 9, 2022): 96. http://dx.doi.org/10.3390/bdcc6030096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Multi-modal data are widely available for online real estate listings. Announcements can contain various forms of data, including visual data and unstructured textual descriptions. Nonetheless, many traditional real estate pricing models rely solely on well-structured tabular features. This work investigates whether it is possible to improve the performance of the pricing model using additional unstructured data, namely images of the property and satellite images. We compare four models based on the type of input data they use: (1) tabular data only, (2) tabular data and property images, (3) tabular data and satellite images, and (4) tabular data and a combination of property and satellite images. In a supervised context, the branches of dedicated neural networks for each data type are fused (concatenated) to predict log rental prices. The novel dataset devised for the study (SRED) consists of 11,105 flat rentals advertised over the internet in Switzerland. The results reveal that using all three sources of data generally outperforms machine learning models built on only tabular information. The findings pave the way for further research on integrating other non-structured inputs, for instance, the textual descriptions of properties.
39

Moncayo, H., I. Moguel, M. G. Perhinschi, A. Perez, D. Al Azzawi, and A. Togayev. "Structured non-self approach for aircraft failure identification within a fault tolerance architecture". Aeronautical Journal 120, no. 1225 (March 2016): 415–34. http://dx.doi.org/10.1017/aer.2016.15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Within an immunity-based architecture for aircraft fault detection, identification and evaluation, a structured, non-self approach has been designed and implemented to classify and quantify the type and severity of failures of different aircraft actuators, sensors, structural components and engines. The methodology relies on a hierarchical multi-self strategy with heuristic selection of sub-selves and formulation of a mapping logic algorithm, in which specific detectors of specific selves are mapped against failures based on their capability to selectively capture the dynamic fingerprint of abnormal conditions in all their aspects. Immune negative and positive selection mechanisms have been used within the process. Data from a motion-based six-degrees-of-freedom flight simulator were used to evaluate the performance in terms of percentage identification rates for a set of 2D non-self projections under several upset conditions.
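The negative-selection mechanism mentioned above can be sketched in a few lines: candidate detectors are kept only if they match no "self" sample. The geometry used here (Euclidean matching within a radius on normalized features) is a common textbook simplification, not the paper's exact detector encoding:

```python
import random

def negative_selection(self_set, n_detectors, radius, dim=2, seed=0):
    """Immune negative selection: keep random candidate detectors only if they
    lie farther than `radius` from every 'self' sample in [0, 1]^dim."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.uniform(0.0, 1.0) for _ in range(dim)]
        far_enough = all(
            sum((a - b) ** 2 for a, b in zip(cand, s)) ** 0.5 > radius
            for s in self_set
        )
        if far_enough:
            detectors.append(cand)
    return detectors
```

At run time, any sample that activates a detector is flagged as non-self, i.e. an abnormal condition.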
40

Arif, Dashne Raouf, and Nzar Abdulqadir Ali. "Improving the performance of big data databases". Kurdistan Journal of Applied Research 4, no. 2 (December 31, 2019): 206–20. http://dx.doi.org/10.24017/science.2019.2.20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Real-time monitoring systems utilize two types of databases: relational databases such as MySQL and non-relational databases such as MongoDB. A relational database management system (RDBMS) stores data in a structured format using rows and columns; it is relational because the values in its tables are connected. A non-relational database does not adopt the relational structure of traditional databases; in recent years, this class of databases has also been referred to as Not only SQL (NoSQL). This paper discusses several comparisons of the execution-time performance of the two database types (SQL and NoSQL). In SQL (Structured Query Language) databases, different algorithms are used for inserting and updating data, such as indexing, bulk insert, and multiple update. In NoSQL, different algorithms are used for insert and update operations, such as default indexing, batch insert, multiple update, and pipeline aggregation. As a result, firstly, compared with related papers, this paper shows that the performance of both SQL and NoSQL can be improved. Secondly, performance can be dramatically improved for insert and update operations in the NoSQL database compared to the SQL database. To demonstrate the performance of the different algorithms for inserting and updating data in SQL and NoSQL, this paper uses data sets of different sizes and reports the corresponding performance results. The SQL experiments cover 50,000 to 3,000,000 records, while the NoSQL experiments cover 50,000 to 16,000,000 documents (2 GB). In SQL, three million records are inserted in 606.53 seconds, while in NoSQL the same number of documents is inserted in 67.87 seconds. For updates, in SQL 300,000 records are updated in 271.17 seconds, while in NoSQL the same number of documents is updated in just 46.02 seconds.
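The gap between row-by-row and batched inserts can be reproduced in miniature with SQLite, standing in here for the MySQL/MongoDB systems actually benchmarked; the schema and row counts are illustrative:

```python
import sqlite3
import time

def timed_insert(rows, batch):
    """Insert rows either one-by-one or with a single batched executemany call,
    returning (elapsed seconds, row count) for an in-memory SQLite table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
    t0 = time.perf_counter()
    if batch:
        conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    else:
        for r in rows:
            conn.execute("INSERT INTO t VALUES (?, ?)", r)
    conn.commit()
    n = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return time.perf_counter() - t0, n
```

On most machines the batched variant is noticeably faster because statement parsing and per-call overhead are paid once rather than per row, which is the same effect the paper measures at much larger scale.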
41

Allahbakhshi, Hoda, Lindsey Conrow, Babak Naimi, and Robert Weibel. "Using Accelerometer and GPS Data for Real-Life Physical Activity Type Detection". Sensors 20, no. 3 (January 21, 2020): 588. http://dx.doi.org/10.3390/s20030588.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper aims to examine the role of global positioning system (GPS) sensor data in real-life physical activity (PA) type detection. Thirty-three young participants wore devices including GPS and accelerometer sensors on five body positions and performed daily PAs in two protocols, namely semi-structured and real-life. One general random forest (RF) model integrating data from all sensors and five individual RF models using data from each sensor position were trained using semi-structured (Scenario 1) and combined (semi-structured + real-life) data (Scenario 2). The results showed that in general, adding GPS features (speed and elevation difference) to accelerometer data improves classification performance particularly for detecting non-level and level walking. Assessing the transferability of the models on real-life data showed that models from Scenario 2 are strongly transferable, particularly when adding GPS data to the training data. Comparing individual models indicated that knee-models provide comparable classification performance (above 80%) to general models in both scenarios. In conclusion, adding GPS data improves real-life PA type classification performance if combined data are used for training the model. Moreover, the knee-model provides the minimal device configuration with reliable accuracy for detecting real-life PA types.
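The two GPS-derived features used in the study, speed and elevation difference, can be computed from consecutive fixes roughly as follows; this is a haversine-based sketch, and the authors' exact preprocessing may differ:

```python
import math

def gps_features(track):
    """Derive (speed m/s, elevation difference m) for consecutive GPS fixes.
    track: list of (lat_deg, lon_deg, elev_m, time_s) tuples."""
    R = 6371000.0  # mean Earth radius in metres
    feats = []
    for (la1, lo1, e1, t1), (la2, lo2, e2, t2) in zip(track, track[1:]):
        p1, p2 = math.radians(la1), math.radians(la2)
        dp, dl = p2 - p1, math.radians(lo2 - lo1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        dist = 2 * R * math.asin(math.sqrt(a))  # haversine ground distance
        feats.append((dist / (t2 - t1), e2 - e1))
    return feats
```

These per-interval values would then be appended to the accelerometer feature vectors before training the random forest classifiers.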
42

Verma, Surabhi, and Sushil Chaurasia. "Understanding the Determinants of Big Data Analytics Adoption". Information Resources Management Journal 32, no. 3 (July 2019): 1–26. http://dx.doi.org/10.4018/irmj.2019070101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This article empirically investigates the factors that affect the adoption of big data analytics by firms (adopters and non-adopters). The study is based on three contexts that influence BDA adoption: the technological context (relative advantage, complexity, compatibility), the organizational context (top management support, technology readiness, organizational data environment), and the environmental context (competitive pressure and trading partner pressure). A structured questionnaire-based survey was used to collect data from 231 firm managers. Relevant hypotheses were derived and tested by partial least squares. The results indicate that the technology, organization, and environment contexts impact firms' adoption of big data analytics. The findings also reveal that relative advantage, complexity, compatibility, top management support, technology readiness, organizational data environment, and competitive pressure have a significant influence on adopters of big data analytics, whereas relative advantage, complexity, and competitive pressure have a significant influence on non-adopters.
43

Ye, Xia, Ruiheng Liu, and Zengying Yue. "Structured Knowledge Base Q&A System Based on TorchServe Deployment". Journal of Physics: Conference Series 2078, no. 1 (November 1, 2021): 012017. http://dx.doi.org/10.1088/1742-6596/2078/1/012017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Structured tabular data are widely used in various information systems, and with the development of big data technology it has become more difficult to query such complex data. SQL facilitates queries over structured tables; however, mastering SQL presents a certain threshold for most non-expert users. Therefore, in order to help ordinary users quickly obtain the required information from complex structured data, we design and implement a Q&A system for structured knowledge. First, we make a detailed distinction between Q&A scenarios for structured data and design different approaches for each. Then, we introduce deep learning models in the system's algorithm layer to enhance generalization ability. Finally, the TorchServe framework is used to optimize system deployment and improve performance through batch inference. The experimental results show that the prototype system has a certain generalization ability and also has some performance advantages compared with traditional methods.
44

Luo, Jian She, Wen Qiang Li, and Jie Jiang. "Product Innovation Design Knowledge Acquisition System Based on MAS". Advanced Materials Research 328-330 (September 2011): 2044–49. http://dx.doi.org/10.4028/www.scientific.net/amr.328-330.2044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In order to help designers collect relevant materials from the expanding Internet, a product innovation design knowledge acquisition system based on MAS (multi-agent system) has been developed. Information on the Internet can be divided into structured data, non-structured data, and semi-structured data, and two different acquisition processes for the different sources were established. On this basis, the framework of the system was established as a federation of multi-agent systems, with a layered structure applied within each internal federation. Finally, the development environment was selected, a three-tier B/S architecture was adopted, and each functional module of the system was implemented through programming.
45

Razen, Alexander, and Stefan Lang. "Random scaling factors in Bayesian distributional regression models with an application to real estate data". Statistical Modelling 20, no. 4 (March 19, 2019): 347–68. http://dx.doi.org/10.1177/1471082x18823099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Distributional structured additive regression provides a flexible framework for modelling each parameter of a potentially complex response distribution in dependence of covariates. Structured additive predictors allow for an additive decomposition of covariate effects with non-linear effects and time trends, unit- or cluster-specific heterogeneity, spatial heterogeneity and complex interactions between covariates of different type. Within this framework, we present a simultaneous estimation approach for multiplicative random effects that allow for cluster-specific heterogeneity with respect to the scaling of a covariate's effect. More specifically, a possibly non-linear function f(z) of a covariate z may be scaled by a multiplicative and possibly spatially correlated cluster-specific random effect (1 + α_c). Inference is fully Bayesian and is based on highly efficient Markov Chain Monte Carlo (MCMC) algorithms. We investigate the statistical properties of our approach within extensive simulation experiments for different response distributions. Furthermore, we apply the methodology to German real estate data where we identify significant district-specific scaling factors. According to the deviance information criterion, the models incorporating these factors perform significantly better than standard models without (spatially correlated) random scaling factors.
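The multiplicative random-scaling structure y = (1 + α_c) f(z) + ε can be made concrete with a tiny simulation; all names, sizes, and distributions below are illustrative assumptions, not the paper's estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(42)
n_clusters, n_obs = 4, 200
alpha = rng.normal(0.0, 0.2, n_clusters)      # cluster-specific scaling deviations
z = rng.uniform(-1.0, 1.0, n_obs)             # covariate
cluster = rng.integers(0, n_clusters, n_obs)  # cluster membership of each observation
f = np.sin(np.pi * z)                         # a non-linear covariate effect f(z)
y = (1.0 + alpha[cluster]) * f + rng.normal(0.0, 0.1, n_obs)
```

Each cluster shares the shape of f(z) but stretches or shrinks it by its own factor (1 + α_c), which is exactly the heterogeneity the Bayesian model is designed to recover.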
46

Ordóñez Salinas, Sonia, and Alba Consuelo Nieto Lemus. "A model of multilayer tiered architecture for big data". Sistemas y Telemática 14, no. 37 (August 5, 2016): 23–44. http://dx.doi.org/10.18046/syt.v14i37.2257.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Until recently, the issue of analytical data was addressed with Data Warehouses, but the need to analyze new types of unstructured data, both repetitive and non-repetitive, gave rise to Big Data. Although this subject has been widely studied, no reference architecture is available for Big Data systems that process large volumes of raw data, aggregated and non-aggregated. There are no complete proposals for managing the data lifecycle, no standardized terminology, and even less a methodology supporting the design and development of such an architecture. Existing architectures are small-scale, industrial, and product-oriented, limiting their scope to solutions for a company or group of companies and focusing on technology while omitting functionality. This paper explores the requirements for an architectural model that supports the analysis and management of structured data and of repetitive and non-repetitive unstructured data, reviews some architectural proposals of industrial or technological type, and proposes a logical model of a multi-layer tiered architecture that aims to cover the requirements of both Data Warehouse and Big Data.
47

Andersen, René, and Susanne Dau. "Podcasts: A generator of non-formal learning". European Conference on e-Learning 21, no. 1 (October 21, 2022): 19–24. http://dx.doi.org/10.34190/ecel.21.1.527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The present study examines the use of podcasts (mp3 files) as a non-formal learning tool. The study's epistemological origins lie in Dewey's pragmatism and in learning and reflection theory, including the learning approach of University College of Northern Denmark (UCN), "Reflective Practice-based Learning" (RPL). This paper focuses on how podcasts can increase students' learning and reflection skills by serving as a generator of non-formal learning. The study is based on two different classes attending an extended course in digital technologies and project management at the Danish University of Applied Science (UCN). The study applies integrated mixed-method data collection: observations, a quantitative survey, and semi-structured interviews. The observations were carried out during four (project management) and seven (digital technologies) full-day lectures in the two classes, offering insight into the extent to which students acquired knowledge by listening to podcasts between classes. The quantitative data consisted of a survey of 65 students, all of whom participated in the courses; the survey was performed as part of the evaluation at the final course lectures. Ten semi-structured interviews were conducted to investigate how the use of podcasts affects students' reflective skills. The data in this study indicate that podcasts can have a positive effect on students' non-formal learning in higher education, and there is evidence that podcasts as a supplement in higher education can increase students' motivation toward non-formal learning. The study reveals that podcasts hold the potential to stimulate students' non-formal learning and increase their reflective skills. Based on this evidence, further research is suggested, e.g. studies that include an extended investigation of the benefits of students' non-formal learning through podcasts.
48

Guinard, S., L. Landrieu, L. Caraffa, and B. Vallet. "PIECEWISE-PLANAR APPROXIMATION OF LARGE 3D DATA AS GRAPH-STRUCTURED OPTIMIZATION". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W5 (May 29, 2019): 365–72. http://dx.doi.org/10.5194/isprs-annals-iv-2-w5-365-2019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We introduce a new method for the piecewise-planar approximation of 3D data, including point clouds and meshes. Our method is designed to operate on large datasets (e.g. millions of vertices) containing planar structures, which are very frequent in anthropic scenes. Our approach is also adaptive to the local geometric complexity of the input data. Our main contribution is the formulation of the piecewise-planar approximation problem as a non-convex optimization problem. In turn, this problem can be efficiently solved with a graph-structured working set approach. We compare our results with a state-of-the-art region-growing-based segmentation method and show a significant improvement both in terms of approximation error and computation efficiency.
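A building block of any piecewise-planar approximation is fitting a plane to a candidate segment, which reduces to a singular value decomposition; this is a generic sketch, not the paper's graph-structured solver:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set: returns (centroid, unit normal).
    The normal is the right-singular vector for the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]
```

The per-segment fitting residual from such a plane is the kind of approximation-error term the non-convex optimization trades off against the number of segments.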
49

Zhbanov, Valery A., and Elena B. Abarnikova. "DESIGN AND DEVELOPMENT OF A NEURAL NETWORK MODEL FOR DETERMINING THE SIMILARITY OF TWO SAMPLES OF NON-STRUCTURED DATA". Scholarly Notes of Komsomolsk-na-Amure State Technical University, no. 1 (2023): 47–53. http://dx.doi.org/10.17084/20764359-2023-65-47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Trigueros, D. E. G., A. N. Módenes, F. R. Espinoza-Quiñones, and A. D. Kroumov. "The evaluation of benzene and phenol biodegradation kinetics by applying non-structured models". Water Science and Technology 61, no. 5 (March 1, 2010): 1289–98. http://dx.doi.org/10.2166/wst.2010.034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The biodegradation kinetics of the aromatic hydrocarbons benzene and phenol, as single substrates and as a mixture, were investigated through non-structured model analysis. The material balance equations involving the models of Monod and Andrews and representing the biodegradation kinetics of individual substrates in batch mode were numerically solved. Further, utilization of a benzene–phenol mixture was described by applying more sophisticated mathematical forms of competitive, noncompetitive and uncompetitive inhibition models, as well as the sum kinetics interaction parameters (SKIP) model. In order to improve the performance of the studied models, some modifications were also proposed. The Particle Swarm Global Optimization method, coded in Maple, was applied to the parameter identification procedure of each model, with the least squares method used as the statistical search criterion. The description of the biodegradation kinetics of a benzene–phenol mixture by the competitive inhibition model was based on the information that the compounds could be catabolized via one metabolic pathway of Pseudomonas putida F1. Simulation results were in good agreement with the experimental data and proved the robustness of the applied methods and models. The developed knowledge base could be very useful in the optimization of biodegradation processes for different bioreactor types and operational conditions.
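The unstructured (Monod-type) kinetics referred to above can be integrated numerically with a simple forward-Euler sketch; parameter values and step sizes are illustrative, not taken from the paper:

```python
def monod_batch(mu_max, Ks, Y, X0, S0, dt=0.001, t_end=10.0):
    """Forward-Euler integration of unstructured Monod batch-growth kinetics:
    dX/dt = mu(S) * X,  dS/dt = -mu(S) * X / Y,  mu(S) = mu_max * S / (Ks + S)."""
    X, S = X0, S0
    for _ in range(int(t_end / dt)):
        mu = mu_max * S / (Ks + S)   # specific growth rate at current substrate level
        dX = mu * X * dt
        X += dX
        S = max(S - dX / Y, 0.0)     # substrate consumed per unit biomass via yield Y
    return X, S
```

The Andrews model used for inhibitory substrates differs only in the rate expression, mu = mu_max * S / (Ks + S + S**2 / Ki), so the same integration loop applies.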
