Follow this link to see other types of publications on the topic: RDF dataset characterization and classification.

Journal articles on the topic "RDF dataset characterization and classification"

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "RDF dataset characterization and classification".

Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Gupta, Rupal, and Sanjay Kumar Malik. "A classification using RDFLIB and SPARQL on RDF dataset". Journal of Information and Optimization Sciences 43, no. 1 (January 2, 2022): 143–54. http://dx.doi.org/10.1080/02522667.2022.2039461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
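The entry above concerns classifying RDF data with RDFLib and SPARQL. As a rough, library-free illustration of what a SPARQL triple-pattern query does, the sketch below matches a `(?s, rdf:type, ?class)` pattern over a toy triple set to group resources by class (the triples and names are invented for illustration; the paper itself uses RDFLib):

```python
# Toy RDF graph as (subject, predicate, object) triples -- invented example data.
triples = [
    ("ex:alice", "rdf:type", "ex:Person"),
    ("ex:bob", "rdf:type", "ex:Person"),
    ("ex:acme", "rdf:type", "ex:Company"),
    ("ex:alice", "ex:worksFor", "ex:acme"),
]

def match(triples, s=None, p=None, o=None):
    """Return triples matching a pattern; None plays the role of a SPARQL variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Group subjects by rdf:type -- a simple "classification" over the dataset,
# analogous to: SELECT ?s ?class WHERE { ?s rdf:type ?class }
by_class = {}
for subj, _, cls in match(triples, p="rdf:type"):
    by_class.setdefault(cls, []).append(subj)

print(by_class)  # {'ex:Person': ['ex:alice', 'ex:bob'], 'ex:Company': ['ex:acme']}
```

With RDFLib the same query would be expressed declaratively via `Graph.query`; the pattern-matching semantics are the same.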
2

Xiao, Changhu, Yuan Guo, Kaixuan Zhao, Sha Liu, Nongyue He, Yi He, Shuhong Guo, and Zhu Chen. "Prognostic Value of Machine Learning in Patients with Acute Myocardial Infarction". Journal of Cardiovascular Development and Disease 9, no. 2 (February 11, 2022): 56. http://dx.doi.org/10.3390/jcdd9020056.

Full text
Abstract
(1) Background: Patients with acute myocardial infarction (AMI) still experience many major adverse cardiovascular events (MACEs), including myocardial infarction, heart failure, kidney failure, coronary events, cerebrovascular events, and death. This retrospective study aims to assess the prognostic value of machine learning (ML) for the prediction of MACEs. (2) Methods: Five-hundred patients diagnosed with AMI and who had undergone successful percutaneous coronary intervention were included in the study. Logistic regression (LR) analysis was used to assess the relevance of MACEs and 24 selected clinical variables. Six ML models were developed with five-fold cross-validation in the training dataset and their ability to predict MACEs was compared to LR with the testing dataset. (3) Results: The MACE rate was calculated as 30.6% after a mean follow-up of 1.42 years. Killip classification (Killip IV vs. I class, odds ratio 4.386, 95% confidence interval 1.943–9.904), drug compliance (irregular vs. regular compliance, 3.06, 1.721–5.438), age (per year, 1.025, 1.006–1.044), and creatinine (1 µmol/L, 1.007, 1.002–1.012) and cholesterol levels (1 mmol/L, 0.708, 0.556–0.903) were independent predictors of MACEs. In the training dataset, the best performing model was the random forest (RDF) model with an area under the curve of (0.749, 0.644–0.853) and accuracy of (0.734, 0.647–0.820). In the testing dataset, the RDF showed the most significant survival difference (log-rank p = 0.017) in distinguishing patients with and without MACEs. (4) Conclusions: The RDF model has been identified as superior to other models for MACE prediction in this study. ML methods can be promising for improving optimal predictor selection and clinical outcomes in patients with AMI.
APA, Harvard, Vancouver, ISO, and other styles
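The abstract above reports logistic-regression odds ratios (e.g. 4.386 for Killip IV vs. I). For a single binary predictor, the unadjusted odds ratio reduces to the cross-product of a 2×2 contingency table; a minimal sketch, with counts that are invented purely for illustration:

```python
def odds_ratio(exposed_events, exposed_no_events, control_events, control_no_events):
    """Unadjusted odds ratio from a 2x2 table: (a/b) / (c/d) = (a*d) / (b*c)."""
    return (exposed_events * control_no_events) / (exposed_no_events * control_events)

# Invented counts: 20/30 MACEs among "exposed" patients vs. 90/360 among controls.
or_value = odds_ratio(20, 10, 90, 270)
print(round(or_value, 2))  # 6.0
```

The paper's reported values are adjusted (multivariable) odds ratios, which require fitting the full logistic model rather than this single-table shortcut.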
3

Sarquah, Khadija, Satyanarayana Narra, Gesa Beck, Uduak Bassey, Edward Antwi, Michael Hartmann, Nana Sarfo Agyemang Derkyi, Edward A. Awafo, and Michael Nelles. "Characterization of Municipal Solid Waste and Assessment of Its Potential for Refuse-Derived Fuel (RDF) Valorization". Energies 16, no. 1 (December 24, 2022): 200. http://dx.doi.org/10.3390/en16010200.

Full text
Abstract
Reuse and recycling are preferred strategies in waste management to ensure the high position of waste resources in the waste management hierarchy. However, challenges are still pronounced in many developing countries, where disposal as a final solution is prevalent, particularly for municipal solid waste. On the other hand, refuse-derived fuel as a means of energy recovery provides a sustainable option for managing mixed, contaminated and residual municipal solid waste (MSW). This study provides one of the earliest assessments of refuse-derived fuel (RDF) from MSW in Ghana through a case study in the cities of Accra and Kumasi. The residual/reject fractions (RFs) of MSW material recovery were characterized for thermochemical energy purposes. The studied materials had the potential to be used as RDF. The combustible portions from the residual fractions formed good alternative fuel, RDF, under the class I, II-III classification of the EN 15359:2011 standards. RDF made only from combustible mixed materials such as plastics, paper and wood recorded a significantly higher lower heating value (28.66–30.24 MJ/kg) than the bulk RF containing organics (19.73–23.75 MJ/kg). The chlorine and heavy metal content met the limits set by various standards. An annual RDF production of 12 to 57 kilotons is possible from the two cities. This could offset 10–30% of present industrial coal consumption, avoiding about 180 kiloton/yr of CO2-eq emissions with a net cost saving of USD 8.7 million per year. The market for RDF as an industrial alternative fuel is developing in Ghana and similar jurisdictions. Therefore, this study provides insights into the potential for RDF in integrated waste management system implementation for socioeconomic and environmental benefits. This supports efforts towards achieving the Sustainable Development Goals (SDGs) and a circular economy.
APA, Harvard, Vancouver, ISO, and other styles
4

Seydou, Sangare, Konan Marcellin Brou, Kouame Appoh, and Kouadio Prosper Kimou. "HYBRID MODEL FOR THE CLASSIFICATION OF QUESTIONS EXPRESSED IN NATURAL LANGUAGE". International Journal of Advanced Research 10, no. 09 (September 30, 2022): 202–12. http://dx.doi.org/10.21474/ijar01/15343.

Full text
Abstract
Question-answering systems rely on unstructured text corpora or a knowledge base to answer user questions. Most of these systems store knowledge in multiple repositories, including RDF. SPARQL is the most convenient formal language for accessing this type of repository, but it is complex, so questions expressed in natural language by users must be transformed into SPARQL queries; several approaches have been proposed for this transformation. However, identifying the question type is a serious problem, and question classification plays a key role at this level. Machine learning algorithms, including neural networks, are used for this classification. As data volumes grow, neural networks generally outperform classical machine learning algorithms, although the latter remain good classifiers. For greater efficiency, this paper suggests combining a convolutional neural network with these algorithms. The BICNN-SVM combination obtained good scores, with a precision of 96.60% on a small dataset and 94.05% on a large one.
APA, Harvard, Vancouver, ISO, and other styles
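Before a natural-language question can be mapped to a SPARQL template, its question type must be detected, which is the classification step the entry above addresses. A naive keyword-rule baseline conveys the task (the categories and rules here are invented for illustration; the paper's actual model is a learned BICNN-SVM hybrid):

```python
# Hypothetical rule-based baseline for question-type detection.
RULES = [
    ("how many", "COUNT"),
    ("who", "PERSON"),
    ("where", "LOCATION"),
    ("when", "DATE"),
]

def question_type(question):
    """Return the first matching question type, or OTHER if no rule fires."""
    q = question.lower()
    for keyword, qtype in RULES:
        if keyword in q:
            return qtype
    return "OTHER"  # a learned classifier would handle the ambiguous remainder

print(question_type("Who wrote Hamlet?"))         # PERSON
print(question_type("How many moons has Mars?"))  # COUNT
print(question_type("List all rivers in Ghana"))  # OTHER
```

Rule lists like this break down quickly on paraphrases, which is exactly why the paper turns to neural classifiers as data volume grows.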
5

Liang, Haobang, Jiao Li, Hejun Wu, Li Li, Xinrui Zhou, and Xinhua Jiang. "Mammographic Classification of Breast Cancer Microcalcifications through Extreme Gradient Boosting". Electronics 11, no. 15 (August 4, 2022): 2435. http://dx.doi.org/10.3390/electronics11152435.

Full text
Abstract
In this paper, we proposed an effective and efficient approach to the classification of breast cancer microcalcifications and evaluated the mathematical model for calcification on mammography with a large medical dataset. We employed several semi-automatic segmentation algorithms to extract 51 calcification features from mammograms, including morphologic and textural features. We adopted extreme gradient boosting (XGBoost) to classify microcalcifications and compared it against other machine learning techniques, including k-nearest neighbor (kNN), adaboostM1, decision tree, random decision forest (RDF), and gradient boosting decision tree (GBDT). XGBoost showed the highest accuracy (90.24%, with AUC = 0.89) for classifying microcalcifications, and kNN demonstrated the lowest. This result demonstrates that feature engineering, i.e., selecting the best composition of features, is essential for microcalcification classification. One contribution of this study is to present the best composition of features for efficient classification of breast cancers, showing how to select the most discriminative features as a collection to improve accuracy. Moreover, we highlighted the performance of various features from the dataset and found ideal parameters for classifying microcalcifications. Furthermore, we found that the XGBoost model is suitable both in theory and in practice for the classification of calcifications on mammography.
APA, Harvard, Vancouver, ISO, and other styles
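Accuracy and AUC, the two metrics reported in the entry above, can be computed without any ML library. The AUC below uses the rank-statistic (Mann-Whitney) formulation, demonstrated on an invented toy set of scores:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def roc_auc(y_true, scores):
    """AUC as P(score of a random positive > score of a random negative),
    counting ties as half a win (Mann-Whitney formulation)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.7, 0.6, 0.2, 0.8, 0.4]
preds = [int(s >= 0.5) for s in scores]
print(round(accuracy(y_true, preds), 3))  # 0.833
print(roc_auc(y_true, scores))            # 1.0 (every positive outranks every negative)
```

Note that accuracy depends on the 0.5 threshold while AUC is threshold-free, which is why papers such as this one report both.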
6

Sliwinski, Jakub, Martin Strobel, and Yair Zick. "Axiomatic Characterization of Data-Driven Influence Measures for Classification". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 718–25. http://dx.doi.org/10.1609/aaai.v33i01.3301718.

Full text
Abstract
We study the following problem: given a labeled dataset and a specific datapoint x, how did the i-th feature influence the classification for x? We identify a family of numerical influence measures — functions that, given a datapoint x, assign a numeric value φ_i(x) to every feature i, corresponding to how altering i's value would influence the outcome for x. This family, which we term monotone influence measures (MIM), is uniquely derived from a set of desirable properties, or axioms. The MIM family constitutes a provably sound methodology for measuring feature influence in classification domains; the values generated by MIM are based on the dataset alone, and do not make any queries to the classifier. While this requirement naturally limits the scope of our framework, we demonstrate its effectiveness on data.
APA, Harvard, Vancouver, ISO, and other styles
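The defining property above is that a dataset-only influence measure never queries the classifier. As a crude stand-in for the MIM family (this is not the paper's measure, merely an illustration of the "dataset alone" idea), one can score feature i by how strongly agreement with x on feature i correlates with sharing x's label:

```python
def influence(dataset, x, x_label, i):
    """Score feature i for datapoint x using only the labeled dataset:
    (fraction of same-label points agreeing with x on i) minus
    (fraction of other-label points agreeing with x on i)."""
    same = [z for z, lbl in dataset if lbl == x_label]
    diff = [z for z, lbl in dataset if lbl != x_label]
    agree = lambda pts: sum(z[i] == x[i] for z in pts) / len(pts)
    return agree(same) - agree(diff)

# Invented binary dataset: feature 0 tracks the label, feature 1 is noise.
data = [((1, 0), 1), ((1, 1), 1), ((0, 0), 0), ((0, 1), 0)]
x = (1, 0)
print(influence(data, x, 1, 0))  # 1.0 -> feature 0 drove the outcome for x
print(influence(data, x, 1, 1))  # 0.0 -> feature 1 is uninformative
```

No classifier appears anywhere in the computation, which is the constraint the paper's axioms formalize.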
7

Devdatt Kawathekar, Ishan, and Anu Shaju Areeckal. "Performance analysis of texture characterization techniques for lung nodule classification". Journal of Physics: Conference Series 2161, no. 1 (January 1, 2022): 012045. http://dx.doi.org/10.1088/1742-6596/2161/1/012045.

Full text
Abstract
Lung cancer ranks very high on a global index for cancer-related casualties. With early detection of lung cancer, the rate of survival increases to 80-90%. The standard method for diagnosing lung cancer from Computed Tomography (CT) scans is manual annotation and detection of the cancerous regions, which is a tedious task for radiologists. This paper proposes a machine learning approach for multi-class classification of lung nodules into solid, semi-solid, and Ground Glass Object texture classes. We employ feature extraction techniques, such as the gray-level co-occurrence matrix, Gabor filters, and the local binary pattern, and validate the performance on the LNDb dataset. The best performing classifier displays an accuracy of 94% and an F1-score of 0.92. The proposed approach was compared with related work using the same dataset. The results are promising, and the proposed method can be used to diagnose lung cancer accurately.
APA, Harvard, Vancouver, ISO, and other styles
8

Scime, Anthony, Nilay Saiya, Gregg R. Murray, and Steven J. Jurek. "Classification Trees as Proxies". International Journal of Business Analytics 2, no. 2 (April 2015): 31–44. http://dx.doi.org/10.4018/ijban.2015040103.

Full text
Abstract
In data analysis, when data are unattainable, it is common to select a closely related attribute as a proxy. But sometimes substitution of one attribute for another is not sufficient to satisfy the needs of the analysis. In these cases, a classification model based on one dataset can be investigated as a possible proxy for another closely related domain's dataset. If the model's structure is sufficient to classify data from the related domain, the model can be used as a proxy tree. Such a proxy tree also provides an alternative characterization of the related domain. Just as important, if the original model does not successfully classify the related domain data the domains are not as closely related as believed. This paper presents a methodology for evaluating datasets as proxies along with three cases that demonstrate the methodology and the three types of results.
APA, Harvard, Vancouver, ISO, and other styles
9

Stork, Christopher L., and Michael R. Keenan. "Advantages of Clustering in the Phase Classification of Hyperspectral Materials Images". Microscopy and Microanalysis 16, no. 6 (October 22, 2010): 810–20. http://dx.doi.org/10.1017/s143192761009402x.

Full text
Abstract
Despite the many demonstrated applications of factor analysis (FA) in analyzing hyperspectral materials images, FA does have inherent mathematical limitations, preventing it from solving certain materials characterization problems. A notable limitation of FA is its parsimony restriction, referring to the fact that in FA the number of components cannot exceed the chemical rank of a dataset. Clustering is a promising alternative to FA for the phase classification of hyperspectral materials images. In contrast with FA, the phases extracted by clustering do not have to be parsimonious. Clustering has an added advantage in its insensitivity to the spectral collinearity that can result in phase mixing under FA. For representative energy dispersive X-ray spectroscopy materials images, namely a solder bump dataset and a braze interface dataset, clustering generates phase classification results that are superior to those obtained using representative FA-based methods. For the solder bump dataset, clustering identifies a Cu-Sn intermetallic phase that cannot be isolated using FA alone due to the parsimony restriction. For the braze interface sample, which has collinearity among the phase spectra, the clustering results do not exhibit the physically unrealistic phase mixing obtained by multivariate curve resolution, a commonly utilized FA algorithm.
APA, Harvard, Vancouver, ISO, and other styles
10

Bougacha, Aymen, Ines Njeh, Jihene Boughariou, Omar Kammoun, Kheireddine Ben Mahfoudh, Mariem Dammak, Chokri Mhiri, and Ahmed Ben Hamida. "Rank-Two NMF Clustering for Glioblastoma Characterization". Journal of Healthcare Engineering 2018 (October 23, 2018): 1–7. http://dx.doi.org/10.1155/2018/1048164.

Full text
Abstract
This study investigates a novel classification method for 3D multimodal MRI glioblastoma tumor characterization. We formulate our segmentation problem as a linear mixture model (LMM). Thus, we provide a nonnegative matrix M from every MRI slice at each step of the segmentation process. This matrix is used as input for the first segmentation step to extract the edema region from the T2 and FLAIR modalities. After that, in the subsequent segmentation steps, we extract the edema region from the T1c modality, generate the matrix M, and segment the necrosis, enhanced tumor, and non-enhanced tumor regions. In each segmentation step, we apply rank-two NMF clustering. We executed our tumor characterization method on the BraTS 2015 challenge dataset. Quantitative and qualitative evaluations over the publicly available training and testing datasets from the MICCAI 2015 multimodal brain segmentation challenge (BraTS 2015) attested that the proposed algorithm yields competitive performance for brain glioblastoma characterization (necrosis, tumor core, and edema) among several competing methods.
APA, Harvard, Vancouver, ISO, and other styles
11

Gajjar, Bhavinkumar, Hiren Mewada, and Ashwin Patani. "Sparse coded spatial pyramid matching and multi-kernel integrated SVM for non-linear scene classification". Journal of Electrical Engineering 72, no. 6 (December 1, 2021): 374–80. http://dx.doi.org/10.2478/jee-2021-0053.

Full text
Abstract
Support vector machine (SVM) techniques and deep learning have been prevalent in object classification for many years. However, deep learning is computation-intensive and can require a long training time, whereas SVM is significantly faster than a Convolutional Neural Network (CNN). Still, SVM has seen limited application to mid-size datasets, as it requires proper tuning. Recently, the parameterization of multiple kernels has shown greater flexibility in the characterization of the dataset. Therefore, this paper proposes a sparse coded multi-scale approach to reduce the training complexity and tuning of SVM, using a non-linear fusion of kernels for large-class natural scene classification. The optimum features are obtained by parameterizing the dictionary, the Scale Invariant Feature Transform (SIFT) parameters, and the fusion of multiple kernels. Experiments were conducted on a large dataset to examine the ability of the multi-kernel space to find distinct features for better classification. The proposed approach was found to be more promising than linear multi-kernel SVM approaches, achieving a maximum accuracy of 91.12%.
APA, Harvard, Vancouver, ISO, and other styles
12

Thanh, Van The, Do Quang Khoi, Le Huu Ha, and Le Manh Thanh. "SIR-DL: AN ARCHITECTURE OF SEMANTIC-BASED IMAGE RETRIEVAL USING DEEP LEARNING TECHNIQUE AND RDF TRIPLE LANGUAGE". Journal of Computer Science and Cybernetics 35, no. 1 (March 18, 2019): 39–56. http://dx.doi.org/10.15625/1813-9663/35/1/13097.

Full text
Abstract
The problem of finding and identifying the semantics of images arises in multimedia applications across many different fields, such as hospital information systems, geographic information systems, and digital library systems. In this paper, we propose a semantic-based image retrieval (SBIR) system based on a deep learning technique; this system, called SIR-DL, generates visual semantics by classifying image contents. At the same time, we identify the semantics of similar images on an ontology that describes the semantics of the visual features of images. First, we extract the color and spatial features of segmented images, and these visual feature vectors are trained on a deep neural network to obtain visual word vectors. Image retrieval then proceeds by the semantic classification of SIR-DL according to the visual feature vector of the query image, from which it produces a visual word vector. We then retrieve it on the ontology to provide the identities and semantics of similar images according to a similarity measure. To realize SIR-DL, the algorithms and diagram of this image retrieval system are proposed, after which we implement them on ImageCLEF@IAPR, which contains 20,000 images. Based on the experimental results, the effectiveness of our method is evaluated by accuracy, precision, recall, and F-measure, and these results are compared with some recently published works on the same image dataset. This shows that SIR-DL effectively solves the problem of semantic-based image retrieval and can be used to build multimedia systems in many different fields.
APA, Harvard, Vancouver, ISO, and other styles
13

Geerts, Bart, and Yu Dawei. "Classification and Characterization of Tropical Precipitation Based on High-Resolution Airborne Vertical Incidence Radar. Part I: Classification". Journal of Applied Meteorology 43, no. 11 (November 1, 2004): 1554–66. http://dx.doi.org/10.1175/jam2158.1.

Full text
Abstract
Airborne measurements of vertical incidence radar reflectivity and radial velocity are analyzed for some 21 231 km of high-altitude flight tracks over tropical precipitation systems, in order to describe their characteristic vertical structure. The strength of the radar dataset lies in its superb vertical resolution, sufficient to detect unambiguously a bright band and the coincident Doppler velocity change, which identify the melting layer in stratiform precipitation. In this first of a two-part study, a technique based on the detection of this stratiform precipitation signature is developed to classify hydrometeor profiles as convective, stratiform, or shallow. Even though the profiles are classified individually, stratiform and convective regions emerge, whose characteristics are described. The hydrometeor vertical velocity variability is smaller in stratiform profiles, which is consistent with the physical concept of a stratiform region. The purpose of the classification is to describe, in Part II, the composite vertical structure of the various rain types in hurricanes, as well as in isolated to organized precipitating convection sampled in Florida and Brazil.
APA, Harvard, Vancouver, ISO, and other styles
14

Bologna, Guido, and Yoichi Hayashi. "Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning". Journal of Artificial Intelligence and Soft Computing Research 7, no. 4 (October 1, 2017): 265–86. http://dx.doi.org/10.1515/jaiscr-2017-0019.

Full text
Abstract
Rule extraction from neural networks is a fervent research topic. In the last 20 years many authors have presented a number of techniques showing how to extract symbolic rules from Multi Layer Perceptrons (MLPs). Nevertheless, very few were related to ensembles of neural networks, and even fewer to networks trained by deep learning. On several datasets we performed rule extraction from ensembles of Discretized Interpretable Multi Layer Perceptrons (DIMLP), and from DIMLPs trained by deep learning. The results obtained on the Thyroid dataset and the Wisconsin Breast Cancer dataset show that the predictive accuracy of the extracted rules compares very favorably with state-of-the-art results. Finally, in the last classification problem, on digit recognition, rules generated from the MNIST dataset can be viewed as discriminatory features in particular digit areas. Qualitatively, with respect to rule complexity in terms of the number of generated rules and the number of antecedents per rule, deep DIMLPs and DIMLPs trained by arcing give similar results on a binary classification problem involving digits 5 and 8. On the whole MNIST problem we showed that it is possible to determine the feature detectors created by neural networks and also that the complexity of the extracted rulesets can be well balanced between accuracy and interpretability.
APA, Harvard, Vancouver, ISO, and other styles
15

Zafar, Haroon, Junaid Zafar, and Faisal Sharif. "Automated Clinical Decision Support for Coronary Plaques Characterization from Optical Coherence Tomography Imaging with Fused Neural Networks". Optics 3, no. 1 (January 10, 2022): 8–18. http://dx.doi.org/10.3390/opt3010002.

Full text
Abstract
Deep Neural Networks (DNNs) are nurturing clinical decision support systems for the detection and accurate modeling of coronary arterial plaques. However, efficient plaque characterization in time-constrained settings is still an open problem. The purpose of this study is to develop a novel automated classification architecture viable for the real-time clinical detection and classification of coronary artery plaques; secondly, to use a novel dataset of OCT images for data augmentation; and further, to validate the efficacy of transfer learning for arterial plaque classification. In this perspective, a novel time-efficient classification architecture based on DNNs is proposed. A new dataset consisting of in-vivo patient Optical Coherence Tomography (OCT) images labeled by three trained experts was created and dynamically programmed. Generative Adversarial Networks (GANs) were used for populating the coronary arterial plaques dataset. We removed the fully connected layers, including softmax and the cross-entropy, from the GoogleNet framework and replaced them with Support Vector Machines (SVMs). Our proposed architecture limits weight-update cycles to only the modified layers and computes the global hyperplane in a timely, competitive fashion. Transfer learning was used for high-level discriminative feature learning, and cross-entropy loss was minimized using the Adam optimizer for model training. A train-validation scheme was used to determine the classification accuracy. Automated plaque differentiation, in addition to detection, was found to agree with the clinical findings. Our customized fused classification scheme outperforms other leading reported works, with an overall accuracy of 96.84% and a multiple-fold reduction in elapsed time, demonstrating it as a viable choice for real-time clinical settings.
APA, Harvard, Vancouver, ISO, and other styles
16

Mengistu, Abrham Debasu, and Dagnachew Melesew Alemayehu. "Soil Characterization and Classification: A Hybrid Approach of Computer Vision and Sensor Network". International Journal of Electrical and Computer Engineering (IJECE) 8, no. 2 (April 1, 2018): 989. http://dx.doi.org/10.11591/ijece.v8i2.pp989-995.

Full text
Abstract
This paper presents soil characterization and classification using a combined computer vision and sensor network approach. A Gravity Analog Soil Moisture Sensor with an Arduino Uno, together with image processing, is used for the classification and characterization of soils. Datasets were collected from the Amhara region and the city of Addis Ababa, Ethiopia. A total of 6 soil groups, each with 90 images, were used, i.e., 540 images were captured in all. Once the dataset was collected, pre-processing and noise-filtering steps were performed in MATLAB 2013 to achieve the goal of the study. Classification and characterization were performed with a back-propagation neural network (BPNN); the network takes 7 input feature vectors and has 6 neurons in its output layer to classify soils. An accuracy of 89.7% was achieved with the BPNN.
APA, Harvard, Vancouver, ISO, and other styles
17

Lalitha, K., and J. Manjula. "Novel method of Characterization of dispersive properties of heterogeneous head tissue using Microwave sensing and Machine learning Algorithms". Advanced Electromagnetics 11, no. 3 (October 6, 2022): 84–92. http://dx.doi.org/10.7716/aem.v11i3.1821.

Full text
Abstract
A brain tumor is a critical medical condition, and early detection is essential for a speedy recovery. Researchers have explored the use of electromagnetic waves in the microwave region for the early detection of brain tumors. However, clinical adoption has not yet been realized because of the low resolution of microwave images. This paper provides an innovative approach to improve microwave brain tumor detection intelligently by differentiating normal and malignant tissues using machine learning algorithms. The dataset required for classification is obtained from antenna measurements. To facilitate the measurement process, an Antipodal Vivaldi antenna with a diamond-shaped parasitic patch (37 mm × 21 mm) is designed to operate at a resonance frequency of 3 GHz. The proposed antenna maintains a reflection coefficient (S11) below −10 dB over the entire UWB frequency range. In this paper, the Waikato Environment for Knowledge Analysis (WEKA) classification tool with 10-fold cross-validation is used to compare various algorithms on the dataset obtained from the proposed antenna.
APA, Harvard, Vancouver, ISO, and other styles
18

Ahammed, Kawser, and Mosabber Uddin Ahmed. "Epileptic Seizure Detection Based on Complexity Feature of EEG". Journal of Biomedical Analytics 3, no. 1 (April 25, 2020): 1–11. http://dx.doi.org/10.30577/jba.2020.v3n1.39.

Full text
Abstract
Brain disorders characterized by seizures are common worldwide. Characterizing electroencephalogram (EEG) signals in terms of complexity can help identify neurological disorders. In this study, a non-linear epileptic seizure detection method based on multiscale entropy (MSE) has been employed to characterize the complexity of EEG signals. To this end, the MSE method was applied to the Bonn dataset, which contains seizure and non-seizure EEG data, and the corresponding complexity results were obtained. Using statistical tests and a support vector machine (SVM), the classification ability of the MSE method was verified on the Bonn dataset. Our results show that the MSE method is a viable approach to identifying epileptic seizures, demonstrating a classification accuracy of 91.7%.
APA, Harvard, Vancouver, ISO, and other styles
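Multiscale entropy, the complexity feature used above, first coarse-grains the signal: at scale τ, consecutive non-overlapping windows of τ samples are averaged, and sample entropy is then computed on each coarse-grained series. The coarse-graining step alone can be sketched as follows (the entropy computation itself is omitted for brevity; the signal values are invented):

```python
def coarse_grain(signal, tau):
    """MSE coarse-graining: average non-overlapping windows of length tau."""
    n = len(signal) // tau  # trailing samples that do not fill a window are dropped
    return [sum(signal[i * tau:(i + 1) * tau]) / tau for i in range(n)]

x = [2, 4, 6, 8, 1, 3]
print(coarse_grain(x, 1))  # [2.0, 4.0, 6.0, 8.0, 1.0, 3.0] -> the original signal
print(coarse_grain(x, 2))  # [3.0, 7.0, 2.0]
print(coarse_grain(x, 3))  # [4.0, 4.0]
```

Plotting sample entropy of each coarse-grained series against τ yields the MSE curve whose shape distinguishes seizure from non-seizure EEG in studies like this one.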
19

Pineda-Munoz, Silvia, and John Alroy. "Dietary characterization of terrestrial mammals". Proceedings of the Royal Society B: Biological Sciences 281, no. 1789 (August 22, 2014): 20141173. http://dx.doi.org/10.1098/rspb.2014.1173.

Full text
Abstract
Understanding the feeding behaviour of the species that make up any ecosystem is essential for designing further research. Mammals have been studied intensively, but the criteria used for classifying their diets are far from being standardized. We built a database summarizing the dietary preferences of terrestrial mammals using published data regarding their stomach contents. We performed multivariate analyses in order to set up a standardized classification scheme. Ideally, food consumption percentages should be used instead of qualitative classifications. However, when highly detailed information is not available we propose classifying animals based on their main feeding resources. They should be classified as generalists when none of the feeding resources constitute over 50% of the diet. The term ‘omnivore’ should be avoided because it does not communicate all the complexity inherent to food choice. Moreover, the so-called omnivore diets actually involve several distinctive adaptations. Our dataset shows that terrestrial mammals are generally highly specialized and that some degree of food mixing may even be required for most species.
APA, Harvard, Vancouver, ISO, and other styles
20

Kumar, Gaurav, Adam Ertel, Nicole Naranjo, Shuvanon Shahid, Lucia Languino, and Paolo Fortina. "Hematopoietic Cell Cluster Identifier (HCCI): A Knowledgebase Computational Tool to Facilitate the Characterization of Hematopoietic Cell Populations in Single Cell RNA-Sequencing". Blood 134, Supplement_1 (November 13, 2019): 3587. http://dx.doi.org/10.1182/blood-2019-127175.

Full text
Abstract
Single-cell transcriptional profiling is critical to interrogate cell types, states and functionality in a complex tissue or disease. Despite rapid advancements in single-cell RNA sequencing (scRNA-seq) analysis, a major challenge remains in the identification of distinct cell types based on specific molecular signatures. Few databases are available to facilitate cell-type characterization; however, their broad, all-cells-encompassing structure often proves inept for this task. At present, manual curation is the only option. To address this, we developed a knowledgebase computational tool, HCCI, that identifies and characterizes up to 28 different hematopoietic cell types in a scRNA-seq dataset. Utilizing a knowledgebase of ~1500 marker genes obtained from data mining, HCCI first uses a voting algorithm to match genes from a given cluster to one or more candidate cell types. Next, it normalizes the score of each cluster-cell type pair to provide a percentage assigning the cluster to a specific cell type. To demonstrate this, we utilized a dataset of Peripheral Blood Mononuclear Cells (PBMC) available from 10X Genomics. As the cell populations are already known, this was an ideal dataset to evaluate the performance of HCCI. As expected, unsupervised clustering and UMAP algorithms identified 9 different clusters of cells conforming to the existing knowledge of this dataset. Differentially expressed genes from each cluster were input into HCCI, and the following unique clusters were identified: naïve CD4 (cluster 1), M0 macrophage (cluster 2), CD8 T-cells (cluster 3), naïve B-cell (cluster 4), cytotoxic T-cell (cluster 5), M1 macrophage (cluster 6), activated NK cell (cluster 7), monocyte (cluster 8) and plasmacytoid dendritic cell (cluster 9). Using HCCI, an improvement in classification was observed for clusters 2, 3, 6, 8 and 9, which had originally been classified only into major blood cell types; the classification of clusters 1, 4, 5 and 7 remained unchanged.
The classification by HCCI was observed to be much more precise than the manual curation performed originally. To our knowledge, HCCI is the only available tool for characterization of hematopoietic cells from scRNA-seq data. Disclosures: No relevant conflicts of interest to declare.
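The vote-then-normalize scheme the abstract describes can be sketched in a few lines. The marker sets, gene names and scoring below are illustrative stand-ins, not HCCI's actual ~1500-gene knowledgebase or algorithm:

```python
# Toy marker-gene knowledgebase: cell type -> known marker genes (illustrative).
MARKERS = {
    "naive CD4 T-cell": {"IL7R", "CCR7", "CD3D"},
    "cytotoxic T-cell": {"CD8A", "GZMB", "NKG7"},
    "naive B-cell":     {"MS4A1", "CD79A", "CD19"},
}

def vote(cluster_genes):
    """Count marker hits per cell type, then normalize to percentages."""
    hits = {ct: len(m & set(cluster_genes)) for ct, m in MARKERS.items()}
    total = sum(hits.values()) or 1
    return {ct: 100.0 * n / total for ct, n in hits.items()}

scores = vote({"CD8A", "GZMB", "IL7R"})  # two cytotoxic hits, one CD4 hit
```

A cluster's label would then be the cell type with the highest percentage, with ambiguous clusters flagged when two types score similarly.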
21

Pradeep Kumar Gaur y Gurdeep Singh. "CAD System for Liver Diseases using Histological and Imaging features". International Journal of Research in Informative Science Application & Techniques (IJRISAT) 1, n.º 1 (7 de febrero de 2022): 1–8. http://dx.doi.org/10.46828/ijrisat.v1i1.15.

The current work for characterization of liver disease has been carried out using histological and imaging data. The BUPA liver disorders dataset created by the University of California, Irvine has been considered as histological data. The ultrasound images of hepatocellular carcinoma (HCC) and hemangioma (HEM) lesions were taken from ultrasoundcases.info. Laws' texture features were extracted from these images using Laws' masks of length 3. In CAD system design 1, histological classification of liver diseases has been carried out using an SVM classifier. In CAD system design 2, liver disease classification has been carried out using imaging features.
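Laws' masks of length 3 are built from three standard 1-D vectors whose outer products give nine 3x3 convolution kernels. A minimal sketch of the mask construction follows; how the filter responses are pooled into the paper's texture features (e.g. energy statistics) is omitted:

```python
import numpy as np

# The three standard length-3 Laws' vectors.
VECS = {"L3": np.array([1, 2, 1]),    # level (local average)
        "E3": np.array([-1, 0, 1]),   # edge
        "S3": np.array([-1, 2, -1])}  # spot

def laws_masks():
    """All nine 3x3 Laws' masks as outer products of the 1-D vectors."""
    return {a + b: np.outer(u, v)
            for a, u in VECS.items() for b, v in VECS.items()}

masks = laws_masks()  # e.g. masks["L3E3"] responds to vertical edges
```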
22

B V, Vinay, Poonam Bhaskar, Rohan Keshav H, Sirvi Priyanka Hiralal y Vidya P. "MEGACOSM – DETECTION AND CLASSIFICATION OF ASTRONOMICAL OBJECTS". International Research Journal of Computer Science 9, n.º 8 (13 de agosto de 2022): 217–23. http://dx.doi.org/10.26562/irjcs.2022.v0908.13.

The concept of existence started with the big bang theory, a phenomenon in which multiple objects collided to create many others. This creation is said to be expanding at the rate of cosmic acceleration, through which multiple new objects came into existence, called astronomical objects. Space object (SO) detection, classification, and characterization are significant challenges in many research fields. In recent years, deep learning and other forms of artificial intelligence (AI) have drawn the attention of many astronomers and academics. Megacosm is a project for the classification and identification of those newly created celestial objects. It works by manually training the model with data annotation techniques, and the dataset is enhanced using data augmentation. It uses YOLO as the core algorithm, along with deep learning concepts such as CNNs (Convolutional Neural Networks), to predict results. It gives as output a bounding box around the detected object, along with the accuracy of that prediction.
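Bounding-box predictions of the kind YOLO emits are usually judged by intersection-over-union (IoU) against a ground-truth box; this is the standard metric, not a detail stated in the abstract. A minimal sketch:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)
```

A detection is typically counted as correct when IoU exceeds a threshold such as 0.5.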
23

Ritschar, Sven, Elisabeth Schirmer, Benedikt Hufnagl, Martin G. J. Löder, Andreas Römpp y Christian Laforsch. "Classification of target tissues of Eisenia fetida using sequential multimodal chemical analysis and machine learning". Histochemistry and Cell Biology 157, n.º 2 (8 de noviembre de 2021): 127–37. http://dx.doi.org/10.1007/s00418-021-02037-1.

Acquiring comprehensive knowledge about the uptake of pollutants, impact on tissue integrity and the effects at the molecular level in organisms is of increasing interest due to the environmental exposure to numerous contaminants. The analysis of tissues can be performed by histological examination, which is still time-consuming and restricted to target-specific staining methods. The histological approaches can be complemented with chemical imaging analysis. Chemical imaging of tissue sections is typically performed using a single imaging approach. However, for toxicological testing of environmental pollutants, a multimodal approach combined with improved data acquisition and evaluation is desirable, since it may allow for more rapid tissue characterization and give further information on ecotoxicological effects at the tissue level. Therefore, using the soil model organism Eisenia fetida as a model, we developed a sequential workflow combining Fourier transform infrared spectroscopy (FTIR) and matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI) for chemical analysis of the same tissue sections. Data analysis of the FTIR spectra via random decision forest (RDF) classification enabled the rapid identification of target tissues (e.g., digestive tissue), which are relevant from an ecotoxicological point of view. MALDI imaging analysis provided specific lipid species which are sensitive to metabolic changes and environmental stressors. Taken together, our approach provides a fast and reproducible workflow for label-free histochemical tissue analyses in E. fetida, which can be applied to other model organisms as well.
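The random-decision-forest step amounts to fitting an ensemble classifier on per-pixel spectra. A sketch with synthetic stand-in "spectra" (the band index, class sizes and forest settings are invented, and real FTIR preprocessing such as baseline correction is omitted):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))   # 200 synthetic "spectra", 50 wavenumber bins
y = np.repeat([0, 1], 100)       # 0 = other tissue, 1 = target (e.g. digestive)
X[y == 1, 10] += 3.0             # target class absorbs more in "band" 10

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# clf.feature_importances_ then points back to the discriminative band.
```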
24

Kim, Junghwan. "In-Cylinder Pressure Based Engine Knock Classification Model for High-Compression Ratio, Automotive Spark-Ignition Engines Using Various Signal Decomposition Methods". Energies 14, n.º 11 (26 de mayo de 2021): 3117. http://dx.doi.org/10.3390/en14113117.

Engine knock determination has been conducted in various ways for spark timing calibration. In the present study, a knock classification model was developed using a machine learning algorithm. Wavelet packet decomposition (WPD) and ensemble empirical mode decomposition (EEMD) were employed for the characterization of the in-cylinder pressure signals from the experimental engine. The WPD was used to calculate 255 features from seven decomposition levels. EEMD provided a total of 70 features from its intrinsic mode functions (IMFs). The experimental engine was operated at advanced spark timings to induce knocking under various engine speeds and load conditions. Three knock intensity metrics were employed to determine that the dataset included 4158 knock cycles out of a total of 66,000 cycles. The classification model trained with 66,000 cycles achieved 99.26% accuracy in knock cycle detection. The neighborhood component analysis revealed that seven features contributed significantly to the classification. The classification model retrained with the seven significant features achieved an accuracy of 99.02%. Although the misclassification rate increased in normal cycle detection, the feature selection decreased the model size from 253 MB to 8.25 MB. Finally, the compact classification model achieved an accuracy of 99.95% with the second dataset obtained at the knock borderline (KBL) timings, which validates that the model is sufficient for KBL timing determination.
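The 255-feature count follows from the packet tree: levels 1-7 hold 2 + 4 + ... + 128 = 254 nodes, and one more value (e.g. the raw signal's energy at the root) gives 255. A pure-NumPy Haar sketch of node-energy features follows; the paper does not state which wavelet or feature statistic was actually used, so both are assumptions here:

```python
import numpy as np

def haar_packet_energies(signal, max_level):
    """Energy of every wavelet-packet node at levels 1..max_level,
    using the Haar filter pair (a stand-in for the WPD features above)."""
    feats, nodes = [], [np.asarray(signal, dtype=float)]
    for _ in range(max_level):
        nxt = []
        for x in nodes:
            nxt.append((x[0::2] + x[1::2]) / np.sqrt(2))  # approximation
            nxt.append((x[0::2] - x[1::2]) / np.sqrt(2))  # detail
        nodes = nxt
        feats += [float((n ** 2).sum()) for n in nodes]
    return feats

feats = haar_packet_energies(np.arange(128.0), 7)  # 254 node energies
```

Because the Haar pair is orthonormal, the two level-1 energies sum to the raw signal's energy, which is a handy sanity check on the decomposition.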
25

Aboshady, Doaa, Naglaa Ghannam, Eman Elsayed y Lamiaa Diab. "The Malware Detection Approach in the Design of Mobile Applications". Symmetry 14, n.º 5 (19 de abril de 2022): 839. http://dx.doi.org/10.3390/sym14050839.

Background: Security has become a major concern for smartphone users in line with the increasing use of mobile applications, which can be downloaded from unofficial sources. These applications make users vulnerable to penetration and viruses. Malicious software (malware) is unwanted software that is frequently used by cybercriminals to launch cyber-attacks. The motive of the research was therefore to detect malware early, before infection, by discovering it at the application-design level and not at the code level, where the virus will have already damaged the system. Methods: In this article, we proposed a malware detection method at the design level based on reverse engineering, the Unified Modeling Language (UML) environment, and the Web Ontology Language (OWL). The proposed method detected “Data_Send_Trojan” malware by designing a UML model that simulated the structure of the malware. Then, by generating the ontology of the model and using the RDF query language (SPARQL) to create certain queries, the malware was correctly detected. In addition, we proposed a new classification of malware suitable for design-level detection. Results: The proposed method detected Trojan malware that appeared 552 times in a sample of 600 infected Android application packages (APKs). The experimental results showed good performance in detecting malware at the design level, with precision and recall of 92% and 91%, respectively. As the dataset increased, the accuracy of detection increased significantly, which makes this methodology promising.
26

Cunha-Vaz, José y Luís Mendes. "Characterization of Risk Profiles for Diabetic Retinopathy Progression". Journal of Personalized Medicine 11, n.º 8 (23 de agosto de 2021): 826. http://dx.doi.org/10.3390/jpm11080826.

Diabetic retinopathy (DR) is a frequent complication of diabetes and, through its vision-threatening complications, i.e., macular edema and proliferative retinopathy, may lead to blindness. It is, therefore, of major relevance to identify the presence of retinopathy in diabetic patients and, when present, to identify the eyes that have the greatest risk of progression and the greatest potential to benefit from treatment. In the present paper, we suggest the development of a simple-to-use alternative to the Early Treatment Diabetic Retinopathy Study (ETDRS) grading system, establishing disease severity as a necessary step to further evaluate and categorize the different risk factors involved in the progression of diabetic retinopathy. It needs to be validated against the ETDRS classification and, ideally, should be able to be performed automatically using data directly from the examination equipment, without the influence of subjective individual interpretation. We performed the characterization of 105 eyes from 105 patients previously classified by ETDRS level by a Reading Centre, using a set of rules generated by a decision tree whose possible inputs were a set of metrics automatically extracted from Swept-Source Optical Coherence Tomography Angiography (SS-OCTA) and Spectral Domain OCT (SD-OCT), measured at different localizations of the retina. When the most relevant metrics were used to derive the rules to organize the full pathological dataset, taking into account the different ETDRS grades, a global accuracy of 0.8 was obtained. In summary, it is now possible to envision an automated classification of DR progression using noninvasive methods of examination, OCT and SS-OCTA. Using this classification to establish the severity grade of DR at the time of the ophthalmological examination, it is then possible to identify the risk of progression in severity and the development of vision-threatening complications based on the predominant phenotype.
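Deriving explicit rules from a decision tree over imaging metrics can be sketched with scikit-learn. The metrics, labels and the 0.40 vessel-density cut below are invented for illustration, not clinical thresholds from the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-ins for OCT/OCTA metrics.
rng = np.random.default_rng(1)
density = rng.uniform(0.2, 0.6, 100)        # "vessel density"
thickness = rng.uniform(200.0, 350.0, 100)  # "retinal thickness" (um)
X = np.column_stack([density, thickness])
y = (density < 0.40).astype(int)            # 1 = higher severity grade

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Human-readable rules of the kind a grading system could adopt:
rules = export_text(tree, feature_names=["vessel_density", "thickness"])
```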
27

Sandaruwan, Pahalage Dhanushka y Champi Thusangi Wannige. "An improved deep learning model for hierarchical classification of protein families". PLOS ONE 16, n.º 10 (20 de octubre de 2021): e0258625. http://dx.doi.org/10.1371/journal.pone.0258625.

Although genes carry information, proteins are the main role players in providing all the functionalities of a living organism. Massive numbers of different proteins are involved in every function that occurs in a cell. These amino acid sequences can be hierarchically classified into a set of families and subfamilies depending on their evolutionary relatedness and similarities in their structure or function. Protein characterization to identify protein structure and function is done accurately using laboratory experiments. With the rapidly increasing number of novel protein sequences, these experiments have become difficult to carry out, since they are expensive, time-consuming, and laborious. Therefore, many computational classification methods have been introduced to classify proteins and predict their functional properties. As the performance of computational techniques has progressed, deep learning has come to play a key role in many areas. Novel deep learning models such as DeepFam and ProtCNN have recently been presented to classify proteins into their families. However, these deep learning models have been used to carry out the non-hierarchical classification of proteins. In this research, we propose a deep learning neural network model named DeepHiFam with high accuracy to classify proteins hierarchically into different levels simultaneously. The model achieved an accuracy of 98.38% for protein family classification and more than 80% accuracy for the classification of protein subfamilies and sub-subfamilies. Further, DeepHiFam performed well in the non-hierarchical classification of protein families and achieved an accuracy of 98.62% and 96.14% for the popular Pfam dataset and the COG dataset, respectively.
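A key property of hierarchical classification is that the lower-level label must stay consistent with its parent. A toy sketch of that constraint follows; it is not DeepHiFam's architecture (which predicts the levels simultaneously with a neural network), and the families and scores are invented:

```python
# Hypothetical hierarchy: family -> allowed subfamilies.
HIERARCHY = {"kinase": ["tyrosine kinase", "ser/thr kinase"],
             "protease": ["serine protease"]}

def hierarchical_predict(family_scores, subfamily_scores):
    """Pick the best family first, then the best subfamily *within* it,
    so the two levels can never contradict each other."""
    fam = max(family_scores, key=family_scores.get)
    sub = max(HIERARCHY[fam], key=lambda s: subfamily_scores.get(s, 0.0))
    return fam, sub

pred = hierarchical_predict(
    {"kinase": 0.9, "protease": 0.1},
    {"serine protease": 0.95, "tyrosine kinase": 0.6, "ser/thr kinase": 0.3},
)
```

Note the subfamily "serine protease" scores highest in isolation but is rejected because it lies outside the predicted family.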
28

Hernández, César, José del Sagrado, Francisco Rodríguez, José Carlos Moreno y Jorge Antonio Sánchez. "Modeling of Energy Demand of a High-Tech Greenhouse in Warm Climate Based on Bayesian Networks". Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/201646.

This work analyzes the energy demand of a high-tech greenhouse and its characterization, with the objective of building and evaluating classification models based on Bayesian networks. The utility of these models resides in their capacity to perceive relations among variables in the greenhouse by identifying probabilistic dependences between them, and in their ability to make predictions without the need to observe all the variables present in the model. In this way, they provide a useful tool for the design of an energy control system. In this paper, the data acquisition system used to collect the studied dataset is described. The energy demand distribution is analyzed and different discretization techniques are applied to reduce its dimensionality, paying particular attention to their impact on the classification models' performance. A comparison between the different classification models applied is performed.
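Two simple discretization policies of the kind compared in such studies are equal-width and equal-frequency binning. A pure-Python sketch (the bin count and demand values are illustrative, not the paper's data):

```python
def equal_width_bins(values, k):
    """Split the value range [min, max] into k equally wide intervals."""
    lo, hi = min(values), max(values)
    w = (hi - lo) / k or 1.0
    return [min(int((v - lo) / w), k - 1) for v in values]

def equal_frequency_bins(values, k):
    """Assign labels so each bin holds (roughly) the same number of samples."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, i in enumerate(order):
        labels[i] = min(rank * k // len(values), k - 1)
    return labels

demand = [1.0, 1.2, 1.1, 9.5, 9.8, 10.0]   # toy energy-demand readings
low_high_w = equal_width_bins(demand, 2)    # [0, 0, 0, 1, 1, 1]
low_high_f = equal_frequency_bins(demand, 2)
```

On skewed demand distributions the two policies diverge, which is exactly why their impact on classifier performance is worth comparing.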
29

Yin, Xiaoxia, Samra Irshad y Yanchun Zhang. "Classifiers fusion for improved vessel recognition with application in quantification of generalized arteriolar narrowing". Journal of Innovative Optical Health Sciences 13, n.º 01 (25 de noviembre de 2019): 1950021. http://dx.doi.org/10.1142/s1793545819500214.

This paper attempts to estimate a diagnostically relevant measure, i.e., the Arteriovenous Ratio, with improved retinal vessel classification using feature ranking strategies and a multiple-classifier decision-combination scheme. The features exploited for retinal vessel characterization are based on statistical measures of the histogram, different filter responses of images, and local gradient information. The feature selection process is based on two feature ranking approaches (the Pearson Correlation Coefficient technique and the Relief-F method) to rank the features, followed by use of the maximum classification accuracy of three supervised classifiers (k-Nearest Neighbor, Support Vector Machine and Naïve Bayes) as a threshold for feature subset selection. Retinal vessels are labeled using the selected feature subset and the proposed hybrid classification scheme, i.e., decision fusion of multiple classifiers. The comparative analysis shows an increase in vessel classification accuracy as well as in Arteriovenous Ratio calculation performance. The system is tested on three databases: a local dataset of 44 images and two publicly available databases, INSPIRE-AVR containing 40 images and VICAVR containing 58 images. The local database also contains images with pathologically diseased structures. The performance of the proposed system is assessed by comparing the experimental results with the gold standard estimations as well as with the results of previous methodologies. Overall, an accuracy of 90.45%, 93.90% and 87.82% is achieved in retinal blood vessel separation, with 0.0565, 0.0650 and 0.0849 mean error in Arteriovenous Ratio calculation for the local, INSPIRE-AVR and VICAVR datasets, respectively.
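The simplest decision-fusion rule is a majority vote over the per-vessel labels of the three classifiers. A sketch (the paper's actual combination rule may be weighted differently; the tie-break here, favoring the first classifier, is an assumption):

```python
from collections import Counter

def fuse(*prediction_lists):
    """Per-sample majority vote across classifiers; ties go to the
    first classifier's label."""
    fused = []
    for votes in zip(*prediction_lists):
        counts = Counter(votes)
        top = max(counts.values())
        fused.append(next(v for v in votes if counts[v] == top))
    return fused

knn = ["artery", "vein",   "vein"]
svm = ["artery", "artery", "vein"]
nb  = ["vein",   "artery", "vein"]
combined = fuse(knn, svm, nb)  # ['artery', 'artery', 'vein']
```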
30

Lu, Shaina, Cantin Ortiz, Daniel Fürth, Stephan Fischer, Konstantinos Meletis, Anthony Zador y Jesse Gillis. "Assessing the replicability of spatial gene expression using atlas data from the adult mouse brain". PLOS Biology 19, n.º 7 (19 de julio de 2021): e3001341. http://dx.doi.org/10.1371/journal.pbio.3001341.

High-throughput, spatially resolved gene expression techniques are poised to be transformative across biology by overcoming a central limitation in single-cell biology: the lack of information on relationships that organize the cells into the functional groupings characteristic of tissues in complex multicellular organisms. Spatial expression is particularly interesting in the mammalian brain, which has a highly defined structure, strong spatial constraint in its organization, and detailed multimodal phenotypes for cells and ensembles of cells that can be linked to mesoscale properties such as projection patterns, and from there, to circuits generating behavior. However, as with any type of expression data, cross-dataset benchmarking of spatial data is a crucial first step. Here, we assess the replicability, with reference to canonical brain subdivisions, between the Allen Institute’s in situ hybridization data from the adult mouse brain (Allen Brain Atlas (ABA)) and a similar dataset collected using spatial transcriptomics (ST). With the advent of tractable spatial techniques, for the first time, we are able to benchmark the Allen Institute’s whole-brain, whole-transcriptome spatial expression dataset with a second independent dataset that similarly spans the whole brain and transcriptome. We use regularized linear regression (LASSO), linear regression, and correlation-based feature selection in a supervised learning framework to classify expression samples relative to their assayed location. We show that Allen Reference Atlas labels are classifiable using transcription in both data sets, but that performance is higher in the ABA than in ST. Furthermore, models trained in one dataset and tested in the opposite dataset do not reproduce classification performance bidirectionally. While an identifying expression profile can be found for a given brain area, it does not generalize to the opposite dataset. 
In general, we found that canonical brain area labels are classifiable in gene expression space within each dataset, and that the observed performance is not merely reflecting physical distance in the brain. However, we also show that cross-platform classification is not robust. Emerging spatial datasets from the mouse brain will allow further characterization of cross-dataset replicability, ultimately providing a valuable reference set for understanding the cell biology of the brain.
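The regularized-regression step can be sketched as an L1-penalized logistic classifier, whose sparsity performs the feature (gene) selection. The data below are synthetic stand-ins with two informative "genes" out of thirty; the regularization strength is an arbitrary choice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 30))            # 300 samples, 30 "genes"
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only genes 0 and 1 mark the "area"

# L1 penalty zeroes out uninformative coefficients (LASSO-style selection).
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])  # surviving "expression profile"
```

Testing such a model on a second, independently collected dataset is what exposes the cross-platform fragility the abstract reports.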
31

Sebbeh-Newton, Sylvanus, Prosper E. A. Ayawah, Jessica W. A. Azure, Azupuri G. A. Kaba, Fauziah Ahmad, Zurinahni Zainol y Hareyani Zabidi. "Towards TBM Automation: On-The-Fly Characterization and Classification of Ground Conditions Ahead of a TBM Using Data-Driven Approach". Applied Sciences 11, n.º 3 (25 de enero de 2021): 1060. http://dx.doi.org/10.3390/app11031060.

Pre-tunneling exploration for rock mass classification is a common practice in tunneling projects. This study proposes a data-driven approach that allows for rock mass classification. Two machine learning (ML) classification models, namely random forest (RF) and extremely randomized trees (ERT), are employed to classify the rock mass conditions encountered in the Pahang-Selangor Raw Water Tunnel in Malaysia using tunnel boring machine (TBM) operating parameters. Due to the imbalanced distribution of rock classes, an oversampling technique was used to obtain a balanced training dataset for unbiased learning of the ML models. A five-fold cross-validation approach was used to tune the model hyperparameters, and a validation-set approach was used for model evaluation. ERT achieved an overall accuracy of 95%, while RF achieved 94% accuracy, in correctly classifying rock mass conditions. The result shows that the proposed approach has the potential to identify and correctly classify the ground conditions encountered by a TBM, which allows for early problem detection and on-the-fly support system selection based on the identified ground condition. This study, which is part of an ongoing effort towards developing reliable models that could be incorporated into TBMs, shows the potential of data-driven approaches for on-the-fly classification of ground conditions ahead of a TBM and could allow for the early detection of potential construction problems.
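The balancing step can be as simple as random oversampling: duplicating minority-class samples until every class matches the majority count. A pure-Python sketch (the paper does not name its exact oversampling variant, so plain duplication is an assumption):

```python
import random

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples (with replacement) until every
    class reaches the majority-class count."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_s, out_y = [], []
    for y, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_s.append(s)
            out_y.append(y)
    return out_s, out_y
```

Crucially, oversampling is applied to the training split only, so the validation set still reflects the true class balance.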
32

Albuslimi, Mohammed. "K-Mean Clustering Analysis and Logistic Boosting Regression for Rock Facies Characterization and Classification in Zubair Reservoir in Luhais Oil Field, Southern Iraq". Iraqi Geological Journal 54, n.º 2B (31 de agosto de 2021): 65–75. http://dx.doi.org/10.46717/igj.54.2b.6ms-2021-08-26.

Identifying rock facies from petrophysical logs is a crucial step in the evaluation and characterization of hydrocarbon reservoirs. The rock facies can be obtained either from core analysis (lithofacies) or from well-logging data (electrofacies). In this research, two advanced machine learning approaches were adopted for electrofacies identification and for lithofacies classification, both given the well-logging interpretations from a well in the upper shale member in the Luhais Oil Field, southern Iraq. Specifically, K-mean partitioning analysis and Logistic Boosting (Logit Boost) were conducted for electrofacies characterization and lithofacies classification, respectively. The dataset includes the routine core analysis of core porosity, core permeability, and measured discrete lithofacies, along with the well-logging interpretations (shale volume, water saturation and effective porosity) over the entire reservoir interval. The K-mean clustering technique demonstrated good matching between the vertical sequence of identified electrofacies and the observed lithofacies from the core description, attaining 89.92% total correct classifications in the confusion matrix. Logit Boost showed excellent matching between the recognized lithofacies from the core description and the predicted lithofacies, attaining a 98.26% total correct classification rate in the confusion matrix. The high accuracy of the Logit Boost algorithm comes from taking into account the non-linearity between the lithofacies and petrophysical properties in the classification process. The high classification accuracy achieved by Logit Boost suggests that a similar procedure could be applied to other sandstone reservoirs to improve reservoir characterization. The complete facies identification and classification were implemented in R, the open-source statistical computing language.
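K-means on a single log variable reduces to Lloyd's algorithm in one dimension. A minimal sketch (the study used R and multiple log variables; the porosity values and initial centers here are illustrative):

```python
def kmeans_1d(values, init_centers, iters=50):
    """Plain Lloyd's algorithm on one variable, e.g. effective porosity."""
    centers = list(init_centers)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

poro = [0.05, 0.06, 0.07, 0.18, 0.20, 0.22]  # two "electrofacies" groups
centers = kmeans_1d(poro, [0.0, 0.3])         # converges near 0.06 and 0.20
```

The "total correct percent" quoted above is then just the trace of the confusion matrix between predicted facies and core-described lithofacies, divided by the sample count.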
33

Lasaponara, Rosa, Antonio Lanorte y Stefano Pignatti. "Characterization and Mapping of Fuel Types for the Mediterranean Ecosystems of Pollino National Park in Southern Italy by Using Hyperspectral MIVIS Data". Earth Interactions 10, n.º 13 (1 de mayo de 2006): 1–11. http://dx.doi.org/10.1175/ei165.1.

The characterization and mapping of fuel types is one of the most important factors that should be taken into consideration for wildland fire prevention and prefire planning. This research aims to investigate the usefulness of hyperspectral data to recognize and map fuel types, in order to ascertain how well remote sensing data can provide an exhaustive classification of fuel properties. For this purpose, airborne hyperspectral Multispectral Infrared and Visible Imaging Spectrometer (MIVIS) data acquired in November 1998 have been analyzed for a test area of 60 km2 selected inside Pollino National Park in the south of Italy. Fieldwork fuel-type recognition, performed at the same time as the remote sensing data acquisition, was used as a ground-truth dataset to assess the results obtained for the considered test area. The method comprised the following three steps: 1) adaptation of the Prometheus fuel types to obtain a standardized system useful for remotely sensed classification of fuel types and properties in the considered Mediterranean ecosystems; 2) model construction for the spectral characterization and mapping of fuel types based on a maximum likelihood (ML) classification algorithm; and 3) accuracy assessment based on the comparison of MIVIS-based results with ground truth. Results from our analysis showed that the use of remotely sensed data at high spatial and spectral resolution provided a valuable characterization and mapping of fuel types, as the achieved classification accuracy was higher than 90%.
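The maximum-likelihood decision rule assigns each pixel to the class whose (here Gaussian) distribution makes the observed reflectance most probable. A single-band sketch; real ML classification of MIVIS data uses multivariate class statistics, and the fuel-type means and standard deviations below are invented:

```python
import math

def ml_classify(x, class_stats):
    """class_stats: {name: (mean, std)}. Pick the class maximizing the
    Gaussian log-likelihood of reflectance x (constants dropped)."""
    def loglik(mu, sd):
        return -math.log(sd) - (x - mu) ** 2 / (2.0 * sd * sd)
    return max(class_stats, key=lambda c: loglik(*class_stats[c]))

stats = {"shrubland": (0.30, 0.05), "grassland": (0.55, 0.08)}
label = ml_classify(0.33, stats)  # closest, in likelihood terms, to shrubland
```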
34

Deng, Davy. "Abstract 2294: Genomic landscape and immunological profile of glioblastoma in East Asians". Cancer Research 82, n.º 12_Supplement (15 de junio de 2022): 2294. http://dx.doi.org/10.1158/1538-7445.am2022-2294.

While it is well known that glioblastoma (GBM) shows profound inter- and intra-tumoral heterogeneity, ancestry disparities among races have been largely overlooked. By comparing the predominantly Caucasian TCGA GBM dataset (EUR, n=383) with a large genomic and transcriptomic East Asian GBM dataset (EAS, n=443), we identified numerous significant differences between EAS-GBM and EUR-GBM. Transcriptomic clustering also established a novel EAS-GBM molecular classification system and led to the discovery of a new EAS-specific GBM subtype. We characterized each subtype comprehensively, including genomic alterations, immune profile, and TCR/BCR repertoire analysis. This study elucidates important ancestry-dependent biological and molecular differences between GBMs and proposes a new transcriptome-based EAS-GBM classification. Understanding these unique features contributes to a more comprehensive characterization of the heterogeneity in GBM patients and could facilitate more careful patient stratification during targeted therapy and drug development. Citation Format: Davy Deng. Genomic landscape and immunological profile of glioblastoma in East Asians [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 2294.
35

Hadiyoso, Sugondo, Inung Wijayanto y Annisa Humairani. "Signal Dynamics Analysis for Epileptic Seizure Classification on EEG Signals". Traitement du Signal 38, n.º 1 (28 de febrero de 2021): 73–78. http://dx.doi.org/10.18280/ts.380107.

Epilepsy is the most common form of neurological disease. Patients with epilepsy may experience seizures of a certain duration, with or without provocation. Epilepsy analysis can be done with an electroencephalogram (EEG) examination. Qualitative observation of EEG signals incurs high cost and is often confusing due to the non-linear nature of the EEG signal and noise. In this study, we proposed an EEG signal processing system for seizure detection. A signal dynamics approach to the characterization of normal and seizure signals was the main focus of this study. Spectral Entropy (SpecEn) and fractal analysis are used to estimate the EEG signal dynamics and are used as feature sets. The proposed method is validated using a public EEG dataset, which included preictal, ictal, and interictal stages, with a Naïve Bayes classifier. The test results showed that the proposed method is able to achieve an ictal detection accuracy of up to 100%. It is hoped that the proposed method can be considered for the detection of seizure signals in long-term EEG recordings, as it can simplify the diagnosis of epilepsy.
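Spectral entropy is the Shannon entropy of the normalized power spectral density: narrow-band (regular) activity scores near 0, broadband activity near 1. A minimal stdlib sketch with a naive DFT; windowing, band selection and the paper's exact SpecEn definition are omitted assumptions:

```python
import cmath
import math

def spectral_entropy(signal):
    """Normalized Shannon entropy of the power spectrum (DC excluded)."""
    n = len(signal)
    psd = []
    for k in range(1, n // 2 + 1):  # positive frequencies via naive DFT
        s = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(signal))
        psd.append(abs(s) ** 2)
    total = sum(psd)
    p = [v / total for v in psd if v > 0]
    h = -sum(q * math.log(q) for q in p)
    return h / math.log(len(psd))  # scaled into [0, 1]
```

A pure sine (one dominant spectral line) yields entropy near 0, while an impulse, whose spectrum is flat, yields exactly 1.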
36

Iqbal, Saleem, Khalid Iqbal, Fahim Arif, Arslan Shaukat y Aasia Khanum. "Potential Lung Nodules Identification for Characterization by Variable Multistep Threshold and Shape Indices from CT Images". Computational and Mathematical Methods in Medicine 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/241647.

Computed tomography (CT) is an important imaging modality. Physicians, surgeons, and oncologists prefer CT scans for the diagnosis of lung cancer. However, some nodules are missed in CT scans. Computer-aided diagnosis methods are useful to radiologists for the detection of these nodules and the early diagnosis of lung cancer. Early detection of a malignant nodule is helpful for treatment. Computer-aided diagnosis of lung cancer involves lung segmentation, potential nodule identification, feature extraction from the potential nodules, and classification of the nodules. In this paper, we present an automatic method for the detection and segmentation of lung nodules from CT scans for subsequent feature extraction and classification. The contribution of the work is the detection and segmentation, in one go, of small-sized nodules, low- and high-contrast nodules, nodules attached to vasculature, nodules attached to the pleura membrane, and nodules in close vicinity of the diaphragm and lung wall. The particular techniques of the method are a multistep threshold for nodule detection and a shape index threshold for false-positive reduction. We used 60 CT scans of the “Lung Image Database Consortium-Image Database Resource Initiative” taken by a GE Medical Systems LightSpeed16 scanner as the dataset and correctly detected 92% of the nodules. The results are reproducible.
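The multistep-threshold idea, reduced to one dimension for illustration: scan candidate regions at several intensity cutoffs, so both low- and high-contrast structures surface at some step. This toy profile and the threshold steps are invented, not the paper's actual 2-D procedure:

```python
def regions_above(profile, t):
    """Start indices of maximal runs where intensity exceeds threshold t."""
    runs, inside = [], False
    for i, v in enumerate(profile):
        if v > t and not inside:
            runs.append(i)
            inside = True
        elif v <= t:
            inside = False
    return runs

profile = [0, 3, 3, 0, 0, 7, 8, 7, 0, 2]               # toy intensity row
candidates = {t: regions_above(profile, t) for t in (1, 4, 6)}
```

Low thresholds admit faint candidates (later pruned by the shape-index test), while high thresholds isolate only the brightest ones.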
37

Chen, Guo Min, Li Sun y Xiang Sheng Bao. "3D Stochastic Modeling for Reservoir Characterization and Application". Advanced Materials Research 1010-1012 (agosto de 2014): 1353–58. http://dx.doi.org/10.4028/www.scientific.net/amr.1010-1012.1353.

Reservoirs are the accumulation spaces and development targets of hydrocarbons and have always drawn the attention of researchers engaged in hydrocarbon exploration and development. The goal of reservoir characterization is to delineate precisely and completely the 3D distribution of geologic variations in reservoirs by applying all available data channels, so as to provide a reliable reference for further reservoir development. Stochastic modeling has become the dominant tool for reservoir characterization because it can both simulate reservoir heterogeneity and quantitatively characterize the reservoir. For the reservoirs in well block X26-X27 of the Xiazijie Oilfield, on the basis of reservoir structure, sedimentary microfacies, logging interpretation and reservoir heterogeneity research, a geologic dataset was established, and 3D models of reservoir structure, logging interpretation and reservoir attributes were built by applying stochastic modeling and 3D visualization techniques to this area. Furthermore, testing and modification of the facies analysis and classification were conducted in order to unravel consistent microfacies and reservoir property distributions, so that the models can serve identification, fine description and reservoir dynamic simulation.
38

Varol, Onur, Emilio Ferrara, Clayton Davis, Filippo Menczer y Alessandro Flammini. "Online Human-Bot Interactions: Detection, Estimation, and Characterization". Proceedings of the International AAAI Conference on Web and Social Media 11, n.º 1 (3 de mayo de 2017): 280–89. http://dx.doi.org/10.1609/icwsm.v11i1.14871.

Abstract
Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework by using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that include both humans and bots of varying sophistication. Our models yield high accuracy and agreement with each other and can detect bots of different nature. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self promoters, and accounts that post content from connected applications.
39

Schreuder, Ramon-Michel, Qurine E. W. van der Zander, Roger Fonollà, Lennard P. L. Gilissen, Arnold Stronkhorst, Birgitt Klerkx, Peter H. N. de With, Ad M. Masclee, Fons van der Sommen, and Erik J. Schoon. "Algorithm combining virtual chromoendoscopy features for colorectal polyp classification". Endoscopy International Open 09, no. 10 (September 16, 2021): E1497–E1503. http://dx.doi.org/10.1055/a-1512-5175.

Abstract
Background and study aims: Colonoscopy is considered the gold standard for decreasing colorectal cancer incidence and mortality. Optical diagnosis of colorectal polyps (CRPs) is an ongoing challenge in clinical colonoscopy and its accuracy among endoscopists varies widely. Computer-aided diagnosis (CAD) for CRP characterization may help to improve this accuracy. In this study, we investigated the diagnostic accuracy of a novel algorithm for polyp malignancy classification by exploiting the complementary information revealed by three specific modalities. Methods: We developed a CAD algorithm for CRP characterization based on high-definition, non-magnified white light (HDWL), blue light imaging (BLI) and linked color imaging (LCI) still images from routine exams. All CRPs were collected prospectively and classified into benign or premalignant using histopathology as the gold standard. Images and data were used to train the CAD algorithm using a triplet network architecture. Our training dataset was validated using threefold cross-validation. Results: In total, 609 colonoscopy images of 203 CRPs from 154 consecutive patients were collected. A total of 174 CRPs were found to be premalignant and 29 were benign. Combining the triplet network features with all three image enhancement modalities resulted in an accuracy of 90.6 %, 89.7 % sensitivity, 96.6 % specificity, a positive predictive value of 99.4 %, and a negative predictive value of 60.9 % for CRP malignancy classification. The classification time for our CAD algorithm was approximately 90 ms per image. Conclusions: Our novel approach and algorithm for CRP classification differentiates accurately between benign and premalignant polyps in non-magnified endoscopic images. This is the first algorithm combining three optical modalities (HDWL/BLI/LCI) exploiting the triplet network approach.
40

Nasri, Chaimae, Yasmina Halabi, Hicham Harhar, Faez Mohammed, Abdelkabir Bellaouchou, Abdallah Guenbour, and Mohamed Tabyaoui. "Chemical characterization of oil from four Avocado varieties cultivated in Morocco". OCL 28 (2021): 19. http://dx.doi.org/10.1051/ocl/2021008.

Abstract
The notable growth in the use of avocado oil in the nutritional and cosmetic fields was the main motivation to valorize the oil production of important avocado varieties grown in Morocco by analyzing the oils' fatty acid, sterol, and tocopherol composition and their physico-chemical properties. Oleic acid is the main fatty acid in the oil, constituting between 50 and 65% of total fatty acids. The study of the unsaponifiable fraction revealed that avocado oil contains 3259.9–5378.8 mg/kg sterols and 113.13–332.17 mg/kg tocopherols. Chemometric tools, namely principal component analysis, agglomerative hierarchical clustering, analysis of variance, and classification trees using the Chi-squared Automatic Interaction Detector, were employed. These tools revealed differences in the fatty acid, sterol, and tocopherol composition of the avocado oil samples, differences that arise from the avocado fruit variety. The Agglomerative Hierarchical Clustering (AHC) method was efficient in distinguishing avocado oil samples by fruit variety using fatty acid, tocopherol, and sterol compositions and total sterols. The Principal Component Analysis (PCA) method distinguished the avocado oil dataset by fruit variety, supplying a correct discrimination rate of 95.44% using the fatty acids. The Chi-squared Automatic Interaction Detector (CHAID), carried out using the same variables, also provided an acceptable classification rate of 50% for avocado fruit varieties using the total tocopherol content. In addition, a comparative study of the physico-chemical properties in terms of acidity index, saponification index, iodine index, chlorophylls, carotenoids, and methyl and ethyl esters was performed.
41

Thambawita, Vajira, Inga Strümke, Steven A. Hicks, Pål Halvorsen, Sravanthi Parasa, and Michael A. Riegler. "Impact of Image Resolution on Deep Learning Performance in Endoscopy Image Classification: An Experimental Study Using a Large Dataset of Endoscopic Images". Diagnostics 11, no. 12 (November 24, 2021): 2183. http://dx.doi.org/10.3390/diagnostics11122183.

Abstract
Recent trials have evaluated the efficacy of deep convolutional neural network (CNN)-based AI systems to improve lesion detection and characterization in endoscopy. Impressive results are achieved, but many medical studies use a very small image resolution to save computing resources, at the cost of losing details. Today, no conventions relating resolution and performance exist, and monitoring the performance of various CNN architectures as a function of image resolution provides insights into how the subtleties of different lesions on endoscopy affect performance. This can help set standards for image or video characteristics for future CNN-based models in gastrointestinal (GI) endoscopy. This study examines the performance of CNNs on the HyperKvasir dataset, consisting of 10,662 images from 23 different findings. We evaluate two CNN models for endoscopic image classification under quality distortions, with image resolutions ranging from 32 × 32 to 512 × 512 pixels. The performance is evaluated using two-fold cross-validation, with F1-score, maximum Matthews correlation coefficient (MCC), precision, and sensitivity as metrics. Increased performance was observed with higher image resolution for all findings in the dataset; for classification over the entire dataset, including all subclasses, the best MCC was achieved at a resolution of 512 × 512 pixels. The highest performance was observed, with an MCC value of 0.9002, when the models were trained on the highest resolution and tested on the same resolution. Different resolutions and their effect on CNNs are explored. We show that image resolution has a clear influence on performance, which calls for standards in the field in the future.
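The resolution sweep described in this abstract (32 × 32 up to 512 × 512) amounts to repeatedly downsampling the same images. As a toy illustration of why detail is lost — not the paper's preprocessing code, and with made-up pixel values — average pooling by a factor `f` replaces each `f × f` block with its mean:

```python
def avg_pool(img, f):
    """Downsample a 2-D grayscale image by factor f via average pooling."""
    h, w = len(img), len(img[0])
    return [
        [sum(img[y * f + dy][x * f + dx]
             for dy in range(f) for dx in range(f)) / (f * f)
         for x in range(w // f)]
        for y in range(h // f)
    ]

# A 2x2 checkerboard collapses to a single flat pixel: the texture is gone.
print(avg_pool([[0, 1], [1, 0]], 2))  # [[0.5]]
```

Going from 512 × 512 to 32 × 32 applies this kind of averaging with f = 16, which is why fine mucosal patterns that distinguish some findings can vanish at low resolution.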
42

Khalil, Reem, Sadok Kallel, Ahmad Farhat, and Pawel Dlotko. "Topological Sholl descriptors for neuronal clustering and classification". PLOS Computational Biology 18, no. 6 (June 22, 2022): e1010229. http://dx.doi.org/10.1371/journal.pcbi.1010229.

Abstract
Neuronal morphology is a fundamental factor influencing information processing within neurons and networks. Dendritic morphology in particular can widely vary among cell classes, brain regions, and animal species. Thus, accurate quantitative descriptions allowing classification of large sets of neurons is essential for their structural and functional characterization. Current robust and unbiased computational methods that characterize groups of neurons are scarce. In this work, we introduce a novel technique to study dendritic morphology, complementing and advancing many of the existing techniques. Our approach is to conceptualize the notion of a Sholl descriptor and to associate, for each morphological feature and each neuron, a function of the radial distance from the soma, taking values in a metric space. Functional distances give rise to pseudo-metrics on sets of neurons which are then used to perform the two distinct tasks of clustering and classification. To illustrate the use of Sholl descriptors, four datasets were retrieved from the large public repository https://neuromorpho.org/ comprising neuronal reconstructions from different species and brain regions. Sholl descriptors were subsequently computed, and standard clustering methods enhanced with detection and metric learning algorithms were then used to objectively cluster and classify each dataset. Importantly, our descriptors outperformed conventional morphometric techniques (L-Measure metrics) in several of the tested datasets. Therefore, we offer a novel and effective approach to the analysis of diverse neuronal cell types, and provide a toolkit for researchers to cluster and classify neurons.
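A Sholl descriptor assigns to each neuron a function of the radial distance from the soma. The classic scalar case — counting how many dendritic segments cross each concentric circle — can be sketched as follows; the 2-D segment data here are invented for illustration, and the authors' actual descriptors are richer, metric-space-valued objects:

```python
import math

def sholl_counts(segments, soma, radii):
    """Classic Sholl analysis: count dendrite crossings of concentric circles.

    segments: iterable of ((x1, y1), (x2, y2)) dendritic segments.
    A segment crosses radius r when exactly one endpoint lies inside the circle.
    """
    def dist(p):
        return math.hypot(p[0] - soma[0], p[1] - soma[1])
    return [sum((dist(p) < r) != (dist(q) < r) for p, q in segments)
            for r in radii]

# A single unbranched dendrite of two segments along the x-axis:
counts = sholl_counts([((0, 0), (3, 0)), ((3, 0), (6, 0))],
                      soma=(0, 0), radii=[1, 2, 4, 7])
print(counts)  # [1, 1, 1, 0]
```

The resulting list, indexed by radius, is exactly a "function of the radial distance from the soma"; comparing such functions between neurons is what yields the pseudo-metrics used for clustering.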
43

Jin, Ye, Mei Wang, Liyan Luo, Dinghao Zhao, and Zhanqi Liu. "Polyphonic Sound Event Detection Using Temporal-Frequency Attention and Feature Space Attention". Sensors 22, no. 18 (September 9, 2022): 6818. http://dx.doi.org/10.3390/s22186818.

Abstract
The complexity of polyphonic sounds imposes numerous challenges on their classification. Especially in real life, polyphonic sound events exhibit discontinuity and unstable time-frequency variations. Traditional single acoustic features cannot characterize the key feature information of a polyphonic sound event, and this deficiency results in poor model classification performance. In this paper, we propose a convolutional recurrent neural network model based on the temporal-frequency (TF) attention mechanism and feature space (FS) attention mechanism (TFFS-CRNN). The TFFS-CRNN model aggregates Log-Mel spectrograms and MFCC features as inputs and comprises the TF-attention module, the convolutional recurrent neural network (CRNN) module, the FS-attention module, and the bidirectional gated recurrent unit (BGRU) module. In polyphonic sound event detection (SED), the TF-attention module can capture the critical temporal-frequency features more capably. The FS-attention module assigns different dynamically learnable weights to different dimensions of features. The TFFS-CRNN model thus improves the characterization of key feature information in polyphonic SED. By using the two attention modules, the model can focus on semantically relevant time frames, key frequency bands, and important feature spaces. Finally, the BGRU module learns contextual information. Experiments were conducted on the DCASE 2016 Task 3 and DCASE 2017 Task 3 datasets. The results show that the F1-score of the TFFS-CRNN model improved by 12.4% and 25.2% compared with the winning system models of the DCASE challenge, and the error rate (ER) was reduced by 0.41 and 0.37 as well. The proposed TFFS-CRNN model has better classification performance and lower ER in polyphonic SED.
44

Voudouri, Kalliopi Artemis, Nikolaos Siomos, Konstantinos Michailidis, Nikolaos Papagiannopoulos, Lucia Mona, Carmela Cornacchia, Doina Nicolae, and Dimitris Balis. "Comparison of two automated aerosol typing methods and their application to an EARLINET station". Atmospheric Chemistry and Physics 19, no. 16 (August 29, 2019): 10961–80. http://dx.doi.org/10.5194/acp-19-10961-2019.

Abstract
Abstract. In this study we apply and compare two algorithms for the automated aerosol-type characterization of aerosol layers derived from Raman lidar measurements over the EARLINET station of Thessaloniki, Greece. Both automated aerosol-type characterization methods base their typing on lidar-derived aerosol intensive properties. The methodologies are briefly described and their application to three distinct cases is demonstrated and evaluated. The two classification schemes were then applied in automatic mode to a more extensive dataset, corresponding to ACTRIS/EARLINET (European Aerosol Research Lidar NETwork) Thessaloniki data acquired during the period 2012–2015. Seventy-one layers out of 110 (65 %) were typed by both techniques, and 56 of these 71 layers (79 %) were attributed to the same aerosol type. As shown, however, the identification rate of both typing algorithms depends on the selection of appropriate threshold criteria. Four major types of aerosols are considered in this study: Dust, Maritime, PollutedSmoke and CleanContinental. The analysis showed that the two algorithms, when applied to real atmospheric conditions, provide typing results that are in good agreement regarding the automatic characterization of PollutedSmoke, while there are some differences between the two methods regarding the characterization of Dust and CleanContinental. These disagreements are mainly attributed to differences in the definitions of the aerosol types between the two methods, regarding the intensive properties used and their ranges.
45

Shokry, Sherif, Naglaa K. Rashwan, Seham Hemdan, Ali Alrashidi, and Amr M. Wahaballa. "Characterization of Traffic Accidents Based on Long-Horizon Aggregated and Disaggregated Data". Sustainability 15, no. 2 (January 12, 2023): 1483. http://dx.doi.org/10.3390/su15021483.

Abstract
For sustainable transportation systems, modeling road traffic accidents is essential in order to formulate measures that reduce their harmful impacts on society. This study investigated the outcomes of using different datasets in traffic accident models with a low number of variables that can be easily manipulated by practitioners. Long-horizon aggregated and disaggregated road traffic accident datasets for Egyptian roads (covering five years) were used to compare the model fit for different data groups. This study analyzed the results of k-means data clustering and classified the data into groups to compare the fit of the base model (Smeed's model and different types of regression models). The results emphasized that the aggregated data were less efficient than the disaggregated data. It was found that classifying the disaggregated dataset into reasonable groups improved the model fit. These findings may help in better utilizing the available road traffic accident data to determine the best-fitting model, assisting decision-makers in choosing suitable road traffic accident prevention measures.
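The grouping step this abstract relies on can be reproduced in miniature. The sketch below is a toy one-dimensional k-means on invented accident-count values, not the study's actual data or clustering code; it shows how a disaggregated dataset separates into groups that can then be fitted separately:

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Toy 1-D k-means: return the sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute the means.
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Two well-separated groups of hypothetical accident counts:
print(kmeans_1d([1.0, 1.5, 2.0, 10.0, 10.5, 11.0], k=2))  # [1.5, 10.5]
```

After clustering, a separate regression (e.g., a Smeed-type model) would be fitted per group, which is the mechanism behind the improved fit reported for grouped disaggregated data.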
46

Rundo, Leonardo, Roberta Eufrasia Ledda, Christian di Noia, Evis Sala, Giancarlo Mauri, Gianluca Milanese, Nicola Sverzellati, et al. "A Low-Dose CT-Based Radiomic Model to Improve Characterization and Screening Recall Intervals of Indeterminate Prevalent Pulmonary Nodules". Diagnostics 11, no. 9 (September 3, 2021): 1610. http://dx.doi.org/10.3390/diagnostics11091610.

Abstract
Lung cancer (LC) is currently one of the main causes of cancer-related deaths worldwide. Low-dose computed tomography (LDCT) of the chest has been proven effective in secondary prevention (i.e., early detection) of LC by several trials. In this work, we investigated the potential impact of radiomics on indeterminate prevalent pulmonary nodule (PN) characterization and risk stratification in subjects undergoing LDCT-based LC screening. As a proof-of-concept for radiomic analyses, the first aim of our study was to assess whether indeterminate PNs could be automatically classified by an LDCT radiomic classifier as solid or sub-solid (first-level classification), and in particular for sub-solid lesions, as non-solid versus part-solid (second-level classification). The second aim of the study was to assess whether an LCDT radiomic classifier could automatically predict PN risk of malignancy, and thus optimize LDCT recall timing in screening programs. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, positive predictive value, negative predictive value, sensitivity, and specificity. The experimental results showed that an LDCT radiomic machine learning classifier can achieve excellent performance for characterization of screen-detected PNs (mean AUC of 0.89 ± 0.02 and 0.80 ± 0.18 on the blinded test dataset for the first-level and second-level classifiers, respectively), providing quantitative information to support clinical management. Our study showed that a radiomic classifier could be used to optimize LDCT recall for indeterminate PNs. According to the performance of such a classifier on the blinded test dataset, within the first 6 months, 46% of the malignant PNs and 38% of the benign ones were identified, improving early detection of LC by doubling the current detection rate of malignant nodules from 23% to 46% at a low cost of false positives. 
In conclusion, we showed the high potential of LDCT-based radiomics for improving the characterization and optimizing screening recall intervals of indeterminate PNs.
47

de Moura, Joaquim, Plácido L. Vidal, Jorge Novo, José Rouco, Manuel G. Penedo, and Marcos Ortega. "Intraretinal Fluid Pattern Characterization in Optical Coherence Tomography Images". Sensors 20, no. 7 (April 3, 2020): 2004. http://dx.doi.org/10.3390/s20072004.

Abstract
Optical Coherence Tomography (OCT) has become a relevant image modality in ophthalmological clinical practice, as it offers a detailed representation of the eye fundus. This medical imaging modality is currently one of the main means of identification and characterization of intraretinal cystoid regions, a crucial task in the diagnosis of exudative macular disease or macular edema, which are among the main causes of blindness in developed countries. This work presents an exhaustive analysis of intensity- and texture-based descriptors for their identification and classification, using a complete set of 510 texture features, three state-of-the-art feature selection strategies, and seven representative classifier strategies. The methodology validation and the analysis were performed using an image dataset of 83 OCT scans. From these images, 1609 samples were extracted from both cystoid and non-cystoid regions. The different tested configurations provided satisfactory results, reaching a mean cross-validation test accuracy of 92.69%. The most promising feature categories identified for the issue were the Gabor filters, the Histogram of Oriented Gradients (HOG), the Gray-Level Run-Length matrix (GLRL), and the Laws' texture filters (LAWS), which were consistently and considerably selected by all feature selection algorithms in the top positions of different relevance rankings.
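The evaluation protocol described here — train a classifier on extracted features and report mean cross-validation test accuracy — can be skeletonized as below. This is an illustrative scaffold with a simple nearest-centroid classifier and made-up 2-D features, not the authors' 510-feature pipeline:

```python
def train_centroid(X, y):
    """Fit a nearest-centroid classifier: one mean feature vector per class."""
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(col) for col in zip(*pts)]
            for c, pts in by_class.items()}

def predict_centroid(model, x):
    """Predict the class whose centroid is closest (squared Euclidean)."""
    return min(model, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(model[c], x)))

def kfold_accuracy(X, y, k):
    """Mean test accuracy over k interleaved folds."""
    accs = []
    for f in range(k):
        test_idx = set(range(f, len(X), k))
        Xtr = [x for i, x in enumerate(X) if i not in test_idx]
        ytr = [v for i, v in enumerate(y) if i not in test_idx]
        model = train_centroid(Xtr, ytr)
        hits = sum(predict_centroid(model, X[i]) == y[i] for i in test_idx)
        accs.append(hits / len(test_idx))
    return sum(accs) / len(accs)

# Two well-separated toy classes (e.g., cystoid vs. non-cystoid features):
X = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
y = [0, 0, 0, 1, 1, 1]
print(kfold_accuracy(X, y, 3))  # 1.0
```

In the paper this slot is filled by seven classifier strategies and three feature selectors; the cross-validation loop around them has the same shape.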
48

Munjal, Rohan, Sohaib Arif, Frank Wendler, and Olfa Kanoun. "Comparative Study of Machine-Learning Frameworks for the Elaboration of Feed-Forward Neural Networks by Varying the Complexity of Impedimetric Datasets Synthesized Using Eddy Current Sensors for the Characterization of Bi-Metallic Coins". Sensors 22, no. 4 (February 9, 2022): 1312. http://dx.doi.org/10.3390/s22041312.

Abstract
A suitable framework for the development of artificial neural networks is important because it determines the level of accuracy that can be reached for a certain dataset and increases the certainty about the reached classification results. In this paper, we conduct a comparative study of the performance of four frameworks, Keras with TensorFlow, Pytorch, TensorFlow, and Cognitive Toolkit (CNTK), for the elaboration of neural networks. The number of neurons in the hidden layer of the neural networks is varied from 8 to 64 to understand its effect on the performance metrics of the frameworks. A test dataset is synthesized using an analytical model and real impedance spectra measured by an eddy current sensor coil on EUR 2 and TRY 1 coins. The dataset has been extended by using a novel method based on an interpolation technique to create datasets with different difficulty levels, replicating the scenario of a good imitation of EUR 2 coins and investigating the limit of the prediction accuracy. It was observed that the compared frameworks have high accuracy performance for a lower level of difficulty in the dataset. As the difficulty in the dataset is raised, there was a drop in the accuracy of CNTK and Keras with TensorFlow, depending upon the number of neurons in the hidden layers. CNTK showed the overall worst accuracy performance with an increase in the difficulty level of the datasets. Therefore, the major comparison was confined to Pytorch and TensorFlow. For Pytorch and TensorFlow with 32 and 64 neurons in the hidden layers, there was only a minor drop in accuracy with an increase in the difficulty level of the dataset, and accuracy stayed above 90% until both coins were 80% closer to each other in terms of electrical and magnetic properties.
However, Pytorch with 32 neurons in the hidden layer has a reduction in model size by 70% and 16.3% and predicts the class, 73.6% and 15.6% faster in comparison to TensorFlow and Pytorch with 64 neurons.
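The model-size differences reported above follow directly from the parameter count of a single-hidden-layer feed-forward network, which grows roughly linearly in the number of hidden neurons. A back-of-the-envelope helper (the 10-input, 2-output sizes below are hypothetical, chosen only to illustrate the 8-to-64-neuron sweep):

```python
def mlp_param_count(n_in, n_hidden, n_out):
    """Weights + biases of a feed-forward net with one hidden layer."""
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

# Sweeping the hidden layer from 8 to 64 neurons for a hypothetical
# 10-feature, 2-class problem roughly octuples the model size:
for n in (8, 16, 32, 64):
    print(n, mlp_param_count(10, n, 2))
```

This is why halving the hidden layer from 64 to 32 neurons, as in the Pytorch result above, cuts model size and prediction time substantially at little accuracy cost on the easier datasets.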
49

Moraru, Luminita, Simona Moldovanu, Anisia-Luiza Culea-Florescu, Dorin Bibicu, Nilanjan Dey, Amira Salah Ashour, and Robert Simon Sherratt. "Texture Spectrum Coupled with Entropy and Homogeneity Image Features for Myocardium Muscle Characterization". Current Bioinformatics 14, no. 4 (April 10, 2019): 295–304. http://dx.doi.org/10.2174/1574893614666181220095343.

Abstract
Background: People in middle or later age often suffer from heart muscle damage due to coronary artery disease associated with myocardial infarction. In young people, genetic forms of cardiomyopathy (heart muscle disease) are the most prominent cause of myocardial disease. Objective: Accurate, early-detected information regarding the myocardial tissue structure is key to tracking the progress of several myocardial diseases. Method: The present work proposes a new method for myocardium muscle texture classification based on entropy, homogeneity, and the texture unit-based texture spectrum approaches. Entropy and homogeneity are generated in moving windows of size 3x3 and 5x5 to enhance the texture features and to create the premise of differentiating the myocardium structures. The texture is then statistically analyzed using the texture spectrum approach. Texture classification is achieved with a fuzzy c-means descriptive classifier. The proposed method has been tested on a dataset of 80 echocardiographic ultrasound images, in both short-axis (SAX) and long-axis apical two-chamber (LAX) view representations, for normal and infarct pathologies. Results: The noise sensitivity of the fuzzy c-means classifier was overcome by using the image features. The results established that the entropy-based features provided superior clustering results compared to homogeneity. Conclusion: The entropy image feature yields a lower spread of the data in the clusters of healthy subjects and myocardial infarction. Also, the Euclidean distance between the cluster centroids has higher values for both LAX and SAX views for entropy images.
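The entropy feature described here can be sketched directly: the Shannon entropy of the gray-level distribution inside a 3x3 (or 5x5) window centered on each pixel. This minimal version, with borders clipped and toy integer images rather than echocardiographic data, shows the mechanism:

```python
import math

def local_entropy(img, win=3):
    """Shannon entropy (bits) of gray levels in a win x win moving window."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            counts = {}
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:  # window clipped at borders
                        v = img[yy][xx]
                        counts[v] = counts.get(v, 0) + 1
            n = sum(counts.values())
            out[y][x] = -sum(c / n * math.log2(c / n)
                             for c in counts.values())
    return out

# Each pixel of a 2x2 checkerboard sees two gray levels equally often: 1 bit.
print(local_entropy([[0, 1], [1, 0]])[0][0])  # 1.0
```

Flat (homogeneous) regions score near zero while textured regions score higher, which is why the entropy map separates healthy from infarcted myocardium better than homogeneity in the study above.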
50

Jiang, Yueyi, Yunfan Jiang, Liu Leqi, and Piotr Winkielman. "Many Ways to Be Lonely: Fine-Grained Characterization of Loneliness and Its Potential Changes in COVID-19". Proceedings of the International AAAI Conference on Web and Social Media 16 (May 31, 2022): 405–16. http://dx.doi.org/10.1609/icwsm.v16i1.19302.

Abstract
Loneliness has been associated with negative outcomes for physical and mental health. Understanding how people express and cope with various forms of loneliness is critical for early screening and targeted interventions to reduce loneliness, particularly among vulnerable groups such as young adults. To examine how different forms of loneliness and coping strategies manifest in loneliness self-disclosure, we built a dataset, FIG-Loneliness (FIne-Grained Loneliness), using Reddit posts in two young adult-focused forums and two loneliness-related forums consisting of a diverse age group. We provided annotations by trained human annotators for binary and fine-grained loneliness classifications of the posts. Trained on FIG-Loneliness, two BERT-based models were used to understand loneliness forms and authors' coping strategies in these forums. Our binary loneliness classification achieved an accuracy above 97%, and fine-grained loneliness category classification reached an average accuracy of 77% across all labeled categories. With FIG-Loneliness and model predictions, we found that loneliness expressions in the young adult-related forums were distinct from those in other forums. Posts in young adult-focused forums were more likely to express concerns pertaining to peer relationships and were potentially more sensitive to the geographical isolation imposed by the COVID-19 pandemic lockdown. We also showed that different forms of loneliness are associated with different coping strategies.