
Journal articles on the topic "Kernel mean embedding"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


See the 44 best journal articles for research on the topic "Kernel mean embedding".

Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, when it is available in the metadata.

Browse journal articles from many different fields of science and compile an accurate bibliography.

1

Jorgensen, Palle E. T., Myung-Sin Song, and James Tian. "Conditional mean embedding and optimal feature selection via positive definite kernels". Opuscula Mathematica 44, no. 1 (2024): 79–103. http://dx.doi.org/10.7494/opmath.2024.44.1.79.

Abstract:
Motivated by applications, we consider new operator-theoretic approaches to conditional mean embedding (CME). Our present results combine a spectral analysis-based optimization scheme with the use of kernels, stochastic processes, and constructive learning algorithms. For initially given non-linear data, we consider optimization-based feature selections. This entails the use of convex sets of kernels in a construction of optimal feature selection via regression algorithms from learning models. Thus, with initial inputs of training data (for a suitable learning algorithm), each choice of a kernel \(K\) in turn yields a variety of Hilbert spaces and realizations of features. A novel aspect of our work is the inclusion of a secondary optimization process over a specified convex set of positive definite kernels, resulting in the determination of "optimal" feature representations.
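For orientation, the CME at the center of this entry has a standard sample-based estimator: given pairs (x_i, y_i), the embedding of P(Y | X = x) is approximated as a weighted combination of the k(y_i, ·), with weights obtained from a regularized kernel linear system. The sketch below is illustrative only (it is not this paper's spectral optimization scheme); the RBF kernel, bandwidth gamma, and regularizer lam are assumed choices.

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    """RBF Gram matrix: k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def cme_weights(X, x_query, lam=1e-3, gamma=1.0):
    """Weights beta(x) such that mu_{Y|X=x} is approximated by
    sum_i beta_i(x) k(y_i, .), via the standard regularized estimator
    beta(x) = (K_X + n * lam * I)^{-1} k_X(x)."""
    n = X.shape[0]
    K = rbf_gram(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), rbf_gram(X, x_query, gamma))

# Toy check: E[Y | X = 1] for Y = sin(X) + noise should be near sin(1).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
Y = np.sin(X) + 0.1 * rng.standard_normal((200, 1))
beta = cme_weights(X, np.array([[1.0]]))
print("estimated E[Y | X=1]:", float(beta.T @ Y))  # roughly 0.84
```

Applying the same weights to the feature maps k(y_i, ·) rather than to Y itself yields the embedded conditional distribution, which is what makes CME useful beyond conditional means.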
2

Muandet, Krikamol, Kenji Fukumizu, Bharath Sriperumbudur, and Bernhard Schölkopf. "Kernel Mean Embedding of Distributions: A Review and Beyond". Foundations and Trends® in Machine Learning 10, no. 1–2 (2017): 1–141. http://dx.doi.org/10.1561/2200000060.

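Since this review is the standard reference for the topic of the whole list, a compact reminder of its two central objects may help: the empirical kernel mean embedding of a sample (the average of feature maps) and the maximum mean discrepancy (MMD), the RKHS distance between two embeddings. The following sketch is a generic illustration, not code from the review; the RBF kernel and bandwidth gamma are assumptions.

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    """RBF Gram matrix: k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2_biased(X, Y, gamma=1.0):
    """Biased estimate of MMD^2(P, Q) = ||mu_P - mu_Q||_H^2, where
    mu_P = E[k(x, .)] is the kernel mean embedding; the squared RKHS
    norm expands into means of Gram-matrix entries."""
    return (rbf_gram(X, X, gamma).mean()
            - 2 * rbf_gram(X, Y, gamma).mean()
            + rbf_gram(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
P1 = rng.standard_normal((500, 2))
P2 = rng.standard_normal((500, 2))        # second sample, same distribution
Q = rng.standard_normal((500, 2)) + 0.5   # shifted distribution
print(mmd2_biased(P1, P2))                # near 0
print(mmd2_biased(P1, Q))                 # clearly positive
```

With a characteristic kernel such as the RBF, MMD is zero exactly when the two distributions coincide, which is what licenses its use as a test statistic in several entries below.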
3

Van Hauwermeiren, Daan, Michiel Stock, Thomas De Beer, and Ingmar Nopens. "Predicting Pharmaceutical Particle Size Distributions Using Kernel Mean Embedding". Pharmaceutics 12, no. 3 (March 16, 2020): 271. http://dx.doi.org/10.3390/pharmaceutics12030271.

Abstract:
In the pharmaceutical industry, the transition to continuous manufacturing of solid dosage forms is adopted by more and more companies. For these continuous processes, high-quality process models are needed. In pharmaceutical wet granulation, a unit operation in the ConsiGma™-25 continuous powder-to-tablet system (GEA Pharma systems, Collette, Wommelgem, Belgium), the product under study presents itself as a collection of particles that differ in shape and size. The measurement of this collection results in a particle size distribution. However, the theoretical basis to describe the physical phenomena leading to changes in this particle size distribution is lacking. It is essential to understand how the particle size distribution changes as a function of the unit operation’s process settings, as it has a profound effect on the behavior of the fluid bed dryer. Therefore, we suggest a data-driven modeling framework that links the machine settings of the wet granulation unit operation and the output distribution of granules. We do this without making any assumptions on the nature of the distributions under study. A simulation of the granule size distribution could act as a soft sensor when in-line measurements are challenging to perform. The method of this work is a two-step procedure: first, the measured distributions are transformed into a high-dimensional feature space, where the relation between the machine settings and the distributions can be learnt. Second, the inverse transformation is performed, allowing an interpretation of the results in the original measurement space. Further, a comparison is made with previous work, which employs a more mechanistic framework for describing the granules. A reliable prediction of the granule size is vital in the assurance of quality in the production line, and is needed in the assessment of upstream (feeding) and downstream (drying, milling, and tableting) issues. Now that a validated data-driven framework for predicting pharmaceutical particle size distributions is available, it can be applied in settings such as model-based experimental design and, due to its fast computation, there is potential in real-time model predictive control.
4

Xu, Bi-Cun, Kai Ming Ting, and Yuan Jiang. "Isolation Graph Kernel". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10487–95. http://dx.doi.org/10.1609/aaai.v35i12.17255.

Abstract:
A recent Wasserstein Weisfeiler-Lehman (WWL) Graph Kernel has a distinctive feature: it represents the distribution of Weisfeiler-Lehman (WL)-embedded node vectors of a graph in a histogram, which enables a dissimilarity measurement of two graphs using Wasserstein distance. It has been shown to produce better classification accuracy than other graph kernels that do not employ such a distribution and the Wasserstein distance. This paper introduces an alternative called Isolation Graph Kernel (IGK) that measures the similarity between two attributed graphs. IGK is unique in two aspects among existing graph kernels. First, it is the first graph kernel which employs a distributional kernel in the framework of kernel mean embedding. This avoids the need to use the computationally expensive Wasserstein distance. Second, it is the first graph kernel that incorporates the distribution of attributed nodes (ignoring the edges) in a dataset of graphs. We reveal that this distributional information, extracted in the form of a feature map of Isolation Kernel, is crucial in building an efficient and effective graph kernel. We show that IGK is better than WWL in terms of classification accuracy, and it runs orders of magnitude faster on large datasets when used in the context of SVM classification.
5

Rustamov, Raif M., and James T. Klosowski. "Kernel mean embedding based hypothesis tests for comparing spatial point patterns". Spatial Statistics 38 (August 2020): 100459. http://dx.doi.org/10.1016/j.spasta.2020.100459.

6

Hou, Boya, Sina Sanjari, Nathan Dahlin, and Subhonmesh Bose. "Compressed Decentralized Learning of Conditional Mean Embedding Operators in Reproducing Kernel Hilbert Spaces". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7902–9. http://dx.doi.org/10.1609/aaai.v37i7.25956.

Abstract:
Conditional mean embedding (CME) operators encode conditional probability densities within a reproducing kernel Hilbert space (RKHS). In this paper, we present a decentralized algorithm for a collection of agents to cooperatively approximate a CME over a network. Communication constraints limit the agents from sending all data to their neighbors; we only allow sparse representations of covariance operators to be exchanged among agents, compositions of which define the CME. Using a coherence-based compression scheme, we present a consensus-type algorithm that preserves the average of the approximations of the covariance operators across the network. We theoretically prove that the iterative dynamics in the RKHS are stable. We then empirically study our algorithm to estimate CMEs to learn spectra of Koopman operators for Markovian dynamical systems and to execute approximate value iteration for Markov decision processes (MDPs).
7

Segera, Davies, Mwangi Mbuthia, and Abraham Nyete. "Particle Swarm Optimized Hybrid Kernel-Based Multiclass Support Vector Machine for Microarray Cancer Data Analysis". BioMed Research International 2019 (December 16, 2019): 1–11. http://dx.doi.org/10.1155/2019/4085725.

Abstract:
Determining an optimal decision model is an important but difficult combinatorial task in imbalanced microarray-based cancer classification. Though the multiclass support vector machine (MCSVM) has already made an important contribution in this field, its performance depends solely on three aspects: the penalty factor C, the type of kernel, and its parameters. To improve the performance of this classifier in microarray-based cancer analysis, this paper proposes a PSO-PCA-LGP-MCSVM model that is based on particle swarm optimization (PSO), principal component analysis (PCA), and the multiclass support vector machine (MCSVM). The MCSVM is based on a hybrid linear-Gaussian-polynomial (LGP) kernel that combines the advantages of three standard kernels (linear, Gaussian, and polynomial) in a novel manner, where the linear kernel is linearly combined with the Gaussian kernel embedding the polynomial kernel. Further, this paper proves that the LGP kernel satisfies the requirements of a valid kernel. To reveal the effectiveness of our model, several experiments were conducted and the obtained results were compared between our model and three other single-kernel-based models, namely PSO-PCA-L-MCSVM (utilizing a linear kernel), PSO-PCA-G-MCSVM (utilizing a Gaussian kernel), and PSO-PCA-P-MCSVM (utilizing a polynomial kernel). For the comparison, two binary and two multiclass imbalanced standard microarray datasets were used. Experimental results in terms of three extended assessment metrics (F-score, G-mean, and Accuracy) reveal the superior global feature extraction, prediction, and learning abilities of this model compared with the three single-kernel-based models.
8

Wang, Yufan, Zijing Wang, Kai Ming Ting, and Yuanyi Shang. "A Principled Distributional Approach to Trajectory Similarity Measurement and its Application to Anomaly Detection". Journal of Artificial Intelligence Research 79 (March 13, 2024): 865–93. http://dx.doi.org/10.1613/jair.1.15849.

Abstract:
This paper aims to solve two enduring challenges in existing trajectory similarity measures: computational inefficiency and the absence of the ‘uniqueness’ property that should be guaranteed in a distance function: dist(X, Y) = 0 if and only if X = Y, where X and Y are two trajectories. In this work, we present a novel approach utilizing a distributional kernel for trajectory representation and similarity measurement, based on the kernel mean embedding framework. It is the very first time a distributional kernel has been used for trajectory representation and similarity measurement. Our method does not rely on point-to-point distances, which are used in most existing distances for trajectories. Unlike prevalent learning and deep learning approaches, our method requires no learning. We show the generality of this new approach in anomalous trajectory and sub-trajectory detection. We identify that the distributional kernel has (i) a data-dependent property and the ‘uniqueness’ property, which are the key factors that lead to its superior task-specific performance, and (ii) a runtime that is orders of magnitude faster than existing distance measures.
9

Brandman, David M., Michael C. Burkhart, Jessica Kelemen, Brian Franco, Matthew T. Harrison, and Leigh R. Hochberg. "Robust Closed-Loop Control of a Cursor in a Person with Tetraplegia using Gaussian Process Regression". Neural Computation 30, no. 11 (November 2018): 2986–3008. http://dx.doi.org/10.1162/neco_a_01129.

Abstract:
Intracortical brain computer interfaces can enable individuals with paralysis to control external devices through voluntarily modulated brain activity. Decoding quality has been previously shown to degrade with signal nonstationarities—specifically, the changes in the statistics of the data between training and testing data sets. This includes changes to the neural tuning profiles and baseline shifts in firing rates of recorded neurons, as well as nonphysiological noise. While progress has been made toward providing long-term user control via decoder recalibration, relatively little work has been dedicated to making the decoding algorithm more resilient to signal nonstationarities. Here, we describe how principled kernel selection with gaussian process regression can be used within a Bayesian filtering framework to mitigate the effects of commonly encountered nonstationarities. Given a supervised training set of (neural features, intention to move in a direction)-pairs, we use gaussian process regression to predict the intention given the neural data. We apply kernel embedding for each neural feature with the standard radial basis function. The multiple kernels are then summed together across each neural dimension, which allows the kernel to effectively ignore large differences that occur only in a single feature. The summed kernel is used for real-time predictions of the posterior mean and variance under a gaussian process framework. The predictions are then filtered using the discriminative Kalman filter to produce an estimate of the neural intention given the history of neural data. We refer to the multiple kernel approach combined with the discriminative Kalman filter as the MK-DKF. We found that the MK-DKF decoder was more resilient to nonstationarities frequently encountered in real-world settings yet provided similar performance to the currently used Kalman decoder. These results demonstrate a method by which neural decoding can be made more resistant to nonstationarities.
10

Ali, Sarwan, and Murray Patterson. "Improving ISOMAP Efficiency with RKS: A Comparative Study with t-Distributed Stochastic Neighbor Embedding on Protein Sequences". J 6, no. 4 (October 31, 2023): 579–91. http://dx.doi.org/10.3390/j6040038.

Abstract:
Data visualization plays a crucial role in gaining insights from high-dimensional datasets. ISOMAP is a popular algorithm that maps high-dimensional data into a lower-dimensional space while preserving the underlying geometric structure. However, ISOMAP can be computationally expensive, especially for large datasets, due to the computation of the pairwise distances between data points. The motivation behind this study is to improve efficiency by leveraging an approximate method, which is based on random kitchen sinks (RKS). This approach provides a faster way to compute the kernel matrix. Using RKS significantly reduces the computational complexity of ISOMAP while still obtaining a meaningful low-dimensional representation of the data. We compare the performance of the approximate ISOMAP approach using RKS with the traditional t-SNE algorithm. The comparison involves computing the distance matrix using the original high-dimensional data and the low-dimensional data computed from both t-SNE and ISOMAP. The quality of the low-dimensional embeddings is measured using several metrics, including mean squared error (MSE), mean absolute error (MAE), and explained variance score (EVS). Additionally, the runtime of each algorithm is recorded to assess its computational efficiency. The comparison is conducted on a set of protein sequences, used in many bioinformatics tasks. We use three different embedding methods based on k-mers, minimizers, and position weight matrix (PWM) to capture various aspects of the underlying structure and the relationships between the protein sequences. By comparing different embeddings and by evaluating the effectiveness of the approximate ISOMAP approach using RKS and comparing it against t-SNE, we provide insights on the efficacy of our proposed approach. Our goal is to retain the quality of the low-dimensional embeddings while improving the computational performance.
11

Zhang, Hansong, Shikun Li, Pengju Wang, Dan Zeng, and Shiming Ge. "M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 9314–22. http://dx.doi.org/10.1609/aaai.v38i8.28784.

Abstract:
Training state-of-the-art (SOTA) deep models often requires extensive data, resulting in substantial training and storage costs. To address these challenges, dataset condensation has been developed to learn a small synthetic set that preserves essential information from the original large-scale dataset. Nowadays, optimization-oriented methods have been the primary method in the field of dataset condensation for achieving SOTA results. However, the bi-level optimization process hinders the practical application of such methods to realistic and larger datasets. To enhance condensation efficiency, previous works proposed Distribution-Matching (DM) as an alternative, which significantly reduces the condensation cost. Nonetheless, current DM-based methods still yield results inferior to those of SOTA optimization-oriented methods. In this paper, we argue that existing DM-based methods overlook the higher-order alignment of the distributions, which may lead to sub-optimal matching results. Inspired by this, we present a novel DM-based method named M3D for dataset condensation by Minimizing the Maximum Mean Discrepancy between feature representations of the synthetic and real images. By embedding their distributions in a reproducing kernel Hilbert space, we align all orders of moments of the distributions of real and synthetic images, resulting in a more generalized condensed set. Notably, our method even surpasses the SOTA optimization-oriented method IDC on the high-resolution ImageNet dataset. Extensive analysis is conducted to verify the effectiveness of the proposed method. Source codes are available at https://github.com/Hansong-Zhang/M3D.
12

Solodukha, Roman. "Statistical Steganalysis of Photorealistic Images Using Gradient Paths". Voprosy kiberbezopasnosti, no. 1(47) (2022): 26–36. http://dx.doi.org/10.21681/2311-3456-2022-1-26-36.

Abstract:
The purpose of the article is to experimentally test the efficiency of a feature vector based on gradient paths in the spatial domain of an image. The research method is a comparison of steganalytical feature vectors based on the mean square error and the coefficient of determination obtained using SVM regression in Matlab. The dataset is formed by automating freeware steganographic programs that implement embedding into the spatial area of an image with sequential and pseudorandom selection of pixels for embedding. Results of the study: the optimal parameters of the algorithm for seeking gradient paths, from the point of view of embedding detection, are obtained experimentally. The results of applying machine learning models are obtained and analyzed, and the optimal scale of the SVM regression kernel is determined. The computation times for obtaining feature vectors, training models, and recognizing containers are calculated. It is shown experimentally that the gradient-paths feature vector is expedient for solving problems where the detection accuracy must be varied depending on the operating capacity of the system, because the proposed feature vector allows the dimension/accuracy ratio to be tuned. Also, by experiment, a composite 20D vector is assembled from several one-dimensional quantitative steganodetectors and the gradient-paths feature vector. The effectiveness of the resulting vector is comparable to that of the 686D feature vector SPAM.
13

Ting, Kai Ming, Zongyou Liu, Hang Zhang, and Ye Zhu. "A new distributional treatment for time series and an anomaly detection investigation". Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2321–33. http://dx.doi.org/10.14778/3551793.3551796.

Abstract:
Time series are traditionally treated with two main approaches, i.e., the time domain approach and the frequency domain approach. These approaches must rely on a sliding window so that time-shifted versions of a periodic subsequence can be measured to be similar. Coupled with the use of a root point-to-point measure, existing methods often have quadratic time complexity. We offer a third approach: the R domain approach. It begins with an insight that subsequences in a periodic time series can be treated as sets of independent and identically distributed (iid) points generated from an unknown distribution in R. This R domain treatment enables two new possibilities: (a) the similarity between two subsequences can be computed using a distributional measure such as Wasserstein distance (WD), kernel mean embedding or Isolation Distributional Kernel (IDK); and (b) these distributional measures become non-sliding-window-based. Together, they offer an alternative that has more effective similarity measurements and runs significantly faster than the point-to-point and sliding-window-based measures. Our empirical evaluation shows that IDK and WD are effective distributional measures for time series; and IDK-based detectors have better detection accuracy than existing sliding-window-based detectors, and they run faster with linear time complexity.
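To make the entry's "R domain" idea concrete: a subsequence is read as an unordered iid sample of values in R, so two windows are compared through a distributional measure on their embeddings rather than point-to-point. The sketch below is a hedged illustration using a plain RBF-kernel MMD in place of the paper's IDK or Wasserstein distance; the window length and gamma are assumed values.

```python
import numpy as np

def rbf_gram(a, b, gamma=4.0):
    """Gram matrix for 1-D samples under an RBF kernel."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def window_mmd2(u, v, gamma=4.0):
    """MMD^2 between two subsequences viewed as iid samples on R.
    Order inside a window is ignored, so time-shifted versions of a
    periodic pattern embed to nearly the same point."""
    return (rbf_gram(u, u, gamma).mean()
            - 2 * rbf_gram(u, v, gamma).mean()
            + rbf_gram(v, v, gamma).mean())

t = np.linspace(0, 8 * np.pi, 800)
series = np.sin(t)
w1, w2 = series[0:200], series[37:237]   # time-shifted full-period windows
anom = 0.2 * series[0:200] + 1.0         # damped, offset window
print(window_mmd2(w1, w2))               # small: same value distribution
print(window_mmd2(w1, anom))             # large: anomalous window
```

This is the sense in which the distributional treatment is non-sliding-window-based: the shift between w1 and w2 barely changes their embeddings.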
14

Qian, Hangwei, Sinno Jialin Pan, and Chunyan Miao. "Distribution-Based Semi-Supervised Learning for Activity Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7699–706. http://dx.doi.org/10.1609/aaai.v33i01.33017699.

Abstract:
Supervised learning methods have been widely applied to activity recognition. The prevalent success of existing methods, however, has two crucial prerequisites: proper feature extraction and sufficient labeled training data. The former is important to differentiate activities, while the latter is crucial to build a precise learning model. These two prerequisites have become bottlenecks to make existing methods more practical. Most existing feature extraction methods highly depend on domain knowledge, while labeled data requires intensive human annotation effort. Therefore, in this paper, we propose a novel method, named Distribution-based Semi-Supervised Learning, to tackle the aforementioned limitations. The proposed method is capable of automatically extracting powerful features with no domain knowledge required, meanwhile alleviating the heavy annotation effort through semi-supervised learning. Specifically, we treat the data stream of sensor readings received in a period as a distribution, and map all training distributions, labeled and unlabeled alike, into a reproducing kernel Hilbert space (RKHS) using the kernel mean embedding technique. The RKHS is further altered by exploiting the underlying geometry structure of the unlabeled distributions. Finally, in the altered RKHS, a classifier is trained with the labeled distributions. We conduct extensive experiments on three public datasets to verify the effectiveness of our method compared with state-of-the-art baselines.
15

Huang, Shimeng, Elisabeth Ailer, Niki Kilbertus, and Niklas Pfister. "Supervised learning and model analysis with compositional data". PLOS Computational Biology 19, no. 6 (June 30, 2023): e1011240. http://dx.doi.org/10.1371/journal.pcbi.1011240.

Abstract:
Supervised learning, such as regression and classification, is an essential tool for analyzing modern high-throughput sequencing data, for example in microbiome research. However, due to the compositionality and sparsity, existing techniques are often inadequate. Either they rely on extensions of the linear log-contrast model (which adjust for compositionality but cannot account for complex signals or sparsity) or they are based on black-box machine learning methods (which may capture useful signals, but lack interpretability due to the compositionality). We propose KernelBiome, a kernel-based nonparametric regression and classification framework for compositional data. It is tailored to sparse compositional data and is able to incorporate prior knowledge, such as phylogenetic structure. KernelBiome captures complex signals, including in the zero-structure, while automatically adapting model complexity. We demonstrate on par or improved predictive performance compared with state-of-the-art machine learning methods on 33 publicly available microbiome datasets. Additionally, our framework provides two key advantages: (i) We propose two novel quantities to interpret contributions of individual components and prove that they consistently estimate average perturbation effects of the conditional mean, extending the interpretability of linear log-contrast coefficients to nonparametric models. (ii) We show that the connection between kernels and distances aids interpretability and provides a data-driven embedding that can augment further analysis. KernelBiome is available as an open-source Python package on PyPI and at https://github.com/shimenghuang/KernelBiome.
16

Bie, Mei, Huan Xu, Quanle Liu, Yan Gao, Kai Song, and Xiangjiu Che. "DA-FER: Domain Adaptive Facial Expression Recognition". Applied Sciences 13, no. 10 (May 22, 2023): 6314. http://dx.doi.org/10.3390/app13106314.

Abstract:
Facial expression recognition (FER) is an important field in computer vision with many practical applications. However, one of the challenges in FER is dealing with small sample data, where the number of samples available for training machine learning algorithms is limited. To address this issue, a domain adaptive learning strategy is proposed in this paper. The approach uses a public dataset with sufficient samples as the source domain and a small sample dataset as the target domain. Furthermore, the maximum mean discrepancy with kernel mean embedding is utilized to reduce the disparity between the source and target domain data samples, thereby enhancing expression recognition accuracy. The proposed Domain Adaptive Facial Expression Recognition (DA-FER) method integrates the SSPP module and Slice module to fuse expression features of different dimensions. Moreover, this method retains the regions of interest of the five senses to accomplish more discriminative feature extraction and improve the transfer learning capability of the network. Experimental results indicate that the proposed method can effectively enhance the performance of expression recognition. Specifically, when the self-collected Selfie-Expression dataset is used as the target domain, and the public datasets RAF-DB and Fer2013 are used as the source domain, the performance of expression recognition is improved to varying degrees, which demonstrates the effectiveness of this domain adaptive method.
17

Ji, Bo-Ya, Liang-Rui Pan, Ji-Ren Zhou, Zhu-Hong You, and Shao-Liang Peng. "SMMDA: Predicting miRNA-Disease Associations by Incorporating Multiple Similarity Profiles and a Novel Disease Representation". Biology 11, no. 5 (May 20, 2022): 777. http://dx.doi.org/10.3390/biology11050777.

Abstract:
Increasing evidence has suggested that microRNAs (miRNAs) are significant in research on human diseases. Predicting possible associations between miRNAs and diseases would provide new perspectives on disease diagnosis, pathogenesis, and gene therapy. However, considering the intrinsically time-consuming and expensive nature of traditional in vitro studies, there is an urgent need for a computational approach that would allow researchers to identify potential associations between miRNAs and diseases for further research. In this paper, we presented a novel computational method called SMMDA to predict potential miRNA-disease associations. In particular, SMMDA first utilized a new disease representation method (MeSHHeading2vec) based on the network embedding algorithm and then fused it with Gaussian interaction profile kernel similarity information of miRNAs and diseases, disease semantic similarity, and miRNA functional similarity. Secondly, SMMDA utilized a deep auto-coder network to transform the original features further to achieve a better feature representation. Finally, the ensemble learning model, XGBoost, was used as the underlying training and prediction method for SMMDA. In the results, SMMDA acquired a mean accuracy of 86.68% with a standard deviation of 0.42% and a mean AUC of 94.07% with a standard deviation of 0.23%, outperforming many previous works. Moreover, we also compared the predictive ability of SMMDA with different classifiers and different feature descriptors. In case studies of three common human diseases, 47 (esophageal neoplasms), 48 (breast neoplasms), and 48 (colon neoplasms) of the top 50 candidate miRNAs were successfully verified by two other databases. The experimental results proved that SMMDA has a reliable ability to predict potential miRNA-disease associations. Therefore, it is anticipated that SMMDA could be an effective tool for biomedical researchers.
18

De Cannière, Hélène, Federico Corradi, Christophe J. P. Smeets, Melanie Schoutteten, Carolina Varon, Chris Van Hoof, Sabine Van Huffel, Willemijn Groenendaal, and Pieter Vandervoort. "Wearable Monitoring and Interpretable Machine Learning Can Objectively Track Progression in Patients during Cardiac Rehabilitation". Sensors 20, no. 12 (June 26, 2020): 3601. http://dx.doi.org/10.3390/s20123601.

Abstract:
Cardiovascular diseases (CVD) are often characterized by their multifactorial complexity. This makes remote monitoring and ambulatory cardiac rehabilitation (CR) therapy challenging. Current wearable multimodal devices enable remote monitoring. Machine learning (ML) and artificial intelligence (AI) can help in tackling multifaceted datasets. However, for clinical acceptance, easy interpretability of the AI models is crucial. The goal of the present study was to investigate whether a multi-parameter sensor could be used during a standardized activity test to interpret functional capacity in the longitudinal follow-up of CR patients. A total of 129 patients were followed for 3 months during CR using 6-min walking tests (6MWT) equipped with a wearable ECG and accelerometer device. Functional capacity was assessed based on 6MWT distance (6MWD). Linear and nonlinear interpretable models were explored to predict 6MWD. The t-distributed stochastic neighbor embedding (t-SNE) technique was exploited to embed and visualize high-dimensional data. The performance of support vector machine (SVM) models, combining different features and using different kernel types, to predict functional capacity was evaluated. The SVM model, using chronotropic response and effort as input features, showed a mean absolute error of 42.8 m (±36.8 m). The 3D maps derived using the t-SNE technique visualized the relationship between sensor-derived biomarkers and functional capacity, which enables tracking of the evolution of patients throughout the CR program. The current study showed that wearable monitoring combined with interpretable ML can objectively track clinical progression in a CR population. These results pave the road towards ambulatory CR.
19

Harris, Matthew. "KLRfome - Kernel Logistic Regression on Focal Mean Embeddings". Journal of Open Source Software 4, no. 35 (March 19, 2019): 722. http://dx.doi.org/10.21105/joss.00722.

20

Hempel, John. "One-relator surface groups". Mathematical Proceedings of the Cambridge Philosophical Society 108, no. 3 (November 1990): 467–74. http://dx.doi.org/10.1017/s030500410006936x.

Abstract:
For X a subset of a group G, the smallest normal subgroup of G which contains X is called the normal closure of X and is denoted by ngp (X; G) or simply by ngp (X) if there is no possibility of ambiguity. By a surface group we mean the fundamental group of a compact surface. We are interested in determining when a normal subgroup of a surface group contains a simple loop – the homotopy class of an embedding of S1 in the surface, or more generally, a power of a simple loop. This is significant to the study of 3-manifolds since a Heegaard splitting of a 3-manifold is reducible (cf. [2]) if and only if the kernel of the corresponding splitting homomorphism contains a simple loop. We give an answer in the case that the normal subgroup is the normal closure ngp (α) of a single element α: if ngp (α) contains a (power of a) simple loop β, then α is homotopic to a (power of a) simple loop and β±1 is homotopic either to (a power of) α or to the commutator [α, γ] of α with some simple loop γ meeting α transversely in a single point. This implies that if α is not homotopic to a power of a simple loop, then the quotient map π1(S) → π1(S)/ngp (α) does not factor through a group with more than one end. In the process we show that π1(S)/ngp (α) is locally indicable if and only if α is not a proper power and that α always lifts to a simple loop in the covering space Sα of S corresponding to ngp (α). We also obtain some estimates on the minimal number of double points in certain homotopy classes of loops.
21

Dong, Alice X. D., Jennifer S. K. Chan, and Gareth W. Peters. "Risk Margin Quantile Function via Parametric and Non-Parametric Bayesian Approaches". ASTIN Bulletin 45, no. 3 (July 9, 2015): 503–50. http://dx.doi.org/10.1017/asb.2015.8.

Abstract:
We develop quantile functions from regression models in order to derive risk margin and to evaluate capital in non-life insurance applications. By utilizing the entire range of conditional quantile functions, especially higher quantile levels, we detail how quantile regression is capable of providing an accurate estimation of risk margin and an overview of implied capital based on the historical volatility of a general insurer's loss portfolio. Two modeling frameworks are considered based around parametric and non-parametric regression models which we develop specifically in this insurance setting. In the parametric framework, quantile functions are derived using several distributions including the flexible generalized beta (GB2) distribution family, asymmetric Laplace (AL) distribution and power-Pareto (PP) distribution. In these parametric model based quantile regressions, we detail two basic formulations. The first involves embedding the quantile regression loss function from the nonparametric setting into the argument of the kernel of a parametric data likelihood model; this is well known to naturally lead to the AL parametric model case. The second formulation we utilize in the parametric setting adopts an alternative quantile regression formulation in which we assume a structural expression for the regression trend and volatility functions which act to modify a base quantile function in order to produce the conditional data quantile function. This second approach allows a range of flexible parametric models to be considered with different tail behaviors. We demonstrate how to perform estimation of the resulting parametric models under a Bayesian regression framework. To achieve this, we design Markov chain Monte Carlo (MCMC) sampling strategies for the resulting Bayesian posterior quantile regression models. In the non-parametric framework, we construct quantile functions by minimizing an asymmetrically weighted loss function and estimate the parameters under the AL proxy distribution to resemble the minimization process. This quantile regression model is contrasted to the parametric AL mean regression model and both are expressed as a scale mixture of uniform distributions to facilitate efficient implementation. The models are extended to adopt dynamic mean, variance and skewness and applied to analyze two real loss reserve data sets to perform inference and discuss interesting features of quantile regression for risk margin calculations.
22

Saito, Shota. "Hypergraph Modeling via Spectral Embedding Connection: Hypergraph Cut, Weighted Kernel k-Means, and Heat Kernel". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 8141–49. http://dx.doi.org/10.1609/aaai.v36i7.20787.

Abstract:
We propose a theoretical framework of multi-way similarity to model real-valued data as hypergraphs for clustering via spectral embedding. For graph-cut-based spectral clustering, it is common to model real-valued data as a graph by modeling pairwise similarities using a kernel function. This is because the kernel function has a theoretical connection to the graph cut. For problems where multi-way similarities are more suitable than pairwise ones, it is natural to model the data as a hypergraph, which is a generalization of a graph. However, although the hypergraph cut is well-studied, no hypergraph-cut-based framework has yet been established to model multi-way similarity. In this paper, we formulate multi-way similarities by exploiting the theoretical foundation of the kernel function. We show a theoretical connection between our formulation and the hypergraph cut in two ways, generalizing both weighted kernel k-means and the heat kernel, by which we justify our formulation. We also provide a fast algorithm for spectral clustering. Our algorithm empirically shows better performance than existing graph and other heuristic modeling methods.
23

Zang, Xian, and Kil To Chong. "Embedding Global Optimization and Kernelization into Fuzzy C-Means Clustering for Consonant/Vowel Segmentation". Applied Mechanics and Materials 419 (October 2013): 814–19. http://dx.doi.org/10.4028/www.scientific.net/amm.419.814.

Abstract:
This paper proposes a novel clustering algorithm named global kernel fuzzy c-means (GK-FCM) to segment speech into small non-overlapping blocks for consonant/vowel segmentation. This algorithm is realized by embedding global optimization and kernelization into the classical fuzzy c-means clustering algorithm. It proceeds in an incremental way, attempting to optimally add a new cluster center at each stage through kernel-based fuzzy c-means. By solving all the intermediate problems, the final near-optimal solution is determined in a deterministic way. This algorithm overcomes the well-known shortcomings of fuzzy c-means and improves the clustering accuracy. Simulation results demonstrate the effectiveness of the proposed method in consonant/vowel segmentation.
24

Kanagawa, Motonobu, Yu Nishiyama, Arthur Gretton, and Kenji Fukumizu. "Filtering with State-Observation Examples via Kernel Monte Carlo Filter". Neural Computation 28, no. 2 (February 2016): 382–444. http://dx.doi.org/10.1162/neco_a_00806.

Abstract:
This letter addresses the problem of filtering with a state-space model. Standard approaches for filtering assume that a probabilistic model for observations (i.e., the observation model) is given explicitly or at least parametrically. We consider a setting where this assumption is not satisfied; we assume that the knowledge of the observation model is provided only by examples of state-observation pairs. This setting is important and appears when state variables are defined as quantities that are very different from the observations. We propose kernel Monte Carlo filter, a novel filtering method that is focused on this setting. Our approach is based on the framework of kernel mean embeddings, which enables nonparametric posterior inference using the state-observation examples. The proposed method represents state distributions as weighted samples, propagates these samples by sampling, estimates the state posteriors by kernel Bayes’ rule, and resamples by kernel herding. In particular, the sampling and resampling procedures are novel in being expressed using kernel mean embeddings, so we theoretically analyze their behaviors. We reveal the following properties, which are similar to those of corresponding procedures in particle methods: the performance of sampling can degrade if the effective sample size of a weighted sample is small, and resampling improves the sampling performance by increasing the effective sample size. We first demonstrate these theoretical findings by synthetic experiments. Then we show the effectiveness of the proposed filter by artificial and real data experiments, which include vision-based mobile robot localization.
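Among the components named in this abstract, kernel herding is easy to show in isolation: it greedily selects unweighted "super-samples" whose empirical embedding tracks a given (possibly weighted) kernel mean, which is, at a high level, how the filter's resampling step works. Below is a minimal sketch under assumed choices (RBF kernel, candidates drawn from the weighted sample itself); it is not the authors' implementation.

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_herding(X, w, candidates, m, gamma=1.0):
    """Greedy herding: pick c_{t+1} = argmax_c [mu(c) - (1/(t+1)) * sum_s k(c, c_s)],
    where mu(c) = sum_i w_i k(x_i, c) is the weighted mean embedding."""
    mu = rbf_gram(candidates, X, gamma) @ w
    K = rbf_gram(candidates, candidates, gamma)
    running = np.zeros(len(candidates))   # sum_s k(c, c_s) over picked points
    picked = []
    for t in range(m):
        picked.append(int(np.argmax(mu - running / (t + 1))))
        running += K[:, picked[-1]]
    return candidates[picked]

# Resample a weighted sample into 50 equally weighted points.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 1))
w = np.exp(-X[:, 0]); w /= w.sum()        # example importance weights
S = kernel_herding(X, w, X, 50)
print(S.mean(), (w[:, None] * X).sum())   # herded mean tracks the weighted mean
```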
25

Zhang, Yi, Jie Lu, Feng Liu, Qian Liu, Alan Porter, Hongshu Chen, and Guangquan Zhang. "Does deep learning help topic extraction? A kernel k-means clustering method with word embedding". Journal of Informetrics 12, no. 4 (November 2018): 1099–117. http://dx.doi.org/10.1016/j.joi.2018.09.004.

26

Tamas, Ambrus, and Balazs Csanad Csaji. "Exact Distribution-Free Hypothesis Tests for the Regression Function of Binary Classification via Conditional Kernel Mean Embeddings". IEEE Control Systems Letters 6 (2022): 860–65. http://dx.doi.org/10.1109/lcsys.2021.3087409.

27

Cong, Zhang, Zeng Shan, and Zhang Hui. "Electrocardiography Classification Based on Revised Locally Linear Embedding Algorithm and Kernel-Based Fuzzy C-Means Clustering". Journal of Medical Imaging and Health Informatics 4, no. 6 (December 1, 2014): 916–21. http://dx.doi.org/10.1166/jmihi.2014.1342.

28

Liu, Yang, Jiayun Tian, Xuemei Liu, Tianran Tao, Zehong Ren, Xingzhi Wang, and Yize Wang. "Research on a Knowledge Graph Embedding Method Based on Improved Convolutional Neural Networks for Hydraulic Engineering". Electronics 12, no. 14 (July 17, 2023): 3099. http://dx.doi.org/10.3390/electronics12143099.

Abstract:
In response to the shortcomings of existing knowledge graph embedding strategies, such as weak feature interaction and latent knowledge representation, a unique hydraulic knowledge graph embedding method is suggested. The proposed method incorporates spatial position features into the entity-relation embedding process, thereby enhancing the representation capability of latent knowledge. Furthermore, it utilizes a multi-layer convolutional neural network to fuse features at different levels, effectively capturing more abundant semantic information. Additionally, the method employs multi-scale dilated convolution kernels to capture rich explicit interaction features across different scales of space. In this study, the effectiveness of the proposed model was validated on the link prediction task. Experimental results demonstrated that, compared to the ConvE model, the proposed model achieved a significant improvement of 14.8% in terms of mean reciprocal rank (MRR) on public datasets. Additionally, the suggested model outperformed the ConvR model on the hydraulic dataset, leading to a 10.1% increase in MRR. The results indicate that the proposed approach exhibits good applicability and performance in the task of hydraulic knowledge graph complementation. This suggests that the method has the potential to offer significant assistance for knowledge discovery and application research in the field of hydraulics.
29

Tschan-Plessl, Astrid, Eivind Heggernes Ask, Thea Johanne Gjerdingen, Michelle Saetersmoen, Hanna Julie Hoel, Merete Thune Wiiger, Johanna Olweus, et al. "System-Level Disease-Driven Immune Signatures in Patients with Diffuse Large B-Cell Lymphoma Associated with Poor Survival". Blood 134, Supplement_1 (November 13, 2019): 2897. http://dx.doi.org/10.1182/blood-2019-131359.

Abstract:
Global gene expression profiling of the tumor microenvironment in diffuse large B-cell lymphoma (DLBCL) has revealed broad innate immune signatures that distinguish the heterogeneous disease subtypes and correlate with good treatment outcome. However, we still lack tools to identify the relatively large group of patients that are refractory to initial therapy and have a dismal prognosis. Here, we used mass cytometry and serum profiling in a systems-level approach to analyze immune responses in 36 patients with aggressive B cell lymphoma and age- and sex-matched healthy controls. Stochastic neighbor embedding (t-SNE) analysis of protein profiles divided patients into two distinct clusters, with cluster 2 representing patients with a more severe deviation in their protein expression compared to healthy controls. Patients in cluster 2 showed a more dramatic perturbation of their immune cell repertoires with expansion of myeloid-derived suppressor cells (MDSCs), increased T cell differentiation and significantly higher expression of metabolic markers such as GLUT-1 and activation markers, including Ki67, CD38 and PD-1. An extended analysis of serum protein profiles in two independent cohorts (n=69 and n=80 patients, respectively) revealed that the identified systemic immune signatures were linked to poor progression free survival (PFS) and inferior overall survival (OS). Immune monitoring during chemo-immunotherapy showed that most patients normalized their serum protein profiles. Notably, non-responding patients retained higher than normal expression of several proteins, including PD-L1, CD70, IL-18, granzyme A and CD83. These studies demonstrate distinct patterns of disease-driven alterations in the systemic immune response of DLBCL patients that are associated with poor survival and persist in patients who are refractory to therapy.
[Figure 1: System-level immune signatures associated with poor prognosis in DLBCL; panels A–H show serum profiles and patient clusters, clinical features by cluster, t-SNE maps of patients and controls, MDSC abundance, phenotypic differences between clusters, overall survival by serologically defined cluster, and serum protein abundance in responders versus non-responders.]
30

Huang, Jingxu, Qiong Liu, Lang Xiang, Guangrui Li, Yiqing Zhang, and Wenbai Chen. "A Lightweight Residual Model for Corrosion Segmentation with Local Contextual Information". Applied Sciences 12, no. 18 (September 9, 2022): 9095. http://dx.doi.org/10.3390/app12189095.

Abstract:
Metal corrosion in high-risk areas, such as high-altitude cables and chemical factories, is very complex and inaccessible to people, which can be a hazard and compromise people’s safety. Embedding deep learning models into edge computing devices is urgently needed to conduct corrosion inspections. However, the parameters of current state-of-the-art models are too large to meet the computation and storage requirements of mobile devices, while lightweight models perform poorly in complex corrosion environments. To address these issues, a lightweight residual deep-learning model based on an encoder–decoder structure is proposed in this paper. We designed small and large kernels to extract local detailed information and capture distant dependencies at all stages of the encoder. A sequential operation consisting of a channel split, depthwise separable convolution, and channel shuffling were implemented to reduce the size of the model. We proposed a simple, efficient decoder structure by fusing multi-scale features to augment feature representation. In extensive experiments, our proposed model, with only 2.41 MB of parameters, demonstrated superior performance over state-of-the-art segmentation methods: 75.64% mean intersection over union (IoU), 86.07% mean pixel accuracy and a 0.838 F1-score. Moreover, a larger version was designed by increasing the number of output channels, and model accuracy improved further: 79.06% mean IoU, 88.07% mean pixel accuracy, and 0.891 F1-score. The size of the model remained competitive at 8.25 MB. Comparison work with other networks and visualized results were used for validation and to determine the accuracy of metal corrosion surface segmentation with limited resources.
31

Kusaba, Minoru, Yoshihiro Hayashi, Chang Liu, Araki Wakiuchi, and Ryo Yoshida. "Representation of materials by kernel mean embedding". Physical Review B 108, no. 13 (October 16, 2023). http://dx.doi.org/10.1103/physrevb.108.134107.

32

Wu, Xi-Zhu, Wenkai Xu, Song Liu, and Zhi-Hua Zhou. "Model Reuse with Reduced Kernel Mean Embedding Specification". IEEE Transactions on Knowledge and Data Engineering, 2021, 1. http://dx.doi.org/10.1109/tkde.2021.3086619.

33

Li, Guofa, Zefeng Ji, Yunlong Chang, Shen Li, Xingda Qu, and Dongpu Cao. "ML-ANet: A Transfer Learning Approach Using Adaptation Network for Multi-label Image Classification in Autonomous Driving". Chinese Journal of Mechanical Engineering 34, no. 1 (August 23, 2021). http://dx.doi.org/10.1186/s10033-021-00598-9.

Abstract:
To reduce the discrepancy between the source and target domains, a new multi-label adaptation network (ML-ANet) based on multiple kernel variants with maximum mean discrepancies is proposed in this paper. The hidden representations of the task-specific layers in ML-ANet are embedded in the reproducing kernel Hilbert space (RKHS) so that the mean-embeddings of specific features in different domains could be precisely matched. Multiple kernel functions are used to improve feature distribution efficiency for explicit mean embedding matching, which can further reduce domain discrepancy. Adverse weather and cross-camera adaptation examinations are conducted to verify the effectiveness of our proposed ML-ANet. The results show that our proposed ML-ANet achieves higher accuracies than the compared state-of-the-art methods for multi-label image classification in both the adverse weather adaptation and cross-camera adaptation experiments. These results indicate that ML-ANet can alleviate the reliance on fully labeled training data and improve the accuracy of multi-label image classification in various domain shift scenarios.
34

Alyakin, Anton A., Joshua Agterberg, Hayden S. Helm, and Carey E. Priebe. "Correcting a nonparametric two-sample graph hypothesis test for graphs with different numbers of vertices with applications to connectomics". Applied Network Science 9, no. 1 (January 3, 2024). http://dx.doi.org/10.1007/s41109-023-00607-x.

Abstract:
Random graphs are statistical models that have many applications, ranging from neuroscience to social network analysis. Of particular interest in some applications is the problem of testing two random graphs for equality of generating distributions. Tang et al. (Bernoulli 23:1599–1630, 2017) propose a test for this setting. This test consists of embedding the graph into a low-dimensional space via the adjacency spectral embedding (ASE) and subsequently using a kernel two-sample test based on the maximum mean discrepancy. However, if the two graphs being compared have an unequal number of vertices, the test of Tang et al. (Bernoulli 23:1599–1630, 2017) may not be valid. We demonstrate the intuition behind this invalidity and propose a correction that makes any subsequent kernel- or distance-based test valid. Our method relies on sampling based on the asymptotic distribution for the ASE. We call these altered embeddings the corrected adjacency spectral embeddings (CASE). We also show that CASE remedies the exchangeability problem of the original test and demonstrate the validity and consistency of the test that uses CASE via a simulation study. Lastly, we apply our proposed test to the problem of determining equivalence of generating distributions in human connectomes extracted from diffusion magnetic resonance imaging at different scales.
35

Hayati, Saeed, Kenji Fukumizu, and Afshin Parvardeh. "Kernel Mean Embedding of Probability Measures and its Applications to Functional Data Analysis". Scandinavian Journal of Statistics, October 12, 2023. http://dx.doi.org/10.1111/sjos.12691.

Abstract:
This study intends to introduce kernel mean embedding of probability measures over infinite-dimensional separable Hilbert spaces induced by functional response statistical models. The embedded function represents the concentration of probability measures in small open neighborhoods, which identifies a pseudo-likelihood and fosters a rich framework for statistical inference. Utilizing Maximum Mean Discrepancy, we devise new tests in functional response models. The performance of the newly derived tests is evaluated against competitors in three major problems in functional data analysis, including function-on-scalar regression, functional one-way ANOVA, and equality of covariance operators.
36

Das, Srinjoy, Hrushikesh N. Mhaskar, and Alexander Cloninger. "Kernel Distance Measures for Time Series, Random Fields and Other Structured Data". Frontiers in Applied Mathematics and Statistics 7 (December 22, 2021). http://dx.doi.org/10.3389/fams.2021.787455.

Abstract:
This paper introduces kdiff, a novel kernel-based measure for estimating distances between instances of time series, random fields and other forms of structured data. This measure is based on the idea of matching distributions that only overlap over a portion of their region of support. Our proposed measure is inspired by MPdist which has been previously proposed for such datasets and is constructed using Euclidean metrics, whereas kdiff is constructed using non-linear kernel distances. Also, kdiff accounts for both self and cross similarities across the instances and is defined using a lower quantile of the distance distribution. Comparing the cross similarity to self similarity allows for measures of similarity that are more robust to noise and partial occlusions of the relevant signals. Our proposed measure kdiff is a more general form of the well known kernel-based Maximum Mean Discrepancy distance estimated over the embeddings. Some theoretical results are provided for separability conditions using kdiff as a distance measure for clustering and classification problems where the embedding distributions can be modeled as two component mixtures. Applications are demonstrated for clustering of synthetic and real-life time series and image data, and the performance of kdiff is compared to competing distance measures for clustering.
37

Alquier, P., and M. Gerber. "Universal Robust Regression via Maximum Mean Discrepancy". Biometrika, May 10, 2023. http://dx.doi.org/10.1093/biomet/asad031.

Abstract:
Many modern datasets are collected automatically and are thus easily contaminated by outliers. This has led to a renewed interest in robust estimation, including new notions of robustness such as robustness to adversarial contamination of the data. However, most robust estimation methods are designed for a specific model. Notably, many methods were proposed recently to obtain robust estimators in linear models or generalized linear models, and a few were developed for very specific settings, for example beta regression or sample selection models. In this paper we develop a new approach for robust estimation in arbitrary regression models, based on maximum mean discrepancy minimization. We build two estimators, both proven to be robust to Huber-type contamination. For one of them we obtain a non-asymptotic error bound and show that it is also robust to adversarial contamination, although this estimator is computationally more expensive to use in practice than the other. As a by-product of our theoretical analysis of the proposed estimators, we derive new results on kernel conditional mean embedding of distributions, which are of independent interest.
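The estimation principle is easy to sketch: pick regression parameters so that data simulated from the model is close, in MMD, to the observed data. The toy below fits a Gaussian linear model by matching joint (x, y) samples; it illustrates the criterion only, not the authors' estimators or algorithms, and the kernel, optimizer, and fixed noise draws are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def mmd2(U, V, sigma=1.0):
    # Biased squared-MMD estimate between two samples of row vectors.
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return K(U, U).mean() - 2 * K(U, V).mean() + K(V, V).mean()

def mmd_fit(x, y, seed=0):
    # Fit y = a + b*x + N(0,1) by matching the joint distribution of
    # (x, y) pairs in MMD; fixed noise draws keep the objective smooth.
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=len(x))
    obs = np.column_stack([x, y])
    def objective(theta):
        a, b = theta
        sim = np.column_stack([x, a + b * x + eps])
        return mmd2(obs, sim)
    return minimize(objective, x0=np.zeros(2), method="Nelder-Mead").x

# Toy data with gross outliers that plain least squares would chase.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)
y[:10] += 50.0
print(mmd_fit(x, y))  # should land near (1, 2) despite the outliers
```

Because the Gaussian kernel is bounded, each outlier can shift the objective only by a bounded amount, which is the intuition behind the robustness claims above.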
38

Fiedler, Christian, Michael Herty, and Sebastian Trimpe. "Mean Field Limits for Discrete-Time Dynamical Systems via Kernel Mean Embeddings". IEEE Control Systems Letters, 2023, 1. http://dx.doi.org/10.1109/lcsys.2023.3341280.

39

Guo, Yu, Yi Lingzhi, Wang Yahui, Dong Tengfei, Yu Huang, and She Haixiang. "Abnormal Status Detection of Catenary Based on TSNE Dimensionality Reduction Method and IGWO-LSSVM Model". Recent Patents on Mechanical Engineering 16 (May 5, 2023). http://dx.doi.org/10.2174/2212797616666230505151008.

Abstract:
Background: Catenary is a crucial component of an electrified railroad's traction power supply system. Prolonged outdoor exposure gives rise to a considerable incidence of abnormal status and failures, and any abnormal status or failure directly impacts driving safety. Currently, catenary detection vehicles are the most widely used means of gathering data, with faults identified from manual experience. This technology cannot meet the railway engineering demand for prompt detection and correction of faults because its work efficiency is extremely low.
Objective: To improve the accuracy of catenary abnormal status detection and shorten the detection time, this paper proposes an abnormal status detection method in which the improved grey wolf optimizer (IGWO) tunes a least squares support vector machine (LSSVM) operating on data reduced by t-distributed stochastic neighbor embedding (TSNE).
Methods: Firstly, TSNE dimensionality reduction is used to reduce the original catenary data to a three-dimensional space. Then, to address the difficulty of determining the parameters of the LSSVM detection model, the improved GWO algorithm is used to optimize the penalty factor and kernel parameter of the LSSVM, establishing the TSNE-IGWO-LSSVM catenary abnormal status detection model. Finally, the experimental results of different detection models are contrasted. TSNE is a nonlinear dimensionality reduction method that improves on stochastic neighbor embedding (SNE); it abandons the distance-preservation assumption of methods such as ISOMAP and recovers far more of the original structure than linear dimensionality reduction methods. The GWO algorithm, frequently used in engineering research, has the advantages of a simple model, strong generalization capability, and good optimization performance, but it suffers from premature convergence. In this work, the grey wolf algorithm is improved by initializing the population with a good point set and by using nonlinear control parameters, which effectively balances the local exploitation and global search capabilities of GWO. Additionally, the IGWO algorithm applies a Cauchy mutation to the current best solution to increase population diversity, enlarge the search space, and raise the likelihood of escaping local optima. The LSSVM is an improved version of the support vector machine (SVM) that replaces the original inequality constraints with a linear least squares criterion for the loss function; the RBF kernel parameter and the penalty factor jointly determine its detection performance, so the IGWO is used here to tune these two parameters and enhance the detection capacity of the LSSVM model.
Results: To minimize bias, the data are split into training and test sets at a ratio of 4:1, with 400 groups of training data and 100 groups of test data. After training the five models, the test data are used to validate and compare their detection capacity. With each of the five detection models tested ten times, the TSNE-IGWO-LSSVM model is compared with the IGWO-LSSVM, TSNE-FA-LSSVM, GWO-LSSVM, and GWO-ELM models. The results show that the TSNE-IGWO-LSSVM model achieves the highest average detection accuracy, 97.1%, and the shortest running time, 26.9 s. On the two reported error metrics, 0.17320 root mean squared error (RMSE) and 2.51%, it is likewise the best of the five models, indicating that it not only has higher detection accuracy but also better convergence of detection accuracy than the other models.
Conclusions: With thousands of miles of catenary and highly complex data, shortening the running time is crucial to improving efficiency and easing the burden on processors. The experiments demonstrate that the TSNE-IGWO-LSSVM model detects the abnormal status of catenary more accurately and more quickly, providing a new method for catenary abnormal status detection with application value and engineering significance in the era of fully electrified railways.
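For readers unfamiliar with LSSVM, its training step is a single linear solve rather than a quadratic program. The sketch below implements a generic RBF-kernel LS-SVM binary classifier only; the TSNE reduction and the IGWO search over the penalty factor gamma and kernel width sigma described above are not reproduced, and the toy three-dimensional data merely mimics the reduced catenary features.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Pairwise RBF kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

class LSSVM:
    """Least squares SVM classifier: one linear solve, no QP."""
    def __init__(self, gamma=10.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma  # penalty factor, kernel width

    def fit(self, X, y):                       # y in {-1, +1}
        n = len(y)
        omega = np.outer(y, y) * rbf(X, X, self.sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:], A[1:, 0] = y, y
        A[1:, 1:] = omega + np.eye(n) / self.gamma
        rhs = np.concatenate([[0.0], np.ones(n)])
        sol = np.linalg.solve(A, rhs)
        self.b, self.alpha, self.X, self.y = sol[0], sol[1:], X, y
        return self

    def predict(self, Xnew):
        K = rbf(Xnew, self.X, self.sigma)
        return np.sign(K @ (self.alpha * self.y) + self.b)

# Toy usage on two Gaussian blobs standing in for reduced catenary data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 3)), rng.normal(1, 1, (50, 3))])
y = np.concatenate([-np.ones(50), np.ones(50)])
model = LSSVM().fit(X, y)
print((model.predict(X) == y).mean())
```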
40

Verma, Vaibhav, and Mircea R. Stan. "AI-PiM—Extending the RISC-V processor with Processing-in-Memory functional units for AI inference at the edge of IoT". Frontiers in Electronics 3 (August 11, 2022). http://dx.doi.org/10.3389/felec.2022.898273.

Abstract:
The recent advances in Artificial Intelligence (AI) achieving "better-than-human" accuracy in a variety of tasks such as image classification and the game of Go have come at the cost of an exponential increase in the size of artificial neural networks. This has led to AI hardware solutions becoming severely memory-bound and scrambling to keep up with the ever-increasing "von Neumann bottleneck". Processing-in-Memory (PiM) architectures offer an excellent way to ease the von Neumann bottleneck by embedding compute capabilities inside the memory and reducing the data traffic between the memory and the processor. But PiM accelerators break the standard von Neumann programming model by fusing memory and compute operations together, which impedes their integration in the standard computing stack. There is an urgent need for system-level solutions that take full advantage of PiM accelerators for end-to-end acceleration of AI applications. This article presents AI-PiM as a solution to bridge this research gap. AI-PiM proposes a hardware, ISA, and software co-design methodology which allows integration of PiM accelerators in the RISC-V processor pipeline as functional execution units. AI-PiM also extends the RISC-V ISA with custom instructions which directly target the PiM functional units, resulting in their tight integration with the processor. This tight integration is especially important for edge AI devices, which need to process both AI and non-AI tasks on the same hardware due to area, power, size, and cost constraints. AI-PiM ISA extensions expose the PiM hardware functionality to software programmers, allowing efficient mapping of applications to the PiM hardware. AI-PiM adds support for custom ISA extensions to the complete software stack, including the compiler, assembler, linker, simulator, and profiler, to ensure programmability and evaluation with popular AI domain-specific languages and frameworks such as TensorFlow, PyTorch, MXNet, and Keras. AI-PiM improves performance for the vector-matrix multiplication (VMM) kernel by 17.63x and provides a mean speed-up of 2.74x on the MLPerf Tiny benchmark compared to an RV64IMC RISC-V baseline. AI-PiM also speeds up MLPerf Tiny benchmark inference cycles by 2.45x (average) compared to the state-of-the-art Arm Cortex-A72 processor.
41

Singh, Abhinav, Alejandra Foggia, Pietro Incardona, and Ivo F. Sbalzarini. "A Meshfree Collocation Scheme for Surface Differential Operators on Point Clouds". Journal of Scientific Computing 96, no. 3 (August 7, 2023). http://dx.doi.org/10.1007/s10915-023-02313-3.

Abstract:
We present a meshfree collocation scheme to discretize intrinsic surface differential operators over scalar fields on smooth curved surfaces with given normal vectors and a non-intersecting tubular neighborhood. The method is based on discretization-corrected particle strength exchange (DC-PSE), which generalizes finite difference methods to meshfree point clouds. The proposed Surface DC-PSE method is derived from an embedding theorem, but we analytically reduce the operator kernels along surface normals to obtain a purely intrinsic computational scheme over surface point clouds. We benchmark Surface DC-PSE by discretizing the Laplace–Beltrami operator on a circle and a sphere, and we present convergence results for both explicit and implicit solvers. We then showcase the algorithm on the problem of computing Gauss and mean curvature of an ellipsoid and of the Stanford Bunny by approximating the intrinsic divergence of the normal vector field. Finally, we compare Surface DC-PSE with surface finite elements (SFEM) and diffuse-interface finite elements (DI FEM) in a validation case.
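The curvature showcase rests on a standard identity relating the unit normal field to curvature. Under one common sign convention (conventions vary with the chosen orientation, and the factor of 1/2 is sometimes absorbed into the definition), with shape operator S = -\nabla_S n:

```latex
% Mean and Gauss curvature from the unit normal field n of a surface,
% with shape operator S = -\nabla_S \mathbf{n} (signs depend on orientation):
H = \tfrac{1}{2}\operatorname{tr}(S) = -\tfrac{1}{2}\,\nabla_S \cdot \mathbf{n},
\qquad
K = \det(S) = \kappa_1 \kappa_2 .
```

Approximating the surface divergence of the normal field with meshfree operators is what yields the curvature benchmark described above; the specific signs and factors here are a convention, not taken from the paper.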
42

Meghanani, Amit, C. S. Anoop, and Angarai Ganesan Ramakrishnan. "Recognition of Alzheimer’s Dementia From the Transcriptions of Spontaneous Speech Using fastText and CNN Models". Frontiers in Computer Science 3 (March 24, 2021). http://dx.doi.org/10.3389/fcomp.2021.624558.

Abstract:
Alzheimer’s dementia (AD) is a type of neurodegenerative disease that is associated with a decline in memory. However, speech and language impairments are also common in Alzheimer’s dementia patients. This work is an extension of our previous work, where we had used spontaneous speech for Alzheimer’s dementia recognition employing log-Mel spectrogram and Mel-frequency cepstral coefficients (MFCC) as inputs to deep neural networks (DNN). In this work, we explore the transcriptions of spontaneous speech for dementia recognition and compare the results with several baseline results. We explore two models for dementia recognition: 1) fastText and 2) convolutional neural network (CNN) with a single convolutional layer, to capture the n-gram-based linguistic information from the input sentence. The fastText model uses a bag of bigrams and trigrams along with the input text to capture the local word orderings. In the CNN-based model, we try to capture different n-grams (we use n = 2, 3, 4, 5) present in the text by adapting the kernel sizes to n. In both fastText and CNN architectures, the word embeddings are initialized using pretrained GloVe vectors. We use bagging of 21 models in each of these architectures to arrive at the final model using which the performance on the test data is assessed. The best accuracies achieved with CNN and fastText models on the text data are 79.16 and 83.33%, respectively. The best root mean square errors (RMSE) on the prediction of mini-mental state examination (MMSE) score are 4.38 and 4.28 for CNN and fastText, respectively. The results suggest that the n-gram-based features are worth pursuing, for the task of AD detection. fastText models have competitive results when compared to several baseline methods. Also, fastText models are shallow in nature and have the advantage of being faster in training and evaluation, by several orders of magnitude, compared to deep models.
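The CNN variant described above is essentially the classic multi-width convolutional text classifier. A minimal PyTorch sketch follows; the embedding size, filter count, and class count are illustrative assumptions, and the GloVe initialization of the embedding table is omitted.

```python
import torch
import torch.nn as nn

class NGramCNN(nn.Module):
    # One 1-D convolution per n-gram width (n = 2..5), max-pooled over
    # time and concatenated before a linear classifier.
    def __init__(self, vocab_size, emb_dim=100, n_filters=64,
                 widths=(2, 3, 4, 5), n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # init from GloVe in practice
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, w) for w in widths])
        self.fc = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)   # (batch, emb_dim, seq_len)
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))

# Toy forward pass: a batch of 8 token sequences of length 50.
model = NGramCNN(vocab_size=5000)
logits = model(torch.randint(0, 5000, (8, 50)))
print(logits.shape)  # torch.Size([8, 2])
```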
43

Lootus, Meelis, Lulu Beatson, Lucas Atwood, Theo Bourdais, Sandra Steyaert, Chethan Sarabu, Zeenia Framroze et al. "Development and Assessment of an Artificial Intelligence-Based Tool for Ptosis Measurement in Adult Myasthenia Gravis Patients Using Selfie Video Clips Recorded on Smartphones". Digital Biomarkers, July 28, 2023, 63–73. http://dx.doi.org/10.1159/000531224.

Abstract:
Introduction: Myasthenia gravis (MG) is a rare autoimmune disease characterized by muscle weakness and fatigue. Ptosis (eyelid drooping) occurs due to fatigue of the muscles for eyelid elevation and is one symptom widely used by patients and healthcare providers to track progression of the disease. Margin reflex distance 1 (MRD1) is an accepted clinical measure of ptosis and is typically assessed using a hand-held ruler. In this work, we develop an AI model that enables automated measurement of MRD1 in self-recorded video clips collected using patient smartphones. Methods: A 3-month prospective observational study collected a dataset of video clips from patients with MG. Study participants were asked to perform an eyelid fatigability exercise to elicit ptosis while filming “selfie” videos on their smartphones. These images were collected in nonclinical settings, with no in-person training. The dataset was annotated by non-clinicians for (1) eye landmarks to establish ground truth MRD1 and (2) the quality of the video frames. The ground truth MRD1 (in millimeters, mm) was calculated from eye landmark annotations in the video frames using a standard conversion factor, the horizontal visible iris diameter of the human eye. To develop the model, we trained a neural network for eye landmark detection consisting of a ResNet50 backbone plus two dense layers of 78 dimensions on publicly available datasets. Only the ResNet50 backbone was used, discarding the last two layers. The embeddings from the ResNet50 were used as features for a support vector regressor (SVR) using a linear kernel, for regression to MRD1, in mm. The SVR was trained on data collected remotely from MG patients in the prospective study, split into training and development folds. The model’s performance for MRD1 estimation was evaluated on a separate test fold from the study dataset. Results: On the full test fold (N = 664 images), the correlation between the ground truth and predicted MRD1 values was strong (r = 0.732). The mean absolute error was 0.822 mm; the mean of differences was −0.256 mm; and the 95% limits of agreement (LOA) ranged from −0.214 to 1.768 mm. Model performance showed no improvement when test data were gated to exclude “poor” quality images. Conclusions: On data generated under highly challenging real-world conditions from a variety of different smartphone devices, the model predicts MRD1 with a strong correlation (r = 0.732) between ground truth and predicted values.
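In outline, the regression head described above is simple to reproduce: freeze a ResNet50, use its pooled 2048-dimensional embedding as features, and fit a linear-kernel SVR to MRD1 in millimeters. The sketch below uses random tensors in place of the cropped eye-region frames; preprocessing, landmark annotation, and the train/dev/test folds are omitted.

```python
import numpy as np
import torch
from torchvision.models import resnet50
from sklearn.svm import SVR

# Frozen ResNet50 backbone; dropping the classification head leaves
# the 2048-dimensional pooled embedding as the output.
backbone = resnet50(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(images):                     # images: (N, 3, 224, 224) tensor
    return backbone(images).numpy()    # (N, 2048) feature matrix

# Placeholder data standing in for eye-region video frames and their
# annotated MRD1 values in mm.
frames = torch.rand(32, 3, 224, 224)
mrd1_mm = np.random.uniform(0.5, 5.0, size=32)

X = embed(frames)
svr = SVR(kernel="linear").fit(X, mrd1_mm)
print(svr.predict(X[:5]))
```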
44

Caesar Dib, Caio. "Bioethics-CSR Divide". Voices in Bioethics 10 (March 21, 2024). http://dx.doi.org/10.52214/vib.v10i.12376.

Abstract:
ABSTRACT Bioethics and Corporate Social Responsibility (CSR) were born out of similar concerns, such as the reaction to scandal and the restraint of irresponsible actions by individuals and organizations. However, these fields of knowledge are seldom explored together. This article attempts to explain the motives behind the gap between bioethics and CSR, while arguing that their shared agenda – combined with their contrasting principles and goals – suggests there is potential for fruitful dialogue that enables the actualization of bioethical agendas and provides a direction for CSR in health-related organizations. INTRODUCTION Bioethics and Corporate Social Responsibility (CSR) seem to be cut from the same cloth: the concern for human rights and the response to scandal. Both are tools for the governance of organizations, shaping how power flows and decisions are made. They have taken the shape of specialized committees, means of stakeholder inclusion at deliberative forums, compliance programs, and internal processes. It should be surprising, then, that these two fields of study and practice have developed separately, only recently re-approaching one another. There have been displays of this reconnection both in academic and corporate spaces, with bioethics surfacing as part of the discourse of CSR and compliance initiatives. However, this is still a relatively timid effort. Even though the bioethics-CSR divide presents mostly reasonable explanations for this difficult relationship between the disciplines, current proposals suggest there is much to be gained from a stronger relationship between them. This article explores the common history of bioethics and corporate social responsibility and identifies their common features and differences. It then explores the dispute of jurisdictions due to professional and academic “pedigree” and incompatibilities in the ideological and teleological spheres as possible causes for the divide. The discussion turns to paths for improving the reflexivity of both disciplines and, therefore, their openness to mutual contributions. I. Cut Out of the Same Cloth The earliest record of the word “bioethics” dates back to 1927 as a term that designates one’s ethical responsibility toward not only human beings but other lifeforms as well, such as animals and plants.[1] Based on Kantian ethics, the term was coined as a response to the great prestige science held at its time. It remained largely forgotten until the 1970s, when it resurfaced in the United States[2] as the body of knowledge that can be employed to ensure the responsible pursuit and application of science. The resurgence was prompted by a response to widespread irresponsible attitudes toward science and grounded in a pluralistic perspective of morality.[3] In the second half of the twentieth century, states and the international community assumed the duty to protect human rights, and bioethics became a venue for discussing rights.[4] There is both a semantic gap and a contextual gap between these two iterations, with some of them already being established. The origin of corporate social responsibility is often attributed to the Berle-Dodd debate. The discussion was characterized by diverging views on the extent of the responsibility of managers.[5] It was later settled as positioning the company, especially the large firm, as an entity whose existence is fomented by the law due to its service to the community.
The concept has evolved with time, departing from a largely philanthropic meaning to being ingrained in nearly every aspect of a company’s operations. This includes investments, entrepreneurship models, and its relationship to stakeholders, leading to an increasing operationalization and globalization of the concept.[6] At first sight, these two movements seem to stem from different contexts. Despite the difference, it is also possible to tell a joint history of bioethics and CSR, with their point of contact being a generalized concern with technological and social changes that surfaced in the sixties. The publication of Silent Spring in 1962 by Rachel Carson exemplifies this growing concern over the sustainability of the ruling economic growth model of its time by commenting on the effects of large-scale agriculture and the use of pesticides in the population of bees, one of the most relevant pollinators of crops consumed by humans. The book influenced both the author responsible for coining bioethics in 1971[7] and early CSR literature.[8] By initiating a debate over the sustainability of economic models, the environmentalist discourse became a precursor to vigorous social movements for civil rights. Bioethics was part of the trend as it would be carried forward by movements such as feminism and the patients’ rights movement.[9] Bioethics would gradually move from a public discourse centered around the responsible use of science and technology to academic and government spaces.[10] This evolution led to an increasing emphasis on intellectual rigor and governance. The transformation channeled the effort to take effective action against scandal into bioethical governance practices,[11] such as bioethics and research ethics committees. The publication of the Belmont Report[12] in the aftermath of the Tuskegee Syphilis Experiment, as well as the creation of committees such as the “God Committee,”[13] which aimed to develop and enforce criteria for allocating scarce dialysis machines, exemplify this shift. On the side of CSR, this period represents, at first, a stronger pact between businesses and society due to more stringent environmental and consumer regulations. But afterward, a joint trend emerged: on one side, the deregulation within the context of neoliberalism, and on the other, the operationalization of corporate social responsibility as a response to societal concerns.[14] The 1990s saw both opportunities and crises that derived from globalization. In the political arena, the end of the Cold War led to an impasse in the discourse concerning human rights,[15] which previously had been split between the defense of civil and political rights on one side and social rights on the other. But at the same time, agendas that were previously restricted territorially became institutionalized on a global scale.[16] Events such as the European Environment Agency (1990), ECO92 in Rio de Janeiro (1992), and the UN Global Compact (2000) are some examples of the globalization of CSR. This process of institutionalization would also mirror a crisis in CSR, given that its voluntarist core would be deemed lackluster due to the lack of corporate accountability.
The business and human rights movement sought to produce new binding instruments – usually state-based – that could ensure that businesses would comply with their duties to respect human rights.[17] This rule-creation process has been called legalization: a shift from business standards to norms of varying degrees of obligation, precision, and delegation.[18] Bioethics has also experienced its own renewed identity in the developed world, perhaps because of its reconnection to public and global health. Global health has been the object of study for centuries under other labels (e.g., the use of tropical medicine to assist colonial expeditions) but it resurfaced in the political agenda recently after the pandemics of AIDS and respiratory diseases.[19] Bioethics has been accused from the inside of ignoring matters beyond the patient-provider relationship,[20] including those related to public health and/or governance. Meanwhile, scholars claimed the need to expand the discourse to global health.[21] In some countries, bioethics developed a tight relationship with public health, such as Brazil,[22] due to its connections to the sanitary reform movement. The United Kingdom has also followed a different path, prioritizing governance practices and the use of pre-established institutions in a more community-oriented approach.[23] The Universal Declaration on Bioethics and Rights followed this shift toward a social dimension of bioethics despite being subject to criticism due to its human rights-based approach in a field characterized by ethical pluralism.[24] This scenario suggests bioethics and CSR have developed out of similar concerns: the protection of human rights and concerns over responsible development – be it economic, scientific, or technological. However, the interaction between these two fields (as well as business and human rights) is fairly recent both in academic and business settings. There might be a divide between these fields and their practitioners. II. A Tale of Jurisdictions It can be argued that CSR and business and human rights did not face jurisdictional disputes. These fields owe much of their longevity to their roots in institutional economics, whose debates, such as the Berle-Dodd debate, were based on interdisciplinary dialogue and the abandonment of sectorial divisions and public-private dichotomies.[25] There was opposition to this approach to the role of companies in society that could have implications for CSR’s interdisciplinarity, such as the understanding that corporate activities should be restricted to profit maximization.[26] Yet, those were often oppositions to CSR or business and human rights themselves. The birth of bioethics in the USA can be traced back to jurisdictional disputes over the realm of medicine and life sciences.[27] The dispute unfolded between representatives of science and those of “society’s conscience,” whether through bioethics as a form of applied ethics or other areas of knowledge such as theology.[28] Amid the civil rights movements, outsiders would gain access to the social sphere of medicine, simultaneously bringing it to the public debate and emphasizing the decision-making process as the center of the medical practice.[29] This led to the emergence of the bioethicist as a professional whose background in philosophy, theology, or social sciences deemed the bioethicist qualified to speak on behalf of the social consciousness. 
In other locations this interaction would play out differently: whether as an investigation of philosophically implied issues, a communal effort with professional institutions to enhance decision-making capability, or a concern with access to healthcare.[30] In these situations, the emergence and regulation of bioethics would be far less rooted in disputes over jurisdictions. This contentious birth of bioethics would have several implications, most related to where the bioethicist belongs. After the civil rights movements subsided, bioethics moved from the public sphere into an ivory tower: intellectual, secular, and isolated. The scope of the bioethicist would be increasingly limited to the spaces of academia and hospitals, where it would be narrowed to the clinical environment.[31] This would become the comfort zone of professionals, much to the detriment of social concerns. This scenario was convenient to social groups that sought to affirm their protagonism in the public arena, with conservative and progressive movements alike questioning the legitimacy of bioethics in the political discourse.[32] Even within the walls of hospitals and clinics, bioethics would not be excused from criticism. After all, the work of bioethicists is often unregulated and lacks the same kind of accountability that doctors and lawyers have. Then, is there a role to be played by the bioethicist? This trend of isolation leads to a plausible explanation for why bioethics did not develop an extensive collaboration with corporate social responsibility or with business and human rights. Despite stemming from similar agendas, bioethics’ orientation towards the private sphere resulted in a limited perspective on the broader implications of its decisions. This existential crisis of the discipline led to a re-evaluation of its nature and purpose. Its relevance has been reaffirmed due to the epistemic advantage of philosophy when engaging normative issues. Proper training enables the bioethicist to avoid falling into traps of subjectivism or moralism, which are unable to address the complexity of decision-making. It also prevents the naïve seduction of “scientifying” ethics.[33] This is the starting point of a multitude of roles that can be attributed to bioethicists. There are three main responsibilities that fall under bioethics: (i) activism in biopolicy, through the engagement in the creation of laws, jurisprudence, and public policies; (ii) the exercise of bioethics expertise, be it through the specialized knowledge in philosophical thought, its ability to juggle multiple languages related to various disciplines related to bioethics, or its capacity to combat and avoid misinformation and epistemic distortion; and (iii) intellectual exchange, by exercising awareness that it is necessary to work with specialists from different backgrounds to achieve its goals.[34] All of these suggest the need for bioethics to improve its dialogue with CSR and business and human rights. Both CSR and business and human rights have been the arena of political disputes over the role of regulations and corporations themselves, and the absence of strong stances by bioethicists risks deepening their exclusion from the public arena. Furthermore, CSR and business and human rights are at the forefront of contemporary issues, such as the limits to sustainable development and appropriate governance structures, which may lead to the acceptance of values and accomplishment of goals cherished by bioethics.
However, a gap in identifying the role and nature of bioethics and CSR may also be an obstacle for bridging the chasm between bioethics and CSR. III. From Substance to Form: Philosophical Groundings of CSR and Bioethics As mentioned earlier, CSR is, to some extent, a byproduct of institutionalism. Institutional economics has a philosophical footprint in the pragmatic tradition[35], which has implications for the purpose of the movement and the typical course of the debate. The effectiveness of regulatory measures is often at the center of CSR and business and human rights debates: whatever the regulatory proposal may be, compliance, feasibility, and effectiveness are the kernel of the discussion. The axiological foundation is often the protection of human rights. But discussions over the prioritization of some human rights over others or the specific characteristics of the community to be protected are often neglected.[36] It is worth reinforcing that adopting human rights as an ethical standard presents problems to bioethics, given its grounding in the recognition of ethical pluralism. Pragmatism adopts an anti-essentialist view, arguing that concepts derive from their practical consequences instead of aprioristic elements.[37] Therefore, truth is transitory and context dependent. Pragmatism embraces a form of moral relativism and may find itself in an impasse in the context of political economy and policymaking due to its tendency to be stuck between the preservation of the status quo and the defense of a technocratic perspective, which sees technical and scientific progress as the solution to many of society’s issues.[38] These characteristics mean that bioethics has a complicated relationship with pragmatism. Indeed, there are connections between pragmatism and the bioethics discourse. Both can be traced back to American naturalism.[39] The early effort in bioethics to make it ecumenical, thus building on a common but transitory morality,[40] sounds pragmatic. Therefore, scholars suggest that bioethics should rely on pragmatism's perks and characteristics to develop solutions to new ethical challenges that emerge from scientific and technological progress. Nonetheless, ethical relativism is a problem for bioethics when it bleeds from a metaethical level into the subject matters themselves. After all, the whole point of bioethics is either descriptive, where it seeks to understand social values and conditions that pertain to its scope, or normative, where it investigates what should be done in matters related to medicine, life sciences, and social and technological change. It is a “knowledge of how to use knowledge.” Therefore, bioethics is a product of disillusionment regarding science and technology's capacity to produce exclusively good consequences. It was built around an opposition to ethical relativism—even though the field is aware of the particularity of its answers. This is true not only for the scholarly arena, where the objective is to produce ethically sound answers but also for bioethics governance, where relativism may induce decision paralysis or open the way to points of view disconnected from facts.[41] But there might be a point for more pragmatic bioethics. Bioethics has become an increasingly public enterprise which seeks political persuasion and impact in the regulatory sphere. When bioethics is seen as an enterprise, achieving social transformation is its main goal. 
In this sense, pragmatism can provide critical tools to identify idiosyncrasies in regulation that prove change is needed. An example of how this may play out is the abortion rights movement in the global south.[42] Despite barriers to accessing safe abortion, this movement came up with creative solutions and a public discourse focused on the consequences of its criminalization rather than its moral aspects. IV. Bridging the Divide: Connections Between Bioethics and CSR There have been attempts to bring bioethics and CSR closer to each other. Corporate responsibility can be a supplementary strategy for achieving the goals of bioethics. The International Bioethics Committee (IBC), an institution of the United Nations Educational, Scientific and Cultural Organization (UNESCO), highlights the concept that social responsibility regarding health falls under the provisions of the Universal Declaration on Bioethics and Human Rights (UDBHR). It is a means of achieving good health (complete physical, mental, and social well-being) through social development.[43] Thus, it plays out as a condition for actualizing the goals dear to bioethics and general ethical standards,[44] such as autonomy and awareness of the social consequences of an organization’s governance. On this same note, CSR is a complementary resource for healthcare organizations that already have embedded bioethics into their operations[45] as a way of looking at the social impact of their practices. And bioethics is also an asset of CSR. Bioethics can inform the necessary conditions for healthcare institutions achieving a positive social impact. When taken at face value, bioethics may offer guidelines for ethical and socially responsible behavior in the industry, instructing how these should play out in a particular context such as in research, and access to health.[46] When considering the relevance of rewarding mechanisms,[47] bioethics can guide the establishment of certification measures to restore lost trust in the pharmaceutical sector.[48] Furthermore, recognizing that the choice is a more complex matter than the maximization of utility can offer a nuanced perspective on how organizations dealing with existentially relevant choices understand their stakeholders.[49] However, all of those proposals might come with the challenge of proving that something can be gained from its addition to self-regulatory practices[50] within the scope of a dominant rights-based approach to CSR and global and corporate law. It is evident that there is room for further collaboration between bioethics and CSR. Embedding either into the corporate governance practices of an organization tends to be connected to promoting the other.[51] While there are some incompatibilities, organizations should try to overcome them and take advantage of the synergies and similarities. CONCLUSION Despite their common interests and shared history, bioethics and corporate social responsibility have not produced a mature exchange. Jurisdictional issues and foundational incompatibilities have prevented a joint effort to establish a model of social responsibility that addresses issues particular to the healthcare sector. 
Both bioethics and CSR should acknowledge that they hold two different pieces of a cognitive competence necessary for that task: CSR offers experience on how to turn corporate ethical obligations operational, while bioethics provides access to the prevailing practical and philosophical problem-solving tools in healthcare that were born out of social movements. Reconciling bioethics and CSR calls for greater efforts to comprehend and incorporate the social knowledge developed by each field reflexively[52] while understanding their insights are relevant to achieving some common goals. - [1]. Fritz Jahr, “Bio-Ethik: Eine Umschau Über Die Ethischen Beziehungen Des Menschen Zu Tier Und Pflanze,” Kosmos - Handweiser Für Naturfreunde 24 (1927): 2–4. [2]. Van Rensselaer Potter, “Bioethics, the Science of Survival,” Perspectives in Biology and Medicine 14, no. 1 (1970): 127–53, https://doi.org/10.1353/pbm.1970.0015. [3]. Maximilian Schochow and Jonas Grygier, eds., “Tagungsbericht: 1927 – Die Geburt der Bioethik in Halle (Saale) durch den protestantischen Theologen Fritz Jahr (1895-1953),” Jahrbuch für Recht und Ethik / Annual Review of Law and Ethics 21 (June 11, 2014): 325–29, https://doi.org/10.3726/978-3-653-02807-2. [4] George J. Annas, American Bioethics: Crossing Human Rights and Health Law Boundaries (Oxford ; New York: Oxford University Press, 2005). [5] Philip L. Cochran, “The Evolution of Corporate Social Responsibility,” Business Horizons 50, no. 6 (November 2007): 449–54, https://doi.org/10.1016/j.bushor.2007.06.004. p. 449. [6] Mauricio Andrés Latapí Agudelo, Lára Jóhannsdóttir, and Brynhildur Davídsdóttir, “A Literature Review of the History and Evolution of Corporate Social Responsibility,” International Journal of Corporate Social Responsibility 4, no. 1 (December 2019): 23, https://doi.org/10.1186/s40991-018-0039-y. [7] Potter, “Bioethics, the Science of Survival.” p. 129. [8] Latapí Agudelo, Jóhannsdóttir, and Davídsdóttir, “A Literature Review of the History and Evolution of Corporate Social Responsibility.” p. 4. [9] Albert R. Jonsen, The Birth of Bioethics (New York: Oxford University Press, 2003). p. 368-371. [10] Jonsen. p. 372. [11] Jonathan Montgomery, “Bioethics as a Governance Practice,” Health Care Analysis 24, no. 1 (March 2016): 3–23, https://doi.org/10.1007/s10728-015-0310-2. [12]. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, “The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research” (Washington: Department of Health, Education, and Welfare, April 18, 1979), https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf. [13] Shana Alexander, “They Decide Who Lives, Who Dies,” in LIFE, by Time Inc, 19th ed., vol. 53 (Nova Iorque: Time Inc, 1962), 102–25. [14]. Latapí Agudelo, Jóhannsdóttir, and Davídsdóttir, “A Literature Review of the History and Evolution of Corporate Social Responsibility.” [15]. Boaventura de Sousa Santos, “Por Uma Concepção Multicultural Dos Direitos Humanos,” Revista Crítica de Ciências Sociais, no. 48 (June 1997): 11–32. [16] Latapí Agudelo, Jóhannsdóttir, and Davídsdóttir, “A Literature Review of the History and Evolution of Corporate Social Responsibility.” [17]. Anita Ramasastry, “Corporate Social Responsibility Versus Business and Human Rights: Bridging the Gap Between Responsibility and Accountability,” Journal of Human Rights 14, no. 2 (April 3, 2015): 237–59, https://doi.org/10.1080/14754835.2015.1037953. [18]. 
Kenneth W Abbott et al., “The Concept of Legalization,” International Organization, Legalization and World Politics, 54, no. 3 (2000): 401–4019. [19]. Jens Holst, “Global Health – Emergence, Hegemonic Trends and Biomedical Reductionism,” Globalization and Health 16, no. 1 (December 2020): 42–52, https://doi.org/10.1186/s12992-020-00573-4. [20]. Albert R. Jonsen, “Social Responsibilities of Bioethics,” Journal of Urban Health: Bulletin of the New York Academy of Medicine 78, no. 1 (March 1, 2001): 21–28, https://doi.org/10.1093/jurban/78.1.21. [21]. Solomon R Benatar, Abdallah S Daar, and Peter A Singer, “Global Health Challenges: The Need for an Expanded Discourse on Bioethics,” PLoS Medicine 2, no. 7 (July 26, 2005): e143, https://doi.org/10.1371/journal.pmed.0020143. [22]. Márcio Fabri dos Anjos and José Eduardo de Siqueira, eds., Bioética No Brasil: Tendências e Perspectivas, 1st ed., Bio & Ética (São Paulo: Sociedade Brasileira de Bioética, 2007). [23]. Montgomery, “Bioethics as a Governance Practice.” p. 8-9. [24]. Aline Albuquerque S. de Oliveira, “A Declaração Universal Sobre Bioética e Direitos Humanos e a Análise de Sua Repercussão Teórica Na Comunidade Bioética,” Revista Redbioética/UNESCO 1, no. 1 (2010): 124–39. [25] John R. Commons, “Law and Economics,” The Yale Law Journal 34, no. 4 (February 1925): 371, https://doi.org/10.2307/788562; Robert L. Hale, “Bargaining, Duress, and Economic Liberty,” Columbia Law Review 43, no. 5 (July 1943): 603–28, https://doi.org/10.2307/1117229; Karl N. Llewellyn, “The Effect of Legal Institutions Upon Economics,” The American Economic Review 15, no. 4 (1925): 665–83; Carlos Portugal Gouvêa, Análise Dos Custos Da Desigualdade: Efeitos Institucionais Do Círculo Vicioso de Desigualdade e Corrupção, 1st ed. (São Paulo: Quartier Latin, 2021). p. 84-94. [26] Milton Friedman, “A Friedman Doctrine‐- The Social Responsibility of Business Is to Increase Its Profits,” The New York Times, September 13, 1970, sec. Archives, https://www.nytimes.com/1970/09/13/archives/a-friedman-doctrine-the-social-responsibility-of-business-is-to.html. [27] Montgomery, “Bioethics as a Governance Practice.” p. 8. [28] John Hyde Evans, The History and Future of Bioethics: A Sociological View, 1st ed. (New York: Oxford University Press, 2012). [29] David J. Rothman, Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decision Making, 2nd pbk. ed, Social Institutions and Social Change (New York: Aldine de Gruyter, 2003). p. 3. [30] Volnei Garrafa, Thiago Rocha Da Cunha, and Camilo Manchola, “Access to Healthcare: A Central Question within Brazilian Bioethics,” Cambridge Quarterly of Healthcare Ethics 27, no. 3 (July 2018): 431–39, https://doi.org/10.1017/S0963180117000810. [31] Jonsen, “Social Responsibilities of Bioethics.” [32] Evans, The History and Future of Bioethics. p. 75-79, 94-96. [33] Julian Savulescu, “Bioethics: Why Philosophy Is Essential for Progress,” Journal of Medical Ethics 41, no. 1 (January 2015): 28–33, https://doi.org/10.1136/medethics-2014-102284. [34] Silvia Camporesi and Giulia Cavaliere, “Can Bioethics Be an Honest Way of Making a Living? A Reflection on Normativity, Governance and Expertise,” Journal of Medical Ethics 47, no. 3 (March 2021): 159–63, https://doi.org/10.1136/medethics-2019-105954; Jackie Leach Scully, “The Responsibilities of the Engaged Bioethicist: Scholar, Advocate, Activist,” Bioethics 33, no. 8 (October 2019): 872–80, https://doi.org/10.1111/bioe.12659. 
[35] Philip Mirowski, “The Philosophical Bases of Institutionalist Economics,” Journal of Economic Issues, Evolutionary Economics I: Foundations of Institutional Thought, 21, no. 3 (September 1987): 1001–38. [36] David Kennedy, “The International Human Rights Movement: Part of the Problem?,” Harvard Human Rights Journal 15 (2002): 101–25. [37] Richard Rorty, “Pragmatism, Relativism, and Irrationalism,” Proceedings and Addresses of the American Philosophical Association 53, no. 6 (August 1980): 717+719-738. [38]. Mirowski, “The Philosophical Bases of Institutionalist Economics.” [39]. Glenn McGee, ed., Pragmatic Bioethics, 2nd ed, Basic Bioethics (Cambridge, Mass: MIT Press, 2003). [40]. Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 7th ed (New York: Oxford University Press, 2013). [41]. Montgomery, “Bioethics as a Governance Practice.” [42]. Debora Diniz and Giselle Carino, “What Can Be Learned from the Global South on Abortion and How We Can Learn?,” Developing World Bioethics 23, no. 1 (March 2023): 3–4, https://doi.org/10.1111/dewb.12385. [43]. International Bioethics Committee, On Social Responsibility and Health Report (Paris: Unesco, 2010). [44]. Cristina Brandão et al., “Social Responsibility: A New Paradigm of Hospital Governance?,” Health Care Analysis 21, no. 4 (December 2013): 390–402, https://doi.org/10.1007/s10728-012-0206-3. [45] Intissar Haddiya, Taha Janfi, and Mohamed Guedira, “Application of the Concepts of Social Responsibility, Sustainability, and Ethics to Healthcare Organizations,” Risk Management and Healthcare Policy Volume 13 (August 2020): 1029–33, https://doi.org/10.2147/RMHP.S258984. [46]The Biopharmaceutical Bioethics Working Group et al., “Considerations for Applying Bioethics Norms to a Biopharmaceutical Industry Setting,” BMC Medical Ethics 22, no. 1 (December 2021): 31–41, https://doi.org/10.1186/s12910-021-00600-y. [47] Anne Van Aaken and Betül Simsek, “Rewarding in International Law,” American Journal of International Law 115, no. 2 (April 2021): 195–241, https://doi.org/10.1017/ajil.2021.2. [48] Jennifer E. Miller, “Bioethical Accreditation or Rating Needed to Restore Trust in Pharma,” Nature Medicine 19, no. 3 (March 2013): 261–261, https://doi.org/10.1038/nm0313-261. [49] John Hardwig, “The Stockholder – A Lesson for Business Ethics from Bioethics?,” Journal of Business Ethics 91, no. 3 (February 2010): 329–41, https://doi.org/10.1007/s10551-009-0086-0. [50] Stefan van Uden, “Taking up Bioethical Responsibility?: The Role of Global Bioethics in the Social Responsibility of Pharmaceutical Corporations Operating in Developing Countries” (Mestrado, Coimbra, Coimbra University, 2012). [51] María Peana Chivite and Sara Gallardo, “La bioética en la empresa: el caso particular de la Responsabilidad Social Corporativa,” Revista Internacional de Organizaciones, no. 13 (January 12, 2015): 55–81, https://doi.org/10.17345/rio13.55-81. [52] Teubner argues that social spheres tend to develop solutions autonomously, but one sphere interfering in the way other spheres govern themselves tends to result in ineffective regulation and demobilization of their autonomous rule-making capabilities. These spheres should develop “reflexion mechanisms” that enable the exchange of their social knowledge and provide effective, non-damaging solutions to social issues. See Gunther Teubner, “Substantive and Reflexive Elements in Modern Law,” Law & Society Review 17, no. 2 (1983): 239–85, https://doi.org/10.2307/3053348.
