Journal articles on the topic "Subspaces methods"

To see other types of publications on this topic, follow the link: Subspaces methods.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Browse the top 50 journal articles for research on the topic "Subspaces methods".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, provided that these details are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Eiermann, Michael, and Oliver G. Ernst. "Geometric aspects of the theory of Krylov subspace methods." Acta Numerica 10 (May 2001): 251–312. http://dx.doi.org/10.1017/s0962492901000046.

Abstract:
The development of Krylov subspace methods for the solution of operator equations has shown that two basic construction principles underlie the most commonly used algorithms: the orthogonal residual (OR) and minimal residual (MR) approaches. It is shown that these can both be formulated as techniques for solving an approximation problem on a sequence of nested subspaces of a Hilbert space, an abstract problem not necessarily related to an operator equation. Essentially all Krylov subspace algorithms result when these subspaces form a Krylov sequence. The well-known relations among the iterates and residuals of MR/OR pairs are shown to hold also in this rather general setting. We further show that a common error analysis for these methods involving the canonical angles between subspaces allows many of the known residual and error bounds to be derived in a simple and consistent manner. An application of this analysis to compact perturbations of the identity shows that MR/OR pairs of Krylov subspace methods converge q-superlinearly when applied to such operator equations.
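The well-known MR/OR residual relations referred to in this abstract reduce, for the familiar FOM/GMRES pair, to a compact identity; the display below states it in generic notation (not the paper's abstract Hilbert-space form), with s_k the sine from the k-th Givens rotation of the Arnoldi least-squares problem, which can be read as the sine of an angle between the relevant subspaces.

```latex
% Classical MR/OR (GMRES/FOM) residual relation, stated in generic notation.
\[
  \|r_k^{\mathrm{MR}}\| = s_k\,\|r_{k-1}^{\mathrm{MR}}\|,
  \qquad
  \|r_k^{\mathrm{OR}}\| = \frac{\|r_k^{\mathrm{MR}}\|}{\sqrt{1 - s_k^2}}
  = \frac{\|r_k^{\mathrm{MR}}\|}
         {\sqrt{1 - \bigl(\|r_k^{\mathrm{MR}}\| / \|r_{k-1}^{\mathrm{MR}}\|\bigr)^{2}}}.
\]
```

In particular, stagnation of the MR iterate (s_k close to 1) corresponds to a large or undefined OR residual, the familiar "peak/plateau" behaviour.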
2

Freund, Roland W. "Model reduction methods based on Krylov subspaces." Acta Numerica 12 (May 2003): 267–319. http://dx.doi.org/10.1017/s0962492902000120.

Abstract:
In recent years, reduced-order modelling techniques based on Krylov-subspace iterations, especially the Lanczos algorithm and the Arnoldi process, have become popular tools for tackling the large-scale time-invariant linear dynamical systems that arise in the simulation of electronic circuits. This paper reviews the main ideas of reduced-order modelling techniques based on Krylov subspaces and describes some applications of reduced-order modelling in circuit simulation.
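To make the projection idea concrete, here is a minimal numpy sketch (not Freund's algorithms): an Arnoldi basis of K_m(A, b) gives a one-sided projection of a state-space model (A, b, c) that matches the leading Markov parameters; the shifted and two-sided (Lanczos) variants used in circuit simulation follow the same pattern. All names and the toy system are assumptions.

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis V of the Krylov subspace K_m(A, b) (plain Arnoldi)."""
    n = len(b)
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)  # assumes no breakdown in this toy
    return V

def reduce_model(A, b, c, m):
    """One-sided Krylov projection of (A, b, c); the reduced triple matches the
    first m Markov parameters c^T A^k b, k = 0..m-1 (a simplified stand-in for
    the shifted/two-sided variants used in circuit simulation)."""
    V = arnoldi(A, b, m)
    return V.T @ A @ V, V.T @ b, V.T @ c

# toy usage: reduce a random 200-state system to order 5
rng = np.random.default_rng(0)
n = 200
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
b, c = rng.standard_normal(n), rng.standard_normal(n)
Am, bm, cm = reduce_model(A, b, c, 5)
print(c @ b, cm @ bm)              # 0th Markov parameter is matched
print(c @ (A @ b), cm @ (Am @ bm)) # 1st Markov parameter is matched
```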
3

Sia, Florence, and Rayner Alfred. "Tree-based mining contrast subspace." International Journal of Advances in Intelligent Informatics 5, no. 2 (July 23, 2019): 169. http://dx.doi.org/10.26555/ijain.v5i2.359.

Abstract:
All existing contrast subspace mining methods employ a density-based likelihood contrast scoring function to measure the likelihood of a query object to a target class against the other class in a subspace. However, the density tends to decrease as the dimensionality of a subspace increases, which causes these methods to identify inaccurate contrast subspaces for the given query object. This paper proposes a novel contrast subspace mining method that employs a tree-based likelihood contrast scoring function that is not affected by the dimensionality of subspaces. The tree-based scoring measure recursively binary-partitions the subspace so that objects belonging to the target class are grouped together and separated from objects belonging to the other class. In a contrast subspace, the query object should fall into a group containing more objects of the target class than of the other class. The method incorporates a feature selection approach to find a subset of one-dimensional subspaces with high likelihood contrast scores with respect to the query object; the contrast subspaces are then searched through this selected subset of one-dimensional subspaces. An experiment is conducted to evaluate the effectiveness of the tree-based method in terms of classification accuracy. The experimental results show that the proposed method achieves higher classification accuracy and outperforms the existing method on several real-world data sets.
4

LENG, JINSONG, and ZHIHU HUANG. "OUTLIERS DETECTION WITH CORRELATED SUBSPACES FOR HIGH DIMENSIONAL DATASETS." International Journal of Wavelets, Multiresolution and Information Processing 09, no. 02 (March 2011): 227–36. http://dx.doi.org/10.1142/s0219691311004067.

Abstract:
Detecting outliers in high dimensional datasets is quite a difficult data mining task. Mining outliers in subspaces seems to be a promising solution, because outliers may be embedded in some interesting subspaces. Due to the existence of many irrelevant dimensions in high dimensional datasets, it is of great importance to eliminate the irrelevant or unimportant dimensions and identify outliers in interesting subspaces with strong correlation. Normally, the correlation among dimensions can be determined by traditional feature selection techniques and subspace-based clustering methods. The dimension-growth subspace clustering techniques find interesting subspaces in relatively lower possible dimension space, while dimension-growth approaches intend to find the maximum cliques in high dimensional datasets. This paper presents a novel approach by identifying outliers in correlated subspaces. The degree of correlation among dimensions is measured in terms of the mean squared residue. In doing so, we employ the frequent pattern algorithms to find the correlated subspaces. Based on the correlated subspaces obtained, outliers are distinguished from the projected subspaces by using classical outlier detection techniques. Empirical studies show that the proposed approach can identify outliers effectively in high dimensional datasets.
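The correlation measure named in this abstract, the mean squared residue, is easy to state; the sketch below is a generic numpy version of that score for a candidate subspace (a submatrix of objects by selected dimensions), not the authors' code.

```python
import numpy as np

def mean_squared_residue(X):
    """Mean squared residue (Cheng-Church style) of a submatrix X whose rows are
    objects and whose columns are the dimensions of a candidate subspace; small
    values indicate strongly correlated dimensions."""
    row_mean = X.mean(axis=1, keepdims=True)
    col_mean = X.mean(axis=0, keepdims=True)
    return float(((X - row_mean - col_mean + X.mean()) ** 2).mean())

# toy usage: three dimensions driven by one latent factor vs. three unrelated ones
rng = np.random.default_rng(1)
base = rng.standard_normal((100, 1))
correlated = base + 0.05 * rng.standard_normal((100, 3))
unrelated = rng.standard_normal((100, 3))
print(mean_squared_residue(correlated), mean_squared_residue(unrelated))
```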
5

Laaksonen, Jorma, and Erkki Oja. "Learning Subspace Classifiers and Error-Corrective Feature Extraction." International Journal of Pattern Recognition and Artificial Intelligence 12, no. 04 (June 1998): 423–36. http://dx.doi.org/10.1142/s0218001498000270.

Abstract:
Subspace methods are a powerful class of statistical pattern classification algorithms. The subspaces form semiparametric representations of the pattern classes in the form of principal components. In this sense, subspace classification methods are an application of classical optimal data compression techniques. Additionally, the subspace formalism can be given a neural network interpretation. There are learning versions of the subspace classification methods, in which error-driven learning procedures are applied to the subspaces in order to reduce the number of misclassified vectors. An algorithm for iterative selection of the subspace dimensions is presented in this paper. Likewise, a modified formula for calculating the projection lengths in the subspaces is investigated. The principle of adaptive learning in subspace methods can further be applied to feature extraction. In our work, we have studied two adaptive feature extraction schemes. The adaptation process is directed by errors occurring in the classifier. Unlike most traditional classifier models which take the preceding feature extraction stage as given, this scheme allows for reducing the loss of information in the feature extraction stage. The enhanced overall classification performance resulting from the added adaptivity is demonstrated with experiments in which recognition of handwritten digits has been used as an exemplary application.
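For orientation, the following is a minimal sketch of the baseline subspace classification rule that such learning subspace methods refine: one PCA-style subspace per class and assignment by the longest projection. It is a generic CLAFIC-like illustration with assumed class and dimension parameters, not the authors' error-corrective learning scheme.

```python
import numpy as np

class SubspaceClassifier:
    """Basic CLAFIC-style classifier: one principal-component subspace per class,
    classification by squared projection length onto each class subspace."""
    def __init__(self, dim=5):
        self.dim = dim
        self.bases = {}

    def fit(self, X, y):
        for label in np.unique(y):
            Xc = X[y == label]
            # class subspaces are fit on raw (uncentered) data, as in CLAFIC
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            self.bases[label] = Vt[:self.dim].T     # d x dim orthonormal basis
        return self

    def predict(self, X):
        labels = list(self.bases)
        # squared projection length of each sample onto each class subspace
        scores = np.stack([((X @ self.bases[l]) ** 2).sum(axis=1) for l in labels], axis=1)
        return np.array(labels)[scores.argmax(axis=1)]

# toy usage: two well-separated Gaussian classes in 10 dimensions
rng = np.random.default_rng(2)
X = np.vstack([rng.standard_normal((50, 10)) + 3, rng.standard_normal((50, 10)) - 3])
y = np.array([0] * 50 + [1] * 50)
print((SubspaceClassifier(dim=3).fit(X, y).predict(X) == y).mean())
```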
6

Seshadri, P., S. Yuchi, G. T. Parks, and S. Shahpar. "Supporting multi-point fan design with dimension reduction." Aeronautical Journal 124, no. 1279 (July 27, 2020): 1371–98. http://dx.doi.org/10.1017/aer.2020.50.

Abstract:
Motivated by the idea of turbomachinery active subspace performance maps, this paper studies dimension reduction in turbomachinery 3D CFD simulations. First, we show that these subspaces exist across different blades—under the same parametrisation—largely independent of their Mach number or Reynolds number. This is demonstrated via a numerical study on three different blades. Then, in an attempt to reduce the computational cost of identifying a suitable dimension reducing subspace, we examine statistical sufficient dimension reduction methods, including sliced inverse regression, sliced average variance estimation, principal Hessian directions and contour regression. Unsatisfied by these results, we evaluate a new idea based on polynomial variable projection—a non-linear least-squares problem. Our results using polynomial variable projection clearly demonstrate that one can accurately identify dimension reducing subspaces for turbomachinery functionals at a fraction of the cost associated with prior methods. We apply these subspaces to the problem of comparing design configurations across different flight points on a working line of a fan blade. We demonstrate how designs that offer a healthy compromise between performance at cruise and sea-level conditions can be easily found by visually inspecting their subspaces.
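Of the sufficient dimension reduction methods compared in this abstract, sliced inverse regression is the simplest to outline; the sketch below is a generic implementation on a toy problem (my own code and parameter choices, not the authors').

```python
import numpy as np

def sliced_inverse_regression(X, y, n_directions=2, n_slices=10):
    """Minimal sliced inverse regression (SIR): whiten X, average the whitened
    inputs within slices of y, and take the leading eigenvectors of the
    between-slice covariance as dimension-reducing directions."""
    n, d = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T     # whitening transform
    Z = (X - mu) @ W
    order = np.argsort(y)                            # slice by the response
    M = np.zeros((d, d))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    _, vecs = np.linalg.eigh(M)
    directions = W @ vecs[:, ::-1][:, :n_directions] # map back to original scale
    return directions / np.linalg.norm(directions, axis=0)

# toy usage: the response depends on a single direction of X
rng = np.random.default_rng(3)
X = rng.standard_normal((500, 6))
beta = np.array([1.0, -1.0, 0, 0, 0, 0]) / np.sqrt(2)
y = (X @ beta) ** 3 + 0.1 * rng.standard_normal(500)
print(sliced_inverse_regression(X, y, n_directions=1).ravel().round(2))
```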
7

Nagi, Sajid, Dhruba Kumar Bhattacharyya, and Jugal K. Kalita. "A Preview on Subspace Clustering of High Dimensional Data." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 6, no. 3 (May 21, 2013): 441–48. http://dx.doi.org/10.24297/ijct.v6i3.4466.

Abstract:
When clustering high dimensional data, traditional clustering methods are found to be lacking since they consider all of the dimensions of the dataset in discovering clusters whereas only some of the dimensions are relevant. This may give rise to subspaces within the dataset where clusters may be found. Using feature selection, we can remove irrelevant and redundant dimensions by analyzing the entire dataset. The problem of automatically identifying clusters that exist in multiple and maybe overlapping subspaces of high dimensional data, allowing better clustering of the data points, is known as Subspace Clustering. There are two major approaches to subspace clustering based on search strategy. Top-down algorithms find an initial clustering in the full set of dimensions and evaluate the subspaces of each cluster, iteratively improving the results. Bottom-up approaches start from finding low dimensional dense regions, and then use them to form clusters. Based on a survey on subspace clustering, we identify the challenges and issues involved with clustering gene expression data.
8

Zhou, Jie, Chucheng Huang, Can Gao, Yangbo Wang, Xinrui Shen, and Xu Wu. "Weighted Subspace Fuzzy Clustering with Adaptive Projection." International Journal of Intelligent Systems 2024 (January 31, 2024): 1–18. http://dx.doi.org/10.1155/2024/6696775.

Abstract:
Available subspace clustering methods often contain two stages, finding low-dimensional subspaces of data and then conducting clustering in the subspaces. Therefore, how to find the subspaces that better represent the original data becomes a research challenge. However, most of the reported methods are based on the premise that the contributions of different features are equal, which may not be ideal for real scenarios, i.e., the contributions of the important features may be overwhelmed by a large amount of redundant features. In this study, a weighted subspace fuzzy clustering (WSFC) model with a locality preservation mechanism is presented, which can adaptively capture the importance of different features, achieve an optimal lower-dimensional subspace, and perform fuzzy clustering simultaneously. Since each feature can be well quantified in terms of its importance, the proposed model exhibits the sparsity and robustness of fuzzy clustering. The intrinsic geometrical structures of data can also be preserved while enhancing the interpretability of clustering tasks. Extensive experimental results show that WSFC can allocate appropriate weights to different features according to data distributions and clustering tasks and achieve superior performance compared to other clustering models on real-world datasets.
9

Pang, Guansong, Kai Ming Ting, David Albrecht, and Huidong Jin. "ZERO++: Harnessing the Power of Zero Appearances to Detect Anomalies in Large-Scale Data Sets." Journal of Artificial Intelligence Research 57 (December 29, 2016): 593–620. http://dx.doi.org/10.1613/jair.5228.

Abstract:
This paper introduces a new unsupervised anomaly detector called ZERO++ which employs the number of zero appearances in subspaces to detect anomalies in categorical data. It is unique in that it works in regions of subspaces that are not occupied by data; whereas existing methods work in regions occupied by data. ZERO++ examines only a small number of low dimensional subspaces to successfully identify anomalies. Unlike existing frequency-based algorithms, ZERO++ does not involve subspace pattern searching. We show that ZERO++ is better than or comparable with the state-of-the-art anomaly detection methods over a wide range of real-world categorical and numeric data sets; and it is efficient with linear time complexity and constant space complexity which make it a suitable candidate for large-scale data sets.
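A rough sketch of the zero-appearance idea (not the published ZERO++ implementation): an object scores higher the more random low-dimensional subspaces there are in which its attribute-value combination never appears in a reference subsample. Function names, subspace dimension, and sample size below are assumptions.

```python
import numpy as np

def zero_appearance_scores(X, n_subspaces=50, q=2, sample_size=64, rng=None):
    """Score each categorical object by the fraction of random q-dimensional
    subspaces in which its value combination has zero appearances in a random
    reference subsample (higher = more anomalous). A rough sketch only."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    scores = np.zeros(n)
    for _ in range(n_subspaces):
        dims = rng.choice(d, size=q, replace=False)
        sample = X[rng.choice(n, size=min(sample_size, n), replace=False)][:, dims]
        seen = {tuple(row) for row in sample}
        scores += [tuple(row) not in seen for row in X[:, dims]]
    return scores / n_subspaces

# toy usage: object 0 is given values that never co-occur elsewhere
rng = np.random.default_rng(4)
X = rng.integers(0, 3, size=(300, 8))
X[0] = 9
print(zero_appearance_scores(X, rng=rng)[:5].round(2))  # first score is clearly larger
```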
10

Il’in, V. P. "Projection Methods in Krylov Subspaces." Journal of Mathematical Sciences 240, no. 6 (June 28, 2019): 772–82. http://dx.doi.org/10.1007/s10958-019-04395-7.

11

Tianhe, Yin, Mohammad Reza Mahmoudi, Sultan Noman Qasem, Bui Anh Tuan, and Kim-Hung Pho. "Numerical function optimization by conditionalized PSO algorithm." Journal of Intelligent & Fuzzy Systems 39, no. 3 (October 7, 2020): 3275–95. http://dx.doi.org/10.3233/jifs-191685.

Abstract:
A lot of research has been directed to the new optimizers that can find a suboptimal solution for any optimization problem named as heuristic black-box optimizers. They can find the suboptimal solutions of an optimization problem much faster than the mathematical programming methods (if they find them at all). Particle swarm optimization (PSO) is an example of this type. In this paper, a new modified PSO has been proposed. The proposed PSO incorporates conditional learning behavior among birds into the PSO algorithm. Indeed, the particles, little by little, learn how they should behave in some similar conditions. The proposed method is named Conditionalized Particle Swarm Optimization (CoPSO). The problem space is first divided into a set of subspaces in CoPSO. In CoPSO, any particle inside a subspace will be inclined towards its best experienced location if the particles in its subspace have low diversity; otherwise, it will be inclined towards the global best location. The particles also learn to speed-up in the non-valuable subspaces and to speed-down in the valuable subspaces. The performance of CoPSO has been compared with the state-of-the-art methods on a set of standard benchmark functions.
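A hedged sketch of the conditional behaviour described above, not the authors' CoPSO: the search box is cut into grid cells that play the role of subspaces, and a particle's pull is biased toward its personal best when its cell holds a low-diversity crowd and toward the global best otherwise. The cell count, diversity threshold, and coefficients are assumptions.

```python
import numpy as np

def copso_sketch(f, lo, hi, dim, n_particles=30, n_iter=300, n_cells=4, w=0.7, seed=0):
    """Grid-conditioned PSO sketch: the cognitive/social weights of each particle
    depend on the diversity of the particles sharing its grid cell ("subspace")."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(n_iter):
        cells = np.floor((x - lo) / (hi - lo) * n_cells).clip(0, n_cells - 1).astype(int)
        keys = [tuple(row) for row in cells]
        for i in range(n_particles):
            mates = x[[k == keys[i] for k in keys]]
            low_diversity = mates.std(axis=0).mean() < 0.05 * (hi - lo)
            c_p, c_g = (2.0, 0.5) if low_diversity else (0.5, 2.0)
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c_p * r1 * (pbest[i] - x[i]) + c_g * r2 * (gbest - x[i])
            x[i] = np.clip(x[i] + v[i], lo, hi)
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i].copy(), fx
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())

# toy usage: minimize the 5-D sphere function on [-5, 5]^5
best, val = copso_sketch(lambda p: float(np.sum(p ** 2)), -5.0, 5.0, 5)
print(round(val, 6))
```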
12

Li, Jiamu, Ji Zhang, Mohamed Jaward Bah, Jian Wang, Youwen Zhu, Gaoming Yang, Lingling Li, and Kexin Zhang. "An Auto-Encoder with Genetic Algorithm for High Dimensional Data: Towards Accurate and Interpretable Outlier Detection." Algorithms 15, no. 11 (November 15, 2022): 429. http://dx.doi.org/10.3390/a15110429.

Abstract:
When dealing with high-dimensional data, such as in biometric, e-commerce, or industrial applications, it is extremely hard to capture the abnormalities in full space due to the curse of dimensionality. Furthermore, it is becoming increasingly complicated but essential to provide interpretations for outlier detection results in high-dimensional space as a consequence of the large number of features. To alleviate these issues, we propose a new model based on a Variational AutoEncoder and Genetic Algorithm (VAEGA) for detecting outliers in subspaces of high-dimensional data. The proposed model employs a neural network to create a probabilistic dimensionality reduction variational autoencoder (VAE) that applies its low-dimensional hidden space to characterize the high-dimensional inputs. Then, the hidden vector is sampled randomly from the hidden space to reconstruct the data so that it closely matches the input data. The reconstruction error is then computed to determine an outlier score, and samples exceeding the threshold are tentatively identified as outliers. In the second step, a genetic algorithm (GA) is used as a basis for examining and analyzing the abnormal subspace of the outlier set obtained by the VAE layer. After encoding the outlier dataset’s subspaces, the degree of anomaly for the detected subspaces is calculated using the redefined fitness function. Finally, the abnormal subspace is calculated for the detected point by selecting the subspace with the highest degree of anomaly. The clustering of abnormal subspaces helps filter outliers that are mislabeled (false positives), and the VAE layer adjusts the network weights based on the false positives. When compared to other methods using five public datasets, the VAEGA outlier detection model results are highly interpretable and outperform or have competitive performance compared to current contemporary methods.
13

Filisbino, Tiene A., Gilson A. Giraldi, and Carlos E. Thomaz. "Comparing Ranking Methods for Tensor Components in Multilinear and Concurrent Subspace Analysis with Applications in Face Images." International Journal of Image and Graphics 15, no. 01 (January 2015): 1550006. http://dx.doi.org/10.1142/s0219467815500060.

Abstract:
In the area of multi-dimensional image databases modeling, the multilinear principal component analysis (MPCA) and concurrent subspace analysis (CSA) approaches were independently proposed and applied for mining image databases. The former follows the classical principal component analysis (PCA) paradigm that centers the sample data before subspace learning. The CSA, on the other hand, performs the learning procedure using the raw data. Besides, the corresponding tensor components have been ranked in order to identify the principal tensor subspaces for separating sample groups for face image analysis and gait recognition. In this paper, we first demonstrate that if CSA receives centered input samples and we consider full projection matrices, then the obtained solution is equal to the one generated by MPCA. Then, we consider the general problem of ranking tensor components. We examine the theoretical aspects of typical solutions in this field: (a) Estimating the covariance structure of the database; (b) Computing discriminant weights through separating hyperplanes; (c) Application of the Fisher criterion. We discuss these solutions for tensor subspaces learned using centered data (MPCA) and raw data (CSA). In the experimental results we focus on tensor principal components selected by the mentioned techniques for face image analysis considering gender classification as well as reconstruction problems.
14

Lovitz, Benjamin, and Nathaniel Johnston. "Entangled subspaces and generic local state discrimination with pre-shared entanglement." Quantum 6 (July 7, 2022): 760. http://dx.doi.org/10.22331/q-2022-07-07-760.

Abstract:
Walgate and Scott have determined the maximum number of generic pure quantum states that can be unambiguously discriminated by an LOCC measurement [Journal of Physics A: Mathematical and Theoretical, 41:375305, 08 2008]. In this work, we determine this number in a more general setting in which the local parties have access to pre-shared entanglement in the form of a resource state. We find that, for an arbitrary pure resource state, this number is equal to the Krull dimension of (the closure of) the set of pure states obtainable from the resource state by SLOCC. Surprisingly, a generic resource state maximizes this number. Local state discrimination is closely related to the topic of entangled subspaces, which we study in its own right. We introduce r-entangled subspaces, which naturally generalize previously studied spaces to higher multipartite entanglement. We use algebraic-geometric methods to determine the maximum dimension of an r-entangled subspace, and present novel explicit constructions of such spaces. We obtain similar results for symmetric and antisymmetric r-entangled subspaces, which correspond to entangled subspaces of bosonic and fermionic systems, respectively.
15

Il’in, V. P. "On Moment Methods in Krylov Subspaces." Doklady Mathematics 102, no. 3 (November 2020): 478–82. http://dx.doi.org/10.1134/s1064562420060241.

16

Vourdas, A. "Phase space methods: independence of subspaces." Journal of Physics: Conference Series 1194 (April 2019): 012111. http://dx.doi.org/10.1088/1742-6596/1194/1/012111.

17

Il’in, V. P. "Biconjugate direction methods in Krylov subspaces." Journal of Applied and Industrial Mathematics 4, no. 1 (January 2010): 65–78. http://dx.doi.org/10.1134/s1990478910010102.

18

Il’in, V. P. "Least Squares Methods in Krylov Subspaces." Journal of Mathematical Sciences 224, no. 6 (June 27, 2017): 900–910. http://dx.doi.org/10.1007/s10958-017-3460-y.

19

Ito, Kazufumi, and Jari Toivanen. "Preconditioned iterative methods on sparse subspaces." Applied Mathematics Letters 19, no. 11 (November 2006): 1191–97. http://dx.doi.org/10.1016/j.aml.2005.11.027.

20

Wang, Xinsheng, Chenxu Wang, and Mingyan Yu. "The Minimum Norm Least-Squares Solution in Reduction by Krylov Subspace Methods." Journal of Circuits, Systems and Computers 26, no. 01 (October 4, 2016): 1750006. http://dx.doi.org/10.1142/s0218126617500062.

Abstract:
In recent years, model order reduction (MOR) of interconnect systems has become an important technique for reducing computational complexity and improving verification efficiency in nanometer VLSI design. The Krylov subspace techniques in existing MOR methods are efficient and have become the methods of choice for generating small-scale macro-models of the large-scale multi-port RCL networks that arise in VLSI interconnect analysis. Although Krylov subspace projection-based MOR methods have been widely studied over the past decade in the electrical computer-aided design community, none of them provides an optimal solution for a given order. In this paper, a minimum norm least-squares solution for MOR by Krylov subspace methods is proposed. The method is based on generalized inverse (or pseudo-inverse) theory. This enables a new criterion for Krylov subspace projection-based MOR methods. Two numerical examples are used to test the PRIMA method, with the method proposed in this paper serving as a standard model.
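The core ingredient, the minimum-norm least-squares solution given by the Moore-Penrose pseudo-inverse, can be illustrated generically in a few lines of numpy; this is the standard linear-algebra fact, not the paper's MOR formulation, and the toy matrix is an assumption.

```python
import numpy as np

# Minimum-norm least-squares solution x = A^+ b of a rank-deficient system:
# among all least-squares minimizers it has the smallest Euclidean norm.
rng = np.random.default_rng(5)
A = rng.standard_normal((8, 5)) @ np.diag([3.0, 2.0, 1.0, 0.0, 0.0]) @ rng.standard_normal((5, 5))
b = rng.standard_normal(8)

x_pinv = np.linalg.pinv(A) @ b                    # explicit Moore-Penrose pseudo-inverse
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]    # SVD-based, same minimum-norm solution
print(np.allclose(x_pinv, x_lstsq), round(float(np.linalg.norm(x_pinv)), 4))
```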
21

Wu, Zhihong, Weisong Gu, Yuan Zhu, and Ke Lu. "Current Control Methods for an Asymmetric Six-Phase Permanent Magnet Synchronous Motor." Electronics 9, no. 1 (January 16, 2020): 172. http://dx.doi.org/10.3390/electronics9010172.

Abstract:
Via the vector space decomposition (VSD) transformation, the currents in an asymmetric six-phase permanent magnet synchronous motor (ASP_PMSM) can be decoupled into three orthogonal subspaces. Control of α–β currents in α–β subspace is important for torque regulation, while control of x-y currents in x-y subspace can suppress the harmonics due to the dead time of converters and other nonlinear factors. The zero-sequence components in O1-O2 subspace are 0 due to isolated neutral points. In α–β subspace, a state observer is constructed by introducing the error variable between the real current and the internal model current based on the internal model control method, which can improve the current control performance compared to the traditional internal model control method. In x–y subspace, in order to suppress the current harmonics, an adaptive-linear-neuron (ADALINE)-based control algorithm is employed to generate the compensation voltage, which is self-tuned by minimizing the estimated current distortion through the least mean square (LMS) algorithm. The modulation technique to implement the four-dimensional current control based on the three-phase SVPWM is given. The experimental results validate the robustness and effectiveness of the proposed control method.
22

Ougraz, Hassan, Said Safi, Ahmed Boumezzough, and Miloud Frikel. "Performance Comparison of Several Algorithms for Localization of Wideband Sources." Journal of Telecommunications and Information Technology, no. 3 (September 1, 2023): 21–29. http://dx.doi.org/10.26636/jtit.2023.3.1359.

Abstract:
In recent years, researchers have tried to estimate the direction-of-arrival (DOA) of wideband sources and several novel techniques have been proposed. In this paper, we compare six algorithms for calculating the DOA of broadband signals, namely coherent subspace signal method (CSSM), two-sided correlation transformation (TCT), incoherent multiple signal classification (IMUSIC), test of orthogonality of frequency subspaces (TOFS), test of orthogonality of projected subspaces (TOPS), and squared TOPS (S-TOPS). The comparison is made through computer simulations for different parameters, such as signal-to-noise ratio (SNR), in order to establish the efficiency and performance of the discussed methods in noisy environments. CSSM and TCT require initial values, but the remaining approaches do not need any preprocessing.
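Of the six methods compared, incoherent MUSIC (IMUSIC) is the most straightforward to outline: run narrowband MUSIC in each frequency bin and average the pseudo-spectra. The sketch below assumes a uniform linear array and simulated narrowband snapshots per bin; the geometry, names, and parameters are assumptions, not the authors' setup.

```python
import numpy as np

def imusic_spectrum(snapshots, freqs, d, c, n_sources, angles_deg):
    """Incoherent wideband MUSIC for a uniform linear array: compute the
    narrowband MUSIC pseudo-spectrum in every frequency bin and average."""
    M = snapshots[0].shape[0]
    angles = np.deg2rad(angles_deg)
    spectrum = np.zeros(len(angles_deg))
    for Xk, fk in zip(snapshots, freqs):
        R = Xk @ Xk.conj().T / Xk.shape[1]              # sample covariance in bin k
        _, vecs = np.linalg.eigh(R)                     # ascending eigenvalues
        En = vecs[:, :M - n_sources]                    # noise subspace
        steering = np.exp(-2j * np.pi * fk * d / c *
                          np.outer(np.arange(M), np.sin(angles)))
        proj = En.conj().T @ steering
        spectrum += 1.0 / np.maximum((np.abs(proj) ** 2).sum(axis=0), 1e-12)
    return spectrum / len(freqs)

# toy usage (assumed geometry): 8 sensors, two sources at -20 and 30 degrees
rng = np.random.default_rng(6)
M, T, c, d = 8, 200, 1500.0, 1.0
freqs = [300.0, 400.0, 500.0]
true_angles = np.deg2rad([-20.0, 30.0])
snapshots = []
for fk in freqs:
    A = np.exp(-2j * np.pi * fk * d / c * np.outer(np.arange(M), np.sin(true_angles)))
    S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
    N = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
    snapshots.append(A @ S + N)

grid = np.arange(-90.0, 90.5, 0.5)
p = imusic_spectrum(snapshots, freqs, d, c, n_sources=2, angles_deg=grid)
peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
print(sorted(grid[peaks[np.argsort(p[peaks])[-2:]]]))   # approximately [-20.0, 30.0]
```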
23

Ovtchinnikov, Evgueni E., and Leonidas S. Xanthis. "Iterative subspace correction methods for thin elastic structures and Korn's type inequality in subspaces." Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 454, no. 1976 (August 8, 1998): 2023–39. http://dx.doi.org/10.1098/rspa.1998.0247.

24

Aldroubi, Akram. "A Review of Subspace Segmentation: Problem, Nonlinear Approximations, and Applications to Motion Segmentation." ISRN Signal Processing 2013 (February 13, 2013): 1–13. http://dx.doi.org/10.1155/2013/417492.

Abstract:
The subspace segmentation problem is fundamental in many applications. The goal is to cluster data drawn from an unknown union of subspaces. In this paper we state the problem and describe its connection to other areas of mathematics and engineering. We then review the mathematical and algorithmic methods created to solve this problem and some of its particular cases. We also describe the problem of motion tracking in videos and its connection to the subspace segmentation problem and compare the various techniques for solving it.
25

Al-Halees, Hasan, and Richard J. Fleming. "Extreme Point Methods and Banach-Stone Theorems." Journal of the Australian Mathematical Society 75, no. 1 (August 2003): 125–43. http://dx.doi.org/10.1017/s1446788700003505.

Abstract:
AbstractAn operator is said to be nice if its conjugate maps extreme points of the dual unit ball to extreme points. The classical Banach-Stone Theorem says that an isometry from a space of continuous functions on a compact Hausdorff space onto another such space is a weighted composition operator. One common proof of this result uses the fact that an isometry is a nice operator. We use extreme point methods and the notion of centralizer to characterize nice operators as operator weighted compositions on subspaces of spaces of continuous functions with values in a Banach space. Previous characterizations of isometries from a subspace M of C0( Q, X) into C0(K, Y) require Y to be strictly convex, but we are able to obtain some results without that assumption. Important use is made of a vector-valued version of the Choquet Boundary. We also characterize nice operators from one function module to another.
26

Salami, Mesbaholdin, Farzad Movahedi Sobhani, and Mohammad Sadegh Ghazizadeh. "Shared Subscribe Hyper Simulation Optimization (SUBHSO) Algorithm for Clustering Big Data – Using Big Databases of Iran Electricity Market." Applied Computer Systems 24, no. 1 (May 1, 2019): 49–60. http://dx.doi.org/10.2478/acss-2019-0007.

Abstract:
Abstract Many real world problems have big data, including recorded fields and/or attributes. In such cases, data mining requires dimension reduction techniques because there are serious challenges facing conventional clustering methods in dealing with big data. The subspace selection method is one of the most important dimension reduction techniques. In such methods, a selected set of subspaces is substituted for the general dataset of the problem and clustering is done using this set. This article introduces the Shared Subscribe Hyper Simulation Optimization (SUBHSO) algorithm to introduce the optimized cluster centres to a set of subspaces. SUBHSO uses an optimization loop for modifying and optimizing the coordinates of the cluster centres with the particle swarm optimization (PSO) and the fitness function calculation using the Monte Carlo simulation. The case study on the big data of Iran electricity market (IEM) has shown the improvement of the defined fitness function, which represents the cluster cohesion and separation relative to other dimension reduction algorithms.
27

Il’in, V. P. "Parallel Variable-Triangular Iterative Methods in Krylov Subspaces." Journal of Mathematical Sciences 255, no. 3 (April 29, 2021): 281–90. http://dx.doi.org/10.1007/s10958-021-05371-w.

28

Gurieva, Y. L., and V. P. Il’in. "On Coarse Grid Correction Methods in Krylov Subspaces." Journal of Mathematical Sciences 232, no. 6 (June 23, 2018): 774–82. http://dx.doi.org/10.1007/s10958-018-3907-9.

29

Il’in, V. P. "Two-Level Least Squares Methods in Krylov Subspaces." Journal of Mathematical Sciences 232, no. 6 (June 28, 2018): 892–902. http://dx.doi.org/10.1007/s10958-018-3916-8.

30

Gratton, Serge, and Philippe L. Toint. "Approximate invariant subspaces and quasi-newton optimization methods." Optimization Methods and Software 25, no. 4 (August 2010): 507–29. http://dx.doi.org/10.1080/10556780902992746.

31

Demmel, J. W. "Three methods for refining estimates of invariant subspaces." Computing 38, no. 1 (March 1987): 43–57. http://dx.doi.org/10.1007/bf02253743.

32

Zhang, Binbin, Weiwei Wang, and Xiangchu Feng. "Subspace Clustering with Sparsity and Grouping Effect." Mathematical Problems in Engineering 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/4787039.

Abstract:
Subspace clustering aims to group a set of data from a union of subspaces into the subspace from which it was drawn. It has become a popular method for recovering the low-dimensional structure underlying high-dimensional dataset. The state-of-the-art methods construct an affinity matrix based on the self-representation of the dataset and then use a spectral clustering method to obtain the final clustering result. These methods show that sparsity and grouping effect of the affinity matrix are important in recovering the low-dimensional structure. In this work, we propose a weighted sparse penalty and a weighted grouping effect penalty in modeling the self-representation of data points. The experimental results on Extended Yale B, USPS, and Berkeley 500 image segmentation datasets show that the proposed model is more effective than state-of-the-art methods in revealing the subspace structure underlying high-dimensional dataset.
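The pipeline described above (self-representation, affinity matrix, spectral clustering) can be sketched in miniature with a plain ridge-regularized self-representation standing in for the weighted sparse and grouping-effect penalties; this is a generic illustration that assumes scikit-learn, not the proposed model.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def self_representation_clustering(X, n_clusters, lam=0.1):
    """Minimal self-representation subspace clustering: C minimizes
    ||X - XC||_F^2 + lam ||C||_F^2, the diagonal is zeroed as a heuristic,
    the affinity |C| + |C|^T is fed to spectral clustering."""
    n = X.shape[1]                                   # columns are data points
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)      # ridge self-representation
    np.fill_diagonal(C, 0.0)                         # discourage trivial self-matches
    W = np.abs(C) + np.abs(C).T
    return SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                              random_state=0).fit_predict(W)

# toy usage: points drawn from two 2-D subspaces of R^10
rng = np.random.default_rng(7)
U1 = np.linalg.qr(rng.standard_normal((10, 2)))[0]
U2 = np.linalg.qr(rng.standard_normal((10, 2)))[0]
X = np.hstack([U1 @ rng.standard_normal((2, 40)), U2 @ rng.standard_normal((2, 40))])
print(self_representation_clustering(X, 2))
```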
33

Chen, Jing, and Yang Liu. "Wireless Sensor Network Localization Based on Semi-Supervised Local Subspace Alignment." Applied Mechanics and Materials 385-386 (August 2013): 1618–21. http://dx.doi.org/10.4028/www.scientific.net/amm.385-386.1618.

Abstract:
A semi-supervised local subspace alignment (SSLSA) algorithm for wireless sensor networks localization is presented. SSLSA method has the following features: 1) only using pair-wise distance between each node and its neighbors within a certain communication range, 2) aligning subspaces with the constraints of anchor nodes thus avoiding the accumulation of error, and 3) obtaining absolute positions of all sensor nodes directly without any other transformation process. The localization performance of this method is compared to the MDS-MAP(P, R) and LSA methods.
34

Galego, Elói Medina. "Decomposition Methods in Banach Spaces via Supplemented Subspaces Resembling Pełczyński’s Decomposition Methods." Results in Mathematics 50, no. 1-2 (April 2007): 27–41. http://dx.doi.org/10.1007/s00025-006-0233-1.

35

Du, Yukun, Yitao Cai, Xiao Jin, Hongxia Wang, Yao Li, and Min Lu. "ASIDS: A Robust Data Synthesis Method for Generating Optimal Synthetic Samples." Mathematics 11, no. 18 (September 13, 2023): 3891. http://dx.doi.org/10.3390/math11183891.

Abstract:
Most existing data synthesis methods are designed to tackle problems with dataset imbalance, data anonymization, and an insufficient sample size. There is a lack of effective synthesis methods in cases where the actual datasets have a limited number of data points but a large number of features and unknown noise. Thus, in this paper we propose a data synthesis method named Adaptive Subspace Interpolation for Data Synthesis (ASIDS). The idea is to divide the original data feature space into several subspaces with an equal number of data points, and then perform interpolation on the data points in the adjacent subspaces. This method can adaptively adjust the sample size of the synthetic dataset that contains unknown noise, and the generated sample data typically contain minimal errors. Moreover, it adjusts the feature composition of the data points, which can significantly reduce the proportion of the data points with large fitting errors. Furthermore, the hyperparameters of this method have an intuitive interpretation and usually require little calibration. Analysis results obtained using simulated original data and benchmark original datasets demonstrate that ASIDS is a robust and stable method for data synthesis.
36

Versaci. "KRYLOV’S SUBSPACES ITERATIVE METHODS TO EVALUATE ELECTROSTATIC PARAMETERS." American Journal of Applied Sciences 11, no. 3 (March 1, 2014): 396–405. http://dx.doi.org/10.3844/ajassp.2014.396.405.

37

Baker, C. G., K. A. Gallivan, and P. Van Dooren. "Low-rank incremental methods for computing dominant singular subspaces." Linear Algebra and its Applications 436, no. 8 (April 2012): 2866–88. http://dx.doi.org/10.1016/j.laa.2011.07.018.

38

Vourdas, A. "Independence and totalness of subspaces in phase space methods." Annals of Physics 391 (April 2018): 83–111. http://dx.doi.org/10.1016/j.aop.2018.02.010.

39

Gordon, Y., O. Guédon, M. Meyer, and A. Pajor. "Random Euclidean sections of some classical Banach spaces." MATHEMATICA SCANDINAVICA 91, no. 2 (December 1, 2002): 247. http://dx.doi.org/10.7146/math.scand.a-14389.

Abstract:
Using probabilistic arguments, we give precise estimates of the Banach-Mazur distance of subspaces of the classical $\ell_q^n$ spaces and of Schatten classes of operators $S_q^n$ for $q \ge 2$ to the Euclidean space. We also estimate volume ratios of random subspaces of a normed space with respect to subspaces of quotients of $\ell_q$. Finally, the preceding methods are applied to give estimates of Gelfand numbers of some linear operators.
40

Su, Xiruo, Qiuyan Miao, Xinglin Sun, Haoran Ren, Lingyun Ye, and Kaichen Song. "An Optimal Subspace Deconvolution Algorithm for Robust and High-Resolution Beamforming." Sensors 22, no. 6 (March 17, 2022): 2327. http://dx.doi.org/10.3390/s22062327.

Abstract:
Utilizing the difference in phase and power spectrum between signals and noise, the estimation of direction of arrival (DOA) can be transferred to a spatial sample classification problem. The power ratio, namely signal-to-noise ratio (SNR), is highly required in most high-resolution beamforming methods so that high resolution and robustness are incompatible in a noisy background. Therefore, this paper proposes a Subspaces Deconvolution Vector (SDV) beamforming method to improve the robustness of a high-resolution DOA estimation. In a noisy environment, to handle the difficulty in separating signals from noise, we intend to initial beamforming value presets by incoherent eigenvalue in the frequency domain. The high resolution in the frequency domain guarantees the stability of the beamforming. By combining the robustness of conventional beamforming, the proposed method makes use of the subspace deconvolution vector to build a high-resolution beamforming process. The SDV method is aimed to obtain unitary frequency matrixes more stably and improve the accuracy of signal subspaces. The results of simulations and experiments show that when the input SNR is less than −27 dB, signals of decomposition differ unremarkably in the subspace while the SDV method can still obtain clear angles. In a marine background, this method works well in separating the noise and recruiting the characteristics of the signal into the DOA for subsequent processing.
41

Wang, Hefeng, Yuan Cao, Xinxia Liu, and Yantao Yang. "Evaluation and zoning of various urban land spaces based on restrictive indicators: the case of Shanghai, China." World Journal of Engineering 14, no. 4 (August 7, 2017): 307–17. http://dx.doi.org/10.1108/wje-08-2016-0052.

Abstract:
Purpose: Using Shanghai as an example, the purpose of this paper is to perform grade evaluation and zoning for different land use spaces by GIS by identifying the major restrictive factors in current socio-economic development.
Design/methodology/approach: Based on short plate theory, 11 major restrictive indicators that will restrict socio-economic development in Shanghai are identified, and urban land is divided into four subspaces and the restrictive grade evaluation of urban land subspace is achieved with GIS spatial analysis; then, land development zoning is processed according to the results of the evaluation.
Findings: In all, 11 major restrictive indicators that will restrict socio-economic development in Shanghai are identified. The restrictive grades of the agricultural production, urban construction and ecological protection subspaces are mainly common, weak and weaker, and the relatively strong restrictive grade of industrial development subspace is mainly concentrated in the more developed industrial districts (counties). The areas of the common and good regions of constructive development and ecological development zones account for 87.4 and 98.3 per cent of each total area, respectively, and urban land still has significant development potential in Shanghai.
Originality/value: This paper proposes various urban land space evaluations and zoning strategies based on restrictive indicators and perspectives, enriching the ideas and methods of urban land use evaluation.
42

Shahbazi Avarvand, Forooz, Arne Ewald, and Guido Nolte. "Localizing True Brain Interactions from EEG and MEG Data with Subspace Methods and Modified Beamformers." Computational and Mathematical Methods in Medicine 2012 (2012): 1–11. http://dx.doi.org/10.1155/2012/402341.

Abstract:
To address the problem of mixing in EEG or MEG connectivity analysis we exploit that noninteracting brain sources do not contribute systematically to the imaginary part of the cross-spectrum. Firstly, we propose to apply the existing subspace method “RAP-MUSIC” to the subspace found from the dominant singular vectors of the imaginary part of the cross-spectrum rather than to the conventionally used covariance matrix. Secondly, to estimate the specific sources interacting with each other, we use a modified LCMV-beamformer approach in which the source direction for each voxel was determined by maximizing the imaginary coherence with respect to a given reference. These two methods are applicable in this form only if the number of interacting sources is even, because odd-dimensional subspaces collapse to even-dimensional ones. Simulations show that (a) RAP-MUSIC based on the imaginary part of the cross-spectrum accurately finds the correct source locations, that (b) conventional RAP-MUSIC fails to do so since it is highly influenced by noninteracting sources, and that (c) the second method correctly identifies those sources which are interacting with the reference. The methods are also applied to real data for a motor paradigm, resulting in the localization of four interacting sources presumably in sensory-motor areas.
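The first step described above can be sketched generically: estimate the sensor cross-spectrum by segment averaging, keep only its imaginary part, and take the dominant singular vectors as the subspace that a scanning method such as RAP-MUSIC would then search. The code below is an assumed toy setup, not the authors' pipeline.

```python
import numpy as np

def imaginary_cross_spectrum_subspace(data, seg_len, k, fs, f_lo, f_hi):
    """Welch-style cross-spectrum of multichannel data, restricted to its
    imaginary part (insensitive to instantaneous mixing), averaged over a
    frequency band; returns the k dominant singular vectors of that matrix."""
    n_ch, n_samp = data.shape
    n_seg = n_samp // seg_len
    window = np.hanning(seg_len)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    CS = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
    for seg in range(n_seg):
        F = np.fft.rfft(data[:, seg * seg_len:(seg + 1) * seg_len] * window, axis=1)
        CS += np.einsum('if,jf->fij', F, F.conj())
    CS /= n_seg
    band = (freqs >= f_lo) & (freqs <= f_hi)
    imag_cs = CS[band].imag.mean(axis=0)        # real, antisymmetric matrix
    U, s, _ = np.linalg.svd(imag_cs)
    return U[:, :k], s[:k]

# toy usage: channels 1 and 4 carry a phase-lagged 10 Hz oscillation
rng = np.random.default_rng(8)
fs = 200.0
t = np.arange(0, 60, 1 / fs)
data = 0.5 * rng.standard_normal((6, t.size))
data[1] += np.sin(2 * np.pi * 10 * t)
data[4] += np.sin(2 * np.pi * 10 * t - np.pi / 3)
U, s = imaginary_cross_spectrum_subspace(data, seg_len=400, k=2, fs=fs, f_lo=8, f_hi=12)
print(np.linalg.norm(U, axis=1).round(2))       # rows 1 and 4 dominate the subspace
```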
43

Antipin, K. V. "Construction of genuinely entangled subspaces and the associated bounds on entanglement measures for mixed states." Journal of Physics A: Mathematical and Theoretical 54, no. 50 (November 25, 2021): 505303. http://dx.doi.org/10.1088/1751-8121/ac37e5.

Abstract:
Abstract Genuine entanglement is the strongest form of multipartite entanglement. Genuinely entangled pure states contain entanglement in every bipartition and as such can be regarded as a valuable resource in the protocols of quantum information processing. A recent direction of research is the construction of genuinely entangled subspaces (GESs)—the class of subspaces consisting entirely of genuinely entangled pure states. In this paper we present methods of construction of such subspaces including those of maximal possible dimension. The approach is based on the composition of bipartite entangled subspaces and quantum channels of certain types. The examples include maximal subspaces for systems of three qubits, four qubits, three qutrits. We also provide lower bounds on two entanglement measures for mixed states, the concurrence and the convex-roof extended negativity, which are directly connected with the projection on GESs.
44

Fang, Hongjian, Robert D. van der Hilst, Maarten V. de Hoop, Konik Kothari, Sidharth Gupta, and Ivan Dokmanić. "Parsimonious Seismic Tomography with Poisson Voronoi Projections: Methodology and Validation." Seismological Research Letters 91, no. 1 (October 30, 2019): 343–55. http://dx.doi.org/10.1785/0220190141.

Abstract:
Abstract Ill‐posed seismic inverse problems are often solved using Tikhonov‐type regularization, that is, incorporation of damping and smoothing to obtain stable results. This typically results in overly smooth models, poor amplitude resolution, and a difficult choice between plausible models. Recognizing that the average of parameters can be better constrained than individual parameters, we propose a seismic tomography method that stabilizes the inverse problem by projecting the original high‐dimension model space onto random low‐dimension subspaces and then infers the high‐dimensional solution from combinations of such subspaces. The subspaces are formed by functions constant in Poisson Voronoi cells, which can be viewed as the mean of parameters near a certain location. The low‐dimensional problems are better constrained, and image reconstruction of the subspaces does not require explicit regularization. Moreover, the low‐dimension subspaces can be recovered by subsets of the whole dataset, which increases efficiency and offers opportunities to mitigate uneven sampling of the model space. The final (high‐dimension) model is then obtained from the low‐dimension images in different subspaces either by solving another normal equation or simply by averaging the low‐dimension images. Importantly, model uncertainty can be obtained directly from images in different subspaces. Synthetic tests show that our method outperforms conventional methods both in terms of geometry and amplitude recovery. The application to southern California plate boundary region also validates the robustness of our method by imaging geologically consistent features as well as strong along‐strike variations of San Jacinto fault that are not clearly seen using conventional methods.
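The projection step can be sketched generically: draw random seed points, assign every model point to its nearest seed (a Voronoi cell), and represent the model by cell averages, i.e., by a member of the subspace of functions constant on the cells. The code below is a toy illustration with assumed sizes, not the authors' tomography workflow.

```python
import numpy as np

def voronoi_projection(points, values, n_seeds, rng):
    """Project a gridded model ('values' at 'points') onto the subspace of
    functions constant on the Voronoi cells of randomly drawn seed points:
    each point takes the mean value of its cell."""
    seeds = points[rng.choice(len(points), size=n_seeds, replace=False)]
    d2 = ((points[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
    cell = d2.argmin(axis=1)                 # nearest seed for every model point
    projected = np.zeros_like(values)
    for c in range(n_seeds):
        mask = cell == c
        if mask.any():
            projected[mask] = values[mask].mean()
    return projected, cell

# toy usage: a smooth 2-D model projected onto three random Voronoi subspaces,
# then averaged across subspaces to mimic the final combination step
rng = np.random.default_rng(9)
xx, yy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
points = np.column_stack([xx.ravel(), yy.ravel()])
model = np.sin(2 * np.pi * xx.ravel()) * np.cos(2 * np.pi * yy.ravel())
recon = np.mean([voronoi_projection(points, model, 60, rng)[0] for _ in range(3)], axis=0)
print(round(float(np.corrcoef(model, recon)[0, 1]), 3))
```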
45

Zhu, Kehui, Hang Jiang, Yuchong Huo, Qin Yu, and Jianfeng Li. "A Direct Position Determination Method Based on Subspace Orthogonality in Cross-Spectra under Multipath Environments." Sensors 22, no. 19 (September 24, 2022): 7245. http://dx.doi.org/10.3390/s22197245.

Abstract:
Without the estimation of the intermediate parameters, the direct position determination (DPD) method can achieve higher localization accuracy than conventional two-step methods. However, multipath environments are still a key problem, and complex high-dimensional matrix operations are required in most DPD methods. In this paper, a time-difference-of-arrival-based (TDOA-based) DPD method is proposed based on the subspace orthogonality in the cross-spectra between the different sensors. Firstly, the cross-spectrum between the segmented received signal and reference signal is calculated and eigenvalue decomposition is performed to obtain the subspaces. Then, the cost functions are constructed by using the orthogonality of subspace. Finally, the location of the radiation source is obtained by searching the superposition of these cost functions in the target area. Compared with other DPD methods, our proposed DPD method leads to better localization accuracy with less complexity. The superiority of this method is verified by both simulated and real measured data when compared to other TDOA and DPD algorithms.
46

Boykov, A. A., and A. V. Seliverstov. "On a cube and subspace projections." Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki 33, no. 3 (September 2023): 402–15. http://dx.doi.org/10.35634/vm230302.

Abstract:
We consider the arrangement of vertices of a unit multidimensional cube, an affine subspace, and its orthogonal projections onto coordinate subspaces. Upper and lower bounds on the subspace dimension are given under which some orthogonal projection always preserves the incidence relation between the subspace and cube vertices. Some oblique projections are also considered. Moreover, a brief review of the history of the development of multidimensional descriptive geometry is given. Analytic and synthetic methods in geometry diverged since the 17th century. Although both synthesis and analysis are tangled, from this time forth many geometers as well as engineers keep up a nice distinction. One can find references to the idea of higher-dimensional spaces in the 18th-century works, but proper development has been since the middle of the 19th century. Soon such works have appeared in Russian. Next, mathematicians generalized their theories to many dimensions. Our new results are obtained by both analytic and synthetic methods. They illustrate the complexity of pseudo-Boolean programming problems because reducing the problem dimension by orthogonal projection meets obstacles in the worst case.
47

Wang, Xuan, and Liuqi Zhang. "Intensified Partial Least Squares Fault Diagnosis Method based on CARS." Journal of Physics: Conference Series 2449, no. 1 (March 1, 2023): 012001. http://dx.doi.org/10.1088/1742-6596/2449/1/012001.

Abstract:
Partial Least Squares (PLS) is a fault diagnosis model used in statistical process monitoring. PLS decomposes the data space into a quality-correlated subspace and an uncorrelated subspace, and monitors the state of both subspaces by setting appropriate statistics. However, post-processing methods such as modified PLS (MPLS) and efficient projection to latent structures (EPLS) gradually lose their ability to detect low-intensity faults that do not affect quality. In order to eliminate the drawbacks of post-processing methods and improve the stability of algorithms under different conditions, this paper introduces the competitive adaptive reweighted sampling (CARS) method into the field of fault diagnosis and proposes a fault filtering method combining CARS and the orthogonal signal correction (OSC) algorithm. First, an initial screening of the variable space is completed using the CARS method. The OSC algorithm is then used to remove quality-uncorrelated components. Next, the variable space is completely decomposed into a quality-correlated and an irrelevant subspace using the intensified partial least squares (IPLS) method. Finally, the validity of the model is verified using simulations.
48

Sharpee, Tatyana, Nicole C. Rust, and William Bialek. "Analyzing Neural Responses to Natural Signals: Maximally Informative Dimensions." Neural Computation 16, no. 2 (February 1, 2004): 223–50. http://dx.doi.org/10.1162/089976604322742010.

Abstract:
We propose a method that allows for a rigorous statistical analysis of neural responses to natural stimuli that are nongaussian and exhibit strong correlations. We have in mind a model in which neurons are selective for a small number of stimulus dimensions out of a high-dimensional stimulus space, but within this subspace the responses can be arbitrarily nonlinear. Existing analysis methods are based on correlation functions between stimuli and responses, but these methods are guaranteed to work only in the case of gaussian stimulus ensembles. As an alternative to correlation functions, we maximize the mutual information between the neural responses and projections of the stimulus onto low-dimensional subspaces. The procedure can be done iteratively by increasing the dimensionality of this subspace. Those dimensions that allow the recovery of all of the information between spikes and the full unprojected stimuli describe the relevant subspace. If the dimensionality of the relevant subspace indeed is small, it becomes feasible to map the neuron's input-output function even under fully natural stimulus conditions. These ideas are illustrated in simulations on model visual and auditory neurons responding to natural scenes and sounds, respectively.
49

Daas, Hussam Al, Laura Grigori, Pascal Hénon, and Philippe Ricoux. "Recycling Krylov Subspaces and Truncating Deflation Subspaces for Solving Sequence of Linear Systems." ACM Transactions on Mathematical Software 47, no. 2 (April 2021): 1–30. http://dx.doi.org/10.1145/3439746.

Abstract:
This article presents deflation strategies related to recycling Krylov subspace methods for solving one or a sequence of linear systems of equations. Besides the well-known strategies of deflation, Ritz- and harmonic Ritz-based deflation, we introduce a Singular Value Decomposition based deflation technique. We consider the recycling in two contexts: recycling the Krylov subspace between the restart cycles and recycling a deflation subspace when the matrix changes in a sequence of linear systems. Numerical experiments on real-life reservoir simulation demonstrate the impact of our proposed strategy.
50

Wu, Guangyuan, Shijun Niu, and Yifan Xiong. "Optimized clustering sample selection for spectral reflectance recovery." Laser Physics Letters 20, no. 11 (October 5, 2023): 115204. http://dx.doi.org/10.1088/1612-202x/acfb73.

Abstract:
The accuracy of spectral recovery depends heavily on the selection of an appropriate sample set, so optimized sample selection by a clustering strategy can improve the spectral recovery results. This paper presents a sample optimization method that combines hierarchical clustering and K-means angle-similarity clustering to achieve this. The proposed method employs hierarchical clustering to divide the training sample dataset into 15 subspaces and obtain 15 subspace centroids. The similarity distance is then calculated between the testing sample and the samples of each subspace, and the subspace containing the sample with the smallest distance is selected. The testing sample is used as an a priori centroid, which clusters the optimal subspace by competing with the centroid of the selected subspace. This iterative process continues until the centroid of the subspace remains unaltered. Finally, the training samples within the optimal subspace are used to recover spectral reflectance through Euclidean distance weighting. Experimental results demonstrate that the proposed method outperforms existing methods in terms of spectral and colorimetric accuracy, as well as stability and robustness. This research provides a solution to the problem of data redundancy in the spectral recovery process and enhances the accuracy and efficiency of spectral recovery.
