A selection of scholarly literature on the topic "Subspaces methods"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Subspaces methods".

Next to each work in the list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, provided that these are available in the metadata.

Journal articles on the topic "Subspaces methods":

1

Eiermann, Michael, and Oliver G. Ernst. "Geometric aspects of the theory of Krylov subspace methods." Acta Numerica 10 (May 2001): 251–312. http://dx.doi.org/10.1017/s0962492901000046.

Abstract:
The development of Krylov subspace methods for the solution of operator equations has shown that two basic construction principles underlie the most commonly used algorithms: the orthogonal residual (OR) and minimal residual (MR) approaches. It is shown that these can both be formulated as techniques for solving an approximation problem on a sequence of nested subspaces of a Hilbert space, an abstract problem not necessarily related to an operator equation. Essentially all Krylov subspace algorithms result when these subspaces form a Krylov sequence. The well-known relations among the iterates and residuals of MR/OR pairs are shown to hold also in this rather general setting. We further show that a common error analysis for these methods involving the canonical angles between subspaces allows many of the known residual and error bounds to be derived in a simple and consistent manner. An application of this analysis to compact perturbations of the identity shows that MR/OR pairs of Krylov subspace methods converge q-superlinearly when applied to such operator equations.
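To make the minimal residual (MR) principle above concrete, here is a small NumPy sketch of a GMRES-style iteration: the Arnoldi process builds an orthonormal basis of the Krylov subspace, and each iterate minimizes the residual norm over that subspace. This is a generic textbook illustration, not the authors' abstract Hilbert-space formulation; the matrix, right-hand side, and tolerances are made-up test values.

```python
import numpy as np

def gmres_mr(A, b, m=30, tol=1e-10):
    """Minimal-residual (GMRES-style) iteration on the Krylov subspace K_m(A, b)."""
    n = b.shape[0]
    Q = np.zeros((n, m + 1))          # orthonormal Krylov basis
    H = np.zeros((m + 1, m))          # Hessenberg matrix from the Arnoldi process
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        w = A @ Q[:, j]               # expand the Krylov subspace
        for i in range(j + 1):        # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # Small least-squares problem: minimize ||beta*e1 - H y|| over the current subspace.
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if H[j + 1, j] < 1e-14 or res < tol * beta:
            return Q[:, :j + 1] @ y, res
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :m] @ y, res

# Example with a random diagonally dominant system (illustrative data only).
rng = np.random.default_rng(0)
n = 200
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x, res = gmres_mr(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```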
2

Freund, Roland W. "Model reduction methods based on Krylov subspaces." Acta Numerica 12 (May 2003): 267–319. http://dx.doi.org/10.1017/s0962492902000120.

Abstract:
In recent years, reduced-order modelling techniques based on Krylov-subspace iterations, especially the Lanczos algorithm and the Arnoldi process, have become popular tools for tackling the large-scale time-invariant linear dynamical systems that arise in the simulation of electronic circuits. This paper reviews the main ideas of reduced-order modelling techniques based on Krylov subspaces and describes some applications of reduced-order modelling in circuit simulation.
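As a rough illustration of the Krylov projection idea reviewed in this paper, the sketch below builds an orthonormal basis of K_m(A, b) with the Arnoldi process and projects a single-input single-output linear time-invariant system onto it; the reduced model then matches the leading Markov parameters of the original system. Production circuit codes typically use shifted-and-inverted Krylov subspaces (e.g., PVL/PRIMA-style moment matching at finite expansion points), which are not shown here, and the system matrices below are random placeholders rather than a circuit model.

```python
import numpy as np

def arnoldi_basis(A, b, m):
    """Orthonormal basis of the Krylov subspace K_m(A, b) via the Arnoldi process."""
    n = b.shape[0]
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

def reduce_lti(A, b, c, m):
    """One-sided Krylov projection of (A, b, c); matches the first m Markov parameters."""
    V = arnoldi_basis(A, b, m)
    return V.T @ A @ V, V.T @ b, c @ V          # reduced (A_m, b_m, c_m)

# Illustrative stable system of order 500 reduced to order 10.
rng = np.random.default_rng(1)
n = 500
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)
Am, bm, cm = reduce_lti(A, b, c, m=10)
print(Am.shape, bm.shape, cm.shape)
```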
3

Sia, Florence, and Rayner Alfred. "Tree-based mining contrast subspace." International Journal of Advances in Intelligent Informatics 5, no. 2 (July 23, 2019): 169. http://dx.doi.org/10.26555/ijain.v5i2.359.

Abstract:
All existing contrast subspace mining methods employ a density-based likelihood contrast scoring function to measure the likelihood of a query object with respect to a target class against the other class in a subspace. However, the density tends to decrease as the dimensionality of subspaces increases, which leads these methods to identify inaccurate contrast subspaces for the given query object. This paper proposes a novel contrast subspace mining method that employs a tree-based likelihood contrast scoring function which is not affected by the dimensionality of subspaces. The tree-based scoring measure recursively binary-partitions the subspace so that objects belonging to the target class are grouped together and separated from objects belonging to the other class. In a contrast subspace, the query object should lie in a group containing more objects of the target class than of the other class. The method incorporates a feature selection approach to find a subset of one-dimensional subspaces with a high likelihood contrast score with respect to the query object, and the contrast subspaces are then searched through this selected subset of one-dimensional subspaces. An experiment is conducted to evaluate the effectiveness of the tree-based method in terms of classification accuracy. The results show that the proposed method achieves higher classification accuracy and outperforms the existing method on several real-world data sets.
4

LENG, JINSONG, and ZHIHU HUANG. "OUTLIERS DETECTION WITH CORRELATED SUBSPACES FOR HIGH DIMENSIONAL DATASETS." International Journal of Wavelets, Multiresolution and Information Processing 09, no. 02 (March 2011): 227–36. http://dx.doi.org/10.1142/s0219691311004067.

Abstract:
Detecting outliers in high dimensional datasets is quite a difficult data mining task. Mining outliers in subspaces seems to be a promising solution, because outliers may be embedded in some interesting subspaces. Due to the existence of many irrelevant dimensions in high dimensional datasets, it is of great importance to eliminate the irrelevant or unimportant dimensions and identify outliers in interesting subspaces with strong correlation. Normally, the correlation among dimensions can be determined by traditional feature selection techniques and subspace-based clustering methods. Dimension-reduction subspace clustering techniques find interesting subspaces in spaces of relatively low dimension, while dimension-growth approaches aim to find the maximum cliques in high dimensional datasets. This paper presents a novel approach that identifies outliers in correlated subspaces. The degree of correlation among dimensions is measured in terms of the mean squared residue, and frequent pattern algorithms are employed to find the correlated subspaces. Based on the correlated subspaces obtained, outliers are distinguished in the projected subspaces by using classical outlier detection techniques. Empirical studies show that the proposed approach can identify outliers effectively in high dimensional datasets.
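The correlation measure named in the abstract, the mean squared residue, can be computed for a candidate subspace (a submatrix of objects by selected dimensions) as in the short NumPy sketch below; low values indicate strongly correlated dimensions. The data are synthetic examples, and the paper's frequent-pattern search and outlier detection stages are not shown.

```python
import numpy as np

def mean_squared_residue(X):
    """Mean squared residue of a submatrix: low values indicate coherent (correlated) dimensions."""
    row_mean = X.mean(axis=1, keepdims=True)
    col_mean = X.mean(axis=0, keepdims=True)
    all_mean = X.mean()
    residue = X - row_mean - col_mean + all_mean
    return np.mean(residue ** 2)

# Example: an additively coherent block (near-zero residue) versus random data (high residue).
rng = np.random.default_rng(2)
coherent = np.outer(rng.standard_normal(50), np.ones(5)) + rng.standard_normal(5)
noisy = rng.standard_normal((50, 5))
print(mean_squared_residue(coherent), mean_squared_residue(noisy))
```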
5

Laaksonen, Jorma, and Erkki Oja. "Learning Subspace Classifiers and Error-Corrective Feature Extraction." International Journal of Pattern Recognition and Artificial Intelligence 12, no. 04 (June 1998): 423–36. http://dx.doi.org/10.1142/s0218001498000270.

Abstract:
Subspace methods are a powerful class of statistical pattern classification algorithms. The subspaces form semiparametric representations of the pattern classes in the form of principal components. In this sense, subspace classification methods are an application of classical optimal data compression techniques. Additionally, the subspace formalism can be given a neural network interpretation. There are learning versions of the subspace classification methods, in which error-driven learning procedures are applied to the subspaces in order to reduce the number of misclassified vectors. An algorithm for iterative selection of the subspace dimensions is presented in this paper. Likewise, a modified formula for calculating the projection lengths in the subspaces is investigated. The principle of adaptive learning in subspace methods can further be applied to feature extraction. In our work, we have studied two adaptive feature extraction schemes. The adaptation process is directed by errors occurring in the classifier. Unlike most traditional classifier models which take the preceding feature extraction stage as given, this scheme allows for reducing the loss of information in the feature extraction stage. The enhanced overall classification performance resulting from the added adaptivity is demonstrated with experiments in which recognition of handwritten digits has been used as an exemplary application.
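For readers unfamiliar with the basic (non-learning) subspace classifier that this work extends, the sketch below implements a CLAFIC-style rule: each class is represented by the span of its leading principal components, and a vector is assigned to the class in which its squared projection length is largest. The subspace dimension and the toy data are illustrative choices; the paper's error-corrective learning and adaptive feature extraction are not reproduced.

```python
import numpy as np

class SubspaceClassifier:
    """CLAFIC-style classifier: per-class PCA bases, decision by squared projection length."""
    def __init__(self, dim=5):
        self.dim = dim
        self.bases = {}

    def fit(self, X, y):
        for label in np.unique(y):
            Xc = X[y == label]
            # Leading right singular vectors span the class subspace.
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            self.bases[label] = Vt[:self.dim].T          # shape (n_features, dim)
        return self

    def predict(self, X):
        labels = list(self.bases)
        # Squared projection length of each sample onto each class subspace.
        scores = np.stack([np.sum((X @ self.bases[l]) ** 2, axis=1) for l in labels], axis=1)
        return np.array(labels)[np.argmax(scores, axis=1)]

# Toy example: two classes concentrated along different coordinate directions.
rng = np.random.default_rng(3)
X0 = rng.standard_normal((100, 20)) @ np.diag([5] * 3 + [1] * 17)
X1 = rng.standard_normal((100, 20)) @ np.diag([1] * 17 + [5] * 3)
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
clf = SubspaceClassifier(dim=3).fit(X, y)
print("train accuracy:", np.mean(clf.predict(X) == y))
```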
6

Seshadri, P., S. Yuchi, G. T. Parks, and S. Shahpar. "Supporting multi-point fan design with dimension reduction." Aeronautical Journal 124, no. 1279 (July 27, 2020): 1371–98. http://dx.doi.org/10.1017/aer.2020.50.

Abstract:
Motivated by the idea of turbomachinery active subspace performance maps, this paper studies dimension reduction in turbomachinery 3D CFD simulations. First, we show that these subspaces exist across different blades—under the same parametrisation—largely independent of their Mach number or Reynolds number. This is demonstrated via a numerical study on three different blades. Then, in an attempt to reduce the computational cost of identifying a suitable dimension reducing subspace, we examine statistical sufficient dimension reduction methods, including sliced inverse regression, sliced average variance estimation, principal Hessian directions and contour regression. Unsatisfied by these results, we evaluate a new idea based on polynomial variable projection—a non-linear least-squares problem. Our results using polynomial variable projection clearly demonstrate that one can accurately identify dimension reducing subspaces for turbomachinery functionals at a fraction of the cost associated with prior methods. We apply these subspaces to the problem of comparing design configurations across different flight points on a working line of a fan blade. We demonstrate how designs that offer a healthy compromise between performance at cruise and sea-level conditions can be easily found by visually inspecting their subspaces.
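For context, the classical gradient-based active subspace construction that the authors aim to make cheaper works by eigendecomposing the average outer product of the gradient of the quantity of interest and keeping the dominant eigenvectors. A minimal NumPy sketch with a synthetic ridge function follows; the polynomial variable projection approach of the paper itself is not shown, and the function and sample sizes are illustrative.

```python
import numpy as np

def active_subspace(grad_samples, k=1):
    """Dominant eigenvectors of C = E[grad f grad f^T], estimated from sampled gradients."""
    C = grad_samples.T @ grad_samples / grad_samples.shape[0]
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order[:k]]

# Synthetic ridge function f(x) = exp(a.x): its gradient always points along a,
# so the active subspace is one-dimensional and spanned by a.
rng = np.random.default_rng(4)
d = 10
a = rng.standard_normal(d)
X = rng.uniform(-1, 1, size=(500, d))
grads = np.exp(X @ a)[:, None] * a          # grad f(x) = exp(a.x) * a
eigvals, W = active_subspace(grads, k=1)
print("alignment with a:", abs(W[:, 0] @ a) / np.linalg.norm(a))
```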
7

Nagi, Sajid, Dhruba Kumar Bhattacharyya, and Jugal K. Kalita. "A Preview on Subspace Clustering of High Dimensional Data." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 6, no. 3 (May 21, 2013): 441–48. http://dx.doi.org/10.24297/ijct.v6i3.4466.

Abstract:
When clustering high dimensional data, traditional clustering methods are found to be lacking since they consider all of the dimensions of the dataset in discovering clusters whereas only some of the dimensions are relevant. This may give rise to subspaces within the dataset where clusters may be found. Using feature selection, we can remove irrelevant and redundant dimensions by analyzing the entire dataset. The problem of automatically identifying clusters that exist in multiple and maybe overlapping subspaces of high dimensional data, allowing better clustering of the data points, is known as Subspace Clustering. There are two major approaches to subspace clustering based on search strategy. Top-down algorithms find an initial clustering in the full set of dimensions and evaluate the subspaces of each cluster, iteratively improving the results. Bottom-up approaches start from finding low dimensional dense regions, and then use them to form clusters. Based on a survey on subspace clustering, we identify the challenges and issues involved with clustering gene expression data.
8

Zhou, Jie, Chucheng Huang, Can Gao, Yangbo Wang, Xinrui Shen, and Xu Wu. "Weighted Subspace Fuzzy Clustering with Adaptive Projection." International Journal of Intelligent Systems 2024 (January 31, 2024): 1–18. http://dx.doi.org/10.1155/2024/6696775.

Abstract:
Available subspace clustering methods often contain two stages, finding low-dimensional subspaces of data and then conducting clustering in the subspaces. Therefore, how to find the subspaces that better represent the original data becomes a research challenge. However, most of the reported methods are based on the premise that the contributions of different features are equal, which may not be ideal for real scenarios, i.e., the contributions of the important features may be overwhelmed by a large amount of redundant features. In this study, a weighted subspace fuzzy clustering (WSFC) model with a locality preservation mechanism is presented, which can adaptively capture the importance of different features, achieve an optimal lower-dimensional subspace, and perform fuzzy clustering simultaneously. Since each feature can be well quantified in terms of its importance, the proposed model exhibits the sparsity and robustness of fuzzy clustering. The intrinsic geometrical structures of data can also be preserved while enhancing the interpretability of clustering tasks. Extensive experimental results show that WSFC can allocate appropriate weights to different features according to data distributions and clustering tasks and achieve superior performance compared to other clustering models on real-world datasets.
9

Pang, Guansong, Kai Ming Ting, David Albrecht, and Huidong Jin. "ZERO++: Harnessing the Power of Zero Appearances to Detect Anomalies in Large-Scale Data Sets." Journal of Artificial Intelligence Research 57 (December 29, 2016): 593–620. http://dx.doi.org/10.1613/jair.5228.

Abstract:
This paper introduces a new unsupervised anomaly detector called ZERO++ which employs the number of zero appearances in subspaces to detect anomalies in categorical data. It is unique in that it works in regions of subspaces that are not occupied by data; whereas existing methods work in regions occupied by data. ZERO++ examines only a small number of low dimensional subspaces to successfully identify anomalies. Unlike existing frequency-based algorithms, ZERO++ does not involve subspace pattern searching. We show that ZERO++ is better than or comparable with the state-of-the-art anomaly detection methods over a wide range of real-world categorical and numeric data sets; and it is efficient with linear time complexity and constant space complexity which make it a suitable candidate for large-scale data sets.
10

Il’in, V. P. "Projection Methods in Krylov Subspaces." Journal of Mathematical Sciences 240, no. 6 (June 28, 2019): 772–82. http://dx.doi.org/10.1007/s10958-019-04395-7.


Dissertations and theses on the topic "Subspaces methods":

1

Shank, Stephen David. "Low-rank solution methods for large-scale linear matrix equations." Diss., Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/273331.

Abstract:
Mathematics
Ph.D.
We consider low-rank solution methods for certain classes of large-scale linear matrix equations. Our aim is to adapt existing low-rank solution methods based on standard, extended and rational Krylov subspaces to solve equations which may be viewed as extensions of the classical Lyapunov and Sylvester equations. The first class of matrix equations that we consider are constrained Sylvester equations, which essentially consist of Sylvester's equation along with a constraint on the solution matrix. These therefore constitute a system of matrix equations. The second are generalized Lyapunov equations, which are Lyapunov equations with additional terms. Such equations arise as computational bottlenecks in model order reduction.
Temple University--Theses
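For small dense instances, the Sylvester equation AX + XB = C that these methods target at scale can be solved directly, which is handy for sanity-checking a low-rank solver. The sketch below uses SciPy's dense solver on random illustrative matrices; it is not the low-rank Krylov approach of the dissertation.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Small dense instance of A X + X B = C; at large scale the thesis uses
# low-rank Krylov-subspace methods instead of this O(n^3) direct solver.
rng = np.random.default_rng(5)
n, m = 50, 30
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # loosely "stable" coefficient
B = -np.eye(m) + 0.1 * rng.standard_normal((m, m))
C = rng.standard_normal((n, 1)) @ rng.standard_normal((1, m))  # low-rank right-hand side
X = solve_sylvester(A, B, C)
print("residual:", np.linalg.norm(A @ X + X @ B - C))
```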
2

UGWU, UGOCHUKWU OBINNA. "Iterative tensor factorization based on Krylov subspace-type methods with applications to image processing." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1633531487559183.

3

Hossain, Mohammad Sahadet. "Numerical Methods for Model Reduction of Time-Varying Descriptor Systems." Doctoral thesis, Universitätsbibliothek Chemnitz, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-74776.

Abstract:
This dissertation concerns the model reduction of linear periodic descriptor systems in both the continuous- and discrete-time case. Mainly projection-based approaches are considered for model order reduction of linear periodic time-varying descriptor systems. A Krylov-based projection method is used for large continuous-time periodic descriptor systems, and a balancing-based projection technique is applied to large sparse discrete-time periodic descriptor systems to generate the reduced systems. For very large state-space systems, both techniques produce large-dimensional solutions; hence a recycling technique is used in the Krylov-based projection methods, which helps to compute low-rank solutions and also accelerates convergence. The proposed model order reduction procedure is outlined in detail, and its accuracy and suitability are demonstrated through examples of different orders. Model reduction techniques based on balanced truncation require the solution of matrix equations. For periodic time-varying descriptor systems, these matrix equations are projected generalized periodic Lyapunov equations whose solutions are also time-varying. The cyclic lifted representation of the periodic time-varying descriptor systems is considered in this dissertation, and the resulting lifted projected Lyapunov equations are solved to obtain the periodic reachability and observability Gramians of the original periodic systems. The main advantage of this solution technique is that the cyclic structure of the projected Lyapunov equations handles time-varying dimensions as well as singularity of the periodic matrix pairs very easily. One can also exploit the theory of time-invariant systems for the control of periodic ones, provided that the results can be re-interpreted in the periodic framework. Since the dimension of the cyclic lifted system becomes very high for large periodic systems, one needs to solve very large-scale periodic Lyapunov equations, which also have large-dimensional solutions. Hence, iterative techniques, which are generalizations and modifications of the alternating directions implicit (ADI) method and the generalized Smith method, are implemented to obtain low-rank Cholesky factors of the solutions of the periodic Lyapunov equations. The application of these solvers in balancing-based model reduction of discrete-time periodic descriptor systems is also discussed. Numerical results are given to illustrate the efficiency and accuracy of the proposed methods.
4

Ahmed, Nisar. "Implicit restart schemes for Krylov subspace model reduction methods." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340535.

5

Shatnawi, Heba Awad Addad. "Frequency estimation using subspace methods." Thesis, Wichita State University, 2009. http://hdl.handle.net/10057/2419.

Abstract:
The complex frequency estimation problem plays a significant role in many engineering applications. The estimation process has traditionally been carried out via the eigenvalue decomposition (EVD) of the spatial correlation matrix of the observations. Frequency estimation is of fundamental significance and wide relevance for several reasons. First, any arbitrary signal may be modeled as a sum of frequencies, so any signal estimation problem may be expressed in terms of frequency estimation problems. Second, many parameter estimation applications may be mathematically expressed as frequency estimation problems. In this thesis an improved frequency estimation technique is presented based on the unitary transformation originally applied in the direction-of-arrival problem. The key idea of the proposed technique is to convert the complex-valued autocorrelation, cumulant, or Hankel-structured direct data matrix into a real-valued data matrix of the same dimension. The resulting real-valued matrix is used to extract the noise and/or signal subspace instead of the original complex one. It is well known that real-valued computations are easier and faster than complex ones.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering
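A standard noise-subspace (MUSIC-type) frequency estimator of the kind this thesis builds on can be sketched as follows: eigendecompose a sample covariance matrix, take the eigenvectors associated with the smallest eigenvalues as the noise subspace, and locate frequencies whose steering vectors are nearly orthogonal to it. The unitary transformation to a real-valued matrix proposed in the thesis is not shown, and the signal frequencies, noise level, and window length are made-up test values.

```python
import numpy as np

def music_spectrum(x, M, n_sources, freqs):
    """MUSIC pseudospectrum: project steering vectors onto the estimated noise subspace."""
    N = len(x)
    # Sample covariance from overlapping length-M snapshots of the signal.
    snapshots = np.stack([x[i:i + M] for i in range(N - M + 1)], axis=1)
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, :M - n_sources]               # noise subspace (smallest eigenvalues)
    n = np.arange(M)
    spectrum = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * n)            # steering vector for frequency f
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# Two complex exponentials in noise (illustrative data).
rng = np.random.default_rng(6)
N, f_true = 256, [0.12, 0.31]
n = np.arange(N)
x = sum(np.exp(2j * np.pi * f * n) for f in f_true)
x = x + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
freqs = np.linspace(0, 0.5, 2000)
spec = music_spectrum(x, M=32, n_sources=2, freqs=freqs)
# Pick the two largest local maxima of the pseudospectrum.
peaks = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
top = peaks[np.argsort(spec[peaks])[-2:]]
print("estimated frequencies:", sorted(freqs[top]))
```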
6

Ensor, Jonathan Edward. "Subspace methods for eigenstructure assignment." Thesis, University of York, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341821.

7

Mestrah, Ali. "Identification de modèles sous forme de représentation d'état pour les systèmes à sortie binaire." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMC255.

Abstract:
This thesis focuses on parametric modeling of linear time-invariant systems from binary measurements of the output. This identification problem is addressed via subspace methods, which allow the estimation of state-space models; one advantage of these methods is that their implementation does not require prior knowledge of the order of the system. These methods are not initially suited to processing binary data, so the objective of this thesis is their adaptation to this identification context. Three subspace methods are proposed, and the convergence properties of two of them are established. Monte Carlo simulation results are presented to show the good performance, but also the limits, of these methods.
8

Nguyen, Hieu. "Linear subspace methods in face recognition." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12330/.

Abstract:
Despite over 30 years of research, face recognition is still one of the most difficult problems in the field of Computer Vision. The challenge comes from many factors affecting the performance of a face recognition system: noisy input, training data collection, speed-accuracy trade-off, and variations in expression, illumination, pose, or ageing. Although relatively successful attempts have been made for special cases, such as frontal faces, no satisfactory methods exist that work under completely unconstrained conditions. This thesis proposes solutions to three important problems: lack of training data, the speed-accuracy requirement, and unconstrained environments. The problem of lacking training data has been solved in the worst case: a single sample per person. Whitened Principal Component Analysis is proposed as a simple but effective solution, and it performs consistently well on multiple face datasets. The speed-accuracy trade-off is the second focus of this thesis. Two solutions are proposed: the first is a new feature extraction method called Compact Binary Patterns, which is about three times faster than Local Binary Patterns; the second is a multi-patch classifier which performs much better than a single classifier without compromising speed. Two metric learning methods are introduced to address unconstrained face recognition. The first, Indirect Neighbourhood Component Analysis, combines the best ideas from Neighbourhood Component Analysis and one-shot learning. The second, Cosine Similarity Metric Learning, uses cosine similarity instead of the more popular Euclidean distance to form the objective function in the learning process. This Cosine Similarity Metric Learning method produces the best result in the literature on the state-of-the-art face dataset: the Labelled Faces in the Wild dataset. Finally, a full face verification system based on our experience taking part in the ICPR 2010 Face Verification contest is described, and many practical points are discussed.
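A minimal sketch of the whitened PCA representation mentioned in the abstract: project onto the leading principal components and rescale each component to unit variance, after which descriptors are typically compared with cosine similarity. The data below are random stand-ins for face descriptors, and the feature extraction and metric learning stages of the thesis are not reproduced.

```python
import numpy as np

def whitened_pca(X, k):
    """Fit whitened PCA: returns the data mean and a projection with equalized component variances."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Keep k components; divide by each component's standard deviation so all have unit variance.
    W = Vt[:k].T / (S[:k] / np.sqrt(X.shape[0] - 1))
    return mean, W

def cosine_similarity(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Illustrative "gallery" and "probe" vectors (stand-ins for face descriptors).
rng = np.random.default_rng(7)
X = rng.standard_normal((300, 100))
mean, W = whitened_pca(X, k=20)
gallery = (X[:5] - mean) @ W
probe = (X[0] + 0.05 * rng.standard_normal(100) - mean) @ W
print([round(cosine_similarity(probe, g), 3) for g in gallery])
```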
9

Tao, Dacheng. "Discriminative linear and multilinear subspace methods." Thesis, Birkbeck (University of London), 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.438996.

10

Yu, Xuebo. "Generalized Krylov subspace methods with applications." Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1401937618.


Books on the topic "Subspaces methods":

1

Demmel, James Weldon. Three methods for refining estimates of invariant subspaces. New York: Courant Institute of Mathematical Sciences, New York University, 1985.

2

Watkins, David S. The matrix eigenvalue problem: GR and Krylov subspace methods. Philadelphia: Society for Industrial and Applied Mathematics, 2007.

3

Mats, Viberg, and Stoica Petre 1949-, eds. Subspace methods. Amsterdam: Elsevier, 1996.

4

Katayama, Tohru. Subspace methods for system identification. London: Springer, 2005.

5

Katayama, Tohru. Subspace Methods for System Identification. London: Springer London, 2005. http://dx.doi.org/10.1007/1-84628-158-x.

6

Saad, Y. Krylov subspace methods on supercomputers. [Moffett Field, Calif.?]: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1988.

7

Sogabe, Tomohiro. Krylov Subspace Methods for Linear Systems. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8532-4.

8

Heeger, David J. Subspace methods for recovering rigid motion. Toronto, Ont: University of Toronto, 1990.

9

Jepson, Allan D. Linear subspace methods for recovering translational direction. Toronto: University of Toronto, Dept. of Computer Science, 1992.

10

F, Chan Tony, and Research Institute for Advanced Computer Science (U.S.), eds. Preserving symmetry in preconditioned Krylov subspace methods. [Moffett Field, Calif.]: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1996.


Book chapters on the topic "Subspaces methods":

1

Schechter, Martin. "Estimates on Subspaces." In Linking Methods in Critical Point Theory, 131–44. Boston, MA: Birkhäuser Boston, 1999. http://dx.doi.org/10.1007/978-1-4612-1596-7_6.

2

Downey, R. G., and Jeffrey B. Remmel. "Effectively and Noneffectively Nowhere Simple Subspaces." In Logical Methods, 314–51. Boston, MA: Birkhäuser Boston, 1993. http://dx.doi.org/10.1007/978-1-4612-0325-4_10.

3

Nenciu, G. "Almost Invariant Subspaces for Quantum Evolutions." In Multiscale Methods in Quantum Mechanics, 83–97. Boston, MA: Birkhäuser Boston, 2004. http://dx.doi.org/10.1007/978-0-8176-8202-6_7.

4

Fischer, Bernd. "Orthogonal Polynomials and Krylov Subspaces." In Polynomial Based Iteration Methods for Symmetric Linear Systems, 132–36. Wiesbaden: Vieweg+Teubner Verlag, 1996. http://dx.doi.org/10.1007/978-3-663-11108-5_4.

5

Froelich, John, and Michael Marsalli. "Operator Semigroups, Invariant Sets and Invariant Subspaces." In Algebraic Methods in Operator Theory, 10–14. Boston, MA: Birkhäuser Boston, 1994. http://dx.doi.org/10.1007/978-1-4612-0255-4_2.

6

Ilin, Valery P. "Multi-preconditioned Domain Decomposition Methods in the Krylov Subspaces." In Lecture Notes in Computer Science, 95–106. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57099-0_9.

7

Anton, Cristina, and Iain Smith. "Model Based Clustering of Functional Data with Mild Outliers." In Studies in Classification, Data Analysis, and Knowledge Organization, 11–19. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09034-9_2.

Abstract:
We propose a procedure, called CFunHDDC, for clustering functional data with mild outliers which combines two existing clustering methods: the functional high dimensional data clustering (FunHDDC) [1] and the contaminated normal mixture (CNmixt) [3] method for multivariate data. We adapt the FunHDDC approach to data with mild outliers by considering a mixture of multivariate contaminated normal distributions. To fit the functional data in group-specific functional subspaces we extend the parsimonious models considered in FunHDDC, and we estimate the model parameters using an expectation-conditional maximization algorithm (ECM). The performance of the proposed method is illustrated for simulated and real-world functional data, and CFunHDDC outperforms FunHDDC when applied to functional data with outliers.
8

Boot, Tom, and Didier Nibbering. "Subspace Methods." In Macroeconomic Forecasting in the Era of Big Data, 267–91. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31150-6_9.

9

Fukui, Kazuhiro. "Subspace Methods." In Computer Vision, 1–5. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_708-1.

10

Fukui, Kazuhiro. "Subspace Methods." In Computer Vision, 777–81. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_708.


Conference papers on the topic "Subspaces methods":

1

Zhou, Lei, Xiao Bai, Dong Wang, Xianglong Liu, Jun Zhou, and Edwin Hancock. "Latent Distribution Preserving Deep Subspace Clustering." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/617.

Abstract:
Subspace clustering is a useful technique for many computer vision applications in which the intrinsic dimension of high-dimensional data is smaller than the ambient dimension. Traditional subspace clustering methods often rely on the self-expressiveness property, which has proven effective for linear subspace clustering. However, they perform unsatisfactorily on real data with complex nonlinear subspaces. More recently, deep autoencoder based subspace clustering methods have achieved success owing to the more powerful representations extracted by the autoencoder network. Unfortunately, these methods, which consider only the reconstruction of the original input data, can hardly guarantee a latent representation suited to data distributed in subspaces, which inevitably limits their performance in practice. In this paper, we propose a novel deep subspace clustering method based on a latent distribution-preserving autoencoder, which introduces a distribution consistency loss to guide the learning of a distribution-preserving latent representation, and consequently enables a strong capacity for characterizing real-world data for subspace clustering. Experimental results on several public databases show that our method achieves significant improvement compared with the state-of-the-art subspace clustering methods.
2

Renaud, J. E., and G. A. Gabriele. "Sequential Global Approximation in Non-Hierarchic System Decomposition and Optimization." In ASME 1991 Design Technical Conferences. American Society of Mechanical Engineers, 1991. http://dx.doi.org/10.1115/detc1991-0086.

Abstract:
A procedure for the optimization of non-hierarchic systems by decomposition into reduced subspaces is presented. Sequential global approximation is proposed as a coordination procedure for subspace optimizations. The same objective function and cumulative constraints are imposed at each subspace. Non-local functions are approximated at the subspaces using global sensitivities. The method optimizes the subspace problems concurrently, allowing for parallel processing. Following each sequence of concurrent subspace optimizations, an approximation to the global problem is formed using design data accumulated during the subspace optimizations. The solution of the global approximation problem is used as the starting point for subsequent subspace optimizations in an iterative solution procedure. Preliminary studies on two engineering design examples illustrate the method's potential.
3

Ying, Shihui, Lipeng Cai, Changzhou He, and Yaxin Peng. "Geometric Understanding for Unsupervised Subspace Learning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/579.

Abstract:
In this paper, we address unsupervised subspace learning from a geometric viewpoint. First, we formulate subspace learning as an inverse problem on the Grassmannian manifold by considering all subspaces as points on it. Then, to make the model computable, we parameterize the Grassmannian manifold using an orbit of the rotation group action on standard subspaces, which are spanned by the orthonormal basis. Further, to improve robustness, we introduce a low-rank regularizer which makes the dimension of the subspace as low as possible. Thus, the subspace learning problem is transferred to a minimization problem over the rotation and the dimension. We then adopt an alternating iterative strategy to optimize these variables, where a structure-preserving method, based on the geodesic structure of the rotation group, is designed to update the rotation. Finally, we compare the proposed approach with six state-of-the-art methods on three different kinds of real datasets. The experimental results validate that our proposed method outperforms all compared methods.
4

Tripathy, Rohit, and Ilias Bilionis. "Deep Active Subspaces: A Scalable Method for High-Dimensional Uncertainty Propagation." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-98099.

Abstract:
A problem of considerable importance within the field of uncertainty quantification (UQ) is the development of efficient methods for the construction of accurate surrogate models. Such efforts are particularly important to applications constrained by high-dimensional uncertain parameter spaces. The difficulty of accurate surrogate modeling in such systems is further compounded by data scarcity brought about by the large cost of forward model evaluations. Traditional response surface techniques, such as Gaussian process regression (or Kriging) and polynomial chaos, are difficult to scale to high dimensions. To make surrogate modeling tractable in expensive high-dimensional systems, one must resort to dimensionality reduction of the stochastic parameter space. A recent dimensionality reduction technique that has shown great promise is the method of 'active subspaces'. The classical formulation of active subspaces, unfortunately, requires gradient information from the forward model, which is often impossible to obtain. In this work, we present a simple, scalable, gradient-free method for recovering active subspaces in high-dimensional stochastic systems that relies on a reparameterization of the orthogonal active subspace projection matrix, and we couple this formulation with deep neural networks. We demonstrate our approach on challenging synthetic datasets and show favorable predictive comparison to classical active subspaces.
5

Arora, Akhil, Alberto Garcia-Duran, and Robert West. "Low-Rank Subspaces for Unsupervised Entity Linking." In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.emnlp-main.634.

6

Xie, Zhihui, Handong Zhao, Tong Yu, and Shuai Li. "Discovering Low-rank Subspaces for Language-agnostic Multilingual Representations." In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.emnlp-main.379.

7

Smith, Malcolm J., T. S. Koko, and I. R. Orisamolu. "Comparative Assessment of Optimal Control Methods With Integrated Performance Constraints." In ASME 1998 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/imece1998-0947.

Abstract:
The relative performance of optimal vibration control methods is investigated within the context of integrated finite element modeling of piezothermoelastic structures. An IMSC and an LQR optimal control method are modified to include explicit performance constraints related to the transient settling time and steady-state gain of the modal response. When this approach is applied to the IMSC method, the result is a pole placement technique; when it is applied to the LQR method operating in a modal subspace, the result is a method incorporating both optimal and pole placement features. The two methods are compared by means of a transient response analysis of an aluminum strip with piezoceramic sensing and actuating patches. For identical performance constraints and identical modal subspaces, the maximum actuator voltages and their distribution are found to vary according to the control design method and the number of modes under control. In particular, the maximum actuator voltages are lower for LQR than for IMSC, which is attributed to the difficulty of controlling the second bending and first torsional modes of the strip independently. It is also found that, for a given number of controlled modes, the maximum actuator voltage varies in a manner approximately inversely proportional to the settling time.
8

Bahamonde, Juan S., Matteo Pini, and Piero Colonna. "ACTIVE SUBSPACES FOR THE PRELIMINARY FLUID DYNAMIC DESIGN OF UNCONVENTIONAL TURBOMACHINERY." In VII European Congress on Computational Methods in Applied Sciences and Engineering. Athens: Institute of Structural Analysis and Antiseismic Research School of Civil Engineering National Technical University of Athens (NTUA) Greece, 2016. http://dx.doi.org/10.7712/100016.2433.7806.

9

Al-Seraji, Najm Abdulzahra Makhrib, Abeer Jabbar Al-Rikabi, and Emad Bakr Al-Zangana. "Represent the space PG(3, 8) by subspaces and sub-geometries." In INTERNATIONAL CONFERENCE OF COMPUTATIONAL METHODS IN SCIENCES AND ENGINEERING ICCMSE 2021. AIP Publishing, 2023. http://dx.doi.org/10.1063/5.0114859.

10

Chapron, Maxime, Christophe Blondeau, Michel Bergmann, Itham Salah el Din, and Denis Sipp. "SCALABLE CLUSTERED ACTIVE SUBSPACES FOR KRIGING REGRESSION IN HIGH DIMENSION." In 15th International Conference on Evolutionary and Deterministic Methods for Design, Optimization and Control. Athens: Institute of Structural Analysis and Antiseismic Research National Technical University of Athens, 2023. http://dx.doi.org/10.7712/140123.10192.18902.


Reports of organizations on the topic "Subspaces methods":

1

Harris, D. B. Characterizing source regions with signal subspace methods: Theory and computational methods. Office of Scientific and Technical Information (OSTI), December 1989. http://dx.doi.org/10.2172/5041042.

2

Wang, Qiqi. Active Subspace Methods for Data-Intensive Inverse Problems. Office of Scientific and Technical Information (OSTI), April 2017. http://dx.doi.org/10.2172/1353429.

3

Constantine, Paul. Active Subspace Methods for Data-Intensive Inverse Problems. Office of Scientific and Technical Information (OSTI), September 2019. http://dx.doi.org/10.2172/1566065.

4

Carson, Erin, Nicholas Knight, and James Demmel. Avoiding Communication in Two-Sided Krylov Subspace Methods. Fort Belvoir, VA: Defense Technical Information Center, August 2011. http://dx.doi.org/10.21236/ada555879.

5

Meza, Juan C., and W. W. Symes. Deflated Krylov Subspace Methods for Nearly Singular Linear Systems. Fort Belvoir, VA: Defense Technical Information Center, February 1987. http://dx.doi.org/10.21236/ada455101.

6

Needell, Deanna, and Rachel Ward. Two-subspace Projection Method for Coherent Overdetermined Systems. Claremont Colleges Digital Library, 2012. http://dx.doi.org/10.5642/tspmcos.2012.01.

7

Bui-Thanh, Tan. Active Subspace Methods for Data-Intensive Inverse Problems (Final Report). Office of Scientific and Technical Information (OSTI), February 2019. http://dx.doi.org/10.2172/1494035.

8

Li, Zhilin, and Kazufumi Ito. Subspace Iteration and Immersed Interface Methods: Theory, Algorithm, and Applications. Fort Belvoir, VA: Defense Technical Information Center, August 2010. http://dx.doi.org/10.21236/ada532686.

9

Elman, Howard C. Multigrid and Krylov Subspace Methods for the Discrete Stokes Equations. Fort Belvoir, VA: Defense Technical Information Center, June 1994. http://dx.doi.org/10.21236/ada598913.

10

Freund, R. W., and N. M. Nachtigal. A new Krylov-subspace method for symmetric indefinite linear systems. Office of Scientific and Technical Information (OSTI), October 1994. http://dx.doi.org/10.2172/10190810.

