A selection of scientific literature on the topic "Structured sparsity model"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Structured sparsity model".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen source in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf file and read the online abstract, if these are available in the source metadata.
Journal articles on the topic "Structured sparsity model"
Niu, Wei, Mengshu Sun, Zhengang Li, Jou-An Chen, Jiexiong Guan, Xipeng Shen, Yanzhi Wang, Sijia Liu, Xue Lin, and Bin Ren. "RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9179–87. http://dx.doi.org/10.1609/aaai.v35i10.17108.
Sun, Jun, Qidong Chen, Jianan Sun, Tao Zhang, Wei Fang, and Xiaojun Wu. "Graph-structured multitask sparsity model for visual tracking." Information Sciences 486 (June 2019): 133–47. http://dx.doi.org/10.1016/j.ins.2019.02.043.
Ruan, Xiaofeng, Yufan Liu, Bing Li, Chunfeng Yuan, and Weiming Hu. "DPFPS: Dynamic and Progressive Filter Pruning for Compressing Convolutional Neural Networks from Scratch." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2495–503. http://dx.doi.org/10.1609/aaai.v35i3.16351.
Wu, Hao, Shu Li, Yingpin Chen, and Zhenming Peng. "Seismic impedance inversion using second-order overlapping group sparsity with A-ADMM." Journal of Geophysics and Engineering 17, no. 1 (November 22, 2019): 97–116. http://dx.doi.org/10.1093/jge/gxz094.
Zhu, Zijiang, Junshan Li, Yi Hu, and Xiaoguang Deng. "Research on Age Estimation Algorithm Based on Structured Sparsity." International Journal of Pattern Recognition and Artificial Intelligence 33, no. 06 (April 21, 2019): 1956006. http://dx.doi.org/10.1142/s0218001419560068.
Zhang, Lingli. "Total variation with modified group sparsity for CT reconstruction under low SNR." Journal of X-Ray Science and Technology 29, no. 4 (July 27, 2021): 645–62. http://dx.doi.org/10.3233/xst-200833.
Ou, Weihua, and Wenjun Xiao. "Structured sparsity model with spatial similarity regularisation for semantic feature selection." International Journal of Advanced Media and Communication 7, no. 2 (2017): 138. http://dx.doi.org/10.1504/ijamc.2017.085941.
Xiao, Wenjun, and Weihua Ou. "Structured sparsity model with spatial similarity regularisation for semantic feature selection." International Journal of Advanced Media and Communication 7, no. 2 (2017): 138. http://dx.doi.org/10.1504/ijamc.2017.10006892.
Ma, Xiaolong, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, and Yanzhi Wang. "PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-Time Execution on Mobile Devices." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5117–24. http://dx.doi.org/10.1609/aaai.v34i04.5954.
Javanmardi, Mohammadreza, Amir Hossein Farzaneh, and Xiaojun Qi. "A Robust Structured Tracker Using Local Deep Features." Electronics 9, no. 5 (May 20, 2020): 846. http://dx.doi.org/10.3390/electronics9050846.
Повний текст джерелаДисертації з теми "Structured sparsity model"
Tillander, Annika. "Classification models for high-dimensional data with sparsity patterns." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-95664.
Повний текст джерелаMed dagens teknik, till exempel spektrometer och genchips, alstras data i stora mängder. Detta överflöd av data är inte bara till fördel utan orsakar även vissa problem, vanligtvis är antalet variabler (p) betydligt fler än antalet observation (n). Detta ger så kallat högdimensionella data vilket kräver nya statistiska metoder, då de traditionella metoderna är utvecklade för den omvända situationen (p<n). Dessutom är det vanligtvis väldigt få av alla dessa variabler som är relevanta för något givet projekt och styrkan på informationen hos de relevanta variablerna är ofta svag. Därav brukar denna typ av data benämnas som gles och svag (sparse and weak). Vanligtvis brukar identifiering av de relevanta variablerna liknas vid att hitta en nål i en höstack. Denna avhandling tar upp tre olika sätt att klassificera i denna typ av högdimensionella data. Där klassificera innebär, att genom ha tillgång till ett dataset med både förklaringsvariabler och en utfallsvariabel, lära en funktion eller algoritm hur den skall kunna förutspå utfallsvariabeln baserat på endast förklaringsvariablerna. Den typ av riktiga data som används i avhandlingen är microarrays, det är cellprov som visar aktivitet hos generna i cellen. Målet med klassificeringen är att med hjälp av variationen i aktivitet hos de tusentals gener (förklaringsvariablerna) avgöra huruvida cellprovet kommer från cancervävnad eller normalvävnad (utfallsvariabeln). Det finns klassificeringsmetoder som kan hantera högdimensionella data men dessa är ofta beräkningsintensiva, därav fungera de ofta bättre för diskreta data. Genom att transformera kontinuerliga variabler till diskreta (diskretisera) kan beräkningstiden reduceras och göra klassificeringen mer effektiv. I avhandlingen studeras huruvida av diskretisering påverkar klassificeringens prediceringsnoggrannhet och en mycket effektiv diskretiseringsmetod för högdimensionella data föreslås. Linjära klassificeringsmetoder har fördelen att vara stabila. 
Nackdelen är att de kräver en inverterbar kovariansmatris och vilket kovariansmatrisen inte är för högdimensionella data. I avhandlingen föreslås ett sätt att skatta inversen för glesa kovariansmatriser med blockdiagonalmatris. Denna matris har dessutom fördelen att det leder till additiv klassificering vilket möjliggör att välja hela block av relevanta variabler. I avhandlingen presenteras även en metod för att identifiera och välja ut blocken. Det finns också probabilistiska klassificeringsmetoder som har fördelen att ge sannolikheten att tillhöra vardera av de möjliga utfallen för en observation, inte som de flesta andra klassificeringsmetoder som bara predicerar utfallet. I avhandlingen förslås en sådan Bayesiansk metod, givet den blockdiagonala matrisen och normalfördelade utfallsklasser. De i avhandlingen förslagna metodernas relevans och fördelar är visade genom att tillämpa dem på simulerade och riktiga högdimensionella data.
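The block-diagonal idea summarized in the abstract above can be sketched in a few lines: when p > n the full sample covariance is singular, but each small block of a known variable partition can be inverted on its own. The sketch below is illustrative only; the function name, the partition, and the small ridge term are assumptions, not details taken from the thesis.

```python
import numpy as np

def block_diagonal_precision(X, blocks):
    """Estimate an inverse covariance as a block-diagonal matrix.

    X: (n, p) data matrix; blocks: list of index arrays partitioning
    the p variables. Inverting each small block avoids inverting the
    full (singular) p x p sample covariance when p > n.
    """
    p = X.shape[1]
    precision = np.zeros((p, p))
    for idx in blocks:
        cov_b = np.cov(X[:, idx], rowvar=False)
        # small ridge so each block is numerically invertible
        cov_b = cov_b + 1e-6 * np.eye(len(idx))
        precision[np.ix_(idx, idx)] = np.linalg.inv(cov_b)
    return precision
```

The resulting precision matrix is exactly zero between blocks, which is what makes the downstream classifier additive over blocks, as described above.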
Vinyes, Marina. "Convex matrix sparsity for demixing with an application to graphical model structure estimation." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1130/document.
The goal of machine learning is to learn a model from some data that will make accurate predictions on data that it has not seen before. In order to obtain a model that will generalize on new data, and avoid overfitting, we need to restrain the model. These restrictions are usually some a priori knowledge of the structure of the model. First considered approaches included a regularization, first ridge regression and later Lasso regularization for inducing sparsity in the solution. Sparsity, also known as parsimony, has emerged as a fundamental concept in machine learning. Parsimonious models are appealing since they provide more interpretability and better generalization (avoid overfitting) through the reduced number of parameters. Beyond general sparsity and in many cases, models are constrained structurally so they have a simple representation in terms of some fundamental elements, consisting for example of a collection of specific vectors, matrices or tensors. These fundamental elements are called atoms. In this context, atomic norms provide a general framework for estimating these sorts of models. The goal of this thesis is to use the framework of convex sparsity provided by atomic norms to study a form of matrix sparsity. First, we develop an efficient algorithm based on Frank-Wolfe methods that is particularly adapted to solve problems with an atomic norm regularization. Then, we focus on the structure estimation of Gaussian graphical models, where the structure of the graph is encoded in the precision matrix and study the case with unobserved variables. We propose a convex formulation with an algorithmic approach and provide a theoretical result that states necessary conditions for recovering the desired structure. Finally, we consider the problem of signal demixing into two or more components via the minimization of a sum of norms or gauges, encoding each a structural prior on the corresponding components to recover.
In particular, we provide general exact recovery guarantees in the noiseless setting based on incoherence measures.
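The atomic-norm viewpoint in the abstract above pairs naturally with Frank-Wolfe, whose linear minimization oracle only needs the single best atom at each step. Below is a minimal sketch for the classical case where the atoms are signed, scaled basis vectors (i.e., the l1 ball); the step-size rule and names are standard textbook choices, not taken from the thesis.

```python
import numpy as np

def frank_wolfe_l1(grad, x0, tau, steps=200):
    """Frank-Wolfe over the l1 ball of radius tau.

    At each step the linear minimization oracle returns the best
    atom (a signed, scaled basis vector), so iterates stay sparse:
    after k steps, x is a combination of at most k atoms.
    """
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        i = np.argmax(np.abs(g))          # index of the best atom
        s = np.zeros_like(x)
        s[i] = -tau * np.sign(g[i])       # LMO solution on the l1 ball
        gamma = 2.0 / (k + 2.0)           # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x
```

For instance, to minimize ||Ax - b||^2 subject to ||x||_1 <= tau, pass grad = lambda x: 2 * A.T @ (A @ x - b). The same template extends to other atomic norms by swapping the oracle.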
Smith, Chandler B. "Sparsity Constrained Inverse Problems - Application to Vibration-based Structural Health Monitoring." ScholarWorks @ UVM, 2019. https://scholarworks.uvm.edu/graddis/1143.
Kim, Yookyung. "Compressed Sensing Reconstruction Using Structural Dependency Models." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/238613.
McGrady, Christopher Dwain. "Linking Rheological and Processing Behavior to Molecular Structure in Sparsely-Branched Polyethylenes Using Constitutive Relationships." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/37924.
Ph. D.
Yan, Enxu. "Sublinear-Time Learning and Inference for High-Dimensional Models." Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1207.
Повний текст джерелаRösmann, Christoph [Verfasser], Torsten [Akademischer Betreuer] Bertram, and Martin [Gutachter] Mönnigmann. "Time-optimal nonlinear model predictive control : Direct transcription methods with variable discretization and structural sparsity exploitation / Christoph Rösmann ; Gutachter: Martin Mönnigmann ; Betreuer: Torsten Bertram." Dortmund : Universitätsbibliothek Dortmund, 2019. http://d-nb.info/1199106364/34.
Roulet, Vincent. "On the geometry of optimization problems and their structure." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE069/document.
In numerous fields such as machine learning, operational research or circuit design, a task is modeled by a set of parameters to be optimized in order to take the best possible decision. Formally, the problem amounts to minimize a function describing the desired objective with iterative algorithms. The development of these latter depends then on the characterization of the geometry of the function or the structure of the problem. In a first part, this thesis studies how sharpness of a function around its minimizers can be exploited by restarting classical algorithms. Optimal schemes are presented for general convex problems. They require however a complete description of the function that is rarely available. Adaptive strategies are therefore developed and shown to achieve nearly optimal rates. A specific analysis is then carried out for sparse problems that seek for compressed representation of the variables of the problem. Their underlying conic geometry, that describes sharpness of the objective, is shown to control both the statistical performance of the problem and the efficiency of dedicated optimization methods by a single quantity. A second part is dedicated to machine learning problems. These perform predictive analysis of data from large set of examples. A generic framework is presented to both solve the prediction problem and simplify it by grouping either features, samples or tasks. Systematic algorithmic approaches are developed by analyzing the geometry induced by partitions of the data. A theoretical analysis is then carried out for grouping features by analogy to sparse methods.
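Restarting is simple to sketch. Below, a Nesterov-style accelerated gradient loop resets its momentum whenever the objective value increases, a common adaptive restart heuristic in the spirit of the schemes discussed above (the function names, step size, and restart test are illustrative assumptions, not the exact schemes of the thesis).

```python
import numpy as np

def agd_with_restart(grad, f, x0, lr, iters=500):
    """Accelerated gradient descent with function-value restarts.

    Whenever the objective goes up, the momentum term is reset.
    Adaptive restarts of this kind recover fast rates on sharp
    functions without knowing the sharpness constants in advance.
    """
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    f_prev = f(x)
    for _ in range(iters):
        x_new = y - lr * grad(y)                      # gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2      # momentum update
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # extrapolation
        f_new = f(x_new)
        if f_new > f_prev:      # objective increased: restart momentum
            t_new = 1.0
            y = x_new.copy()
        x, t, f_prev = x_new, t_new, f_new
    return x
```

On a strongly convex quadratic this restarted loop converges linearly even though neither the strong convexity constant nor any sharpness parameter is supplied.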
Kolar, Mladen. "Uncovering Structure in High-Dimensions: Networks and Multi-task Learning Problems." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/229.
Todeschini, Adrien. "Probabilistic and Bayesian nonparametric approaches for recommender systems and networks." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0237/document.
We propose two novel approaches for recommender systems and networks. In the first part, we first give an overview of recommender systems and concentrate on the low-rank approaches for matrix completion. Building on a probabilistic approach, we propose novel penalty functions on the singular values of the low-rank matrix. By exploiting a mixture model representation of this penalty, we show that a suitably chosen set of latent variables enables to derive an expectation-maximization algorithm to obtain a maximum a posteriori estimate of the completed low-rank matrix. The resulting algorithm is an iterative soft-thresholded algorithm which iteratively adapts the shrinkage coefficients associated to the singular values. The algorithm is simple to implement and can scale to large matrices. We provide numerical comparisons between our approach and recent alternatives showing the interest of the proposed approach for low-rank matrix completion. In the second part, we first introduce some background on Bayesian nonparametrics and in particular on completely random measures (CRMs) and their multivariate extension, the compound CRMs. We then propose a novel statistical model for sparse networks with overlapping community structure. The model is based on representing the graph as an exchangeable point process, and naturally generalizes existing probabilistic models with overlapping block-structure to the sparse regime. Our construction builds on vectors of CRMs, and has interpretable parameters, each node being assigned a vector representing its level of affiliation to some latent communities. We develop methods for simulating this class of random graphs, as well as to perform posterior inference. We show that the proposed approach can recover interpretable structure from two real-world networks and can handle graphs with thousands of nodes and tens of thousands of edges.
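The iterative soft-thresholding of singular values described in the abstract above can be illustrated with a generic fixed-threshold variant (the thesis adapts the shrinkage per singular value, which this sketch does not reproduce; the function name, `lam`, and the iteration count are assumptions for illustration).

```python
import numpy as np

def svt_complete(M, mask, lam, iters=300):
    """Matrix completion by iterative singular-value soft-thresholding.

    M: observed matrix with zeros at unobserved entries; mask: boolean
    array marking observed entries; lam: fixed shrinkage applied to
    every singular value (a simplification of adaptive shrinkage).
    """
    X = M.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - lam, 0.0)      # soft-threshold the spectrum
        X = U @ np.diag(s) @ Vt           # low-rank reconstruction
        X[mask] = M[mask]                 # keep observed entries fixed
    return X
```

Each pass shrinks the spectrum toward a low-rank matrix and then re-imposes the observed entries, so the missing entries are filled in with the low-rank model's predictions.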
Book chapters on the topic "Structured sparsity model"
Karimi, Amir-Hossein, Julius von Kügelgen, Bernhard Schölkopf, and Isabel Valera. "Towards Causal Algorithmic Recourse." In xxAI - Beyond Explainable AI, 139–66. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_8.
Carson, Dean B., Doris A. Carson, Per Axelsson, Peter Sköld, and Gabriella Sköld. "Disruptions and Diversions: The Demographic Consequences of Natural Disasters in Sparsely Populated Areas." In The Demography of Disasters, 81–99. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49920-4_5.
Deng, Zhaoxian, and Zhiqiang Zeng. "Multi-View Subspace Clustering by Combining ℓ2,p-Norm and Multi-Rank Minimization of Tensors." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220020.
Scheir, Peter, Peter Prettenhofer, Stefanie N. Lindstaedt, and Chiara Ghidini. "An Associative and Adaptive Network Model For Information Retrieval In The Semantic Web." In Advances in Semantic Web and Information Systems, 309–44. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-992-2.ch014.
Cui, Jin, Yifei Zou, and Siyuan Zhang. "Using MRNet to Predict Lunar Rock Categories Detected by Chang’e 5 Probe." In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220491.
Повний текст джерелаТези доповідей конференцій з теми "Structured sparsity model"
Zhao, Chen, Jian Zhang, Siwei Ma, Ruiqin Xiong, and Wen Gao. "A dual structured-sparsity model for compressive-sensed video reconstruction." In 2015 Visual Communications and Image Processing (VCIP). IEEE, 2015. http://dx.doi.org/10.1109/vcip.2015.7457804.
Cai, Xiao, Feiping Nie, Weidong Cai, and Heng Huang. "New Graph Structured Sparsity Model for Multi-label Image Annotations." In 2013 IEEE International Conference on Computer Vision (ICCV). IEEE, 2013. http://dx.doi.org/10.1109/iccv.2013.104.
Feng, Fangchen, and Matthieu Kowalski. "Hybrid model and structured sparsity for under-determined convolutive audio source separation." In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6854893.
Wang, Zheng, Feiping Nie, Lai Tian, Rong Wang, and Xuelong Li. "Discriminative Feature Selection via A Structured Sparse Subspace Learning Module." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/416.
Xu, Jie, Cheng Deng, Xinbo Gao, Dinggang Shen, and Heng Huang. "Predicting Alzheimer's Disease Cognitive Assessment via Robust Low-Rank Structured Sparse Model." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/542.
Zhu, Xiaotian, Wengang Zhou, and Houqiang Li. "Improving Deep Neural Network Sparsity through Decorrelation Regularization." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/453.
Niu, Yue, and Hongjie Zhang. "A Self-Aggregated Hierarchical Topic Model for Short Texts." In 2nd International Conference on Machine Learning, IOT and Blockchain (MLIOB 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111212.
Fan, Mingyu, Xiaojun Chang, Xiaoqin Zhang, Di Wang, and Liang Du. "Top-k Supervise Feature Selection via ADMM for Integer Programming." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/228.
Li, Yanyu, Pu Zhao, Geng Yuan, Xue Lin, Yanzhi Wang, and Xin Chen. "Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/449.
Liu, Huan, Qinghua Zheng, Minnan Luo, Dingwen Zhang, Xiaojun Chang, and Cheng Deng. "How Unlabeled Web Videos Help Complex Event Detection?" In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/564.
Повний текст джерелаЗвіти організацій з теми "Structured sparsity model"
Yu, Guoshen, Guillermo Sapiro, and Stephane Mallat. Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity. Fort Belvoir, VA: Defense Technical Information Center, June 2010. http://dx.doi.org/10.21236/ada540722.
Rahmani, Mehran, Xintong Ji, and Sovann Reach Kiet. Damage Detection and Damage Localization in Bridges with Low-Density Instrumentations Using the Wave-Method: Application to a Shake-Table Tested Bridge. Mineta Transportation Institute, September 2022. http://dx.doi.org/10.31979/mti.2022.2033.