Academic literature on the topic 'Novel recovery algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Novel recovery algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Novel recovery algorithms"

1

Liang, Rui Hua, Xin Peng Du, Qing Bo Zhao, and Li Zhi Cheng. "Sparse Signal Recovery Based on Simulated Annealing." Applied Mechanics and Materials 321-324 (June 2013): 1295–98. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.1295.

Full text
Abstract:
Sparse signal recovery is a hot topic in the fields of optimization theory and signal processing. Two main algorithmic approaches, i.e., greedy pursuit algorithms and convex relaxation algorithms, have been extensively used to solve this problem. However, these algorithms cannot guarantee to find the globally optimal solution, and thus they perform poorly when the sparsity level is relatively large. Based on the simulated annealing algorithm and greedy pursuit algorithms, we propose a novel algorithm for solving the sparse recovery problem. Numerical simulations show that the proposed algorithm has very good recovery performance.
APA, Harvard, Vancouver, ISO, and other styles
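The abstract above combines simulated annealing with greedy pursuit but gives no implementation details. The following is a minimal, hypothetical sketch of simulated annealing over candidate supports for sparse recovery, where the energy is the least-squares residual on the current support; the cooling schedule, swap proposal, and parameter names are assumptions, not the authors' algorithm.

```python
import numpy as np

def sa_sparse_recovery(A, y, k, n_iter=2000, T0=1.0, cooling=0.995, seed=0):
    """Toy simulated-annealing search over k-sparse supports (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    support = rng.choice(n, size=k, replace=False)

    def residual(S):
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        return np.linalg.norm(y - A[:, S] @ coef), coef

    energy, coef = residual(support)
    T = T0
    for _ in range(n_iter):
        # Propose swapping one index in the support for one outside it.
        out_idx = rng.integers(k)
        candidates = np.setdiff1d(np.arange(n), support)
        new_support = support.copy()
        new_support[out_idx] = rng.choice(candidates)
        new_energy, new_coef = residual(new_support)
        # Metropolis rule: always accept improvements, sometimes accept worse moves.
        if new_energy < energy or rng.random() < np.exp((energy - new_energy) / max(T, 1e-12)):
            support, energy, coef = new_support, new_energy, new_coef
        T *= cooling

    x = np.zeros(n)
    x[support] = coef
    return x
```

A greedy pursuit solution (such as the OMP sketch after the next entry) could serve as the initial support instead of a random one.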
2

Zhu, Ying, Yong Xing Jia, Chuan Zhen Rong, and Yu Yang. "Study on Compressed Sensing Recovery Algorithms." Applied Mechanics and Materials 433-435 (October 2013): 322–25. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.322.

Full text
Abstract:
Compressive sensing is a novel signal sampling theory for the case in which the signal is sparse or compressible. In this case, the signal can be reconstructed from a small number of measured values. This paper reviews the ideas behind OMP, GBP, and SP, presents the algorithms, analyzes the experimental results, and suggests some improvements.
APA, Harvard, Vancouver, ISO, and other styles
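Of the three algorithms reviewed above, orthogonal matching pursuit (OMP) is the simplest to state. Below is a minimal, generic OMP sketch (not code from the paper), assuming a measurement matrix A with unit-norm columns and a known sparsity level k.

```python
import numpy as np

def omp(A, y, k):
    """Generic orthogonal matching pursuit: pick k columns greedily, refit by least squares."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-estimate coefficients on the enlarged support and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Tiny usage example with a synthetic 3-sparse signal.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(256); x_true[[5, 80, 200]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)
```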
3

Battiston, Adrian, Inna Sharf, and Meyer Nahon. "Attitude estimation for collision recovery of a quadcopter unmanned aerial vehicle." International Journal of Robotics Research 38, no. 10-11 (August 8, 2019): 1286–306. http://dx.doi.org/10.1177/0278364919867397.

Full text
Abstract:
An extensive evaluation of attitude estimation algorithms in simulation and experiments is performed to determine their suitability for a collision recovery pipeline of a quadcopter unmanned aerial vehicle. A multiplicative extended Kalman filter (MEKF), unscented Kalman filter (UKF), complementary filter, [Formula: see text] filter, and novel adaptive varieties of the selected filters are compared. The experimental quadcopter uses a PixHawk flight controller, and the algorithms are implemented using data from only the PixHawk inertial measurement unit (IMU). Performance of the aforementioned filters is first evaluated in a simulation environment using modified sensor models to capture the effects of collision on inertial measurements. Simulation results help define the efficacy and use cases of the conventional and novel algorithms in a quadcopter collision scenario. An analogous evaluation is then conducted by post-processing logged sensor data from collision flight tests, to gain new insights into algorithms’ performance in the transition from simulated to real data. The post-processing evaluation compares each algorithm’s attitude estimate, including the stock attitude estimator of the PixHawk controller, to data collected by an offboard infrared motion capture system. Based on this evaluation, two promising algorithms, the MEKF and an adaptive [Formula: see text] filter, are selected for implementation on the physical quadcopter in the control loop of the collision recovery pipeline. Experimental results show an improvement in the metric used to evaluate experimental performance, the time taken to recover from the collision, when compared with the stock attitude estimator on the PixHawk (PX4) software.
APA, Harvard, Vancouver, ISO, and other styles
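The filters compared above are standard attitude estimators. As a flavor of the simplest of them, here is a minimal complementary-filter sketch for roll and pitch from IMU data; the gain value and variable names are illustrative assumptions, not the paper's implementation.

```python
import math

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One update step: blend integrated gyro rates with accelerometer gravity angles.
    gyro = (p, q) body rates in rad/s, accel = (ax, ay, az) in m/s^2 (illustrative)."""
    ax, ay, az = accel
    # Angles implied by the gravity vector (valid when acceleration is mostly gravity).
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # High-pass the gyro integration, low-pass the accelerometer angles.
    roll = alpha * (roll + gyro[0] * dt) + (1 - alpha) * roll_acc
    pitch = alpha * (pitch + gyro[1] * dt) + (1 - alpha) * pitch_acc
    return roll, pitch
```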
4

Wang, Runsong, Xuelian Li, Juntao Gao, Hui Li, and Baocang Wang. "Quantum rotational cryptanalysis for preimage recovery of round-reduced Keccak." Quantum Information & Computation 23, no. 3&4 (February 2023): 223–34. http://dx.doi.org/10.26421/qic23.3-4-3.

Full text
Abstract:
The Exclusive-OR Sum-of-Product (ESOP) minimization problem has long been of interest to the research community because of its importance in classical logic design (including low-power design and design for test), reversible logic synthesis, and knowledge discovery, among other applications. However, no exact minimal minimization method has been presented for more than seven variables on arbitrary functions. This paper presents a novel quantum-classical hybrid algorithm for the exact minimal ESOP minimization of incompletely specified Boolean functions. This algorithm constructs oracles from sets of constraints and leverages the quantum speedup offered by Grover's algorithm to find solutions to these oracles, thereby improving over classical algorithms. Improved encoding of ESOP expressions results in substantially fewer decision variables compared to many existing algorithms for many classes of Boolean functions. This paper also extends the idea of exact minimal ESOP minimization to additionally minimize the cost of realizing an ESOP expression as a quantum circuit. To the extent of the authors' knowledge, such a method has never been published. This algorithm was tested on completely and incompletely specified Boolean functions via quantum simulation.
APA, Harvard, Vancouver, ISO, and other styles
5

Acharya, Deep Shekhar, and Sudhansu Kumar Mishra. "Optimal Consensus Recovery of Multi-agent System Subjected to Agent Failure." International Journal on Artificial Intelligence Tools 29, no. 06 (September 2020): 2050017. http://dx.doi.org/10.1142/s0218213020500177.

Full text
Abstract:
Multi-Agent Systems are susceptible to external disturbances, sensor failures, or collapse of the communication channel/media. Such failures disconnect the agent network and thereby hamper the consensus of the system. Quick recovery of consensus is vital to continue the normal operation of an agent-based system. However, only limited works in the past have investigated the problem of recovering the consensus of an agent-based system in the event of a failure. This work proposes a novel algorithmic approach to recover the lost consensus when an agent-based system is subjected to the failure of an agent. The main focus of the algorithm is to reconnect the multi-agent network so as to increase the connectivity of the network post recovery. The proposed algorithm may be applied to both linear and non-linear continuous-time consensus protocols. To verify the efficiency of the proposed algorithm, it has been applied and tested on two multi-agent networks. The results thus obtained have been compared with other state-of-the-art recovery algorithms. Finally, it has been established that the proposed algorithm achieves better connectivity and, therefore, faster consensus than the other state-of-the-art algorithms.
APA, Harvard, Vancouver, ISO, and other styles
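The abstract above reconnects the network so as to increase post-recovery connectivity but does not spell out how. A common proxy for connectivity is the algebraic connectivity (second-smallest Laplacian eigenvalue); the sketch below, which is an assumption rather than the paper's algorithm, greedily picks the single new edge that maximizes that eigenvalue after an agent is removed.

```python
import numpy as np
from itertools import combinations

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian (Fiedler value)."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def best_recovery_edge(adj, failed_agent):
    """Remove a failed agent, then choose the missing edge whose addition
    maximizes the algebraic connectivity of the surviving network (illustrative)."""
    keep = [i for i in range(adj.shape[0]) if i != failed_agent]
    sub = adj[np.ix_(keep, keep)]
    best_edge, best_conn = None, algebraic_connectivity(sub)
    for i, j in combinations(range(len(keep)), 2):
        if sub[i, j] == 0:
            trial = sub.copy()
            trial[i, j] = trial[j, i] = 1
            conn = algebraic_connectivity(trial)
            if conn > best_conn:
                best_edge, best_conn = (keep[i], keep[j]), conn
    return best_edge, best_conn
```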
6

Shukla, Vasundhara, and Preety D. Swami. "Sparse Signal Recovery through Long Short-Term Memory Networks for Compressive Sensing-Based Speech Enhancement." Electronics 12, no. 14 (July 17, 2023): 3097. http://dx.doi.org/10.3390/electronics12143097.

Full text
Abstract:
This paper presents a novel speech enhancement approach based on compressive sensing (CS) which uses long short-term memory (LSTM) networks for the simultaneous recovery and enhancement of compressed speech signals. The advantage of this algorithm is that it does not require an iterative process to recover the compressed signals, which makes the recovery process fast and straightforward. Furthermore, the proposed approach does not require prior knowledge of signal and noise statistical properties for sensing matrix optimization, because the LSTM can directly extract and learn the required information from the training data. The proposed technique is evaluated against white, babble, and F-16 noises. To validate the effectiveness of the proposed approach, perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI), and signal-to-distortion ratio (SDR) were compared to other variants of OMP-based CS algorithms. The experimental outcomes show that the proposed approach achieves maximum improvements of 50.06%, 43.65%, and 374.16% for PESQ, STOI, and SDR, respectively, over the different variants of OMP-based CS algorithms.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Hongyang, Zhouchen Lin, Chao Zhang, and Junbin Gao. "Relations Among Some Low-Rank Subspace Recovery Models." Neural Computation 27, no. 9 (September 2015): 1915–50. http://dx.doi.org/10.1162/neco_a_00762.

Full text
Abstract:
Recovering intrinsic low-dimensional subspaces from data distributed on them is a key preprocessing step for many applications. In recent years, a lot of work has modeled subspace recovery as low-rank minimization problems. We find that some representative models, such as robust principal component analysis (R-PCA), robust low-rank representation (R-LRR), and robust latent low-rank representation (R-LatLRR), are actually deeply connected. More specifically, we discover that once a solution to one of the models is obtained, we can obtain the solutions to other models in closed-form formulations. Since R-PCA is the simplest, our discovery makes it the center of low-rank subspace recovery models. Our work has two important implications. First, R-PCA has a solid theoretical foundation. Under certain conditions, we could find globally optimal solutions to these low-rank models with overwhelming probability, although these models are nonconvex. Second, we can obtain significantly faster algorithms for these models by solving R-PCA first. The computation cost can be further cut by applying low-complexity randomized algorithms, for example, our novel [Formula: see text] filtering algorithm, to R-PCA. Although a formal proof of our [Formula: see text] filtering algorithm is not yet available, experiments verify the advantages of our algorithm over other state-of-the-art methods based on the alternating direction method.
APA, Harvard, Vancouver, ISO, and other styles
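Since the abstract above makes robust PCA (R-PCA) the center of the low-rank recovery models, a generic solver is a useful reference point. The sketch below is the standard inexact augmented Lagrange multiplier iteration for R-PCA (decompose M into a low-rank part L plus a sparse part S); it is a textbook method under assumed parameter choices, not the paper's novel randomized filtering algorithm.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_ialm(M, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Generic inexact-ALM R-PCA: min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update: singular-value thresholding of M - S + Y/mu.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: entrywise soft thresholding.
        S = soft_threshold(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y += mu * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
            break
    return L, S
```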
8

Liu, Cheng, Tong Wang, Kun Liu, and Xinying Zhang. "A Novel Sparse Bayesian Space-Time Adaptive Processing Algorithm to Mitigate Off-Grid Effects." Remote Sensing 14, no. 16 (August 11, 2022): 3906. http://dx.doi.org/10.3390/rs14163906.

Full text
Abstract:
Space-time adaptive processing (STAP) algorithms based on sparse recovery (SR) have been researched because of their low requirement for training snapshots. However, once some portion of the clutter is not located on the grid points, i.e., in off-grid cases, the performance of most SR-STAP algorithms degrades significantly. Reducing the grid interval can mitigate off-grid effects, but it brings strong column coherence of the dictionary, a heavy computational load, and a heavy storage load. In this paper, a sparse Bayesian learning approach is proposed to mitigate the off-grid effects. The algorithm employs an efficient sequential addition and deletion of dictionary atoms to estimate the clutter subspace, which means that strong column coherence has no effect on the performance of the proposed algorithm. Moreover, the proposed algorithm does not require a heavy computational or storage load. Off-grid effects can be mitigated with the proposed algorithm when the grid interval is sufficiently small. The excellent performance of the novel algorithm is demonstrated on simulated data.
APA, Harvard, Vancouver, ISO, and other styles
9

Song, Chen, Jiarui Deng, Zehao Liu, Bingnan Wang, Yirong Wu, and Hui Bi. "Complex-Valued Sparse SAR-Image-Based Target Detection and Classification." Remote Sensing 14, no. 17 (September 2, 2022): 4366. http://dx.doi.org/10.3390/rs14174366.

Full text
Abstract:
It is known that synthetic aperture radar (SAR) images obtained by typical matched filtering (MF)-based algorithms always suffer from serious noise, sidelobes and clutter. However, improving image quality means that the complexity of SAR systems increases, which affects the applications of SAR images. The introduction of sparse signal processing technologies into SAR imaging offers a new way to solve this problem. Sparse SAR images obtained by sparse recovery algorithms show better image performance than typical complex SAR images, with lower sidelobes and higher signal-to-noise ratios (SNR). As the most widely applied fields of SAR images, target detection and target classification rely on SAR images of high quality. Therefore, in this paper, a target detection framework based on sparse images recovered by the complex approximate message passing (CAMP) algorithm and a novel classification network using sparse images reconstructed by the new iterative soft thresholding (BiIST) algorithm are proposed. Experimental results show that sparse SAR images perform better than images recovered by MF-based algorithms for both target detection and target classification, which validates the huge application potential of sparse images.
APA, Harvard, Vancouver, ISO, and other styles
10

Malik, Jameel, Ahmed Elhayek, and Didier Stricker. "WHSP-Net: A Weakly-Supervised Approach for 3D Hand Shape and Pose Recovery from a Single Depth Image." Sensors 19, no. 17 (August 31, 2019): 3784. http://dx.doi.org/10.3390/s19173784.

Full text
Abstract:
Hand shape and pose recovery is essential for many computer vision applications such as animation of a personalized hand mesh in a virtual environment. Although there are many hand pose estimation methods, only a few deep-learning-based algorithms target 3D hand shape and pose from a single RGB or depth image. Jointly estimating hand shape and pose is very challenging because none of the existing real benchmarks provides ground truth hand shape. For this reason, we propose a novel weakly-supervised approach for 3D hand shape and pose recovery (named WHSP-Net) from a single depth image by learning shapes from unlabeled real data and labeled synthetic data. To this end, we propose a novel framework which consists of three novel components. The first is a Convolutional Neural Network (CNN) based deep network which produces 3D joint positions from learned 3D bone vectors using a new layer. The second is a novel shape decoder that recovers a dense 3D hand mesh from sparse joints. The third is a novel depth synthesizer which reconstructs a 2D depth image from the 3D hand mesh. The whole pipeline is fine-tuned in an end-to-end manner. We demonstrate that our approach recovers reasonable hand shapes from real-world datasets as well as from a live depth-camera stream in real time. Our algorithm outperforms state-of-the-art methods that output more than the joint positions and shows competitive performance on the 3D pose estimation task.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Novel recovery algorithms"

1

Allam, Vineel Reddy. "A novel recovery algorithm for distributed computing environment /." Available to subscribers only, 2008. http://proquest.umi.com/pqdweb?did=1594491041&sid=2&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mohsin, Yasir Qasim. "Novel MR image recovery using patch-smoothness iterative shrinkage algorithm." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6538.

Full text
Abstract:
Obtaining high spatial or spatiotemporal resolution along with good slice coverage is challenging in dynamic magnetic resonance imaging (MRI) due to the slow nature of the acquisition process. In recent years, there has been a rapid growth of MRI techniques that allow faster scan speed by exploiting spatial or spatiotemporal redundancy of the images. These techniques can improve the performance of imaging significantly across multiple clinical applications, including cardiac functional examinations, perfusion imaging, blood flow assessment, contrast-enhanced angiography, functional MRI, and interventional imaging, among others. The ultimate goal of this thesis is to develop novel algorithms to reconstruct heavily undersampled sparse imaging. The designed schemes aim to achieve a shorter scan duration, higher spatial resolution, increased temporal resolution, signal-to-noise ratio and coverage in multidimensional multichannel MRI. In addition to improving patients' comfort and compliance while being imaged in the MRI device, the newly developed schemes will allow patients with arrhythmia problems, as well as pediatric and obese subjects, to breathe freely without the need for any breath-hold scans. Shortening examination periods also reduces patients' stress, shortens the entire visit to the clinic and decreases the associated economic costs. Rapid imaging acquisitions will also allow for efficient extraction of the quantitative information needed for patient diagnosis, e.g., tumor characterization and vein blockages through myocardial perfusion MRI. Current applications of interest include real-time CINE MRI and contrast-changing perfusion MRI.
APA, Harvard, Vancouver, ISO, and other styles
3

Jupally, Vamshi Krishna Rao. "A novel recovery algorithm for concurrent failures in distributed computing environment /." Available to subscribers only, 2008. http://proquest.umi.com/pqdweb?did=1594491061&sid=13&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Novel recovery algorithms"

1

Abuzneid, Abdelshakour, Chennaipattinam Raghuram Vijay Iyengar, and Ramaswamy Gandhi Dasan Prabhu. "VDisaster recovery with the help of real time video streaming using MANET support." In Novel Algorithms and Techniques in Telecommunications and Networking, 507–12. Dordrecht: Springer Netherlands, 2009. http://dx.doi.org/10.1007/978-90-481-3662-9_87.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chun, Kwang Ho, and Myoung Seob Lim. "Novel Symbol Timing Recovery Algorithm for Multi-level Signal." In Lecture Notes in Computer Science, 52–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30134-9_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ramponi, Giorgia. "Learning in the Presence of Multiple Agents." In Special Topics in Information Technology, 93–103. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15374-7_8.

Full text
Abstract:
Reinforcement Learning (RL) has emerged as a powerful tool to solve sequential decision-making problems, where a learning agent interacts with an unknown environment in order to maximize its rewards. Although most real-world RL applications involve multiple agents, the Multi-Agent Reinforcement Learning (MARL) framework is still poorly understood from a theoretical point of view. In this manuscript, we take a step toward solving this problem, providing theoretically sound algorithms for three RL sub-problems with multiple agents: Inverse Reinforcement Learning (IRL), online learning in MARL, and policy optimization in MARL. We start by considering the IRL problem, providing novel algorithms in two different settings: the first considers how to recover and cluster the intentions of a set of agents given demonstrations of near-optimal behavior; the second aims at inferring the reward function optimized by an agent while observing its actual learning process. Then, we consider online learning in MARL. We show how the presence of other agents can increase the hardness of the problem while proposing statistically efficient algorithms in two settings: Non-cooperative Configurable Markov Decision Processes and Turn-based Markov Games. As the third sub-problem, we study MARL from an optimization viewpoint, showing the difficulties that arise from multiple function optimization problems and providing a novel algorithm for this scenario.
APA, Harvard, Vancouver, ISO, and other styles
4

Molisz, Wojciech, and Jacek Rak. "A Novel Class-Based Protection Algorithm Providing Fast Service Recovery in IP/WDM Networks." In NETWORKING 2008 Ad Hoc and Sensor Networks, Wireless Networks, Next Generation Internet, 338–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-79549-0_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Koh, Sungshik. "A Novel Recovery Algorithm of Incomplete Observation Matrix for Converting 2-D Video to 3-D Content." In Advances in Machine Vision, Image Processing, and Pattern Analysis, 260–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11821045_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Luo, Dan, Joseph M. Gattas, and Poah Shiun Shawn Tan. "Real-Time Defect Recognition and Optimized Decision Making for Structural Timber Jointing." In Proceedings of the 2020 DigitalFUTURES, 36–45. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4400-6_4.

Full text
Abstract:
Non-structural or out-of-grade timber framing material contains a large proportion of visual and natural defects. A common strategy to recover usable material from these timbers is the marking and removing of defects, with the generated intermediate lengths of clear wood then joined into a single piece of full-length structural timber. This paper presents a novel workflow that uses machine-learning-based image recognition and a computational decision-making algorithm to enhance the automation and efficiency of current defect identification and re-joining processes. The proposed workflow allows the knowledge of the worker to be translated into a classifier that automatically recognizes and removes areas of defects based on image capture. In addition, a real-time decision-making optimization algorithm is developed to assign a joining sequence of fragmented timber from a dynamic inventory, creating a single piece of targeted length with a significant reduction in material waste. Beyond this industrial application, the workflow also allows for future inventory-constrained customizable fabrication, for example in the production of non-standard architectural components or adaptive reuse or defect-avoidance in out-of-grade timber construction.
APA, Harvard, Vancouver, ISO, and other styles
7

Hamidi, Hodjatollah. "A General Framework of Algorithm-Based Fault Tolerance Technique for Computing Systems." In Analyzing Security, Trust, and Crime in the Digital World, 1–21. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4856-2.ch001.

Full text
Abstract:
The Algorithm-Based Fault Tolerance (ABFT) approach transforms a system that does not tolerate a specific type of fault, called the fault-intolerant system, into a system that provides a specific level of fault tolerance, namely recovery. The ABFT philosophy leads directly to a model from which error correction can be developed. By employing an ABFT scheme with an effective convolutional code, the design allows high throughput as well as high fault coverage. The ABFT techniques that detect errors rely on the comparison of parity values computed in two ways. The parallel processing of input parity values produces output parity values comparable with the parity values regenerated from the original processed outputs, and convolutional codes can be applied for the redundancy. This method is a new approach to concurrent error correction in fault-tolerant computing systems. This chapter proposes a novel computing paradigm to provide fault tolerance for numerical algorithms. The authors also present, implement, and evaluate early detection in ABFT.
APA, Harvard, Vancouver, ISO, and other styles
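The chapter above builds ABFT around comparing parity values computed in two different ways. As a concrete (and much simpler) illustration of that idea, the sketch below uses the classic checksum-encoded matrix multiplication rather than the chapter's convolutional-code scheme, so the encoding choice is an assumption.

```python
import numpy as np

def abft_matmul_check(A, B, tol=1e-8):
    """Checksum-based ABFT for C = A @ B: parity computed two ways must agree."""
    # Encode: append a checksum row to A (column sums) and a checksum column to B (row sums).
    A_c = np.vstack([A, A.sum(axis=0)])
    B_c = np.hstack([B, B.sum(axis=1, keepdims=True)])
    C_full = A_c @ B_c                      # full checksum product
    C = C_full[:-1, :-1]                    # data part
    # Parity computed from the processed outputs...
    row_sums = C.sum(axis=0)
    col_sums = C.sum(axis=1)
    # ...compared with parity carried through the computation itself.
    row_ok = np.allclose(row_sums, C_full[-1, :-1], atol=tol)
    col_ok = np.allclose(col_sums, C_full[:-1, -1], atol=tol)
    return C, row_ok and col_ok

# An error injected into C would break the row/column checksum agreement,
# which is how the two independently computed parities detect faults.
```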
8

Raju, Sanjay, Rishiikeshwer B.S., Aswin Shriram T., Brindha G.R., Santhi B., and Bharathi N. "COVID-19 - Novel Short Term Prediction Methods." In Mobile Computing Solutions for Healthcare Systems, 16–35. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815050592123010006.

Full text
Abstract:
The recent outbreak of Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV-2), also called COVID-19, is a major global health problem due to an increase in mortality and morbidity. The virus disturbs the respiratory process of a human being and is highly transmissible. The current distressing COVID-19 pandemic has caused heavy financial damage, and the assets and living standards of the most affected countries have been compromised. Therefore, prediction methods should be devised to support the development of recovery strategies. To make accurate predictions, understanding the natural progression of the disease is very important. The developed novel mathematical models may help policymakers and governments control the infection and protect society from this pandemic. Due to the nature of the data, uncertainty may lead to errors in the estimation. In this scenario, the uncertainty arises from the time-varying rate of change in the infection count, caused by the different stages of lockdowns, population density, social distancing, and many other demographic factors. The period between exposure to the virus and the first symptom of infection is long compared with other viruses. It is mandatory to follow up with infected persons. Exposure needs to be controlled to prevent spreading in the long term, and infected people must remain in isolation for the above-mentioned period to avoid short-term infections. Officials need to know about both the long-term and the short-term scenario for policymaking. Many studies focus on long-term forecasting using mathematical modelling. For short-term prediction, this paper proposes two algorithms: 1) to predict the next-day count from the past two days' data, irrespective of population size, with a low error rate, and 2) to predict the next M days based on the deviation of the rate of change in the previous N days' active cases. The proposed methods can be adopted by government officials, researchers, and medical professionals through a mobile application, so that they can use them whenever and wherever necessary. The mobile health (M-Health) app helps the user to know the status of the pandemic and act accordingly.
APA, Harvard, Vancouver, ISO, and other styles
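The two short-term predictors above are described only at a high level (next-day count from the past two days; the next M days from the deviation of the rate of change over the previous N days). The sketch below is a guessed interpretation using simple growth-rate extrapolation; the exact formulas are not given in the abstract, so every expression here is an assumption.

```python
def predict_next_day(day_minus_1, day_0):
    """Assumed interpretation: extrapolate tomorrow's count from the last two days' growth."""
    growth = day_0 / day_minus_1 if day_minus_1 > 0 else 1.0
    return day_0 * growth

def predict_next_m_days(active_cases, n_days, m_days):
    """Assumed interpretation: average the daily rate of change over the last n_days
    of active cases, then roll that rate forward for m_days."""
    recent = active_cases[-(n_days + 1):]
    rates = [recent[i + 1] / recent[i] for i in range(len(recent) - 1) if recent[i] > 0]
    avg_rate = sum(rates) / len(rates) if rates else 1.0
    forecast, current = [], active_cases[-1]
    for _ in range(m_days):
        current *= avg_rate
        forecast.append(current)
    return forecast

# Example: predict_next_day(900, 1000) -> about 1111 under pure growth-rate extrapolation.
```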
9

Liu, Xin, Shen Wang, Jianzhi Sang, and Weizhe Zhang. "A Novel Pixel Merging-Based Lossless Recovery Algorithm for Basic Matrix VSS." In Cryptography, 545–55. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1763-5.ch032.

Full text
Abstract:
Lossless recovery in visual secret sharing (VSS) is very meaningful. In this paper, a novel lossless recovery algorithm for basic-matrix VSS is proposed. The secret image is reconstructed losslessly by using a simple exclusive-OR (XOR) operation and pixel merging. The algorithm applies both to VSS without pixel expansion and to VSS with pixel expansion. The condition for lossless recovery of a VSS is given by analyzing the XOR of all columns of the basic matrices. Simulations are conducted to evaluate the efficiency of the proposed scheme.
APA, Harvard, Vancouver, ISO, and other styles
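The chapter's scheme depends on the particular basic matrices used, which the abstract does not give. The following is only a minimal illustration of the XOR-and-merge idea for a toy two-share scheme with pixel expansion 2, where each recovered subpixel block is collapsed back to one pixel; the share construction here is an assumption, not the chapter's construction.

```python
import numpy as np

def make_shares(secret, m=2, seed=0):
    """Toy 2-out-of-2 XOR scheme with pixel expansion m (illustrative, not the chapter's)."""
    rng = np.random.default_rng(seed)
    expanded = np.repeat(secret, m, axis=1)          # each secret pixel -> m subpixels
    share1 = rng.integers(0, 2, size=expanded.shape)
    share2 = share1 ^ expanded
    return share1, share2

def recover(share1, share2, m=2):
    """XOR the shares, then merge each m-subpixel block back into a single pixel."""
    xored = share1 ^ share2
    h, w = xored.shape
    blocks = xored.reshape(h, w // m, m)
    assert np.all(blocks == blocks[..., :1]), "blocks must be constant for lossless merging"
    return blocks[..., 0]

secret = np.array([[0, 1, 1], [1, 0, 1]])
s1, s2 = make_shares(secret)
assert np.array_equal(recover(s1, s2), secret)       # lossless recovery
```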
10

Yushun, Wang, and Zhuang Yueting. "Quasi-Facial Communication for Online Learning Using 3D Modeling Techniques." In Advances in Distance Education Technologies, 181–91. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-934-2.ch012.

Full text
Abstract:
Online interaction with 3D facial animation is an alternative way of face-to-face communication for distance education. 3D facial modeling is essential for establishing virtual educational environments. This article presents a novel 3D facial modeling solution that facilitates quasi-facial communication for online learning. Our algorithm builds 3D facial models from a single image, with the support of a 3D face database. First, we extract a set of feature points from the image, which are then used to automatically estimate the head pose parameters using the 3D mean face in our database as a reference model. After the pose recovery, a similarity measurement function is proposed to locate the neighborhood for the given image in the 3D face database. The scope of the neighborhood can be determined adaptively using our cross-validation algorithm. Furthermore, the individual 3D shape is synthesized by neighborhood interpolation. Texture mapping is achieved based on feature points. The experimental results show that our algorithm can robustly produce 3D facial models from images captured in various scenarios to enhance the lifelikeness in distance learning.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Novel recovery algorithms"

1

Cao, Ha H., and Ha H. Nguyen. "Novel Multi-Dimensional Spatially-Adaptive Image Recovery Using Spline Interpolation." In 2019 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA). IEEE, 2019. http://dx.doi.org/10.23919/spa.2019.8936721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chae, Jeongmin, and Song-Nam Hong. "A Novel B-MAP Proxy for Greedy Sparse Signal Recovery Algorithms." In 2020 IEEE International Symposium on Information Theory (ISIT). IEEE, 2020. http://dx.doi.org/10.1109/isit44484.2020.9174022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lei, Lei, and Yuefeng Ji. "Study on novel recovery strategies and algorithms in IP over WDM networks." In Asia-Pacific Optical and Wireless Communications, edited by S. J. Ben Yoo, Kwok-wai Cheung, Yun-Chur Chung, and Guangcheng Li. SPIE, 2004. http://dx.doi.org/10.1117/12.520189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fernandez, Marcel, Grigory Kabatiansky, and Ying Miao. "A Novel Support Recovery Algorithms and Its Applications to Multiple-Access Channels." In 2022 IEEE International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON). IEEE, 2022. http://dx.doi.org/10.1109/sibircon56155.2022.10017094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hendraningrat, Luky, Saeed Majidaie, Nor Idah Ketchut, Fraser Skoreyko, and Seyed Mousa MousaviMirkalaei. "Advanced Reservoir Simulation: A Novel Robust Modelling of Nanoparticles for Improved Oil Recovery." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/205927-ms.

Full text
Abstract:
The potential of nanoparticles, which are classified as advanced fluid materials, has been unlocked for improved oil recovery in recent years, for example in nanoparticle-assisted waterflood processes. However, no existing commercial reservoir simulation software can properly model the phase behaviour and transport phenomena of nanoparticles. This paper focuses on the development of novel, robust advanced simulation algorithms for nanoparticles that incorporate all the main mechanisms that have been observed, for interpreting and predicting performance. The general algorithms were developed by incorporating the important physico-chemical interactions that exist between nanoparticles, the porous media and the fluid: the phase behaviour and flow characteristics of nanoparticles, including aggregation, splitting and solid-phase deposition. A new reaction stoichiometry was introduced to capture the aggregation process. New algorithms were also incorporated to describe disproportionate permeability alteration and adsorption of nanoparticles, aqueous-phase viscosity effects, interfacial tension reduction, and rock wettability alteration. The model was then tested and duly validated using several previously published experimental datasets that involved various types of nanoparticles, different chemical additives, hardness of water, and a wide range of water salinities, rock permeabilities and oil viscosities, from ambient to reservoir temperature. A novel advanced simulation tool has successfully been developed to model advanced fluid materials, particularly nanoparticles, for improved/enhanced oil recovery. The main physics and mechanisms of nanoparticle injection are scripted in the model and show an acceptable match for various types of nanoparticles, concentrations, initial wettabilities, solvents, stabilizers, water hardness and temperatures. Reasonable matching of all published experimental data was achieved for pressure and production data. Critical parameters have been observed and should be considered as important inputs for laboratory experimental design. Sensitivity studies have been conducted on the critical parameters, reported in the paper as the most sensitive for obtaining matches of both pressure and production data. The observed matching parameters could be used as benchmarks for training and data validation. Prior to use in a 3D field-scale prediction in Malaysian oilfields, upscaling workflows must be established with the critical parameters. For instance, some reaction rates at field scale can be assumed to be instantaneous, since the time scale of field-scale models is much larger than these reaction rates in the laboratory.
APA, Harvard, Vancouver, ISO, and other styles
6

Suranthiran, Sugathevan, and Suhada Jayasuriya. "Robust Signal Recovery From Distorted Nonlinear Sensor Data." In ASME 2003 International Mechanical Engineering Congress and Exposition. ASMEDC, 2003. http://dx.doi.org/10.1115/imece2003-41487.

Full text
Abstract:
In an attempt to facilitate the design and implementation of memoryless nonlinear sensors, the signal reconstruction schemes are analyzed and necessary modifications are proposed to improve the accuracy and minimize errors in sensor measurements. The problem of recovering a chirp signal from the distorted nonlinear output is considered and an efficient reconstruction approach is developed. Model uncertainty is a serious issue with any model-based algorithm, and a novel technique is proposed which uses a nominal model instead of an accurate model and produces results that are robust to model uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
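Recovering the input of a memoryless nonlinear sensor amounts to inverting its (possibly only nominally known) characteristic. The sketch below inverts a monotone nominal model by table lookup and interpolation; the particular nonlinearity and grid are illustrative assumptions, not the paper's sensor model.

```python
import numpy as np

def make_inverse(nominal_model, x_min, x_max, n_points=2001):
    """Build an approximate inverse of a monotone memoryless sensor characteristic."""
    x_grid = np.linspace(x_min, x_max, n_points)
    y_grid = nominal_model(x_grid)          # assumed monotonically increasing
    return lambda y: np.interp(y, y_grid, x_grid)

# Illustrative nominal characteristic: a soft-saturating sensor.
nominal = lambda x: np.tanh(0.8 * x) + 0.05 * x
invert = make_inverse(nominal, x_min=-5.0, x_max=5.0)

# Recover a chirp-like input from the (noisy) distorted output.
t = np.linspace(0.0, 1.0, 1000)
x_true = 2.0 * np.sin(2 * np.pi * (1.0 + 4.0 * t) * t)    # chirp
y_meas = nominal(x_true) + 0.01 * np.random.default_rng(0).standard_normal(t.size)
x_rec = invert(y_meas)                                      # approximate recovery
```

With a nominal rather than exact model, the residual error reflects the model mismatch, which is the robustness issue the abstract addresses.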
7

Lu, Yunan, Weiwei Li, and Xiuyi Jia. "Label Enhancement via Joint Implicit Representation Clustering." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/447.

Full text
Abstract:
Label distribution is an effective label form to portray label polysemy (i.e., the cases that an instance can be described by multiple labels simultaneously). However, the expensive annotating cost of label distributions limits its application to a wider range of practical tasks. Therefore, LE (label enhancement) techniques are extensively studied to solve this problem. Existing LE algorithms mostly estimate label distributions by the instance relation or the label relation. However, they suffer from biased instance relations, limited model capabilities, or suboptimal local label correlations. Therefore, in this paper, we propose a deep generative model called JRC to simultaneously learn and cluster the joint implicit representations of both features and labels, which can be used to improve any existing LE algorithm involving the instance relation or local label correlations. Besides, we develop a novel label distribution recovery module, and then integrate it with JRC model, thus constituting a novel generative label enhancement model that utilizes the learned joint implicit representations and instance clusters in a principled way. Finally, extensive experiments validate our proposal.
APA, Harvard, Vancouver, ISO, and other styles
8

Gupta, Vishal, Manoj K. Mohanty, Ajay Mahajan, and Surendra K. Biswal. "Performance Optimization of a Coal Preparation Plant Using Genetic Algorithms." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-60870.

Full text
Abstract:
A coal preparation plant typically has multiple cleaning circuits based on the size of coal particles. The traditional way of optimizing the plant output and meeting the product constraints such as ash, sulfur and moisture content is to equalize the average product quality from each circuit. The present study uses a multiple incremental product quality approach to optimize the clean coal recovery while satisfying the product constraints. The plant output was optimized at the given constraints of 7.5% ash and 1.3% sulfur. It was observed that utilizing the incremental product quality process gives a 2.13% higher yield than the equal average product quality approach in this particular case, which can generate additional revenue of $4,260,000 per annum. This paper introduces a novel approach for optimizing plant output using Genetic Algorithms (GA) while satisfying the multiple quality constraints. The same plant product constraints were used for the GA-based analysis. The results showed that using GA as the optimization process gives a 2.23% higher yield than the average product quality approach, which will result in additional revenue generation of $4,460,000 per annum. The GA serves as an alternative process to optimize the coal processing plant yield with multiple quality constraints.
APA, Harvard, Vancouver, ISO, and other styles
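As a flavor of how a genetic algorithm can optimize circuit recoveries under ash and sulfur constraints, here is a small, self-contained GA sketch over per-circuit recovery fractions with a penalty for violating the quality limits. The fitness model, constraint values, and circuit data are invented for illustration and are not the plant model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical circuit data: (mass fraction of plant feed, ash %, sulfur %) per circuit.
circuits = np.array([[0.40, 6.0, 1.0],
                     [0.35, 8.0, 1.4],
                     [0.25, 10.0, 1.8]])
ASH_MAX, SULFUR_MAX = 7.5, 1.3

def fitness(genome):
    """Total clean-coal yield minus a penalty for exceeding the ash/sulfur limits."""
    recovery = np.clip(genome, 0.0, 1.0)              # recovery fraction per circuit
    mass = circuits[:, 0] * recovery
    total = mass.sum()
    if total == 0.0:
        return 0.0
    ash = (mass * circuits[:, 1]).sum() / total
    sulfur = (mass * circuits[:, 2]).sum() / total
    penalty = 10.0 * (max(0.0, ash - ASH_MAX) + max(0.0, sulfur - SULFUR_MAX))
    return total - penalty

def run_ga(pop_size=60, generations=200, mut_sigma=0.1):
    pop = rng.random((pop_size, circuits.shape[0]))
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection of two parents, uniform crossover, Gaussian mutation.
            a, b, c, d = rng.integers(pop_size, size=4)
            p1 = pop[a] if scores[a] >= scores[b] else pop[b]
            p2 = pop[c] if scores[c] >= scores[d] else pop[d]
            mask = rng.random(p1.size) < 0.5
            child = np.where(mask, p1, p2) + rng.normal(0.0, mut_sigma, p1.size)
            new_pop.append(np.clip(child, 0.0, 1.0))
        pop = np.array(new_pop)
    best = max(pop, key=fitness)
    return best, fitness(best)

best_recovery, best_yield = run_ga()
```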
9

Davudov, Davud, Ashwin Venkatraman, Ademide O. Mabadeje, Anton Malkov, Gurpreet Singh, Birol Dindoruk, and Talal Al-Aulaqi. "Machine Learning Assisted Well Placement Optimization." In SPE Western Regional Meeting. SPE, 2023. http://dx.doi.org/10.2118/213038-ms.

Full text
Abstract:
Well placement optimization is a complicated problem that is usually solved by directly combining reservoir simulators with an optimization algorithm. However, depending on the complexity of the reservoir model studied, thousands of simulations are usually needed for accurate and reliable results. In this research, we present a novel approach: a machine learning (ML) assisted proxy model that combines reservoir simulations and a reduced-physics model to reduce computational cost. In the proposed model framework, several (depending on the complexity of the problem) uniformly distributed random coordinates are first selected. These chosen coordinates are considered as data points for the ML model. For the chosen coordinates (the training set), reservoir simulations are executed and NPV/recovery values are calculated (the target variable). Spatial locations as well as petrophysical properties of the same coordinates, extracted from the simulation model, are also used as inputs to the ML model. The ML model is further improved by combining it with the Fast Marching Method (FMM), which is a robust reduced-physics model. The inclusion of FMM helps identify the drainage volume for producers and hence enhances model training. Finally, the trained ML model is coupled with stochastic optimization algorithms to determine the infill well location with the highest NPV/recovery. Using example field data, we present two specific cases of using the proposed model: a) a greenfield with a single new well, and b) a greenfield with multiple new wells. Results indicate that the developed ML model can predict NPV with around 96% accuracy (testing score). This gives great confidence in the predictions from the trained hybrid model, which can be used as a proxy for reservoir simulations. Coupling the trained hybrid model with Particle Swarm Optimization (PSO), the locations of the new producers with maximum NPV are determined. The results are further confirmed by an exhaustive search over all potential locations. A novel approach is presented to show how traditional physics-based approaches can be combined with machine learning algorithms to optimize well placement. Such approaches can be integrated into current greenfield and brownfield reservoir engineering workflows to drastically reduce decision-making times.
APA, Harvard, Vancouver, ISO, and other styles
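The workflow above couples a trained proxy model with particle swarm optimization to pick well coordinates. Below is a minimal generic PSO sketch over (x, y) locations, where `proxy_npv` stands in for the trained ML proxy and all hyperparameters are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def pso_well_placement(proxy_npv, bounds, n_particles=30, n_iter=100,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimization maximizing a proxy NPV over (x, y) coordinates."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([proxy_npv(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Velocity update: inertia + pull toward personal best + pull toward global best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([proxy_npv(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

# 'proxy_npv' is a hypothetical stand-in for the trained ML proxy surface.
proxy_npv = lambda p: -((p[0] - 350.0) ** 2 + (p[1] - 120.0) ** 2)
best_xy, best_val = pso_well_placement(proxy_npv, bounds=([0, 0], [1000, 500]))
```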
10

Bermudez, Fernando, Noor Al Nahhas, Hafsa Yazdani, Michael LeTan, and Mohammed Shono. "An ESP Production Optimization Algorithm Applied to Unconventional Wells." In Abu Dhabi International Petroleum Exhibition & Conference. SPE, 2021. http://dx.doi.org/10.2118/207824-ms.

Full text
Abstract:
The objective and scope are to evaluate the feasibility of a production maximization algorithm for ESPs on unconventional wells using projected operating conditions instead of current ones, which the authors expect will be crucial in adjusting well deliverability to optimum frequencies under the rapidly changing conditions of tight oil wells. Actual production data for an unconventional well were used, covering from the start of natural-flow production up to 120 days afterwards. Simulating what the production would have been if a VFD running the IMP optimization algorithms had been installed, new values for well flowing pressures were calculated, daily production scenarios were evaluated, and recommended operating frequencies were plotted. Results, observations, and conclusions: A. Using the Intelligent Maximum Production (IMP) algorithm allows maximum production from tight oil wells during the initial high-production stage, and prevents gas-locking at later stages when gas production increases. B. Adjusting the frequency at later stages for high-GOR wells is key to maintaining maximum production while controlling free gas at the intake, compared with controlling the surface choke. Novel/additive information: The use of electrical submersible pumps for the production of unconventional wells, paired with a VFD and properly designed control algorithms, allows faster recovery of investment by pumping maximum allowable daily rates while constraining detrimental conditions such as free gas at the intake.
APA, Harvard, Vancouver, ISO, and other styles