
Journal articles on the topic 'Ill-posed nature'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Ill-posed nature.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Markovsky, Alexander. "Development and Application of Ill-Posed Problems in the USSR." Applied Mechanics Reviews 41, no. 6 (June 1, 1988): 247–56. http://dx.doi.org/10.1115/1.3151896.

Abstract:
This article describes the early stage of the development of ill-posed problems in the Soviet Union, where this area of mathematics originated. There are several types of problems, such as Fredholm and Volterra integral equations of the first kind, algebraic systems with ill-conditioned matrices, optimal regulation, approximate Fourier summation, and inverse heat conduction, in which the ill-posed nature is a serious barrier to constructing a stable solution. Different methods, in increasing order of generality, showing how to interpret and solve these types of problems are reviewed. The main point is to demonstrate that all approaches to solving ill-posed problems can be based on common sense and intuition, though formalization is needed to apply the methodology in different fields of applied mathematics and engineering.
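Several of the problem classes listed in this abstract share one symptom: a direct solve amplifies even tiny data noise. A minimal sketch of one of the reviewed remedies, Tikhonov regularization, on a deliberately ill-conditioned system (the Hilbert matrix, the noise level, and the regularization weight are illustrative assumptions, not taken from the article):

```python
import numpy as np

# Hilbert matrix: a classic ill-conditioned system, standing in for a
# discretized first-kind integral equation (illustrative assumption).
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-8 * rng.standard_normal(n)  # tiny data noise

# A direct solve amplifies the noise enormously...
x_naive = np.linalg.solve(A, b)

# ...while Tikhonov regularization, min ||Ax - b||^2 + lam ||x||^2,
# trades a small bias for stability.
lam = 1e-8
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
print(err_naive, err_tik)  # the regularized error is smaller by orders of magnitude
```

The choice of lam encodes exactly the kind of prior judgment ("common sense and intuition") that the article says underlies all these methods.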
2

Kornilov, Viktor S. "Methods of teaching inverse and incorrect problems to students in the context of informatization of education." RUDN Journal of Informatization in Education 17, no. 4 (December 15, 2020): 315–22. http://dx.doi.org/10.22363/2312-8631-2020-17-4-315-322.

Abstract:
Problem and goal. Computer technologies are now widely used in applied research aimed at obtaining new scientific knowledge. Such research relies on computer modeling and computational experiments, which make it possible to study the properties of remote or inaccessible objects, processes and phenomena of different nature. This is directly related to teaching students applied mathematics in general and, in particular, to teaching students in physical and mathematical training areas inverse and ill-posed problems, which constitute a scientific direction of applied mathematics. It is clearly advisable to use computer technologies in the process of teaching students inverse and ill-posed problems; however, their use should be appropriate and correct. Methodology. The process of finding solutions to inverse and ill-posed problems is usually time-consuming, since such mathematical problems are nonlinear in their formulation and may have non-unique and unstable solutions. These circumstances create mathematical difficulties in proving the theorems of existence, uniqueness and stability of solutions to inverse and ill-posed problems. Computer technologies help to overcome the difficulties associated with routine transformations and with the analysis of information about the solutions of such problems. Results. Using computer technologies, students gain experience in the independent investigation of various inverse and ill-posed problems, learn to identify the capabilities of computer technologies in solving applied mathematical problems, and develop their ICT competence. Conclusion. When multimedia and computer technologies are used in teaching students inverse and ill-posed problems, didactic principles of teaching are implemented that allow students to acquire deep scientific knowledge of inverse and ill-posed problems and develop their information culture.
3

Moutsoglou, A. "An Inverse Convection Problem." Journal of Heat Transfer 111, no. 1 (February 1, 1989): 37–43. http://dx.doi.org/10.1115/1.3250655.

Abstract:
The nature of inverse problems in convective environments is investigated. The ill-posed quality inherent in inverse problems is verified for free convection laminar flow in a vertical channel. A sequential function specification algorithm is adapted for the semiparabolic system of equations that governs the flow and heat transfer in the channel. The procedure works very well in alleviating the ill-posed symptoms of inverse problems. The performance of a simple smoothing routine is also tested for the prescribed conditions.
4

Sullivan, B., and B. Liu. "The ill-posed nature of a method for sub-Nyquist rate signal reconstruction." IEEE Transactions on Circuits and Systems 34, no. 2 (February 1987): 203–5. http://dx.doi.org/10.1109/tcs.1987.1086106.

5

Hanes, Rebecca J., Nathan B. Cruze, Prem K. Goel, and Bhavik R. Bakshi. "Allocation Games: Addressing the Ill-Posed Nature of Allocation in Life-Cycle Inventories." Environmental Science & Technology 49, no. 13 (June 23, 2015): 7996–8003. http://dx.doi.org/10.1021/acs.est.5b01192.

6

Barrett, T. A. "Reconstruction with noisy data." Proceedings, annual meeting, Electron Microscopy Society of America 44 (August 1986): 186–87. http://dx.doi.org/10.1017/s0424820100142554.

Abstract:
The problem of deducing the 3-D structure of an object from a limited number of 2-D projections (e.g., STEM micrographs) is well known. It is an ill-posed problem in the sense that very many solutions exist that have the same 2-D projections. If the data are assumed to be noisy (most STEM images are very noisy), the problem remains ill-posed: many solutions exist that yield the same least-squares error from the data. Crewe et al. have shown that, remarkably, the constraint that many objects are essentially Boolean in nature (have constant density) means that their structure can be determined very well even from very few projections, if there is little noise.
7

Thompson, Owen E., Donald D. Dazlich, and Yu-Tai Hou. "The Ill-posed Nature of the Satellite Temperature Retrieval Problem and the Limits of Retrievability." Journal of Atmospheric and Oceanic Technology 3, no. 4 (December 1986): 643–49. http://dx.doi.org/10.1175/1520-0426(1986)003<0643:tipnot>2.0.co;2.

8

Price, Michael, André Marshall, and Arnaud Trouvé. "A Multi-observable Approach to Address the Ill-Posed Nature of Inverse Fire Modeling Problems." Fire Technology 52, no. 6 (December 19, 2015): 1779–97. http://dx.doi.org/10.1007/s10694-015-0541-7.

9

Papa, Frank J. "Learning sciences principles that can inform the construction of new approaches to diagnostic training." Diagnosis 1, no. 1 (January 1, 2014): 125–29. http://dx.doi.org/10.1515/dx-2013-0013.

Abstract:
The author suggests that the ill-defined nature of human diseases is a little appreciated, nonetheless important, contributor to persistent and high levels of diagnostic error. Furthermore, medical education's continued use of traditional, non-evidence-based approaches to diagnostic training represents a systematic flaw likely perpetuating sub-optimal diagnostic performance in patients suffering from ill-defined diseases. This manuscript briefly describes how Learning Sciences findings, which elucidate how humans reason in the face of the uncertainty and complexity posed by ill-defined diseases, might serve as guiding principles in formulating first steps towards a codified, 21st-century approach to training and assessing the diagnostic capabilities of future health care providers.
10

Zhang, Ye, Rongfang Gong, Mårten Gulliksson, and Xiaoliang Cheng. "A coupled complex boundary expanding compacts method for inverse source problems." Journal of Inverse and Ill-posed Problems 27, no. 1 (February 1, 2019): 67–86. http://dx.doi.org/10.1515/jiip-2017-0002.

Abstract:
In this paper, we consider an inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary conditions. The unknown source term is to be determined from additional boundary data. This problem is ill-posed since the dimensionality of the boundary is lower than that of the inner domain. To overcome the ill-posed nature, we use a priori information (a sourcewise representation) and, building on the coupled complex boundary method, propose a coupled complex boundary expanding compacts method (CCBECM). A finite element method is used for the discretization of CCBECM. The regularization properties of CCBECM are proved for both the continuous and discrete versions. Moreover, an a posteriori error estimate of the obtained finite element approximate solution is given and computed by a projected gradient algorithm. Finally, numerical results show that the proposed method is stable and effective.
11

Xiang, Li, Jie Xiang, Jiping Guan, Fuhan Zhang, Yanling Zhao, and Lifeng Zhang. "A Novel Reference-Based and Gradient-Guided Deep Learning Model for Daily Precipitation Downscaling." Atmosphere 13, no. 4 (March 24, 2022): 511. http://dx.doi.org/10.3390/atmos13040511.

Abstract:
The spatial resolution of precipitation predicted by general circulation models is too coarse to meet current research and operational needs. Downscaling is one way to provide finer-resolution data at local scales. The single-image super-resolution method in the computer vision field has made great strides lately and has been applied in various fields. In this article, we propose a novel reference-based and gradient-guided deep learning model (RBGGM) to downscale daily precipitation, taking into account the discontinuity of precipitation and the ill-posed nature of downscaling. Global Precipitation Measurement Mission (GPM) precipitation data, variables in ERA5 re-analysis data, and topographic data are selected to perform the downscaling, and a residual dense attention block is constructed to extract their features. Exploiting the discontinuous character of precipitation, we introduce a gradient feature to reconstruct the precipitation distribution. We also extract the feature of high-resolution monthly precipitation as a reference feature to resolve the ill-posed nature of downscaling. Extensive experimental results on benchmark data sets demonstrate that our proposed model performs better than other baseline methods. Furthermore, we construct a daily precipitation downscaling data set based on GPM precipitation data, ERA5 re-analysis data and topographic data.
12

De Munck, J. C., Th. J. C. Faes, A. J. Hermans, and R. M. Heethaar. "A Parametric Method to Resolve the Ill-Posed Nature of the EIT Reconstruction Problem: A Simulation Study." Annals of the New York Academy of Sciences 873, no. 1 (April 1999): 440–53. http://dx.doi.org/10.1111/j.1749-6632.1999.tb09493.x.

13

Levitan, Nathaniel, Barry Gross, Fred Moshary, and Yonghua Wu. "Potential Retrieval of Aerosol Microphysics From Multistatic Space-Borne Lidar." EPJ Web of Conferences 176 (2018): 05017. http://dx.doi.org/10.1051/epjconf/201817605017.

Abstract:
HSRL lidars are being considered for deployment to space to retrieve aerosol microphysics. The literature is mostly focused on the monostatic configuration; in this paper, we explore whether additional information for the retrieval of microphysics can be obtained by adding a second detector in a bistatic configuration. The information gained from the additional measurements can, under certain conditions, reduce the ill-posed nature of the aerosol microphysics retrieval and the uncertainty in the retrievals.
14

Power, J. F., and M. C. Prystay. "Expectation Minimum (EM): A New Principle for the Solution of Ill-Posed Problems in Photothermal Science." Applied Spectroscopy 49, no. 6 (June 1995): 709–24. http://dx.doi.org/10.1366/0003702953964499.

Abstract:
The expectation-minimum (EM) principle is a new strategy for recovering robust solutions to the ill-posed inverse problems of photothermal science. The expectation-minimum principle uses the addition of well-characterized random noise to a model basis to be fitted to the experimental response by linear minimization or projection techniques. The addition of noise to the model basis improves the conditioning of the basis by many orders of magnitude. Multiple projections of the data onto the basis in the presence of noise are averaged, to give the solution vector as an expectation value which reliably estimates the global minimum solution for general cases, while the conventional approaches fail. This solution is very stable in the presence of random error on the data. The expectation-minimum principle has been demonstrated in conjunction with several projection algorithms. The nature of the solutions recovered by the expectation minimum principle is nearly independent of the minimization algorithms used and depends principally on the noise level set in the model basis.
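The mechanism this abstract describes, adding well-characterized noise to an ill-conditioned model basis and averaging many solutions, can be sketched in a few lines. The basis of closely spaced decaying exponentials below is a hypothetical stand-in for a photothermal model basis; all sizes and noise levels are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 100)
taus = np.linspace(0.5, 2.5, 8)                  # closely spaced decay times
B = np.exp(-t[:, None] / taus[None, :])          # severely ill-conditioned basis
c_true = np.zeros(8)
c_true[2], c_true[5] = 1.0, 0.5
y = B @ c_true + 1e-3 * rng.standard_normal(len(t))

# Direct normal-equations solve: the small data noise is amplified
c_naive = np.linalg.solve(B.T @ B, B.T @ y)

# Expectation-minimum idea: add well-characterized noise to the model
# basis, solve repeatedly, and average the solutions.
sigma = 1e-2
sols = []
for _ in range(500):
    Bn = B + sigma * rng.standard_normal(B.shape)
    sols.append(np.linalg.solve(Bn.T @ Bn, Bn.T @ y))
c_em = np.mean(sols, axis=0)

err_naive = np.linalg.norm(c_naive - c_true)
err_em = np.linalg.norm(c_em - c_true)
print(err_naive, err_em)
```

In expectation, the basis noise acts much like a ridge penalty on the normal equations, which is why the averaged solution is far more stable than the direct one.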
15

Szmukler, George, and Frank Holloway. "Mental health legislation is now a harmful anachronism." Psychiatric Bulletin 22, no. 11 (November 1998): 662–65. http://dx.doi.org/10.1192/pb.22.11.662.

Abstract:
Two recent developments have served to highlight the contradictory and discriminatory nature of UK mental health legislation, and indeed all ‘mental health’ acts. These are the Court of Appeal ruling in L. v. Bournewood (1997) and the increasing use of coercion in an attempt to alleviate society's fears of the dangers posed by the mentally ill in the community (Holloway, 1996). At the same time, a third development, the proposal for a Mental Incapacity Act, and the consequent Government Green Paper (Lord Chancellor's Department, 1997) Who Decides provides a framework rendering mental health legislation redundant.
16

Wang, Yanfei, Changchun Yang, and Jingjie Cao. "On Tikhonov Regularization and Compressive Sensing for Seismic Signal Processing." Mathematical Models and Methods in Applied Sciences 22, no. 02 (February 2012): 1150008. http://dx.doi.org/10.1142/s0218202511500084.

Abstract:
Using compressive sensing and sparse regularization, one can almost completely reconstruct the input (sparse) signal from a limited number of observations. At the same time, reconstruction methods based on compressive sensing and optimization techniques overcome the sampling-rate requirement of the Shannon/Nyquist sampling theorem. It is well known that the seismic reflection signal may be sparse and that the number of samples is sometimes insufficient for seismic surveys. The seismic signal reconstruction problem is therefore ill-posed. Considering the ill-posed nature and the sparsity of seismic inverse problems, we study reconstruction of the wavefield and the reflection seismic signal by Tikhonov regularization and compressive sensing. The l0, l1 and l2 regularization models are studied. A relationship between Tikhonov regularization and compressive sensing is established. In particular, we introduce a general lp - lq (p, q ≥ 0) regularization model, which overcomes the limitation of assuming a convex objective function. Interior point methods and projected gradient methods are studied. To show the potential for application of the regularized compressive sensing method, we perform both synthetic seismic signal and field data compression and restoration simulations using a proposed piecewise random sub-sampling. Numerical performance indicates that regularized compressive sensing is applicable for practical seismic imaging.
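The sparse-recovery side of this abstract can be illustrated with a minimal proximal-gradient (ISTA) sketch for the l1 model: a sparse signal is recovered from far fewer samples than unknowns. The sensing matrix, sparsity level, and parameters below are illustrative assumptions, not the authors' seismic setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 80                                   # 200 unknowns, only 80 measurements
x_true = np.zeros(n)
support = rng.choice(n, 8, replace=False)        # 8-sparse signal
x_true[support] = 3.0 * rng.standard_normal(8)
A = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
b = A @ x_true                                   # under-sampled, noiseless data

# ISTA: proximal gradient for min 0.5 ||Ax - b||^2 + lam ||x||_1
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(3000):
    z = x - step * (A.T @ (A @ x - b))                  # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)   # sparsity makes the under-determined problem solvable
```

With a plain l2 (Tikhonov) penalty instead of l1, the same under-determined system would return a dense, badly biased minimum-norm solution; the soft-threshold step is what enforces sparsity.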
17

Jaffri, Nasif R., Ahmad Almogren, Aftab Ahmad, Usama Abrar, Ayman Radwan, and Farzan A. Khan. "Iterative Algorithms for Deblurring of Images in Case of Electrical Capacitance Tomography." Mathematical Problems in Engineering 2021 (August 10, 2021): 1–11. http://dx.doi.org/10.1155/2021/2268544.

Abstract:
Electrical capacitance tomography (ECT) has been used to measure flow, for example gas-solid flow in the coal gasification, pharmaceutical, and other industries. ECT is also used for creating images of physically confined objects. The data collected by the acquisition system to produce images undergo blurring because of ambient conditions and the electronic circuitry used. This research applies ECT techniques to deblur the images created during measurement. The data recorded by the acquisition system give rise to a large system of linear equations. This system is sparse and ill-conditioned and hence ill-posed in nature. A variety of reconstruction algorithms, each with its pros and cons, are available to deal with ill-posed problems. The large-scale systems of linear equations arising in image deblurring problems are solved using iterative regularization algorithms. The conjugate gradient algorithm for least-squares problems (CGLS), least-squares QR factorization (LSQR), and the modified residual norm steepest descent (MRNSD) algorithm are well-known variants of such iterative algorithms. These algorithms exhibit semiconvergence behavior; that is, the quality of the computed solution first improves and then deteriorates, as the error norm initially decreases and later increases with each iteration. In this work, soft thresholding has been used in image deblurring problems to tackle the semiconvergence issue. Numerical test problems were executed to indicate the efficacy of the suggested algorithms, with criteria for optimal stopping iterations. Results show a marginal improvement over the traditional iterative algorithms (CGLS, LSQR, and MRNSD) in resolving semiconvergence behavior and restoring images.
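The semiconvergence behavior described in this abstract is easy to reproduce: on a noisy deblurring problem, the iteration error first falls and then rises. The sketch below uses plain Landweber iteration as a simpler stand-in for CGLS/LSQR/MRNSD; the blur kernel and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
t = np.arange(n)
# 1-D Gaussian blurring operator: smoothing makes deblurring ill-posed
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = (np.abs(t - n / 2) < 10).astype(float)     # box-shaped signal
b = A @ x_true + 1e-2 * rng.standard_normal(n)      # noisy blurred data

# Landweber iteration x <- x + w * A^T (b - A x): a simple relative of
# CGLS that exhibits the same semiconvergence on noisy data.
w = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
errs = []
for _ in range(5000):
    x = x + w * (A.T @ (b - A @ x))
    errs.append(np.linalg.norm(x - x_true))

best = int(np.argmin(errs))
print(best, errs[best], errs[-1])   # error bottoms out, then grows again
```

This is why iterative regularization needs a stopping rule (or, as in the paper above, an extra mechanism such as soft thresholding): iterating "to convergence" reconverges to the noise.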
18

Hassan, Hashim, Fabio Semperlotti, Kon-Well Wang, and Tyler N. Tallman. "Enhanced imaging of piezoresistive nanocomposites through the incorporation of nonlocal conductivity changes in electrical impedance tomography." Journal of Intelligent Material Systems and Structures 29, no. 9 (February 1, 2018): 1850–61. http://dx.doi.org/10.1177/1045389x17754269.

Abstract:
Electrical impedance tomography is a method of noninvasively imaging the internal conductivity distribution of a domain. Because many materials exhibit piezoresistivity, electrical impedance tomography has considerable potential for application in structural health monitoring. Despite its numerous benefits such as being low cost, providing continuous sensing, and having the ability to be employed in real time, electrical impedance tomography is limited by several important factors such as the ill-posed nature of the inverse problem and the requirement for large electrode arrays to produce quality images. Unfortunately, current methods of mitigating these limitations impose upon the benefits of electrical impedance tomography. Herein, we propose a multi-physics approach of enhancing electrical impedance tomography without sacrificing any of its benefits. This approach is predicated on coupling global conductivity changes with the electrical impedance tomography inversion process thereby adding additional constraints and rendering the problem less ill-posed. Additionally, we leverage physically motivated global conductivity changes in the context of piezoresistive nanocomposites. We demonstrate this proof of concept with numerical simulations and demonstrate that by incorporating multiple conductivity changes, the rank of the sensitivity matrix can be improved and the quality of electrical impedance tomography reconstructions can be enhanced. The proposed method, therefore, has the potential of easing the implementation burden of electrical impedance tomography while concurrently enabling high-quality images to be produced without imposing on the major advantages of electrical impedance tomography.
19

Ayanbayev, Birzhan, and Nikos Katzourakis. "On the Inverse Source Identification Problem in $L^{\infty }$ for Fully Nonlinear Elliptic PDE." Vietnam Journal of Mathematics 49, no. 3 (July 22, 2021): 815–29. http://dx.doi.org/10.1007/s10013-021-00515-6.

Abstract:
In this paper we generalise the results proved in N. Katzourakis (SIAM J. Math. Anal. 51, 1349–1370, 2019) by studying the ill-posed problem of identifying the source of a fully nonlinear elliptic equation. We assume Dirichlet data and some partial noisy information for the solution on a compact set through a fully nonlinear observation operator. We deal with the highly nonlinear, nonconvex nature of the problem and the lack of weak continuity by introducing a two-parameter Tykhonov regularisation with a higher-order L2 "viscosity term" for the L∞ minimisation problem, which allows approximation by weakly lower semicontinuous cost functionals.
20

Delahaies, Sylvain, Ian Roulstone, and Nancy Nichols. "Constraining DALECv2 using multiple data streams and ecological constraints: analysis and application." Geoscientific Model Development 10, no. 7 (July 10, 2017): 2635–50. http://dx.doi.org/10.5194/gmd-10-2635-2017.

Abstract:
We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. Here we recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. Using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.
21

Ben Belgacem, Faker. "Uniqueness in a Cauchy Problem for Reaction–Diffusion System and Inverse Source Problems in Water Pollution." Mathematical Models and Methods in Applied Sciences 22, no. 10 (August 13, 2012): 1250029. http://dx.doi.org/10.1142/s0218202512500297.

Abstract:
The purpose is a uniqueness result for an ill-posed multi-dimensional parabolic system arising in pollution modeling of surface waters like lakes, estuaries or bays. Exploring organic pollution effects mostly needs the recovery of two tracers, the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) densities when only measurements on the dissolved oxygen concentration are available. The particularity of the reaction–diffusion system resides then in the nature of the boundary conditions. Missing boundary data on BOD is compensated by over-determined boundary conditions on DO which induces a strong coupling in the system. We check first the ill-posedness of the problem. Then, a uniqueness theorem is stated. The saddle point theory and tools from the semi-group analysis turn out to be at the basis of the proof. The results established here may serve for the data completion and the identifiability of pointwise pollution sources in multi-dimensional water bodies.
22

Plokhonina, T. V. "On the ill-posed nature of the algebraic closure of the second power of a set of algorithms for calculating estimates." USSR Computational Mathematics and Mathematical Physics 25, no. 4 (January 1985): 74–79. http://dx.doi.org/10.1016/0041-5553(85)90144-2.

23

Neumayer, Markus, Thomas Suppan, and Thomas Bretterklieber. "Statistical solution of inverse problems using a state reduction." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 38, no. 5 (September 2, 2019): 1521–32. http://dx.doi.org/10.1108/compel-12-2018-0500.

Abstract:
Purpose The application of statistical inversion theory provides a powerful approach for solving estimation problems including the ability for uncertainty quantification (UQ) by means of Markov chain Monte Carlo (MCMC) methods and Monte Carlo integration. This paper aims to analyze the application of a state reduction technique within different MCMC techniques to improve the computational efficiency and the tuning process of these algorithms. Design/methodology/approach A reduced state representation is constructed from a general prior distribution. For sampling the Metropolis Hastings (MH) Algorithm and the Gibbs sampler are used. Efficient proposal generation techniques and techniques for conditional sampling are proposed and evaluated for an exemplary inverse problem. Findings For the MH-algorithm, high acceptance rates can be obtained with a simple proposal kernel. For the Gibbs sampler, an efficient technique for conditional sampling was found. The state reduction scheme stabilizes the ill-posed inverse problem, allowing a solution without a dedicated prior distribution. The state reduction is suitable to represent general material distributions. Practical implications The state reduction scheme and the MCMC techniques can be applied in different imaging problems. The stabilizing nature of the state reduction improves the solution of ill-posed problems. The tuning of the MCMC methods is simplified. Originality/value The paper presents a method to improve the solution process of inverse problems within the Bayesian framework. The stabilization of the inverse problem due to the state reduction improves the solution. The approach simplifies the tuning of MCMC methods.
24

Bakushinsky, Anatoly, and Alexandra Smirnova. "A study of frozen iteratively regularized Gauss–Newton algorithm for nonlinear ill-posed problems under generalized normal solvability condition." Journal of Inverse and Ill-posed Problems 28, no. 2 (April 1, 2020): 275–86. http://dx.doi.org/10.1515/jiip-2019-0099.

Abstract:
A parameter identification inverse problem in the form of nonlinear least squares is considered. In the absence of stability, the frozen iteratively regularized Gauss–Newton (FIRGN) algorithm is proposed and its convergence is justified under what we call a generalized normal solvability condition. The penalty term is constructed from a semi-norm generated by a linear operator, yielding greater flexibility in the use of qualitative and quantitative a priori information available for each particular model. Unlike previously known theoretical results on the FIRGN method, our convergence analysis does not rely on any nonlinearity conditions and is applicable to a large class of nonlinear operators. In our study, we leverage the nature of the ill-posedness in order to establish convergence in the noise-free case. For noise-contaminated data, we show that, at least theoretically, the process does not require a stopping rule and is no longer semi-convergent. Numerical simulations for a parameter estimation problem in epidemiology illustrate the efficiency of the algorithm.
25

Sánchez-Bajo, F., and F. L. Cumbrera. "Deconvolution of X-ray diffraction profiles by using series expansion." Journal of Applied Crystallography 33, no. 2 (April 1, 2000): 259–66. http://dx.doi.org/10.1107/s0021889899015575.

Abstract:
The deconvolution of X-ray diffraction profiles is a basic step in obtaining reliable results on the microstructure of crystalline powders (crystallite size, lattice microstrain, etc.). A procedure is proposed for unfolding the linear integral equation h = g*f involved in the kinematical theory of X-ray diffraction. This technique is based on a series expansion of the 'pure' profile, f. The method has been tested with a simulated instrument-broadened profile overlaid with random noise, using Hermite polynomials and Fourier series, and applied to the deconvolution of the (111) peak of a sample of 9-YSZ. In both cases, the effects of the 'ill-posed' nature of this deconvolution problem were minimized, especially when the zero-order regularization was combined with the series expansion.
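The instability of unfolding h = g*f, and the benefit of keeping only a truncated expansion, can be sketched with a truncated singular value decomposition, a close relative of the series-expansion-plus-regularization approach above. The profile shapes, grid, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x = np.linspace(-5, 5, n, endpoint=False)
f = np.exp(-x**2 / 0.5)                          # 'pure' profile f
idx = np.arange(n)
d = np.minimum(idx, n - idx) * (10.0 / n)        # circular distance on the grid
g = np.exp(-d**2 / 2.0)
g /= g.sum()                                     # instrument function g
G = np.array([[g[(i - j) % n] for j in range(n)] for i in range(n)])
h = G @ f + 1e-3 * rng.standard_normal(n)        # observed h = g*f + noise

U, s, Vt = np.linalg.svd(G)
# Naive inversion divides by every singular value: the noise explodes
f_naive = Vt.T @ ((U.T @ h) / s)
# Truncated expansion keeps only the well-determined components
k = int(np.sum(s > 1e-2 * s[0]))
f_tsvd = Vt.T[:, :k] @ ((U.T @ h)[:k] / s[:k])

err_naive = np.linalg.norm(f_naive - f)
err_tsvd = np.linalg.norm(f_tsvd - f)
print(k, err_naive, err_tsvd)
```

Truncating the expansion plays the same stabilizing role as the zero-order regularization mentioned in the abstract: a small bias is accepted in exchange for suppressing the noise-dominated components.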
26

Fletcher, Scott William Hugh. "Who are we Trying to Protect? The Role of Vulnerability Analysis in New Zealand's Law of Negligence." Victoria University of Wellington Law Review 47, no. 1 (June 1, 2016): 19. http://dx.doi.org/10.26686/vuwlr.v47i1.4877.

Abstract:
New Zealand has incorporated ideas of vulnerability within its law of negligence for some years. It has not, however, clarified what is meant by vulnerability or the role the concept plays within the broader duty of care framework. Several obiter comments in Body Corporate No 207624 v North Shore City Council (Spencer on Byron) suggest the concept ought not to be part of the law due to its uncertain and confusing nature. Subsequent cases have, however, continued to use the concept, and continue to use it despite both its historically ill-defined nature and the additional uncertainty added by Spencer on Byron. This article argues that vulnerability can and ought to be a part of New Zealand negligence law. With a consistent application of a single test for vulnerability – that established in the High Court of Australia in Woolcock Street Investments Pty Ltd v CDG Pty Ltd – vulnerability can be conceptually certain and provide useful insight into the issues posed by the law of negligence.
27

Zhu, Mingwei, Min Zhao, Min Yao, and Ruipeng Guo. "Generative Adversarial Network of Industrial Positron Images on Memory Module." Entropy 24, no. 6 (June 7, 2022): 793. http://dx.doi.org/10.3390/e24060793.

Abstract:
PET (Positron Emission Computed Tomography) imaging is challenging due to its ill-posed nature and the limited photon response line data. Generative adversarial networks have been widely used in computer vision and have recently achieved great success. In our paper, we train an adversarial model, based on an attention mechanism, to improve the quality of industrial positron images. The innovation of the proposed method is that we build a memory module that focuses on the contribution of feature details to the parts of the images of interest. We use an encoder to obtain hidden vectors from a basic dataset as prior knowledge and train the nets jointly. We evaluate the quality of the simulated positron images by MS-SSIM and PSNR. At the same time, the real industrial positron images also show a good visual effect.
28

Navale, Miss Anjana, Prof Namdev Sawant, and Prof Umaji Bagal. "Color Attenuation Prior (CAP) for Single Image Dehazing." International Journal Of Engineering And Computer Science 7, no. 02 (February 20, 2018): 23578–84. http://dx.doi.org/10.18535/ijecs/v7i2.10.

Full text
Abstract:
Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we have used a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model for modeling the scene depth of the hazy image under this novel prior and learning the parameters of the model with a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.
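The pipeline this abstract describes (a linear scene-depth model under the color attenuation prior, transmission from the atmospheric scattering model, then radiance recovery) can be sketched as follows. The `theta` coefficients are learned by supervised regression in the CAP approach; the values below and the simple bright-pixel estimate of atmospheric light are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def dehaze_cap(image, beta=1.0, theta=(0.121779, 0.959710, -0.780245), t_min=0.1):
    """Sketch of color-attenuation-prior (CAP) dehazing.

    image: H x W x 3 RGB array with values in [0, 1].
    theta: illustrative linear-model coefficients; in the CAP method they
    are learned by supervised regression, with depth = t0 + t1*V + t2*S.
    """
    # Brightness (HSV value) and saturation channels
    v = image.max(axis=2)
    s = (v - image.min(axis=2)) / np.maximum(v, 1e-6)
    # Linear scene-depth model under the color attenuation prior
    depth = theta[0] + theta[1] * v + theta[2] * s
    # Transmission from the atmospheric scattering model, clipped for stability
    t = np.clip(np.exp(-beta * depth), t_min, 1.0)
    # Atmospheric light: mean color of the most distant (deepest) pixels
    flat = depth.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = image.reshape(-1, 3)[idx].mean(axis=0)
    # Recover scene radiance J from the scattering model I = J*t + A*(1 - t)
    J = (image - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

A real implementation would typically also smooth the depth map (e.g. with guided filtering) before computing the transmission; that refinement step is omitted here for brevity.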
APA, Harvard, Vancouver, ISO, and other styles
29

Stanley, Michael, Pratik Patil, and Mikael Kuusela. "Uncertainty quantification for wide-bin unfolding: one-at-a-time strict bounds and prior-optimized confidence intervals." Journal of Instrumentation 17, no. 10 (October 1, 2022): P10013. http://dx.doi.org/10.1088/1748-0221/17/10/p10013.

Full text
Abstract:
Unfolding is an ill-posed inverse problem in particle physics aiming to infer a true particle-level spectrum from smeared detector-level data. For computational and practical reasons, these spaces are typically discretized using histograms, and the smearing is modeled through a response matrix corresponding to a discretized smearing kernel of the particle detector. This response matrix depends on the unknown shape of the true spectrum, leading to a fundamental systematic uncertainty in the unfolding problem. To handle the ill-posed nature of the problem, common approaches regularize the problem either directly via methods such as Tikhonov regularization, or implicitly by using wide bins in the true space that match the resolution of the detector. Unfortunately, both of these methods lead to a non-trivial bias in the unfolded estimator, thereby hampering frequentist coverage guarantees for confidence intervals constructed from these methods. We propose two new approaches to addressing the bias in the wide-bin setting through methods called One-at-a-time Strict Bounds (OSB) and Prior-Optimized (PO) intervals. The OSB intervals are a bin-wise modification of an existing guaranteed-coverage procedure, while the PO intervals are based on a decision-theoretic view of the problem. Importantly, both approaches provide well-calibrated frequentist confidence intervals even in constrained and rank-deficient settings. These methods are built upon a more general answer to the wide-bin bias problem, involving unfolding with fine bins first, followed by constructing confidence intervals for linear functionals of the fine-bin counts. We test and compare these methods to other available methodologies in a wide-bin deconvolution example and a realistic particle physics simulation of unfolding a steeply falling particle spectrum.
APA, Harvard, Vancouver, ISO, and other styles
30

Guo, Wei, Tao Xu, Keming Tang, Jianjiang Yu, and Shuangshuang Chen. "Online Sequential Extreme Learning Machine with Generalized Regularization and Adaptive Forgetting Factor for Time-Varying System Prediction." Mathematical Problems in Engineering 2018 (May 31, 2018): 1–22. http://dx.doi.org/10.1155/2018/6195387.

Full text
Abstract:
Many real-world applications are time-varying in nature, and an online learning algorithm is preferred for tracking the real-time changes of a time-varying system. Online sequential extreme learning machine (OSELM) is an excellent online learning algorithm, and some improved OSELM algorithms incorporating a forgetting mechanism have been developed to model and predict time-varying systems. But the existing algorithms suffer from a potential risk of instability due to the intrinsic ill-posed problem; moreover, the adaptive tracking ability of these algorithms for complex time-varying systems is still very weak. In order to overcome these two problems, this paper proposes a novel OSELM algorithm with generalized regularization and an adaptive forgetting factor (AFGR-OSELM). In the AFGR-OSELM, a new generalized regularization approach is employed to replace the traditional exponential forgetting regularization so that the algorithm has a constant regularization effect; consequently, the potential ill-posed problem of the algorithm can be completely avoided and persistent stability can be guaranteed. Moreover, the AFGR-OSELM adopts an adaptive scheme to adjust the forgetting factor dynamically and automatically in the online learning process so as to better track the dynamic changes of the time-varying system and reduce the adverse effects of outdated data in time; thus it tends to provide desirable prediction results in time-varying environments. Detailed performance comparisons of AFGR-OSELM with other representative algorithms are carried out using artificial and real-world data sets. The experimental results show that the proposed AFGR-OSELM has higher prediction accuracy with better stability than its counterparts for predicting time-varying systems.
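As a minimal sketch of the forgetting idea behind such online learners, the classic recursive least squares (RLS) update with a forgetting factor and a ridge-initialized covariance is shown below. This is not the paper's AFGR-OSELM (which additionally generalizes the regularization and adapts the forgetting factor online); it is only the underlying recursion, with illustrative parameter values.

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with a forgetting factor and a constant
    ridge initialization -- a minimal sketch of the mechanism behind
    regularized online sequential learning (illustrative only)."""

    def __init__(self, dim, forget=0.98, ridge=1e-2):
        self.w = np.zeros(dim)          # weight estimate
        self.P = np.eye(dim) / ridge    # inverse of the regularized covariance
        self.forget = forget            # forgetting factor in (0, 1]

    def update(self, h, y):
        # h: feature vector, y: scalar target; standard RLS-with-forgetting step
        Ph = self.P @ h
        k = Ph / (self.forget + h @ Ph)            # gain vector
        self.w = self.w + k * (y - h @ self.w)     # correct by prediction error
        self.P = (self.P - np.outer(k, Ph)) / self.forget

    def predict(self, h):
        return h @ self.w
```

A forgetting factor below 1 exponentially discounts old samples, which is what lets such recursions track a drifting system at the cost of a larger steady-state variance.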
APA, Harvard, Vancouver, ISO, and other styles
31

Dowding, K. J., and J. V. Beck. "A Sequential Gradient Method for the Inverse Heat Conduction Problem (IHCP)." Journal of Heat Transfer 121, no. 2 (May 1, 1999): 300–306. http://dx.doi.org/10.1115/1.2825980.

Full text
Abstract:
A sequential-in-time implementation is proposed for a conjugate gradient method using an adjoint equation approach to solve the inverse heat conduction problem (IHCP). Because the IHCP is generally ill-posed, Tikhonov regularization is included to stabilize the solution and allow for the inclusion of prior information. Aspects of the sequential gradient method are discussed and examined. Simulated one and two-dimensional test cases are evaluated to study the sequential implementation. Numerical solutions are obtained using a finite difference procedure. Results indicate the sequential implementation has accuracy comparable to the standard whole-domain solution, but in certain cases requires significantly more computational time. Benefits of the on-line nature of a sequential method may outweigh the additional computational requirements. Methods to improve the computational requirements, which make the method competitive with a whole domain solution, are given.
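The Tikhonov regularization mentioned above amounts to adding a penalty λ‖x‖² to the least-squares objective, which stabilizes the solution of an ill-posed linear system. A minimal dense-matrix sketch (a generic illustration, not the paper's sequential finite-difference implementation):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the normal equations.

    A standard way to stabilize an ill-posed discretized inverse problem
    such as the IHCP; lam > 0 trades data fit against solution stability.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

Even with a severely ill-conditioned operator (e.g. a Hilbert matrix) and noisy data, a small positive λ keeps the solution norm bounded while the residual stays near the noise level, which is exactly the stabilizing effect regularization provides for inverse heat conduction.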
APA, Harvard, Vancouver, ISO, and other styles
32

Chen, Xiaojun, Xiao Han, and Xiaoying Tang. "Magnetic Particle Imaging Reconstruction Based on the Least Absolute Shrinkage and Selection Operator Regularization." Journal of Medical Imaging and Health Informatics 11, no. 3 (March 1, 2021): 703–11. http://dx.doi.org/10.1166/jmihi.2021.3445.

Full text
Abstract:
Magnetic particle imaging is a new medical imaging modality which is based on the non-linear response of magnetic nanoparticles. The reconstruction task is an inverse problem and ill-posed in nature. To overcome the problem, we propose to use the least absolute shrinkage and selection operator (LASSO) regularization model. In order to reach a good result with a short reconstruction time, we use the truncated system matrix and the truncated measurement based on two threshold setting methods for reconstruction research. In this paper, we study the reconstruction quality of different threshold values and different regularization parameter values. We compare the reconstruction performance of the proposed model with the Tikhonov model from visualization and performance indicators. The conducted study illustrated that the proposed method yields significantly higher reconstruction quality than the state-of-the-art reconstruction method based on Tikhonov model.
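The LASSO model referred to here penalizes the ℓ1 norm of the image, promoting sparse reconstructions. One standard solver is iterative soft-thresholding (ISTA); the sketch below is a generic illustration of that solver, not the authors' truncated-system-matrix implementation.

```python
import numpy as np

def ista_lasso(A, b, lam, n_iter=3000):
    """Iterative soft-thresholding (ISTA) for the LASSO problem
    min 0.5*||A x - b||^2 + lam*||x||_1, a sparsity-promoting
    alternative to Tikhonov regularization for ill-posed reconstruction."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the smooth data term
        z = x - g / L                      # gradient step
        # Soft-thresholding: the proximal operator of the l1 penalty
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

With a sparse ground truth and an underdetermined system matrix, the ℓ1 penalty recovers the support where a plain least-squares solve would not, which is why LASSO-type models are attractive for MPI reconstruction.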
APA, Harvard, Vancouver, ISO, and other styles
33

Sahani, Maneesh, and Peter Dayan. "Doubly Distributional Population Codes: Simultaneous Representation of Uncertainty and Multiplicity." Neural Computation 15, no. 10 (October 1, 2003): 2255–79. http://dx.doi.org/10.1162/089976603322362356.

Full text
Abstract:
Perceptual inference fundamentally involves uncertainty, arising from noise in sensation and the ill-posed nature of many perceptual problems. Accurate perception requires that this uncertainty be correctly represented, manipulated, and learned about. The choices subjects make in various psychophysical experiments suggest that they do indeed take such uncertainty into account when making perceptual inferences, posing the question as to how uncertainty is represented in the activities of neuronal populations. Most theoretical investigations of population coding have ignored this issue altogether; the few existing proposals that address it do so in such a way that it is fatally conflated with another facet of perceptual problems that also needs correct handling: multiplicity (that is, the simultaneous presence of multiple distinct stimuli). We present and validate a more powerful proposal for the way that population activity may encode uncertainty, both distinctly from and simultaneously with multiplicity.
APA, Harvard, Vancouver, ISO, and other styles
34

Lu, Weijun, Guanyi Ma, and Qingtao Wan. "A Review of Voxel-Based Computerized Ionospheric Tomography with GNSS Ground Receivers." Remote Sensing 13, no. 17 (August 29, 2021): 3432. http://dx.doi.org/10.3390/rs13173432.

Full text
Abstract:
Ionized by solar radiation, the ionosphere causes a phase rotation or time delay to trans-ionospheric radio waves. Reconstruction of ionospheric electron density profiles with global navigation satellite system (GNSS) observations has become an indispensable technique for various purposes ranging from space physics studies to radio applications. This paper conducts a comprehensive review on the development of voxel-based computerized ionospheric tomography (CIT) in the last 30 years. A brief introduction is given in chronological order starting from the first report of CIT with simulation to the newly proposed voxel-based algorithms for ionospheric event analysis. The statement of the tomographic geometry and voxel models are outlined with the ill-posed and ill-conditioned nature of CIT addressed. With the additional information from other instrumental observations or initial models supplemented to make the coefficient matrix less ill-conditioned, equation constructions are categorized into constraints, virtual data assimilation and multi-source observation fusion. Then, the paper classifies and assesses the voxel-based CIT algorithms of the algebraic method, statistical approach and artificial neural networks for equation solving or electron density estimation. The advantages and limitations of the algorithms are also pointed out. Moreover, the paper illustrates the representative height profiles and two-dimensional images of ionospheric electron densities from CIT. Ionospheric disturbances studied with CIT are presented. It also demonstrates how the CIT benefits ionospheric correction and ionospheric monitoring. Finally, some suggestions are provided for further research about voxel-based CIT.
APA, Harvard, Vancouver, ISO, and other styles
35

Tran, Hai, and Tat-Hien Le. "Wavelet deconvolution technique for impact force reconstruction: mutual deconvolution approach." Science & Technology Development Journal - Engineering and Technology 3, SI2 (January 22, 2021): first. http://dx.doi.org/10.32508/stdjet.v3isi2.507.

Full text
Abstract:
In the field of impact engineering, one of the most important issues is how to determine the history of an impact force, which is often difficult or impossible to measure directly. In practice, the impact force applied to a structure can be identified indirectly from the corresponding output responses measured on the structure. Namely, by using the output responses caused by the unknown impact force (such as acceleration, displacement, or strain) together with the impulse response function, the profile of the unknown impact force can be rebuilt. Such an indirect method is well known as impact force reconstruction, or the impact force deconvolution technique. Unfortunately, simple deconvolution techniques for reconstructing impact forces often encounter difficulty due to the ill-posed nature of the inversion. Deconvolution thus often yields unexpected reconstructions in which unavoidable errors are magnified to large values that dominate the profile of the desired impact force. Although some regularization methods have been proposed to mitigate this ill-posedness, most of them operate over the whole time domain, which can make the reconstruction inefficient and inaccurate because an impact force is normally limited to some portion of the impact duration. This work concerns the development of a deconvolution technique using the wavelet transform. Based on the advantages of wavelets (i.e., localization in time and the possibility of analysis at different scales and shifts), a mutual reconstruction process is proposed and formulated by considering different scales of wavelets. An experiment is conducted to verify the proposed technique. The results demonstrate the robustness of the present technique, reconstructing the impact force with greater stability and higher accuracy.
APA, Harvard, Vancouver, ISO, and other styles
36

Angulo, J. M., and M. D. Ruiz-Medina. "Multi-resolution approximation to the stochastic inverse problem." Advances in Applied Probability 31, no. 4 (December 1999): 1039–57. http://dx.doi.org/10.1239/aap/1029955259.

Full text
Abstract:
The linear inverse problem of estimating the input random field in a first-kind stochastic integral equation relating two random fields is considered. For a wide class of integral operators, which includes the positive rational functions of a self-adjoint elliptic differential operator on L2(ℝd), the ill-posed nature of the problem disappears when such operators are defined between appropriate fractional Sobolev spaces. In this paper, we exploit this fact to reconstruct the input random field from the orthogonal expansion (i.e. with uncorrelated coefficients) derived for the output random field in terms of wavelet bases, transformed by a linear operator factorizing the output covariance operator. More specifically, conditions under which the direct orthogonal expansion of the output random field coincides with the integral transformation of the orthogonal expansion derived for the input random field, in terms of an orthonormal wavelet basis, are studied.
APA, Harvard, Vancouver, ISO, and other styles
37

Angulo, J. M., and M. D. Ruiz-Medina. "Multi-resolution approximation to the stochastic inverse problem." Advances in Applied Probability 31, no. 04 (December 1999): 1039–57. http://dx.doi.org/10.1017/s0001867800009617.

Full text
Abstract:
The linear inverse problem of estimating the input random field in a first-kind stochastic integral equation relating two random fields is considered. For a wide class of integral operators, which includes the positive rational functions of a self-adjoint elliptic differential operator on L2(ℝd), the ill-posed nature of the problem disappears when such operators are defined between appropriate fractional Sobolev spaces. In this paper, we exploit this fact to reconstruct the input random field from the orthogonal expansion (i.e. with uncorrelated coefficients) derived for the output random field in terms of wavelet bases, transformed by a linear operator factorizing the output covariance operator. More specifically, conditions under which the direct orthogonal expansion of the output random field coincides with the integral transformation of the orthogonal expansion derived for the input random field, in terms of an orthonormal wavelet basis, are studied.
APA, Harvard, Vancouver, ISO, and other styles
38

Fábregas Ibáñez, Luis, Gunnar Jeschke, and Stefan Stoll. "DeerLab: a comprehensive software package for analyzing dipolar electron paramagnetic resonance spectroscopy data." Magnetic Resonance 1, no. 2 (October 1, 2020): 209–24. http://dx.doi.org/10.5194/mr-1-209-2020.

Full text
Abstract:
Dipolar electron paramagnetic resonance (EPR) spectroscopy (DEER and other techniques) enables the structural characterization of macromolecular and biological systems by measurement of distance distributions between unpaired electrons on a nanometer scale. The inference of these distributions from the measured signals is challenging due to the ill-posed nature of the inverse problem. Existing analysis tools are scattered over several applications with specialized graphical user interfaces. This renders comparison, reproducibility, and method development difficult. To remedy this situation, we present DeerLab, an open-source software package for analyzing dipolar EPR data that is modular and implements a wide range of methods. We show that DeerLab can perform one-step analysis based on separable non-linear least squares, fit dipolar multi-pathway models to multi-pulse DEER data, run global analysis with non-parametric distributions, and use a bootstrapping approach to fully quantify the uncertainty in the analysis.
APA, Harvard, Vancouver, ISO, and other styles
39

Ju, Mingye, Zhenfei Gu, Dengyin Zhang, and Haoxing Qin. "Visibility Restoration for Single Hazy Image Using Dual Prior Knowledge." Mathematical Problems in Engineering 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/8190182.

Full text
Abstract:
Single image haze removal has been a challenging task due to its super ill-posed nature. In this paper, we propose a novel single image algorithm that improves the detail and color of such degraded images. More concretely, we redefine a more reliable atmospheric scattering model (ASM) based on our previous work and the atmospheric point spread function (APSF). Further, by taking the haze density spatial feature into consideration, we design a scene-wise APSF kernel prediction mechanism to eliminate the multiple-scattering effect. With the redefined ASM and designed APSF, combined with the existing prior knowledge, the complex dehazing problem can be subtly converted into one-dimensional searching problem, which allows us to directly obtain the scene transmission and thereby recover visually realistic results via the proposed ASM. Experimental results verify that our algorithm outperforms several state-of-the-art dehazing techniques in terms of robustness, effectiveness, and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
40

Piasentin, Flora, Rosalina Gabriel, Ana M. Arroz, Alexandra R. Silva, and Isabel R. Amorim. "What Is Most Desirable for Nature? An Analysis of Azorean Pupils’ Biodiversity Perspectives When Deciding on Ecological Scenarios." Sustainability 13, no. 22 (November 13, 2021): 12554. http://dx.doi.org/10.3390/su132212554.

Full text
Abstract:
Understanding pupils’ biodiversity perspectives is essential to developing educators’ sensitivity to students’ multi-faceted views of the world, thus increasing teaching effectiveness. In this study, we asked 1528 school pupils in the Azores to choose between alternative schemes in three ecological scenarios and to justify their decisions. The study’s objectives were to understand biodiversity perspectives underlying pupils’ choice of the most desirable schemes for nature and to examine whether gender and school level (middle school/high school) influenced their choices. Quantitative (frequency analysis and Chi-square statistics) and qualitative (thematic analysis) methods were applied for data analysis. The majority of pupils made appropriate choices, arguing from different biodiversity perspectives, which were classified in 10 categories and 24 subcategories. High school pupils did not exhibit significant differences among the main arguments employed, and mostly referred to ecological concepts, while middle school pupils exhibited different choices according to gender, emphasizing richness over the threats posed by introduced species. Biodiversity education should thus be strengthened, especially at the middle school level, where different complex issues would benefit from classroom discussion and systematization. The chosen methodological strategy proved to be effective in assessing pupils’ biodiversity perspectives, which may be useful to deal with other ill-structured problems.
APA, Harvard, Vancouver, ISO, and other styles
41

García-Rodríguez, R., and V. Parra-Vega. "Cartesian sliding PID control schemes for tracking robots with uncertain Jacobian." Transactions of the Institute of Measurement and Control 34, no. 4 (April 21, 2011): 448–62. http://dx.doi.org/10.1177/0142331210394908.

Full text
Abstract:
Because desired tasks are usually defined in operational coordinates, inverse and direct kinematics must be computed to obtain joint and Cartesian coordinates, respectively. To avoid the ill-posed nature of the inverse kinematics, however, Cartesian controllers have been proposed. Since Cartesian controllers are based on the assumption that the Jacobian is well known, an uncertain Jacobian will produce an inexact localization of the end-effector. In this paper, we present an alternative approach to the problem of Cartesian tracking for free and constrained motion subject to Jacobian uncertainty. These Cartesian schemes are based on sliding PID controllers in which the Cartesian errors are mapped into joint errors without any knowledge of the robot dynamics. Sufficient conditions on the feedback gains and on the stability properties of the estimated inverse Jacobian are presented to guarantee stability. Experimental results are provided to visualize the real-time stability properties of the proposed Cartesian schemes.
APA, Harvard, Vancouver, ISO, and other styles
42

Hu, Wenyi, Aria Abubakar, and Tarek M. Habashy. "Joint electromagnetic and seismic inversion using structural constraints." GEOPHYSICS 74, no. 6 (November 2009): R99—R109. http://dx.doi.org/10.1190/1.3246586.

Full text
Abstract:
We have developed a frequency-domain joint electromagnetic (EM) and seismic inversion algorithm for reservoir evaluation and exploration applications. EM and seismic data are jointly inverted using a cross-gradient constraint that enforces structural similarity between the conductivity image and the compressional wave (P-wave) velocity image. The inversion algorithm is based on a Gauss-Newton optimization approach. Because of the ill-posed nature of the inverse problem, regularization is used to constrain the solution. The multiplicative regularization technique selects the regularization parameters automatically, improving the robustness of the algorithm. A multifrequency data-weighting scheme prevents the high-frequency data from dominating the inversion process. When the joint-inversion algorithm is applied in integrating marine controlled-source electromagnetic data with surface seismic data for subsea reservoir exploration applications and in integrating crosswell EM and sonic data for reservoir monitoring and evaluation applications, results improve significantly over those obtained from separate EM or seismic inversions.
APA, Harvard, Vancouver, ISO, and other styles
43

Vasan, Vishal, and Bernard Deconinck. "The inverse water wave problem of bathymetry detection." Journal of Fluid Mechanics 714 (January 2, 2013): 562–90. http://dx.doi.org/10.1017/jfm.2012.497.

Full text
Abstract:
The inverse water wave problem of bathymetry detection is the problem of deducing the bottom topography of the seabed from measurements of the water wave surface. In this paper, we present a fully nonlinear method to address this problem in the context of the Euler equations for inviscid irrotational fluid flow with no further approximation. Given the water wave height and its first two time derivatives, we demonstrate that the bottom topography may be reconstructed from the numerical solution of a set of two coupled non-local equations. Owing to the presence of growing hyperbolic functions in these equations, their numerical solution is increasingly difficult if the length scales involved are such that the water is sufficiently deep. This reflects the ill-posed nature of the inverse problem. A new method for the solution of the forward problem of determining the water wave surface at any time, given the bathymetry, is also presented.
APA, Harvard, Vancouver, ISO, and other styles
44

Mahmoud, Fatin F. "On the Nonexistence of a Feasible Solution in the Context of the Differential Form of Eringen’s Constitutive Model: A Proposed Iterative Model Based on a Residual Nonlocality Formulation." International Journal of Applied Mechanics 09, no. 07 (October 2017): 1750094. http://dx.doi.org/10.1142/s1758825117500946.

Full text
Abstract:
The motivation of this paper is to highlight and critically discuss two main aspects of Eringen’s nonlocal constitutive model. The first aspect is the inconsistency between the integral and differential forms of the model, together with the ill-posed form and physical infeasibility of results that may be obtained using the differential form. The critical analysis focuses on the lack of consistency between the set of boundary constraints required by the differential form of Eringen’s model and the set of prescribed boundary conditions of the nonlocal static equilibrium problem. Because of this lack of consistency between the two constitutive forms, it can be concluded that a formulation in the context of the differential form is ill-posed, and the existence of a feasible solution is questionable and might not be admitted. The second aspect concerns the intractability of analytical solutions of nonlocal continuum problems based on the integral form of Eringen’s nonlocal constitutive model (IENCM), as well as the cumbersome numerical work required by direct computational methods such as the nonlocal finite element method. The complexity of using the integral form of Eringen’s constitutive model and the lack of a feasible solution under the differential form lead to the need for an efficient iterative computational approach based on the integral form. In this paper, an iterative computational method based on the nonlocality residual formulation for nonlocal continua, capable of investigating different elasticity problems, is proposed. The traditional local continuum solution is taken as the initial solution. To point out the inconsistencies between the integral and differential forms of Eringen’s nonlocal constitutive model, and to illustrate the efficiency and capability of the proposed iterative model, four static beam-bending problems of different natures are solved.
APA, Harvard, Vancouver, ISO, and other styles
45

Liao, Zhiwu. "Regularized Multidirections and Multiscales Anisotropic Diffusion for Sinogram Restoration of Low-Dosed Computed Tomography." Computational and Mathematical Methods in Medicine 2013 (2013): 1–12. http://dx.doi.org/10.1155/2013/190571.

Full text
Abstract:
Although most existing anisotropic diffusion (AD) methods are supported by solid mathematical theory, they still smooth edges and anatomy details (EADs) because they do not consider the discrete nature of digital signals. In order to improve the performance of AD in sinogram restoration of low-dose computed tomography (LDCT), we propose a new AD method, named regularized multidirections and multiscales anisotropic diffusion (RMDMS-AD), by extending AD to regularized AD (RAD) over multiple directions and scales. Since multiple directions reduce the discretization errors to the maximum extent, while multiple scales and RAD make the search neighborhood of the solution as large as possible and thus yield a more optimal solution to AD, the proposed method improves the performance of AD both in denoising and in stability of the solution. Moreover, since discretization errors and ill-posed solutions occur mostly near EADs, RMDMS-AD also preserves EADs well. Comparing the proposed method to existing AD methods on real sinograms, the new method shows good performance in preserving EADs while denoising and suppressing artifacts.
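For reference, the baseline that RMDMS-AD extends is classic Perona–Malik anisotropic diffusion, which diffuses strongly in flat regions and weakly across edges. A minimal sketch (illustrative parameters, periodic boundaries for brevity; not the paper's regularized multidirection/multiscale scheme):

```python
import numpy as np

def perona_malik(u, n_iter=20, kappa=0.1, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion on a 2-D image.

    kappa sets the gradient magnitude regarded as an edge; dt <= 0.25
    keeps the explicit 4-neighbour scheme stable.
    """
    u = u.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (periodic wrap)
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance: near zero where gradients are large,
        # so edges diffuse little while flat regions are smoothed
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
    return u
```

The discretization errors the abstract refers to arise from exactly these finite-difference approximations of the gradient, which is what motivates evaluating the diffusion over multiple directions and scales.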
APA, Harvard, Vancouver, ISO, and other styles
46

Wei, Kent (Hsin-Yu), Chang-Hua Qiu, and Ken Primrose. "Super-sensing technology: industrial applications and future challenges of electrical tomography." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374, no. 2070 (June 28, 2016): 20150328. http://dx.doi.org/10.1098/rsta.2015.0328.

Full text
Abstract:
Electrical tomography is a relatively new imaging technique that can image the distribution of the passive electrical properties of an object. Since electrical tomography technology was proposed in the 1980s, the technique has evolved rapidly because of its low cost, easy scale-up and non-invasive features. The technique itself can be sensitive to all passive electrical properties, such as conductivity, permittivity and permeability. Hence, it has a huge potential to be applied in many applications. Owing to its ill-posed nature and low image resolution, electrical tomography attracts more attention in industrial fields than biomedical fields. In the past decades, there have been many research developments and industrial implementations of electrical tomography; nevertheless, the awareness of this technology in industrial sectors is still one of the biggest limitations for technology implementation. In this paper, the authors have summarized several representative applications that use electrical tomography. Some of the current tomography research activities will also be discussed. This article is part of the themed issue ‘Supersensing through industrial process tomography’.
APA, Harvard, Vancouver, ISO, and other styles
47

McNally, Bridget, Anne M. Garvey, and Thomas O’Connor. "Valuation of defined benefit pension schemes in IAS 19 employee benefits - true and fair?" Journal of Financial Regulation and Compliance 27, no. 1 (February 11, 2019): 31–42. http://dx.doi.org/10.1108/jfrc-03-2018-0048.

Full text
Abstract:
Purpose: This paper aims to argue that the accounting standards’ requirements for the valuation of defined benefit pension schemes in the financial statements of scheme-sponsoring companies potentially produce an artificial result which is at odds with the “faithful representation” and “relevance” objectives of these standards. Design/methodology/approach: The approach is a theoretical analysis of the relevant reporting standards, with a practical example to demonstrate the impact where trustees adopt a hedged approach to portfolio investment. Findings: Where a pension fund engages in asset-liability matching and invests in “risk-free” assets whose term, quantity, and duration/maturity are intended to match some or all of its scheme liabilities, the required accounting treatment potentially results in the sponsoring company’s financial statements reporting fluctuating surpluses or deficits each year which are potentially ill-informed and misleading. Originality/value: Pension scheme surpluses or deficits reported in the financial statements of listed companies are potentially very significant numbers; however, the dangers posed by the theoretical nature of the calculation have largely gone unreported.
APA, Harvard, Vancouver, ISO, and other styles
48

RIENSTRA, SJOERD W., and MIRELA DARAU. "Boundary-layer thickness effects of the hydrodynamic instability along an impedance wall." Journal of Fluid Mechanics 671 (February 11, 2011): 559–73. http://dx.doi.org/10.1017/s0022112010006051.

Full text
Abstract:
The Ingard–Myers condition, modelling the effect of an impedance wall under a mean flow by assuming a vanishingly thin boundary layer, is known to lead to an ill-posed problem in the time domain. By analysing the stability of a linear-then-constant mean flow over a mass-spring-damper liner in a two-dimensional incompressible limit, we show that the flow is absolutely unstable for boundary-layer thicknesses h smaller than a critical value h_c and convectively unstable or stable otherwise. This critical h_c is by nature independent of wavelength or frequency and is a property of the liner and mean flow only. An analytical approximation of h_c is given, complemented by a contour plot covering all parameter values. For an aeronautically relevant example, h_c is shown to be extremely small, which explains why this instability has never been observed in industrial practice. A systematically regularised boundary condition, to replace the Ingard–Myers condition, is proposed that retains the effects of a finite h, such that the stability of the approximate problem correctly follows the stability of the real problem.
APA, Harvard, Vancouver, ISO, and other styles
49

Tuczyński, Tomasz, and Jerzy Stopa. "Uncertainty Quantification in Reservoir Simulation Using Modern Data Assimilation Algorithm." Energies 16, no. 3 (January 20, 2023): 1153. http://dx.doi.org/10.3390/en16031153.

Full text
Abstract:
Production forecasting using numerical simulation has become a standard in the oil and gas industry. The model construction process requires an explicit definition of multiple uncertain parameters; thus, the outcome of the modelling is also uncertain. For reservoirs with production data, the uncertainty can be reduced by history-matching. However, the manual matching procedure is time-consuming and usually generates a single deterministic realization. Due to the ill-posed nature of the calibration process, the uncertainty cannot be captured sufficiently with only one simulation model. In this paper, the uncertainty quantification process carried out for a gas-condensate reservoir is described. An ensemble-based uncertainty approach was used with the ES-MDA algorithm, conditioning the models to the observed data. Along with the results, the authors describe the solutions proposed to improve the algorithm’s efficiency and to analyse the factors controlling modelling uncertainty. As part of the calibration process, various geological hypotheses regarding the presence of an active aquifer were verified, leading to important observations about the drive mechanism of the analysed reservoir.
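The ES-MDA (ensemble smoother with multiple data assimilation) algorithm named in this abstract repeatedly updates an ensemble of model realizations toward the observed data, with the observation-error covariance inflated by coefficients whose reciprocals sum to one. A minimal sketch on a toy linear forward model (the operator, noise levels and ensemble size are illustrative assumptions, not from the cited study):

```python
import numpy as np

def es_mda_step(M, G, d_obs, C_e, alpha, rng):
    """One ES-MDA update. M is an (n_param, n_ens) ensemble, G a linear
    forward operator, C_e the observation-error covariance, and alpha the
    inflation coefficient for this assimilation step."""
    D = G @ M                                       # predicted data per member
    Mm = M - M.mean(axis=1, keepdims=True)          # parameter anomalies
    Dm = D - D.mean(axis=1, keepdims=True)          # data anomalies
    n = M.shape[1]
    C_md = Mm @ Dm.T / (n - 1)                      # cross-covariance
    C_dd = Dm @ Dm.T / (n - 1)                      # data covariance
    # Perturb observations with inflated noise, then update each member
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_e, size=n).T
    K = C_md @ np.linalg.inv(C_dd + alpha * C_e)
    return M + K @ (d_obs[:, None] + E - D)

# Toy example: recover 2 parameters from 3 noisy linear measurements
rng = np.random.default_rng(0)
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
m_true = np.array([2.0, -1.0])
C_e = 0.01 * np.eye(3)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(3), C_e)

M = rng.normal(0.0, 2.0, size=(2, 200))             # prior ensemble
for alpha in [4.0, 4.0, 4.0, 4.0]:                  # sum of 1/alpha = 1
    M = es_mda_step(M, G, d_obs, C_e, alpha, rng)

print(M.mean(axis=1))  # posterior mean approaches m_true
```

Unlike a single manual history-match, the final ensemble provides both a calibrated estimate and a spread that quantifies the remaining uncertainty, which is the point the abstract makes about one deterministic realization being insufficient.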
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Chun-Teh, and Grace X. Gu. "Learning hidden elasticity with deep neural networks." Proceedings of the National Academy of Sciences 118, no. 31 (July 29, 2021): e2102721118. http://dx.doi.org/10.1073/pnas.2102721118.

Full text
Abstract:
Elastography is an imaging technique to reconstruct elasticity distributions of heterogeneous objects. Since cancerous tissues are stiffer than healthy ones, for decades, elastography has been applied to medical imaging for noninvasive cancer diagnosis. Although the conventional strain-based elastography has been deployed on ultrasound diagnostic-imaging devices, the results are prone to inaccuracies. Model-based elastography, which reconstructs elasticity distributions by solving an inverse problem in elasticity, may provide more accurate results but is often unreliable in practice due to the ill-posed nature of the inverse problem. We introduce ElastNet, a de novo elastography method combining the theory of elasticity with a deep-learning approach. With prior knowledge from the laws of physics, ElastNet can escape the performance ceiling imposed by labeled data. ElastNet uses backpropagation to learn the hidden elasticity of objects, resulting in rapid and accurate predictions. We show that ElastNet is robust when dealing with noisy or missing measurements. Moreover, it can learn probable elasticity distributions for areas even without measurements and generate elasticity images of arbitrary resolution. When both strain and elasticity distributions are given, the hidden physics in elasticity—the conditions for equilibrium—can be learned by ElastNet.
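The "conditions for equilibrium" the abstract refers to can serve as a physics loss: given a measured strain field and a candidate elasticity field, the divergence of the resulting stress should vanish. A minimal 1D illustration of such a residual (the fields and finite-difference discretization are assumptions for demonstration, not the authors' code):

```python
import numpy as np

def equilibrium_residual(E, strain, dx):
    """Residual of 1D static equilibrium d(sigma)/dx = 0 with sigma = E * strain.
    A model-based elastography loss drives this residual toward zero."""
    sigma = E * strain
    return np.diff(sigma) / dx

# Toy bar: stiffness varies, strain adjusts so that stress is uniform
x = np.linspace(0.0, 1.0, 101)
E_true = 1.0 + 0.5 * np.sin(2 * np.pi * x)    # hidden elasticity field
sigma0 = 1.0                                   # applied stress
strain = sigma0 / E_true                       # "measured" strain field

res_true = equilibrium_residual(E_true, strain, x[1] - x[0])
res_wrong = equilibrium_residual(np.ones_like(x), strain, x[1] - x[0])
print(np.abs(res_true).max(), np.abs(res_wrong).max())
```

The true elasticity field yields a vanishing residual while a wrong guess does not, which is how a physics-based loss can supervise a network without labeled elasticity data; in two or three dimensions the same idea applies to the full stress-divergence equations.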
APA, Harvard, Vancouver, ISO, and other styles