Scientific literature on the topic "Ill-posed nature"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.

Choose a source:

Browse the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Ill-posed nature."

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Ill-posed nature"

1

Markovsky, Alexander. "Development and Application of Ill-Posed Problems in the USSR." Applied Mechanics Reviews 41, no. 6 (June 1, 1988): 247–56. http://dx.doi.org/10.1115/1.3151896.

Full text
Abstract:
This article describes the earlier stage of development of ill-posed problems in the Soviet Union, where this area of mathematics originated. There are several types of problems, such as Fredholm and Volterra integral equations of the first kind, algebraic systems with ill-conditioned matrices, optimal regulation, approximate Fourier summation, inverse heat conduction, etc., where the ill-posed nature is a serious barrier to constructing a stable solution. Different methods, in increasing order of generality, showing how to interpret and solve these types of problems are reviewed. The main point is to demonstrate that all approaches to solving ill-posed problems can be based on common sense and intuition, though formalization is needed to explore the methodology in different fields of applied mathematics and engineering.
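The instability described for first-kind equations and ill-conditioned systems, and the regularization idea the abstract refers to, can be shown with a small numerical sketch. The code below is not from the article; it is a minimal NumPy illustration of Tikhonov regularization on a deliberately ill-conditioned Hilbert-matrix system, with the weight lam chosen ad hoc (in practice it would be set by, e.g., the discrepancy principle).

    import numpy as np

    # A small but severely ill-conditioned system: the Hilbert matrix.
    n = 12
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    x_true = np.ones(n)
    b = A @ x_true

    # Slightly noisy data, as in any real measurement.
    rng = np.random.default_rng(0)
    b_noisy = b + 1e-8 * rng.standard_normal(n)

    # Naive solution: the tiny data error is amplified enormously.
    x_naive = np.linalg.solve(A, b_noisy)

    # Tikhonov regularization: minimize ||Ax - b||^2 + lam * ||x||^2,
    # i.e. solve (A^T A + lam I) x = A^T b.  lam is an ad-hoc choice here.
    lam = 1e-10
    x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b_noisy)

    print("condition number :", np.linalg.cond(A))
    print("error, naive     :", np.linalg.norm(x_naive - x_true))
    print("error, Tikhonov  :", np.linalg.norm(x_tik - x_true))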
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Kornilov, Viktor S. "Methods of teaching inverse and incorrect problems to students in the context of informatization of education." RUDN Journal of Informatization in Education 17, no. 4 (December 15, 2020): 315–22. http://dx.doi.org/10.22363/2312-8631-2020-17-4-315-322.

Full text
Abstract:
Problem and goal. Computer technologies are now widely used in applied research aimed at obtaining new scientific knowledge. These studies used the method of computer modeling and computing experiment, from which it is possible to study the properties of remote or inaccessible objects, processes and phenomena of different nature. The above mentioned is directly related to teaching students applied mathematics in general, and, in particular, to teaching students of physical and mathematical training areas inverse and ill-posed problems, which are the scientific direction of applied mathematics. It is obvious that in the process of teaching students inverse and ill-posed problems, it is advisable to use computer technologies. However, the use of computer technology should be appropriate and correct. Methodology. The process of finding solutions to inverse and ill-posed problems is usually time-consuming, since such mathematical problems are non-linear in their formulation and may have a non-unique and unstable solution. These circumstances pose a mathematical difficulty in the proof of the theorems of existence, uniqueness and stability of solutions to inverse and ill-posed problems. Computer technologies help to overcome mathematical difficulties associated with routine transformations, analysis of information about solving such mathematical problems. Results. Using computer technologies, students gain experience in mobile research of various inverse and ill-posed problems, as well as in identifying the capabilities of computer technologies in solving various applied mathematical problems, and develop ICT competence. Conclusion. When using multimedia and computer technologies in the process of teaching students inverse and ill-posed problems, didactic principles of teaching are implemented, which allow students to acquire deep scientific knowledge on inverse and ill-posed problems, and develop their information culture.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
3

Moutsoglou, A. "An Inverse Convection Problem." Journal of Heat Transfer 111, no. 1 (February 1, 1989): 37–43. http://dx.doi.org/10.1115/1.3250655.

Full text
Abstract:
The nature of inverse problems in convective environments is investigated. The ill-posed quality inherent in inverse problems is verified for free convection laminar flow in a vertical channel. A sequential function specification algorithm is adapted for the semiparabolic system of equations that governs the flow and heat transfer in the channel. The procedure works very well in alleviating the ill-posed symptoms of inverse problems. The performance of a simple smoothing routine is also tested for the prescribed conditions.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
4

Sullivan, B., and B. Liu. "The ill-posed nature of a method for sub-Nyquist rate signal reconstruction." IEEE Transactions on Circuits and Systems 34, no. 2 (February 1987): 203–5. http://dx.doi.org/10.1109/tcs.1987.1086106.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
5

Hanes, Rebecca J., Nathan B. Cruze, Prem K. Goel, and Bhavik R. Bakshi. "Allocation Games: Addressing the Ill-Posed Nature of Allocation in Life-Cycle Inventories." Environmental Science & Technology 49, no. 13 (June 23, 2015): 7996–8003. http://dx.doi.org/10.1021/acs.est.5b01192.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
6

Barrett, T. A. "Reconstruction with noisy data." Proceedings, annual meeting, Electron Microscopy Society of America 44 (August 1986): 186–87. http://dx.doi.org/10.1017/s0424820100142554.

Full text
Abstract:
The problem of deducing the 3-D structure of an object given a limited number of 2-D projections (e.g., STEM micrographs) is well-known. It is an ill-posed problem in the sense that very many solutions exist that have the same 2-D projections. If the data are assumed to be noisy (most STEM images are very noisy), then the problem is still ill-posed: many solutions exist that yield the same least-square error from the data. Crewe et al. have shown that, remarkably, the constraint that many objects are essentially Boolean in nature (have constant density) means that their structure can be determined very well even with very few projections, if there is little noise.
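The effect of the Boolean constraint mentioned above can be illustrated with a toy enumeration. The snippet below is not from the paper; it is a hypothetical NumPy example in which row and column sums of a 3x3 binary map stand in for two orthogonal projections, showing how few binary solutions remain even though infinitely many real-valued images share the same projections.

    import numpy as np
    from itertools import product

    # Target object: a 3x3 binary "density" map; the data are its row and
    # column sums (a crude stand-in for two orthogonal projections).
    target = np.array([[1, 0, 0],
                       [1, 1, 0],
                       [0, 1, 1]])
    row_sums, col_sums = target.sum(axis=1), target.sum(axis=0)

    # Brute-force enumeration of all 0/1 images consistent with both projections.
    matches = []
    for bits in product([0, 1], repeat=9):
        img = np.array(bits).reshape(3, 3)
        if (img.sum(axis=1) == row_sums).all() and (img.sum(axis=0) == col_sums).all():
            matches.append(img)

    print("binary images consistent with both projections:", len(matches))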
Citation styles: APA, Harvard, Vancouver, ISO, etc.
7

Thompson, Owen E., Donald D. Dazlich, and Yu-Tai Hou. "The Ill-posed Nature of the Satellite Temperature Retrieval Problem and the Limits of Retrievability." Journal of Atmospheric and Oceanic Technology 3, no. 4 (December 1986): 643–49. http://dx.doi.org/10.1175/1520-0426(1986)003<0643:tipnot>2.0.co;2.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
8

Price, Michael, André Marshall, and Arnaud Trouvé. "A Multi-observable Approach to Address the Ill-Posed Nature of Inverse Fire Modeling Problems." Fire Technology 52, no. 6 (December 19, 2015): 1779–97. http://dx.doi.org/10.1007/s10694-015-0541-7.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
9

Papa, Frank J. "Learning sciences principles that can inform the construction of new approaches to diagnostic training." Diagnosis 1, no. 1 (January 1, 2014): 125–29. http://dx.doi.org/10.1515/dx-2013-0013.

Full text
Abstract:
The author suggests that the ill-defined nature of human diseases is a little-appreciated but nonetheless important contributor to persistent and high levels of diagnostic error. Furthermore, medical education’s continued use of traditional, non-evidence-based approaches to diagnostic training represents a systematic flaw likely perpetuating sub-optimal diagnostic performance in patients suffering from ill-defined diseases. This manuscript briefly describes how Learning Sciences findings elucidating how humans reason in the face of the uncertainty and complexity posed by ill-defined diseases might serve as guiding principles in the formulation of first steps towards a codified, 21st century approach to training and assessing the diagnostic capabilities of future health care providers.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
10

Zhang, Ye, Rongfang Gong, Mårten Gulliksson, and Xiaoliang Cheng. "A coupled complex boundary expanding compacts method for inverse source problems." Journal of Inverse and Ill-posed Problems 27, no. 1 (February 1, 2019): 67–86. http://dx.doi.org/10.1515/jiip-2017-0002.

Full text
Abstract:
In this paper, we consider an inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary conditions. The unknown source term is to be determined by additional boundary data. This problem is ill-posed since the dimensionality of the boundary is lower than the dimensionality of the inner domain. To overcome the ill-posed nature, using the a priori information (sourcewise representation), and based on the coupled complex boundary method, we propose a coupled complex boundary expanding compacts method (CCBECM). A finite element method is used for the discretization of CCBECM. The regularization properties of CCBECM for both the continuous and discrete versions are proved. Moreover, an a posteriori error estimate of the obtained finite element approximate solution is given and calculated by a projected gradient algorithm. Finally, numerical results show that the proposed method is stable and effective.
Citation styles: APA, Harvard, Vancouver, ISO, etc.

Theses on the topic "Ill-posed nature"

1

Giorgetti, Valentina. "Ill-Posed Problems in Computer Vision." Doctoral thesis, 2022. http://hdl.handle.net/2158/1274371.

Full text
Abstract:
The visual reconstruction problems often have an ill-posed nature. In this thesis we deal with analyzing and solving three kinds of visual reconstruction problems: Blind Source Separation, Demosaicing and Deblurring. The demosaicing problem is related to the acquisition of RGB color images by means of CCD digital cameras. In the RGB model, each pixel of a digital color image is associated to a triple of numbers, which indicate the light intensity of the red, green and blue channel, respectively. However, most cameras use a single sensor, associated with a color filter that allows only the measure at each pixel of the reflectance of the scene at one of the three colors, according to a given scheme or pattern, called Color Filter Array (CFA). For this reason, at each pixel, the other two missing colors should be estimated. Different CFA’s are proposed for the acquisition. The most common is the Bayer pattern. In this scheme, the number of pixels in which the green color is sampled is double with respect to those associated with the red and blue channels, because of the higher sensibility of the human eye to the green wavelengths. If we decompose the acquired image into three channels, we obtain three downsampled grayscale images, so that demosaicing could be interpreted as interpolating grayscale images from sparse data. In most cameras, demosaicing is a part of the processing required to obtain a visible image. The camera’s built-in firmware is substantially based on fast local interpolation algorithms. The heuristic approaches, which do not try to solve an optimization problem defined in mathematical terms, are widely used in the literature. These methods, in general, are very fast. Our proposed technique is of a heuristic kind. In general, the heuristic techniques consist of filtering operations, which are formulated by means of suitable observations on color images. The non-adaptive algorithms, among which bilinear and bicubic interpolation, yield satisfactory results in smooth regions of an image, but they can fail in textured or edge areas. Edge-directed interpolation is an adaptive approach, where, by analyzing the area around each pixel, we choose the possible interpolation direction. In practice, the interpolation direction is chosen to avoid interpolating across the edges. The algorithm here presented consists of three steps. The first two are initialization steps, while the third one is an iterative step. In the first one, the missing values in the green component are determined; in particular, a weighted average-type technique is used. The weights are determined in an edge-directed approach, in which we consider also the possible edges in the red and blue components. In the second step, we determine the missing values in the red and blue components. In this case we use two alternative techniques, according to the position of the involved pixel in the Bayer pattern. In the first technique, the missing value is determined by imposing that the second derivative of the intensity value of the red/blue channel is equal to the second derivative of the intensity values of the green channel. This is done according to the approaches proposed in the AP algorithm and the regularization algorithm. In particular, a constraint is imposed to make the derivatives of all channels as similar as possible. In the third step, all values of the three channels are recursively updated by means of a constant-hue-based technique. In particular, we assume a constant color difference.
The technique we propose at this step is similar to that used by W. T. Freeman. Indeed, even here a median filter is employed, in order to correct small spurious imperfections. We repeat the third step iteratively. However, to avoid increasing excessively the computational cost, we experimentally estimate that only four iterations are necessary to obtain an accurate demosaicing. We call our technique the Local Edge Preserving (LEP) algorithm. The results related to this technique have been published in A. Boccuto, I. Gerace, V. Giorgetti and M. Rinaldi, A Fast Algorithm for the Demosaicing Problem Concerning the Bayer Pattern. The Open Signal Processing Journal 6 (2019), 1–14. In this thesis, we also propose an algorithm for image demosaicing that does not work within the framework of the regularization approaches and is suited, in a natural way, to deal with noisy data. More precisely, we propose an algorithm for joint demosaicing and denoising. Regularization requires the adoption of constraints for the solution. The constraints we consider are intra-channel and inter-channel local correlation. With respect to the intra-channel correlation, we assume the intensity of each channel to be locally regular, i.e. piecewise smooth, so that also noise can be removed. We describe this constraint through stabilizers that are functions discouraging intensity discontinuities of first, second and third order in a selective way, so that those associated to true edges in the scene are left to emerge. This allows us to describe even very complex scenes. Indeed, first order local smoothness characterizes images consisting of constant patches, second order local smoothness describes patches whose pixels have values varying linearly, while third order local smoothness is used to represent images made up of quadratic-valued patches. As per the inter-channel correlation, we enforce it in correspondence with the intensity discontinuities, by means of constraints that promote their amplitude in the three channels to be equal almost everywhere. Note that all these constraints are by no means biased in favor of one of the three channels, nor is the geometry of the sampling pattern in any way exploited. Thus, the method we propose is completely independent of the CFA considered, although, in the experimental result section, we present its application to images mosaiced through the Bayer CFA. All the above constraints, including the data fidelity term, are merged in a non-convex energy function, whose minimizer is taken as our desired solution. The optimization is performed through an iterative deterministic algorithm entailing the minimization in a sequence of a family of approximating functions that, starting with a first componentwise convex function, gradually converges to the original energy. Our regularization approach can produce image solutions that exhibit reliable discontinuities of both the intensity and the gradients, despite the necessary smoothness constraints. Therefore, we propose an edge-preserving regularization approach, which means that the significant discontinuities in the reconstructed image are geometrically consistent. In the very first works proposing edge-preserving regularization, the image discontinuities were often represented by means of extra, explicit variables, the so-called “line processes”. In that way, it was relatively easy to formulate in terms of constraints the various properties required by significant discontinuities.
Nevertheless, the use of explicit line variables entails large computational costs. Thus, so-called “duality theorems” were derived to demonstrate the edge-preserving properties of suitable stabilizers, without introducing extra variables. In particular, we developed duality theorems to determine the properties required for a stabilizer to implicitly manage lines with the desired regularity features. In this work, we choose a suitable family of approximations with the peculiarity that each function satisfies the conditions required for an implicit treatment of geometrically significant edges, as expressed in the duality theorems. This allows a better adherence of the approximations to the ideal energy function, with a consequent better coherence with the properties required for the desired solution. In this thesis we also study a Blind Source Separation (BSS) problem. These topics have been widely investigated since the end of the last century, and have various applications. In particular, we analyze the digital reconstruction of degraded documents. We observe that weathering, powder, humidity, seeping of ink, mold and light transmission can determine the degradation of the paper and the ink of written text. Some of the consequences in damaged documents are, for instance, stains, noise, transparency of writing on the verso side and on the close pages, unfocused or overlapping characters, and so on. Historically, the first techniques of restoration for degraded documents were manual, and they led to a material restoration. Recently, thanks to the diffusion of scanners and software for reconstruction of images, videos, texts, photographs and films, several new techniques were used in the recovery and restoration of deteriorated material, like for instance digital or virtual restoration. Digital imaging for documents is very important, because it allows to have digital archives, making accessibility and readability always possible. The Digital Document Restoration consists of a set of processes aimed at the visual and aesthetic improvement of a virtual reconstruction of a corrupted document, without risk of deterioration. We deal with show-through and bleed-through effects. The show-through is a front-to-back interference, caused by the transparency of the paper and the scanning process, and by means of which the text in the recto side of the document can appear also in the verso side, and conversely. The bleed-through is an intrinsic front-to-back physical deterioration caused by ink seeping, and its effect is similar to that of show-through. The physical model for the show-through distortion is very complex, because there are the spreading of light in the paper, the features of the paper, the reflectance of the verso and the transmittance parameters. Sharma gave a mathematical model that was first analyzed and then further approximated so as to become easier to handle. This model describes the observed recto and verso images as mixtures of the two uncorrupted texts. Locally, we consider a classical linear and stationary recto-verso model developed for this purpose, and are concerned with the problem of estimating both the ideal source images of the recto and the verso of the document and the mixture matrix producing the bleed-through or show-through effects. This problem is ill-posed in the sense of Hadamard. In fact, as the estimated mixture matrix varies, the corresponding estimated sources are in general different, and thus infinitely many solutions exist.
Many techniques to solve this ill-posed inverse problem have been proposed in the literature. Among them, the Independent Component Analysis (ICA) methods are based on the assumption that the sources are mutually independent. The best-known ICA technique is the so-called FastICA, which by means of a fixed point iteration finds an orthogonal rotation of the prewhitened data that maximizes a measure of non-Gaussianity of the rotated components. The FastICA algorithm is a parameter-free and extremely fast procedure, but ICA is not a viable approach in our setting, as for the problem we consider there is a clear correlation among the sources. On the other hand, several techniques for ill-posed inverse problems require that the estimated sources are only mutually uncorrelated. In this case, the estimated sources are determined via a linear transformation of the data, which is obtained by imposing either an orthogonality condition, as in Principal Component Analysis (PCA), or an orthonormality condition, as in Whitening (W) and Symmetric Whitening (SW) techniques. These approaches all require only a single and very fast processing step. In [49, 156] it is observed that the results obtained by means of the SW method are substantially equivalent to those produced by an ICA technique in the symmetric mixing case. Here we assume that the sum of all rows of the mixing matrix is equal to one, since we expect the color of the background of the source to be the same as that of the data. In our setting, we change the variables of the data so that high and low light intensities correspond to presence and absence of text in the document, respectively, and we impose a nonnegativity constraint on the estimated sources. We define the overlapping matrix of both the observed data and the ideal sources, a quantity related to the cross-correlation between the signals. From the overlapping matrix we can deduce the overlapping level, which measures the similarity between the front and the back of the document. In order to obtain an accurate estimate of the sources, it is necessary to determine a correct source overlapping level. To this aim, we propose the following iterative procedure. At each iteration, given the current source overlapping level, we estimate the mixture matrix that produces the sources with the lowest possible source overlapping level among those having light intensity in the desired range. This mixture matrix is computed by means of a suitable symmetric factorization of the data overlapping matrix. We then use the estimated sources to update the source overlapping level, and iterate the procedure until a fixed point is reached. At the fixed point, the corresponding source overlapping level is the smallest one that allows to estimate the ideal recto and verso sides with the desired properties. We consider this level as an adequate estimate of the ideal source overlapping level. Thus, by means of this technique, we can estimate not only the ideal sources and the mixture matrix, but also the source overlapping level, a value that indicates the correlation between the ideal sources. Therefore, our method can be classified as a Correlated Component Analysis (CCA) technique. We refer to this method as the Minimum Amount of Text Overlapping in Document Separation (MATODS) algorithm. Similarly to the FastICA technique, the MATODS algorithm is a parameter-free and extremely fast procedure.
We use the MATODS algorithm to solve the non-stationary and locally linear model we propose, and in particular we present an extension of this technique that fits this model, which we call the Not Invariant for Translation MATODS (NIT-MATODS) algorithm. The related results have been published in A. Boccuto, I. Gerace and V. Giorgetti, A Blind Source Separation Technique for Document Restoration. SIAM J. Imaging Sci. 12 (2) (2019), 1135–1162. In this thesis we modify the MATODS algorithm to deal with the derivatives of the images of the original sources. In this case, we assume that the overlapping level is equal to zero. By means of our experimental results, we show that the proposed technique improves the results obtained by MATODS in terms both of accuracy of the estimates and of computational costs. We refer to this method as the Zero Edge Overlapping in Document Separation (ZEODS) algorithm. The obtained results are published in A. Boccuto, I. Gerace, V. Giorgetti and G. Valenti, A Blind Source Separation Technique for Document Restoration Based on Edge Estimation. http://viXra.org/abs/2201.0141 (2022). In [148], Sharma gave a mathematical model that was first analyzed and then further approximated so as to become easier to handle. This model describes the observed recto and verso images as mixtures of the two uncorrupted texts. Now we analyze in detail the iterative technique to solve such a model, in which the sources, the blur operators and the interference level are computed separately at every step, until a fixed point is found. In this work, in particular, we deal with determining the interference level, by fixing the blur operators and the ideal sources. To this aim, we use a GNC-type technique. In forthcoming papers, the steps about finding the blur operators and the ideal sources will be treated. The results concerning such a technique have been published in A. Boccuto, I. Gerace and V. Giorgetti, Blind Source Separation in Document Restoration: an Interference Level Estimation. http://viXra.org/abs/2201.0050 (2022). The problem of restoring images consists of estimating the original image, starting from the observed image and the supposed blur. In our model, we suppose to know the blur mask. In general, this problem is ill-conditioned and/or ill-posed in the Hadamard sense. Thanks to known regularization techniques, it is possible to reduce this problem to a well-posed problem, whose solution is the minimum of the so-called primal energy function, which consists of the sum of two terms. The former indicates the faithfulness of the solution to the data, and the latter is in connection with the regularity properties of the solution. In order to obtain more realistic restored images, the discontinuities in the intensity field are considered. Indeed, in images of real scenes, there are some discontinuities in correspondence with edges of several objects. To deal with such discontinuities, we consider some line variables. It is possible to minimize a priori the primal energy function in these variables, to determine a dual energy function, which treats discontinuities implicitly. Indeed, minimizing the dual energy function is more computationally efficient than minimizing directly the primal energy function. In general, the dual energy function has a quadratic term, related to the faithfulness with the data, and a not necessarily convex addend, the regularization term. In order to link these two kinds of energy functions, some suitable duality theorems are used.
In order to improve the quality of the reconstructed images, it is possible to consider a dual energy function which implicitly treats Boolean line variables. The proposed duality theorems can be used even with such a function. However, the related dual energy function is not necessarily convex. So, to minimize it, we use a GNC-type technique, which considers as first convex approximation the proposed convex dual energy function. It is possible to verify experimentally that the more expensive minimization is the first one, because the other ones just start with a good approximation of the solution. Hence, when we minimize the first convex approximation, we will approximate every block of the blur operator by matrices whose product can be computed by a suitable fast discrete transform. As every block is a symmetric Toeplitz matrix, we deal with determining a class of matrices easy to handle from the computational point of view, which yield a good approximation of the Toeplitz matrices. Toeplitz-type linear systems arise from numerical approximation of differential equations. Moreover, in restoration of blurred images, one often deals with Toeplitz matrices. Thus, in this thesis we investigate a particular class, which is a sum of two families of simultaneously diagonalizable real matrices, whose elements we call β-matrices. Such a class includes both circulant and reverse circulant matrices. Symmetric circulant matrices have several applications to ordinary and partial differential equations, images and signal restoration, graph theory. Reverse circulant matrices have different applications, for instance in exponential data fitting and signal processing. The obtained results have been published in A. Boccuto, I. Gerace and V. Giorgetti, Image deblurring: a class of matrices approximating Toeplitz matrices. http://viXra.org/abs/2201.0155 (2022). The thesis is structured as follows. In Chapter 1 we deal with the demosaicing problem, proposing a fast technique which locally estimates the edges. In Chapter 2 we treat the same problem, by giving a regularization technique for solving it. In Chapter 3 we consider the BSS problem for ancient documents, proposing a technique which uses symmetric factorizations. In Chapter 4 we modify the technique illustrated in the previous chapter, by introducing discontinuities. In Chapter 5 we deal with the BSS problem, by giving a regularization technique, and in particular we study the estimates of the interference levels. In Chapter 6 we treat the problem of image deblurring, and in particular we analyze how symmetric Toeplitz operators can be approximated in the proposed GNC technique.
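The demosaicing problem discussed at the start of this abstract begins from a Bayer-mosaiced image in which only one color is known at each pixel. The sketch below is not the thesis's LEP algorithm; it is a minimal, hypothetical NumPy illustration of the underlying interpolation task, reconstructing the green channel of an assumed RGGB Bayer layout by plain bilinear averaging of the sampled neighbors.

    import numpy as np

    def bayer_green_mask(h, w):
        """Green-sample locations for an RGGB Bayer pattern (an assumed layout)."""
        mask = np.zeros((h, w), dtype=bool)
        mask[0::2, 1::2] = True   # green next to red on even rows
        mask[1::2, 0::2] = True   # green next to blue on odd rows
        return mask

    def bilinear_green(mosaic, mask):
        """Fill missing green values with the mean of the available 4-neighbor samples."""
        h, w = mosaic.shape
        padded = np.pad(np.where(mask, mosaic, 0.0), 1)
        pmask = np.pad(mask.astype(float), 1)
        neigh_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:])
        neigh_cnt = (pmask[:-2, 1:-1] + pmask[2:, 1:-1] +
                     pmask[1:-1, :-2] + pmask[1:-1, 2:])
        est = np.divide(neigh_sum, neigh_cnt, out=np.zeros((h, w)), where=neigh_cnt > 0)
        return np.where(mask, mosaic, est)

    # Toy test on a smooth synthetic green channel.
    h, w = 6, 8
    green_true = np.fromfunction(lambda i, j: 10.0 * i + j, (h, w))
    mask = bayer_green_mask(h, w)
    mosaic = np.where(mask, green_true, 0.0)   # sensor records green only where mask is True
    green_est = bilinear_green(mosaic, mask)
    print("max abs error:", np.abs(green_est - green_true).max())

As the thesis notes, such non-adaptive interpolation behaves well in smooth regions but fails across edges, which is what motivates edge-directed and regularization-based alternatives.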
Citation styles: APA, Harvard, Vancouver, ISO, etc.

Books on the topic "Ill-posed nature"

1

Tikhonov, A. N. Ill-posed Problems in Natural Sciences: Proceedings of the International Conference, Moscow, August 1991. Brill Academic Publishers, 1992.

Find full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "Ill-posed nature"

1

"Ill-posed problems of mechanics." In Mathematical Understanding of Nature, 65–67. Providence, Rhode Island: American Mathematical Society, 2014. http://dx.doi.org/10.1090/mbk/085/21.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

James, Carolyn. "Risk-Taking and Risk Management." In A Renaissance Marriage, 92–112. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780199681211.003.0006.

Full text
Abstract:
This chapter explores the ways in which gender shaped the respective approaches to political decision-making by the marquis and his wife. I argue that while the delegated nature of her authority encouraged Isabella to keep her emotions strictly in check and to be prudent in a diplomatic setting, Francesco was far more erratic. On the one hand, he adopted strategies of temporizing, prevarication, and swift changes of allegiance to hedge his bets politically, seen by contemporaries as intrinsically female vices; on the other, he indulged in reckless and competitive behaviour designed to display his masculine courage and princely disdain for caution. Together the couple evaded the dangers posed by the second French descent and the fall of Milan to Louis XII, but it was Isabella’s prudence that neutralized the ill-considered risk-taking of her husband.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
3

Hämäläinen, Matti S. "Magnetoencephalography Source Estimation." In Fifty Years of Magnetoencephalography, 81–94. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190935689.003.0006.

Full text
Abstract:
This chapter describes the source estimation approaches to magnetoencephalography (MEG) analysis. Both MEG and electroencephalography (EEG) are measures of ongoing neuronal activity, and are ultimately generated by the same sources: postsynaptic currents in groups of neurons which have a geometrical arrangement favoring currents with a uniform direction across nearby neurons. From the outset, the overarching theme of MEG analysis methods has been the desire to transform the signals measured by the MEG sensors outside the head into estimates of source activity. This problem is challenging because of the ill-posed nature of the electromagnetic inverse problem. However, thanks to being able to capitalize on appropriate physiological and anatomical constraints, several reliable and widely used source estimation methods have emerged. Until 50 years ago, EEG was the only noninvasive technique capable of directly measuring neuronal activity with a millisecond time resolution; with the birth of MEG, functional brain activity can now be resolved with this time resolution at a new level of spatial detail. The use of MEG in practical studies began with the first real-time measurements in the beginning of the 1970s. During the following decade, multichannel MEG systems were developed in parallel with both investigations of normal brain activity and clinical studies, especially in epileptic patients. The first whole-head MEG system with more than 100 channels was introduced in 1992, and such instruments have since been delivered to researchers and clinicians worldwide. The chapter starts by describing the overall characteristics of MEG, followed by a general description of the source estimation problem; it then discusses the forward modeling approaches needed to relate the signals in the source and sensor spaces, and finally characterizes two popular approaches to source estimation: the parametric dipole model and distributed source estimates.
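The distributed source estimates mentioned at the end of the abstract are commonly computed as regularized minimum-norm solutions of a heavily underdetermined sensor-to-source mapping. The snippet below is not from the chapter; it is a small NumPy sketch in which a random gain matrix stands in for a real MEG forward model, illustrating the classic regularized minimum-norm estimate x_hat = G^T (G G^T + lam I)^(-1) y with an ad-hoc lam.

    import numpy as np

    rng = np.random.default_rng(1)

    n_sensors, n_sources = 60, 500                     # far fewer sensors than sources
    G = rng.standard_normal((n_sensors, n_sources))    # stand-in for a real gain matrix

    # Simulate a sparse source configuration and noisy sensor data y = G x + noise.
    x_true = np.zeros(n_sources)
    x_true[[40, 240, 410]] = [5.0, -3.0, 4.0]
    y = G @ x_true + 0.05 * rng.standard_normal(n_sensors)

    # Regularized minimum-norm estimate; real pipelines set lam from the noise covariance.
    lam = 1.0
    x_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)

    print("relative data fit:", np.linalg.norm(G @ x_hat - y) / np.linalg.norm(y))
    print("strongest estimated sources:", np.argsort(-np.abs(x_hat))[:5])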
Citation styles: APA, Harvard, Vancouver, ISO, etc.
4

"Preface." In Ill-Posed Problems in Natural Sciences, XI–XII. De Gruyter, 1992. http://dx.doi.org/10.1515/9783112313930-001.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
5

"Generalized maximum likelihood method and its application for solving ill-posed problems." In Ill-Posed Problems in Natural Sciences, 1–12. De Gruyter, 1992. http://dx.doi.org/10.1515/9783112313930-002.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
6

"Iterative method for the solution of nonlinear ill-posed problems and their applications." In Ill-Posed Problems in Natural Sciences, 13–17. De Gruyter, 1992. http://dx.doi.org/10.1515/9783112313930-003.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
7

"Uniform regularization of first kind operator equations in Hilbert space." In Ill-Posed Problems in Natural Sciences, 18–23. De Gruyter, 1992. http://dx.doi.org/10.1515/9783112313930-004.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
8

"On the discretization error in regularized projection methods with parameter choice by discrepancy principle." In Ill-Posed Problems in Natural Sciences, 24–28. De Gruyter, 1992. http://dx.doi.org/10.1515/9783112313930-005.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
9

"Regularization and uniqueness of solutions of systems of Volterra nonlinear integral equations of the first kind with two arguments." In Ill-Posed Problems in Natural Sciences, 29–32. De Gruyter, 1992. http://dx.doi.org/10.1515/9783112313930-006.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
10

"Pointwise residual method for solving systems of linear algebraic equations and inequalities." In Ill-Posed Problems in Natural Sciences, 33–43. De Gruyter, 1992. http://dx.doi.org/10.1515/9783112313930-007.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.

Conference proceedings on the topic "Ill-posed nature"

1

Johnson, Steven A., David T. Borup, James W. Wiskin, and Michael J. Berggren. "Mediation of Ill-posedness in Multiparameter Inverse Scattering." In Signal Recovery and Synthesis. Washington, D.C.: Optica Publishing Group, 1992. http://dx.doi.org/10.1364/srs.1992.tub3.

Full text
Abstract:
It is advantageous to formulate the Inverse Scattering problem as a nonlinear integro-differential equation. However, intrinsic to the nature of these inverse problems is their potential for ill-posedness. We review our experience in overcoming ill-posedness of inverse scattering problems and some of the theoretical and practical results we have achieved in our pursuit of numerically efficient and robust algorithms, i.e. fast algorithms that also mediate the ill-posed nature of some inverse problems. This is commensurate with the stated goal of the Center for Inverse Problems, Imaging, and Tomography, which is the development of imaging algorithms that produce quantitative maps of material parameters. The incident (electromagnetic or acoustic) energy is diffracting in the cases we study, so that standard Computed Tomographic (CT) algorithms are inadequate for the kind of resolution required for quantitative maps. The need to solve the wave equation exactly dramatically increases the complexity of the inversion problem.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Sun, Xiaopeng, Muxingzi Li, Tianyu He, and Lubin Fan. "Enhance Image as You Like with Unpaired Learning." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/140.

Full text
Abstract:
Low-light image enhancement exhibits an ill-posed nature, as a given image may have many enhanced versions, yet recent studies focus on building a deterministic mapping from input to an enhanced version. In contrast, we propose a lightweight one-path conditional generative adversarial network (cGAN) to learn a one-to-many relation from low-light to normal-light image space, given only sets of low- and normal-light training images without any correspondence. By formulating this ill-posed problem as a modulation code learning task, our network learns to generate a collection of enhanced images from a given input conditioned on various reference images. Therefore our inference model easily adapts to various user preferences, provided with a few favorable photos from each user. Our model achieves competitive visual and quantitative results on par with fully supervised methods on both noisy and clean datasets, while being 6 to 10 times lighter than state-of-the-art generative adversarial networks (GANs) approaches.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
3

Niebsch, Jenny, and Ronny Ramlau. "Mathematical Imbalance Determination From Vibrational Measurements and Industrial Applications." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49197.

Full text
Abstract:
The paper focuses on the identification of imbalances from vibrational measurements in rotating machinery and its application to industrial problems. Since it is an ill-posed inverse problem the reconstruction is based on regularization techniques. To handle the direct problem, a model of the rotor under consideration has to be provided. We have employed the imbalance reconstruction principle to several industrial applications of linear and nonlinear nature.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
4

Poggio, Tomaso. "Parallel Processes in Early Vision: from the computational structure to algorithms and parallel hardware." In Optical Computing. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/optcomp.1985.ma3.

Full text
Abstract:
One of the best definitions of early vision is that it is inverse optics — a set of computational problems that both machines and biological organisms have to solve. While in classical optics the problem is to determine the images of physical objects, vision is confronted with the inverse problem of recovering three-dimensional shape from the light distribution in the image. Most processes of early vision such as stereomatching, computation of motion and all the "structure from" processes can be regarded as solutions to inverse problems. This common characteristic of early vision can be formalized: most early vision problems are "ill-posed problems" in the sense of Hadamard. In this article we will first review a new framework suggested by Poggio and Torre (1984). They suggested that the mathematical theory developed for regularizing ill-posed problems leads in a natural way to the solution of early vision problems in terms of variational principles of a certain class. They argued that this is a new theoretical framework for some of the variational solutions already obtained in the analysis of early vision processes. They also showed how several other problems in early vision can be approached and solved. Thus the computational, ill-posed nature of early vision problems dictates a specific class of algorithms for solving them, based on variational principles of a certain class. It is natural to consider next which classes of parallel hardware may efficiently implement regularization algorithms. We are especially interested in implementations that are suggestive for biology. I will thus review a model of computation proposed by Poggio and Koch (1984) that maps easily into biologically plausible mechanisms. They showed that a natural way of implementing variational principles of the regularization type is to use electrical, chemical or neuronal networks. They also showed how to derive specific networks for solving several low-level vision problems, such as the computation of visual motion and edge detection.
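Variational principles of the regularization type take the generic form "data term plus lambda times a smoothness stabilizer." The sketch below is not from the paper; it is a small NumPy example, with generic matrices A and D and an ad-hoc weight lam as assumptions, that reconstructs a 1-D "surface" from sparse noisy samples by minimizing ||Ax - b||^2 + lam ||Dx||^2 through its normal equations.

    import numpy as np

    rng = np.random.default_rng(2)

    n = 100                                    # number of surface samples to recover
    t = np.linspace(0.0, 1.0, n)
    surface_true = np.sin(2 * np.pi * t)

    # Sparse, noisy observations at a few random positions (an underdetermined data term).
    obs_idx = np.sort(rng.choice(n, size=15, replace=False))
    A = np.zeros((obs_idx.size, n))
    A[np.arange(obs_idx.size), obs_idx] = 1.0
    b = surface_true[obs_idx] + 0.05 * rng.standard_normal(obs_idx.size)

    # Second-difference stabilizer D: penalizes curvature of the reconstruction.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]

    # Minimize ||Ax - b||^2 + lam ||Dx||^2  =>  (A^T A + lam D^T D) x = A^T b.
    lam = 1e-3                                 # ad-hoc smoothness weight
    x = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)

    print("rms reconstruction error:", np.sqrt(np.mean((x - surface_true) ** 2)))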
Citation styles: APA, Harvard, Vancouver, ISO, etc.
5

Agarwal, Harish, Amit Kale, Srikanth Akkaram, Mahadevan Balasubramaniam, Susan Ebacher, and Paul Gilleberto. "Inverse Modeling Techniques for Application to Engine Transient Performance Data Matching." In ASME Turbo Expo 2008: Power for Land, Sea, and Air. ASMEDC, 2008. http://dx.doi.org/10.1115/gt2008-51313.

Full text
Abstract:
A framework demonstrating the application of inverse modeling technology for engine performance data matching is presented. Transient aero-thermodynamic cycle models are used to simulate engine performance and control characteristics over the entire flight envelope. These models are used not only for engine design and certification but also to provide performance guarantees to the customer and for engine diagnostics. Therefore, it is extremely important that these models are able to accurately predict the performance metrics of interest. Accuracy of these models can be improved by fine-tuning model parameters so that the model output best matches the flight test data. The performance of an aircraft engine is fine tuned from several sensor observations, e.g. exhaust gas temperature, fuel flow, and fan speed. These observations vary with parameters like power level, core speed and operating conditions like altitude, inlet conditions (temperature and pressure), and Mach number, and are used in conjunction with a transient performance simulation model to assess engine performance. This is normally achieved through an iterative manual approach that requires a lot of expert judgment. Simulating transient performance characteristics often requires an engineer to estimate model parameters by matching model response to engine sensor data. Such an estimation problem can be posed using inverse modeling technology. One of the main challenges in the application of inverse modeling for parameter estimation is that the problem can be ill-posed that leads to instability and non-uniqueness of the solution. The inverse method employed here for parameter estimation provides a solution for both well-posed and ill-posed problems. Sensitivity analysis can be used to better pose the data-matching problem. Singular value decomposition (SVD) technique is used to address the ill-posed nature of the inverse problem, which is solved as a finite dimensional non-linear optimization problem. Typically, the transient response is highly nonlinear and it may not be possible to match the whole transient simultaneously. This paper extends the framework on transient inverse modeling developed in [1] for engine transient performance applications. Variable weighting mechanism allows providing different weights to different sensors. This helps in better control on data matching, identify drift in parameter values over time, and point towards incorrect modeling assumptions. The application of the inverse methodology is demonstrated on a single spool non-afterburning engine and a commercial aviation engine model.
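The role the abstract assigns to singular value decomposition, filtering out directions in parameter space that the sensor data cannot resolve, can be shown with a generic truncated-SVD least-squares solve. The snippet below is not the paper's engine-matching code; it is a small NumPy sketch, with a made-up sensitivity matrix J and an arbitrary truncation threshold, of how discarding tiny singular values stabilizes the estimated parameter update.

    import numpy as np

    def tsvd_solve(J, r, tol=1e-8):
        """Solve J dp ~= r by truncated SVD, dropping singular values below tol * s_max."""
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        keep = s > tol * s[0]
        inv_s = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)   # safe reciprocal
        return Vt.T @ (inv_s * (U.T @ r)), int(keep.sum())

    rng = np.random.default_rng(3)

    # An ill-conditioned "sensitivity" matrix: two model parameters are nearly redundant.
    J = rng.standard_normal((30, 6))
    J[:, 5] = J[:, 4] + 1e-9 * rng.standard_normal(30)

    dp_true = np.array([0.3, -0.1, 0.2, 0.0, 0.5, 0.5])
    r = J @ dp_true + 1e-3 * rng.standard_normal(30)

    dp_naive = np.linalg.lstsq(J, r, rcond=None)[0]
    dp_tsvd, rank_used = tsvd_solve(J, r, tol=1e-6)

    print("retained singular directions:", rank_used)
    print("naive update norm:", np.linalg.norm(dp_naive))
    print("TSVD update norm :", np.linalg.norm(dp_tsvd))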
Citation styles: APA, Harvard, Vancouver, ISO, etc.
6

Woodbury, Keith A., Courtney Graham, John Baker, and Charles Karr. "An Inverse Method Using a Genetic Algorithm to Determine Spatial Temperature Distribution From Infrared Transmissivity Measurements in a Gas." In ASME 2004 Heat Transfer/Fluids Engineering Summer Conference. ASMEDC, 2004. http://dx.doi.org/10.1115/ht-fed2004-56779.

Full text
Abstract:
The ill-posed nature of inverse problems suggests that a solution be obtained through an optimization method. Genetic algorithms (GAs) effectively locate the global optimum, and are therefore an appealing technique to solve inverse problems. GAs mimic biological evolution, refining a set of solutions until the best solution is found. In this report, a genetic algorithm is developed and demonstrated based on a simple problem of determining the equation of a straight line. Then the GA is modified and implemented to estimate the temperature distribution in a gas based on the measured infrared transmissivity distribution. The ultimate task of this inverse method will be determination of the gas composition based on these transmissivity measurements.
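The warm-up problem the abstract mentions, using a GA to recover the equation of a straight line, is easy to reproduce in miniature. The sketch below is not the authors' code; it is a small, generic real-coded genetic algorithm (tournament selection, blend crossover, Gaussian mutation, all with ad-hoc settings) fitting slope and intercept to noisy samples of y = 2x + 1.

    import numpy as np

    rng = np.random.default_rng(4)

    # Noisy samples of the target line y = 2x + 1.
    x = np.linspace(-5.0, 5.0, 40)
    y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(x.size)

    def fitness(pop):
        """Negative sum of squared residuals for each (slope, intercept) individual."""
        pred = pop[:, 0:1] * x + pop[:, 1:2]
        return -np.sum((pred - y) ** 2, axis=1)

    pop_size, n_gen = 60, 80
    pop = rng.uniform(-10.0, 10.0, size=(pop_size, 2))   # random initial population

    for _ in range(n_gen):
        fit = fitness(pop)
        # Tournament selection: each parent is the fitter of two random individuals.
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[a] > fit[b])[:, None], pop[a], pop[b])
        # Blend crossover between consecutive parents, then Gaussian mutation.
        partners = np.roll(parents, 1, axis=0)
        w = rng.uniform(0.0, 1.0, (pop_size, 1))
        children = w * parents + (1.0 - w) * partners
        children += 0.1 * rng.standard_normal(children.shape)
        # Elitism: keep the best individual found so far.
        children[0] = pop[np.argmax(fit)]
        pop = children

    best = pop[np.argmax(fitness(pop))]
    print("estimated slope, intercept:", best)   # should approach (2, 1)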
Citation styles: APA, Harvard, Vancouver, ISO, etc.
7

Ghosh, Sayan, Valeria Andreoli, Govinda A. Padmanabha, Cheng Peng, Steven Atkinson, Piyush Pandita, Thomas Vandeputte, Nicholas Zabaras, and Liping Wang. "A Probabilistic Machine Learning Framework for Explicit Inverse Design of Industrial Gas Turbine Blades." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-58842.

Full text
Abstract:
One of the critical components in Industrial Gas Turbines (IGT) is the turbine blade. Design of turbine blades needs to consider multiple aspects like aerodynamic efficiency, durability, safety and manufacturing, which make the design process sequential and iterative. The sequential nature of these iterations forces a long design cycle time, ranging from several months to years. Due to the reactionary nature of these iterations, little effort has been made to accumulate data in a manner that allows for deep exploration and understanding of the total design space. This is exemplified in the process of designing the individual components of the IGT, resulting in a potential unrealized efficiency. To overcome the aforementioned challenges, we demonstrate a probabilistic inverse design machine learning framework, namely Pro-ML IDeAS, to carry out an explicit inverse design. Pro-ML IDeAS calculates the design explicitly without costly iteration and overcomes the challenges associated with ill-posed inverse problems. In this work the framework will be demonstrated on the inverse aerodynamic design of a 2D airfoil of turbine blades.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
8

Zhu, Wenwu, Yao Wang, Harry L. Graber, Randall L. Barbour, and Jenghwa Chang. "Regularized Progressive Expansion Algorithm for Recovery of Scattering Media from Time-Resolved Data." In Advances in Optical Imaging and Photon Migration. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/aoipm.1994.ncpdir.211.

Full text
Abstract:
A principal difficulty encountered in dealing with highly diffused signals is that the inverse problem is ill-posed and often underdetermined. A progressive expansion (PE) algorithm has previously been reported, which has proven to be quite effective in circumventing the underdetermined nature of the inverse problem. However, the PE approach is sensitive to noise. Propagation of errors can become especially severe when evaluating regions deep beneath the surface. Here we describe results of using a Regularized PE (RPE) algorithm, which is shown to exhibit improved stability. The RPE algorithm has been applied to time-resolved data calculated from a perturbation model. The media tested include isotropically scattering slabs containing one or two compact absorbers at different depths below the surface. The data were corrupted by additive noise with varying strength. Compared to the original PE algorithm, the RPE algorithm has yielded more accurate and stable reconstructions under the same noise level.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
9

Mandali, Priyanka, and Qiao Sun. "Stable Inversion Using the Assumed-Modes Rayleigh-Ritz Approximation for Tip Tracking of an Elastic Beam." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87794.

Full text
Abstract:
Inversion control has been very successful in the control of nonlinear dynamical systems. However, when applied to flexible manipulators, inverse dynamics through direct integration in temporal space causes unbounded controller command. For nearly three decades, researchers have exhausted all possible solutions. Still, a perfect solution does not exist that would yield perfect tip tracking. It has been suggested that seeking an inverse dynamics solution for a given tip trajectory is an ill-posed problem. It has also been suggested that increasing model accuracy by including more terms in a truncated beam model worsens the controller's stability. In this paper, we seek to understand the nature of the inverse dynamics instability. We use one assumed mode to approximate the beam deflection in a single link manipulator. The mode shape is chosen to ensure boundedness of the inverse dynamics solution. Such a solution is then applied in an inversion controller. Numerical simulations and experiments have both been conducted to show its performance.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
10

Michopoulos, J. G., and A. P. Iliopoulos. "Ontological Cross-Reducibility of Failure Theories for Composite Materials." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49936.

Full text
Abstract:
Failure criteria have a significant role in the design of composite structural systems. Often the questions of "which criterion is more physical" and "which criterion is the best" create uncertainty in the design decision making process. To underline the ill-posed nature of both of these questions, in the present paper we describe the initial steps of an effort to address two ontological characteristics of failure criteria as they are applicable to composite materials applications. The first characteristic is the non-objective nature of failure criteria, for which an informal description is provided. The second characteristic is an ontologically based cross-reducibility between criteria. To underline this characteristic more formally, we utilize an ontology-based framework to clarify "how a criterion relates with another" in terms of its main semantic attributes. The non-physical nature of a theory is exposed when it is evaluated from the semantic view of a systemic perspective. The human role in the formation of any failure criterion is shown to have a foundationally subjective character, thus rendering the corresponding criterion non-objective. In the context of the second effort, a classification ontology in terms of the semantic projections of failure criteria in their structural heritage and usage is created. The common attributes of failure criteria are utilized to identify the bases of the attribute space in which they can be ontologically classified. Web ontology software is utilized to aid the ontological construction process and the visual interpretation of the ontological context. The derived cross-reducibility suggests that failure theories are special reductions of one another.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
