Follow this link to see other types of publications on the topic: Expectation-Minimization.

Journal articles on the topic "Expectation-Minimization"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 journal articles for your research on the topic "Expectation-Minimization".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Sekine, Jun. "Dynamic Minimization of Worst Conditional Expectation of Shortfall". Mathematical Finance 14, no. 4 (October 2004): 605–18. http://dx.doi.org/10.1111/j.0960-1627.2004.00207.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Power, J. F., and M. C. Prystay. "Expectation Minimum (EM): A New Principle for the Solution of Ill-Posed Problems in Photothermal Science". Applied Spectroscopy 49, no. 6 (June 1995): 709–24. http://dx.doi.org/10.1366/0003702953964499.

Abstract
The expectation-minimum (EM) principle is a new strategy for recovering robust solutions to the ill-posed inverse problems of photothermal science. The expectation-minimum principle uses the addition of well-characterized random noise to a model basis to be fitted to the experimental response by linear minimization or projection techniques. The addition of noise to the model basis improves the conditioning of the basis by many orders of magnitude. Multiple projections of the data onto the basis in the presence of noise are averaged, to give the solution vector as an expectation value which reliably estimates the global minimum solution for general cases, while the conventional approaches fail. This solution is very stable in the presence of random error on the data. The expectation-minimum principle has been demonstrated in conjunction with several projection algorithms. The nature of the solutions recovered by the expectation minimum principle is nearly independent of the minimization algorithms used and depends principally on the noise level set in the model basis.
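The recipe this abstract describes (perturb an ill-conditioned model basis with well-characterized random noise, solve the linear least-squares fit repeatedly, and average the solutions as an expectation value) can be sketched in a few lines. The function and the nearly collinear toy basis below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def expectation_minimum_solve(A, y, noise_level=1e-2, n_trials=200, seed=0):
    """Illustrative expectation-minimum style solver: perturb the
    (ill-conditioned) model basis A with random noise, solve each
    perturbed least-squares problem, and average the solutions to
    obtain an expectation-value estimate."""
    rng = np.random.default_rng(seed)
    solutions = []
    for _ in range(n_trials):
        A_noisy = A + noise_level * rng.standard_normal(A.shape)
        x, *_ = np.linalg.lstsq(A_noisy, y, rcond=None)
        solutions.append(x)
    return np.mean(solutions, axis=0)

# Ill-conditioned toy basis: two nearly collinear columns.
A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-10], [1.0, 1.0 - 1e-10]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = expectation_minimum_solve(A, y)
```

Note that on such a degenerate basis the averaged solution fits the data well but distributes weight across the collinear columns; the abstract's point is precisely that the noise stabilizes the inversion rather than recovering one arbitrary unstable solution.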
3

Cheung, Ka Chun. "Optimal Reinsurance Revisited – A Geometric Approach". ASTIN Bulletin 40, no. 1 (May 2010): 221–39. http://dx.doi.org/10.2143/ast.40.1.2049226.

Abstract
In this paper, we reexamine the two optimal reinsurance problems studied in Cai et al. (2008), in which the objectives are to find the optimal reinsurance contracts that minimize the value-at-risk (VaR) and the conditional tail expectation (CTE) of the total risk exposure under the expectation premium principle. We provide a simpler and more transparent approach to solve these problems by using intuitive geometric arguments. The usefulness of this approach is further demonstrated by solving the VaR-minimization problem when the expectation premium principle is replaced by Wang's premium principle.
4

Chen, Fenge, Xingchun Peng, and Wenyuan Wang. "Risk minimization for an insurer with investment and reinsurance via g-expectation". Communications in Statistics - Theory and Methods 48, no. 20 (February 20, 2019): 5012–35. http://dx.doi.org/10.1080/03610926.2018.1504077.
5

Cohen, Shay B., and Noah A. Smith. "Empirical Risk Minimization for Probabilistic Grammars: Sample Complexity and Hardness of Learning". Computational Linguistics 38, no. 3 (September 2012): 479–526. http://dx.doi.org/10.1162/coli_a_00092.

Abstract
Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. They are used ubiquitously in computational linguistics. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of probabilistic grammars using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting. By making assumptions about the underlying distribution that are appropriate for natural language scenarios, we are able to derive distribution-dependent sample complexity bounds for probabilistic grammars. We also give simple algorithms for carrying out empirical risk minimization using this framework in both the supervised and unsupervised settings. In the unsupervised case, we show that the problem of minimizing empirical risk is NP-hard. We therefore suggest an approximate algorithm, similar to expectation-maximization, to minimize the empirical risk.
6

Galkanov, Allaberdi G. "ABOUT INNOVATIVE METHODS OF NUMERICAL DATA AVERAGING". RSUH/RGGU Bulletin. Series Information Science. Information Security. Mathematics, no. 2 (2023): 81–101. http://dx.doi.org/10.28995/2686-679x-2023-2-81-101.

Abstract
Numerical data refers to any finite set of data in the form of numbers, vectors, functions, matrices representing the results of an experiment or field observations. Averaging of deterministic, random variables and matrices is considered from a single point of view as a minimization of a function in the form of a generalized least squares problem. A new definition of the mean is given. Three generalizations of averages are obtained as solutions to the minimization problem. If the known averages are harmonic, geometric, arithmetic and quadratic averages and, perhaps, some other averages, then the first generalization of averages has already given an uncountable set of averages. Two new averages are derived from the first generalization. For particular types of averages arising from the first generalization, their interpretations are given in terms of absolute and relative deviations (errors). A sufficient condition of the mean is proved for all averages. Inequalities for six averages are proved. The law of nine numbers has been discovered. The concept of a complex average is given. The concept of optimal mean is introduced. New definitions of mathematical expectation and variance and their generalizations are proposed. In the family of mathematical expectations obtained, only the classical mathematical expectation turned out to be linear. The application of generalized mathematical expectation has led to the discovery of two new distributions in probability theory, namely, the harmonic and relative distributions of a continuous random variable are determined and analytically presented.
7

Fotakis, Dimitris, Piotr Krysta, and Carmine Ventre. "Efficient Truthful Scheduling and Resource Allocation through Monitoring". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (May 18, 2021): 5423–31. http://dx.doi.org/10.1609/aaai.v35i6.16683.

Abstract
We study the power and limitations of the Vickrey-Clarke-Groves mechanism with monitoring (VCGmon) for cost minimization problems with objective functions that are more general than the social cost. We identify a simple and natural sufficient condition for VCGmon to be truthful for general objectives. As a consequence, we obtain that for any cost minimization problem with non-decreasing objective μ, VCGmon is truthful, if the allocation is Maximal-in-Range and μ is 1-Lipschitz (e.g., μ can be the Lp-norm of the agents’ costs, for any p ≥ 1 or p = ∞). We apply VCGmon to scheduling on restricted-related machines and obtain a polynomial-time truthful-in-expectation 2-approximate (resp. O(1)-approximate) mechanism for makespan (resp. Lp-norm) minimization. Moreover, applying VCGmon, we obtain polynomial-time truthful O(1)-approximate mechanisms for some fundamental bottleneck network optimization problems with single-parameter agents. On the negative side, we provide strong evidence that VCGmon could not lead to computationally efficient truthful mechanisms with reasonable approximation ratios for binary covering social cost minimization problems. However, we show that VCGmon results in computationally efficient approximately truthful mechanisms for binary covering problems.
8

Ansley, Craig F., and Robert Kohn. "On the equivalence of two stochastic approaches to spline smoothing". Journal of Applied Probability 23, A (1986): 391–405. http://dx.doi.org/10.2307/3214367.

Abstract
Wahba (1978) and Weinert et al. (1980), using different models, show that an optimal smoothing spline can be thought of as the conditional expectation of a stochastic process observed with noise. This observation leads to efficient computational algorithms. By going back to the Hilbert space formulation of the spline minimization problem, we provide a framework for linking the two different stochastic models. The last part of the paper reviews some new efficient algorithms for spline smoothing.
9

Ansley, Craig F., and Robert Kohn. "On the equivalence of two stochastic approaches to spline smoothing". Journal of Applied Probability 23, A (1986): 391–405. http://dx.doi.org/10.1017/s002190020011722x.

Abstract
Wahba (1978) and Weinert et al. (1980), using different models, show that an optimal smoothing spline can be thought of as the conditional expectation of a stochastic process observed with noise. This observation leads to efficient computational algorithms. By going back to the Hilbert space formulation of the spline minimization problem, we provide a framework for linking the two different stochastic models. The last part of the paper reviews some new efficient algorithms for spline smoothing.
10

Weng, Wenting, and Wen Luo. "A Comparative Analysis of Data Mining Methods and Hierarchical Linear Modeling Using PISA 2018 Data". International Journal of Database Management Systems 15, no. 2/3 (June 27, 2023): 1–16. http://dx.doi.org/10.5121/ijdms.2023.15301.

Abstract
Educational research often encounters clustered data sets, where observations are organized into multilevel units, consisting of lower-level units (individuals) nested within higher-level units (clusters). However, many studies in education utilize tree-based methods like Random Forest without considering the hierarchical structure of the data sets. Neglecting the clustered data structure can result in biased or inaccurate results. To address this issue, this study aimed to conduct a comprehensive survey of three tree-based data mining algorithms and hierarchical linear modeling (HLM). The study utilized the Programme for International Student Assessment (PISA) 2018 data to compare different methods, including non-mixed-effects tree models (e.g., Random Forest) and mixed-effects tree models (e.g., random effects expectation minimization recursive partitioning method, mixed-effects Random Forest), as well as the HLM approach. Based on the findings of this study, mixed-effects Random Forest demonstrated the highest prediction accuracy, while the random effects expectation minimization recursive partitioning method had the lowest prediction accuracy. However, it is important to note that tree-based methods limit deep interpretation of the results. Therefore, further analysis is needed to gain a more comprehensive understanding. In comparison, the HLM approach retains its value in terms of interpretability. Overall, this study offers valuable insights for selecting and utilizing suitable methods when analyzing clustered educational datasets.
11

Roy, Amlan K. "Quantum confinement in 1D systems through an imaginary-time evolution method". Modern Physics Letters A 30, no. 37 (November 16, 2015): 1550176. http://dx.doi.org/10.1142/s021773231550176x.

Abstract
Quantum confinement is studied by numerically solving time-dependent (TD) Schrödinger equation (SE). An imaginary-time evolution technique is employed in conjunction with the minimization of an expectation value, to reach the global minimum. Excited states are obtained by imposing the orthogonality constraint with all lower states. Applications are made on three important model quantum systems, namely, harmonic, repulsive and quartic oscillators; enclosed inside an impenetrable box. The resulting diffusion equation is solved using finite-difference method. Both symmetric and asymmetric confinement are considered for attractive potential; for others only symmetrical confinement. Accurate eigenvalue, eigenfunction and position expectation values are obtained, which show excellent agreement with existing literature results. Variation of energies with respect to box length is followed for small, intermediate and large sizes. In essence, a simple accurate and reliable method is proposed for confinement in quantum systems.
12

Varma, R., S. Bhusarapu, J. A. O'Sullivan, and M. H. Al-Dahhan. "A comparison of alternating minimization and expectation maximization algorithms for single source gamma ray tomography". Measurement Science and Technology 19, no. 1 (November 30, 2007): 015506. http://dx.doi.org/10.1088/0957-0233/19/1/015506.
13

Gurov, Ilya. "Theoretical Approaches to Inflation Expectation Management Justification in Today’s Russia". Moscow University Economics Bulletin 2014, no. 6 (December 30, 2014): 35–51. http://dx.doi.org/10.38050/01300105201462.

Abstract
Inflation expectations significantly influence the economic environment. Over the past decades, Russia experienced high and unstable inflation and a systematic excess of, and mismatch between, actual inflation and official forecasts. At present, economic agents have a low level of trust in official inflation forecasts. The subject of the research is inflation expectations in Russia; the aim is to justify the feasibility of inflation expectation management in Russia. The article shows that inflation expectations in Russia are currently predominantly adaptive. Nevertheless, the inflation reduction and stabilization of 2011-2013 can become the basis for anchoring inflation expectations and minimizing perceived inflation uncertainty.
14

Arbanas, Goran, Jinghua Feng, Zia J. Clifton, Andrew M. Holcomb, Marco T. Pigni, Dorothea Wiarda, Christopher W. Chapman, Vladimir Sobes, Li Emily Liu, and Yaron Danon. "Bayesian optimization of generalized data". EPJ Nuclear Sciences & Technologies 4 (2018): 30. http://dx.doi.org/10.1051/epjn/2018038.

Abstract
Direct application of Bayes' theorem to generalized data yields a posterior probability distribution function (PDF) that is a product of a prior PDF of generalized data and a likelihood function, where generalized data consists of model parameters, measured data, and model defect data. The prior PDF of generalized data is defined by prior expectation values and a prior covariance matrix of generalized data that naturally includes covariance between any two components of generalized data. A set of constraints imposed on the posterior expectation values and covariances of generalized data via a given model is formally solved by the method of Lagrange multipliers. Posterior expectation values of the constraints and their covariance matrix are conventionally set to zero, leading to a likelihood function that is a Dirac delta function of the constraining equation. It is shown that setting constraints to values other than zero is analogous to introducing a model defect. Since posterior expectation values of any function of generalized data are integrals of that function over all generalized data weighted by the posterior PDF, all elements of generalized data may be viewed as nuisance parameters marginalized by this integration. One simple form of posterior PDF is obtained when the prior PDF and the likelihood function are normal PDFs. For linear models without a defect this PDF becomes equivalent to constrained least squares (CLS) method, that is, the χ2 minimization method.
15

Power, J. F., and M. C. Prystay. "Nondestructive Optical Depth Profiling in Thin Films through Robust Inversion of the Laser Photopyroelectric Effect Impulse Response". Applied Spectroscopy 49, no. 6 (June 1995): 725–46. http://dx.doi.org/10.1366/0003702953964570.

Abstract
The laser photopyroelectric effect measures an optical absorption depth profile in a thin film through the spatial dependence of a heat flux source established below the film surface by light absorption from a short optical pulse. In this work, inverse depth profile reconstruction was achieved by means of an inverse method based on the expectation-minimum principle (as reported in a companion paper), applied in conjunction with a constrained least-squares minimization, to invert the photopyroelectric theory. This method and zero-order Tikhonov regularization were applied to the inversion of experimental photopyroelectric data obtained from samples with a variety of discrete and continuous depth dependences of optical absorption. While both methods were found to deliver stable and accurate performance under experimental conditions, the method based on the constrained expectation-minimum principle was found to exhibit improved resolution and robustness over zero-order Tikhonov regularization.
16

El Attar, Abderrahim, Mostafa El Hachloufi, and Zine El Abidine Guennoun. "Optimal Reinsurance through Minimizing New Risk Measures under Technical Benefit Constraints". International Journal of Engineering Research in Africa 35 (March 2018): 24–37. http://dx.doi.org/10.4028/www.scientific.net/jera.35.24.

Abstract
In this paper we present an approach to minimize the actuarial risk for the optimal choice of a form of reinsurance, intended to be achieved through a choice of treaty parameters that minimize the risk using the Conditional Tail Expectation and the Conditional Tail Variance risk measures. The minimization procedure is based on the Augmented Lagrangian and a genetic algorithm, with the technical benefit as a constraint. This approach can be seen as a decision support tool that managers can use to minimize the actuarial risk in an insurance company.
17

Xie, Yanqi, Apurbo Sarkar, Md Shakhawat Hossain, Ahmed Khairul Hasan, and Xianli Xia. "Determinants of Farmers’ Confidence in Agricultural Production Recovery during the Early Phases of the COVID-19 Pandemic in China". Agriculture 11, no. 11 (October 31, 2021): 1075. http://dx.doi.org/10.3390/agriculture11111075.

Abstract
The COVID-19 pandemic has adversely impacted the agricultural supply chain, export of agricultural products, and overall food security. However, minimal exploration has been attempted of farmers’ confidence in agricultural production recovery after the COVID-19 pandemic. Therefore, this study intends to explore the determinants of farmers’ confidence in agricultural production recovery in China during the early stages of the COVID-19 pandemic. More specifically, we analyzed the relationship between risk expectation and social support on the farmers’ confidence in agricultural production recovery by using the ordered probit model. Cross-sectional survey data were collected from February to March 2020 from 458 farm households in the 7 provinces of China to produce the findings. We found that the risk expectation of farmers had a significant negative impact on farmers’ confidence in agricultural production recovery. Social support seemingly had a significant positive impact on the farmers’ confidence in agricultural production recovery, and could play a supportive role in moderating the relationship between risk expectation and farmers’ confidence in recovery. However, social support alleviates the adverse effect of risk expectation on farmers’ confidence in agricultural production recovery to a certain extent. In addition, there were intergenerational differences in the effects of risk expectation and social support on farmers’ confidence in agricultural production recovery. These results imply that policies establishing the risk early warning mechanisms for agricultural production and strengthening the social support from governments and financial institutions are likely to significantly impact agricultural development in the post-COVID-19 era. The formal and informal risk minimization mechanisms should extend their support to vulnerable sectors such as agribusiness.
18

Petra, Stefania. "Randomized Sparse Block Kaczmarz as Randomized Dual Block-Coordinate Descent". Analele Universitatii "Ovidius" Constanta - Seria Matematica 23, no. 3 (November 1, 2015): 129–49. http://dx.doi.org/10.1515/auom-2015-0052.

Abstract
We show that the Sparse Kaczmarz method is a particular instance of the coordinate gradient method applied to an unconstrained dual problem corresponding to a regularized ℓ1-minimization problem subject to linear constraints. Based on this observation and recent theoretical work concerning the convergence analysis and corresponding convergence rates for the randomized block coordinate gradient descent method, we derive block versions and consider randomized ordering of blocks of equations. Convergence in expectation is thus obtained as a byproduct. By smoothing the ℓ1-objective we obtain a strongly convex dual which opens the way to various acceleration schemes.
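For readers unfamiliar with the method being analyzed here, the (row-wise) randomized sparse Kaczmarz iteration admits a compact sketch. The Python version below, with a made-up toy system, follows the standard shrinkage form of the iteration; it is an illustration under those assumptions, not code from the paper:

```python
import numpy as np

def soft_threshold(z, lam):
    # Shrinkage (soft-thresholding) operator of the l1 term.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def randomized_sparse_kaczmarz(A, b, lam=0.1, n_iter=20000, seed=0):
    """Illustrative randomized sparse Kaczmarz iteration for the
    regularized problem  min lam*||x||_1 + 0.5*||x||^2  s.t.  Ax = b,
    sampling rows with probability proportional to ||a_i||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum('ij,ij->i', A, A)
    probs = row_norms / row_norms.sum()
    z = np.zeros(n)                  # auxiliary (dual-driven) variable
    x = soft_threshold(z, lam)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        z -= ((A[i] @ x - b[i]) / row_norms[i]) * A[i]
        x = soft_threshold(z, lam)
    return x

# Consistent toy system with a sparse solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = randomized_sparse_kaczmarz(A, b)
```

The paper's dual block-coordinate view generalizes this single-row update to randomized blocks of equations.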
19

Takenouchi, Takashi. "A Novel Parameter Estimation Method for Boltzmann Machines". Neural Computation 27, no. 11 (November 2015): 2423–46. http://dx.doi.org/10.1162/neco_a_00781.

Abstract
We propose a novel estimator for a specific class of probabilistic models on discrete spaces such as the Boltzmann machine. The proposed estimator is derived from minimization of a convex risk function and can be constructed without calculating the normalization constant, whose computational cost is exponential order. We investigate statistical properties of the proposed estimator such as consistency and asymptotic normality in the framework of the estimating function. Small experiments show that the proposed estimator can attain comparable performance to the maximum likelihood expectation at a much lower computational cost and is applicable to high-dimensional data.
20

Mir, Shabir Ahmed, and T. Padma. "Review About Various Satellite Image Segmentation". Indonesian Journal of Electrical Engineering and Computer Science 9, no. 3 (March 1, 2018): 633. http://dx.doi.org/10.11591/ijeecs.v9.i3.pp633-636.

Abstract
In this paper, a review of different algorithms proposed to efficiently segment satellite images is presented. Image segmentation has been one of the promising and active research areas in recent years. The literature shows that region-based segmentation produces better results. Human visual perception is more effective than any machine vision system at extracting semantic information from an image. Various segmentation techniques are available: Fuzzy C-Means (FCM), Expectation Minimization (EM), and the K-Means algorithm are developed to estimate the parameters of the prior probabilities and likelihood probabilities. Finally, the Peak Signal-to-Noise Ratio (PSNR) is calculated for all the algorithms reviewed.
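As a minimal illustration of the simplest baseline in such comparisons, here is plain K-means clustering of scalar pixel intensities on a hypothetical synthetic "image" (FCM and EM follow the same assign/update alternation, with soft memberships or Gaussian responsibilities in place of hard labels):

```python
import numpy as np

def kmeans_1d(values, k=3, n_iter=50):
    """Plain K-means on scalar intensities: alternate between assigning
    each pixel to its nearest center and recomputing each center as the
    mean of its assigned pixels."""
    # Deterministic quantile-based initialization.
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):       # guard against empty clusters
                centers[j] = values[labels == j].mean()
    return labels, centers

# Synthetic "image": three well-separated intensity populations.
rng = np.random.default_rng(42)
pixels = np.concatenate([rng.normal(0.2, 0.02, 300),
                         rng.normal(0.5, 0.02, 300),
                         rng.normal(0.8, 0.02, 300)])
labels, centers = kmeans_1d(pixels, k=3)
```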
21

Pradana, Johan Alfian. "UTILITY 1 SERVER ON QUEUE SERVICE (STUDY: BANK ACCOUNT NUMBER CONVERSION)". Airlangga Journal of Innovation Management 2, no. 2 (November 15, 2021): 187. http://dx.doi.org/10.20473/ajim.v2i2.30232.

Abstract
Fast, precise service and time minimization are dominant demands in the service business. Customers always expect the best service, especially customers of ABC bank. Since the announcement about account number conversion, many customers have come to the bank, so the utilization of the server in the queueing system plays an important role. This includes measuring the utilization of the queueing system, the expected average waiting time, and the expected number of customers in the system. Services that focus on providing service always experience long queues, so queueing theory is used to assess utilization, waiting-time expectations, and expected customer numbers. The research method uses system performance: first, compute the arrival rate, the average service rate, the service level, and the performance of the queueing system. The result is that the single-server queueing system operates with an average utilization of 83.5%, highest in the 4th week, with an expected average waiting time of 0.428 hours (25.6 minutes) and an expected number of customers in the system of 4.8, i.e. 5 customers. A single server has therefore not been effective in minimizing the expected waiting time.
22

Tateishi, Kiyoko, Yusaku Yamaguchi, Omar M. Abou Al-Ola, and Tetsuya Yoshinaga. "Continuous Analog of Accelerated OS-EM Algorithm for Computed Tomography". Mathematical Problems in Engineering 2017 (2017): 1–8. http://dx.doi.org/10.1155/2017/1564123.

Abstract
The maximum-likelihood expectation-maximization (ML-EM) algorithm is used for an iterative image reconstruction (IIR) method and performs well with respect to the inverse problem as cross-entropy minimization in computed tomography. For accelerating the convergence rate of the ML-EM, the ordered-subsets expectation-maximization (OS-EM) with a power factor is effective. In this paper, we propose a continuous analog to the power-based accelerated OS-EM algorithm. The continuous-time image reconstruction (CIR) system is described by nonlinear differential equations with piecewise smooth vector fields by a cyclic switching process. A numerical discretization of the differential equation by using the geometric multiplicative first-order expansion of the nonlinear vector field leads to an exact equivalent iterative formula of the power-based OS-EM. The convergence of nonnegatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem for consistent inverse problems. We illustrate through numerical experiments that the convergence characteristics of the continuous system have the highest quality compared with that of discretization methods. We clarify how important the discretization method approximates the solution of the CIR to design a better IIR method.
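The discrete iteration this abstract starts from is the classic multiplicative OS-EM update, optionally raised to a power factor for acceleration. The sketch below shows that iteration on a made-up nonnegative toy system; it is an assumption-laden illustration of OS-EM itself, not of the paper's continuous-time reconstruction system:

```python
import numpy as np

def os_em(A, y, n_subsets=4, n_epochs=200, power=1.0, eps=1e-12):
    """Ordered-subsets EM for the nonnegative inverse problem y = A x:
    cycle over row subsets and apply the multiplicative EM update,
    optionally raised to a power > 1 for further acceleration."""
    m, n = A.shape
    x = np.ones(n)                        # positive initial image
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_epochs):
        for rows in subsets:
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, eps)
            factor = (As.T @ ratio) / np.maximum(As.sum(axis=0), eps)
            x = x * factor**power         # power = 1 gives plain OS-EM
    return x

# Consistent nonnegative toy problem (a stand-in for a tomography system).
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(64, 16))
x_true = rng.uniform(0.5, 2.0, size=16)
y = A @ x_true
x_hat = os_em(A, y)
```

The multiplicative form keeps the iterates nonnegative automatically, which is the constraint the paper's Lyapunov analysis exploits.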
23

Kato, Kosuke, and Masatoshi Sakawa. "An interactive fuzzy satisficing method based on variance minimization under expectation constraints for multiobjective stochastic linear programming problems". Soft Computing 15, no. 1 (January 28, 2010): 131–38. http://dx.doi.org/10.1007/s00500-010-0540-z.
24

Anil Meera, Ajith, and Martijn Wisse. "Dynamic Expectation Maximization Algorithm for Estimation of Linear Systems with Colored Noise". Entropy 23, no. 10 (October 5, 2021): 1306. http://dx.doi.org/10.3390/e23101306.

Abstract
The free energy principle from neuroscience has recently gained traction as one of the most prominent brain theories that can emulate the brain’s perception and action in a bio-inspired manner. This renders the theory with the potential to hold the key for general artificial intelligence. Leveraging this potential, this paper aims to bridge the gap between neuroscience and robotics by reformulating an FEP-based inference scheme—Dynamic Expectation Maximization—into an algorithm that can perform simultaneous state, input, parameter, and noise hyperparameter estimation of any stable linear state space system subjected to colored noises. The resulting estimator was proved to be of the form of an augmented coupled linear estimator. Using this mathematical formulation, we proved that the estimation steps have theoretical guarantees of convergence. The algorithm was rigorously tested in simulation on a wide variety of linear systems with colored noises. The paper concludes by demonstrating the superior performance of DEM for parameter estimation under colored noise in simulation, when compared to the state-of-the-art estimators like Sub Space method, Prediction Error Minimization (PEM), and Expectation Maximization (EM) algorithm. These results contribute to the applicability of DEM as a robust learning algorithm for safe robotic applications.
25

Adaramola, Anthony Olugbenga, and Yusuf Olatunji Oyedeko. "Effect of Drawdown Strategy on Risk and Return in Nigerian Stock Market". Financial Markets, Institutions and Risks 6, no. 3 (2022): 71–82. http://dx.doi.org/10.21272/fmir.6(3).71-82.2022.

Abstract
The study examined the effect of drawdown on risk and return in the Nigerian stock market over the period 2005 to 2020. Purposive sampling was employed, and a sample of 90 regularly traded stocks was used for the analysis. Monthly data on stock prices, market index, risk-free rate, ownership shareholdings, market capitalization, book value of equity, earnings before interest and taxes, total assets, and drawdown were sourced from the CBN statistical bulletin and the Nigerian Stock Exchange. The Fama-MacBeth two-step regression method was employed. The study found that drawdown has a negative and significant effect on stock returns but a positive and significant effect on risk in the Nigerian stock market over the whole sample period. Findings also revealed that the sub-periods are not stable in terms of the magnitude and significance of the effects on risk and return. Our findings contradict the a priori expectation that drawdown could improve performance through risk minimization and return maximization in the Nigerian stock market. Based on the findings, investors and other market participants are encouraged to use drawdown as one of the investment performance measures to guide investors' expectations and their tolerance of the size of stock market disruptions, crashes, or rallies in Nigeria.
26

Bagaev, Dmitry, and Bert de Vries. "Reactive Message Passing for Scalable Bayesian Inference". Scientific Programming 2023 (May 27, 2023): 1–26. http://dx.doi.org/10.1155/2023/6601690.

Abstract
We introduce reactive message passing (RMP) as a framework for executing schedule-free, scalable, and, potentially, more robust message passing-based inference in a factor graph representation of a probabilistic model. RMP is based on the reactive programming style, which only describes how nodes in a factor graph react to changes in connected nodes. We recognize reactive programming as the suitable programming abstraction for message passing-based methods that improve robustness, scalability, and execution time of the inference procedure and are useful for all future implementations of message passing methods. We also present our own implementation ReactiveMP.jl, which is a Julia package for realizing RMP through minimization of a constrained Bethe free energy. By user-defined specification of local form and factorization constraints on the variational posterior distribution, ReactiveMP.jl executes hybrid message passing algorithms including belief propagation, variational message passing, expectation propagation, and expectation maximization update rules. Experimental results demonstrate the great performance of our RMP implementation compared to other Julia packages for Bayesian inference across a range of probabilistic models. In particular, we show that the RMP framework is capable of performing Bayesian inference for large-scale probabilistic state-space models with hundreds of thousands of random variables on a standard laptop computer.
27

Krepki, R., Ye Pu, Hui Meng y K. Obermayer. "A new algorithm for the interrogation of 3D holographic PTV data based on deterministic annealing and expectation minimization optimization". Experiments in Fluids 29, n.º 7 (31 de diciembre de 2000): S099—S107. http://dx.doi.org/10.1007/s003480070012.

28

Yuille, A. L. y Anand Rangarajan. "The Concave-Convex Procedure". Neural Computation 15, n.º 4 (1 de abril de 2003): 915–36. http://dx.doi.org/10.1162/08997660360581958.

Abstract
The concave-convex procedure (CCCP) is a way to construct discrete-time iterative dynamical systems that are guaranteed to decrease global optimization and energy functions monotonically. This procedure can be applied to almost any optimization problem, and many existing algorithms can be interpreted in terms of it. In particular, we prove that all expectation-maximization algorithms and classes of Legendre minimization and variational bounding algorithms can be reexpressed in terms of CCCP. We show that many existing neural network and mean-field theory algorithms are also examples of CCCP. The generalized iterative scaling algorithm and Sinkhorn's algorithm can also be expressed as CCCP by changing variables. CCCP can be used both as a new way to understand, and prove the convergence of, existing optimization algorithms and as a procedure for generating new algorithms.
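As a toy illustration of the CCCP update described above (my own sketch, not code from the paper): to minimize f(x) = x⁴ − 2x², split it into the convex part x⁴ and the concave part −2x², and repeatedly solve ∇f_vex(x_new) = −∇f_cave(x_old).

```python
# A toy CCCP iteration: each step solves
#     grad f_vex(x_new) = -grad f_cave(x_old)  =>  4*x_new**3 = 4*x_old,
# which has the closed form x_new = cbrt(x_old).

def f(x):
    return x ** 4 - 2 * x ** 2

def cccp_step(x):
    # real cube root (handles negative arguments too)
    return x ** (1.0 / 3.0) if x >= 0 else -((-x) ** (1.0 / 3.0))

x = 0.5
for _ in range(60):
    x_new = cccp_step(x)
    assert f(x_new) <= f(x) + 1e-12  # CCCP's monotone-decrease guarantee
    x = x_new
# x has converged to the local minimum at x = 1
```

Starting from any positive x, the iterates decrease f monotonically and converge to the minimizer at x = 1, matching the convergence guarantee the paper proves in general.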
29

Johnson, Natalie T., Holger Ott y Michael R. Probert. "CAPOW: a standalone program for the calculation of optimal weighting parameters for least-squares crystallographic refinements". Journal of Applied Crystallography 51, n.º 1 (1 de febrero de 2018): 200–204. http://dx.doi.org/10.1107/s1600576717016600.

Abstract
The rigorous analysis of crystallographic models, refined through the use of least-squares minimization, is founded on the expectation that the data provided have a normal distribution of residuals. Processed single-crystal diffraction data rarely exhibit this feature without a weighting scheme being applied. These schemes are designed to reflect the precision and accuracy of the measurement of observed reflection intensities. While many programs have the ability to calculate optimal parameters for applied weighting schemes, there are still programs that do not contain this functionality, particularly when moving beyond the spherical atom model. For this purpose, CAPOW (calculation and plotting of optimal weights), a new program for the calculation of optimal weighting parameters for a SHELXL weighting scheme, is presented and an example of its application in a multipole refinement is given.
30

Cao, Hai-Yan y Zhen-Yu Ye. "Theoretical analysis and algorithm design of optimized pilot for downlink channel estimation in massive MIMO systems based on compressed sensing". Acta Physica Sinica 71, n.º 5 (2022): 050101. http://dx.doi.org/10.7498/aps.71.20211504.

Abstract
Aiming at the pilot design problem in channel estimation for massive multiple-input multiple-output (MIMO) systems, an adaptive autocorrelation-matrix reduction-parameter pilot optimization algorithm based on minimization of the channel reconstruction error rate is proposed under the framework of compressed sensing theory. Firstly, the system model and the orthogonal matching pursuit (OMP) algorithm are introduced. Secondly, to minimize the channel reconstruction error rate, the relation between the expected value of the correlation decision in each iteration of the OMP algorithm and the reconstruction error rate is analyzed. For the optimal expected value of the correlation decision, the relation between the channel reconstruction error rate and the column correlation of the pilot matrix under the OMP algorithm is derived, and two criteria for optimizing the pilot matrix are obtained: minimization of the expectation and of the variance of the pilot matrix column correlation. A method of optimizing the pilot matrix is then studied, and the corresponding adaptive autocorrelation-matrix reduction-parameter pilot matrix optimization algorithm is proposed. In each iteration, whether the average column correlation of the matrix to be optimized is reduced is used as the judgment condition, and the autocorrelation-matrix reduction-parameter value is adjusted to bring the parameters close to the theoretical optimum. Simulation results show that the proposed method yields better column correlation properties and a lower channel reconstruction error rate than pilot matrices obtained, respectively, from a Gaussian random matrix, the Elad method and the low-power average column correlation method.
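For readers unfamiliar with the OMP algorithm the abstract builds on, a minimal sketch follows; the dimensions, sparsity level and Gaussian sensing matrix are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of orthogonal matching pursuit (OMP) for sparse recovery.
rng = np.random.default_rng(0)
m, n, k = 100, 200, 3               # measurements, signal length, sparsity
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)      # unit-norm columns (the "pilot matrix")
x_true = np.zeros(n)
x_true[[2, 17, 33]] = [1.0, -2.0, 1.5]
y = A @ x_true                      # noiseless measurements

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        # greedy step: pick the column most correlated with the residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
```

The greedy correlation decision in the loop is exactly the quantity whose expectation and variance the paper's pilot-design criteria target: the lower the correlation between pilot-matrix columns, the more reliably the correct support is selected.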
31

Kirkpatrick, Anna, Kalen Patton, Prasad Tetali y Cassie Mitchell. "Markov Chain-Based Sampling for Exploring RNA Secondary Structure under the Nearest Neighbor Thermodynamic Model and Extended Applications". Mathematical and Computational Applications 25, n.º 4 (10 de octubre de 2020): 67. http://dx.doi.org/10.3390/mca25040067.

Abstract
Ribonucleic acid (RNA) secondary structures and branching properties are important for determining functional ramifications in biology. While energy minimization of the Nearest Neighbor Thermodynamic Model (NNTM) is commonly used to identify such properties (number of hairpins, maximum ladder distance, etc.), it is difficult to know whether the resultant values fall within expected dispersion thresholds for a given energy function. The goal of this study was to construct a Markov chain capable of examining the dispersion of RNA secondary structures and branching properties obtained from NNTM energy function minimization independent of a specific nucleotide sequence. Plane trees are studied as a model for RNA secondary structure, with energy assigned to each tree based on the NNTM, and a corresponding Gibbs distribution is defined on the trees. Through a bijection between plane trees and 2-Motzkin paths, a Markov chain converging to the Gibbs distribution is constructed, and fast mixing time is established by estimating the spectral gap of the chain. The spectral gap estimate is obtained through a series of decompositions of the chain and also by building on known mixing time results for other chains on Dyck paths. The resulting algorithm can be used as a tool for exploring the branching structure of RNA, especially for long sequences, and to examine branching structure dependence on energy model parameters. Full exposition is provided for the mathematical techniques used with the expectation that these techniques will prove useful in bioinformatics, computational biology, and additional extended applications.
32

Panontin, E., A. Dal Molin, M. Nocente, G. Croci, J. Eriksson, L. Giacomelli, G. Gorini et al. "Comparison of unfolding methods for the inference of runaway electron energy distribution from γ-ray spectroscopic measurements". Journal of Instrumentation 16, n.º 12 (1 de diciembre de 2021): C12005. http://dx.doi.org/10.1088/1748-0221/16/12/c12005.

Abstract
Unfolding techniques are employed to reconstruct the 1D energy distribution of runaway electrons from the Bremsstrahlung hard X-ray spectrum emitted during plasma disruptions in tokamaks. Here we compare four inversion methods: truncated singular value decomposition, which is a linear algebra technique; maximum likelihood expectation maximization, which is an iterative method; and Tikhonov regularization applied to χ² and Poisson statistics, which are two minimization approaches. The reconstruction fidelity and the capability of estimating cumulative statistics, such as the mean and maximum energy, have been assessed on both synthetic and experimental spectra. The effect of measurement limitations, such as the low-energy cut and the small number of counts, on the final reconstruction has also been studied. We find that the iterative method performs best, as it better describes the statistics of the experimental data and is more robust to noise in the recorded spectrum.
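For concreteness, here is a toy version of the MLEM update compared in the paper; the 3×3 response matrix is an illustrative assumption, not a tokamak detector response.

```python
import numpy as np

# MLEM multiplicative update:  x <- x / (A^T 1) * A^T (y / (A x)).
# With unit column sums it preserves total counts and keeps x non-negative.
A = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])    # columns sum to 1 (unit efficiency)
x_true = np.array([10.0, 5.0, 2.0])
y = A @ x_true                      # noiseless "measured" spectrum

x = np.ones_like(x_true)            # flat initial guess
sens = A.sum(axis=0)                # sensitivity term A^T 1
for _ in range(2000):
    x = x / sens * (A.T @ (y / (A @ x)))
```

On noiseless data the iterates approach the true distribution; on real spectra the iteration is stopped early, since late iterations amplify noise.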
33

Dewi, Made Pratiwi. "OPTIMASI PORTOFOLIO PADA SAHAM PEFINDO 25 DENGAN MENGGUNAKAN MODEL MARKOWITZ (STUDI KASUS DI BURSA EFEK INDONESIA)". Warmadewa Management and Business Journal (WMBJ) 3, n.º 1 (28 de febrero de 2021): 32–41. http://dx.doi.org/10.22225/wmbj.3.1.2021.32-41.

Abstract
Fund investment activities in the capital market require expertise to minimize investment risk. One way is to form a portfolio. The Markowitz model helps investors determine which stocks are members of the optimal portfolio. Minimization of risk and maximization of return are the central concerns, and the expected return is the basis of calculation. This research used non-probability sampling to select Pefindo 25 index stocks on the BEI as the population and sample. Results showed that of the 25 sampled stocks, only six were included in the optimal portfolio: Adi Sara Armada Tbk (ASSA), Wilmar Cahaya Indonesia Tbk (CEKA), Elnusa Tbk (ELSA), Erajaya Swasembada Tbk (ERAA), Champion Pacific Indonesia Tbk (IGAR), and Vale Indonesia Tbk (INCO). The optimal investment portfolio provided a total expected portfolio return of 15.592 percent and a deviation/variance risk of 0.108 percent.
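A hedged numerical sketch of the minimum-variance computation at the heart of the Markowitz model (the covariance matrix is made up for illustration; it is not estimated from Pefindo 25 data):

```python
import numpy as np

# Global minimum-variance portfolio:  min w' C w  subject to  sum(w) = 1,
# with the closed form  w = C^{-1} 1 / (1' C^{-1} 1).
cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.010],
                [0.002, 0.010, 0.060]])
ones = np.ones(cov.shape[0])
z = np.linalg.solve(cov, ones)      # C^{-1} 1 without forming an inverse
w = z / (ones @ z)                  # weights sum to one
port_var = float(w @ cov @ w)       # portfolio variance w' C w
```

Because any single asset is itself a feasible portfolio, the resulting variance is never larger than the smallest individual asset variance, which is the diversification effect the study exploits.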
34

Wang, Fusheng, Yi-ming Yang, Xiaotong Li y Ovanes Petrosian. "A Modified Inexact SARAH Algorithm with Stabilized Barzilai-Borwein Step-Size in Machine learning". Statistics, Optimization & Information Computing 12, n.º 1 (18 de agosto de 2023): 1–14. http://dx.doi.org/10.19139/soic-2310-5070-1712.

Abstract
The Inexact SARAH (iSARAH) algorithm, a variant of the SARAH algorithm that does not require computation of the exact gradient, can be applied to solving general expectation minimization problems rather than only finite-sum problems. The performance of the iSARAH algorithm is strongly affected by the step-size selection, and how to choose an appropriate step size remains a worthwhile problem for study. In this paper, we propose to use the stabilized Barzilai-Borwein (SBB) method to automatically compute the step size for the iSARAH algorithm, which leads to a new algorithm called iSARAH-SBB. By introducing this adaptive step size in the design of the new algorithm, iSARAH-SBB can take better advantage of both the iSARAH and SBB methods. We analyse the convergence rate and complexity of the modified algorithm under the usual assumptions. Numerical experimental results on standard data sets demonstrate the feasibility and effectiveness of our proposed algorithm.
35

Ganesh, Talari, K. K. Paidipati y Christophe Chesneau. "Stochastic Transportation Problem with Multichoice Random Parameter". Computational and Mathematical Methods 2023 (26 de agosto de 2023): 1–8. http://dx.doi.org/10.1155/2023/9109009.

Abstract
This paper deals with the situation of multiple random choices along with multiple objective functions of the transportation problem. Due to the uncertainty in the environment, the choices of the cost coefficients are considered multichoice random parameters. The other parameters (supply and demand) are replaced by random variables with Gaussian distributions, and each multichoice parameter alternative is treated as a random variable. In this paper, the Newton divided difference interpolation technique is used to convert the multichoice parameter into a single choice in the objective function. Then, the chance-constrained method is applied to transform the probabilistic constraints into deterministic constraints. Due to the consideration of multichoices in the objective function, the expectation minimization model is used to get the deterministic form. Moreover, the fuzzy programming approach with the membership function is utilized to convert the multiobjective function into a single-objective function. A case study is also illustrated for a better understanding of the methodology.
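The Newton divided-difference step the abstract relies on, which turns a multichoice parameter into a single polynomial in an integer choice variable, can be sketched as follows (the three candidate cost coefficients are illustrative assumptions):

```python
# Newton divided-difference interpolation.

def divided_differences(xs, ys):
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef  # coefficients of the Newton form

def newton_eval(coef, xs, x):
    # Horner-style evaluation of the Newton form
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# three alternative cost coefficients indexed by the choice z in {0, 1, 2}
xs, ys = [0, 1, 2], [4.0, 7.0, 12.0]
coef = divided_differences(xs, ys)  # interpolating polynomial: z**2 + 2*z + 4
```

The resulting polynomial reproduces each alternative exactly at its integer choice value, so the multichoice selection becomes an ordinary decision variable in the transportation model.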
36

Albers, Susanne y Maximilian Janke. "Scheduling in the Random-Order Model". Algorithmica 83, n.º 9 (9 de junio de 2021): 2803–32. http://dx.doi.org/10.1007/s00453-021-00841-8.

Abstract
Makespan minimization on identical machines is a fundamental problem in online scheduling. The goal is to assign a sequence of jobs to m identical parallel machines so as to minimize the maximum completion time of any job. Already in the 1960s, Graham showed that Greedy is $(2-1/m)$-competitive. The best deterministic online algorithm currently known achieves a competitive ratio of 1.9201. No deterministic online strategy can obtain a competitiveness smaller than 1.88. In this paper, we study online makespan minimization in the popular random-order model, where the jobs of a given input arrive as a random permutation. It is known that Greedy does not attain a competitive factor asymptotically smaller than 2 in this setting. We present the first improved performance guarantees. Specifically, we develop a deterministic online algorithm that achieves a competitive ratio of 1.8478. The result relies on a new analysis approach. We identify a set of properties that a random permutation of the input jobs satisfies with high probability. Then we conduct a worst-case analysis of our algorithm, for the respective class of permutations. The analysis implies that the stated competitiveness holds not only in expectation but with high probability. Moreover, it provides mathematical evidence that job sequences leading to higher performance ratios are extremely rare, pathological inputs. We complement the results by lower bounds, for the random-order model. We show that no deterministic online algorithm can achieve a competitive ratio smaller than 4/3. Moreover, no deterministic online algorithm can attain a competitiveness smaller than 3/2 with high probability.
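Graham's Greedy algorithm discussed in the abstract is only a few lines; this sketch (with made-up job sizes) shows it producing makespan 8 on an instance whose optimum is 7, within the (2 − 1/m) bound:

```python
# Greedy list scheduling: each job goes to the currently least-loaded machine.

def greedy_makespan(jobs, m):
    loads = [0.0] * m
    for job in jobs:
        i = loads.index(min(loads))   # least-loaded machine
        loads[i] += job
    return max(loads)

# Greedy gets makespan 8 here, while the optimum is 7
# (schedule {6}, {4, 3}, {2, 2, 2}).
makespan = greedy_makespan([2, 3, 4, 6, 2, 2], m=3)
```

The gap arises because Greedy commits the large job of size 6 to an already-loaded machine; the random-order analysis in the paper shows such adversarial orderings are rare under a random permutation.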
37

Ravi, Sujith, Sergei Vassilivitskii y Vibhor Rastogi. "Parallel Algorithms for Unsupervised Tagging". Transactions of the Association for Computational Linguistics 2 (diciembre de 2014): 105–18. http://dx.doi.org/10.1162/tacl_a_00169.

Abstract
We propose a new method for unsupervised tagging that finds minimal models which are then further improved by Expectation Maximization training. In contrast to previous approaches that rely on manually specified and multi-step heuristics for model minimization, our approach is a simple greedy approximation algorithm DMLC (Distributed-Minimum-Label-Cover) that solves this objective in a single step. We extend the method and show how to efficiently parallelize the algorithm on modern parallel computing platforms while preserving approximation guarantees. The new method easily scales to large data and grammar sizes, overcoming the memory bottleneck in previous approaches. We demonstrate the power of the new algorithm by evaluating on various sequence labeling tasks: Part-of-Speech tagging for multiple languages (including low-resource languages), with complete and incomplete dictionaries, and supertagging, a complex sequence labeling task, where the grammar size alone can grow to millions of entries. Our results show that for all of these settings, our method achieves state-of-the-art scalable performance that yields high quality tagging outputs.
38

Aladahalli, Chandankumar, Jonathan Cagan y Kenji Shimada. "Objective Function Effect Based Pattern Search—Theoretical Framework Inspired by 3D Component Layout". Journal of Mechanical Design 129, n.º 3 (16 de marzo de 2006): 243–54. http://dx.doi.org/10.1115/1.2406095.

Abstract
Though pattern search algorithms have been successfully applied to three-dimensional (3D) component layout problems, a number of unanswered questions remain regarding their parameter tuning. One such question is the scheduling of patterns in the search. Current pattern search methods treat all patterns similarly and all of them are active from the beginning to the end of the search. Observations from 3D component layout motivate the question whether patterns should be introduced in some different order during the search. This paper presents a novel method for scheduling patterns that is inspired by observations from 3D component layout problems. The new method introduces patterns into the search in the decreasing order of a priori expectation of the objective function change due to the patterns. Pattern search algorithms based on the new pattern schedule run 30% faster on average than conventional pattern search based algorithms on 3D component layout problems and general 2D multimodal surface minimization problems. However since determining the expected change in objective function value due to the patterns is expensive, we explore approximations using domain information.
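A generic compass-style pattern search makes the role of patterns concrete; this is my own minimal sketch on a hypothetical quadratic, not an implementation of the paper's pattern-scheduling method:

```python
# Compass pattern search: try each pattern direction, keep improving moves,
# and halve the step when no pattern yields an improvement.

def pattern_search(f, x, step=1.0, tol=1e-6):
    patterns = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while step > tol:
        improved = False
        for dx, dy in patterns:
            cand = (x[0] + step * dx, x[1] + step * dy)
            if f(cand) < f(x):
                x, improved = cand, True
        if not improved:
            step /= 2.0            # shrink the mesh
    return x

quad = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
x_min = pattern_search(quad, (0.0, 0.0))   # converges to (1, -2)
```

The paper's idea of scheduling patterns corresponds to ordering (or delaying) entries of the `patterns` list by the expected objective-function change each move induces.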
39

Pande, Sohan Kumar, Sanjaya Kumar Panda y Satyabrata Das. "A Customer-Oriented Task Scheduling for Heterogeneous Multi-Cloud Environment". International Journal of Cloud Applications and Computing 6, n.º 4 (octubre de 2016): 1–17. http://dx.doi.org/10.4018/ijcac.2016100101.

Abstract
Task scheduling is widely studied in various environments such as cluster, grid and cloud computing systems. Moreover, it is NP-complete when the optimization criterion is to minimize the overall processing time of all the tasks (i.e., the makespan). However, minimization of makespan does not equate to customer satisfaction. In this paper, the authors propose a customer-oriented task scheduling algorithm for a heterogeneous multi-cloud environment. The basic idea of this algorithm is to assign to each cloud a suitable task that takes minimum execution time. It then balances the makespan by inserting as many tasks as possible into the idle slots of each cloud. As a result, customers get better service in minimum time. The authors simulate the proposed algorithm in a virtualized environment and compare the simulation results with a well-known algorithm called cloud min-min scheduling. The results show the superiority of the proposed algorithm in terms of customer satisfaction and surplus customer expectation. The authors validate the results using two statistical techniques, namely the T-test and ANOVA.
40

Liu, Jie, Guilin Wen, Qixiang Qing, Fangyi Li y Yi Min Xie. "Robust topology optimization for continuum structures with random loads". Engineering Computations 35, n.º 2 (16 de abril de 2018): 710–32. http://dx.doi.org/10.1108/ec-10-2016-0369.

Abstract
Purpose This paper aims to tackle the challenging topic of continuum structural layout in the presence of random loads and to develop an efficient robust method. Design/methodology/approach An innovative robust topology optimization approach for continuum structures with random applied loads is reported. Simultaneous minimization of the expectation and the variance of the structural compliance is performed. Uncertain load vectors are dealt with by using additional uncertain pseudo random load vectors. The sensitivity information of the robust objective function is obtained approximately by using the Taylor expansion technique. The design problem is solved using the bi-directional evolutionary structural optimization method with the derived sensitivity numbers. Findings The numerical examples show the significant topological changes of the robust solutions compared with the equivalent deterministic solutions. Originality/value A simple yet efficient robust topology optimization approach for continuum structures with random applied loads is developed. The computational time scales linearly with the number of applied loads with uncertainty, which is very efficient when compared with Monte Carlo-based optimization methods.
41

Chen, Jun, Hong Chen, Xue Jiang, Bin Gu, Weifu Li, Tieliang Gong y Feng Zheng. "On the Stability and Generalization of Triplet Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, n.º 6 (26 de junio de 2023): 7033–41. http://dx.doi.org/10.1609/aaai.v37i6.25859.

Abstract
Triplet learning, i.e. learning from triplet data, has attracted much attention in computer vision tasks with an extremely large number of categories, e.g., face recognition and person re-identification. Albeit with rapid progress in designing and applying triplet learning algorithms, there is a lacking study on the theoretical understanding of their generalization performance. To fill this gap, this paper investigates the generalization guarantees of triplet learning by leveraging the stability analysis. Specifically, we establish the first general high-probability generalization bound for the triplet learning algorithm satisfying the uniform stability, and then obtain the excess risk bounds of the order O(log(n)/√n) for both stochastic gradient descent (SGD) and regularized risk minimization (RRM), where 2n is approximately equal to the number of training samples. Moreover, an optimistic generalization bound in expectation as fast as O(1/n) is derived for RRM in a low noise case via the on-average stability analysis. Finally, our results are applied to triplet metric learning to characterize its theoretical underpinning.
42

Gomi, Tsutomu y Yukio Koibuchi. "Use of a Total Variation Minimization Iterative Reconstruction Algorithm to Evaluate Reduced Projections during Digital Breast Tomosynthesis". BioMed Research International 2018 (19 de junio de 2018): 1–14. http://dx.doi.org/10.1155/2018/5239082.

Abstract
Purpose. We evaluated the efficacies of the adaptive steepest descent projection onto convex sets (ASD-POCS), simultaneous algebraic reconstruction technique (SART), filtered back projection (FBP), and maximum likelihood expectation maximization (MLEM) total variation minimization iterative algorithms for reducing exposure doses during digital breast tomosynthesis for reduced projections. Methods. Reconstructions were evaluated using normal (15 projections) and half (i.e., thinned-out normal) projections (seven projections). The algorithms were assessed by determining the full width at half-maximum (FWHM), and the BR3D Phantom was used to evaluate the contrast-to-noise ratio (CNR) for the in-focus plane. A mean similarity measure of structural similarity (MSSIM) was also used to identify the preservation of contrast in clinical cases. Results. Spatial resolution tended to deteriorate in ASD-POCS algorithm reconstructions involving a reduced number of projections. However, the microcalcification size did not affect the rate of FWHM change. The ASD-POCS algorithm yielded a high CNR independently of the simulated mass lesion size and projection number. The ASD-POCS algorithm yielded a high MSSIM in reconstructions from reduced numbers of projections. Conclusions. The ASD-POCS algorithm can preserve contrast despite a reduced number of projections and could therefore be used to reduce radiation doses.
43

Misati, Roseline y Anne Kamau. "Local and international dimensions of credit provision by commercial banks in Kenya". Banks and Bank Systems 12, n.º 3 (1 de septiembre de 2017): 87–99. http://dx.doi.org/10.21511/bbs.12(3).2017.07.

Abstract
Although considerable research has focused on the determinants of credit to the private sector, the issue still remains controversial, particularly with respect to the role of foreign banks in emerging markets. This study sought to understand the factors that affect commercial bank lending, both in the form of foreign and local loans. It used panel data methods on quarterly bank-specific data covering the period from 2000 to 2013. In general, the results reveal that the ownership structure, the housing variable and the size of the bank are the main determinants of aggregate commercial bank lending. This conclusion is maintained even when the determinants of foreign loans and local loans are examined separately. However, the role of the liquidity measure is not consistent across the different specifications, while the role of interest rates is largely in line with expectations in most of the specifications. Implicitly, the results seem to suggest a need for mergers of small banks, a policy focus on incentives for more local bank ownership, and continued efforts to minimize the interest rate spread, which would not only promote mortgage financing and home ownership but also overall credit growth.
44

Venter, Elmarie. "How and why actions are selected: action selection and the dark room problem". Kairos. Journal of Philosophy & Science 15, n.º 1 (1 de abril de 2016): 19–45. http://dx.doi.org/10.1515/kjps-2016-0002.

Abstract
In this paper, I examine an evolutionary approach to the action selection problem and illustrate how it helps raise an objection to the predictive processing account. Clark examines the predictive processing account as a theory of brain function that aims to unify perception, action, and cognition, but - despite this aim - fails to consider action selection overtly. He offers an account of action control with the implication that minimizing prediction error is an imperative of living organisms because, according to the predictive processing account, action is employed to fulfill expectations and reduce prediction error. One way in which this can be achieved is by seeking out the least stimulating environment and staying there (Friston et al. 2012: 2). The predictive processing account brings Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). But most living organisms do not find, and stay in, surprise-free environments. This paper explores this objection, also called the "dark room problem", and examines Clark's response to the problem. Finally, I recommend that if supplemented with an account of action selection, Clark's account will avoid the dark room problem.
45

Brewer, Daniel, Martino Barenco, Robin Callard, Michael Hubank y Jaroslav Stark. "Fitting ordinary differential equations to short time course data". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366, n.º 1865 (13 de agosto de 2007): 519–44. http://dx.doi.org/10.1098/rsta.2007.2108.

Abstract
Ordinary differential equations (ODEs) are widely used to model many systems in physics, chemistry, engineering and biology. Often one wants to compare such equations with observed time course data, and use this to estimate parameters. Surprisingly, practical algorithms for doing this are relatively poorly developed, particularly in comparison with the sophistication of numerical methods for solving both initial and boundary value problems for differential equations, and for locating and analysing bifurcations. A lack of good numerical fitting methods is particularly problematic in the context of systems biology where only a handful of time points may be available. In this paper, we present a survey of existing algorithms and describe the main approaches. We also introduce and evaluate a new efficient technique for estimating ODEs linear in parameters particularly suited to situations where noise levels are high and the number of data points is low. It employs a spline-based collocation scheme and alternates linear least squares minimization steps with repeated estimates of the noise-free values of the variables. This is reminiscent of expectation–maximization methods widely used for problems with nuisance parameters or missing data.
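As a minimal sketch of fitting an ODE that is linear in its parameter (a toy example in the spirit of, but far simpler than, the spline-collocation scheme the paper describes): estimate a in dx/dt = a·x by least squares on finite-difference derivative estimates. The true parameter value and sampling grid are illustrative.

```python
import numpy as np

# Fit dx/dt = a*x: estimate derivatives at the sample times, then solve
# the (here scalar) linear least-squares problem for a. The paper's method
# additionally alternates with re-estimation of the noise-free states.
a_true = -0.7
t = np.linspace(0.0, 2.0, 21)
x = np.exp(a_true * t)                    # noise-free trajectory
dxdt = np.gradient(x, t)                  # derivative estimate at the samples
a_hat = float(x @ dxdt / (x @ x))         # least squares for dxdt ≈ a*x
```

Because the model is linear in a, no iterative nonlinear optimization is needed at this step, which is exactly why linear-in-parameters structure is so valuable when only a handful of time points are available.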
46

Gao, Jingjing, Mei Xie y Yan Zhou. "INTERLEAVED EM SEGMENTATION FOR MR IMAGE WITH INTENSITY INHOMOGENEITY". Biomedical Engineering: Applications, Basis and Communications 26, n.º 05 (26 de septiembre de 2014): 1450058. http://dx.doi.org/10.4015/s1016237214500586.

Abstract
Expectation-maximization (EM) algorithms have been extensively applied in brain MR image segmentation. However, the conventional EM method usually leads to severe misclassifications in MR images with a bias field, due to the significant intensity inhomogeneity. This limits the applications of the conventional EM method in MR image segmentation. In this paper, we propose an interleaved EM method to perform tissue segmentation and bias field estimation. In the proposed method, tissue segmentation is performed by a modified EM classification, and bias field estimation is accomplished by an energy minimization. Moreover, the tissue segmentation and bias field estimation are performed in an interleaved process, and the two processes potentially benefit from each other during the iteration. A salient advantage of the proposed method is that it overcomes the misclassifications of the conventional EM classification for MR images with a bias field. Furthermore, the modified EM algorithm performs soft segmentation in our method, which is more suitable for MR images than the hard segmentation achieved in Li et al.'s method. We have tested our method on synthetic images with different levels of bias field and different noise, and compared it with two baseline methods. Experimental results have demonstrated the effectiveness and advantages of the proposed algorithm.
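The soft EM classification at the core of such methods can be sketched as a 1D two-component Gaussian-mixture EM; the synthetic intensities below are an illustrative assumption, and no bias-field term is modeled.

```python
import numpy as np

# 1D two-component Gaussian-mixture EM with soft assignments.
rng = np.random.default_rng(42)
data = np.concatenate([rng.normal(0.0, 1.0, 500),
                       rng.normal(5.0, 1.0, 500)])

mu = np.array([-1.0, 6.0])                # deliberately rough initial means
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each component for each sample
    dens = pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the soft assignments
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(data)
```

The responsibilities `resp` are the soft segmentation the abstract refers to; a hard segmentation would instead commit each sample to its most probable component at every iteration.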
47

Huang, Ren-Jie, Jung-Hua Wang, Chun-Shun Tseng, Zhe-Wei Tu y Kai-Chun Chiang. "Bayesian Edge Detector Using Deformable Directivity-Aware Sampling Window". Entropy 22, n.º 10 (25 de septiembre de 2020): 1080. http://dx.doi.org/10.3390/e22101080.

Abstract
Conventional image entropy merely involves the overall pixel intensity statistics which cannot respond to intensity patterns over spatial domain. However, spatial distribution of pixel intensity is definitely crucial to any biological or computer vision system, and that is why gestalt grouping rules involve using features of both aspects. Recently, the increasing integration of knowledge from gestalt research into visualization-related techniques has fundamentally altered both fields, offering not only new research questions, but also new ways of solving existing issues. This paper presents a Bayesian edge detector called GestEdge, which is effective in detecting gestalt edges, especially useful for forming object boundaries as perceived by human eyes. GestEdge is characterized by employing a directivity-aware sampling window or mask that iteratively deforms to probe or explore the existence of principal direction of sampling pixels; when convergence is reached, the window covers pixels best representing the directivity in compliance with the similarity and proximity laws in gestalt theory. During the iterative process based on the unsupervised Expectation-Minimization (EM) algorithm, the shape of the sampling window is optimally adjusted. Such a deformable window allows us to exploit the similarity and proximity among the sampled pixels. Comparisons between GestEdge and other edge detectors are shown to justify the effectiveness of GestEdge in extracting the gestalt edges.
48

Angelini, Elsa D., Ting Song, Brett D. Mensh y Andrew F. Laine. "Brain MRI Segmentation with Multiphase Minimal Partitioning: A Comparative Study". International Journal of Biomedical Imaging 2007 (2007): 1–15. http://dx.doi.org/10.1155/2007/10526.

Full text
Abstract
This paper presents the implementation and quantitative evaluation of a multiphase three-dimensional deformable model in a level set framework for automated segmentation of brain MRIs. The segmentation algorithm performs an optimal partitioning of three-dimensional data based on homogeneity measures that naturally evolves to the extraction of different tissue types in the brain. Random seed initialization was used to minimize the sensitivity of the method to initial conditions while avoiding the need for a priori information. This random initialization ensures robustness of the method with respect to the initialization and the minimization setup. Postprocessing corrections with morphological operators were applied to refine the details of the global segmentation method. A clinical study was performed on a database of 10 adult brain MRI volumes to compare the level set segmentation to three other methods: "idealized" intensity thresholding, fuzzy connectedness, and an expectation maximization classification using hidden Markov random fields. Quantitative evaluation of segmentation accuracy was performed by comparison to manual segmentation, computing true positive and false positive volume fractions. A statistical comparison of the segmentation methods was performed through a Wilcoxon analysis of these error rates, and the results showed very high quality and stability of the multiphase three-dimensional level set method.
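The true positive and false positive volume fractions used in this evaluation are simple overlap statistics on binary masks; a minimal sketch (not taken from the paper, with a made-up 2×3 example) could look like:

```python
import numpy as np

def overlap_metrics(seg, ref):
    """True/false positive volume fractions and Dice for binary masks."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()    # voxels correctly labeled as tissue
    fp = np.logical_and(seg, ~ref).sum()   # background voxels mislabeled as tissue
    tpvf = tp / ref.sum()                  # fraction of the reference recovered
    fpvf = fp / (~ref).sum()               # fraction of the background mislabeled
    dice = 2 * tp / (seg.sum() + ref.sum())
    return tpvf, fpvf, dice

seg = np.array([[1, 1, 0], [0, 1, 0]])     # automated segmentation
ref = np.array([[1, 1, 0], [0, 0, 0]])     # manual reference
tpvf, fpvf, dice = overlap_metrics(seg, ref)
```

The same per-volume error rates can then be fed to a paired nonparametric test such as the Wilcoxon signed-rank test to compare methods, as the study does.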
49

Sim, Adelene Y. L., Olivier Schwander, Michael Levitt, and Julie Bernauer. "Evaluating Mixture Models for Building RNA Knowledge-Based Potentials". Journal of Bioinformatics and Computational Biology 10, no. 02 (April 2012): 1241010. http://dx.doi.org/10.1142/s0219720012410107.

Full text
Abstract
Ribonucleic acid (RNA) molecules play important roles in a variety of biological processes. To function properly, RNA molecules usually have to fold into specific structures, so understanding RNA structure is vital to comprehending how RNA functions. One approach to understanding and predicting biomolecular structure is to use knowledge-based potentials built from experimentally determined structures. These types of potentials have been shown to be effective for predicting both protein and RNA structures, but their utility is limited by their significantly rugged nature. This ruggedness (and hence the potential's usefulness) depends heavily on the choice of bin width used to sort structural information (e.g. distances), but the appropriate bin width is not known a priori. To circumvent the binning problem, we compared knowledge-based potentials built from inter-atomic distances in RNA structures using different mixture models (Kernel Density Estimation, Expectation Minimization, and Dirichlet Process). We show that the smooth knowledge-based potential built from the Dirichlet process is successful in selecting native-like RNA models from different sets of structural decoys, with efficacy comparable to a potential developed by the common approach of spline-fitting binned distance histograms. The less rugged nature of our potential suggests its applicability in diverse types of structural modeling.
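As a rough illustration of the mixture-model approach the authors evaluate (not their Dirichlet-process implementation), a plain EM fit of a two-component 1-D Gaussian mixture to synthetic "distance" data can be sketched as follows; all data and parameters here are made up:

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Plain EM for a two-component 1-D Gaussian mixture."""
    # crude initialization: quartiles for means, pooled variance, equal weights
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# synthetic inter-atomic "distances" drawn from two well-separated modes
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(3.0, 0.3, 500), rng.normal(8.0, 0.5, 500)])
pi, mu, var = em_gmm_1d(x)
```

The fitted mixture density (rather than a binned histogram) is what gets converted into a smooth potential of mean force, which is the point of avoiding an explicit bin-width choice.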
50

Romero, Ignacio O., Yile Fang, Michael Lun, and Changqing Li. "X-ray Fluorescence Computed Tomography (XFCT) Imaging with a Superfine Pencil Beam X-ray Source". Photonics 8, no. 7 (25 June 2021): 236. http://dx.doi.org/10.3390/photonics8070236.

Full text
Abstract
X-ray fluorescence computed tomography (XFCT) is a molecular imaging technique that can be used to sense different elements or nanoparticle (NP) agents inside deep samples or tissues. However, XFCT has not been a popular molecular imaging tool because it has limited molecular sensitivity and spatial resolution. We present a benchtop XFCT imaging system in which a superfine pencil-beam X-ray source and a ring of X-ray spectrometers were simulated using GATE (Geant4 Application for Tomographic Emission) Monte Carlo software. An accelerated majorization minimization (MM) algorithm with an L1 regularization scheme was used to reconstruct the XFCT image of molybdenum (Mo) NP targets. Good target localization was achieved with a Dice coefficient of 88.737%. The reconstructed signal of the targets was found to be proportional to the target concentrations when detector number, detector placement, and angular projection number were optimized. The MM algorithm performance was compared with the maximum likelihood expectation maximization (ML-EM) and filtered back projection (FBP) algorithms. Our results indicate that the MM algorithm is superior to the ML-EM and FBP algorithms. We found that the MM algorithm was able to reconstruct XFCT targets as small as 0.25 mm in diameter. We also found that measurements with three angular projections and a 20-detector ring are enough to reconstruct the XFCT images.
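The ML-EM baseline the authors compare against has a compact multiplicative update, x ← x · (Aᵀ(y / Ax)) / (Aᵀ1). A toy sketch on a made-up linear system (not the paper's XFCT geometry or data) might look like:

```python
import numpy as np

def mlem(A, y, n_iter=5000):
    """ML-EM iterations for a Poisson emission model y ~ A @ x, x >= 0."""
    x = np.ones(A.shape[1])                    # uniform positive start
    sens = A.sum(axis=0)                       # sensitivity image: A^T applied to all-ones
    for _ in range(n_iter):
        proj = A @ x                           # forward projection of current estimate
        ratio = y / np.maximum(proj, 1e-12)    # measured / estimated counts
        x = x * (A.T @ ratio) / sens           # multiplicative EM update (keeps x >= 0)
    return x

# toy system: 6 noiseless measurements of a 4-pixel object
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(6, 4))
x_true = np.array([5.0, 0.5, 3.0, 1.0])
y = A @ x_true
x_hat = mlem(A, y)
```

The update is nonnegativity-preserving by construction, which is one reason ML-EM is a standard baseline for emission tomography; the paper's MM algorithm adds an L1 penalty and acceleration on top of this kind of iteration.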
