Journal articles on the topic 'CONEX information model'

To see the other types of publications on this topic, follow the link: CONEX information model.

Consult the top 50 journal articles for your research on the topic 'CONEX information model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Ben-Haim, Y. "Convex Models of Uncertainty in Radial Pulse Buckling of Shells." Journal of Applied Mechanics 60, no. 3 (September 1, 1993): 683–88. http://dx.doi.org/10.1115/1.2900858.

Full text
Abstract:
The buckling of shells subject to radial impulse loading has been studied by many investigators, and it is well known that the severity of the buckling response is greatly amplified by initial geometrical imperfections in the shell shape. Traditionally, these imperfections have been modeled stochastically. In this study, convex models provide a convenient alternative to probabilistic representation of uncertainty. Convex models are well suited to the limitations of the available information on the nature of the geometrical uncertainties. An ellipsoidal convex model is employed and the maximum pulse response is evaluated. The ellipsoidal convex model is based on three types of information concerning the initial geometrical uncertainty of the shell: (1) which mode shapes contribute to the imperfections, (2) bounds on the relative amplitudes of these modes, and (3) the magnitude of the maximum initial deviation of the shell from its nominal shape. The convex model analysis yields reasonable results in comparison with a probabilistic analysis due to Lindberg (1992a,b). We also consider localized imperfections of the shell. Results with a localized envelope-bound convex model indicate that very small regions of localized geometrical imperfections result in buckling damage which is a substantial fraction of the damage resulting from full circumferential initial imperfection.
APA, Harvard, Vancouver, ISO, and other styles
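The worst-case analysis described in the abstract above can be illustrated with a small worked example: for an ellipsoidal convex model of uncertain quantities and a response that is linear in them, the maximum response has a closed form. The sketch below is a generic Python illustration of that idea, not the paper's shell-buckling model; the shape matrix W and sensitivity vector c are made-up numbers.

```python
import numpy as np

# Ellipsoidal convex model {u : u^T W u <= 1} for uncertain quantities u and a
# response that is linear in u, r(u) = c^T u.  The worst case has the closed
# form max r = sqrt(c^T W^{-1} c).  W and c are assumed values, not from the paper.
W = np.diag([4.0, 1.0, 0.25])          # ellipsoid shape matrix (assumed)
c = np.array([0.3, 1.2, 0.8])          # assumed response sensitivities

closed_form = np.sqrt(c @ np.linalg.solve(W, c))

# Monte-Carlo check: sample points on the ellipsoid boundary and take the maximum.
rng = np.random.default_rng(0)
v = rng.normal(size=(200_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)      # points on the unit sphere
L = np.linalg.cholesky(W)                          # W = L L^T
u = v @ np.linalg.inv(L)                           # maps the sphere onto the ellipsoid boundary
print(f"closed form {closed_form:.4f}   sampled max {(u @ c).max():.4f}")
```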
2

Wang, Quanhui, En Fan, and Pengfei Li. "Fuzzy-Logic-Based, Obstacle Information-Aided Multiple-Model Target Tracking." Information 10, no. 2 (February 2, 2019): 48. http://dx.doi.org/10.3390/info10020048.

Full text
Abstract:
Incorporating obstacle information into maneuvering target-tracking algorithms may lead to a better performance when the target maneuver is caused by avoiding collision with obstacles. In this paper, we propose a fuzzy-logic-based method incorporating new obstacle information into the interacting multiple-model (IMM) algorithm (FOIA-MM). We use convex polygons to describe the obstacles and then extract the distance from and the field angle of these obstacle convex polygons to the predicted target position as obstacle information. This information is fed to two fuzzy logic inference systems; one system outputs the model weights to update their probabilities, the other yields the expected sojourn time of the models for the transition probability matrix assignment. Finally, simulation experiments and an Unmanned Aerial Vehicle experiment are carried out to demonstrate the efficiency and effectiveness of the proposed algorithm.
APA, Harvard, Vancouver, ISO, and other styles
3

LI, H., and L. Z. JIA. "ASSESSMENT OF DAMAGE AND LOSS OF SEISMICALLY EXCITED STRUCTURES BASED ON CONVEX ANALYSIS." Journal of Earthquake and Tsunami 05, no. 02 (June 2011): 101–18. http://dx.doi.org/10.1142/s1793431111000954.

Full text
Abstract:
Probabilistic results drawn from inadequate information are suspect. Convex set theory, which requires much less information, is employed to model the uncertainties of the spectral displacement and damage state medians. Furthermore, a convex model of the fragility function is established based on the envelope-bound convex models of the spectral displacement and damage state medians. A bound loss estimation method is derived by integrating the HAZUS-AEBM module with convex set theory. The loss bounds of a hotel in southern China are obtained and compared to the loss calculated by the HAZUS-AEBM method, which lies in the lower half of the interval given by the convex analysis results. The uncertainty propagation is analyzed, and the damage state medians are found to be the most critical factor for the loss. Finally, PEER's probabilistic loss estimation methodology is also applied to this example to deduce the probability of the loss exceeding the bound values of the convex analysis results.
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Sanjiang, and Weiming Liu. "Topological Relations between Convex Regions." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 321–26. http://dx.doi.org/10.1609/aaai.v24i1.7586.

Full text
Abstract:
Topological relations between spatial objects are the most important kind of qualitative spatial information. Dozens of relation models have been proposed in the past two decades. These models usually make a small number of distinctions and therefore can only cope with spatial information at a fixed granularity of spatial knowledge. In this paper, we propose a topological relation model in which the topological relation between two convex plane regions can be uniquely represented as a circular string over the alphabet {u, v, x, y}. A linear algorithm is given to compute the topological relation between two convex polygons. The infinite relation calculus could be used in hierarchical spatial reasoning as well as in qualitative shape description.
APA, Harvard, Vancouver, ISO, and other styles
5

Herskovic, Bernard, and João Ramos. "Acquiring Information through Peers." American Economic Review 110, no. 7 (July 1, 2020): 2128–52. http://dx.doi.org/10.1257/aer.20181798.

Full text
Abstract:
We develop an endogenous network formation model, in which agents form connections to acquire information. Our model features complementarity in actions as agents care not only about accuracy of their decision-making but also about the actions of other agents. In equilibrium, the information structure is a hierarchical network, and, under weakly convex cost of forming links, the equilibrium network is core-periphery. Although agents are ex ante identical, there is ex post heterogeneity in payoffs and actions. (JEL D83, D85, Z13)
APA, Harvard, Vancouver, ISO, and other styles
6

Yamaka, Woraphon, Rungrapee Phadkantha, and Paravee Maneejuk. "A Convex Combination Approach for Artificial Neural Network of Interval Data." Applied Sciences 11, no. 9 (April 28, 2021): 3997. http://dx.doi.org/10.3390/app11093997.

Full text
Abstract:
As the conventional models for time series forecasting often use single-valued data (e.g., daily closing prices or end-of-day data), a large amount of information during the day is neglected. Traditionally, fixed reference points from intervals, such as midpoints, ranges, and lower and upper bounds, are generally considered to build the models. However, as different datasets provide different information in intervals and may exhibit nonlinear behavior, conventional models cannot be effectively implemented and may not be guaranteed to provide accurate results. To address these problems, we propose the artificial neural network with convex combination (ANN-CC) model for interval-valued data. The convex combination method provides a flexible way to explore the best reference points from both input and output variables. These reference points were then used to build the nonlinear ANN model. Both simulation and real application studies are conducted to evaluate the accuracy of the proposed forecasting ANN-CC model. Our model was also compared with traditional linear regression forecasting (information-theoretic method, parametrized approach, center and range) and conventional ANN models for interval-valued data prediction (regularized ANN-LU and ANN-Center). The simulation results show that the proposed ANN-CC model is a suitable alternative for interval-valued data forecasting because it provides the lowest forecasting error in both linear and nonlinear relationships between the input and output data. Furthermore, empirical results on two datasets also confirmed that the proposed ANN-CC model outperformed the conventional models.
APA, Harvard, Vancouver, ISO, and other styles
7

Agrawal, Akshay, Shane Barratt, and Stephen Boyd. "Learning Convex Optimization Models." IEEE/CAA Journal of Automatica Sinica 8, no. 8 (August 2021): 1355–64. http://dx.doi.org/10.1109/jas.2021.1004075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shah, S., and M. D. Levine. "Visual information processing in primate cone pathways. I. A model." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 26, no. 2 (April 1996): 259–74. http://dx.doi.org/10.1109/3477.485837.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Yunyun, and Boying Wu. "Convex Image Segmentation Model Based on Local and Global Intensity Fitting Energy and Split Bregman Method." Journal of Applied Mathematics 2012 (2012): 1–16. http://dx.doi.org/10.1155/2012/692589.

Full text
Abstract:
We propose a convex image segmentation model in a variational level set formulation. Both the local information and the global information are taken into consideration to get better segmentation results. We first propose a globally convex energy functional to combine the local and global intensity fitting terms. The proposed energy functional is then modified by adding an edge detector to force the active contour to the boundary more easily. We then apply the split Bregman method to minimize the proposed energy functional efficiently. By using a weight function that varies with location of the image, the proposed model can balance the weights between the local and global fitting terms dynamically. We have applied the proposed model to synthetic and real images with desirable results. Comparison with other models also demonstrates the accuracy and superiority of the proposed model.
APA, Harvard, Vancouver, ISO, and other styles
10

Gallagher, Ryan J., Kyle Reing, David Kale, and Greg Ver Steeg. "Anchored Correlation Explanation: Topic Modeling with Minimal Domain Knowledge." Transactions of the Association for Computational Linguistics 5 (December 2017): 529–42. http://dx.doi.org/10.1162/tacl_a_00078.

Full text
Abstract:
While generative models such as Latent Dirichlet Allocation (LDA) have proven fruitful in topic modeling, they often require detailed assumptions and careful specification of hyperparameters. Such model complexity issues only compound when trying to generalize generative models to incorporate human input. We introduce Correlation Explanation (CorEx), an alternative approach to topic modeling that does not assume an underlying generative model, and instead learns maximally informative topics through an information-theoretic framework. This framework naturally generalizes to hierarchical and semi-supervised extensions with no additional modeling assumptions. In particular, word-level domain knowledge can be flexibly incorporated within CorEx through anchor words, allowing topic separability and representation to be promoted with minimal human intervention. Across a variety of datasets, metrics, and experiments, we demonstrate that CorEx produces topics that are comparable in quality to those produced by unsupervised and semi-supervised variants of LDA.
APA, Harvard, Vancouver, ISO, and other styles
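As a point of reference for the information-theoretic quantity CorEx is built around, the snippet below estimates total correlation, TC(X) = sum_i H(X_i) - H(X_1, ..., X_n), from counts on a tiny made-up binary dataset. It only illustrates the objective being maximized; the CorEx search over latent topics (and its anchoring mechanism) is not reproduced here.

```python
import numpy as np
from collections import Counter

# Total correlation TC(X) = sum_i H(X_i) - H(X_1,...,X_n), estimated from counts.
# The 8 x 3 binary "documents x word indicators" matrix below is made up.
def entropy(counts):
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

X = np.array([[0, 0, 1], [1, 1, 0], [1, 1, 1], [0, 0, 0],
              [1, 1, 1], [0, 0, 1], [1, 1, 0], [0, 0, 0]])

joint = entropy(Counter(map(tuple, X)))                       # H(X_1,...,X_n)
marginals = sum(entropy(Counter(X[:, j])) for j in range(X.shape[1]))
print("total correlation (bits):", marginals - joint)         # 1 bit: columns 0 and 1 are identical
```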
11

Ishihara, Akito. "A simulation study on the effect of ionic currents on transmission from cones to retinal OFF type cone bipolar cells." Modeling and Artificial Intelligence in Ophthalmology 2, no. 3 (June 4, 2019): 14–20. http://dx.doi.org/10.35119/maio.v2i3.87.

Full text
Abstract:
The retinal cone bipolar cells are interneurons which receive inputs from cone photoreceptors and send outputs to retinal ganglion cells. Several subtypes of bipolar cells have been identified by morphology and electrophysiology in the mammalian retina, which convey distinct visual information to higher order neurons in parallel. The neural circuit in the retina not only converts light information to neural information, but also performs visual information preprocessing that has not yet been fully understood. Recently, it has been revealed that the neural circuits in retinas of higher vertebrates, such as mammals and primates, have various biophysical properties arising from being composed of ionic channels, ionic pumps, and neurotransmitter receptors. Analysis using a mathematical model based on their ionic mechanisms is essential to understand the visual information processing in the retinal neural circuit of the higher vertebrates. The cones and the bipolar cells respond to continuous variation of light with a graded potential, in an analog manner. Especially, glutamate is continuously released from a cone synapse in the dark and is decreased by hyperpolarization of the cone that receives the light stimulus. The alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and kainate type ionotropic glutamate receptors (iGluRs) of the OFF type bipolar cells (OFF-BCs) exhibit partial or nearly complete desensitization in the sustained presence of glutamate. In the dark, glutamate concentration in the synaptic cleft of the cone pedicle rises to 0.1–0.5 mM. The baseline glutamate concentration depends on a sustained hyperpolarization of the cone by light. Thus, for understanding the working of the OFF-BCs, it is important to elucidate the mechanisms of synaptic transmission from cones to OFF-BCs via iGluRs, which undergo desensitization in the various background light conditions. Furthermore, there are various kinds of ionic channels in OFF-BCs that mediate membrane potential responses. It is considered that information transmitted from cones to OFF-BCs is modulated by the intrinsic ionic currents. We analyzed how ionic currents of OFF-BCs contribute to the transmission of light responses by developing a mathematical model.
APA, Harvard, Vancouver, ISO, and other styles
12

Golden, Henley, White, and Kashner. "Consequences of Model Misspecification for Maximum Likelihood Estimation with Missing Data." Econometrics 7, no. 3 (September 5, 2019): 37. http://dx.doi.org/10.3390/econometrics7030037.

Full text
Abstract:
Researchers are often faced with the challenge of developing statistical models with incomplete data. Exacerbating this situation is the possibility that either the researcher’s complete-data model or the model of the missing-data mechanism is misspecified. In this article, we create a formal theoretical framework for developing statistical models and detecting model misspecification in the presence of incomplete data where maximum likelihood estimates are obtained by maximizing the observable-data likelihood function when the missing-data mechanism is assumed ignorable. First, we provide sufficient regularity conditions on the researcher’s complete-data model to characterize the asymptotic behavior of maximum likelihood estimates in the simultaneous presence of both missing data and model misspecification. These results are then used to derive robust hypothesis testing methods for possibly misspecified models in the presence of Missing at Random (MAR) or Missing Not at Random (MNAR) missing data. Second, we introduce a method for the detection of model misspecification in missing data problems using recently developed Generalized Information Matrix Tests (GIMT). Third, we identify regularity conditions for the Missing Information Principle (MIP) to hold in the presence of model misspecification so as to provide useful computational covariance matrix estimation formulas. Fourth, we provide regularity conditions that ensure the observable-data expected negative log-likelihood function is convex in the presence of partially observable data when the amount of missingness is sufficiently small and the complete-data likelihood is convex. Fifth, we show that when the researcher has correctly specified a complete-data model with a convex negative likelihood function and an ignorable missing-data mechanism, then its strict local minimizer is the true parameter value for the complete-data model when the amount of missingness is sufficiently small. Our results thus provide new robust estimation, inference, and specification analysis methods for developing statistical models with incomplete data.
APA, Harvard, Vancouver, ISO, and other styles
13

Wang, Xiaoliang, Liping Pang, Qi Wu, and Mingkun Zhang. "An Adaptive Proximal Bundle Method with Inexact Oracles for a Class of Nonconvex and Nonsmooth Composite Optimization." Mathematics 9, no. 8 (April 15, 2021): 874. http://dx.doi.org/10.3390/math9080874.

Full text
Abstract:
In this paper, an adaptive proximal bundle method is proposed for a class of nonconvex and nonsmooth composite problems with inexact information. The composite problems are the sum of a finite convex function with inexact information and a nonconvex function. For the nonconvex function, we design a convexification technique and ensure that the linearization errors of its augmented function are nonnegative. Then, the sum of the convex function and the augmented function is regarded as an approximate function to the primal problem. For the approximate function, we adopt a disaggregate strategy and regard the sum of the cutting plane models of the convex function and the augmented function as a cutting plane model for the approximate function. Then, we give the adaptive nonconvex proximal bundle method. Meanwhile, for the convex function with inexact information, we utilize a noise management strategy and update the proximal parameter to reduce the influence of the inexact information. The method obtains an approximate solution. Two polynomial functions and six DC problems are used in the numerical experiments. The preliminary numerical results show that our algorithm is effective and reliable.
APA, Harvard, Vancouver, ISO, and other styles
14

Heyden, S., and M. Ortiz. "Optimizing information transmission rates drives brain gyrification." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 474, no. 2219 (November 2018): 20180527. http://dx.doi.org/10.1098/rspa.2018.0527.

Full text
Abstract:
We investigate the functional optimality of the cerebral cortex of an adult human brain geometry. Unlike most competing models, we postulate that the cerebral cortex formation is driven by the objective of maximizing the total information transmission rate. Starting from a random path model, we show that this optimization problem is related to the Steklov eigenvalue problem. Combining realistic brain geometries with the finite-element method, we calculate the underlying Steklov eigenvalues and eigenfunctions. By comparison to a convex hull approximation, we show that the adult human brain geometry indeed reduces the Steklov eigenvalue spectrum and thus increases the rate at which information is exchanged between points on the cerebral cortex. With a view to possible clinical applications, the leading Steklov eigenfunctions and the resulting induced magnetic fields are computed and reported.
APA, Harvard, Vancouver, ISO, and other styles
15

Ramli, Azizul Azhar, Junzo Watada, and Witold Pedrycz. "An Efficient Solution of Real-Time Fuzzy Regression Analysis to Information Granules Problem." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 2 (March 20, 2012): 199–209. http://dx.doi.org/10.20965/jaciii.2012.p0199.

Full text
Abstract:
Regression models are well known and widely used as one of the important categories of models in system modeling. In this paper, we extend the concept of fuzzy regression in order to handle real-time implementation of data analysis of information granules. The ultimate objective of this study is to develop a hybrid of a genetically guided clustering algorithm, called genetic algorithm-based Fuzzy C-Means (GAFCM), and a convex hull-based regression approach, regarded as a potential solution to the formation of information granules. It is shown that a setting of Granular Computing helps us reduce the computing time, especially in the case of real-time data analysis, as well as the overall computational complexity. We propose an efficient real-time information granules regression analysis based on the convex hull approach, in which a Beneath-Beyond algorithm is employed to design sub-convex hulls as well as a main convex hull structure. In the proposed design setting, we emphasize the pivotal role of the convex hull approach, or more specifically the Beneath-Beyond algorithm, which becomes crucial in alleviating the limitations of linear programming that manifest in system modeling.
APA, Harvard, Vancouver, ISO, and other styles
16

Li, Na, Jiquan Yang, Aiqing Guo, Yijian Liu, and Hai Liu. "Triangulation Reconstruction for 3D Surface Based on Information Model." Cybernetics and Information Technologies 16, no. 5 (October 1, 2016): 27–33. http://dx.doi.org/10.1515/cait-2016-0049.

Full text
Abstract:
The aim of this paper is to address surface reconstruction from point clouds in reverse engineering. The data were acquired with a 3D scanning device and processed as a point cloud, whose points were connected to build a 3D surface. The point cloud was processed in four steps to obtain the 3D information surface. First, a subtraction scheme was used to obtain covering boxes until a convex set was found under the convergence rule. Second, the points in each box were projected onto directions close to the normal direction. Third, overlap was avoided by using the convergence rule and an inner subdivision rule. Finally, the information model was used for reconstruction. The method was applied to landslide monitoring in the Three Gorges area for 3D surface reconstruction and monitoring. The reconstruction method achieves high precision and low complexity, and is effective for large-scale monitoring.
APA, Harvard, Vancouver, ISO, and other styles
17

Hong, Seung Hee, and Adel Alaeddini. "A Multi-way Multi-task Learning Approach for Multinomial Logistic Regression." Methods of Information in Medicine 56, no. 04 (2017): 294–307. http://dx.doi.org/10.3414/me16-01-0112.

Full text
Abstract:
Objectives: Whether they have been engineered for it or not, most healthcare systems experience a variety of unexpected events, such as appointment miss-opportunities, that can have a significant impact on their revenue, cost and resource utilization. In this paper, a multi-way multi-task learning model based on multinomial logistic regression is proposed to jointly predict the occurrence of different types of miss-opportunities at multiple clinics. Methods: An extension of L1/L2 regularization is proposed to enable transfer of information among various types of miss-opportunities as well as different clinics. A proximal algorithm is developed to transform the convex but non-smooth likelihood function of the multi-way multi-task learning model into a convex and smooth optimization problem solvable using a gradient descent algorithm. Results: A dataset of real attendance records of patients at four different clinics of a VA medical center is used to verify the performance of the proposed multi-task learning approach. Additionally, a simulation study investigating more general data situations is provided to highlight specific aspects of the proposed approach. Various individual and integrated multinomial logistic regression models with/without a LASSO penalty, along with a number of other common classification algorithms, are fitted and compared against the proposed multi-way multi-task learning approach. Fivefold cross-validation is used to estimate the compared models' parameters and their predictive accuracy. The multi-way multi-task learning framework enables the proposed approach to achieve a considerable rate of parameter shrinkage and superior prediction accuracy across various types of miss-opportunities and clinics. Conclusions: The proposed approach provides an integrated structure to effectively transfer knowledge among different miss-opportunities and clinics to reduce model size, increase estimation efficacy, and, more importantly, improve prediction results. The proposed framework can be effectively applied to medical centers with multiple clinics, especially those suffering from information scarcity on some types of disruptions and/or clinics.
APA, Harvard, Vancouver, ISO, and other styles
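A minimal sketch of the kind of building block such a proximal algorithm relies on: the proximal operator of a grouped L1/L2 penalty is block soft-thresholding applied group by group. The groups and coefficient values below are hypothetical and do not reflect the paper's clinic/miss-opportunity structure.

```python
import numpy as np

# prox_{lam * sum_g ||beta_g||_2}(beta): shrink each coefficient group toward zero,
# zeroing groups whose L2 norm falls below the threshold lam.
def prox_group_l1l2(beta, groups, lam):
    out = beta.copy()
    for g in groups:
        norm = np.linalg.norm(beta[g])
        out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * beta[g]
    return out

beta = np.array([0.9, -0.2, 0.05, 0.02, 0.01, -0.03])   # assumed coefficients
groups = [np.arange(0, 3), np.arange(3, 6)]             # two assumed coefficient blocks
print(prox_group_l1l2(beta, groups, lam=0.5))           # small group is zeroed, large group is shrunk
```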
18

Rebennack, Steffen, and Vitaliy Krasko. "Piecewise Linear Function Fitting via Mixed-Integer Linear Programming." INFORMS Journal on Computing 32, no. 2 (April 2020): 507–30. http://dx.doi.org/10.1287/ijoc.2019.0890.

Full text
Abstract:
Piecewise linear (PWL) functions are used in a variety of applications. Computing such continuous PWL functions, however, is a challenging task. Software packages and the literature on PWL function fitting are dominated by heuristic methods. This is true for both fitting discrete data points and continuous univariate functions. The only exact methods rely on nonconvex model formulations. Exact methods compute continuous PWL function for a fixed number of breakpoints minimizing some distance function between the original function and the PWL function. An optimal PWL function can only be computed if the breakpoints are allowed to be placed freely and are not fixed to a set of candidate breakpoints. In this paper, we propose the first convex model for optimal continuous univariate PWL function fitting. Dependent on the metrics chosen, the resulting formulations are either mixed-integer linear programming or mixed-integer quadratic programming problems. These models yield optimal continuous PWL functions for a set of discrete data. On the basis of these convex formulations, we further develop an exact algorithm to fit continuous univariate functions. Computational results for benchmark instances from the literature demonstrate the superiority of the proposed convex models compared with state-of-the-art nonconvex models.
APA, Harvard, Vancouver, ISO, and other styles
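For contrast with the exact free-breakpoint formulation described above, the short sketch below fits a continuous piecewise-linear function by ordinary least squares with fixed candidate breakpoints (a hinge basis). This is only a simplified baseline under assumed data and breakpoints; the paper's MILP/MIQP models, in which breakpoints are decision variables, are not reproduced here.

```python
import numpy as np

# Continuous PWL fit with *fixed* breakpoints: basis 1, x, max(0, x - b_k).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 200)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)          # assumed data to approximate

breaks = np.array([1.0, 2.0, 3.0])                      # assumed fixed breakpoints
A = np.column_stack([np.ones_like(x), x] + [np.maximum(0.0, x - b) for b in breaks])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)            # least-squares PWL coefficients
print("max abs fitting error:", np.max(np.abs(A @ coef - y)))
```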
19

Klaus, Kristina V., and Nicholas J. Matzke. "Statistical Comparison of Trait-Dependent Biogeographical Models Indicates That Podocarpaceae Dispersal Is Influenced by Both Seed Cone Traits and Geographical Distance." Systematic Biology 69, no. 1 (May 17, 2019): 61–75. http://dx.doi.org/10.1093/sysbio/syz034.

Full text
Abstract:
The ability of lineages to disperse long distances over evolutionary timescales may be influenced by the gain or loss of traits adapted to enhance local, ecological dispersal. For example, some species in the southern conifer family Podocarpaceae have fleshy cones that encourage bird dispersal, but it is unknown how this trait has influenced the clade’s historical biogeography, or its importance compared with other predictors of dispersal such as the geographic distance between regions. We answer these questions quantitatively by using a dated phylogeny of 197 species of southern conifers (Podocarpaceae and their sister family Araucariaceae) to statistically compare standard, trait-independent biogeography models with new BioGeoBEARS models where an evolving trait can influence dispersal probability, and trait history, biogeographical history, and model parameters are jointly inferred. We validate the method with simulation-inference experiments. Comparing all models, those that include trait-dependent dispersal accrue 87.5% of the corrected Akaike Information Criterion (AICc) model weight. Averaged across all models, lineages with nonfleshy cones had a dispersal probability multiplier of 0.49 compared with lineages with fleshy cones. Distance is included as a predictor of dispersal in all credible models (100% model weight). However, models with changing geography earned only 22.0% of the model weight, and models submerging New Caledonia/New Zealand earned only 0.01%. The importance of traits and distance suggests that long-distance dispersal over macroevolutionary timespans should not be thought of as a highly unpredictable chance event. Instead, long-distance dispersal can be modeled, allowing statistical model comparison to quantify support for different hypotheses.
APA, Harvard, Vancouver, ISO, and other styles
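The "AICc model weight" figures quoted in the abstract come from the standard Akaike-weight formula, w_i = exp(-Delta_i/2) / sum_j exp(-Delta_j/2) with Delta_i = AICc_i - min(AICc). The snippet below computes these weights for a handful of made-up AICc values, purely to show the calculation.

```python
import numpy as np

# Akaike weights from AICc scores; the AICc values are hypothetical, not from the paper.
aicc = np.array([812.4, 809.1, 815.0, 809.8])
delta = aicc - aicc.min()
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
for i, (a, w) in enumerate(zip(aicc, weights)):
    print(f"model {i}: AICc={a:.1f}  weight={w:.3f}")
```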
20

Bárdossy, András, and Shailesh Kumar Singh. "Regionalization of hydrological model parameters using data depth." Hydrology Research 42, no. 5 (October 1, 2011): 356–71. http://dx.doi.org/10.2166/nh.2011.031.

Full text
Abstract:
The parameters of hydrological models with no or short discharge records can only be estimated using regional information. We can assume that catchments with similar characteristics show a similar hydrological behaviour. A regionalization of hydrological model parameters on the basis of catchment characteristics is therefore plausible. However, due to the non-uniqueness of the rainfall/runoff model parameters (equifinality), a procedure of a regional parameter estimation by model calibration and a subsequent fit of a regional function is not appropriate. In this paper, a different procedure based on the depth function and convex combinations of model parameters is introduced. Catchment characteristics to be used for regionalization can be identified by the same procedure. Regionalization is then performed using different approaches: multiple linear regression using the deepest parameter sets and convex combinations. The assessment of the quality of the regionalized models is also discussed. An example of 28 British catchments illustrates the methodology.
APA, Harvard, Vancouver, ISO, and other styles
21

Wang, Guodong, Zhenkuan Pan, Qian Dong, Ximei Zhao, Zhimei Zhang, and Jinming Duan. "Unsupervised Texture Segmentation Using Active Contour Model and Oscillating Information." Journal of Applied Mathematics 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/614613.

Full text
Abstract:
Textures often occur in real-world images and may cause considerable difficulties in image segmentation. In order to segment texture images, we propose a new segmentation model that combines image decomposition model and active contour model. The former model is capable of decomposing structural and oscillating components separately from texture image, and the latter model can be used to provide smooth segmentation contour. In detail, we just replace the data term of piecewise constant/smooth approximation in CCV (convex Chan-Vese) model with that of image decomposition model-VO (Vese-Osher). Therefore, our proposed model can estimate both structural and oscillating components of texture images as well as segment textures simultaneously. In addition, we design fast Split-Bregman algorithm for our proposed model. Finally, the performance of our method is demonstrated by segmenting some synthetic and real texture images.
APA, Harvard, Vancouver, ISO, and other styles
22

He, Xin-Dang, Wen-Xuan Gou, Yong-Shou Liu, and Zong-Zhan Gao. "Structure and System Nonprobabilistic Reliability Solution Method Based on Enhanced Optimal Latin Hypercube Sampling." International Journal of Structural Stability and Dynamics 15, no. 01 (January 2015): 1450034. http://dx.doi.org/10.1142/s0219455414500345.

Full text
Abstract:
Using the convex model approach, only the bounds of the uncertain variables are required rather than their precise probability distributions, which makes it possible to conduct reliability analysis for many complex engineering problems with limited information. This paper aims to develop a novel nonprobabilistic reliability solution method for structures with interval uncertainty variables. In order to explore the entire domain represented by the interval variables, an enhanced optimal Latin hypercube sampling (EOLHS) is used to reduce the computational effort considerably. Through the proposed method, the safety degree of a structure with convex model uncertainty can be quantitatively evaluated. More importantly, this method can be used to deal with any general problems with nonlinear and black-box performance functions. By introducing the suggested reliability method, a convex-model-based system reliability method is also formulated. Three numerical examples are investigated to demonstrate the efficiency and accuracy of the method.
APA, Harvard, Vancouver, ISO, and other styles
23

METHA, ANDREW B., and PETER LENNIE. "Transmission of spatial information in S-cone pathways." Visual Neuroscience 18, no. 6 (November 2001): 961–72. http://dx.doi.org/10.1017/s095252380118613x.

Full text
Abstract:
The mosaics of S-cones and the neurons to which they are connected are relatively well characterized, so the S-cone system is a good vehicle for exploring how the sampling of the retinal image controls visual performance. We used an interferometer to measure the grating acuity of the S-cone system in the fovea and at a range of eccentricities out to 20 deg. We also developed a simple model observer that, by assuming only that cone pathways are noisy and that signals are subject to eccentricity-dependent postreceptoral pooling, predicts the measured acuities from the sampling properties of the S-cone mosaic. The amount of pooling required to explain performance is consistent with that suggested by anatomical and physiological measurements.
APA, Harvard, Vancouver, ISO, and other styles
24

Keaty, Thomas C., and Paul A. Jensen. "Gapsplit: efficient random sampling for non-convex constraint-based models." Bioinformatics 36, no. 8 (January 8, 2020): 2623–25. http://dx.doi.org/10.1093/bioinformatics/btz971.

Full text
Abstract:
Summary: Gapsplit generates random samples from convex and non-convex constraint-based models by targeting under-sampled regions of the solution space. Gapsplit provides uniform coverage of linear, mixed-integer and general non-linear models. Availability and implementation: Python and Matlab source code are freely available at http://jensenlab.net/tools. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
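A toy sketch of the gap-splitting idea only, restricted to plain box constraints: repeatedly find the variable with the widest gap between already-sampled values and place the next sample at that gap's midpoint. This is not the Gapsplit implementation, which handles linear, mixed-integer and general non-linear constraint-based models; the bounds and seed points below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
lb, ub = np.zeros(3), np.ones(3)                    # assumed variable bounds
samples = [rng.uniform(lb, ub) for _ in range(2)]   # a couple of random seed points

for _ in range(20):
    pts = np.sort(np.vstack(samples + [lb, ub]), axis=0)   # sampled values per variable, sorted
    gaps = np.diff(pts, axis=0)                            # gaps between consecutive values
    i, j = np.unravel_index(np.argmax(gaps), gaps.shape)   # widest gap and its variable
    new = rng.uniform(lb, ub)
    new[j] = pts[i, j] + gaps[i, j] / 2.0                  # split that gap at its midpoint
    samples.append(new)

print(np.vstack(samples).round(2))
```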
25

Tao, Feng, Yu Feng, Baobin Sun, Jianwei Wang, Xiaole Chen, and Jiarui Gong. "Septoplasty Effect on the Enhancement of Airflow Distribution and Particle Deposition in Nasal Cavity: A Numerical Study." Healthcare 10, no. 9 (September 5, 2022): 1702. http://dx.doi.org/10.3390/healthcare10091702.

Full text
Abstract:
The surgery outcomes after fixing nasal airway obstruction (NAO) are sometimes not satisfactory in improving ventilations of airflow. A case study is presented in this paper with computational fluid dynamics applied to determine the key factors for successful septoplasty plans for a patient with a deviated nasal septum. Specifically, airflow, as well as particle transport and deposition were predicted in a pre-surgery nasal cavity model reconstructed from patient-specific Computer Tomography (CT) images and two post-surgery nasal cavity models (i.e., VS1 and VS2) with different virtual surgery plans A and B. Plan A corrected the deviated septal cartilage, the perpendicular plate of the ethmoid bone, vomer, and nasal crest of the maxilla. Plan B further corrected the obstruction in the nasal vestibule and caudal nasal septal deviation based on Plan A. Simulations were performed in the three nose-to-throat airway models to compare the airflow velocity distributions and local particle depositions. Numerical results indicate that the VS2 model has a better improvement in airflow allocation between the two sides than the VS1 model. In addition, the deposition fractions in the VS2 model are lower than that in both the original and VS1 models, up to 25.32%. The better surgical plan (i.e., Plan B) reduces the particle deposition on the convex side, but slightly increases the deposition on the concave side. However, the overall deposition in the nasal cavity is reduced.
APA, Harvard, Vancouver, ISO, and other styles
26

Yamaka, Woraphon, and Sukrit Thongkairat. "A Mixed Copula-Based Vector Autoregressive Model for Econometric Analysis." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 28, Supp01 (August 28, 2020): 113–21. http://dx.doi.org/10.1142/s0218488520400103.

Full text
Abstract:
In many practical applications, the dynamics of different quantities is reasonably well described by linear equations. In economics, such linear dynamical models are known as vector autoregressive (VAR) models. These linear models are, however, only approximate. The deviations of the actual value of each quantity from the predictions of the linear model are usually well described by normal or Student-t distributions. To complete the description of the joint distribution of all these deviations, we need to supplement these marginal distributions with the information about the corresponding copula. To describe this dependence, in the past, researchers followed the usual idea of trying copulas from several standard families: Gaussian, Student, Clayton, Frank, Gumbel, and Joe families. To get a better description, we propose to also use convex combinations of copulas from different families; such convex combinations are known as mixed copulas. On the example of the dynamics of US macroeconomic data, including GDP, unemployment, consumer price index, and the real effective exchange rate, we show that mixed copulas indeed lead to a better description of the actual data. Specifically, it turns out that the best description is obtained if we use a convex combination of Student and Frank copulas.
APA, Harvard, Vancouver, ISO, and other styles
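A minimal sketch of the "mixed copula" construction: a convex combination w·c1 + (1 − w)·c2 of two copula densities is again a copula density. The abstract combines Student and Frank copulas; for brevity, a Gaussian copula stands in for the Student copula below, and the weight and parameter values are assumptions.

```python
import numpy as np
from scipy.stats import norm

# Bivariate Gaussian copula density via c(u,v) = phi2(x,y;rho) / (phi(x) phi(y)).
def gaussian_copula_pdf(u, v, rho):
    x, y = norm.ppf(u), norm.ppf(v)
    return np.exp((2 * rho * x * y - rho**2 * (x**2 + y**2)) / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

# Frank copula density.
def frank_copula_pdf(u, v, theta):
    num = theta * (1 - np.exp(-theta)) * np.exp(-theta * (u + v))
    den = ((1 - np.exp(-theta)) - (1 - np.exp(-theta * u)) * (1 - np.exp(-theta * v))) ** 2
    return num / den

# Convex combination of the two densities (weights and parameters assumed).
def mixed_copula_pdf(u, v, w=0.6, rho=0.5, theta=4.0):
    return w * gaussian_copula_pdf(u, v, rho) + (1 - w) * frank_copula_pdf(u, v, theta)

print(mixed_copula_pdf(0.3, 0.7))   # mixture density at one point of (0,1)^2
```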
27

Huai, Mengdi, Di Wang, Chenglin Miao, Jinhui Xu, and Aidong Zhang. "Pairwise Learning with Differential Privacy Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 694–701. http://dx.doi.org/10.1609/aaai.v34i01.5411.

Full text
Abstract:
Pairwise learning has received much attention recently as it is more capable of modeling the relative relationship between pairs of samples. Many machine learning tasks can be categorized as pairwise learning, such as AUC maximization and metric learning. Existing techniques for pairwise learning all fail to take into consideration a critical issue in their design, i.e., the protection of sensitive information in the training set. Models learned by such algorithms can implicitly memorize the details of sensitive information, which offers opportunity for malicious parties to infer it from the learned models. To address this challenging issue, in this paper, we propose several differentially private pairwise learning algorithms for both online and offline settings. Specifically, for the online setting, we first introduce a differentially private algorithm (called OnPairStrC) for strongly convex loss functions. Then, we extend this algorithm to general convex loss functions and give another differentially private algorithm (called OnPairC). For the offline setting, we also present two differentially private algorithms (called OffPairStrC and OffPairC) for strongly and general convex loss functions, respectively. These proposed algorithms can not only learn the model effectively from the data but also provide strong privacy protection guarantee for sensitive information in the training set. Extensive experiments on real-world datasets are conducted to evaluate the proposed algorithms and the experimental results support our theoretical analysis.
APA, Harvard, Vancouver, ISO, and other styles
28

Jia, Li Zhe, and Zhong Dong Duan. "Convex Model for a New Lateral Load Pattern of Pushover Analysis." Advanced Materials Research 243-249 (May 2011): 4013–16. http://dx.doi.org/10.4028/www.scientific.net/amr.243-249.4013.

Full text
Abstract:
The uncertainties of earthquakes are currently not considered in the various lateral load patterns used for pushover analysis. Convex set theory, which requires much less information, is employed to model the uncertainties of the maximum seismic influence coefficient and the characteristic period of the response spectrum. The convex analysis method is then integrated into the fundamental equation of pushover analysis, and the analytic relationship between the lateral seismic load and the top displacement of buildings is derived. The results of a numerical example show that the new lateral load pattern of pushover proposed in this research can effectively simulate the uncertainties of strong ground motion.
APA, Harvard, Vancouver, ISO, and other styles
29

KABEER, V., and N. K. NARAYANAN. "WAVELET-BASED ARTIFICIAL LIGHT RECEPTOR MODEL FOR HUMAN FACE RECOGNITION." International Journal of Wavelets, Multiresolution and Information Processing 07, no. 05 (September 2009): 617–27. http://dx.doi.org/10.1142/s0219691309003124.

Full text
Abstract:
This paper presents a novel biologically-inspired and wavelet-based model for extracting features of faces from face images. The biological knowledge about the distribution of light receptors, cones and rods, over the surface of the retina, and the way they are associated with the nerve ends for pattern vision forms the basis for the design of this model. A combination of classical wavelet decomposition and wavelet packet decomposition is used for simulating the functional model of cones and rods in pattern vision. The paper also describes the experiments performed for face recognition using the features extracted on the AT & T face database (formerly, ORL face database) containing 400 face images of 40 different individuals. In the recognition stage, we used the Artificial Neural Network Classifier. A feature vector of size 40 is formed for face images of each person and recognition accuracy is computed using the ANN classifier. Overall recognition accuracy obtained for the AT & T face database is 95.5%.
APA, Harvard, Vancouver, ISO, and other styles
30

Wei, Yanling, Mao Wang, Hamid Reza Karimi, Nan Wang, and Jianbin Qiu. "ℋ∞ Model Reduction for Discrete-Time Markovian Jump Systems with Deficient Mode Information." Mathematical Problems in Engineering 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/537174.

Full text
Abstract:
This paper investigates the problem of ℋ∞ model reduction for a class of discrete-time Markovian jump linear systems (MJLSs) with deficient mode information, which simultaneously involves the exactly known, partially unknown, and uncertain transition probabilities. By fully utilizing the properties of the transition probability matrices, together with the convexification of uncertain domains, a new ℋ∞ performance analysis criterion for the underlying MJLSs is first derived, and then two approaches, namely, the convex linearisation approach and iterative approach, for the ℋ∞ model reduction synthesis are proposed. Finally, a simulation example is provided to illustrate the effectiveness of the proposed design methods.
APA, Harvard, Vancouver, ISO, and other styles
31

Peng, Yi, Gang Kou, Yong Shi, and Zhengxin Chen. "A Multi-criteria Convex Quadratic Programming model for credit data analysis." Decision Support Systems 44, no. 4 (March 2008): 1016–30. http://dx.doi.org/10.1016/j.dss.2007.12.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

PHRUKSAPHANRAT, BUSABA, and ARIO OHSATO. "LINEAR COORDINATION METHOD FOR FUZZY MULTI-OBJECTIVE LINEAR PROGRAMMING PROBLEMS WITH CONVEX POLYHEDRAL MEMBERSHIP FUNCTIONS." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 12, no. 03 (June 2004): 269–85. http://dx.doi.org/10.1142/s0218488504002837.

Full text
Abstract:
A fuzzy multi-objective decision-making method with nonlinear membership functions is proposed in this paper by assuming that the decision maker has a fuzzy goal for each objective function. The fuzzy goals can be quantified by convex polyhedral membership functions, which are expressed by linguistic terms. The concept of the convex cone is used to formulate a normalized convex polyhedral penalty function, which can also be considered conversely as a convex polyhedral membership function. The most desirable values of the membership functions are selected as reference membership values of achievement of the convex polyhedral membership functions, which can be viewed as an extension of the idea of the reference point method. The formulated model can be solved by existing linear programming solvers and can find the satisficing solution for the decision maker, which can be derived efficiently from among an M-Pareto optimal solution set together with the trade-off rates between the membership functions. The proposed model uses convex polyhedral membership functions to represent vague aspirations of the decision maker. It enriches the existing satisficing methods for fuzzy multi-objective linear programming in a more practical way with an effective method based on the convex cone.
APA, Harvard, Vancouver, ISO, and other styles
33

Montoya, Oscar Danilo, Farhad Zishan, and Diego Armando Giral-Ramírez. "Recursive Convex Model for Optimal Power Flow Solution in Monopolar DC Networks." Mathematics 10, no. 19 (October 5, 2022): 3649. http://dx.doi.org/10.3390/math10193649.

Full text
Abstract:
This paper presents a new optimal power flow (OPF) formulation for monopolar DC networks using a recursive convex representation. The hyperbolic relation between the voltages and power at each constant power terminal (generator or demand) is represented as a linear constraint for the demand nodes and generators. To reach the solution for the OPF problem, a recursive evaluation of the model is proposed that determines the voltage variables at iteration t+1 (v_{t+1}) by using the information of the voltages at iteration t (v_t). To finish the recursive solution process of the OPF problem via the convex relaxation, a difference between the voltage magnitudes in two consecutive iterations smaller than a predefined tolerance is used as the stopping criterion. The numerical results in the 85-bus grid demonstrate that the proposed recursive convex model can solve the classical power flow problem in monopolar DC networks, and it also solves the OPF problem efficiently with a reduced convergence error when compared with semidefinite programming and combinatorial optimization methods. In addition, the proposed approach can deal with radial and meshed monopolar DC networks without modifications in its formulation. All the numerical implementations were carried out in the MATLAB programming environment, and the convex models were solved with CVX and the Gurobi solver.
APA, Harvard, Vancouver, ISO, and other styles
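A sketch of a fixed-point iteration v_{t+1} = f(v_t) with a voltage-difference stopping rule, in the spirit of the recursive evaluation described above, applied to the classical power flow problem on a made-up 3-bus monopolar DC grid. This is the standard successive-approximation power flow, not the paper's convex OPF formulation; the conductance matrix and demands are assumptions.

```python
import numpy as np

# Bus 0 is the slack at 1.0 p.u.; buses 1 and 2 are constant-power demands.
G = np.array([[ 25.0, -15.0, -10.0],
              [-15.0,  27.0, -12.0],
              [-10.0, -12.0,  22.0]])        # assumed conductance matrix (p.u.)
p_d = np.array([-0.6, -0.4])                 # assumed demands at buses 1, 2 (p.u.)
v_s, tol = np.array([1.0]), 1e-10

G_dd, G_ds = G[1:, 1:], G[1:, :1]
v_d = np.ones(2)                             # flat start
for _ in range(100):
    # v_{t+1} = G_dd^{-1} (diag(v_t)^{-1} p_d - G_ds v_s)
    v_new = np.linalg.solve(G_dd, p_d / v_d - G_ds @ v_s)
    converged = np.max(np.abs(v_new - v_d)) < tol   # stopping rule on the voltage difference
    v_d = v_new
    if converged:
        break

print("demand-bus voltages (p.u.):", v_d)
```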
34

Fang, S. C., and J. R. Rajasekera. "Entropy Optimization Models with Convex Constraints." Information and Computation 116, no. 2 (February 1995): 304–11. http://dx.doi.org/10.1006/inco.1995.1022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Yang, Lijie, Songlin Nie, and Anqing Zhang. "Non-probabilistic wear reliability analysis of swash-plate/slipper of water hydraulic piston motor based on convex model." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 227, no. 3 (October 5, 2012): 609–19. http://dx.doi.org/10.1177/0954406212463501.

Full text
Abstract:
The water hydraulic piston motor is one of the important transmission components in a water hydraulic system. Because there is not enough experimental information to analyze reliability in water hydraulic motor design, the convex model is chosen as the wear reliability model for the hydraulic friction pairs, which is more economical and reasonable than the interval model. The non-probabilistic reliability index based on the convex model is established using a first-order Taylor series expansion, and the non-probabilistic reliability model of wear reliability analysis for the friction pair of swash-plate/slipper in the water hydraulic piston motor is introduced based on the convex model. For the non-probabilistic reliability model, only the bounds on the uncertain parameters are needed, instead of probabilistic density distributions or statistical quantities. Numerical examples are used to illustrate the validity and feasibility of the presented method.
APA, Harvard, Vancouver, ISO, and other styles
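A generic sketch of a first-order-Taylor non-probabilistic reliability index for interval variables, in the spirit of the index described above: eta = g(x_c) / sum_i |dg/dx_i(x_c)|·r_i, where x_c are the interval midpoints and r_i the radii, and eta > 1 indicates (to first order) that the limit state g > 0 holds over the whole uncertainty box. The limit-state function and bounds below are hypothetical, not the paper's swash-plate/slipper wear model.

```python
import numpy as np

def np_reliability_index(g, lower, upper, h=1e-6):
    xc = 0.5 * (lower + upper)                      # interval midpoints
    r = 0.5 * (upper - lower)                       # interval radii
    grad = np.array([(g(xc + h * e) - g(xc - h * e)) / (2 * h)
                     for e in np.eye(len(xc))])     # central-difference gradient at xc
    return g(xc) / np.sum(np.abs(grad) * r)

g = lambda x: x[0] * x[1] - 1.5 * x[2]              # assumed limit state (capacity minus demand)
lower = np.array([0.9, 1.8, 0.8])                   # assumed interval bounds
upper = np.array([1.1, 2.2, 1.2])
print("eta =", np_reliability_index(g, lower, upper))
```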
36

He, Xin-dang, Wen-xuan Gou, Yong-shou Liu, and Zong-zhan Gao. "A Practical Method of Nonprobabilistic Reliability and Parameter Sensitivity Analysis Based on Space-Filling Design." Mathematical Problems in Engineering 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/561202.

Full text
Abstract:
Using the convex model approach, the bounds of uncertain variables are only required rather than the precise probability distributions, based on which it can be made possible to conduct the reliability analysis for many complex engineering problems with limited information. In this paper, three types of convex model including interval, ellipsoid, and multiellipsoid convex uncertainty model are investigated, and a uniform model of nonprobabilistic reliability analysis is built. In the reliability analysis process, an effective space-filling design is introduced to generate representative samples of uncertainty space so as to reduce the computational cost and provide an accurate depiction of possible model outcome. Finally, Spearman’s rank correlation coefficient is used to perform parameters global sensitivity analysis. Three numerical examples are investigated to demonstrate the feasibility and accuracy of the presented method.
APA, Harvard, Vancouver, ISO, and other styles
37

Wu, Di, Yu Zhang, and Yong Chen. "Joint Optimization Method of Spectrum Resource for UAV Swarm Information Transmission." Electronics 11, no. 20 (October 19, 2022): 3372. http://dx.doi.org/10.3390/electronics11203372.

Full text
Abstract:
For the problems brought by malicious interference in the unmanned aerial vehicle (UAV) swarm network, we establish a cluster-based UAV swarm information transmission model. We mainly consider four aspects: cluster head selection, channel allocation, power allocation and UAV position. In order to improve the backhaul information rate of UAV swarm, we propose a joint optimization method of spectrum resource with the goal of maximizing the sum throughput of the cluster head UAV. We decompose the original mixed integer nonlinear programming (MINLP) problem into multiple sub-problems based on the block coordinate descent (BCD) technique, and then solve them based on convex optimization and successive convex approximation (SCA) technique. Simulation results show that the proposed method can obtain the spectrum resource optimization strategy of UAV swarm information transmission, reduce the impact of malicious interference, and effectively improve the backhaul information rate of UAV swarm.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhao, Chen Xi, Qun Zhang, Jin Wu Xu, Min Li, and Jian Hong Yang. "The Prediction Model of COREX Cold Gas Content of Carbon Dioxide Based on MOSC-PLS." Advanced Materials Research 339 (September 2011): 420–25. http://dx.doi.org/10.4028/www.scientific.net/amr.339.420.

Full text
Abstract:
In COREX processes, the cold gas is produced in melter gasifier, after being cooled and dust controlled, blown into the blast furnace and used in the reduction reaction. The cold gas content plays a key role in the reaction of lump ore and pellets reduction. A prediction model of COREX cold gas content of carbon dioxide is proposed based on modified orthogonal signal correction partial least squares algorithm (MOSC-PLS). Firstly, the input and output variables of the model are selected according to the COREX processes principle. Secondly, MOSC algorithm is used to preprocess the data, in order to remove the irrelevant information between the input and output variables of the model. Finally, prediction model is built based on PLS. The real field data of cold gas content of carbon dioxide from Baosteel COREX are used for verification. The results show that MOSC-PLS has an advantage over the orthogonal signal correction partial least squares (OSC-PLS) in prediction accuracy. Thus the necessary decision supports and analysis tools for the cold gas content control are provided.
APA, Harvard, Vancouver, ISO, and other styles
39

SONG, Kechen. "Convex Active Contour Segmentation Model of Strip Steel Defects Image Based on Local Information." Journal of Mechanical Engineering 48, no. 20 (2012): 1. http://dx.doi.org/10.3901/jme.2012.20.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Shiwa, Jianyun Zhang, Yunxiang Mao, Chengcheng Xu, and Yu Gu. "Efficient Distributed Method for NLOS Cooperative Localization in WSNs." Sensors 19, no. 5 (March 7, 2019): 1173. http://dx.doi.org/10.3390/s19051173.

Full text
Abstract:
The accuracy of cooperative localization can be severely degraded in non-line-of-sight (NLOS) environments. Although most existing approaches modify models to alleviate NLOS impact, computational speed does not satisfy practical applications. In this paper, we propose a distributed cooperative localization method for wireless sensor networks (WSNs) in NLOS environments. The convex model in the proposed method is based on projection relaxation. This model was designed for situations where prior information on NLOS connections is unavailable. We developed an efficient decomposed formulation for the convex counterpart, and designed a parallel distributed algorithm based on the alternating direction method of multipliers (ADMM), which significantly improves computational speed. To accelerate the convergence rate of local updates, we approached the subproblems via the proximal algorithm and analyzed its computational complexity. Numerical simulation results demonstrate that our approach is superior in processing speed and accuracy to other methods in NLOS scenarios.
APA, Harvard, Vancouver, ISO, and other styles
41

Su, Yong, Jiawei Jin, Weilong Peng, Keke Tang, Asad Khan, Simin An, and Meng Xing. "A Convex Relaxation Approach for Learning the Robust Koopman Operator." Wireless Communications and Mobile Computing 2022 (June 28, 2022): 1–11. http://dx.doi.org/10.1155/2022/5010251.

Full text
Abstract:
Although data-driven models, especially deep learning, have achieved astonishing results on many prediction tasks for nonlinear sequences, challenges still remain in finding an appropriate way to embed prior knowledge of physical dynamics in these models. In this work, we introduce a convex relaxation approach for learning robust Koopman operators of nonlinear dynamical systems, which are intended to construct approximate space spanned by eigenfunctions of the Koopman operator. Different from the classical dynamic mode decomposition, we use the layer weights of neural networks as eigenfunctions of the Koopman operator, providing intrinsic coordinates that globally linearize the dynamics. We find that the approximation of space can be regarded as an orthogonal Procrustes problem on the Stiefel manifold, which is highly sensitive to noise. The key contribution of this paper is to demonstrate that strict orthogonal constraint can be replaced by its convex relaxation, and the performance of the model can be improved without increasing the complexity when dealing with both clean and noisy data. After that, the overall model can be optimized via backpropagation in an end-to-end manner. The comparisons of the proposed method against several state-of-the-art competitors are shown on nonlinear oscillators and the lid-driven cavity flow.
APA, Harvard, Vancouver, ISO, and other styles
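As a least-squares baseline for the Koopman-operator approximation the abstract contrasts itself with (the classical dynamic mode decomposition step), the sketch below estimates a linear operator K from snapshot pairs by minimizing ||Y - KX||_F, i.e. K = Y X⁺. The dynamics and noise level are assumed; the paper's neural eigenfunctions and relaxed orthogonality constraint are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])        # assumed linear dynamics
X = rng.normal(size=(2, 200))                        # state snapshots
Y = A_true @ X + 0.01 * rng.normal(size=X.shape)     # noisy one-step-ahead snapshots

K = Y @ np.linalg.pinv(X)                            # least-squares Koopman/DMD estimate
print("estimation error:", np.linalg.norm(K - A_true))
```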
42

Li, Shi Jun, and Xu Yong Chen. "Non-Probabilistic Reliability of RC Bridge Retrofitted in Flexure with Externally Bonded Steel Plates." Applied Mechanics and Materials 351-352 (August 2013): 1590–95. http://dx.doi.org/10.4028/www.scientific.net/amm.351-352.1590.

Full text
Abstract:
Sufficient information about uncertain parameters is unavailable and very costly in the conventional reliability analysis of a retrofitted RC bridge. The non-probabilistic convex set only requires small samples to obtain the variation bounds of the uncertain parameters. An interval set denotes the variation range of an uncertain parameter, and two convex sets reflect the relationships of the uncertain parameters. The non-probabilistic reliability analysis model is developed based on the limit-state function and the convex model. The non-probabilistic reliability index is solved by the gradient projection method. The engineering example shows that the non-probabilistic reliability analysis is reasonable and economical, and improves the method and theory of reliability analysis for the retrofitted RC bridge.
APA, Harvard, Vancouver, ISO, and other styles
43

NENCKA, H., and R. F. STREATER. "INFORMATION GEOMETRY FOR SOME LIE ALGEBRAS." Infinite Dimensional Analysis, Quantum Probability and Related Topics 02, no. 03 (September 1999): 441–60. http://dx.doi.org/10.1142/s0219025799000254.

Full text
Abstract:
For certain unitary representations of a Lie algebra 𝔤 we define the statistical manifold ℳ of states as the convex cone of X ∈ 𝔤 for which the partition function Z = Tr exp{-X} is finite. The Hessian of Ψ = log Z defines a Riemannian metric g on ℳ (the Bogoliubov–Kubo–Mori metric); ℳ foliates into the union of coadjoint orbits, each of which can be given a complex structure (that of Kostant). The program is carried out for so(3), and for sl(2,R) in the discrete series. We show that ℳ = R+ × CP^1 and R+ × H, respectively. We show that for the metaplectic representation of the quadratic canonical algebra, ℳ = R+ × CP^2/Z_2. Exactly solvable model dynamics is constructed in each case.
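
The quantities named in the abstract can be written out explicitly; the coordinate parametrization θ^i below is an assumption about how X is expanded in a basis of the algebra.

```latex
% Partition function, free energy and the Bogoliubov--Kubo--Mori metric
% as described in the abstract (the choice of coordinates \theta^i is an
% assumed parametrization of X in a basis of the algebra):
Z(X) = \operatorname{Tr} e^{-X}, \qquad
\Psi(X) = \log Z(X), \qquad
g_{ij} = \frac{\partial^2 \Psi}{\partial \theta^i \, \partial \theta^j}.
```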
APA, Harvard, Vancouver, ISO, and other styles
44

Ikeda, Shiro, Toshiyuki Tanaka, and Shun-ichi Amari. "Stochastic Reasoning, Free Energy, and Information Geometry." Neural Computation 16, no. 9 (September 1, 2004): 1779–810. http://dx.doi.org/10.1162/0899766041336477.

Full text
Abstract:
Belief propagation (BP) is a universal method of stochastic reasoning. It gives exact inference for stochastic models with tree interactions and works surprisingly well even if the models have loopy interactions. Its performance has been analyzed separately in many fields, such as AI, statistical physics, information theory, and information geometry. This article gives a unified framework for understanding BP and related methods and summarizes the results obtained in many fields. In particular, BP and its variants, including tree reparameterization and the concave-convex procedure, are reformulated in information-geometrical terms, and their relations to the free energy function are elucidated from an information-geometrical viewpoint. We then propose a family of new algorithms. The stabilities of the algorithms are analyzed, and methods to accelerate them are investigated.
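
As a concrete reference point for the tree-exactness claim, here is a minimal sum-product BP pass on a three-node chain; the potentials are illustrative and unrelated to any model in the paper.

```python
# Sum-product belief propagation on a 3-node binary chain x1 - x2 - x3
# (a tree, so BP is exact). Potentials are illustrative only.
import numpy as np

psi1 = np.array([0.6, 0.4])                  # unary potentials
psi2 = np.array([0.5, 0.5])
psi3 = np.array([0.3, 0.7])
pair = np.array([[1.2, 0.8], [0.8, 1.2]])    # shared pairwise potential

# Messages along the chain
m12 = pair.T @ psi1                          # x1 -> x2
m32 = pair @ psi3                            # x3 -> x2

belief2 = psi2 * m12 * m32                   # marginal of x2 up to scale
belief2 /= belief2.sum()

# Brute-force check over all 8 joint states
joint = np.einsum('i,j,k,ij,jk->ijk', psi1, psi2, psi3, pair, pair)
exact2 = joint.sum(axis=(0, 2))
exact2 /= exact2.sum()
print(belief2, exact2)                       # identical on a tree
```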
APA, Harvard, Vancouver, ISO, and other styles
45

Lindberg, H. E. "Convex Models for Uncertain Imperfection Control in Multimode Dynamic Buckling." Journal of Applied Mechanics 59, no. 4 (December 1, 1992): 937–45. http://dx.doi.org/10.1115/1.2894064.

Full text
Abstract:
Control of uncertain imperfections by means of convex bounds on finite Fourier transforms is shown to be more direct and not as overly conservative as control based on uniform bounds, i.e., bounding maximum and minimum imperfections. With either method, conservatism in bounds on buckling response is reduced by filtering the imperfection measurements. Extraction of the needed filtered information by operating directly on the Fourier coefficients is straightforward and allows use of additional information on the variation of the coefficients with mode number. Use of this information in example multimode buckling problems gives a bound on maximum possible buckling response that is a factor of 1.6 larger than the response at a reliability of 99.5 percent for hypothetical (but reasonably representative) probabilistic imperfections. The bounding response itself, of course, does not depend on any assumptions concerning the probabilistic distribution of imperfections. Two additional combined uniform and Fourier ellipsoid bound models further reduce this factor to 1.1 and 0.5, and require only a simple, unfiltered imperfection bound measurement during quality control inspection.
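
To illustrate the kind of worst-case bound an ellipsoidal convex model delivers (not Lindberg's specific buckling computation), the maximum of a linear response over an ellipsoid of Fourier coefficients has a simple closed form:

```latex
% If the imperfection Fourier coefficients a are only known to satisfy
% a^{\mathsf T} W a \le 1 and the response of interest is linear in a,
% say r = c^{\mathsf T} a, then the worst case is
\max_{a^{\mathsf T} W a \le 1} \; c^{\mathsf T} a
  \;=\; \sqrt{c^{\mathsf T} W^{-1} c} .
```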
APA, Harvard, Vancouver, ISO, and other styles
46

Zhou, Bin, Dong-jun Ye, Wei Wei, and Marcin Wozniak. "Alternating Direction Projections onto Convex Sets for Super-Resolution Image Reconstruction." Information Technology And Control 49, no. 1 (March 25, 2020): 179–90. http://dx.doi.org/10.5755/j01.itc.49.1.24121.

Full text
Abstract:
Image reconstruction is important in computer vision, and many techniques have been proposed to improve its results. In this paper, gradient information is introduced to define new convex sets, and a novel POCS-based model is proposed for super-resolution reconstruction. Projection onto the convex sets alternates between the gray-value field and the gradient field. Local noise estimation is then introduced to determine the threshold adaptively. The efficiency of the proposed model is verified by several numerical experiments; the results show that both the PSNR and the SSIM are significantly improved.
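
The sketch below shows only one ingredient of such a scheme: a thresholded data-consistency projection for a toy 1-D averaging downsampler. A full POCS loop would alternate this with projections defined on the gradient field, as the abstract describes; the operator, threshold and sizes here are all illustrative.

```python
# Thresholded data-consistency projection in the POCS spirit (1-D toy example,
# averaging downsampler). Names (delta, sizes) are illustrative only.
import numpy as np

def downsample(x):                 # observation operator: average pixel pairs
    return x.reshape(-1, 2).mean(axis=1)

def project_data(x, y, delta):
    """Move x into the set {x : |downsample(x) - y| <= delta}."""
    r = downsample(x) - y
    # only correct where the residual exceeds the (noise-derived) threshold
    corr = np.where(np.abs(r) > delta, r - np.sign(r) * delta, 0.0)
    # spread each low-res correction back onto the two pixels it came from
    return x - np.repeat(corr, 2)

rng = np.random.default_rng(2)
hr_true = rng.random(16)
y = downsample(hr_true) + 0.01 * rng.normal(size=8)

x = np.repeat(y, 2)                           # crude initial upsampling
for _ in range(5):
    x = project_data(x, y, delta=0.02)
print(np.abs(downsample(x) - y).max())        # now within the threshold
```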
APA, Harvard, Vancouver, ISO, and other styles
47

Hagymássy, Zoltán. "Approximate Model for Examination of Cone Dispenser Eccentricity." Acta Agraria Debreceniensis, no. 15 (December 14, 2004): 47–49. http://dx.doi.org/10.34101/actaagrar/15/3357.

Full text
Abstract:
The most important distributing component of small-plot seed drills and fertiliser dispensers is the cone dispenser. The cone dispenser can operate by simple gravity or with an Oyjord-type cone-cell or Hege-type cone-belt structure. The unevenness of spreading of each type is significantly influenced by the feeding roll above the cone and by the misalignment of the cone dispenser. During design, preparation and setting I realised that this work could be done much faster and more precisely if there were a model for testing the misalignment of the cone dispenser. In this article I describe the essence of the mathematical model and its derivation. For the calculations I prepared a chart program in Microsoft Excel. In order to test the computed model I carried out experimental examinations, using a test bench assembled for the measurements. I found that the mathematical model describes the unevenness of spreading caused by misalignment. In addition, I found that even a misalignment of 0.25-0.5 mm can be detected by measurement and is proportional to the variation factor determined theoretically.
APA, Harvard, Vancouver, ISO, and other styles
48

Gaschler, Andre, Ronald P. A. Petrick, Oussama Khatib, and Alois Knoll. "KABouM: Knowledge-Level Action and Bounding Geometry Motion Planner." Journal of Artificial Intelligence Research 61 (February 27, 2018): 323–62. http://dx.doi.org/10.1613/jair.5560.

Full text
Abstract:
For robots to solve real world tasks, they often require the ability to reason about both symbolic and geometric knowledge. We present a framework, called KABouM, for integrating knowledge-level task planning and motion planning in a bounding geometry. By representing symbolic information at the knowledge level, we can model incomplete information, sensing actions and information gain; by representing all geometric entities--objects, robots and swept volumes of motions--by sets of convex polyhedra, we can efficiently plan manipulation actions and raise reasoning about geometric predicates, such as collisions, to the symbolic level. At the geometric level, we take advantage of our bounded convex decomposition and swept volume computation with quadratic convergence, and fast collision detection of convex bodies. We evaluate our approach on a wide set of problems using real robots, including tasks with multiple manipulators, sensing and branched plans, and mobile manipulation.
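
One primitive the abstract relies on is collision checking between convex bodies. The sketch below decides whether two convex hulls (given by vertex sets) intersect by posing an LP feasibility problem; this is a compact illustration, not necessarily the method KABouM uses internally.

```python
# Do conv(V) and conv(W) share a point? They do iff convex weights lambda, mu
# exist with V^T lambda = W^T mu, sum(lambda) = sum(mu) = 1 -- an LP feasibility
# check. Illustrative only; not the planner's internal routine.
import numpy as np
from scipy.optimize import linprog

def hulls_intersect(V, W):
    """V: (m, d) vertices of body 1, W: (n, d) vertices of body 2."""
    m, d = V.shape
    n = W.shape[0]
    A_eq = np.zeros((d + 2, m + n))
    A_eq[:d, :m] = V.T                  # sum_i lambda_i v_i ...
    A_eq[:d, m:] = -W.T                 # ... equals sum_j mu_j w_j
    A_eq[d, :m] = 1.0                   # lambda sums to 1
    A_eq[d + 1, m:] = 1.0               # mu sums to 1
    b_eq = np.concatenate([np.zeros(d), [1.0, 1.0]])
    res = linprog(c=np.zeros(m + n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + n), method="highs")
    return res.status == 0              # feasible <=> the hulls intersect

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(hulls_intersect(square, square + 0.5))   # True  (overlapping)
print(hulls_intersect(square, square + 2.0))   # False (disjoint)
```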
APA, Harvard, Vancouver, ISO, and other styles
49

Paul, Grégory, Janick Cardinale, and Ivo F. Sbalzarini. "Coupling Image Restoration and Segmentation: A Generalized Linear Model/Bregman Perspective." International Journal of Computer Vision 104, no. 1 (March 8, 2013): 69–93. http://dx.doi.org/10.1007/s11263-013-0615-2.

Full text
Abstract:
We introduce a new class of data-fitting energies that couple image segmentation with image restoration. These functionals model the image intensity using the statistical framework of generalized linear models. By duality, we establish an information-theoretic interpretation using Bregman divergences. We demonstrate how this formulation couples in a principled way image restoration tasks such as denoising, deblurring (deconvolution), and inpainting with segmentation. We present an alternating minimization algorithm to solve the resulting composite photometric/geometric inverse problem. We use Fisher scoring to solve the photometric problem and to provide asymptotic uncertainty estimates. We derive the shape gradient of our data-fitting energy and investigate convex relaxation for the geometric problem. We introduce a new alternating split-Bregman strategy to solve the resulting convex problem and present experiments and comparisons on both synthetic and real-world images.
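
The Bregman divergence underlying the information-theoretic interpretation has the standard form below; the specific generator φ attached to each GLM noise model is not spelled out here.

```latex
% Standard Bregman divergence for a convex generator \varphi:
D_{\varphi}(u, v) \;=\; \varphi(u) - \varphi(v)
  - \langle \nabla \varphi(v),\, u - v \rangle ,
% e.g. \varphi(u) = \tfrac12\|u\|^2 gives the squared Euclidean distance,
% and \varphi(u) = \sum_i u_i \log u_i gives the (generalized)
% Kullback--Leibler divergence.
```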
APA, Harvard, Vancouver, ISO, and other styles
50

Lee, L. H., and K. Poolla. "On Statistical Model Validation." Journal of Dynamic Systems, Measurement, and Control 118, no. 2 (June 1, 1996): 226–36. http://dx.doi.org/10.1115/1.2802308.

Full text
Abstract:
In this paper we formulate a particular statistical model validation problem in which we wish to determine the probability that a certain hypothesized parametric uncertainty model is consistent with a given input-output data record. Using a Bayesian approach and ideas from the field of hypothesis testing, we show that in many cases of interest this problem reduces to computing relative weighted volumes of convex sets in RN (where N is the number of uncertain parameters). We also present and discuss a randomized algorithm based on gas kinetics, as well as the existing Hit-and-Run family of algorithms, for probable approximate computation of these volumes.
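
The core computation is estimating the relative (weighted) volume of a convex set of parameters inside a prior region. The paper discusses Hit-and-Run style samplers; the sketch below uses plain rejection sampling only because it fits in a few lines, and the "data-consistent" set (a ball) is purely illustrative.

```python
# Hit-or-miss estimate of the relative volume of a convex parameter set
# inside a prior box; illustrative stand-in for the volume computation
# discussed in the abstract (the paper itself uses smarter samplers).
import numpy as np

rng = np.random.default_rng(3)
N, dim = 200_000, 3                                 # samples, parameter count
theta = rng.uniform(-1.0, 1.0, size=(N, dim))       # uniform prior over the box

consistent = np.linalg.norm(theta, axis=1) <= 0.8   # toy "consistent" set
prob_valid = consistent.mean()                      # relative volume estimate

exact = (4 / 3) * np.pi * 0.8**3 / 2.0**3           # analytic check for a ball
print(prob_valid, exact)                            # the two should be close
```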
APA, Harvard, Vancouver, ISO, and other styles
