Journal articles on the topic 'Constraint weighting'

Consult the top 50 journal articles for your research on the topic 'Constraint weighting.'


1

Zhang, Jie. "Constraint weighting and constraint domination: a formal comparison." Phonology 24, no. 3 (November 30, 2007): 433–59. http://dx.doi.org/10.1017/s0952675707001285.

Abstract:
The advent of Optimality Theory has revived interest in articulatorily and perceptually driven markedness in phonological research. To some researchers, the cross-linguistic prevalence of such markedness relations is an indication that synchronic phonological grammar should include phonetic detail. However, there are at least two distinct ways in which phonetics can be incorporated into an optimality-theoretic grammar: traditional constraint domination and Flemming's (2001) proposal that the costs of constraint violations should be weighted and summed. I argue that constraint weighting is unnecessary as an innovation in Optimality Theory. The arguments are twofold. First, using constraint families with intrinsic rankings, constraint domination formally predicts the same range of phonological realisations as constraint weighting. Second, with proper constraint definitions and rankings, both the additive effect and the locus effect predicted by constraint weighting can be replicated under constraint domination.
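The contrast the abstract draws can be sketched in a few lines. The candidates, violation counts, and weights below are hypothetical, not Zhang's data:

```python
# Strict constraint domination (OT) compares violation vectors
# lexicographically by ranking; constraint weighting (Harmonic Grammar
# style) multiplies violations by weights and sums them.

def harmony(violations, weights):
    """Weighted sum of violations; lower is better."""
    return sum(w * v for w, v in zip(weights, violations))

def ot_winner(candidates):
    """Strict domination: lexicographic comparison, highest-ranked constraint first."""
    return min(candidates, key=lambda c: tuple(c["violations"]))

def hg_winner(candidates, weights):
    return min(candidates, key=lambda c: harmony(c["violations"], weights))

candidates = [
    {"form": "ta",  "violations": [0, 2]},  # two marks on the low-ranked constraint
    {"form": "tat", "violations": [1, 0]},  # one mark on the high-ranked constraint
]

# Under domination, 'ta' wins no matter how many low-ranked marks it incurs.
assert ot_winner(candidates)["form"] == "ta"
# Under weighting, enough low-ranked marks can gang up and flip the outcome.
assert hg_winner(candidates, [3.0, 1.0])["form"] == "ta"
assert hg_winner(candidates, [1.0, 1.0])["form"] == "tat"
```

The last assertion is exactly the "additive effect" at issue: several violations of weakly weighted constraints jointly outweigh one violation of a strongly weighted constraint, which strict domination can never produce without re-ranking.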
2

Gillett, Nathan P. "Weighting climate model projections using observational constraints." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 373, no. 2054 (November 13, 2015): 20140425. http://dx.doi.org/10.1098/rsta.2014.0425.

Abstract:
Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081–2100 relative to 1986–2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5–95% warming range of 0.8–2.5 K is somewhat lower than the unweighted range of 1.1–2.6 K reported in the IPCC AR5.
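The weighting scheme described here can be illustrated with a toy sketch. The model TCR values, warming projections, and the Gaussian form assumed for the observational constraint below are illustrative inventions, not the paper's data:

```python
import numpy as np

# Weight each model's projected warming by how consistent its Transient
# Climate Response (TCR) is with an observationally constrained estimate.

tcr_models = np.array([1.2, 1.6, 1.8, 2.1, 2.5])  # per-model TCR (K), hypothetical
warming    = np.array([1.1, 1.6, 1.9, 2.2, 2.6])  # per-model 2081-2100 warming (K)

tcr_obs, tcr_sd = 1.6, 0.3  # assumed observational constraint (mean, std)

# Gaussian likelihood of each model's TCR under the observational estimate.
w = np.exp(-0.5 * ((tcr_models - tcr_obs) / tcr_sd) ** 2)
w /= w.sum()

weighted_mean = float(w @ warming)
unweighted_mean = float(warming.mean())

# Weighting pulls the projection toward models whose TCR matches observations,
# here lowering the estimate, as in the abstract's 0.8-2.5 K vs. 1.1-2.6 K ranges.
assert weighted_mean < unweighted_mean
```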
3

Buatoom, Uraiwan, Waree Kongprawechnon, and Thanaruk Theeramunkong. "Document Clustering Using K-Means with Term Weighting as Similarity-Based Constraints." Symmetry 12, no. 6 (June 6, 2020): 967. http://dx.doi.org/10.3390/sym12060967.

Abstract:
In similarity-based constrained clustering, various approaches have been proposed for defining the similarity between documents so as to guide the grouping of similar documents. This paper presents an approach that uses term-distribution statistics, extracted from a small number of cue instances with known classes, as term weightings that act as an indirect distance constraint. Three types of term-oriented standard deviations are exploited as distribution-based term weightings: the distribution of a term in the collection (SD), the average distribution of a term within a class (ACSD), and the average distribution of a term among classes (CSD). These term weightings are explored with symmetry in mind, varying their magnitudes between positive and negative to obtain promoting and demoting effects from the three standard deviations. Following the symmetry concept, both seeded and unseeded centroid initializations of k-means are investigated and compared to centroid-based classification. Our experiments use five English text collections (Amazon, DI, WebKB1, WebKB2, and 20Newsgroup) and one Thai text collection (TR, a collection of Thai reform-related opinions). Compared to the conventional TFIDF, the distribution-based term weighting improves the centroid-based method, seeded k-means, and k-means with error reduction rates of 22.45%, 31.13%, and 58.96%, respectively.
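The first of the three statistics, the collection-wide standard deviation (SD) of a term's distribution, can be sketched as follows. The helper name `term_sd` and the toy counts are hypothetical, and the promoting/demoting step is only described in a comment:

```python
import math

def term_sd(term_counts):
    """Population standard deviation of a term's frequency across documents."""
    n = len(term_counts)
    mean = sum(term_counts) / n
    return math.sqrt(sum((c - mean) ** 2 for c in term_counts) / n)

# Term frequencies across four documents.
counts_bursty  = [9, 0, 0, 0]   # concentrated in one document
counts_uniform = [2, 2, 2, 3]   # spread evenly

# A bursty term has a higher SD; applying the statistic with positive or
# negative magnitude then promotes or demotes such terms in the weighting.
assert term_sd(counts_bursty) > term_sd(counts_uniform)
```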
4

Jing, Hao, James Pinchin, Chris Hill, and Terry Moore. "An Adaptive Weighting based on Modified DOP for Collaborative Indoor Positioning." Journal of Navigation 69, no. 2 (September 23, 2015): 225–45. http://dx.doi.org/10.1017/s037346331500065x.

Abstract:
Indoor localisation has always been a challenging problem because of poor Global Navigation Satellite System (GNSS) availability in such environments. While inertial measurement sensors have become popular solutions for indoor positioning, they suffer large drifts after initialisation. Collaborative positioning enhances robustness by integrating multiple sources of localisation information, especially relative ranging measurements between local users and transmitters. However, not all ranging measurements are useful throughout the positioning process, and integrating too much data increases the computational cost. To enable a more reliable positioning system, an adaptive collaborative positioning algorithm is proposed which selects units for the collaborative network and integrates ranging measurements to constrain inertial measurement errors. The algorithm selects the network adaptively from three perspectives: the network geometry, the network size, and the accuracy level of the ranging measurements between the units. The collaborative relative constraint is then defined according to the selected network geometry and the anticipated measurement quality. In trials with real data, adjusting the range constraint adaptively according to the selected network situation improves positioning accuracy by 60% while also improving system robustness.
5

Chien, Wei, Chien-Ching Chiu, Po-Hsiang Chen, Yu-Ting Cheng, Eng Hock Lim, Yue-Li Liang, and Jia-Rui Wang. "Different Object Functions for SWIPT Optimization by SADDE and APSO." Symmetry 13, no. 8 (July 24, 2021): 1340. http://dx.doi.org/10.3390/sym13081340.

Abstract:
Multiple objective functions combined with beamforming techniques have been studied for Simultaneous Wireless Information and Power Transfer (SWIPT) at millimeter wave. Using the feed length to adjust the phase for the different SWIPT objectives, Bit Error Rate (BER) and Harvesting Power (HP), is investigated for broadband communication. A symmetrical antenna array is useful for omnidirectional beamforming adjustment with multiple receivers. Self-Adaptive Dynamic Differential Evolution (SADDE) and Asynchronous Particle Swarm Optimization (APSO) are used to optimize the feed lengths of the antenna array. Two different objective functions are proposed in the paper. The first is a weighting factor multiplying the BER constraint and HP, plus HP. The second is the BER constraint multiplying HP. Simulations show that the first objective function is capable of optimizing the total harvesting power under the BER constraint, and that APSO converges more quickly than SADDE. However, the weighting in the first objective function requires a pretest in advance, whereas the second objective function needs no case-by-case weighting and its search is more efficient. The numerical results show that the proposed criterion achieves the SWIPT requirement. Thus, the novel second criterion can be used to optimize the SWIPT problem without testing weightings case by case.
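A plausible reading of the two objective functions can be sketched as follows. The exact formulations, the names `objective_1`/`objective_2`, and the penalty form are assumptions based on the abstract, not the paper's equations:

```python
def objective_1(ber, hp, ber_max, alpha):
    """Weighted penalty on violating the BER requirement, plus harvested power.
    `alpha` is the weighting factor that must be pretested case by case."""
    penalty = max(0.0, ber - ber_max)
    return hp - alpha * penalty

def objective_2(ber, hp, ber_max):
    """BER-constraint indicator multiplying HP: no weight to tune."""
    return hp if ber <= ber_max else 0.0

# A candidate violating the BER requirement scores zero under the second
# objective regardless of any weighting choice; a feasible one keeps its HP.
assert objective_2(ber=1e-2, hp=5.0, ber_max=1e-3) == 0.0
assert objective_2(ber=1e-4, hp=4.0, ber_max=1e-3) == 4.0
```

This illustrates the abstract's point: the second criterion removes the case-by-case tuning of `alpha` entirely.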
6

Xie, Minghua, Lili Xie, and Peidong Zhu. "An Efficient Feature Weighting Method for Support Vector Regression." Mathematical Problems in Engineering 2021 (March 2, 2021): 1–7. http://dx.doi.org/10.1155/2021/6675218.

Abstract:
Support vector regression (SVR) is a powerful kernel-based method which has been successfully applied to regression problems. In feature-weighted SVR algorithms, each feature's contribution to the model output is taken into account. However, model performance is sensitive to the feature weights and to the time consumed in training. In this paper, an efficient feature-weighted SVR is proposed. First, a value constraint for each weight is obtained from the maximal information coefficient, which reveals the relationship between each input feature and the output. Then, a constrained particle swarm optimization (PSO) algorithm is employed to optimize the feature weights and the hyperparameters simultaneously. Finally, the optimal weights are used to modify the kernel function. Simulation experiments were conducted on four synthetic datasets and seven real datasets using the proposed model, classical SVR, and several state-of-the-art feature-weighted SVR models. The results show that the proposed method has superior generalization ability within acceptable time.
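The final step, modifying the kernel with the optimized weights, can be illustrated with a feature-weighted RBF kernel. The paper's exact kernel form is not reproduced here; this is an assumed common variant with toy vectors:

```python
import numpy as np

def weighted_rbf(x, z, weights, gamma=1.0):
    """RBF kernel with per-feature weights scaling each squared difference."""
    d2 = np.sum(weights * (x - z) ** 2)
    return np.exp(-gamma * d2)

x = np.array([1.0, 0.0])
z = np.array([0.0, 1.0])

# Down-weighting the second (presumed less informative) feature makes x and z
# look more similar than under uniform weights, reshaping the regression.
k_uniform  = weighted_rbf(x, z, np.array([1.0, 1.0]))
k_weighted = weighted_rbf(x, z, np.array([1.0, 0.1]))
assert k_weighted > k_uniform
```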
7

Zhu, Hancan, and Guanghua He. "Joint Neighboring Coding with a Low-Rank Constraint for Multi-Atlas Based Image Segmentation." Journal of Medical Imaging and Health Informatics 10, no. 2 (February 1, 2020): 310–15. http://dx.doi.org/10.1166/jmihi.2020.2884.

Abstract:
Multi-atlas methods have been successful in solving many medical image segmentation problems. Under the multi-atlas segmentation framework, the labels of atlases are first propagated into the target image space using the deformation fields generated by registering the atlas images onto a target image, and these labels are then fused to obtain the final segmentation. While many label fusion strategies have been developed, weighting-based label fusion methods have attracted considerable attention. In this paper, we first present a unified framework for weighting-based label fusion methods. Under this framework, we find that most recently developed weighting-based label fusion methods jointly consider the pair-wise dependency between atlases. However, they label the voxels to be segmented independently, ignoring the neighboring spatial structure that might be informative for obtaining robust segmentations of noisy images. Taking into consideration the potential correlation among neighboring voxels, we propose a joint coding method (JCM) with a low-rank constraint for multi-atlas based image segmentation, within a general framework that unifies existing weighting-based label fusion methods. The method has been validated on hippocampus segmentation from MR images. We demonstrate that our method achieves segmentation performance competitive with state-of-the-art methods, especially when image quality is poor.
8

Zhao, Qingzhi, Yibin Yao, and Wanqiang Yao. "A troposphere tomography method considering the weighting of input information." Annales Geophysicae 35, no. 6 (December 13, 2017): 1327–40. http://dx.doi.org/10.5194/angeo-35-1327-2017.

Abstract:
Troposphere tomography using a global navigation satellite system (GNSS) generally draws on several types of input information: the observation equations, horizontal constraint equations, vertical constraint equations, and a priori constraint equations. Reasonable weighting of this input information is a prerequisite for a reliable adjustment of the parameters. This is the focus of this research, which determines both the weightings of the observations within the same type of equation and the optimal weightings across different types of equations. The optimal weightings of the proposed method are realized on the basis of a stable equilibrium relationship between the different types of a posteriori unit weight variances, which adaptively adjusts the weightings of the different equation types and drives the ratio between any two a posteriori unit weight variances toward unity. A troposphere tomography experiment considering these weightings was carried out using global positioning system (GPS) data from the Hong Kong Satellite Positioning Reference Station Network (SatRef). Numerical results show the applicability and stability of the proposed method for GPS troposphere tomography under different weather conditions. In addition, the root mean square (RMS) errors of the water vapor density differences between tomography and radiosonde and between tomography and ECMWF (European Centre for Medium-Range Weather Forecasts) data are 0.91 and 1.63 g m−3, respectively, over a 21-day test.
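The equilibrium idea, rescaling each equation group's weight by its a posteriori unit weight variance, can be sketched in simplified form. This is a Helmert-style reweighting step on toy data, not the paper's tomography system:

```python
import numpy as np

def solve_wls(groups):
    """Weighted least squares over stacked equation groups (A, b, w)."""
    N = sum(w * A.T @ A for A, b, w in groups)
    u = sum(w * A.T @ b for A, b, w in groups)
    return np.linalg.solve(N, u)

def rebalance(groups):
    """One reweighting step: scale each group's weight by its a posteriori
    unit weight variance, driving the variance ratios toward unity."""
    x = solve_wls(groups)
    out = []
    for A, b, w in groups:
        r = b - A @ x
        s2 = w * float(r @ r) / len(b)   # a posteriori unit weight variance
        out.append((A, b, w / s2))       # noisy groups get down-weighted
    return out

# Two groups observing the same scalar: precise observations vs. a noisy
# constraint, both starting with weight 1.
A1, b1 = np.ones((3, 1)), np.array([1.0, 1.01, 0.99])
A2, b2 = np.ones((2, 1)), np.array([1.5, 0.5])
groups = rebalance([(A1, b1, 1.0), (A2, b2, 1.0)])

# The noisy constraint group ends up with a much smaller relative weight.
assert groups[0][2] > groups[1][2]
```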
9

Kimmel, Melanie, Jannick Pfort, Jan Wöhlke, and Sandra Hirche. "Shared invariance control for constraint satisfaction in multi-robot systems." International Journal of Robotics Research 38, no. 10-11 (August 12, 2019): 1268–85. http://dx.doi.org/10.1177/0278364919867133.

Abstract:
In systems involving multiple intelligent agents, e.g. multi-robot systems, the satisfaction of environmental, inter-agent, and task constraints is essential to ensure safe and successful task execution. This requires a constraint enforcing control scheme, which is able to allocate and distribute the required evasive control actions adequately among the agents, ideally according to the role of the agents or the importance of the executed tasks. In this work, we propose a shared invariance control scheme in combination with a suitable agent prioritization to control multiple agents safely and reliably. Based on the projection of the constraints into the input spaces of the individual agents using input–output linearization, shared invariance control determines constraint enforcing control inputs and facilitates implementation in a distributed manner. In order to allow for shared evasive actions, the control approach introduces weighting factors derived from a two-stage prioritization scheme, which allots the weights according to a variety of factors such as a fixed task priority, the number of constraints affecting each agent or a manipulability measure. The proposed control scheme is proven to guarantee constraint satisfaction. The approach is illustrated in simulations and an experimental evaluation on a dual-arm robotic platform.
10

Yin, Fulian, Lu Lu, Jianping Chai, and Yanbing Yang. "Combination Weighting Method Based on Maximizing Deviations and Normalized Constraint Condition." International Journal of Security and Its Applications 10, no. 2 (February 28, 2016): 39–50. http://dx.doi.org/10.14257/ijsia.2016.10.2.04.

11

Bouhmala, N. "A Variable Depth Search Algorithm for Binary Constraint Satisfaction Problems." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/637809.

Abstract:
The constraint satisfaction problem (CSP) is a widely used paradigm for modeling a broad spectrum of optimization problems in artificial intelligence. This paper presents a fast metaheuristic for solving binary constraint satisfaction problems. The method can be classified as a variable depth search metaheuristic that combines greedy local search with a self-adaptive weighting strategy on the constraint weights. Several metaheuristics using various penalty-weight mechanisms on the constraints have been developed in the past. What distinguishes the proposed metaheuristic is the update of k variables during each iteration when moving from one assignment of values to another. The benchmark is based on hard random constraint satisfaction problems with several features that make them of great theoretical and practical interest. The results show that the proposed metaheuristic is capable of solving hard unsolved problems that remain a challenge for both complete and incomplete methods. In addition, the proposed metaheuristic is remarkably faster than all existing solvers when tested on previously solved instances. Finally, in contrast to other metaheuristics, it requires no parameter tuning, making it highly suitable for practical scenarios.
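The self-adaptive constraint-weighting idea can be sketched with a generic breakout-style local search. This is not the paper's variable depth search; the function names and the toy 3-colouring instance are illustrative:

```python
# Greedy local search for a binary CSP in which the weights of constraints
# violated at a local optimum are incremented, reshaping the cost surface
# so the search can escape the optimum.

def weighted_search(variables, domains, constraints, steps=1000):
    assign = {v: domains[v][0] for v in variables}
    weights = [1] * len(constraints)

    def cost(a):
        return sum(w for w, (x, y, ok) in zip(weights, constraints)
                   if not ok(a[x], a[y]))

    for _ in range(steps):
        if cost(assign) == 0:
            return assign
        moved = False
        for var in variables:
            best = min(domains[var], key=lambda val: cost({**assign, var: val}))
            if cost({**assign, var: best}) < cost(assign):
                assign[var] = best
                moved = True
        if not moved:  # local optimum: bump weights of violated constraints
            weights = [w + (0 if ok(assign[x], assign[y]) else 1)
                       for w, (x, y, ok) in zip(weights, constraints)]
    return None

# Tiny instance: 3-colour a triangle (each constraint requires different values).
ne = lambda a, b: a != b
cons = [("A", "B", ne), ("B", "C", ne), ("A", "C", ne)]
doms = {v: [0, 1, 2] for v in "ABC"}
sol = weighted_search(["A", "B", "C"], doms, cons)
assert sol is not None and all(ok(sol[x], sol[y]) for x, y, ok in cons)
```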
12

Starkey, J. M., and P. M. Kelecy. "Simultaneous Structural and Control Design Using Constraint Functions." Journal of Mechanisms, Transmissions, and Automation in Design 110, no. 1 (March 1, 1988): 65–72. http://dx.doi.org/10.1115/1.3258908.

Abstract:
A design technique is presented which modifies system dynamics by simultaneously considering control system gains and structural design parameters. Constraint functions are devised that become smaller as (1) structural design parameters and feedback gains become smaller, and (2) closed-loop eigenvalues migrate toward more desirable regions. By minimizing a weighted sum of these functions, the interaction between design performance and design parameters can be explored. Examples are given that show the effects of the weighting parameters, and the potential advantages of this technique over traditional pole placement techniques.
13

Liu, Bin, Chang-Hong Wang, Wei Li, and Zhuo Li. "Robust Controller Design Using the Nevanlinna-Pick Interpolation in Gyro Stabilized Pod." Discrete Dynamics in Nature and Society 2010 (2010): 1–16. http://dx.doi.org/10.1155/2010/569850.

Abstract:
The sensitivity minimization of a feedback system is solved using the theory of Nevanlinna-Pick interpolation with degree constraint, without weighting functions. The dynamic characteristics of the second-order system are investigated in detail; they are determined by the location of the spectral zeros, the upper bound γ of S, the length of the spectral radius, and the additional interpolation constraints. Guidelines on how to tune the design parameters are provided. A gyro-stabilized pod, a typical tracking system based on a two-axis, four-frame structure, is studied, and the robust controller is designed via Nevanlinna-Pick interpolation with degree constraint. When both LuGre-model friction and disturbance are present, the closed-loop system exhibits strong disturbance rejection and high tracking precision. Numerical examples illustrate the potential of the method for designing robust controllers of relatively low degree.
14

Jung, B., K.-I. Jang, B.-K. Min, S. J. Lee, and J. Seok. "Parameter optimization for finishing hard materials with magnetorheological fluid using the penalized multi-response Taguchi method." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 223, no. 8 (May 6, 2009): 955–68. http://dx.doi.org/10.1243/09544054jem1351.

Abstract:
This paper presents a novel penalized multi-response Taguchi method that is used to determine the optimal conditions and parameters for a wheel-type magnetorheological (MR) finishing process that uses sintered iron-carbon nanotube (I-CNT) abrasives. The main goal of this study is to achieve the best compromise, within given boundary constraint conditions, between the maximum material removal rate and the minimum surface roughness. The proposed Taguchi method includes two main parameters, namely the weighting loss factor and the severity factor, that account, respectively, for the response weights and the constraint conditions and that control the optimality direction. The method is applied to the finishing of hard-disk slider surfaces made of Al2O3-TiC, and the effects of the weighting loss and severity factors, along with their significance and relative importance in optimizing the finishing process, are thoroughly examined.
15

Zhang, Xiaoling, Debiao Meng, Ruan-Jian Yang, Zhonglai Wang, and Hong-Zhong Huang. "Bounded Target Cascading in Hierarchical Design Optimization." Advances in Mechanical Engineering 6 (January 1, 2014): 790620. http://dx.doi.org/10.1155/2014/790620.

Abstract:
For large-scale systems, analytical target cascading, as a hierarchical multilevel decomposition method for design optimization, coordinates the inconsistency between the assigned targets and the responses at each level by a weighted-sum formulation. To avoid the problems associated with weighting coefficients, this paper formulates single objective functions for hierarchical design optimization using a bounded target cascading (BTC) method. In the BTC method, a single objective optimization problem is formulated at the system level, and two kinds of coordination constraints are added: a bound constraint on the design points, based on the responses from each subsystem level, and a linear equality constraint on the common variables, based on their sensitivities with respect to each subsystem. At each subsystem level, the deviation from the target design point is minimized in the objective function, and the common variables are constrained by target bounds. The targets are thus coordinated using the optimization iteration information of the hierarchical design problem and the performance of the subsystems, and the BTC method converges to the global optimum efficiently. Finally, results from the BTC method and the weighted-sum analytical target cascading method are compared and discussed.
16

Gan, Keng Hoon, and Keat Keong Phang. "Finding target and constraint concepts for XML query construction." International Journal of Web Information Systems 11, no. 4 (November 16, 2015): 468–90. http://dx.doi.org/10.1108/ijwis-04-2015-0017.

Abstract:
Purpose – This paper aims at the automatic selection of two important structural concepts required in an XML query, namely, the target and constraint concepts, given a keyword query. Owing to the diversity of concepts used in XML resources, it is not easy to select a correct concept when constructing an XML query. Design/methodology/approach – A Context-based Term Weighting model is proposed that performs term weighting based on parts of documents. Each part represents a specific context, offering better capture of the relationship between concepts and terms. For query-time analysis, a Query Context Graph and two algorithms, namely, Select Target and Constraint (QC) and Select Target and Constraint (QCAS), are proposed to find the concepts for constructing the XML query. Findings – Evaluations were performed using structured documents from the conference domain. For constraint concept selection, the approach CTX+TW achieved better results than its baseline, NCTX, when the search term has ambiguous meanings, by using context-based scoring for the concepts. CTX+TW also shows its stability across various scoring models such as BM25, TFIEF and LM. For target concept selection, CTX+TW outperforms the standard baseline, SLCA, and also records higher coverage than FCA when structural keywords are used in the query. Originality/value – The idea behind this approach is to capture the concepts required for term interpretation from parts of the collection rather than the entire collection. This allows better selection of concepts, especially when a structured XML document contains many different types of information.
17

Potts, Christopher, Joe Pater, Karen Jesney, Rajesh Bhatt, and Michael Becker. "Harmonic Grammar with linear programming: from linear systems to linguistic typology." Phonology 27, no. 1 (April 16, 2010): 77–117. http://dx.doi.org/10.1017/s0952675710000047.

Abstract:
Harmonic Grammar is a model of linguistic constraint interaction in which well-formedness is calculated in terms of the sum of weighted constraint violations. We show how linear programming algorithms can be used to determine whether there is a weighting for a set of constraints that fits a set of linguistic data. The associated software package OT-Help provides a practical tool for studying large and complex linguistic systems in the Harmonic Grammar framework and comparing the results with those of OT. We first describe the translation from harmonic grammars to systems solvable by linear programming algorithms. We then develop a Harmonic Grammar analysis of ATR harmony in Lango that is, we argue, superior to the existing OT and rule-based treatments. We further highlight the usefulness of OT-Help, and the analytic power of Harmonic Grammar, with a set of studies of the predictions Harmonic Grammar makes for phonological typology.
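The translation to linear programming can be sketched with `scipy.optimize.linprog` on toy data (two constraints and two winner/loser pairs, not the Lango analysis): require each attested winner to beat its competitor by a margin of at least 1, and minimize the total weight:

```python
import numpy as np
from scipy.optimize import linprog

# Rows are violation vectors over two constraints for each (winner, loser) pair.
winners = np.array([[0, 1], [1, 0]])
losers  = np.array([[1, 0], [0, 2]])

# Feasibility requires w . (loser - winner) >= 1 for every pair; in linprog's
# "A_ub @ w <= b_ub" form this becomes -(loser - winner) @ w <= -1.
A_ub = -(losers - winners)
b_ub = -np.ones(len(winners))
res = linprog(c=np.ones(2), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)

assert res.success  # a consistent weighting exists for these data
assert all((losers - winners) @ res.x >= 1 - 1e-6)
```

If `res.success` were false, no weighting would fit the data, which is exactly the diagnostic the abstract describes.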
18

SUI, Y. K., X. R. PENG, J. L. FENG, and H. L. YE. "TOPOLOGY OPTIMIZATION OF STRUCTURE WITH GLOBAL STRESS CONSTRAINTS BY INDEPENDENT CONTINUUM MAP METHOD." International Journal of Computational Methods 03, no. 03 (September 2006): 295–319. http://dx.doi.org/10.1142/s0219876206000758.

Abstract:
We establish a topology optimization model in terms of the Independent Continuum Map (ICM) method, taking weight as the objective function so as to avoid the difficulties caused by the multiple objective functions of compliance. Using the distorted-strain-energy criterion, we transform stress constraints on all elements into structural strain-energy constraints in a global sense. The problem of the topological optimum of a continuum structure subject to global strain-energy constraints is then formulated and solved. The optimization proceeds in three basic steps: computing the minimum strain energy of the structure, corresponding to the maximum strain energy under each load case, for the prescribed weight constraint; determining the allowable strain energy of the structure for every load case using a formula derived from our numerical tests; and establishing and solving the optimization model with the weight function over all allowable strain energies. A strategy for coping with complicated load ill-posedness, applying different complementary approaches one by one, is also presented. Several numerical examples demonstrate that the topology path of transferring forces can be obtained more readily with global strain-energy constraints than with local stress constraints, and that the problem of load ill-posedness can be handled very well by the weighting method with structural strain energy as the weighting coefficient.
19

Liu, Xikui, Guiling Li, and Yan Li. "Stochastic Linear Quadratic Optimal Control with Indefinite Control Weights and Constraint for Discrete-Time Systems." Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/476545.

Abstract:
The Karush-Kuhn-Tucker (KKT) theorem is used to study stochastic linear quadratic optimal control with a terminal constraint for discrete-time systems, allowing the control weighting matrices in the cost to be indefinite. A generalized difference Riccati equation is derived, which differs from that of the unconstrained case. It is proved that the well-posedness and the attainability of the stochastic linear quadratic optimal control problem are equivalent. Moreover, an optimal control can be expressed by the solution of the generalized difference Riccati equation.
20

Abdi, Bahman, Mehdi Mirzaei, and Reza Mojed Gharamaleki. "A new approach to optimal control of nonlinear vehicle suspension system with input constraint." Journal of Vibration and Control 24, no. 15 (May 4, 2017): 3307–20. http://dx.doi.org/10.1177/1077546317704598.

Abstract:
The vehicle active suspension system is a multi-objective control system with an input constraint. In this paper, a new effective method is proposed for the constrained optimal control of a vehicle suspension system with nonlinear characteristics in its elasto-damping elements. In the proposed method, an equivalent constrained optimization problem is first formulated by defining a performance index as a weighted combination of the predicted responses of the nonlinear suspension system and the control signal. The constrained optimization problem is then solved analytically via the Karush-Kuhn-Tucker (KKT) theorem to find the control law. The proposed constrained controller is compared with an unconstrained optimal controller in which the limitation on the control force is satisfied by tuning its weighting factor in the performance index. Simulation studies show the effectiveness of the two controllers. The results indicate that the constrained controller utilizes the maximum capacity of the external forces and consequently attains better performance in the presence of force limitations.
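The role of the KKT conditions under an input constraint can be illustrated on a scalar convex cost, where they reduce to clipping the unconstrained optimum at the bound. This is a one-dimensional illustration, not the paper's suspension model:

```python
# For a convex quadratic cost J(u) = 0.5*a*u**2 - b*u with |u| <= u_max,
# the KKT conditions give: the unconstrained optimum b/a if it is feasible,
# otherwise the nearer bound with an active multiplier.

def kkt_optimal_input(a, b, u_max):
    u_unconstrained = b / a
    return max(-u_max, min(u_max, u_unconstrained))

# With a tight force limit the controller saturates at the bound, i.e. it
# uses the full available actuator capacity, as the abstract observes.
assert kkt_optimal_input(a=2.0, b=10.0, u_max=3.0) == 3.0
# With a loose limit it returns the unconstrained optimum b/a = 2.0.
assert kkt_optimal_input(a=2.0, b=4.0, u_max=3.0) == 2.0
```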
21

Minh, Vu Trieu, and Fakhruldin Bin Mohd Hashim. "Robust Model Predictive Control Schemes for Tracking Setpoints." Journal of Control Science and Engineering 2010 (2010): 1–9. http://dx.doi.org/10.1155/2010/649461.

Abstract:
This paper briefly reviews the development of nontracking robust model predictive control (RMPC) schemes for uncertain systems using linear matrix inequalities (LMIs) subject to input saturated and softened state constraints. Then we develop two new tracking setpoint RMPC schemes with common Lyapunov function and with zero terminal equality subject to input saturated and softened state constraints. The novel tracking setpoint RMPC schemes are able to stabilize uncertain systems once the output setpoints lead to the violation of the state constraints. The state violation can be regulated by changing the value of the weighting factor. A brief comparative simulation study of the two tracking setpoint RMPC schemes is done via simple examples to demonstrate the ability of the softened state constraint schemes. Finally, some features of future research from this study are discussed.
22

Pan, DG, GD Chen, and LL Gao. "A constrained optimal Rayleigh damping coefficients for structures with closely spaced natural frequencies in seismic analysis." Advances in Structural Engineering 20, no. 1 (July 28, 2016): 81–95. http://dx.doi.org/10.1177/1369433216646007.

Abstract:
A constrained optimization method is proposed to determine Rayleigh damping coefficients for the accurate analysis of complex structures. To this end, an objective function was defined as a complete quadratic combination of the modal errors of a peak base reaction evaluated by response spectral analysis. An optimization constraint was enforced to make the damping ratio of a prominent contribution mode exact. Parametric studies were conducted to investigate the effects of the constraint, the cross term of modes, and weighting factors on the optimization objective. A two-story building and a real-world lattice structure were analyzed under six earthquake ground motions to understand the characteristics and demonstrate the accuracy and effectiveness of the proposed optimization method. Unlike conventional Rayleigh damping, the optimization method provided optimal load-dependent reference frequencies that account for the varying frequency characteristics of earthquakes around the prominent contribution mode.
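For context, conventional Rayleigh damping sets C = αM + βK and picks α and β so that two reference modes attain target damping ratios, since the modal damping ratio is ζ(ω) = α/(2ω) + βω/2. A minimal sketch of that conventional computation follows (the paper's constrained optimization over reference frequencies is not reproduced here; the frequencies and ratios below are illustrative):

```python
import numpy as np

def rayleigh_coefficients(w1, w2, zeta1, zeta2):
    """Solve zeta_i = alpha/(2*w_i) + beta*w_i/2 at two reference angular
    frequencies w1, w2 for the Rayleigh coefficients (alpha, beta)."""
    A = np.array([[1.0 / (2.0 * w1), w1 / 2.0],
                  [1.0 / (2.0 * w2), w2 / 2.0]])
    return np.linalg.solve(A, np.array([zeta1, zeta2]))

def damping_ratio(alpha, beta, w):
    """Modal damping ratio implied by C = alpha*M + beta*K at frequency w."""
    return alpha / (2.0 * w) + beta * w / 2.0
```

Between the two reference frequencies the implied damping ratio dips below the targets, and it grows rapidly outside them, which is exactly the frequency sensitivity the paper's constrained optimization tries to control.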
23

Zhao, Yongdong, and Suhada Jayasuriya. "An H∞ Formulation of Quantitative Feedback Theory." Journal of Dynamic Systems, Measurement, and Control 120, no. 3 (September 1, 1998): 305–13. http://dx.doi.org/10.1115/1.2805401.

Abstract:
The QFT robust performance problem in its entirety may be reduced to an H∞ problem by casting each specification as a frequency domain constraint either on the nominal sensitivity function or the complementary sensitivity function. In order to alleviate the conservative nature of a standard H∞ solution that is obtainable for a plant with parametric uncertainty we develop a new stability criterion to replace the small gain condition. With this new stability criterion it is shown that the existence of a solution to the standard H∞ problem guarantees a solution to the QFT problem. Specifically, we provide an explicit characterization of necessary frequency weighting functions for an H∞ embedding of the QFT specifications. Due to the transparency in selecting the weighting functions, the robust performance constraints can be easily relaxed, if needed, for the purpose of assuring a solution to the H∞ problem. Since this formulation provides only a sufficient condition for the existence of a QFT controller one can then use the resulting H∞ compensator to initiate the QFT loop shaping step.
24

Zhong, Bo, and Rong Chen. "Based on Mechanical Vibration Test Machine V-Belt Transmission Design." Advanced Materials Research 538-541 (June 2012): 2518–21. http://dx.doi.org/10.4028/www.scientific.net/amr.538-541.2518.

Abstract:
This paper solves a multi-objective optimization problem for a V-belt drive by converting it into a single-objective problem using weighting factors. According to the engineering requirements of a mechanical vibration test machine, a single-objective optimization function is determined by coordinating the various coefficients, after considering the incommensurability of multiple conflicting objective functions and combining practical and computational experience. System requirements and related restrictions are given in the form of constraint functions. Finally, the overall optimal solution is calculated with the direct method; the integrated optimal solution can take different numerical values when computed for different project objectives and specific requirements.
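The weighted-sum scalarization described above can be sketched generically; the weights and objective values below are hypothetical, and in practice the objectives would first be normalized to comparable scales:

```python
def scalarize(objective_values, weighting_factors):
    """Collapse several (normalized) objective values into a single target
    via weighting factors, as in weighted-sum multi-objective design."""
    assert len(objective_values) == len(weighting_factors)
    return sum(w * f for w, f in zip(weighting_factors, objective_values))
```

A single-objective optimizer (here, the "direct method" of the abstract) can then minimize the scalarized value; changing the weighting factors yields different integrated optima, as the abstract notes.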
25

Shimada, Keishi, Shigeru Aoki, and Kay I. Ohshima. "Creation of a Gridded Dataset for the Southern Ocean with a Topographic Constraint Scheme." Journal of Atmospheric and Oceanic Technology 34, no. 3 (March 2017): 511–32. http://dx.doi.org/10.1175/jtech-d-16-0075.1.

Abstract:
This study investigated a method for creating a climatological dataset with improved reproducibility and reliability for the Southern Ocean. Despite sparse observational sampling, the Southern Ocean has a dominant physical characteristic of a strong topographic constraint formed under weak stratification and a strong Coriolis effect. To increase the fidelity of gridded data, the topographic constraint is incorporated into the interpolation method, the weighting function of which includes a contribution from bottom depth differences as well as horizontal distances. Spatial variability of physical properties was also analyzed to estimate realistic decorrelation scales for horizontal distance and bottom depth differences using hydrographic datasets. A new gridded dataset, the topographic constraint incorporated (TCI), was then developed for temperature, salinity, and dissolved oxygen, using the newly derived weighting function and decorrelation scales. The root-mean-square (RMS) of the difference between the interpolated values and the neighboring observed values (RMS difference) was compared among available gridded datasets. That the RMS differences are smaller for the TCI than for the previous datasets by 12%–21% and 8%–20% for potential temperature and salinity, respectively, demonstrates the effectiveness of incorporating the topographic constraint and realistic decorrelation scales. Furthermore, a comparison of decorrelation scales and an analysis of interpolation error suggest that the decorrelation scales adopted in previous gridded datasets are 2 times or more larger than realistic scales and that the overestimation would increase the interpolation error. The interpolation method proposed in this study can be applied to other high-latitude oceans, which are weakly stratified but undersampled.
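The weighting idea, penalizing bottom-depth differences alongside horizontal distance, can be sketched as follows. The Gaussian form and the decorrelation scales here are assumptions for illustration, not the function or scales derived in the paper:

```python
import math

def topographic_weight(dist_km, depth_diff_m, L_dist_km=300.0, L_depth_m=500.0):
    """Interpolation weight that decays with horizontal distance AND with
    bottom-depth difference, so observations separated by steep topography
    contribute less even when they are geographically close."""
    return math.exp(-(dist_km / L_dist_km) ** 2
                    - (depth_diff_m / L_depth_m) ** 2)
```

Two stations 100 km apart on the same isobath thus receive a larger mutual weight than two stations 100 km apart across a deep trench, which is the topographic constraint in action.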
26

Dong, Guirong, Chengyang Liu, Yijie Liu, Ling Wu, Xiaoan Mao, and Dianzi Liu. "Computationally Efficient Approximations Using Adaptive Weighting Coefficients for Solving Structural Optimization Problems." Mathematical Problems in Engineering 2021 (March 10, 2021): 1–12. http://dx.doi.org/10.1155/2021/1743673.

Abstract:
With rapid development of advanced manufacturing technologies and high demands for innovative lightweight constructions to mitigate the environmental and economic impacts, design optimization has attracted increasing attention in many engineering subjects, such as civil, structural, aerospace, automotive, and energy engineering. For nonconvex nonlinear constrained optimization problems with continuous variables, evaluations of the fitness and constraint functions by means of finite element simulations can be extremely expensive. To address this problem by algorithms with sufficient accuracy as well as less computational cost, an extended multipoint approximation method (EMAM) and an adaptive weighting-coefficient strategy are proposed to efficiently seek the optimum by the integration of metamodels with sequential quadratic programming (SQP). The developed EMAM stems from the principle of the polynomial approximation and assimilates the advantages of Taylor’s expansion for improving the suboptimal continuous solution. Results demonstrate the superiority of the proposed EMAM over other evolutionary algorithms (e.g., particle swarm optimization technique, firefly algorithm, genetic algorithm, metaheuristic methods, and other metamodeling techniques) in terms of the computational efficiency and accuracy by four well-established engineering problems. The developed EMAM reduces the number of simulations during the design phase and provides wealth of information for designers to effectively tailor the parameters for optimal solutions with computational efficiency in the simulation-based engineering optimization problems.
27

Chen, Yuefen, and Minghai Yang. "Indefinite LQ Optimal Control with Terminal State Constraint for Discrete-Time Uncertain Systems." Journal of Control Science and Engineering 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/7241390.

Abstract:
Uncertainty theory is a branch of mathematics for modeling human uncertainty based on the normality, duality, subadditivity, and product axioms. This paper studies a discrete-time LQ optimal control problem with a terminal state constraint, where the weighting matrices in the cost function are indefinite and the system states are disturbed by uncertain noises. We first transform the uncertain LQ problem into an equivalent deterministic LQ problem. The main result given in this paper is then the necessary condition for the constrained indefinite LQ optimal control problem, obtained by means of the Lagrange multiplier method. Moreover, in order to guarantee the well-posedness of the indefinite LQ problem and the existence of an optimal control, a sufficient condition is also presented. Finally, a numerical example is given.
28

ter Meulen, Alice G. B. "Agency, argument structure, and causal inference." Behavioral and Brain Sciences 31, no. 6 (December 2008): 728–29. http://dx.doi.org/10.1017/s0140525x08006055.

Abstract:
Logically, weighting is transitive, but similarity is not, so clustering cannot be either. Entailments must help a child to review attribute lists more efficiently. Children's understanding of exceptions to generic claims precedes their ability to articulate explanations. So agency, as enabling constraint, may show coherent covariation with attributes, as mere extensional, observable effect of intensional entailments.
29

Liu, Tianjun, Jian Wang, Hang Yu, Xinyun Cao, and Yulong Ge. "A New Weighting Approach with Application to Ionospheric Delay Constraint for GPS/GALILEO Real-Time Precise Point Positioning." Applied Sciences 8, no. 12 (December 7, 2018): 2537. http://dx.doi.org/10.3390/app8122537.

Abstract:
The real-time precise point positioning (RT PPP) technique has attracted increasing attention due to its high accuracy and real-time performance. However, a considerable initialization time, normally a few hours, is required in order to achieve proper convergence of the real-valued ambiguities and other estimated parameters. The RT PPP convergence time may be reduced by combining quad-constellation global navigation satellite systems (GNSS), or by using RT ionospheric products to constrain the ionospheric delay. However, to improve convergence and achieve the best positioning solutions throughout the data processing, proper and precise variances of the observations and ionospheric constraints are important, since measurements of different types and with different accuracies are processed together. To address this issue, a weighting approach is proposed that combines a weight-factor searching algorithm with a moving-window average filter. In this approach, the variances of the ionospheric constraints are adjusted dynamically according to the principle that the sum of the quadratic forms of weighted residuals is minimized, and the filter is applied to combine all epoch-by-epoch weight factors within a time window. To evaluate the proposed approach, datasets from 31 Multi-GNSS Experiment (MGEX) stations during DOY (day of year) 023–054 of 2018 are analyzed with different positioning modes and different data processing methods. Experimental results show that the new weighting approach can significantly improve convergence performance, with a maximum improvement rate of 35.9% in comparison to the traditional a priori variance method in the static dual-frequency positioning mode. In terms of the RMS (root mean square) statistics of positioning errors calculated by the new method after filter convergence, the same accuracy level as that of RT PPP without constraints can be achieved.
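The moving-window averaging stage of such an approach can be sketched as below; the window length is an assumed parameter, and the weight-factor search that produces the epoch-by-epoch inputs is omitted:

```python
from collections import deque

class WindowedWeightFactor:
    """Smooth epoch-by-epoch weight factors with a moving-window average,
    illustrating the filtering step described above (sketch only)."""
    def __init__(self, window=30):
        self.buffer = deque(maxlen=window)  # keeps only the last `window` factors

    def update(self, factor):
        """Add this epoch's factor and return the windowed average."""
        self.buffer.append(factor)
        return sum(self.buffer) / len(self.buffer)
```

Averaging over a window damps epoch-to-epoch jitter in the searched factors, so the variance assigned to the ionospheric constraint evolves smoothly instead of jumping with every epoch.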
30

Ge, Guoqing, Jidong Gao, and Ming Xue. "Impact of a Diagnostic Pressure Equation Constraint on Tornadic Supercell Thunderstorm Forecasts Initialized Using 3DVAR Radar Data Assimilation." Advances in Meteorology 2013 (2013): 1–12. http://dx.doi.org/10.1155/2013/947874.

Abstract:
A diagnostic pressure equation constraint has been incorporated into a storm-scale three-dimensional variational (3DVAR) data assimilation system. This diagnostic pressure equation constraint (DPEC) is aimed to improve dynamic consistency among different model variables so as to produce better data assimilation results and improve the subsequent forecasts. Ge et al. (2012) described the development of DPEC and testing of it with idealized experiments. DPEC was also applied to a real supercell case, but only radial velocity was assimilated. In this paper, DPEC is further applied to two real tornadic supercell thunderstorm cases, where both radial velocity and radar reflectivity data are assimilated. The impact of DPEC on radar data assimilation is examined mainly based on the storm forecasts. It is found that the experiments using DPEC generally predict higher low-level vertical vorticity than the experiments not using DPEC near the time of observed tornadoes. Therefore, it is concluded that the use of DPEC improves the forecast of mesocyclone rotation within supercell thunderstorms. The experiments using different weighting coefficients generate similar results. This suggests that DPEC is not very sensitive to the weighting coefficients.
31

Dekhici, Latifa, Khaled Guerraiche, and Khaled Belkadi. "Environmental Economic Power Dispatch Using Bat Algorithm with Generalized Fly and Evolutionary Boundary Constraint Handling Scheme." International Journal of Applied Metaheuristic Computing 11, no. 2 (April 2020): 171–91. http://dx.doi.org/10.4018/ijamc.2020040109.

Abstract:
This article addresses the environmental economic power dispatch problem (EED) using an enhanced version of the bat algorithm (BA), the Bat Algorithm with Generalized Fly (BAG). A solution based on the Evolutionary Boundary Constraint Handling Scheme, rather than the well-known absorbing technique, together with a careful choice of the bi-objective function, is provided to retain the advantages of such algorithms on this problem. In the first stage, an individual economic power dispatch problem is considered by minimizing the fuel cost while taking into account the maximum pollutant emission. In the second stage, after weighting the maximization of soft-constraint satisfaction and the penalties for hard-constraint violations, the proposed bi-objective environmental and economic load dispatch approach was built on a Pareto function. The approach was tested on a thermal power plant with 10 generators and an IEEE30 power system of 6 generators. The results on the two datasets, compared to those of other methods, show that the proposed technique yields better cost and pollutant emissions.
32

Cheng, Zixuan, and Li Liu. "Brain Magnetic Resonance Imaging Segmentation Using Possibilistic Clustering Algorithm Combined with Context Constraints." Journal of Medical Imaging and Health Informatics 10, no. 7 (July 1, 2020): 1669–74. http://dx.doi.org/10.1166/jmihi.2020.3093.

Abstract:
Because the fuzzy c-means (FCM) method is simple and effective, a series of research results based on this method are widely used in medical image segmentation. Compared with traditional FCM, the possibilistic clustering (PCM) algorithm drops the constraint that each sample's membership degrees must sum to one in the iterative process, and the clustering effect of the method is improved within a certain range. However, both methods use only the gray value of the image pixels in the iterative process, ignoring the context constraint relationship between high-dimensional image pixels. Both are easily affected by image noise during segmentation, resulting in poor robustness, which affects the segmentation accuracy in practical applications. To alleviate this problem, this paper introduces image context constraint information into PCM and proposes a PCM algorithm combined with context constraints (CCPCM), successfully applying it to human brain MR image segmentation to further improve the noise immunity of the new algorithm and expand its applicability in the medical field. Simulation results on medical images show that, compared with previous classical clustering methods such as FCM and PCM, CCPCM has better immunity to different noises and produces clearer segmentation boundaries. At the same time, the CCPCM algorithm introduces an adaptive weighting mechanism for spatial neighbor information in the clustering process, which can adaptively adjust the constraint weight of spatial information and optimize the clustering process, thus improving segmentation efficiency.
33

NAMPALLY, ARUN, TIMOTHY ZHANG, and C. R. RAMAKRISHNAN. "Constraint-Based Inference in Probabilistic Logic Programs." Theory and Practice of Logic Programming 18, no. 3-4 (July 2018): 638–55. http://dx.doi.org/10.1017/s1471068418000273.

Abstract:
Probabilistic Logic Programs (PLPs) generalize traditional logic programs and allow the encoding of models combining logical structure and uncertainty. In PLP, inference is performed by summarizing the possible worlds which entail the query in a suitable data structure, and using this data structure to compute the answer probability. Systems such as ProbLog, PITA, etc., use propositional data structures like explanation graphs, BDDs, SDDs, etc., to represent the possible worlds. While this approach saves inference time due to substructure sharing, there are a number of problems where a more compact data structure is possible. We propose a data structure called Ordered Symbolic Derivation Diagram (OSDD) which captures the possible worlds by means of constraint formulas. We describe a program transformation technique to construct OSDDs via query evaluation, and give procedures to perform exact and approximate inference over OSDDs. Our approach has two key properties. Firstly, the exact inference procedure is a generalization of traditional inference, and results in speedup over the latter in certain settings. Secondly, the approximate technique is a generalization of likelihood weighting in Bayesian Networks, and allows us to perform sampling-based inference with lower rejection rate and variance. We evaluate the effectiveness of the proposed techniques through experiments on several problems.
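The likelihood weighting that the approximate OSDD technique generalizes can be illustrated on a two-node Bayesian network; the network and probabilities below are made up for the example:

```python
import random

def likelihood_weighting(n_samples, p_rain=0.2, p_wet_if_rain=0.9,
                         p_wet_if_dry=0.1, seed=0):
    """Estimate P(rain | wet = true) by likelihood weighting: sample the
    unobserved variable from its prior, and weight each sample by the
    likelihood of the observed evidence (no samples are rejected)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        rain = rng.random() < p_rain
        weight = p_wet_if_rain if rain else p_wet_if_dry  # P(wet | rain)
        num += weight * rain
        den += weight
    return num / den
```

For these numbers the exact posterior is 0.2·0.9 / (0.2·0.9 + 0.8·0.1) ≈ 0.692, and the weighted estimate converges to it; unlike rejection sampling, every sample contributes, which is the lower-rejection-rate property the abstract alludes to.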
34

Cho, In-Ky, Myung-Jin Ha, and Hwan-Ho Yong. "Effective Data Weighting and Cross-model Constraint in Time-lapse Inversion of Resistivity Monitoring Data." Journal of the Korean Society of Mineral and Energy Resources Engineers 50, no. 2 (April 1, 2013): 264–77. http://dx.doi.org/10.12972/ksmer.2013.50.2.264.

35

Zheng, Zhongqiao, Xiaojing Wang, Yanhong Zhang, and Jiangsheng Zhang. "Research on Neural Network PID Quadratic Optimal Controller in Active Magnetic Levitation." Open Mechanical Engineering Journal 8, no. 1 (March 21, 2014): 42–47. http://dx.doi.org/10.2174/1874155x01408010042.

Abstract:
In response to the uncertainty, nonlinearity, and open-loop instability of the active magnetic levitation control system, a neural network PID quadratic optimal controller is designed using optimal control theory. By introducing a supervised Hebb learning rule, constraint control of positioning errors and control-increment weighting are realized by adjusting the weighting coefficients, with the objective function taken as the weighted sum of squares of the control increment and the deviation between the actual rotor position and the equilibrium position in the active magnetic levitation system. The simulation results show that the neural network PID quadratic optimal controller effectively improves the static and dynamic performance of the system and maintains stable levitation of the rotor, with stronger anti-jamming capacity and robustness.
36

Zhang, Weihai, and Guiling Li. "Discrete-Time Indefinite Stochastic Linear Quadratic Optimal Control with Second Moment Constraints." Mathematical Problems in Engineering 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/278142.

Abstract:
This paper studies the discrete-time stochastic linear quadratic (LQ) problem with a second moment constraint on the terminal state, where the weighting matrices in the cost functional are allowed to be indefinite. By means of the matrix Lagrange theorem, a new class of generalized difference Riccati equations (GDREs) is introduced. It is shown that the well-posedness and attainability of the LQ problem and the solvability of the GDREs are equivalent to each other.
37

DelSole, Timothy, Liwei Jia, and Michael K. Tippett. "Scale-Selective Ridge Regression for Multimodel Forecasting." Journal of Climate 26, no. 20 (October 4, 2013): 7957–65. http://dx.doi.org/10.1175/jcli-d-13-00030.1.

Abstract:
This paper proposes a new approach to linearly combining multimodel forecasts, called scale-selective ridge regression, which ensures that the weighting coefficients satisfy certain smoothness constraints. The smoothness constraint reflects the “prior assumption” that seasonally predictable patterns tend to be large scale. In the absence of a smoothness constraint, regression methods typically produce noisy weights and hence noisy predictions. Constraining the weights to be smooth ensures that the multimodel combination is no less smooth than the individual model forecasts. The proposed method is equivalent to minimizing a cost function comprising the familiar mean square error plus a “penalty function” that penalizes weights with large spatial gradients. The method reduces to pointwise ridge regression for a suitable choice of constraint. The method is tested using the Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) hindcast dataset during 1960–2005. The cross-validated skill of the proposed forecast method is shown to be larger than the skill of either ordinary least squares or pointwise ridge regression, although the significance of this difference is difficult to test owing to the small sample size. The model weights derived from the method are much smoother than those obtained from ordinary least squares or pointwise ridge regression. Interestingly, regressions in which the weights are completely independent of space give comparable overall skill. The scale-selective ridge is numerically more intensive than pointwise methods since the solution requires solving equations that couple all grid points together.
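The cost function described, mean square error plus a penalty on gradients of the weights, has a closed-form minimizer. In the sketch below the first-difference matrix is an assumed stand-in for the paper's spatial-gradient operator on a 1-D arrangement of weights:

```python
import numpy as np

def smooth_ridge(X, y, lam):
    """Minimize ||X w - y||^2 + lam * ||D w||^2, where D takes first
    differences of adjacent weights, penalizing rough weight patterns.
    lam = 0 recovers ordinary least squares; replacing D with the
    identity would give pointwise ridge regression instead."""
    n_features = X.shape[1]
    D = np.diff(np.eye(n_features), axis=0)  # first-difference operator
    return np.linalg.solve(X.T @ X + lam * (D.T @ D), X.T @ y)
```

As lam grows, adjacent weights are pulled toward each other rather than toward zero, which is the distinction between smoothness-penalized and pointwise ridge regression that the abstract draws.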
38

Lee, Chang-Hun, and Moo-Yong Ryu. "Practical generalized optimal guidance law with impact angle constraint." Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 233, no. 10 (October 26, 2018): 3790–809. http://dx.doi.org/10.1177/0954410018807000.

Abstract:
In this paper, we provide a practical solution to the generalized optimal guidance problem with an impact angle constraint. The optimal guidance problem with arbitrary weighting functions is extended to explicitly consider a missile dynamic lag effect as well as a missile velocity variation. Therefore, compared to existing results, the proposed result can prevent performance degradation due to the dynamic lag effect and the velocity variation, which is an essential issue in practice. Besides, since the proposed guidance law is formulated from the generalized optimal control framework, it can directly inherit a vital feature of the framework: providing an additional degree of freedom in shaping a guidance command for achieving a specific guidance operational goal. An illustrative example is provided in order to validate this property. In this study, the proposed solution is also compared with the existing solutions. The comparison results indicate that the proposed result is a more general and practical solution. Finally, numerical simulations are also conducted to demonstrate the practical significance of the proposed method.
39

Zahedi, A., and M. H. Kahaei. "Frequency Estimation of Irregularly Sampled Data Using a Sparsity Constrained Weighted Least-Squares Approach." Engineering, Technology & Applied Science Research 3, no. 1 (February 11, 2013): 368–72. http://dx.doi.org/10.48084/etasr.187.

Abstract:
In this paper, a new method for frequency estimation of irregularly sampled data is proposed. In comparison with previous sparsity-based methods, where the sparsity constraint is applied to a least-squares fitting problem, the proposed method is based on a sparsity-constrained weighted least-squares problem. The resulting problem is solved in an iterative manner, allowing the solution obtained at each iteration to determine the weights of the least-squares fitting term at the next iteration. Such an appropriate weighting of the least-squares fitting term enhances the performance of the proposed method. Simulation results verify that the proposed method can detect the spectral peaks using a very short data record. Compared to previous methods, the proposed method is less likely to miss the actual spectral peaks or to exhibit spurious peaks.
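The iterative reweighting idea, using the current solution to set the next iteration's weights, can be sketched with a generic reweighted least-squares loop. Note the paper weights the least-squares fitting term itself; this simplified sketch instead reweights a sparsity penalty, which is the more common textbook variant:

```python
import numpy as np

def reweighted_sparse_ls(A, b, n_iters=20, eps=1e-6):
    """Iteratively reweighted least squares promoting a sparse solution of
    A x ~= b: each iteration re-solves the problem with weights derived
    from the previous solution, pushing small components toward zero."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iters):
        W = np.diag(1.0 / (np.abs(x) + eps))  # large weight on small entries
        x = np.linalg.solve(A.T @ A + W, A.T @ b)
    return x
```

After a few iterations the weights stabilize: components supported by the data survive (slightly shrunk), while spurious small components are driven to zero, mirroring the suppression of spurious spectral peaks described above.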
40

Jarosz, Gaja. "Learning with hidden structure in Optimality Theory and Harmonic Grammar: beyond Robust Interpretive Parsing." Phonology 30, no. 1 (May 2013): 27–71. http://dx.doi.org/10.1017/s0952675713000031.

Abstract:
This paper explores the relative merits of constraint ranking vs. weighting in the context of a major outstanding learnability problem in phonology: learning in the face of hidden structure. Specifically, the paper examines a well-known approach to the structural ambiguity problem, Robust Interpretive Parsing (RIP; Tesar & Smolensky 1998), focusing on its stochastic extension first described by Boersma (2003). Two related problems with the stochastic formulation of RIP are revealed, rooted in a failure to take full advantage of probabilistic information available in the learner's grammar. To address these problems, two novel parsing strategies are introduced and applied to learning algorithms for both probabilistic ranking and weighting. The novel parsing strategies yield significant improvements in performance, asymmetrically improving performance of OT learners. Once RIP is replaced with the proposed modifications, the apparent advantage of HG over OT learners reported in previous work disappears (Boersma & Pater 2008).
41

Hasseni, Seif-El-Islam, and Latifa Abdou. "Robust LFT-LPV H∞ Control of an Underactuated Inverted Pendulum on a Cart with Optimal Weighting Functions Selection by GA and ES." Acta Mechanica et Automatica 14, no. 4 (December 1, 2020): 186–97. http://dx.doi.org/10.2478/ama-2020-0027.

Abstract:
This article investigates the robust stabilization and control of the inverted pendulum on a cart against disturbances, measurement noises, and parametric uncertainties by the LFT-based LPV technique (Linear-Fractional-Transformation based Linear-Parameter-Varying). To make the applying of the LPV technique possible, the LPV representation of the inverted pendulum on a cart model is developed. Besides, the underactuated constraint of this vehicle is overcome by considering both degrees of freedom (the rotational one and the translational one) in the structure. Moreover, the selection of the weighting functions that represent the desired performance is solved by two approaches of evolutionary algorithms; Genetic Algorithms (GA) and Evolutionary Strategies (ES) to find the weighting functions’ optimal parameters. To validate the proposed approach, simulations are performed and they show the effectiveness of the proposed approach to obtain robust controllers against external signals, as well as the parametric uncertainties.
42

Zhang, Haiyong, Dong Han, Kai Ji, Zhong Ren, Chi Xu, Lijuan Zhu, and Hongyan Tan. "Optimal Spatial Matrix Filter Design for Array Signal Preprocessing." Journal of Applied Mathematics 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/564976.

Abstract:
An efficient technique for designing a spatial matrix filter for array signal preprocessing based on convex programming is proposed. Five methods were considered for designing the filter. In design method 1, we minimized the passband fidelity subject to a controlled overall stopband attenuation level. In design method 2, the objective function and the constraint of design method 1 were reversed. In design method 3, the optimal matrix filter with the general mean square error was considered. In design method 4, the left stopband and the right stopband were each constrained with a specific attenuation level, and the minimized passband fidelity was obtained. In design method 5, the optimization objective function was the sum of the left and right stopband attenuation levels with weighting factors 1 and γ, respectively, and the passband fidelity formed the constraints. The optimal solutions of the optimizations above were derived by Lagrange multiplier theory. The relations between the optimal solutions were analyzed. The generalized singular value decomposition was introduced to simplify the optimal solutions of design methods 1 and 2 and to enhance the efficiency of solving for the Lagrange multipliers. Simulations show that the proposed method is effective for designing the spatial matrix filter.
43

Seo, Ki-Weon, Seokhoon Oh, Jooyoung Eom, Jianli Chen, and Clark R. Wilson. "Constrained Linear Deconvolution of GRACE Anomalies to Correct Spatial Leakage." Remote Sensing 12, no. 11 (June 2, 2020): 1798. http://dx.doi.org/10.3390/rs12111798.

Abstract:
Time-varying gravity observed by the Gravity Recovery and Climate Experiment (GRACE) satellites measures surface water and ice mass redistribution driven by weather and climate forcing and has emerged as one of the most important data types in measuring changes in Earth’s climate. However, spatial leakage of GRACE signals, especially in coastal areas, has been a recognized limitation in quantitatively assessing mass change. It is evident that larger terrestrial signals in coastal regions spread into the oceans and vice versa and various remedies have been developed to address this problem. An especially successful one has been Forward Modeling but it requires knowledge of geographical locations of mass change to be fully effective. In this study, we develop a new method to suppress leakage effects using a linear least squares operator applied to GRACE spherical harmonic data. The method is effectively a constrained deconvolution of smoothing inherent in GRACE data. It assumes that oceanic mass changes near the coast are negligible compared to terrestrial changes, with additional spatial regularization constraints. Some calibration of constraint weighting is required. We apply the method to estimate surface mass loads over Australia using both synthetic and real GRACE data. Leakage into the oceans is effectively suppressed and when compared with mascon solutions there is better performance over interior basins.
APA, Harvard, Vancouver, ISO, and other styles
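The constrained least-squares deconvolution idea can be sketched on a toy grid: observations are a smoothed version of the true load, y = Kx, and fixing near-coast ocean cells to zero before solving recovers the terrestrial load. The three-cell "coastline", smoothing kernel, and load values below are invented for illustration; they are not GRACE data or the paper's actual operator.

```python
# Sketch of constrained deconvolution: solve y = K x for the land cells
# only, with the ocean cell constrained to zero (the paper's assumption
# that near-coast oceanic mass change is negligible).

def solve2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] [x1, x2] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Three cells: two land, one ocean. Each row of the smoothing kernel mixes
# neighbouring cells, mimicking GRACE's limited spatial resolution.
K = [
    [0.6, 0.3, 0.1],
    [0.3, 0.5, 0.2],
    [0.1, 0.3, 0.6],
]
x_true = [2.0, 1.0, 0.0]          # ocean cell carries no load
y = [sum(K[i][j] * x_true[j] for j in range(3)) for i in range(3)]

# Constrain x3 = 0 (ocean) and solve the normal equations K'K x = K'y
# for the two land cells only.
KtK = [[sum(K[i][a] * K[i][b] for i in range(3)) for b in range(2)]
       for a in range(2)]
Kty = [sum(K[i][a] * y[i] for i in range(3)) for a in range(2)]
x1, x2 = solve2x2(KtK[0][0], KtK[0][1], KtK[1][0], KtK[1][1], Kty[0], Kty[1])
print(round(x1, 6), round(x2, 6))  # recovers the land loads
```

In the paper the system is far larger and ill-posed, which is why the additional spatial regularization constraints, and some calibration of their weighting, are needed.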
44

Kibble, Rodger, and Richard Power. "Optimizing Referential Coherence in Text Generation." Computational Linguistics 30, no. 4 (December 2004): 401–16. http://dx.doi.org/10.1162/0891201042544893.

Full text
Abstract:
This article describes an implemented system which uses centering theory for planning of coherent texts and choice of referring expressions. We argue that text and sentence planning need to be driven in part by the goal of maintaining referential continuity and thereby facilitating pronoun resolution: Obtaining a favorable ordering of clauses, and of arguments within clauses, is likely to increase opportunities for nonambiguous pronoun use. Centering theory provides the basis for such an integrated approach. Generating coherent texts according to centering theory is treated as a constraint satisfaction problem. The well-known Rule 2 of centering theory is reformulated in terms of a set of constraints—cohesion, salience, cheapness, and continuity—and we show sample outputs obtained under a particular weighting of these constraints. This framework facilitates detailed research into evaluation metrics and will therefore provide a productive research tool in addition to the immediate practical benefit of improving the fluency and readability of generated texts. The technique is generally applicable to natural language generation systems, which perform hierarchical text structuring based on a theory of coherence relations with certain additional assumptions.
APA, Harvard, Vancouver, ISO, and other styles
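Treating Rule 2 as weighted constraint satisfaction, as the abstract describes, can be sketched as follows: each candidate clause ordering is scored by its weighted violations of cohesion, salience, cheapness, and continuity, and the least-cost ordering wins. The violation counts and weights here are hypothetical stand-ins, not the centering transitions the paper actually computes.

```python
# Sketch: pick the clause ordering that minimises weighted violations of
# the four centering constraints over adjacent clause transitions.

from itertools import permutations

# Hypothetical violation counts per constraint for each clause transition.
VIOLATIONS = {
    ("c1", "c2"): {"cohesion": 0, "salience": 1, "cheapness": 0, "continuity": 0},
    ("c2", "c1"): {"cohesion": 1, "salience": 0, "cheapness": 1, "continuity": 1},
    ("c2", "c3"): {"cohesion": 0, "salience": 0, "cheapness": 1, "continuity": 0},
    ("c3", "c2"): {"cohesion": 1, "salience": 1, "cheapness": 0, "continuity": 1},
    ("c1", "c3"): {"cohesion": 1, "salience": 1, "cheapness": 1, "continuity": 0},
    ("c3", "c1"): {"cohesion": 1, "salience": 0, "cheapness": 1, "continuity": 1},
}

# Hypothetical weighting of the four constraints.
WEIGHTS = {"cohesion": 3, "salience": 2, "cheapness": 1, "continuity": 2}

def cost(ordering):
    """Total weighted constraint violations over adjacent clause pairs."""
    total = 0
    for a, b in zip(ordering, ordering[1:]):
        for name, count in VIOLATIONS[(a, b)].items():
            total += WEIGHTS[name] * count
    return total

def best_ordering(clauses):
    return min(permutations(clauses), key=cost)

print(best_ordering(["c1", "c2", "c3"]))
```

Changing the weights changes which ordering wins, which is exactly why the paper presents sample outputs "under a particular weighting of these constraints".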
45

Uozumi, Jun, and Toshimitsu Asakura. "Estimation Errors of Component Spectra Estimated by Means of the Concentration-Spectrum Correlation: Part II." Applied Spectroscopy 43, no. 5 (July 1989): 855–60. http://dx.doi.org/10.1366/0003702894202300.

Full text
Abstract:
Estimation errors accompanying component spectra calculated by means of the concentration-spectrum correlation method are investigated by theoretical analysis and computer simulations. Discussion is concentrated on a modified version of the method, which operates under the constraint that the sum of all the component concentrations in a sample is unity. As with the basic method, which was treated in an earlier paper [Appl. Spectrosc. 43, 74 (1989)], the estimation error consists of a superposition of the other component spectra, each multiplied by a weighting factor. In this case, however, the weighting factor is a function of five sample statistics: the averages and the standard deviations of the concentrations of both the objective and the interfering components, and the correlation coefficient of these two components. It is shown again that the nonparametric statistical technique called the bootstrap is useful as a tool for false-true discrimination of the peaks in the estimated spectra.
APA, Harvard, Vancouver, ISO, and other styles
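The bootstrap discrimination step mentioned at the end of the abstract can be sketched as follows: resample the observations with replacement, re-estimate the peak amplitude each time, and treat a peak whose bootstrap interval straddles zero as spurious. The sample values, estimator, and resample count below are hypothetical stand-ins, not the paper's spectral data.

```python
# Sketch of bootstrap false-true peak discrimination: a "true" peak has a
# bootstrap confidence interval that excludes zero; a "false" peak does not.

import random

def bootstrap_interval(values, estimator, n_boot=1000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for estimator(values)."""
    rng = random.Random(seed)
    stats = sorted(
        estimator([rng.choice(values) for _ in values]) for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)

true_peak = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2]      # consistently positive
false_peak = [0.3, -0.4, 0.1, -0.2, 0.5, -0.3]  # fluctuates around zero

lo, hi = bootstrap_interval(true_peak, mean)
print(lo > 0)        # interval excludes zero: likely a real peak
lo, hi = bootstrap_interval(false_peak, mean)
print(lo < 0 < hi)   # interval straddles zero: likely spurious
```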
46

Myagmartseren, Purevtseren, Myagmarsuren Buyandelger, and S. Anders Brandt. "Implications of a Spatial Multicriteria Decision Analysis for Urban Development in Ulaanbaatar, Mongolia." Mathematical Problems in Engineering 2017 (2017): 1–16. http://dx.doi.org/10.1155/2017/2819795.

Full text
Abstract:
New technology has provided new tools for effective spatial planning. Through the example of locating suitable sites for urban development in Ulaanbaatar, this paper illustrates how multicriteria decision analysis and geographical information systems can be used for more effective urban planning. Several constraint and factor criteria were identified, transformed into map layers, and weighted together using the analytic hierarchy process. Besides localization results, this study shows the effect of using poor elevation data and how a sensitivity analysis can be applied to yield further information, spot weighting weaknesses, and assess the quality of the criteria.
APA, Harvard, Vancouver, ISO, and other styles
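The analytic hierarchy process (AHP) step used to weight the criterion map layers can be sketched with the common normalised-column-average approximation of the priority vector. The pairwise judgements below are hypothetical, not the criteria or values from the Ulaanbaatar study.

```python
# Sketch of AHP weighting: normalise each column of the pairwise comparison
# matrix, then average across rows to approximate the priority vector.

def ahp_weights(pairwise):
    """Approximate AHP priority vector from a pairwise comparison matrix."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Hypothetical judgements for three factor criteria, e.g. slope vs. land
# price vs. distance to roads (1 = equal importance, 3/5 = stronger).
matrix = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(matrix)
print([round(x, 3) for x in w])   # weights sum to 1; first factor dominates
```

A sensitivity analysis of the kind the paper applies amounts to perturbing these judgements and checking how much the resulting suitability map, and hence the ranking of sites, changes.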
47

Kang, Min-Sig, and Chong-Won Lee. "Weighted Minimum Variance Control of Servo-Damper System With Input Energy Constraint." Journal of Dynamic Systems, Measurement, and Control 110, no. 1 (March 1, 1988): 70–77. http://dx.doi.org/10.1115/1.3152651.

Full text
Abstract:
The weighted minimum variance control is extended to include the rational transfer function as well as the polynomial as the weighting on system input energy, so as to guarantee closed-loop stability over the whole range of the scalar input cost and the monotonic dependence of the controlled output and control input variances on the scalar input cost. In addition, the Robbins-Monro scheme is modified for on-line adaptation of the scalar input cost under time-varying external disturbance to satisfy the constraint on input variance. The two schemes are then combined and applied to the vibration control of a servo-damper beam system. The experimental and simulation results show that the maximum usage of the available input energy is ensured in the presence of time-varying external disturbance, achieving good control performance and demonstrating the effectiveness of the servo-damper system for suppressing beam vibration.
APA, Harvard, Vancouver, ISO, and other styles
48

Hu, Wenyi, Aria Abubakar, and Tarek M. Habashy. "Joint electromagnetic and seismic inversion using structural constraints." GEOPHYSICS 74, no. 6 (November 2009): R99–R109. http://dx.doi.org/10.1190/1.3246586.

Full text
Abstract:
We have developed a frequency-domain joint electromagnetic (EM) and seismic inversion algorithm for reservoir evaluation and exploration applications. EM and seismic data are jointly inverted using a cross-gradient constraint that enforces structural similarity between the conductivity image and the compressional wave (P-wave) velocity image. The inversion algorithm is based on a Gauss-Newton optimization approach. Because of the ill-posed nature of the inverse problem, regularization is used to constrain the solution. The multiplicative regularization technique selects the regularization parameters automatically, improving the robustness of the algorithm. A multifrequency data-weighting scheme prevents the high-frequency data from dominating the inversion process. When the joint-inversion algorithm is applied in integrating marine controlled-source electromagnetic data with surface seismic data for subsea reservoir exploration applications and in integrating crosswell EM and sonic data for reservoir monitoring and evaluation applications, results improve significantly over those obtained from separate EM or seismic inversions.
APA, Harvard, Vancouver, ISO, and other styles
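The cross-gradient constraint that enforces structural similarity between the conductivity and P-wave velocity images can be sketched in two dimensions: the measure t = ∇m1 × ∇m2 vanishes when the two model gradients are parallel, i.e. when the images share structure. The tiny grids and the forward-difference scheme below are illustrative assumptions, not the paper's discretization.

```python
# Sketch of the cross-gradient structural-similarity measure between two
# model grids: zero where the gradients align, nonzero where they conflict.

def gradient2d(m, i, j):
    """Forward-difference gradient of grid m at cell (i, j)."""
    gx = m[i + 1][j] - m[i][j]
    gy = m[i][j + 1] - m[i][j]
    return gx, gy

def cross_gradient(m1, m2, i, j):
    """z-component of the cross product of the two model gradients."""
    g1x, g1y = gradient2d(m1, i, j)
    g2x, g2y = gradient2d(m2, i, j)
    return g1x * g2y - g1y * g2x

# Two models sharing the same structure (m2 is a scaled copy of m1):
m1 = [[0, 1], [2, 3]]
m2 = [[0, 10], [20, 30]]
print(cross_gradient(m1, m2, 0, 0))   # 0: gradients are parallel

# A model with conflicting structure yields a nonzero penalty:
m3 = [[0, 2], [1, 0]]
print(cross_gradient(m1, m3, 0, 0))
```

In the joint inversion, this quantity is driven toward zero everywhere, which couples the two images without forcing any fixed petrophysical relation between conductivity and velocity.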
49

Rodrigues, Lucas Lima, J. Sebastian Solis-Chaves, Omar A. C. Vilcanqui, and Alfeu J. Sguarezi Filho. "Predictive Incremental Vector Control for DFIG With Weighted-Dynamic Objective Constraint-Handling Method-PSO Weighting Matrices Design." IEEE Access 8 (2020): 114112–22. http://dx.doi.org/10.1109/access.2020.3003285.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Eryang, Ruichun Chang, Ke Guo, Fang Miao, Kaibo Shi, Ansheng Ye, and Jianghong Yuan. "Hyperspectral image spectral-spatial classification via weighted Laplacian smoothing constraint-based sparse representation." PLOS ONE 16, no. 7 (July 13, 2021): e0254362. http://dx.doi.org/10.1371/journal.pone.0254362.

Full text
Abstract:
As a powerful tool in hyperspectral image (HSI) classification, sparse representation has gained much attention in recent years owing to its detailed representation of features. In particular, the joint use of spatial and spectral information has been widely applied to HSI classification. However, dealing with the spatial relationship between pixels is a nontrivial task. This paper proposes a new spatial-spectral combined classification method that considers the boundaries of adjacent features in the HSI. Based on the proposed method, a smoothing-constraint Laplacian vector is constructed, which combines the pixel of interest and its four nearest neighbors through their weighting factors. Then, a novel large-block sparse dictionary is developed for simultaneous orthogonal matching pursuit. Our proposed method obtains better HSI classification accuracy on three real HSI datasets than existing spectral-spatial HSI classifiers. Finally, experimental results are presented to verify the effectiveness and superiority of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
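The weighted combination of a pixel with its four nearest neighbours, the core of the smoothing-constraint Laplacian vector described above, can be sketched as follows. The specific weights and the single-band toy "image" are illustrative assumptions; the paper applies the construction per spectral band with weights derived from the data.

```python
# Sketch: damp a pixel toward its 4-neighbourhood with fixed weighting
# factors, so sharp differences across feature boundaries are smoothed.

def laplacian_smooth(img, i, j, w_center=0.5, w_neigh=0.125):
    """Weighted combination of pixel (i, j) and its 4-neighbourhood."""
    centre = img[i][j]
    neighbours = [img[i - 1][j], img[i + 1][j], img[i][j - 1], img[i][j + 1]]
    return w_center * centre + w_neigh * sum(neighbours)

img = [
    [1, 1, 1],
    [1, 9, 1],   # the centre pixel is an outlier (e.g. noise)
    [1, 1, 1],
]
print(laplacian_smooth(img, 1, 1))   # pulled from 9 toward its neighbours
```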