Academic literature on the topic 'Mixed Complementarity Problem (MCP)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Mixed Complementarity Problem (MCP).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Mixed Complementarity Problem (MCP)"

1

Yang, Cheng Wei, Francis Tin-Loi, Sawekchai Tangaramvong, and Wei Gao. "A Complementarity Approach for Elastoplastic Analysis of Frames with Uncertainties." Applied Mechanics and Materials 553 (May 2014): 594–99. http://dx.doi.org/10.4028/www.scientific.net/amm.553.594.

Abstract:
Traditional elastoplastic analysis presumes that all structural data are known exactly. However, these values often cannot be predicted precisely: they are influenced by such factors as manufacturing errors, material defects, and environmental changes. Ignoring these may lead to inaccurate (overly conservative or nonconservative) results. It is thus important that the effects of uncertain data be quantified in a reliable assessment of structural safety. This paper presents the elastoplastic analysis of skeletal frames with uncertainties assumed to be interval quantities (i.e. with known upper and lower bounds). Starting from the well-known mixed complementarity program (MCP) statement of the state problem without uncertainties, we extend this to an optimisation problem formulation involving complementarity constraints (that represent the plastic nature of the ductile material). Both upper and lower bounds of the displacements corresponding to monotonically increasing loads are computed. The final results are checked through a comparison with interval limit analyses and Monte Carlo simulations.
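For orientation, the mixed complementarity problem (MCP) referred to in this and several later entries can be stated in its standard textbook form (our summary, not quoted from the cited paper): given a function F from R^n to R^n and bounds l <= u, find z in the box [l, u] such that, for every component i,

\[
z_i = \ell_i \;\Rightarrow\; F_i(z) \ge 0, \qquad
\ell_i < z_i < u_i \;\Rightarrow\; F_i(z) = 0, \qquad
z_i = u_i \;\Rightarrow\; F_i(z) \le 0 .
\]

Square systems of nonlinear equations (all bounds infinite) and the classical nonlinear complementarity problem (l = 0, u = +infinity) are recovered as special cases, which is why optimality and equilibrium conditions from many different fields can be cast in this single format.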
2

Kuhn, A., and W. Britz. "Can hydro-economic river basin models simulate water shadow prices under asymmetric access?" Water Science and Technology 66, no. 4 (August 1, 2012): 879–86. http://dx.doi.org/10.2166/wst.2012.251.

Abstract:
Hydro-economic river basin models (HERBM) based on mathematical programming are conventionally formulated as explicit ‘aggregate optimization’ problems with a single, aggregate objective function. Often unintended, this format implicitly assumes that decisions on water allocation are made via central planning or functioning markets such as to maximize social welfare. In the absence of perfect water markets, however, individually optimal decisions by water users will differ from the social optimum. Classical aggregate HERBMs cannot simulate that situation and thus might be unable to describe existing institutions governing access to water and might produce biased results for alternative ones. We propose a new solution format for HERBMs, based on the format of the mixed complementarity problem (MCP), where modified shadow price relations express spatial externalities resulting from asymmetric access to water use. This new problem format, as opposed to commonly used linear (LP) or non-linear programming (NLP) approaches, enables the simultaneous simulation of numerous ‘independent optimization’ decisions by multiple water users while maintaining physical interdependences based on water use and flow in the river basin. We show that the alternative problem format allows the formulation of HERBMs that yield more realistic results when comparing different water management institutions.
3

Fang, Ya-ping, and Nan-jing Huang. "The equivalence of upper semi-continuity of the solution map and the R0-condition in the mixed linear complementarity problem." Applied Mathematics Letters 19, no. 7 (July 2006): 667–72. http://dx.doi.org/10.1016/j.aml.2005.08.020.

4

Huang, Chongchao, and Song Wang. "A penalty method for a mixed nonlinear complementarity problem." Nonlinear Analysis: Theory, Methods & Applications 75, no. 2 (January 2012): 588–97. http://dx.doi.org/10.1016/j.na.2011.08.061.

5

Oh, Kong Ping. "The Formulation of the Mixed Lubrication Problem as a Generalized Nonlinear Complementarity Problem." Journal of Tribology 108, no. 4 (October 1, 1986): 598–603. http://dx.doi.org/10.1115/1.3261274.

Abstract:
The mixed-lubrication problem is formulated as a nonlinear generalized complementarity problem in which the pressure acting on the load-bearing surface is taken as the unknown, and the lubricant flow and the gap between the surfaces are taken as its complements. An iterative method was developed to find the solution, which satisfies the complementarity condition that, at each point on the load-bearing surface, the pressure or at least one of its complements is zero at all times. Moreover, the pressure and its complements satisfy non-negativity constraints. The solution intrinsically decomposes the load-bearing surface into three distinct subregions: solid-to-solid contact, hydrodynamically lubricated contact, and no contact (or cavitation). It is shown that the mixed-lubrication formulation degenerates into the special cases of hydrodynamic or solid-to-solid contact under appropriate load and speed conditions. A journal bearing with elastic support is analyzed to illustrate the method of solution. The transition of the lubrication mode from pure hydrodynamic contact to mixed contact is demonstrated.
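As a generic illustration of the pointwise condition described above (our symbols, not the paper's): if p(x) denotes the pressure at a point x of the load-bearing surface and c(x) denotes whichever complement is active there (the surface gap or the lubricant-flow complement), the condition reads

\[
p(x) \ge 0, \qquad c(x) \ge 0, \qquad p(x)\,c(x) = 0 ,
\]

commonly abbreviated as \(0 \le p(x) \perp c(x) \ge 0\). Which of the two factors vanishes at a given point determines whether that point lies in the solid-contact, lubricated, or cavitated subregion.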
6

Khan, Suhel Ahmad, and Naeem Ahmad. "Existence Results for Vector Mixed Quasi-Complementarity Problems." Journal of Mathematics 2013 (2013): 1–6. http://dx.doi.org/10.1155/2013/204348.

Abstract:
We introduce strong vector mixed quasi-complementarity problems and the corresponding strong vector mixed quasi-variational inequality problems. We establish the equivalence between strong mixed quasi-complementarity problems and strong mixed quasi-variational inequality problems in Banach spaces. Further, using the KKM-Fan lemma, we prove the existence of solutions of these problems under a pseudomonotonicity assumption. The results presented in this paper are extensions and improvements of some earlier and recent results in the literature.
7

Sun, Hongchun, and Yiju Wang. "The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity Problem." Journal of Applied Mathematics 2012 (2012): 1–15. http://dx.doi.org/10.1155/2012/219478.

Abstract:
For the extended mixed linear complementarity problem (EMLCP), we first present a characterization of the solution set. Based on this, its global error bound is also established under milder conditions. The results obtained in this paper can be taken as an extension of those for the classical linear complementarity problem.
8

Du, Shouqiang, and Liping Zhang. "A mixed integer programming approach to the tensor complementarity problem." Journal of Global Optimization 73, no. 4 (January 2, 2019): 789–800. http://dx.doi.org/10.1007/s10898-018-00731-4.

9

Tin-Loi, F., and S. H. Xia. "Nonlinear Analysis of Cable-strut Structures as a Mixed Complementarity Problem." International Journal of Space Structures 18, no. 4 (December 2003): 225–34. http://dx.doi.org/10.1260/026635103322987959.

10

Agdeppa, Rhoda P., Nobuo Yamashita, and Masao Fukushima. "The traffic equilibrium problem with nonadditive costs and its monotone mixed complementarity problem formulation." Transportation Research Part B: Methodological 41, no. 8 (October 2007): 862–74. http://dx.doi.org/10.1016/j.trb.2007.04.008.


Dissertations / Theses on the topic "Mixed Complementarity Problem (MCP)"

1

Philip, Jean-Marc. "Dynamique intertemporelle et équilibre général calculable : Une application à l'accord de partenariat économique entre l'Union européenne et le Ghana." Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX24019.

Abstract:
This work analyzes to what extent computable general equilibrium (CGE) models allow a correct assessment of the potential economic impact of the Economic Partnership Agreements (EPAs) between the European Union and the ACP countries. A review of the literature is first conducted, and an intertemporal dynamic CGE model is then built to assess the potential impact of the EPA on a specific country: Ghana. Given that the simulation results vary widely and depend essentially on the model structure and the closure rules chosen by the modeler, this work seeks to highlight the breadth of the range of possible outcomes and the risk of relying on standard neoclassical Walrasian CGE models alone to establish the potential benefits that can be expected from such an agreement for ACP economies.
2

Bassi, Gustavo Ferraresi. "Equilíbrio da expansão de capacidade sob incerteza: um estudo de caso na indústria petroquímica brasileira." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/158443.

Abstract:
Game theory is widely used in the study of strategic interaction, especially in the analysis of commodity markets. This work presents a preliminary analysis of the Brazilian ethylene and propylene market from the perspective of a game-theoretic model, represented mathematically as a mixed complementarity problem. In this model, firms behave as Cournot players and production costs are uncertain parameters represented by scenarios; in equilibrium, three decisions must be made: i) the portfolio of production technologies, ii) the production capacity of each technology, and iii) the production level of each technology in each scenario. Considering the various limitations of the study, the simulations carried out with the proposed model show that the behavior of the agents of the Brazilian petrochemical industry is closer to that of price takers, without the possibility of regulating prices through the quantities produced.
3

Celebi, Emre. "Models of Efficient Consumer Pricing Schemes in Electricity Markets." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/811.

Abstract:
Suppliers in competitive electricity markets regularly respond to prices that change hour by hour or even more frequently, but most consumers respond to price changes on a very different time scale, i.e., they observe and respond to changes in price as reflected on their monthly bills. This thesis examines mixed complementarity programming models of equilibrium that can bridge the speed of response gap between suppliers and consumers, yet adhere to the principle of marginal cost pricing of electricity. It develops a computable equilibrium model to estimate the time-of-use (TOU) prices that can be used in retail electricity markets. An optimization model for the supply side of the electricity market, combined with a price-responsive geometric distributed lagged demand function, computes the TOU prices that satisfy the equilibrium conditions. Monthly load duration curves are approximated and discretized in the context of the supplier's optimization model. The models are formulated and solved by the mixed complementarity problem approach. It is intended that the models will be useful (a) in the regular exercise of setting consumer prices (i.e., TOU prices that reflect the marginal cost of electricity) by a regulatory body (e.g., Ontario Energy Board) for jurisdictions (e.g., Ontario) where consumers' prices are regulated, but suppliers offer into a competitive market, (b) for forecasting in markets without price regulation, but where consumers pay a weighted average of wholesale price, (c) in evaluation of the policies regarding time-of-use pricing compared to the single pricing, and (d) in assessment of the welfare changes due to the implementation of TOU prices.
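For readers unfamiliar with how complementarity systems of this kind are solved numerically, the following minimal Python sketch (our own toy illustration; it is unrelated to the thesis model and uses made-up data) finds the solution of a tiny linear complementarity problem 0 <= z, F(z) = Mz + q >= 0, z'F(z) = 0 by driving its Fischer-Burmeister residual to zero with a standard root finder:

import numpy as np
from scipy.optimize import root

# Toy data (assumed for illustration): M is positive definite, so a unique solution exists.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, 1.0])

def F(z):
    return M @ z + q

def fb_residual(z):
    # Fischer-Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b:
    # it is zero in a component exactly when a >= 0, b >= 0 and a * b = 0 there.
    a, b = z, F(z)
    return np.sqrt(a**2 + b**2) - a - b

sol = root(fb_residual, x0=np.ones(2), method="hybr")
z = sol.x
print("z =", z)                        # approximately [0.5, 0.0]
print("F(z) =", F(z))                  # approximately [0.0, 1.5]
print("complementarity =", z * F(z))   # approximately [0.0, 0.0]

Production MCP models of the size discussed in these theses are normally handed to dedicated solvers (for example, PATH via GAMS or AMPL) rather than treated this way, but the reformulation idea is the same.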
4

Kim, Jingu. "Nonnegative matrix and tensor factorizations, least squares problems, and applications." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42909.

Abstract:
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite the success, NMF and NTF have been actively developed only in the recent decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms of NMF and NTF are investigated based on a connection from the NMF and the NTF problems to the nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe typical structure of the NLS problems arising in the NMF and the NTF computation and design a fast algorithm utilizing the structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up the NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable for rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve the rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, it is presented that a block principal pivoting strategy can also be applied to the l1-regularized linear regression. The l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by an observation that features or data items that belong to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
5

Medeiros, Angélica Pott. "O mercado de carne de frango brasileiro no contexto dos novos acordos regionais de comércio: Transpacífico e Transatlântico." Universidade Federal de Santa Maria, 2017. http://repositorio.ufsm.br/handle/1/12572.

Abstract:
The developed countries, heavily impacted by the economic and financial crisis of 2008, signaled their recovery with negotiations on two major international trade agreements, the Trans-Pacific Partnership and the Transatlantic Trade and Investment Partnership (TPP and TTIP, respectively). The establishment of trade agreements may minimize the effects of protectionist policies of countries and blocs, eliminating or reducing existing barriers and thus stimulating the increase in trade among the member countries of such agreements. The TPP and TTIP imply the reduction of tariff and non-tariff barriers between member countries, as in the case of the United States. Thus, competition with Brazil will tend to increase for many products, such as chicken meat, for which the United States occupies the first position in world production while Brazil is the largest exporter of the commodity. Against this new trade matrix, the present study aims to examine the possible impacts of the Transpacific and Transatlantic agreements on the Brazilian chicken meat market. The methodology derives from a spatial equilibrium model formulated as a mixed complementarity problem (MCP), based on five alternative scenarios intended to highlight possible changes in the chicken meat market resulting from the implementation of the new trade agreements. The first scenario simulates the formation of the TPP with the reduction of tariff barriers, while the second scenario presupposes the elimination of tariff and non-tariff barriers. Regarding the TTIP, scenarios 3 and 4 assume, respectively, the reduction of tariff barriers and the elimination of tariff barriers and technical constraints; in the fifth scenario, the simultaneous occurrence of the two agreements is considered, through the elimination of tariff and non-tariff barriers. The results indicate that, in general, the implementation of both agreements would invariably bring losses to the Brazilian chicken meat market, particularly in relation to production and, consequently, to producers' prices and surpluses. The most damaging scenarios for Brazil are the formation of the TPP in its broadest form, based on the elimination of tariff and non-tariff barriers, and the simultaneous formation of the two agreements, in which the country shows a net loss in welfare. This underlines the importance of negotiating trade agreements that ensure the industry conditions for expansion and access to new markets, as well as greater rigor in matters related to animal health, inspection and certification, technical aspects that have great potential to distort trade flows internationally.
6

Proença, Sara Isabel Azevedo. "Impact assessment of energy and climate policies: a hybrid bottom-up general equilibrium model (HyBGEM) for Portugal." Doctoral thesis, Instituto Superior de Economia e Gestão, 2013. http://hdl.handle.net/10400.5/6126.

Abstract:
Doctorate in Economics
Climate change mitigation and the imperative of a new sustainable energy paradigm are among the greatest challenges facing the world today, and they are high on the priority list of policy makers as well as within the scientific community. In this context significant efforts are being made in the design and implementation of energy and carbon mitigation policies at both European and national level. Evidence of this can be seen in the recent adoption by the EU of an integrated climate and energy policy that sets ambitious binding targets to be achieved by 2020 – known as the 20-20-20 targets of the EU Climate and Energy Package. Undoubtedly, the cost of these policies can be substantially reduced if a comprehensive impact assessment is made of the most efficient and cost-effective policy measures and technological options. Policy impact assessment therefore plays an important role in supporting the energy and climate decision-making process. This is the context of and motivation for the research presented in this thesis. The first part of the thesis, the conceptual framework, describes the development of the Hybrid Bottom-up General Equilibrium Model (HyBGEM) for Portugal, as a decision-support tool to assist national policy makers in conducting energy and climate policy analysis. HyBGEM is a single integrated, multi-sector, hybrid top-down/bottom-up general equilibrium E3 model formulated as a mixed complementarity problem. The second part of the thesis, the empirical analysis, provides an impact assessment of Portugal's 2020 energy-climate policy targets under the EU Climate and Energy Package commitments, based on the HyBGEM model and the baseline projections previously developed. Five policy scenarios have been modelled and simulated to evaluate the economic, environmental and technological impacts on Portugal of complying with its individual 2020 carbon emissions and renewable energy targets. Furthermore, insights are gained into how these targets interact with each other, what the most efficient and cost-effective policy options are, and how alternative pathways affect the extent of policy-induced effects. The numerical analysis reveals that Portugal's 2020 energy-climate targets can be achieved without significant compliance costs. A major challenge for policy makers is to promote an effective decarbonisation of the electricity generation sector through renewable-based technologies. There is evidence that the compliance costs of Portugal's low carbon target in 2020 are significantly higher than the costs of achieving the national RES-E target, given that imposing carbon emissions constraints and subsidising renewable electricity generation via a feed-in tariff scheme both have a similar impact on economy-wide emissions. This result suggests that the most cost-effective policy option to achieve the national energy-climate targets is to promote renewable power generation technologies, recommending that policy makers should proceed with the mechanisms that support it. The transition to a ‘greener’ economy is thus central to the ongoing fight against climate change. There is also evidence that emission market segmentation as imposed by the current EU-ETS creates substantial excess costs compared to uniform emissions pricing through a comprehensive cap-and-trade system. The economic argument on counterproductive overlapping regulation is not corroborated by the findings. Furthermore, there is no potential for a double dividend arising from environmental tax reforms. To conclude, the results highlight the critical importance of market distortions and revenue-recycling schemes, together with baseline projections, in policy impact assessment.
7

Chakrabarty, Shantanu. "Algorithms for Adjusted Load Flow Solutions using the Complementarity Principle." Thesis, 2016. http://etd.iisc.ac.in/handle/2005/4161.

Abstract:
The state of a given power system, i.e. the voltage magnitudes and angles at all the buses, can be computed using the Newton-Raphson Load Flow (NRLF) method when active power and reactive power loads are specified at all the buses of the system. This computation can be carried out more efficiently (in terms of computer memory and time) using the Fast Decoupled Load Flow (FDLF) method for a large class of systems. These methods are the most widely used methods in power system studies. NRLF and FDLF methods require modifications if the voltage magnitude is specified at some of the generator buses instead of the reactive power load. In such situations, generator reactive power outputs (manipulated by adjusting the field excitation) have to be adjusted to meet this specification. This necessitates the determination of the value of these additional control variables. There are some more similar adjustments that are required to be made in a practical load flow. Sometimes, the voltage at some load buses may be specified. They are to be maintained at the scheduled value using the taps on in-phase transformers (OLTC transformers). Similarly, it is possible in some situations that the active power flow in some lines is specified to be kept at a particular value. The device which facilitates such control is the phase shifting transformer (PST), and the PST tap value is the additional control variable to be determined. The other operation of interest in interconnected power systems is area interchange control (AIC). This requires that the sum of the active power flows between two areas of the system is maintained at the specified value. The control variable that enables this adjustment is the active generation at a particular generator bus in the area, referred to as a swing bus. The load flow problem is referred to as an adjusted load flow problem in cases wherein some of these control variables must also be determined in addition to the state of the system. It must be pointed out here that the control variables must be strictly kept within their limits while bringing the controlled variables to their specified values. If a control variable tends to reach a value beyond its limits, then it is to be set at the limit and the corresponding controlled variable will not be at its scheduled value. Adjusted load flow problems generally involve many control variables of the same type or multiple control variables of different types. The challenge in finding adjusted load flow solutions stems from the fact that the relation between the controlling and controlled variables is not one to one; each controlling variable affects many of the controlled variables. The existing approaches to adjusted load flow solutions generally consider only one type of these adjustments. There are only a very few attempts where more than one type of adjustment is considered. The two broad directions pursued by earlier researchers for developing algorithms for adjusted solutions are (1) introducing additional equations in order to include the control variables (between iterations) and (2) adjusting the controlling variables between unadjusted load flow solution iterations based on the local sensitivity of the controlled variable with respect to a particular controlling variable. The schemes in use for finding adjusted load flow solutions have a flavour of trial and error type algorithms. Their success in any situation is known to depend on specific details of implementation.
Implementation details that guarantee success are not in the public domain. Many times these schemes exhibit oscillatory convergence behaviour requiring a very large number of iterations, or fail to converge. It is also known that in some situations these algorithms can converge to anomalous solutions (solutions that are inconsistent with practical system behaviour). Such limitations of the existing approaches, and the need for developing better methods, are well documented in the literature. Some recent work has shown the promise of formulating the adjusted load flow problem in the complementarity framework, considering a few of the adjustments. This thesis is intended to further explore this promising direction of investigation. In particular, we develop new algorithms in the complementarity framework for the following situations and demonstrate their attractive features as compared with the existing approaches. The following algorithms have been proposed, developed, tested and their performance compared with the existing algorithms: two algorithms for including OLTC adjustments in the FDLF method, as mixed complementarity problem (MCP) and nonlinear complementarity problem (NCP) formulations, further extended to incorporate generator bus Q-limit adjustments simultaneously with the OLTC adjustments; two new algorithms (two each in MCP and NCP formulations) to handle generator Q-limits and OLTC adjustments individually as well as together in the NRLF formulation in rectangular coordinates; four algorithms (two in MCP and two in NCP) to handle PST constraints in the NRLF and FDLF methods; four algorithms (two in MCP and two in NCP) to handle AIC constraints in the NRLF and FDLF methods, with the PST and AIC adjustment algorithms further combined to carry out PST and AIC adjustments simultaneously in both the NRLF and FDLF methods; and four algorithms (two for NRLF and two for FDLF) to incorporate all four adjustments simultaneously using MCP and NCP formulations. These algorithms are also shown to be capable of simultaneously incorporating any subset of these four adjustments. The thesis focusses only on incorporating adjustments in the NRLF and FDLF methods as they are the most widely used schemes in the industry as well as academia. It is also pointed out that the investigations here consider the adjustment problem in the traditional framework; hence, none of the power-electronics-based control equipment or modern distributed generation sources are considered. Results of extensive computational experiments are presented and the attractive performance of the new algorithms as compared with the traditional ones is highlighted. All the new algorithms developed here are fundamentally different from the existing adjusted load flow approaches (not based on the complementarity framework) in that they meet the specifications on the system variables and the limits on the controlling variables automatically, without requiring either heuristic algorithmic choices or problem-specific algorithm manipulation, a fairly common feature of all the existing approaches. This extremely desirable feature of the proposed algorithms is due to the fact that the proposed formulations of the adjusted load flow problems in the complementarity framework transform these problems into that of solving a fixed set of non-linear equations.
The results in the thesis provide strong evidence of the promise of the new methods for adoption into the widely used NRLF and FDLF programs so as to make solving the adjusted load flow problem as simple as solving the unadjusted load flow problem.
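For readers unfamiliar with how such limit logic is expressed in this framework, one standard complementarity statement of a generator Q-limit (our notation; the thesis's precise formulations may differ) splits the voltage mismatch at a voltage-controlled bus as V_set - V = s^+ - s^- with s^+, s^- >= 0 and imposes

\[
0 \le s^{+} \perp \left(Q^{\max} - Q_{g}\right) \ge 0, \qquad
0 \le s^{-} \perp \left(Q_{g} - Q^{\min}\right) \ge 0 ,
\]

so the bus holds its scheduled voltage while reactive reserve remains, and is automatically released to the binding limit otherwise; no heuristic PV/PQ switching between iterations is needed.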
8

Kumar, Shailendra. "Vertical Structure of Convective Clouds Using the TRMM PR Data." Thesis, 2016. http://etd.iisc.ac.in/handle/2005/4290.

Abstract:
The very small fractional area (0.1%) occupied by cumulonimbus (Cb) clouds belies their importance in Earth's hydrological cycle and climate. For example, Riehl and Malkus (1958) estimated that the vertical transport of energy needed for the global energy balance can be accomplished by 1500 to 5000 active, undiluted Cb clouds (i.e., hot towers). Cb clouds feed hydrometeors to the anvil cloud region in mesoscale convective systems (MCS). Applications such as the estimation of the vertical profile of latent heating, cumulus parameterizations, satellite rainfall retrievals, inferring the probability of lightning, etc., require information on the vertical distribution of hydrometeors in convective clouds (e.g., Xu and Zipser, 2012). Knowledge of the vertical structure of Cb clouds near individual cloud scale becomes necessary for validating cloud resolving model results. However, information on the vertical structure of convective clouds at horizontal scales comparable to that of a deep convective cloud is not available over most regions in the tropics. The PR provides an unprecedented long time series of data on the 3D structure of precipitating clouds in the tropics. The TRMM PR equivalent radar reflectivity factor (Ze) data product 2A25 version 6 is the main data used in the study. The present thesis work primarily focuses on the properties of convective clouds at the PR pixel scale. TRMM, operational since December 1997, is a non-sun-synchronous satellite with a 35° inclination and samples the tropics several times a day (e.g., Kummerow et al., 1998, 2000). The PR works in Ku band (13.8 GHz or 2.2 cm wavelength), and its scan, consisting of 49 beams, had a width of 215 km when launched and ~250 km after August 2001. The beam width is 0.71°; nearby beams are separated by 0.71°, giving a maximum scan angle of 17° (Kummerow et al., 1998). There are 80 levels in the vertical, each having 250 m resolution, with the lowest level being the Earth's ellipsoid. The height corresponding to different vertical levels in the 2A25 data set is the distance measured along the PR beam. Hence, corrections to pixel heights along different beams have been applied. The present thesis presents the vertical structure of the radar reflectivity factor in tall cumulonimbus towers (CbTs) and intense convective clouds (ICCs) embedded in the South Asian monsoon systems and other tropical deep cloud systems. A CbT is defined referring to a reflectivity threshold of 20 dBZ at 12 km altitude and is at least 9 km thick. ICCs are constructed referring to reflectivity thresholds at 8 km and 3 km altitudes. Cloud properties reported here are based on a 10-year climatology. It is observed that the frequency of occurrence of CbTs is highest over the foothills of the Himalayas, the plains of northern India and Bangladesh, and minimum over the Arabian Sea and the equatorial Indian Ocean west of 90°E. The regional differences depend on the reference height selected, namely, small in the case of CbTs and prominent in the 6–13 km height range for ICCs. Land cells are more intense than the oceanic ones for convective cells defined using the reflectivity threshold at 3 km, whereas land versus ocean contrasts are not observed in the case of CbTs. Compared to cumulonimbus clouds elsewhere in the tropics, the South Asian counterparts have higher reflectivity values above 11 km altitude.
One of the main findings of the present thesis is the close similarity in the average vertical profiles of CbTs and ICCs in the mid and lower troposphere across the ocean basins, while differences over land areas are larger and depend on the reference height selected. The foothills of the Western Himalayas, southeast South America and the Indo-Gangetic Plain contain the most intense CbTs, while equatorial Africa, the foothills of the Western Himalayas and equatorial South America contain the most intense ICCs. The close similarity among the oceanic cells suggests that the development of vigorous convective cells over warm oceans is similar and understanding gained in one region is extendable to other areas. South Asia contains several areas where the seasonal summer monsoon rainfall is influenced by the orography. One of the fundamental questions concerning the orographic rainfall is the nature of the associated precipitating clouds in the absence of synoptic forcing. It is believed that these are shallow and mid-level clouds; however, there is not much information in the literature on their vertical structure. Chapter 4 explores the vertical structure of active shallow (SC) and mid-level clouds (MLC) in Southeast Asia which are associated with orographic features. Shallow and mid-level clouds have been defined such that their tops lie below 4.5 km and between 4.5 and 8 km, respectively. Only those TRMM PR passes are considered for active shallow and mid-level cloud which consist of less than 5% deep cloud (≥ 8 km), compared to shallow cloud (≤ 4.5 km) and mid-level cloud (between 4.5 and 8 km). The reflectivity and height thresholds, with the constraint on the percentage of deep clouds, ensure that only intense, isolated shallow and mid-level clouds away from deep cloud are captured. The Western Ghats contain the highest fraction of the shallow clouds, followed by the adjacent eastern Arabian Sea, while the Khasi hills in Meghalaya and the Cardamom Mountains in Cambodia contain the least fraction of them. Average vertical profiles of shallow clouds are similar in different mountainous areas, while those of mid-level clouds show some differences. Below 3 km, the cloud liquid water content of the mid-level clouds is the highest over the Western Ghats and the eastern Arabian Sea. The average cloud liquid water content increases by 0.19 g m⁻³ for SCs between 3 km and 1.5 km, while the corresponding increase for MLCs is around 0.08 g m⁻³. An MCS has a life cycle consisting of formative, intensifying, mature and dissipating stages. From the maximum projection of reflectivity on the longitude-latitude plane from the 3D reflectivity fields, a CS is defined as the common area of connected pixels with Ze ≥ 17 dBZ and polarization-corrected temperature (PCT) ≤ 250 K, with at least 500 km². A CS is considered in subsequent analysis if its area detected in the Ze projection is at least 50% of its area seen in the PCT imagery. An algorithm is applied to obtain the phase of evolution. The algorithm is based on the average vertical profile of CSs and the reflectivity peak altitude (Hmax). An index, namely the reflectivity difference (RD), together with Hmax, is used to identify the phases of evolution. A close similarity has been observed during different phases in the average vertical profiles as well as in the CFAD. The growing or intensifying stage has the highest reflectivity below 2 km altitude.
The mature phase does not show much variation in Ze below the freezing level, whereas the decaying stage shows the largest regional differences in this layer of the atmosphere. The melting-band signature is most pronounced in the decaying stage. The fraction of convective area decreases as CSs go through their life cycle, except over the Atlantic Ocean during winter.
9

Tseng, Chin-Yi (曾蓁宜). "Data Envelopment Analysis and Stochastic Mixed Complementarity Problem for Solving Nash Equilibrium in Two-level Coal-fired Electricity Market." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/h3j9q3.

Abstract:
Master's thesis
National Cheng Kung University
Institute of Manufacturing Information and Systems
Academic year 106
Under the trend of electricity liberalization, the electricity market becomes more competitive. In fact, due to the energy-type features of electric power systems, even a market with several generation firms can exercise market power if there are limitations such as transmission restrictions caused by geographical location or by political and economic factors. Thus, for some countries such as China, the market structures of electricity generation plants and transmission system operators (TSOs) are still imperfectly competitive, which may lead to productive inefficiency. This study connects coal-fired plants in the upstream market and grid operators in the downstream market, and then investigates the Nash equilibrium, which exhibits "rational inefficiency" but brings more profit to the power industry. We propose data envelopment analysis (DEA) and a mixed complementarity problem (MiCP) to identify the Nash equilibrium in the estimated production possibility set of the two-level electricity market. Two cases of deterministic models based on different assumptions about the exercise of market power are discussed. The directional distance function approach is then applied to estimate the direction toward the Nash equilibrium and to measure efficiency. Additionally, to understand the impacts of the price parameters on the equilibrium results, sensitivity analysis and optimization approaches for uncertainty are employed. This study develops a stochastic mixed complementarity problem for solving the Nash equilibrium with stochastic programming and robust optimization, respectively. An empirical study of the electricity market in the north and northeast regions of China is conducted to validate the proposed models. The results show the optimal equilibrium solution in the two-level electricity market, and the efficiency analysis with respect to the Nash solution provides managerial insights to drive productivity in the two-level electricity market system. The discussion of the parameters reveals that the price in the upstream market has a significant influence on the equilibrium solution, and the analysis of Nash solutions generated from the stochastic mixed complementarity problem illustrates the necessity of modelling with uncertain parameters.
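For orientation, the textbook way a Cournot equilibrium is posed as a complementarity problem (a generic sketch with our own symbols, not the thesis's two-level DEA-based model): with inverse demand P(Q), total output Q equal to the sum of the firms' outputs q_i, and firm cost functions c_i, each firm's first-order condition together with nonnegativity of output gives

\[
0 \;\le\; q_i \;\perp\; c_i'(q_i) - P(Q) - q_i\,P'(Q) \;\ge\; 0 \qquad \text{for every firm } i,
\]

so a firm produces only if its marginal cost equals its marginal revenue, and stays out of the market if marginal cost exceeds marginal revenue at zero output. Stacking these conditions over all firms, together with any market-clearing constraints, yields the mixed complementarity problem handed to a solver.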
10

Stępień, Jakub. "Physics-based animation of articulated rigid body systems for virtual environments." Doctoral dissertation, 2013. https://repolis.bg.polsl.pl/dlibra/docmetadata?showContent=true&id=12190.


Book chapters on the topic "Mixed Complementarity Problem (MCP)"

1

Pae, Hye K. "Conclusion: Convergence or Divergence between the East and the West?" In Literacy Studies, 219–29. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55152-0_12.

Abstract:
This chapter briefly reviews language as a cultural tool and claims written language, or script, to be the influential force that runs cognition and culture. As an extension of the linguistic relativity hypothesis, script relativity is considered to be the engines and underpinnings of our cognition, everyday problem-solving strategies, and overarching culture, as the consequence of brain pathways accommodated through reading. The mixed-script advantage is also discussed. Uni-script use has evolved into the use of bi-scripts or multi-scripts, as in Chinese with Pinyin and Japanese multi-scripts, as well as the recent adoptions of Hindi-English bilinguals' Romanagari, Aralish that is used to supplement Arabic, and the Greeks' additional use of Greeklish. As a result of the co-use of words and images, the adoption of bi-scripts or multi-scripts, and a mixture of digital and paper-based texts, more convergence, as well as a state of complementarity and harmony between the East and the West, is expected. The chapter ends by noting the limitations of the book and offering recommendations.
2

Phaladi, Malefetjane Phineas. "Studying Knowledge Management and Human Resource Management Practices in the State-Owned Entities Using Mixed Methods Research Design." In Handbook of Research on Mixed Methods Research in Information Science, 340–61. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8844-4.ch017.

Abstract:
The chapter demonstrates the lessons learnt from the application of an exploratory sequential design in a mixed methods research project investigating the integration of knowledge management and human resource management practices for the reduction of knowledge loss in the state-owned entities under study. The research design was considered well suited to studying a research problem from different perspectives, offering complementarity and diversity in data collection and analysis. The chapter reveals that qualitative data need to be complemented with statistical or quantitative data to support informed and complete analysis, conclusions, and recommendations, because qualitative data alone lack completeness and generalisability. The chapter presents a framework to design, apply, and evaluate a mixed methods research study so that information scientists, knowledge management researchers, and scholars from other related fields may play a role in its application and development when researching complex phenomena.

Conference papers on the topic "Mixed Complementarity Problem (MCP)"

1

Jiang, Jiamin, and Xian-Huan Wen. "Smooth Formulation for Three-Phase Black-Oil Simulation with Superior Nonlinear Convergence." In SPE Reservoir Simulation Conference. SPE, 2023. http://dx.doi.org/10.2118/212261-ms.

Abstract:
Black-oil simulations with phase changes are challenging because of the complex interactions between the different components and the equilibrium behavior of the phases. The common method for solving this type of nonlinear problem is to use a fully implicit approach. However, the conventional black-oil model can lead to difficulties with convergence using Newton's method. Discontinuities in the discrete system can occur when a phase transition happens, which can lead to oscillations or even failure of the Newton iterations. The goal is to design a smoothing formulation that eliminates any sudden changes in properties or discontinuities that occur during phase transitions. We first employ a compositional formulation based on K-values to describe the standard black-oil model. Next, the coupled system is reformulated such that the discontinuities are carried over to the phase equilibrium model. In this manner, a single, succinct non-smooth equation is obtained, which allows for deriving a smoothing approximation. A mixed complementarity problem (MCP) for phase equilibrium in the area of chemical process modeling served as the foundation for the reformulation. The new formulation is non-intrusive and simple to implement, requiring minor changes to current black-oil simulator frameworks. We analyze and demonstrate that, under the conventional black-oil formulation, phase changes lead to changes in the fluid properties and the discrete system. By comparison, the newly proposed formulation uses a smoothing parameter to ensure smooth transitions of variables between the phase regimes. It also generates unique solutions that are valid for all three phases. Several complex heterogeneous problems are tested. The conventional black-oil model experiences many time-step cuts and wasted nonlinear iterations. On the contrary, the smoothing model exhibits excellent convergence behavior. Overall, the new formulation addresses the convergence issues caused by phase changes, while barely affecting solution results.
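One common way to smooth a complementarity condition of the form min(a, b) = 0, shown here only to illustrate the general idea rather than the authors' specific formulation, is the Chen-Harker-Kanzow-Smale smoothing function

\[
\phi_{\mu}(a, b) \;=\; a + b - \sqrt{(a - b)^{2} + 4\mu^{2}} \;=\; 0 ,
\]

which is smooth for every smoothing parameter \mu > 0 and reduces to 2\min(a, b) = 0, i.e. the exact complementarity condition, as \mu \to 0.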
2

Wu, Jiaoyu. "Mixed-Type Splitting Iterative Method for Linear Complementarity Problem." In 2009 International Joint Conference on Computational Sciences and Optimization, CSO. IEEE, 2009. http://dx.doi.org/10.1109/cso.2009.224.

3

Wang, Guoqiang, Xinzhong Cai, and Yujing Yue. "A New Polynomial Interior-Point Algorithm for Monotone Mixed Linear Complementarity Problem." In 2008 Fourth International Conference on Natural Computation. IEEE, 2008. http://dx.doi.org/10.1109/icnc.2008.245.

4

Kim, Youngdae, and Kibaek Kim. "A mixed complementarity problem approach for steady-state voltage and frequency stability analysis." In 2022 IEEE Power & Energy Society General Meeting (PESGM). IEEE, 2022. http://dx.doi.org/10.1109/pesgm48719.2022.9916803.

5

Pirnia, M., C. A. Canizares, K. Bhattacharya, and A. Vaccaro. "An Affine Arithmetic method to solve the stochastic power flow problem based on a mixed complementarity formulation." In 2012 IEEE Power & Energy Society General Meeting. New Energy Horizons - Opportunities and Challenges. IEEE, 2012. http://dx.doi.org/10.1109/pesgm.2012.6345100.

6

Bhalerao, Kishor D., Kurt S. Anderson, and Jeffery C. Trinkle. "A Recursive Hybrid Time-Stepping Scheme for Intermittent Contact in Multi-Rigid-Body Dynamics." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-86774.

Abstract:
This paper describes a novel method for the modeling of intermittent contact in multi-rigid-body problems. We use a complementarity-based time-stepping scheme in Featherstone's Divide and Conquer framework to efficiently model the unilateral and bilateral constraints in the system. The time-stepping scheme relies on impulse-based equations and does not require explicit collision detection. A set of complementarity conditions is used to model the interpenetration constraint, and a linearized friction cone is used to yield a linear complementarity problem. The Divide and Conquer framework ensures that the size of the resulting mixed linear complementarity problem is independent of the number of bilateral constraints in the system. This makes the proposed method especially efficient for systems where the number of bilateral constraints is much greater than the number of unilateral constraints. The method is demonstrated by applying it to a falling 3D double pendulum.
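As a generic sketch of such a linearized friction model (standard in the complementarity-based contact literature; the symbols are ours and details vary between papers): with normal impulse p_n, tangential impulses \beta \ge 0 along a finite set of directions collected in a matrix D, relative contact velocity v, friction coefficient \mu, a vector of ones e, and an auxiliary multiplier \lambda,

\[
0 \le p_n \perp g_n \ge 0, \qquad
0 \le \beta \perp \lambda e + D^{\mathsf T} v \ge 0, \qquad
0 \le \lambda \perp \mu p_n - e^{\mathsf T} \beta \ge 0 ,
\]

where g_n is the non-penetration measure (a gap or a post-step normal velocity, depending on the scheme). Assembling these conditions with the discretized equations of motion yields the mixed linear complementarity problem solved at each time step.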
7

Enzenhöfer, Andreas, Albert Peiret, Marek Teichmann, and József Kövecses. "A Unit-Consistent Error Measure for Mixed Linear Complementarity Problems in Multibody Dynamics With Contact." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85232.

Abstract:
Modeling multibody systems subject to unilateral contacts and friction efficiently is challenging, and dynamic formulations based on the mixed linear complementarity problem (MLCP) are commonly used for this purpose. The accuracy of the MLCP solution method can be evaluated by determining the error introduced by it. In this paper, we find that commonly used MLCP error measures suffer from unit inconsistency leading to the error lacking any physical meaning. We propose a unit-consistent error measure which computes energy error components for each constraint dependent on the inverse effective mass and compliance. It is shown by means of a simple example that the unit consistency issue does not occur using this proposed error measure. Simulation results confirm that the error decreases with convergence toward the solution. If a pivoting algorithm does not find a solution of the MLCP due to an iteration limit, e.g. in real-time simulations, choosing the result with the least error can reduce the risk of simulation instabilities.
8

Chakraborty, Nilanjan, Stephen Berard, Srinivas Akella, and Jeff Trinkle. "An Implicit Time-Stepping Method for Quasi-Rigid Multibody Systems With Intermittent Contact." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35526.

Abstract:
We recently developed a time-stepping method for simulating rigid multi-body systems with intermittent contact that is implicit in the geometric information [1]. In this paper, we extend this formulation to quasi-rigid or locally compliant objects, i.e., objects with a rigid core surrounded by a compliant layer, similar to Song et al. [2]. The difference in our compliance model from existing quasi-rigid models is that, based on physical motivations, we assume the compliant layer has a maximum possible normal deflection beyond which it acts as a rigid body. Therefore, we use an extension of the Kelvin-Voigt (i.e. linear spring-damper) model for obtaining the normal contact forces by incorporating the thickness of the compliant layer explicitly in the contact model. We use the Kelvin-Voigt model for the tangential forces and assume that the contact forces and moment satisfy an ellipsoidal friction law. We model each object as an intersection of convex inequalities and write the contact constraint as a complementarity constraint between the contact force and a distance function dependent on the closest points and the local deformation of the body. The closest points satisfy a system of nonlinear algebraic equations and the resultant continuous model is a Differential Complementarity Problem (DCP). This enables us to formulate a geometrically implicit time-stepping scheme for solving the DCP which is more accurate than a geometrically explicit scheme. The discrete problem to be solved at each time-step is a mixed nonlinear complementarity problem.
9

Xie, Jiayin, and Nilanjan Chakraborty. "Rigid Body Dynamic Simulation With Multiple Convex Contact Patches." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85429.

Abstract:
We present a principled method for dynamic simulation of rigid bodies in intermittent contact with each other where the contact is assumed to be a non-convex contact patch that can be modeled as a union of convex patches. The prevalent assumption in simulating rigid bodies undergoing intermittent contact with each other is that the contact is a point contact. In recent work, we introduced an approach to simulate contacting rigid bodies with convex contact patches (line and surface contact). In this paper, for non-convex contact patches modeled as a union of convex patches, we formulate a discrete-time mixed complementarity problem where we solve the contact detection and integration of the equations of motion simultaneously. Thus, our method is a geometrically-implicit method and we prove that in our formulation, there is no artificial penetration between the contacting rigid bodies. We solve for the equivalent contact point (ECP) and contact impulse of each contact patch simultaneously along with the state, i.e., configuration and velocity of the objects. We provide empirical evidence to show that if the number of contact patches between two objects is less than or equal to three, the state evolution of the bodies is unique, although the contact impulses and ECP may not be unique. We also present simulation results showing that our method can seamlessly capture transition between different contact modes like non-convex patch to point (or line contact) and vice-versa during simulation.