Selected scientific literature on the topic "Expected-rate constraints"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Expected-rate constraints".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if one is available in the metadata.

Journal articles on the topic "Expected-rate constraints":

1

Guan, Jinying, Jiawei He, Sisi Peng, and Tiantian Xue. "Comparisons to Investment Portfolios under Markowitz Model and Index Model based on US’s Stock Market". BCP Business & Management 26 (September 19, 2022): 905–15. http://dx.doi.org/10.54691/bcpbm.v26i.2053.

Abstract:
In order to construct an investment portfolio, it is crucial to select risky assets and assign weights to each asset. This article selects 6 stocks to compose the portfolio and compares their performance under 5 constraints that are commonly encountered in practice, using the Markowitz Model and the Index Model. These two models produce different investment portfolios because they take different characteristics of the stocks into account. By calculating the maximal return rate, determined by the Sharpe ratio, and the minimal risk rate, determined by the standard deviation, we compare the two models and conclude which model is more suitable under each constraint. According to the results, the Markowitz model shows that under certain constraints, investors' portfolio selection can be simplified to balancing two factors, namely the expected return and the variance of the portfolio. In the case of the Index Model, the conclusion is more general and regular. The results play a significant role in assessing the stocks' future performance and help investors construct their portfolios under different constraints.
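The two quantities these models trade off can be made concrete with a small numerical sketch. The snippet below uses invented expected returns and covariances (not the six stocks from the article) to compute a portfolio's expected return, variance, and Sharpe ratio, the objects that the Markowitz and Index models optimize under constraints.

```python
import numpy as np

# Hypothetical annualized expected returns and covariance matrix for three assets
# (illustrative numbers only, not the six stocks analyzed in the article).
mu = np.array([0.08, 0.12, 0.10])            # expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])          # covariance of returns
rf = 0.02                                     # risk-free rate

w = np.array([0.4, 0.3, 0.3])                 # candidate weights (sum to 1)

expected_return = w @ mu                      # E[r_p] = w' mu
variance = w @ cov @ w                        # Var[r_p] = w' Sigma w
sharpe = (expected_return - rf) / np.sqrt(variance)

print(f"E[r_p]={expected_return:.4f}  sd={np.sqrt(variance):.4f}  Sharpe={sharpe:.3f}")
```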
2

Kundu, Soumita, Tripti Chakrabarti, and Dipak Kumar Jana. "Optimal Manufacturing-Remanufacturing Production Policy for a Closed-Loop Supply Chain under Fill Rate and Budget Constraint in Bifuzzy Environments". International Journal of Mathematics and Mathematical Sciences 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/690435.

Abstract:
We study a closed-loop supply chain involving a manufacturing facility and a remanufacturing facility. The manufacturer satisfies stochastic market demand by remanufacturing the used product into an “as-new” one and by producing new products from raw material, in the remanufacturing facility and the manufacturing facility, respectively. The remanufacturing cost depends on the quality of the used product. The problem is to maximize the manufacturer’s expected profit by jointly determining the collected quantity of used product and the ordered quantity of raw material. Following that, we analyze the model with a fill rate constraint and a budget constraint separately, and then with both constraints. Next, to handle the imprecise nature of some parameters of the model, we develop the model with both constraints in a bifuzzy environment. Finally, numerical examples are presented to illustrate the models. A sensitivity analysis is also conducted to generate managerial insight.
3

Kiendrebeogo, R. Weizmann, Amanda M. Farah, Emily M. Foley, Abigail Gray, Nina Kunert, Anna Puecher, Andrew Toivonen et al. "Updated Observing Scenarios and Multimessenger Implications for the International Gravitational-wave Networks O4 and O5". Astrophysical Journal 958, no. 2 (November 21, 2023): 158. http://dx.doi.org/10.3847/1538-4357/acfcb1.

Abstract:
Advanced LIGO and Virgo's third observing run brought another binary neutron star (BNS) merger and the first neutron star-black hole mergers. While no confirmed kilonovae were identified in conjunction with any of these events, continued improvements of analyses surrounding GW170817 allow us to project constraints on the Hubble Constant (H0), the Galactic enrichment from r-process nucleosynthesis, and ultra-dense matter possible from forthcoming events. Here, we describe the expected constraints based on the latest expected event rates from the international gravitational-wave network and analyses of GW170817. We show the expected detection rate of gravitational waves and their counterparts, as well as how sensitive potential constraints are to the observed numbers of counterparts. We intend this analysis as support for the community when creating scientifically driven electromagnetic follow-up proposals. During the next observing run O4, we predict an annual detection rate of electromagnetic counterparts from BNS mergers of 0.43 (−0.26, +0.58) for the Zwicky Transient Facility and 1.97 (−1.2, +2.68) for the Rubin Observatory.
4

Wen, Yuzhen, and Chuancun Yin. "Optimal Expected Utility of Dividend Payments with Proportional Reinsurance under VaR Constraints and Stochastic Interest Rate". Journal of Function Spaces 2020 (August 11, 2020): 1–13. http://dx.doi.org/10.1155/2020/4051969.

Abstract:
In this paper, we consider the problem of maximizing the expected discounted utility of dividend payments for an insurance company, taking into account the time value of ruin. We assume the preference of the insurer is of the CRRA form. The discounting factor is modeled as a geometric Brownian motion. We introduce VaR control levels for the insurer to control its loss in reinsurance strategies. By solving the corresponding Hamilton-Jacobi-Bellman equation, we obtain the value function and the corresponding optimal strategy. Finally, we provide some numerical examples to illustrate the results and analyze the effect of the VaR control levels on the optimal strategy.
5

Cameron, Richard. "Ambiguous agreement, functional compensation, and nonspecific tú in the Spanish of San Juan, Puerto Rico, and Madrid, Spain". Language Variation and Change 5, no. 3 (October 1993): 305–34. http://dx.doi.org/10.1017/s0954394500001526.

Abstract:
Richness of subject-verb agreement is implicit in the functional compensation interpretation of variable second person /-s/ in Puerto Rican Spanish (PRS). Because /-s/ is not variable in Madrid Spanish (MS), richer agreement is assumed, and a lower rate of pronominal expression is expected. Central to this interpretation are effects associated with ambiguous marking of person on finite singular verbs. Although an increase of pronominal expression correlates to ambiguous marking for PRS speakers, a similar result has not been reported for MS speakers. Nonetheless, a VARBRUL analysis yields similar weights for this constraint in both dialects. Moreover, ambiguity effects are best understood as constraints on null subject variation that interact with switch reference. Identity of VARBRUL weights for constraints on pronominal and null subject variation in PRS and MS also supports the Constant Rate Hypothesis. However, the two dialects do show a diametrically opposed effect associated with nonspecific tú.
6

Yamani, Ehab, and David Rakowski. "Cash Flow and Discount Rate Risk in the Investment Effect: A Downside Risk Approach". Quarterly Journal of Finance 08, no. 03 (August 7, 2018): 1850002. http://dx.doi.org/10.1142/s2010139218500027.

Abstract:
We examine whether sensitivities to cash flow and discount rate risk in down markets explain the investment effect, in which low-investment stocks earn higher expected returns than high-investment stocks. We show how productivity and financing constraints asymmetrically impact the systematic risk of low-investment and high-investment firms, conditional on market state. Our evidence is consistent with both productivity constraints and financing constraints as explanations for the investment effect, but, contrary to expectations, more when prices are rising than falling.
7

Chava, Sudheer, and Alex Hsu. "Financial Constraints, Monetary Policy Shocks, and the Cross-Section of Equity Returns". Review of Financial Studies 33, no. 9 (November 20, 2019): 4367–402. http://dx.doi.org/10.1093/rfs/hhz140.

Abstract:
We analyze the impact of unanticipated monetary policy changes on the cross-section of U.S. equity returns. Financially constrained firms earn a significantly lower (higher) return following surprise interest rate increases (decreases) as compared to unconstrained firms. This differential return response between constrained and unconstrained firms appears after a delay of 3 to 4 days. Further, unanticipated Federal funds rate increases are associated with a larger decrease in expected cash flow news, but not discount rate news, for constrained firms relative to unconstrained firms. The authors have furnished an Internet Appendix, which is available on the Oxford University Press Web site next to the link to the final published paper online.
8

Marafico, Sullivan, Jonathan Biteau, Antonio Condorelli, Olivier Deligny, and Quentin Luce. "Observational constraints on accelerators of ultra-high energy cosmic rays". Journal of Physics: Conference Series 2429, no. 1 (February 1, 2023): 012012. http://dx.doi.org/10.1088/1742-6596/2429/1/012012.

Abstract:
We explore two generic hypotheses for tracing the sources of ultra-high energy cosmic rays (UHECRs) in the Universe: star formation rate density or stellar mass density. For each scenario, we infer a set of constraints for the emission mechanisms in the accelerators, for their energetics and for the abundances of elements at escape from their environments. From these constraints, we generate sky maps above 40 EeV expected from a catalog that comprises 410,761 galaxies out to 350 Mpc and provides a near-infrared flux-limited sample to map both stellar mass and star formation rate over the full sky. Considering a scenario of intermittent sources hosted in every galaxy, we show that the main features observed in arrival directions of UHECRs can in turn constrain the burst rate of the sources, provided that magnetic-horizon effects are at play in clusters of galaxies.
9

Yang, Liu, Yao Xiong, and Xiao-jiao Tong. "A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation". Mathematical Problems in Engineering 2017 (2017): 1–7. http://dx.doi.org/10.1155/2017/5681502.

Abstract:
We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we equivalently convert it into a one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can get the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
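The SAA step itself is easy to illustrate. The sketch below approximates an expected cost by its sample mean over simulated demand scenarios and minimizes that surrogate; the newsvendor-style cost and all parameters are assumptions chosen for illustration, not the paper's supply-chain model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
demand = rng.lognormal(mean=3.0, sigma=0.5, size=5_000)   # sampled demand scenarios
c_over, c_under = 1.0, 4.0                                # assumed overage/underage unit costs

def saa_cost(q):
    """Sample average approximation of the expected cost at order quantity q."""
    return np.mean(c_over * np.maximum(q - demand, 0.0)
                   + c_under * np.maximum(demand - q, 0.0))

res = minimize_scalar(saa_cost, bounds=(0.0, demand.max()), method="bounded")
print(f"SAA-optimal order quantity: {res.x:.2f}, approximate expected cost: {res.fun:.2f}")
```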
10

Sekine, Jun. "A note on long-term optimal portfolios under drawdown constraints". Advances in Applied Probability 38, no. 3 (September 2006): 673–92. http://dx.doi.org/10.1239/aap/1158684997.

Abstract:
The maximization of the long-term growth rate of expected utility is considered under drawdown constraints. In a general situation, the value and the optimal strategy of the problem are related to those of another ‘standard’ risk-sensitive-type portfolio optimization problem. Furthermore, an upside-chance maximization problem of a large deviation probability is stated as a ‘dual’ optimization problem. As an example, a ‘linear-quadratic’ model is studied in detail: the conditions to ensure the solvabilities of the problems are discussed and explicit expressions for the solutions are presented.

Theses on the topic "Expected-rate constraints":

1

Hamad, Mustapha. "Sharing resources for enhanced distributed hypothesis testing". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT029.

Abstract:
Distributed hypothesis testing has many applications in security, health monitoring, automotive control, and anomaly detection. With the help of distributed sensors, the decision centers (DCs) in such systems aim to distinguish between a normal situation (null hypothesis) and an alert situation (alternative hypothesis). Our focus is on maximizing the exponential decay of the type-II error probabilities (corresponding to missed detections) with increasing numbers of observations, while keeping the type-I error probabilities (corresponding to false alarms) below given thresholds. In this thesis, we assume that different systems or applications share the limited network resources and impose expected-rate constraints on the system's communication links. We characterize the first information-theoretic fundamental limits under expected-rate constraints for multi-sensor multi-DC systems. Our characterization reveals a new tradeoff between the maximum type-II error exponents at the different DCs that stems from the different margins to exploit under expected-rate constraints corresponding to the DCs' different type-I error thresholds. We propose a new multiplexing and rate-sharing strategy to achieve these error exponents. Our strategy also generalizes to any setup with expected-rate constraints, with promising gains compared to the results on the same setup under maximum-rate constraints. The converse proof method that we use to characterize the information-theoretic limits can also be used to derive new strong converse results under maximum-rate constraints. It is even applicable to other problems such as distributed compression or computation.
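The constraint that gives the topic its name is worth writing out. In a generic textbook-style formulation (the notation below is assumed, not copied from the thesis), a maximum-rate constraint bounds the message length for every observation sequence, while an expected-rate constraint only bounds its average, which is what leaves room for the multiplexing strategy described above:

```latex
% Maximum-rate vs. expected-rate constraints on the sensor's message M (notation assumed):
%   maximum rate:  len(M) <= nR for every observation sequence X^n,
%   expected rate: E[len(M)] <= nR, i.e. only the average description length is bounded.
% The figure of merit is the type-II error exponent at a decision center,
% subject to a fixed type-I error threshold epsilon.
\begin{align*}
  \text{maximum-rate constraint:} \quad & \mathrm{len}(M) \le nR \quad \text{for all } X^n,\\
  \text{expected-rate constraint:} \quad & \mathbb{E}\big[\mathrm{len}(M)\big] \le nR,\\
  \text{type-II error exponent:} \quad & \theta = \liminf_{n\to\infty} -\tfrac{1}{n}\log\beta_n
      \quad \text{subject to } \alpha_n \le \epsilon .
\end{align*}
```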
2

Wong, Chung To (Charles). "Applications of constrained non-parametric smoothing methods in computing financial risk". Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/20537/1/Chung_To_Wong_Thesis.pdf.

Abstract:
The aim of this thesis is to improve risk measurement estimation by incorporating extra information in the form of constraint into completely non-parametric smoothing techniques. A similar approach has been applied in empirical likelihood analysis. The method of constraints incorporates bootstrap resampling techniques, in particular, biased bootstrap. This thesis brings together formal estimation methods, empirical information use, and computationally intensive methods. In this thesis, the constraint approach is applied to non-parametric smoothing estimators to improve the estimation or modelling of risk measures. We consider estimation of Value-at-Risk, of intraday volatility for market risk, and of recovery rate densities for credit risk management. Firstly, we study Value-at-Risk (VaR) and Expected Shortfall (ES) estimation. VaR and ES estimation are strongly related to quantile estimation. Hence, tail estimation is of interest in its own right. We employ constrained and unconstrained kernel density estimators to estimate tail distributions, and we estimate quantiles from the fitted tail distribution. The constrained kernel density estimator is an application of the biased bootstrap technique proposed by Hall & Presnell (1998). The estimator that we use for the constrained kernel estimator is the Harrell-Davis (H-D) quantile estimator. We calibrate the performance of the constrained and unconstrained kernel density estimators by estimating tail densities based on samples from Normal and Student-t distributions. We find a significant improvement in fitting heavy tail distributions using the constrained kernel estimator, when used in conjunction with the H-D quantile estimator. We also present an empirical study demonstrating VaR and ES calculation. A credit event in financial markets is defined as the event that a party fails to pay an obligation to another, and credit risk is defined as the measure of uncertainty of such events. Recovery rate, in the credit risk context, is the rate of recuperation when a credit event occurs. It is defined as Recovery rate = 1 - LGD, where LGD is the rate of loss given default. From this point of view, the recovery rate is a key element both for credit risk management and for pricing credit derivatives. Only the credit risk management is considered in this thesis. To avoid strong assumptions about the form of the recovery rate density in current approaches, we propose a non-parametric technique incorporating a mode constraint, with the adjusted Beta kernel employed to estimate the recovery density function. An encouraging result for the constrained Beta kernel estimator is illustrated by a large number of simulations, as genuine data are very confidential and difficult to obtain. Modelling high frequency data is a popular topic in contemporary finance. The intraday volatility patterns of standard indices and market-traded assets have been well documented in the literature. They show that the volatility patterns reflect the different characteristics of different stock markets, such as double U-shaped volatility pattern reported in the Hang Seng Index (HSI). We aim to capture this intraday volatility pattern using a non-parametric regression model. In particular, we propose a constrained function approximation technique to formally test the structure of the pattern and to approximate the location of the anti-mode of the U-shape. We illustrate this methodology on the HSI as an empirical example.
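For the quantile step, the Harrell-Davis estimator is available directly in SciPy. The snippet below estimates a 99% loss quantile (a VaR proxy) and an Expected Shortfall proxy from simulated Student-t returns; it uses simulated data and a plain, unconstrained estimator, not the thesis's biased-bootstrap constrained procedure.

```python
import numpy as np
from scipy.stats import t
from scipy.stats.mstats import hdquantiles

rng = np.random.default_rng(1)
returns = 0.01 * t.rvs(df=4, size=2_000, random_state=rng)  # heavy-tailed daily returns
losses = -returns

# Harrell-Davis (H-D) estimate of the 99% loss quantile, used here as a one-day VaR proxy.
var_99 = float(hdquantiles(losses, prob=[0.99])[0])

# Expected Shortfall proxy: average loss beyond the estimated VaR.
es_99 = losses[losses >= var_99].mean()
print(f"VaR(99%) ~ {var_99:.4f}, ES(99%) ~ {es_99:.4f}")
```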
3

Wong, Chung To (Charles). "Applications of constrained non-parametric smoothing methods in computing financial risk". Queensland University of Technology, 2008. http://eprints.qut.edu.au/20537/.

Abstract:
The aim of this thesis is to improve risk measurement estimation by incorporating extra information in the form of constraint into completely non-parametric smoothing techniques. A similar approach has been applied in empirical likelihood analysis. The method of constraints incorporates bootstrap resampling techniques, in particular, biased bootstrap. This thesis brings together formal estimation methods, empirical information use, and computationally intensive methods. In this thesis, the constraint approach is applied to non-parametric smoothing estimators to improve the estimation or modelling of risk measures. We consider estimation of Value-at-Risk, of intraday volatility for market risk, and of recovery rate densities for credit risk management. Firstly, we study Value-at-Risk (VaR) and Expected Shortfall (ES) estimation. VaR and ES estimation are strongly related to quantile estimation. Hence, tail estimation is of interest in its own right. We employ constrained and unconstrained kernel density estimators to estimate tail distributions, and we estimate quantiles from the fitted tail distribution. The constrained kernel density estimator is an application of the biased bootstrap technique proposed by Hall & Presnell (1998). The estimator that we use for the constrained kernel estimator is the Harrell-Davis (H-D) quantile estimator. We calibrate the performance of the constrained and unconstrained kernel density estimators by estimating tail densities based on samples from Normal and Student-t distributions. We find a significant improvement in fitting heavy tail distributions using the constrained kernel estimator, when used in conjunction with the H-D quantile estimator. We also present an empirical study demonstrating VaR and ES calculation. A credit event in financial markets is defined as the event that a party fails to pay an obligation to another, and credit risk is defined as the measure of uncertainty of such events. Recovery rate, in the credit risk context, is the rate of recuperation when a credit event occurs. It is defined as Recovery rate = 1 - LGD, where LGD is the rate of loss given default. From this point of view, the recovery rate is a key element both for credit risk management and for pricing credit derivatives. Only the credit risk management is considered in this thesis. To avoid strong assumptions about the form of the recovery rate density in current approaches, we propose a non-parametric technique incorporating a mode constraint, with the adjusted Beta kernel employed to estimate the recovery density function. An encouraging result for the constrained Beta kernel estimator is illustrated by a large number of simulations, as genuine data are very confidential and difficult to obtain. Modelling high frequency data is a popular topic in contemporary finance. The intraday volatility patterns of standard indices and market-traded assets have been well documented in the literature. They show that the volatility patterns reflect the different characteristics of different stock markets, such as double U-shaped volatility pattern reported in the Hang Seng Index (HSI). We aim to capture this intraday volatility pattern using a non-parametric regression model. In particular, we propose a constrained function approximation technique to formally test the structure of the pattern and to approximate the location of the anti-mode of the U-shape. We illustrate this methodology on the HSI as an empirical example.

Books on the topic "Expected-rate constraints":

1

Jappelli, Tullio, and Luigi Pistaferri. Lifetime Uncertainty. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199383146.003.0011.

Abstract:
Lifetime uncertainty represents an additional risk that affects intertemporal choice, because consumers may live longer than expected and run the risk of exhausting the resources accumulated for retirement. Lifetime uncertainty introduces an incentive to consume earlier in life because consumers discount future utility at a higher rate. Second, since in each period there is some positive probability that the consumer will not survive to the next period, the terminal condition on wealth corresponds effectively to a liquidity constraint. Third, with lifetime uncertainty, the decumulation of wealth by the elderly is slower than predicted by the life-cycle model. Finally, the model with lifetime uncertainty generates transfers of wealth across generations even without an express bequest motive, through what we can term involuntary or accidental bequests. The chapter highlights the necessity of accounting for lifetime uncertainty when interpreting empirical age-wealth profiles estimated from microeconomic data.
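The "higher discount rate" point can be stated with the standard consumption Euler equation augmented by a survival probability; the display below is the usual textbook form under those assumptions, with notation chosen here rather than taken from the chapter.

```latex
% Consumption Euler equation with per-period survival probability p_{t+1} < 1,
% discount factor beta, and gross return (1 + r):
%   u'(c_t) = beta * p_{t+1} * (1 + r) * E_t[u'(c_{t+1})].
% The effective discount factor beta * p_{t+1} is below beta, which is the
% higher effective discounting that tilts consumption toward earlier ages.
\begin{equation*}
  u'(c_t) = \beta\, p_{t+1}\,(1+r)\,\mathbb{E}_t\!\left[u'(c_{t+1})\right],
  \qquad \beta\, p_{t+1} < \beta .
\end{equation*}
```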

Book chapters on the topic "Expected-rate constraints":

1

Sandström, Rolf. "Creep with Low Stress Exponents". In Basic Modeling and Theory of Creep of Metallic Materials, 83–114. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-49507-6_5.

Abstract:
Primary creep models predict that at low stresses a stress exponent of 1 can be obtained for dislocation creep. This has also been observed experimentally for an austenitic stainless steel. The time dependence of the primary creep verifies that it is dislocation creep. Another example is Al at very high temperatures (Harper-Dorn creep), where at sufficiently low stresses the stress exponent approaches 1. For both materials, higher stresses give larger stress exponents, as expected for dislocation creep. Obviously, diffusion and dislocation creep can be competing processes. The validity of creep models at low stresses and high temperatures as well as at high stresses and low temperatures demonstrates their wide range of usage. Since this in reality represents an extensive extrapolation, it can be considered a direct verification of the basic creep models. In the cases of Cu and stainless steels, the creep rate predicted by diffusion creep (Coble) exceeds both the observed creep rate and the rate predicted by dislocation creep by an order of magnitude. The likely explanation is that constrained boundary creep is taking place, i.e. the grain boundary creep rate cannot be essentially faster than that of the bulk.
2

Chaikalis, Costas, and Felip Riera-Palou. "Efficient Receiver Implementation for Mobile Applications". In Handbook of Research on Heterogeneous Next Generation Networking, 271–98. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-108-7.ch013.

Abstract:
Modern and future wireless communication systems such as UMTS and beyond-3G (B3G) systems are expected to support very high data rates to/from mobile users. This poses important challenges for handset design, as handsets should be able to attain an acceptable operating bit error rate (BER) while employing a limited set of resources (i.e. low complexity, low power) and, often, under tight processing delay constraints. In this chapter we study how channel decoding and equalisation, two widely used mechanisms to combat the deleterious channel effects, can be made adaptable in accordance with the instantaneous operating environment. Simulation results are given demonstrating how receiver reconfigurability is a promising method to achieve complexity/delay efficient receivers while maintaining prescribed quality of service (QoS) constraints.
3

Keerthivasan K, and Shibu S. "5G Wireless/Wired Convergence of UFMC Based Modulation for Intensity Modulation Direct Detection". In Advances in Parallel Computing. IOS Press, 2021. http://dx.doi.org/10.3233/apc210094.

Abstract:
Faster data speeds, shorter end-to-end latencies, improved end-user service efficiency, and a wider range of multimedia applications are expected with the new 5G wireless services. The dramatic increase in the number of base stations required to meet these criteria, which undermines the low-cost constraints imposed by operators, demonstrates the need for a paradigm shift in modern network architecture. Alternative formats will be required for next-generation architectures, where simplicity is the primary goal. The number of connections is expected to increase rapidly, breaking the inherent complexity of traditional coherent solutions and lowering the resulting cost percentage. A novel implementation model is used to migrate complex modulation structures in a highly efficient and cost-effective manner. Theoretical work analysing the modulations' behavior over a wired/fiber setup and in wireless mode is also provided, highlighting the state-of-the-art computational complexity, simplicity, and ease of execution while maintaining throughput efficiency and bit error rate.
4

Cyr, Hélène. "Individual Energy Use and the Allometry of Population Density". In Scaling in Biology, 267–96. Oxford University Press, New York, NY, 2000. http://dx.doi.org/10.1093/oso/9780195131413.003.0015.

Abstract:
Physiologists have known for a long time that the rate at which organisms function, for example how fast they eat, grow, or respire, is related exponentially to their body mass (i.e., physiological rate ∝ (body mass)^b). The exponents of these relationships (b) are remarkably constant, around 0.75 [15, 103]. Therefore, physiological rates in an individual (e.g., rates of feeding or growth) increase less than proportionately with increases in its body mass. In other words, per unit biomass (e.g., per kg of organisms) small organisms have higher physiological rates than larger organisms. These so-called allometric relationships are so prevalent that physiologists routinely extract them from their data before pursuing any other analysis. Viewed from an ecological perspective, however, these allometric relationships represent physiological “constraints” on individual organisms which are expected to determine, at least in part, the structure and functioning of populations, communities, and ecosystems. Ecologists are interested in using these physiological allometric relationships to predict the structure, dynamics, and interactions of complex assemblages of organisms in nature.
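A worked instance of the scaling relation makes the "per unit biomass" point explicit; the masses and the resulting factor below are illustrative numbers, not data from the chapter.

```latex
% Allometric scaling: a physiological rate R scales as R = a * M^{0.75}, so the
% mass-specific rate is R/M = a * M^{-0.25}. For a 20 g mouse and a 5000 kg elephant
% (mass ratio 2.5e5), the mass-specific rates differ by (2.5e5)^{0.25}, roughly 22-fold.
\begin{equation*}
  R = a\,M^{0.75}, \qquad \frac{R}{M} = a\,M^{-0.25}, \qquad
  \frac{(R/M)_{\text{mouse}}}{(R/M)_{\text{elephant}}}
    = \left(\frac{M_{\text{elephant}}}{M_{\text{mouse}}}\right)^{0.25}
    \approx \left(2.5\times 10^{5}\right)^{0.25} \approx 22 .
\end{equation*}
```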
5

Belli, Laura, Simone Cirani, Luca Davoli, Gianluigi Ferrari, Lorenzo Melegari, and Marco Picone. "Applying Security to a Big Stream Cloud Architecture for the Internet of Things". In Securing the Internet of Things, 1260–84. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-9866-4.ch057.

Abstract:
The Internet of Things (IoT) is expected to interconnect billions (around 50 billion by 2020) of heterogeneous sensor/actuator-equipped devices denoted as “Smart Objects” (SOs), characterized by constrained resources in terms of memory, processing, and communication reliability. Several IoT applications have real-time and low-latency requirements and must rely on architectures specifically designed to manage gigantic streams of information (in terms of number of data sources and transmission data rate). We refer to “Big Stream” as the paradigm which best fits the selected IoT scenario, in contrast to the traditional “Big Data” concept, which does not consider real-time constraints. Moreover, there are many security concerns related to IoT devices and to the Cloud. In this paper, we analyze security aspects in a novel Cloud architecture for Big Stream applications, which efficiently handles Big Stream data through a Graph-based platform and delivers processed data to consumers with low latency. The authors detail each module defined in the system architecture, describing all refinements required to make the platform able to secure large data streams. An experimental evaluation is also conducted in order to assess the performance of the proposed architecture when integrating security mechanisms.
6

Zverovich, Vadim. "Graph Models for Backbone Sets and Limited Packings in Networks". In Modern Applications of Graph Theory, 213–74. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198856740.003.0004.

Abstract:
Here, a graph-theoretic approach is applied to some problems in networks, for example in wireless sensor networks (WSNs), where some sensor nodes should be selected to behave as a backbone/dominating set to support routing communications in an efficient and fault-tolerant way. Four different types of multiple domination (k-, k-tuple, α- and α-rate domination) are considered, and recent upper bounds for the cardinality of these types of dominating sets are discussed. Randomized algorithms are presented for finding multiple dominating sets whose expected size satisfies the upper bounds. Limited packings in networks are studied, in particular the k-limited packing number. One possible application of limited packings is a secure facility location problem where there is a need to place as many resources as possible in a given network subject to some security constraints. The last section is devoted to two general frameworks for multiple domination: <r,s>-domination and parametric domination. Finally, different threshold functions for multiple domination are considered.
7

Heiser, Willem J., and Jacqueline J. Meulman. "Nonlinear Methods for the Analysis of Homogeneity and Heterogeneity". In Recent Advances In Descriptive Multivariate Analysis, 51–89. Oxford University Press, Oxford, 1995. http://dx.doi.org/10.1093/oso/9780198522850.003.0004.

Abstract:
Although nonlinear phenomena are ubiquitous in science and technology, the majority of methods for analysing statistical data are linear. Part of an answer to the puzzling question of how such a discrepancy could have come into existence would seem to be that first approximations are hard to beat. Statistics often enters the stage when the subject of study is badly understood, and then it is sensible to use linear approximation as a first try. If linear approximation does reasonably well, not infrequently after being aided by quite some effort on data cleaning and readjustment of residuals, the returns from bringing in nonlinearity are expected to be low. Another reason is the fact that linearity is a versatile and universal concept in statistics: a model may predict nonlinear regression (as in curve fitting), yet it will be called linear as long as it is linear in the parameters; an estimate may determine the shape of a nonlinear function (as in density estimation), yet the estimator will be called linear as long as it is linear in the data. In multivariate analysis, we often work in a linear space, with linear operators, under linear constraints, and with computation methods that have a linear convergence rate.
8

Zhai, Yongning, and Weiwei Li. "The Optimal Checkpoint Interval for the Long-Running Application". In Research Anthology on Architectures, Frameworks, and Integration Strategies for Distributed and Cloud Computing, 2590–99. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5339-8.ch125.

Abstract:
For a distributed computing system, excessive or deficient checkpointing operations would result in severe performance degradation. To minimize the expected computation execution time of a long-running application with a general failure distribution, the optimal equidistant checkpoint interval for fault-tolerant performance optimization is analyzed and derived in this paper. More precisely, the optimal checkpointing period to determine the proper checkpoint sequence is proposed, and the derivation of the expected effective rate of the defined computation cycle is introduced. Corresponding to the maximal expected effective rate, the constraint of the optimal checkpoint sequence can be obtained. From the constraint of optimality, the optimal equidistant checkpoint interval can be obtained according to the minimal fault-tolerant overhead ratio. The numerical results show that the proposal is practical for determining a proper equidistant checkpoint interval for fault-tolerant performance optimization.
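For intuition, a widely cited first-order special case (Young's approximation, which assumes memoryless failures and small overheads) ties the equidistant interval to the checkpoint cost and the mean time between failures; it is a simpler formula than the general-failure-distribution optimum derived in the chapter.

```latex
% Young's approximation: with checkpoint overhead delta and mean time between
% failures M (exponential failures), the near-optimal equidistant interval is
%   tau* ~ sqrt(2 * delta * M).
% Example: delta = 30 s, M = 86400 s (24 h)  =>  tau* ~ sqrt(5,184,000) ~ 2277 s, about 38 min.
\begin{equation*}
  \tau^{*} \approx \sqrt{2\,\delta\,M}, \qquad
  \tau^{*} \approx \sqrt{2 \times 30 \times 86400\ \mathrm{s}^2} \approx 2277\ \mathrm{s}.
\end{equation*}
```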
9

Tatar, Marc. "Senescence". In Evolutionary Ecology. Oxford University Press, 2001. http://dx.doi.org/10.1093/oso/9780195131543.003.0015.

Abstract:
At all taxonomic levels, there exists tremendous variation in life expectancy. A field mouse Peromyscus may live 1.2 years, while the African elephant may persist for 60 years, and even a mouse-sized bat such as Corynorhinus rafinesquei lives a healthy 20 years (Promislow 1991). Part of this variance is caused by differences in ecological risks, rodents being perhaps the most susceptible to predation, and to vagaries of climate and resources. Another portion is caused by differences in senescence, the intrinsic degeneration of function that produces progressive decrement in age-specific survival and fecundity. Senescence occurs in natural populations, where it affects life expectancy and reproduction as can be seen, for instance, from the progressive change in age-specific mortality and maternity of lion and baboon in East Africa. The occurrence of senescence and of the widespread variation in longevity presents a paradox: How does the age-dependent deterioration of fitness components evolve under natural selection? The conceptual and empirical resolutions to this problem will be explored in this chapter. We shall see that the force of natural selection does not weigh equally on all ages and that there is therefore an increased chance for genes with late-age-deleterious effects to be expressed. Life histories are expected to be optimized to regulate intrinsic deterioration, and in this way, longevity evolves despite the maladaptive nature of senescence. From this framework, we will then consider how the model is tested, both through studies of laboratory evolution and of natural variation, and through the physiological and molecular dissection of constraints underlying trade-offs between reproduction and longevity. As humans are well aware from personal experience, performance and physical condition progressively deteriorate with adult age. And in us, as well as in many other species, mortality rates progressively increase with cohort age. Medawar (1955), followed by Williams (1957), stated the underlying assumption connecting these events: Senescent decline in function causes a progressive increase in mortality rate. Although mortality may increase episodically across some age classes, such as with increases in reproductive effort, we assume that the continuous increase of mortality across the range of adult ages represents our best estimate of senescence.
10

Deng, Der-Jiunn, Yueh-Ming Huang, and Hsiao-Hwa Chen. "Delay Constrained Admission Control and Scheduling Policy for IEEE 802.11e HCCA Method". In Digital Rights Management, 589–606. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2136-7.ch027.

Abstract:
In order to meet the Quality-of-Service (QoS) requirements of multimedia traffic, the IEEE 802.11 TGe has proposed the HCF Controlled Channel Access (HCCA) method for the Controlled Access Period (CAP) in the HCF (Hybrid Coordination Function) to enhance the original IEEE 802.11 Medium Access Control (MAC) protocol, and it is expected to provide integrated traffic service to realize mobile multimedia communications. However, the reference design of the admission control and scheduling policy in HCCA still cannot meet stringent delay requirements to fulfill a hard QoS guarantee, a necessary feature for most multimedia applications. In this chapter, the authors propose a pragmatic admission control scheme with a novel polling-based packet scheduling policy for multimedia transmission, such as Constant Bit Rate (CBR) and Variable Bit Rate (VBR) traffic, for the IEEE 802.11e HCCA method. The design is simple, compatible with the standard, able to guarantee delay constraints, and it utilizes bandwidth efficiently. A simple and accurate analytical model is developed to study the average queueing delay estimation of the proposed scheme. In addition to the theoretical analysis, simulations are conducted in the NS2 network simulator to verify the analysis and to validate the promising performance of the proposed scheme.

Conference papers on the topic "Expected-rate constraints":

1

Hamad, Mustapha, Michele Wigger, and Mireille Sarkiss. "Optimal Exponents in Cascaded Hypothesis Testing under Expected Rate Constraints". In 2021 IEEE Information Theory Workshop (ITW). IEEE, 2021. http://dx.doi.org/10.1109/itw48936.2021.9611470.

2

Hamad, Mustapha, Michele Wigger, and Mireille Sarkiss. "Two-Hop Network with Multiple Decision Centers under Expected-Rate Constraints". In GLOBECOM 2021 - 2021 IEEE Global Communications Conference. IEEE, 2021. http://dx.doi.org/10.1109/globecom46510.2021.9685750.

3

Jialin He, and Dinesh Rajan. "Expected rate of slow-fading channel with partial CSIT under computational and transmit power constraints". In 2013 IEEE Global Communications Conference (GLOBECOM 2013). IEEE, 2013. http://dx.doi.org/10.1109/glocom.2013.6831352.

4

Li, James C. F., and Subhrakanti Dey. "Optimum Power Allocation for Expected Achievable Rate Maximization with Outage Constraints in Cooperative Relay Networks". In 2009 IEEE Wireless Communications and Networking Conference. IEEE, 2009. http://dx.doi.org/10.1109/wcnc.2009.4917873.

5

Ugarte, Sergio, Mohamad Metghalchi, and James C. Keck. "Methanol Oxidation Induction Times Using the Rate-Controlled Constrained-Equilibrium Method". In ASME 2003 International Mechanical Engineering Congress and Exposition. ASMEDC, 2003. http://dx.doi.org/10.1115/imece2003-55289.

Abstract:
Methanol oxidation has been modeled using the Rate-Controlled Constrained-Equilibrium (RCCE) method. In this method, the composition of the system is determined by constraints rather than by species. Since the number of constraints can be much smaller than the number of species present, the number of rate equations required to describe the time evolution of the system can be considerably reduced. In the present paper, C1 chemistry with 29 species and 140 reactions has been used to investigate the oxidation of a stoichiometric methanol/oxygen mixture at constant energy and volume. Three fixed elemental constraints (elemental carbon, elemental oxygen, and elemental hydrogen) and from one to nine variable constraints (moles of fuel, total number of moles, moles of free oxygen, moles of free valence, moles of fuel radical, moles of formaldehyde H2CO, moles of HCO, moles of CO, and moles of CH3O) were used. The four to twelve rate equations for the constraint potentials (Lagrange multipliers conjugate to the constraints) were integrated for a wide range of initial temperatures and pressures. As expected, the RCCE calculations gave correct equilibrium values in all cases. Only 8 constraints were required to give reasonable agreement with detailed calculations. Results using 9 constraints compared very well with those of the detailed calculations at all conditions. For this system, ignition delay times and major species concentrations were within 0.5% to 5% of the values given by detailed calculations. Adding up to 12 constraints improved the accuracy of the minor species mole fractions at early times, but had only a little effect on the ignition delay times. RCCE calculations reduced the time required for input and output of data by 25% and 10% when using 8 and 9 constraints, respectively. In addition, RCCE calculations gave valuable insight into the important reaction paths and rate-limiting reactions involved in methanol oxidation.
6

Nguyen, Quang, Mustafa Onur, and Faruk Omer Alpak. "Nonlinearly Constrained Life-Cycle Production Optimization Using Sequential Quadratic Programming (SQP) With Stochastic Simplex Approximated Gradients (StoSAG)". In SPE Reservoir Simulation Conference. SPE, 2023. http://dx.doi.org/10.2118/212178-ms.

Abstract:
Life-cycle production optimization is a crucial component of closed-loop reservoir management, referring to optimizing a production-driven objective function by varying well controls during a reservoir's lifetime. When nonlinear state constraints (e.g., field liquid production rate and field gas production rate) at each control step need to be honored, solving a large-scale production optimization problem, particularly under geological uncertainty, becomes significantly challenging. This study presents a stochastic gradient-based framework to efficiently solve a nonlinearly constrained deterministic (based on a single realization of a geological model) or robust (based on multiple realizations of the geological model) production optimization problem. The proposed framework is based on a novel sequential quadratic programming (SQP) method using stochastic simplex approximated gradients (StoSAG). The novelty is due to the implementation of a line-search procedure into the SQP, which we refer to as line-search sequential quadratic programming (LS-SQP). Another variant of the method, called trust-region SQP (TR-SQP), a dual method to LS-SQP, is also introduced. For robust optimization, we couple LS-SQP with two different constraint handling schemes, the expected value constraint scheme and the minimum-maximum (min-max) constraint scheme, to avoid the explicit application of nonlinear constraints for each reservoir model. We provide the basic theoretical development that led to our proposed algorithms and demonstrate their performance in three case studies: a simple synthetic deterministic problem (a two-phase waterflooding model), a large-scale deterministic optimization problem, and a large-scale robust optimization problem, the latter two conducted on the Brugge model. Results show that the LS-SQP and TR-SQP algorithms with StoSAG can effectively handle the nonlinear constraints in a life-cycle production optimization problem. Numerical experiments also confirm similar converged ultimate solutions for both LS-SQP and TR-SQP variants. It has been observed that TR-SQP yields shorter but more safeguarded update steps compared to LS-SQP; however, it requires slightly more objective-function evaluations. We also demonstrate the superiority of these SQP methods over the augmented Lagrangian method (ALM) in a deterministic optimization example. For robust optimization, our results show that the LS-SQP framework with either of the two constraint handling schemes considered effectively handles the nonlinear constraints in a life-cycle robust production optimization problem. However, the expected value constraint scheme results in a higher optimal NPV than the min-max constraint scheme, but at the cost of possible constraint violations for some individual geological realizations.
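The "stochastic simplex" idea of estimating a search direction from random control perturbations can be sketched generically: regress objective differences on the perturbations and use the least-squares fit as a gradient estimate. The toy example below (invented objective, made-up dimensions) is only a schematic of that idea, not the paper's StoSAG/SQP implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def npv(u):
    """Toy smooth objective standing in for a reservoir-simulator NPV evaluation."""
    return -np.sum((u - 0.6) ** 2)

def simplex_gradient(u, n_pert=20, sigma=0.05):
    """Least-squares (simplex) gradient estimate built from random perturbations of u."""
    du = sigma * rng.standard_normal((n_pert, u.size))   # perturbations of the controls
    dj = np.array([npv(u + d) - npv(u) for d in du])     # corresponding objective changes
    g, *_ = np.linalg.lstsq(du, dj, rcond=None)          # solve du @ g ~ dj for g
    return g

u = np.full(10, 0.3)                                     # initial well controls (normalized)
for _ in range(50):                                      # plain projected gradient ascent
    u = np.clip(u + 0.5 * simplex_gradient(u), 0.0, 1.0)
print("final controls:", np.round(u, 3))
```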
7

Mogensen, K., S. Samajpati, L. Kunakbayeva, A. H. Danardatu, and M. AlZaabi. "Detailed Integrated Asset Model Boosts Water Injection Potential in Giant Fields". In SPE Water Lifecycle Management Conference and Exhibition. SPE, 2024. http://dx.doi.org/10.2118/219047-ms.

Abstract:
Integrated asset models for the water injection networks have been developed to complement the optimization of oil production while honoring system constraints and balancing offtake across the reservoirs. A well-calibrated asset model enables operation closer to the system constraints leading to higher water injection rate potential and the ability to better allocate water to the right drainage regions. The integrated model comprises individual well models linked to a surface network model. While the well modeling itself is uncomplicated due to single-phase flow conditions, complications arise when hundreds of wells are tied together via multiple injection clusters linked together to provide optimum routing flexibility. Some of the routing options give rise to pressure imbalances in the network and require special attention to set up and solve. The wells fall into three distinct categories. Category 1 includes wells that belong to injection clusters constrained by pump capacity. These wells may inject into different reservoirs; hence optimization requires redistributing a fixed volume according to voidage replacement ratio considerations. Category 2 consists of wells that inject just below fracturing pressure. Any increase in rate would require stimulation, in this case matrix acidization. The wells are therefore ranked based on the expected rate increase from a fixed injectivity improvement, such as 10, 15 or 25%. The predicted injectivity index improvement is then supplied to the network solver to predict the potential water injection increase at field level, considering all the system constraints. Category 3 comprises wells that should be ramped up after checking for well integrity and verifying the fracturing conditions.
8

Zhainakov, Timur, and Yin Chao Chong. "Application of Linear Programing in Optimisation of Gas Blending Operations". In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211208-ms.

Abstract:
Different fields and wells produce natural gas of different compositions, which directly influence the value of the gas. This gas is then sent to customers depending on their individual specifications. Gases from different origins are sometimes blended to make up for the minimum rate requirement. This study presents a mathematical approach, linear programming, to process big data and generate an optimized routing that solves a rate allocation problem while keeping in place various operational constraints. A base model was created based on a linear programming algorithm as a proof of concept. As in industry, multiple sets of constraints were applied to the model. These constraints include, for instance, the maximum allowable carbon dioxide concentration of the total blend, the minimum amount of gas, or a target ethane concentration for various purposes. Another feature included in the model was the Blend Specification Requirements. This feature steers the solver toward the result that is more favourable and would provide higher profit. The final goal of the solver is to provide a scenario that complies with all the constraints while providing the best revenue. The results of the multiple test optimizations generally showed close agreement with predictions made prior to the tests. Once the model was validated, more complex scenarios were evaluated. Here, the model-generated results were completely different from what was expected. This error in predictions is due to the nature of the problem, i.e., rate allocation planning, where the number of variables considered directly influences the complexity of the decision, making basic experience-based predictions impossible. The soft-constraint Blend Specification Requirement principles proved to function and to direct the solver towards more profitable arrangements while complying with the hard constraints. The terms soft and hard here refer to how strictly the solver must honour each constraint. Overall, the model succeeded in optimizing the relationship between the economic values of different natural gas compositions and the operational and blend constraints, identifying the most profitable gas distribution plan. It is no longer practical to rely solely on the experience of individuals when dealing with complex rate allocation operations. This study presents a simple, reliable, and elegant method to build computer-based optimisation models. Connected to an online stream of Big Data, such models can potentially contribute to oil and gas pipeline rate allocation operations, making decisions fit-for-purpose, cost-effective, and reliable.
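A minimal version of such a blending LP is easy to write down. The sketch below maximizes revenue from routing two source gases subject to a blend CO2 cap and a minimum delivery rate; all rates, prices, and compositions are invented for illustration, and the formulation is far simpler than the operational model described above.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [rate from source A, rate from source B] (all figures invented).
price = np.array([3.0, 2.4])     # value of each source gas
co2 = np.array([0.01, 0.08])     # CO2 mole fraction of each source
avail = np.array([60.0, 100.0])  # availability of each source
min_delivery = 80.0              # customer's minimum total rate
co2_cap = 0.05                   # maximum CO2 fraction allowed in the blend

c = -price                                # linprog minimizes, so negate prices to maximize revenue
A_ub = np.array([co2 - co2_cap,           # sum((co2_i - cap) * x_i) <= 0  <=>  blend CO2 <= cap
                 [-1.0, -1.0]])           # -(x_A + x_B) <= -min_delivery
b_ub = np.array([0.0, -min_delivery])
bounds = [(0.0, avail[0]), (0.0, avail[1])]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal rates:", res.x, "revenue:", -res.fun)
```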
9

Klecker, Sophie, and Peter Plapper. "BELBIC-Sliding Mode Control of Robotic Manipulators With Uncertainties and Switching Constraints". In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-65620.

Abstract:
This paper addresses the control problem of trajectory tracking for a class of robotic manipulators presenting uncertainties and switching constraints, using a biomimetic approach. Uncertainties, both system-inherent and due to environmental disturbances, deteriorate the performance of the system. A change in constraints between the robot's end-effector and the environment, resulting in a switched nonlinear system, undermines stable system performance. In this work, a robust adaptive controller combining sliding mode control and BELBIC (Brain Emotional Learning-Based Intelligent Control) is suggested to remediate the expected impacts on the overall system tracking performance and stability. The controller is based on an interplay of inputs relating to environmental information, through error signals of position and sliding surfaces, and of emotional signals regulating the learning rate and adapting the future behaviour based on prior experiences. The proposed control algorithm is designed to be applicable to discontinuous freeform geometries. Its stability is proven theoretically, and a simulation performed on a two-link manipulator verifies its efficacy.
10

Behandish, Morad, and Horea T. Ilieş. "Haptic Assembly Using Skeletal Densities and Fourier Transforms". In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-47923.

Abstract:
Haptic-assisted virtual assembly and prototyping has seen significant attention over the past two decades. However, in spite of the appealing prospects, its adoption has been slower than expected. We identify the main roadblocks as the inherent geometric complexities faced when assembling objects of arbitrary shape, and the computation time limitation imposed by the notorious 1 kHz haptic refresh rate. We solved the first problem in a recent work by introducing a generic energy model for geometric guidance and constraints between features of arbitrary shape. In the present work, we address the second challenge by leveraging Fourier transforms to compute the constraint forces and torques. Our new concept of ‘geometric energy’ field is computed automatically from a cross-correlation of ‘skeletal densities’ in the frequency domain, and serves as a generalization of the manually specified virtual fixtures or heuristically identified mating constraints proposed in the literature. The formulation of the energy field as a convolution enables efficient computation using GPU-accelerated Fast Fourier Transforms (FFT). We show that our method is effective for low-clearance assembly of objects of arbitrary geometric and syntactic complexity.
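The computational core, cross-correlation evaluated as a pointwise product in the frequency domain, can be illustrated in one dimension. The snippet below correlates two assumed 1-D "density" profiles with NumPy FFTs; it is a schematic of the convolution trick only, not the authors' 6-DOF, GPU-accelerated implementation.

```python
import numpy as np

# Two assumed 1-D "skeletal density" profiles sampled on a common grid.
n = 256
x = np.linspace(-1.0, 1.0, n)
part_a = np.exp(-((x - 0.2) ** 2) / 0.01)   # feature density of the moving part
part_b = np.exp(-((x + 0.3) ** 2) / 0.01)   # mating feature density of the fixed part

# Circular cross-correlation via FFT: corr[s] = sum_t part_a[t] * part_b[t - s].
# Its peak gives the shift (in samples) that best aligns part_b with part_a.
corr = np.fft.ifft(np.fft.fft(part_a) * np.conj(np.fft.fft(part_b))).real
shift = int(np.argmax(corr))
if shift > n // 2:
    shift -= n                               # unwrap the circular shift to a signed offset

print(f"best alignment shift: {shift} samples (~{shift * (x[1] - x[0]):.3f} in x-units)")
```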
