Journal articles on the topic 'Rate-cost tradeoff'




Consult the top 50 journal articles for your research on the topic 'Rate-cost tradeoff.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Wargo, Andrew R., Gael Kurath, Robert J. Scott, and Benjamin Kerr. "Virus shedding kinetics and unconventional virulence tradeoffs." PLOS Pathogens 17, no. 5 (May 10, 2021): e1009528. http://dx.doi.org/10.1371/journal.ppat.1009528.

Abstract:
Tradeoff theory, which postulates that virulence provides both transmission costs and benefits for pathogens, has become widely adopted by the scientific community. Although theoretical literature exploring virulence-tradeoffs is vast, empirical studies validating various assumptions still remain sparse. In particular, truncation of transmission duration as a cost of virulence has been difficult to quantify with robust controlled in vivo studies. We sought to fill this knowledge gap by investigating how transmission rate and duration were associated with virulence for infectious hematopoietic necrosis virus (IHNV) in rainbow trout (Oncorhynchus mykiss). Using host mortality to quantify virulence and viral shedding to quantify transmission, we found that IHNV did not conform to classical tradeoff theory. More virulent genotypes of the virus were found to have longer transmission durations due to lower recovery rates of infected hosts, but the relationship was not saturating as assumed by tradeoff theory. Furthermore, the impact of host mortality on limiting transmission duration was minimal and greatly outweighed by recovery. Transmission rate differences between high and low virulence genotypes were also small and inconsistent. Ultimately, more virulent genotypes were found to have the overall fitness advantage, and there was no apparent constraint on the evolution of increased virulence for IHNV. However, using a mathematical model parameterized with experimental data, it was found that host culling resurrected the virulence tradeoff and provided low virulence genotypes with the advantage. Human-induced or natural culling, as well as host population fragmentation, may be some of the mechanisms by which virulence diversity is maintained in nature. This work highlights the importance of considering non-classical virulence tradeoffs.
2

Nicolaou, Panicos, Deborah L. Thurston, and James V. Carnahan. "Machining Quality and Cost: Estimation and Tradeoffs." Journal of Manufacturing Science and Engineering 124, no. 4 (October 23, 2002): 840–51. http://dx.doi.org/10.1115/1.1511169.

Abstract:
Simultaneous improvement of machining cost, quality and environmental impact is sometimes possible, but after the Pareto optimal frontier has been reached, decisions must be made regarding unavoidable tradeoffs. This paper presents a method for formulating a mathematical model for first estimating quality, cost and cutting fluid wastewater treatment impacts of two machining operations (end milling and drilling), and then for tradeoff decision making. The milling quality estimation model is developed through virtual experimentation on a simulation model, while the drilling quality estimation model is developed through physical experimentation. Cost is estimated through an activity based costing approach. Cutting fluid wastewater treatment impacts (BOD and TSS) are estimated through stoichiometric analysis of cutting fluids. Input decision variables include material choice, design, manufacturing and limited lubrication parameters. The contribution of this paper is the integration of activity based cost estimation, machining quality estimation via statistical analysis of data from virtual and physical experiments, cutting fluid wastewater treatment impact estimation and formal decision theory. A case study of an automotive steering knuckle is presented, where decision variables include material choice (cast iron versus aluminum), feed rate, cutting speed and wet versus dry machining.
3

Li, Yan, Yu Mei Hu, Yun Feng Luo, and Chang Chen Liu. "Research on Electric Power Equipment Preventive Maintenance Cycle on the Basis of Economic Life Cycle Costing." Advanced Materials Research 605-607 (December 2012): 296–99. http://dx.doi.org/10.4028/www.scientific.net/amr.605-607.296.

Abstract:
The maintenance strategy is a tradeoff between cost and reliability. In this paper we consider the maintenance plan from the viewpoint of economic life cycle cost and reliability. We discuss maintenance interval optimization on the premise that preventive maintenance mitigates the failure rate level while intensifying its variance ratio. The specific effect of this kind of preventive maintenance on the failure rate and its variance ratio is explored; we then construct a life cycle cost model of electric power equipment and propose minimizing the annuity of the life cycle cost as a method for seeking an optimal maintenance interval.
4

Ma, Guofeng, and Lingzhi Zhang. "Exact Overlap Rate Analysis and the Combination with 4D BIM of Time-Cost Tradeoff Problem in Project Scheduling." Advances in Civil Engineering 2019 (May 2, 2019): 1–12. http://dx.doi.org/10.1155/2019/9120795.

Abstract:
Schedulers can compress the schedule of construction projects by overlapping design and construction activities. However, overlapping may increase the total cost as the duration decreases. To solve the concurrency-based time-cost tradeoff problem effectively, this paper demonstrates an overlapping optimization algorithm that identifies an optimal overlapping strategy with exact overlap rates and generates the required duration at the minimum cost. The method makes use of an overlapping strategy matrix (OSM) to illustrate the dependency relationships between activities. This method then optimizes the genetic algorithm (GA) to compute an overlapping strategy with exact overlap rates by means of overlapping and crashing. This paper then proposes an integrated framework of the genetic algorithm and building information modeling (BIM) to prove the practical feasibility of the theoretical research. The study is valuable to practitioners because the method allows establishing a compressed schedule which meets the limited budget within the contract duration. This article is also significant to researchers because it can compute the optimal scheduling strategy with exact overlap rates, crashing degree, and resources expeditiously. The usability and validity of the optimized method are verified by a test case in this paper.
5

Luo, Chen, and Anshumali Shrivastava. "Scaling-Up Split-Merge MCMC with Locality Sensitive Sampling (LSS)." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4464–71. http://dx.doi.org/10.1609/aaai.v33i01.33014464.

Abstract:
Split-Merge MCMC (Monte Carlo Markov Chain) is one of the essential and popular variants of MCMC for problems where an MCMC state consists of an unknown number of components. It is well known that state-of-the-art methods for split-merge MCMC do not scale well. Strategies for rapid mixing require smart and informative proposals to reduce the rejection rate. However, all known smart proposals involve expensive operations to suggest informative transitions. As a result, the cost of each iteration is prohibitive for massive scale datasets. It is further known that uninformative but computationally efficient proposals, such as random split-merge, lead to extremely slow convergence. This tradeoff between mixing time and per update cost seems hard to get around. We leverage some unique properties of weighted MinHash, which is a popular LSH, to design a novel class of split-merge proposals which are significantly more informative than random sampling but at the same time efficient to compute. Overall, we obtain a superior tradeoff between convergence and per update cost. As a direct consequence, our proposals are around 6X faster than the state-of-the-art sampling methods on two large real datasets, KDDCUP and PubMed, with several millions of entities and thousands of clusters.
6

Liu, Qing-Quan, and Fang Jin. "LQG Control of Networked Control Systems with Limited Information." Mathematical Problems in Engineering 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/206391.

Abstract:
This paper addresses linear quadratic Gaussian (LQG) control problems for multi-input multi-output (MIMO), linear time-invariant (LTI) systems, where the sensors and controllers are geographically separated and connected via a digital communication channel with limited data rates. An observer-based, quantized state feedback control scheme is employed in order to achieve the minimum data rate for mean square stabilization of the unstable plant. An explicit expression is presented to state the tradeoff between the LQ cost and the data rate. Sufficient conditions on the data rate for mean square stabilization are derived. An illustrative example is given to demonstrate the effectiveness of the proposed scheme.
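For orientation only (this is the classical data-rate theorem, a standard result in networked control rather than an expression taken from the cited paper), the average feedback data rate R, in bits per sample, needed for mean-square stabilization of a discrete-time LTI plant with state matrix A must satisfy

\[ R \;>\; \sum_{i \,:\, |\lambda_i(A)| > 1} \log_2 |\lambda_i(A)| , \]

where the sum runs over the unstable eigenvalues of A. Rate-cost results of the kind described in the abstract quantify how much rate beyond this minimum is needed to also attain a prescribed LQ cost.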
7

Kerins, Frank, Janet Kiholm Smith, and Richard Smith. "Opportunity Cost of Capital for Venture Capital Investors and Entrepreneurs." Journal of Financial and Quantitative Analysis 39, no. 2 (June 2004): 385–405. http://dx.doi.org/10.1017/s0022109000003124.

Abstract:
We use a database of recent high tech IPOs to estimate opportunity cost of capital for venture capital investors and entrepreneurs. Entrepreneurs face the risk-return tradeoff of the CAPM as the opportunity cost of holding a portfolio that necessarily is underdiversified. For early stage firms, we estimate the effects of underdiversification, industry, and financial maturity on opportunity cost. Assuming a one-year holding period, the entrepreneur's opportunity cost generally is two to four times as high as that of a well-diversified investor. With a 4.0% risk-free rate and 6.0% market risk premium, for the sample average, we estimate the cost of capital of a well-diversified investor to be 11.4%, which equates to 16.7% before the management fees and carried interest of a typical venture capital fund. For an entrepreneur with 25% of total wealth invested in the venture, our corresponding estimate of cost of capital is 40.0%.
8

Zhong, Lingshu, and Mingyang Pei. "Optimal Design for a Shared Swap Charging System Considering the Electric Vehicle Battery Charging Rate." Energies 13, no. 5 (March 6, 2020): 1213. http://dx.doi.org/10.3390/en13051213.

Abstract:
Swap charging (SC) technology offers the possibility of swapping the batteries of electric vehicles (EVs), providing a perfect solution for achieving a long-distance freeway trip. Based on SC technology, a shared SC system (SSCS) concept is proposed to overcome the difficulties in optimal swap battery strategies for a large number of EVs with charging requests and to consider the variance in the battery charging rate simultaneously. To realize the optimal SSCS design, a binary integer programming model is developed to balance the tradeoff between the detour travel cost and the total battery recharge cost in the SSCS. The proposed method is verified with a numerical example of the freeway system in Guangdong Province, China, and can obtain an exact solution using off-the-shelf commercial solvers (e.g., Gurobi).
9

Liu, Zaiming, Wei Deng, and Gang Chen. "Analysis of the Optimal Resource Allocation for a Tandem Queueing System." Mathematical Problems in Engineering 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/5964272.

Abstract:
We study a controllable two-station tandem queueing system, where customers (jobs) must first be processed at upstream station and then the downstream station. A manager dynamically allocates the service resource to each station to adjust the service rate, leading to a tradeoff between the holding cost and resource cost. The goal of the manager is to find the optimal policy to minimize the long-run average costs. The problem is constructed as a Markov decision process (MDP). In this paper, we consider the model in which the resource cost and service rate functions are more general than linear. We derive the monotonicity of the optimal allocation policy by the quasiconvexity properties of the value function. Furthermore, we obtain the relationship between the two stations’ optimal policy and conditions under which the optimal policy is unique and has the bang-bang control property. Finally, we provide some numerical experiments to illustrate these results.
10

Rim, Suk-Chul, and Hang T. T. Vu. "Transshipment Vehicle Routing with Pickup and Delivery for Cross-Filling." Mathematical Problems in Engineering 2021 (January 16, 2021): 1–12. http://dx.doi.org/10.1155/2021/6667765.

Abstract:
Distribution centers (DCs) typically receive orders from the customers (mostly retail stores) located in their vicinity and deliver the ordered goods the next morning. To maintain a high item fill rate, DCs have to hold a high level of inventory, which increases inventory cost. As an alternative, in cross-filling, DCs exchange surplus items during the night, after closing the daily order receipt, to reduce the shortage. The economic justification of such cross-filling depends on the tradeoff between extra transshipment and handling cost versus saved shortage cost. In this paper, as an extension of Rim and Jiang, 2019, vehicles are allowed to drop and pick up items at the intermediate DCs in the route. We present a genetic algorithm to determine the routes and the amounts to pick up/drop at each DC to minimize the total cost.
11

Shang, Lijun, Guojun Shang, and Qingan Qiu. "A Bivariate Post-Warranty Maintenance Model for the Product under a 2D Warranty." Mathematics 10, no. 12 (June 7, 2022): 1957. http://dx.doi.org/10.3390/math10121957.

Abstract:
In this study, by integrating preventive maintenance (PM) into a two-dimensional warranty region, a two-dimensional warranty with customized PM (2D warranty with customized PM) is proposed from the manufacturer’s perspective to reduce the warranty cost. The warranty cost of a 2D warranty with customized PM is derived. The manufacturer’s tradeoff between PM cost and minimal repair cost saving is obtained by choosing the proper reliability threshold and the number of customized PMs, and the advantage of a 2D warranty with customized PM is illustrated. Second, by integrating PM into the post-warranty period, a bivariate post-warranty maintenance (BPWM) policy is proposed from the consumer’s perspective to ensure the reliability of the product through the 2D warranty with customized PM. The expected cost rate model of BPWM is derived. Optimal BPWM is obtained in the numerical experiments. It is shown that a 2D warranty with customized PM is beneficial for both the manufacturer and the consumer, since both the manufacturer’s warranty cost and the consumer’s total cost are reduced.
12

Misra, Chinmaya, and Veena Goswami. "Analysis of Power Saving Class II Traffic in IEEE 802.16E with Multiple Sleep State and Balking." Foundations of Computing and Decision Sciences 40, no. 1 (March 1, 2015): 53–66. http://dx.doi.org/10.1515/fcds-2015-0004.

Abstract:
The battery life of Mobile Stations in IEEE 802.16e can be extended substantially by applying the sleep mode mechanism. This paper studies an efficient method to analyze the performance of the power saving class type II for delay-sensitive traffic in the multiple sleep state. The incoming data frames may join or balk the buffer due to impatience with some probability. We present an M/M/1/N queueing model with balking and multiple vacations in order to exhibit the self-similar property of IEEE 802.16e. We develop a cost function to determine the optimal service rate that minimizes the total expected cost. Various performance indices such as the average number of data frames in the system, the mean waiting time of a data frame in the system, the average balking rate due to impatience, etc. are presented. Numerical results are provided to show the influence of various parameters on the behavior of the system. The proposed model provides a tradeoff between the average abandon rate and the power consumption.
13

Li, Chengtie, Jinkuan Wang, and Mingwei Li. "Efficient Cross-Layer Optimization Algorithm for Data Transmission in Wireless Sensor Networks." Journal of Electrical and Computer Engineering 2015 (2015): 1–6. http://dx.doi.org/10.1155/2015/545798.

Abstract:
In this paper, we address the problems of joint design for channel selection, medium access control (MAC), signal input control, and power control with cooperative communication, which can achieve tradeoff between optimal signal control and power control in wireless sensor networks (WSNs). The problems are solved in two steps. Firstly, congestion control and link allocation are separately provided at transport layer and network layer, by supply and demand based on compressed sensing (CS). Secondly, we propose the cross-layer scheme to minimize the power cost of the whole network by a linear optimization problem. Channel selection and power control scheme, using the minimum power cost, are presented at MAC layer and physical layer, respectively. These functions interact through and are regulated by congestion rate so as to achieve a global optimality. Simulation results demonstrate the validity and high performance of the proposed algorithm.
14

Suttipongkaset, Pongpol, and Paveena Chaovalitwongse. "Delivery Planning of Water-Treatment Chemicals in Vendor Managed Inventory Context." Advanced Materials Research 931-932 (May 2014): 1664–68. http://dx.doi.org/10.4028/www.scientific.net/amr.931-932.1664.

Abstract:
This research aims to improve the operating delivery system of water-treatment chemicals by establishing a chemicals inventory policy for planning appropriate chemicals delivery to five customers in the case study. Currently the company does not exploit the tradeoff between transportation cost and inventory holding cost, which would benefit cost reduction in terms of inventory management and chemicals delivery. The customers' demand for chemicals is uncertain, depending on the quality of raw water and the production rate, while the company has constraints such as a limited number of transportation trucks and a fixed inventory review interval. Therefore, given these factors, this research uses a periodic review system with uncertain demand as the inventory model. Data on the customers' chemicals consumption in 2012 are used to determine the inventory order-up-to level (OUL) for planning chemicals delivery to customers. The research then tests the delivery plan using a Monte Carlo simulation compared against the actual operation data from January to June 2013. It is found that the proposed operating system can reduce inventory cost by up to 33%.
15

Li, Xiaowei, Yi Cui, and Yuan Xue. "Towards an Automatic Parameter-Tuning Framework for Cost Optimization on Video Encoding Cloud." International Journal of Digital Multimedia Broadcasting 2012 (2012): 1–11. http://dx.doi.org/10.1155/2012/935724.

Abstract:
The emergence of cloud encoding services enables many content owners, such as online video vendors, to transcode their digital videos without infrastructure setup. Such a service provider charges customers based only on their resource consumption. For both the service provider and customers, lowering the resource consumption while maintaining the quality is valuable and desirable. Thus, choosing a cost-effective encoding parameter configuration is essential and challenging due to the tradeoff between bitrate, encoding speed, and resulting quality. In this paper, we explore the feasibility of an automatic parameter-tuning framework through which the above objective can be achieved. We introduce a simple service model, which combines the bitrate and encoding speed into a single value: encoding cost. Then, we conduct an empirical study to examine the relationship between the encoding cost and various parameter settings. Our experiment is based on the one-pass Constant Rate Factor method in x264, which can achieve relatively stable perceptual quality, and we vary each chosen parameter to observe how the encoding cost changes. The experiment results show that the tested parameters can be independently tuned to minimize the encoding cost, which makes the automatic parameter-tuning framework feasible and promising for optimizing the cost on a video encoding cloud.
16

Grand-Clément, Julien, and Christian Kroer. "Scalable First-Order Methods for Robust MDPs." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12086–94. http://dx.doi.org/10.1609/aaai.v35i13.17435.

Abstract:
Robust Markov Decision Processes (MDPs) are a powerful framework for modeling sequential decision making problems with model uncertainty. This paper proposes the first first-order framework for solving robust MDPs. Our algorithm interleaves primal-dual first-order updates with approximate Value Iteration updates. By carefully controlling the tradeoff between the accuracy and cost of Value Iteration updates, we achieve an ergodic convergence rate that is significantly better than classical Value Iteration algorithms in terms of the number of states S and the number of actions A on ellipsoidal and Kullback-Leibler s-rectangular uncertainty sets. In numerical experiments on ellipsoidal uncertainty sets we show that our algorithm is significantly more scalable than state-of-the-art approaches. Our framework is also the first one to solve robust MDPs with s-rectangular KL uncertainty sets.
17

Nickles, Thomas. "The crowbar model of method and its implications." THEORIA. An International Journal for Theory, History and Foundations of Science 34, no. 3 (December 5, 2019): 357. http://dx.doi.org/10.1387/theoria.19070.

Abstract:
There is a rough, long-term tradeoff between rate of innovation and degree of strong realism in scientific practice, a point reflected in historically changing conceptions of method as they retreat from epistemological foundationism to a highly fallibilistic, modeling perspective. The successively more liberal, innovation-stimulating methods open up to investigation deep theoretical domains at the cost, in many cases, of moving away from strong realism as a likely outcome of research. The crowbar model of method highlights this tension, expressed as the crowbar compromise and the crowbar fallacy. The tools-to-theories heuristic, described and evaluated by Gigerenzer and colleagues, can be regarded as an attempt by some scientific realists to overcome this compromise. Instead, it is an instance of it. Nonetheless, in successful applications the crowbar model implies a modest, instrumental (nonrepresentational) realism.
18

Nájera, S., M. Gil-Martínez, and J. A. Zambrano. "ATAD control goals through the analysis of process variables and evaluation of quality, production and cost." Water Science and Technology 71, no. 5 (January 14, 2015): 717–24. http://dx.doi.org/10.2166/wst.2015.006.

Abstract:
The aim of this paper is to establish and quantify different operational goals and control strategies in autothermal thermophilic aerobic digestion (ATAD). This technology appears as an alternative to conventional sludge digestion systems. During the batch-mode reaction, high temperatures promote sludge stabilization and pasteurization. The digester temperature is usually the only online, robust, measurable variable. The average temperature can be regulated by manipulating both the air injection and the sludge retention time. An improved performance of diverse biochemical variables can be achieved through proper manipulation of these inputs. However, a better quality of treated sludge usually implies major operating costs or a lower production rate. Thus, quality, production and cost indices are defined to quantify the outcomes of the treatment. Based on these, tradeoff control strategies are proposed and illustrated through some examples. This paper's results are relevant to guide plant operators, to design automatic control systems and to compare or evaluate the control performance on ATAD systems.
19

Dong, Zanyang, Tao Shang, Qian Li, and Tang Tang. "Adaptive Power Allocation Scheme for Mobile NOMA Visible Light Communication System." Electronics 8, no. 4 (March 29, 2019): 381. http://dx.doi.org/10.3390/electronics8040381.

Abstract:
Recently, due to its higher spectral efficiency and enhanced user experience, non-orthogonal multiple access (NOMA) has been widely studied in visible light communication (VLC) systems. As a main concern in NOMA-VLC systems, the power allocation scheme greatly affects the tradeoff between the total achievable data rate and user fairness. In this context, our main aim in this work was to find a more balanced power allocation scheme. To this end, an adaptive power allocation scheme based on multi-attribute decision making (MADM), which flexibly chooses between conventional power allocation or inverse power allocation (IPA) and the optimal power allocation factor, has been proposed. The concept of IPA is put forward for the first time and proves to be beneficial to achieving a higher total achievable data rate at the cost of user fairness. Moreover, considering users’ mobility along certain trajectories, we derived a fitting model of the optimal power allocation factor. The feasibility of the proposed adaptive scheme was verified through simulation and the fitting model was approximated to be the sum of three Gaussian functions.
20

Vijaya Manasa, K., A. V. Prabu, M. Sai Prathyusha, and S. Varakumari. "Performance monitoring of UPS battery using IoT." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 352. http://dx.doi.org/10.14419/ijet.v7i2.7.10717.

Abstract:
The proposed work concentrates on an energy-efficient and cost-effective model to construct an automatic UPS monitoring and controlling system, in the context of PDAs to machine level and machine level to PDAs. RF is the essential information-based communication system nowadays, i.e., Bluetooth/ZigBee/Wi-Fi. An IoT-based UPS provides a balancing tradeoff between monitoring and controlling the system remotely in order to increase the energy efficiency and production rate. The proposed system automatically monitors four parameters, i.e., current, voltage, temperature, and power, on each UPS. During drain-out or worn-out conditions, if any of the parameters such as voltage and current decreases from the ideal conditions, then this will be monitored automatically and the values will be displayed on an Android app and uploaded to the cloud (IoT). The breakdown conditions of the UPS will be notified to the chief engineer, supervisor, and control room through SMS and IoT.
21

Chen, Chen, Yue Wang, Shan Lu, and Xinchao Li. "Design and Multiobjective Optimization of Green Closed-Loop Manufacturing-Recycling Network Considering Raw Material Attribute." Processes 10, no. 5 (May 3, 2022): 904. http://dx.doi.org/10.3390/pr10050904.

Abstract:
Regarding decision planning in the electronic manufacturing industry, this paper designs a green closed-loop manufacturing-recycling network for multiperiod production planning for multiple products. The network considers the tradeoff between production costs and environmental pollution induced by production scraps. Therefore, a mixed integer programming model with a dual objective is designed to achieve environmental protection and reduce production costs through resource recovery and utilization. At the same time, the recycled materials are considered to be treated, not entirely new, which could affect the manufacturing qualified rate. Thus, material attributes are proposed to distinguish new raw materials from recycled (second-hand) ones through the closed-loop manufacturing-recycling process to enhance the manufacturing qualified rate. In order to solve the dual-objective optimization model and realize optimal decisions, an epsilon constraint is designed to generate a nonextreme solution set by changing the original feasible region. The results show its ability to obtain a more balanced solution in terms of cost and environmental factors compared with the fuzzy-weighted method. Meanwhile, the analysis proves that the dual-objective optimization model with distinguishing material attributes can improve the efficiency of the manufacturing qualified rate and achieve a win-win situation for production and environmental protection during enterprise production.
22

Wicaksono, Nugroho Budi, and Sukma Meganova Effendi. "Heating and Cooling Rate Study on Water Cooling Thermal Cycler using Aluminium Block Sample." Journal of Electronics, Electromedical Engineering, and Medical Informatics 4, no. 2 (March 4, 2022): 55–61. http://dx.doi.org/10.35882/jeeemi.v4i2.1.

Abstract:
Temperature measurement has many applications in medical devices. In recent times, body temperature has become the main screening procedure to identify people infected by SARS-CoV-2. In the pandemic situation due to SARS-CoV-2, the Polymerase Chain Reaction (PCR) method has become the most accurate and reliable detection method. This method employs a device named a PCR machine or Thermal Cycler. In this research, we focus on building a Thermal Cycler using a low-cost material such as aluminium and using a liquid coolant as the cooling system. We use two types of coolant solution: mineral water and generic liquid coolant. A Peltier device in the thermal cycler serves as the heating and cooling element. In the heating rate experiments, the generic liquid coolant shows a better result than mineral water due to the specific heat capacity and thermal conductivity of water. In the cooling rate experiments, the water pump is activated to circulate the liquid solution, and the flow rate of the liquid solution is influenced by its viscosity. The generic liquid coolant has approximately 4.5 times greater viscosity than water. A higher flow rate means better cooling rate performance. Using two 60-Watt heaters, a 60-Watt chiller, and aluminium as the block sample material, our research shows heating and cooling rates of up to approximately 0.1 °C/s. Compared to a commercial thermal cycler, our thermal cycler has a lower wattage; this lower wattage has been traded off against a lower ramping rate. Several factors are suspected to contribute to the lower ramping rate.
23

Yang, Aili, Xiujuan Chen, Guohe Huang, Shan Zhao, Xiajing Lin, and Edward Mcbean. "Coordinative Urban-Rural Solid Waste Management: A Fractional Dual-Objective Programming Model for the Regional Municipality of Xiamen." Mathematical Problems in Engineering 2019 (April 22, 2019): 1–13. http://dx.doi.org/10.1155/2019/1360454.

Abstract:
A linear fractional programming based solid waste management planning model was proposed and applied to support the planning of urban-rural solid waste management in Xiamen, China. The model could obtain the best system efficiency while solving the tradeoff between economic and environmental objectives. It aimed to effectively address the urban and rural solid waste management planning through minimizing the system cost and optimizing system efficiency in the developed model framework. Through the model, the optimal waste flow for each facility was obtained, and the problem of overburdened landfill in Xiamen’s urban and rural solid waste management system was solved. The solutions for waste allocation and facility capacity expansion were provided for Xiamen’s urban and rural solid waste management. The planning results showed that about 42.44 × 10⁶ tons of waste would be diverted to other facilities from landfills over the planning period of 2018-2032, and the waste diversion rate would reach 97%, which would greatly reduce the burden on landfills. The economic efficiency of waste diversion would be 5.07 × 10³ tons per 10⁶ RMB. All the capacities of Xiamen’s urban and rural solid waste management facilities including incinerators, composting plant, and landfills should be expanded because of the increasing waste production rate.
24

Alghamdi, Mohammed, Faissal Abdel-Hady, A. Mazher, and Abdulrahim Alzahrani. "Integration of Process Modeling, Design, and Optimization with an Experimental Study of a Solar-Driven Humidification and Dehumidification Desalination System." Processes 6, no. 9 (September 7, 2018): 163. http://dx.doi.org/10.3390/pr6090163.

Abstract:
Solar energy is becoming a promising source of heat and power for electrical generation and desalination plants. In this work, an integrated study of modeling, optimization, and experimental work is undertaken for a parabolic trough concentrator combined with a humidification and dehumidification desalination unit. The objective is to study the design performance and economic feasibility of a solar-driven desalination system. The design involves the circulation of a closed loop of synthetic blend motor oil in the concentrators and the desalination unit heat input section. The air circulation in the humidification and dehumidification unit operates in a closed loop, where the circulating water runs during the daytime and requires only makeup feed water to maintain the humidifier water level. Energy losses are reduced by minimizing the waste of treated streams. The process is environmentally friendly, since no significant chemical treatment is required. Design, construction, and operation are performed, and the system is analyzed at different circulating oil and air flow rates to obtain the optimum operating conditions. A case study in Saudi Arabia is carried out. The study reveals a unit capability of producing 24.31 kg/day at a circulating air rate of 0.0631 kg/s and oil circulation rate of 0.0983 kg/s. The tradeoff between productivity, gain output ratio, and production cost revealed a unit cost of 12.54 US$/m³. The impact of the circulating water temperature has been tracked and shown to positively influence the process productivity. At a high productivity rate, the humidifier efficiency was found to be 69.1%, and the thermal efficiency was determined to be 82.94%. The efficiency of the parabolic trough collectors improved with the closed loop oil circulation, and the highest performance was achieved from noon until 14:00.
25

Schloss, Patrick D., Matthew L. Jenior, Charles C. Koumpouras, Sarah L. Westcott, and Sarah K. Highlander. "Sequencing 16S rRNA gene fragments using the PacBio SMRT DNA sequencing system." PeerJ 4 (March 28, 2016): e1869. http://dx.doi.org/10.7717/peerj.1869.

Abstract:
Over the past 10 years, microbial ecologists have largely abandoned sequencing 16S rRNA genes by the Sanger sequencing method and have instead adopted highly parallelized sequencing platforms. These new platforms, such as 454 and Illumina’s MiSeq, have allowed researchers to obtain millions of high quality but short sequences. The result of the added sequencing depth has been significant improvements in experimental design. The tradeoff has been the decline in the number of full-length reference sequences that are deposited into databases. To overcome this problem, we tested the ability of the PacBio Single Molecule, Real-Time (SMRT) DNA sequencing platform to generate sequence reads from the 16S rRNA gene. We generated sequencing data from the V4, V3–V5, V1–V3, V1–V5, V1–V6, and V1–V9 variable regions from within the 16S rRNA gene using DNA from a synthetic mock community and natural samples collected from human feces, mouse feces, and soil. The mock community allowed us to assess the actual sequencing error rate and how that error rate changed when different curation methods were applied. We developed a simple method based on sequence characteristics and quality scores to reduce the observed error rate for the V1–V9 region from 0.69 to 0.027%. This error rate is comparable to what has been observed for the shorter reads generated by 454 and Illumina’s MiSeq sequencing platforms. Although the per base sequencing cost is still significantly more than that of MiSeq, the prospect of supplementing reference databases with full-length sequences from organisms below the limit of detection from the Sanger approach is exciting.
26

Kostina, Victoria, and Babak Hassibi. "Rate-Cost Tradeoffs in Control." IEEE Transactions on Automatic Control 64, no. 11 (November 2019): 4525–40. http://dx.doi.org/10.1109/tac.2019.2912256.

27

Ma, Lina, Yong Tian, Songtao Yang, Can Xu, and Anmin Hao. "A Scheme of Sustainable Trajectory Optimization for Aircraft Cruise Based on Comprehensive Social Benefit." Discrete Dynamics in Nature and Society 2021 (July 1, 2021): 1–15. http://dx.doi.org/10.1155/2021/7629203.

Abstract:
The increasing demand for environmentally friendly and passenger-favored flight operation requires a systematic scheme of sustainable trajectory optimization for the aircraft cruise. This paper achieves it by proposing an innovative performance framework based on the comprehensive benefit to the society considering both economic and noneconomic ones, following which the sustainable trajectory optimization problem is modeled by discretization. A method combining forward recurrence and memoization operation, called memoization dynamic programming, is developed to solve the model with computational efficiency. Working with real-world operational data of a typical flight route, we demonstrate the effectiveness of the proposed scheme at different levels and explore the difference in its performance due to meteorological conditions, aircraft type, and time horizon. The scheme is proved to perform robustly in comprehensive performance with a stable benefit rate of about 8% through sensitivity analysis, by which we find that it is relatively better for the flights cruising on business route with a load factor of 85%. Tradeoff results suggest that the systematic consideration of both the economic and noneconomic performance contributes to improved integrated sustainability. In particular, the optimal comprehensive performance at a monthly level can be obtained when accepting an additional $26,500 economic cost.
28

Frenkel, Evgeni M., Michael J. McDonald, J. David Van Dyken, Katya Kosheleva, Gregory I. Lang, and Michael M. Desai. "Crowded growth leads to the spontaneous evolution of semistable coexistence in laboratory yeast populations." Proceedings of the National Academy of Sciences 112, no. 36 (August 3, 2015): 11306–11. http://dx.doi.org/10.1073/pnas.1506184112.

Abstract:
Identifying the mechanisms that create and maintain biodiversity is a central challenge in biology. Stable diversification of microbial populations often requires the evolution of differences in resource utilization. Alternatively, coexistence can be maintained by specialization to exploit spatial heterogeneity in the environment. Here, we report spontaneous diversification maintained by a related but distinct mechanism: crowding avoidance. During experimental evolution of laboratory Saccharomyces cerevisiae populations, we observed the repeated appearance of “adherent” (A) lineages able to grow as a dispersed film, in contrast to their crowded “bottom-dweller” (B) ancestors. These two types stably coexist because dispersal reduces interference competition for nutrients among kin, at the cost of a slower maximum growth rate. This tradeoff causes the frequencies of the two types to oscillate around equilibrium over the course of repeated cycles of growth, crowding, and dispersal. However, further coevolution of the A and B types can perturb and eventually destroy their coexistence over longer time scales. We introduce a simple mathematical model of this “semistable” coexistence, which explains the interplay between ecological and evolutionary dynamics. Because crowded growth generally limits nutrient access in biofilms, the mechanism we report here may be broadly important in maintaining diversity in these natural environments.
29

Seo, Hyoju, Yoon Seok Yang, and Yongtae Kim. "Design and Analysis of an Approximate Adder with Hybrid Error Reduction." Electronics 9, no. 3 (March 11, 2020): 471. http://dx.doi.org/10.3390/electronics9030471.

Abstract:
This paper presents an energy-efficient approximate adder with a novel hybrid error reduction scheme to significantly improve the computation accuracy at the cost of extremely low additional power and area overheads. The proposed hybrid error reduction scheme utilizes only two input bits and adjusts the approximate outputs to reduce the error distance, which leads to an overall improvement in accuracy. The proposed design, when implemented in 65-nm CMOS technology, has 3, 2, and 2 times greater energy, power, and area efficiencies, respectively, than conventional accurate adders. In terms of the accuracy, the proposed hybrid error reduction scheme allows that the error rate of the proposed adder decreases to 50% whereas those of the lower-part OR adder and optimized lower-part OR constant adder reach 68% and 85%, respectively. Furthermore, the proposed adder has up to 2.24, 2.24, and 1.16 times better performance with respect to the mean error distance, normalized mean error distance (NMED), and mean relative error distance, respectively, than the other approximate adder considered in this paper. Importantly, because of an excellent design tradeoff among delay, power, energy, and accuracy, the proposed adder is found to be the most competitive approximate adder when jointly analyzed in terms of the hardware cost and computation accuracy. Specifically, our proposed adder achieves 51%, 49%, and 47% reductions of the power-, energy-, and error-delay-product-NMED products, respectively, compared to the other considered approximate adders.
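For readers unfamiliar with the error metrics used above, the following minimal Python sketch illustrates the baseline lower-part OR adder (LOA) mentioned in the abstract and one way to estimate its error rate and mean error distance by simulation. It is not the proposed hybrid scheme and not code from the cited paper; the bit width, the number of approximated low-order bits, and the trial count are arbitrary assumptions.

```python
import random

def lower_part_or_adder(a: int, b: int, k: int) -> int:
    """Classic lower-part OR adder (LOA): the low k bits are approximated
    by a bitwise OR, while the remaining high bits are added exactly."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)        # approximate lower part
    high = ((a >> k) + (b >> k)) << k    # exact upper part, no carry from the low bits
    return high | low

def error_stats(width: int = 16, k: int = 4, trials: int = 100_000, seed: int = 0):
    """Estimate error rate and mean error distance (MED) against exact addition."""
    rng = random.Random(seed)
    errors, total_dist = 0, 0
    for _ in range(trials):
        a = rng.getrandbits(width)
        b = rng.getrandbits(width)
        dist = abs((a + b) - lower_part_or_adder(a, b, k))
        errors += dist != 0
        total_dist += dist
    return errors / trials, total_dist / trials

if __name__ == "__main__":
    er, med = error_stats()
    print(f"error rate = {er:.3f}, mean error distance = {med:.3f}")
```

Sweeping k in this sketch exposes the usual accuracy-versus-hardware-cost tradeoff: approximating more low-order bits simplifies the adder but raises both the error rate and the mean error distance.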
30

Lu, Yining, Tao Wang, Zhuangzhuang Wang, Chaoyang Li, and Yi Zhang. "Modeling the Dynamic Exclusive Pedestrian Phase Based on Transportation Equity and Cost Analysis." International Journal of Environmental Research and Public Health 19, no. 13 (July 4, 2022): 8176. http://dx.doi.org/10.3390/ijerph19138176.

Abstract:
The exclusive pedestrian phase (EPP) has proven to be an effective method of eliminating pedestrian–vehicle conflicts at signalized intersections. The existing EPP setting conditions take traffic efficiency and safety as optimization goals, which may contribute to unfair interactions between vehicles and pedestrians. This study develops a multiobjective optimization framework to determine the EPP setting criteria, with consideration for the tradeoff between transportation equity and cost. In transportation equity modeling and considering environmental conditions, the transportation equity index is proposed to quantify pedestrian–vehicle equity differences. In cost modeling, traffic safety and efficiency factors are converted into monetary values, and the pedestrian–vehicle interaction is introduced. To validate the proposed optimization framework, a video-based data collection is conducted on wet and dry environment conditions at the selected intersection. The parameters in the proposed model are calibrated based on the results of the video analysis. This study compares the performance of the multiobjective evolutionary algorithm based on decomposition (MOEA) and the nondominated sorting genetic algorithm II (NSGA-II) methods in building the sets of nondominated solutions. The optimization results show that the decrease in transportation equity will lead to an increase in cost. The obtained Pareto front approximations correspond to diverse signal timing patterns and achieve a balance between optimizing either objective to different extents. The sensitivity analysis reveals the application domains for the EPP and the traditional two-way control phase (TWC) under different vehicular/pedestrian demand, yielding rate, and environment conditions. The EPP control is more suitable at intersections with high pedestrian volumes and low yielding rates, especially in wet conditions. The results provide operational guidelines for decision-makers for properly selecting the pedestrian phase pattern at signalized intersections.
31

Lazo-Langner, Alejandro, Douglas A. Coyle, Philip S. Wells, Dimitrios Scarvelis, Melissa A. Forgie, and Marc A. Rodger. "Determining the Optimal Timing of Initiation for Venous Thromboembolism (VTE) Anticoagulant (AC) Prophylaxis (proph) after Orthopedic Surgery (OS)." Blood 110, no. 11 (November 16, 2007): 305. http://dx.doi.org/10.1182/blood.v110.11.305.305.

Abstract:
AC proph after OS results in a decrease in the incidence of VTE with an associated increase in the risk of major bleeding (MB). Several AC agents and schedules are used; however the optimal timing of initiation has not been determined. It is likely that the proximity to the time of surgery might influence both the efficacy and safety of the AC thus altering their risk-benefit profile. Using a clinical cost-effectiveness approach we compared different AC and timings of VTE proph in patients undergoing OS. A meta-analysis of 55 randomized trials was done to estimate the risk (MB) and benefit (averted VTE) of proph in OS using placebo (plac) or different AC (ximelagatran-xim, low molecular weight heparin-LMWH, unfractionated heparin-UFH, warfarin-warf and fondaparinux-fonda) and timings of initiation (defined as preoperative (preop) if the first dose of the AC was administered >2 hrs before surgery, perioperative (periop) if between 2 hrs before or up to 12 hrs after the surgery and postoperative (postop) if starting 12 hrs or more after surgery). Means and variances of the MB and VTE estimates were used to parameterize Monte Carlo simulations for a beta distribution using 1,000 replications. Incremental risk, benefit and risk-benefit ratios (compared to placebo) were calculated from the replications. All AC/timing combinations were compared across a range of benefit-risk tradeoff values (risk acceptance) by calculating the percentage of replications with the highest net clinical benefit (NCB; e.g. incremental benefit - incremental risk · tradeoff) for each AC/timing combination. A higher NCB represents a better risk-benefit profile. In addition all anticoagulants were pooled together according to the initial timing of administration and NCB was calculated using random re-sampling of the replications. Analyses were done separately for major VTE (mVTE; proximal deep vein thrombosis (DVT) + pulmonary embolism), and total VTE (tVTE; mVTE + distal DVT) and a sensitivity analysis was done after excluding xim. The reference tradeoff was estimated from the case-fatality rate-ratios of VTE to MB (mVTE/MB=0.39; tVTE/MB=0.10). At the reference tradeoff value, the AC/timing combination with the highest probability of having the best risk-benefit profile was postop xim analyzed by both mVTE and tVTE (99 and 58%, respectively). After excluding xim the AC/timing combination of choice was postop LMWH if analyzed by mVTE (60%) or preop LMWH if analyzed by tVTE (59%). When all AC were pooled those administered postop had the highest probability of having the best risk-benefit profile analyzed by mVTE (48%) and the choice was indifferent between preop (45%) and postop (40%) if analyzed by tVTE. After excluding xim from the pooled analysis the choice was indifferent between preop (40%) and postop (35%) if analyzed by mVTE and if analyzed by tVTE the choice was preop (55%) followed by postop (27%). Our results suggest that: postop administration of AC proph has the best risk-benefit profile; the results are influenced by the event defining benefit (mVTE or tVTE); in some analyses preop administration was best, and; periop administration always had the worst risk-benefit profile. We conclude that periop AC proph after OS should not be used.
32

dos Santos, Paulo L., and Ellis Scharfenaker. "Competition, self-organization, and social scaling—accounting for the observed distributions of Tobin’s q." Industrial and Corporate Change 28, no. 6 (June 25, 2019): 1587–610. http://dx.doi.org/10.1093/icc/dtz027.

Abstract:
We develop a systemic, information-theoretic model of competitive capital-market functioning that can account for the observed statistical regularities in cross-sectional distributions of the logarithm of Tobin’s q for US non-financial corporations since 1962. The model considers capital markets as a self-organizing system driven by competitive interactions among investors and corporate managers. The persistent pattern of organization we observe in those distributions is primarily defined by the efforts of corporate managers to appropriate arbitrage capital gains defined by heterogeneity across individual measures of the logarithm of Tobin’s q. Competition ensures the structures of security prices shaped by those efforts reflect an aggregate tradeoff between the gross returns and costs they pose to corporate managers. The distributions are also influenced by the endogenous, competitive formation of the opportunity cost of capital corporations face, which is conditioned by what investors come to expect to be a typical or general expected rate of return on assets across all corporations. In addition to offering an economic account of what we observe, the resulting framework defines new conceptualizations and aggregate measures of the informational and allocative performance of capital markets. Those suggest the performance of US capital markets has deteriorated since the 1980s.
33

Shah, Nehal N., Harikrishna Singapuri, and Upena D. Dalal. "Hardware Efficient Architecture with Variable Block Size for Motion Estimation." Journal of Electrical and Computer Engineering 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/5091519.

Abstract:
Video coding standards such as MPEG-x and H.26x incorporate variable block size motion estimation (VBSME), which is highly time consuming and extremely complex from a hardware implementation perspective due to the huge computation involved. In this paper, we discuss basic aspects of video coding and study and compare existing architectures for VBSME. Various architectures with different pixel scanning patterns give a variety of performance results for motion vector (MV) generation, showing a tradeoff between macroblocks processed per second and the resource requirement for computation. The aim of this paper is to design a VBSME architecture that utilizes optimal resources to minimize chip area and offers an adequate frame processing rate for real-time implementation. The speed of computation can be improved by accessing the 16 pixels of a base macroblock of size 4 × 4 in a single clock cycle using a z scanning pattern. The widely adopted cost function for hardware implementation, the sum of absolute differences (SAD), is used for the VBSME architecture, with a multiplexer-based absolute difference calculator and partial summation term reduction (PSTR) based multioperand adders. The device utilization of the proposed implementation is only 22k gates, and it can process 179 HD (1920 × 1080) resolution frames per second in the best case and 47 HD resolution frames per second in the worst case. Due to such high throughput, the design is well suited for real-time implementation.
34

Kolari, James. "Gross and net tax shield valuation." Managerial Finance 44, no. 7 (July 9, 2018): 854–64. http://dx.doi.org/10.1108/mf-09-2017-0335.

Abstract:
Purpose: The purpose of this paper is to show that distinguishing between gross and net tax shields arising from interest deductions is important to firm valuation. The distinction affects the interpretation but not the valuation of tax shields for the famous Miller's (1977) model with corporate and personal taxes. However, for the well-known Miles and Ezzell's (1985) model, the authors show that the valuation of tax shields can be materially affected. Implications for the cost of equity and optimal capital structure are discussed.
Design/methodology/approach: This paper proposes a simple tax shield clarification that distinguishes between gross and net tax shields. Net tax shields equal gross tax shields minus personal taxes on debt. When an after-tax riskless rate is used to discount shareholders' tax shields, this distinction affects the interpretation but not the valuation results of Miller's model. However, when the after-tax unlevered equity rate is used to discount tax shields under the well-known Miles and Ezzell's (1985) model, the difference between gross and net tax shields can materially affect valuation results. According to the traditional ME model, both gross tax shields and debt interest tax payments (i.e. net tax shields) are discounted at the after-tax unlevered equity rate. By contrast, the proposed revised ME model discounts gross tax shields at the unlevered equity rate but personal taxes on debt income at the riskless rate (like debt payments). Because personal taxes on debt are nontrivial, traditional ME valuation results can noticeably differ from the revised ME model to the extent that after-tax unlevered equity and debt rates differ from one another.
Findings: For comparative purposes, the authors provide numerical examples of the traditional and revised ME models. The following constant tax rates and market discount rates are assumed: Tc=0.30, Tpb=0.20, Tps=0.10, r=0.06, and ρ=0.10. Table I compares these two models' valuation results. Maximum firm value for the traditional ME model is 7.89, compared to 7.00 for the revised ME model. At a 50 percent leverage ratio, equity value is reduced from 3.71 to 3.49, respectively. Importantly, the traditional ME model suggests that firm value increases linearly with leverage and implies an all-debt capital structure, whereas firm value stays relatively constant as leverage increases in the revised ME model. These capital structure differences arise from discounting debt tax payments with the unlevered equity rate (riskless rate) in the traditional ME (revised ME) model. Figure 1 graphically summarizes these results by comparing the traditional ME model (thin lines) to the revised ME model (bold lines).
Research limitations/implications: Textbook treatments of leverage gains to firms or projects with corporate and personal taxes should be amended to take into account this previously unrecognized tradeoff. Also, empirical analyses of capital structure are recommended on the sensitivity of leverage ratios to the gross-tax-gain/debt-personal-taxes tradeoff.
Practical implications: Financial managers need to understand how to value interest tax shields on debt when making capital structure decisions, computing the cost of capital, and valuing the firm.
Social implications: The valuation of interest tax shields in finance is a long-standing controversy. Nobel prize winners Modigliani and Miller (MM) wrote numerous papers on this subject and gained fame from their ideas in this area. However, application of their ideas has changed over time due to the Miles and Ezzell (ME) model of firm valuation. The present paper adapts the pathbreaking ideas of MM to the valuation framework of ME. Students and practitioners in finance can benefit from the valuation results in the paper.
Originality/value: No previous studies have recognized the valuation issues resolved in the paper on applying the popular and contemporary ME model of firm valuation to the MM valuation concepts. The new arguments in the paper are easy to understand and readily applied to firm valuation.
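As a quick orientation to why the personal-tax terms in this abstract matter, the sketch below evaluates the standard Miller (1977) gain-from-leverage expression at the tax rates quoted above. It is a generic textbook computation, not the paper's traditional or revised ME valuation.

# Miller (1977) gain from leverage: G_L = [1 - (1 - Tc)(1 - Tps)/(1 - Tpb)] * D
# Illustrative only; uses the tax rates quoted in the abstract, not the ME models.
Tc, Tpb, Tps = 0.30, 0.20, 0.10   # corporate, personal-debt, personal-equity tax rates
gain_per_unit_debt = 1.0 - (1.0 - Tc) * (1.0 - Tps) / (1.0 - Tpb)
print(f"Miller gain per unit of debt: {gain_per_unit_debt:.4f}")  # 0.2125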
APA, Harvard, Vancouver, ISO, and other styles
35

Rizk, Mostafa, Amer Baghdadi, and Michel Jézéquel. "Computational Complexity Reduction of MMSE-IC MIMO Turbo Detection." Journal of Circuits, Systems and Computers 28, no. 13 (March 1, 2019): 1950228. http://dx.doi.org/10.1142/s0218126619502281.

Full text
Abstract:
High data rates and error-rate performance approaching theoretical limits are key trends for evolving digital wireless communication applications. To address the first requirement, multiple-input multiple-output (MIMO) techniques are adopted in emergent wireless communication standards and applications. On the other hand, the turbo concept is used to alleviate the destructive effects of the channel and ensure error-rate performance close to theoretical limits. At the receiver side, the incorporation of MIMO techniques and turbo processing leads to increased complexity that has a severe impact on computation speed, power consumption and implementation area. Because of its increased complexity, the detector is considered critical among all receiver components. Low-complexity algorithms are developed at the cost of decreased performance. The minimum mean-squared error (MMSE) solution with iterative detection and decoding shows an acceptable tradeoff. In this paper, the complexity of the MMSE algorithm in the turbo detection context is investigated thoroughly. Algorithmic computations are surveyed to extract the characteristics of all involved parameters. Consequently, several decompositions are applied leading to enhanced performance and to a significant reduction of utilized computations. The complexity of the algorithm is evaluated in terms of real-valued operations. The proposed decompositions save an average of [Formula: see text] and [Formula: see text] of required operations for 2 × 2 and 4 × 4 MIMO systems, respectively. In addition, the hardware implementation designed by applying the devised simplifications and decompositions outperforms available state-of-the-art implementations in terms of maximum operating frequency, execution time, and performance.
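For readers unfamiliar with the baseline algorithm, a minimal linear MMSE MIMO detector is sketched below. The paper's MMSE-IC turbo detector additionally feeds soft interference-cancellation terms back from the decoder, which this non-iterative sketch omits; all sizes and symbols here are illustrative assumptions.

import numpy as np

# Minimal linear MMSE detection for y = H x + n (no iterative interference
# cancellation, so this is only the starting point of an MMSE-IC scheme).
rng = np.random.default_rng(0)
Nt, Nr, sigma2 = 4, 4, 0.1                      # 4x4 MIMO, noise variance
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], Nt) + 1j * rng.choice([-1.0, 1.0], Nt)  # QPSK symbols
y = H @ x + np.sqrt(sigma2 / 2) * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))

W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(Nt)) @ H.conj().T  # MMSE filter
x_hat = W @ y
print(np.sign(x_hat.real) + 1j * np.sign(x_hat.imag))  # hard symbol estimates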
APA, Harvard, Vancouver, ISO, and other styles
36

Cai, Linhang, Zhulin An, Chuanguang Yang, Yangchun Yan, and Yongjun Xu. "Prior Gradient Mask Guided Pruning-Aware Fine-Tuning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 140–48. http://dx.doi.org/10.1609/aaai.v36i1.19888.

Full text
Abstract:
We proposed a Prior Gradient Mask Guided Pruning-aware Fine-Tuning (PGMPF) framework to accelerate deep Convolutional Neural Networks (CNNs). In detail, the proposed PGMPF selectively suppresses the gradient of those "unimportant" parameters via a prior gradient mask generated by the pruning criterion during fine-tuning. PGMPF has three charming characteristics over previous works: (1) Pruning-aware network fine-tuning. A typical pruning pipeline consists of training, pruning and fine-tuning, which are relatively independent, while PGMPF utilizes a variant of the pruning mask as a prior gradient mask to guide fine-tuning, without complicated pruning criteria. (2) An excellent tradeoff between large model capacity during fine-tuning and stable convergence speed to obtain the final compact model. Previous works preserve more training information of pruned parameters during fine-tuning to pursue better performance, which would incur catastrophic non-convergence of the pruned model for relatively large pruning rates, while our PGMPF greatly stabilizes the fine-tuning phase by gradually constraining the learning rate of those "unimportant" parameters. (3) Channel-wise random dropout of the prior gradient mask to impose some gradient noise to fine-tuning to further improve the robustness of final compact model. Experimental results on three image classification benchmarks CIFAR10/100 and ILSVRC-2012 demonstrate the effectiveness of our method for various CNN architectures, datasets and pruning rates. Notably, on ILSVRC-2012, PGMPF reduces 53.5% FLOPs on ResNet-50 with only 0.90% top-1 accuracy drop and 0.52% top-5 accuracy drop, which has advanced the state-of-the-art with negligible extra computational cost.
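The core mechanism, suppressing gradients of parameters flagged as unimportant during fine-tuning, can be sketched in a few lines of PyTorch. The mask criterion and the channel-wise random dropout used in PGMPF are the paper's own; the sketch below substitutes an assumed L1-magnitude channel mask purely for illustration.

import torch
import torch.nn as nn

# Sketch of gradient masking during fine-tuning (assumed L1-magnitude criterion,
# not the exact PGMPF mask or its channel-wise random dropout).
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Flatten(),
                      nn.Linear(16 * 8 * 8, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

masks = {}
for name, p in model.named_parameters():
    if p.dim() == 4:                                      # conv weights only
        importance = p.detach().abs().sum(dim=(1, 2, 3))  # per-output-channel L1 norm
        keep = importance >= importance.median()          # prune roughly half the channels
        masks[name] = keep.float().view(-1, 1, 1, 1)

x, y = torch.randn(4, 3, 8, 8), torch.randint(0, 10, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
for name, p in model.named_parameters():
    if name in masks:
        p.grad.mul_(masks[name])                          # suppress "unimportant" gradients
opt.step()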
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, Renhui, Liangde Gao, and Xuebing Chen. "Optimization design of centrifugal pump impeller based on multi-output Gaussian process regression." Modern Physics Letters B 35, no. 21 (June 15, 2021): 2150364. http://dx.doi.org/10.1142/s0217984921503644.

Full text
Abstract:
To overcome the problems of large calculation cost and high dependence on designers’ experience, an optimization design method based on multi-output Gaussian process regression (MOGPR) was proposed. The hydraulic design method of centrifugal pump based on the MOGPR model was constructed under Bayesian framework. Based on the available excellent hydraulic model, the complex relationship between the performance parameters such as head, flow rate and the geometric parameters of centrifugal pump impeller was trained. The hydraulic design of the impeller for M125-100 centrifugal pump was performed by the proposed MOGPR surrogate model design method. The initial MOGPR design was further optimized by using the proposed MOGPR and NSGA-II hybrid model. The initial sample set for NSGA-II was designed by Latin hypercube design based on the MOGPR initial design. The relationship between the impeller geometry and the CFD numerical results of the sample set was trained to construct the surrogate model for pump hydraulic performance prediction. The MOGPR surrogate model was used to evaluate the objective function value of the offspring samples in NSGA-II multi-objective optimization. The comparison of the pump hydraulic performance between the optimized designs and the initial design shows that the efficiency and the head of the tradeoff optimal design are increased by 2.5% and 2.6%, respectively. The efficiency of the optimal head constraint design is increased by 3.2%. The comparison of the inner flow field shows that turbulent kinetic energy decreases significantly and flow separation is effectively suppressed for the optimal head constraint design.
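A minimal surrogate-modelling loop of the kind described, fitting a Gaussian process to sampled designs and using it to score candidates cheaply, might look as follows. The geometry parametrization and responses are hypothetical stand-ins (there is no CFD here), and scikit-learn's regressor treats the outputs independently rather than as the correlated multi-output GP (MOGPR) used in the paper.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: rows = normalized impeller design parameters,
# columns of Y = (head, efficiency).  Real data would come from CFD runs.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(30, 4))
Y = np.column_stack([X @ [0.4, 0.3, 0.2, 0.1],              # stand-in "head" response
                     1.0 - ((X - 0.5) ** 2).sum(axis=1)])   # stand-in "efficiency"

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gpr.fit(X, Y)                                    # GPR surrogate for both outputs

candidates = rng.uniform(0.0, 1.0, size=(1000, 4))
pred = gpr.predict(candidates)                   # cheap surrogate evaluation
best = candidates[np.argmax(pred[:, 1])]         # e.g. pick max predicted efficiency
print("best candidate:", best)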
APA, Harvard, Vancouver, ISO, and other styles
38

Hartvigsen, Gregg, and William T. Starmer. "Plant-Herbivore Coevolution in a Spatially and Genetically Explicit Model." Artificial Life 2, no. 2 (January 1995): 239–59. http://dx.doi.org/10.1162/artl.1995.2.2.239.

Full text
Abstract:
A coevolutionary model was developed to test interactions between diploid plants and herbivores using genetic algorithms on a spatial lattice. Simulated plants carried defensive genes and herbivores carried genes coding for resistance (metabolism of herbivore defense) in gene-for-gene synchrony. Collectively these genes are referred to as defensive/resistance genes (DR-genes). Genes were linked on chromosomes. Regulatory genes modified both dominance at these DR loci and the tradeoff cost involved in producing either defense or resistance. We tested the effects of varying a) the number of DR-loci, b) the ratio of the number of herbivore:plant generations, c) the shape (square vs. long and thin) and function (torus vs. island) of the lattice, and d) herbivore encounter rate on plant progeny dispersal distance. Increasing both the number of DR-genes and the ratio of herbivore:plant generations caused a tighter coevolutionary response between plants and herbivores. Plant defense was highly sensitive to herbivory but not to increasing encounter rates. Plant DR-genes were selectively disadvantageous with only one locus but selectively favored with two or more loci. Increasing the number of herbivore:plant generations caused increased fluctuations in herbivore resistance gene frequencies and a decrease in the lag time in herbivore response to changes in plant defensive gene frequencies. The relationship between heterotroph and autotroph DR-genes increased exponentially with increasing numbers of DR-loci. This relationship suggests that autotrophs benefit from increased diversity of defense that causes a relative increase in cost for the heterotrophs. The shape of the lattice interacted with lattice function, resulting in high species persistence on wraparound habitats and the greatest extinction likelihood on rectangular islands. Low to moderate herbivore encounter rates increased plant progeny dispersal distance while high herbivore encounter rates tended to reduce dispersal distance. The frequencies of genes coding for plant defense and herbivore resistance were dynamic for thousands of generations, despite the homogeneous lattice. This interaction may increase extinction probabilities in fragmented habitats.
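The gene-for-gene payoff structure with a tradeoff cost can be illustrated with a toy fitness function. The locus counts, benefits, and costs below are hypothetical and far simpler than the diploid, regulatory-gene model in the paper.

import numpy as np

# Toy gene-for-gene interaction: a plant defense allele at locus i is effective
# unless the herbivore carries the matching resistance allele; carrying alleles
# costs both parties (the tradeoff cost).  Values are illustrative only.
def plant_fitness(defense, resistance, benefit=0.3, cost=0.1):
    effective = defense & ~resistance            # defended loci the herbivore cannot counter
    return 1.0 + benefit * effective.sum() - cost * defense.sum()

def herbivore_fitness(defense, resistance, benefit=0.3, cost=0.1):
    countered = defense & resistance             # defenses the herbivore can metabolize
    return 1.0 + benefit * countered.sum() - cost * resistance.sum()

rng = np.random.default_rng(2)
defense = rng.integers(0, 2, size=5).astype(bool)      # 5 DR loci
resistance = rng.integers(0, 2, size=5).astype(bool)
print(plant_fitness(defense, resistance), herbivore_fitness(defense, resistance))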
APA, Harvard, Vancouver, ISO, and other styles
39

Tsai, Sheng-Tzung, Todd M. Herrington, Shaun Patel, Kristen Kanoff, Alik S. Widge, Darin D. Dougherty, and Emad N. Eskandar. "149 Human Subthalamic Nucleus Neurons Exhibit Increased Theta-band Phase-locking During High-conflict Decision Making." Neurosurgery 64, CN_suppl_1 (August 24, 2017): 236. http://dx.doi.org/10.1093/neuros/nyx417.149.

Full text
Abstract:
Abstract INTRODUCTION The subthalamic nucleus (STN) is thought to be preferentially engaged during high-conflict decision making in humans. The population neuronal spike rate in the STN has been reported to increase during decision conflict. Conflict and feedback-related activity is also reflected in theta-band (4-8 Hz) oscillations in the STN. It remains unknown how single-neuron activity and theta-band local field potential (LFP) oscillations interact to support decision making. METHODS We simultaneously recorded single-neuron spike activity and LFP from the STN of eight Parkinson's disease (PD) patients while they performed a novel Aversion-Reward conflict (ARC) task. Subjects decide whether to accept an offer of a monetary reward paired with a variable risk of an aversive air puff to the eye. By varying the reward and risk, we are able to study approach-avoidance decision making across a range of conflict. Using this task, we examined how theta-frequency oscillations and entrained single neurons are involved in humans' integration of cost and benefit and in decisions at various levels of conflict. RESULTS The ARC task reveals diverse risk-reward tradeoff strategies of patients. Consistent across patients, there is a positive correlation between the degree of decision conflict and reaction time (e.g., higher conflict offers require longer for subjects to decide). During high-conflict decisions, the LFP in the STN showed increased sub-theta oscillatory activity, while increased theta activity was found during low-conflict decisions. Single-trial STN theta-band power was correlated with degree of decision conflict. Interestingly, the decision to take or forgo the reward is predicted by theta-frequency phase-locking of STN neurons. CONCLUSION Our findings support the hypothesis that theta-band oscillations in single neurons reflect the engagement of the STN during conflict decision making. Furthermore, STN neurons with theta-band entrainment correlate with willingness to approach risk to pursue reward.
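Spike-LFP theta phase-locking of the kind reported is commonly quantified by band-pass filtering the LFP, taking the Hilbert phase at spike times, and computing the resultant vector length. The sketch below shows that standard recipe on synthetic data; it is not the authors' analysis pipeline, and the signals are fabricated for illustration.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)    # synthetic LFP with 6 Hz rhythm
spike_times = np.sort(np.random.uniform(0, 10, 200))               # synthetic spike train

b, a = butter(3, [4 / (fs / 2), 8 / (fs / 2)], btype="bandpass")   # theta band, 4-8 Hz
theta_phase = np.angle(hilbert(filtfilt(b, a, lfp)))

spike_idx = np.searchsorted(t, spike_times).clip(0, t.size - 1)
phases = theta_phase[spike_idx]
plv = np.abs(np.exp(1j * phases).mean())      # phase-locking value (resultant length)
print(f"spike-LFP theta PLV: {plv:.3f}")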
APA, Harvard, Vancouver, ISO, and other styles
40

Aziz, Michael J. "(Invited) In Pursuit of the Fountain of Youth for Organic-Based Aqueous Flow Batteries." ECS Meeting Abstracts MA2022-02, no. 46 (October 9, 2022): 1698. http://dx.doi.org/10.1149/ma2022-02461698mtgabs.

Full text
Abstract:
The ability to store large amounts of electrical energy is of increasing importance with the growing fraction of electricity generation from intermittent renewable sources such as wind and solar. We have developed high performance flow batteries based on the aqueous redox behavior of small organic and metalorganic molecules, e.g. [1-9]. These redox active materials can be inexpensive and exhibit rapid redox kinetics, high solubilities, and long lifetimes, although short lifetimes are more common [7, 10]. We will discuss the economic tradeoff between upfront capital cost and periodic chemical replacement cost [11]. We discuss the very few chemistries with long enough calendar life for practical application in stationary storage [3-6, 9], and on the prospects for reversing capacity fade by recomposing decomposed molecules [8, 12]. [1] B. Huskinson, M.P. Marshak, C. Suh, S. Er, M.R. Gerhardt, C.J. Galvin, X. Chen, A. Aspuru-Guzik, R.G. Gordon and M.J. Aziz, "A metal-free organic-inorganic aqueous flow battery", Nature 505, 195 (2014), http://dx.doi.org/10.1038/nature12909 [2] K. Lin, Q. Chen, M.R. Gerhardt, L. Tong, S.B. Kim, L. Eisenach, A.W. Valle, D. Hardee, R.G. Gordon, M.J. Aziz and M.P. Marshak, "Alkaline Quinone Flow Battery", Science 349, 1529 (2015), http://dx.doi.org/10.1126/science.aab3033 [3] E.S. Beh, D. De Porcellinis, R.L. Gracia, K.T. Xia, R.G. Gordon and M.J. Aziz, "A Neutral pH Aqueous Organic/Organometallic Redox Flow Battery with Extremely High Capacity Retention", ACS Energy Letters 2, 639 (2017). http://dx.doi.org/10.1021/acsenergylett.7b00019 [4] D.G. Kwabi, K. Lin, Y. Ji, E.F. Kerr, M.-A. Goulet, D. DePorcellinis, D.P. Tabor, D.A. Pollack, A. Aspuru-Guzik, R.G. Gordon, and M.J. Aziz, “Alkaline Quinone Flow Battery with Long Lifetime at pH 12” Joule 2, 1907 (2018). https://doi.org/10.1016/j.joule.2018.07.005 [5] Y. Ji, M.-A. Goulet, D.A. Pollack, D.G. Kwabi, S. Jin, D. DePorcellinis, E.F. Kerr, R.G. Gordon, and M.J. Aziz, “A phosphonate-functionalized quinone redox flow battery at near-neutral pH with record capacity retention rate” Advanced Energy Materials 2019, 1900039; https://doi.org/10.1002/aenm.201900039 [6] M. Wu, Y. Jing, A.A. Wong, E.M. Fell, S. Jin, Z. Tang, R.G. Gordon and M.J. Aziz, “Extremely Stable Anthraquinone Negolytes Synthesized from Common Precursors” Chem 6, 1432 (2020); https://doi.org/10.1016/j.chempr.2020.03.021 [7] M.-A. Goulet & M.J. Aziz, “Flow Battery Molecular Reactant Stability Determined by Symmetric Cell Cycling Methods”, J. Electrochem. Soc. 165, A1466 (2018). http://dx.doi.org/10.1149/2.0891807jes [8] M.-A. Goulet, L. Tong, D.A. Pollack, D.P. Tabor, S.A. Odom, A. Aspuru-Guzik, E.E. Kwan, R.G. Gordon, and M.J. Aziz, “Extending the lifetime of organic flow batteries via redox state management” J. Am. Chem. Soc. 141, 8014 (2019); https://doi.org/10.1021/jacs.8b13295 [9] M. Wu, M. Bahari, E.M. Fell, R.G. Gordon and M.J. Aziz, “High-performance anthraquinone with potentially low cost for aqueous redox flow batteries” J. Mater. Chem. A 9, 26709-26716 (2021). https://doi.org/10.1039/D1TA08900E [10] D.G. Kwabi, Y.L. Ji and M.J. Aziz, “Electrolyte Lifetime in Aqueous Organic Redox Flow Batteries: A Critical Review” Chem. Rev. 120, in press (2020); https://doi.org/10.1021/acs.chemrev.9b00599 [11] F.R. Brushett, M.J. Aziz and K.E. Rodby, “On lifetime and cost of redox-active organics for aqueous flow batteries” Invited Viewpoint article for ACS Energy Letters. 5, 879 (2020); https://doi.org/10.1021/acsenergylett.0c00140 [12] Y. Jing, E.W. 
Zhao, M.-A. Goulet, M. Bahari, E. Fell, S. Jin, A. Davoodi, Erlendur Jónsson, M. Wu, C.P. Grey, R.G. Gordon and M.J. Aziz, “Closing the Molecular Decomposition-Recomposition Loop in Aqueous Organic Flow Batteries” Nature Chemistry, in press (2022). Preprint: http://dx.doi.org/10.33774/chemrxiv-2021-x05x1
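The capital-cost versus chemical-replacement tradeoff mentioned in this abstract can be illustrated with a back-of-the-envelope comparison. Every number below is a hypothetical placeholder and is not taken from the talk or its references.

# Hypothetical comparison of two electrolytes for a 20-year storage project:
# a cheap molecule that fades quickly vs. a costlier, longer-lived one.
# All figures are illustrative placeholders, not data from the cited work.
def lifetime_chemical_cost(upfront_cost, fade_rate_per_year, replace_at_fade=0.2,
                           project_years=20):
    """Total electrolyte spend if the inventory is replaced whenever capacity
    fade reaches replace_at_fade (e.g. 20% of initial capacity)."""
    years_per_replacement = replace_at_fade / fade_rate_per_year
    n_replacements = int(project_years // years_per_replacement)
    return upfront_cost * (1 + n_replacements)

cheap_unstable = lifetime_chemical_cost(upfront_cost=100, fade_rate_per_year=0.10)
costly_stable = lifetime_chemical_cost(upfront_cost=250, fade_rate_per_year=0.005)
print(cheap_unstable, costly_stable)   # 1100 vs 250 (arbitrary units)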
APA, Harvard, Vancouver, ISO, and other styles
41

Johnson, Harry. "Collaborating to bring new technology on developments." APPEA Journal 62, no. 2 (May 13, 2022): S127—S131. http://dx.doi.org/10.1071/aj21175.

Full text
Abstract:
Project delivery certainty is a key success factor for positive business outcomes. Taking on world-first applications across multiple delivery streams comes with risk that requires significant risk reduction to meet business requirements. The Julimar Development completed this successfully across both drilling and completions, and subsea. Reliable multizone completions with robust sand control have traditionally been a challenge in high-rate gas wells. Cased hole stacked gravel packs can leave high mechanical skin, require multiple trips and are complex operations. While Open Hole Gravel Packs (OHGPs) have provided reliable sand control in such wells, multizone applications have been limited due to the tradeoff between effective gravel placement and zonal isolation. Recent developments in technology have enabled reliable gravel placement and complete zonal isolation. The collaboration between Woodside and Subsea 7 has delivered an 'industry first' on the Julimar Project, with the completion of an 18″ Corrosion-Resistant Alloy (CRA) gas transmission flowline installed via reel-lay – the largest diameter insulated CRA pipeline ever reeled. For background, most projects have traditionally used the 'S-Lay' method for installing pipe in Australia – the reel-lay method is less common. A key benefit of the reel-lay method is that it removes thousands of welds from the offshore installation vessel critical path, transferring them onshore into a safer, quality-controlled environment earlier in the schedule. Pipe joints are welded into 'stalks' which are then spooled onto purpose-built reel-lay vessels. Woodside and Subsea 7 were able to jointly demonstrate the safety, quality, technical and cost advantages of this innovative but field-proven reeled pipe-lay technology. The 18″ CRA flowline is a tangible example of the performance that can be delivered through early collaborative engagement and strategic investment in technology and is a step change in Australia and the industry. This joint presentation discusses contracting, design, execution and evaluation of technologies on the Julimar Development. It includes the method followed for reel-lay technology selection, engineering development, post project evaluation and the health and safety benefits. The presentation provides both the Operator and Contractor perspectives on the challenges of implementing new technology in developments in Australia.
APA, Harvard, Vancouver, ISO, and other styles
42

Grotegut, Chad, Geeta Swamy, Evan Myers, Laura Havrilesky, and Maeve Hopkins. "Induction of Labor versus Scheduled Cesarean in Morbidly Obese Women: A Cost-Effectiveness Analysis." American Journal of Perinatology 36, no. 04 (August 21, 2018): 399–405. http://dx.doi.org/10.1055/s-0038-1668591.

Full text
Abstract:
Objective To assess the costs, complication rates, and harm-benefit tradeoffs of induction of labor (IOL) compared to scheduled cesarean delivery (CD) in women with class III obesity. Study Design We conducted a cost analysis of IOL versus scheduled CD in nulliparous morbidly obese women. Primary outcomes were surgical site infection (SSI), chorioamnionitis, venous thromboembolism, blood transfusion, and readmission. Model outcomes were mean cost of each strategy, cost per complication avoided, and complication tradeoffs. We assessed the costs, complication rates, and harm-benefit tradeoffs of IOL compared with scheduled CD in women with class III obesity. Results A total of 110 patients underwent scheduled CD and 114 underwent IOL, of whom 61 (54%) delivered via cesarean. The group delivering vaginally experienced fewer complications. SSI occurred in 0% in the vaginal delivery group, 13% following scheduled cesarean, and 16% following induction then cesarean. In the decision model, the mean cost of induction was $13,349 compared with $14,575 for scheduled CD. Scheduled CD cost $9,699 per case of chorioamnionitis avoided, and resulted in 18 cases of chorioamnionitis avoided per additional SSI and 3 cases of chorioamnionitis avoided per additional hospital readmission. In sensitivity analysis, IOL is cost saving compared with scheduled CD unless the cesarean rate following induction exceeds 70%. Conclusion In morbidly obese women, induction of labor remains cost-saving until the rate of cesarean following induction exceeds 70%.
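The sensitivity-analysis logic, finding the post-induction cesarean rate at which induction stops being cost-saving, can be reproduced schematically. The per-pathway costs below are hypothetical placeholders; only the two mean strategy costs and the reported 70% breakeven come from the abstract, so the 65% result here is driven by the placeholders, not by the study's inputs.

# Schematic breakeven calculation for induction of labor (IOL) vs scheduled
# cesarean delivery (CD).  Component costs are hypothetical placeholders.
cost_scheduled_cd = 14575.0          # mean cost of scheduled CD (from abstract)
cost_iol_vaginal = 11000.0           # hypothetical: induction ending in vaginal delivery
cost_iol_cesarean = 16500.0          # hypothetical: induction ending in cesarean

def expected_iol_cost(p_cesarean):
    return p_cesarean * cost_iol_cesarean + (1 - p_cesarean) * cost_iol_vaginal

# Breakeven cesarean rate where IOL stops being cost-saving:
p_star = (cost_scheduled_cd - cost_iol_vaginal) / (cost_iol_cesarean - cost_iol_vaginal)
print(f"breakeven post-induction cesarean rate: {p_star:.0%}")   # 65% with these placeholders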
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Xulong, and Chenquan Gan. "Optimal and Nonlinear Dynamic Countermeasure under a Node-Level Model with Nonlinear Infection Rate." Discrete Dynamics in Nature and Society 2017 (2017): 1–16. http://dx.doi.org/10.1155/2017/2836865.

Full text
Abstract:
This paper mainly addresses the issue of how to effectively inhibit viral spread by means of dynamic countermeasures. To this end, a controlled node-level model with nonlinear infection and countermeasure rates is established. On this basis, an optimal control problem capturing the dynamic countermeasure is proposed and analyzed. Specifically, the existence of an optimal dynamic countermeasure scheme and the corresponding optimality system are shown theoretically. Finally, some numerical examples are given to illustrate the main results, from which it is found that (1) the proposed optimal strategy can achieve a low level of infections at a low cost and (2) adjusting the nonlinear infection and countermeasure rates and the tradeoff factor can be conducive to the containment of virus propagation with less cost.
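A generic controlled node-level epidemic model with a nonlinear (saturating) infection rate and a countermeasure term can be integrated numerically as below. The functional forms, network, control schedule, and parameters are assumptions for illustration, not the model or the optimal control analyzed in the paper.

import numpy as np
from scipy.integrate import odeint

# Generic node-level SIS-type model on a small network with a saturating
# infection rate and a countermeasure control u(t).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)       # adjacency matrix
beta, delta, k = 0.4, 0.2, 2.0                  # infection, recovery, countermeasure strength

def u(t):                                        # a simple (non-optimal) countermeasure schedule
    return 0.5 if t > 5 else 0.0

def dpdt(p, t):
    pressure = A @ p
    infection = beta * pressure / (1.0 + pressure)   # saturating (nonlinear) infection rate
    return (1 - p) * infection - (delta + k * u(t)) * p

p0 = np.array([0.9, 0.0, 0.0, 0.0])             # node 0 initially infected with probability 0.9
t = np.linspace(0, 20, 201)
p = odeint(dpdt, p0, t)
print("final infection probabilities:", p[-1].round(3))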
APA, Harvard, Vancouver, ISO, and other styles
44

McVay, D. A., and J. P. Spivey. "Optimizing Gas-Storage Reservoir Performance." SPE Reservoir Evaluation & Engineering 4, no. 03 (June 1, 2001): 173–78. http://dx.doi.org/10.2118/71867-pa.

Full text
Abstract:
Summary As gas storage becomes increasingly important in managing the nation's gas supplies, there is a need to develop more gas-storage reservoirs and to manage them more efficiently. Using computer reservoir simulation to rigorously predict gas-storage reservoir performance, we present specific procedures for efficient optimization of gas-storage reservoir performance for two different problems. The first is maximizing working gas volume and peak rates for a particular configuration of reservoir, well, and surface facilities. We present a new, simple procedure to determine the maximum performance with a minimal number of simulation runs. The second problem is minimizing the cost to satisfy a specific production and injection schedule, which is derived from the working gas volume and peak rate requirements. We demonstrate a systematic procedure to determine the optimum combination of cushion gas volume, compression horsepower, and number and locations of wells. The use of these procedures is illustrated through application to gas-reservoir data. Introduction With the unbundling of the natural gas industry as a result of Federal Energy Regulatory Commission (FERC) Order 636, the role of gas storage in managing the nation's gas supplies has increased in importance. In screening reservoirs to determine potential gas-storage reservoir candidates, it is often desirable to determine the maximum storage capacity for specific reservoirs. In designing the conversion of producing fields to storage or the upgrading of existing storage fields, it is beneficial to determine the optimum combination of wells, cushion gas and compression facilities that minimizes investment. A survey of the petroleum literature found little discussion of simulation-based methodologies for achieving these two desired outcomes. Duane1 presented a graphical technique for optimizing gas-storage field design. This method allowed the engineer to minimize the total field-development cost for a desired peak-day rate and cyclic capacity (working gas capacity). To use the method, the engineer would prepare a series of field-design optimization graphs for different compressor intake pressures. Each graph consists of a series of curves corresponding to different peak-day rates. Each curve, in turn, shows the number of wells required to deliver the given peak-day rate as a function of the gas inventory level. Thus, the tradeoff between compression horsepower costs, well costs, and cushion gas costs could be examined to determine the optimum design in terms of minimizing the total field-development cost. Duane's method implicitly assumes that boundary-dominated flow will prevail throughout the reservoir. Henderson et al. 2 presented a case history of storage-field-design optimization with a single-phase, 2D numerical model of the reservoir. They varied well placement and well schedules in their study to reduce the number of wells necessary to meet the desired demand schedule. They used a trial-and-error method and stated that the results were preliminary. They found that wells in the poorest portion of the field should be used to meet demand at the beginning of the withdrawal period. Additional wells were added over time to meet the demand schedule. The wells in the best part of the field were held in reserve to meet the peak-day requirements, which occurred at the end of the withdrawal season. Coats3 presented a method for locating new wells in a heterogeneous field. 
His objective was to determine the optimum drilling program to maintain a contractual deliverability during field development. He provided a discussion of whether wells should be spaced closer together in areas of high kh or in areas of low kh. He found that when φh is essentially uniformly distributed, the wells should be closer together in low kh areas. On the other hand, if the variation in kh is largely caused by variations in h, or if porosity is highly correlated with permeability, wells should be closer together in areas of high kh. Coats' method assumes boundary-dominated flow throughout the reservoir. Wattenbarger4 used linear programming to solve the problem of determining the withdrawal schedule on a well-by-well basis that would maximize the total seasonal production, subject to constraints such as fixed demand schedule and minimum wellbore pressure. Van Horn and Wienecke5 solved the gas-storage-design optimization problem with a Fibonacci search algorithm. They expressed the investment requirement for a storage field in terms of four variables: cushion gas, number of wells, purification equipment, and compressor horsepower. They chose as the optimum design the combination of these four variables that minimized investment cost. The authors used an empirical backpressure equation, combined with a simplified gas material-balance equation, as the reservoir model. In this paper we present systematic, simulation-based methodologies for optimizing gas-storage reservoir performance for two different problems. The first is maximizing working gas volume and peak rates for a particular configuration of reservoir, well, and surface facilities. The second problem is minimizing the cost to satisfy a specific production and injection schedule, which is derived from the working gas volume and peak rate requirements. Constructing the Reservoir Model To optimize gas-storage reservoir performance, a model of the reservoir is required. We prefer to use the simplest model that is able to predict storage-reservoir performance as a function of the number and locations of wells, compression horsepower, and cushion gas volume. Although models combining material balance with analytical or empirical deliverability equations may be used in certain situations, a reservoir-simulation model is usually best, owing to its flexibility and its ability to handle well interference and complex reservoirs accurately. It is important to calibrate the model against historical production and pressure data; we must show that the model reproduces past reservoir performance accurately before we can use it to predict future performance with reliability. However, even calibrating the model by history matching past performance may not be adequate. It is our experience that information obtained during primary depletion of a reservoir is often not adequate to predict its performance under storage operations. Primary production over many years may mask layered or dual-porosity behavior that significantly affects the ability of the reservoir to deliver large volumes of gas within a 4- or 5-month period. Wells and Evans6 presented a case history of the Loop gas storage field, which exhibited this behavior. It may be necessary to implement a program of coring, logging, pressure-transient testing, and/or simulated storage production/injection testing to characterize the reservoir accurately.
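For readers less familiar with gas-storage screening, the simplest reservoir model mentioned above couples a volumetric material balance with a deliverability equation. A sketch of the p/z material-balance piece is given below, with illustrative numbers rather than data from the paper.

# Volumetric gas material balance: p/z declines linearly with cumulative
# production, p/z = (p_i/z_i) * (1 - Gp/G).  Numbers are illustrative only.
p_i, z_i = 3000.0, 0.90      # initial pressure (psia) and z-factor
G = 50.0                     # original gas in place (Bcf)

def p_over_z(Gp):
    """p/z after producing Gp Bcf from the reservoir (or injecting, if Gp < 0)."""
    return (p_i / z_i) * (1.0 - Gp / G)

for Gp in [0.0, 10.0, 20.0, 30.0]:
    print(f"Gp = {Gp:5.1f} Bcf  ->  p/z = {p_over_z(Gp):7.1f} psia")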
APA, Harvard, Vancouver, ISO, and other styles
45

Lin, Tao, Wei-Ting Liao, Luis F. Rodríguez, Yogendra N. Shastri, Yangfeng Ouyang, M. E. Tumbleson, and K. C. Ting. "Optimization Modeling Analysis for Grain Harvesting Management." Transactions of the ASABE 62, no. 6 (2019): 1489–508. http://dx.doi.org/10.13031/trans.13135.

Full text
Abstract:
Highlights: An optimization model, called BioGrain, was developed to optimize grain harvesting decisions. The results highlight the tradeoffs between grain losses and drying costs for profit maximization. The optimization model can provide decision support for individual farms in different regions. Abstract. Appropriate farm management practices can improve agricultural productivity and reduce grain losses. An optimization model, called BioGrain, was developed to maximize farmers' profits by optimizing critical grain harvesting decisions including agricultural machinery selection and harvesting schedules. This model was applied to 18 representative farms of varied sizes in Illinois, Iowa, and Minnesota. Our optimization showed that understanding crop moisture dynamics is critical for maximizing profits at the farm scale. Our results highlight the tradeoffs between grain losses and drying costs when considering profit maximization. By optimizing harvesting dates and machinery size, large farms can reduce the grain loss rate to 10%, and small farms can achieve a 5% grain loss rate. Large farms outperformed small farms on unit profits despite their higher grain loss rate. The model considers both revenue and cost related factors in harvesting decisions and quantifies the tradeoffs among corn yield, drying, and equipment selection. The model can be used to provide decision support for individual farms in different regions considering their local crop and market information. Keywords: Grain losses, Harvesting schedule, Machinery selection, Optimization, Profits.
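The loss-versus-drying tradeoff the model optimizes can be illustrated with a toy per-hectare profit calculation over candidate harvest dates. The moisture and loss curves, prices, and yields below are hypothetical stand-ins for the crop dynamics BioGrain actually represents.

import numpy as np

# Toy harvest-timing tradeoff: waiting lets grain dry in the field (lower drying
# cost) but increases field losses.  Curves and prices are hypothetical.
days = np.arange(0, 41)                      # days after maturity
moisture = 0.30 - 0.004 * days               # field moisture fraction, drying down
loss_rate = 0.02 + 0.0025 * days             # harvest loss fraction, growing over time

yield_t_ha, price_per_t = 11.0, 180.0        # corn yield (t/ha) and price ($/t)
drying_cost = 120.0 * np.maximum(moisture - 0.15, 0.0)   # $/t to dry down to 15%

revenue = yield_t_ha * (1 - loss_rate) * price_per_t
cost = yield_t_ha * (1 - loss_rate) * drying_cost
profit = revenue - cost

best = days[np.argmax(profit)]
print(f"best harvest day: {best}, profit = {profit.max():.0f} $/ha")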
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Hung-Ming Peter, and Keith D. Willett. "The Taxation of Environmental Pollution: A Model for Tax Revenue-Environmental Quality Tradeoffs." International Journal of Financial Research 8, no. 1 (December 8, 2016): 65. http://dx.doi.org/10.5430/ijfr.v8n1p65.

Full text
Abstract:
The paper analyzes the Willamette River in Oregon. Here a model (combining the least-cost model and the constraint method of multi-objective programming) is used to determine the appropriate tax rate on environmental externalities, incorporating both revenue and environmental quality objectives. The study finds the following. (1) By using the optimal tax rate, the appropriate tax revenue is determined. (2) The efficient solution set (including tax revenue and water quality considerations) is found by using differing optimal tax rates. (3) The optimal point (solution) in the efficient solution set is chosen by the geometrical argument approach and trade-off analysis approach.
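The constraint method of multi-objective programming used in the paper amounts to optimizing one objective while the other is held to a bound that is then swept to trace the efficient set. A small linear-programming sketch of that idea is shown below; the coefficients are made up and are not the Willamette River data.

import numpy as np
from scipy.optimize import linprog

# Epsilon-constraint sketch: maximize tax revenue from two dischargers while the
# total pollution load is capped at a swept bound.  Coefficients are illustrative.
tau = np.array([5.0, 3.0])          # tax rate per unit emission for each discharger
load = np.array([2.0, 1.0])         # pollution load per unit emission
e_max = [(0, 10), (0, 10)]          # emission bounds

for cap in [10, 20, 30]:            # sweep the environmental-quality constraint
    res = linprog(c=-tau,           # linprog minimizes, so negate to maximize revenue
                  A_ub=[load], b_ub=[cap], bounds=e_max, method="highs")
    print(f"load cap {cap:>2}: revenue = {-res.fun:5.1f}, emissions = {res.x.round(2)}")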
APA, Harvard, Vancouver, ISO, and other styles
47

Shina, S. G., and A. Saigal. "Technology Cost Modeling for the Manufacture of Printed Circuit Boards in New Electronic Products." Journal of Manufacturing Science and Engineering 120, no. 2 (May 1, 1998): 368–75. http://dx.doi.org/10.1115/1.2830136.

Full text
Abstract:
The rapid pace of technological and material advances has made it difficult to ascertain the most cost-effective plan for manufacturing new products. The decision on the appropriate manufacturing methods and materials is best made during the initial design stage of the product. However, tradeoffs in manufacturing and materials technology are not easily discernible to the design team, while traditional cost accounting systems reflect neither continuous improvements nor the opportunities for increasing quality and reducing cost. Systems that have recently been developed to assist in estimating new product cost, such as Activity Based Costing (ABC) and Technical Cost Modeling, focus on a stable manufacturing environment. They presume that production is either dedicated to the new product, or that utilization and/or yields of machines are at static levels. In most modern companies, the majority of new products introduced are evolutionary in nature, attempting to gain maximum advantage of current material and manufacturing technologies while continuing to be made alongside existing production. These new products can significantly change the current manufactured product and material mix, and hence their cost. A technology based modeling approach is presented in this paper to address the issues of changes in a dynamic manufacturing environment, where each design selection can be evaluated individually based on its production impact. Elements of this approach are described in the design of electronic products using printed circuit boards. The design team can select from a large number of combinations of technologies, materials and manufacturing steps, each with its particular cost, production rate and yield. A technology based cost modeling system can be developed to guide the team in the selection process by identifying the cost tradeoffs involved in each alternative design.
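The kind of technology-based roll-up the paper describes, where each candidate design is costed from the production rate and yield of the process steps it triggers, can be sketched as follows. The step data are hypothetical, not the paper's printed-circuit-board figures.

# Hypothetical technology-based cost roll-up for one printed-circuit-board design:
# each process step contributes (machine $/hr x time) / yield, so low-yield or
# slow steps dominate the comparison between design alternatives.
steps = [
    # (name, machine rate $/hr, hours per board, step yield)
    ("solder paste print",  60.0, 0.002, 0.995),
    ("SMT placement",       90.0, 0.010, 0.990),
    ("reflow",              40.0, 0.008, 0.998),
    ("through-hole insert", 70.0, 0.015, 0.985),
]

def board_cost(steps, material_cost=12.0):
    cost, cumulative_yield = material_cost, 1.0
    for name, rate, hours, y in steps:
        cost += rate * hours / y          # processing cost inflated by step yield
        cumulative_yield *= y
    return cost / cumulative_yield        # scrap spreads all cost over good boards

print(f"cost per good board: ${board_cost(steps):.2f}")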
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Hai Rui, Yi Yang, Lian Li, Wen Zhi Cheng, and Cong Ding. "A Dynamic Resource Allocation Framework in the Cloud." Applied Mechanics and Materials 441 (December 2013): 974–79. http://dx.doi.org/10.4028/www.scientific.net/amm.441.974.

Full text
Abstract:
In this paper, we present and study a new system framework, which achieves allocation of cloud resources and guarantees the response time of application requests and the service rate of cloud resources. The architecture consists of three important components: a queue controller, a resource monitor and a scale controller. The system leverages a priority queue to assure the response time of application requests. The resource monitor measures the workload and resource status in the cloud environment. In the scale controller, an algorithm is proposed to guarantee the service rate by adding or reducing cloud resources and to address the tradeoff between cost and service rate, avoiding a waste of cloud resources. Finally, we evaluate this system framework by verifying its feasibility and stability.
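The scale-controller behaviour described, adding or releasing resources to keep the service rate up without over-provisioning, can be sketched as a simple threshold rule. The thresholds, per-instance capacity, and workload trace below are hypothetical, not the paper's algorithm.

# Toy scale controller: keep the measured service rate (served/arriving requests)
# high by adding instances, and scale in when heavily over-provisioned.
capacity_per_instance = 100          # requests/s one instance can serve
instances = 2

def scale(arrival_rate, instances, low=0.7, high=0.95):
    service_rate = min(1.0, instances * capacity_per_instance / arrival_rate)
    if service_rate < high:          # falling behind -> scale out
        instances += 1
    elif service_rate >= 1.0 and (instances - 1) * capacity_per_instance / arrival_rate > 1 / low:
        instances -= 1               # heavily over-provisioned -> scale in to cut cost
    return instances, service_rate

for arrival_rate in [150, 260, 400, 380, 120, 90]:   # hypothetical workload trace
    instances, sr = scale(arrival_rate, instances)
    print(f"arrivals {arrival_rate:>3}/s -> service rate {sr:.2f}, instances now {instances}")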
APA, Harvard, Vancouver, ISO, and other styles
49

Li, Xuexiang, Wanlin Hu, Fan Zhang, Jinxin Zhang, Feng Sheng, and Xiangyu Xu. "Carbon Sink Cost and Influence Factors Analysis in a National Afforestation Project under Different Investment Modes." International Journal of Environmental Research and Public Health 19, no. 13 (June 24, 2022): 7738. http://dx.doi.org/10.3390/ijerph19137738.

Full text
Abstract:
Afforestation projects are the main source of carbon sinks. Measurement and impact analysis of carbon sink costs will help accelerate the marketization of forestry carbon sinks. Considering the opportunity cost of land use and the carbon release cost of wood products, this study proposed a forestry carbon sink cost model under the Public–Private Partnership (PPP) and direct investment (DI) modes based on the classic carbon sink model. Then, the proposed models were applied to a real-world afforestation project, the 20-year national afforestation project (NAP) in Laohekou City, Hubei Province, China. With the help of the input–output forestry carbon sink cost–benefit analysis framework, a dynamic analysis of factors such as rotation period, timber price, discount rate and yield rate for forestry is carried out. Results show that: (1) as the rotation period, wood market price, and wood yield rate increase, the carbon sink cost of the Laohekou NAP gradually decreases, while the discount rate has the opposite effect; (2) the DI mode is more feasible than the PPP mode under present conditions. The PPP mode is more feasible than the DI mode only when the wood price is lower than 73.18% of the current price, the yield rate is lower than 0.485, and the discount rate is higher than 6.77%. (3) When choosing tree species for the NAP, the carbon sink capacity, wood market price, maturity time, and planting cost should be considered together. The proposed model and the obtained results can not only help local governments and forestry carbon sink enterprises make tradeoffs between the PPP and DI modes, but also provide them with useful information for reducing carbon sink costs.
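The general structure of such a cost model, discounted establishment and land-opportunity costs net of timber revenue, spread over the tonnes sequestered, can be sketched as below. Every figure is a hypothetical placeholder, not the Laohekou NAP data or the paper's PPP/DI formulations.

# Back-of-the-envelope carbon sink cost: discounted net cost per tonne of CO2
# sequestered over one rotation.  All inputs are illustrative placeholders.
def carbon_sink_cost(planting_cost, annual_land_opportunity_cost, timber_revenue,
                     annual_sequestration_tco2, rotation_years, discount_rate):
    pv_costs = planting_cost + sum(annual_land_opportunity_cost / (1 + discount_rate) ** t
                                   for t in range(1, rotation_years + 1))
    pv_revenue = timber_revenue / (1 + discount_rate) ** rotation_years
    total_tco2 = annual_sequestration_tco2 * rotation_years
    return (pv_costs - pv_revenue) / total_tco2

print(carbon_sink_cost(planting_cost=12000, annual_land_opportunity_cost=900,
                       timber_revenue=30000, annual_sequestration_tco2=10,
                       rotation_years=20, discount_rate=0.05))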
APA, Harvard, Vancouver, ISO, and other styles
50

MARTEN, ALEX L. "THE ROLE OF SCENARIO UNCERTAINTY IN ESTIMATING THE BENEFITS OF CARBON MITIGATION." Climate Change Economics 05, no. 03 (August 2014): 1450007. http://dx.doi.org/10.1142/s2010007814500079.

Full text
Abstract:
The benefits of carbon mitigation are subject to numerous sources of uncertainty, and accounting for this uncertainty in policy analysis is crucial. One often overlooked source of uncertainty is the forecast of future baseline conditions (e.g., population, economic output, emissions) from which carbon mitigation benefits are assessed. Through relationships that are in some cases highly non-linear, these baseline conditions determine the forecast level and rate of climate change, exposed populations, vulnerability, and the way in which inter-temporal tradeoffs are valued. We study the impact of explicitly considering this uncertainty on a widely used metric to assess the benefits of carbon dioxide mitigation, the social cost of carbon (SCC). To explore this question, a detailed integrated assessment model that couples economic and climate systems to assess the damages of climate change is driven by a library of consistent probabilistic socioeconomic-emission scenarios developed using a comprehensive global computable general equilibrium (CGE) model. We find that scenario uncertainty has a significant effect on estimates of the SCC and that excluding this source of uncertainty could lead to an underestimate of the mitigation benefits. A detailed decomposition finds that this effect is driven primarily through the role that uncertainty regarding future consumption per capita growth has on the value of inter-temporal tradeoffs through the consumption discount rate.
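One channel named in the abstract, uncertainty in per-capita consumption growth feeding into the consumption discount rate, can be illustrated with the standard Ramsey rule. The parameter values and scenario probabilities below are conventional placeholders, not the paper's CGE scenarios.

import numpy as np

# Ramsey rule: r = delta + eta * g, where delta is the pure rate of time
# preference, eta the elasticity of marginal utility, and g per-capita
# consumption growth.  Parameters are conventional placeholders.
delta, eta = 0.01, 1.5
growth_scenarios = np.array([0.005, 0.015, 0.025])   # hypothetical low/central/high growth
prob = np.array([0.25, 0.50, 0.25])

r = delta + eta * growth_scenarios                    # discount rate per scenario
horizon = 100                                         # value damages 100 years out
discount_factors = np.exp(-r * horizon)

print("scenario discount rates :", r)
print("expected discount factor:", float(prob @ discount_factors))
print("factor at expected rate :", float(np.exp(-(delta + eta * prob @ growth_scenarios) * horizon)))
# The expected discount factor exceeds the factor at the expected rate (Jensen's
# inequality), i.e. ignoring growth uncertainty understates far-future benefits.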
APA, Harvard, Vancouver, ISO, and other styles
