Journal articles on the topic 'Mathematical models-overhead costs'

To see the other types of publications on this topic, follow the link: Mathematical models-overhead costs.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 18 journal articles for your research on the topic 'Mathematical models-overhead costs.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Bogachkov, I. M., and R. N. Khamitov. "Algorithm to select the voltage class for the gas field electricity supply system." Vestnik IGEU, no. 2 (April 30, 2021): 32–39. http://dx.doi.org/10.17588/2072-2672.2021.2.032-039.

Full text
Abstract:
The existing algorithms and mathematical models for selecting a voltage class, based on the theory of experimental planning, were developed for industrial enterprises (overhead lines up to 10 km long, power up to 20 MV, a radial layout with transformation at the end of the line). They do not consider the features of gas fields (overhead lines up to 20 km long, a capacity of 1 MV with projected growth to 10 MV, and a transmission network with one pass-through trunk line and distributed transformation along the line). At present, a mathematical model is built from two factors: the average length of the power line and the total load of the enterprise. Such models cannot quantify the dynamics of a gas field's power supply system given the multiple-fold growth of the electrical load in each period of the life cycle. The purpose of this study is to develop a model that solves this problem. An extremal experiment was carried out during the research, with the following inputs: the average length of the power line, the number of gas clusters, and the growth rate of the electric load. The response function is the voltage class that is optimal for the minimum discounted cost. The authors propose a regression model in which the "total load" factor is split into two components: the number of gas clusters and the growth rate of the electric loads. An algorithm for selecting the optimal voltage class of a distribution grid is proposed. A dynamic experiment was carried out in which the growth rate of electric loads in the regression model was varied while the other factors were held constant. As a result, the voltage class with the minimum discounted costs is obtained for each period of the field life cycle. The algorithm is implemented in the "PRON" software, with which the distribution grids of several operating gas fields in Western Siberia have been investigated.
The optimal voltage class for the distribution grid of these gas fields is 20 kV. The reliability of the results is verified against reference models for calculating discounted costs.
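The minimum-discounted-cost criterion used in the abstract above can be illustrated with a small sketch. All cost figures and voltage classes below are hypothetical; only the discounting logic mirrors the selection rule described in the paper.

```python
# Illustrative sketch (hypothetical cost streams): choosing the voltage
# class that minimises total discounted cost over the field life cycle.

def discounted_cost(annual_costs, rate):
    """Sum of annual costs discounted back to year zero."""
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(annual_costs, start=1))

# Hypothetical annual cost streams (capital + operating) per candidate class.
candidates = {
    "10 kV": [120, 80, 80, 95, 110],
    "20 kV": [140, 60, 60, 60, 65],
    "35 kV": [190, 55, 55, 55, 55],
}
rate = 0.1  # discount rate
costs = {v: discounted_cost(stream, rate) for v, stream in candidates.items()}
best = min(costs, key=costs.get)  # class with the minimum discounted cost
```

With these made-up numbers the higher capital cost of the 20 kV option is outweighed by its lower operating costs over the life cycle, which is the trade-off the algorithm resolves period by period.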
APA, Harvard, Vancouver, ISO, and other styles
2

Hamdollahzadeh, Ali, and Kamran Jamali Firouzabadi. "A Model to Determine Optimal Composition of Production to Obtain Maximum Profit & Reduce Overhead Costs by Linear Programming." Modern Applied Science 10, no. 10 (September 26, 2016): 289. http://dx.doi.org/10.5539/mas.v10n10p289.

Abstract:
Many factors emphasize the importance of developing the pharmaceutical industry: human health, reducing the rate of medicine use, raising healthcare to a global level, the influence of the pharmaceutical industry on the exchange market, job creation, and so on. Growing improvements in production systems, the appearance of mechanized systems and dynamic commercial markets have highlighted the need for planning. This study provides a model for determining the optimal composition of production at the Sobhan Darou Pharmaceutical Company, using linear programming to obtain maximum profit and reduce overhead costs. The Lingo application is applied to reach these goals. The results showed that a mathematical programming model can be used to determine the minimum total cost and an inventory control strategy. Using linear programming, we can take all tangible and intangible factors into account when making a choice, whereas output models consider only quantitative values. Another advantage of linear programming is that it can calculate production weights and rates with a systematic method, which increases efficiency and supports a proper choice.
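The product-mix problem described above is a classic linear program. As a hedged sketch with entirely hypothetical products, profits and resource constraints (not the paper's data), a small two-product LP can be solved by enumerating the vertices of the feasible region, since a linear objective attains its maximum at a vertex:

```python
# Minimal sketch (hypothetical coefficients): a two-product profit-maximisation
# LP solved by enumerating vertices of the feasible polygon.
from itertools import combinations

profit = (40.0, 30.0)                      # unit profit of products A and B
# Constraints a1*x + a2*y <= b: machine hours, raw material, capacity cap on A.
cons = [((1.0, 2.0), 40.0), ((3.0, 1.0), 45.0), ((1.0, 0.0), 12.0)]

def vertices(cons):
    pts = [(0.0, 0.0)]
    lines = cons + [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]  # include the axes
    for ((a1, a2), b1), ((c1, c2), b2) in combinations(lines, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:                # parallel lines: no intersection
            continue
        x = (b1 * c2 - b2 * a2) / det
        y = (a1 * b2 - c1 * b1) / det
        if x >= -1e-9 and y >= -1e-9 and all(
            a * x + b * y <= c + 1e-9 for (a, b), c in cons
        ):
            pts.append((x, y))
    return pts

best_mix = max(vertices(cons), key=lambda p: profit[0] * p[0] + profit[1] * p[1])
best_profit = profit[0] * best_mix[0] + profit[1] * best_mix[1]
```

A solver such as Lingo (used in the study) does the same search far more efficiently for realistic problem sizes; the enumeration here is only to make the mechanics visible.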
3

Kovalyov, М. Ya, B. M. Rozin, and I. A. Shaternik. "Approach to optimizing charging infrastructure of autonomous trolleybuses for urban routes." Informatics 18, no. 4 (December 31, 2021): 79–95. http://dx.doi.org/10.37661/1816-0301-2021-18-4-79-95.

Abstract:
PURPOSES. When designing a system of urban electric transport that charges while driving, including autonomous trolleybuses with batteries of increased capacity, it is important to optimize the charging infrastructure for a fleet of such vehicles. The charging infrastructure of the dedicated routes consists of overhead wire sections along the routes and stationary charging stations of a given type at the terminal stops. It is designed to ensure the movement of trolleybuses and to restore the battery charge consumed in the sections of autonomous running. The aim of the study is to create models and methods for developing cost-effective charging-infrastructure solutions that ensure the functioning of the autonomous trolleybus fleet under a number of specific conditions: a specified range of autonomous running at a given rate of energy consumption on the routes, a guaranteed battery service life, and prevention of battery discharge below a critical level under various operating modes during that service life. METHODS. Methods of set theory, graph theory and linear approximation are used. RESULTS. A mathematical model has been developed for the problem of optimizing the charging infrastructure of the autonomous trolleybus fleet. The total reduced annual cost of the charging infrastructure is chosen as the objective function. The model is formulated as a mathematical programming problem with a quadratic objective function and linear constraints. CONCLUSION. To solve the formulated mathematical programming problem, standard solvers such as IBM ILOG CPLEX can be used, as well as, given its computational complexity, the heuristic particle swarm method.
The solution to the problem selects the configuration of the overhead wire sections on the routes and the durations of trolleybus charging at the terminal stops, which determine the corresponding number of stationary charging stations at these stops.
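The structure of the decision (which sections get overhead wire, how long to charge at terminals) can be sketched in miniature. The numbers, the linear cost model and the layover limit below are all hypothetical simplifications of the paper's quadratic formulation; the sketch only shows the wire-versus-station trade-off by brute force:

```python
# Brute-force sketch (hypothetical data, linear simplification of the paper's
# quadratic model): choose which route sections get overhead wire so that
# energy recovered while driving plus terminal charging covers consumption,
# at minimum annualised cost.
from itertools import product

sections = [2.0, 3.0, 1.5, 4.0]      # section lengths, km (hypothetical)
wire_cost_per_km = 10.0              # annualised cost of overhead wire
station_cost_per_min = 0.5           # annualised cost per minute of charging
consumption = 8.0                    # energy needed per lap (arbitrary units)
charge_per_km = 1.0                  # energy recovered per km under wire
charge_per_min = 0.4                 # energy per minute at the terminal stop
max_layover = 5.0                    # minutes available at the terminal

best = None
for mask in product([0, 1], repeat=len(sections)):   # wire / no wire per section
    wired_km = sum(l for l, m in zip(sections, mask) if m)
    deficit = max(0.0, consumption - charge_per_km * wired_km)
    minutes = deficit / charge_per_min
    if minutes > max_layover + 1e-9:                 # infeasible: battery drains
        continue
    cost = wire_cost_per_km * wired_km + station_cost_per_min * minutes
    if best is None or cost < best[0]:
        best = (cost, mask, minutes)
```

For the real problem the search space and the quadratic objective make exact enumeration impractical, which is why the paper points to CPLEX-class solvers and particle swarm heuristics.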
4

Kachanov, A. N., D. A. Korenkov, A. A. Revkov, V. V. Maksimov, and O. V. Vorkunov. "Words high-frequency drying processes simulation of wooden tangent towers in a vacuum chamber." Power engineering: research, equipment, technology 22, no. 6 (March 26, 2021): 130–42. http://dx.doi.org/10.30724/1998-9903-2020-22-6-130-142.

Abstract:
THE PURPOSE. The service life of wooden tangent towers used on overhead transmission lines with voltages up to 35 kV depends on the quality of lumber drying and subsequent impregnation. The drying of tangent tower workpieces is currently carried out by atmospheric or convective methods and is the longest and one of the most energy-consuming stages of their production. At the same time, there are promising electrotechnological drying installations that can reduce the duration and improve drying quality at comparable specific energy costs. Such installations include vacuum high-frequency complexes, whose wide introduction is complicated by a number of unresolved scientific and technical problems, such as optimizing vacuum high-frequency drying modes and ensuring electromagnetic field uniformity in long workpieces. The purpose of this article is to obtain mathematical tools that simultaneously describe the cross-effects of electromagnetic phenomena and heat and mass transfer processes in long lumber and contribute to the further solution of these problems. METHODS. The theory of the electromagnetic field, heat and mass transfer and exchange, and methods of mathematical modeling were used. The results of previous studies of the electromagnetic field distribution in the cross-section and longitudinal sections of the working chamber load are also taken into account. RESULTS. A one-dimensional mathematical model is obtained. It describes the influence of the electromagnetic wave distribution along the length of tangent towers, and of external medium parameters, on the temperature and moisture content of the material. The model allows simple algorithms for analyzing the differential equation system based on the finite difference method and requires less initial data on the properties of the drying material. CONCLUSION.
The numerical results obtained with the proposed model and its method of analysis are compared with the available experimental data. Based on this comparison, it is concluded that the model is adequate and more effective than other existing vacuum high-frequency drying models. Further use of the presented mathematical toolkit to optimize the design and modes of vacuum high-frequency complexes for drying wooden tangent towers will increase the reliability of overhead transmission lines.
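The finite-difference machinery mentioned in the abstract can be illustrated on the simplest related case. The parameters below are hypothetical and the equation is plain 1D heat conduction, not the paper's coupled electromagnetic/heat-and-mass model; the sketch only shows what one explicit finite-difference step looks like:

```python
# Illustrative sketch (hypothetical parameters): one explicit finite-difference
# step of dT/dt = alpha * d2T/dx2 on a 1D rod with fixed-temperature ends,
# the basic scheme underlying models of this kind.

def fd_step(T, alpha, dx, dt):
    """Advance the temperature profile by one explicit Euler step."""
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + alpha * dt / dx ** 2 * (T[i - 1] - 2 * T[i] + T[i + 1])
    return new

# 1 m workpiece discretised into 11 nodes; ends held at 100, interior at 20.
T = [100.0] + [20.0] * 9 + [100.0]
alpha, dx, dt = 1e-6, 0.1, 1000.0   # stability: alpha*dt/dx**2 = 0.1 <= 0.5
for _ in range(100):
    T = fd_step(T, alpha, dx, dt)
```

The explicit scheme needs only the previous profile and a stability bound on the time step, which is why the paper highlights that such algorithms require little input data about the material.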
5

Elghaish, Faris, M. Reza Hosseini, Saeed Talebi, Sepehr Abrishami, Igor Martek, and Michail Kagioglou. "Factors Driving Success of Cost Management Practices in Integrated Project Delivery (IPD)." Sustainability 12, no. 22 (November 16, 2020): 9539. http://dx.doi.org/10.3390/su12229539.

Abstract:
Integrated project delivery (IPD) is a mode of project procurement recognised as facilitating superior project performance. However, this success is contingent on effective cost management practices that share cost data with all project stakeholders in an accurate, timely and transparent manner. Despite an extensive literature on aspects of cost management, none identifies the essential ingredients required of an effective cost management system, sufficiently robust to support successful IPD projects. Candidate cost management augmenting practices are drawn from the literature, and presented for scrutiny in questionnaire form, to fifty IPD experienced experts, based in the USA, UK and Australia. Findings reveal activity-based costing (ABC) to be effective at identifying overhead costs and creating accounting transparency. Similarly, earned value management (EVM), in combination with ABC, is effective at developing mathematical models for equitable risk-reward distribution. Moreover, web-based management systems, as supported by Building Information Modelling (BIM), are effective at generating trust and collaboration on which IPD success depends. A questionnaire survey using purposive sampling was conducted to assess the factors driving success of implementing IPD regarding cost management process. The contribution to knowledge made by this paper is in identifying requisite support mechanisms essential to elevate traditional cost management practices to the higher standard needed to ensure IPD delivery success.
6

Leśniak, Agnieszka. "Statistical Methods in Bidding Decision Support for Construction Companies." Applied Sciences 11, no. 13 (June 27, 2021): 5973. http://dx.doi.org/10.3390/app11135973.

Abstract:
On the border of two phases of a building life cycle (LC), the programming phase (conception and design) and the execution phase, a contractor is selected. A particularly appropriate method of selecting a contractor in the construction market is the tendering system. It is usually based on quality and price criteria. The latter may involve the price (namely, direct costs connected with the realization of works as well as mark-ups, mainly overhead costs and profit) or cost (based on the life cycle costing (LCC) method of cost efficiency). A contractor's decision to participate in a tender and to calculate a bid requires an investment of time and company resources. As this decision is often made in a limited time frame and based on the experience and subjective judgement of the contractor, a number of models have been proposed in the literature to support this process. The present paper proposes the use of statistical classification methods. The response obtained from the classification model is a recommendation to participate or not. A database of historical data was used for the analyses. Two models were proposed: the LOG model, using logit regression, and the LDA model, using linear discriminant analysis; the latter obtained better results. In the construction of the LDA model, the equation of the discriminant function was sought by identifying the statistically significant variables. For this purpose, the backward stepwise method was applied: initially all 15 identified bidding factors were introduced as input variables, and in subsequent steps the least statistically significant variables were removed. Finally, six variables (factors) were found to discriminate significantly between the groups: type of works, contractual conditions, project value, need for work, possible participation of subcontractors, and the degree of difficulty of the works.
The model proposed in this paper using a discriminant analysis with six input variables achieved good performance. The results obtained prove that it can be used in practice. It should be emphasized, however, that mathematical models cannot replace the decision-maker’s thought process, but they can increase the effectiveness of the bidding decision.
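The two-group discriminant idea behind the LDA model can be sketched in miniature. The data below are synthetic and use only two of the six factors; the 2x2 matrix algebra is Fisher's classical linear discriminant, not the paper's fitted equation:

```python
# Hedged sketch (synthetic data, two hypothetical factor scores): Fisher's
# linear discriminant for a bid / no-bid recommendation.

def mean(rows):
    n = len(rows)
    return [sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n]

def scatter(rows, m):
    s = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in rows:
        dx, dy = x - m[0], y - m[1]
        s[0][0] += dx * dx; s[0][1] += dx * dy
        s[1][0] += dy * dx; s[1][1] += dy * dy
    return s

# Synthetic history: (project value score, need-for-work score).
bid = [(8, 7), (7, 8), (9, 6), (8, 8)]       # past "participate" cases
no_bid = [(3, 2), (4, 3), (2, 4), (3, 3)]    # past "decline" cases

m1, m0 = mean(bid), mean(no_bid)
s1, s0 = scatter(bid, m1), scatter(no_bid, m0)
Sw = [[s1[i][j] + s0[i][j] for j in range(2)] for i in range(2)]  # within-class scatter
det = Sw[0][0] * Sw[1][1] - Sw[0][1] * Sw[1][0]
inv = [[Sw[1][1] / det, -Sw[0][1] / det], [-Sw[1][0] / det, Sw[0][0] / det]]
dm = [m1[0] - m0[0], m1[1] - m0[1]]
w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],   # discriminant direction w = Sw^-1 (m1 - m0)
     inv[1][0] * dm[0] + inv[1][1] * dm[1]]
threshold = sum(wi * (a + b) / 2 for wi, a, b in zip(w, m1, m0))

def recommend(x):
    return "participate" if w[0] * x[0] + w[1] * x[1] > threshold else "decline"
```

The paper's model works the same way but with six statistically selected factors and a threshold calibrated on real tender outcomes.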
7

Akhmadi, Akhmadi, Etty Siswati, and Nurrasyidah Putri. "Studi Komparatif Tingkat Pendapatan Antara Usaha Kelapa dan Pinang pada Perkebunan Rakyat Desa Sungai Beras Kabupaten Tanjab Timur." Eksis: Jurnal Ilmiah Ekonomi dan Bisnis 10, no. 2 (January 31, 2020): 68. http://dx.doi.org/10.33087/eksis.v10i2.164.

Abstract:
This is quantitative research, that is, research using data expressed in numerical form about phenomena and their relationships; such research typically aims to develop and apply mathematical models, theories or hypotheses. The study was conducted as a survey, with direct interviews of the owners of coconut and areca nut businesses and records of everything the authors needed in connection with the research problem. Qualitative data analysis was also used: qualitative data are non-numerical data, expressed in words, that describe, complete, explain and strengthen the quantitative data collected directly in the field (Sugiyono, 2011: 23). Production costs are all costs associated with the goods produced, comprising raw material costs, direct labor costs and factory overhead costs. By time period, production can be divided into short run and long run. In the short run, a company can increase one factor of production during its production period while the other factors are held constant; that is, some factors of production cannot be added. In the long run, a company can change or add to all the factors of production it uses. The initial investment of the coconut plantation business is Rp 37,004,000, calculated before a production period of 7 (seven) years, and the initial investment of the areca nut plantation is Rp 34,750,000, calculated before a production period of 4 (four) years. The production cost of the coconut plantation business in one year is Rp 5,600,000 and the production cost of the areca nut business in one year is Rp 108,200,000. The income of the coconut plantation business for one year is Rp 14,400,000 and the revenue of the areca nut business for one year is Rp 110,200,000. It can therefore be concluded that the areca nut business yields the larger income compared with the coconut business.
8

Ajjaj, Souad, Souad El Houssaini, Mustapha Hain, and Mohammed-Alamine El Houssaini. "Performance Assessment and Modeling of Routing Protocol in Vehicular Ad Hoc Networks Using Statistical Design of Experiments Methodology: A Comprehensive Study." Applied System Innovation 5, no. 1 (February 2, 2022): 19. http://dx.doi.org/10.3390/asi5010019.

Abstract:
The performance assessment of routing protocols in vehicular ad hoc networks (VANETs) plays a critical role in testing the efficiency of the routing algorithms before deployment in real conditions. This research introduces the statistical design of experiments (DOE) methodology as an innovative alternative to the one factor at a time (OFAT) approach for the assessment and the modeling of VANET routing protocol performance. In this paper, three design of experiments methods are applied, namely the two-level full factorial method, the Plackett–Burman method and the Taguchi method, and their outcomes are comprehensively compared. The present work considers a case study involving four factors namely: node density, number of connections, black hole and worm hole attacks. Their effects on four measured outputs called responses are simultaneously evaluated: throughput, packet loss ratio, average end-to-end delay and routing overhead of the AODV routing protocol. Further, regression models using the least squares method are generated. First, we compare the main effects of factors resulted from the three DOE methods. Second, we perform analysis of variance (ANOVA) to explore the statistical significance and compare the percentage contributions of each factor. Third, the goodness of fit of regression models is assessed using the adjusted R-squared measure and the fitting plots of measured versus predicted responses. VANET simulations are implemented using the network simulator (NS-3) and the simulator of urban mobility (SUMO). The findings reveal that the design of experiments methodology offers powerful mathematical, graphical and statistical techniques for analyzing and modeling the performance of VANET routing protocols with high accuracy and low costs. The three methods give equivalent results in terms of the main effect and ANOVA analysis. Nonetheless, the Taguchi models show higher predictive accuracy.
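The main-effect analysis at the heart of the DOE methods above is simple to demonstrate. The response values below are made up (they are not the paper's NS-3 results); only the two-level full factorial arithmetic is the real technique:

```python
# Illustrative sketch (made-up responses): main effects from a 2^2 full
# factorial design, the kind of analysis applied to the VANET factors.

# Runs in coded units: (node_density, attack_present) -> throughput (hypothetical).
runs = [((-1, -1), 92.0), ((+1, -1), 85.0), ((-1, +1), 70.0), ((+1, +1), 55.0)]

def main_effect(runs, factor):
    """Average response at the high level minus average at the low level."""
    hi = [y for x, y in runs if x[factor] == +1]
    lo = [y for x, y in runs if x[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effect_density = main_effect(runs, 0)   # effect of raising node density
effect_attack = main_effect(runs, 1)    # effect of introducing the attack
```

Unlike the one-factor-at-a-time approach, every run contributes to every effect estimate, which is the cost saving the abstract attributes to the DOE methodology.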
9

Pereverzyev, K., V. Vasenko, and G. Domanska. "The technology of a condition based maintenance of an overhead contact line with Markov approximation of contact wire wear." Lighting engineering and power engineering 1, no. 57 (April 6, 2020): 3–8. http://dx.doi.org/10.33042/2079-424x-2020-1-57-3-8.

Abstract:
The technologies of OCL condition based maintenance were improved to provide reliable and economical current collection on electrified railways and in the traction power networks of urban electric transport. It is established that one of the most promising ways of reducing operating costs is the transition to scientifically justified repair terms and work planning based on the actual state of the OCL, whose reliability is determined by the dependability, maintainability and durability of its elements. The basic criteria of the current collection process are proposed: the amount of contact pair wear, the detachment of the current collectors from the wires, the range of oscillation of the pantograph head, the contact pressure, the coefficients of contact unreliability and of cost savings in collecting power from the catenary, and, finally, the minimum annual operating costs. A maintenance technology based on Markov-type parameter control is developed. Models of controlling actions for adjusting the stagger and the contact wire gradient are developed. The standard linear programming algorithm was used to solve the problem. The application of this technology to determine the matrix of controlling actions from the results of monitoring the stagger and the contact wire gradient is shown. The factors that determine the cost of staying in a given stagger state have been identified. The matrices of controlling actions on the stagger and the contact wire gradient, and the costs of staying in these states, are established. A maintenance technology with a Markov approximation of contact wire wear is proposed. Contact wires degrade during operation, primarily through wear, and these degradation processes cause gradual failures. The first group of failures (breakages, burnouts, local wear) requires controlling actions to cut in an insert or put a shunt into operation.
The second group (burnout, or a decrease in the average cross-section of the wire below the allowable value) requires controlling actions to replace the wire over the entire tension length. After the first type of controlling action, the contact wire returns to the working state with the previous value of the determining parameter. After the second type, the determining parameter returns to its original state. If the quantisation step is chosen so that the intensities of failures and of transitions from one state to another are constant to a sufficient degree of accuracy, then the graph of states and transitions, together with the Kolmogorov differential equations, can be used for the analysis. The readiness function, the probability density of the time to failure, the total failure rate, and the average recovery rate from state S0i to Si are determined. Thus, all reliability indicators are defined and can be used to assess the condition of the contact wire over the tension length. Experience has shown that the most effective diagnostics of OCL devices combines assessment of their state using mathematical and simulation models with measurements by laboratory cars testing the contact network and by railcar-mounted devices monitoring the contact wire parameters. Keywords: overhead contact line, contact wire, maintenance technology, controlling action models.
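The Markov approximation of wear states can be shown on a toy chain. The transition probabilities below are hypothetical, and a discrete-time chain is used instead of the paper's continuous-time Kolmogorov equations; the sketch only shows how a wear/repair cycle settles into stationary state probabilities:

```python
# Hedged sketch (hypothetical probabilities, discrete-time approximation):
# stationary behaviour of a small Markov wear model with states
# S0 (new) -> S1 (worn, repairable by splice) -> S2 (replace tension length).

P = [                      # transition probabilities per inspection interval
    [0.90, 0.10, 0.00],    # S0: stays new or wears
    [0.05, 0.85, 0.10],    # S1: splice repair returns to S0, or wear continues
    [0.80, 0.00, 0.20],    # S2: replacement returns the wire to S0
]

def step(pi, P):
    """One step of the chain: new distribution pi' = pi * P."""
    return [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

pi = [1.0, 0.0, 0.0]       # start from a new wire
for _ in range(500):       # power iteration converges to the stationary law
    pi = step(pi, P)
```

The stationary probabilities play the role of the long-run reliability indicators the abstract derives, e.g. the fraction of time the wire spends in each wear state.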
10

Ferrucci, Filomena, Pasquale Salza, and Federica Sarro. "Using Hadoop MapReduce for Parallel Genetic Algorithms: A Comparison of the Global, Grid and Island Models." Evolutionary Computation 26, no. 4 (December 2018): 535–67. http://dx.doi.org/10.1162/evco_a_00213.

Abstract:
The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. 
The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store operations. For this reason, the island model is more suitable for PGAs than the global and grid models, also in terms of costs when executed on a commercial cloud provider.
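The island-model speedups reported above (1.8, 3.4 and 7.0 on 2, 4 and 8 nodes) translate directly into parallel efficiency, a useful way to read the result:

```python
# Parallel efficiency (speedup / node count) from the island-model speedups
# reported in the abstract, averaged over the three datasets.

speedups = {2: 1.8, 4: 3.4, 8: 7.0}           # nodes -> average speedup
efficiency = {n: s / n for n, s in speedups.items()}
```

Efficiency stays above 85% out to 8 nodes, which is the quantitative sense in which the island model's coarse-grained communication pattern amortises the Hadoop MapReduce overhead better than the global and grid models.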
11

Naksinehaboon, Nichamon, Mihaela Păun, Raja Nassar, Chokchai Box Leangsuksun, and Stephen Scott. "High Performance Computing Systems with Various Checkpointing Schemes." International Journal of Computers Communications & Control 4, no. 4 (December 1, 2009): 386. http://dx.doi.org/10.15837/ijccc.2009.4.2455.

Abstract:
Finding the failure rate of a system is a crucial step in the analysis of high performance computing systems. To deal with this problem, a fault tolerance mechanism called the checkpoint/restart technique was introduced. However, there are additional costs to performing this mechanism. Thus, we propose two models for different schemes (full and incremental checkpoint schemes). The models, which are based on the reliability of the system, are used to determine the checkpoint placements. Both proposed models consider the balance between checkpoint overhead and re-computing time. Due to the extra costs of each incremental checkpoint during the recovery period, a method to find the number of incremental checkpoints between two consecutive full checkpoints is given. Our simulation suggests that in most cases our incremental checkpoint model reduces the waste time more than the full checkpoint model does. The waste times produced by both models are in the range of 2% to 28% of the application completion time, depending on the checkpoint overheads.
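The overhead-versus-recompute trade-off the paper models has a well-known classical baseline: Young's first-order approximation for the interval between full checkpoints. This is not the paper's reliability-based model, only a standard point of comparison, and the system figures below are hypothetical:

```python
# Hedged sketch: Young's classical approximation (not the paper's model) for a
# near-optimal interval between full checkpoints, t ~= sqrt(2 * C * M), where
# C is the checkpoint overhead and M the mean time between failures.
import math

def young_interval(overhead_s, mtbf_s):
    return math.sqrt(2.0 * overhead_s * mtbf_s)

# Hypothetical system: 60 s full-checkpoint overhead, 24 h MTBF.
t = young_interval(60.0, 24 * 3600.0)   # seconds between checkpoints
```

Checkpointing too often wastes time on overhead, too rarely wastes time recomputing after failures; both the formula above and the paper's full/incremental models balance those two losses.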
12

Juszczyk, Michał, and Agnieszka Leśniak. "Modelling Construction Site Cost Index Based on Neural Network Ensembles." Symmetry 11, no. 3 (March 20, 2019): 411. http://dx.doi.org/10.3390/sym11030411.

Abstract:
Construction site overhead costs are key components of cost estimation in construction projects. The estimates are expected to be accurate, but there is a growing demand to shorten the time necessary to deliver them. The balance (symmetry) between calculation time and reliable estimation was the reason for developing a new model for cost estimation in construction. This paper reports some results from the authors' broad research on modelling processes in engineering related to the estimation of construction costs using artificial intelligence tools. The aim of this work was to develop a model capable of predicting a construction site cost index that would benefit from combining several artificial neural networks into an ensemble. Combining selected neural networks into ensemble-based models balanced their strengths and weaknesses. Using training patterns collected from studies of completed construction projects, the authors investigated various types of neural networks in order to select the members of the ensemble. Finally, three models were proposed and assessed in terms of performance and prediction quality. The results revealed that the developed models, based on ensemble averaging and stacked generalisation, met the expectations of knowledge generalisation and accuracy in predicting the site overhead cost index. The proposed models offer cost predictions within an accepted error range and prove to deliver better predictions than those based on single neural networks. The developed tools can be used in the decision-making process regarding construction cost estimation.
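Of the two combination schemes named above, ensemble averaging is the simpler, and its mechanics fit in a few lines. The predictions below are made-up numbers standing in for the outputs of trained member networks:

```python
# Minimal sketch (made-up predictions): ensemble averaging -- the ensemble's
# output is the mean of the member networks' cost-index predictions.

member_predictions = [   # cost index predicted by three hypothetical nets
    [14.2, 9.8, 11.5],   # net 1, over three projects
    [13.6, 10.4, 12.1],  # net 2
    [14.8, 9.3, 11.0],   # net 3
]

def ensemble_average(preds):
    n = len(preds)
    return [sum(p[j] for p in preds) / n for j in range(len(preds[0]))]

ensemble = ensemble_average(member_predictions)
```

Averaging cancels part of each member's individual error, which is the "balancing of strengths and weaknesses" the abstract refers to; stacked generalisation instead trains a second-level model on the members' outputs.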
13

Xu, Ming, Jinlong Wang, Ang Zhang, and Shengli Liu. "Probabilistic Value-Centric Optimization Design for Fractionated Spacecrafts Based on Unscented Transformation." Mathematical Problems in Engineering 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/132920.

Abstract:
Fractionated spacecrafts are of particular interest for pointing-intensive missions because of their ability to decouple physically the satellite bus and some imaging payloads, which possess a lesser lifecycle cost than a comparable monolithic spacecraft. Considering the probabilistic uncertainties during the mission lifecycle, the cost assessment or architecture optimization is essentially a stochastic problem. Thus, this research seeks to quantitatively assess different spacecraft architecture strategies for remote-sensing missions. A dynamical lifecycle simulation and parametric models are developed to evaluate the lifecycle costs, while the mass, propellant usage, and some other constraints on spacecraft are assessed using nonparametric, physics-based computer models. Compared with the traditional Monte Carlo simulation to produce uncertain distributions during the lifecycle, the unscented transformation is employed to reduce the computational overhead, just as it does in improving the extended Kalman filter. Furthermore, the genetic algorithm is applied to optimize the fractionated architecture based on the probabilistic value-centric assessments developed in this paper.
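The unscented transformation mentioned above replaces thousands of Monte Carlo samples with a handful of deterministically chosen sigma points. A hedged scalar sketch (standard UT weights, a toy quadratic cost in place of the paper's lifecycle model) shows the idea:

```python
# Hedged sketch (scalar case, standard sigma-point weights): the unscented
# transformation estimates the mean and variance of f(X) from a few
# deterministic sigma points instead of many Monte Carlo samples.
import math

def unscented_mean_var(f, mean, var, kappa=1.0):
    """UT estimate of mean and variance of f(X) for scalar X ~ (mean, var)."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigma = [mean, mean + spread, mean - spread]       # 2n + 1 sigma points
    w = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    ys = [f(x) for x in sigma]
    m = sum(wi * y for wi, y in zip(w, ys))
    v = sum(wi * (y - m) ** 2 for wi, y in zip(w, ys))
    return m, v

# Toy nonlinearity f(x) = x^2 with X ~ (0, 1): the UT mean E[X^2] = 1 is exact.
m, v = unscented_mean_var(lambda x: x * x, 0.0, 1.0)
```

Three function evaluations here replace a whole Monte Carlo run, which is exactly the computational saving the paper exploits when propagating lifecycle uncertainties.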
14

Basu, Aveek, and Sanchita Ghosh. "Implementing Fuzzy TOPSIS in Cloud Type and Service Provider Selection." Advances in Fuzzy Systems 2018 (November 15, 2018): 1–12. http://dx.doi.org/10.1155/2018/2503895.

Abstract:
Cloud computing can be considered one of the leading-edge technological advances in the current IT industry. Cloud computing, or simply the cloud, is attributed to the Service Oriented Architecture. Every organization is trying to utilize the benefits of the cloud, not only to reduce the cost overhead of infrastructure, network, hardware, software, etc., but also to provide seamless service to end users with the benefit of scalability. The concept of multitenancy assists cloud service providers in leveraging costs by serving multiple users/companies at the same time via shared resources. There are several cloud service providers currently in the market, and they are rapidly changing and reorienting themselves as per market demand. In order to gain market share, cloud service providers are trying to provide the latest technology to end users/customers while reducing costs. In such a scenario, it becomes extremely difficult for cloud customers to select the best service provider for their requirements. It is also becoming difficult to decide which of the existing deployment models to choose: different models suit different companies, and their divergent criteria are not tailor-made for any one organization. As a cloud customer, it is difficult to decide on the model and determine the appropriate service provider. A multicriteria decision making method is applied to find the most suitable service provider among the top four existing companies and to choose the deployment model as per requirement.
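The multicriteria ranking step can be sketched with crisp TOPSIS (the paper uses the fuzzy variant; crisp scores are used here for brevity, and all provider scores and weights below are hypothetical):

```python
# Illustrative sketch (hypothetical scores; crisp rather than fuzzy TOPSIS):
# rank cloud providers by closeness to the ideal solution.
import math

# Rows: providers A..D; columns: cost (lower is better), scalability, security.
X = [[7.0, 9.0, 8.0],
     [5.0, 7.0, 9.0],
     [8.0, 8.0, 6.0],
     [6.0, 6.0, 7.0]]
weights = [0.5, 0.3, 0.2]
benefit = [False, True, True]   # the first criterion is a cost criterion

# 1. Vector-normalise each column and apply the weights.
norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in range(3)]
V = [[weights[j] * row[j] / norms[j] for j in range(3)] for row in X]

# 2. Ideal and anti-ideal points per criterion.
ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]

# 3. Closeness coefficient: distance to anti-ideal over total distance.
def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

closeness = [dist(v, anti) / (dist(v, anti) + dist(v, ideal)) for v in V]
ranking = sorted(range(len(X)), key=lambda i: -closeness[i])   # best first
```

The fuzzy version replaces each crisp score with a fuzzy number (e.g. triangular) to capture the vagueness of expert judgements, but the normalise/ideal/closeness pipeline is the same.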
15

Gabbay, Freddy, Benjamin Salomon, and Gil Shomron. "Structured Compression of Convolutional Neural Networks for Specialized Tasks." Mathematics 10, no. 19 (October 8, 2022): 3679. http://dx.doi.org/10.3390/math10193679.

Full text
Abstract:
Convolutional neural networks (CNNs) offer significant advantages in various image classification tasks and computer vision applications. CNNs are increasingly deployed in environments ranging from edge and Internet of Things (IoT) devices to high-end computational infrastructures such as supercomputers, cloud computing, and data centers. However, the growing amount of data, together with growth in model size and computational complexity, introduces major computational challenges. Such challenges raise entry barriers for IoT and edge devices and increase the operational expenses of large-scale computing systems. It has therefore become essential to optimize CNN algorithms. In this paper, we introduce the S-VELCRO compression algorithm, which exploits value locality to trim filters in CNN models used for specialized tasks. S-VELCRO uses structured compression, which can save costs and reduce overhead compared with unstructured compression. The algorithm runs in two steps: a preprocessing step identifies the filters with a high degree of value locality, and a compression step trims the selected filters. As a result, S-VELCRO reduces the computational load of the channel activation function and avoids the convolution computation of the corresponding trimmed filters. Compared with typical CNN compression algorithms that run heavy back-propagation training computations, S-VELCRO has significantly lower computational requirements. Our experimental analysis shows that S-VELCRO achieves a compression-saving ratio between 6% and 30%, with no degradation in accuracy, for ResNet-18, MobileNet-V2, and GoogLeNet used for specialized tasks.
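The value-locality idea can be pictured as follows: a filter whose responses barely vary across inputs carries little task-specific information and is a pruning candidate. The score below (fraction of responses near the filter's median) is a hypothetical stand-in for the paper's exact metric:

```python
import numpy as np

def value_locality(act, tol=0.05):
    """act: (n_samples, n_filters) mean activation per filter per input.
    Returns, per filter, the fraction of inputs whose response lies
    within tol of that filter's median response."""
    med = np.median(act, axis=0)
    return np.mean(np.abs(act - med) <= tol, axis=0)

def prunable_filters(act, threshold=0.9):
    """Indices of filters exhibiting a high degree of value locality."""
    return np.where(value_locality(act) >= threshold)[0]
```

Because whole filters are removed (structured compression), the trimmed model keeps dense tensor shapes and needs no sparse-kernel support, unlike unstructured weight pruning.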
16

Marwah, Gagan Preet Kour, Anuj Jain, Praveen Kumar Malik, Manwinder Singh, Sudeep Tanwar, Calin Ovidiu Safirescu, Traian Candin Mihaltan, Ravi Sharma, and Ahmed Alkhayyat. "An Improved Machine Learning Model with Hybrid Technique in VANET for Robust Communication." Mathematics 10, no. 21 (October 30, 2022): 4030. http://dx.doi.org/10.3390/math10214030.

Full text
Abstract:
The vehicular ad hoc network (VANET) is one of the most popular and promising technologies in intelligent transportation today. However, VANET is susceptible to several vulnerabilities that can result in intrusion, which must be addressed before VANET technology can be adopted. In this study, we suggest a unique machine learning technique to improve VANET's effectiveness. The proposed method incorporates two phases. Phase I detects DDoS attacks using a novel machine learning technique called SVM-HHO, which provides information about the vehicle. Phase II mitigates the impact of a DDoS attack and allocates bandwidth using a reliable resource-management technique based on the hybrid whale dragonfly optimization algorithm (H-WDFOA). The proposed model can effectively predict and utilize reliable information, providing strong results in smart vehicles. The novel machine learning-based technique was implemented on the MATLAB and NS2 platforms. Network quality measurements included congestion, transit, collision, and QoS-awareness cost, and a distinct cost framework was designed around these constraints. In addition, data preprocessing of the QoS factor and total routing costs were considered. Rider integrated cuckoo search (RI-CS) is a novel optimization algorithm that combines the concepts of the rider optimization algorithm (ROA) and cuckoo search (CS) to determine the optimal route with the lowest routing cost. The enhanced hybrid ant colony optimization routing protocol (EHACORP) is a networking technology that increases efficiency by utilizing the shortest route. The shortest path of the proposed protocol had the lowest communication overhead and the fewest hops between sending and receiving vehicles. EHACORP involves two stages: in stage 1, it calculates the distances between vehicles; in stage 2, starting-point ant colony optimization guides the ants to build the shortest route, with the fewest connections, for sending information. This short-route approach improves protocol efficiency throughout. Pairing DCM and SBACO in H-WDFOA-VANET accelerated packet processing, reduced ant search time, eliminated blind broadcasting, and prevented stagnation issues. The delivery ratio and throughput of H-WDFOA-VANET benefited from its use of the shortest stagnation-free channel, its rapid packet processing, and its fast convergence speed. In conclusion, the proposed hybrid whale dragonfly optimization approach (H-WDFOA-VANET) was compared with reference models, namely rider integrated cuckoo search (RI-CS) and the enhanced hybrid ant colony optimization routing protocol (EHACORP). The proposed system achieved energy consumption of 2.00000 mJ, latency of 15.61668 s, and a drop at node 60 of 0.15759, together with higher throughput than the reference models. At node 80, the proposed method reduced the drop value to 0.15504, the delay to 15.64318 s, and the energy consumption to 2.00000 mJ. These outcomes demonstrate that the proposed system is more efficient than existing systems.
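The route-selection objective described above — lowest routing cost, with fewer hops preferred among routes — can be illustrated with a deterministic baseline. The metaheuristics in the paper (RI-CS, EHACORP, H-WDFOA) search the same kind of objective; here is a plain Dijkstra-style sketch over a hypothetical vehicle graph:

```python
import heapq

def best_route(graph, src, dst):
    """Lowest-cost route from src to dst; ties broken by fewest hops.
    graph: {node: [(neighbor, link_cost), ...]}"""
    pq = [(0.0, 0, src, [src])]          # (cost, hops, node, path so far)
    settled = {}
    while pq:
        cost, hops, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if settled.get(node, (float("inf"), float("inf"))) <= (cost, hops):
            continue                      # already reached more cheaply
        settled[node] = (cost, hops)
        for nxt, w in graph.get(node, []):
            if nxt not in path:           # avoid cycles
                heapq.heappush(pq, (cost + w, hops + 1, nxt, path + [nxt]))
    return float("inf"), []

# hypothetical topology: the two-hop route A-B-D (cost 2) beats direct A-D (cost 3)
cost, path = best_route({"A": [("B", 1.0), ("D", 3.0)], "B": [("D", 1.0)], "D": []}, "A", "D")
```

The swarm-based methods explore the same cost landscape stochastically, which scales better when link costs change continuously with vehicle movement.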
17

Elghaish, Faris, Sepehr Abrishami, M. Reza Hosseini, and Soliman Abu-Samra. "Revolutionising cost structure for integrated project delivery: a BIM-based solution." Engineering, Construction and Architectural Management ahead-of-print, ahead-of-print (August 11, 2020). http://dx.doi.org/10.1108/ecam-04-2019-0222.

Full text
Abstract:
Purpose: The amalgamation of integrated project delivery (IPD) and building information modelling (BIM) is highly recommended for successful project delivery. However, IPD lacks an accurate cost estimation methodology at the front end of projects, when little project information is available. This study aims to tackle this issue by presenting analytical aspects, theoretical grounds and practical steps/procedures for integrating target value design (TVD), activity-based costing (ABC) and Monte Carlo simulation into the IPD cost structure, within a BIM-enabled platform. Design/methodology/approach: A critical review was conducted to study the status of cost estimation within IPD and to explore methods and tools that can enhance the cost estimation process for IPD. Thereafter, a framework is developed to present the proposed methodology of cost estimation for IPD throughout its entire stages. A case project is used to validate the practicality of the developed solution by comparing the profit-at-risk percentage for each party under both traditional cost estimation and the proposed solution. Findings: After applying the proposed IPD cost estimation framework to a real-life case project, the findings showed significant deviations in the profit-at-risk value for various work packages (approximately 100% for the finishing package and 22% for the openings package). By providing a precise allocation of overhead costs, the solution can be used in real-life projects to change the entire IPD cost structure and ensure a fair sharing of risks and rewards among the parties involved in IPD projects. Practical implications: Using the proposed cost estimation methodology for IPD can enhance the relationship among IPD's core team members; all revealed financial deficiencies (i.e. compensation structure, profit pooling) will be considered, hence enhancing IPD performance. Originality/value: This paper presents a comprehensive solution for integrating BIM and IPD in terms of cost estimation, offering three main contributions: (1) an innovative approach that utilises five-dimensional (5D) BIM capabilities with Monte Carlo simulation, providing reliable cost estimates during the conceptual TVD stage; (2) mathematical models, developed by integrating ABC into the detailed 5D BIM, that determine the three limbs of the IPD cost structure; and (3) a novel mechanism for managing cost savings (rewards) that distinguishes between resources saved from the organisation level down to the daily task level, to increase trust among parties.
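The Monte Carlo element of such a framework can be sketched independently of BIM: each work package gets a three-point (low/most-likely/high) cost estimate, and simulation turns the TVD target into a probability of overrun. All figures below are hypothetical, and the overrun probability is a simplified stand-in for the paper's profit-at-risk measure:

```python
import numpy as np

rng = np.random.default_rng(42)

def package_cost_draws(packages, n=10_000):
    """packages: {name: (low, most_likely, high)} triangular cost estimates.
    Returns one array of n sampled costs per work package."""
    return {name: rng.triangular(lo, ml, hi, n)
            for name, (lo, ml, hi) in packages.items()}

# hypothetical work packages and a hypothetical TVD target cost
packages = {"finishing": (90.0, 110.0, 160.0), "openings": (40.0, 50.0, 70.0)}
total = sum(package_cost_draws(packages).values())   # total project cost per trial
target = 180.0
overrun_probability = float(np.mean(total > target))  # chance of missing the target
```

In the full framework, the per-package draws would come from ABC-derived quantities in the 5D BIM model rather than hand-set triangular bounds.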
18

Lockhart, Shelby, Amanda Bienz, William Gropp, and Luke Olson. "Performance Analysis and Optimal Node-Aware Communication for Enlarged Conjugate Gradient Methods." ACM Transactions on Parallel Computing, January 17, 2023. http://dx.doi.org/10.1145/3580003.

Full text
Abstract:
Krylov methods are a key way of solving large sparse linear systems of equations but suffer from poor strong scalability on distributed-memory machines. This is due to the high synchronization costs of large numbers of collective communication calls alongside a low computational workload. Enlarged Krylov methods address this issue by decreasing the total iterations to convergence, an artifact of splitting the initial residual, which results in operations on block vectors. In this paper, we present a performance study of an enlarged Krylov method, Enlarged Conjugate Gradients (ECG), noting the impact of block vectors on parallel performance at scale. Most notably, we observe the increased overhead of point-to-point communication resulting from denser messages in the sparse matrix-block vector multiplication kernel. Additionally, we present models to analyze the expected performance of ECG and to motivate design decisions. Most importantly, we introduce a new point-to-point communication approach based on node-aware communication techniques that increases the efficiency of the method at scale.
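The trade-off the paper analyzes can be captured with a standard latency-bandwidth (postal) performance model: block vectors make each point-to-point message denser, but the per-message latency is amortized across the block. An illustrative sketch with made-up machine parameters (not the paper's measured model):

```python
def p2p_time(alpha, beta, n_msgs, bytes_per_msg):
    """Postal model: each message costs a fixed latency alpha plus
    beta seconds per byte transferred."""
    return n_msgs * (alpha + beta * bytes_per_msg)

# SpMV with a single vector vs. a block of t = 4 vectors
alpha, beta = 1e-6, 1e-9        # hypothetical: 1 us latency, 1 GB/s bandwidth
n_msgs, doubles = 8, 1000       # hypothetical message count and values per message
single = p2p_time(alpha, beta, n_msgs, 8 * doubles)       # one vector
block = p2p_time(alpha, beta, n_msgs, 8 * doubles * 4)    # t = 4, denser messages
```

Under this model the block kernel costs more per iteration than the single-vector kernel but less than four independent SpMVs, which is the gap node-aware message aggregation then widens further.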