Journal articles on the topic 'Fixed share hierarchy algorithm'


Consult the top 50 journal articles for your research on the topic 'Fixed share hierarchy algorithm.'


1

Qi, Xiaojun, Xianhua Zeng, Shumin Wang, Yicai Xie, and Liming Xu. "Cross-modal variable-length hashing based on hierarchy." Intelligent Data Analysis 25, no. 3 (April 20, 2021): 669–85. http://dx.doi.org/10.3233/ida-205162.

Abstract:
Due to the emergence of the era of big data, cross-modal learning has been applied in many research fields. As an efficient retrieval method, hash learning is widely used in many cross-modal retrieval scenarios. However, most existing hashing methods use fixed-length hash codes, which increases the computational cost for large datasets. Furthermore, learning hash functions is an NP-hard problem. To address these problems, we propose a novel method named Cross-modal Variable-length Hashing Based on Hierarchy (CVHH), which can learn the hash functions more accurately to improve retrieval performance, while also reducing the computational cost and training time. The main contributions of CVHH are: (1) we propose a variable-length hashing algorithm to improve performance; (2) we apply a hierarchical architecture to effectively reduce the computational cost and training time. To validate the effectiveness of CVHH, extensive experimental results show superior performance compared with recent state-of-the-art cross-modal methods on three benchmark datasets: WIKI, NUS-WIDE and MIRFlickr.
2

Li, Ming Yong, Yan Ma, Yuan Yuan Liang, and Wen Shu Duan. "Study the Model of Hierarchy Education Resource Classified Register and Discovery in Grid." Advanced Materials Research 225-226 (April 2011): 560–64. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.560.

Abstract:
Registration and discovery of education resources is the key to sharing them effectively; only once resources can be registered and discovered can other applications be realized. Building on existing discovery mechanisms, this paper proposes a model of hierarchical, classified registration and discovery of education resources in a grid, puts forward a registration and discovery algorithm for education resources, and validates it with a simulation test using GridSim.
3

Nurek, Mateusz, and Radosław Michalski. "Combining Machine Learning and Social Network Analysis to Reveal the Organizational Structures." Applied Sciences 10, no. 5 (March 2, 2020): 1699. http://dx.doi.org/10.3390/app10051699.

Abstract:
Formation of a hierarchy within an organization is a natural way of assigning duties, delegating responsibilities and optimizing the flow of information. Only the smallest companies can do without a hierarchy, that is, remain flat; as they grow, the introduction of a hierarchy is inevitable. Most often, its existence results in tasks and duties of a different nature for members located at various organizational levels or in distant parts of the organization. On the other hand, employees often send dozens of emails each day, and by doing so, and by engaging in other activities, they naturally form an informal social network in which nodes are individuals and edges are the actions linking them. At first, such a social network seems distinct from the organizational one. However, analyzing this network may allow the organizational hierarchy of a company to be reproduced, because people holding similar positions in the hierarchy are likely to share a similar way of behaving and communicating attributed to their role. The key concept of this work is to evaluate how well social network measures, combined with other features obtained through feature engineering, align with the classification of the members of an organizational social network. Machine learning was employed to answer this research question. For the classification task, Decision Trees, Random Forests, Neural Networks and Support Vector Machines were evaluated, as well as a collective classification algorithm, which is also proposed in this paper. This approach allowed a comparison of how traditional machine learning classifiers, supported by social network analysis, perform against a typical graph algorithm. The results demonstrate that the social network built using communication metadata highly exposes the organizational structure.
4

Singh, Rajvir, C. Rama Krishna, Rajnish Sharma, and Renu Vig. "Energy efficient fixed-cluster architecture for wireless sensor networks." Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 8727–40. http://dx.doi.org/10.3233/jifs-192177.

Abstract:
Dynamic and frequent re-clustering of nodes, along with data aggregation, is used to achieve energy-efficient operation in wireless sensor networks. But dynamic cluster formation supports data aggregation only when clusters can be formed from any set of nodes that lie in close proximity to each other, and frequent re-clustering makes network management difficult and adversely affects the use of energy-efficient TDMA-based scheduling for data collection within the clusters. To circumvent these issues, a centralized Fixed-Cluster Architecture (FCA) is proposed in this paper. The proposed scheme leads to a simplified network implementation for smart spaces, where it makes more sense to aggregate data belonging to a cluster of sensors located within the confines of a designated area. A comparative study is carried out against dynamic clusters formed with the distributed Low Energy Adaptive Clustering Hierarchy (LEACH) and a centralized Harmonic Search Algorithm (HSA). Using a uniform cluster size for FCA, the results show that it uses the available energy efficiently, providing stability-period values 56% and 41% higher than those of LEACH and HSA, respectively.
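
To make the contrast with dynamic re-clustering concrete, the sketch below (a minimal illustration, not the paper's FCA implementation) assigns nodes once to fixed area-based clusters and then only rotates the cluster-head role by residual energy, so the cluster membership, and hence any TDMA schedule, never changes. The grid size, field dimensions, and energy values are invented for the example.

```python
# Minimal sketch (not the authors' implementation): fixed clusters are
# defined once by area, and only the cluster-head role rotates inside
# each cluster, so TDMA schedules stay valid between rounds.
import random

GRID = 4           # hypothetical 4x4 grid of designated areas
FIELD = 100.0      # hypothetical field side length in meters

def fixed_cluster_of(node):
    """Map a node to the designated area (cluster) it lies in."""
    x, y = node["pos"]
    return int(x / (FIELD / GRID)), int(y / (FIELD / GRID))

nodes = [{"id": i,
          "pos": (random.uniform(0, FIELD), random.uniform(0, FIELD)),
          "energy": 2.0} for i in range(100)]

clusters = {}
for n in nodes:                      # one-time assignment: no re-clustering
    clusters.setdefault(fixed_cluster_of(n), []).append(n)

def elect_heads(clusters):
    """Rotate the CH role to the member with the most residual energy."""
    return {c: max(members, key=lambda n: n["energy"])
            for c, members in clusters.items()}

heads = elect_heads(clusters)
print(f"{len(heads)} fixed clusters, heads: {sorted(h['id'] for h in heads.values())}")
```
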
5

Shi, Ling Ling, Zhi Yuan Yan, and Zhi Jiang Du. "Fast Collision Detection and Deformation of Soft Tissue in Virtual Surgery." Applied Mechanics and Materials 380-384 (August 2013): 778–81. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.778.

Abstract:
In the virtual scene of a robot-assisted virtual surgery simulation system, the surgical instruments perform complex motions following the haptic devices, and the soft tissue deforms continuously under interaction forces. To meet the speed requirements of collision detection, an algorithm based on a changeable-direction-hull bounding volume hierarchy is proposed. A strategy combining a surface model with a body model is developed for soft tissue deformation, and a skeleton sphere model of the soft tissue is built. Deformation is achieved using mass-spring theory after matching collision information with the skeleton sphere model. The experiments show that the proposed collision detection method is faster than the fixed-direction-hull algorithm, and that the soft tissue deforms through the combination of collision information with the sphere model.
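
The following sketch shows the pruning idea behind any bounding-volume-hierarchy collision test: a query descends only into subtrees whose bounds overlap the probe. It uses plain axis-aligned boxes rather than the paper's changeable-direction hulls, so it is an assumption-laden stand-in for the actual method.

```python
# Generic bounding-volume-hierarchy sketch using AABBs (the paper uses
# changeable-direction hulls; AABBs stand in here to show the pruning idea).
from dataclasses import dataclass
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax)

@dataclass
class BVHNode:
    box: Box
    items: Optional[List[Box]] = None    # leaf payload
    left: Optional["BVHNode"] = None
    right: Optional["BVHNode"] = None

def merge(a: Box, b: Box) -> Box:
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def overlap(a: Box, b: Box) -> bool:
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def build(boxes: List[Box]) -> BVHNode:
    if len(boxes) <= 2:                               # small leaf
        bb = boxes[0]
        for b in boxes[1:]:
            bb = merge(bb, b)
        return BVHNode(bb, items=boxes)
    boxes.sort(key=lambda b: (b[0] + b[2]) / 2)       # split on x-median
    mid = len(boxes) // 2
    l, r = build(boxes[:mid]), build(boxes[mid:])
    return BVHNode(merge(l.box, r.box), left=l, right=r)

def query(node: BVHNode, probe: Box, hits: List[Box]) -> None:
    """Descend only into subtrees whose bounds overlap the probe."""
    if not overlap(node.box, probe):
        return
    if node.items is not None:
        hits.extend(b for b in node.items if overlap(b, probe))
    else:
        query(node.left, probe, hits)
        query(node.right, probe, hits)

tree = build([(i, i, i + 1.0, i + 1.0) for i in range(100)])
found: List[Box] = []
query(tree, (10.2, 10.2, 12.5, 12.5), found)
print(found)   # only the handful of boxes near the probe are visited
```
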
6

Karunanidy, Dinesh, Subramanian Ramalingam, Ankur Dumka, Rajesh Singh, Mamoon Rashid, Anita Gehlot, Sultan S. Alshamrani, and Ahmed Saeed AlGhamdi. "JMA: Nature-Inspired Java Macaque Algorithm for Optimization Problem." Mathematics 10, no. 5 (February 23, 2022): 688. http://dx.doi.org/10.3390/math10050688.

Abstract:
In recent years, optimization problems with various conflicting objectives have drawn attention in the fields of computation and engineering. The complexity of an optimization problem also increases dramatically with the complexity of its search space. Nature-Inspired Optimization Algorithms (NIOAs) are becoming dominant because of their flexibility and simplicity in solving different kinds of optimization problems. However, NIOAs may become stuck in local optima due to an imbalance in the selection strategy, which makes it difficult to stabilize exploration and exploitation in the search space. To tackle this problem, we propose a novel Java macaque algorithm that mimics the natural behavior of Java macaque monkeys. The algorithm uses a promising social-hierarchy-based selection process and achieves well-balanced exploration and exploitation by using multiple search agents with a multi-group population, male replacement, and learning processes. The proposed algorithm was extensively tested on benchmark functions, including unimodal, multimodal, and fixed-dimension multimodal functions, for the continuous optimization case, and on the Travelling Salesman Problem (TSP) for the discrete case. The experimental outcome shows the efficiency of the proposed Java macaque algorithm over existing dominant optimization algorithms.
7

Hu, Pei, Jeng-Shyang Pan, Shu-Chuan Chu, Qing-Wei Chai, Tao Liu, and Zhong-Cui Li. "New Hybrid Algorithms for Prediction of Daily Load of Power Network." Applied Sciences 9, no. 21 (October 24, 2019): 4514. http://dx.doi.org/10.3390/app9214514.

Abstract:
Two new hybrid algorithms are proposed to improve the performance of two meta-heuristic optimization algorithms, the Grey Wolf Optimizer (GWO) and the Shuffled Frog Leaping Algorithm (SFLA). First, the hierarchy and position updating of the mathematical model of GWO are advanced, and the SGWO algorithm is proposed based on the advantages of SFLA and GWO; it not only improves the ability of local search, but also speeds up global convergence. Second, the SGWOD algorithm is built on SGWO by exploiting a differential evolution strategy. In experiments on 29 benchmark functions, comprising unimodal, multimodal, fixed-dimension and composite multimodal functions, the new algorithms perform better than GWO, SFLA and GWO-DE, and they balance exploration and exploitation well. The proposed SGWO and SGWOD algorithms are also applied to a neural-network-based prediction model, and experimental results show their usefulness for forecasting the daily power load.
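
For readers unfamiliar with the GWO hierarchy the paper modifies, a minimal sketch of the textbook alpha/beta/delta position update is given below; the SFLA shuffling and differential-evolution steps that distinguish SGWO and SGWOD are omitted, so this is the baseline model, not the proposed hybrids.

```python
# Textbook GWO position update (the alpha/beta/delta hierarchy that the
# SGWO variants build on); hybridization steps are deliberately omitted.
import numpy as np

def gwo_step(wolves, fitness, a):
    """One iteration: rank the pack, then move every wolf toward the
    three leaders alpha, beta, delta."""
    order = np.argsort([fitness(w) for w in wolves])
    alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
    new = []
    for x in wolves:
        guided = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A, C = 2 * a * r1 - a, 2 * r2
            guided.append(leader - A * np.abs(C * leader - x))
        new.append(np.mean(guided, axis=0))   # average of the three pulls
    return new

rng = np.random.default_rng(0)
pack = [rng.uniform(-5, 5, 2) for _ in range(20)]
sphere = lambda x: float(np.sum(x ** 2))      # unimodal test function
for t in range(50):
    pack = gwo_step(pack, sphere, a=2 * (1 - t / 50))  # a decays 2 -> 0
print(min(sphere(w) for w in pack))
```
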
8

Xiao, Yujie, and Dingxiong Zhang. "The Command Decision Method of Multiple UUV Cooperative Task Assignment Based on Contract Net Protocol." Journal of Systems Science and Information 4, no. 4 (August 25, 2016): 379–90. http://dx.doi.org/10.21078/jssi-2016-379-12.

Abstract:
With the help of a multiple-UCAV cooperative task control model, a mathematical model of multiple-UUV cooperative task control is built. Variables related to the decision are broken into goal, criteria and program levels by the Analytic Hierarchy Process (AHP), and on this basis the command decision for multiple-UUV task assignment is achieved. The correctness of the task allocation algorithm is verified by case analysis, and time calculation formulas for a task assignment are given. The changes in overall effectiveness during task allocation are analyzed, as are the time changes of each sub-task allocation within one task assignment and the cases where the number of tasks or the number of platforms is held fixed.
9

Ding, Wei, Hongfa Wang, and Xuerui Wei. "Many-to-Many Multicast Routing Schemes under a Fixed Topology." Scientific World Journal 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/718152.

Abstract:
Many-to-many multicast routing can be applied extensively in computer and communication networks supporting various continuous multimedia applications. This paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with a quality-of-service (QoS) optimization is frequently NP-hard; however, finding a many-to-many multicast tree with QoS optimization becomes tractable under a fixed topology. In this paper, we are concerned with three kinds of QoS optimization objectives for the multicast tree: minimum cost, minimum diameter, and maximum reliability. Each of the three optimization problems comes in two versions, centralized and decentralized. The paper uses dynamic programming to devise an exact algorithm for the centralized and decentralized versions of each optimization problem.
10

Mukherjee, Proshikshya, Prasant Kumar Pattnaik, Ahmed Abdulhakim Al-Absi, and Dae-Ki Kang. "Recommended System for Cluster Head Selection in a Remote Sensor Cloud Environment Using the Fuzzy-Based Multi-Criteria Decision-Making Technique." Sustainability 13, no. 19 (September 24, 2021): 10579. http://dx.doi.org/10.3390/su131910579.

Abstract:
Clustering is an energy-efficient routing approach in a sensor cloud environment (SCE). Clustered sensor nodes communicate with the base station via a cluster head (CH), which can be selected based on the remaining energy, the distance to the base station, or the distance from neighboring nodes. If the CH is selected based on remaining energy alone but lies far from the base station, the selection is not energy-efficient, and the same applies to any other single criterion; a single criterion is therefore not sufficient for CH selection. Moreover, in traditional clustering algorithms the head nodes change in every round, so energy consumption rises and nodes die faster. In this paper, the fuzzy multi-criteria decision-making (F-MCDM) technique is used for CH selection, and a threshold value is fixed for the CH selection. The fuzzy analytic hierarchy process (AHP) and the fuzzy analytic network process (ANP) are used for CH selection. The performance evaluation results exhibit a 5% improvement over the fuzzy AHP clustering method and a 10% improvement over the traditional method in terms of stability, energy consumption, throughput, and control overhead.
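
A crisp (non-fuzzy) stand-in for the scoring idea is sketched below: each node gets a weighted multi-criteria score and becomes CH only if the score clears a fixed threshold. The weights and threshold are illustrative assumptions, not the fuzzy AHP/ANP weights derived in the paper.

```python
# Crisp stand-in for the fuzzy AHP/ANP scoring: each node is ranked by a
# weighted mix of criteria, and a node is CH-eligible only above a threshold.
# Weights and threshold are illustrative, not the paper's derived values.
def ch_score(node, w=(0.5, 0.3, 0.2)):
    energy, bs_dist, nbr_dist = node
    # higher residual energy is good; large distances are penalized
    return w[0] * energy - w[1] * bs_dist - w[2] * nbr_dist

nodes = {   # (residual energy, dist to base station, mean neighbor dist)
    "n1": (0.9, 0.8, 0.3),
    "n2": (0.7, 0.2, 0.2),
    "n3": (0.4, 0.1, 0.1),
}
THRESHOLD = 0.25            # fixed CH-eligibility threshold, as in the paper
scores = {k: ch_score(v) for k, v in nodes.items()}
eligible = {k: s for k, s in scores.items() if s >= THRESHOLD}
print(max(eligible, key=eligible.get))   # -> the selected cluster head
```
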
11

Yu, Vincent F., Putu A. Y. Indrakarna, Anak Agung Ngurah Perwira Redi, and Shih-Wei Lin. "Simulated Annealing with Mutation Strategy for the Share-a-Ride Problem with Flexible Compartments." Mathematics 9, no. 18 (September 19, 2021): 2320. http://dx.doi.org/10.3390/math9182320.

Abstract:
The Share-a-Ride Problem with Flexible Compartments (SARPFC) is an extension of the Share-a-Ride Problem (SARP) in which both passenger and freight transport are serviced by a single taxi network. The aim of SARPFC is to increase profit by introducing flexible compartments into the SARP model. SARPFC allows taxis to adjust their compartment sizes within lower and upper bounds while maintaining the same total capacity, permitting them to service more parcels while simultaneously serving at most one passenger. The main contribution of this study is that we formulate a new mathematical model for the problem and propose a new variant of the Simulated Annealing (SA) algorithm, called Simulated Annealing with Mutation Strategy (SAMS), to solve SARPFC. The mutation strategy is an intensification approach that improves the solution based on slack time and is activated in the later stage of the algorithm. The proposed SAMS was tested on SARP benchmark instances, and the results show that it outperforms existing algorithms. Several computational studies were also conducted on the SARPFC instances. Analysis of the effects of compartment size and the portion of package requests on total profit showed that, on average, utilizing flexible compartments as in SARPFC brings in more profit than using a fixed-size compartment as in SARP.
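
A minimal sketch of the two-phase idea follows, under the assumption that the mutation strategy amounts to switching to a stronger neighborhood move late in the annealing run; the toy tour objective stands in for SARPFC's much richer routing-and-compartment objective.

```python
# SA skeleton with a late-stage mutation move, mirroring the idea that the
# intensification step activates only in the later part of the run.
import math, random

def anneal(route, cost, T=100.0, alpha=0.995, iters=5000, mutate_after=0.7):
    best = cur = route[:]
    for t in range(iters):
        cand = cur[:]
        i, j = sorted(random.sample(range(len(cand)), 2))
        if t > mutate_after * iters:                  # mutation: late 30% of run
            cand[i:j + 1] = reversed(cand[i:j + 1])   # stronger 2-opt-style move
        else:
            cand[i], cand[j] = cand[j], cand[i]       # plain swap move
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / T):
            cur = cand
            if cost(cur) < cost(best):
                best = cur[:]
        T *= alpha                                    # geometric cooling
    return best

pts = [(random.random(), random.random()) for _ in range(30)]
tour_len = lambda r: sum(math.dist(pts[r[k]], pts[r[k - 1]]) for k in range(len(r)))
print(tour_len(anneal(list(range(30)), tour_len)))
```
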
12

Seliaman, Mohamed, Leopoldo Cárdenas-Barrón, and Sayeed Rushd. "An Algebraic Decision Support Model for Inventory Coordination in the Generalized n-Stage Non-Serial Supply Chain with Fixed and Linear Backorders Costs." Symmetry 12, no. 12 (December 3, 2020): 1998. http://dx.doi.org/10.3390/sym12121998.

Abstract:
This paper extends and generalizes former inventory models that apply algebraic methods to derive optimal supply chain inventory decisions. In particular, it considers the problem of coordinating production-inventory decisions in an integrated n-stage supply chain system with linear and fixed backorder costs. The supply chain system assumes information symmetry, which implies that all partners share their operational information. First, a mathematical model of the supply chain's total cost is formulated under the integer-multipliers coordination mechanism. Then, a recursive algebraic algorithm to derive the optimal inventory replenishment decisions is developed. The applicability of the proposed algorithm is demonstrated on two numerical examples, whose results indicate that adopting the integer-multiplier mechanism reduces the overall system cost compared with the common cycle time mechanism.
13

Apraj, Saurabh D. "A Review on Artificial Intelligence in Stock Market." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 4358–60. http://dx.doi.org/10.22214/ijraset.2022.44946.

Abstract:
This paper concentrates on the application of artificial intelligence and machine learning to the stock market. The principles and characteristics of the KNN, k-means, bisecting k-means, and ANN algorithms are studied to compare the effects, similarities and differences of the different algorithms. The algorithms are implemented as Python programs for stock analysis. Based on the P/E ratio, dividend rate, fixed-asset turnover rate, net margin and other indicators of each stock, the stocks are classified and clustered to predict their development prospects and to provide a reference for selecting appropriate investment strategies.
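
As a rough illustration of the clustering step described above (with invented indicator values, not data from the paper), stocks can be grouped by fundamentals with k-means after standardization:

```python
# Sketch of the clustering idea: group stocks by fundamental indicators
# with k-means. The indicator values below are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: P/E ratio, dividend yield %, fixed-asset turnover, net margin %
X = np.array([[12.0, 3.1, 1.8, 14.0],
              [45.0, 0.0, 0.9,  4.0],
              [ 8.5, 4.2, 2.1, 11.0],
              [60.0, 0.2, 0.7,  2.5],
              [15.0, 2.5, 1.5, 12.5]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))   # scale so P/E doesn't dominate
print(labels)   # e.g., a value-like group vs a growth-like group
```
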
14

Faict, Thomas, Erik H. D’Hollander, and Bart Goossens. "Mapping a Guided Image Filter on the HARP Reconfigurable Architecture Using OpenCL." Algorithms 12, no. 8 (July 27, 2019): 149. http://dx.doi.org/10.3390/a12080149.

Abstract:
Intel recently introduced the Heterogeneous Architecture Research Platform, HARP. In this platform, the Central Processing Unit and a Field-Programmable Gate Array are connected through a high-bandwidth, low-latency interconnect and both share DRAM memory. For this platform, Open Computing Language (OpenCL), a High-Level Synthesis (HLS) language, is made available. By making use of HLS, a faster design cycle can be achieved compared to programming in a traditional hardware description language. This, however, comes at the cost of having less control over the hardware implementation. We will investigate how OpenCL can be applied to implement a real-time guided image filter on the HARP platform. In the first phase, the performance-critical parameters of the OpenCL programming model are defined using several specialized benchmarks. In a second phase, the guided image filter algorithm is implemented using the insights gained in the first phase. Both a floating-point and a fixed-point implementation were developed for this algorithm, based on a sliding window implementation. This resulted in a maximum floating-point performance of 135 GFLOPS, a maximum fixed-point performance of 430 GOPS and a throughput of HD color images at 74 frames per second.
15

Jin, Yi, Yulin He, and Defa Huang. "An Improved Variable Kernel Density Estimator Based on L2 Regularization." Mathematics 9, no. 16 (August 21, 2021): 2004. http://dx.doi.org/10.3390/math9162004.

Abstract:
The purpose of a kernel density estimator (KDE) is to find the underlying probability density function (p.d.f.) for a given dataset. The key to training a KDE is determining the optimal bandwidth or Parzen window. In the fixed KDE (FKDE), all data points share one fixed bandwidth (a scalar for univariate KDE and a vector for multivariate KDE). In this paper, we propose an improved variable KDE (IVKDE) which determines the optimal bandwidth for each data point in the given dataset based on the integrated squared error (ISE) criterion with an L2 regularization term. An effective optimization algorithm is developed to solve the improved objective function. We compare the estimation performance of IVKDE with FKDE and with VKDE based on the ISE criterion without L2 regularization, on four univariate and four multivariate probability distributions. The experimental results show that IVKDE obtains lower estimation errors, demonstrating its effectiveness.
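
The fixed-versus-variable bandwidth distinction is easy to see in code. In this sketch the per-point bandwidths come from a simple k-nearest-neighbor rule, which only stands in for the paper's regularized-ISE optimization:

```python
# Fixed vs variable bandwidth Gaussian KDE in one dimension. The per-point
# bandwidths come from a k-NN rule, a stand-in for the paper's objective.
import numpy as np

def kde(x_grid, data, h):
    """Gaussian KDE; h is a scalar (FKDE) or one bandwidth per point (VKDE)."""
    h = np.broadcast_to(np.asarray(h, dtype=float), data.shape)
    z = (x_grid[:, None] - data[None, :]) / h[None, :]
    return np.mean(np.exp(-0.5 * z**2) / (h[None, :] * np.sqrt(2 * np.pi)), axis=1)

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(3, 1.5, 200)])
grid = np.linspace(-5, 8, 400)

fixed = kde(grid, data, h=0.5)                       # one bandwidth for all
knn = np.sort(np.abs(data[:, None] - data[None, :]), axis=1)[:, 20]
variable = kde(grid, data, h=np.maximum(knn, 1e-3))  # wider in sparse regions
print(fixed.max(), variable.max())
```
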
16

Zhou, Xiaoyang, Rui Luo, Canhui Zhao, Xiaohua Xia, Benjamin Lev, Jian Chai, and Richard Li. "Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model." Scientific Programming 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/4795101.

Abstract:
Hospital outpatient departments operate by selling fixed-period appointments for different treatments. The challenge is to improve profit by optimally determining the mix of full-time and part-time doctors and allocating appointments (scheduling a combination of doctors, patients, and treatments to a time period in a department). In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of hired full-time and part-time doctors to maximize total profit; each department, a follower in the hierarchy, makes its appointment scheduling decision to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wages and demand are treated as fuzzy variables to better describe the real-life situation. A chance operator is then used to handle the fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel program into a single-level program so that it becomes solvable. Finally, numerical experiments demonstrate the efficiency and effectiveness of the proposed approaches.
17

Singh, Rashandeep, and Dr Gulshan Goyal. "Algorithm Design for Deterministic Finite Automata for a Given Regular Language with Prefix Strings." Journal of Scientific Research 66, no. 02 (2022): 16–21. http://dx.doi.org/10.37398/jsr.2022.660203.

Abstract:
Computer science and engineering have given us the field of automata theory, one of the largest areas concerned with the efficiency of algorithms for solving problems on a computational model. The various classes of formal languages are represented using the Chomsky hierarchy. These languages are described as sets of specific strings over a given alphabet and can be depicted using state or transition diagrams. The state/transition diagram for regular languages is called a finite automaton, which is used in compiler design for the recognition of tokens. Other applications of finite automata include pattern matching, speech and text processing, CPU machine operations, etc. The construction of finite automata is a complicated and challenging process, as no fixed mathematical approach exists for designing deterministic finite automata (DFA) and handling the validation of acceptance or rejection of strings. Consequently, it is difficult to represent a DFA's transition table and graph, and novice learners in theoretical computer science often find DFA design difficult. The present paper proposes an algorithm for designing a DFA for a regular language with a given prefix. The proposed method further aims to simplify the lexical analysis phase of compiler design.
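
A small constructive example of the kind of fixed recipe the paper advocates: a DFA over a two-letter alphabet accepting exactly the strings that begin with a given prefix, built as a chain of states plus an accepting sink and a dead state. This illustrates the construction style only; it is not the paper's algorithm verbatim.

```python
# DFA for the language "all strings over {a, b} starting with a prefix":
# a chain of states advances through the prefix, an accepting sink keeps
# accepting, and a dead state traps every mismatch.
def prefix_dfa(prefix, alphabet=("a", "b")):
    accept, dead = len(prefix), len(prefix) + 1
    delta = {}
    for i, want in enumerate(prefix):       # chain: state i advances on `want`
        for sym in alphabet:
            delta[(i, sym)] = i + 1 if sym == want else dead
    for sym in alphabet:                    # accept and dead states are sinks
        delta[(accept, sym)] = accept
        delta[(dead, sym)] = dead
    return delta, 0, {accept}

def run(dfa, word):
    delta, state, finals = dfa
    for sym in word:
        state = delta[(state, sym)]
    return state in finals

dfa = prefix_dfa("ab")
print(run(dfa, "abba"))   # True: starts with "ab"
print(run(dfa, "ba"))     # False: first symbol already mismatches
```
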
18

Fichte, Johannes K., Markus Hecher, and Arne Meier. "Knowledge-Base Degrees of Inconsistency: Complexity and Counting." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 6349–57. http://dx.doi.org/10.1609/aaai.v35i7.16788.

Abstract:
Description logics (DLs) are knowledge representation languages used in the field of artificial intelligence (AI). A common technique is to query DL knowledge bases, e.g., by Boolean Datalog queries, and ask for entailment. But real-world knowledge bases are often obtained by combining data from various sources. This, inherently, might result in certain inconsistencies (with respect to a given query) and requires estimating a degree of inconsistency before using a knowledge base. In this paper, we provide a complexity analysis of fixed-domain non-entailment (NE) on Datalog programs for well-established families of knowledge bases (KBs). We exhibit a detailed complexity map for the decision case, counting, and projected counting, which may serve as a quantitative measure of the inconsistency of a KB with respect to a query. Our results show that NE is natural for the second, third, and fourth levels of the polynomial (counting) hierarchy, depending on the type of query studied (stratified, normal, disjunctive), and one level higher for the projected versions. Further, we show fixed-parameter tractability by bounding the treewidth, provide a constructive algorithm, and show its theoretical limitations in terms of conditional lower bounds.
19

Kovrov, Gregory S., Ivan A. Babkin, and Nikolay E. Yegorov. "PROSPECTS FOR INVESTMENT IN THE FUEL AND ENERGY COMPLEX OF YAKUTIA WITH THE USE OF PUBLIC-PRIVATE PARTNERSHIP MECHANISMS." Север и рынок: формирование экономического порядка 71, no. 1/2021 (March 16, 2021): 37–55. http://dx.doi.org/10.37614/2220-802x.1.2021.71.004.

Abstract:
The article studies the investment process in the northern regions of Russia, using the example of the Republic of Sakha (Yakutia). The relevance of the research topic stems from the fact that the most important factor in the development of any economy, including those of the northern regions of Russia, is investment in fixed assets. The aim of the study is to forecast long-term investment in the fuel and energy complex of the northern region. The authors propose a methodological approach (algorithm) for the predictive assessment of investments in the development of the fuel and energy complex of the Republic of Sakha (Yakutia), and analyze investment activity in the republic for the period from 2005 to 2018. The republic has all the prerequisites for maintaining a leading position in fixed-asset investment among the regions of the Russian Federation: by the indicator "Investments in fixed assets per capita", it ranked 5th among the regions of Russia in 2018. The fuel and energy complex occupies a significant share of the gross regional product of the Republic of Sakha (Yakutia), and the share of investments in the fuel and energy complex in the republic's total investment shows an upward trend, from 15.3% to 34.5% over the period from 2008 to 2018. Forecast calculations of fixed-asset investment in the branches of the fuel and energy complex have been made: the investment required to implement the Development Strategy of the Fuel and Energy Complex of the Republic of Sakha (Yakutia) for 2020-2032 will be 1,734.6 billion rubles in the moderate scenario and 2,317.9 billion rubles in the strategic scenario. The largest shares in the structure of investments in the fuel and energy complex until 2032 belong to the oil and gas complex (59.3%) and the coal industry (15.5%). In conclusion, it is noted that the main mechanism for implementing the energy strategy is public-private partnership; its improvement and the search for new mechanisms are necessary conditions for the further development of the fuel and energy complex.
20

Maleeva, Alexandra Vladimirovna, Vyacheslav Anatolevich Shmarov, Dmitry Olegovych Kiryukhin, Grigory Aleksandrovich Efimov, and Valery G. Savchenko. "Repertoire of Cytomegalovirus-Specific T Cells Is Focused on the Immunodominant Epitopes in Fixed Hierarchy Dependent on HLA Genotype of the Donor." Blood 134, Supplement_1 (November 13, 2019): 2327. http://dx.doi.org/10.1182/blood-2019-130241.

Abstract:
Background: Cytomegalovirus (HCMV) infection is one of the major complications and causes of mortality in patients receiving immunosuppressive therapy following hematopoietic stem cell transplantation (HSCT). Antiviral drugs have limited efficacy and high toxicity, while using CMV-specific T cells from healthy individuals as a cellular vaccine has proven to be safe and efficient. Conventional protocols rely on expanding virus-specific cells ex vivo, while more recently it was proposed to transfuse relatively small numbers of minimally manipulated cells and allow them to expand in vivo. To isolate virus-specific cells, one could use either stimulation with overlapping pools of peptides covering the whole immunogenic viral protein followed by isolation of IFNγ-producing cells, or direct isolation of cells bound to MHC multimers loaded with known immunodominant peptides. Both methods have their advantages: the former approach isolates all activated clones irrespective of their MHC restriction and supplements CD8+ cells with a CD4+ fraction, while the latter is faster and simpler, and captures cells irrespective of their ability to produce IFNγ. The efficiency of the two methods has not been directly compared, and it is currently unknown to what extent the T-cell populations isolated by these two approaches differ in absolute quantity and clonal composition. Aims: to compare the quantity and T-cell receptor repertoire composition of T cells specific to pp65 of CMV and to the pp65-derived immunodominant epitopes NLV, restricted by HLA-A*02, and TPR and RPH, both restricted by HLA-B*07. Methods: PBMC of healthy donors with different combinations of HLA-A*02 and HLA-B*07 alleles were stimulated with peptides of the immunodominant epitopes or an overlapping peptide pool covering the entire pp65 of CMV (Miltenyi Biotec, Cat. 130-093-435). Antigen-specific T cells were detected by intracellular staining for IFNγ and flow cytometry. We obtained fractions of virus-specific T cells from PBMC either by MHC-tetramer staining or by IFNγ secretion assay, followed by fluorescence-activated cell sorting. We then prepared cDNA libraries of TCR α- and β-chains and sequenced them using NGS. Characterization of TCR repertoires was performed by a proprietary bioinformatic pipeline. Results: The T-cell response to pp65 of CMV is strongly focused on immunodominant pp65-derived epitopes. In donors carrying HLA-A*02 and/or HLA-B*07, the absolute majority of pp65-reactive cells are specific to one of the three immunodominant epitopes NLV, TPR and RPH. The immunodominant antigens are structured in a fixed hierarchy: TPR and RPH are superior to NLV. When HLA-B*07 is present, the response to the NLV epitope is drastically reduced, and most of the pp65-specific cells recognize either the TPR or the RPH epitope. The TCR repertoire of the NLV-specific T cells is highly skewed and consists of a few large dominant clones; TPR- and RPH-specific T cells are more clonally diverse. A substantial share of the cells belonging to the large clones specific to the immunodominant antigens do not secrete IFNγ upon antigenic stimulation. Nevertheless, they may be therapeutically valuable, as they might secrete other cytokines. Conclusion: In summary, the CMV-specific response in donors positive for HLA-A*02 and/or HLA-B*07 is focused on the combination of three immunodominant epitopes NLV, TPR and RPH. A substantial share of the cells are not IFNγ producers. For patients carrying these alleles, it might be beneficial to use MHC multimers for the isolation of therapeutic lymphocytes.
Figure 1: A - healthy donors with different combinations of HLA-A*02 and HLA-B*07; B - intracellular staining for IFNγ of CD8+ T cells after stimulation with pp65-derived peptides; C - NGS to identify the clonal diversity of pp65-specific T cells by combination of HLA-A*02 and HLA-B*07. Disclosures: no relevant conflicts of interest to declare.
21

Perez-Rodriguez, Ricardo. "An estimation of distribution algorithm for combinatorial optimization problems." International Journal of Industrial Optimization 3, no. 1 (February 3, 2022): 47–67. http://dx.doi.org/10.12928/ijio.v3i1.5862.

Abstract:
This paper considers several combinatorial problems regarded as among the most difficult to solve in combinatorial optimization: the job shop scheduling problem (JSSP), the vehicle routing problem with time windows (VRPTW), and the quay crane scheduling problem (QCSP). A hybrid metaheuristic algorithm that integrates the Mallows model and the Moth-flame algorithm solves these problems. Through an exponential function, the Mallows model emulates the distribution of the solution space of the problems; meanwhile, the Moth-flame algorithm determines how to produce offspring by a geometric function that helps identify new solutions. The proposed metaheuristic, called HEDAMMF (Hybrid Estimation of Distribution Algorithm with Mallows model and Moth-Flame algorithm), improves on the performance of recent algorithms. Although understanding the proposed metaheuristic requires knowing the algebra of permutations, using HEDAMMF is justified because the problems are posed differently under different circumstances: they do not share the same objective function (fitness) and/or the same constraints, so a single model problem cannot be used. The approach outperforms recent algorithms under different metrics on these three combinatorial problems. Finally, it is possible to conclude that the hybrid metaheuristic performs better than, or at least as well as, recent algorithms.
22

Bagaram, Martin B., Sándor F. Tóth, Weikko S. Jaross, and Andrés Weintraub. "A Parallelized Variable Fixing Process for Solving Multistage Stochastic Programs with Progressive Hedging." Advances in Operations Research 2020 (December 12, 2020): 1–17. http://dx.doi.org/10.1155/2020/8965679.

Abstract:
Long time horizons, typical of forest management, make planning more difficult due to added exposure to climate uncertainty. Current methods for stochastic programming limit the incorporation of climate uncertainty in forest management planning. To account for climate uncertainty in forest harvest scheduling, we discretize the potential distribution of forest growth under different climate scenarios and solve the resulting stochastic mixed integer program. Increasing the number of scenarios allows for a better approximation of the entire probability space of future forest growth but at a computational expense. To address this shortcoming, we propose a new heuristic algorithm designed to work well with multistage stochastic harvest-scheduling problems. Starting from the root-node of the scenario tree that represents the discretized probability space, our progressive hedging algorithm sequentially fixes the values of decision variables associated with scenarios that share the same path up to a given node. Once all variables from a node are fixed, the problem can be decomposed into subproblems that can be solved independently. We tested the algorithm performance on six forests considering different numbers of scenarios. The results showed that our algorithm performed well when the number of scenarios was large.
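
A toy version of the variable-fixing idea follows: scenarios that share the path to a tree node must agree on that node's decision, so once the per-scenario solutions reach consensus the variable is fixed and the remaining subproblems decouple. Real progressive hedging adds penalty terms and dual multipliers; the numbers and the averaging "re-solve" below are invented stand-ins.

```python
# Toy illustration of progressive-hedging-style variable fixing on a
# two-stage scenario tree: fix the root variable once scenarios agree.
scenarios = {"s1": 0.4, "s2": 0.35, "s3": 0.25}   # scenario probabilities
# per-scenario "optimal" first-stage harvest levels (stand-ins for LP solves)
x = {"s1": 10.0, "s2": 12.0, "s3": 11.0}

fixed_x = None
for it in range(100):
    xbar = sum(p * x[s] for s, p in scenarios.items())   # consensus target
    for s in x:                       # pull each scenario toward consensus
        x[s] += 0.5 * (xbar - x[s])   # (stand-in for a penalized re-solve)
    if max(abs(x[s] - xbar) for s in x) < 1e-6:
        fixed_x = xbar   # root-node variable agreed on: fix it, and the
        break            # per-scenario subproblems can be solved independently
print(f"fixed root decision after iteration {it}: {fixed_x:.3f}")
```
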
23

Korayem, M. H., V. Azimirad, H. Vatanjou, and A. H. Korayem. "Maximum load determination of nonholonomic mobile manipulator using hierarchical optimal control." Robotica 30, no. 1 (April 26, 2011): 53–65. http://dx.doi.org/10.1017/s0263574711000336.

Abstract:
SUMMARY: This paper presents a new method using hierarchical optimal control for path planning and for calculating the maximum allowable dynamic load (MADL) of a wheeled mobile manipulator (WMM). The method is useful for WMMs with many degrees of freedom. First, the overall system is decoupled into a set of subsystems, and then hierarchical optimal control is applied to them. The presented algorithm is a two-level hierarchical algorithm: in the first level, the interaction terms between subsystems are fixed, and in the second level, the optimization problem for the subsystems is solved. The results of the second level are used to calculate new estimates of the interaction variables in the first level. To calculate the MADL, the load on the end effector is increased until the actuators reach saturation. For a large-scale robot, we show how the distributed hierarchy in optimal control helps find the MADL quickly; it also enables us to handle the complicated cost functions generated by obstacle avoidance terms. The effectiveness of this approach is shown in simulation case studies for different types of WMMs, as well as in an experiment on a mobile manipulator called Scout.
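
The two-level loop is easy to state in code: level 1 fixes interaction estimates, level 2 solves each decoupled subproblem, and the subproblem results refresh the interaction estimates. The quadratic subproblem and coupling rule below are illustrative assumptions, not the WMM dynamics from the paper.

```python
# Two-level coordination sketch: fix interactions, solve subproblems,
# update interactions, repeat until the estimates settle.
def solve_subsystem(target, interaction):
    """Level 2: minimize (u - target)^2 + u * interaction in closed form."""
    return target - interaction / 2.0

targets = [1.0, -2.0, 0.5]            # one entry per decoupled subsystem
interactions = [0.0, 0.0, 0.0]        # level-1 estimates, initially zero
for _ in range(20):
    u = [solve_subsystem(a, w) for a, w in zip(targets, interactions)]
    coupling = sum(u) / len(u)        # stand-in coupling: mean of controls
    interactions = [0.1 * coupling for _ in u]   # level 1 refreshes estimates
print([round(v, 4) for v in u])       # settled subsystem controls
```
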
24

Domínguez Conde, Cristina, Jonas Philipp Lüke, and Fernando Rosa González. "Implementation of a Depth from Light Field Algorithm on FPGA." Sensors 19, no. 16 (August 15, 2019): 3562. http://dx.doi.org/10.3390/s19163562.

Abstract:
A light field is a four-dimensional function that captures the intensity of the light rays traversing an empty space at each point. The light field can be captured using devices designed specifically for this purpose, and it allows one to extract depth information about the scene. Most light-field algorithms require a huge amount of processing power. Fortunately, in recent years, parallel hardware has evolved to enable such volumes of data to be processed; field-programmable gate arrays are one such option. In this paper, we propose two hardware designs that share a common building block to compute a disparity map from light-field data. The first design employs serial data input into the hardware, while the second employs view-parallel input. These designs focus on performing calculations during data read-in and producing results only a few clock cycles after read-in. Several experiments were conducted. First, the influence of fixed-point arithmetic on accuracy was tested using synthetic light-field data; tests on actual light-field data were also performed. The performance was compared to that of a CPU as well as an embedded processor: our designs showed performance similar to the former and outperformed the latter. For further comparison, we also discuss the performance difference between our designs and other designs described in the literature.
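
The accuracy cost of a fixed-point datapath can be demonstrated with a few lines of quantization arithmetic. The Q8.8 format below is an illustrative choice, not necessarily the format used in the paper's design:

```python
# Fixed-point quantization demo: convert to Qm.n two's-complement range,
# then back, and measure the rounding/saturation error against the float.
def to_fixed(x, frac_bits=8, total_bits=16):
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, round(x * scale)))    # saturate on overflow

def from_fixed(q, frac_bits=8):
    return q / (1 << frac_bits)

for v in [0.123456, -3.75, 100.001]:
    q = to_fixed(v)
    print(f"{v:>10} -> {from_fixed(q):>10} (error {abs(v - from_fixed(q)):.6f})")
```
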
25

Becraft, P. W., and Y. Asuncion-Crabb. "Positional cues specify and maintain aleurone cell fate in maize endosperm development." Development 127, no. 18 (September 15, 2000): 4039–48. http://dx.doi.org/10.1242/dev.127.18.4039.

Abstract:
A genetic analysis of maize aleurone development was conducted. Cell lineage was examined by simultaneously marking cells with C1 for anthocyanin pigmentation in the aleurone and wx1 for amylose synthesis in the starchy endosperm. The aleurone and starchy endosperm share a common lineage throughout development indicating that positional cues specify aleurone fate. Mutants in dek1 block aleurone formation at an early stage and cause peripheral endosperm cells to develop as starchy endosperm. Revertant sectors of a transposon-induced dek1 allele showed that peripheral endosperm cells remain competent to differentiate as aleurone cells until late in development. Ds-induced chromosome breakage was used to generate Dek1 loss-of-function sectors. Events occurring until late development caused aleurone cells to switch fate to starchy endosperm indicating that cell fate is not fixed. Thus, positional cues are required to specify and maintain aleurone fate and Dek1 function is required to respond to these cues. An analysis of additional mutants that disrupt aleurone differentiation suggests a hierarchy of gene functions to specify aleurone cell fate and then control aleurone differentiation. These mutants disrupt aleurone differentiation in reproducible patterns suggesting a relationship to endosperm pattern formation.
26

Milenkovic, Victor, Elisha Sacks, and Nabeel Butt. "Fast Detection of Degenerate Predicates in Free Space Construction." International Journal of Computational Geometry & Applications 29, no. 03 (September 2019): 219–37. http://dx.doi.org/10.1142/s0218195919500067.

Abstract:
An implementation of a computational geometry algorithm is robust if the combinatorial output is correct for every input. Robustness is achieved by ensuring that the predicates in the algorithm are evaluated correctly. A predicate is the sign of an algebraic expression whose variables are input parameters. The hardest case is detecting degenerate predicates, where the value of the expression equals zero. We encounter this case in constructing the free space of a polyhedron that rotates around a fixed axis and translates freely relative to a stationary polyhedron. Each predicate involved in the construction is expressible as the sign of a univariate polynomial f evaluated at a zero t of a univariate polynomial g, where the coefficients of f and g are polynomials in the coordinates of the polyhedron vertices. A predicate is degenerate when t is a zero of a common factor of f and g. We present an efficient degeneracy detection algorithm based on a one-time factoring of all the univariate polynomials over the ring of multivariate polynomials in the vertex coordinates. Our algorithm is 3500 times faster than the standard algorithm based on greatest common divisor computation. It reduces the share of degeneracy detection in our free space computations from 90% to 0.5% of the running time.
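
In one variable the degeneracy test is short to state with a computer algebra system: the sign predicate at a root t of g is degenerate exactly when t annihilates a common factor of f and g. The polynomials below are made-up examples; the paper's contribution is doing the factoring once, over the ring of multivariate coefficient polynomials, rather than repeating GCD computations per predicate.

```python
# Degeneracy test for sign(f(t)) at roots t of g, via a common factor.
import sympy as sp

x = sp.symbols("x")
f = (x - 2) * (x**2 + 1)        # predicate polynomial (made-up example)
g = (x - 2) * (x + 5)           # the zeros t come from this polynomial

h = sp.gcd(f, g)                # common factor: x - 2
for t in sp.real_roots(g):
    degenerate = sp.simplify(h.subs(x, t)) == 0
    print(f"t = {t}: {'degenerate' if degenerate else sp.sign(f.subs(x, t))}")
# t = -5 gives a well-defined sign; t = 2 is degenerate (f(2) = 0 too)
```
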
27

Jimbo, Koki, Satoshi Iriyama, and Massimo Regoli. "Implementation of a New Strongly-Asymmetric Algorithm and Its Optimization." Cryptography 4, no. 3 (July 30, 2020): 21. http://dx.doi.org/10.3390/cryptography4030021.

Abstract:
A new public key agreement (PKA) algorithm, called the strongly-asymmetric algorithm (SAA-5), was introduced by Accardi et al. The main differences from the usual PKA algorithms are that Bob has several independent public keys and Alice produces her public key using some of Bob's public keys, so the preparation and calculation processes are essentially asymmetric. The algorithm has more free parameters than the usual symmetric PKA algorithms, and the speed of calculation depends strongly on the parameters chosen; however, its performance had not yet been tested. The purpose of our study was to identify parameter settings that allow the key to be shared at high speed in SAA-5 and to optimize SAA-5 in terms of calculation speed. To find efficient parameters of SAA-5, we compared its calculation speed with Diffie-Hellman (D-H) while varying the values of some parameters with the length of the secret shared key (SSK) fixed. For optimization, we discuss a more general framework of SAA-5 to find more efficient operations. By fixing the parameters of the framework appropriately, a new PKA algorithm with the same security level as SAA-5 was produced. The results show that the calculation speed of the proposed PKA algorithm is faster than D-H, especially for large key lengths: its calculation time increases linearly as the SSK length increases, whereas that of D-H increases exponentially.
28

Wang, Yong, Guo You Wang, Ran Wang, Yuan Chun Xia, and Zhong Chen. "A Novel and Robust Algorithm for License Plate Location Using Perceptual Salient Features." Applied Mechanics and Materials 513-517 (February 2014): 1414–20. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.1414.

Abstract:
License plate location (LPL) plays an important role in license plate (LP) recognition systems. Most existing methods handle LPL well only in fixed environments, with stable illumination, simple backgrounds, or a constant distance between the cameras and vehicles, and their success rate is strongly affected by non-LP objects in the background that share features with license plate regions. To overcome these problems, we propose a novel LPL method using perceptual salient features, based on the visual attention mechanism, to choose the most salient features of a license plate. The proposed approach consists of two main steps. First, candidate license plate regions consistent with visual perception are detected based on perceptual salient features from the saliency map. Then, the candidate regions are sifted to distinguish a license plate from the background, using salient features selected and organized by minimizing the probability of error. The proposed algorithm was tested on 1942 real images captured in various conditions and achieved an average accuracy of 98.51%. The experimental results show that our algorithm is robust to interference from non-LP objects as well as to changes in the illumination, view angle, position, size and color of license plates.
29

Aliouat, Zibouda, and Saad Harous. "Energy efficient clustering for wireless sensor networks." International Journal of Pervasive Computing and Communications 10, no. 4 (October 28, 2014): 469–80. http://dx.doi.org/10.1108/ijpcc-05-2014-0033.

Abstract:
Purpose – The purpose of this paper is to design a hierarchical routing protocol. A wireless sensor network (WSN) consists of a set of miniature sensor nodes powered by low-capacity batteries. This limitation requires that energy be used efficiently and conserved as long as possible so the WSN can accomplish its mission; energy conservation is thus a very important problem facing researchers in this context. Because sending and receiving messages is the activity that consumes the most energy in a WSN, routing protocols target this problem specifically. Design/methodology/approach – The authors started by designing a protocol called efficient energy-aware distributed clustering (EEADC). Simulation results showed that EEADC may generate clusters that are very small or very large. To solve this problem, the authors designed a new algorithm called fixed efficient energy-aware distributed clustering (FEEADC). They concluded from the simulation results that cluster heads (CHs) far away from the base station die faster than the ones closer to it. To remedy this, they propose multi-hop fixed efficient energy-aware distributed clustering (M-FEEADC), based on a new fixed clustering mechanism that aims to create a balanced distribution of CHs and uses data aggregation and sleep/wake-up techniques. Findings – The simulation results show a significant improvement in energy consumption and network lifetime over the well-known low-energy adaptive clustering hierarchy and threshold-sensitive energy-efficient protocols. Originality/value – The authors propose M-FEEADC, based on a new fixed clustering mechanism that aims to create a balanced distribution of CHs and uses data aggregation and sleep/wake-up techniques.
30

Arestova, A. Yu, V. N. Ulyanov, and M. Yu Frolov. "The calculation algorithm of oil and gas production enterprise energy efficiency indicators." Power engineering: research, equipment, technology 23, no. 6 (March 29, 2022): 16–28. http://dx.doi.org/10.30724/1998-9903-2021-23-6-16-28.

Abstract:
THE PURPOSE. The main purpose of the study is to develop an algorithm for calculating energy efficiency indicators. Developing the algorithm requires considering the features of the power supply system of an oil and gas production enterprise. A power and electricity balance calculation algorithm is then designed to evaluate the consumption share of the technological processes, analyze losses, and determine sources of imbalance. Assessing the energy efficiency indicators requires adapting the parameter identification method for rotating electrical machines in operating modes, and calculating the specific equipment energy requires power tracing in the algorithm. The final stage is to develop a visualization of a software prototype for monitoring and controlling the energy efficiency indicators of an oil and gas production enterprise. METHODS. An equipment parameter identification method based on actual measurements is used to design the mathematical model. To calculate the steady state, the nodal-voltage method is used together with an addressing matrix for tracing power flows. RESULTS. The paper proposes an algorithm for calculating the energy efficiency indicators of equipment at an oil and gas production enterprise. The algorithm is based on processing and analyzing actual data from the electricity metering devices installed at the enterprise. Analyzing the power consumption information together with the oil production indicators reveals the most efficient operating mode of the equipment. The algorithm for calculating specific electricity consumption is illustrated on a typical hierarchy of metering devices at a field. The paper further discusses applying the targeted flow distribution principle to reduce power losses, and presents a parameter identification technique for multi-equipment nodes to assess energy efficiency and monitor operating parameters. CONCLUSION. The algorithm considered in the paper was introduced into the information system of the production control center at the field as an additional module for assessing energy efficiency. A large deviation of the indicator values from the specified ones signals violations of the equipment's normal mode. This makes it possible to create an optimal schedule of organizational and technical measures to regulate energy consumption and improve the energy efficiency of the enterprise.
31

Donovan, Laura, Bei Hopkins, Ben Draper, Rivani Shah, Kristin Roman, and Bethany Remeniuk. "924 The spatial hierarchy of primary and recurrent medulloblastoma tumor ecosystems." Journal for ImmunoTherapy of Cancer 9, Suppl 2 (November 2021): A969. http://dx.doi.org/10.1136/jitc-2021-sitc2021.924.

Abstract:
Background: Medulloblastoma recurrence occurs in approximately 30% of patients and is universally fatal, presenting one of the most significant unmet clinical challenges in pediatric oncology. It is now clear that medulloblastomas are complex ecosystems, evolving under selective pressure from their microenvironment and cell of origin. Different tumor-immune cell interactions, including, but not limited to, tumor-infiltrating lymphocytes and various tumor-suppressive myeloid cell populations, significantly hamper effective treatment strategies for primary, metastatic, and recurrent tumors. Recurrent medulloblastomas are rarely biopsied, making biological material for interrogation scarce. Research has assumed that recurrent and primary medulloblastoma tumors have a similar biological composition and will therefore respond to the same therapeutic regimens; however, therapies designed using primary biopsies, but tested in Phase I/II trials on children with recurrent disease, have been largely unsuccessful. We hypothesize that there are differences in select immunosuppressive populations between primary and recurrent tumor microenvironments (TME) that need to be elucidated in order to improve treatment modalities and outcomes in pediatric patients. Methods: Using Akoya's MOTiF PD-1/PD-L1 Panel: Auto Melanoma Kit, the staining protocol was adapted for optimal performance on human brain tissue. Twenty-four formalin-fixed, paraffin-embedded pediatric medulloblastoma samples (primary and recurrent biopsies from 12 patients) were then stained for the panel markers on the Leica BOND RX. Multispectral images were acquired using the Vectra Polaris, and five regions of interest were selected on each image. An analysis algorithm was developed using inForm tissue analysis software, and samples were batch processed and the data exported. Cell counts, densities, and spatial parameters were generated using the R-script package phenoptrReports to produce outputs of the image analysis data. Results: Following spectral unmixing and autofluorescence isolation, no signal crosstalk was observed. The average signal intensity counts for all markers were within the recommended range of 10-30, with a coefficient of variation of ≤15%, indicating successful and consistent staining of the medulloblastoma samples. Comparison between primary and recurrent tissues revealed distinctive spatial differences in immune-tumor cell interactions. Conclusions: We have demonstrated successful adaptation of the MOTiF PD-1/PD-L1 Melanoma panel kit in conjunction with the Phenoptics workflow to support examination of the TME in patient-matched primary and recurrent pediatric medulloblastoma tumor biopsies. Our study provides the first insight into the distinctive spatial interactions in primary versus recurrent tissues, which may improve strategies to understand cancer progression and immune surveillance, and ultimately support the development of rational, targeted therapeutics based on the differences between the tumor compartments and their immune microenvironment. Ethics approval: Ethical approval was obtained from Brain UK, ref: 20/008. All participants gave consent to the use of their material.
32

Arkin, Esther M., Henk Meijer, Joseph S. B. Mitchell, David Rappaport, and Steven S. Skiena. "Decision Trees for Geometric Models." International Journal of Computational Geometry & Applications 08, no. 03 (June 1998): 343–63. http://dx.doi.org/10.1142/s0218195998000175.

Abstract:
A fundamental problem in model-based computer vision is identifying which of a given set of geometric models is present in an image. Considering a "probe" to be an oracle that tells us whether or not a model is present at a given point, we study the problem of computing efficient strategies ("decision trees") for probing an image, with the goal of minimizing the number of probes necessary (in the worst case) to determine which single model is present. We show that a binary decision tree of height ⌈lg k⌉ always exists for k polygonal models (in fixed position), provided (1) they are non-degenerate (do not share boundaries) and (2) they share a common point of intersection. Further, we give an efficient algorithm for constructing such decision trees when the models are given as a set of polygons in the plane. We show that constructing a minimum-height tree is NP-complete if either of the two assumptions is omitted. We provide an efficient greedy heuristic and show that, in the general case, it yields a decision tree whose height is at most ⌈lg k⌉ times that of an optimal tree. Finally, we discuss some restricted cases whose special structure allows for improved results.
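
The ⌈lg k⌉ bound follows from halving: if each probe splits the surviving candidate set roughly in two, a binary tree of probes identifies the model in ⌈lg k⌉ probes. The sketch below replaces the planar geometry with sets of occupied grid cells and uses a greedy most-balanced-split rule in the spirit of the paper's heuristic; it is not the paper's geometric algorithm.

```python
# Identify which model is present by probing cells; each probe keeps only
# the candidates consistent with the oracle's answer.
import math

models = {                       # 4 hypothetical models as cell sets
    "A": {(0, 0), (0, 1), (1, 1)},
    "B": {(0, 0), (1, 0), (1, 1)},
    "C": {(0, 1), (1, 0)},
    "D": {(1, 1), (2, 2)},
}

def best_probe(cands):
    """Greedy rule: pick the cell whose membership split is most balanced."""
    cells = set().union(*(models[m] for m in cands))
    return min(cells, key=lambda c: abs(
        sum(c in models[m] for m in cands) - len(cands) / 2))

def identify(present):
    cands, probes = set(models), 0
    while len(cands) > 1:
        cell = best_probe(cands)
        hit = cell in models[present]            # oracle: probe the image
        cands = {m for m in cands if (cell in models[m]) == hit}
        probes += 1
    return cands.pop(), probes

name, used = identify("C")
print(name, used, "<=", math.ceil(math.log2(len(models))))
```
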
APA, Harvard, Vancouver, ISO, and other styles
33

Chowdhury, Mohammed Nowaz Rabbani, Jannatul Mawa, and Subrata Kumar Aditya. "Performance Evaluation of Spectrum Sharing Methods in WiMAX/WiFi Integrated Networks." Dhaka University Journal of Science 66, no. 2 (July 26, 2018): 103–7. http://dx.doi.org/10.3329/dujs.v66i2.54554.

Full text
Abstract:
Next-generation wireless communication systems are influenced by different wireless technologies, such as WiFi, WiMAX, Cellular, and LTE, each of which uses its own fixed allocated spectrum. However, the scarcity of spectrum resources is an important problem for future wireless networks. Hence, spectrum sharing between the existing wireless technologies has received much attention as a means of using spectrum resources efficiently, and an effective spectrum sharing method is needed. Most spectrum sharing methods are based on an optimization algorithm that provides an optimal sharing solution. In this paper, the performance of a spectrum sharing method based on a Genetic Algorithm (GA) is evaluated. The method was applied in a WiMAX/WiFi integrated network, where a WiFi system temporarily uses a spectrum band of the WiMAX system. A Linear Programming (LP) model was also developed as an alternative for spectrum sharing in the same network, and a comparative analysis shows that both methods perform well in terms of throughput (bits/sec), download response time (sec), and received traffic rate (bytes/sec). According to the simulation results, the Genetic Algorithm proves effective for spectrum sharing in the WiMAX/WiFi integrated network by reaching the global maximum very quickly.
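For intuition, here is a minimal, hypothetical GA of the kind the abstract describes: a binary genome marks which WiMAX sub-bands are temporarily lent to WiFi, and the fitness trades WiFi demand against WiMAX throughput loss. The fitness function, band count, and GA constants are invented for the sketch.

```python
# Toy genetic algorithm for spectrum sharing (illustrative, not the paper's model).
import random

N_BANDS, POP, GENS, MUT = 8, 30, 60, 0.05
demand = [random.uniform(0.2, 1.0) for _ in range(N_BANDS)]  # WiFi demand per band
cost = [random.uniform(0.1, 0.6) for _ in range(N_BANDS)]    # WiMAX throughput lost

def fitness(g):  # net throughput gain of a sharing assignment
    return sum(d - c for bit, d, c in zip(g, demand, cost) if bit)

def crossover(a, b):  # one-point crossover keeps genome length fixed
    cut = random.randrange(1, N_BANDS)
    return a[:cut] + b[cut:]

def mutate(g):  # flip each gene with probability MUT
    return [bit ^ (random.random() < MUT) for bit in g]

pop = [[random.randint(0, 1) for _ in range(N_BANDS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                       # keep the fitter half
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]

best = max(pop, key=fitness)
print("shared bands:", [i for i, bit in enumerate(best) if bit])
```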
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Wei, Yuanxi Yang, Anmin Zeng, and Yangyin Xu. "A GNSS/5G Integrated Three-Dimensional Positioning Scheme Based on D2D Communication." Remote Sensing 14, no. 6 (March 21, 2022): 1517. http://dx.doi.org/10.3390/rs14061517.

Full text
Abstract:
The fifth generation (5G) communication has the potential to achieve ubiquitous positioning when integrated with a global navigation satellite system (GNSS). Device-to-device (D2D) communication, a key technology in the 5G network, makes high-density cooperative positioning possible. The mobile users (MUs) collaborate to jointly share position and measurement information, so that more references can be used for positioning. In this paper, a GNSS/5G integrated three-dimensional positioning scheme based on D2D communication is proposed, in which time of arrival (TOA) and received signal strength (RSS) measurements are jointly utilized in the 5G network. Density-based spatial clustering of applications with noise (DBSCAN) is exploited to reduce the position uncertainty of the cooperative nodes, while the positions of the requesting nodes are obtained simultaneously. A particle filter (PF) algorithm is further applied to improve the position accuracy of the requesting nodes. Numerical results show that the position deviation of the cooperative nodes can be significantly decreased and that the proposed algorithm outperforms the nonintegrated one. DBSCAN improves positioning accuracy by about 50% compared with GNSS-only results, and the PF increases the accuracy by a further 8%. It is also verified that the algorithm suits both fixed and dynamic conditions well.
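The cooperative-node refinement step can be pictured with a short sketch using scikit-learn's DBSCAN: cluster redundant position estimates, keep the dominant cluster, and average it. The eps/min_samples values and the synthetic data are assumptions, not values from the paper.

```python
# Outlier-robust position refinement via DBSCAN (illustrative sketch).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
true_pos = np.array([10.0, 20.0, 1.5])
estimates = true_pos + rng.normal(0, 0.5, size=(40, 3))      # good GNSS/5G fixes
estimates = np.vstack([estimates,
                       true_pos + rng.normal(0, 8, size=(6, 3))])  # gross outliers

labels = DBSCAN(eps=1.2, min_samples=5).fit_predict(estimates)
core = max(set(labels) - {-1}, key=lambda L: (labels == L).sum())  # biggest cluster
refined = estimates[labels == core].mean(axis=0)                   # its centroid
print("refined position:", refined.round(2))
```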
APA, Harvard, Vancouver, ISO, and other styles
35

Bennett, Patrick, and Tom Bohman. "A natural barrier in random greedy hypergraph matching." Combinatorics, Probability and Computing 28, no. 06 (June 27, 2019): 816–25. http://dx.doi.org/10.1017/s0963548319000051.

Full text
Abstract:
Let r ⩾ 2 be a fixed constant and let H be an r-uniform, D-regular hypergraph on N vertices. Assume further that D → ∞ as N → ∞ and that the degrees of pairs of vertices in H are at most L, where L = D/(log N)^ω(1). We consider the random greedy algorithm for forming a matching in H: a matching is chosen at random by iteratively choosing edges uniformly at random to be in the matching and deleting all edges that share at least one vertex with a chosen edge before moving on to the next choice. This process terminates when there are no edges remaining in the graph. We show that with high probability the proportion of vertices of H that are not saturated by the final matching is at most (L/D)^(1/(2(r−1)) + o(1)). This point is a natural barrier in the analysis of the random greedy hypergraph matching process.
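The process itself is easy to simulate; the sketch below runs the random greedy matching on a random r-uniform hypergraph (shuffling the edge list and scanning greedily is equivalent to repeatedly drawing a uniform surviving edge). The random hypergraph here is an illustrative stand-in for the paper's D-regular setting.

```python
# Random greedy hypergraph matching simulation (illustrative sketch).
import random

def random_greedy_matching(edges):
    edges = list(edges)
    matching, used = [], set()
    random.shuffle(edges)            # uniform order == uniform next surviving edge
    for e in edges:
        if used.isdisjoint(e):       # e shares no vertex with the matching so far
            matching.append(e)
            used.update(e)
    return matching, used

r, n, m = 3, 300, 3000
edges = {tuple(random.sample(range(n), r)) for _ in range(m)}
matching, saturated = random_greedy_matching(edges)
print(f"matched edges: {len(matching)}, unsaturated share: {1 - len(saturated)/n:.3f}")
```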
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Wenhan, Jiewei Ji, Sheng Chang, Hao Wang, Jin He, and Qijun Huang. "EvoMBN: Evolving Multi-Branch Networks on Myocardial Infarction Diagnosis Using 12-Lead Electrocardiograms." Biosensors 12, no. 1 (December 29, 2021): 15. http://dx.doi.org/10.3390/bios12010015.

Full text
Abstract:
Multi-branch Networks (MBNs) have been successfully applied to myocardial infarction (MI) diagnosis using 12-lead electrocardiograms. However, most existing MBNs share a fixed architecture, and this absence of architecture optimization has become a significant obstacle to more accurate diagnosis. In this paper, an evolving neural network named EvoMBN is proposed for MI diagnosis. It utilizes a genetic algorithm (GA) to automatically learn optimal MBN architectures, and a novel fixed-length encoding method is proposed to represent each architecture. In addition, the crossover, mutation, selection, and fitness evaluation of the GA are defined to ensure the architecture can be optimized through evolutionary iterations. A novel Lead Squeeze and Excitation (LSE) block is designed to summarize features from all the branch networks; it consists of a fully-connected layer and an LSE mechanism that assigns weights to different leads. Five-fold inter-patient cross-validation experiments on MI detection and localization are performed using the PTB diagnostic database. Moreover, the model architecture learned from the PTB database is transferred to the PTB-XL database without any changes. Compared with existing studies, our EvoMBN shows superior generalization, and its flexible architecture is efficient enough for auxiliary MI diagnosis in real-world settings.
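A fixed-length encoding of this kind might look like the following hypothetical sketch, where each branch is described by a (layers, kernel, filters) gene so crossover and mutation preserve genome length. The menus and branch count are invented, and fitness evaluation (training the decoded network) is omitted.

```python
# Fixed-length genome for a multi-branch architecture (illustrative sketch).
import random

LAYERS, KERNELS, FILTERS = [2, 3, 4, 5], [3, 5, 7], [16, 32, 64]
N_BRANCHES = 4  # e.g. one branch per lead group; purely illustrative

def random_gene():
    return (random.choice(LAYERS), random.choice(KERNELS), random.choice(FILTERS))

def random_genome():
    return [random_gene() for _ in range(N_BRANCHES)]

def crossover(a, b):                 # one-point crossover, length-preserving
    cut = random.randrange(1, N_BRANCHES)
    return a[:cut] + b[cut:]

def mutate(g, rate=0.2):             # resample whole genes with probability `rate`
    return [random_gene() if random.random() < rate else gene for gene in g]

parent1, parent2 = random_genome(), random_genome()
child = mutate(crossover(parent1, parent2))
print(child)  # fitness would come from training/validating the decoded network
```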
APA, Harvard, Vancouver, ISO, and other styles
37

Brancaccio, Adriana, Giovanni Leone, Rocco Pierri, and Raffaele Solimene. "Experimental Validation of a Microwave Imaging Method for Shallow Buried Target Detection by Under-Sampled Data and a Non-Cooperative Source." Sensors 21, no. 15 (July 29, 2021): 5148. http://dx.doi.org/10.3390/s21155148.

Full text
Abstract:
In microwave imaging, it is often of interest to inspect electrically large spatial regions. In these cases, data must be collected over a great number of measurement points, which entails long measurement times and/or costly, and often unfeasible, measurement configurations. In order to counteract these drawbacks, we have recently introduced a microwave imaging algorithm that looks for the scattering targets in terms of equivalent surface currents supported over a given reference plane. Besides being suited to detecting shallowly buried targets, this method allows all frequency data to be processed independently, so the source and the receivers do not need to be synchronized. Moreover, spatial data can be reduced to a large extent, without any aliasing artifacts, by properly combining single-frequency reconstructions. In this paper, we validate this approach with experimental measurements. In particular, the experimental test site consists of a sand box in open air where metallic plate targets are shallowly buried a few centimeters under the air/soil interface. The investigated region is illuminated by a fixed transmitting horn antenna, whereas the scattered field is collected over a planar measurement aperture at a fixed height from the air-sand interface. The transmitter and the receiver share only the working-frequency information. Experimental results confirm the feasibility of the method.
APA, Harvard, Vancouver, ISO, and other styles
38

Shahsavari, Shahram, Nail Akar, and Babak Hossein Khalaj. "Joint Cell Muting and User Scheduling in Multicell Networks with Temporal Fairness." Wireless Communications and Mobile Computing 2018 (2018): 1–18. http://dx.doi.org/10.1155/2018/4846291.

Full text
Abstract:
A semicentralized joint cell muting and user scheduling scheme for interference coordination in a multicell network is proposed under two different temporal fairness criteria. In the proposed scheme, at a decision instant, each base station (BS) in the multicell network employs a cell-level scheduler to nominate one user for each of its inner and outer sections, together with their available transmission rates, to a network-level scheduler, which then computes the potential overall transmission rate for each muting pattern. Subsequently, the network-level scheduler selects one pattern to unmute out of all the available patterns. This decision is shared with all cell-level schedulers, which then forward data to one of the two nominated users, provided the pattern they reside in was chosen for transmission. Both user and pattern selection decisions are made on a temporally fair basis. Although some pattern sets are easily obtainable from static frequency reuse systems, we propose a general pattern set construction algorithm in this paper. Under the first fairness criterion, all cells receive the same temporal share, with the ratio between the temporal share of a cell center section and that of the cell edge section set to a fixed desired value for all cells. The second fairness criterion is based on max-min temporal fairness, for which the temporal share of the network-wide worst-case user is maximized. Extensive numerical results are provided to validate the effectiveness of the proposed schemes and to study the impact of the choice of pattern set.
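A toy version of temporally fair pattern selection might look like the sketch below: at each slot the network-level scheduler unmutes the pattern whose cells have the smallest accumulated airtime, breaking ties by potential rate. Patterns, cells, and rates are invented for illustration and do not reproduce the paper's scheduler.

```python
# Max-min-flavoured pattern selection over accumulated airtime (toy sketch).
patterns = {"P1": ["c1", "c2"], "P2": ["c2", "c3"], "P3": ["c1", "c3"]}
airtime = {c: 0 for c in ["c1", "c2", "c3"]}

def pick_pattern(rates):
    # favour the pattern serving the worst-served cell; tie-break on total rate
    def key(p):
        cells = patterns[p]
        return (min(airtime[c] for c in cells), -sum(rates[c] for c in cells))
    return min(patterns, key=key)

for slot in range(6):
    rates = {c: 1.0 for c in airtime}       # stand-in for nominated user rates
    chosen = pick_pattern(rates)
    for c in patterns[chosen]:              # cells in the unmuted pattern transmit
        airtime[c] += 1
    print(slot, chosen, airtime)
```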
APA, Harvard, Vancouver, ISO, and other styles
39

Jing, Peng. "Research on the Evaluation Method of University Bi-Entrepreneurship Curriculum Based on IoT Integrated with AHP Algorithm." Mobile Information Systems 2022 (October 4, 2022): 1–13. http://dx.doi.org/10.1155/2022/6364273.

Full text
Abstract:
The Internet of Things (IoT) is essential for the success and adoption of digital entrepreneurship in universities. IoT is the integration of electronic objects, peripherals, cities, and other items and equipment with embedded software, electronics, actuators, network connections, and sensors that allow these things to share and gather data. The use of IoT for university bi-entrepreneurship can level the playing field in many areas of the economy, creating options such as remote work at any time and from anywhere. Universities now play an essential part in the careers of students, and government departments provide considerable assistance in recruiting individuals and companies in a variety of ways, including funding and tax policies. However, many factors affect university students' innovation and entrepreneurship education. It is therefore necessary to establish a quality evaluation system for innovation and entrepreneurship education, which can guide students' targeted innovation and entrepreneurship by improving their comprehensive ability and employability. Accordingly, this paper constructs an evaluation index system for the quality of university students' innovation and entrepreneurship education, comprising several dimensions and sub-indicators. On this basis, the analytic hierarchy process (AHP) algorithm is used to construct the judgment matrix, and the ranking weight of each comparison element is calculated from it. The suggested system first constructs a hierarchical model to conceptualize the relationships between elements and then builds a judgment matrix. After establishing the judgment matrix, the calculation methods based on the characteristic root, the square root, and idempotent relative weights are systematically checked. The experimental results reveal that the proposed evaluation system has a strong impact on promoting university students' creativity, entrepreneurship, and employment.
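The AHP weight computation the abstract refers to is standard and easy to sketch: take the principal eigenvector of a reciprocal judgment matrix as the weight vector and check the consistency ratio. The judgments below are made up for illustration.

```python
# Minimal AHP sketch: principal-eigenvector weights plus a consistency check.
import numpy as np

A = np.array([[1,   3,   5 ],
              [1/3, 1,   2 ],
              [1/5, 1/2, 1 ]], dtype=float)   # reciprocal judgment matrix

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # normalized ranking weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)          # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
print("weights:", w.round(3), "CR:", round(CI / RI, 3))  # CR < 0.1 is acceptable
```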
APA, Harvard, Vancouver, ISO, and other styles
40

Ye, Z., Y. Xu, L. Hoegner, X. Tong, and U. Stilla. "PRECISE DISPARITY ESTIMATION FOR NARROW BASELINE STEREO BASED ON MULTISCALE SUPERPIXELS AND PHASE CORRELATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 4, 2019): 147–53. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-147-2019.

Full text
Abstract:
With the rapid development of subpixel matching algorithms, the estimation of image shifts with an accuracy higher than 0.05 pixels has been achieved, which makes narrow-baseline stereovision possible. Based on a subpixel matching algorithm using robust phase correlation (PC), in this work we present a novel hierarchical and adaptive disparity estimation scheme for narrow-baseline stereo, which consists of three main steps: image coregistration, pixel-level disparity estimation, and subpixel refinement. The Fourier-Mellin transform with subpixel PC is used to co-register the two input images. Then, the pixel-level disparities are estimated in an iterative manner through multiscale superpixels: pixel-level PC is performed with window sizes and locations adaptively determined according to the superpixels, and the disparity values are calculated. Fast weighted median filtering based on an edge-aware filter is adopted to refine the disparity results. Finally, the accurate disparities are calculated via a robust subpixel PC method. The combination of the multiscale superpixel hierarchy, adaptive determination of the correlation window size and location, fast weighted median filtering, and subpixel PC enables the proposed scheme to overcome the issues of both low-texture areas and the fattening effect. Experimental results on a pair of UAV images, together with a comparison against fixed-window PC methods, an iterative scheme with a fixed variation strategy, and a sophisticated implementation using global optimization, demonstrate the superiority and reliability of the proposed scheme.
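The phase-correlation core of such pipelines can be sketched in a few lines: estimate a shift from the normalized cross-power spectrum of two patches. Real systems add windowing, subpixel peak fitting, and the superpixel-adaptive windows described above; this minimal version only recovers integer shifts.

```python
# Plain phase correlation between two patches (illustrative sketch).
import numpy as np

def phase_correlation(a, b):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real           # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices to signed shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

img = np.random.rand(64, 64)
shifted = np.roll(np.roll(img, 3, axis=0), -5, axis=1)
print(phase_correlation(shifted, img))        # expect (3, -5)
```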
APA, Harvard, Vancouver, ISO, and other styles
41

Borodin, Kirill, and Nurlan Zhangabayuly Zhangabay. "Mechanical characteristics, as well as physical-and-chemical properties of the slag-filled concretes, and investigation of the predictive power of the metaheuristic approach." Curved and Layered Structures 6, no. 1 (January 1, 2019): 236–44. http://dx.doi.org/10.1515/cls-2019-0020.

Full text
Abstract:
Our article is devoted to the development and verification of a metaheuristic optimisation algorithm for selecting the compositional proportions of slag-filled concretes. The experimental selection of the various compositions and working modes that ensure one and the same fixed level of a basic property is a very labour-intensive and time-consuming process, and is often infeasible in practice, for example in cases where it is necessary to investigate composite materials with equal indicators of resistance to aggressive environments or with an equal share of voids in a certain range of dimensions. The many similar articles in the scientific literature confirm the topicality of this problem. In our article, we develop a methodology for the automated experimental-and-statistical determination of optimal compositions of slag-filled concretes. In order to optimise the search for local extremums of the complicated functions of the multi-factor analysis, we utilise a metaheuristic optimisation algorithm based on the concept of swarm intelligence. The motivation for using the swarm intelligence algorithm is that, unlike neural-network algorithms, it requires no training pattern and no learning procedure. We present the algorithm together with a procedure for its verification based on direct experimental testing.
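To give a flavor of the swarm approach, the sketch below runs a tiny particle swarm that searches mixture proportions reaching a fixed target property value; the quadratic "property model" is an invented stand-in for the experimental-statistical models fitted in the paper.

```python
# Toy particle swarm over two mixture proportions (illustrative sketch).
import random

def property_model(x):                 # assumed response surface (stand-in)
    slag, water = x
    return 40 - 30 * (slag - 0.45) ** 2 - 50 * (water - 0.35) ** 2

TARGET = 39.5
def cost(x): return abs(property_model(x) - TARGET)

N, STEPS = 20, 100
pts  = [[random.random(), random.random()] for _ in range(N)]
vel  = [[0.0, 0.0] for _ in range(N)]
best = [list(p) for p in pts]          # per-particle best positions
gbest = list(min(pts, key=cost))       # global best position

for _ in range(STEPS):
    for i, x in enumerate(pts):
        for d in range(2):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (best[i][d] - x[d])
                         + 1.5 * random.random() * (gbest[d] - x[d]))
            x[d] = min(1.0, max(0.0, x[d] + vel[i][d]))   # clamp to [0, 1]
        if cost(x) < cost(best[i]): best[i] = list(x)
        if cost(x) < cost(gbest):   gbest = list(x)

print("proportions:", [round(v, 3) for v in gbest],
      "property:", round(property_model(gbest), 2))
```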
APA, Harvard, Vancouver, ISO, and other styles
42

Fomin, Fedor V., Petr A. Golovach, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Meirav Zehavi. "Multiplicative Parameterization Above a Guarantee." ACM Transactions on Computation Theory 13, no. 3 (September 30, 2021): 1–16. http://dx.doi.org/10.1145/3460956.

Full text
Abstract:
Parameterization above a guarantee is a successful paradigm in Parameterized Complexity. To the best of our knowledge, all fixed-parameter tractable problems in this paradigm share an additive form defined as follows. Given an instance (I, k) of some (parameterized) problem Π with a guarantee g(I), decide whether I admits a solution of size at least (or at most) k + g(I). Here, g(I) is usually a lower bound on the minimum size of a solution. Since its introduction in 1999 for MAX SAT and MAX CUT (with g(I) being half the number of clauses and half the number of edges, respectively, in the input), analysis of parameterization above a guarantee has become a very active and fruitful topic of research. We highlight a multiplicative form of parameterization above (or, rather, times) a guarantee: given an instance (I, k) of some (parameterized) problem Π with a guarantee g(I), decide whether I admits a solution of size at least (or at most) k · g(I). In particular, we study the Long Cycle problem with a multiplicative parameterization above the girth g(I) of the input graph, which is the most natural guarantee for this problem, and provide a fixed-parameter algorithm. Apart from being of independent interest, this exemplifies how parameterization above a multiplicative guarantee can arise naturally. We also show that, for any fixed constant ε > 0, multiplicative parameterization above g(I)^(1+ε) of Long Cycle yields para-NP-hardness, so our parameterization is tight in this sense. We complement our main result with the design (or refutation of the existence) of fixed-parameter algorithms as well as kernelization algorithms for additional problems parameterized multiplicatively above girth.
APA, Harvard, Vancouver, ISO, and other styles
43

Yuan, Xinpan, Qunfeng Liu, Jun Long, Lei Hu, and Yulou Wang. "Deep Image Similarity Measurement Based on the Improved Triplet Network with Spatial Pyramid Pooling." Information 10, no. 4 (April 8, 2019): 129. http://dx.doi.org/10.3390/info10040129.

Full text
Abstract:
Image similarity measurement is a fundamental problem in the field of computer vision. It is widely used in image classification, object detection, image retrieval, and other fields, mostly through Siamese or triplet networks. These networks consist of two or three identical convolutional neural network (CNN) branches that share their weights to obtain high-level image feature representations, so that similar images are mapped close to each other in the feature space and dissimilar image pairs are mapped far apart. In particular, the triplet network is regarded as the state-of-the-art method for image similarity measurement. However, a basic CNN can only handle fixed-size images, and if a fixed-size image is obtained by cutting or scaling, information is lost and recognition accuracy is reduced. To solve this problem, this paper proposes the triplet spatial pyramid pooling network (TSPP-Net), which combines the triplet convolutional neural network with spatial pyramid pooling. Additionally, we propose an improved triplet loss function, so that the network model can realize two distance comparisons while inputting only three samples at a time. Theoretical analysis and experiments show that the TSPP-Net model and the improved triplet loss function improve the generalization ability and accuracy of the image similarity measurement algorithm.
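The architecture idea can be sketched as follows: a small embedding CNN ends in spatial pyramid pooling so inputs of any size produce a fixed-length vector, and a triplet margin loss pulls the anchor toward the positive and away from the negative. Layer sizes are illustrative, and torch.nn.TripletMarginLoss stands in for the paper's improved loss.

```python
# Triplet embedding with spatial pyramid pooling (illustrative sketch).
import torch
import torch.nn as nn

class SPPEmbed(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.levels = [1, 2, 4]                      # pyramid: 1 + 4 + 16 bins
        self.fc = nn.Linear(32 * sum(l * l for l in self.levels), 64)

    def forward(self, x):
        f = self.conv(x)
        pooled = [nn.functional.adaptive_max_pool2d(f, l).flatten(1)
                  for l in self.levels]              # size-independent pooling
        return self.fc(torch.cat(pooled, dim=1))     # fixed-length embedding

net, loss_fn = SPPEmbed(), nn.TripletMarginLoss(margin=1.0)
anchor = torch.randn(2, 3, 80, 80)
pos, neg = torch.randn(2, 3, 80, 80), torch.randn(2, 3, 80, 80)
loss = loss_fn(net(anchor), net(pos), net(neg))
loss.backward()
print(float(loss))
```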
APA, Harvard, Vancouver, ISO, and other styles
44

Alzarrad, M. Ammar, Gary P. Moynihan, Muhammad T. Hatamleh, and Siyuan Song. "Fuzzy Multicriteria Decision-Making Model for Time-Cost-Risk Trade-Off Optimization in Construction Projects." Advances in Civil Engineering 2019 (November 27, 2019): 1–7. http://dx.doi.org/10.1155/2019/7852301.

Full text
Abstract:
As is often the case in project scheduling, when the project duration is shortened to decrease total cost, total float is lost, resulting in additional critical or near-critical activities. This, in turn, decreases the probability of completing the project on time and increases the risk of schedule delays. To solve this problem, this research developed a fuzzy multicriteria decision-making (FMCDM) model. The objective of this model is to help project managers improve their decisions regarding time-cost-risk trade-offs (TCRTO) in construction projects. In this model, an optimization algorithm based on fuzzy logic and the analytic hierarchy process (AHP) is used to analyze the time-cost-risk trade-off alternatives and select the best one based on selected criteria. The algorithm was implemented in MATLAB and applied to two case studies to verify and validate the presented model. The FMCDM model can help produce a more reliable schedule and mitigate the risk of projects running over budget or behind schedule. Further, this model is a powerful decision-making instrument that helps managers reduce uncertainties and improve the accuracy of time-cost-risk trade-offs. The model employs fuzzy linguistic terms, which give decision-makers the opportunity to express their judgments as intervals rather than fixed values. In conclusion, the presented FMCDM model is highly robust and an attractive alternative to traditional methods for solving the time-cost-risk trade-off problem in construction.
APA, Harvard, Vancouver, ISO, and other styles
45

Omidshafiei, Shayegan, Dong-Ki Kim, Miao Liu, Gerald Tesauro, Matthew Riemer, Christopher Amato, Murray Campbell, and Jonathan P. How. "Learning to Teach in Cooperative Multiagent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6128–36. http://dx.doi.org/10.1609/aaai.v33i01.33016128.

Full text
Abstract:
Collective human knowledge has clearly benefited from the fact that innovations by individuals are taught to others through communication. Similar to human social groups, agents in distributed learning systems would likely benefit from communication to share knowledge and teach skills. The problem of teaching to improve agent learning has been investigated by prior works, but these approaches make assumptions that prevent application of teaching to general multiagent problems, or require domain expertise for problems they can apply to. This learning to teach problem has inherent complexities related to measuring long-term impacts of teaching that compound the standard multiagent coordination challenges. In contrast to existing works, this paper presents the first general framework and algorithm for intelligent agents to learn to teach in a multiagent environment. Our algorithm, Learning to Coordinate and Teach Reinforcement (LeCTR), addresses peer-to-peer teaching in cooperative multiagent reinforcement learning. Each agent in our approach learns both when and what to advise, then uses the received advice to improve local learning. Importantly, these roles are not fixed; these agents learn to assume the role of student and/or teacher at the appropriate moments, requesting and providing advice in order to improve teamwide performance and learning. Empirical comparisons against state-of-the-art teaching methods show that our teaching agents not only learn significantly faster, but also learn to coordinate in tasks where existing methods fail.
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Junshuang, Richong Zhang, Yongyi Mao, and Junfan Chen. "On Scalar Embedding of Relative Positions in Attention Models." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14050–57. http://dx.doi.org/10.1609/aaai.v35i16.17654.

Full text
Abstract:
Attention with positional encoding has been demonstrated to be a powerful component in modern neural network models such as transformers. However, why positional encoding works well in attention models remains largely unanswered. In this paper, we study the scalar relative positional encoding (SRPE) proposed in the T5 transformer. This encoding method has two features. First, it uses a scalar to embed relative positions. Second, the relative positions are bucketized using a fixed heuristic algorithm, and positions in the same bucket share the same embedding. In this work, we show that SRPE in attention has an elegant probabilistic interpretation: the positional encoding serves to produce a prior distribution for the attended positions, and the resulting attentive distribution can be viewed as a posterior distribution of the attended position given the observed input sequence. Furthermore, we propose a new SRPE (AT5) that adopts a learnable bucketization protocol and automatically adapts to the dependency range specific to the learning task. Empirical studies show that AT5 achieves superior performance to T5's SRPE.
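T5-style bucketization can be sketched as below: small offsets get exact buckets and larger offsets share log-spaced ones, so every position in a bucket shares one scalar embedding. The constants mirror T5's defaults as we recall them and should be treated as illustrative rather than a faithful reimplementation.

```python
# T5-style relative position bucketing (illustrative sketch, from memory).
import math

def relative_position_bucket(rel_pos, num_buckets=32, max_distance=128):
    # bidirectional: first half of buckets for one sign, second half for the other
    half = num_buckets // 2
    bucket = half if rel_pos < 0 else 0
    n = abs(rel_pos)
    exact = half // 2                      # small offsets: one bucket each
    if n < exact:
        return bucket + n
    # larger offsets: logarithmically spaced buckets up to max_distance
    log_ratio = math.log(n / exact) / math.log(max_distance / exact)
    return bucket + min(half - 1, exact + int(log_ratio * (half - exact)))

for p in [0, 1, 3, 8, 40, 200, -1, -40]:
    print(p, "->", relative_position_bucket(p))
```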
APA, Harvard, Vancouver, ISO, and other styles
47

Lin, Qinghong, Xiaojun Chen, Qin Zhang, Shaotian Cai, Wenzhe Zhao, and Hongfa Wang. "Deep Unsupervised Hashing with Latent Semantic Components." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7488–96. http://dx.doi.org/10.1609/aaai.v36i7.20713.

Full text
Abstract:
Deep unsupervised hashing has been appreciated in the regime of image retrieval. However, most prior arts fail to detect the semantic components and their relationships behind the images, which makes them lack discriminative power. To make up for this defect, we propose a novel Deep Semantic Components Hashing (DSCH), which builds on the common-sense observation that an image normally contains a bunch of semantic components with homology and co-occurrence relationships. Based on this prior, DSCH regards the semantic components as latent variables under the Expectation-Maximization framework and designs a two-step iterative algorithm with the objective of maximizing the likelihood of the training data. First, DSCH constructs a semantic component structure by uncovering the fine-grained semantic components of images with a Gaussian Mixture Model (GMM), where an image is represented as a mixture of multiple components and the semantics co-occurrence is exploited. Coarse-grained semantic components are then discovered by considering the homology relationships between fine-grained components, and the hierarchical organization is constructed. Second, DSCH makes images close to their semantic component centers at both the fine-grained and coarse-grained levels, and also makes images that share similar semantic components close to each other. Extensive experiments on three benchmark datasets demonstrate that the proposed hierarchical semantic components indeed help the hashing model achieve superior performance.
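The component-discovery step can be pictured with scikit-learn: fit a GMM over image embeddings so each image becomes a mixture over fine-grained components, then group component means into coarse components. Dimensions, counts, and the synthetic embeddings are assumptions, not the paper's pipeline.

```python
# GMM-based semantic component discovery with a coarse grouping (sketch).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(500, 32))            # stand-in image features

fine = GaussianMixture(n_components=16, random_state=0).fit(embeddings)
resp = fine.predict_proba(embeddings)              # image = mixture of components

# group fine-grained component means into coarse-grained components
coarse = AgglomerativeClustering(n_clusters=4).fit_predict(fine.means_)
print("fine->coarse map:", dict(enumerate(coarse)))
print("one image's top components:", np.argsort(resp[0])[-3:])
```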
APA, Harvard, Vancouver, ISO, and other styles
48

Agarwal, Praveen, Amandeep Singh, and Adem Kilicman. "Development of key-dependent dynamic S-Boxes with dynamic irreducible polynomial and affine constant." Advances in Mechanical Engineering 10, no. 7 (July 2018): 168781401878163. http://dx.doi.org/10.1177/1687814018781638.

Full text
Abstract:
In recent years, the need to develop advanced information technology systems in the area of mechanical engineering has been growing continuously to deliver better product quality at reduced cost. Embedding electronics and software in machines transforms them into smart machines, commonly called mechatronic systems. These software-oriented machines collect big data using sensors and other electronics and share it with other smart machines, which further helps them in controlling manufacturing processes, decision-making, and even maintenance. Machine-to-machine sharing of data involves the risk of data theft or modification, which may disrupt the manufacturing process and lead to poor product quality. For data security, block ciphers such as the Advanced Encryption Standard (AES) are needed. AES is an encryption algorithm widely used by organizations to protect sensitive information. Its core is the non-linear component, the S-Box (also called the substitution table), which provides the algorithm's confusion capability. The S-Box, which is fixed in AES, is the main point of interest for cryptanalysts and hackers, who exploit this weakness, and many researchers have tried to modify the S-Box using different techniques. In this article, we create dynamic, key-dependent S-Boxes that use a dynamic irreducible polynomial and affine constant.
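The standard construction behind such S-Boxes, inversion in GF(2^8) followed by an affine transform, is easy to parameterize by the irreducible polynomial and the affine constant, which are exactly the knobs a key-dependent scheme would turn. A sketch follows; with poly=0x11B and constant 0x63 it reproduces the fixed AES S-Box.

```python
# Parameterized AES-style S-Box generation (illustrative sketch).
def gf_mul(a, b, poly):
    """Multiply in GF(2^8) modulo the given irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:        # reduce when degree reaches 8
            a ^= poly
        b >>= 1
    return r

def gf_inv(a, poly):
    """Multiplicative inverse by exhaustive search; 0 maps to 0."""
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gf_mul(a, x, poly) == 1)

def affine(a, c):
    """AES affine transform: b_i = a_i ^ a_(i+4) ^ a_(i+5) ^ a_(i+6) ^ a_(i+7) ^ c_i."""
    r = 0
    for i in range(8):
        bit = ((a >> i) ^ (a >> ((i + 4) % 8)) ^ (a >> ((i + 5) % 8))
               ^ (a >> ((i + 6) % 8)) ^ (a >> ((i + 7) % 8)) ^ (c >> i)) & 1
        r |= bit << i
    return r

def make_sbox(poly=0x11B, c=0x63):
    # poly must be an irreducible degree-8 polynomial for inverses to exist;
    # a key schedule could derive (poly, c) from the key to make the box dynamic
    return [affine(gf_inv(x, poly), c) for x in range(256)]

S = make_sbox()
print(hex(S[0x00]), hex(S[0x01]), hex(S[0x53]))  # AES values: 0x63 0x7c 0xed
```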
APA, Harvard, Vancouver, ISO, and other styles
49

Shakya, Dr Subarna. "Performance Analysis of Wind Turbine Monitoring Mechanism Using Integrated Classification and Optimization Techniques." March 2020 2, no. 1 (March 20, 2020): 31–41. http://dx.doi.org/10.36548/jaicn.2020.1.004.

Full text
Abstract:
Advances in the techniques used for wind-based energy generation have remarkably reduced capital investment and operating costs. This has enhanced the prominence of wind farms worldwide and raised the market share of wind-generated energy, which in turn increases the need for capable, cost-effective monitoring mechanisms that regularly report the condition of the wind turbines and help in the early diagnosis of any fault. To achieve accurate monitoring with minimized maintenance cost, this paper integrates a Support Vector Machine (SVM) with the Cuckoo Search Optimization (CSO) algorithm. The combined approach is validated in MATLAB on the gain-factor and fixed-value types of faults that are liable to occur in wind turbines, and the results are compared with existing methods such as SVM-PSO and K-NN. The results show that the SVM-based CSO predicts the fault models more accurately than the other existing models.
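A toy cuckoo-search loop for SVM hyperparameter tuning might look like the sketch below, with heavy-tailed Cauchy steps approximating Lévy flights; the dataset and all constants are illustrative, not the paper's turbine data or exact CSO variant.

```python
# Cuckoo-search-style tuning of SVM hyperparameters (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)

def fitness(nest):                        # mean CV accuracy of decoded params
    C, gamma = 10.0 ** nest
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

N, STEPS, PA = 8, 20, 0.25                # nests, iterations, abandon rate
nests = rng.uniform(-2, 2, size=(N, 2))   # genome: log10(C), log10(gamma)
scores = np.array([fitness(n) for n in nests])

for _ in range(STEPS):
    i = rng.integers(N)                   # generate a cuckoo by a Lévy-like step
    new = np.clip(nests[i] + 0.3 * rng.standard_cauchy(2), -2, 2)
    j = rng.integers(N)                   # replace a random nest if better
    if (s := fitness(new)) > scores[j]:
        nests[j], scores[j] = new, s
    for k in np.where(rng.random(N) < PA)[0]:   # abandon a fraction of nests
        if k != np.argmax(scores):              # always keep the best nest
            nests[k] = rng.uniform(-2, 2, 2)
            scores[k] = fitness(nests[k])

best = nests[np.argmax(scores)]
print("best C, gamma:", (10.0 ** best).round(4), "accuracy:", scores.max().round(3))
```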
APA, Harvard, Vancouver, ISO, and other styles
50

Marhavilas, Panagiotis K., Michael G. Tegas, Georgios K. Koulinas, and Dimitrios E. Koulouriotis. "A Joint Stochastic/Deterministic Process with Multi-Objective Decision Making Risk-Assessment Framework for Sustainable Constructions Engineering Projects—A Case Study." Sustainability 12, no. 10 (May 23, 2020): 4280. http://dx.doi.org/10.3390/su12104280.

Full text
Abstract:
This study, on the one hand, develops a newfangled risk assessment and analysis (RAA) methodological approach (the MCDM-STO/DET one) for sustainable engineering projects by amalgamating a multicriteria decision-making (MCDM) process with a joint deterministic (DET) and stochastic (STO) process. On the other hand, it applies MCDM-STO/DET to the workplaces of the Greek construction sector and to the fixed-telecommunications technical projects of OTE SA (the Greek Telecommunications Organization S.A.), using real accident data from two official state databases, namely "SEPE" (Labor Inspectorate, Hellenic Ministry of Employment) and "IKA" (Social Insurance Institution, Hellenic Ministry of Health), covering the years 2009–2016. Consequently, the article's objectives are the following: (i) the implementation and execution of the joint MCDM-STO/DET framework, and (ii) to show that the proposed MCDM-STO/DET algorithm can be a precious method for safety managers (and/or decision-makers) to ameliorate occupational safety and health (OSH) and to endorse the sustainable operation of technical or engineering projects. Mainly, we mingle two different configurations of the MCDM method, first the typical Analytical Hierarchy Process (AHP) and then the Fuzzy-Extended AHP (FEAHP), with the Proportional Risk Assessment Technique (PRAT), the analysis of Time-Series Processes (TSP), and finally the Fault-Tree Analysis (FTA).
APA, Harvard, Vancouver, ISO, and other styles