Journal articles on the topic 'Index coding problem'

To see the other types of publications on this topic, follow the link: Index coding problem.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Index coding problem.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Thomas, Anoop, and Balaji Sundar Rajan. "Generalized Index Coding Problem and Discrete Polymatroids." Entropy 22, no. 6 (June 10, 2020): 646. http://dx.doi.org/10.3390/e22060646.

Abstract:
The connections between index coding and matroid theory have been well studied in the recent past. Index coding solutions were first connected to multilinear representations of matroids. For vector linear index codes, discrete polymatroids, which can be viewed as a generalization of matroids, were used. The index coding problem has recently been generalized to accommodate receivers that demand functions of messages and possess functions of messages. In this work, we explore the connections between generalized index coding and discrete polymatroids. The conditions that need to be satisfied by a representable discrete polymatroid for a generalized index coding problem to have a vector linear solution are established. From a discrete polymatroid, an index coding problem with coded side information is constructed, and it is shown that if the index coding problem has a certain optimal length solution then the discrete polymatroid is representable. If the generalized index coding problem is constructed from a matroid, it is shown that the index coding problem has a binary scalar linear solution of optimal length if and only if the matroid is binary representable.
2

Pedrosa, Valéria G., and Max H. M. Costa. "Index Coding with Multiple Interpretations." Entropy 24, no. 8 (August 18, 2022): 1149. http://dx.doi.org/10.3390/e24081149.

Abstract:
The index coding problem consists of a system with a server and multiple receivers with different side information and demand sets, connected by a noiseless broadcast channel. The server knows the side information available to the receivers. The objective is to design an encoding scheme that enables all receivers to decode their demanded messages with a minimum number of transmissions, referred to as an index code length. The problem of finding the minimum length index code that enables all receivers to correct a specific number of errors has also been studied. This work establishes a connection between index coding and error-correcting codes with multiple interpretations from the tree construction of nested cyclic codes. The notion of multiple interpretations using nested codes is as follows: different data packets are independently encoded, and then combined by addition and transmitted as a single codeword, minimizing the number of channel uses and offering error protection. The resulting packet can be decoded and interpreted in different ways, increasing the error correction capability, depending on the amount of side information available at each receiver. Motivating applications are network downlink transmissions, information retrieval from datacenters, cache management, and sensor networks.
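To make the addition-and-cancellation idea concrete, here is a toy sketch (ours, not the paper's nested cyclic code construction) with placeholder generator matrices over GF(2): two packets are encoded independently, XOR-combined into one broadcast, and a receiver that knows packet 1 cancels it to recover packet 2's codeword intact.

```python
# Toy "multiple interpretations" demo: G1 and G2 are arbitrary illustrative
# generator matrices, not the nested cyclic codes used in the paper.
import numpy as np

G1 = np.array([[1, 0, 1, 1, 0, 0], [0, 1, 0, 1, 1, 0]])  # code for packet 1
G2 = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]])  # code for packet 2

p1, p2 = np.array([1, 0]), np.array([1, 1])
tx = (p1 @ G1 + p2 @ G2) % 2              # one broadcast codeword for both packets

# A receiver holding p1 as side information re-encodes and cancels it:
residual = (tx + p1 @ G1) % 2             # XOR removes packet 1's codeword
assert (residual == (p2 @ G2) % 2).all()  # what remains is exactly p2's codeword
```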
3

Thapa, Chandra, Lawrence Ong, Sarah Johnson, and Min Li. "Structural Characteristics of Two-Sender Index Coding." Entropy 21, no. 6 (June 21, 2019): 615. http://dx.doi.org/10.3390/e21060615.

Abstract:
This paper studies index coding with two senders. In this setup, source messages are distributed among the senders, possibly with common messages. In addition, there are multiple receivers, each having some messages a priori, known as side-information, and requesting one unique message such that each message is requested by only one receiver. Index coding in this setup is called two-sender unicast index coding (TSUIC). The main goal is to find the shortest aggregate normalized codelength, which is expressed as the optimal broadcast rate. In this work, we first form, for a given TSUIC problem, three independent sub-problems, each consisting of only a subset of the messages, based on whether the messages are available at only one of the senders or at both. Then, we express the optimal broadcast rate of the TSUIC problem as a function of the optimal broadcast rates of those independent sub-problems. In this way, we discover the structural characteristics of TSUIC. For the proofs of our results, we utilize confusion graphs and coding techniques used in single-sender index coding. To adapt the confusion graph technique to TSUIC, we introduce a new graph-coloring approach, different from normal graph coloring, which we call two-sender graph coloring, and propose a way of grouping the vertices to analyze the number of colors used. We further determine a class of TSUIC instances where a certain type of side-information can be removed without affecting their optimal broadcast rates. Finally, we generalize the results of a class of TSUIC problems to multiple senders.
4

Reddy, Kota Srinivas, and Nikhil Karamchandani. "Structured Index Coding Problem and Multi-Access Coded Caching." IEEE Journal on Selected Areas in Information Theory 2, no. 4 (December 2021): 1266–81. http://dx.doi.org/10.1109/jsait.2021.3126663.

5

El Rouayheb, Salim, Alex Sprintson, and Costas Georghiades. "On the Index Coding Problem and Its Relation to Network Coding and Matroid Theory." IEEE Transactions on Information Theory 56, no. 7 (July 2010): 3187–95. http://dx.doi.org/10.1109/tit.2010.2048502.

6

Chlamtáč, Eden, and Ishay Haviv. "Linear Index Coding via Semidefinite Programming." Combinatorics, Probability and Computing 23, no. 2 (November 29, 2013): 223–47. http://dx.doi.org/10.1017/s0963548313000564.

Abstract:
In the index coding problem, introduced by Birk and Kol (INFOCOM, 1998), the goal is to broadcast an n-bit word to n receivers (one bit per receiver), where the receivers have side information represented by a graph G. The objective is to minimize the length of a codeword sent to all receivers which allows each receiver to learn its bit. For linear index coding, the minimum possible length is known to be equal to a graph parameter called minrank (Bar-Yossef, Birk, Jayram and Kol, IEEE Trans. Inform. Theory, 2011). We show a polynomial-time algorithm that, given an n-vertex graph G with minrank k, finds a linear index code for G of length Õ(n^{f(k)}), where f(k) depends only on k. For example, for k = 3 we obtain f(3) ≈ 0.2574. Our algorithm employs a semidefinite program (SDP) introduced by Karger, Motwani and Sudan for graph colouring (J. Assoc. Comput. Mach., 1998) and its refined analysis due to Arora, Chlamtac and Charikar (STOC, 2006). Since the SDP we use is not a relaxation of the minimization problem we consider, a crucial component of our analysis is an upper bound on the objective value of the SDP in terms of the minrank. At the heart of our analysis lies a combinatorial result which may be of independent interest. Namely, we show an exact expression for the maximum possible value of the Lovász ϑ-function of a graph with minrank k. This yields a tight gap between two classical upper bounds on the Shannon capacity of a graph.
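As a concrete illustration of the minrank parameter (a brute-force toy of ours, not the authors' SDP algorithm), the following sketch computes the GF(2) minrank of a small side-information graph; for the 5-cycle it returns 3.

```python
# Brute-force GF(2) minrank: diagonal entries are forced to 1, and entry (i, j)
# may be nonzero only if receiver i has bit j as side information. Exponential
# in the number of free entries, so only sensible for tiny graphs.
from itertools import product

def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    rank, rows = 0, list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot                      # lowest set bit of the pivot
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def minrank(side_info):
    """side_info[i] = set of receivers whose bits receiver i already knows."""
    n = len(side_info)
    free = [(i, j) for i in range(n) for j in side_info[i] if i != j]
    best = n
    for bits in product([0, 1], repeat=len(free)):
        rows = [1 << i for i in range(n)]         # ones on the diagonal
        for (i, j), b in zip(free, bits):
            rows[i] |= b << j
        best = min(best, gf2_rank(rows))
    return best

pentagon = [{1, 4}, {0, 2}, {1, 3}, {2, 4}, {0, 3}]  # 5-cycle side information
print(minrank(pentagon))  # -> 3 broadcast bits suffice for the pentagon
```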
7

Mikhaylov, Slava, Michael Laver, and Kenneth R. Benoit. "Coder Reliability and Misclassification in the Human Coding of Party Manifestos." Political Analysis 20, no. 1 (2012): 78–91. http://dx.doi.org/10.1093/pan/mpr047.

Abstract:
The Comparative Manifesto Project (CMP) provides the only time series of estimated party policy positions in political science and has been extensively used in a wide variety of applications. Recent work (e.g., Benoit, Laver, and Mikhaylov 2009; Klingemann et al. 2006) focuses on nonsystematic sources of error in these estimates that arise from the text generation process. Our concern here, by contrast, is with error that arises during the text coding process since nearly all manifestos are coded only once by a single coder. First, we discuss reliability and misclassification in the context of hand-coded content analysis methods. Second, we report results of a coding experiment that used trained human coders to code sample manifestos provided by the CMP, allowing us to estimate the reliability of both coders and coding categories. Third, we compare our test codings to the published CMP “gold standard” codings of the test documents to assess accuracy and produce empirical estimates of a misclassification matrix for each coding category. Finally, we demonstrate the effect of coding misclassification on the CMP's most widely used index, its left-right scale. Our findings indicate that misclassification is a serious and systemic problem with the current CMP data set and coding process, suggesting the CMP scheme should be significantly simplified to address reliability issues.
8

Shigei, Noritaka, Hiromi Miyajima, Michiharu Maeda, and Lixin Ma. "Effective Multiple Vector Quantization for Image Compression." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 10 (December 20, 2007): 1189–96. http://dx.doi.org/10.20965/jaciii.2007.p1189.

Abstract:
Multiple-VQ methods generate multiple independent codebooks to compress an image using a neural network algorithm. In image restoration, the methods restore low-quality images from the multiple codebooks and then combine the low-quality images into a single high-quality one. However, a naive implementation of these methods increases the compressed data size too much. This paper proposes two techniques to address this problem: "index inference" and "ranking based index coding." It is shown that index inference and ranking based index coding are effective for smaller and larger codebook sizes, respectively.
9

Zhu, Yongjia, Yuyao He, Ye Fan, and Rugui Yao. "Protection scheme of subcarrier index in OFDM with index modulation aided by LDPC coding." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no. 4 (August 2021): 818–23. http://dx.doi.org/10.1051/jnwpu/20213940818.

Abstract:
The receiver of OFDM with Index Modulation (OFDM-IM) usually adopts a Log Likelihood Ratio (LLR) detection algorithm based on the activation state of subcarriers. However, the LLR detection algorithm can produce detection errors in the subcarrier activation pattern (SAP) or yield an illegal SAP. Consequently, further errors occur in demodulation, increasing the bit error rate (BER). To solve this problem, we propose a protection scheme for the subcarrier index aided by LDPC coding, which reduces SAP detection errors by encoding the index information bits. At the receiver, an LDPC Coding Aided (LA) detection algorithm is designed, and the formula for the LLR of the index information bits is derived in detail. Monte Carlo simulation is carried out over a multi-path fading channel in MATLAB. The results show that, provided the spectral efficiency is not lower than that of the classical OFDM-IM scheme, the proposed protection scheme can obtain a gain of about 5-9 dB at a BER of 10^-4, effectively improving the BER performance of the OFDM-IM scheme.
10

Nitsuwat, S., and W. Paoin. "Development of ICD-10-TM Ontology for a Semi-automated Morbidity Coding System in Thailand." Methods of Information in Medicine 51, no. 06 (2012): 519–28. http://dx.doi.org/10.3414/me11-02-0024.

Abstract:
Objectives: The International Classification of Diseases and Related Health Problems, 10th Revision, Thai Modification (ICD-10-TM) ontology is a knowledge base created from the Thai modification of the World Health Organization International Classification of Diseases and Related Health Problems, 10th Revision. The objectives of this research were to develop the ICD-10-TM ontology as a knowledge base for use in a semi-automated ICD coding system and to test the usability of this system. Methods: ICD concepts and relations were identified from a tabular list and alphabetical indexes. An ICD-10-TM ontology was defined in the resource description framework (RDF), notation-3 (N3) format. All ICD-10-TM contents available as Microsoft Word documents were transformed into N3 format using Python scripts. Final RDF files were validated by ICD experts. The ontology was implemented as a knowledge base in a novel semi-automated ICD coding system. Usability was evaluated by a survey of forty volunteer users. Results: The ICD-10-TM ontology consists of two main knowledge bases (a tabular list knowledge base and an index knowledge base) containing a total of 309,985 concepts and 162,092 relations. The tabular list knowledge base can be divided into an upper-level ontology, which defines hierarchical relationships between 22 ICD chapters, and a lower-level ontology, which defines relations between chapters, blocks, categories, rubrics and basic elements (include, exclude, synonym, etc.) of the ICD tabular list. The index knowledge base describes relations between keywords and modifiers in the general format and table format of the ICD index. In this research, the creation of an ICD index ontology revealed interesting findings on problems with the current ICD index structure. One problem with the current structure is that it defines conditions that complicate pregnancy and perinatal conditions on the same hierarchical level as organ system diseases. This could mislead a coding algorithm into a wrong selection of ICD code. To prevent such coding errors, the ICD-10-TM index structure was modified by raising conditions complicating pregnancy and perinatal conditions to a higher hierarchical level of the index knowledge base. The modified ICD-10-TM ontology was implemented as a knowledge base in semi-automated ICD-10-TM coding software. A survey of users of the software revealed a high percentage of correct results obtained from ontology searches (>95%) and user satisfaction with the usability of the ontology. Conclusion: The ICD-10-TM ontology is the first ICD-10 ontology with a comprehensive description of all concepts and relations in an ICD-10-TM tabular list and alphabetical index. A researcher developing an automated ICD coding system should be aware of the ICD index structure and the complexity of the coding process, which is not a simple word-matching task. The ICD-10 ontology should be used as a knowledge base in ICD coding software. It can facilitate successful implementation of ICD in developing countries, especially those which do not have an adequate number of competent ICD coders.
11

Zeng, Li, and Keke Guo. "Virtual Reality Software and Data Processing Algorithms Packaged Online for Videos." Mobile Information Systems 2022 (July 4, 2022): 1–6. http://dx.doi.org/10.1155/2022/2148742.

Abstract:
To address the problem of virtual reality and data processing algorithms for online video packaging, a transmission scheme uses tiles in HEVC to partition the video and then applies MP4Box to package it and generate a DASH video stream. A method is proposed to process the same panoramic video at different qualities. By designing a new index to measure the complexity of the coding tree unit, this method predicts the depth of the coding tree unit using the complexity index and the spatial correlation of the video, skipping unnecessary traversal ranges and realizing fast division of coding units. Experimental results show that, compared with the latest HM16.20 reference model, the proposed algorithm can reduce the coding time by 37.25% while the BD-rate increases by only 0.74%, with almost no loss in video image quality.
12

Sageer Karat, Nujoom, Anoop Thomas, and Balaji Sundar Rajan. "Optimal Linear Error Correcting Delivery Schemes for Two Optimal Coded Caching Schemes." Entropy 22, no. 7 (July 13, 2020): 766. http://dx.doi.org/10.3390/e22070766.

Abstract:
For coded caching problems with small buffer sizes and with the number of users no less than the number of files in the server, an optimal delivery scheme was proposed by Chen, Fan, and Letaief in 2016. This scheme is referred to as the CFL scheme. In this paper, an extension of the coded caching problem in which the link between the server and the users is error-prone is considered. Closed-form expressions for the average rate and peak rate of an error-correcting delivery scheme are found for the CFL prefetching scheme using techniques from index coding. Using results from error-correcting index coding, an optimal linear error-correcting delivery scheme is proposed for caching problems employing CFL prefetching. Another scheme, with a lower sub-packetization requirement than the CFL scheme for the same cache memory size, was considered by J. Gomez-Vilardebo in 2018; an optimal linear error-correcting delivery scheme is also proposed for this scheme.
13

Huang, Yujie, Hafiz Mutee-ur-Rehman, Saima Nazeer, Deeba Afzal, and Xiaoli Qiang. "Some Bounds of Weighted Entropies with Augmented Zagreb Index Edge Weights." Discrete Dynamics in Nature and Society 2020 (August 17, 2020): 1–12. http://dx.doi.org/10.1155/2020/3562382.

Abstract:
Graph entropy was proposed by Körner in 1973 while he was studying a coding problem in information theory. The foundation of graph entropy is in information theory, but it has proved to be closely related to several classical and well-studied graph-theoretic concepts. For instance, it provides an equivalent characterization of perfect graphs, and it can also be applied to obtain lower bounds in graph covering problems. The objective of this study is to solve the open problem suggested by Kwun et al. in 2018. In this paper, we study the weighted graph entropy with augmented Zagreb edge weights and give bounds on it for regular, connected, bipartite, chemical, unicyclic, etc., graphs. Moreover, we compute the weighted graph entropy of certain nanotubes and plot our results to see the dependence of the weighted entropy on the involved parameters.
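For orientation, the augmented Zagreb edge weight and the edge-weighted graph entropy studied in work of this kind are usually defined as follows (standard forms; the paper's normalization may differ):

```latex
AZI(G) \;=\; \sum_{uv \in E(G)} \left( \frac{d_u d_v}{d_u + d_v - 2} \right)^{3},
\qquad
I_w(G) \;=\; -\sum_{uv \in E(G)} p_{uv} \log p_{uv},
\quad p_{uv} = \frac{w(uv)}{\sum_{e \in E(G)} w(e)}
```

with w(uv) = (d_u d_v / (d_u + d_v − 2))^3 and d_u the degree of vertex u.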
14

Ong, Lawrence, Chin Keong Ho, and Fabian Lim. "The Single-Uniprior Index-Coding Problem: The Single-Sender Case and the Multi-Sender Extension." IEEE Transactions on Information Theory 62, no. 6 (June 2016): 3165–82. http://dx.doi.org/10.1109/tit.2016.2555950.

15

Zhang, Mingchu. "Common lossless compression algorithms and their error resiliency performance." Applied and Computational Engineering 6, no. 1 (June 14, 2023): 243–56. http://dx.doi.org/10.54254/2755-2721/6/20230778.

Abstract:
With the development of communication and computer technology, many related industries, such as multimedia entertainment, place higher requirements on storing and transmitting information data, so research on data compression technology has attracted more and more attention. The error resiliency of data compression algorithms is therefore particularly important, and how to enhance the error resiliency of data compression communication systems has long been a hot topic for researchers. This paper mainly introduces lossless data compression technology, its basic principles, and its performance indexes. Two widely used lossless compression codes, Huffman coding and Arithmetic coding, are studied in depth, including their coding principles and their error resiliency. The ability to resist channel errors is an important index for data compression in communication, so studying these two codes and their resistance to channel errors is of great significance for further improving the channel adaptability of data compression.
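As a pointer to the first of these two schemes, here is a compact Huffman encoder sketch (ours, for illustration). It also hints at the error-resiliency issue the paper studies: because codewords have variable length, a single channel bit error can desynchronize every codeword that follows.

```python
# Minimal Huffman code construction with a binary heap. Ties are broken by an
# insertion counter so heap comparisons never reach the (uncomparable) symbols.
import heapq
from collections import Counter

def huffman_code(text):
    heap = [(f, i, s) for i, (s, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol input
        return {heap[0][2]: "0"}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least frequent subtrees...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie, (left, right)))  # ...are merged
        tie += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):         # internal node: recurse both ways
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix            # leaf: record the prefix-free code
    walk(heap[0][2], "")
    return codes

print(huffman_code("abracadabra"))  # exact codes depend on tie-breaking
```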
16

Miao, Qinglin, Xiaofeng Zhang, Pisheng Qin, and Xianguang Liu. "Modeling and I-NSGA-III-VLC Solution of Aircraft Equipment Rotation and Echelon Usage under Uncertainty." Applied Sciences 12, no. 20 (October 17, 2022): 10482. http://dx.doi.org/10.3390/app122010482.

Abstract:
Optimizing the aircraft equipment usage scheme of different units according to their task intensity is of great significance for improving aircraft reliability and health management. This paper studied modeling and solution methods for the rotation and echelon usage problem of aircraft equipment measured by dual life indexes, one of which cannot be controlled. In order to optimize the waste rate of the rotation quantity, the echelon uniformity index, the life matching index and the life utilization index, a decision-making model of the equipment rotation and echelon usage problem under uncertainty was constructed, and an improved NSGA-III with variable-length chromosomes was proposed. An improved segmented coding method and operators were proposed, and a repeated-individual control mechanism was used to improve population diversity. When the scale of the problem was large, this method could search a wider range in a short time and obtain more feasible solutions, which verified its feasibility.
17

Demir, Ömer, and Süleyman Sadi Seferoğlu. "Developing a Scratch-based coding achievement test." Information and Learning Sciences 120, no. 5/6 (May 13, 2019): 383–406. http://dx.doi.org/10.1108/ils-08-2018-0078.

Abstract:
Purpose: The lack of a reliable and valid measurement tool for coding achievement emerges as a major problem in Turkey. Therefore, the purpose of this study is to develop a Scratch-based coding achievement test. Design/methodology/approach: Initially, an item pool with 31 items was created. The item pool was classified within the framework of Bayman and Mayer's (1988) types of coding knowledge to support content validity of the test. Then the item pool was applied to 186 volunteer undergraduates at Hacettepe University during the spring semester of the 2017-2018 academic year. Subsequently, the item analysis was conducted for construct validity of the test. Findings: In all, 13 items were discarded from the test, leaving a total of 18 items. Of the 18-item version of the coding achievement test, 4, 5 and 9 items measured syntactic, conceptual and strategic knowledge, respectively, among the types of coding knowledge. Furthermore, the average item discrimination index (0.531), average item difficulty index (0.541) and Cronbach Alpha reliability coefficient (0.801) of the test were calculated. Practical implications: Scratch users, especially those who are taking introductory courses at Turkish universities, could benefit from a reliable and valid coding achievement test developed in this study. Originality/value: This paper has theoretical and practical value, as it provides detailed developmental stages of a reliable and valid Scratch-based coding achievement test.
18

Cui, Laizhong, Nan Lu, and Fu Chen. "Exploring a QoS Driven Scheduling Approach for Peer-to-Peer Live Streaming Systems with Network Coding." Scientific World Journal 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/513861.

Abstract:
Most large-scale peer-to-peer (P2P) live streaming systems use a mesh to organize peers and leverage pull scheduling to transmit packets, providing robustness in dynamic environments. Pull scheduling, however, brings large packet delay. Network coding makes push scheduling feasible in mesh P2P live streaming and improves its efficiency, but it may also introduce extra delays and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, we propose a QoS-driven push scheduling approach in this paper. The main contributions of this paper are: (i) we introduce a new network coding method to increase content diversity and reduce the complexity of scheduling; (ii) we formulate push scheduling as an optimization problem and transform it into a min-cost flow problem so that it can be solved in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead and conduct extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments.
19

Yang, Yang, and Hai Ge Li. "XML Query Based on Indexed Sequential Table." Advanced Materials Research 532-533 (June 2012): 1177–81. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1177.

Abstract:
Current research on XML indexing and querying mostly focuses on encoding and structural relations, and region coding is widely used to improve XML queries. In this paper, postorder-traversal region coding is proposed: the region of a node consists of the postorder numbers of all its descendants. Judging the structural relation of any two nodes then depends only on this region; if the postorder of one node lies in another node's region, an ancestor/descendant relation is established. Consequently, postorder-traversal region coding can effectively judge structural relations and avoid traversing the XML document tree. Based on region coding, many constructive structural query algorithms have been put forward. The Stack-Tree-Desc algorithm is one of these fine algorithms: AList and DList each need to be scanned only once to judge the structural relation, but some unnecessary nodes are still scanned. To solve this problem, an Indexed Sequential Table algorithm is introduced. The optimized algorithm uses an Indexed Sequential Table to avoid scanning unwanted nodes when the two lists are joined to locate the next node participating in the structural join. In this way, nodes of AList and DList that do not participate in structural joins can be skipped and the query efficiency is enhanced. As a result, ordered scanning is avoided and the time consumed by XML queries shortens accordingly. Experimental results demonstrate the effectiveness of the improved coding and algorithm.
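The interval test this coding enables can be sketched as follows (our minimal illustration; the paper's exact numbering scheme may differ in details):

```python
# Postorder-traversal region coding: each node stores the half-open interval of
# postorder numbers covered by its descendants, so an ancestor/descendant check
# reduces to one interval-containment test, with no tree traversal.
class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def assign_regions(root):
    counter = 0
    def visit(node):
        nonlocal counter
        start = counter
        for child in node.children:
            visit(child)
        node.post = counter                 # this node's own postorder number
        node.region = (start, counter)      # postorders of all its descendants
        counter += 1
    visit(root)

def is_ancestor(a, d):
    lo, hi = a.region
    return lo <= d.post < hi                # does d lie inside a's region?

b = Node("b"); c = Node("c", [b]); root = Node("a", [c, Node("d")])
assign_regions(root)
print(is_ancestor(root, b), is_ancestor(c, b), is_ancestor(b, c))  # True True False
```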
20

Zheng, Wei, Long Ye, Jing Ling Wang, and Qin Zhang. "A Research of Intra Prediction Coding with Variance of Prediction Mode Number." Applied Mechanics and Materials 719-720 (January 2015): 1177–83. http://dx.doi.org/10.4028/www.scientific.net/amm.719-720.1177.

Abstract:
Intra prediction is a key step in H.264/AVC that improves coding performance by removing the directional redundancy among neighboring blocks. To cover more of the directional information in image frames, state-of-the-art coding frameworks usually offer many selectable prediction modes, but more bits are then needed to encode the prediction mode index, so how to achieve the maximum overall bit-rate reduction becomes a problem. In this paper, 16 prediction modes are adopted for 8x8 image blocks by considering their directional information. By calculating the bit-rate of both the mode index and the residual image under different numbers of prediction modes, we obtain the most suitable number of prediction modes from the resulting graphs. Experimental results show that, as the number of prediction modes increases, the residual information decreases obviously, while the sum of the residual information and the prediction mode index information also decreases but levels off after a certain mode number is reached, and even shows a clear rising trend beyond it.
21

Arunachala, Chinmayananda, Vaneet Aggarwal, and B. Sundar Rajan. "On the Optimal Broadcast Rate of the Two-Sender Unicast Index Coding Problem With Fully-Participated Interactions." IEEE Transactions on Communications 67, no. 12 (December 2019): 8612–23. http://dx.doi.org/10.1109/tcomm.2019.2941470.

22

Lu, Jian, Jiapeng Tian, Chen Xu, and Yuru Zou. "A Dictionary Learning Approach for Fractal Image Coding." Fractals 27, no. 02 (March 2019): 1950020. http://dx.doi.org/10.1142/s0218348x19500208.

Abstract:
In recent years, sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, this paper investigates incorporating a dictionary learning approach into fractal image coding, which leads to a new model containing three terms: a patch-based sparse representation prior over a learned dictionary, a quadratic term measuring the closeness of the underlying image to a fractal image, and a data-fidelity term capturing the statistics of Gaussian noise. After the dictionary is learned, the resulting optimization problem with fractal coding can be solved effectively. The new method can not only efficiently recover noisy images, but also admirably achieve fractal image noiseless coding/compression. Experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, structural similarity index and mean absolute error, the proposed method significantly outperforms the state-of-the-art methods.
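Schematically, the three-term model described above can be written as follows (our notation, a hedged reconstruction from the abstract, not the authors' exact formulation):

```latex
\min_{x,\,\{\alpha_i\}} \;
\frac{\lambda_1}{2}\,\lVert y - x \rVert_2^2
\;+\; \frac{\lambda_2}{2}\,\lVert x - T(x) \rVert_2^2
\;+\; \sum_i \Big( \tfrac{1}{2}\,\lVert R_i x - D\alpha_i \rVert_2^2 + \mu\,\lVert \alpha_i \rVert_0 \Big)
```

Here y is the noisy observation (the Gaussian data-fidelity term), T is the fractal collage transform (x ≈ T(x) for a fractal image), R_i extracts the i-th patch, and D is the learned dictionary with sparse codes α_i.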
23

Vargas, Sylvanna M., Richard S. John, Linda C. Garro, Alex Kopelowicz, and Steven R. López. "Measuring Congruence in Problem Definition of Latino Patients and Their Psychotherapists: An Exploratory Study." Hispanic Journal of Behavioral Sciences 41, no. 3 (June 19, 2019): 392–411. http://dx.doi.org/10.1177/0739986319855672.

Abstract:
The current study developed a mixed-methods coding scheme to explore the degree of correspondence between Latino patients’ and their psychotherapists’ descriptions of the presenting problems. We interviewed 34 patients and clinicians (17 dyads) following an initial therapy session. Using a theoretical thematic approach, we generated a list of problem areas reported in participants’ descriptions. Independent coders reliably rated the presence and salience of these problems using a quantitative index. We then statistically estimated the fit between corresponding narratives. We found poor congruence across dyads’ descriptions of all problem areas, with two exceptions. We also noted patterns of incongruences, primarily characterized by therapists providing explanations that went beyond what their patients said. This study provides an innovative objective approach to estimate the nuanced degrees of concordance within dyads’ narratives. Our findings provide initial evidence of poor match between views held by Latino patients and their clinicians.
24

Li, Hai Tao, and Cheng Qiao. "Comprehensive Evaluation Method for Interchange Project Based on Projection Pursuit." Applied Mechanics and Materials 71-78 (July 2011): 3342–46. http://dx.doi.org/10.4028/www.scientific.net/amm.71-78.3342.

Abstract:
A comprehensive evaluation method for interchange projects is built on the basic principle and solution process of the projection pursuit algorithm. The method can directly determine the objective weights of the indexes from the project being evaluated, avoiding the negative impact on the conclusions of subjectively assigned index weights that affects other evaluation methods. A real-coded accelerating genetic algorithm (RAGA) is proposed and applied to deal effectively with the global optimization of high-dimensional data by optimizing the projection index function and searching for the optimal projection direction vector.
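A hedged sketch of the projection pursuit core follows (the RAGA step is replaced by plain random search purely for brevity; the spread-times-local-density index below is one common choice, not necessarily the paper's):

```python
# Projection pursuit: score 1-D projections of the sample matrix and keep the
# best direction. projection_index() uses a standard spread * local-density
# form with window radius R; the real method optimizes this with RAGA.
import numpy as np

def projection_index(z, R=0.1):
    spread = z.std()
    d = np.abs(z[:, None] - z[None, :])        # pairwise projection distances
    density = ((R - d) * (d < R)).sum()        # mass of pairs closer than R
    return spread * density

def best_direction(X, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    best_a, best_q = None, -np.inf
    for _ in range(trials):
        a = rng.normal(size=X.shape[1])
        a /= np.linalg.norm(a)                 # unit-norm projection direction
        q = projection_index(X @ a)
        if q > best_q:
            best_a, best_q = a, q
    return best_a, best_q

X = np.random.default_rng(1).normal(size=(50, 4))  # 50 projects, 4 indexes
a, q = best_direction(X)
print(np.round(a, 3), round(q, 3))
```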
25

Krishnan, Prasad, Lakshmi Natarajan, and V. Lalitha. "An Umbrella Converse for Data Exchange: Applied to Caching, Computing, and Shuffling." Entropy 23, no. 8 (July 30, 2021): 985. http://dx.doi.org/10.3390/e23080985.

Abstract:
The problem of data exchange between multiple nodes with storage and communication capabilities models several current multi-user communication problems like Coded Caching, Data Shuffling, Coded Computing, etc. The goal in such problems is to design communication schemes which accomplish the desired data exchange between the nodes with the optimal (minimum) amount of communication load. In this work, we present a converse to such a general data exchange problem. The expression of the converse depends only on the number of bits to be moved between different subsets of nodes, and does not assume anything further specific about the parameters in the problem. Specific problem formulations, such as those in Coded Caching, Coded Data Shuffling, and Coded Distributed Computing, can be seen as instances of this generic data exchange problem. Applying our generic converse, we can efficiently recover known important converses in these formulations. Further, for a generic coded caching problem with heterogeneous cache sizes at the clients with or without a central server, we obtain a new general converse, which subsumes some existing results. Finally we relate a “centralized” version of our bound to the known generalized independence number bound in index coding and discuss our bound’s tightness in this context.
26

Zhu, Hua Bing, Feng Yu, Yun Xi, Long Wang, and Juan Zhang. "Study on Stochastic Assembly Line Balancing Based on Improved Particle Swarm Optimization Algorithm." Applied Mechanics and Materials 130-134 (October 2011): 3870–74. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.3870.

Abstract:
Focusing on a particular assembly line balancing problem in which the task time is a stochastic variable, a stochastic model is established that aims at maximizing the assembly line balancing rate, the completion probability, and the smoothness index. An improved particle swarm optimization algorithm is proposed to solve this problem, and a reasonable chromosome coding method that effectively prevents the generation of infeasible solutions is designed, which improves the convergence rate of the algorithm. Finally, the rear axle assembly line balancing design of an automotive parts company is used to test the algorithm, and this example verifies its validity.
27

Xidao, Luan, Xie Yuxiang, Zhang Lili, Zhang Xin, Li Chen, and He Jingmeng. "An Image Similarity Acceleration Detection Algorithm Based on Sparse Coding." Mathematical Problems in Engineering 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/1917421.

Abstract:
To address the low efficiency of image similarity detection based on local features, an algorithm called ScSIFT for accelerating image similarity detection based on sparse coding is proposed. The algorithm improves the image similarity matching speed by sparse coding and indexing the extracted local features. Firstly, the SIFT features of the image are extracted as training samples to learn an overcomplete dictionary, yielding a set of overcomplete bases. The SIFT feature vectors of the image are then sparse-coded with the overcomplete dictionary, and the sparse feature vectors are used to build an index. The image similarity detection result is obtained by comparing the sparse coefficients. Experimental results show that the proposed algorithm can significantly improve detection speed compared with the traditional algorithm based on local feature detection while guaranteeing detection accuracy.
28

Qin, Cifeng, Wenyin Gong, and Xiang Li. "Research on the Image Description Algorithm of Double-Layer LSTM Based on Adaptive Attention Mechanism." Mathematical Problems in Engineering 2022 (May 21, 2022): 1–9. http://dx.doi.org/10.1155/2022/2315341.

Abstract:
Image text description (image captioning) is a multimodal data processing problem in computing that involves research tasks from both computer vision and natural language processing. At present, research on the image captioning task focuses mainly on deep learning methods. The work of this paper addresses the imprecise description of visual words and nonvisual words in image captioning. An adaptive attention double-layer LSTM (long short-term memory) model based on an encoding-decoding framework is proposed. Compared with the baseline adaptive attention algorithm based on the encoding-decoding framework, the evaluation index BLEU-1 improves by 1.21%, METEOR by 0.75%, and CIDEr by 0.55%, while the BLEU-4 and ROUGE-L indexes fall slightly short of the original model. Although the proposed model does not surpass the original model on every performance indicator, its descriptions of visual and nonvisual words are more accurate in actual image captioning.
29

Huang, Xiaohui, Jiabao Li, Jining Yan, and Lizhe Wang. "An adaptive geographic meshing and coding method for remote sensing data." IOP Conference Series: Earth and Environmental Science 1004, no. 1 (March 1, 2022): 012006. http://dx.doi.org/10.1088/1755-1315/1004/1/012006.

Abstract:
Spatial indexing techniques, inherently data structures, are generally used in portals operated by institutions or organizations to efficiently filter RS images according to their spatial extent, providing researchers with fast Remote Sensing (RS) image data discovery. Specifically, space-based spatial indexing approaches are widely adopted to index RS images in distributed environments by mapping RS images in two-dimensional space to several one-dimensional spatial codes. However, current spatial indexing approaches still suffer from the boundary-object problem, which produces multiple spatial codes for a boundary-crossing RS image and thus degrades the performance of spatial indexes built on top of these codes. To solve this problem, we propose an adaptive geographic meshing and coding method (AGMD) that combines the well-known subdivision model GeoSOT with XZ-ordering to generate only one spatial code for RS images of different spatial widths. We then implement the proposed method with a unified big data programming model (i.e., Apache Beam) to enable its execution on various distributed computing engines (e.g., MapReduce, Apache Spark, etc.) in distributed environments. Finally, we conduct a series of experiments on real datasets, the archived Landsat metadata collection in level 2. The results show that the proposed AGMD method performs well on metrics including storage overhead and time cost, with improvements of up to 359.7% and 58.02%, respectively.
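For intuition, the simplest member of this family of space-based codes is the Morton (Z-order) curve; the sketch below (ours) shows the bit-interleaving idea that GeoSOT-style subdivision codes and XZ-ordering elaborate on:

```python
# Toy Morton (Z-order) encoder: interleaving the bits of a cell's x and y
# indices yields one integer code under which nearby cells tend to stay close.
def morton_encode(x, y, bits=16):
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bit -> even code position
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bit -> odd code position
    return code

print(morton_encode(3, 5))  # -> 39
```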
30

Ouyang, Qi, Yongbo Lv, Jihui Ma, and Jing Li. "An LSTM-Based Method Considering History and Real-Time Data for Passenger Flow Prediction." Applied Sciences 10, no. 11 (May 29, 2020): 3788. http://dx.doi.org/10.3390/app10113788.

Abstract:
With the development of big data and deep learning, bus passenger flow prediction considering real-time data becomes possible. Real-time traffic flow prediction helps to grasp real-time passenger flow dynamics, provide early warning of sudden passenger flows and data support for real-time bus plan changes, and improve the stability of urban transportation systems. To solve the problem of passenger flow prediction considering real-time data, this paper proposes a novel passenger flow prediction network model based on long short-term memory (LSTM) networks. The model includes four parts: feature extraction based on the Xgboost model, information coding based on historical data, information coding based on real-time data, and decoding based on a multi-layer neural network. In the feature extraction part, the data dimension is increased by fusing bus data and points of interest to improve the number of parameters and model accuracy. In the historical information coding part, we use the date as the index in the LSTM structure to encode historical data and provide relevant information for prediction; in the real-time data coding part, the daily half-hour time interval is used as the index to encode real-time data and provide real-time prediction information; in the decoding part, the passenger flow for the next two 30-minute intervals is output by decoding all the information. To the best of our knowledge, this is the first time real-time information has been taken into consideration in LSTM-based passenger flow prediction. The proposed model achieves better accuracy than the LSTM and other baseline methods.
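A hedged sketch of the two-encoder architecture described above (feature extraction omitted; layer sizes and sequence lengths are illustrative guesses, not the paper's configuration):

```python
# One LSTM encodes date-indexed history, another encodes the half-hour
# real-time window; an MLP decodes both codes into the next two 30-minute
# passenger-flow values.
import torch
import torch.nn as nn

class FlowPredictor(nn.Module):
    def __init__(self, n_feat, hidden=64):
        super().__init__()
        self.hist_enc = nn.LSTM(n_feat, hidden, batch_first=True)
        self.rt_enc = nn.LSTM(n_feat, hidden, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, hist, rt):              # each input: (batch, time, feat)
        _, (h_hist, _) = self.hist_enc(hist)  # final hidden state = history code
        _, (h_rt, _) = self.rt_enc(rt)        # final hidden state = real-time code
        code = torch.cat([h_hist[-1], h_rt[-1]], dim=-1)
        return self.decoder(code)             # (batch, 2): next two intervals

model = FlowPredictor(n_feat=8)
print(model(torch.randn(4, 30, 8), torch.randn(4, 6, 8)).shape)  # [4, 2]
```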
31

Liu, Hong, Jining Yan, and Xiaohui Huang. "HBase-based spatial-temporal index model for trajectory data." IOP Conference Series: Earth and Environmental Science 1004, no. 1 (March 1, 2022): 012007. http://dx.doi.org/10.1088/1755-1315/1004/1/012007.

Abstract:
The development of global positioning technology and the popularization of smart mobile terminals have led to rapid growth in the volume and coverage of trajectory data. This type of data is characterized by fast update speeds, high-dimensional features, and a large amount of minable information. Many technology companies use trajectory data to provide location-based services, such as vehicle scheduling and road condition estimation. However, the storage and query efficiency of massive trajectory data has increasingly become a bottleneck in these applications, especially for large-scale spatiotemporal query scenarios. This paper addresses this problem by designing a trajectory data index model based on GeoSOT-ST spatiotemporal coding. Based on this model, an HBase-based trajectory data storage scheme and spatiotemporal range query technology are studied, and MapReduce is used as the computation engine to perform query attribute filtering in parallel. Comparative experiments prove that the proposed index model can achieve efficient trajectory data query management.
32

Xia, Yao Wen, and Ji Li Xie. "The Research of XML Keyword Retrieval Algorithms Based on MapReduce." Applied Mechanics and Materials 556-562 (May 2014): 3347–49. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.3347.

Abstract:
From the perspective of XML data management, this paper first stores large-scale XML data in HDFS and rewrites the traditional MapReduce processing framework for XML data queries. A keyword retrieval algorithm for large-scale XML data sets is designed, consisting of four parts: XML data classification, coding, indexing, and search, which solves the keyword retrieval problem for large XML documents. A MapReduce-based keyword query system for large-scale XML data is then designed and implemented.
33

Han, Shuhe. "Research on Online Social Network Information Leakage-Tracking Algorithm Based on Deep Learning." Computational Intelligence and Neuroscience 2022 (June 28, 2022): 1–11. http://dx.doi.org/10.1155/2022/1926794.

Abstract:
The rapid iteration of information technology has made online social networks develop rapidly, with network scales that are increasingly large and complex, and algorithms for handling social networks and their related problems are multiplying accordingly. Privacy protection algorithms such as encryption algorithms, access control strategy algorithms, and differential privacy protection algorithms have been studied and analyzed, but they do not completely solve the problem of privacy disclosure. Based on this, this article first searches and accurately filters the relevant information and content of online social networks using a deep convolutional neural network algorithm, so as to realize the perception and protection of users' sensitive content. For the associated graphics and data, compressed sensing technology is introduced to randomly perturb them. At the level of leakage tracking, this article proposes a network information leakage-tracking algorithm based on digital fingerprints, which mainly uses plug-ins to uniquely identify users, exploits the uniqueness of each digital fingerprint to track leakers, and formulates a coding scheme based on the social network topology; the proposed algorithm also achieves high efficiency and scalability in its digital coding. To verify the advantages of this deep learning based online social network information leakage-tracking algorithm, this article compares it with traditional algorithms on accuracy, recall, and overall performance. The proposed algorithm improves accuracy by up to about 10% over traditional algorithms, raises recall by about 5-8%, and improves overall performance by about 50%. The comparison results show that the proposed algorithm is more accurate and traces sources more precisely.
34

Le Roux, Emma, Peter Edwards, Emily Sanderson, Rebecca Barnes, and Matthew Ridd. "Routine consultations for dermatology problems in adults in general practice: cross-sectional study." British Journal of General Practice 69, suppl 1 (June 2019): bjgp19X703397. http://dx.doi.org/10.3399/bjgp19x703397.

Abstract:
Background: Dermatological conditions present frequently in general practice, and treatment failure is common due to low adherence with treatments. There has been little research exploring GP consultations for skin problems. Aim: To describe consultations for skin problems in adults including shared decision making (SDM) around treatment decisions, delivery of self-management advice, and follow-up. Method: Data were extracted from the One in a Million Study, an archive of 327 video-recorded routine GP adult consultations and linked data. A coding instrument was developed and refined, which was applied to all consultations where a skin problem was identified as having been discussed. SDM was assessed using OPTION. Twenty per cent of the consultations were double-coded and inter-rater reliability assessed. Data were analysed using Stata, with descriptive statistics reported. Results: In total, 45 consultations (13.8%) were examined, featuring a mean of 2.2 problems. Of the 100 problem types, 51 were dermatological. Mean time spent on skin problems was 4:16 minutes (29.6% of the total duration of consultations with ≥2 problems). SDM for skin problems was low, with a mean OPTION score of 10.7 (range 0-35). Self-management advice was given for 47.1% of skin problems (verbal only). Most skin problems (84.3%) were not referred to secondary care; 32.6% of skin problems not referred were seen again in primary care within 12 weeks of the index consultation, of which 35.6% were unplanned. Conclusion: Skin problems commonly present alongside other complaints. SDM and self-management advice are uncommon. While most dermatological problems are not referred, patients often re-consult for the same problem.
35

Wei, Yuanfei, Zalinda Othman, Kauthar Mohd Daud, Shihong Yin, Qifang Luo, and Yongquan Zhou. "Equilibrium Optimizer and Slime Mould Algorithm with Variable Neighborhood Search for Job Shop Scheduling Problem." Mathematics 10, no. 21 (November 1, 2022): 4063. http://dx.doi.org/10.3390/math10214063.

Abstract:
Job Shop Scheduling Problem (JSSP) is a well-known NP-hard combinatorial optimization problem. In recent years, many scholars have proposed various metaheuristic algorithms to solve JSSP, playing an important role in solving small-scale instances. However, when the size of the problem increases, these algorithms usually take too much time to converge. In this paper, we propose a hybrid algorithm, namely EOSMA, which mixes the update strategy of the Equilibrium Optimizer (EO) into the Slime Mould Algorithm (SMA), adding Centroid Opposition-based Computation (COBC) in some iterations. The hybridization of EO with SMA strikes a better balance between exploration and exploitation. The addition of COBC strengthens exploration and exploitation, increases the diversity of the population, improves the convergence speed and convergence accuracy, and helps avoid falling into local optima. In order to solve discrete problems efficiently, a Sort-Order-Index (SOI)-based coding method is proposed. In order to solve JSSP more efficiently, a neighbor search strategy based on a two-point exchange is added to the iterative process of EOSMA to improve its exploitation capability, as sketched below. It is then utilized to solve 82 JSSP benchmark instances; its performance is evaluated against that of EO, Marine Predators Algorithm (MPA), Aquila Optimizer (AO), Bald Eagle Search (BES), and SMA. The experimental results and statistical analysis show that the proposed EOSMA outperforms the other competing algorithms.
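The two-point-exchange neighbourhood search mentioned above can be sketched as follows (our simplification; `makespan` stands for an assumed external evaluator of an operation-sequence encoding):

```python
# First-improvement local search over the two-point-exchange neighbourhood of
# an operation sequence. makespan() is assumed to be provided by the solver.
import random

def two_point_exchange(seq, makespan, tries=100, seed=0):
    rng = random.Random(seed)
    best, best_cost = list(seq), makespan(seq)
    for _ in range(tries):
        i, j = rng.sample(range(len(best)), 2)
        cand = best[:]
        cand[i], cand[j] = cand[j], cand[i]    # swap two operations
        cost = makespan(cand)
        if cost < best_cost:                   # keep only improving moves
            best, best_cost = cand, cost
    return best, best_cost
```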
36

Chernyavsky, A. F., A. A. Kolyada, and S. Yu Protasenya. "Application of the neural network computing technology for calculating the interval-index characteristics of a minimally redundant modular code." Doklady of the National Academy of Sciences of Belarus 62, no. 6 (January 13, 2019): 652–60. http://dx.doi.org/10.29235/1561-8323-2018-62-6-652-660.

Abstract:
The article is devoted to the problem of creating high-speed neural networks (NN) for calculating the interval-index characteristics of a minimally redundant modular code. The functional base of the proposed solution is an advanced class of neural networks over a finite ring. These neural networks perform position-to-modular code transformations of scalable numbers using a modified reduction technology. The developed neural network has a uniform parallel structure, is easy to implement, and requires time expenditures of order (3⌈log2 b⌉ + ⌈log2 k⌉ + 6)t_sum, close to the lower theoretical estimate. Here b and k are the average bit capacity and the number of moduli, respectively; t_sum is the duration of the two-place operation of adding integers. Abandoning normalization of the numbers of the modular code reduces the required set of finite-ring NNs by (k − 1) components. At the same time, the non-normalized configuration of minimally redundant modular coding requires on average a k-fold increase in the interval-index modulus (relative to the remaining bases of the modular number system), leading to a corresponding increase in hardware expenses for this modulus. Besides, the transition from normalized to non-normalized coding reduces the homogeneity of the structure of the NN for calculating interval-index characteristics. The possibility of reducing the structural complexity of the proposed NN by using non-normalized interval-index characteristics is investigated.
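For background, the position-modular conversions involved rest on residue arithmetic. A standard (non-redundant) Chinese-remainder reconstruction is shown below; the paper's minimally redundant scheme replaces this costly step with interval-index computations:

```python
# Classic CRT: recover an integer from its residues modulo pairwise-coprime
# moduli. pow(Mi, -1, m) is the modular inverse (Python 3.8+).
from math import prod

def crt(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

print(crt([1, 2, 3], [3, 5, 7]))  # -> 52, since 52 % 3, % 5, % 7 == 1, 2, 3
```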
37

Wu, Gongxing, Yuchao Li, Chunmeng Jiang, Chao Wang, Jiamin Guo, and Rui Cheng. "MULTI-VESSELS COLLISION AVOIDANCE STRATEGY FOR AUTONOMOUS SURFACE VEHICLES BASED ON GENETIC ALGORITHM IN CONGESTED PORT ENVIRONMENT." Brodogradnja 73, no. 3 (July 1, 2022): 69–91. http://dx.doi.org/10.21278/brod73305.

Abstract:
An improved genetic collision avoidance algorithm is proposed in this study to address the problem that Autonomous Surface Vehicles (ASVs) need to comply with the collision avoidance rules at sea in congested sea areas. Firstly, a collision risk index model for ASV safe encounters is established, taking into account the international rules for collision avoidance. The ASV collision risk index and the safe encounter distance are taken as boundary values of the membership functions of the collision risk index model to calculate the optimal heading of the ASV in real time. Secondly, the genetic coding, fitness function, and basic parameters of the genetic algorithm are designed to construct the collision avoidance decision system. Finally, simulations of collision avoidance between the ASV and several obstacle vessels are performed, covering three situations: head-on, crossing, and overtaking. The results show that the proposed genetic algorithm, which takes the rules of collision avoidance at sea into account, can effectively avoid multiple other vessels in different situations.
38

Zhou, Yang. "Research on Network Control Based on QoS of the Network." Advanced Materials Research 989-994 (July 2014): 4265–68. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.4265.

Abstract:
With the improvement of computer communication and multimedia coding technology, real-time communication such as audio and video has been introduced to networks and has become a dominant form of communication. In network control, the optimization of the network controller based on network quality of service (QoS) is a very important problem. Considering the influence of network QoS on control performance, a system model combining the network parameters and the control parameters is established for networked control systems (NCSs). Based on this, a condition dependent on the network parameters and control parameters is presented for the existence of guaranteed cost controllers, in terms of linear matrix inequalities (LMI). Within the scope of QoS perturbation, the designed controller can not only make the system asymptotically stable but also guarantee that the system performance index does not exceed its upper bound.
39

Peng, Ji, Shi Shiliang, Lu Yi, and Li He. "Research on Risk Identification of Coal and Gas Outburst Based on PSO-CSA." Mathematical Problems in Engineering 2023 (January 10, 2023): 1–12. http://dx.doi.org/10.1155/2023/5299986.

Abstract:
Aiming at the identification of coal and gas outburst risk, the advantages of the clonal selection algorithm (CSA), such as self-adaptation and robustness, are combined with the fast convergence of the particle swarm optimization (PSO) algorithm to handle the complex decoding and hard-to-control mutation process brought by CSA binary coding. Using PSO optimization, the problem of anomaly detection and identification in coal and gas outburst monitoring is studied, and a CSA-based coal and gas outburst risk anomaly detection and identification model with PSO-optimized mutation is established. The model uses coal and gas outburst index data as a collection of antigens to stimulate antibodies, achieving anomaly detection and identification on measured data. Verification with measured data shows that the model can effectively detect and identify the risk of coal and gas outburst, and the identification results are consistent with the outburst risk observed in the field. It can be used as an effective risk identification model to guide coal mining work.
APA, Harvard, Vancouver, ISO, and other styles
40

Davidovic, Bojana, Mirjana Ivanovic, and Svjetlana Jankovic. "Dental health estimation for children age twelve and fifteen." Serbian Dental Journal 59, no. 1 (2012): 35–43. http://dx.doi.org/10.2298/sgs1201035d.

Full text
Abstract:
Introduction. The problem of chronic diseases such as caries is very complex because they appear very early in life, often during childhood. The number of affected people is growing, as is the number of affected teeth and surfaces, eventually leading to tooth loss. The aim of this study was to determine the dental status of adolescents aged 12 and 15 in three municipalities in Bosnia: Foca, Cajnice and Kalinovik. Material and methods. The study included 506 schoolchildren of both genders from six schools in the three municipalities (Foca, Cajnice and Kalinovik). Tooth examination and the criteria for diagnosis and coding followed the criteria of the European Academy of Paediatric Dentistry (EAPD). To estimate dental health, the DMFT index [number of carious (D), missing (M) and filled (F) teeth] and related indices (Person Caries Index, Teeth Caries Index, Average Caries Index and DMFT structure) were used. Results. Dental health status and caries prevalence were presented through the values of the Average Caries Index. The value of the Average Caries Index for the examined children was 6.17. Of all examined children, 96.05% had at least one carious tooth. The average value of the Teeth Caries Index was 23.04%. Conclusion. Dental health of the children in the examined region was characterized by high values of untreated carious teeth as well as the other components of the DMFT index. Therefore, preventive measures and primary dental care must be implemented more effectively among school children.
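The three indices are simple ratios over the per-child DMFT counts; a hedged sketch follows (the per-child tooth count used as the denominator of the Teeth Caries Index is an assumption, since the abstract does not state it).

```python
def caries_indices(dmft_scores, teeth_per_child=28):
    """Illustrative computation of the indices named in the abstract.

    dmft_scores: list of per-child DMFT counts (decayed + missing +
    filled teeth). teeth_per_child is an assumed denominator.
    """
    n = len(dmft_scores)
    average_caries_index = sum(dmft_scores) / n              # mean DMFT
    person_caries_index = 100 * sum(s > 0 for s in dmft_scores) / n  # % affected
    teeth_caries_index = 100 * sum(dmft_scores) / (n * teeth_per_child)
    return average_caries_index, person_caries_index, teeth_caries_index
```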
APA, Harvard, Vancouver, ISO, and other styles
41

An, Hui, Wenjing Yang, Jin Huang, Ai Huang, Zhongchi Wan, and Min An. "Identify and Assess Hydropower Project’s Multidimensional Social Impacts with Rough Set and Projection Pursuit Model." Complexity 2020 (November 10, 2020): 1–16. http://dx.doi.org/10.1155/2020/9394639.

Full text
Abstract:
To realize the coordinated and sustainable development of hydropower projects and regional society, comprehensively evaluating the influence of hydropower projects is critical. Hydropower project development typically affects environmental geology as well as social and regional cultural development. Based on comprehensive consideration of the complicated geological conditions, fragile ecological environment, reservoir-area resettlement, and other factors of future hydropower development in each country, we construct a comprehensive evaluation index system for hydropower projects comprising 4 first-level indicators (social economy, environment, safety, and fairness) and 26 second-level indicators. To overcome the inability of existing models to handle dynamic nonlinear optimization, a projection pursuit model is constructed, using rough set reduction theory to simplify the index system. An accelerated genetic algorithm based on real-number coding is then used to solve the model, and an empirical study is carried out with the Y hydropower station as a sample. The evaluation results show that the evaluation index system and assessment model constructed in this paper effectively reduce the subjectivity of index weighting. Applying the model to the social impact assessment (SIA) of international hydropower projects can not only comprehensively analyze the social impact of hydropower projects but also identify important social influencing factors and effectively analyze the social impact level of each dimension. Furthermore, SIA can support project decision-making, helping to avoid social risks and maintain social stability.
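The core of a projection pursuit evaluation model is projecting the normalized indicator matrix onto a unit direction and scoring that direction with a projection index, which the genetic algorithm then maximizes. A minimal sketch is given below; the classic spread-times-density index shown is an assumption, as the abstract does not state the paper's exact index.

```python
import numpy as np

def projection_values(X, a):
    """Project normalized indicators X (n samples x p indices) onto a
    unit direction a -- the step whose direction the real-coded
    accelerated GA optimizes in the paper."""
    a = np.asarray(a, dtype=float)
    a = a / np.linalg.norm(a)          # direction must have unit length
    return np.asarray(X, dtype=float) @ a

def projection_index(z, R=None):
    """Classic S_z * D_z index: between-sample spread times local
    density (assumed form, for illustration)."""
    z = np.asarray(z, dtype=float)
    S = z.std(ddof=1)
    if R is None:
        R = 0.1 * S                    # common default window radius
    diffs = np.abs(z[:, None] - z[None, :])
    D = np.sum((R - diffs) * (diffs < R))
    return S * D
```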
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Gang, Lei Jia, Taishan Hu, Fangming Deng, Zheng Chen, Tong Sun, and Yanchong Feng. "Novel Data Compression Algorithm for Transmission Line Condition Monitoring." Energies 14, no. 24 (December 8, 2021): 8275. http://dx.doi.org/10.3390/en14248275.

Full text
Abstract:
To address the data accumulation caused by massive sensor data in transmission line condition monitoring systems, this paper analyzes the type and amount of data in the transmission line sensor network, compares wireless sensor network data compression algorithms at home and abroad, and proposes an efficient lossless compression algorithm suitable for sensor data in transmission line linear heterogeneous networks. The algorithm combines a wavelet compression algorithm with a neighborhood index sequence algorithm; it runs fast, requires little computation, and is suitable for battery-powered wireless sensor network nodes. By combining wavelet correlation analysis and neighborhood index sequence coding, the proposed algorithm achieves a high compression rate and strong robustness to packet loss, helping to reduce network load and the packet loss rate. Simulation results show that the proposed method achieves a high compression rate on the transmission line parameter dataset, outperforms existing data compression algorithms, and is suitable for the compression and transmission of transmission line condition monitoring data.
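The neighborhood index sequence coder itself is not described in the abstract and is not reproduced here; the sketch below only illustrates the general kind of lossless residual coding that sensor-data compressors pair with a wavelet decorrelation stage: slowly varying line-monitoring quantities yield small successive differences that compress well.

```python
def delta_encode(samples):
    """Lossless delta coding (illustration only, not the paper's
    neighborhood index sequence algorithm): keep the first sample,
    then store successive differences."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(encoded):
    samples = [encoded[0]]
    for d in encoded[1:]:
        samples.append(samples[-1] + d)
    return samples

# Round-trip check on a short conductor-temperature-like series.
assert delta_decode(delta_encode([100, 101, 103, 102])) == [100, 101, 103, 102]
```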
APA, Harvard, Vancouver, ISO, and other styles
43

Zhao, Mengchen, Xiujuan Yao, Jing Wang, Yi Yan, Xiang Gao, and Yanan Fan. "Single-Channel Blind Source Separation of Spatial Aliasing Signal Based on Stacked-LSTM." Sensors 21, no. 14 (July 16, 2021): 4844. http://dx.doi.org/10.3390/s21144844.

Full text
Abstract:
Aiming at the insufficient separation accuracy of aliased signals in space Internet satellite-ground communication scenarios, a stacked long short-term memory network (Stacked-LSTM) separation method based on deep learning is proposed. First, the coding feature representation of the mixed signal is extracted. Then, the long input sequence is divided into smaller blocks by the Stacked-LSTM network with the attention mechanism of the SE module, and a deep feature mask for each source is trained; the Hadamard product of each source's mask and the coding feature of the mixed signal gives the coding feature representation of that source. Finally, the source signal features are decoded by 1-D convolution to obtain the original waveforms. The negative scale-invariant signal-to-noise ratio (SISNR), the standard evaluation index of single-channel blind source separation performance, is used as the loss function for network training. The results show that in the single-channel separation of spatially aliased signals, the Stacked-LSTM method improves SISNR by 10.09–38.17 dB compared with the two classic separation algorithms ICA and NMF and the three deep learning separation methods TasNet, Conv-TasNet and Wave-U-Net, and offers better separation accuracy and noise robustness.
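SISNR has a standard definition, so the metric (whose negative serves as the training loss) can be sketched directly; only the variable names below are ours.

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant SNR in dB between a separated estimate and the
    reference source. Both signals are zero-meaned, the estimate is
    projected onto the target to remove scale, and the ratio of the
    projection energy to the residual energy is returned."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    s_target = (np.dot(estimate, target) / (np.dot(target, target) + eps)) * target
    e_noise = estimate - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))
```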
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Qingmei, Xin Chen, Xiaochong Tong, Xuantong Zhang, and Chengqi Cheng. "An Information Fusion Model between GeoSOT Grid and Global Hexagonal Equal Area Grid." ISPRS International Journal of Geo-Information 11, no. 4 (April 17, 2022): 265. http://dx.doi.org/10.3390/ijgi11040265.

Full text
Abstract:
In order to cope with the rapid growth of spatiotemporal big data, data organization models based on discrete global grid systems have developed rapidly in recent years. Due to differences in model construction methods, grid level subdivision and coding rules, it is difficult for discrete global grid systems to integrate, share and exchange data between different models. Aiming at the problem of information fusion between a GeoSOT grid and a global hexagonal equal area grid system, this paper proposes the GeoSOT equivalent aggregation model (the GEA model). We establish a spatial correlation index method between GeoSOT grids and global hexagonal equal area grids, and, based on this spatial correlation index, we propose an interoperable transformation method for grid attribute information. We select the POI (points of interest) data of Beijing bus and subway stations and carry out a transformation experiment from the hexagonal grid to the GeoSOT grid to verify the effectiveness of the GEA model. The experimental results show that when the 17th-level GeoSOT grid is selected as the particle grid to fit the hexagonal grid, accuracy and efficiency are well balanced: the fitting accuracy is 95.51% and the time consumption is 30.9 ms. We thus establish the associated index of the GeoSOT grid and the hexagonal grid and realize the exchange of information between the two systems.
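The GEA model's geometric index construction is not given in the abstract; the sketch below only illustrates the downstream attribute hand-off once such a correlation index exists, with an assumed dictionary layout mapping each hexagon cell to its fitted GeoSOT particle grids and overlap weights.

```python
def transfer_attributes(hex_values, hex_to_geosot):
    """Hand attribute values from hexagon cells to GeoSOT particle
    grids via a precomputed spatial correlation index.

    hex_values: {hex_id: value}; hex_to_geosot: {hex_id: [(geosot_code,
    overlap_weight), ...]} -- an assumed representation of the index.
    """
    geosot_values = {}
    for hex_id, cells in hex_to_geosot.items():
        for code, weight in cells:
            geosot_values[code] = geosot_values.get(code, 0.0) \
                                  + weight * hex_values[hex_id]
    return geosot_values
```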
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Qingwen, Lei Xu, Qingyuan Li, and Lichao Zhang. "Identification and Classification of Enhancers Using Dimension Reduction Technique and Recurrent Neural Network." Computational and Mathematical Methods in Medicine 2020 (October 18, 2020): 1–9. http://dx.doi.org/10.1155/2020/8852258.

Full text
Abstract:
Enhancers are noncoding fragments in DNA sequences that play an important role in gene transcription and translation. However, due to their high free scattering and positional variability, identifying and classifying enhancers is more complex than identifying coding genes. Many computational studies have addressed this problem, but the resulting prediction models still have deficiencies. In this paper, we use various feature extraction strategies, dimension reduction technology, and a combined application of classical machine learning models and a recurrent neural network model to achieve accurate enhancer identification and classification, with accuracies of 76.7% and 84.9%, respectively. The proposed model is superior to previous methods in performance or feature dimension, which provides inspiration for the future computational prediction of enhancers.
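The abstract does not name its feature extraction strategies; one common choice for DNA sequence models, shown here as an assumed example, is normalized k-mer frequencies, which yield the fixed-length vectors that dimension reduction and a downstream classifier consume.

```python
from itertools import product

def kmer_features(seq, k=3):
    """Normalized k-mer frequency vector for a DNA sequence (an
    illustrative feature strategy, not necessarily the paper's).
    Returns 4**k values in a fixed lexicographic order."""
    kmers = [''.join(p) for p in product('ACGT', repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in counts:               # skips windows with N bases
            counts[km] += 1
    total = max(1, len(seq) - k + 1)
    return [counts[km] / total for km in kmers]
```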
APA, Harvard, Vancouver, ISO, and other styles
46

Satria, Muhammad Aldila, Retnosari Andrajati, and Sudibyo Supardi. "The Translation Process of Pharmaceutical Care Network Europe v9.00 to Bahasa Indonesia: An Instrument to Detect Drug-Related Problem." Malaysian Journal of Medical Sciences 29, no. 3 (June 28, 2022): 133–44. http://dx.doi.org/10.21315/mjms2022.29.3.13.

Full text
Abstract:
Background: Drug-related problems (DRPs) remain a major health challenge in tertiary health services such as hospitals in Indonesia. These problems are detected and solved using classification systems such as the Pharmaceutical Care Network Europe (PCNE) classification. This study therefore aims to obtain a valid and reliable Bahasa Indonesia version of the PCNE. Methods: A draft of the Bahasa Indonesia version of PCNE v9.00 was discussed by four experts from May to August 2020 using the Delphi method. The instrument was then assessed for readability, clarity and comprehensiveness by 46 hospital pharmacists throughout Indonesia. In October 2020, two pharmacists from Haji General Hospital, Makassar, Indonesia carried out an inter-rater agreement assessment of 20 cases, in which the proportion of coding matches between the two raters was observed. Results: The instrument was found to be valid after passing face and content validity, and the Scale Content Validity Index (S-CVI) values for the five PCNE domains were 0.91, 0.89, 0.93, 0.97 and 0.93, respectively. There was a fair level of agreement between the two raters, ranging from 40% to 90%, and kappa statistics showed substantial agreement on the 'Problems' and 'Causes' domains. Conclusion: The Bahasa Indonesia version of the PCNE v9.00 instrument passed face and content validity as well as inter-rater agreement and can be used in hospital settings.
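The S-CVI reported per domain follows the usual content validity arithmetic: each item's I-CVI is the proportion of experts rating it relevant, and S-CVI/Ave is the mean of the I-CVIs. A minimal sketch (the 3-or-4-on-a-4-point-scale relevance cutoff is the conventional rule, assumed here):

```python
def scale_cvi(ratings):
    """Compute S-CVI/Ave from expert relevance ratings.

    ratings[i][j] is expert j's rating (1-4) for item i; an item
    counts as 'relevant' for a rater when rated 3 or 4.
    """
    item_cvis = []
    for item in ratings:
        relevant = sum(1 for r in item if r >= 3)
        item_cvis.append(relevant / len(item))   # I-CVI per item
    return sum(item_cvis) / len(item_cvis)       # S-CVI/Ave

# Four experts, three items:
print(scale_cvi([[4, 4, 3, 4], [3, 4, 4, 2], [4, 3, 4, 4]]))  # ~0.92
```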
APA, Harvard, Vancouver, ISO, and other styles
47

Gupta, Ritu, Anurag Mishra, and Sarika Jain. "Secure Image Watermarking in a Compressed SPIHT Domain Using Paillier Cryptosystem." International Journal of Information System Modeling and Design 10, no. 4 (October 2019): 51–70. http://dx.doi.org/10.4018/ijismd.2019100103.

Full text
Abstract:
A secure solution to the problem of copyright infringement and content authentication is to carry out image watermarking in the secure signal processing (SSP) domain. Homomorphic encryption is considered one such solution for image watermarking in this domain, and the Paillier encryption scheme is found to be suitable for image processing applications in general and for watermarking in particular. In this article, a detailed investigation is carried out using the Paillier cryptosystem on twelve different color images in a compressed domain. The host images are compressed with SPIHT (Set Partitioning in Hierarchical Trees) coding. The visual quality of the images after embedding and after image processing attacks is assessed using two full-reference metrics, Peak Signal to Noise Ratio (PSNR) and the Structural Similarity Index (SSIM). The performance of the Paillier cryptosystem vis-à-vis watermark application development is evaluated by computing three benchmark metrics: number of pixels change rate (NPCR), unified average changing intensity (UACI) and encryption speed.
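What makes Paillier attractive for encrypted-domain watermarking is its additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A toy-sized sketch of the textbook scheme follows (tiny primes for illustration only; real use needs large primes, and the embedding pipeline of the paper is not reproduced).

```python
import random
from math import gcd

p, q = 47, 59                                  # toy primes -- illustration only
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1                                      # standard simple generator choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c = (encrypt(42) * encrypt(7)) % n2            # multiply ciphertexts...
assert decrypt(c) == 49                        # ...decrypts to the plaintext sum
```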
APA, Harvard, Vancouver, ISO, and other styles
48

MONTAZERI, ALLAHYAR, JAVAD POSHTAN, and MOHAMMAD HOSSEIN KAHAIE. "APPLICATION OF GENETIC ALGORITHMS FOR OVERALL OPTIMIZATION OF AN ACTIVE NOISE CONTROL SYSTEM IN AN ENCLOSURE." Fluctuation and Noise Letters 08, no. 01 (March 2008): L51—L64. http://dx.doi.org/10.1142/s021947750800426x.

Full text
Abstract:
One of the most important aspects of designing an active control system is optimizing the position and number of sensors and actuators. In this paper, this problem is addressed for the implementation of a multi-channel active noise control (ANC) system aimed at global reduction of broadband noise in a telephone kiosk. This includes optimizing the locations of loudspeakers and microphones, finding the proper size of the control system (i.e., the number of loudspeakers and microphones), and optimizing the control signals. The mean acoustic potential energy in the enclosure over the frequency range 50 Hz to 300 Hz is selected as the performance index for optimization. Several genetic algorithms are proposed and compared to find the global minimum of this performance index. To improve performance in reaching the global minimum, the parameters of these genetic algorithms are tuned and the best algorithm among them is selected; the main difference between the proposed algorithms is the coding scheme used. Numerical simulations of the acoustic potential energy, and of the sound pressure at the height where a person's head would be located, confirm the optimality of the locations proposed by the genetic algorithm.
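For context, acoustic potential energy has a standard definition; the band average used as the fitness function then follows directly (the paper's exact discretization over the band is not given in the abstract):

```latex
E_p(\omega) = \frac{1}{4 \rho_0 c_0^{2}} \int_{V} \left| p(\mathbf{x}, \omega) \right|^{2} \mathrm{d}V,
\qquad
J = \frac{1}{\omega_2 - \omega_1} \int_{\omega_1}^{\omega_2} E_p(\omega)\, \mathrm{d}\omega,
```

where $p(\mathbf{x},\omega)$ is the sound pressure in the enclosure volume $V$, $\rho_0$ and $c_0$ are the air density and speed of sound, and $\omega_1/2\pi = 50$ Hz, $\omega_2/2\pi = 300$ Hz.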
APA, Harvard, Vancouver, ISO, and other styles
49

Khvostov, A. A., S. G. Tikhomirov, I. A. Khaustov, A. A. Zhuravlev, and A. V. Karmanov. "Matrix-graph model of the polymer materials destruction process." Proceedings of the Voronezh State University of Engineering Technologies 80, no. 3 (December 17, 2018): 50–55. http://dx.doi.org/10.20914/2310-1202-2018-3-50-55.

Full text
Abstract:
The paper deals with the mathematical modeling of thermochemical destruction using graph theory. To synthesize the mathematical model, a Markov chain is used, and the model is formalized by a matrix-graph coding method. Destruction is treated as a random process in which the state of the system changes, characterized by the proportion of macromolecules in each fraction of the molecular weight distribution (MWD). The transition intensities between states correspond to the destruction rates for each MWD fraction. The processes of crosslinking and polymerization are neglected, and it is assumed that a transition is possible from any state with a lower index (corresponding to fractions with higher molecular weights) to any state with a higher index (corresponding to fractions with lower molecular weights). A computational formula is presented for estimating the number of arcs and model parameters from a given number of MWD fractions. An example is shown of coding, in matrix form, a graph model of the degradation of polybutadiene in solution for the case of six MWD fractions. The interactive graphical simulation environment MathWorks Simulink is used for modeling. To estimate the parameters of the mathematical model, experimental studies of the degradation of polybutadiene in solution were carried out, with chromatography of the polybutadiene solution providing the initial data for estimating the polymer MWD. The considered matrix-graph representation of the structure of the mathematical model simplifies the compilation of the model and its software implementation when the graph describing the destruction process has a large number of vertices.
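The transition structure described in the abstract (moves only from heavier to lighter fractions) makes the chain's intensity matrix upper triangular, with n(n-1)/2 free parameters for n fractions; a minimal sketch, with an assumed nested-list layout for the rates:

```python
import numpy as np

def destruction_generator(rates):
    """Build the intensity (generator) matrix of the destruction chain:
    state i (heavier fraction) can move only to states j > i (lighter
    fractions), so Q is upper triangular above the diagonal, and each
    diagonal entry makes its row sum to zero. rates[i][j] is the
    destruction intensity from fraction i to fraction j."""
    n = len(rates)
    Q = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            Q[i, j] = rates[i][j]
        Q[i, i] = -Q[i, i + 1:].sum()
    return Q

# Six MWD fractions, as in the polybutadiene example, give
# 6 * 5 / 2 = 15 transition intensities to estimate.
```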
APA, Harvard, Vancouver, ISO, and other styles
50

Lei, Bin, Zhaoyuan Jiang, and Haibo Mu. "Integrated Optimization of Mixed Cargo Packing and Cargo Location Assignment in Automated Storage and Retrieval Systems." Discrete Dynamics in Nature and Society 2019 (February 4, 2019): 1–16. http://dx.doi.org/10.1155/2019/9072847.

Full text
Abstract:
To improve the delivery efficiency of an automated storage and retrieval system, the integrated optimization of mixed cargo packing and cargo location assignment is addressed. An integrated optimization model of mixed cargo packing and location assignment, minimizing the stacker's travel time over a given historical period, is established and transformed into a conditional packing problem. An improved hybrid genetic algorithm based on a group coding method is designed to solve the problem. When the initial population is generated, a new heuristic algorithm is designed to improve the convergence speed of the genetic algorithm by considering the correlation and outbound frequency of the goods. A heuristic algorithm for the two-dimensional rectangular packing problem is designed to determine whether several kinds of goods can be packed together. Taking actual data from the automated storage and retrieval system of an aviation food company as an example, the established model and the designed algorithm are verified, and the influence of changes in outbound delivery orders on the optimization result is analyzed. The results show that, compared with separate storage of goods based on cube-per-order index rules and with a phased optimization method for mixed storage, the integrated optimization method improves the outbound delivery efficiency of the stacking machine by 11.43–25.98% and 1.73–5.51%, respectively, and reduces the number of cargo locations used by 50–55% and 0–10%, respectively. The stronger the outbound correlation of the goods, the greater the potential of the proposed method to improve stacker efficiency.
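In a group coding scheme a chromosome is a partition of goods into groups (one group per mixed pallet or cargo location) rather than a flat permutation. The initializer below is only a sketch of that representation: the paper's heuristic seeds groups by outbound correlation and frequency, and the feasibility of sharing a group is decided by its 2-D rectangular packing check, neither of which is reproduced here.

```python
import random

def random_grouping(items, max_group_size):
    """Random group-coded chromosome: shuffle the goods, then cut the
    sequence into groups of at most max_group_size. Each group stands
    for one cargo location's mixed contents."""
    items = items[:]
    random.shuffle(items)
    groups, current = [], []
    for it in items:
        current.append(it)
        if len(current) == max_group_size:
            groups.append(current)
            current = []
    if current:
        groups.append(current)
    return groups
```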
APA, Harvard, Vancouver, ISO, and other styles