
Journal articles on the topic 'Logic optimizations'

Listed below are the top 50 journal articles for research on the topic 'Logic optimizations'.


1

Rus, Teodor, and Eric van Wyk. "Using Model Checking in a Parallelizing Compiler." Parallel Processing Letters 8, no. 4 (December 1998): 459–71. http://dx.doi.org/10.1142/s0129626498000468.

Abstract:
In this paper we describe the usage of temporal logic model checking in a parallelizing compiler to analyze the structure of a source program and locate opportunities for optimization and parallelization. The source program is represented as a process graph in which the nodes are sequential processes and the edges are control and data dependence relationships between the computations at the nodes. By labeling the nodes and edges with descriptive atomic propositions and by specifying the conditions necessary for optimizations and parallelizations as temporal logic formulas, we can use a model checker to locate nodes of the process graph where particular optimizations can be made. To discover opportunities for new optimizations or modify existing ones in this parallelizing compiler, we need only specify their conditions as temporal logic formulas; we do not need to add to or modify the code of the compiler. This greatly simplifies the process of locating optimization and parallelization opportunities in the source program and makes it easier to experiment with complex optimizations. Hence, this methodology provides a convenient, concise, and formal framework in which to carry out program optimizations by compilers.
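The idea of locating optimization sites by checking formulas over a labelled process graph can be sketched in a few lines. Everything below (the graph, the 'loop'/'pure' labels, and the parallelization condition) is invented for illustration; the authors' system uses a full temporal-logic model checker, not this toy:

```python
# Minimal sketch of the idea in Rus & van Wyk: represent the program as a
# process graph with labelled nodes, then check a temporal-logic-style
# condition to find nodes where an optimization applies.

# Process graph: node -> set of successor nodes
graph = {
    "n1": {"n2", "n3"},
    "n2": {"n4"},
    "n3": {"n4"},
    "n4": set(),
}

# Atomic propositions labelling each node (invented for this example)
labels = {
    "n1": {"loop"},
    "n2": {"pure"},   # no side effects
    "n3": {"pure"},
    "n4": {"join"},
}

def holds(node, prop):
    return prop in labels[node]

def ex(node, prop):
    """EX prop: some successor satisfies prop."""
    return any(holds(s, prop) for s in graph[node])

def ax(node, prop):
    """AX prop: every successor satisfies prop."""
    return all(holds(s, prop) for s in graph[node])

# Hypothetical parallelization condition: a loop node all of whose direct
# successors are side-effect free can have its body tasks run in parallel.
candidates = [n for n in graph if holds(n, "loop") and ax(n, "pure")]
print(candidates)  # ['n1']
```

The point the abstract makes is that adding a new optimization means adding a new formula (a new `candidates` query), not new compiler code.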
2

Kudva, P., A. Sullivan, and W. Dougherty. "Measurements for structural logic synthesis optimizations." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 22, no. 6 (June 2003): 665–74. http://dx.doi.org/10.1109/tcad.2003.811456.

3

Khurshid, Burhan, and Roohie Naaz. "Technology-Dependent Optimization of FIR Filters Based on Carry-Save Multiplier and 4:2 Compressor Unit." Electronics ETF 20, no. 2 (July 14, 2017): 43. http://dx.doi.org/10.7251/els1620043k.

Abstract:
This work presents an FPGA implementation of FIR filter based on 4:2 compressor and CSA multiplier unit. The hardware realizations presented in this paper are based on the technology-dependent optimization of these individual units. The aim is to achieve an efficient mapping of these isolated units on Xilinx FPGAs. Conventional filter implementations consider only technology-independent optimizations and rely on Xilinx CAD tools to map the logic onto FPGA fabric. Very often this results in inefficient mapping. In this paper, we consider the traditional CSA-4:2 compressor based FIR filters and restructure these units to achieve improved integration levels. The technology optimized Boolean networks are then coded using instantiation based coding strategies. The Xilinx tool then uses its own optimization strategies to further optimize the networks and generate circuits with high logic densities and reduced depths. Experimental results indicate a significant improvement in performance over traditional realizations. An important property of technology-dependent optimizations is the simultaneous improvement in all the performance parameters. This is in contrast to the technology-independent optimizations where there is always an application driven trade-off between different performance parameters.
4

Lacey, David, Neil D. Jones, Eric Van Wyk, and Carl Christian Frederiksen. "Proving correctness of compiler optimizations by temporal logic." ACM SIGPLAN Notices 37, no. 1 (January 2002): 283–94. http://dx.doi.org/10.1145/565816.503299.

5

Zhou, Neng-Fa. "Global Optimizations in a Prolog Compiler for the TOAM." Journal of Logic Programming 15, no. 4 (April 1993): 275–94. http://dx.doi.org/10.1016/s0743-1066(14)80001-0.

6

Zhou, Neng-Fa, Taisuke Sato, and Yi-Dong Shen. "Linear tabling strategies and optimizations." Theory and Practice of Logic Programming 8, no. 1 (August 6, 2007): 81–109. http://dx.doi.org/10.1017/s147106840700316x.

Abstract:
Recently there has been a growing interest in research in tabling in the logic programming community because of its usefulness in a variety of application domains including program analysis, parsing, deductive databases, theorem proving, model checking, and logic-based probabilistic learning. The main idea of tabling is to memorize the answers to some subgoals and use the answers to resolve subsequent variant subgoals. Early resolution mechanisms proposed for tabling such as OLDT and SLG rely on suspension and resumption of subgoals to compute fixpoints. Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived on the basis of the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This article describes two strategies, namely, lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of interdependent subgoals as represented by a topmost looping subgoal is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this article, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e., sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably in speed with the state-of-the-art implementation of SLG.
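The semi-naive optimization the authors import from bottom-up evaluation can be illustrated on the classic transitive-closure program: only the answers derived in the previous iteration (the "delta") are joined with the base facts, so no join is ever recomputed. The edge facts below are invented; this sketches the general semi-naive idea, not B-Prolog's linear-tabling implementation:

```python
# Semi-naive bottom-up evaluation of
#   path(X,Y) :- edge(X,Y).
#   path(X,Y) :- path(X,Z), edge(Z,Y).
# Only newly derived paths (delta) are joined with edge/2 each round.

edges = {(1, 2), (2, 3), (3, 4)}  # invented base facts

path = set(edges)   # base case: every edge is a path
delta = set(edges)  # answers produced in the last iteration

while delta:
    # join only the newly derived paths with edge/2
    new = {(x, w) for (x, y) in delta for (z, w) in edges if y == z}
    delta = new - path  # keep only genuinely new answers
    path |= delta

print(sorted(path))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Naive evaluation would re-join the entire `path` relation every round; restricting the join to `delta` is exactly what avoids the redundant joins the abstract mentions.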
7

Bárány, Vince, Michael Benedikt, and Balder ten Cate. "Some Model Theory of Guarded Negation." Journal of Symbolic Logic 83, no. 4 (December 2018): 1307–44. http://dx.doi.org/10.1017/jsl.2018.64.

Abstract:
The Guarded Negation Fragment (GNFO) is a fragment of first-order logic that contains all positive existential formulas, can express the first-order translations of basic modal logic and of many description logics, along with many sentences that arise in databases. It has been shown that the syntax of GNFO is restrictive enough so that computational problems such as validity and satisfiability are still decidable. This suggests that, in spite of its expressive power, GNFO formulas are amenable to novel optimizations. In this article we study the model theory of GNFO formulas. Our results include effective preservation theorems for GNFO, effective Craig Interpolation and Beth Definability results, and the ability to express the certain answers of queries with respect to a large class of GNFO sentences within very restricted logics.
8

Hernández-Ramos, José L., Antonio J. Jara, Leandro Marín, and Antonio F. Skarmeta Gómez. "DCapBAC: embedding authorization logic into smart things through ECC optimizations." International Journal of Computer Mathematics 93, no. 2 (May 22, 2014): 345–66. http://dx.doi.org/10.1080/00207160.2014.915316.

9

Hsiao, K. S., and C. H. Chen. "Wake-Up Logic Optimizations Through Selective Match and Wakeup Range Limitation." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 14, no. 10 (October 2006): 1089–102. http://dx.doi.org/10.1109/tvlsi.2006.884150.

10

Sheriff, Bonnie A., Dunwei Wang, James R. Heath, and Juanita N. Kurtin. "Complementary Symmetry Nanowire Logic Circuits: Experimental Demonstrations and in Silico Optimizations." ACS Nano 2, no. 9 (August 12, 2008): 1789–98. http://dx.doi.org/10.1021/nn800025q.

11

Wang, Hsiao-Fan, and Kuang-Yao Wu. "Preference Approach to Fuzzy Linear Inequalities and Optimizations." Fuzzy Optimization and Decision Making 4, no. 1 (February 2005): 7–23. http://dx.doi.org/10.1007/s10700-004-5567-0.

12

Saeedi, Mehdi, Mona Arabzadeh, Morteza Saheb Zamani, and Mehdi Sedighi. "Block-based quantum-logic synthesis." Quantum Information and Computation 11, no. 3&4 (March 2011): 262–77. http://dx.doi.org/10.26421/qic11.3-4-6.

Abstract:
In this paper, the problem of constructing an efficient quantum circuit for the implementation of an arbitrary quantum computation is addressed. To this end, a basic block based on the cosine-sine decomposition method is suggested which contains l qubits. In addition, a previously proposed quantum-logic synthesis method based on quantum Shannon decomposition is recursively applied to reach unitary gates over l qubits. Then, the basic block is used and some optimizations are applied to remove redundant gates. It is shown that the exact value of l affects the number of one-qubit and CNOT gates in the proposed method. In comparison to the previous synthesis methods, the value of l is examined consequently to improve either the number of CNOT gates or the total number of gates. The proposed approach is further analyzed by considering the nearest neighbor limitation. According to our evaluation, the number of CNOT gates is increased by at most a factor of 5/3 if the nearest neighbor interaction is applied.
13

Balasubramanian, P., D. A. Edwards, and W. B. Toms. "Redundant Logic Insertion and Latency Reduction in Self-Timed Adders." VLSI Design 2012 (May 17, 2012): 1–13. http://dx.doi.org/10.1155/2012/575389.

Abstract:
A novel concept of logic redundancy insertion is presented that facilitates significant latency reduction in self-timed adder circuits. The proposed concept is universal in the sense that it can be extended to a variety of self-timed design methods. Redundant logic can be incorporated to generate efficient self-timed realizations of iterative logic specifications. Based on the case study of a 32-bit self-timed carry-ripple adder, it has been found that redundant implementations minimize the data path latency by 21.1% at the expense of increases in area and power by 2.3% and 0.8% on average compared to their nonredundant counterparts. However, when considering further peephole logic optimizations, it has been observed in a specific scenario that the delay reduction could be as high as 31% while accompanied by only meager area and power penalties of 0.6% and 1.2%, respectively. Moreover, redundant logic adders pave the way for spacer propagation in constant time and garner actual case latency for addition of valid data.
14

Zhang, Ming Ming, Shu Guang Zhao, and Xu Wang. "Reversible Logic Synthesis-Oriented Multi-Objective Automatic Design Method Based on Evolutionary Design Techniques." Key Engineering Materials 439-440 (June 2010): 534–39. http://dx.doi.org/10.4028/www.scientific.net/kem.439-440.534.

Abstract:
This paper applies evolutionary design techniques to reversible logic synthesis, and proposes a reversible logic synthesis-oriented multi-objective automatic design method based on evolutionary design techniques. Firstly, we build a gate-level array model of reversible logic circuits (RLC) in order to model the synthesis problems as constrained multi-objective optimizations. Then, we encode the candidate RLC as a set of binary evolutionary individuals, which are solved by a specialized Pareto-optimal multi-objective evolutionary algorithm. In addition, we adopt a "pre-bit priority" mechanism to repair infeasible individuals and a rule-based local transformation method to simplify the redundant RLC. The results of synthesis experiments demonstrate that the proposed method is feasible and effective, and can automatically synthesize better RLCs.
15

Ni, Haiyan, Jianping Hu, Xuqiang Zhang, and Haotian Zhu. "The Optimizations of Dual-Threshold Independent-Gate FinFETs and Low-Power Circuit Designs." Journal of Circuits, Systems and Computers 29, no. 7 (September 23, 2019): 2050114. http://dx.doi.org/10.1142/s0218126620501145.

Abstract:
In this paper, a method of optimizing dual-threshold independent-gate FinFET devices is discussed, and optimal circuit design is carried out using these optimized devices. Dual-threshold independent-gate FinFETs include low-threshold and high-threshold devices. The low-threshold device is equivalent to two merged parallel short-gate (SG) devices, and the high-threshold device is equivalent to two merged series SG devices. We optimize the device mainly by selecting the appropriate gate work function, gate oxide thickness, silicon body thickness, and so on. Our optimization is based on the Berkeley BSIM-IMG model and verified by a TCAD tool. Based on these optimized devices, we designed compact basic logic gates and two new compact D-type flip-flops. Additionally, we developed a circuit synthesis method based on Binary Decision Diagrams (BDDs) and the optimized compact basic logic gates. Hspice simulations show that circuits using the proposed dual-threshold IG FinFETs have better performance than circuits directly using the short-gate devices.
16

Bowles, Andrew. "Trends in applying abstract interpretation." Knowledge Engineering Review 7, no. 2 (June 1992): 157–71. http://dx.doi.org/10.1017/s0269888900006275.

Abstract:
Abstract interpretation is a principled approach to inferring properties of a program's execution by simulating that execution using an interpreter which computes over some abstraction of the program's usual, concrete domain, and which collects the information of interest during the execution. Abstract interpretation has been used as the basis of research in logic and functional programming, particularly in applications concerned with compiler optimizations. However, abstract interpretation has the potential to be used in other applications, such as debugging or verification of programs. In this paper we review the use of abstract interpretation in both compiler optimizations and other applications, attempting to give a flavour of the kind of information it is possible to infer and some of the issues involved.
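The kind of property inference the abstract describes can be made concrete with the textbook sign-domain example: evaluate a program's arithmetic over abstract signs rather than concrete numbers. The tiny expression language and domain below are invented for illustration:

```python
# Abstract interpretation in miniature: evaluate arithmetic expressions
# over the sign domain {NEG, ZERO, POS, TOP} instead of concrete numbers.

NEG, ZERO, POS, TOP = "-", "0", "+", "?"

def sign(n):
    """Abstract a concrete number to its sign."""
    return NEG if n < 0 else POS if n > 0 else ZERO

def mul(a, b):
    """Abstract multiplication on signs."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def add(a, b):
    """Abstract addition on signs."""
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    if a == b:
        return a
    return TOP  # e.g. POS + NEG: sign unknown

def aeval(expr, env):
    """Evaluate an expression tree over the sign domain."""
    op = expr[0]
    if op == "const":
        return sign(expr[1])
    if op == "var":
        return env[expr[1]]
    a, b = aeval(expr[1], env), aeval(expr[2], env)
    return mul(a, b) if op == "mul" else add(a, b)

# (-3) * x: the analysis proves the result is negative whenever x is
# positive, without ever knowing x's concrete value.
e = ("mul", ("const", -3), ("var", "x"))
print(aeval(e, {"x": POS}))  # '-'
```

A compiler could use such a result to, say, delete a dead `if result >= 0` branch; real analyzers use richer domains and a fixpoint over loops.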
17

Ourahou, Meriem, Wiam Ayrir, and Ali Haddi. "Current correction and fuzzy logic optimizations of Perturb & Observe MPPT technique in photovoltaic panel." International Journal for Simulation and Multidisciplinary Design Optimization 10 (2019): A6. http://dx.doi.org/10.1051/smdo/2019007.

Abstract:
This paper presents a two-way optimization of the Perturb & Observe (P&O) maximum power point tracking (MPPT) technique using current correction and fuzzy logic techniques. In fact, photovoltaic (PV) energy has become more and more coveted today. In the future, it will become a necessity. To ensure its optimization, the maximum power point tracking method is considered a key technology in PV systems. One of the most used MPPT methods is the P&O technique. In this paper, we focus on optimizing this method based on two techniques. A first attempt estimates a current correction of the P&O algorithm in case of illumination variation. Then, a fuzzy logic optimization is applied to reduce power loss. It is shown that both proposed techniques are very effective, allow considerable improvement of accuracy, and are less affected by sudden variation of climatic parameters. The proposed approaches are tested via Matlab software and compared with the classical P&O algorithm. Through applications, we conclude that the two optimized methods offer a remarkable improvement concerning power losses.
18

Saabas, Ando, and Tarmo Uustalu. "Program and proof optimizations with type systems." Journal of Logic and Algebraic Programming 77, no. 1-2 (September 2008): 131–54. http://dx.doi.org/10.1016/j.jlap.2008.05.007.

19

Bobillo, Fernando, Miguel Delgado, and Juan Gómez-Romero. "Crisp Representations and Reasoning for Fuzzy Ontologies." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 17, no. 4 (August 2009): 501–30. http://dx.doi.org/10.1142/s0218488509006121.

Abstract:
Classical ontologies are not suitable to represent imprecise or uncertain pieces of information. Fuzzy Description Logics were born to represent the former type of knowledge, but they require an appropriate fuzzy language to be agreed on and an important number of available resources to be adapted. This paper addresses these problems by presenting a reasoning-preserving procedure to obtain a crisp representation for a fuzzy extension of the logic [Formula: see text] which includes fuzzy nominals and trapezoidal membership functions, and uses Gödel implication in the semantics of fuzzy concept and role subsumption. This reduction makes it possible to reuse a crisp representation language as well as currently available reasoners. Our procedure is optimized with respect to related work, reducing the size of the resulting knowledge base. Finally, we also suggest some further optimizations before applying crisp reasoning.
20

Al-Rabadi, Anas. "Three-dimensional lattice logic circuits, Part III: Solving 3D volume congestion problem." Facta universitatis - series: Electronics and Energetics 18, no. 1 (2005): 29–43. http://dx.doi.org/10.2298/fuee0501029a.

Abstract:
This part is a continuation of the first and second parts of my paper. In a previous work, symmetry indices were related to regular logic circuits for the realization of logic functions. In this paper, a more general treatment that produces 3D regular lattice circuits using operations on symmetry indices is presented. A new decomposition called the Iterative Symmetry Indices Decomposition (ISID) is implemented for the 3D design of lattice circuits. The synthesis of regular two-dimensional circuits using ISID has been introduced previously, and the synthesis of area-specific circuits using ISID has been demonstrated. The new multiple-valued ISID algorithm has several applications, such as: (1) multi-stage decompositions of multiple-valued logic functions for various lattice circuit layout optimizations, and (2) the synthesis of ternary functions using three-dimensional regular lattice circuits whenever volume-specific layout constraints have to be satisfied.
21

Sasamal, Trailokya Nath, Anand Mohan, and Ashutosh Kumar Singh. "Efficient Design of Reversible Logic ALU Using Coplanar Quantum-Dot Cellular Automata." Journal of Circuits, Systems and Computers 27, no. 2 (September 11, 2017): 1850021. http://dx.doi.org/10.1142/s0218126618500214.

Abstract:
Quantum-dot Cellular Automata (QCA) based reversible logic is of utmost necessity to achieve an architecture at nano-scale, which promises extremely low power consumption with high device density and faster computation. This paper emphasises the design of an efficient reversible Arithmetic Logic Unit (ALU) block in QCA technology. We have considered [Formula: see text] RUG (Reversible Universal Gate) as the basic unit, and also report an HDLQ model for RUG with 52.2% fault tolerance capability. Further, the reversible ALU is synthesized from a reversible logic unit (RLU) and a reversible arithmetic unit (RAU). We also demonstrate QCA implementations of the RLU and RAU with lower complexity and cell counts. The proposed ALU needs only 64 MVs (Majority Voters), a 40% optimization in majority gate count compared with the existing result. QCADesigner-2.0.3 is used to verify the proposed designs.
22

Gunay, Noel S., and Elmer P. Dadios. "An Optimized Multi-Output Fuzzy Logic Controller for Real-Time Control." Journal of Advanced Computational Intelligence and Intelligent Informatics 12, no. 4 (July 20, 2008): 370–76. http://dx.doi.org/10.20965/jaciii.2008.p0370.

Abstract:
Any real-time control application run by a digital computer (or any sequential machine) demands a very fast processor in order to make the time lag from data sensing to issuance of a control action as close to zero as possible. In some instances, the algorithm used requires a relatively large primary memory, which is crucial especially when implemented in a microcontroller. This paper presents a novel implementation of a multi-output fuzzy controller (known in this paper as MultiOFuz), which uses less memory and executes faster than an existing set of multiple single-output fuzzy logic controllers. The design and implementation of the developed controller employed the object-oriented approach with program-level code optimizations. MultiOFuz is a reusable software component, and the simplicity of interfacing it to control applications is presented. Comparative analyses of algorithms, memory usage, and simulations are presented to support our claim of increased efficiency in both execution time and storage use. Future directions for MultiOFuz are also discussed.
23

Zhao, Lingying, Min Ye, and Xinxin Xu. "Intelligent optimization of EV comfort based on a cooperative braking system." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 235, no. 10-11 (March 19, 2021): 2904–16. http://dx.doi.org/10.1177/09544070211004461.

Abstract:
To address the comfort of an electric vehicle, a coupling mechanism between mechanical friction braking and electric regenerative braking was studied. A cooperative braking system model was established, and comprehensive simulations and system optimizations were carried out. The performance of the cooperative braking system was analyzed. The distribution of the braking force was optimized by an intelligent method, and a braking force distribution logic diagram based on comfort was proposed. Using an intelligent algorithm, the braking force was distributed between the two braking systems and between the driving and driven axles. An experiment based on comfort was carried out. The results show that comfort after optimization is improved by 76.29% compared with that before optimization, comparing RMS values in the time domain. The reason is that the braking force distribution strategy based on the optimization takes into account the driver's braking demand, the maximum braking torque of the motor, and the requirements of vehicle comfort, and makes full use of the braking torque of the motor. The error between simulation results and experimental results is 5.13%, which indicates that the braking force distribution strategy is feasible.
24

Abdelmassih, Gorg, Mohammed Al-Numay, and Abdelali El Aroudi. "Map Optimization Fuzzy Logic Framework in Wind Turbine Site Selection with Application to the USA Wind Farms." Energies 14, no. 19 (September 26, 2021): 6127. http://dx.doi.org/10.3390/en14196127.

Abstract:
In this study, we analyze observational and predicted wind energy datasets of the lower 48 states of the United States, and we intend to predict an optimal map for new turbine placement. Several approaches have been implemented to investigate the correlation between current wind power stations, power capacity, wind seasonality, and site selection. The correlation between stations is carried out according to the Pearson correlation coefficient approach joined with the spherical law of cosines to calculate the distances. The high correlation values between the stations spaced within a distance of 100 km show that installing more turbines close to the current farms would assist the electrical grid. The total power capacity indicates that the current wind turbines are utilizing approximately 70% of the wind resources available in the turbines' sites. The power spectrum of Fourier's spectral density indicates that main, secondary, and harmonic frequencies correspond to yearly, semiyearly, and daily wind-speed periodic patterns. We propose and validate a numerical approach based on a novel fuzzy logic framework for wind turbine placement. Map optimizations are fitted considering different parameters presented in wind speed, land use, price, and elevation. Map optimization results show that suitable sites for turbine placement are in general agreement with the direction of the correlation approach.
25

Ly, Hai-Bang, Lu Minh Le, Luong Van Phi, Viet-Hung Phan, Van Quan Tran, Binh Thai Pham, Tien-Thinh Le, and Sybil Derrible. "Development of an AI Model to Measure Traffic Air Pollution from Multisensor and Weather Data." Sensors 19, no. 22 (November 13, 2019): 4941. http://dx.doi.org/10.3390/s19224941.

Abstract:
Gas multisensor devices offer an effective approach to monitor air pollution, which has become a pandemic in many cities, especially because of transport emissions. To be reliable, properly trained models need to be developed that combine output from sensors with weather data; however, many factors can affect the accuracy of the models. The main objective of this study was to explore the impact of several input variables in training different air quality indexes using fuzzy logic combined with two metaheuristic optimizations: simulated annealing (SA) and particle swarm optimization (PSO). In this work, the concentrations of NO2 and CO were predicted using five resistivities from multisensor devices and three weather variables (temperature, relative humidity, and absolute humidity). In order to validate the results, several measures were calculated, including the correlation coefficient and the mean absolute error. Overall, PSO was found to perform the best. Finally, input resistivities of NO2 and non-methane hydrocarbons (NMHC) were found to be the most sensitive to predict concentrations of NO2 and CO.
26

RIGUZZI, FABRIZIO, and TERRANCE SWIFT. "The PITA system: Tabling and answer subsumption for reasoning under uncertainty." Theory and Practice of Logic Programming 11, no. 4-5 (July 2011): 433–49. http://dx.doi.org/10.1017/s147106841100010x.

Abstract:
Many real-world domains require the representation of a measure of uncertainty. The most common such representation is probability, and the combination of probability with logic programs has given rise to the field of Probabilistic Logic Programming (PLP), leading to languages such as the Independent Choice Logic, Logic Programs with Annotated Disjunctions (LPADs), Problog, PRISM, and others. These languages share a similar distribution semantics, and methods have been devised to translate programs between these languages. The complexity of computing the probability of queries to these general PLP programs is very high due to the need to combine the probabilities of explanations that may not be exclusive. As one alternative, the PRISM system reduces the complexity of query answering by restricting the form of programs it can evaluate. As an entirely different alternative, Possibilistic Logic Programs adopt a simpler metric of uncertainty than probability. Each of these approaches—general PLP, restricted PLP, and Possibilistic Logic Programming—can be useful in different domains depending on the form of uncertainty to be represented, on the form of programs needed to model problems, and on the scale of the problems to be solved. In this paper, we show how the PITA system, which originally supported the general PLP language of LPADs, can also efficiently support restricted PLP and Possibilistic Logic Programs. PITA relies on tabling with answer subsumption and consists of a transformation along with an API for library functions that interface with answer subsumption. We show that, by adapting its transformation and library functions, PITA can be parameterized to PITA(IND, EXC) which supports the restricted PLP of PRISM, including optimizations that reduce non-discriminating arguments and the computation of Viterbi paths. Furthermore, we show PITA to be competitive with PRISM for complex queries to Hidden Markov Model examples, and sometimes much faster. We further show how PITA can be parameterized to PITA(COUNT) which computes the number of different explanations for a subgoal, and to PITA(POSS) which scalably implements Possibilistic Logic Programming. PITA is a supported package in version 3.3 of XSB.
27

Böhmer, Kristof, and Stefanie Rinderle-Ma. "Automatic Business Process Test Case Selection: Coverage Metrics, Algorithms, and Performance Optimizations." International Journal of Cooperative Information Systems 25, no. 04 (December 2016): 1740002. http://dx.doi.org/10.1142/s0218843017400020.

Abstract:
Business processes describe and implement the business logic of companies, control human interaction, and invoke heterogeneous services during runtime. Therefore, ensuring the correct execution of processes is crucial. Existing work addresses this challenge through process verification. However, the highly dynamic aspects of current processes and the deep integration and frequent invocation of third-party services limit the use of static verification approaches. Today, one frequently utilized approach to address this limitation is to apply process tests. However, the complexity of process models is steadily increasing, so more and more test cases are required to assure process model correctness and stability during design and maintenance. But executing hundreds or even thousands of process model test cases leads to excessive test suite execution times and, therefore, high costs. Hence, this paper presents novel coverage metrics along with a genetic test case selection algorithm. Both enable the incorporation of user-driven test case selection requirements and the integration of different knowledge sources. In addition, techniques for test case selection computation performance optimization are provided and evaluated. The effectiveness of the presented genetic test case selection algorithm is evaluated against five alternative test case selection algorithms.
APA, Harvard, Vancouver, ISO, and other styles
28

Bunder, M. W. "Some improvements to Turner's algorithm for bracket abstraction." Journal of Symbolic Logic 55, no. 2 (June 1990): 656–69. http://dx.doi.org/10.2307/2274655.

Full text
Abstract:
A computer handles λ-terms more easily if these are translated into combinatory terms. This translation process is called bracket abstraction. The simplest abstraction algorithm—the (fab) algorithm of Curry (see Curry and Feys [6])—is lengthy to implement and produces combinatory terms that increase rapidly in length as the number of variables to be abstracted increases. There are several ways in which these problems can be alleviated: (1) a change in the order of the clauses in the algorithm so that (f) is performed as a last resort; (2) the use of an extra clause (c), appropriate to βη reduction; (3) the introduction of a finite number of extra combinators. The original 1924 form of bracket abstraction of Schönfinkel [17], which in fact predates λ-calculus, uses all three of these techniques; all are also mentioned in Curry and Feys [6]. A technique employed by many computing scientists (Turner [20], Peyton Jones [16], Oberhauser [15]) is to use the (fab) algorithm followed by certain “optimizations” or simplifications involving extra combinators and sometimes special cases of (c). Another is either to allow a fixed infinite set of (super-)combinators (Abdali [1], Kennaway and Sleep [10], Krishnamurthy [12], Tonino [19]) or to allow new combinators to be defined one by one during the abstraction process (Hughes [7] and [8]). A final method encodes the variables to be abstracted as an n-tuple—this requires only a finite number of combinators (Curien [5], Statman [18]).
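For readers unfamiliar with bracket abstraction, the following sketch implements Curry's clauses with two of the improvements mentioned above: the (f) clause is tried only as a last resort, and the βη clause (c) is included. The term encoding (tuples for variables and applications, strings for combinators) is an illustrative choice, not taken from the paper.

```python
def free_in(x, t):
    """Does variable x occur free in term t?"""
    if isinstance(t, tuple) and t[0] == 'var':
        return t[1] == x
    if isinstance(t, tuple) and t[0] == 'app':
        return free_in(x, t[1]) or free_in(x, t[2])
    return False  # combinator constants S, K, I

def bracket(x, t):
    """[x]t: clauses (a) and (b) first, eta clause (c), then (f) as last resort."""
    if not free_in(x, t):
        return ('app', 'K', t)                    # clause (a): [x]M = K M
    if t == ('var', x):
        return 'I'                                # clause (b): [x]x = I
    m, n = t[1], t[2]
    if n == ('var', x) and not free_in(x, m):
        return m                                  # clause (c): [x](M x) = M
    return ('app', ('app', 'S', bracket(x, m)), bracket(x, n))  # clause (f)
```

Applying clause (c) before (f) is what keeps, e.g., `[x](f x)` at the single term `f` instead of `S (K f) I`, illustrating the length savings the abstract discusses.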
APA, Harvard, Vancouver, ISO, and other styles
29

Kułaga, Rafał, and Marek Gorgoń. "FPGA Implementation of Decision Trees and Tree Ensembles for Character Recognition in Vivado Hls." Image Processing & Communications 19, no. 2-3 (September 1, 2014): 71–82. http://dx.doi.org/10.1515/ipc-2015-0012.

Full text
Abstract:
Decision trees and decision tree ensembles are popular machine learning methods, used for classification and regression. In this paper, an FPGA implementation of decision trees and tree ensembles for letter and digit recognition in Vivado High-Level Synthesis is presented. Two publicly available datasets were used at both training and testing stages. Different optimizations for tree code and tree node layout in memory are considered. Classification accuracy, throughput and resource usage for different training algorithms, tree depths and ensemble sizes are discussed. The correctness of the module’s operation was verified using C/RTL cosimulation and on a Zynq-7000 SoC device, using the Xillybus IP core for data transfer between the processing system and the programmable logic.
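As a rough illustration of the node-layout question the abstract raises, a decision tree can be flattened into an array of fixed-width records and traversed with an iterative, bounded-depth loop, which is the kind of structure that maps well to high-level synthesis. The encoding below is a generic sketch, not the paper's exact layout.

```python
# Tree flattened into an array, as one might lay it out in on-chip memory:
# internal node: (feature_index, threshold, left_index, right_index)
# leaf node:     (-1, class_label, 0, 0)
NODES = [
    (0, 0.5, 1, 2),      # root: compare feature 0 against 0.5
    (-1, 'A', 0, 0),     # leaf: class A
    (1, 0.3, 3, 4),      # compare feature 1 against 0.3
    (-1, 'B', 0, 0),     # leaf: class B
    (-1, 'C', 0, 0),     # leaf: class C
]

def classify(nodes, sample, max_depth=8):
    """Iterative traversal with a statically bounded loop, HLS-style."""
    i = 0
    for _ in range(max_depth):
        feature, value, left, right = nodes[i]
        if feature < 0:          # reached a leaf
            return value
        i = left if sample[feature] <= value else right
    return nodes[i][1]           # bounded fallback
```

Because the loop bound is a constant (the maximum tree depth), a synthesis tool can pipeline or unroll it, which is one reason node layout in memory matters for throughput.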
APA, Harvard, Vancouver, ISO, and other styles
30

Finkbeiner, Bernd, Christopher Hahn, Marvin Stenger, and Leander Tentrup. "Efficient monitoring of hyperproperties using prefix trees." International Journal on Software Tools for Technology Transfer 22, no. 6 (February 20, 2020): 729–40. http://dx.doi.org/10.1007/s10009-020-00552-5.

Full text
Abstract:
Hyperproperties, such as non-interference and observational determinism, relate multiple computation traces with each other and are thus not monitorable by tools that consider computations in isolation. We present the monitoring approach implemented in the latest version of RVHyper, a runtime verification tool for hyperproperties. The input to the tool are specifications given in the temporal logic HyperLTL, which extends linear-time temporal logic (LTL) with trace quantifiers and trace variables. RVHyper processes execution traces sequentially until a violation of the specification is detected. In this case, a counterexample, in the form of a set of traces, is returned. RVHyper employs a range of optimizations: a preprocessing analysis of the specification and a procedure that minimizes the traces that need to be stored during the monitoring process. In this article, we introduce a novel trace storage technique that arranges the traces in a tree-like structure to exploit partially equal traces. We evaluate RVHyper on existing benchmarks on secure information flow control, error correcting codes, and symmetry in hardware designs. As an example application outside of security, we show how RVHyper can be used to detect spurious dependencies in hardware designs.
APA, Harvard, Vancouver, ISO, and other styles
31

Khurshid, Burhan, and Roohie Naaz. "Cost Effective Implementation of Fixed Point Adders for LUT based FPGAs using Technology Dependent Optimizations." Electronics ETF 19, no. 1 (July 22, 2015): 14. http://dx.doi.org/10.7251/els1519014k.

Full text
Abstract:
Modern day field programmable gate arrays (FPGAs) have very huge and versatile logic resources, resulting in the migration of their application domain from prototype designing to low and medium volume production designing. Unfortunately, most of the work pertaining to FPGA implementations does not focus on the technology dependent optimizations that can implement a desired functionality with reduced cost. In this paper we consider the mapping of simple ripple carry fixed-point adders (RCA) on look-up table (LUT) based FPGAs. The objective is to transform the given RCA Boolean network into an optimized circuit netlist that can implement the desired functionality with minimum cost. We particularly focus on 6-input LUTs that are inherent in all the modern day FPGAs. Technology dependent optimizations are carried out to utilize this FPGA primitive efficiently and the result is compared against various adder designs. The implementation targets the XC5VLX30-3FF324 device from the Xilinx Virtex-5 FPGA family. The cost of the circuit is expressed in terms of the resources utilized, critical path delay and the amount of on-chip power dissipated. Our implementation results show a reduction in resource usage by at least 50%, an increase in speed by at least 10%, and a reduction in dynamic power dissipation by at least 30%. All this is achieved without any technology independent (architectural) modification.
APA, Harvard, Vancouver, ISO, and other styles
32

MOHAMMADI, MAJID, MAJID HAGHPARAST, MOHAMMAD ESHGHI, and KEIVAN NAVI. "MINIMIZATION AND OPTIMIZATION OF REVERSIBLE BCD-FULL ADDER/SUBTRACTOR USING GENETIC ALGORITHM AND DON'T CARE CONCEPT." International Journal of Quantum Information 07, no. 05 (August 2009): 969–89. http://dx.doi.org/10.1142/s0219749909005523.

Full text
Abstract:
Reversible logic and binary coded decimal (BCD) arithmetic are two subjects of growing concern in hardware design. This paper proposes a modular synthesis method to realize a reversible BCD full adder (BCD-FA) and subtractor circuit. We propose three approaches to design and optimize all parts of a BCD-FA circuit using a genetic algorithm and the don't care concept. Our first approach is based on Hafiz's work, and the second one is based on the whole BCD-FA circuit design. In the third approach, a binary to BCD converter is presented. Optimizations are done in terms of number of gates, number of garbage inputs/outputs, and the quantum cost of the circuit. We present four designs for the BCD-FA with four different goals: minimum garbage inputs/outputs, minimum quantum cost, minimum number of gates, and an optimum circuit in terms of all the above parameters.
APA, Harvard, Vancouver, ISO, and other styles
33

Eiter, T., M. Fink, T. Krennwallner, C. Redl, and P. Schüller. "Efficient HEX-Program Evaluation Based on Unfounded Sets." Journal of Artificial Intelligence Research 49 (February 26, 2014): 269–321. http://dx.doi.org/10.1613/jair.4175.

Full text
Abstract:
HEX-programs extend logic programs under the answer set semantics with external computations through external atoms. As reasoning from ground Horn programs with nonmonotonic external atoms of polynomial complexity is already on the second level of the polynomial hierarchy, minimality checking of answer set candidates needs special attention. To this end, we present an approach based on unfounded sets as a generalization of related techniques for ASP programs. The unfounded set detection is expressed as a propositional SAT problem, for which we provide two different encodings and optimizations to them. We then integrate our approach into a previously developed evaluation framework for HEX-programs, which is enriched by additional learning techniques that aim at avoiding the reconstruction of the same or related unfounded sets. Furthermore, we provide a syntactic criterion that allows one to skip the minimality check in many cases. An experimental evaluation shows that the new approach significantly decreases runtime.
APA, Harvard, Vancouver, ISO, and other styles
34

HERMENEGILDO, M. V., F. BUENO, M. CARRO, P. LÓPEZ-GARCÍA, E. MERA, J. F. MORALES, and G. PUEBLA. "An overview of Ciao and its design philosophy." Theory and Practice of Logic Programming 12, no. 1-2 (December 30, 2011): 219–52. http://dx.doi.org/10.1017/s1471068411000457.

Full text
Abstract:
We provide an overall description of the Ciao multiparadigm programming system emphasizing some of the novel aspects and motivations behind its design and implementation. An important aspect of Ciao is that, in addition to supporting logic programming (and, in particular, Prolog), it provides the programmer with a large number of useful features from different programming paradigms and styles and that the use of each of these features (including those of Prolog) can be turned on and off at will for each program module. Thus, a given module may be using, e.g., higher order functions and constraints, while another module may be using assignment, predicates, Prolog meta-programming, and concurrency. Furthermore, the language is designed to be extensible in a simple and modular way. Another important aspect of Ciao is its programming environment, which provides a powerful preprocessor (with an associated assertion language) capable of statically finding non-trivial bugs, verifying that programs comply with specifications, and performing many types of optimizations (including automatic parallelization). Such optimizations produce code that is highly competitive with other dynamic languages or, with the (experimental) optimizing compiler, even that of static languages, all while retaining the flexibility and interactive development of a dynamic language. This compilation architecture supports modularity and separate compilation throughout. The environment also includes a powerful autodocumenter and a unit testing framework, both closely integrated with the assertion system. The paper provides an informal overview of the language and program development environment. It aims at illustrating the design philosophy rather than at being exhaustive, which would be impossible in a single journal paper, pointing instead to previous Ciao literature.
APA, Harvard, Vancouver, ISO, and other styles
35

Murthy, G. Ramana, C. Senthilpari, P. Velrajkumar, and T. S. Lim. "Monte-Carlo Analysis of a New 6-T Full-Adder Cell for Power and Propagation Delay Optimizations in 180nm Process." Applied Mechanics and Materials 284-287 (January 2013): 2580–89. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.2580.

Full text
Abstract:
This paper presents a 1-bit full adder using as few as six transistors per bit in its design. It is designed with a combination of multiplexing control input and Boolean identities. The proposed design features lower operating voltage, higher computing speed and lower energy consumption due to the efficient operation of the 6-transistor adder cell. The design adopts the multiplexing-with-control-input technique to alleviate the threshold voltage loss problem commonly encountered in pass transistor logic design. The proposed design successfully embeds the buffering circuit in the full adder while minimizing the transistor count. The improved buffering helps the design operate under lower supply voltage compared with existing works. It also enhances the speed performance of cascaded operation significantly while maintaining the performance edge in energy consumption. For performance comparison, the proposed full adder is evaluated along with four existing full adders via extensive BSIM4 simulation. The simulation results, based on 180nm process models, indicate that the proposed design has the lowest energy consumption per addition; this, together with its edge in speed, makes it suitable for low-power and high-speed embedded processor applications.
APA, Harvard, Vancouver, ISO, and other styles
36

Akram, Shoaib, Alexandros Papakonstantinou, Rakesh Kumar, and Deming Chen. "A Workload-Adaptive and Reconfigurable Bus Architecture for Multicore Processors." International Journal of Reconfigurable Computing 2010 (2010): 1–22. http://dx.doi.org/10.1155/2010/205852.

Full text
Abstract:
Interconnection networks for multicore processors are traditionally designed to serve a diversity of workloads. However, different workloads or even different execution phases of the same workload may benefit from different interconnect configurations. In this paper, we first motivate the need for workload-adaptive interconnection networks. Subsequently, we describe an interconnection network framework based on reconfigurable switches for use in medium-scale (up to 32 cores) shared memory multicore processors. Our cost-effective reconfigurable interconnection network is implemented on a traditional shared bus interconnect with snoopy-based coherence, and it enables improved multicore performance. The proposed interconnect architecture distributes the cores of the processor into clusters with reconfigurable logic between clusters to support workload-adaptive policies for inter-cluster communication. Our interconnection scheme is complemented by interconnect-aware scheduling and additional interconnect optimizations which help boost the performance of multiprogramming and multithreaded workloads. We provide experimental results that show that the overall throughput of multiprogramming workloads (consisting of two and four programs) can be improved by up to 60% with our configurable bus architecture. Similar gains can be achieved also for multithreaded applications as shown by further experiments. Finally, we present the performance sensitivity of the proposed interconnect architecture on shared memory bandwidth availability.
APA, Harvard, Vancouver, ISO, and other styles
37

Tahoori, Mehdi, and Mohammad Saber Golanbari. "Cross-Layer Reliability, Energy Efficiency, and Performance Optimization of Near-Threshold Data Paths." Journal of Low Power Electronics and Applications 10, no. 4 (December 3, 2020): 42. http://dx.doi.org/10.3390/jlpea10040042.

Full text
Abstract:
Modern electronic devices are an indispensable part of our everyday life. A major enabler for such integration is the exponential increase of the computation capabilities as well as the drastic improvement in the energy efficiency over the last 50 years, commonly known as Moore’s law. In this regard, the demand for energy-efficient digital circuits, especially for application domains such as the Internet of Things (IoT), has faced an enormous growth. Since the power consumption of a circuit highly depends on the supply voltage, aggressive supply voltage scaling to the near-threshold voltage region, also known as Near-Threshold Computing (NTC), is an effective way of increasing the energy efficiency of a circuit by an order of magnitude. However, NTC comes with specific challenges with respect to performance and reliability, which mandates new sets of design techniques to fully harness its potential. While techniques focused on only one abstraction level, in particular circuit-level design, can have limited benefits, cross-layer approaches result in far better optimizations. This paper presents instruction multi-cycling and functional unit partitioning methods to improve energy efficiency and resiliency of functional units. The proposed methods significantly improve the circuit timing, and at the same time considerably limit leakage energy, by employing a combination of cross-layer techniques based on circuit redesign and code replacement techniques. Simulation results show that the proposed methods improve performance and energy efficiency of an Arithmetic Logic Unit by 19% and 43%, respectively. Furthermore, the improved performance of the optimized circuits can be traded to improving the reliability.
APA, Harvard, Vancouver, ISO, and other styles
38

Amadio, Guilherme, Ananya, John Apostolakis, Marilena Bandieramonte, Shiba Behera, Abhijit Bhattacharyya, René Brun, et al. "Recent progress with the top to bottom approach to vectorization in GeantV." EPJ Web of Conferences 214 (2019): 02007. http://dx.doi.org/10.1051/epjconf/201921402007.

Full text
Abstract:
SIMD acceleration can potentially boost application throughput by significant factors. Achieving efficient SIMD vectorization for scalar code with complex data flow and branching logic, however, goes well beyond breaking some loop dependencies and relying on the compiler. Since the refactoring effort scales with the number of lines of code, it is important to understand what kind of performance gains can be expected in such complex cases. We started to investigate a couple of years ago a top to bottom vectorization approach to particle transport simulation. Percolating vector data to algorithms was mandatory since not all the components can internally vectorize. Vectorizing low-level algorithms is certainly necessary, but not sufficient to achieve relevant SIMD gains. In addition, the overheads for maintaining the concurrent vector data flow and copying data have to be minimized. In the context of a vectorization R&D for simulation we developed a framework to allow different categories of scalar and vectorized components to co-exist, dealing with data flow management and real-time heuristic optimizations. The paper describes our approach on coordinating SIMD vectorization at framework level, making a detailed quantitative analysis of the SIMD gain versus overheads, with a breakdown by components in terms of geometry, physics and magnetic field propagation. We also present the more general context of this R&D work and goals for 2018.
APA, Harvard, Vancouver, ISO, and other styles
39

Parashar, Abhinav, and Kelath M. Manoj. "Murburn Precepts for Cytochrome P450 Mediated Drug/Xenobiotic Metabolism and Homeostasis." Current Drug Metabolism 22, no. 4 (June 17, 2021): 315–26. http://dx.doi.org/10.2174/1389200222666210118102230.

Full text
Abstract:
Aims: We aim to demonstrate why deeming diffusible reactive oxygen species (DROS) as toxic wastes does not afford a comprehensive understanding of cytochrome P450 mediated microsomal xenobiotic metabolism (mXM). Background: Current pharmacokinetic investigations consider reactive oxygen species formed in microsomal reactions as toxic waste products, whereas our works (Manoj et al., 2016) showed that DROS are the reaction mainstay in cytochrome P450 mediated metabolism and that they play significant roles in explaining several unexplained physiologies. Objective: Herein, we strive to detail the thermodynamic and kinetic foundations of the murburn precepts of cytochrome P450 mediated drug metabolism. Methodology: We primarily use in silico approaches (using PDB crystal structure files), murburn reaction chemistry logic, and thermodynamic calculations to elucidate the new model of CYP-mediated drug metabolism. The theoretical foundations are used to explain experimental observations. Results: We visually elucidate how the murburn model better explains: (i) promiscuity of the unique P450-reductase; (ii) prolific activity and inhibitions of CYP3A4; (iii) structure-function correlations of important key CYP2 family isozymes, 2C9, 2D6 and 2E1; and (iv) mutation studies and mechanism-based inactivation of CYPs. Several other miscellaneous aspects of CYP reaction chemistry are also addressed. Conclusion: In the light of our findings that DROS are crucial for explaining reaction outcomes in mXM, approaches for understanding drug-drug interactions and methodologies for lead drug candidates' optimizations should be revisited.
APA, Harvard, Vancouver, ISO, and other styles
40

Ramana Murthy, G., C. Senthilpari, P. Velrajkumar, and Lim Tien Sze. "Monte-Carlo analysis of a new 6-T full-adder cell for power and propagation delay optimizations in 180 nm process." Engineering Computations 31, no. 2 (February 25, 2014): 149–59. http://dx.doi.org/10.1108/ec-01-2013-0023.

Full text
Abstract:
Purpose – Demand and popularity of portable electronic devices are driving designers to strive for higher speeds, longer battery life and more reliable designs. Recently, an overwhelming interest has been seen in the problems of designing digital systems with low power at no performance penalty. Most very large-scale integration applications, such as digital signal processing, image processing, video processing and microprocessors, extensively use arithmetic operations. Binary addition is considered the most crucial part of the arithmetic unit because all other arithmetic operations usually involve addition. Building low-power and high-performance adder cells is of great interest these days, and any modifications made to the full adder would affect the system as a whole. The full adder design has attracted many designers' attention in recent years, and its power reduction is one of the designers' chief concerns. This paper presents a 1-bit full adder using as few as six transistors (6-T) per bit in its design. The paper aims to discuss these issues. Design/methodology/approach – The outcome of the proposed adder architectural design is based on a micro-architectural specification. This is a textual description, and the adder's schematic can accurately predict the performance, power, propagation delay and area of the design. It is designed with a combination of multiplexing control input (MCIT) and Boolean identities. The proposed design features lower operating voltage, higher computing speed and lower energy consumption due to the efficient operation of the 6-T adder cell. The design adopts the MCIT technique effectively to alleviate the threshold voltage loss problem commonly encountered in pass transistor logic design. Findings – The simulated results of the proposed adder circuit are used to verify the correctness and timing of each component. According to the design concepts, the simulated results are compared to the existing adders from the literature, and significant improvements in the proposed adder are observed. Some of the drawbacks of the existing adder circuits from the literature are as follows. The Shannon theorem-based adder suffers from voltage swing restoration in its sum circuit; due to this problem, the Shannon circuit consumes high power and operates at low speed. The MUX-14T adder circuit is designed using the multiplexer concept, which has a complex node in its design paradigm; the node drivability of the input consumes high power to transmit the voltage level. The MCIT-7T adder circuit is designed using the MCIT technique, which leads to high power consumption in the circuit. The MUX-12T adder circuit is designed by the MCIT technique; its carry circuit has a buffering restoration unit, and its complement leads to high power dissipation and propagation delay. Originality/value – The new 6-T full adder circuit overcomes the drawbacks of the adders from the literature and successfully reduces area, power dissipation and propagation delay.
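The multiplexing-control-input idea can be illustrated at the Boolean level: the carry output is selected through a multiplexer controlled by the propagate signal. This is a gate-level sketch only; it says nothing about the transistor-level techniques that get the cell down to six transistors.

```python
def mux(sel, d0, d1):
    """2-to-1 multiplexer: d1 when sel is 1, else d0."""
    return d1 if sel else d0

def full_adder(a, b, cin):
    """Full adder with the carry produced by a multiplexer."""
    p = a ^ b                  # propagate signal
    s = p ^ cin                # sum
    cout = mux(p, a, cin)      # if p == 0 then a == b, so carry = a; else carry = cin
    return s, cout
```

Exhaustively checking all eight input combinations against binary addition confirms the logic; the appeal of the MUX form in pass-transistor design is that the select line steers an existing signal rather than computing a fresh AND/OR network.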
APA, Harvard, Vancouver, ISO, and other styles
41

Ahmed, Omer K., Raid W. Daoud, Shaimaa M. Bawa, and Ahmed H. Ahmed. "Optimization of PV/T Solar Water Collector based on Fuzzy Logic Control." International Journal of Renewable Energy Development 9, no. 2 (May 10, 2020): 303–10. http://dx.doi.org/10.14710/ijred.9.2.303-310.

Full text
Abstract:
Hybrid solar collectors (PV/T) are designed to produce electricity, hot water, or hot air at the same time, as they combine solar cells and solar heaters in one system. This system is designed to increase the electrical efficiency of solar cells by absorbing heat from these cells. Fuzzy logic (FL) is a tool usually used to optimize the operation of such systems. In this paper, FL is used to monitor and correct the main system parameters so that efficiency remains at an optimal level. Three variables were studied: the effect of reflective mirrors, the effect of the glass cover, and the effect of the lower reflector angle on the performance of the PV/T hybrid solar system. These three parameters serve as inputs to the FL controller, with the PV temperature and the system efficiency as its outputs. Solar radiation was found to have a great effect on the efficiency of the hybrid solar collector. The thermal efficiency was 82% for the given values of the PV and mirrors, while the efficiency dropped to 50% for another angle. Using artificial intelligence, the system behavior depends on its own output (feedback closed-loop control) in a real-time process that optimizes the system efficiency and output. ©2020. CBIORE-IJRED. All rights reserved
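A minimal sketch of the fuzzy inference step may help: triangular memberships fuzzify an input, and a weighted average of rule consequents defuzzifies the output. The membership ranges and the two efficiency levels (82% and 50%) are illustrative placeholders loosely based on the figures quoted in the abstract, not the authors' rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to b, falling from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def controller(radiation):
    """Two-rule Mamdani-style sketch: low radiation -> low efficiency, high -> high."""
    mu_low = tri(radiation, 0, 200, 500)       # membership in 'low radiation' (W/m^2)
    mu_high = tri(radiation, 400, 800, 1200)   # membership in 'high radiation'
    # weighted-average defuzzification of the rule consequents (efficiency %)
    num = mu_low * 50.0 + mu_high * 82.0
    den = mu_low + mu_high
    return num / den if den else 50.0
```

Between the two peaks the output interpolates smoothly, which is the practical advantage of fuzzy control over threshold-based switching.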
APA, Harvard, Vancouver, ISO, and other styles
42

Mavrodiev, Evgeny V., Christopher Dell, and Laura Schroder. "A laid-back trip through the Hennigian Forests." PeerJ 5 (July 21, 2017): e3578. http://dx.doi.org/10.7717/peerj.3578.

Full text
Abstract:
Background: This paper is a comment on the idea of matrix-free Cladistics. Demonstration of this idea’s efficiency is a major goal of the study. Within the proposed framework, the ordinary (phenetic) matrix is necessary only as a “source” of Hennigian trees, not as a primary subject of the analysis. Switching from matrix-based thinking to the matrix-free Cladistic approach clearly reveals that optimizations of the character-state changes are related not to the real processes, but to the form of the data representation. Methods: We focused our study on binary data. We wrote the simple ruby-based script FORESTER version 1.0 that helps represent a binary matrix as an array of rooted trees (as a “Hennigian forest”). The binary representations of the genomic (DNA) data have been made by script1001. The Average Consensus method as well as the standard Maximum Parsimony (MP) approach has been used to analyze the data. Principal findings: The binary matrix may be easily re-written as a set of rooted trees (maximal relationships). The latter might be analyzed by the Average Consensus method. Paradoxically, this method, if applied to the Hennigian forests, in principle can help to identify clades despite the absence of direct evidence from the primary data. Our approach may handle clock- or non-clock-like matrices, as well as hypothetical, molecular or morphological data. Discussion: Our proposal clearly differs from the numerous phenetic alignment-free techniques for the construction of phylogenetic trees. Dealing with relations, not with the actual “data”, also distinguishes our approach from all optimization-based methods, if optimization is defined as a way to reconstruct the sequences of character-state changes on a tree, either by the standard alignment-based techniques or by the “direct” alignment-free procedure. We are not viewing our recent framework as an alternative to the three-taxon statement analysis (3TA), but there are two major differences between our recent proposal and the 3TA as originally designed and implemented: (1) the 3TA deals with three-taxon statements or minimal relationships; according to the logic of 3TA, the set of minimal trees must be established as a binary matrix and used as an input for the parsimony program, whereas in this paper we operate directly with maximal relationships written just as trees, not as binary matrices, while also using the Average Consensus method instead of the MP analysis. (2) The solely ‘reversal’-based groups can always be found by our method without separate scoring of the putative reversals before analysis.
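The core representation is easy to sketch: each informative binary character yields one rooted tree in which the taxa coded 1 form a clade nested inside the remaining taxa. The following is an illustrative sketch of that conversion (emitting Newick strings), not the FORESTER script itself.

```python
def hennigian_forest(taxa, matrix):
    """One rooted tree per binary character: (taxa with state 0, (taxa with state 1))."""
    trees = []
    for col in range(len(matrix[0])):
        ingroup = [t for t, row in zip(taxa, matrix) if row[col] == 1]
        outgroup = [t for t, row in zip(taxa, matrix) if row[col] == 0]
        if len(ingroup) >= 2 and outgroup:   # keep only informative characters
            trees.append('(' + ','.join(outgroup) + ',(' + ','.join(ingroup) + '));')
    return trees
```

Each row of `matrix` is one taxon's character states; the resulting forest can then be handed to a consensus method instead of the original matrix.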
APA, Harvard, Vancouver, ISO, and other styles
43

Qiu, Jian Lin, Fen Li, Xiang Gu, Li Chen, and Yan Yun Chen. "A New Logic Optimization Algorithm of Multi-Valued Logic Function Based on Two-Valued Logic." Applied Mechanics and Materials 121-126 (October 2011): 4330–34. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.4330.

Full text
Abstract:
We present a logic optimization algorithm that includes converting multi-valued logic into two-valued logic and converting two-valued logic back into multi-valued logic. We discuss the algorithm for converting two-valued logic into multi-valued logic on the basis of building an assignment graph, and present a multi-valued logic optimization algorithm. In this paper, we analyze and study logic optimization algorithms based on minterms and design a software system for logic optimization. The system passes the Benchmark tests and validates correctly, showing that the logic optimization software functions well and that its optimization efficiency is high.
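As a generic sketch of the two-valued encoding step (the paper's own minterm-based algorithm is not reproduced here), a multi-valued truth table can be mapped to binary minterms by giving each logic value a fixed-width bit pattern:

```python
def encode_value(v, bits):
    """Binary encoding of a multi-valued logic value, most significant bit first."""
    return tuple((v >> i) & 1 for i in reversed(range(bits)))

def binary_minterms(mv_truth_table, num_values):
    """Binary minterms (bit tuples) at which a single-variable
    multi-valued function is 'on'."""
    bits = (num_values - 1).bit_length()
    return [encode_value(v, bits)
            for v, out in enumerate(mv_truth_table) if out]
```

The resulting two-valued minterms can then be fed to an ordinary binary minimizer, after which a reverse mapping (such as the assignment-graph construction the abstract mentions) recovers a multi-valued form.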
APA, Harvard, Vancouver, ISO, and other styles
44

Despeyroux, Joëlle, and Robert Harper. "Special issue on Logical Frameworks and Metalanguages http//www-sop.inria.fr/certilab/LFM00/cfp-jfp.html." Journal of Functional Programming 10, no. 1 (January 2000): 135–36. http://dx.doi.org/10.1017/s0956796899009892.

Full text
Abstract:
Logical frameworks and meta-languages are intended as a common substrate for representing and implementing a wide variety of logics and formal systems. Their definition and implementation have been the focus of considerable work over the last decade. At the heart of this work is a quest for generality: a logical framework provides a basis for capturing uniformities across deductive systems and support for implementing particular systems. Similarly, a meta-language supports reasoning about and using languages. Logical frameworks have been based on a variety of different languages including higher-order logics, type theories with dependent types, linear logic, and modal logic. Techniques of representation of logics include higher-order abstract syntax, inductive definitions or some form of equational or rewriting logic in which substitution is explicitly encoded. Examples of systems that implement logical frameworks include Alf, Coq, NuPrl, HOL, Isabelle, Maude, lambda-Prolog and Twelf. An active area of research in such systems is the study of automated reasoning techniques. Current work includes the development of various automated procedures as well as the investigation of rewriting tools that use reflection or make use of links with systems that already have sophisticated rewriting systems. Program extraction and optimization are additional topics of ongoing work.
APA, Harvard, Vancouver, ISO, and other styles
45

Saha, Aloke, Rahul Pal, and Jayanta Ghosh. "Novel Self-Pipelining Approach for Speed-Power Efficient Reliable Binary Multiplication." Micro and Nanosystems 12, no. 3 (December 1, 2020): 149–58. http://dx.doi.org/10.2174/1876402911666190916155445.

Full text
Abstract:
Background: The present study explores a novel self-pipelining strategy that can enhance speed-power efficiency as well as the reliability of a binary multiplier as compared to state-of-the-art register and wave pipelining. Method: Proper synchronization with efficient clocking between subsequent self-pipelining stages has been assured to design a self-pipelined multiplier. Each self-pipelining stage consists of self-latching leaf cells that are designed, optimized and evaluated in TSMC 0.18μm CMOS technology with a 1.8V supply rail at 25°C. The T-Spice transient response and simulated results for the designed circuits are presented. The proposed idea has been applied to design a 4-bit×4-bit self-pipelined Wallace-tree multiplier. The multiplier was validated for all possible test patterns and the transient response was evaluated. The circuit performance in terms of propagation delay, average power and power-delay-product (PDP) is recorded. Next, decomposition logic is applied to design higher-order multipliers (i.e., 8-bit×8-bit and 16-bit×16-bit) based on the proposed strategy using the 4-bit×4-bit self-pipelined multiplier. The designed multipliers were also validated through extensive T-Spice simulation for all the required test patterns using W-Edit, and the evaluated performance is presented. All the designs, optimizations and evaluations are based on the BSIM3 device parameters of TSMC 0.18μm CMOS technology with a 1.8V supply rail at 25°C, using S-Edit of Tanner EDA. Results: The reliability of the proposed 4-bit×4-bit multiplier was investigated in the temperature range -40°C to 100°C for maximum PDP variation. Conclusion: A benchmarking analysis in terms of speed-power performance against recent competitive designs reveals the preeminence of the proposed technique.
APA, Harvard, Vancouver, ISO, and other styles
46

Chiang, Hsiao-Yu, Yung-Chih Chen, De-Xuan Ji, Xiang-Min Yang, Chia-Chun Lin, and Chun-Yao Wang. "LOOPLock: Logic Optimization-Based Cyclic Logic Locking." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 39, no. 10 (October 2020): 2178–91. http://dx.doi.org/10.1109/tcad.2019.2960351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Qing Feng. "Optimization of Control and Combustion on a 330MW Face-Fired Boiler." Advanced Materials Research 977 (June 2014): 315–20. http://dx.doi.org/10.4028/www.scientific.net/amr.977.315.

Full text
Abstract:
A pair of contradictory problems, unstable combustion at low load and serious slagging near the burners, appeared on a 330MW boiler made by Babcock. Diagnosis of the distributed control system (DCS) control logic and the boiler operation modes showed that the root cause was coarse control of the primary air (PA) flow, caused by improper measurement or calculation logic blocks in the DCS. These improper DCS control logics were corrected, and the PA flow rates were calibrated by tests in order to obtain precise coefficients for the new logic. Finally, the combustion mode and the control curves of the primary and secondary air pressures were optimally adjusted. As a result, the boiler efficiency increased by 0.75%, the power consumption rate decreased by 0.2%, and the NOx emission concentration was reduced by half.
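The abstract describes calibrating PA flow rates by test to obtain precise coefficients for the corrected DCS calculation logic. A common form for such a flow-calculation block (a hypothetical sketch under the usual square-root flow law, not the paper's actual logic) fits a coefficient k so that flow = k·sqrt(Δp) from calibration-test data:

```python
import math

# Hypothetical PA-flow calculation block: flow through a measuring element is
# proportional to the square root of the differential pressure, flow = k*sqrt(dp).
# The coefficient k is fitted from calibration-test points, analogous to the
# PA flow-rate calibration described in the abstract.

def fit_flow_coefficient(test_dp, test_flow):
    """Least-squares fit of k in flow = k*sqrt(dp) from calibration tests."""
    xs = [math.sqrt(dp) for dp in test_dp]
    return sum(x * f for x, f in zip(xs, test_flow)) / sum(x * x for x in xs)

def pa_flow(dp, k):
    """PA flow computed from measured differential pressure with the fitted k."""
    return k * math.sqrt(dp)

# Synthetic calibration points consistent with k = 12.0 (illustrative units):
dp_points = [100.0, 400.0, 900.0]
flow_points = [120.0, 240.0, 360.0]
k = fit_flow_coefficient(dp_points, flow_points)
print(round(k, 3))        # 12.0
print(pa_flow(625.0, k))  # 300.0
```

An imprecise k in a block like this would systematically misreport PA flow, which is the kind of coarse control the diagnosis attributed to the original DCS logic.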
APA, Harvard, Vancouver, ISO, and other styles
48

Wilson, J. M., K. McAloon, and C. Tretkoff. "Optimization and Computational Logic." Journal of the Operational Research Society 49, no. 7 (July 1998): 768. http://dx.doi.org/10.2307/3010251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Murgai, Rajeev. "Technology-Dependent Logic Optimization." Proceedings of the IEEE 103, no. 11 (November 2015): 2004–20. http://dx.doi.org/10.1109/jproc.2015.2484299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Wilson, J. M. "Optimization and Computational Logic." Journal of the Operational Research Society 49, no. 7 (1998): 768–69. http://dx.doi.org/10.1038/sj.jors.2600021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
