Journal articles on the topic "LOOKUP TABLE APPROACH"

To see the other types of publications on this topic, follow the link: LOOKUP TABLE APPROACH.

Create a correct reference in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "LOOKUP TABLE APPROACH."

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online when this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Malathy, S., and R. Ramaprabha. "Maximum Power Point Tracking Based on Look Up Table Approach." Advanced Materials Research 768 (September 2013): 124–30. http://dx.doi.org/10.4028/www.scientific.net/amr.768.124.

Abstract:
This work proposes a lookup table based approach to track the maximum power from a solar photovoltaic (PV) module. The performance of the solar PV module is greatly influenced by various environmental factors and it is therefore necessary to operate the PV module at its optimal point so as to ensure that maximum power is extracted from the PV source. Several fixed step and variable step maximum power point tracking (MPPT) algorithms have been proposed in the literature. In this paper a simple and fast maximum power tracking method based on lookup table approach is proposed. The maximum power point voltages for various insolation levels are obtained from the experimental setup and are fed to the look up table. This look up table thus formulated can then provide the reference voltage for various insolation conditions without many computations. The performance of the proposed method is compared with that of the conventional MPPT methods like perturb and Observe (P&O), Incremental Conductance (INC) and Fuzzy logic (FLC) based MPPT. The simulation results show that the lookup table (LUT) approach tracks the maximum power point faster than the conventional algorithms under changing illumination conditions and reduces simulation time.
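
The table-driven mechanism this abstract describes can be sketched in a few lines: maximum-power-point voltages measured at a handful of insolation levels are stored, and the controller's reference voltage is read (and interpolated) from that table instead of being searched for iteratively. The breakpoints and voltages below are illustrative placeholders, not the experimental values from the paper.

```python
# Minimal sketch of a lookup-table MPPT reference-voltage generator.
# The insolation/voltage pairs are illustrative, not measured values.
from bisect import bisect_left

# (insolation in W/m^2, measured MPP voltage in V), sorted by insolation
MPP_TABLE = [(200, 15.8), (400, 16.4), (600, 16.9), (800, 17.2), (1000, 17.4)]

def mpp_reference_voltage(insolation: float) -> float:
    """Return the converter reference voltage at a given insolation,
    linearly interpolating between the stored operating points."""
    xs = [g for g, _ in MPP_TABLE]
    if insolation <= xs[0]:
        return MPP_TABLE[0][1]
    if insolation >= xs[-1]:
        return MPP_TABLE[-1][1]
    i = bisect_left(xs, insolation)
    (g0, v0), (g1, v1) = MPP_TABLE[i - 1], MPP_TABLE[i]
    t = (insolation - g0) / (g1 - g0)
    return v0 + t * (v1 - v0)

print(mpp_reference_voltage(700))  # falls between the 600 and 800 W/m^2 entries
```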

2

Kuo-Liang Chung and Shih-Tung Wu. "Inverse halftoning algorithm using edge-based lookup table approach." IEEE Transactions on Image Processing 14, no. 10 (October 2005): 1583–89. http://dx.doi.org/10.1109/tip.2005.854494.

3

Wilcox, Chris, Michelle Mills Strout, and James M. Bieman. "An optimization-based approach to lookup table program transformations." Journal of Software: Evolution and Process 26, no. 6 (September 21, 2013): 533–51. http://dx.doi.org/10.1002/smr.1620.

4

Lakshmipathy, Jagannathan, Wieslaw L. Nowinski, and Eric A. Wernert. "Template-Based Isocontouring." International Journal of Image and Graphics 06, no. 02 (April 2006): 187–204. http://dx.doi.org/10.1142/s0219467806002197.

Abstract:
Different isocontour extraction methods use different cell types (tetrahedral, hexahedral, etc.) depending on the nature of the acquisition grids (structured, unstructured, etc.). The existing isocontouring methods have the following pre-steps for the actual extraction process: (a) identification of cell types, (b) identification of topologically independent instances for each cell type, (c) determination of surface primitives contained in the topologically independent instances and (d) generation of a lookup table such that the name of the entry is an instance of a cell and the entry is the triangle set for that instance. The extraction process outputs the triangles from the lookup table. In this paper we present a novel generic method that enables us to list topologically independent surface primitives called "templates" within any n-polytope cell namely tetrahedra, hexahedra etc. We have also modified the traditional lookup table such that name is the cell instance and the entry is face index representations of all template instances contained in that cell. To show an example, we have applied this approach on a hexahedron and listed the templates and subsequently we have showed how to construct a lookup table. Most modern graphics hardware render triangles faster if they are rendered collectively as triangle strips as opposed to individual triangles. With our modified lookup table approach we can identify triangles in the neighboring cell in a linear time and hence we are able to connect two triangle strips into a longer triangle strip on the fly during the extraction process. We have compared our approach with some existing methods. The following are some of the important features of the method: (1) Simplicity, (2) procedural triangulation and (3) face-index representation.

5

Wolpert, David, and Paul Ampadu. "Adaptive Delay Correction for Runtime Variation in Dynamic Voltage Scaling Systems." Journal of Circuits, Systems and Computers 17, no. 06 (December 2008): 1111–28. http://dx.doi.org/10.1142/s0218126608004861.

Abstract:
Temperature and voltage fluctuations affect delay sensitivity differently, as supply voltage is reduced. These differences make runtime variations particularly difficult to manage in dynamic voltage scaling systems, which adjust supply voltage in accordance with the required operating frequency. To include process variation in current table-lookup methods, a worst-case process is typically assumed. We propose a new method that takes process variation into account and reduces the excessive runtime variation guardbands. Our approach uses a ring oscillator to generate baseline frequencies, and employs a guardband lookup table to offset this baseline. The new method ensures robust operation and reduces power consumption by up to 20% compared with a method that assumes worst-case process variation in filling a lookup table.
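
A minimal sketch of the idea, assuming a dictionary keyed by characterized supply voltages: the ring-oscillator frequency supplies a process- and temperature-aware baseline, and a per-voltage guardband factor from the lookup table derates it to a safe clock frequency. The voltages and factors below are hypothetical.

```python
# Illustrative guardband lookup: measured ring-oscillator frequency gives a
# baseline, and a per-voltage factor (hypothetical values) derates it.
GUARDBAND = {0.8: 0.85, 0.9: 0.88, 1.0: 0.92, 1.1: 0.94, 1.2: 0.95}

def safe_clock_mhz(ring_osc_mhz: float, vdd: float) -> float:
    # Pick the closest characterized supply voltage in the table.
    v = min(GUARDBAND, key=lambda k: abs(k - vdd))
    return ring_osc_mhz * GUARDBAND[v]

print(safe_clock_mhz(500.0, 1.05))  # 500 MHz baseline derated near 1.0 V
```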

6

Huang, Yong-Huai, Kuo-Liang Chung, and Bi-Ru Dai. "Improved inverse halftoning using vector and texture-lookup table-based learning approach." Expert Systems with Applications 38, no. 12 (November 2011): 15573–81. http://dx.doi.org/10.1016/j.eswa.2011.06.002.

7

Li, Shuhui, Timothy A. Haskew, and Yang-Ki Hong. "Investigation of maximum wind power extraction using adaptive virtual lookup-table approach." International Journal of Energy Research 35, no. 11 (June 16, 2010): 964–78. http://dx.doi.org/10.1002/er.1726.

8

Al-Ani, Muzhir Shaban. "ECG Signal Recognition Based on Lookup Table and Neural Networks." UHD Journal of Science and Technology 7, no. 1 (January 21, 2023): 22–31. http://dx.doi.org/10.21928/uhdjst.v7n1y2023.pp22-31.

Abstract:
Electrocardiograph (ECG) signals are a very important part of diagnosing heart diseases in healthcare. The implemented ECG signal recognition system consists of hardware devices, a software algorithm, and a network connection. An ECG is a non-invasive way to help diagnose many common heart problems. A health-care provider can use an ECG to recognize irregular heartbeats, blocked or narrowed arteries in the heart, whether a patient has ever had a heart attack, and the effectiveness of certain heart disease treatments. The main part of the software algorithm is the recognition of ECG signal parameters such as the P-QRST complex. Since the voltages at which handheld ECG equipment operates are shrinking, signal processing has become an important challenge. The implemented ECG signal recognition approach is based on both lookup table and neural network techniques. In this approach, the extracted ECG features are compared with stored features to recognize the heart disease associated with the received ECG features. The introduction of neural network technology adds the benefits of a learning and training process to the system.

9

Fukumoto, Kazui, and Yoshifumi Ogami. "Simulation of CO-H2-Air Turbulent Nonpremixed Flame Using the Eddy Dissipation Concept Model with Lookup Table Approach." Journal of Combustion 2012 (2012): 1–11. http://dx.doi.org/10.1155/2012/496460.

Abstract:
We present a new combustion simulation technique based on a lookup table approach. In the proposed technique, a flow solver extracts the reaction rates from the look-up table using the mixture fraction, progress variable, and reaction time. Look-up table building and combustion simulation are carried out simultaneously. The reaction rates of the chemical species are recorded in the look-up table according to the mixture fraction, progress variable, and time scale of the reaction. Once the reaction rates are recorded, a direct integration to solve the chemical equations becomes unnecessary; thus, the time for computing the reaction rates is shortened. The proposed technique is applied to an eddy dissipation concept (EDC) model and it is validated through a simulation of a CO-H2-air nonpremixed flame. The results obtained by using the proposed technique are compared with experimental and computational data obtained by using the EDC model with direct integration. Good agreement between our method and the EDC model and the experimental data was found. Moreover, the computation time for the proposed technique is approximately 99.2% lower than that of the EDC model with direct integration.
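
The core of the method is memoization: reaction rates are keyed by a discretized (mixture fraction, progress variable, reaction time) triple, and the expensive direct integration runs only on a table miss. A minimal sketch, with a placeholder in place of the chemistry solver:

```python
# Build-as-you-go reaction-rate table: rates are keyed by discretized
# (mixture fraction z, progress variable c, reaction time tau); direct
# integration runs only on a table miss. `integrate_chemistry` is a
# placeholder, not the paper's solver.
rate_table = {}

def integrate_chemistry(z, c, tau):
    # Placeholder for the expensive direct integration of the source terms.
    return {"CO": -0.1 * z * c / tau, "H2": -0.05 * z * c / tau}

def reaction_rates(z, c, tau, dz=0.01, dc=0.01, dtau=1e-4):
    key = (round(z / dz), round(c / dc), round(tau / dtau))
    if key not in rate_table:              # miss: integrate once and record
        rate_table[key] = integrate_chemistry(z, c, tau)
    return rate_table[key]                 # hit: reuse the stored rates

print(reaction_rates(0.35, 0.6, 2e-3))
print(len(rate_table))  # grows only when new (z, c, tau) bins are visited
```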

10

Leoncini, Giovanni, Roger A. Pielke, and Philip Gabriel. "From Model-Based Parameterizations to Lookup Tables: An EOF Approach." Weather and Forecasting 23, no. 6 (December 1, 2008): 1127–45. http://dx.doi.org/10.1175/2008waf2007033.1.

Abstract:
Abstract The goal of this study is to transform the Harrington radiation parameterization into a transfer scheme or lookup table, which provides essentially the same output (heating rate profile and short- and longwave fluxes at the surface) at a fraction of the computational cost. The methodology put forth here does not introduce a new parameterization simply derived from the Harrington scheme but, rather, shows that given a generic parameterization it is possible to build an algorithm, largely not based on the physics, that mimics the outcome of the parent parameterization. The core concept is to compute the empirical orthogonal functions (EOFs) of all of the input variables of the parent scheme, run the scheme on the EOFs, and express the output of a generic input sounding exploiting the input–output pairs associated with the EOFs. The weights are based on the difference between the input and EOFs water vapor mixing ratios. A detailed overview of the algorithm and the development of a few transfer schemes are also presented. Results show very good agreement (r > 0.91) between the different transfer schemes and the Harrington radiation parameterization with a very significant reduction in computational cost (at least 95%).
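
A compact way to picture the transfer scheme: the parent parameterization is evaluated once per EOF profile offline, and a new sounding's output is a weighted blend of those stored outputs, with weights derived from how close its water-vapor profile is to each EOF profile. The inverse-distance weighting and toy profiles below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of an EOF-based transfer scheme with placeholder physics.
import numpy as np

def parent_scheme(profile):
    # Placeholder for the expensive radiation parameterization: returns a
    # fake "heating-rate profile" derived from the water-vapor profile.
    return np.cumsum(profile["qv"]) * 0.01

def build_transfer_scheme(eof_profiles):
    # One parent-scheme call per EOF profile, done once offline.
    return [(p, parent_scheme(p)) for p in eof_profiles]

def apply_transfer_scheme(table, qv, eps=1e-6):
    # Weights from the distance between the sounding's qv and each EOF's qv.
    d = np.array([np.linalg.norm(qv - p["qv"]) for p, _ in table])
    w = 1.0 / (d + eps)
    w /= w.sum()
    return w @ np.array([out for _, out in table])  # blended output profile

eofs = [{"qv": np.linspace(0.001, 0.01, 5) * s} for s in (0.5, 1.0, 2.0)]
table = build_transfer_scheme(eofs)
print(apply_transfer_scheme(table, np.linspace(0.001, 0.01, 5) * 1.3))
```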

11

Benbouali, Abderrahmen, Fayçal Chabni, Rachid Taleb, and Noureddine Mansour. "Flight parameters improvement for an unmanned aerial vehicle using a lookup table based fuzzy PID controller." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 1 (July 1, 2021): 171. http://dx.doi.org/10.11591/ijeecs.v23.i1.pp171-178.

Abstract:
In this paper, a control scheme based on a lookup table fuzzy proportional-integral-derivative (PID) controller for quadrotor unmanned aerial vehicle (UAV) movement control is proposed. This type of control provides enhanced quadrotor movement control beyond what can be achieved with conventional controllers and places a smaller computational burden on the processor. The proposed control scheme uses three lookup table based fuzzy logic controllers to control the different movement ranges of a quadrotor (i.e., roll, pitch, and yaw) to achieve stability. The mathematical model of the quadrotor used to design the proposed controller is derived based on the Lagrange approach. The processor-in-the-loop (PIL) technique was used to test and validate the proposed control scheme. The MATLAB/Simulink environment was used as a platform for the quadrotor model, whereas a low-cost, high-performance STM32F407 microcontroller was used to implement the controllers. Data transfer between the hardware and software is via a serial communication converter. The control system designed based on simulation is tested and validated using processor-in-the-loop techniques.

12

Johnson, Tyler, and Dawn Taylor. "Improving reaching with functional electrical stimulation by incorporating stiffness modulation." Journal of Neural Engineering 18, no. 5 (October 1, 2021): 055009. http://dx.doi.org/10.1088/1741-2552/ac2f7a.

Abstract:
Abstract Objective. Intracortical recordings have now been combined with functional electrical stimulation (FES) of arm/hand muscles to demonstrate restoration of upper-limb function after spinal cord injury. However, for each desired limb position decoded from the brain, there are multiple combinations of muscle stimulation levels that can produce that position. The objective of this simulation study is to explore how modulating the amount of coactivation of antagonist muscles during FES can impact reaching performance and energy usage. Stiffening the limb by cocontracting antagonist muscles makes the limb more resistant to perturbation. Minimizing cocontraction saves energy and reduces fatigue. Approach. Prior demonstrations of reaching via FES used a fixed empirically-derived lookup table for each joint that defined the muscle stimulation levels that would position the limb at the desired joint angle decoded from the brain at each timestep. This study expands on that previous work by using simulations to: (a) test the feasibility of controlling arm reaching using a suite of lookup tables with varying levels of cocontraction instead of a single fixed lookup table for each joint, (b) optimize a simple function for automatically switching between these different cocontraction tables using only the desired kinematic information already being decoded from the brain, and (c) compare energy savings and movement performance when using the optimized function to automatically modulate cocontraction during reaching versus using the best fixed level of cocontraction. Main results. Our data suggests energy usage and/or movement performance can be significantly improved by dynamically modulating limb stiffness using our multi-table method and a simple function that determines cocontraction level based on decoded endpoint speed and its derivative. Significance. By demonstrating how modulating cocontraction can reduce energy usage while maintaining or even improving movement performance, this study makes brain-controlled FES a more viable option for restoration of reaching after paralysis.
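
The multi-table idea can be sketched as a small set of stimulation lookup tables, one per cocontraction level, plus a switching rule driven by the decoded endpoint speed and its derivative. The thresholds and stimulation values below are hypothetical placeholders, not the optimized function from the study:

```python
# Illustrative multi-table stiffness modulation for FES reaching.
TABLES = {
    "low_stiffness":  {"biceps": 0.20, "triceps": 0.05},
    "mid_stiffness":  {"biceps": 0.35, "triceps": 0.20},
    "high_stiffness": {"biceps": 0.50, "triceps": 0.40},
}

def select_table(speed, accel):
    # Stiffen the limb when the decoded movement is slow and decelerating
    # (holding or approaching a target); relax it during fast transport.
    if speed < 0.05 and accel <= 0.0:
        return "high_stiffness"
    if speed < 0.20:
        return "mid_stiffness"
    return "low_stiffness"

def stimulation_levels(speed, accel):
    return TABLES[select_table(speed, accel)]

print(stimulation_levels(0.02, -0.1))  # near target: high cocontraction
print(stimulation_levels(0.40, 0.3))   # fast reach: low cocontraction
```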

13

Gottschalk, Simon, and Elena Demidova. "Tab2KG: Semantic table interpretation with lightweight semantic profiles." Semantic Web 13, no. 3 (April 6, 2022): 571–97. http://dx.doi.org/10.3233/sw-222993.

Abstract:
Tabular data plays an essential role in many data analytics and machine learning tasks. Typically, tabular data does not possess any machine-readable semantics. In this context, semantic table interpretation is crucial for making data analytics workflows more robust and explainable. This article proposes Tab2KG – a novel method that targets at the interpretation of tables with previously unseen data and automatically infers their semantics to transform them into semantic data graphs. We introduce original lightweight semantic profiles that enrich a domain ontology’s concepts and relations and represent domain and table characteristics. We propose a one-shot learning approach that relies on these profiles to map a tabular dataset containing previously unseen instances to a domain ontology. In contrast to the existing semantic table interpretation approaches, Tab2KG relies on the semantic profiles only and does not require any instance lookup. This property makes Tab2KG particularly suitable in the data analytics context, in which data tables typically contain new instances. Our experimental evaluation on several real-world datasets from different application domains demonstrates that Tab2KG outperforms state-of-the-art semantic table interpretation baselines.

14

Lu, Shyue-Kung, Fu-Min Yeh, and Jen-Sheng Shih. "Fault Detection and Fault Diagnosis Techniques for Lookup Table FPGAs." VLSI Design 15, no. 1 (January 1, 2002): 397–406. http://dx.doi.org/10.1080/1065514021000012011.

Abstract:
In this paper, we present a novel fault detection and fault diagnosis technique for Field Programmable Gate Arrays (FPGAs). The cell is configured to implement a bijective function to simplify the testing of the whole cell array. The whole chip is partitioned into disjoint one-dimensional arrays of cells. For the lookup table (LUT), a fault may occur at the memory matrix, decoder, input or output lines. The input patterns can be easily generated with a k-bit binary counter, where k denotes the number of input lines of a configurable logic block (CLB). Theoretical proofs show that the resulting fault coverage is 100%. According to the characteristics of the bijective cell function, a novel built-in self-test structure is also proposed. Our BIST approaches have the advantages of requiring less hardware resources for test pattern generation and output response analysis. To locate a faulty CLB, two diagnosis sessions are required. However, the maximum number of configurations is k + 4 for diagnosing a faulty CLB. The diagnosis complexity of our approach is also analyzed. Our results show that the time complexity is independent of the array size of the FPGA. In other words, we can make the FPGA array C-diagnosable.

15

Banihani, Suleiman, Khalid Al-Widyan, Ahmad Al-Jarrah, and Mohammad Ababneh. "A genetic algorithm based lookup table approach for optimal stepping sequence of open-loop stepper motor systems." Journal of Control Theory and Applications 11, no. 1 (January 10, 2013): 35–41. http://dx.doi.org/10.1007/s11768-013-1165-4.

16

Allen, William G., and Vijay Perincherry. "Two-Stage Vehicle Availability Model." Transportation Research Record: Journal of the Transportation Research Board 1556, no. 1 (January 1996): 16–21. http://dx.doi.org/10.1177/0361198196155600103.

Abstract:
It is well accepted that travel forecasting models benefit from the stratification of travel markets by socioeconomic levels. The number of vehicles available is a key indicator of that level. Using this variable requires that the proportion of households by vehicles available be forecast for each zone. An improved submodel for forecasting vehicle availability by incorporating transit accessibility and land use indicators along with the usual demographic variables is described. This model uses a two-stage approach. The first stage is similar to many other models in current use. In the first step, a lookup table is used to identify an initial estimate of the proportion of 0-vehicle, 1-vehicle, 2-vehicle, and 3+-vehicle households on the basis of the household's size (1–4 +), number of workers (0–3 +), and income quartile (1–4). This lookup table has 52 cells, with each cell containing the four proportions by vehicles available. Census Public Use Microdata Sample data were used to create this lookup table. The second stage applies an incremental logit model to the initial proportions. In this step, the effects of transit accessibility and land use form on vehicle availability are modeled. Accessibility and density measures are used to calculate a “disutility” measure, which is then used to modify the initial percentages. Good transit service and high development density are associated with lower vehicle ownership. Vehicle availability models of this type recently have been successfully calibrated for the Washington, D.C., and Seattle, Washington, areas.
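
The two-stage structure translates directly into code: a first-stage table lookup of vehicle-availability shares keyed by household size, workers, and income quartile, followed by an incremental logit adjustment driven by the transit and land-use disutility. The table entries and utility shifts below are illustrative, not the calibrated values:

```python
# Two-stage vehicle availability sketch: lookup table plus incremental logit.
import math

SHARES = {  # (hh_size, workers, income_quartile) -> P(0, 1, 2, 3+ vehicles)
    (1, 0, 1): (0.55, 0.40, 0.04, 0.01),
    (2, 1, 2): (0.15, 0.55, 0.25, 0.05),
    (4, 2, 4): (0.02, 0.18, 0.55, 0.25),
}

def adjust_incremental_logit(base, delta_utility):
    """Shift base shares by per-class utility changes (incremental logit)."""
    scaled = [p * math.exp(du) for p, du in zip(base, delta_utility)]
    total = sum(scaled)
    return tuple(s / total for s in scaled)

base = SHARES[(2, 1, 2)]
# Good transit and high density lower the utility of owning more vehicles.
print(adjust_incremental_logit(base, (0.0, -0.1, -0.4, -0.8)))
```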

17

Tyurin, S. F., A. Yu Skornyakova, Y. A. Stepchenkov, and Y. G. Diachenko. "Self-Timed Look Up Table for ULAs and FPGAs." Radio Electronics, Computer Science, Control 1, no. 1 (March 24, 2021): 36–45. http://dx.doi.org/10.15588/1607-3274-2021-1-4.

Abstract:
Context. Self-Timed Circuits, proposed by D. Muller on the rise of the digital era, continues to excite researchers’ minds. These circuits started with the task of improving performance by taking into account real delays. Then Self-Timed Circuits have moved into the field of green computing. At last, they are currently positioned mainly in the field of fault tolerance. There is much redundancy in Self-Timed Circuits. It is believed that Self-Timed Circuits approaches will be in demand in the nano-circuitry when a synchronous approach becomes impossible. Strictly Self-Timed Circuits check transition process completion for each gate’s output. For this, they use so-called D. Muller elements (C-elements, hysteresis flip-flops, G-flip-flops). Usually, Self-Timed Circuits are designed on Uncommitted Logic Array. Now an extensive base of Uncommitted Logic Array Self-Timed gates exists. It is believed that SelfTimed Circuits are not compatible with FPGA technology. However, attempts to create self-timed FPGAs do not stop. The article proposes a Self-Timed Lookup Table for the Self-Timed Uncommitted Logic Array and the Self-Timed FPGA, carried out either by constants or utilizing additional memory cells. Authors proposed 1,2 – Self-Timed Lookup Table and described simulation results. Objective. The work’s goal is the analysis and design of the Strictly Self-Timed universal logic element based on Uncommitted Logic Array cells and pass-transistors circuits. Methods. Analysis and synthesis of the Strictly Self-Timed circuits with Boolean algebra. Simulation of the proposed element in the CAD “ARC”, TRANAL program, system NI Multisim by National Instruments Electronics Workbench Group, and layout design by Microwind. The reliability theory and reliability calculations in PTC Mathcad. Results. Authors designed, analyzed, and proved the Self-Timed Lookup Table’s workability for the Uncommitted Logic Arrays and FPGAs. Layouts of the novel logic gates are ready for manufacturing. Conclusions. The conducted studies allow us to use proposed circuits in perspective digital devices.

18

Byun, Hayoung, Qingling Li, and Hyesook Lim. "Vectored-Bloom Filter for IP Address Lookup: Algorithm and Hardware Architectures." Applied Sciences 9, no. 21 (October 30, 2019): 4621. http://dx.doi.org/10.3390/app9214621.

Abstract:
The Internet Protocol (IP) address lookup is one of the most challenging tasks for Internet routers, since it requires performing packet forwarding at wire speed for tens of millions of incoming packets per second. Efficient IP address lookup algorithms have been widely studied to satisfy this requirement. Among them, the Bloom filter-based approach is attractive in providing high performance. This paper proposes a high-speed and flexible architecture based on a vectored-Bloom filter (VBF), which is a space-efficient data structure that can be stored in a fast on-chip memory. An off-chip hash table is infrequently accessed, only when the VBF fails to provide address lookup results. The proposed architecture has been evaluated through both a behavioral simulation in C and a timing simulation in Verilog. The hardware implementation result shows that the proposed architecture can achieve a throughput of 5 million packets per second on a field-programmable gate array (FPGA) operated at 100 MHz.
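
The division of labor described here, an on-chip filter answering most queries and an off-chip hash table consulted only when the filter cannot decide, can be sketched with a plain Bloom filter; the paper's vectored-Bloom-filter layout and hardware pipeline are more elaborate than this:

```python
# Generic Bloom filter with a hash-table fallback (not the paper's VBF layout).
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def might_contain(self, key):
        return all(self.bits[p] for p in self._positions(key))

bloom = BloomFilter()
off_chip = {"10.0.0.0/8": "port3", "192.168.1.0/24": "port1"}
for prefix in off_chip:
    bloom.add(prefix)

def lookup(prefix):
    if not bloom.might_contain(prefix):   # fast negative answer on chip
        return None
    return off_chip.get(prefix)           # slower off-chip confirmation

print(lookup("192.168.1.0/24"), lookup("172.16.0.0/12"))
```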

19

Cao, Hongtao, Xingfa Gu, Xiangqin Wei, Tao Yu, and Haifeng Zhang. "Lookup Table Approach for Radiometric Calibration of Miniaturized Multispectral Camera Mounted on an Unmanned Aerial Vehicle." Remote Sensing 12, no. 24 (December 8, 2020): 4012. http://dx.doi.org/10.3390/rs12244012.

Abstract:
Over recent years, miniaturized multispectral cameras mounted on an unmanned aerial vehicle (UAV) have been widely used in remote sensing. Most of these cameras are integrated with low-cost, image-frame complementary metal-oxide semiconductor (CMOS) sensors. Compared to the typical charged coupled device (CCD) sensors or linear array sensors, consumer-grade CMOS sensors have the disadvantages of low responsivity, higher noise, and non-uniformity of pixels, which make it difficult to accurately detect optical radiation. Therefore, comprehensive radiometric calibration is crucial for quantitative remote sensing and comparison of temporal data using such sensors. In this study, we examine three procedures of radiometric calibration: relative radiometric calibration, normalization, and absolute radiometric calibration. The complex features of dark current noise, vignetting effect, and non-uniformity of detector response are analyzed. Further, appropriate procedures are used to derive the lookup table (LUT) of correction factors for these features. Subsequently, an absolute calibration coefficient based on an empirical model is used to convert the digital number (DN) of images to radiance unit. Due to the radiometric calibration, the DNs of targets observed in the image are more consistent than before calibration. Compared to the method provided by the manufacturer of the sensor, LUTs facilitate much better radiometric calibration. The root mean square error (RMSE) of measured reflectance in each band (475, 560, 668, 717, and 840 nm) are 2.30%, 2.87%, 3.66%, 3.98%, and 4.70% respectively.
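
The calibration chain reduces to a few per-pixel table operations: subtract a dark-current lookup table, divide by a vignetting/non-uniformity lookup table, then apply the empirical absolute coefficient to convert corrected digital numbers (DN) to radiance. The arrays and coefficient below are synthetic placeholders:

```python
# Per-pixel LUT calibration sketch for a single band (synthetic values).
import numpy as np

H, W = 4, 6
dark_current_lut = np.full((H, W), 40.0)           # DN offset per pixel
flat_field_lut = np.random.uniform(0.8, 1.0, (H, W))  # vignetting / non-uniformity
abs_coeff = 0.021                                   # radiance per corrected DN

def dn_to_radiance(dn):
    corrected = (dn - dark_current_lut) / flat_field_lut
    return abs_coeff * np.clip(corrected, 0.0, None)

raw = np.random.uniform(200, 4000, (H, W))
print(dn_to_radiance(raw).round(2))
```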

20

Nichols, Brandon S. "Performance of a lookup table-based approach for measuring tissue optical properties with diffuse optical spectroscopy." Journal of Biomedical Optics 17, no. 5 (April 20, 2012): 057001. http://dx.doi.org/10.1117/1.jbo.17.5.057001.

21

Ayadi, Mohamed Issam, Abderrahim Maizate, Mohammed Ouzzif, and Charif Mahmoudi. "Deep Learning Forwarding in NDN With a Case Study of Ethernet LAN." International Journal of Web-Based Learning and Teaching Technologies 16, no. 1 (January 2021): 1–9. http://dx.doi.org/10.4018/ijwltt.2021010101.

Abstract:
In this paper, the authors propose a novel forwarding strategy based on deep learning that can adaptively route interest/data packets through Ethernet links without relying on the FIB table. The experiment was conducted as a proof of concept. They developed an approach and an algorithm that leverage existing intelligent forwarding approaches in order to build an NDN forwarder that can reduce forwarding cost in terms of prefix name lookup and memory requirement in the FIB. Simulation results showed that the approach is promising in terms of cross-validation score and prediction in an Ethernet LAN scenario.

22

Josbert, Nteziriza Nkerabahizi, Wang Ping, Min Wei, and Yong Li. "Industrial Networks Driven by SDN Technology for Dynamic Fast Resilience." Information 12, no. 10 (October 15, 2021): 420. http://dx.doi.org/10.3390/info12100420.

Abstract:
Software-Defined Networking (SDN) provides the prospect of logically centralized management in industrial networks and simplified programming among devices. It also facilitates the reconfiguration of connectivity when there is a network element failure. This paper presents a new Industrial SDN (ISDN) resilience that addresses the gap between two types of resilience: the first is restoration while the second is protection. Using a restoration approach increases the recovery time proportionally to the number of affected flows contrarily to the protection approach which attains the fast recovery. Nevertheless, the protection approach utilizes more flow rules (flow entries) in the switch which in return increments the lookup time taken to discover an appropriate flow entry in the flow table. This can have a negative effect on the end-to-end delay before a failure occurs (in the normal situation). In order to balance both approaches, we propose a Mixed Fast Resilience (MFR) approach to ensure the fast recovery of the primary path without any impact on the end-to-end delay in the normal situation. In the MFR, the SDN controller establishes a new path after failure detection and this is based on flow rules stored in its memory through the dynamic hash table structure as the internal flow table. At that time, it transmits the flow rules to all switches across the appropriate secondary path simultaneously from the failure point to the destination switch. Moreover, these flow rules which correspond to secondary paths are cached in the hash table by considering the current minimum path weight. This strategy leads to reduction in the load at the SDN controller and the calculation time of a new working path. The MFR approach applies the dual primary by considering several metrics such as packet-loss probability, delay, and bandwidth which are the Quality of Service (QoS) requirements for many industrial applications. Thus, we have built a simulation network and conducted an experimental testbed. The results showed that our resilience approach reduces the failure recovery time as opposed to the restoration approaches and is more scalable than a protection approach. In the normal situation, the MFR approach reduces the lookup time and end-to-end delay than a protection approach. Furthermore, the proposed approach improves the performance by minimizing the packet loss even under failing links.

23

Binkowski, Francis S., Saravanan Arunachalam, Zachariah Adelman, and Joseph P. Pinto. "Examining Photolysis Rates with a Prototype Online Photolysis Module in CMAQ." Journal of Applied Meteorology and Climatology 46, no. 8 (August 1, 2007): 1252–56. http://dx.doi.org/10.1175/jam2531.1.

Abstract:
Abstract A prototype online photolysis module has been developed for the Community Multiscale Air Quality (CMAQ) modeling system. The module calculates actinic fluxes and photolysis rates (j values) at every vertical level in each of seven wavelength intervals from 291 to 850 nm, as well as the total surface irradiance and aerosol optical depth within each interval. The module incorporates updated opacity at each time step, based on changes in local ozone, nitrogen dioxide, and particle concentrations. The module is computationally efficient and requires less than 5% more central processing unit time than using the existing CMAQ “lookup” table method for calculating j values. The main focus of the work presented here is to describe the new online module as well as to highlight the differences between the effective cross sections from the lookup-table method currently being used and the updated effective cross sections from the new online approach. Comparisons of the vertical profiles for the photolysis rates for nitrogen dioxide (NO2) and ozone (O3) from the new online module with those using the effective cross sections from a standard CMAQ simulation show increases in the rates of both NO2 and O3 photolysis.

24

Bhattacharjee, Abhishek, Dheeraj Kumar Sahu, and Sambhu Nath Pradhan. "Lookup table-based negative-bias temperature instability effect and leakage power co-optimization using genetic algorithm approach." International Journal of Circuit Theory and Applications 49, no. 7 (May 3, 2021): 1902–15. http://dx.doi.org/10.1002/cta.3038.

25

Zhang, Hailong, Chong Huang, Shanshan Yu, Li Li, Xiaozhou Xin, and Qinhuo Liu. "A Lookup-Table-Based Approach to Estimating Surface Solar Irradiance from Geostationary and Polar-Orbiting Satellite Data." Remote Sensing 10, no. 3 (March 7, 2018): 411. http://dx.doi.org/10.3390/rs10030411.

26

Guezzi, Abdelkader, Abderrahmane Lakas, Ahmed Korichi, and Sarra Cherbal. "Peer to Peer Approach based Replica and Locality Awareness to Manage and Disseminate Data in Vehicular Ad Hoc Networks." International Journal of Computer Networks & Communications 12, no. 6 (November 30, 2020): 65–81. http://dx.doi.org/10.5121/ijcnc.2020.12605.

Abstract:
Distributed Hash Table (DHT) based structured peer-to-peer (P2P) systems provide an efficient method of disseminating information in a VANET environment owing to its high performance and properties (e.g., self-organization, decentralization, scalability, etc.). The topology of ad hoc vehicle networks (VANET) varies dynamically; its disconnections are frequent due to the high movement of vehicles. In such a topology, information availability is an ultimate problem for vehicles, in general, connect and disconnect frequently from the network. Data replication is an appropriate and adequate solution to this problem. In this contribution, to increase the accessibility of data, which also increases the success rate of the lookup, a method based on replication in the Vanet network is proposed named LAaR-Vanet. Also, this replication strategy is combined with a locality-awareness method to promote the same purpose and to avoid the problems of long paths. The performance of the proposed solution is assessed by a series of in-depth simulations in urban areas. The obtained results indicate the efficiency of the proposed approach, in terms of the following metrics: lookup success rate, the delay, and the number of the logical hop.

27

Wei, Peng Cheng, and Jun Jian Huang. "Using Segment Number Parameter of Piecewise Linear Chaotic Map Construct Novel Hash Scheme." Materials Science Forum 694 (July 2011): 479–84. http://dx.doi.org/10.4028/www.scientific.net/msf.694.479.

Abstract:
A novel keyed Hash function is presented based on the dynamic S-boxes. The proposed approach can give a chaotic Hash value by means of the lookup table of functions and chaotic dynamic S-box. Compared with the existing chaotic Hash functions, this method improves computational performance of Hash system by using the chaotic S-box substitution. Theoretical and experimental results show that the proposed method has not only strong one way property, sensitivity to initial conditions and chaotic system’s parameters, but also high speed.

28

Wei, Yu Xuan, Peng Cheng Wei, Huai Wen, and Shun Yong Liu. "Construct and Analyze K-Hash Function Based on Chaotic Dynamic S-Boxes." Applied Mechanics and Materials 519-520 (February 2014): 891–98. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.891.

Abstract:
By combining the traditional iteration structure of Hash function with the dynamic S-boxes, a novel keyed Hash function is presented. The proposed approach can give a chaotic Hash value by means of the lookup table of functions and chaotic dynamic S-box. Compared with the existing chaotic Hash functions, this method improves computational performance of Hash system by using the chaotic S-box substitution. Theoretical and experimental results show that the proposed method has not only strong one way property, sensitivity to initial conditions and chaotic system’s parameters, but also high speed.

29

Mendez Reyes, Hector, Stoyan Kanev, Bart Doekemeijer, and Jan-Willem van Wingerden. "Validation of a lookup-table approach to modeling turbine fatigue loads in wind farms under active wake control." Wind Energy Science 4, no. 4 (October 11, 2019): 549–61. http://dx.doi.org/10.5194/wes-4-549-2019.

Abstract:
Abstract. Wake redirection is an active wake control (AWC) concept that is known to have a high potential for increasing the overall power production of wind farms. Being based on operating the turbines with intentional yaw misalignment to steer wakes away from downstream turbines, this control strategy requires careful attention to the load implications. However, the computational effort required to perform an exhaustive analysis of the site-specific loads on each turbine in a wind farm is unacceptably high due to the huge number of aeroelastic simulations required to cover all possible inflow and yaw conditions. To reduce this complexity, a practical load modeling approach is based on “gridding”, i.e., performing simulations only for a subset of the range of environmental and operational conditions that can occur. Based on these simulations, a multi-dimensional lookup table (LUT) can be constructed containing the fatigue and extreme loads on all components of interest. Using interpolation, the loads on each turbine in the farm can the be predicted for the whole range of expected conditions. Recent studies using this approach indicate that wake redirection can increase the overall power production of the wind farm and at the same time decrease the lifetime fatigue loads on the main components of the individual turbines. As the present level of risk perception related to operation with large yaw misalignment is still substantial, it is essential to increase the confidence level in this LUT-based load modeling approach to further derisk the wake redirection strategy. To this end, this paper presents the results of a series of studies focused on the validation of different aspects of the LUT load modeling approach. These studies are based on detailed aeroelastic simulations, two wind tunnel tests, and a full-scale field test. The results indicate that the LUT approach is a computationally efficient methodology for assessing the farm loads under AWC, which achieves generally good prediction of the load trends.
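
The gridding idea is essentially multi-dimensional interpolation: loads are simulated only at grid points of the operating conditions, stored in a table, and interpolated for intermediate conditions. A two-dimensional sketch over (wind speed, yaw misalignment) with made-up load values:

```python
# Bilinear interpolation in a small (wind speed, yaw) load lookup table.
import numpy as np

wind = np.array([6.0, 10.0, 14.0])        # m/s
yaw = np.array([-20.0, 0.0, 20.0])        # degrees of misalignment
# loads[i, j] = fatigue load at wind[i], yaw[j] (arbitrary illustrative units)
loads = np.array([[1.05, 1.00, 1.06],
                  [1.22, 1.15, 1.24],
                  [1.48, 1.40, 1.50]])

def load_lookup(ws, psi):
    i = np.clip(np.searchsorted(wind, ws) - 1, 0, len(wind) - 2)
    j = np.clip(np.searchsorted(yaw, psi) - 1, 0, len(yaw) - 2)
    tx = (ws - wind[i]) / (wind[i + 1] - wind[i])
    ty = (psi - yaw[j]) / (yaw[j + 1] - yaw[j])
    top = (1 - ty) * loads[i, j] + ty * loads[i, j + 1]
    bot = (1 - ty) * loads[i + 1, j] + ty * loads[i + 1, j + 1]
    return (1 - tx) * top + tx * bot       # bilinear interpolation

print(round(load_lookup(8.0, 10.0), 3))
```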

30

Darvishzadeh, Roshanak, Ali A. Matkan, and Abdolhamid Dashti Ahangar. "Inversion of a Radiative Transfer Model for Estimation of Rice Canopy Chlorophyll Content Using a Lookup-Table Approach." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 5, no. 4 (August 2012): 1222–30. http://dx.doi.org/10.1109/jstars.2012.2186118.

31

Zhang, Lei, Tianguang Zhang, Haiyan Wu, Alexander Borst, and Kolja Kühnlenz. "Visual Flight Control of a Quadrotor Using Bioinspired Motion Detector." International Journal of Navigation and Observation 2012 (May 8, 2012): 1–9. http://dx.doi.org/10.1155/2012/627079.

Abstract:
Motion detection in the fly is extremely fast with low computational requirements. Inspired from the fly's vision system, we focus on a real-time flight control on a miniquadrotor with fast visual feedback. In this work, an elaborated elementary motion detector (EMD) is utilized to detect local optical flow. Combined with novel receptive field templates, the yaw rate of the quadrotor is estimated through a lookup table established with this bioinspired visual sensor. A closed-loop control system with the feedback of yaw rate estimated by EMD is designed. With the motion of the other degrees of freedom stabilized by a camera tracking system, the yaw-rate of the quadrotor during hovering is controlled based on EMD feedback under real-world scenario. The control performance of the proposed approach is compared with that of conventional approach. The experimental results demonstrate the effectiveness of utilizing EMD for quadrotor control.

32

Saeed, Ahmed Sardar M., and Loay E. George. "Fingerprint-Based Data Deduplication Using a Mathematical Bounded Linear Hash Function." Symmetry 13, no. 11 (October 20, 2021): 1978. http://dx.doi.org/10.3390/sym13111978.

Abstract:
Due to the quick increase in digital data, especially in mobile usage and social media, data deduplication has become a vital and cost-effective approach for removing redundant data segments, reducing the pressure imposed by enormous volumes of data that must be kept. As part of the data deduplication process, fingerprints are employed to represent and identify identical data blocks. However, when the amount of data increases, the number of fingerprints grows as well, and due to the restricted memory size, the speed of data deduplication suffers dramatically. Various deduplication solutions show a bottleneck in the form of matching lookups and chunk fingerprint calculations, for which we pay in the form of storage and processors needed for storing hashes. Utilizing a fast hash algorithm to improve the fingerprint lookup performance is an appealing challenge. Thus, this study is focused on enhancing the deduplication system by suggesting a novel and effective mathematical bounded linear hashing algorithm that decreases the hashing time by more than two times compared to MD5 and SHA-1 and reduces the size of the hash index table by 50%. Due to the enormous number of chunk hash values, looking up and comparing hash values takes longer for large datasets; this work offers a hierarchal fingerprint lookup strategy to minimize the hash judgement comparison time by up to 78%. Our suggested system reduces the high latency imposed by deduplication procedures, primarily the hashing and matching phases. The symmetry of our work is based on the balance between the proposed hashing algorithm performance and its reflection on the system efficiency, as well as evaluating the approximate symmetries of the hashing and lookup phases compared to the other deduplication systems.

33

Bendix, Jörg, Boris Thies, Jan Cermak, and Thomas Nauß. "Ground Fog Detection from Space Based on MODIS Daytime Data—A Feasibility Study." Weather and Forecasting 20, no. 6 (December 1, 2005): 989–1005. http://dx.doi.org/10.1175/waf886.1.

Abstract:
Abstract The distinction made by satellite data between ground fog and low stratus is still an open problem. A proper detection scheme would need to make a determination between low stratus thickness and top height. Based on this information, stratus base height can be computed and compared with terrain height at a specific picture element. In the current paper, a procedure for making the distinction between ground fog and low-level stratus is proposed based on Moderate Resolution Imaging Spectroradiometer (MODIS, flying on board the NASA Terra and Aqua satellites) daytime data for Germany. Stratus thickness is alternatively derived from either empirical relationships or a newly developed retrieval scheme (lookup table approach), which relies on multiband albedo and radiative transfer calculations. A trispectral visible–near-infrared (VIS–NIR) approach has been proven to give the best results for the calculation of geometrical thickness. The comparison of horizontal visibility data from synoptic observing (SYNOP) stations of the German Weather Service and the results of the ground fog detection schemes reveals that the lookup table approach shows the best performance for both a valley fog situation and an extended layer of low stratus with complex local visibility structures. Even if the results are very encouraging [probability of detection (POD) = 0.76], relatively high percentage errors and false alarm ratios still occur. Uncertainties in the retrieval scheme are mostly due to possible collocation errors and known problems caused by comparing point and pixel data (time lag between satellite overpass and ground observation, etc.). A careful inspection of the pixels that mainly contribute to the false alarm ratio reveals problems with thin cirrus layers and the fog-edge position of the SYNOP stations. Validation results can be improved by removing these suspicious pixels (e.g., percentage error decreases from 28% to 22%).

34

Maabid, Abdelmawgoud Mohamed, and Tarek Elghazaly. "Theoretical and Computational Perspectives of Arabic Morphological Analyzers and Generators: Theoretical Survey." International Journal of Computers & Technology 13, no. 11 (November 30, 2014): 5126–33. http://dx.doi.org/10.24297/ijct.v13i11.2782.

Abstract:
Morphology analysis is an essential part of most applications of natural language processing (NLP) which included different applications like Machine Translation (MT) and language rule based Information Retrieval (IR). Many Arabic morphological systems had built for different purposes with different algorithms and approaches; this paper is considered a survey of Arabic Morphological system from researchers’ perspectives and approaches used to build them. Based on this survey; in the first part of this paper; the perspective of Arabic morphological systems had been classified into two major issues; one of them is the theoretical perspective and the second is the computational perspective of Arabic morphology. While the second part of this paper deals with approaches used to build the Arabic Morphology systems itself which are Table Lookup Approach, Combinatorial Approach, Linguistic Approach, Traditional Approaches, Finite-state Automata and Two-Level Morphology Approach and Pattern-Based Approach.

35

Choi, Byunghee, and Youngsoo Shin. "Lookup Table-Based Adaptive Body Biasing of Multiple Macros for Process Variation Compensation and Low Leakage." Journal of Circuits, Systems and Computers 19, no. 07 (November 2010): 1449–64. http://dx.doi.org/10.1142/s021812661000675x.

Abstract:
A reduced supply voltage must be accompanied by a reduced threshold voltage, which makes this approach to power saving susceptible to process variation in transistor parameters, as well as resulting in increased subthreshold leakage. While adaptive body biasing is efficient for both compensating process variation and suppressing leakage current, it suffers from a large overhead of control circuit. Most body biasing circuits target an entire chip, which causes excessive leakage of some blocks and misses the chance of fine grain control. We propose a new adaptive body biasing scheme, based on a lookup table for independent control of multiple functional blocks on a chip, which controls leakage and also compensates for process variation at the block level. An adaptive body bias is applied to blocks in active mode and a large reverse body bias is applied to blocks in standby mode. This is achieved by a central body bias controller, which has a low overhead in terms of area, delay, and power consumption. The problem of optimizing the required set of bias voltages is formulated and solved. A design methodology for semicustom design using standard-cell elements is developed and verified with benchmark circuits.

36

Karmakar, Chandan, Wei Luo, Truyen Tran, Michael Berk, and Svetha Venkatesh. "Predicting Risk of Suicide Attempt Using History of Physical Illnesses From Electronic Medical Records." JMIR Mental Health 3, no. 3 (July 11, 2016): e19. http://dx.doi.org/10.2196/mental.5475.

Abstract:
Background Although physical illnesses, routinely documented in electronic medical records (EMR), have been found to be a contributing factor to suicides, no automated systems use this information to predict suicide risk. Objective The aim of this study is to quantify the impact of physical illnesses on suicide risk, and develop a predictive model that captures this relationship using EMR data. Methods We used history of physical illnesses (except chapter V: Mental and behavioral disorders) from EMR data over different time-periods to build a lookup table that contains the probability of suicide risk for each chapter of the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) codes. The lookup table was then used to predict the probability of suicide risk for any new assessment. Based on the different lengths of history of physical illnesses, we developed six different models to predict suicide risk. We tested the performance of developed models to predict 90-day risk using historical data over differing time-periods ranging from 3 to 48 months. A total of 16,858 assessments from 7399 mental health patients with at least one risk assessment was used for the validation of the developed model. The performance was measured using area under the receiver operating characteristic curve (AUC). Results The best predictive results were derived (AUC=0.71) using combined data across all time-periods, which significantly outperformed the clinical baseline derived from routine risk assessment (AUC=0.56). The proposed approach thus shows potential to be incorporated in the broader risk assessment processes used by clinicians. Conclusions This study provides a novel approach to exploit the history of physical illnesses extracted from EMR (ICD-10 codes without chapter V-mental and behavioral disorders) to predict suicide risk, and this model outperforms existing clinical assessments of suicide risk.
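
The lookup-table core of the model can be sketched as a mapping from ICD-10 chapters to empirical risk probabilities that is queried for each chapter in a patient's history and aggregated into a score. The probabilities and the max-aggregation rule below are illustrative assumptions, not the paper's fitted values:

```python
# Illustrative ICD-10 chapter -> risk probability lookup (synthetic numbers).
CHAPTER_RISK = {
    "II":  0.031,   # neoplasms
    "VI":  0.054,   # diseases of the nervous system
    "XI":  0.047,   # diseases of the digestive system
    "XIX": 0.092,   # injury, poisoning
    "XXI": 0.068,   # factors influencing health status
}
BASELINE = 0.020    # risk when no relevant history is recorded

def risk_score(history_chapters):
    risks = [CHAPTER_RISK.get(ch, BASELINE) for ch in history_chapters]
    return max(risks, default=BASELINE)   # simple illustrative aggregation

print(risk_score(["XI", "XIX"]))   # 0.092
print(risk_score([]))              # 0.020
```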

37

Morasa, Balaji, and Padmaja Nimmagadda. "Low power residue number system using lookup table decomposition and finite state machine based post computation." Indonesian Journal of Electrical Engineering and Computer Science 26, no. 1 (April 1, 2022): 127. http://dx.doi.org/10.11591/ijeecs.v26.i1.pp127-134.

Abstract:
In this paper, memory optimization and architectural-level modifications are introduced for realizing a low-power residue number system (RNS) with improved flexibility for electroencephalograph (EEG) signal classification. The proposed RNS framework is intended to maximize the reconfigurability of RNS for high-performance finite impulse response (FIR) filter design. The existing power-hungry RAM-based reverse conversion model is replaced with a highly decomposed lookup table (LUT) model that can produce the results without using any post-accumulation process. The reverse conversion block is modified with an appropriate functional unit to accommodate FIR convolution results. The proposed approach is established to develop and execute pre-calculated inverters for various moduli sets. Therefore, the proposed LUT decomposition with RNS multiplication-based post-accumulation technology provides a high-performance FIR filter architecture that allows different frequency response configuration elements. Experimental results show the superior performance of the decomposed LUT-based direct reverse conversion over other existing reverse conversion techniques adopted for energy-efficient RNS FIR implementations. When the conventional RNS FIR design is compared with the proposed FSM-based decomposed RNS FIR, the logic elements (LEs) are reduced by 4.57%, the frequency is increased by 31.79%, the number of LUTs is reduced by 42.85%, and the power dissipation is reduced by 13.83%.

38

Nguyen, Nhu Y., Dang Dinh Kha, and Yutaka Ichikawa. "Developing a multivariable lookup table function for estimating flood damages of rice crop in Vietnam using a secondary research approach." International Journal of Disaster Risk Reduction 58 (May 2021): 102208. http://dx.doi.org/10.1016/j.ijdrr.2021.102208.

39

Purohit, Gaurav, Kota Solomon Raju, and Vinod Kumar Chaubey. "XOR-FREE Implementation of Convolutional Encoder for Reconfigurable Hardware." International Journal of Reconfigurable Computing 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/9128683.

Abstract:
This paper presents a novel XOR-FREE algorithm to implement the convolutional encoder using reconfigurable hardware. The approach completely removes the XOR processing of a chosen nonsystematic, feedforward generator polynomial of larger constraint length. The hardware (HW) implementation of new architecture uses Lookup Table (LUT) for storing the parity bits. The design implements architectural reconfigurability by modifying the generator polynomial of the same constraint length and code rate to reduce the design complexity. The proposed architecture reduces the dynamic power up to 30% and improves the hardware cost and propagation delay up to 20% and 32%, respectively. The performance of the proposed architecture is validated in MATLAB Simulink and tested on Zynq-7 series FPGA.
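
The XOR-free idea is to move all parity computation offline: output bits are precomputed for every (encoder state, input bit) pair and stored in a table, so the runtime encoder only performs lookups. The sketch uses the textbook rate-1/2, constraint-length-3 code (generators 7 and 5 in octal), not the larger-constraint-length polynomial targeted in the paper:

```python
# Lookup-table convolutional encoder: XORs happen only while building the table.
K = 3
GENERATORS = (0b111, 0b101)          # g0 = 7, g1 = 5 (octal)

def parity(x):
    return bin(x).count("1") & 1

# Offline: table[(state, bit)] = (output bits, next state).
TABLE = {}
for state in range(1 << (K - 1)):
    for bit in (0, 1):
        reg = (bit << (K - 1)) | state
        outputs = tuple(parity(reg & g) for g in GENERATORS)
        TABLE[(state, bit)] = (outputs, reg >> 1)

def encode(bits):
    state, out = 0, []
    for b in bits:
        symbols, state = TABLE[(state, b)]   # pure table lookup at runtime
        out.extend(symbols)
    return out

print(encode([1, 0, 1, 1]))
```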

40

Boumaalif, Youness, and Hamid Ouadi. "Accounting for magnetic saturation in designing a SRM speed controller for torque ripple minimization." International Journal of Power Electronics and Drive Systems (IJPEDS) 14, no. 1 (March 1, 2023): 77. http://dx.doi.org/10.11591/ijpeds.v14.i1.pp77-88.

Abstract:
This study establishes a nonlinear control design for switched reluctance motor (SRM) vehicle applications using the backstepping approach. The suggested controller is built on a model that considers magnetic saturation while reducing torque ripple, resulting in fewer vibrations. To minimize torque ripple, the control angles are adjusted based on machine speed and torque measurements. A lookup table is constructed that provides the efficient control angles for various motor operating points. The suggested control technique was validated through simulation, using an accurate MATLAB SRM model that accounts for magnetic saturation effects. To illustrate the superiority of the suggested regulator, its performance was compared with that of a proportional-integral (PI) controller. The obtained findings confirm the suggested regulator's effectiveness.

41

Perumal, Sivachandran R., and Faizal Baharum. "Design and Simulation of a Circadian Lighting Control System Using Fuzzy Logic Controller for LED Lighting Technology." Journal of Daylighting 9, no. 1 (May 26, 2022): 64–82. http://dx.doi.org/10.15627/jd.2022.5.

Abstract:
This paper introduces a fuzzy logic-based circadian lighting control system using flexibility of Light-Emitting Diode (LED) lighting technology to synchronise artificial lighting with circadian (natural) lighting Correlated Colour Temperature (CCT) characteristics. Besides for vision acuity, the Non-Imaging Forming effects of lighting affect human circadian rhythms. Past works in spectrally tuning CCT or Spectral Power Distribution of lighting have used conventional Proportional-Integral-Derivative (PID) control system architecture, where the modelling process of system transfer functions was mathematically complex, especially for nonlinear systems. A methodology of regulating lighting CCT is employed in a 7×5 fuzzy logic rules matrix in a Fuzzy Logic Controller (FLC) system, to closely replicate natural lighting CCT characteristics for indoor lighting. A reference lookup table was devised to store desired CCT values arbitrarily with respect to time mark in a day, which acts as an outdoor circadian stimulus and guides the FLC. The FLC compensates for the lack of CCT in lighting space. Simulation results show acceptable CCT output values conforming to circadian lighting parameters at a time in a day compared to the lookup table targets. Deviation from blackbody curve was within ±0.003 using CCT Duv checking. The system did not produce an overshoot (0.0%) with a steady state (zero error) reached after the fourth iteration. Also, rise time was calculated to be 1 iteration. This approach could be further enhanced to cater for additional custom needs in many built environments. Future works may consider connecting more sensors to capture real-time outdoor CCT values for practical regulation.
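
The reference lookup table at the heart of the scheme maps a time of day to a target CCT, which the fuzzy controller then tracks. A minimal sketch with interpolation between set points; the time/CCT pairs are illustrative, not the paper's schedule:

```python
# Time-of-day -> target CCT reference lookup (illustrative set points).
from bisect import bisect_left

CCT_SCHEDULE = [  # (hour of day, target CCT in kelvin)
    (6, 2700), (9, 4500), (12, 6000), (15, 5000), (18, 3500), (21, 2700),
]

def target_cct(hour: float) -> float:
    hours = [h for h, _ in CCT_SCHEDULE]
    if hour <= hours[0]:
        return CCT_SCHEDULE[0][1]
    if hour >= hours[-1]:
        return CCT_SCHEDULE[-1][1]
    i = bisect_left(hours, hour)
    (h0, c0), (h1, c1) = CCT_SCHEDULE[i - 1], CCT_SCHEDULE[i]
    return c0 + (c1 - c0) * (hour - h0) / (h1 - h0)

print(target_cct(10.5))  # between the 09:00 and 12:00 set points
```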

42

Ullah, Inayat, Zahid Ullah, and Jeong-A Lee. "EE-TCAM: An Energy-Efficient SRAM-Based TCAM on FPGA." Electronics 7, no. 9 (September 10, 2018): 186. http://dx.doi.org/10.3390/electronics7090186.

Abstract:
Ternary content-addressable memories (TCAMs) are used to build high-speed search engines. TCAMs are implemented either as application-specific integrated circuits (native TCAMs) or on field-programmable gate arrays (FPGAs) as static random-access memory (SRAM)-based TCAMs, but both platforms suffer from high power consumption. This paper presents a pre-classifier-based architecture for an energy-efficient SRAM-based TCAM. The first, classification stage divides the TCAM table into several sub-tables of balanced size. The second, SRAM-based implementation stage maps each resulting TCAM sub-table to a separate row of configured SRAM blocks. For each incoming TCAM word, the architecture selectively activates at most one row of SRAM blocks. Compared with existing SRAM-based TCAM designs on FPGAs, the proposed design consumes significantly less energy because it activates only the part of the SRAM memory needed for the lookup rather than the entire memory, as previous schemes do. Sample designs of size 512 × 36 were implemented on a Xilinx Virtex-6 FPGA. The experimental results show that the proposed design achieves at least three times lower power consumption per unit of performance than other SRAM-based TCAM architectures.
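A small software emulation of the two-stage idea: a pre-classifier on the leading bits selects one sub-table, so only that sub-table (one row of SRAM blocks in the hardware design) is searched for the incoming word. The rule set and the 2-bit classification key are assumptions of this sketch, not the paper's partitioning scheme.

    # Ternary rules as pattern/action pairs; 'x' is a wildcard bit. Values are illustrative.
    RULES = [("00xx", "A"), ("01x1", "B"), ("10xx", "C"), ("110x", "D"), ("111x", "E")]

    def matches(pattern, word):
        return all(p in ("x", w) for p, w in zip(pattern, word))

    def build_subtables(rules, key_bits=2):
        """Stage 1 (pre-classification): split the TCAM table into sub-tables keyed on the
        leading bits; this sketch assumes the key bits are never wildcarded."""
        subtables = {}
        for pattern, action in rules:
            subtables.setdefault(pattern[:key_bits], []).append((pattern, action))
        return subtables

    def lookup(subtables, word, key_bits=2):
        """Stage 2: only the sub-table selected by the pre-classifier (one 'row of SRAM
        blocks') is activated and searched for the incoming word."""
        for pattern, action in subtables.get(word[:key_bits], []):
            if matches(pattern, word):
                return action
        return None

    SUBTABLES = build_subtables(RULES)
    print(lookup(SUBTABLES, "0101"))   # -> "B"; only the "01" sub-table is searched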
43

Javed, Muhammad Fasih, Waqas Nawaz, and Kifayat Ullah Khan. "HOVA-FPPM: Flexible Periodic Pattern Mining in Time Series Databases Using Hashed Occurrence Vectors and Apriori Approach." Scientific Programming 2021 (January 4, 2021): 1–14. http://dx.doi.org/10.1155/2021/8841188.

Abstract:
Finding flexible periodic patterns in a time series database is nontrivial because unimportant events occur irregularly, which makes mining intractable or computationally intensive for large datasets. Various solutions based on Apriori, projection, tree, and other techniques exist for mining these patterns. However, keeping a fixed-size tree structure (i.e., a suffix tree) with extra information in memory throughout the mining process, generating redundant and invalid patterns, supporting only limited types of flexible periodic patterns, and repeatedly traversing the tree for pattern discovery all lead to unacceptable space and time complexity. To overcome these issues, we introduce an efficient Apriori-based approach called HOVA-FPPM that uses hashed occurrence vectors to find all types of flexible periodic patterns. Instead of relying on a complex tree structure, we keep the necessary information in a hash table for efficient lookup during mining. We measured the performance of the proposed approach and compared the results with the baseline approach, FPPM. The results show that our approach requires less time and space, regardless of data size or period value.
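The hashed-occurrence-vector idea can be sketched as follows: one pass over the series builds a hash table from each event to its positions, and periodicity checks then become cheap lookups instead of tree traversals. The confidence measure and example series are illustrative assumptions, not the exact definitions used in HOVA-FPPM.

    from collections import defaultdict

    def occurrence_vectors(series):
        """Hash table: event -> vector of positions at which it occurs (built in one pass)."""
        table = defaultdict(list)
        for pos, event in enumerate(series):
            table[event].append(pos)
        return table

    def periodic_confidence(table, event, period, offset, length):
        """Fraction of periods in which `event` occurs at `offset`, answered by lookups
        in the occurrence vector instead of re-traversing a tree structure."""
        expected = list(range(offset, length, period))
        positions = set(table.get(event, ()))
        hits = sum(1 for p in expected if p in positions)
        return hits / len(expected) if expected else 0.0

    series = "abcabcabdabc"
    table = occurrence_vectors(series)
    print(periodic_confidence(table, "a", period=3, offset=0, length=len(series)))  # 1.0
    print(periodic_confidence(table, "c", period=3, offset=2, length=len(series)))  # 0.75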
44

Luo, Shuangxiao, Chunqiao Song, Kai Liu, Linghong Ke, and Ronghua Ma. "An Effective Low-Cost Remote Sensing Approach to Reconstruct the Long-Term and Dense Time Series of Area and Storage Variations for Large Lakes." Sensors 19, no. 19 (September 30, 2019): 4247. http://dx.doi.org/10.3390/s19194247.

Abstract:
Inland lakes are essential components of the hydrological and biogeochemical water cycles, as well as indispensable water resources for human beings. Deriving long-term, continuous trajectories of lake inundation area change is therefore increasingly important, since it helps us understand how lakes function in the global water cycle and how they are affected by climate change and human activities. Optical satellite imagery is an important means of lake mapping and has been widely used for lake monitoring, but a well-known difficulty of traditional remote sensing-based mapping methods is the tremendous labor and computing cost of delineating large lakes (e.g., the Caspian Sea). In this study, a novel approach is proposed for reconstructing long-term, high-frequency time series of the inundation areas of large lakes. The general idea is to obtain the lake inundation area at any observation date from a pre-established lookup table that maps the water occurrence frequency (WOF) along selected shoreline segments with relatively gentle terrain to the corresponding lake area. The lookup table linking WOF and lake area is derived from the Joint Research Centre (JRC) Global Surface Water (GSW) dataset accessed through Google Earth Engine (GEE). Five large lakes worldwide were selected and their inundation area time series from 1984 to 2018 were reconstructed with this method. The time series of lake volume variation are also analyzed, and the lake changes are discussed qualitatively with reference to previous studies. For the North Aral Sea, the mean relative error between the estimated area and the actually mapped value is about 0.85%, and the mean R2 across the five lakes is 0.746, indicating that the proposed method produces robust area time series for these large lakes. This research sheds new light on mapping large lakes at considerably reduced time and labor costs, and the approach can be effectively applied to other large lakes at regional and global scales.
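A minimal sketch of the pre-established lookup table: pairs of water occurrence frequency (WOF) values along the shoreline and the corresponding lake areas, with the area for a new observation date read off by interpolation. The WOF and area values below are invented placeholders, not GSW-derived numbers.

    import numpy as np

    # Assumed pre-built lookup table (in practice derived from the GSW water-occurrence product):
    # WOF (%) observed at the waterline position versus the lake area (km^2) for that shoreline.
    wof_lut  = np.array([95.0, 80.0, 60.0, 40.0, 20.0, 5.0])          # decreasing toward rarely flooded ground
    area_lut = np.array([3200., 3350., 3520., 3700., 3900., 4050.])   # km^2, increasing with expansion

    def area_from_wof(observed_wof):
        """Read the lake area for the WOF value observed at the waterline on a given date."""
        # np.interp needs an increasing x axis, so the decreasing WOF axis is flipped.
        return float(np.interp(observed_wof, wof_lut[::-1], area_lut[::-1]))

    print(area_from_wof(70.0))   # interpolated area between the 80% and 60% table entries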
45

Guo, Jing-Ming, Li-Ying Chang, and Jiann-Der Lee. "An Efficient and Geometric-Distortion-Free Binary Robust Local Feature." Sensors 19, no. 10 (May 20, 2019): 2315. http://dx.doi.org/10.3390/s19102315.

Abstract:
An efficient and geometric-distortion-free approach, the fast binary robust local feature (FBRLF), is proposed. FBRLF searches for stable features in an image with the proposed multiscale adaptive and generic corner detection based on the accelerated segment test (MAGAST), which derives an optimum threshold value from adaptive and generic corner detection based on the accelerated segment test (AGAST). To cope with image noise, a Gaussian template is applied and accelerated through the use of an integral image. Feature matching combines a voting mechanism with a lookup table method to achieve high accuracy at low computational complexity. The experimental results demonstrate the superiority of the proposed method over previous schemes in terms of local stable feature performance and processing efficiency.
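One common way to make the lookup-table-and-voting matching step concrete is an 8-bit popcount table for Hamming distances between packed binary descriptors, with the best-scoring candidate taking the vote. This is a generic sketch under that assumption, not the exact FBRLF matching procedure.

    import numpy as np

    # 8-bit popcount lookup table: the Hamming distance between binary descriptors
    # reduces to table lookups on XORed bytes.
    POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)

    def hamming(d1, d2):
        """d1, d2: uint8 arrays holding packed binary descriptors."""
        return int(POPCOUNT[np.bitwise_xor(d1, d2)].sum())

    def match(query_desc, train_descs, max_dist=40):
        """Vote for the training descriptor with the smallest Hamming distance."""
        dists = [hamming(query_desc, t) for t in train_descs]
        best = int(np.argmin(dists))
        return best if dists[best] <= max_dist else None

    rng = np.random.default_rng(0)
    train = rng.integers(0, 256, size=(100, 32), dtype=np.uint8)   # 100 descriptors, 256 bits each
    query = train[42].copy()
    query[0] ^= 0b00000110                                         # flip two bits of descriptor 42
    print(match(query, train))                                     # -> 42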
46

Lukács, Dániel, Gergely Pongrácz, and Máté Tejfel. "Control flow based cost analysis for P4." Open Computer Science 11, no. 1 (December 17, 2020): 70–79. http://dx.doi.org/10.1515/comp-2020-0131.

Abstract:
The networking industry is currently undergoing a steady trend of softwarization, yet network engineers lack software development tools that support the programming of new protocols. We are creating a cost analysis tool for the P4 programming language that automatically verifies whether a developed program meets the soft deadline requirements imposed by the network. In this paper, we present an approach for estimating the average execution time of a P4 program based on its control flow graph. The approach takes into account that many parts of P4 are implementation-defined: required information can be added through incremental refinement, while missing information is handled by falling back to less precise defaults. We illustrate the approach on a P4 protocol in two case studies: examining the effect of a compiler optimization in the deparse stage, and showing how it enables cost modelling of complex lookup table implementations. Finally, we outline the research tasks still to be completed before the tool is ready for real-world use.
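A compact sketch of a control-flow-graph cost estimate: each node carries a primitive cost, edges carry branch probabilities, and the expected execution time is accumulated by a memoized traversal. The node costs and probabilities are assumed values, not measurements from a P4 target.

    from functools import lru_cache

    # Toy pipeline CFG: per-node cost (cycles) and outgoing edges with probabilities.
    CFG = {
        "parse":   {"cost": 50,  "next": [("lookup", 1.0)]},
        "lookup":  {"cost": 120, "next": [("hit", 0.9), ("miss", 0.1)]},
        "hit":     {"cost": 30,  "next": [("deparse", 1.0)]},
        "miss":    {"cost": 80,  "next": [("deparse", 1.0)]},
        "deparse": {"cost": 40,  "next": []},
    }

    @lru_cache(maxsize=None)
    def expected_cost(node):
        """Average cost of executing the pipeline from `node` to the end of the CFG."""
        spec = CFG[node]
        return spec["cost"] + sum(p * expected_cost(nxt) for nxt, p in spec["next"])

    print(expected_cost("parse"))   # 50 + 120 + (0.9*30 + 0.1*80) + 40 = 245.0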
47

Jiang, Yu, Mehrdad Bahrami, Seyed Amin Bagherzadeh, Ali Abdollahi, Mohsen Tahmasebi Sulgani, Arash Karimipour, Marjan Goodarzi, and Quang-Vu Bach. "Propose a new approach of fuzzy lookup table method to predict Al2O3/deionized water nanofluid thermal conductivity based on achieved empirical data." Physica A: Statistical Mechanics and its Applications 527 (August 2019): 121177. http://dx.doi.org/10.1016/j.physa.2019.121177.

48

Mizuochi, Hiroki, Chikako Nishiyama, Iwan Ridwansyah, and Kenlo Nishida Nasahara. "Monitoring of an Indonesian Tropical Wetland by Machine Learning-Based Data Fusion of Passive and Active Microwave Sensors." Remote Sensing 10, no. 8 (August 6, 2018): 1235. http://dx.doi.org/10.3390/rs10081235.

Abstract:
In this study, a novel data fusion approach was used to monitor the water-body extent in a tropical wetland (Lake Sentarum, Indonesia), where monitoring is required to support the conservation of water resources and biodiversity. The developed approach, random forest database unmixing (RFDBUX), uses pixel-based random forest regression to overcome the limitations of the existing lookup-table-based approach (DBUX). RFDBUX was applied to passive microwave data (AMSR2) and active microwave data (PALSAR-2) from 2012 to 2017 to obtain PALSAR-2-like images with a 100 m spatial resolution and three-day temporal resolution. A thresholding approach applied to the obtained PALSAR-2-like backscatter coefficient images then provided water body extent maps. The validation revealed that the spatial patterns of the images predicted by RFDBUX are consistent with the original PALSAR-2 backscatter coefficient images (r = 0.94, RMSE = 1.04 on average), and that the temporal pattern of the predicted water body extent can track the wetland dynamics. The PALSAR-2-like images should be a useful basis for further investigation of the hydrological and climatological features of the site, and the proposed approach appears applicable to other tropical regions worldwide.
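A pixel-based random forest regression of this kind can be sketched with scikit-learn: train on dates where both sensors observed, then predict PALSAR-2-like backscatter from coarse passive-microwave features alone and threshold it into a water mask. The synthetic features, coefficients and threshold below are assumptions for illustration only.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    # Assumed training set: one row per pixel/date where both sensors observed.
    # X holds coarse AMSR2-derived features (plus pixel coordinates),
    # y holds the PALSAR-2 backscatter coefficient (dB) at that pixel.
    X_train = rng.normal(size=(5000, 5))
    y_train = X_train @ np.array([1.5, -0.8, 0.3, 0.0, 0.2]) + rng.normal(scale=0.5, size=5000)

    model = RandomForestRegressor(n_estimators=100, min_samples_leaf=5, n_jobs=-1, random_state=0)
    model.fit(X_train, y_train)

    # On a date with only AMSR2 data, predict a PALSAR-2-like backscatter image pixel by pixel.
    X_new = rng.normal(size=(1000, 5))
    sigma0_pred = model.predict(X_new)

    # A simple threshold on the predicted backscatter then yields the water-body mask.
    water_mask = sigma0_pred < -1.0
    print(sigma0_pred[:3], water_mask.mean())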
49

Bak, Juseon, Xiong Liu, Robert Spurr, Kai Yang, Caroline R. Nowlan, Christopher Chan Miller, Gonzalo Gonzalez Abad, and Kelly Chance. "Radiative transfer acceleration based on the principal component analysis and lookup table of corrections: optimization and application to UV ozone profile retrievals." Atmospheric Measurement Techniques 14, no. 4 (April 7, 2021): 2659–72. http://dx.doi.org/10.5194/amt-14-2659-2021.

Abstract:
In this work, we apply a principal component analysis (PCA)-based approach combined with lookup tables (LUTs) of corrections to accelerate the Vector Linearized Discrete Ordinate Radiative Transfer (VLIDORT) model used in the retrieval of ozone profiles from backscattered ultraviolet (UV) measurements by the Ozone Monitoring Instrument (OMI). The spectral binning scheme, which determines the accuracy and efficiency of the PCA-RT performance, is thoroughly optimized over the spectral range 265 to 360 nm under the assumption of a Rayleigh-scattering atmosphere above a Lambertian surface. A high level of accuracy (∼0.03 %) is achieved for the fast-PCA calculations of full radiances. In this approach, computationally expensive full multiple scattering (MS) calculations are limited to a small set of PCA-derived optical states, while fast single scattering and two-stream MS calculations are performed for every spectral point. Only 51 calls to the full MS model are needed in the application to OMI ozone profile retrievals with the 270–330 nm fitting window, where the RT model must be called at fine intervals (∼0.03 nm, ∼2000 wavelengths) to simulate OMI measurements (spectral resolution 0.4–0.6 nm). LUT corrections are implemented to further accelerate the online RT model by reducing the number of streams (discrete ordinates) from 8 to 4, while maintaining accuracy at the level attainable from simulations with a vector model using 12 streams and 72 layers. Overall, the OMI retrieval is sped up by a factor of 3.3 over the previous version, which was itself already significantly faster than line-by-line calculations thanks to various RT approximations. The improved treatment of RT approximation errors through LUT corrections improves the spectral fitting (by 2 %–5 %) and hence the retrieval errors, especially for tropospheric ozone (by up to ∼10 %); the remaining forward-model-related errors are within 5 % in the troposphere and 3 % in the stratosphere.
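The PCA part of the acceleration can be illustrated with a toy model: expensive and cheap radiative transfer stand-ins, a PCA of the per-wavelength optical states, expensive calls only at a handful of PCA-derived states, and a second-order correction mapped back to every wavelength. The model functions, state variables and grid sizes are assumptions of this sketch, not the VLIDORT/OMI configuration.

    import numpy as np

    # Synthetic stand-ins for the expensive (many-stream) and cheap (two-stream) RT models:
    # each maps one optical-state vector to a radiance value. Assumed forms, for illustration only.
    def rt_expensive(state):
        return np.exp(-1.3 * state[0]) * (1.0 + 0.25 * state[1])

    def rt_cheap(state):
        return np.exp(-1.5 * state[0]) * (1.0 + 0.20 * state[1])

    # Optical states for ~2000 wavelengths (e.g., an optical depth and an albedo-like proxy).
    rng = np.random.default_rng(0)
    states = np.column_stack([rng.uniform(0.1, 2.0, 2000), rng.uniform(0.6, 1.0, 2000)])

    # PCA of the optical-state matrix.
    mean = states.mean(axis=0)
    anom = states - mean
    _, _, vt = np.linalg.svd(anom, full_matrices=False)
    eofs = vt[:2]                       # leading EOFs (unit vectors)
    scores = anom @ eofs.T              # PC scores per wavelength

    # Call the expensive model only at a handful of PCA-derived states (mean and +/- one EOF step).
    def correction(state):
        return np.log(rt_expensive(state) / rt_cheap(state))

    f0, df, d2f = correction(mean), [], []
    for k in range(eofs.shape[0]):
        fp, fm = correction(mean + eofs[k]), correction(mean - eofs[k])
        df.append(0.5 * (fp - fm))      # first derivative (central difference)
        d2f.append(fp - 2.0 * f0 + fm)  # second derivative

    # Reconstruct the expensive radiance at every wavelength from cheap calls plus the PCA correction.
    corr = f0 + scores @ np.array(df) + 0.5 * (scores ** 2) @ np.array(d2f)
    approx = np.array([rt_cheap(s) for s in states]) * np.exp(corr)
    exact = np.array([rt_expensive(s) for s in states])
    print("max relative error: %.4f" % np.max(np.abs(approx / exact - 1.0)))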
50

Bühl, Johannes, Patric Seifert, Martin Radenz, Holger Baars, and Albert Ansmann. "Ice crystal number concentration from lidar, cloud radar and radar wind profiler measurements." Atmospheric Measurement Techniques 12, no. 12 (December 13, 2019): 6601–17. http://dx.doi.org/10.5194/amt-12-6601-2019.

Abstract:
A new method for the retrieval of the ice crystal number concentration (ICNC) from combined active remote-sensing measurements of Raman lidar, cloud radar and radar wind profiler is presented. For the first time, measurements of terminal fall velocity are exploited together with the radar reflectivity factor and/or the lidar-derived particle extinction coefficient in clouds to retrieve the number concentration of pristine ice particles with presumed particle shapes. A lookup table approach for retrieving the properties of the particle size distribution from the observed parameters is presented. An analysis of methodological uncertainties and error propagation shows that a retrieval of ice particle number concentration based on terminal fall velocity is possible to within one order of magnitude. A comparison between the retrieval based on terminal fall velocity on the one hand and on lidar and cloud radar on the other shows agreement within the uncertainties of the retrieval.
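A toy version of the lookup-table retrieval: forward-model a grid of particle size distributions into (reflectivity, fall velocity) pairs, then invert an observation by nearest-neighbour search in the table and read off the number concentration. The exponential PSD, the velocity power law and all coefficients are assumed for illustration and carry no physical authority.

    import numpy as np
    from itertools import product

    # Assumed terminal-velocity power law for a single pristine habit: v(D) = c * D**d.
    c, d = 11.7, 0.41

    def forward(n0, lam):
        """Exponential PSD n(D) = n0*exp(-lam*D): return (reflectivity proxy, mean fall speed, ICNC)."""
        D = np.linspace(1e-4, 1e-2, 500)                      # particle size (m)
        dD = D[1] - D[0]
        n = n0 * np.exp(-lam * D)
        z = np.sum(n * D ** 6) * dD                           # 6th-moment reflectivity proxy
        v = np.sum(n * D ** 6 * (c * D ** d)) * dD / z        # reflectivity-weighted fall speed
        icnc = np.sum(n) * dD                                 # number concentration
        return z, v, icnc

    # Build the lookup table over the PSD parameter grid.
    table = []
    for n0, lam in product(np.logspace(6, 10, 40), np.linspace(200, 4000, 40)):
        z, v, icnc = forward(n0, lam)
        table.append((np.log10(z), v, icnc))
    table = np.array(table)

    def retrieve_icnc(z_obs, v_obs):
        """Pick the table entry whose (log Z, v) pair is closest to the observation."""
        cost = (table[:, 0] - np.log10(z_obs)) ** 2 + ((table[:, 1] - v_obs) / 0.1) ** 2
        return table[np.argmin(cost), 2]

    z_true, v_true, icnc_true = forward(5e8, 1000.0)
    print(icnc_true, retrieve_icnc(z_true, v_true))           # true vs. nearest-table ICNC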