
Journal articles on the topic 'FIRST QUANTIZATION ESTIMATION'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 47 journal articles for your research on the topic 'FIRST QUANTIZATION ESTIMATION.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Gao, Chao, Guorong Zhao, Jianhua Lu, and Shuang Pan. "Decentralized state estimation for networked spatial-navigation systems with mixed time-delays and quantized complementary measurements: The moving horizon case." Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 232, no. 11 (June 8, 2017): 2160–77. http://dx.doi.org/10.1177/0954410017712277.

Abstract:
In this paper, the navigational state estimation problem is investigated for a class of networked spatial-navigation systems with quantization effects, mixed time-delays, and network-based observations (i.e. complementary measurements and regional estimations). A decentralized moving horizon estimation approach, featuring complementary reorganization and a recursive procedure, is proposed to tackle this problem. First, through the proposed reorganization scheme, a randomly delayed system with complementary observations is reconstructed into an equivalent delay-free one without dimensional augmentation. Second, with this equivalent system, a robust moving horizon estimation scheme is presented as a uniform estimator for the navigational states. Third, to meet the demand for real-time estimation, a recursive form of the decentralized moving horizon estimation approach is developed. Furthermore, a collective estimation is obtained through the weighted fusion of two parts, i.e. the estimation based on complementary measurements and the regional estimations received directly from neighbors. The convergence properties of the proposed estimator are also studied. The obtained stability condition implicitly establishes a relation between the upper bound of the estimation error and two parameters, i.e. the quantization density and the delay occurrence probability. Finally, an application example involving networked unmanned aerial vehicles is presented, and comparative simulations demonstrate the main features of the proposed method.
2

Galvan, Fausto, Giovanni Puglisi, Arcangelo Ranieri Bruna, and Sebastiano Battiato. "First Quantization Matrix Estimation From Double Compressed JPEG Images." IEEE Transactions on Information Forensics and Security 9, no. 8 (August 2014): 1299–310. http://dx.doi.org/10.1109/tifs.2014.2330312.

3

Sun, Baoyan, Jun Hu, and Yan Gao. "Variance-constrained robust $ H_{\infty} $ state estimation for discrete time-varying uncertain neural networks with uniform quantization." AIMS Mathematics 7, no. 8 (2022): 14227–48. http://dx.doi.org/10.3934/math.2022784.

Abstract:
In this paper, we consider the robust $ H_{\infty} $ state estimation (SE) problem for a class of discrete time-varying uncertain neural networks (DTVUNNs) with uniform quantization and time-delay under variance constraints. In order to reflect the actual situation for the dynamic system, a constant time-delay is considered. In addition, the measurement output is first quantized by a uniform quantizer and then transmitted through a communication channel. The main purpose is to design a time-varying finite-horizon state estimator such that, for both the uniform quantization and the time-delay, sufficient criteria are obtained for the estimation error (EE) system to satisfy the error variance boundedness and the $ H_{\infty} $ performance constraint. With the help of stochastic analysis techniques, a new $ H_{\infty} $ SE algorithm that does not resort to the augmentation method is proposed for DTVUNNs with uniform quantization. Finally, a simulation example is given to illustrate the feasibility and validity of the proposed variance-constrained robust $ H_{\infty} $ SE method.
4

Liu, Guiyun, Jing Yao, Yonggui Liu, Hongbin Chen, and Dong Tang. "Channel-Aware Adaptive Quantization Method for Source Localization in Wireless Sensor Networks." International Journal of Distributed Sensor Networks 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/214081.

Abstract:
This paper considers the problem of source localization using quantized observations in wireless sensor networks where, due to bandwidth constraints, each sensor's observation is usually quantized into one bit of information. First, a channel-aware adaptive quantization scheme for target location estimation is proposed, in which local sensor nodes dynamically adjust their quantization thresholds according to a position-based information sequence. The novelty of the proposed approach comes from the fact that the scheme not only adopts distributed adaptive quantization instead of the conventional fixed quantization, but also incorporates the statistics of the imperfect wireless channels between the sensors and the fusion center (binary symmetric channels). Furthermore, the corresponding maximum likelihood estimator (MLE), the Cramér-Rao lower bound (CRLB) performance metric, and a sufficient condition for the Fisher information matrix to be positive definite are derived. Simulation results show that the CRLB of the adaptive scheme is lower than the channel-aware CRLB under fixed quantization, and that the proposed MLE approaches its CRLB when the number of sensors is large enough.
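As a toy illustration of estimation from one-bit sensor data (my own scalar analogue, not the paper's localization scheme; the level `A`, noise standard deviation `sigma`, and threshold `tau` are illustrative assumptions), the MLE of an unknown level observed through threshold comparisons simply inverts the Gaussian CDF of the observed fraction of ones:

```python
import numpy as np
from statistics import NormalDist

# Toy scalar analogue (illustrative values): estimate an unknown level A
# from one-bit threshold observations, as in bandwidth-constrained WSNs.
rng = np.random.default_rng(2)
A, sigma, tau = 1.3, 1.0, 0.0
n = 200_000
bits = (A + sigma * rng.standard_normal(n) > tau).astype(int)  # one bit per sensor

# P(bit = 1) = Phi((A - tau) / sigma), so the MLE inverts the Gaussian CDF
# of the observed fraction of ones.
A_hat = tau + sigma * NormalDist().inv_cdf(bits.mean())
print(A_hat)  # close to A = 1.3
```

With 200,000 one-bit observations the estimate lands within a few hundredths of the true level, which is the qualitative behavior the CRLB analysis in the paper quantifies.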
5

Tadic, Predrag, Zeljko Djurovic, and Branko Kovacevic. "Analysis of speech waveform quantization methods." Journal of Automatic Control 18, no. 1 (2008): 19–22. http://dx.doi.org/10.2298/jac0801019t.

Abstract:
Digitization, consisting of sampling and quantization, is the first step in any digital signal processing algorithm. In most cases, the quantization is uniform. However, given knowledge of certain stochastic attributes of the signal (namely, its probability density function, or pdf), quantization can be made more efficient, in the sense of achieving a greater signal-to-quantization-noise ratio. This means that narrower channel bandwidths are required for transmitting a signal of the same quality. Alternatively, if signal storage is of interest rather than transmission, considerable savings in memory space can be made. This paper presents several available methods for speech signal pdf estimation and quantizer optimization in the sense of minimizing the quantization error power.
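The pdf-matched quantizer design the abstract alludes to can be sketched with a Lloyd-Max iteration (an illustrative sketch, not the paper's method; the Laplacian source and 8-level codebook are my assumptions): alternating centroid and midpoint updates on samples from a heavy-tailed, speech-like pdf yields a higher SQNR than naive uniform levels.

```python
import numpy as np

def lloyd_max(samples, n_levels, n_iter=50):
    """1-D Lloyd-Max: alternate centroid and midpoint updates on samples."""
    levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(n_iter):
        edges = (levels[:-1] + levels[1:]) / 2      # decision boundaries
        idx = np.digitize(samples, edges)           # nearest-level index
        for k in range(n_levels):
            cell = samples[idx == k]
            if cell.size:
                levels[k] = cell.mean()             # centroid update
    return levels

def sqnr_db(samples, levels):
    """Signal-to-quantization-noise ratio for a given codebook, in dB."""
    edges = (levels[:-1] + levels[1:]) / 2
    q = levels[np.digitize(samples, edges)]
    return 10 * np.log10(np.mean(samples**2) / np.mean((samples - q)**2))

rng = np.random.default_rng(0)
x = rng.laplace(size=50_000)                        # speech-like heavy-tailed pdf
opt = lloyd_max(x, 8)
uni = np.linspace(x.min(), x.max(), 8)              # naive uniform codebook
print(f"uniform SQNR: {sqnr_db(x, uni):.1f} dB, Lloyd-Max SQNR: {sqnr_db(x, opt):.1f} dB")
```

The pdf-optimized codebook concentrates levels where the density is high, which is exactly the mechanism by which non-uniform quantization buys SQNR for speech.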
6

Battiato, Sebastiano, Oliver Giudice, Francesco Guarnera, and Giovanni Puglisi. "First Quantization Estimation by a Robust Data Exploitation Strategy of DCT Coefficients." IEEE Access 9 (2021): 73110–20. http://dx.doi.org/10.1109/access.2021.3080576.

7

Yao, Heng, Hongbin Wei, Chuan Qin, and Xinpeng Zhang. "An improved first quantization matrix estimation for nonaligned double compressed JPEG images." Signal Processing 170 (May 2020): 107430. http://dx.doi.org/10.1016/j.sigpro.2019.107430.

8

Xue, Fei, Ziyi Ye, Wei Lu, Hongmei Liu, and Bin Li. "MSE period based estimation of first quantization step in double compressed JPEG images." Signal Processing: Image Communication 57 (September 2017): 76–83. http://dx.doi.org/10.1016/j.image.2017.05.008.

9

HoangVan, Xiem. "Adaptive Quantization Parameter Estimation for HEVC Based Surveillance Scalable Video Coding." Electronics 9, no. 6 (May 30, 2020): 915. http://dx.doi.org/10.3390/electronics9060915.

Abstract:
Visual surveillance systems have been playing a vital role in modern human life, with a large number of applications ranging from remote home management and public security to traffic monitoring. The recent High Efficiency Video Coding (HEVC) scalable extension, namely SHVC, provides not only compression efficiency but also adaptive streaming capability. However, SHVC was originally designed for videos captured from generic scenes rather than from visual surveillance systems. In this paper, we propose a novel HEVC based surveillance scalable video coding (SSVC) framework. First, to achieve high-quality inter prediction, we propose a long-term reference coding method, which adaptively exploits the temporal correlation among frames in surveillance video. Second, to optimize the SSVC compression performance, we design a quantization parameter adaptation mechanism in which the relationship between the SSVC rate-distortion (RD) performance and the quantization parameter is statistically modeled by a fourth-order polynomial function. Afterwards, an appropriate quantization parameter is derived for frames at the long-term reference position. Experiments conducted on a common set of surveillance videos have shown that the proposed SSVC significantly outperforms the relevant SHVC standard, notably by around 6.9% and 12.6% bitrate savings for the low delay (LD) and random access (RA) coding configurations, respectively, while still providing similar perceptual decoded-frame quality.
10

Peric, Zoran, Milan Tancic, Nikola Simic, and Vladimir Despotovic. "Simple Speech Transform Coding Scheme using Forward Adaptive Quantization for Discrete Input Signal." Information Technology And Control 48, no. 3 (September 24, 2019): 454–63. http://dx.doi.org/10.5755/j01.itc.48.3.21685.

Abstract:
We propose a speech coding scheme based on simple transform coding and forward adaptive quantization for discrete input signal processing. A quasi-logarithmic quantizer is applied to discretize the continuous input signal, i.e. to prepare the discrete input. The application of forward adaptation based on the input signal variance provides more efficient bandwidth usage, whereas the utilization of transform coding provides sub-sequences with more predictable signal characteristics that ensure higher quality of signal reconstruction at the receiving end. To provide additional compression, transform coding precedes adaptive quantization. As an objective measure of system performance we use the signal-to-quantization-noise ratio. System performance is discussed for two typical cases. In the first case, we assume that the variance of the continuous signal is available, whereas the second case considers performance estimation when only the variance of the discretized signal is known, which entails a loss of information about the input signal. The main goal of comparing these two performance estimates for the proposed speech coding model is to explore how objective the performance figures are when no information about the continuous source is available, which is a common situation in digital systems.
11

Jun, Liu, Luo Zhongqiang, and Xiong Xingzhong. "Low-Complexity Synchronization Scheme with Low-Resolution ADCs." Information 9, no. 12 (December 7, 2018): 313. http://dx.doi.org/10.3390/info9120313.

Abstract:
An important aim of next-generation (5G) and beyond mobile communication systems is to provide thousand-fold capacity growth and to support high-speed data transmission of up to several megabits per second. However, the research community and industry face a dilemma between power consumption and hardware design in satisfying these increasing communication requirements. To improve system cost, power consumption, and implementation complexity, a novel scheme for symbol timing and frequency offset estimation with low-resolution analog-to-digital converters (ADCs), based on an orthogonal frequency division multiplexing ultra-wideband (OFDM-UWB) system, is proposed in this paper. In our work, we first verify that the autocorrelation of the pseudo-noise (PN) sequences is not affected by low-resolution quantization. With the help of this property, timing synchronization can be implemented robustly against the influence of low-resolution quantization. Then, the transmitted signal structure and the low-resolution quantization scheme under the synchronization scheme are designed. Finally, a frequency offset estimation model with one-bit timing synchronization is established. Theoretical analysis and simulation results corroborate that the performance of the proposed scheme not only approximates that of the full-resolution synchronization scheme, but also offers lower power consumption and computational complexity.
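The quantization-invariance property the abstract relies on is easy to demonstrate numerically (a minimal sketch under assumed parameters, not the paper's OFDM-UWB setup): the correlation peak that drives timing synchronization survives one-bit (sign) quantization of the received samples.

```python
import numpy as np

# Illustrative parameters (not from the paper): a 127-chip PN sequence,
# a 512-sample receive window, and additive Gaussian channel noise.
rng = np.random.default_rng(1)
pn = rng.choice([-1.0, 1.0], size=127)          # pseudo-noise training sequence
delay = 40
rx = np.zeros(512)
rx[delay:delay + 127] = pn                      # signal arrives `delay` samples in
rx += 0.5 * rng.standard_normal(512)            # channel noise

def corr_peak(signal):
    """Index of the strongest correlation of `signal` with the PN sequence."""
    c = np.correlate(signal, pn, mode="valid")
    return int(np.argmax(np.abs(c)))

# Full-resolution samples and their one-bit (sign-quantized) version locate
# the same timing offset: the PN autocorrelation peak survives quantization.
print(corr_peak(rx), corr_peak(np.sign(rx)))
```

The sign operation models a one-bit ADC; because the PN correlation peak is so much larger than the off-peak values, occasional sign flips from noise barely dent it.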
12

Xie, Lin Bo, Fang Huang, and He Guang Liu. "State Estimation and Stability Analysis of Networked Control Systems with Multi-Quantized Output Feedback." Applied Mechanics and Materials 48-49 (February 2011): 1101–5. http://dx.doi.org/10.4028/www.scientific.net/amm.48-49.1101.

Abstract:
This paper is concerned with closed-loop stability analysis and dynamic quantization parameter design for discrete-time networked control systems (NCSs) with noise disturbance. First, based on a Lyapunov function, two state invariant regions, one for the estimation error system and one for the plant system, are constructed. Quadratic attractiveness conditions for the two systems are obtained using the proposed invariant-region sequences. Second, a quantized control strategy is presented that dynamically adjusts the scaling parameters of the logarithmic quantizers. Moreover, the connection between the quadratic stability of the plant system and that of the estimation error system is also given.
13

Al-Shaikhi, Ali. "Signal Recovery from Randomly Quantized Data Using Neural Network Approach." Sensors 22, no. 22 (November 11, 2022): 8712. http://dx.doi.org/10.3390/s22228712.

Abstract:
We present an efficient scheme based on a long short-term memory (LSTM) autoencoder for accurate seismic deconvolution in a multichannel setup. The technique is beneficial for compressing massive amounts of seismic data. The proposed robust estimation ensures the recovery of sparse reflectivity from acquired seismic data that have been under-quantized. By accounting for the quantization error, the technique considerably improves the robustness of the data to that error, thereby boosting the visual saliency of the seismic data compared to other existing algorithms. This framework has been validated using both field and synthetic seismic data sets, and the assessment is carried out by comparison with the steepest descent and basis pursuit methods. The findings indicate that the proposed scheme outperforms the other algorithms significantly in two ways: first, spuriously or excessively estimated impulses are significantly suppressed, and second, the proposed estimate is much more robust to changes in the quantization interval. Tests on real and synthetic data sets reveal that the proposed LSTM autoencoder-based method yields the best results in terms of both quality and computational complexity when compared with existing methods. Finally, the relative reconstruction error (RRE), signal-to-reconstruction error ratio (SRER), and power spectral density (PSD) are used to evaluate the performance of the proposed algorithm.
14

Wang, Cheng C., Changchun Shi, Robert W. Brodersen, and Dejan Marković. "An Automated Fixed-Point Optimization Tool in MATLAB XSG/SynDSP Environment." ISRN Signal Processing 2011 (May 10, 2011): 1–17. http://dx.doi.org/10.5402/2011/414293.

Abstract:
This paper presents an automated tool for floating-point to fixed-point conversion. The tool is based on previous work built in the MATLAB/Simulink environment with Xilinx System Generator support. The tool is now extended to include Synplify DSP blocksets in a way that is seamless from the user's viewpoint. In addition to FPGA area estimation, the tool now also includes ASIC area estimation for end-users who choose the ASIC flow. The tool minimizes hardware cost subject to mean-squared quantization error (MSE) constraints. To obtain more accurate ASIC area estimates from synthesized results, three performance levels are available to choose from, suitable for high-performance, typical, or low-power applications. The use of the tool is first illustrated on an FIR filter, achieving over 50% area savings for an MSE specification of 10⁻⁶ compared to an all-16-bit realization. More complex optimization results for chip-level designs are also demonstrated.
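The core trade-off such a tool automates, wordlength versus mean-squared quantization error, can be illustrated in a few lines (a hedged sketch with an arbitrary stand-in coefficient vector, not the tool's MATLAB implementation):

```python
import numpy as np

def to_fixed(x, frac_bits):
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

# Illustrative stand-in for a coefficient vector to be converted.
rng = np.random.default_rng(4)
h = rng.uniform(-1.0, 1.0, size=64)

# MSE shrinks roughly as step^2 / 12, i.e. ~6 dB per extra bit.
mses = {b: float(np.mean((h - to_fixed(h, b)) ** 2)) for b in (4, 8, 16)}
for b, mse in mses.items():
    print(f"{b:2d} fractional bits -> quantization MSE {mse:.2e}")
```

An optimizer like the one described then searches per-signal wordlengths for the cheapest hardware that still meets the MSE constraint.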
15

Harrison, A. J. L., and D. P. Stoten. "Generalized Finite Difference Methods for Optimal Estimation of Derivatives in Real-Time Control Problems." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 209, no. 2 (May 1995): 67–78. http://dx.doi.org/10.1243/pime_proc_1995_209_368_02.

Abstract:
The real-time estimation of (particularly first and second) derivatives of motion from position data has many applications in automatic control. Errors in such estimates arise from noise or quantization of the data at high sampling rates, and from the presence of high-order derivatives of motion at low sampling rates. This paper presents a generalized method that allows the estimation of derivatives of arbitrary order from position data, with minimum overall error, for a given signal-to-noise ratio and sampling-to-signal frequency ratio. Experimental results included in the paper show that the theory is borne out in practice.
16

Biscainho, Luiz W. P., Paulo S. R. Diniz, and Mauro F. de Carvalho. "[NO TITLE AVAILABLE]." Sba: Controle & Automação Sociedade Brasileira de Automatica 14, no. 2 (June 2003): 199–207. http://dx.doi.org/10.1590/s0103-17592003000200011.

Abstract:
This paper addresses the effects of the quantization of an audio signal on the least-squares (LS) estimate of its autoregressive (AR) model. First, three topics are reviewed: the statistical description of the quantization error in terms of the number of bits used in the fixed-point representation of a signal; the LS estimation of the AR model of a signal; and the relation between the minimum mean-square error (MMSE) solutions for the AR model obtained from noisy and noiseless signals. The sensitivity of the localization of the associated generator-filter poles (expressed by magnitudes and phases) to deviations of the model parameters is examined. Through the interconnection of these aspects, the deviation of the model coefficients is described in terms of the number of bits used to represent the signal to be modeled, which allows for model correction. Conclusions about peculiarities of the pole deviation of the generator filter are drawn.
17

Harrison, A. J., and C. A. McMahon. "Estimation of Acceleration from Data with Quantization Errors Using Central Finite-Difference Methods." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 207, no. 2 (May 1993): 77–86. http://dx.doi.org/10.1243/pime_proc_1993_207_321_02.

Abstract:
The paper considers the problem of generating derivatives of measured data when appropriate transducers and/or observers are not available. Six methods or schemes for estimating acceleration (or, generally, the second derivative of one parameter with respect to another) by central finite-difference methods are described. Each scheme is subject to two principal sources of error: noisy or quantized data and the presence of ignored high-order derivatives of motion. The first of these increases with sampling frequency, and the second decreases. There is thus an optimal frequency of sampling for each scheme, dependent on the system signal/noise ratio, the signal-frequency content and the order of the derivative modelled by the scheme. Tables are given that enable the investigator to select the most accurate scheme for a given signal/noise ratio and a desired sampling-frequency/signal-frequency ratio, together with estimates of the resulting combined mean absolute error from the two sources. The results are confirmed experimentally.
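The two error sources and the resulting optimal sampling rate can be reproduced with a minimal central-difference experiment (my illustration, with an assumed quantization step of 10⁻³ on a sinusoidal trajectory, not the paper's six schemes):

```python
import numpy as np

def accel_central(y, h):
    """Second derivative via the 3-point central difference."""
    return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2

q = 1e-3  # assumed quantization step of the position sensor

def max_error(h):
    """Worst-case acceleration error for sin(t) sampled with step h."""
    t = np.arange(0.0, 2.0 * np.pi, h)
    y = np.round(np.sin(t) / q) * q          # quantized position samples
    return float(np.max(np.abs(accel_central(y, h) + np.sin(t[1:-1]))))

# Quantization error (~2q/h^2) dominates at small h; truncation error
# (~h^2/12) dominates at large h, so an optimal sampling step exists.
for h in (0.001, 0.01, 0.1, 0.5):
    print(f"h = {h:5}: max |error| = {max_error(h):.4g}")
```

Oversampling quantized data blows the estimate up through the 1/h² factor, while undersampling reintroduces truncation error, which is exactly the trade-off the paper's tables resolve per scheme.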
18

Hu, Li, Shilian Wang, and Eryang Zhang. "Aspect-Aware Target Detection and Localization by Wireless Sensor Networks." Sensors 18, no. 9 (August 25, 2018): 2810. http://dx.doi.org/10.3390/s18092810.

Abstract:
This paper considers the active detection of a stealth target with aspect-dependent reflection (e.g., a submarine, aircraft, etc.) using wireless sensor networks (WSNs). When the target is detected, its localization is also of interest. Due to stringent bandwidth and energy constraints, sensor observations are individually quantized into few-bit data and then transmitted to a fusion center (FC), where a generalized likelihood ratio test (GLRT) detector is employed to achieve target detection and maximum likelihood estimation of the target location simultaneously. In this context, we first develop a GLRT detector using one-bit quantized data, which is shown to outperform the typical counting rule and the detection scheme based on the scan statistic. We further propose a GLRT detector based on adaptive multi-bit quantization, in which the sensor observations are more precisely quantized and the quantized data can be efficiently transmitted to the FC. The Cramér-Rao lower bound (CRLB) on the estimate of the target location is also derived for the GLRT detector. The simulation results show that the proposed GLRT detector with adaptive 2-bit quantization achieves much better performance than the GLRT based on one-bit quantization, at the cost of only a minor increase in communication overhead.
19

Yang, Yu, Jun Hu, Dongyan Chen, Yunliang Wei, and Junhua Du. "Non-fragile Suboptimal Set-membership Estimation for Delayed Memristive Neural Networks with Quantization via Maximum-error-first Protocol." International Journal of Control, Automation and Systems 18, no. 7 (January 22, 2020): 1904–14. http://dx.doi.org/10.1007/s12555-019-0422-9.

20

Dalmia, Nandita, and Manish Okade. "Robust first quantization matrix estimation based on filtering of recompression artifacts for non-aligned double compressed JPEG images." Signal Processing: Image Communication 61 (February 2018): 9–20. http://dx.doi.org/10.1016/j.image.2017.10.011.

21

Wang, Baofeng, Ge Guo, and Xiue Gao. "Variance-Constrained Robust Estimation for Discrete-Time Systems with Communication Constraints." Mathematical Problems in Engineering 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/980753.

Abstract:
This paper is concerned with a new filtering problem in networked control systems (NCSs) subject to limited communication capacity, including measurement quantization, random transmission delay, and packet loss. The measurements are first quantized via a logarithmic quantizer and then transmitted through a digital communication network with random delay and packet loss. These three communication-constraint phenomena, which can be seen as a class of uncertainties, are formulated as a stochastic parameter uncertainty system. The purpose of the paper is to design a linear filter such that, for all the communication constraints, the error state of the filtering process is mean-square bounded and the steady-state variance of the estimation error for each state is no more than an individually prescribed upper bound. It is shown that the desired filtering problem can effectively be solved if there are positive definite solutions to a couple of algebraic Riccati-like inequalities or linear matrix inequalities. Finally, an illustrative numerical example is presented to demonstrate the effectiveness and flexibility of the proposed design approach.
22

Yang, Xiong, Wang, and Zhang. "Analysis of Byzantine Attacks for Target Tracking in Wireless Sensor Networks." Sensors 19, no. 15 (August 5, 2019): 3436. http://dx.doi.org/10.3390/s19153436.

Abstract:
Herein, the problem of target tracking in wireless sensor networks (WSNs) is investigated in the presence of Byzantine attacks. More specifically, we analyze the impact of Byzantine attacks on the performance of a tracking system. First, under the condition of jointly estimating the target state and the attack parameters, the posterior Cramér-Rao lower bound (PCRLB) is calculated. Then, from the perspective of attackers, we define the optimal Byzantine attack and theoretically find a way to achieve such an attack with minimal cost. When the attacked nodes are correctly identified by the fusion center (FC), we further define the suboptimal Byzantine attack and also find a way to realize such an attack. Finally, in order to alleviate the negative impact of attackers on system performance, a modified sampling importance resampling (SIR) filter is proposed. Simulation results show that the tracking results of the modified SIR filter can be close to the true trajectory of the moving target. In addition, when the quantization level increases, both the security performance and the estimation performance of the tracking system are improved.
23

Kern, Jonathan, Elsa Dupraz, Abdeldjalil Aïssa-El-Bey, Lav R. Varshney, and François Leduc-Primeau. "Optimizing the Energy Efficiency of Unreliable Memories for Quantized Kalman Filtering." Sensors 22, no. 3 (January 23, 2022): 853. http://dx.doi.org/10.3390/s22030853.

Abstract:
This paper presents a quantized Kalman filter implemented using unreliable memories. We consider that both the quantization and the unreliable memories introduce errors in the computations, and we develop an error propagation model that takes into account these two sources of errors. In addition to providing updated Kalman filter equations, the proposed error model accurately predicts the covariance of the estimation error and gives a relation between the performance of the filter and its energy consumption, depending on the noise level in the memories. Then, since memories are responsible for a large part of the energy consumption of embedded systems, optimization methods are introduced to minimize the memory energy consumption under the desired estimation performance of the filter. The first method computes the optimal energy levels allocated to each memory bank individually, and the second one optimizes the energy allocation per groups of memory banks. Simulations show a close match between the theoretical analysis and experimental results. Furthermore, they demonstrate an important reduction in energy consumption of more than 50%.
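The flavor of the quantization-aware error model can be seen in a scalar sketch (my toy construction; it folds the uniform quantizer's q²/12 variance into the measurement-noise covariance and does not model the paper's unreliable memories): coarser measurement quantization yields a higher empirical estimation error.

```python
import numpy as np

def run_filter(q_step, n=2000, seed=3):
    """Scalar random walk tracked from uniformly quantized measurements."""
    rng = np.random.default_rng(seed)
    Q = 1e-3                        # process-noise variance (assumed)
    R = 0.04 + q_step**2 / 12.0     # sensor noise + uniform-quantization variance
    x_true, x_est, P, sse = 0.0, 0.0, 1.0, 0.0
    for _ in range(n):
        x_true += np.sqrt(Q) * rng.standard_normal()
        z = x_true + 0.2 * rng.standard_normal()
        z = np.round(z / q_step) * q_step       # uniform quantizer on the measurement
        P += Q                                  # predict
        K = P / (P + R)                         # gain computed with inflated R
        x_est += K * (z - x_est)                # update
        P *= (1.0 - K)
        sse += (x_est - x_true) ** 2
    return sse / n

print(run_filter(0.01), run_filter(0.5))  # finer quantization -> lower MSE
```

Inflating R by q²/12 is the standard additive-noise model of a uniform quantizer; the paper's contribution is to extend this kind of error propagation to also cover bit flips in unreliable memories and to optimize the energy allocation accordingly.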
24

Al Koutayni, Mhd Rashed, Vladimir Rybalkin, Jameel Malik, Ahmed Elhayek, Christian Weis, Gerd Reis, Norbert Wehn, and Didier Stricker. "Real-Time Energy Efficient Hand Pose Estimation: A Case Study." Sensors 20, no. 10 (May 16, 2020): 2828. http://dx.doi.org/10.3390/s20102828.

Abstract:
The estimation of human hand pose has become the basis for many vital applications where the user depends mainly on the hand pose as a system input. Virtual reality (VR) headsets, the shadow dexterous hand and in-air signature verification are a few examples of applications that require tracking of hand movements in real time. The state-of-the-art 3D hand pose estimation methods are based on Convolutional Neural Networks (CNNs). These methods are implemented on Graphics Processing Units (GPUs), mainly due to their extensive computational requirements. However, GPUs are not suitable for practical application scenarios where low power consumption is crucial. Furthermore, the difficulty of embedding a bulky GPU into a small device prevents the portability of such applications to mobile devices. The goal of this work is to provide an energy-efficient solution for an existing depth-camera-based hand pose estimation algorithm. First, we compress the deep neural network model by applying dynamic quantization techniques to different layers, achieving maximum compression without compromising accuracy. Afterwards, we design a custom hardware architecture. We selected an FPGA as the target platform because FPGAs provide high energy efficiency and can be integrated into portable devices. Our solution, implemented on a Xilinx UltraScale+ MPSoC FPGA, is 4.2× faster and 577.3× more energy efficient than the original implementation of the hand pose estimation algorithm on an NVIDIA GeForce GTX 1070.
25

Zhang, Xianyu, Xiaoqiang Qiao, Kang An, Junquan Deng, Tao Liang, and Xiaoyu Wang. "Reconfigurable Intelligent Surface-Aided Cell-Free Massive MIMO with Low-Resolution ADCs: Secrecy Performance Analysis and Optimization." Wireless Communications and Mobile Computing 2022 (November 3, 2022): 1–15. http://dx.doi.org/10.1155/2022/8462350.

Abstract:
Secure communication in a reconfigurable intelligent surface-aided cell-free massive MIMO system with low-resolution ADCs is investigated in the presence of an active eavesdropper. Specifically, an aggregated channel estimation approach is applied to decrease the overhead required to estimate the channels. Using the available imperfect channel state information (CSI), conjugate beamforming and random beamforming are applied for downlink data transmission at the APs and the RIS, respectively. A closed-form expression for the achievable secrecy rate is derived to appraise the achievable secrecy performance using only the channel statistics. With these analytical results, the impacts of the quantization bits of the ADCs, the channel estimation error, the number of RIS elements, and the number of APs can be unveiled. A power control optimization scheme is then formulated, aiming to maximize the minimum achievable rate of all legitimate users subject to security constraints. To tackle the nonconvexity of the proposed optimization problem, a path-following algorithm is utilized to solve the initial problem with continuous approximations and iterative optimization. Numerical results are presented to verify the analytical results and the effectiveness of the presented power allocation approach.
26

Kapitanov, Viktor A., Anna A. Ivanova, and Aleksandra Y. Maksimova. "The problems of numerical inequalities estimates." Statistics and Economics 15, no. 4 (September 4, 2018): 4–15. http://dx.doi.org/10.21686/2500-3925-2018-4-4-15.

Abstract:
The purpose of this paper is to examine the shortcomings of the widely used inequality coefficients that appear when working with real (i.e., knowingly incomplete) data, and to search for alternative quantitative methods for describing inequality that are free of these shortcomings.
Research methods:
– consideration of as full a range as possible of real data on the distribution of the population by income, expenditure and property (i.e., data on the economic structure of society);
– identification of the specific shortcomings of these data on the economic structure of society, establishing which information is missing or presented disproportionately;
– comparison of the values of the most widely used inequality indices calculated on real data on the economic structure, with a view to establishing the suitability of these indicators for inequality estimation;
– development of an inequality index that adequately describes the real economic structure of society.
Research data:
– official data of Rosstat and the Federal Tax Service on incomes of Russian citizens;
– specialized sites with announcements of prices for real estate and cars;
– Credit Suisse Research Institute data on the distribution of Russian citizens by property level;
– Forbes data on the income and wealth of the richest people in Russia.
It is shown that the income data are essentially incomplete and fragmentary: the width of the income range (i.e., the income of the richest member of society) is known, but the filling of the rich cohorts is not, since the incomes of the richest members of society are hidden. We propose the following requirements (criteria) for an inequality index:
– the possibility of calculating the index for arbitrary quantization;
– invariance of the index value under different quantizations of the same data;
– sensitivity of the index to the width of the income range.
It is noted that only the exponential function describes societies with high social inequality well enough (the intensity of the exponential distribution is more than 10). For the presented population distributions, the following inequality indices are calculated:
– decile coefficient of funds;
– Gini coefficient;
– Pareto index;
– indicators of total entropy of zero, first (the Theil index) and second orders;
– the ratio of maximum income (property value) to the modal value;
– intensity of the exponential distribution.
It is shown that:
– the value of the Pareto index does not have a unique relationship with inequality;
– the coefficients of funds (decile, quintile, etc.) are not computable for arbitrary quantization and are therefore unsuitable for comparing data from various sources with different quantizations;
– the Gini index requires complete data on the rich;
– of all the considered criteria, the first three indicators of total entropy, as well as the ratio of maximum income (property) to the modal value, depend strongly on data quantization, and are therefore unsuitable for comparing data from various sources with different quantizations.
It is concluded that the intensity of the exponential distribution does not possess the listed disadvantages and can be recommended as an index of inequality.
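Two of the indices compared in the abstract take only a few lines to compute. The following Python sketch (an illustration, not code from the paper) computes the Gini coefficient and the decile coefficient of funds from individual income data:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula:
    G = sum_i (2i - n - 1) * x_i / (n * sum(x)) for sorted x."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

def decile_funds_ratio(incomes):
    """Decile coefficient of funds: total income of the richest 10%
    divided by total income of the poorest 10%."""
    x = np.sort(np.asarray(incomes, dtype=float))
    k = max(1, x.size // 10)
    return x[-k:].sum() / x[:k].sum()
```

Note that both functions need the full upper tail of the distribution, which is exactly the data the paper argues is hidden for the richest cohorts.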
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Zheng, Xiaofeng Zhu, Guangming Lu, and Yudong Zhang. "Probability Ordinal-Preserving Semantic Hashing for Large-Scale Image Retrieval." ACM Transactions on Knowledge Discovery from Data 15, no. 3 (April 12, 2021): 1–22. http://dx.doi.org/10.1145/3442204.

Full text
Abstract:
Semantic hashing enables computation and memory-efficient image retrieval through learning similarity-preserving binary representations. Most existing hashing methods mainly focus on preserving the piecewise class information or pairwise correlations of samples into the learned binary codes while failing to capture the mutual triplet-level ordinal structure in similarity preservation. In this article, we propose a novel Probability Ordinal-preserving Semantic Hashing (POSH) framework, which for the first time defines the ordinal-preserving hashing concept under a non-parametric Bayesian theory. Specifically, we derive the whole learning framework of the ordinal similarity-preserving hashing based on the maximum posteriori estimation, where the probabilistic ordinal similarity preservation, probabilistic quantization function, and probabilistic semantic-preserving function are jointly considered into one unified learning framework. In particular, the proposed triplet-ordering correlation preservation scheme can effectively improve the interpretation of the learned hash codes under an economical anchor-induced asymmetric graph learning model. Moreover, the sparsity-guided selective quantization function is designed to minimize the loss of space transformation, and the regressive semantic function is explored to promote the flexibility of the formulated semantics in hash code learning. The final joint learning objective is formulated to concurrently preserve the ordinal locality of original data and explore potentials of semantics for producing discriminative hash codes. Importantly, an efficient alternating optimization algorithm with the strictly proof convergence guarantee is developed to solve the resulting objective problem. Extensive experiments on several large-scale datasets validate the superiority of the proposed method against state-of-the-art hashing-based retrieval methods.
APA, Harvard, Vancouver, ISO, and other styles
28

CASACUBERTA, FRANCISCO. "GROWTH TRANSFORMATIONS FOR PROBABILISTIC FUNCTIONS OF STOCHASTIC GRAMMARS." International Journal of Pattern Recognition and Artificial Intelligence 10, no. 03 (May 1996): 183–201. http://dx.doi.org/10.1142/s0218001496000153.

Full text
Abstract:
Stochastic Grammars are the most usual models in Syntactic Pattern Recognition. Both components of a Stochastic Grammar, the characteristic grammar and the probabilities attached to the rules, can be learnt automatically from training samples. In this paper, a review of some algorithms for inferring the probabilistic component of Stochastic Regular and Context-Free Grammars is first presented under the framework of Growth Transformations. On the other hand, with Stochastic Grammars, the patterns must be represented as strings over a finite set of symbols. However, the most natural representation in many Syntactic Pattern Recognition applications (e.g. speech) is as sequences of vectors from a feature vector space, that is, a continuous representation. Therefore, to obtain a discrete representation of the patterns, some quantization errors are introduced in the representation process. To avoid this drawback, a formal presentation of a semi-continuous extension of Stochastic Regular and Context-Free Grammars is studied and probabilistic estimation algorithms are developed in this paper. In this extension, sequences of vectors, instead of strings of symbols, can be processed with Stochastic Grammars.
APA, Harvard, Vancouver, ISO, and other styles
29

Graepel, Thore, and Klaus Obermayer. "A Stochastic Self-Organizing Map for Proximity Data." Neural Computation 11, no. 1 (January 1, 1999): 139–55. http://dx.doi.org/10.1162/089976699300016854.

Full text
Abstract:
We derive an efficient algorithm for topographic mapping of proximity data (TMP), which can be seen as an extension of Kohonen's self-organizing map to arbitrary distance measures. The TMP cost function is derived in a Bayesian framework of folded Markov chains for the description of autoencoders. It incorporates the data by a dissimilarity matrix [Formula: see text] and the topographic neighborhood by a matrix [Formula: see text] of transition probabilities. From the principle of maximum entropy, a nonfactorizing Gibbs distribution is obtained, which is approximated in a mean-field fashion. This allows for maximum likelihood estimation using an expectation-maximization algorithm. In analogy to the transition from topographic vector quantization to the self-organizing map, we suggest an approximation to TMP that is computationally more efficient. In order to prevent convergence to local minima, an annealing scheme in the temperature parameter is introduced, for which the critical temperature of the first phase transition is calculated in terms of [Formula: see text] and [Formula: see text]. Numerical results demonstrate the working of the algorithm and confirm the analytical results. Finally, the algorithm is used to generate a connection map of areas of the cat's cerebral cortex.
APA, Harvard, Vancouver, ISO, and other styles
30

Tsironis, V., A. Tranou, A. Vythoulkas, A. Psalta, E. Petsa, and G. Karras. "AUTOMATIC RECTIFICATION OF BUILDING FAÇADES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (February 23, 2017): 645–50. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-645-2017.

Full text
Abstract:
Focusing mainly on the case of (near-)planar building façades, a methodology for their automatic projective rectification is described and evaluated. It relies on a suitably configured, calibrated stereo pair of an object expected to contain a minimum of vertical and/or horizontal lines for the purposes of levelling. The SURF operator has been used for extracting and matching interest points. The coplanar points have been separated with two alternative methods. First, the fundamental matrix of the stereo pair, computed using robust estimation, allows estimating the relative orientation of the calibrated pair; initial parameter values, if needed, may be estimated via the essential matrix. Intersection of valid points creates a 3D point set in model space, to which a plane is robustly fitted. Second, all initial point matches are directly used for robustly estimating the inter-image homography of the pair, thus directly selecting all image matches referring to coplanar points; initial values for the relative orientation parameters, if needed, may be estimated from a decomposition of the inter-image homography. Finally, all intersected coplanar model points yield the object-to-image homography to allow image rectification. The in-plane rotation required to finalize the transformation is found by assuming that rectified images contain sufficient straight linear segments to form a dominant pair of orthogonal directions which correspond to horizontality/verticality in 3D space. In our implementation, image edges from Canny detector are used in linear Hough Transform (HT) resulting in a 2D array (ρ, θ) with values equal to the sum of pixels belonging to the particular line. Quantization parameter values aim at absorbing possible slight deviations from collinearity due to thinning or uncorrected lens distortions. 
By first imposing a threshold expressing the minimum acceptable number of edge-characterized pixels, the resulting HT is accumulated along the ρ-dimension to give a single vector, whose values represent the number of lines of the particular direction. Since here the dominant pair of orthogonal directions has to be found, all vector values are added with their π/2-shifted counterpart. This function is then convolved with a 1D Gaussian function; the optimal angle of in-plane rotation is at the maximum value of the result. The described approach has been successfully evaluated with several building façades of varying morphology by assessing remaining line convergence (projectivity), skewness and deviations from horizontality/verticality. Mean estimated deviation from a metric result was 0.2°. Open questions are also discussed.
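The π/2-folding and Gaussian-smoothing step described in the abstract can be sketched in a few lines. This is a schematic reimplementation under assumed conventions (angle bins uniformly covering [0°, 180°), an even number of bins, Gaussian width given in bins), not the authors' code:

```python
import numpy as np

def dominant_orthogonal_angle(theta_counts, sigma_bins=2):
    """Fold the Hough angle histogram by 90 degrees and smooth it to
    find the dominant pair of orthogonal line directions.

    theta_counts: 1-D array of line counts per angle bin, bins
                  spanning [0, 180) degrees uniformly (even length).
    Returns the in-plane rotation angle in degrees, in [0, 90).
    """
    counts = np.asarray(theta_counts, dtype=float)
    n = counts.size
    half = n // 2  # 90 degrees worth of bins
    # add each direction to its pi/2-shifted counterpart
    folded = counts[:half] + counts[half:half * 2]
    # circular convolution with a 1-D Gaussian kernel (absorbs small
    # deviations from collinearity, as the abstract describes)
    k = np.arange(half)
    d = np.minimum(k, half - k)          # circular distance from bin 0
    g = np.exp(-0.5 * (d / sigma_bins) ** 2)
    g /= g.sum()
    smoothed = np.real(np.fft.ifft(np.fft.fft(folded) * np.fft.fft(g)))
    return np.argmax(smoothed) * (180.0 / n)
```

With 1° bins, a histogram peaking at 30° and 120° yields a folded peak at 30°, the common rotation of the orthogonal pair.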
APA, Harvard, Vancouver, ISO, and other styles
31

Cazzaniga, Greta, Carlo De Michele, Michele D'Amico, Cristina Deidda, Antonio Ghezzi, and Roberto Nebuloni. "Hydrological response of a peri-urban catchment exploiting conventional and unconventional rainfall observations: the case study of Lambro Catchment." Hydrology and Earth System Sciences 26, no. 8 (April 27, 2022): 2093–111. http://dx.doi.org/10.5194/hess-26-2093-2022.

Full text
Abstract:
Abstract. Commercial microwave links (CMLs) can be used as opportunistic and unconventional rainfall sensors by converting the received signal level into path-averaged rainfall intensity. As the reliable reconstruction of the spatial distribution of rainfall is still a challenging issue in meteorology and hydrology, there is a widespread interest in integrating the precipitation estimates gathered by the ubiquitous CMLs with the conventional rainfall sensors, i.e. rain gauges (RGs) and weather radars. Here, we investigate the potential of a dense CML network for the estimation of river discharges via a semi-distributed hydrological model. The analysis is conducted in a peri-urban catchment, Lambro, located in northern Italy and covered by 50 links. A two-level comparison is made between CML- and RG-based outcomes, relying on 12 storm/flood events. First, rainfall data are spatially interpolated and assessed in a set of significant points of the catchment area. Rainfall depth values obtained from CMLs are definitively comparable with direct RG measurements, except for the spells of persistent light rain, probably due to the limited sensitivity of CMLs caused by the coarse quantization step of raw power data. Moreover, it is shown that, when changing the type of rainfall input, a new calibration of model parameters is required. In fact, after the recalibration of model parameters, CML-driven model performance is comparable with RG-driven performance, confirming that the exploitation of a CML network may be a great support to hydrological modelling in areas lacking a well-designed and dense traditional monitoring system.
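Converting a link's received signal level into path-averaged rain rate typically relies on the k-R power law, k = a·R^b, where k is the specific attenuation in dB/km and a, b depend on frequency and polarization. The sketch below inverts that relation; the coefficient values are illustrative placeholders, not values from this study:

```python
def rain_rate_from_attenuation(rain_loss_db, length_km, a=0.12, b=1.1):
    """Path-averaged rain rate (mm/h) from rain-induced attenuation.

    rain_loss_db: attenuation attributed to rain over the whole link (dB)
    length_km:    link path length (km)
    a, b:         power-law coefficients (placeholders; in practice taken
                  from tables as a function of frequency/polarization)
    """
    k = rain_loss_db / length_km      # specific attenuation, dB/km
    return (k / a) ** (1.0 / b)
```

The coarse quantization step of the raw power data mentioned in the abstract limits the smallest resolvable k, which is why light rain is hard for CMLs to detect.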
APA, Harvard, Vancouver, ISO, and other styles
32

Kolokolov, Yury, and Anna Monovskaya. "Guess-Work and Reasonings on Centennial Evolution of Surface Air Temperature in Russia. Part IV: Towards Economic Estimations of Climate-Related Damages from the Bifurcation Analysis Viewpoint." International Journal of Bifurcation and Chaos 26, no. 12 (November 2016): 1630033. http://dx.doi.org/10.1142/s0218127416300330.

Full text
Abstract:
The paper completes the cycle of the research devoted to the development of the experimental bifurcation analysis (not computer simulations) in order to answer the following questions: whether qualitative changes occur in the dynamics of local climate systems in a centennial timescale?; how to analyze such qualitative changes with daily resolution for local and regional space-scales?; how to establish one-to-one daily correspondence between the dynamics evolution and economic consequences for productions? To answer the questions, the unconventional conceptual model to describe the local climate dynamics was proposed and verified in the previous parts. That model (HDS-model) originates from the hysteresis regulator with double synchronization and has a variable structure due to competition between the amplitude quantization and the time quantization. The main advantage of the HDS-model is connected with the possibility to describe “internally” (on the basis of the self-regulation) the specific causal effects observed in the dynamics of local climate systems instead of “external” description of three states of the hysteresis behavior of climate systems (upper, lower and transient states). As a result, the evolution of the local climate dynamics is based on the bifurcation diagrams built by processing the data of meteorological observations, where the strange effects of the essential interannual daily variability of annual temperature variation are taken into account and explained. It opens the novel possibilities to analyze the local climate dynamics taking into account the observed resultant of all internal and external influences on each local climate system. In particular, the paper presents the viewpoint on how to estimate economic damages caused by climate-related hazards through the bifurcation analysis. 
That viewpoint includes the following ideas: practically each local climate system is characterized by its own time pattern of the natural qualitative changes in temperature dynamics over a century, so any unified time window to determine the local climatic norms seems questionable; the temperature limits determined for climate-related technological hazards should be justified by the conditions of artificial human activity, not by the climatic norms; the damages caused by such hazards can be approximately estimated in relation to the average annual profit of each production. It thus becomes possible to estimate the minimal and maximal numbers of the specified hazards per year in order, first of all, to avoid unforeseen latent damages. It also becomes possible to make some useful relative estimates of damage and profit. We believe that the results presented in the cycle illustrate the great practical potential of current advances in experimental bifurcation analysis. In particular, the developed QHS-analysis provides novel prospects for both adapting production to climatic changes and compensating negative technological impacts on the environment.
APA, Harvard, Vancouver, ISO, and other styles
33

Zinevich, A., H. Messer, and P. Alpert. "Prediction of rainfall intensity measurement errors using commercial microwave communication links." Atmospheric Measurement Techniques 3, no. 5 (October 12, 2010): 1385–402. http://dx.doi.org/10.5194/amt-3-1385-2010.

Full text
Abstract:
Abstract. Commercial microwave radio links forming cellular communication networks are known to be a valuable instrument for measuring near-surface rainfall. However, operational communication links are more uncertain relative to dedicated installations since their geometry and frequencies are optimized for high communication performance rather than for observing rainfall. Quantification of the uncertainties for measurements that are non-optimal in the first place is essential to assure usability of the data. In this work we address modeling of instrumental impairments, i.e. signal variability due to antenna wetting, baseline attenuation uncertainty and digital quantization, as well as environmental ones, i.e. variability of drop size distribution along a link affecting accuracy of path-averaged rainfall measurement and spatial variability of rainfall in the link's neighborhood affecting the accuracy of rainfall estimation out of the link path. Expressions for root mean squared error (RMSE) for estimates of path-averaged and point rainfall have been derived. To verify the RMSE expressions quantitatively, path-averaged measurements from 21 operational communication links in 12 different locations have been compared to records of five nearby rain gauges over three rainstorm events. The experiments show that the prediction accuracy is above 90% for temporal accumulation less than 30 min and lowers for longer accumulation intervals. Spatial variability in the vicinity of the link, baseline attenuation uncertainty and, possibly, suboptimality of the wet antenna attenuation model are the major sources of link-gauge discrepancies. In addition, the dependence of the optimal coefficients of a conventional wet antenna attenuation model on spatial rainfall variability and, accordingly, link length has been shown.
The expressions for RMSE of the path-averaged rainfall estimates can be useful for integration of measurements from multiple heterogeneous links into data assimilation algorithms.
APA, Harvard, Vancouver, ISO, and other styles
34

Alkalani, Fadi, and Raed Sahawneh. "Methods and Algorithms of Speech Signals Processing and Compression and Their Implementation in Computer Systems." Oriental journal of computer science and technology 10, no. 04 (December 11, 2017): 736–44. http://dx.doi.org/10.13005/ojcst/10.04.06.

Full text
Abstract:
A review and comparative analysis of methods for the compression and recognition of speech signals is carried out. The analysis of existing recognition methods indicates that all of them are based on “inflexible” algorithms, which are badly adapted to the characteristic features of speech signals, thus degrading the efficiency of the whole recognition system. The necessity of using algorithms for the determination of recognition features, along with wavelet packet analysis, as one of the advanced directions for creating effective methods and principles for the development of speech signal recognition systems is substantiated. An analysis of compression methods based on orthogonal transformations with complete exclusion of the minimal decomposition factors is conducted, and the maximal possible compression degree is defined. In this compression method, the orthogonal transformation of a signal segment is followed by the exclusion of the set of smallest-magnitude decomposition factors, irrespective of the order of their distribution; therefore, an additional transfer of information on the factor distribution is required. As a result, two information streams appear: the first corresponds to the decomposition factors themselves, and the second transfers information on the distribution of these factors. A method for determining the recognition features of speech signals and an algorithm for nonlinear time normalization are proposed and validated. The wavelet packet transformation is adaptive, i.e. it adapts to the signal features more accurately by choosing the proper tree of the optimal decomposition form, which provides the minimal number of wavelet factors at the prescribed accuracy of signal reconstruction, thus eliminating information surplus and unnecessary details of the signals.
The informativeness of the set of wavelet factors is estimated by the entropy. To obtain the recognition factors, spectral analysis is used. To carry out the time normalization, a warping function is found whose use minimizes the discrepancy between the reference and new word realizations. The paper is also dedicated to the determination of admissible compression factors on the basis of orthogonal transformations with incomplete elimination of the set of minimal decomposition factors, to the creation of a block diagram of the method of recognition-feature formation, and to the practical testing of the software methods. To increase the compression factor, adaptive uniform quantization is used, where the adaptation is conducted over all decomposition factors. Software testing of the recognition methods is carried out by determining the classification error probability using the Mahalanobis (Gonzales) distance.
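The core compression idea described in the abstract (orthogonal transform, exclusion of the smallest-magnitude coefficients, plus a second stream carrying their positions) can be sketched as follows. This is an illustration, not the paper's implementation; an orthonormal DCT-II stands in for "an orthogonal transformation":

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (one possible orthogonal transform)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0] *= np.sqrt(1.0 / n)
    c[1:] *= np.sqrt(2.0 / n)
    return c

def compress(signal, keep_ratio=0.25):
    """Transform, zero out the smallest-magnitude coefficients, invert.

    Returns the reconstruction and the boolean mask of kept positions;
    the mask is the second information stream that would have to be
    transmitted alongside the kept coefficient values."""
    x = np.asarray(signal, dtype=float)
    t = dct_matrix(x.size)
    coeffs = t @ x
    keep = max(1, int(round(keep_ratio * x.size)))
    idx = np.argsort(np.abs(coeffs))[-keep:]   # largest-magnitude factors
    mask = np.zeros(coeffs.size, dtype=bool)
    mask[idx] = True
    return t.T @ (coeffs * mask), mask         # t.T inverts the orthonormal transform
```

A signal whose energy is concentrated in few transform coefficients is reconstructed almost exactly; the adaptive uniform quantization of the kept coefficients mentioned in the abstract would be a further step.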
APA, Harvard, Vancouver, ISO, and other styles
35

Kukharchuk, Vasyl, Waldemar Wójcik, Sergii Pavlov, Samoil Katsyv, Volodymyr Holodiuk, Oleksandr Reyda, Ainur Kozbakova, and Gaukhar Borankulova. "FEATURES OF THE ANGULAR SPEED DYNAMIC MEASUREMENTS WITH THE USE OF AN ENCODER." Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska 12, no. 3 (September 30, 2022): 20–26. http://dx.doi.org/10.35784/iapgos.3035.

Full text
Abstract:
Based on the most significant features of dynamic angular velocity measurements selected by the authors, the main phases of measuring-information transformation were established, which made it possible to obtain new mathematical models in the form of a transformation function, equations for estimating quantization errors, and analytical dependences for the measuring range, which are the starting point for modeling the physical processes occurring in such digital measuring channels with microprocessor control. The process of converting an analog quantity into a binary code is analytically described; an equation for estimating the absolute and relative quantization error is obtained and a measurement range is established that provides a normalized value of relative quantization error for angular velocity measuring channels with an encoder. For the first time, the equation of sampling error was obtained, and it was proved that the factor limiting the upper limit of angular velocity measurements is not only the normalized value of quantization error, as previously thought, but also the value of the sampling frequency fD. Therefore, to expand the measurement range (by increasing the upper limit of measurement), it is proposed not only to increase the speed of the analog-to-digital conversion hardware, but also to reduce the execution time of the software drivers transmitting measurement information to the RAM of the microprocessor system. For this purpose, analytical dependences for estimating the upper measurement limit based on the sampling step for different modes of measurement-information transmission are obtained. The practical implementation of measurement-information transmission in software mode is characterized by minimal hardware costs and maximal execution time of the software driver, which explains its low speed and therefore yields the minimum value of the upper measurement limit.
In the interrupt mode, the upper limit of angular velocity measurement is higher than in the program mode due to the reduction of the software driver's execution time (tFl = 0). The maximum value of the upper limit of angular velocity measurements can be achieved by transmitting measurement information in direct memory access (DMA) mode, which provides the maximum speed (tFl = 0, tDR = 0). In addition, applying the results obtained in the work makes it possible, at the design stage (during physical and mathematical modeling), to assess the basic metrological characteristics of the measuring channel, reducing the development and debugging time of hardware and software and supporting the standardization of their metrological characteristics.
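The counting relations behind the quantization-error equations can be illustrated with a hypothetical Z-line encoder read over a gate interval. The symbols and formulas below are a conventional pulse-counting sketch, not the authors' notation:

```python
import math

def encoder_omega(pulse_count, z, t_gate):
    """Angular velocity estimate (rad/s) from counting pulse_count
    pulses of a z-line encoder during the gate interval t_gate (s)."""
    return 2.0 * math.pi * pulse_count / (z * t_gate)

def quantization_errors(omega, z, t_gate):
    """Absolute (one-count) and relative quantization errors of the
    counting method for a true angular velocity omega (rad/s).
    The relative error equals 1 / (expected pulse count), so it
    shrinks as omega, z or t_gate grow."""
    abs_err = 2.0 * math.pi / (z * t_gate)   # one-pulse resolution
    rel_err = abs_err / omega
    return abs_err, rel_err
```

The trade-off the abstract describes follows directly: a shorter gate (higher sampling frequency) raises the reachable upper limit but worsens the relative quantization error at low speeds.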
APA, Harvard, Vancouver, ISO, and other styles
36

MATTFELDT, TORSTEN. "CLASSIFICATION OF BINARY SPATIAL TEXTURES USING STOCHASTIC GEOMETRY, NONLINEAR DETERMINISTIC ANALYSIS AND ARTIFICIAL NEURAL NETWORKS." International Journal of Pattern Recognition and Artificial Intelligence 17, no. 02 (March 2003): 275–300. http://dx.doi.org/10.1142/s0218001403002332.

Full text
Abstract:
Stereology and stochastic geometry can be used as auxiliary tools for diagnostic purposes in tumour pathology. The role of first-order parameters and stochastic–geometric functions for the classification of the texture of biological tissues has been investigated recently. The volume fraction and surface area per unit volume, the pair correlation function and the centred quadratic contact density function of epithelium were estimated in three case series of benign and malignant lesions of glandular tissues. This approach was further extended by applying the Laslett test, i.e. a point process statistic computed after transformation of the convex tangent points of sectioned random sets from planar images. This method has not been applied to histological images so far. The nonlinear deterministic approach to tissue texture was also applied by estimating the correlation dimension as a function of embedding dimension. We used the stochastic–geometric functions, the first-order parameters and the correlation dimensions for the classification of cases using various algorithms. Learning vector quantization was applied as the neural paradigm. Applications included distinction between mastopathy and mammary cancer, between benign prostatic hyperplasia and prostatic cancer, and between chronic pancreatitis and pancreatic cancer. The same data sets were also classified with discriminant analysis and support vector machines. The stereological estimates provided high accuracy in the classification of individual cases. The question of which category of estimator is the most informative cannot be answered globally but must be explored empirically for each specific data set. The results obtained by the three algorithms were similar.
APA, Harvard, Vancouver, ISO, and other styles
37

Классина and S. Klassina. "Psycholigical impacts as ameans of human´s functional state rehabilitation under psycho-emotional stress." Journal of New Medical Technologies. eJournal 8, no. 1 (November 5, 2014): 1–7. http://dx.doi.org/10.12737/5478.

Full text
Abstract:
The article is devoted to studying the systemic reactions of the human body to psychological rehabilitation interventions under psycho-emotional stress. Two series of surveys were carried out. In the first, 20 students took part who, directly before an exam, were exposed to a 20-minute session of melodious music as a psychological rehabilitation intervention. In the second, 27 students took part who, directly before an exam, were exposed to a 5-minute session of autogenous express regulation as a psychological rehabilitation intervention. These students concentrated on the thumb of the right hand, but formulas of autosuggestion were excluded. Periodically, the subjects were asked to switch on the «inner eye» and to trace the movement of blood through the vessels of the hand. They had to watch for the appearance of a new subjective feeling and record it. Before and after each kind of psychological rehabilitation, the subjects were offered a computer test of operator activity simulating goal shooting. The methodological basis for the analysis of the test activity was the systemic «quantization concept». In accordance with it, the whole continuum of test activity is broken down into individual discrete segments, «systemokvants», each of which has all the features of a functional system. Any «systemokvant» can be described by the parameter of the achieved result, its «physiological cost» and an indicator of efficacy. Before and after each kind of psychological rehabilitation, ECG, pneumography, EEG, arterial blood pressure and dynamic tremor were recorded. The R.M. Baevsky tension index, subjective levels of well-being, activity and mood, and the level of situational anxiety by Spielberger were estimated. The subjects' vegetative status was assessed by calculating the Kerdo vegetative index, the Hildebrandt index and the minute volume of blood. EEG spectral analysis was performed, and the values of spectral EEG power in the delta, theta, alpha and beta bands were assessed.
It was shown that the different rehabilitation interventions were addressed to different structures and functions of the human body, causing fundamentally different responses of the whole system. Melodious music, creating positive emotions, helps to reduce mental and emotional stress, to normalize autonomic tone and to increase the efficiency of activity. A session of autogenous express regulation, by acting more strongly on the human psychic sphere, on the contrary, contributed to a change in the organization of system functions and the appearance of new subjective sensations as qualitatively new system properties. The nature of the subjective sensations was determined by the initial state of the bioelectrical activity of the human brain.
APA, Harvard, Vancouver, ISO, and other styles
38

Battiato, Sebastiano, Oliver Giudice, Francesco Guarnera, and Giovanni Puglisi. "CNN-based first quantization estimation of double compressed JPEG images." Journal of Visual Communication and Image Representation, September 2022, 103635. http://dx.doi.org/10.1016/j.jvcir.2022.103635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Tondi, Benedetta, Andrea Costanzo, Dequ Huang, and Bin Li. "Boosting CNN-based primary quantization matrix estimation of double JPEG images via a classification-like architecture." EURASIP Journal on Information Security 2021, no. 1 (May 17, 2021). http://dx.doi.org/10.1186/s13635-021-00119-0.

Full text
Abstract:
Estimating the primary quantization matrix of double JPEG compressed images is a problem of relevant importance in image forensics, since it makes it possible to infer important information about the past history of an image. In addition, the inconsistencies of the primary quantization matrices across different image regions can be used to localize splicing in double JPEG tampered images. Traditional model-based approaches work under specific assumptions on the relationship between the first and second compression qualities and on the alignment of the JPEG grid. Recently, a deep learning-based estimator capable of working under a wide variety of conditions has been proposed that outperforms tailored existing methods in most of the cases. The method is based on a convolutional neural network (CNN) that is trained to solve the estimation as a standard regression problem. By exploiting the integer nature of the quantization coefficients, in this paper, we propose a deep learning technique that performs the estimation by resorting to a classification-like architecture. The CNN is trained with a loss function that takes into account both the accuracy and the mean square error (MSE) of the estimation. Results confirm the superior performance of the proposed technique, compared to the state-of-the-art methods based on statistical analysis and, in particular, deep learning regression. Moreover, the capability of the method to work under general operative conditions, regarding the alignment of the second compression grid with that of the first compression and the combinations of the JPEG qualities of the first and second compression, is very relevant in practical applications, where this information is unknown a priori.
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Yu, Xiaogang Wang, and Naigang Cui. "Quantized feedback particle filter for unmanned aerial vehicles tracking with quantized measurements." Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, July 21, 2020, 095441002094268. http://dx.doi.org/10.1177/0954410020942682.

Full text
Abstract:
Many existing state estimation approaches assume that the measurement noise of sensors is Gaussian. However, in unmanned aerial vehicle tracking applications with a distributed passive radar array, the measurements suffer from quantization noise due to limited communication bandwidth. In this paper, a novel state estimation algorithm referred to as the quantized feedback particle filter (QFPF) is proposed to solve unmanned aerial vehicle tracking with quantized measurements, which is an improvement of the feedback particle filter (FPF) for the case of quantization noise. First, a bearing-only quantized measurement model is presented based on the midriser quantizer. The relationship between quantized measurements and original measurements is analyzed. By assuming that the quantization satisfies [Formula: see text], Sheppard's correction is used for calculating the variances of the measurement noise. Then, a set of controlled particles is used to approximate the posterior distribution. To cope with the quantization noise of passive radars, a new formula for the gain matrix is derived by modifying the measurement noise covariance. Finally, a typical two-passive-radar unmanned aerial vehicle tracking scenario is performed with the QFPF and compared with three other algorithms. Simulation results verify the superiority of the proposed algorithm.
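The two quantization-related building blocks named in the abstract, the midriser quantizer and Sheppard's correction, can be sketched as follows (an illustration of the standard definitions, not the paper's code):

```python
import numpy as np

def midriser(x, delta):
    """Mid-riser uniform quantizer with step delta: output levels sit
    at the midpoints of the quantization cells, so the error is
    bounded by delta / 2."""
    return delta * (np.floor(np.asarray(x, dtype=float) / delta) + 0.5)

def sheppard_corrected_var(r_sensor, delta):
    """Effective measurement-noise variance after quantization.
    Sheppard's correction adds the variance of uniform quantization
    noise, delta**2 / 12, to the sensor-noise variance."""
    return r_sensor + delta ** 2 / 12.0
```

In a filter, the corrected variance would replace the original measurement-noise covariance entry for each quantized channel, which is the spirit of the modified gain matrix described above.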
41

Kim, Hwanjin, and Junil Choi. "Channel estimation for spatially/temporally correlated massive MIMO systems with one-bit ADCs." EURASIP Journal on Wireless Communications and Networking 2019, no. 1 (December 2019). http://dx.doi.org/10.1186/s13638-019-1587-x.

Full text
Abstract:
Abstract: This paper considers the channel estimation problem for massive multiple-input multiple-output (MIMO) systems that use one-bit analog-to-digital converters (ADCs). Previous channel estimation techniques for massive MIMO with one-bit ADCs are all based on single-shot estimation, without exploiting the inherent temporal correlation in wireless channels. In this paper, we propose an adaptive channel estimation technique that takes the spatial and temporal correlations into account for massive MIMO with one-bit ADCs. We first use the Bussgang decomposition to linearize the one-bit quantized received signals. Then, we adopt the Kalman filter to estimate the spatially and temporally correlated channels. Since the quantization noise is not Gaussian, we approximate the effective noise as Gaussian noise with the same statistics in order to apply Kalman filtering. We also implement a truncated polynomial expansion-based low-complexity channel estimator with negligible performance loss. Numerical results reveal that the proposed channel estimators can improve the estimation accuracy significantly by using the spatial and temporal correlations of channels.
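The Bussgang decomposition used to linearize the one-bit quantizer can be checked numerically for a real Gaussian input (a simplified real-valued sketch; the paper works with complex received signals and matrix channels):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
x = rng.normal(0.0, sigma, 500_000)   # Gaussian input to the ADC
y = np.sign(x)                        # one-bit ADC output

# Bussgang gain for a zero-mean Gaussian input: the linear factor that
# makes the residual q = y - A*x uncorrelated with x is
# A = E[x*y] / E[x**2] = sqrt(2 / (pi * sigma**2)).
A = np.sqrt(2.0 / (np.pi * sigma**2))
q = y - A * x                         # effective quantization noise
corr = np.mean(x * q)                 # sample correlation, near zero
```

The decomposition `y = A*x + q` is what lets the Kalman filter treat the quantized observation as a linear measurement with an inflated, approximately Gaussian noise term.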
42

Debyeche, Mohamed, Jean Paul Haton, and Amrane Houacine. "A NEW VECTOR QUANTIZATION APPROACH FOR DISCRETE HMM SPEECH RECOGNITION SYSTEM." International Journal of Computing, August 1, 2014, 72–78. http://dx.doi.org/10.47839/ijc.5.1.384.

Full text
Abstract:
In order to address accuracy issues of discrete Hidden Markov Models (HMMs), this paper presents a new vector quantization (VQ) approach. This new VQ approach performs an optimal distribution of VQ codebook components over HMM states. This technique, which we call distributed vector quantization (DVQ) of hidden Markov models, succeeds in unifying the acoustic micro-structure and the phonetic macro-structure when the HMM parameters are estimated. The DVQ technique is implemented in two variants. The first variant uses the K-means algorithm (K-means-DVQ) to optimize the VQ, while the second variant exploits the classification behavior of neural networks (NN-DVQ) for the same purpose. The proposed variants are compared with an HMM-based baseline system in experiments on the recognition of specific Arabic consonants. The results show that the distributed vector quantization technique increases the performance of the discrete HMM system.
43

Georgieva, Alexandra, Andrey V. Belashov, and Nikolay V. Petrov. "Optimization of DMD-based independent amplitude and phase modulation by analysis of target complex wavefront." Scientific Reports 12, no. 1 (May 11, 2022). http://dx.doi.org/10.1038/s41598-022-11443-x.

Full text
Abstract:
Abstract: The paper presents the results of a comprehensive study on the optimization of independent amplitude and phase wavefront manipulation implemented with a binary digital micromirror device. The study aims to investigate the spatial resolution and quantization achievable with this approach and to optimize them based on the parameters of the target complex wave and an estimate of the modulation error. Based on a statistical analysis of the data, an algorithm was developed for selecting the parameters (the carrier frequency of the binary pattern and the aperture used for filtering the first diffraction order) that ensure the optimal quality of the modulated wavefront. The algorithm takes into account the type of modulation (amplitude, phase, or amplitude-phase), the size of the encoded distribution, and its requirements for spatial resolution and quantization. The results of the study will greatly contribute to improving modulated wavefront quality in various applications with different requirements for spatial resolution and quantization.
44

Zhu, Hongzhong, and Toshiharu Sugie. "Velocity Estimation of Motion Systems Based on Low-Resolution Encoders." Journal of Dynamic Systems, Measurement, and Control 135, no. 1 (October 30, 2012). http://dx.doi.org/10.1115/1.4007065.

Full text
Abstract:
This paper proposes a new approach to estimating the velocity of a mechanical system in the case where an optical incremental encoder is used as the position sensor. First, the actual angular position is reconstructed via a moving-horizon polynomial fitting method that takes account of the quantization feature and the plant dynamics. Then, the reconstructed signal is applied to a classical observer to obtain the velocity estimate. Its robustness against the position sensor resolution and the degree of the polynomial is discussed through numerical examples. Experiments with a very low-resolution encoder in the low-speed range also confirm its effectiveness.
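The moving-horizon polynomial fitting idea can be sketched in numpy (a simplified illustration with made-up encoder resolution, horizon, and polynomial degree; the paper additionally exploits the plant dynamics and feeds the reconstruction into a classical observer):

```python
import numpy as np

def velocity_from_encoder(t, counts, resolution, degree=1, horizon=20):
    """Estimate velocity from quantized encoder positions by fitting a
    polynomial over a moving horizon of recent samples and
    differentiating it at the newest sample.

    degree and horizon are tuning knobs; the paper studies their
    effect on robustness.
    """
    theta = counts * resolution                  # quantized angle [rad]
    t_win, th_win = t[-horizon:], theta[-horizon:]
    coeffs = np.polyfit(t_win - t_win[0], th_win, degree)
    dcoeffs = np.polyder(coeffs)
    return np.polyval(dcoeffs, t_win[-1] - t_win[0])

# hypothetical setup: constant 3 rad/s motion, 100-count/rev encoder
t = np.linspace(0.0, 1.0, 101)
res = 2 * np.pi / 100
counts = np.floor(3.0 * t / res)                 # quantized counts
v_hat = velocity_from_encoder(t, counts, res)
```

Fitting over a window rather than differencing consecutive samples averages out the staircase quantization of the encoder, which is exactly why such methods tolerate very coarse resolutions.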
45

Escrig Mas, Gabriel, Roberto Campos, Pablo Antonio Moreno Casares, and Miguel Angel Martin-Delgado. "Parameter Estimation of Gravitational Waves with a Quantum Metropolis Algorithm." Classical and Quantum Gravity, January 3, 2023. http://dx.doi.org/10.1088/1361-6382/acafcf.

Full text
Abstract:
Abstract: After the first detection of a gravitational wave in 2015, the number of successes achieved by this innovative way of looking at the universe has not stopped growing. However, the current techniques for analyzing this type of event present a serious bottleneck due to the high computational power they require. In this article we explore how recent techniques based on quantum algorithms could overcome this obstacle. For this purpose, we propose a quantization of the classical algorithms used in the literature for the inference of gravitational wave parameters, based on the well-known quantum walk technique applied to a Metropolis-Hastings algorithm. Finally, we develop a quantum environment on classical hardware, implementing a metric to compare quantum and classical algorithms in a fair way. We further test all these developments on the real inference of several sets of parameters for all the events of the first detection period, GWTC-1, and we find a polynomial advantage for the quantum algorithms, thus setting a first starting point for future algorithms.
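The classical baseline being quantized, a random-walk Metropolis-Hastings sampler, can be sketched as follows (a toy 1-D illustration with a Gaussian stand-in for the gravitational-wave posterior; the "chirp-mass" numbers are invented):

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step=1.0, seed=0):
    """Classical random-walk Metropolis-Hastings sampler."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.normal()       # symmetric proposal
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain)

# toy "posterior": a Gaussian centred on a hypothetical chirp mass
log_post = lambda m: -0.5 * ((m - 30.0) / 2.0) ** 2
chain = metropolis_hastings(log_post, x0=25.0, n_steps=20_000)
```

The quantum-walk construction in the paper targets exactly this accept/reject exploration, aiming to reduce the number of posterior evaluations needed to mix.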
46

"Video compression based on Multiwavelet and Multi Stage Vector Quantization using Adaptive Diamond Refinement Search Algorithm." International Journal of Recent Technology and Engineering 8, no. 2S5 (August 29, 2019): 203–7. http://dx.doi.org/10.35940/ijrte.b1041.0782s519.

Full text
Abstract:
Due to advances in digital technology, multimedia processing has become an essential requirement in many applications, which find wide use in mobile devices, personal computers (PCs), TV, surveillance, and satellite broadcast. It is also necessary to update video coding algorithms in order to meet the requirements of the latest hardware devices. Processing speed and bandwidth are essential parameters in these applications, and a good video compression standard can achieve them adequately. In the proposed system, the video coding standard is implemented in three important stages. The first stage uses multiwavelets to achieve a good compression rate while reducing the memory and bandwidth requirements. The second stage is Multi Stage Vector Quantization (MVSQ), which reduces the complexity of the searching process and the size of the codebook. The third stage uses the Adaptive Diamond Refinement Search (ADRS) algorithm for motion estimation, which performs better than the Adaptive Diamond Orthogonal Search (ADOS) and Diamond Refinement Search (DRS) algorithms. The combination of multiwavelets, MVSQ, and ADRS gives high compression ratios. Preliminary results indicate that the proposed method performs well in terms of the average number of search points, PSNR values, and compression rates.
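The multi-stage VQ idea of the second stage can be sketched in numpy (a generic two-stage toy example; the codebooks and vectors are invented and unrelated to the paper's trained codebooks):

```python
import numpy as np

def msvq_encode(vec, codebooks):
    """Multi-stage VQ: each stage quantizes the residual left by the
    previous stage, so the total stored codebook size is the sum, not
    the product, of the stage sizes."""
    indices, residual = [], vec.astype(float)
    for cb in codebooks:
        d = ((cb - residual) ** 2).sum(axis=1)
        j = int(d.argmin())              # nearest codeword this stage
        indices.append(j)
        residual = residual - cb[j]      # pass residual to next stage
    return indices, residual

def msvq_decode(indices, codebooks):
    """Reconstruct by summing the selected codeword from each stage."""
    return sum(cb[j] for cb, j in zip(codebooks, indices))

# toy two-stage codebooks: coarse stage, then fine residual stage
stage1 = np.array([[0.0, 0.0], [4.0, 4.0]])
stage2 = np.array([[0.0, 0.0], [0.25, 0.0], [0.0, 0.25]])
idx, resid = msvq_encode(np.array([4.2, 3.9]), [stage1, stage2])
```

Searching each small stage codebook in turn is what cuts the search complexity relative to a single flat codebook of equivalent precision.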
47

Herrmann, Joachim. "Towards a unified theory of the fundamental physical interactions based on the underlying geometric structure of the tangent bundle." European Physical Journal C 82, no. 10 (October 26, 2022). http://dx.doi.org/10.1140/epjc/s10052-022-10781-4.

Full text
Abstract:
Abstract: This paper pursues the hypothesis that the tangent bundle (TB), with the centrally extended little groups of the SO(3,1) group as gauge group, is the underlying geometric structure for a unified theory of the fundamental physical interactions. Based on this hypothesis, as a first step I recently presented a generalized theory of the electroweak interaction, including hypothetical dark matter particles (Herrmann in Eur Phys J C 79:779, 2019). The vertical Laplacian of the tangent bundle has the same form as the Hamiltonian of a 2D semiconductor quantum Hall system. This explains the fractional charge quantization of quarks and the existence of lepton and quark families. As will be shown, the SU(3) color symmetry of strong interactions arises in the TB as an emergent symmetry, similar to Chern–Simons gauge symmetries in quantum Hall systems. This predicts a signature of quark confinement as a universal large-scale property of the Chern–Simons fields and induces a new understanding of the vacuum as the ground state occupied by a condensate of quark–antiquark pairs. The gap for quark–antiquark pairing is calculated in the mean-field approximation, which allows a numerical estimation of the characteristic parameters of the vacuum, such as its chemical potential, the quark condensation parameter, and the vacuum energy. Note that a gauge-theoretical understanding of gravity was previously achieved by considering the translation group T(3,1) in the TB as gauge group. Therefore, the theory presented here can be considered a new type of unified theory for all known fundamental interactions, linked with the geometrization program of physics.