Journal articles on the topic 'Separable kernels'

Consult the top 50 journal articles for your research on the topic 'Separable kernels.'

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Gutiérrez, José Manuel, Miguel Ángel Hernández-Verón, and Eulalia Martínez. "Improved Iterative Solution of Linear Fredholm Integral Equations of Second Kind via Inverse-Free Iterative Schemes." Mathematics 8, no. 10 (October 11, 2020): 1747. http://dx.doi.org/10.3390/math8101747.

Full text
Abstract:
This work is devoted to Fredholm integral equations of the second kind with non-separable kernels. Our strategy is to approximate the non-separable kernel by an adequate Taylor development. Then, we adapt an already known technique used for separable kernels to our case. First, we study the local convergence of the proposed iterative scheme, obtaining a ball of starting points around the solution. Then, we complete the theoretical study with the semilocal convergence analysis, which allows us to obtain the domain of existence of the solution in terms of the starting point. In this case, the existence of a solution is deduced. Finally, we illustrate this study with some numerical experiments.
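As background on the separable-kernel technique the authors extend: when the kernel is degenerate, K(x,t) = Σᵢ aᵢ(x)bᵢ(t), the Fredholm equation of the second kind collapses to a small linear system for the coefficients cᵢ = ∫ bᵢ(t)u(t)dt. A minimal numerical sketch of that classical reduction (illustrative only, not the authors' code; the function name and rank-1 test kernel are made up for the example):

```python
import numpy as np

def solve_separable_fredholm(f, a_funcs, b_funcs, lam, n=64):
    """Solve u(x) = f(x) + lam * \int_0^1 K(x,t) u(t) dt for a separable
    kernel K(x,t) = sum_i a_i(x) * b_i(t).

    Substituting the kernel gives u(x) = f(x) + lam * sum_i c_i a_i(x),
    where c_i = \int b_i(t) u(t) dt solves a small linear system.
    """
    t, w = np.polynomial.legendre.leggauss(n)   # Gauss nodes/weights on [-1, 1]
    t, w = 0.5 * (t + 1.0), 0.5 * w             # map quadrature to [0, 1]
    m = len(a_funcs)
    # A[i, j] = \int b_i(t) a_j(t) dt,  rhs[i] = \int b_i(t) f(t) dt
    A = np.array([[np.sum(w * b_funcs[i](t) * a_funcs[j](t)) for j in range(m)]
                  for i in range(m)])
    rhs = np.array([np.sum(w * b_funcs[i](t) * f(t)) for i in range(m)])
    c = np.linalg.solve(np.eye(m) - lam * A, rhs)
    return lambda x: f(x) + lam * sum(ci * ai(x) for ci, ai in zip(c, a_funcs))

# Rank-1 check: K(x,t) = x*t, f(x) = x, lam = 1 has exact solution u(x) = 1.5*x
u = solve_separable_fredholm(lambda x: x, [lambda x: x], [lambda t: t], 1.0)
```

The paper's contribution addresses the harder case where K is not of this finite-rank form, by first replacing it with a Taylor-based separable approximation.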
APA, Harvard, Vancouver, ISO, and other styles
2

Cheng, Xianhang, and Zhenzhong Chen. "Video Frame Interpolation via Deformable Separable Convolution." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10607–14. http://dx.doi.org/10.1609/aaai.v34i07.6634.

Abstract:
Learning to synthesize non-existing frames from the original consecutive video frames is a challenging task. Recent kernel-based interpolation methods predict pixels with a single convolution process to replace the dependency of optical flow. However, when scene motion is larger than the pre-defined kernel size, these methods yield poor results even though they take thousands of neighboring pixels into account. To solve this problem in this paper, we propose to use deformable separable convolution (DSepConv) to adaptively estimate kernels, offsets and masks to allow the network to obtain information with much fewer but more relevant pixels. In addition, we show that the kernel-based methods and conventional flow-based methods are specific instances of the proposed DSepConv. Experimental results demonstrate that our method significantly outperforms the other kernel-based interpolation methods and shows strong performance on par or even better than the state-of-the-art algorithms both qualitatively and quantitatively.
3

Bondarenko, Serge, Valery Burov, and Sergey Yurev. "Trinucleon form factors with relativistic multirank separable kernels." Nuclear Physics A 1014 (October 2021): 122251. http://dx.doi.org/10.1016/j.nuclphysa.2021.122251.

4

Jin, Xing, Ping Tang, Thomas Houet, Thomas Corpetti, Emilien Gence Alvarez-Vanhard, and Zheng Zhang. "Sequence Image Interpolation via Separable Convolution Network." Remote Sensing 13, no. 2 (January 15, 2021): 296. http://dx.doi.org/10.3390/rs13020296.

Abstract:
Remote-sensing time-series data are significant for global environmental change research and a better understanding of the Earth. However, remote-sensing acquisitions often provide sparse time series due to sensor resolution limitations and environmental factors, such as cloud noise for optical data. Image interpolation is the method that is often used to deal with this issue. This paper considers the deep learning method to learn the complex mapping of an interpolated intermediate image from predecessor and successor images, called separable convolution network for sequence image interpolation. The separable convolution network uses a separable 1D convolution kernel instead of 2D kernels to capture the spatial characteristics of input sequence images and then is trained end-to-end using sequence images. Our experiments, which were performed with unmanned aerial vehicle (UAV) and Landsat-8 datasets, show that the method is effective to produce high-quality time-series interpolated images, and the data-driven deep model can better simulate complex and diverse nonlinear image data information.
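To make the core idea concrete: a 2D kernel that factors as an outer product of two 1D kernels can be applied as two cheap 1D passes with an identical result, which is what makes separable convolution attractive. A toy numpy sketch (not the paper's network; the Sobel-like kernel values are illustrative):

```python
import numpy as np

def conv2d_direct(img, K):
    """Reference 'valid' 2D correlation with a full kernel K."""
    kh, kw = K.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * K)
    return out

def conv2d_separable(img, v, h):
    """Same result when K = np.outer(v, h): correlate rows with h, then
    columns with v -- O(k) work per pixel instead of O(k^2)."""
    rows = np.array([np.convolve(r, h[::-1], mode="valid") for r in img])
    cols = np.array([np.convolve(c, v[::-1], mode="valid") for c in rows.T]).T
    return cols

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
v, h = np.array([1.0, 2.0, 1.0]), np.array([1.0, 0.0, -1.0])  # Sobel-like pair
same = np.allclose(conv2d_direct(img, np.outer(v, h)), conv2d_separable(img, v, h))
```

Networks such as the one described above learn the 1D factors directly instead of full 2D kernels, trading a rank restriction on the kernel for a large saving in parameters and computation.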
5

Haley, Stephen B., and Herman J. Fink. "Non-separable pairing interaction kernels applied to superconducting cuprates." Physica C: Superconductivity 500 (May 2014): 44–55. http://dx.doi.org/10.1016/j.physc.2014.03.003.

6

Gutiérrez, José M., and Miguel Á. Hernández-Verón. "A Picard-Type Iterative Scheme for Fredholm Integral Equations of the Second Kind." Mathematics 9, no. 1 (January 1, 2021): 83. http://dx.doi.org/10.3390/math9010083.

Abstract:
In this work, we present an application of Newton’s method for solving nonlinear equations in Banach spaces to a particular problem: the approximation of the inverse operators that appear in the solution of Fredholm integral equations. Therefore, we construct an iterative method with quadratic convergence that does not use either derivatives or inverse operators. Consequently, this new procedure is especially useful for solving non-homogeneous Fredholm integral equations of the first kind. We combine this method with a technique to find the solution of Fredholm integral equations with separable kernels to obtain a procedure that allows us to approach the solution when the kernel is non-separable.
7

Peeters, G. "Construction and classification of minimal representations of semi-separable kernels." Journal of Mathematical Analysis and Applications 137, no. 1 (January 1989): 264–87. http://dx.doi.org/10.1016/0022-247x(89)90288-6.

8

Groenewald, G. J., M. A. Petersen, and A. C. M. Ran. "Characterization of integral operators with semi-separable kernels with symmetries." Journal of Functional Analysis 219, no. 2 (February 2005): 255–84. http://dx.doi.org/10.1016/j.jfa.2004.05.008.

9

Odibat, Zaid M. "Differential transform method for solving Volterra integral equation with separable kernels." Mathematical and Computer Modelling 48, no. 7-8 (October 2008): 1144–49. http://dx.doi.org/10.1016/j.mcm.2007.12.022.

10

Becker, Leigh C. "Resolvents and solutions of singular Volterra integral equations with separable kernels." Applied Mathematics and Computation 219, no. 24 (August 2013): 11265–77. http://dx.doi.org/10.1016/j.amc.2013.05.038.

11

Gohberg, I., and M. A. Kaashoek. "Minimal representations of semiseparable kernels and systems with separable boundary conditions." Journal of Mathematical Analysis and Applications 124, no. 2 (June 1987): 436–58. http://dx.doi.org/10.1016/0022-247x(87)90007-2.

12

Leeb, H., and H. Markum. "A Green's function code for Schrödinger equations with nonlocal separable kernels." Computer Physics Communications 34, no. 3 (January 1985): 271–86. http://dx.doi.org/10.1016/0010-4655(85)90004-9.

13

Irene, D. Shiny, and T. Sethukarasi. "Efficient Kernel Extreme Learning Machine and Neutrosophic C-means-based Attribute Weighting Method for Medical Data Classification." Journal of Circuits, Systems and Computers 29, no. 16 (November 9, 2020): 2050260. http://dx.doi.org/10.1142/s0218126620502606.

Abstract:
This paper proposes an integrated system, neutrosophic C-means-based attribute weighting-kernel extreme learning machine (NCMAW-KELM), for medical data classification using NCM clustering and KELM. To do that, NCMAW is developed and then combined with a classification method for medical data. The proposed approach contains two steps. In the first step, input attributes are weighted using the NCMAW method. The purpose of the weighting method is twofold: (i) to improve the classification performance on the medical data, and (ii) to transform a nonlinearly separable dataset into a linearly separable one. Finally, the KELM algorithm is used for medical data classification. In the KELM algorithm, four types of kernels are used: polynomial, sigmoid, radial basis function and linear. The simulation results on our three datasets demonstrate that KELM with the sigmoid kernel outperforms ELM in most cases. From the results, the NCMAW-KELM approach may be a promising method for the medical data classification problem.
14

Yu, Ruixuan, and Jian Sun. "Learning Polynomial-Based Separable Convolution for 3D Point Cloud Analysis." Sensors 21, no. 12 (June 19, 2021): 4211. http://dx.doi.org/10.3390/s21124211.

Abstract:
Shape classification and segmentation of point cloud data are two of the most demanding tasks in photogrammetry and remote sensing applications, which aim to recognize object categories or point labels. Point convolution is an essential operation when designing a network on point clouds for these tasks, which helps to explore 3D local points for feature learning. In this paper, we propose a novel point convolution (PSConv) using separable weights learned with polynomials for 3D point cloud analysis. Specifically, we generalize the traditional convolution defined on the regular data to a 3D point cloud by learning the point convolution kernels based on the polynomials of transformed local point coordinates. We further propose a separable assumption on the convolution kernels to reduce the parameter size and computational cost for our point convolution. Using this novel point convolution, a hierarchical network (PSNet) defined on the point cloud is proposed for 3D shape analysis tasks such as 3D shape classification and segmentation. Experiments are conducted on standard datasets, including synthetic and real scanned ones, and our PSNet achieves state-of-the-art accuracies for shape classification, as well as competitive results for shape segmentation compared with previous methods.
15

Shojae Chaeikar, Saman, Azizah Abdul Manaf, Ala Abdulsalam Alarood, and Mazdak Zamani. "PFW: Polygonal Fuzzy Weighted—An SVM Kernel for the Classification of Overlapping Data Groups." Electronics 9, no. 4 (April 5, 2020): 615. http://dx.doi.org/10.3390/electronics9040615.

Abstract:
Support vector machines are supervised learning models which are capable of classifying data and measuring regression by means of a learning algorithm. If data are linearly separable, a conventional linear kernel is used to classify them. Otherwise, the data are normally first transformed from input space to feature space, and then they are classified. However, carrying out this transformation is not always practical, and the process itself increases the cost of training and prediction. To address these problems, this paper puts forward an SVM kernel, called polygonal fuzzy weighted or PFW, which effectively classifies data without space transformation, even if the groups in question are not linearly separable and have overlapping areas. This kernel is based on Gaussian data distribution, standard deviation, the three-sigma rule and a polygonal fuzzy membership function. A comparison of our PFW, radial basis function (RBF) and conventional linear kernels in identical experimental conditions shows that PFW produces a minimum of 26% higher classification accuracy compared with the linear kernel, and it outperforms the RBF kernel in two-thirds of class labels, by a minimum of 3%. Moreover, since PFW runs within the original feature space, it involves no additional computational cost.
16

Porter, D. "The reduction of a pair of singular integral equations." Mathematical Proceedings of the Cambridge Philosophical Society 100, no. 1 (July 1986): 175–82. http://dx.doi.org/10.1017/s0305004100065981.

Abstract:
A method is derived for converting a pair of coupled singular integral equations of a certain form into a single equation of the same (Cauchy-separable) type. Reduction methods for systems of singular integral equations are generally directed towards the construction of equivalent Fredholm equations. Preservation of the singular nature of the kernel in the reduction process permits the powerful techniques associated with Cauchy kernels to be used in seeking closed solutions of the original pair. The example given, derived previously from a problem in wave diffraction theory, illustrates many aspects of the method.
17

Revin, D. O., and A. V. Zavarnitsine. "The behavior of π-submaximal subgroups under homomorphisms with π-separable kernels." Sibirskie Elektronnye Matematicheskie Izvestiya 17 (August 24, 2020): 1155–64. http://dx.doi.org/10.33048/semi.2020.17.087.

18

Ren, Zhenwen, Quansen Sun, and Dong Wei. "Multiple Kernel Clustering with Kernel k-Means Coupled Graph Tensor Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9411–18. http://dx.doi.org/10.1609/aaai.v35i11.17134.

Abstract:
Kernel k-means (KKM) and spectral clustering (SC) are two basic methods used for multiple kernel clustering (MKC), which have both been widely used to identify clusters that are non-linearly separable. However, both of them have their own shortcomings: 1) the KKM-based methods usually focus on learning a discrete clustering indicator matrix via a combined consensus kernel, but cannot exploit the high-order affinities of all pre-defined base kernels; and 2) the SC-based methods require a robust and meaningful affinity graph in kernel space as input in order to form clusters with desired clustering structure. In this paper, a novel method, kernel k-means coupled graph tensor (KCGT), is proposed to graciously couple KKM and SC for seizing their merits and evading their demerits simultaneously. In specific, we innovatively develop a new graph learning paradigm by leveraging an explicit theoretical connection between clustering indicator matrix and affinity graph, such that the affinity graph propagated from KKM enjoys the valuable block diagonal and sparse property. Then, by using this graph learning paradigm, base kernels can produce multiple candidate affinity graphs, which are stacked into a low-rank graph tensor for capturing the high-order affinity of all these graphs. After that, by averaging all the frontal slices of the tensor, a high-quality affinity graph is obtained. Extensive experiments have shown the superiority of KCGT compared with the state-of-the-art MKC methods.
19

Gesztesy, Fritz, and Konstantin A. Makarov. "(Modified) Fredholm Determinants for Operators with Matrix-Valued Semi-Separable Integral Kernels Revisited." Integral Equations and Operator Theory 47, no. 4 (December 1, 2003): 457–97. http://dx.doi.org/10.1007/s00020-003-1170-y.

20

Gesztesy, Fritz, and Konstantin A. Makarov. "(Modified) Fredholm Determinants for Operators with Matrix-Valued Semi-Separable Integral Kernels Revisited." Integral Equations and Operator Theory 48, no. 4 (April 1, 2004): 561–602. http://dx.doi.org/10.1007/s00020-003-1279-z.

21

Ullah, Aman, Zia Ullah, Thabet Abdeljawad, Zakia Hammouch, and Kamal Shah. "A hybrid method for solving fuzzy Volterra integral equations of separable type kernels." Journal of King Saud University - Science 33, no. 1 (January 2021): 101246. http://dx.doi.org/10.1016/j.jksus.2020.101246.

22

Yao, Qiong, Xiang Xu, and Wensheng Li. "A Sparsified Densely Connected Network with Separable Convolution for Finger-Vein Recognition." Symmetry 14, no. 12 (December 19, 2022): 2686. http://dx.doi.org/10.3390/sym14122686.

Abstract:
At present, ResNet and DenseNet have achieved significant performance gains in the field of finger-vein biometric recognition, which is partially attributed to the dominant design of cross-layer skip connection. In this manner, features from multiple layers can be effectively aggregated to provide sufficient discriminant representation. Nevertheless, an over-dense connection pattern may induce channel expansion of feature maps and excessive memory consumption. To address these issues, we proposed a low memory overhead and fairly lightweight network architecture for finger-vein recognition. The core components of the proposed network are a sequence of sparsified densely connected blocks with symmetric structure. In each block, a novel connection cropping strategy is adopted to balance the channel ratio of input/output feature maps. Beyond this, to facilitate smaller model volume and faster convergence, we substitute the standard convolutional kernels with separable convolutional kernels and introduce a robust loss metric that is defined on the geodesic distance of angular space. Our proposed sparsified densely connected network with separable convolution (hereinafter dubbed 'SC-SDCN') has been tested on two benchmark finger-vein datasets, including the Multimedia Lab of Chonbuk National University (MMCBNU) and Finger Vein of Universiti Sains Malaysia (FV-USM) datasets, and the advantages of our SC-SDCN are evident from the experimental results. Specifically, an equal error rate (EER) of 0.01% and an accuracy of 99.98% are obtained on the MMCBNU dataset, and an EER of 0.45% and an accuracy of 99.74% are obtained on the FV-USM dataset.
23

Oukouomi Noutchie, S. C., and E. F. Doungmo Goufo. "Exact Solutions of Fragmentation Equations with General Fragmentation Rates and Separable Particles Distribution Kernels." Mathematical Problems in Engineering 2014 (2014): 1–5. http://dx.doi.org/10.1155/2014/789769.

Abstract:
We make use of Laplace transform techniques and the method of characteristics to solve fragmentation equations explicitly. Our result is a breakthrough in the analysis of pure fragmentation equations as this is the first instance where an exact solution is provided for the fragmentation evolution equation with general fragmentation rates. This paper is the key for resolving most of the open problems in fragmentation theory including “shattering” and the sudden appearance of infinitely many particles in some systems with initial finite particles number.
24

Gesztesy, Fritz, and Konstantin A. Makarov. "Erratum: (Modified) Fredholm Determinants for Operators with Matrix-Valued Semi-Separable Integral Kernels Revisited." Integral Equations and Operator Theory 48, no. 3 (March 1, 2004): 425–26. http://dx.doi.org/10.1007/s00020-003-1278-0.

25

Jabeen, Tahira, Ravi P. Agarwal, Vasile Lupulescu, and Donal O’Regan. "Impulsive Evolution Equations with Causal Operators." Symmetry 12, no. 1 (December 25, 2019): 48. http://dx.doi.org/10.3390/sym12010048.

Abstract:
In this paper, we establish sufficient conditions for the existence of mild solutions for certain impulsive evolution differential equations with causal operators in separable Banach spaces. We rely on the existence of mild solutions for the strongly continuous semigroups theory, the measure of noncompactness and the Schauder fixed point theorem. We consider the impulsive integro-differential evolutions equation and impulsive reaction diffusion equations (which could include symmetric kernels) as applications to illustrate our main results.
26

SMALE, STEVE, and DING-XUAN ZHOU. "ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY." Analysis and Applications 01, no. 01 (January 2003): 17–41. http://dx.doi.org/10.1142/s0219530503000089.

Abstract:
Let B be a Banach space and (ℋ,‖·‖ℋ) be a dense, imbedded subspace. For a ∈ B, its distance to the ball of ℋ with radius R (denoted as I(a, R)) tends to zero when R tends to infinity. We are interested in the rate of this convergence. This approximation problem arose from the study of learning theory, where B is the L2 space and ℋ is a reproducing kernel Hilbert space. The class of elements having I(a, R) = O(R-r) with r > 0 is an interpolation space of the couple (B, ℋ). The rate of convergence can often be realized by linear operators. In particular, this is the case when ℋ is the range of a compact, symmetric, and strictly positive definite linear operator on a separable Hilbert space B. For the kernel approximation studied in Learning Theory, the rate depends on the regularity of the kernel function. This yields error estimates for the approximation by reproducing kernel Hilbert spaces. When the kernel is smooth, the convergence is slow and a logarithmic convergence rate is presented for analytic kernels in this paper. The purpose of our results is to provide some theoretical estimates, including the constants, for the approximation error required for the learning theory.
27

Bhatia, Navnina, David Tisseur, Solene Valton, and Jean Michel Létang. "Separable scatter model of the detector and object contributions using continuously thickness-adapted kernels in CBCT." Journal of X-Ray Science and Technology 24, no. 5 (October 6, 2016): 723–32. http://dx.doi.org/10.3233/xst-160583.

28

Patel, Chirag, Dulari Bhatt, Urvashi Sharma, Radhika Patel, Sharnil Pandya, Kirit Modi, Nagaraj Cholli, et al. "DBGC: Dimension-Based Generic Convolution Block for Object Recognition." Sensors 22, no. 5 (February 24, 2022): 1780. http://dx.doi.org/10.3390/s22051780.

Abstract:
The object recognition concept is being widely used as a result of increasing CCTV surveillance and the need for automatic object or activity detection from images or video. Increases in the use of various sensor networks have also raised the need for lightweight process frameworks. Much research has been carried out in this area, but the research scope is colossal, as it deals with open-ended problems such as being able to achieve high accuracy in little time using lightweight process frameworks. Convolution neural networks and their variants are widely used in various computer vision activities, but most CNN architectures are application-specific. There is always a need for generic architectures with better performance. This paper introduces the Dimension-Based Generic Convolution Block (DBGC), which can be used with any CNN to make the architecture generic and provide a dimension-wise selection of various height, width, and depth kernels. This single unit, which uses the separable convolution concept, provides multiple combinations using various dimension-based kernels. This single unit can be used for height-based, width-based, or depth-based dimensions; the same unit can even be used for height and width, width and depth, and depth and height dimensions. It can also be used for combinations involving all three dimensions of height, width, and depth. The main novelty of DBGC lies in the dimension selector block included in the proposed architecture. The proposed unoptimized kernel dimensions reduce FLOPs by around one third but also reduce accuracy by around one half; semi-optimized kernel dimensions yield almost the same or higher accuracy with half the FLOPs of the original architecture, while optimized kernel dimensions provide 5 to 6% higher accuracy with around a 10 M reduction in FLOPs.
29

Deng, Feiyue, Yan Bi, Yongqiang Liu, and Shaopu Yang. "Deep-Learning-Based Remaining Useful Life Prediction Based on a Multi-Scale Dilated Convolution Network." Mathematics 9, no. 23 (November 26, 2021): 3035. http://dx.doi.org/10.3390/math9233035.

Abstract:
Remaining useful life (RUL) prediction of key components is an important influencing factor in making accurate maintenance decisions for mechanical systems. With the rapid development of deep learning (DL) techniques, the research on RUL prediction based on the data-driven model is increasingly widespread. Compared with the conventional convolution neural networks (CNNs), the multi-scale CNNs can extract different-scale feature information, which exhibits a better performance in the RUL prediction. However, the existing multi-scale CNNs employ multiple convolution kernels with different sizes to construct the network framework. There are two main shortcomings of this approach: (1) the convolution operation based on multiple size convolution kernels requires enormous computation and has a low operational efficiency, which severely restricts its application in practical engineering. (2) The convolutional layer with a large size convolution kernel needs a mass of weight parameters, leading to a dramatic increase in the network training time and making it prone to overfitting in the case of small datasets. To address the above issues, a multi-scale dilated convolution network (MsDCN) is proposed for RUL prediction in this article. The MsDCN adopts a new multi-scale dilation convolution fusion unit (MsDCFU), in which the multi-scale network framework is composed of convolution operations with different dilated factors. This effectively expands the range of receptive field (RF) for the convolution kernel without an additional computational burden. Moreover, the MsDCFU employs the depthwise separable convolution (DSC) to further improve the operational efficiency of the prognostics model. Finally, the proposed method was validated with the accelerated degradation test data of rolling element bearings (REBs). 
The experimental results demonstrate that the proposed MsDCN has a higher RUL prediction accuracy compared to some typical CNNs and better operational efficiency than the existing multi-scale CNNs based on different convolution kernel sizes.
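As a back-of-the-envelope illustration of why depthwise separable convolution (DSC) improves operational efficiency, one can compare parameter counts of a standard convolution layer with its depthwise-plus-pointwise factorization (generic arithmetic, not the paper's exact layers; the 64-to-128-channel example is hypothetical):

```python
def standard_conv_params(c_in, c_out, k):
    # one k x k filter per (input channel, output channel) pair
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # depthwise: one k x k spatial filter per input channel,
    # pointwise: a 1x1 convolution that mixes channels
    return c_in * k * k + c_in * c_out

std = standard_conv_params(64, 128, 3)        # 64*128*9  = 73728
sep = depthwise_separable_params(64, 128, 3)  # 576+8192  = 8768, ~8.4x fewer
```

The multiply-accumulate count scales the same way, which is why DSC is a common drop-in replacement when model size or inference time is the bottleneck.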
30

Mertens, Jean-François, and Anna Rubinchik. "REGULARITY AND STABILITY OF EQUILIBRIA IN AN OVERLAPPING GENERATIONS GROWTH MODEL." Macroeconomic Dynamics 23, no. 2 (July 17, 2017): 699–729. http://dx.doi.org/10.1017/s1365100516001334.

Abstract:
In an exogenous-growth economy with overlapping generations, the Cobb–Douglas production, any positive life-cycle productivity, and time-separable constant elasticity of substitution (CES) utility, we analyze local stability of a balanced growth equilibrium (BGE) with respect to changes in consumption endowments, which could be interpreted as a transfer policy. We show that generically, in the space of parameters, equilibria around a BGE are locally unique and are locally differentiable functions of endowments, with derivatives given by kernels. Furthermore, those equilibria are stable in the sense that the effects of temporary changes decay exponentially toward ±∞.
31

Weng, Liguo, Yiming Xu, Min Xia, Yonghong Zhang, Jia Liu, and Yiqing Xu. "Water Areas Segmentation from Remote Sensing Images Using a Separable Residual SegNet Network." ISPRS International Journal of Geo-Information 9, no. 4 (April 18, 2020): 256. http://dx.doi.org/10.3390/ijgi9040256.

Abstract:
Changes on lakes and rivers are of great significance for the study of global climate change. Accurate segmentation of lakes and rivers is critical to the study of their changes. However, traditional water area segmentation methods almost all share the following deficiencies: high computational requirements, poor generalization performance, and low extraction accuracy. In recent years, semantic segmentation algorithms based on deep learning have been emerging. Addressing problems associated to a very large number of parameters, low accuracy, and network degradation during training process, this paper proposes a separable residual SegNet (SR-SegNet) to perform the water area segmentation using remote sensing images. On the one hand, without compromising the ability of feature extraction, the problem of network degradation is alleviated by adding modified residual blocks into the encoder, the number of parameters is limited by introducing depthwise separable convolutions, and the ability of feature extraction is improved by using dilated convolutions to expand the receptive field. On the other hand, SR-SegNet removes the convolution layers with relatively more convolution kernels in the encoding stage, and uses the cascading method to fuse the low-level and high-level features of the image. As a result, the whole network can obtain more spatial information. Experimental results show that the proposed method exhibits significant improvements over several traditional methods, including FCN, DeconvNet, and SegNet.
32

Sarada, N., and K. Thirupathi Rao. "A Neural Network Architecture Using Separable Neural Networks for the Identification of “Pneumonia” in Digital Chest Radiographs." International Journal of e-Collaboration 17, no. 1 (January 2021): 89–100. http://dx.doi.org/10.4018/ijec.2021010106.

Abstract:
In recent years, convolutional neural networks had a wide impact in the fields of medical image processing. Image semantic segmentation and image classification have been the main challenges in this field. These two techniques have been seeing a lot of improvement in medical surgeries which are being carried out by robots and autonomous machines. This work will be working on a convolutional model to detect pneumonia in a given chest x-ray scan. In addition to the convolution model, the proposed model consists of deep separable convolution kernels which replace few convolutional layers; one main advantage is these take in a smaller number of parameters and filters. The described model will be more efficient, robust, and fine-tuned than previous models developed using convolutional neural networks. The authors also benchmarked the present model with the CheXnet model, which almost predicts over 16 abnormalities in the given chest-x-rays.
33

Jin, Xing, Ping Tang, and Zheng Zhang. "Sequence Image Datasets Construction via Deep Convolution Networks." Remote Sensing 13, no. 9 (May 10, 2021): 1853. http://dx.doi.org/10.3390/rs13091853.

Abstract:
Remote-sensing time-series datasets are significant for global change research and a better understanding of the Earth. However, remote-sensing acquisitions often provide sparse time series due to sensor resolution limitations and environmental factors such as cloud noise for optical data. Image transformation is the method that is often used to deal with this issue. This paper considers the deep convolution networks to learn the complex mapping between sequence images, called adaptive filter generation network (AdaFG), convolution long short-term memory network (CLSTM), and cycle-consistent generative adversarial network (CyGAN) for construction of sequence image datasets. AdaFG network uses a separable 1D convolution kernel instead of 2D kernels to capture the spatial characteristics of input sequence images and then is trained end-to-end using sequence images. CLSTM network can map between different images using the state information of multiple time-series images. CyGAN network can map an image from a source domain to a target domain without additional information. Our experiments, which were performed with unmanned aerial vehicle (UAV) and Landsat-8 datasets, show that the deep convolution networks are effective to produce high-quality time-series image datasets, and the data-driven deep convolution networks can better simulate complex and diverse nonlinear data information.
APA, Harvard, Vancouver, ISO, and other styles
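A separable 1D kernel, as used by AdaFG in place of 2D kernels, rests on the identity that a rank-one 2D kernel equals the outer product of two 1D kernels, so filtering rows and then columns reproduces the full 2D convolution at lower cost. A small numpy check of that identity (the kernels here are illustrative, not the network's learned filters):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((6, 6))
v = np.array([1.0, 2.0, 1.0])    # vertical 1D kernel
h = np.array([1.0, 0.0, -1.0])   # horizontal 1D kernel
k2d = np.outer(v, h)             # equivalent rank-one 2D kernel

# separable filtering: 1D convolution over rows, then over columns
rows = np.apply_along_axis(lambda r: np.convolve(r, h, mode="valid"), 1, image)
sep = np.apply_along_axis(lambda c: np.convolve(c, v, mode="valid"), 0, rows)

# direct 2D convolution with the outer-product kernel (flipped, by definition)
full = np.array([[np.sum(image[i:i + 3, j:j + 3] * k2d[::-1, ::-1])
                  for j in range(4)] for i in range(4)])

assert np.allclose(sep, full)  # row+column filtering == 2D convolution
```

For a 3×3 kernel this costs 6 multiplies per output sample instead of 9, and the gap widens with kernel size.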
34

Fuentes-Alventosa, Antonio, Juan Gómez-Luna, José Maria González-Linares, Nicolás Guil, and R. Medina-Carnicer. "CAVLCU: an efficient GPU-based implementation of CAVLC." Journal of Supercomputing 78, no. 6 (November 29, 2021): 7556–90. http://dx.doi.org/10.1007/s11227-021-04183-8.

Full text
Abstract:
CAVLC (Context-Adaptive Variable Length Coding) is a high-performance entropy method for video and image compression. It is the most commonly used entropy method in the video standard H.264. In recent years, several hardware accelerators for CAVLC have been designed. In contrast, high-performance software implementations of CAVLC (e.g., GPU-based) are scarce. A high-performance GPU-based implementation of CAVLC is desirable in several scenarios. On the one hand, it can be exploited as the entropy component in GPU-based H.264 encoders, which are a very suitable solution when GPU built-in H.264 hardware encoders lack certain necessary functionality, such as data encryption and information hiding. On the other hand, a GPU-based implementation of CAVLC can be reused in a wide variety of GPU-based compression systems for encoding images and videos in formats other than H.264, such as medical images. This is not possible with hardware implementations of CAVLC, as they are non-separable components of hardware H.264 encoders. In this paper, we present CAVLCU, an efficient implementation of CAVLC on GPU, which is based on four key ideas. First, we use only one kernel to avoid the long-latency global memory accesses required to transmit intermediate results among different kernels, and the costly launches and terminations of additional kernels. Second, we apply an efficient synchronization mechanism for thread-blocks (in this paper, to prevent confusion, a block of pixels of a frame will be referred to simply as a block and a GPU thread block as a thread-block) that process adjacent frame regions (in horizontal and vertical dimensions) to share results in global memory space. Third, we fully exploit the available global memory bandwidth by using vectorized loads to move the quantized transform coefficients directly to registers. Fourth, we use register tiling to implement the zigzag sorting, thus obtaining high instruction-level parallelism.
An exhaustive experimental evaluation showed that our approach is between 2.5× and 5.4× faster than the only state-of-the-art GPU-based implementation of CAVLC.
APA, Harvard, Vancouver, ISO, and other styles
35

Gutiérrez, José M., and Miguel Á. Hernández-Verón. "An Ulm-Type Inverse-Free Iterative Scheme for Fredholm Integral Equations of Second Kind." Symmetry 13, no. 10 (October 17, 2021): 1957. http://dx.doi.org/10.3390/sym13101957.

Full text
Abstract:
In this paper, we present an iterative method based on the well-known Ulm's method to numerically solve Fredholm integral equations of the second kind. Our strategy rests on the symmetry between two well-known problems in numerical analysis: the solution of linear integral equations and the approximation of inverse operators. In this way, we obtain a two-fold algorithm that allows us to approximate, with quadratic order of convergence, both the solution of the integral equation and the inverse, at the solution, of the derivative of the operator related to the problem. We have studied the semilocal convergence of the method and obtained the expression of the method in a particular case, given by some adequate initial choices. The theoretical results are illustrated with two applications to integral equations, given by symmetric non-separable kernels.
APA, Harvard, Vancouver, ISO, and other styles
36

Moore, J. B., R. Horowitz, and W. Messner. "Functional Persistence of Excitation and Observability for Learning Control Systems." Journal of Dynamic Systems, Measurement, and Control 114, no. 3 (September 1, 1992): 500–507. http://dx.doi.org/10.1115/1.2897375.

Full text
Abstract:
Adaptive systems involving function learning can be formulated in terms of integral equations of the first kind, possibly with separable, finite-dimensional kernels. The learning process involves estimating the influence functions (Messner et al., 1989). To achieve convergence of the influence function estimates and exponential stability, it is important to have persistence of excitation in the training tasks. This paper develops the concept of functional persistence of excitation (PE) and the associated concept of functional uniform complete observability (UCO). Relevant PE and UCO properties for linear systems are developed. For example, a key result is that uniform complete observability in this context is maintained under bounded integral operator output injection, a natural generalization of the corresponding finite-dimensional result. This paper also demonstrates the application of the theory to linear error equations associated with a repetitive control algorithm.
APA, Harvard, Vancouver, ISO, and other styles
37

Shang, Shang, Sijie Lin, and Fengyu Cong. "Zebrafish Larvae Phenotype Classification from Bright-field Microscopic Images Using a Two-Tier Deep-Learning Pipeline." Applied Sciences 10, no. 4 (February 13, 2020): 1247. http://dx.doi.org/10.3390/app10041247.

Full text
Abstract:
Classification of different zebrafish larvae phenotypes is useful for studying the environmental influence on embryo development. However, the scarcity of well-annotated training images and fuzzy inter-phenotype differences hamper the application of machine-learning methods to phenotype classification. This study develops a deep-learning approach to address these challenging problems. A convolutional network model with compressed separable convolution kernels is adopted to address the overfitting caused by insufficient training data. A two-tier classification pipeline is designed to improve the classification accuracy on fuzzy phenotype features. Our method achieved an average accuracy of 91% over all phenotypes and a maximum accuracy of 100% for some phenotypes (e.g., dead and chorion). We also compared our method with state-of-the-art methods on the same dataset; it obtained accuracy improvements of up to 22% over the existing method. This study offers an effective deep-learning solution for classifying difficult zebrafish larvae phenotypes from very limited training data.
APA, Harvard, Vancouver, ISO, and other styles
38

Bercovici, H., C. Foias, and C. Pearcy. "On the Hyperinvariant Subspace Problem. IV." Canadian Journal of Mathematics 60, no. 4 (August 1, 2008): 758–89. http://dx.doi.org/10.4153/cjm-2008-034-2.

Full text
Abstract:
This paper is a continuation of three recent articles concerning the structure of hyperinvariant subspace lattices of operators on a (separable, infinite dimensional) Hilbert space ℋ. We show herein, in particular, that there exists a “universal” fixed block-diagonal operator B on ℋ such that if ε > 0 is given and T is an arbitrary nonalgebraic operator on ℋ, then there exists a compact operator K of norm less than ε such that (i) Hlat(T) is isomorphic as a complete lattice to Hlat(B + K) and (ii) B + K is a quasidiagonal, C00, (BCP)-operator with spectrum and left essential spectrum the unit disc. In the last four sections of the paper, we investigate the possible structures of the hyperlattice of an arbitrary algebraic operator. Contrary to existing conjectures, Hlat(T) need not be generated by the ranges and kernels of the powers of T in the nilpotent case. In fact, this lattice can be infinite.
APA, Harvard, Vancouver, ISO, and other styles
39

Hansen, Leif Ove, and Jan Myrheim. "Nongeneric positive partial transpose states of rank five in 3×3 dimensions." International Journal of Quantum Information 15, no. 07 (October 2017): 1750054. http://dx.doi.org/10.1142/s021974991750054x.

Full text
Abstract:
In [Formula: see text] dimensions, entangled mixed states that are positive under partial transposition (PPT states) must have rank at least four. These rank four states are completely understood. We say that they have rank [Formula: see text] since both a state [Formula: see text] and its partial transpose [Formula: see text] have rank four. The next problem is to understand the extremal PPT states of rank [Formula: see text]. We call two states [Formula: see text]-equivalent if they are related by a product transformation. A generic rank [Formula: see text] PPT state [Formula: see text] is extremal, and both [Formula: see text] and [Formula: see text] have six product vectors in their ranges, and no product vectors in their kernels. The three numbers [Formula: see text] are [Formula: see text]-invariants that help us classify the state. There is no analytical understanding of such states. We have studied numerically a few types of nongeneric rank five PPT states, in particular, states with one or more product vectors in their kernels. We find an interesting new analytical construction of all rank four extremal PPT states, up to [Formula: see text]-equivalence, where they appear as boundary states on one single five-dimensional face on the set of normalized PPT states. The interior of the face consists of rank [Formula: see text] states with four common product vectors in their kernels, it is a simplex of separable states surrounded by entangled PPT states. We say that a state [Formula: see text] is [Formula: see text]-symmetric if [Formula: see text] and [Formula: see text] are [Formula: see text]-equivalent, and is genuinely [Formula: see text]-symmetric if it is [Formula: see text]-equivalent to a state [Formula: see text] with [Formula: see text]. Genuine [Formula: see text]-symmetry implies a special form of [Formula: see text]-symmetry. 
We have produced numerically, by a special method, a random sample of rank [Formula: see text] [Formula: see text]-symmetric states. About 50 of these are of type [Formula: see text], among those all are extremal and about half are genuinely [Formula: see text]-symmetric. All these genuinely [Formula: see text]-symmetric states can be transformed to have a circulant form. We find however that this is not a generic property of genuinely [Formula: see text]-symmetric states. The remaining [Formula: see text]-symmetric states found in the search have product vectors in their kernels, and they inspired us to study such states without regard to [Formula: see text]-symmetry.
APA, Harvard, Vancouver, ISO, and other styles
40

Miao, Yuqi, Shanshan Jiang, Yiming Xu, and Dongjie Wang. "Feature Residual Analysis Network for Building Extraction from Remote Sensing Images." Applied Sciences 12, no. 10 (May 18, 2022): 5095. http://dx.doi.org/10.3390/app12105095.

Full text
Abstract:
Building extraction from remote sensing images is very important for urban planning. In the field of deep learning, more complex convolution operations and larger network models are usually used to extract detailed building features and segment buildings, resulting in low efficiency of automatic extraction. Existing networks find it difficult to balance extraction accuracy against extraction speed. Considering both segmentation accuracy and speed, a Feature Residual Analysis Network (FRA-Net) is proposed to realize fast and accurate building extraction. The whole network includes two stages, encoding and decoding. In the encoding stage, a Separable Residual Module (SRM) is designed to extract building features from remote sensing images, avoiding large convolution kernels to reduce the complexity of the model. In the decoding stage, the SRM is used for information decoding, and a multi-feature attention module is constructed to enhance the effective information. Experimental results on the LandCover and Massachusetts Buildings datasets show that inference speed is greatly improved without reducing segmentation accuracy.
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Jeng-Tzong, and An-Chien Wu. "Null-Field Approach for the Multi-inclusion Problem Under Antiplane Shears." Journal of Applied Mechanics 74, no. 3 (May 22, 2006): 469–87. http://dx.doi.org/10.1115/1.2338056.

Full text
Abstract:
In this paper, we derive the null-field integral equation for an infinite medium containing circular holes and/or inclusions with arbitrary radii and positions under remote antiplane shear. To fully capture the circular geometries, separable expressions of the fundamental solutions in polar coordinates for field and source points, together with Fourier series for the boundary densities, are adopted to ensure exponential convergence. By moving the null-field point to the boundary, singular and hypersingular integrals are transformed into series sums after introducing the concept of degenerate kernels. Not only the singularity but also the need for principal values is thereby avoided. For the calculation of boundary stress, the Hadamard principal value for hypersingularity is not required and can be easily evaluated using series sums. Besides, the boundary-layer effect is eliminated owing to the introduction of degenerate kernels. The solution is formulated in a semi-analytical form, since the error is due purely to the truncation of the Fourier series. The method is basically a numerical method, and because of its semi-analytical nature, it possesses certain advantages over the conventional boundary element method. The exact solution for a single inclusion is derived using the present formulation and matches well with the solution of Honein et al. obtained by the complex-variable formulation (Honein, E., Honein, T., and Hermann, G., 1992, Appl. Math., 50, pp. 479–499). Several problems of two holes, two inclusions, one cavity surrounded by two inclusions, and three inclusions are revisited to demonstrate the validity of our method. The convergence test and boundary-layer effect are also addressed. The proposed formulation can be generalized to multiple circular inclusions and cavities in a straightforward way without any difficulty.
APA, Harvard, Vancouver, ISO, and other styles
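The degenerate kernels used in this approach expand the fundamental solution into a separable series in the polar coordinates of the field and source points. As a sketch, the classical expansion of the 2D Laplace kernel ln|x - s| for a field point inside the source radius (a textbook identity, not code from the paper):

```python
import numpy as np

def log_distance_series(r, theta, rho, phi, terms=60):
    # degenerate (separable) kernel: for r < rho,
    # ln|x - s| = ln(rho) - sum_{m>=1} (1/m) (r/rho)^m cos(m (theta - phi))
    m = np.arange(1, terms + 1)
    return np.log(rho) - np.sum((r / rho) ** m * np.cos(m * (theta - phi)) / m)

# compare the truncated series with the exact kernel
r, theta, rho, phi = 0.5, 0.7, 2.0, -1.2
x = np.array([r * np.cos(theta), r * np.sin(theta)])
s = np.array([rho * np.cos(phi), rho * np.sin(phi)])
exact = np.log(np.linalg.norm(x - s))
assert abs(log_distance_series(r, theta, rho, phi) - exact) < 1e-10
```

Because each term factors into a function of (r, θ) times a function of (ρ, φ), boundary integrals against Fourier modes reduce to series sums, which is what removes the principal-value machinery.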
42

Amosov, Grigori G., and Egor L. Baitenov. "On the space of Schwartz operators in the symmetric Fock space and its dual." Vestnik of Saint Petersburg University. Mathematics. Mechanics. Astronomy 9, no. 2 (2022): 193–200. http://dx.doi.org/10.21638/spbu01.2022.201.

Full text
Abstract:
A long-standing problem in constructing the mathematical apparatus of quantum mechanics is the need to work with unbounded operators. Since the space of nuclear operators is preconjugate to the algebra of all bounded operators, we can consider the states of a quantum system as nuclear operators and bounded operators as observables. In this case, taking the trace of the product of a nuclear operator (a quantum state) and a bounded operator (a quantum observable) gives the average value of the quantum observable in the fixed state. The existence of such an average for unbounded operators is not guaranteed. If we want to define a space of observables that includes such naturally occurring unbounded operators as the position and momentum operators, for which average values are always defined, we should consider a space of states smaller than that of all nuclear operators. Recently, this approach has been implemented in a mathematically rigorous way in the Hilbert space H = L^2(R^N). The so-called space of Schwartz operators, equipped with a system of semi-norms and forming a Frechet space, was chosen as the space of states. Schwartz operators turn out to be integral operators whose kernels are functions belonging to the usual Schwartz space. The space dual to the space of Schwartz operators should be considered as the space of quantum observables, and it indeed includes such standard observables as polynomials in products of the position and momentum operators. In the present work we transfer this approach to the symmetric Fock space H = F(H) over an infinite dimensional separable Hilbert space H. We introduce the space of Schwartz operators in F(H) and investigate which of the standard operators of quantum white noise belong to the space dual to the space of Schwartz operators.
APA, Harvard, Vancouver, ISO, and other styles
43

Raja Sekaran, Sarmela, Ying Han Pang, Goh Fan Ling, and Ooi Shih Yin. "MSTCN: A multiscale temporal convolutional network for user independent human activity recognition." F1000Research 10 (May 18, 2022): 1261. http://dx.doi.org/10.12688/f1000research.73175.2.

Full text
Abstract:
Background: In recent years, human activity recognition (HAR) has been an active research topic due to its widespread application in various fields such as healthcare, sports, patient monitoring, etc. HAR approaches can be categorised as handcrafted feature methods (HCF) and deep learning methods (DL). HCF involves complex data pre-processing and manual feature extraction in which the models may be exposed to high bias and crucial implicit pattern loss. Hence, DL approaches are introduced due to their exceptional recognition performance. Convolutional Neural Network (CNN) extracts spatial features while preserving localisation. However, it hardly captures temporal features. Recurrent Neural Network (RNN) learns temporal features, but it is susceptible to gradient vanishing and suffers from short-term memory problems. Unlike RNN, Long-Short Term Memory network has a relatively longer-term dependency. However, it consumes higher computation and memory because it computes and stores partial results at each level. Methods: This work proposes a novel multiscale temporal convolutional network (MSTCN) based on the Inception model with a temporal convolutional architecture. Unlike HCF methods, MSTCN requires minimal pre-processing and no manual feature engineering. Further, multiple separable convolutions with different-sized kernels are used in MSTCN for multiscale feature extraction. Dilations are applied to each separable convolution to enlarge the receptive fields without increasing the model parameters. Moreover, residual connections are utilised to prevent information loss and gradient vanishing. These features enable MSTCN to possess a longer effective history while maintaining a relatively low in-network computation. Results: The performance of MSTCN is evaluated on UCI and WISDM datasets using a subject independent protocol with no overlapping subjects between the training and testing sets. MSTCN achieves accuracies of 97.42 on UCI and 96.09 on WISDM. 
Conclusion: The proposed MSTCN dominates the other state-of-the-art methods by acquiring high recognition accuracies without requiring any manual feature engineering.
APA, Harvard, Vancouver, ISO, and other styles
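The receptive-field effect of dilation described in the MSTCN abstract is simple arithmetic: a kernel of size k with dilation d spans (k - 1)·d + 1 time steps, and stacked layers add their spans. A minimal sketch (the layer configurations are illustrative, not MSTCN's exact architecture):

```python
def receptive_field(layers):
    """Effective receptive field of stacked 1D convolutions,
    each layer given as (kernel_size, dilation)."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d  # a dilated kernel spans (k - 1) * d + 1 steps
    return rf

# same parameter count, but growing dilations give a much longer history
print(receptive_field([(3, 1), (3, 1), (3, 1), (3, 1)]))  # 9
print(receptive_field([(3, 1), (3, 2), (3, 4), (3, 8)]))  # 31
```

This is why dilations enlarge the effective history without increasing the model parameters.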
44

Raja Sekaran, Sarmela, Pang Ying Han, Goh Fan Ling, and Ooi Shih Yin. "MSTCN: A multiscale temporal convolutional network for user independent human activity recognition." F1000Research 10 (December 8, 2021): 1261. http://dx.doi.org/10.12688/f1000research.73175.1.

Full text
Abstract:
Background: In recent years, human activity recognition (HAR) has been an active research topic due to its widespread application in various fields such as healthcare, sports, patient monitoring, etc. HAR approaches can be categorised as handcrafted feature methods (HCF) and deep learning methods (DL). HCF involves complex data pre-processing and manual feature extraction in which the models may be exposed to high bias and crucial implicit pattern loss. Hence, DL approaches are introduced due to their exceptional recognition performance. Convolutional Neural Network (CNN) extracts spatial features while preserving localisation. However, it hardly captures temporal features. Recurrent Neural Network (RNN) learns temporal features, but it is susceptible to gradient vanishing and suffers from short-term memory problems. Unlike RNN, Long-Short Term Memory network has a relatively longer-term dependency. However, it consumes higher computation and memory because it computes and stores partial results at each level. Methods: This work proposes a novel multiscale temporal convolutional network (MSTCN) based on the Inception model with a temporal convolutional architecture. Unlike HCF methods, MSTCN requires minimal pre-processing and no manual feature engineering. Further, multiple separable convolutions with different-sized kernels are used in MSTCN for multiscale feature extraction. Dilations are applied to each separable convolution to enlarge the receptive fields without increasing the model parameters. Moreover, residual connections are utilised to prevent information loss and gradient vanishing. These features enable MSTCN to possess a longer effective history while maintaining a relatively low in-network computation. Results: The performance of MSTCN is evaluated on UCI and WISDM datasets using subject independent protocol with no overlapping subjects between the training and testing sets. MSTCN achieves F1 scores of 0.9752 on UCI and 0.9470 on WISDM. 
Conclusion: The proposed MSTCN dominates the other state-of-the-art methods by acquiring high recognition accuracies without requiring any manual feature engineering.
APA, Harvard, Vancouver, ISO, and other styles
45

Vassiliev, N. N., I. N. Parasidis, and E. Providas. "Exact solution method for Fredholm integro-differential equations with multipoint and integral boundary conditions. Part 1. Extention method." Information and Control Systems, no. 6 (December 18, 2018): 14–23. http://dx.doi.org/10.31799/1684-8853-2018-6-14-23.

Full text
Abstract:
Introduction: Boundary value problems for differential and integro-differential equations with multipoint and non-local boundary conditions often arise in mechanics, physics, biology, biotechnology, chemical engineering, medical science, finance and other fields. Finding an exact solution of a boundary value problem with Fredholm integro-differential equations is a challenging problem. In most cases, solutions are obtained by numerical methods. Purpose: Search for necessary and sufficient solvability conditions for abstract operator equations and their exact solutions. Results: A direct method is proposed for the exact solution of a certain class of ordinary differential or Fredholm integro-differential equations with separable kernels and multipoint/integral boundary conditions. We study abstract equations of the form Bu = Au - gF(Au) = f and B1u = A2u - qF(Au) - gF(A2u) = f with non-local boundary conditions Φ(u) = NΨ(u) and Φ(u) = NΨ(u), Φ(Au) = DF(Au) + NΨ(Au), respectively, where A is a differential operator, q and g are vectors, D and N are matrices, and F, Φ and Ψ are functional vectors. This method is simple to use and can be easily incorporated into any computer algebra system (CAS). The upcoming Part 2 of this paper will be devoted to a decomposition method for this problem, where the operator B1 is quadratic factorable.
APA, Harvard, Vancouver, ISO, and other styles
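For a separable kernel K(x,t) = sum_i a_i(x) b_i(t), a Fredholm equation of the second kind collapses to a finite linear system, which is the mechanism underlying exact-solution methods of the kind described in this abstract. A minimal numerical sketch of that reduction (the kernel, right-hand side, and quadrature are illustrative, not taken from the paper):

```python
import numpy as np

def solve_separable_fredholm(a_funcs, b_funcs, f, lam, x, n=20000):
    """Solve u(x) = f(x) + lam * int_0^1 K(x,t) u(t) dt
    for a separable kernel K(x,t) = sum_i a_i(x) b_i(t)."""
    t = (np.arange(n) + 0.5) / n          # midpoint quadrature nodes on [0, 1]
    quad = lambda vals: vals.mean()       # midpoint rule, interval length 1
    # reduce to the linear system (I - lam*A) c = F, with
    # A_ij = int b_i(t) a_j(t) dt and F_i = int b_i(t) f(t) dt
    A = np.array([[quad(bi(t) * aj(t)) for aj in a_funcs] for bi in b_funcs])
    F = np.array([quad(bi(t) * f(t)) for bi in b_funcs])
    c = np.linalg.solve(np.eye(len(a_funcs)) - lam * A, F)
    # then u(x) = f(x) + lam * sum_i c_i a_i(x)
    return f(x) + lam * sum(ci * ai(x) for ci, ai in zip(c, a_funcs))

# K(x,t) = x*t (a(x) = x, b(t) = t), f(x) = x, lam = 1 => exact u(x) = 3x/2
x = np.linspace(0.0, 1.0, 5)
u = solve_separable_fredholm([lambda s: s], [lambda s: s], lambda s: s, 1.0, x)
assert np.allclose(u, 1.5 * x, atol=1e-6)
```

Here the system is 1×1 and the exact solution u(x) = 3x/2 is recovered; with n basis functions the same code solves an n×n system.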
46

Yang, Tianbao, Mehrdad Mahdavi, Rong Jin, Jinfeng Yi, and Steven Hoi. "Online Kernel Selection: Algorithms and Evaluations." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1197–203. http://dx.doi.org/10.1609/aaai.v26i1.8298.

Full text
Abstract:
Kernel methods have been successfully applied to many machine learning problems. Nevertheless, since the performance of kernel methods depends heavily on the type of kernels being used, identifying good kernels among a set of given kernels is important to the success of kernel methods. A straightforward approach to address this problem is cross-validation by training a separate classifier for each kernel and choosing the best kernel classifier out of them. Another approach is Multiple Kernel Learning (MKL), which aims to learn a single kernel classifier from an optimal combination of multiple kernels. However, both approaches suffer from a high computational cost in computing the full kernel matrices and in training, especially when the number of kernels or the number of training examples is very large. In this paper, we tackle this problem by proposing an efficient online kernel selection algorithm. It incrementally learns a weight for each kernel classifier. The weight for each kernel classifier can help us to select a good kernel among a set of given kernels. The proposed approach is efficient in that (i) it is an online approach and therefore avoids computing all the full kernel matrices before training; (ii) it only updates a single kernel classifier each time by a sampling technique and therefore saves time on updating kernel classifiers with poor performance; (iii) it has a theoretically guaranteed performance compared to the best kernel predictor. Empirical studies on image classification tasks demonstrate the effectiveness of the proposed approach for selecting a good kernel among a set of kernels.
APA, Harvard, Vancouver, ISO, and other styles
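A heavily simplified sketch in the spirit of this approach: run one mistake-driven kernel perceptron per candidate kernel and shrink the weight of any kernel whose classifier errs, so the weights come to indicate a good kernel. This is an illustration, not the authors' algorithm (it omits the sampling technique and the regret analysis), and the kernels and data are toy choices:

```python
import numpy as np

def rbf(gamma):
    # Gaussian (RBF) kernel between each stored point in X and one query x
    return lambda X, x: np.exp(-gamma * ((X - x) ** 2).sum(axis=1))

class KernelPerceptron:
    def __init__(self, kernel):
        self.kernel, self.sv, self.alpha = kernel, [], []
    def predict(self, x):
        if not self.sv:
            return 1.0
        return float(np.dot(self.alpha, self.kernel(np.array(self.sv), x)))
    def update(self, x, y):
        if y * self.predict(x) <= 0:      # mistake-driven update
            self.sv.append(x); self.alpha.append(y)
            return True
        return False

# XOR stream: linearly inseparable, needs a well-scaled RBF kernel
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([1, -1, -1, 1])
learners = [KernelPerceptron(rbf(1.0)), KernelPerceptron(rbf(1e-4))]
w = np.ones(2)
for _ in range(50):
    for xi, yi in zip(X, y):
        for j, learner in enumerate(learners):
            if learner.update(xi, yi):
                w[j] *= 0.8               # penalise the kernel that erred
w /= w.sum()
print(w)  # the weight of the well-scaled kernel dominates
```

The well-scaled kernel (gamma = 1.0) fits XOR after a few mistakes, while the nearly flat kernel (gamma = 1e-4) keeps erring, so its weight decays.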
47

Hasan, H. M., D. F. Ahmed, M. F. Hama, and K. H. F. Jwamer. "Central Limit Theorem in View of Subspace Convex-Cyclic Operators." BULLETIN OF THE KARAGANDA UNIVERSITY-MATHEMATICS 103, no. 3 (September 30, 2021): 25–35. http://dx.doi.org/10.31489/2021m3/25-35.

Full text
Abstract:
In our work we have defined an operator called the subspace convex-cyclic operator. A property of this newly defined operator relates eigenvalues having eigenvectors of modulus one to kernels of the operator. We have also illustrated the effect of the subspace convex-cyclic operator when it acts in linear dynamics and when it is joined with functional analysis. The work is done on infinite-dimensional spaces, on which linear operators may have dense orbits. Its measure-preserving property brings probability spaces together with measurable dynamics and widens the subject to ergodic theory. We have also applied Birkhoff's Ergodic Theorem to give a modified version of the subspace convex-cyclic operator. To work on a separable infinite-dimensional Hilbert space, it is important to have a Gaussian invariant measure, for which we use the eigenvectors of modulus one. One of the important results of this paper is the study of the Central Limit Theorem. We have shown that, given a Gaussian measure, the Central Limit Theorem holds under certain conditions imposed on the defined operator. In general our work is theoretically new, combining three basic concepts, dynamical systems, operator theory and ergodic theory, under measure theory and statistics.
APA, Harvard, Vancouver, ISO, and other styles
48

Huang, Yibin, Congying Qiu, Xiaonan Wang, Shijun Wang, and Kui Yuan. "A Compact Convolutional Neural Network for Surface Defect Inspection." Sensors 20, no. 7 (April 1, 2020): 1974. http://dx.doi.org/10.3390/s20071974.

Full text
Abstract:
The advent of convolutional neural networks (CNNs) has accelerated the progress of computer vision in many respects. However, the majority of existing CNNs rely heavily on expensive GPUs (graphics processing units) to support large computations. Therefore, CNNs have not yet been widely used to inspect surface defects in the manufacturing field. In this paper, we develop a compact CNN-based model that not only achieves high performance on tiny defect inspection but can also run on low-frequency CPUs (central processing units). Our model consists of a light-weight (LW) bottleneck and a decoder. Through a pyramid of lightweight kernels, the LW bottleneck provides rich features at less computational cost. The decoder is also built in a lightweight way, consisting of an atrous spatial pyramid pooling (ASPP) module and depthwise separable convolution layers. These lightweight designs greatly reduce redundant weights and computation. We train our models on groups of surface datasets. The model can successfully classify/segment surface defects on an Intel i3-4010U CPU within 30 ms. Our model obtains accuracy similar to MobileNetV2 while having less than 1/3 of its FLOPs (floating-point operations) and 1/8 of its weights. Our experiments indicate that CNNs can be compact and hardware-friendly for future applications in automated surface inspection (ASI).
APA, Harvard, Vancouver, ISO, and other styles
49

CLARKE, JOHN M. "EFFECT OF KERNEL WATER CONCENTRATION AT HARVEST AND DRYING METHOD ON GRADES OF RED SPRING AND DURUM WHEATS." Canadian Journal of Plant Science 66, no. 1 (January 1, 1986): 79–86. http://dx.doi.org/10.4141/cjps86-010.

Full text
Abstract:
Effects of kernel water concentration at harvest, and windrow compared to artificial drying, were determined in two red spring (Triticum aestivum L.) and three durum (T. turgidum L. var. durum) cultivars. Grain harvested at kernel water concentrations of 1000 to < 170 g water per kilogram kernel dry weight was dried in the field in simulated windrows or artificially in a forced-air oven (40–45 °C). Test weight and commercial grades were determined. Artificial drying of immature wheat reduced grades, primarily due to numbers of green kernels. Green kernel levels reduced grades of windrowed durum wheat in 1 of 3 years. In the absence of grade-limiting levels of green kernels, test weight limited grades of durum in 1 year, particularly in the windrowed treatment. In a separate experiment, percentages of green kernels were determined in field-scale windrowed and standing hard red spring and durum wheat crops. Levels of green kernels declined at similar rates in standing and windrowed crops. The kernel water concentration at which the level of green kernels dropped to 0.75%, the maximum level tolerated in the top grades of hard red spring and durum wheat, was lower in dry years when maturity was forced than in moist years. Key words: Wheat (red spring), wheat (durum), windrowing, artificial drying, test weight
APA, Harvard, Vancouver, ISO, and other styles
50

Pessoa, José Dalton Cruz, and Johannes Van Leeuwen. "Development of a shelling method to recover whole kernels of the Cutia nut (Couepia edulis)." Revista Brasileira de Fruticultura 28, no. 2 (August 2006): 236–39. http://dx.doi.org/10.1590/s0100-29452006000200018.

Full text
Abstract:
The kernel of the cutia nut (castanha-de-cutia, Couepia edulis (Prance) Prance) of the western Amazon, which is consumed by the local population, has traditionally been extracted from the nut with a machete, a dangerous procedure that only produces kernels cut in half. A shelling off machine prototype, which produces whole kernels without serious risks to its operator, is described and tested. The machine makes a circular cut in the central part of the fruit shell, perpendicular to its main axis. Three ways of conditioning the fruits before cutting were compared: (1) control; (2) oven drying immediately prior to cutting; (3) oven drying, followed by a 24-hour interval before cutting. The time needed to extract and separate the kernel from the endocarp and testa was measured. Treatment 3 produced the highest output: 63 kernels per hour, the highest percentage of whole kernels (90%), and the best kernel taste. Kernel extraction with treatment 3 required 50% less time than treatment 1, while treatment 2 needed 38% less time than treatment 1. The proportion of kernels attached to the testa was 93%, 47%, and 8% for treatments 1, 2, and 3, respectively, and was the main reason for extraction time differences.
APA, Harvard, Vancouver, ISO, and other styles