Journal articles on the topic 'Three-dimensional convolution'

Consult the top 50 journal articles for your research on the topic 'Three-dimensional convolution.'

1

McCutchen, C. W. "Convolution relation within the three-dimensional diffraction image." Journal of the Optical Society of America A 8, no. 6 (June 1, 1991): 868. http://dx.doi.org/10.1364/josaa.8.000868.

2

Wells, N. H., C. S. Burrus, G. E. Desobry, and A. L. Boyer. "Three-dimensional Fourier convolution with an array processor." Computers in Physics 4, no. 5 (1990): 507. http://dx.doi.org/10.1063/1.168385.

3

Feng Bowen, 冯博文, 吕晓琪 Lü Xiaoqi, 谷宇 Gu Yu, 李菁 Li Qing, and 刘阳 Liu Yang. "Three-Dimensional Parallel Convolution Neural Network Brain Tumor Segmentation Based on Dilated Convolution." Laser & Optoelectronics Progress 57, no. 14 (2020): 141009. http://dx.doi.org/10.3788/lop57.141009.

4

Fan, Wenxian, and Yebing Zou. "Three-dimensional Motion Skeleton Reconstruction Algorithm for Gymnastic Dancing Movements." International Journal of Circuits, Systems and Signal Processing 16 (January 7, 2022): 1–5. http://dx.doi.org/10.46300/9106.2022.16.1.

Abstract:
Aiming at the problem of inaccurate matching results in the traditional three-dimensional reconstruction algorithm of gymnastic skeleton, a three-dimensional motion skeleton reconstruction algorithm of gymnastic dance action is proposed. Taking the center of gravity of the human body as the origin, the position of other nodes in the camera coordinate system relative to the center point of the human skeleton model is calculated, and the human skeleton data collection is completed through action division and posture feature calculation. Polynomial density is introduced into the integration of convolution surface, and the human body model of convolution surface is established according to convolution surface. By using the method of binary parameter matching, the accuracy of the matching results is improved, and the three-dimensional skeleton of gymnastic dance movement is reconstructed. The experimental results show that the fitting degree between the proposed method and the actual reconstruction result is 99.8%, and the reconstruction result of this algorithm has high accuracy.
5

Hyeon, Janghun, Weonsuk Lee, Joo Hyung Kim, and Nakju Doh. "NormNet: Point-wise normal estimation network for three-dimensional point cloud data." International Journal of Advanced Robotic Systems 16, no. 4 (July 2019): 172988141985753. http://dx.doi.org/10.1177/1729881419857532.

Abstract:
In this article, a point-wise normal estimation network for three-dimensional point cloud data called NormNet is proposed. We propose the multiscale K-nearest neighbor convolution module for strengthened local feature extraction. With the multiscale K-nearest neighbor convolution module and PointNet-like architecture, we achieved a hybrid of three features: a global feature, a semantic feature from the segmentation network, and a local feature from the multiscale K-nearest neighbor convolution module. Those features, by mutually supporting each other, not only increase the normal estimation performance but also enable the estimation to be robust under severe noise perturbations or point deficiencies. The performance was validated in three different data sets: Synthetic CAD data (ModelNet), RGB-D sensor-based real 3D PCD (S3DIS), and LiDAR sensor-based real 3D PCD that we built and shared.
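
The multiscale K-nearest-neighbour grouping that this abstract relies on can be illustrated with a short sketch. This is not the authors' NormNet code; the K values, the use of torch.cdist, and the function name are assumptions made purely for illustration, and the feature aggregation that follows in NormNet is not reproduced.

```python
import torch

def multiscale_knn_groups(points, ks=(8, 16, 32)):
    """Gather neighbour groups of a point cloud at several K values.

    points: (N, 3) tensor of xyz coordinates.
    Returns one (N, K, 3) tensor of neighbour coordinates per scale; each
    group includes the query point itself, since its distance to itself is 0.
    """
    d = torch.cdist(points, points)             # pairwise distances, shape (N, N)
    groups = []
    for k in ks:
        idx = d.topk(k, largest=False).indices  # indices of the k nearest points
        groups.append(points[idx])              # (N, K, 3)
    return groups

cloud = torch.randn(1024, 3)
for g in multiscale_knn_groups(cloud):
    print(g.shape)   # (1024, 8, 3), (1024, 16, 3), (1024, 32, 3)
```
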
6

Yu Feng, 冯雨, 易本顺 Benshun Yi, 吴晨玥 Chenyue Wu, and 章云港 Yungang Zhang. "Pulmonary Nodule Recognition Based on Three-Dimensional Convolution Neural Network." Acta Optica Sinica 39, no. 6 (2019): 0615006. http://dx.doi.org/10.3788/aos201939.0615006.

7

Kim, Dongyi, Hyeon Cho, Hochul Shin, Soo-Chul Lim, and Wonjun Hwang. "An Efficient Three-Dimensional Convolutional Neural Network for Inferring Physical Interaction Force from Video." Sensors 19, no. 16 (August 17, 2019): 3579. http://dx.doi.org/10.3390/s19163579.

Abstract:
Interaction forces are traditionally predicted by a contact type haptic sensor. In this paper, we propose a novel and practical method for inferring the interaction forces between two objects based only on video data—one of the non-contact type camera sensors—without the use of common haptic sensors. In detail, we could predict the interaction force by observing the texture changes of the target object by an external force. For this purpose, our hypothesis is that a three-dimensional (3D) convolutional neural network (CNN) can be made to predict the physical interaction forces from video images. In this paper, we proposed a bottleneck-based 3D depthwise separable CNN architecture where the video is disentangled into spatial and temporal information. By applying the basic depthwise convolution concept to each video frame, spatial information can be efficiently learned; for temporal information, the 3D pointwise convolution can be used to learn the linear combination among sequential frames. To validate and train the proposed model, we collected large quantities of datasets, which are video clips of the physical interactions between two objects under different conditions (illumination and angle variations) and the corresponding interaction forces measured by the haptic sensor (as the ground truth). Our experimental results confirmed our hypothesis; when compared with previous models, the proposed model was more accurate and efficient, and although its model size was 10 times smaller, the 3D convolutional neural network architecture exhibited better accuracy. The experiments demonstrate that the proposed model remains robust under different conditions and can successfully estimate the interaction force between objects.
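
The bottleneck-based 3D depthwise separable design described above can be sketched as a single block. This is an illustrative sketch, not the authors' architecture: the kernel sizes, channel widths, and the placement of the activation are assumptions. With temporal_kernel=1 the second stage reduces to a plain 1 × 1 × 1 pointwise convolution.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    """Generic 3D depthwise separable convolution block (illustrative only).

    Depthwise stage: one (1, 3, 3) spatial kernel per channel, applied frame by frame.
    Pointwise stage: a (t, 1, 1) convolution that mixes channels and, when t > 1,
    linearly combines neighbouring frames.
    """
    def __init__(self, in_ch, out_ch, temporal_kernel=3):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size=(1, 3, 3),
                                   padding=(0, 1, 1), groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch,
                                   kernel_size=(temporal_kernel, 1, 1),
                                   padding=(temporal_kernel // 2, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):            # x: (batch, channels, frames, height, width)
        return self.act(self.pointwise(self.depthwise(x)))

# Example: a clip of 16 RGB frames at 112x112
clip = torch.randn(2, 3, 16, 112, 112)
block = DepthwiseSeparableConv3d(3, 32)
print(block(clip).shape)             # torch.Size([2, 32, 16, 112, 112])
```
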
8

Kou, Shan Shan, Colin J. R. Sheppard, and Jiao Lin. "Calculation of the volumetric diffracted field with a three-dimensional convolution: the three-dimensional angular spectrum method." Optics Letters 38, no. 24 (December 5, 2013): 5296. http://dx.doi.org/10.1364/ol.38.005296.

9

Li, Qiang, Qi Wang, and Xuelong Li. "Mixed 2D/3D Convolutional Network for Hyperspectral Image Super-Resolution." Remote Sensing 12, no. 10 (May 21, 2020): 1660. http://dx.doi.org/10.3390/rs12101660.

Abstract:
Deep learning-based hyperspectral image super-resolution (SR) methods have achieved great success recently. However, there are two main problems in the previous works. One is the use of typical three-dimensional convolution, which results in more network parameters. The other is that the spatial information of the hyperspectral image is not mined sufficiently while the spectral information is extracted. To address these issues, in this paper, we propose a mixed convolutional network (MCNet) for hyperspectral image super-resolution. We design a novel mixed convolutional module (MCM) to extract the potential features by 2D/3D convolution instead of one convolution, which enables the network to mine more spatial features of the hyperspectral image. To explore the effective features from the 2D units, we design local feature fusion to adaptively analyze all the hierarchical features in the 2D units. In the 3D unit, we employ spatial and spectral separable 3D convolution to extract spatial and spectral information, which reduces unaffordable memory usage and training time. Extensive evaluations and comparisons on three benchmark datasets demonstrate that the proposed approach achieves superior performance in comparison to existing state-of-the-art methods.
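
A quick parameter count shows why factorising a 3D kernel into spatial and spectral parts, as described above, shrinks the network. The channel and kernel sizes below are arbitrary illustrative choices, not those of MCNet, and the separable variant is assumed to keep the channel width between its two stages.

```python
# Weights of a full 3x3x3 convolution versus a (1x3x3 spatial) + (3x1x1 spectral)
# separable pair, ignoring biases; c_in and c_out are arbitrary illustrative values.
c_in, c_out = 64, 64
full_3d = c_in * c_out * 3 * 3 * 3                                    # 110,592
separable = c_in * c_out * (1 * 3 * 3) + c_out * c_out * (3 * 1 * 1)  # 49,152
print(full_3d, separable, round(separable / full_3d, 2))              # ~0.44 of the original size
```
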
10

Igumnov, L. A., I. V. Vorobtsov, and S. Yu Litvinchuk. "Boundary Element Method with Runge-Kutta Convolution Quadrature for Three-Dimensional Dynamic Poroelasticity." Applied Mechanics and Materials 709 (December 2014): 101–4. http://dx.doi.org/10.4028/www.scientific.net/amm.709.101.

Abstract:
The paper contains a brief introduction to the state of the art in poroelasticity models and in the application of BIE & BEM methods to solving dynamic problems in the Laplace domain. The Convolution Quadrature Method is formulated, as well as its Runge-Kutta modification and a scheme based on the highly oscillatory quadrature principles. Several approaches to Laplace transform inversion, including those based on the traditional Euler stepping scheme and Runge-Kutta stepping schemes, are numerically compared. A BIE system of the direct approach in the Laplace domain is used together with a discretization technique based on the collocation method. The boundary is discretized with quadrilateral 8-node biquadratic elements. Generalized boundary functions are approximated with the help of Goldshteyn’s displacement-stress matched model. The time-stepping scheme can rely on the application of the convolution theorem as well as the integration theorem. By means of the developed software, the following 3D poroelastodynamic problem was numerically treated: a Heaviside-shaped longitudinal load acting on the face of a column.
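
For readers unfamiliar with the Convolution Quadrature Method named above, the basic multistep form of Lubich's scheme can be written as follows. The notation is generic rather than the authors'; the Runge-Kutta variant used in the paper replaces the scalar weights with matrix-valued ones.

```latex
\[
u(t) = \int_0^t k(t-\tau)\, g(\tau)\, \mathrm{d}\tau
\qquad\Longrightarrow\qquad
u(t_n) \approx \sum_{j=0}^{n} \omega_{n-j}(\Delta t)\, g(t_j),
\]
\[
\sum_{n=0}^{\infty} \omega_n(\Delta t)\, \zeta^n
  = K\!\left( \frac{\gamma(\zeta)}{\Delta t} \right),
\qquad
\gamma(\zeta) = \tfrac{3}{2} - 2\zeta + \tfrac{1}{2}\zeta^2 \ \ \text{(BDF2, for example)}.
\]
```

Here K is the Laplace transform of the convolution kernel k, t_n = nΔt, and γ(ζ) is the quotient of the generating polynomials of the underlying time-stepping method; the convolution integral is thus discretised using only Laplace-domain evaluations of the kernel.
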
11

Kastengren, Alan L., and J. Craig Dutton. "Aspects of Shear Layer Unsteadiness in a Three-Dimensional Supersonic Wake." Journal of Fluids Engineering 127, no. 6 (July 5, 2005): 1085–94. http://dx.doi.org/10.1115/1.2062727.

Abstract:
The near wake of a blunt-base cylinder at 10° angle-of-attack to a Mach 2.46 free-stream flow is visualized at several locations to study unsteady aspects of its structure. In both side-view and end-view images, the shear layer flapping grows monotonically as the shear layer develops, similar to the trends seen in a corresponding axisymmetric supersonic base flow. The interface convolution, a measure of the tortuousness of the shear layer, peaks for side-view and end-view images during recompression. The high convolution for a septum of fluid seen in the middle of the wake indicates that the septum actively entrains fluid from the recirculation region, which helps to explain the low base pressure for this wake compared to that for a corresponding axisymmetric wake.
12

Gupta, P. K., L. A. Bennett, and A. P. Raiche. "Hybrid calculations of the three‐dimensional electromagnetic response of buried conductors." GEOPHYSICS 52, no. 3 (March 1987): 301–6. http://dx.doi.org/10.1190/1.1442304.

Abstract:
The hybrid method for computing the electromagnetic response of a three‐dimensional conductor in a layered, conducting half‐space consists of solving a finite‐element problem in a localized region containing the conductor, and using integral‐equation methods to obtain the fields outside that region. The original scheme obtains the boundary values by iterating between the integral‐equation solution and the finite‐element solution, after making an initial guess based on primary values from the field. A two‐dimensional interpolation scheme is then used to speed the evaluation of the [Formula: see text] to [Formula: see text] Green’s function convolution integrals required by most problems. The two algorithms presented are modifications of the original scheme. Both contain a search routine to identify a set of unique points where the convolution integral evaluations are required. By replacing the two‐dimensional interpolation with a one‐dimensional interpolation and reading the convolution integrals from a reference table, computation time was reduced by up to 70 percent and accuracy was improved. The first algorithm retains the iterative technique for enforcing consistency between the integral‐equation and finite‐element solutions on the boundary of the region. The second algorithm solves the coupled integral‐equation/finite‐element system directly. For some models, the direct method has reduced the computation time to 10 percent of that required by the original scheme. In practice the direct scheme is also more stable.
13

Hu, Shi-Min, Zheng-Ning Liu, Meng-Hao Guo, Jun-Xiong Cai, Jiahui Huang, Tai-Jiang Mu, and Ralph R. Martin. "Subdivision-based Mesh Convolution Networks." ACM Transactions on Graphics 41, no. 3 (June 30, 2022): 1–16. http://dx.doi.org/10.1145/3506694.

Abstract:
Convolutional neural networks (CNNs) have made great breakthroughs in two-dimensional (2D) computer vision. However, their irregular structure makes it hard to harness the potential of CNNs directly on meshes. A subdivision surface provides a hierarchical multi-resolution structure in which each face in a closed 2-manifold triangle mesh is exactly adjacent to three faces. Motivated by these two observations, this article presents SubdivNet, an innovative and versatile CNN framework for three-dimensional (3D) triangle meshes with Loop subdivision sequence connectivity. Making an analogy between mesh faces and pixels in a 2D image allows us to present a mesh convolution operator to aggregate local features from nearby faces. By exploiting face neighborhoods, this convolution can support standard 2D convolutional network concepts, e.g., variable kernel size, stride, and dilation. Based on the multi-resolution hierarchy, we make use of pooling layers that uniformly merge four faces into one and an upsampling method that splits one face into four. Thereby, many popular 2D CNN architectures can be easily adapted to process 3D meshes. Meshes with arbitrary connectivity can be remeshed to have Loop subdivision sequence connectivity via self-parameterization, making SubdivNet a general approach. Extensive evaluation and various applications demonstrate SubdivNet’s effectiveness and efficiency.
14

Abdelazeem, Rania M., Doaa Youssef, Jala El-Azab, Salah Hassab-Elnaby, and Mostafa Agour. "Three-Dimensional Holographic Reconstruction of Brain Tissue Based on Convolution Propagation." Journal of Physics: Conference Series 1472 (February 2020): 012008. http://dx.doi.org/10.1088/1742-6596/1472/1/012008.

15

Lagaris, I. E., and D. G. Papageorgiou. "CONVUS — an efficient package for calculating three-dimensional convolution-type integrals." Computer Physics Communications 76, no. 1 (June 1993): 80–86. http://dx.doi.org/10.1016/0010-4655(93)90122-s.

16

Yang, Zhuqing. "A Novel Brain Image Segmentation Method Using an Improved 3D U-Net Model." Scientific Programming 2021 (August 18, 2021): 1–10. http://dx.doi.org/10.1155/2021/4801077.

Abstract:
Medical image segmentation (IS) is a research field in image processing. Deep learning methods are used to automatically segment organs, tissues, or tumor regions in medical images, which can assist doctors in diagnosing diseases. Since most IS models based on convolutional neural networks (CNNs) are two-dimensional models, they are not suitable for three-dimensional medical imaging. At the same time, three-dimensional segmentation models have problems such as complex network structure and a large amount of calculation. Therefore, this study introduces the self-excited compressed dilated convolution (SECDC) module on the basis of the 3D U-Net network and proposes an improved 3D U-Net network model. In the SECDC module, the calculation amount of the model can be reduced by 1 × 1 × 1 convolution. Combining normal convolution and dilated convolution with a dilation rate of 2 can extract the multiview features of the image. At the same time, the 3D squeeze-and-excitation (3D-SE) module can realize automatic learning of the importance of each layer. The experimental results on the BraTS2019 dataset show that, in terms of the Dice coefficient and other indicators, the model used in this paper can reach 0.87 for the whole tumor, 0.84 for the tumor core, and 0.80 for the enhancing tumor, which is the most difficult to segment. From the evaluation indicators, it can be concluded that the improved 3D U-Net model can greatly reduce the amount of data while achieving better segmentation results, and the model has better robustness. This model can meet the clinical needs of brain tumor segmentation.
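
A minimal sketch of the kind of block this abstract describes (a 1 × 1 × 1 bottleneck, parallel normal and dilation-2 3 × 3 × 3 branches, and 3D squeeze-and-excitation) is given below. The channel widths, the concatenation-based fusion, and the reduction ratio are assumptions, not the paper's SECDC module.

```python
import torch
import torch.nn as nn

class SE3d(nn.Module):
    """3D squeeze-and-excitation: per-channel reweighting from a global average."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                         # x: (B, C, D, H, W)
        w = self.fc(x.mean(dim=(2, 3, 4)))        # channel weights, (B, C)
        return x * w.view(*w.shape, 1, 1, 1)

class DilatedBottleneck3d(nn.Module):
    """Illustrative block in the spirit of the SECDC idea (not the original code)."""
    def __init__(self, in_ch, mid_ch):
        super().__init__()
        self.reduce = nn.Conv3d(in_ch, mid_ch, kernel_size=1)            # 1x1x1 bottleneck
        self.normal = nn.Conv3d(mid_ch, mid_ch, 3, padding=1)            # ordinary 3x3x3
        self.dilated = nn.Conv3d(mid_ch, mid_ch, 3, padding=2, dilation=2)  # dilation rate 2
        self.se = SE3d(2 * mid_ch)

    def forward(self, x):
        h = torch.relu(self.reduce(x))
        h = torch.cat([self.normal(h), self.dilated(h)], dim=1)          # fuse both views
        return self.se(torch.relu(h))

x = torch.randn(1, 32, 16, 64, 64)
print(DilatedBottleneck3d(32, 16)(x).shape)       # torch.Size([1, 32, 16, 64, 64])
```
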
17

Zhao, Di. "Mobile GPU Computing Based Filter Bank Convolution for Three-Dimensional Wavelet Transform." International Journal of Mobile Computing and Multimedia Communications 7, no. 2 (April 2016): 22–35. http://dx.doi.org/10.4018/ijmcmc.2016040102.

Abstract:
Mobile GPU computing, or System on Chip with embedded GPU (SoC GPU), has recently been in great demand. Since these SoCs are designed for mobile devices with real-time applications such as image processing and video processing, highly efficient implementations of the wavelet transform are essential for these chips. In this paper, the author develops two SoC GPU based DWT implementations: signal based parallelization for discrete wavelet transform (sDWT) and coefficient based parallelization for discrete wavelet transform (cDWT), and the author evaluates the performance of the three-dimensional wavelet transform on the SoC GPU Tegra K1. Computational results show that SoC GPU based DWT is significantly faster than SoC CPU based DWT. Computational results also show that sDWT can generally satisfy the requirement of real-time processing (30 frames per second) with the image sizes of 352×288, 480×320, 720×480 and 1280×720, while cDWT can only achieve real-time processing with small image sizes of 352×288 and 480×320.
18

Starkschall, George. "Beam-commissioning methodology for a three-dimensional convolution/superposition photon dose algorithm." Journal of Applied Clinical Medical Physics 1, no. 1 (January 2000): 8. http://dx.doi.org/10.1120/1.308246.

19

Starkschall, George, Roy E. Steadham, Richard A. Popple, Salahuddin Ahmad, and Isaac I. Rosen. "Beam-commissioning methodology for a three-dimensional convolution/superposition photon dose algorithm." Journal of Applied Clinical Medical Physics 1, no. 1 (January 1, 2000): 8–27. http://dx.doi.org/10.1120/jacmp.v1i1.2651.

20

Banjai, Lehel, and Maryna Kachanovska. "Sparsity of Runge–Kutta convolution weights for the three-dimensional wave equation." BIT Numerical Mathematics 54, no. 4 (June 17, 2014): 901–36. http://dx.doi.org/10.1007/s10543-014-0498-9.

21

Zha, Wenshu, Wen Zhang, Daolun Li, Yan Xing, Lei He, and Jieqing Tan. "Convolution-Based Model-Solving Method for Three-Dimensional, Unsteady, Partial Differential Equations." Neural Computation 34, no. 2 (January 14, 2022): 518–40. http://dx.doi.org/10.1162/neco_a_01462.

Abstract:
Neural networks are increasingly widely used in the solution of partial differential equations (PDEs). This letter proposes 3D-PDE-Net to solve the three-dimensional PDE. We give a mathematical derivation of a three-dimensional convolution kernel that can approximate any order differential operator within the range of expressing ability and then construct 3D-PDE-Net based on this theory. An optimum network is obtained by minimizing the normalized mean square error (NMSE) of training data, and L-BFGS is used as the optimization algorithm with second-order precision. Numerical experimental results show that 3D-PDE-Net can achieve the solution with good accuracy using few training samples, and it is highly significant in solving linear and nonlinear unsteady PDEs.
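
As a concrete instance of the claim that a three-dimensional convolution kernel can approximate a differential operator, the sketch below builds the classical 7-point finite-difference Laplacian as a fixed 3 × 3 × 3 kernel and checks it on a quadratic field. It is an independent illustration, not the kernel derived in the paper.

```python
import torch
import torch.nn.functional as F

# A fixed 3x3x3 kernel that realises the standard 7-point finite-difference
# Laplacian on a unit grid: -6 at the centre, +1 at the six face neighbours.
lap = torch.zeros(1, 1, 3, 3, 3)
lap[0, 0, 1, 1, 1] = -6.0
for d, i, j in [(0, 1, 1), (2, 1, 1), (1, 0, 1), (1, 2, 1), (1, 1, 0), (1, 1, 2)]:
    lap[0, 0, d, i, j] = 1.0

# Check against u(x, y, z) = x^2 + y^2 + z^2, whose Laplacian is 6 everywhere;
# the second-difference stencil is exact for quadratics.
ax = torch.arange(8, dtype=torch.float32)
x, y, z = torch.meshgrid(ax, ax, ax, indexing="ij")
u = (x**2 + y**2 + z**2).view(1, 1, 8, 8, 8)
print(F.conv3d(u, lap)[0, 0, 0, 0, 0])   # tensor(6.) at every interior point
```
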
22

Miao Guang, 苗光, and 李朝锋 Li Chaofeng. "Detection of Pulmonary Nodules CT Images Combined with Two-Dimensional and Three-Dimensional Convolution Neural Networks." Laser & Optoelectronics Progress 55, no. 5 (2018): 051006. http://dx.doi.org/10.3788/lop55.051006.

23

Yu, Dawen, and Shunping Ji. "Grid Based Spherical CNN for Object Detection from Panoramic Images." Sensors 19, no. 11 (June 9, 2019): 2622. http://dx.doi.org/10.3390/s19112622.

Abstract:
Recently proposed spherical convolutional neural networks (SCNNs) have shown advantages over conventional planar CNNs on classifying spherical images. However, two factors hamper their application in an object detection task. First, a convolution in S2 (a two-dimensional sphere in three-dimensional space) or SO(3) (three-dimensional special orthogonal group) space results in the loss of an object’s location. Second, overlarge bandwidth is required to preserve a small object’s information on a sphere because the S2/SO(3) convolution must be performed on the whole sphere, instead of a local image patch. In this study, we propose a novel grid-based spherical CNN (G-SCNN) for detecting objects from spherical images. According to input bandwidth, a sphere image is transformed to a conformal grid map to be the input of the S2/SO(3) convolution, and an object’s bounding box is scaled to cover an adequate area of the grid map. This solves the second problem. For the first problem, we utilize a planar region proposal network (RPN) with a data augmentation strategy that increases rotation invariance. We have also created a dataset including 600 street view panoramic images captured from a vehicle-borne panoramic camera. The dataset contains 5636 objects of interest annotated with class and bounding box and is named the WHU (Wuhan University) panoramic dataset. Results on the dataset show that our grid-based method is markedly better than the original SCNN in detecting objects from spherical images, and it outperformed several mainstream object detection networks, such as Faster R-CNN and SSD.
24

Skoglund, Ulf. "Quantitative Image Processing in 3D." Microscopy and Microanalysis 4, S2 (July 1998): 442–43. http://dx.doi.org/10.1017/s1431927600022339.

Abstract:
Three-dimensional reconstructions from projections are usually ridden by noise from different sources. A common problem among many three-dimensional reconstruction techniques is the systematic absence of certain projections, but also the accidental absence of spurious projections. In these three-dimensional reconstructions such absences are visible as directional smearing due to convolution. Other convolution effects such as those due to the optics of the instrument used to record the data usually cause severe damping of high frequencies and even contrast reversal (common in images from electron microscopes).Several approaches to overcome these ‘noise’ effects in three-dimensional reconstructions have been developed, but they usually suffer from the very small radius of convergence. Very easily and commonly, the refinement iterations end up stuck in a premature choice of a minimum. We have developed another algorithm, constrained maximum entropy tomography (COMET), that in practice has been shown to overcome these problems.
25

Li, N., C. Wang, H. Zhao, X. Gong, and D. Wang. "A NOVEL DEEP CONVOLUTIONAL NEURAL NETWORK FOR SPECTRAL–SPATIAL CLASSIFICATION OF HYPERSPECTRAL DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 897–900. http://dx.doi.org/10.5194/isprs-archives-xlii-3-897-2018.

Abstract:
Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing. Joint extraction of this information from hyperspectral images is one of the most important approaches for hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which extracts spectral-spatial information of hyperspectral images correctly. The proposed model not only learns sufficient knowledge from the limited number of samples, but also has powerful generalization ability. The proposed framework based on three-dimensional convolution can extract spectral-spatial features of labeled samples effectively. Though CNN has shown its robustness to distortion, it cannot extract features of different scales through the traditional pooling layer, which has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
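
The spatial pyramid pooling idea referenced above, applied to a 3D feature volume, can be sketched as follows. The pyramid levels and the use of max pooling are illustrative assumptions rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class SpatialPyramidPool3d(nn.Module):
    """Pool a 3D feature map at several fixed output sizes and concatenate the
    flattened results, so inputs of different sizes yield a fixed-length vector."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveMaxPool3d(s) for s in levels)

    def forward(self, x):                         # x: (B, C, D, H, W)
        return torch.cat([p(x).flatten(1) for p in self.pools], dim=1)

feat = torch.randn(2, 16, 7, 9, 9)
print(SpatialPyramidPool3d()(feat).shape)         # (2, 16*(1+8+64)) = (2, 1168)
```
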
26

Baddour, Natalie. "Operational and convolution properties of three-dimensional Fourier transforms in spherical polar coordinates." Journal of the Optical Society of America A 27, no. 10 (September 13, 2010): 2144. http://dx.doi.org/10.1364/josaa.27.002144.

27

Karasik, Y. B. "How to compute three-dimensional convolution and/or correlation optically: A mathematical foundation." Journal of Modern Optics 45, no. 4 (April 1998): 817–23. http://dx.doi.org/10.1080/09500349808230624.

28

Bai, Xue, Ze Liu, Jie Zhang, Shengye Wang, Qing Hou, Guoping Shan, Ming Chen, and Binbing Wang. "Comparing of two dimensional and three dimensional fully convolutional networks for radiotherapy dose prediction in left-sided breast cancer." Science Progress 104, no. 3 (July 2021): 003685042110381. http://dx.doi.org/10.1177/00368504211038162.

Abstract:
Fully convolutional networks were developed for predicting optimal dose distributions for patients with left-sided breast cancer and compared the prediction accuracy between two-dimensional and three-dimensional networks. Sixty cases treated with volumetric modulated arc radiotherapy were analyzed. Among them, 50 cases were randomly chosen to conform the training set, and the remaining 10 were to construct the test set. Two U-Net fully convolutional networks predicted the dose distributions, with two-dimensional and three-dimensional convolution kernels, respectively. Computed tomography images, delineated regions of interest, or their combination were considered as input data. The accuracy of predicted results was evaluated against the clinical dose. Most types of input data retrieved a similar dose to the ground truth for organs at risk ( p > 0.05). Overall, the two-dimensional model had higher performance than the three-dimensional model ( p < 0.05). Moreover, the two-dimensional region of interest input provided the best prediction results regarding the planning target volume mean percentage difference (2.40 ± 0.18%), heart mean percentage difference (4.28 ± 2.02%), and the gamma index at 80% of the prescription dose are with tolerances of 3 mm and 3% (0.85 ± 0.03), whereas the two-dimensional combined input provided the best prediction regarding ipsilateral lung mean percentage difference (4.16 ± 1.48%), lung mean percentage difference (2.41 ± 0.95%), spinal cord mean percentage difference (0.67 ± 0.40%), and 80% Dice similarity coefficient (0.94 ± 0.01). Statistically, the two-dimensional combined inputs achieved higher prediction accuracy regarding 80% Dice similarity coefficient than the two-dimensional region of interest input (0.94 ± 0.01 vs 0.92 ± 0.01, p < 0.05). The two-dimensional data model retrieves higher performance than its three-dimensional counterpart for dose prediction, especially when using region of interest and combined inputs.
29

Zheng, Yang, Jieyu Zhao, Yu Chen, Chen Tang, and Shushi Yu. "3D Mesh Model Classification with a Capsule Network." Algorithms 14, no. 3 (March 22, 2021): 99. http://dx.doi.org/10.3390/a14030099.

Abstract:
With the widespread success of deep learning in the two-dimensional field, how to apply deep learning methods from two-dimensional to three-dimensional field has become a current research hotspot. Among them, the polygon mesh structure in the three-dimensional representation as a complex data structure provides an effective shape approximate representation for the three-dimensional object. Although the traditional method can extract the characteristics of the three-dimensional object through the graphical method, it cannot be applied to more complex objects. However, due to the complexity and irregularity of the mesh data, it is difficult to directly apply convolutional neural networks to 3D mesh data processing. Considering this problem, we propose a deep learning method based on a capsule network to effectively classify mesh data. We first design a polynomial convolution template. Through a sliding operation similar to a two-dimensional image convolution window, we directly sample on the grid surface, and use the window sampling surface as the minimum unit of calculation. Because a high-order polynomial can effectively represent a surface, we fit the approximate shape of the surface through the polynomial, use the polynomial parameter as the shape feature of the surface, and add the center point coordinates and normal vector of the surface as the pose feature of the surface. The feature is used as the feature vector of the surface. At the same time, to solve the problem of the introduction of a large number of pooling layers in traditional convolutional neural networks, the capsule network is introduced. For the problem of nonuniform size of the input grid model, the capsule network attitude parameter learning method is improved by sharing the weight of the attitude matrix. The amount of model parameters is reduced, and the training efficiency of the 3D mesh model is further improved. The experiment is compared with the traditional method and the latest two methods on the SHREC15 data set. Compared with the MeshNet and MeshCNN, the average recognition accuracy in the original test set is improved by 3.4% and 2.1%, and the average after fusion of features the accuracy reaches 93.8%. At the same time, under the premise of short training time, this method can also achieve considerable recognition results through experimental verification. The three-dimensional mesh classification method proposed in this paper combines the advantages of graphics and deep learning methods, and effectively improves the classification effect of 3D mesh model.
30

Qing, Yuhao, and Wenyi Liu. "Hyperspectral Image Classification Based on Multi-Scale Residual Network with Attention Mechanism." Remote Sensing 13, no. 3 (January 20, 2021): 335. http://dx.doi.org/10.3390/rs13030335.

Abstract:
In recent years, image classification on hyperspectral imagery utilizing deep learning algorithms has attained good results. Thus, spurred by that finding and to further improve the deep learning classification accuracy, we propose a multi-scale residual convolutional neural network model fused with an efficient channel attention network (MRA-NET) that is appropriate for hyperspectral image classification. The suggested technique comprises a multi-staged architecture, where initially the spectral information of the hyperspectral image is reduced into a two-dimensional tensor, utilizing a principal component analysis (PCA) scheme. Then, the constructed low-dimensional image is input to our proposed ECA-NET deep network, which exploits the advantages of its core components, i.e., multi-scale residual structure and attention mechanisms. We evaluate the performance of the proposed MRA-NET on three publicly available hyperspectral datasets and demonstrate that, overall, the classification accuracy of our method is 99.82%, 99.81%, and 99.37%, respectively, which is higher compared to the corresponding accuracy of current networks such as 3D convolutional neural network (CNN), three-dimensional residual convolution structure (RES-3D-CNN), and space–spectrum joint deep network (SSRN).
31

Haizhong, Qian. "I3D: An Improved Three-Dimensional CNN Model on Hyperspectral Remote Sensing Image Classification." Security and Communication Networks 2021 (November 29, 2021): 1–12. http://dx.doi.org/10.1155/2021/5217578.

Abstract:
Hyperspectral image data are widely used in real life because they contain rich spectral and spatial information. Hyperspectral image classification aims to distinguish different functions based on different features. The computer performs quantitative analysis of the captured image and classifies each pixel in the image. However, because of insufficient spatial-spectral feature extraction, too many network layers, and complex calculations, the traditional deep learning-based hyperspectral image classification technology leads to a large number of parameters and a complex network that is difficult to optimize for hyperspectral images. For this reason, I proposed the I3D-CNN model. This method uses hyperspectral image cubes to directly extract spectral-spatial coupling features, adds depthwise separable convolution to 3D convolution to re-extract spatial features, and reduces the parameter amount and calculation time at the same time. In addition, the model removes the pooling layer to achieve fewer parameters, a smaller model scale, and easier training. The performance of the I3D-CNN model on the test datasets is better than that of other deep learning-based methods after comparison. The results show that the model still exhibits strong classification performance, reduces a large number of learning parameters, and reduces complexity. The accuracy rate, average classification accuracy rate, and kappa coefficient are all stable above 95%.
32

Hu, Guo X., Zhong Yang, Lei Hu, Li Huang, and Jia M. Han. "Small Object Detection with Multiscale Features." International Journal of Digital Multimedia Broadcasting 2018 (September 30, 2018): 1–10. http://dx.doi.org/10.1155/2018/4546896.

Abstract:
The existing object detection algorithms based on the deep convolution neural network need to carry out multilevel convolution and pooling operations on the entire image in order to extract deep semantic features of the image. The detection models can get better results for big objects. However, those models fail to detect small objects that have low resolution and are greatly influenced by noise, because the features after repeated convolution operations of existing models do not fully represent the essential characteristics of the small objects. In this paper, we can achieve good detection accuracy by extracting the features at different convolution levels of the object and using the multiscale features to detect small objects. For our detection model, we extract the features of the image from the third, fourth, and fifth convolution layers, respectively, and then these three scales of features are concatenated into a one-dimensional vector. The vector is used to classify objects by classifiers and to locate objects by bounding-box regression. Through testing, the detection accuracy of our model for small objects is 11% higher than that of state-of-the-art models. In addition, we also used the model to detect aircraft in remote sensing images and achieved good results.
33

Igumnov, Leonid A., Ivan Markov, Aleksandr Lyubimov, and Valery Novikov. "Dynamic Response of Three-Dimensional Multi-Domain Piezoelectric Structures via BEM." Key Engineering Materials 769 (April 2018): 317–22. http://dx.doi.org/10.4028/www.scientific.net/kem.769.317.

Abstract:
In this paper, a Laplace domain boundary element method is applied for transient dynamic analysis of three-dimensional multi-domain linear piezoelectric structures. Piezoelectric materials of homogeneous sub-domains may have arbitrary degree of anisotropy. The boundary element formulation is based on a weakly singular representation of the piezoelectric boundary integral equations in the Laplace domain. To compute the time-domain solutions a convolution quadrature formula is applied for the numerical inversion of Laplace transform. Presented multi-domain boundary element method is tested on a three-dimensional problem of nonhomogeneous column which is made of two dissimilar piezoelectric materials and subjected to dynamic impact loading.
34

Li, Ruixue, Bo Yin, Yanping Cong, and Zehua Du. "Simultaneous Prediction of Soil Properties Using Multi_CNN Model." Sensors 20, no. 21 (November 3, 2020): 6271. http://dx.doi.org/10.3390/s20216271.

Abstract:
Soil nutrient prediction based on near-infrared spectroscopy has become the main research direction for rapid acquisition of soil information. The development of deep learning has greatly improved the prediction accuracy of traditional modeling methods. In view of the low efficiency and low accuracy of current soil prediction models, this paper proposes a soil multi-attribute intelligent prediction method based on convolutional neural networks. By constructing a dual-stream convolutional neural network model, Multi_CNN, that combines one-dimensional convolution and two-dimensional convolution, intelligent prediction of multiple soil attributes is realized. The model extracts the characteristics of soil attributes from spectral sequences and spectrograms respectively, and multiple attributes can be predicted simultaneously by feature fusion. The model is evaluated on two soil near-infrared spectroscopy data sets of different scales for multi-attribute prediction. The experimental results show that the RP2 values of the three attributes Total Carbon, Total Nitrogen, and Alkaline Nitrogen on the small dataset are 0.94, 0.95, and 0.87, respectively, and the RP2 values of Organic Carbon, Nitrogen, and Clay on the LUCAS dataset are 0.95, 0.91, and 0.83, respectively. Compared with traditional regression models and new prediction methods commonly used in soil nutrient prediction, the multi-task model proposed in this paper is more accurate.
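
A skeletal version of a dual-stream network of the kind described above (a 1D convolutional branch over the spectral sequence, a 2D branch over a spectrogram-like representation, feature fusion, and one regression output per soil attribute) is sketched below. Layer sizes, pooling choices, and input shapes are assumptions, not the Multi_CNN configuration.

```python
import torch
import torch.nn as nn

class DualStreamRegressor(nn.Module):
    def __init__(self, n_targets=3):
        super().__init__()
        # 1D branch over the raw spectral sequence
        self.seq = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten())          # -> (B, 16*32)
        # 2D branch over a spectrogram-like representation
        self.img = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten())      # -> (B, 16*64)
        # fused features feed one output per soil attribute
        self.head = nn.Linear(16 * 32 + 16 * 64, n_targets)

    def forward(self, spectrum, spectrogram):
        fused = torch.cat([self.seq(spectrum), self.img(spectrogram)], dim=1)
        return self.head(fused)

model = DualStreamRegressor()
out = model(torch.randn(4, 1, 1700), torch.randn(4, 1, 64, 64))
print(out.shape)   # torch.Size([4, 3]) -- e.g. one value per predicted attribute
```
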
35

Li, Wenmei, Huaihuai Chen, Qing Liu, Haiyan Liu, Yu Wang, and Guan Gui. "Attention Mechanism and Depthwise Separable Convolution Aided 3DCNN for Hyperspectral Remote Sensing Image Classification." Remote Sensing 14, no. 9 (May 5, 2022): 2215. http://dx.doi.org/10.3390/rs14092215.

Abstract:
Hyperspectral Remote Sensing Image (HRSI) classification based on Convolution Neural Network (CNN) has become one of the hot topics in the field of remote sensing. However, the high dimensional information and limited training samples are prone to the Hughes phenomenon for hyperspectral remote sensing images. Meanwhile, high-dimensional information processing also consumes significant time and computing power, or the extracted features may not be representative, resulting in unsatisfactory classification efficiency and accuracy. To solve these problems, an attention mechanism and depthwise separable convolution are introduced to the three-dimensional convolutional neural network (3DCNN). Thus, 3DCNN-AM and 3DCNN-AM-DSC are proposed for HRSI classification. Firstly, three hyperspectral datasets (Indian pines, University of Pavia and University of Houston) are used to analyze the patchsize and dataset allocation ratio (Training set: Validation set: Test Set) in the performance of 3DCNN and 3DCNN-AM. Secondly, in order to improve work efficiency, principal component analysis (PCA) and autoencoder (AE) dimension reduction methods are applied to reduce data dimensionality, and maximize the classification accuracy of the 3DCNN, but it will still take time. Furthermore, the HRSI classification models 3DCNN-AM and 3DCNN-AM-DSC are applied to classify with the three classic HRSI datasets. Lastly, the classification accuracy index and time consumption are evaluated. The results indicate that 3DCNN-AM could improve classification accuracy and reduce computing time with the dimension reduction dataset, and the 3DCNN-AM-DSC model can reduce the training time by a maximum of 91.77% without greatly reducing the classification accuracy. The results of the three classic hyperspectral datasets illustrate that 3DCNN-AM-DSC can improve the classification performance and reduce the time required for model training. It may be a new way to tackle hyperspectral datasets in HRSI classification tasks without dimensionality reduction.
36

Yang Jun, 杨军, 王顺 Wang Shun, and 周鹏 Zhou Peng. "Recognition and Classification for Three-Dimensional Model Based on Deep Voxel Convolution Neural Network." Acta Optica Sinica 39, no. 4 (2019): 0415007. http://dx.doi.org/10.3788/aos201939.0415007.

37

Marston, Philip L. "Spatial surface convolution approximation of three‐dimensional leaky wave contributions to high‐frequency scattering." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2820. http://dx.doi.org/10.1121/1.416619.

38

McGary, J. E., and A. L. Boyer. "An interactive, parallel, three-dimensional fast Fourier transform convolution dose calculation using a supercomputer." Medical Physics 24, no. 4 (April 1997): 519–22. http://dx.doi.org/10.1118/1.597934.

39

Chehrazi, F., B. Yi, M. Sarfaraz, S. Naqvi, and C. Yu. "Three Dimensional Dose Reconstruction IMRT Verification using an EPID and the Convolution Superposition Algorithm." International Journal of Radiation Oncology*Biology*Physics 63 (October 2005): S514. http://dx.doi.org/10.1016/j.ijrobp.2005.07.870.

40

TIMOSHIN, S. N., and F. T. SMITH. "Vortex/inflectional-wave interactions with weakly three-dimensional input." Journal of Fluid Mechanics 348 (October 10, 1997): 247–94. http://dx.doi.org/10.1017/s0022112097006447.

Abstract:
The subtle impact of the spanwise scaling in nonlinear interactions between oblique instability waves and the induced longitudinal vortex field is considered theoretically for the case of a Rayleigh-unstable boundary-layer flow, at large Reynolds numbers. A classification is given of various flow regimes on the basis of Reynolds-stress mechanisms of mean vorticity generation, and a connection between low-amplitude non-parallel vortex/wave interactions and less-low-amplitude non-equilibrium critical-layer flows is discussed in more detail than in previous studies. Two new regimes of vortex/wave interaction for increased spanwise lengthscales are identified and studied. In the first, with the cross-scale just slightly larger than the boundary-layer thickness, the wave modulation is governed by an amplitude equation with a convolution and an ordinary integral term present due to nonlinear contributions from all three Reynolds-stress components in the cross-momentum balance. In the second regime the cross-scale is larger, and the wave modulation is found to be governed by an integral/partial differential equation. In both cases the main-flow non-parallelism contributes significantly to the coupled wave/vortex development.
41

Chamgoulov, Ravil, Pierre Lane, and Calum MacAulay. "Optical Computed-Tomographic Microscope for Three-Dimensional Quantitative Histology." Analytical Cellular Pathology 26, no. 5-6 (January 1, 2004): 319–27. http://dx.doi.org/10.1155/2004/209579.

Abstract:
A novel optical computed‐tomographic microscope has been developed allowing quantitative three‐dimensional (3D) imaging and analysis of fixed pathological material. Rather than a conventional two‐dimensional (2D) image, the instrument produces a 3D representation of fixed absorption‐stained material, from which quantitative histopathological features can be measured more accurately. The accurate quantification of these features is critically important in disease diagnosis and the clinical classification of cancer. The system consists of two high NA objective lenses, a light source, a digital spatial light modulator (DMD, by Texas Instrument), an x–y stage, and a CCD detector. The DMD, positioned at the back pupil‐plane of the illumination objective, is employed to illuminate the specimen with parallel rays at any desired angle. The system uses a modification of the convolution backprojection algorithm for reconstruction. In contrast to fluorescent images acquired by a confocal microscope, this instrument produces 3D images of absorption stained material. Microscopic 3D volume reconstructions of absorption‐stained cells have been demonstrated. Reconstructed 3D images of individual cells and tissue can be cut virtually with the distance between the axial slices less than 0.5 μm.
42

Lutter, Liisa, Christopher J. Serpell, Mick F. Tuite, Louise C. Serpell, and Wei-Feng Xue. "Three-dimensional reconstruction of individual helical nano-filament structures from atomic force microscopy topographs." Biomolecular Concepts 11, no. 1 (May 6, 2020): 102–15. http://dx.doi.org/10.1515/bmc-2020-0009.

Abstract:
Atomic force microscopy, AFM, is a powerful tool that can produce detailed topographical images of individual nano-structures with a high signal-to-noise ratio without the need for ensemble averaging. However, the application of AFM in structural biology has been hampered by the tip-sample convolution effect, which distorts images of nano-structures, particularly those that are of similar dimensions to the cantilever probe tips used in AFM. Here we show that the tip-sample convolution results in a feature-dependent and non-uniform distribution of image resolution on AFM topographs. We show how this effect can be utilised in structural studies of nano-sized upward convex objects such as spherical or filamentous molecular assemblies deposited on a flat surface, because it causes ‘magnification’ of such objects in AFM topographs. Subsequently, this enhancement effect is harnessed through contact-point based deconvolution of AFM topographs. Here, the application of this approach is demonstrated through the 3D reconstruction of the surface envelope of individual helical amyloid filaments without the need of cross-particle averaging using the contact-deconvoluted AFM topographs. Resolving the structural variations of individual macromolecular assemblies within inherently heterogeneous populations is paramount for mechanistic understanding of many biological phenomena such as amyloid toxicity and prion strains. The approach presented here will also facilitate the use of AFM for high-resolution structural studies and integrative structural biology analysis of single molecular assemblies.
43

Bourgeois, Aline, Philippe Joseph, and Jean Claude Lecomte. "Three-dimensional full wave seismic modelling versus one-dimensional convolution: the seismic appearance of the Grès d’Annot turbidite system." Geological Society, London, Special Publications 221, no. 1 (2004): 401–17. http://dx.doi.org/10.1144/gsl.sp.2004.221.01.22.

44

Wu, Peida, Ziguan Cui, Zongliang Gan, and Feng Liu. "Three-Dimensional ResNeXt Network Using Feature Fusion and Label Smoothing for Hyperspectral Image Classification." Sensors 20, no. 6 (March 16, 2020): 1652. http://dx.doi.org/10.3390/s20061652.

Abstract:
In recent years, deep learning methods have been widely used in the hyperspectral image (HSI) classification tasks. Among them, spectral-spatial combined methods based on the three-dimensional (3-D) convolution have shown good performance. However, because of the three-dimensional convolution, increasing network depth will result in a dramatic rise in the number of parameters. In addition, the previous methods do not make full use of spectral information. They mostly use the data after dimensionality reduction directly as the input of networks, which result in poor classification ability in some categories with small numbers of samples. To address the above two issues, in this paper, we designed an end-to-end 3D-ResNeXt network which adopts feature fusion and label smoothing strategy further. On the one hand, the residual connections and split-transform-merge strategy can alleviate the declining-accuracy phenomenon and decrease the number of parameters. We can adjust the hyperparameter cardinality instead of the network depth to extract more discriminative features of HSIs and improve the classification accuracy. On the other hand, in order to improve the classification accuracies of classes with small numbers of samples, we enrich the input of the 3D-ResNeXt spectral-spatial feature learning network by additional spectral feature learning, and finally use a loss function modified by label smoothing strategy to solve the imbalance of classes. The experimental results on three popular HSI datasets demonstrate the superiority of our proposed network and an effective improvement in the accuracies especially for the classes with small numbers of training samples.
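
The label smoothing strategy mentioned in the abstract amounts to training against softened targets; a standard uniform-smoothing cross-entropy is sketched below. The smoothing factor and the exact formulation used by the authors are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy against targets smoothed as (1 - eps) * one_hot + eps / K."""
    logp = F.log_softmax(logits, dim=-1)
    nll = -logp.gather(-1, target.unsqueeze(-1)).squeeze(-1)   # per-sample NLL
    uniform = -logp.mean(dim=-1)                               # loss against a uniform target
    return ((1 - eps) * nll + eps * uniform).mean()

logits = torch.randn(8, 16)            # e.g. 16 land-cover classes
labels = torch.randint(0, 16, (8,))
print(smoothed_cross_entropy(logits, labels))
```
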
45

Jiang, Xinrui, Ye Zhang, Qi Yang, Bin Deng, and Hongqiang Wang. "Millimeter-Wave Array Radar-Based Human Gait Recognition Using Multi-Channel Three-Dimensional Convolutional Neural Network." Sensors 20, no. 19 (September 23, 2020): 5466. http://dx.doi.org/10.3390/s20195466.

Abstract:
At present, there are two obvious problems in radar-based gait recognition. First, it is difficult for the traditional radar frequency band to meet the requirements of fine identification due to its low carrier frequency and limited micro-Doppler resolution. Another significant problem is that radar signal processing is relatively complex, and the existing signal processing algorithms are poor in real-time usability, robustness and universality. This paper focuses on the two basic problems of human gait detection with radar and proposes a human gait classification and recognition method based on millimeter-wave array radar. Based on deep-learning technology, a multi-channel three-dimensional convolution neural network is proposed on the basis of improving the residual network, which completes the classification and recognition of human gait through the hierarchical extraction and fusion of multi-dimensional features. Taking the three-dimensional coordinates, motion speed and intensity of strong scattering points in the process of target motion as network inputs, multi-channel convolution is used to extract motion features, and the classification and recognition of typical daily actions are completed. The experimental results show that the method achieves more than 92.5% recognition accuracy for common gait categories such as jogging and normal walking.
46

Lozano Torres, Jose Agustin, and Björn Malte Schäfer. "Three-dimensional weak gravitational lensing of the 21-cm radiation background." Monthly Notices of the Royal Astronomical Society 512, no. 4 (March 23, 2022): 5135–52. http://dx.doi.org/10.1093/mnras/stac796.

Abstract:
We study weak gravitational lensing by the cosmic large-scale structure of the 21-cm radiation background in the 3D weak-lensing formalism. The interplay between source distance measured at finite resolution, visibility, and lensing terms is analysed in detail and the resulting total covariance Cℓ(k, k′) is derived. The effect of lensing correlates different multipoles through convolution, breaking the statistical homogeneity of the 21-cm radiation background. This homogeneity breaking can be exploited to reconstruct the lensing field $\hat{\phi}_{\ell m}(\kappa)$, with reconstruction noise $N_{\ell}^{\hat{\phi}}$, by means of quadratic estimators. The effects related to the actual measurement process (redshift precision and visibility terms) drastically change the values of the off-diagonal terms of the total covariance Cℓ(k, k′). It is expected that the detection of lensing effects on a 21-cm radiation background will require sensitive studies and high-resolution observations by future low-frequency radio arrays such as the SKA survey.
47

Zhao, Huapeng, and Zhongxiang Shen. "Efficient Modeling of Three-Dimensional Reverberation Chambers Using Hybrid Discrete Singular Convolution-Method of Moments." IEEE Transactions on Antennas and Propagation 59, no. 8 (August 2011): 2943–53. http://dx.doi.org/10.1109/tap.2011.2158966.

48

Yang Li-Xia, Ge De-Biao, and Wei Bing. "Three-dimensional finite-difference time-domain implementation for anisotropic dispersive medium using recursive convolution method." Acta Physica Sinica 56, no. 8 (2007): 4509. http://dx.doi.org/10.7498/aps.56.4509.

49

Yoo, Hoon, and Jae-Young Jang. "Computational three-dimensional reconstruction in diffraction grating imaging by convolution with periodic δ-function array." Optik 249 (January 2022): 168211. http://dx.doi.org/10.1016/j.ijleo.2021.168211.

50

Zhang, Qi, Jianlong Chang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. "Spatio-Temporal Graph Structure Learning for Traffic Forecasting." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 1177–85. http://dx.doi.org/10.1609/aaai.v34i01.5470.

Abstract:
As an indispensable part of an Intelligent Traffic System (ITS), the task of traffic forecasting is inherently subject to the following three challenging aspects. First, traffic data are physically associated with road networks, and thus should be formatted as traffic graphs rather than regular grid-like tensors. Second, traffic data exhibit strong spatial dependence, which implies that the nodes in the traffic graphs usually have complex and dynamic relationships between each other. Third, traffic data demonstrate strong temporal dependence, which is crucial for traffic time series modeling. To address these issues, we propose a novel framework named Structure Learning Convolution (SLC) that makes it possible to extend the traditional convolutional neural network (CNN) to graph domains and learn the graph structure for traffic forecasting. Technically, SLC explicitly incorporates the structure information into the convolutional operation. Under this framework, various non-Euclidean CNN methods can be considered as particular instances of our formulation, yielding a flexible mechanism for learning on the graph. Along this technical line, two SLC modules are proposed to capture the global and local structures respectively, and they are integrated to construct an end-to-end network for traffic forecasting. Additionally, in this process, Pseudo three Dimensional convolution (P3D) networks are combined with SLC to capture the temporal dependencies in traffic data. Extensive comparative experiments on six real-world datasets demonstrate that our proposed approach significantly outperforms the state-of-the-art ones.