To view other types of publications on this topic, follow the link: 3D Networks.

Journal articles on the topic "3D Networks"

Format your citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "3D Networks".

Next to each work in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its online abstract, provided these are available in the metadata.

Browse journal articles from a wide range of disciplines and compile an accurate bibliography.

1

Liang, Long, Christopher Jones, Shaohua Chen, Bo Sun, and Yang Jiao. "Heterogeneous force network in 3D cellularized collagen networks." Physical Biology 13, no. 6 (October 25, 2016): 066001. http://dx.doi.org/10.1088/1478-3975/13/6/066001.

2

Gou, Pingzhang, Baoyong Guo, Miao Guo, and Shun Mao. "VKECE-3D: Energy-Efficient Coverage Enhancement in Three-Dimensional Heterogeneous Wireless Sensor Networks Based on 3D-Voronoi and K-Means Algorithm." Sensors 23, no. 2 (January 4, 2023): 573. http://dx.doi.org/10.3390/s23020573.

Abstract:
During these years, the 3D node coverage of heterogeneous wireless sensor networks that are closer to the actual application environment has become a strong focus of research. However, the direct application of traditional two-dimensional planar coverage methods to three-dimensional space suffers from high application complexity, a low coverage rate, and a short life cycle. Most methods ignore the network life cycle when considering coverage. The network coverage and life cycle determine the quality of service (QoS) in heterogeneous wireless sensor networks. Thus, energy-efficient coverage enhancement is a significantly pivotal and challenging task. To solve the above task, an energy-efficient coverage enhancement method, VKECE-3D, based on 3D-Voronoi partitioning and the K-means algorithm is proposed. The quantity of active nodes is kept to a minimum while guaranteeing coverage. Firstly, based on node deployment at random, the nodes are deployed twice using a highly destructive polynomial mutation strategy to improve the uniformity of the nodes. Secondly, the optimal perceptual radius is calculated using the K-means algorithm and 3D-Voronoi partitioning to enhance the network coverage quality. Finally, a multi-hop communication and polling working mechanism are proposed to lower the nodes’ energy consumption and lengthen the network’s lifetime. Its simulation findings demonstrate that compared to other energy-efficient coverage enhancement solutions, VKECE-3D improves network coverage and greatly lengthens the network’s lifetime.
3

Leng, Biao, Yu Liu, Kai Yu, Xiangyang Zhang, and Zhang Xiong. "3D object understanding with 3D Convolutional Neural Networks." Information Sciences 366 (October 2016): 188–201. http://dx.doi.org/10.1016/j.ins.2015.08.007.

4

Chaaban, Fadi, Hanan Darwishe, and Jamal El Khattabi. "A Semi-Automatic Approach in GIS for 3D Modeling and Visualization of Utility Networks: Application for Sewer & Stormwater networks." MATEC Web of Conferences 295 (2019): 02003. http://dx.doi.org/10.1051/matecconf/201929502003.

Abstract:
This paper presents a semi-automatic methodology proposed for 3D modeling of utility networks in GIS environment. The ModelBuilder in ArcGIS (ESRI) software is used for implementing this methodology, by developing two tools to automate the construction processes of 3D networks. The first presents a tool to create a 3D Manhole layer from points layer, and the second is a tool to create a 3D pipe layer. For both tools, a work algorithm has been built, in addition to designing user interfaces elements. These tools are stored in a Toolbox called “3D Manhole & Pipe.tbx”. The two previous tools were tested and applied to spatial data for a proposed residential area. The final 3D model of the residential area includes the sewage and stormwater networks, as well as other spatial data such as buildings, parks, roads, etc. This model is able to spot the intersection points in the network, visually or using the 3D analysis available in the software, allowing us to identify problems to be processed and resolved before starting a project, leading consequently to time and cost savings, effort and money. The proposed methodology is an easy and an effective way to build 3D network models (sewer, water..etc), and the developed tools allow the implementation of a set of necessary processes needed to build 3D networks.
5

Wang, Shaohua, Yeran Sun, Yinle Sun, Yong Guan, Zhenhua Feng, Hao Lu, Wenwen Cai, and Liang Long. "A Hybrid Framework for High-Performance Modeling of Three-Dimensional Pipe Networks." ISPRS International Journal of Geo-Information 8, no. 10 (October 8, 2019): 441. http://dx.doi.org/10.3390/ijgi8100441.

Abstract:
Three-dimensional (3D) pipe network modeling plays an essential part in high performance-based smart city applications. Given that massive 3D pipe networks tend to be difficult to manage and to visualize, we propose in this study a hybrid framework for high-performance modeling of a 3D pipe network, including pipe network data model and high-performance modeling. The pipe network data model is devoted to three-dimensional pipe network construction based on network topology and building information models (BIMs). According to the topological relationships of the pipe point pipelines, the pipe network is decomposed into multiple pipe segment units. The high-performance modeling of 3D pipe network contains a spatial 3D model, the instantiation, adaptive rendering, and combination parallel computing. Spatial 3D model (S3M) is proposed for spatial data transmission, exchange, and visualization of massive and multi-source 3D spatial data. The combination parallel computing framework with GPU and OpenMP was developed to reduce the processing time for pipe networks. The results of the experiments showed that the hybrid framework achieves a high efficiency and the hardware resource occupation is reduced.
6

Huang, Ming, Jingjing Yang, Zhe Xiao, Jun Sun, and Jinhui Peng. "Modeling the Dielectric Response in Heterogeneous Materials Using 3D RC Networks." Modern Physics Letters B 23, no. 25 (October 10, 2009): 3023–33. http://dx.doi.org/10.1142/s0217984909021090.

Abstract:
A model of 3D RC networks was developed to describe the dielectric response of heterogeneous materials and was compared with the results of the 2D RC network model. We show that the "universal dielectric response" (UDR) of heterogeneous materials is a common feature of both 2D and 3D very large networks with randomly positioned resistors and capacitors, and that the percolation threshold of the 2D and 3D bond network are close to 0.5 and 0.25, respectively. In addition, it was found that the percolation threshold of the 3D network is in good agreement with the result of the coherent potential (CP) formula.
7

Fries, David, and Geran Barton. "3D Microsensor Imaging Arrays Networks." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2015, DPC (January 1, 2015): 000348–78. http://dx.doi.org/10.4071/2015dpc-ta33.

Abstract:
2D microsensor arrays can permit spatial distribution measurements of the sensed parameter and enable high resolution sensing visualizations. Measuring constituents in a flowing media, such as air or liquid could benefit from such flow through or flow across imaging systems. These flow imagers can have applications in mobile robotics and non-visible imagery, and alternate mechanical systems of perception, process control and environmental observations. In order to create rigid-conformal, large area imaging systems we have in the past merged flexible PCB substrates with rigid constructions from 3D printing. This approach merges the 2D flexible electronics world of printed circuits with the 3D printed packaging world. Extending this 2D flow imaging concept into the third dimension permits 3D flow imaging networks, architectures and designs and can create a new class of sensing systems. Using 3D printing, 3D printed filaments, nets and microsensor cages, can be combined into integrated designs to generate distributed 3D imaging networks and camera systems for a variety of sensory applications.
8

Jeong, Cheol, and Won-Yong Shin. "Capacity of 3D Erasure Networks." IEEE Transactions on Communications 64, no. 7 (July 2016): 2900–2912. http://dx.doi.org/10.1109/tcomm.2016.2569580.

9

Berber, Mustafa, Petr Vaníček, and Peter Dare. "Robustness analysis of 3D networks." Journal of Geodynamics 47, no. 1 (January 2009): 1–8. http://dx.doi.org/10.1016/j.jog.2008.02.001.

10

Thomas, Edwin L. "Nanoscale 3D ordered polymer networks." Science China Chemistry 61, no. 1 (December 13, 2017): 25–32. http://dx.doi.org/10.1007/s11426-017-9138-5.

11

Livingstone, David J, and David T Manallack. "Neural Networks in 3D QSAR." QSAR & Combinatorial Science 22, no. 5 (July 2003): 510–18. http://dx.doi.org/10.1002/qsar.200310003.

12

Cai, Jiahui, and Jianguo Hu. "3D RANs: 3D Residual Attention Networks for action recognition." Visual Computer 36, no. 6 (July 25, 2019): 1261–70. http://dx.doi.org/10.1007/s00371-019-01733-3.

13

Lombardo, Andrew T., Shane R. Nelson, Guy G. Kennedy, Kathleen M. Trybus, Sam Walcott, and David M. Warshaw. "Myosin Va transport of liposomes in three-dimensional actin networks is modulated by actin filament density, position, and polarity." Proceedings of the National Academy of Sciences 116, no. 17 (April 9, 2019): 8326–35. http://dx.doi.org/10.1073/pnas.1901176116.

Abstract:
The cell’s dense 3D actin filament network presents numerous challenges to vesicular transport by teams of myosin Va (MyoVa) molecular motors. These teams must navigate their cargo through diverse actin structures ranging from Arp2/3-branched lamellipodial networks to the dense, unbranched cortical networks. To define how actin filament network organization affects MyoVa cargo transport, we created two different 3D actin networks in vitro. One network was comprised of randomly oriented, unbranched actin filaments; the other was comprised of Arp2/3-branched actin filaments, which effectively polarized the network by aligning the actin filament plus-ends. Within both networks, we defined each actin filament’s 3D spatial position using superresolution stochastic optical reconstruction microscopy (STORM) and its polarity by observing the movement of single fluorescent reporter MyoVa. We then characterized the 3D trajectories of fluorescent, 350-nm fluid-like liposomes transported by MyoVa teams (∼10 motors) moving within each of the two networks. Compared with the unbranched network, we observed more liposomes with directed and fewer with stationary motion on the Arp2/3-branched network. This suggests that the modes of liposome transport by MyoVa motors are influenced by changes in the local actin filament polarity alignment within the network. This mechanism was supported by an in silico 3D model that provides a broader platform to understand how cellular regulation of the actin cytoskeletal architecture may fine tune MyoVa-based intracellular cargo transport.
14

Kalayci, Selim, and Zeynep H. Gümüş. "Exploring Biological Networks in 3D, Stereoscopic 3D, and Immersive 3D with iCAVE." Current Protocols in Bioinformatics 61, no. 1 (March 2018): 8.27.1–8.27.26. http://dx.doi.org/10.1002/cpbi.47.

15

Hu, Zihe, Jing Guo, and Xuequan Zhang. "Three-Dimensional (3D) Parametric Modeling and Organization for Web-Based Visualization of City-Scale Pipe Network." ISPRS International Journal of Geo-Information 9, no. 11 (October 24, 2020): 623. http://dx.doi.org/10.3390/ijgi9110623.

Abstract:
Underground pipe network is a critical city infrastructure, which plays an important role in smart city management. As the detailed three-dimensional (3D) scene of underground pipe networks is difficult to construct, and massive numbers of pipe points and segments are difficult to manage, a 3D pipe network modeling and organization method is explored in this study. First, the modeling parameters were parsed from the pipe network survey data. Then, the 3D pipe segment and point models were built based on parametric modeling algorithms. Finally, a heterogeneous data structure for the 3D pipe network was established through loose quadtree data organization. The proposed data structure was suitable for 3D Tiles, which was adopted by Cesium (a web-based 3D virtual globe); hence, a multitude of pipe networks can be viewed in the browser. The proposed method was validated by generating and organizing a large-scale 3D pipe network scene of Beijing. The experimental results indicate that the 3D pipe network models formed by this method can satisfy the visual effect and render the efficiency required for smart urban management.
16

Hu, Kejian, and Xiaoguang Wu. "A Bridge Structure 3D Representation for Deep Neural Network and Its Application in Frequency Estimation." Advances in Civil Engineering 2022 (March 22, 2022): 1–13. http://dx.doi.org/10.1155/2022/1999013.

Abstract:
Currently, most predictions related to bridge geometry use shallow neural networks, which limit the network’s ability to fit since the input form limits the depth of the neural network. Therefore, this study proposed a new 3D representation of bridge structures. Based on the geometric parameters of the bridge structure, three 4D tensors were formed. This form of representation not only retained all geometric information but also expressed the spatial relationship of the structure. Then, this study constructed the corresponding 3D convolutional neural network and used it to estimate the frequency of the bridge. In addition, this study also developed a traditional shallow neural network for comparison. The application of 3D representation and 3D convolution could effectively reduce the prediction error. The 3D representation presented in this study could be used not only for frequency prediction but also for any prediction problems related to bridge geometry.
17

Pollet, Andreas M. A. O., Erik F. G. A. Homburg, Ruth Cardinaels, and Jaap M. J. den Toonder. "3D Sugar Printing of Networks Mimicking the Vasculature." Micromachines 11, no. 1 (December 30, 2019): 43. http://dx.doi.org/10.3390/mi11010043.

Abstract:
The vasculature plays a central role as the highway of the body, through which nutrients and oxygen as well as biochemical factors and signals are distributed by blood flow. Therefore, understanding the flow and distribution of particles inside the vasculature is valuable both in healthy and disease-associated networks. By creating models that mimic the microvasculature fundamental knowledge can be obtained about these parameters. However, microfabrication of such models remains a challenging goal. In this paper we demonstrate a promising 3D sugar printing method that is capable of recapitulating the vascular network geometry with a vessel diameter range of 1 mm down to 150 µm. For this work a dedicated 3D printing setup was built that is capable of accurately printing the sugar glass material with control over fibre diameter and shape. By casting of printed sugar glass networks in PDMS and dissolving the sugar glass, perfusable networks with circular cross-sectional channels are obtained. Using particle image velocimetry, analysis of the flow behaviour was conducted showing a Poiseuille flow profile inside the network and validating the quality of the printing process.
18

Firat, Mehmet. "Analysis of 3D Virtual Worlds as Connected Knowledge Networks." International Journal of Information and Education Technology 4, no. 2 (2014): 203–7. http://dx.doi.org/10.7763/ijiet.2014.v4.399.

19

Monteagudo, Jorge E. P., and Paulo L. C. Lage. "Cross-Properties Relations in 3D Percolation Networks: II. Network Permeability." Transport in Porous Media 61, no. 3 (December 2005): 259–74. http://dx.doi.org/10.1007/s11242-004-7363-2.

20

Mengu, Deniz, Yifan Zhao, Nezih T. Yardimci, Yair Rivenson, Mona Jarrahi, and Aydogan Ozcan. "Misalignment resilient diffractive optical networks." Nanophotonics 9, no. 13 (July 4, 2020): 4207–19. http://dx.doi.org/10.1515/nanoph-2020-0291.

Abstract:
As an optical machine learning framework, Diffractive Deep Neural Networks (D2NN) take advantage of data-driven training methods used in deep learning to devise light–matter interaction in 3D for performing a desired statistical inference task. Multi-layer optical object recognition platforms designed with this diffractive framework have been shown to generalize to unseen image data achieving, e.g., >98% blind inference accuracy for hand-written digit classification. The multi-layer structure of diffractive networks offers significant advantages in terms of their diffraction efficiency, inference capability and optical signal contrast. However, the use of multiple diffractive layers also brings practical challenges for the fabrication and alignment of these diffractive systems for accurate optical inference. Here, we introduce and experimentally demonstrate a new training scheme that significantly increases the robustness of diffractive networks against 3D misalignments and fabrication tolerances in the physical implementation of a trained diffractive network. By modeling the undesired layer-to-layer misalignments in 3D as continuous random variables in the optical forward model, diffractive networks are trained to maintain their inference accuracy over a large range of misalignments; we term this diffractive network design as vaccinated D2NN (v-D2NN). We further extend this vaccination strategy to the training of diffractive networks that use differential detectors at the output plane as well as to jointly-trained hybrid (optical-electronic) networks to reveal that all of these diffractive designs improve their resilience to misalignments by taking into account possible 3D fabrication variations and displacements during their training phase.
21

Merino, Ibon, Jon Azpiazu, Anthony Remazeilles, and Basilio Sierra. "3D Convolutional Neural Networks Initialized from Pretrained 2D Convolutional Neural Networks for Classification of Industrial Parts." Sensors 21, no. 4 (February 4, 2021): 1078. http://dx.doi.org/10.3390/s21041078.

Abstract:
Deep learning methods have been successfully applied to image processing, mainly using 2D vision sensors. Recently, the rise of depth cameras and other similar 3D sensors has opened the field for new perception techniques. Nevertheless, 3D convolutional neural networks perform slightly worse than other 3D deep learning methods, and even worse than their 2D version. In this paper, we propose to improve 3D deep learning results by transferring the pretrained weights learned in 2D networks to their corresponding 3D version. Using an industrial object recognition context, we have analyzed different combinations of 3D convolutional networks (VGG16, ResNet, Inception ResNet, and EfficientNet), comparing the recognition accuracy. The highest accuracy is obtained with EfficientNetB0 using extrusion with an accuracy of 0.9217, which gives comparable results to state-of-the art methods. We also observed that the transfer approach enabled to improve the accuracy of the Inception ResNet 3D version up to 18% with respect to the score of the 3D approach alone.
22

Jiang, Haiyang, Yaozong Pan, Jian Zhang, and Haitao Yang. "Battlefield Target Aggregation Behavior Recognition Model Based on Multi-Scale Feature Fusion." Symmetry 11, no. 6 (June 5, 2019): 761. http://dx.doi.org/10.3390/sym11060761.

Abstract:
In this paper, our goal is to improve the recognition accuracy of battlefield target aggregation behavior while maintaining the low computational cost of spatio-temporal depth neural networks. To this end, we propose a novel 3D-CNN (3D Convolutional Neural Networks) model, which extends the idea of multi-scale feature fusion to the spatio-temporal domain, and enhances the feature extraction ability of the network by combining feature maps of different convolutional layers. In order to reduce the computational complexity of the network, we further improved the multi-fiber network, and finally established an architecture—3D convolution Two-Stream model based on multi-scale feature fusion. Extensive experimental results on the simulation data show that our network significantly boosts the efficiency of existing convolutional neural networks in the aggregation behavior recognition, achieving the most advanced performance on the dataset constructed in this paper.
23

Kim, Seong Kwang, YeonJoo Jeong, Pavlo Bidenko, Hyeong-Rak Lim, Yu-Rim Jeon, Hansung Kim, Yun Jung Lee, et al. "3D Stackable Synaptic Transistor for 3D Integrated Artificial Neural Networks." ACS Applied Materials & Interfaces 12, no. 6 (January 15, 2020): 7372–80. http://dx.doi.org/10.1021/acsami.9b22008.

24

Yang, Zhuxian, Chunze Yan, Jinhui Liu, Sakineh Chabi, Yongde Xia, and Yanqiu Zhu. "Designing 3D graphene networks via a 3D-printed Ni template." RSC Advances 5, no. 37 (2015): 29397–400. http://dx.doi.org/10.1039/c5ra03454j.

25

Niu, Lei, Zhiyong Wang, Yiquan Song, and Yi Li. "An Evaluation Model for Analyzing Robustness and Spatial Closeness of 3D Indoor Evacuation Networks." ISPRS International Journal of Geo-Information 10, no. 5 (May 13, 2021): 331. http://dx.doi.org/10.3390/ijgi10050331.

Abstract:
Indoor evacuation efficiency heavily relies on the connectivity status of navigation networks. During disastrous situations, the spreading of hazards (e.g., fires, plumes) significantly influences indoor navigation networks’ status. Nevertheless, current research concentrates on utilizing classical statistical methods to analyze this status and lacks the flexibility to evaluate the increasingly disastrous scope’s influence. We propose an evaluation method combining 3D spatial geometric distance and topology for emergency evacuations to address this issue. Within this method, we offer a set of indices to describe the nodes’ status and the entire network under emergencies. These indices can help emergency responders quickly identify vulnerable nodes and areas in the network, facilitating the generation of evacuation plans and improving evacuation efficiency. We apply this method to analyze the fire evacuation efficiency and resilience of two experiment buildings’ indoor networks. Experimental results show a strong influence of the network’s spatial connectivity on the evacuation efficiency under disaster situations.
26

Chang, Inho, Min-Gyu Park, Je Woo Kim, and Ju Hong Yoon. "Absolute 3D Human Pose Estimation Using Noise-Aware Radial Distance Predictions." Symmetry 15, no. 1 (December 22, 2022): 25. http://dx.doi.org/10.3390/sym15010025.

Abstract:
We present a simple yet effective pipeline for absolute three-dimensional (3D) human pose estimation from two-dimensional (2D) joint keypoints, namely, the 2D-to-3D human pose lifting problem. Our method comprises two simple baseline networks, a 3D conversion function, and a correction network. The former two networks predict the root distance and the root-relative joint distance simultaneously. Given the input and predicted distances, the 3D conversion function recovers the absolute 3D pose, and the correction network reduces 3D pose noise caused by input uncertainties. Furthermore, to cope with input noise implicitly, we adopt a Siamese architecture that enforces the consistency of features between two training inputs, i.e., ground truth 2D joint keypoints and detected 2D joint keypoints. Finally, we experimentally validate the advantages of the proposed method and demonstrate its competitive performance over state-of-the-art absolute 2D-to-3D pose-lifting methods.
27

Lan, Qiang, Zelong Wang, Mei Wen, Chunyuan Zhang, and Yijie Wang. "High Performance Implementation of 3D Convolutional Neural Networks on a GPU." Computational Intelligence and Neuroscience 2017 (2017): 1–8. http://dx.doi.org/10.1155/2017/8348671.

Abstract:
Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
28

Jia, Xiaogang, Wei Chen, Zhengfa Liang, Xin Luo, Mingfei Wu, Chen Li, Yulin He, Yusong Tan, and Libo Huang. "A Joint 2D-3D Complementary Network for Stereo Matching." Sensors 21, no. 4 (February 18, 2021): 1430. http://dx.doi.org/10.3390/s21041430.

Abstract:
Stereo matching is an important research field of computer vision. Due to the dimension of cost aggregation, current neural network-based stereo methods are difficult to trade-off speed and accuracy. To this end, we integrate fast 2D stereo methods with accurate 3D networks to improve performance and reduce running time. We leverage a 2D encoder-decoder network to generate a rough disparity map and construct a disparity range to guide the 3D aggregation network, which can significantly improve the accuracy and reduce the computational cost. We use a stacked hourglass structure to refine the disparity from coarse to fine. We evaluated our method on three public datasets. According to the KITTI official website results, our network can generate an accurate result in 80 ms on a modern GPU. Compared to other 2D stereo networks (AANet, DeepPruner, FADNet, etc.), our network has a big improvement in accuracy. Meanwhile, it is significantly faster than other 3D stereo networks (5× than PSMNet, 7.5× than CSN and 22.5× than GANet, etc.), demonstrating the effectiveness of our method.
29

Dinc, Niyazi Ulas, Demetri Psaltis, and Daniel Brunner. "Optical neural networks: The 3D connection." Photoniques, no. 104 (September 2020): 34–38. http://dx.doi.org/10.1051/photon/202010434.

Abstract:
We motivate a canonical strategy for integrating photonic neural networks (NN) by leveraging 3D printing. Our belief is that a NN’s parallel and dense connectivity is not scalable without 3D integration. 3D additive fabrication complemented with photonic signal transduction can dramatically augment the current capabilities of 2D CMOS and integrated photonics. Here we review some of our recent advances made towards such an architecture.
30

Valderhaug, Vibeke Devold, Wilhelm Robert Glomm, Eugenia Mariana Sandru, Masahiro Yasuda, Axel Sandvig, and Ioanna Sandvig. "Formation of neural networks with structural and functional features consistent with small-world network topology on surface-grafted polymer particles." Royal Society Open Science 6, no. 10 (October 2019): 191086. http://dx.doi.org/10.1098/rsos.191086.

Abstract:
In vitro electrophysiological investigation of neural activity at a network level holds tremendous potential for elucidating underlying features of brain function (and dysfunction). In standard neural network modelling systems, however, the fundamental three-dimensional (3D) character of the brain is a largely disregarded feature. This widely applied neuroscientific strategy affects several aspects of the structure–function relationships of the resulting networks, altering network connectivity and topology, ultimately reducing the translatability of the results obtained. As these model systems increase in popularity, it becomes imperative that they capture, as accurately as possible, fundamental features of neural networks in the brain, such as small-worldness. In this report, we combine in vitro neural cell culture with a biologically compatible scaffolding substrate, surface-grafted polymer particles (PPs), to develop neural networks with 3D topology. Furthermore, we investigate their electrophysiological network activity through the use of 3D multielectrode arrays. The resulting neural network activity shows emergent behaviour consistent with maturing neural networks capable of performing computations, i.e. activity patterns suggestive of both information segregation (desynchronized single spikes and local bursts) and information integration (network spikes). Importantly, we demonstrate that the resulting PP-structured neural networks show both structural and functional features consistent with small-world network topology.
31

Ruan, Kun, Shun Zhao, Xueqin Jiang, Yixuan Li, Jianbo Fei, Dinghua Ou, Qiang Tang, Zhiwei Lu, Tao Liu, and Jianguo Xia. "A 3D Fluorescence Classification and Component Prediction Method Based on VGG Convolutional Neural Network and PARAFAC Analysis Method." Applied Sciences 12, no. 10 (May 12, 2022): 4886. http://dx.doi.org/10.3390/app12104886.

Abstract:
Three-dimensional fluorescence is currently studied by methods such as parallel factor analysis (PARAFAC), fluorescence regional integration (FRI), and principal component analysis (PCA). There are also many studies combining convolutional neural networks at present, but there is no one method recognized as the most effective among the methods combining convolutional neural networks and 3D fluorescence analysis. Based on this, we took some samples from the actual environment for measuring 3D fluorescence data and obtained a batch of public datasets from the internet species. Firstly, we preprocessed the data (including two steps of PARAFAC analysis and CNN dataset generation), and then we proposed a 3D fluorescence classification method and a components fitting method based on VGG16 and VGG11 convolutional neural networks. The VGG16 network is used for the classification of 3D fluorescence data with a training accuracy of 99.6% (as same as the PCA + SVM method (99.6%)). Among the component maps fitting networks, we comprehensively compared the improved LeNet network, the improved AlexNet network, and the improved VGG11 network, and finally selected the improved VGG11 network as the component maps fitting network. In the improved VGG11 network training, we used the MSE loss function and cosine similarity to judge the merit of the model, and the MSE loss of the network training reached 4.6 × 10−4 (characterizing the variability of the training results and the actual results), and we used the cosine similarity as the accuracy criterion, and the cosine similarity of the training results reached 0.99 (comparison of the training results and the actual results). The network performance is excellent. The experiments demonstrate that the convolutional neural network has a great application in 3D fluorescence analysis.
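The two fitting criteria used in the entry above, MSE loss and cosine similarity between a fitted component map and the reference map, can be sketched in a few lines of numpy. This is a generic illustration of the metrics, not code from the paper; the function names are our own.

```python
import numpy as np

def mse(pred, ref):
    """Mean squared error between a fitted component map and the reference."""
    pred, ref = np.asarray(pred, float).ravel(), np.asarray(ref, float).ravel()
    return float(np.mean((pred - ref) ** 2))

def cosine_similarity(pred, ref):
    """Cosine similarity between the flattened maps (1.0 = same direction)."""
    pred, ref = np.asarray(pred, float).ravel(), np.asarray(ref, float).ravel()
    return float(np.dot(pred, ref) / (np.linalg.norm(pred) * np.linalg.norm(ref)))

# A perfect fit gives MSE 0; cosine similarity is scale-invariant.
ref = np.array([[1.0, 2.0], [3.0, 4.0]])
assert mse(ref, ref) == 0.0
assert abs(cosine_similarity(ref, 2 * ref) - 1.0) < 1e-12
```

Because cosine similarity ignores overall scale, the paper's pairing of it with MSE is complementary: MSE penalizes amplitude errors that cosine similarity cannot see.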
32

Tumanov, E. A. "Fluid simulation utilizing 3D convolutional networks." Proceedings of Moscow Institute of Physics and Technology 13, no. 3 (2021): 109–17. http://dx.doi.org/10.53815/20726759_2021_13_3_109.

33

Laera, Anna Maria, Luciana Mirenghi, Monica Schioppa, Concetta Nobile, Laura Capodieci, Anna Grazia Scalone, Francesca Di Benedetto, and Leander Tapfer. "Fabrication of 3D carbon nanotube networks." Materials Research Express 3, no. 8 (August 4, 2016): 085007. http://dx.doi.org/10.1088/2053-1591/3/8/085007.

34

Scarselli, Manuela. "(Invited) 3D Multifunctional Carbon Nanotube Networks." ECS Transactions 86, no. 1 (July 20, 2018): 85–96. http://dx.doi.org/10.1149/08601.0085ecst.

35

Ahlström, Johan, Casey Jessop, Lars Hammar, and Christer Persson. "3D characterisation of RCF crack networks." MATEC Web of Conferences 12 (2014): 06001. http://dx.doi.org/10.1051/matecconf/20141206001.

36

Ron, Racheli, Marcin Stefan Zielinski, and Adi Salomon. "Cathodoluminescence Nanoscopy of 3D Plasmonic Networks." Nano Letters 20, no. 11 (October 15, 2020): 8205–11. http://dx.doi.org/10.1021/acs.nanolett.0c03317.

37

Holzreiter, Stefan. "Autolabeling 3D tracks using neural networks." Clinical Biomechanics 20, no. 1 (January 2005): 1–8. http://dx.doi.org/10.1016/j.clinbiomech.2004.04.006.

38

Wu, Willie, Adam DeConinck, and Jennifer A. Lewis. "Omnidirectional Printing of 3D Microvascular Networks." Advanced Materials 23, no. 24 (March 23, 2011): H178—H183. http://dx.doi.org/10.1002/adma.201004625.

39

Jian, Hongdeng, Xiangtao Fan, Jian Liu, Qingwen Jin, and Xujie Kang. "A Quaternion-Based Piecewise 3D Modeling Method for Indoor Path Networks." ISPRS International Journal of Geo-Information 8, no. 2 (February 15, 2019): 89. http://dx.doi.org/10.3390/ijgi8020089.

Abstract:
Generating 3D path models (with textures) from indoor paths is a good way to improve the visualization performance of 3D indoor path analysis. In this paper, a quaternion-based piecewise 3D modeling method is proposed to automatically generate highly recognizable 3D models for indoor path networks. To create such models, indoor paths are classified into four types of basic elements (corridor, stairs, elevator and node), which comprise six kinds of edges and seven kinds of nodes. A quaternion-based method is devised to calculate the coordinates of the designed elements, and a piecewise 3D modeling method is implemented to create the entire 3D indoor path model in a 3D GIS scene. A numerical comparison of 3D scene primitives in different visualization modes indicates that the proposed method generates detailed and non-redundant models for indoor path networks. The 3D path analysis results show that the models improve the visualization performance of a 3D indoor path network by displaying paths with different shapes, textures and colors, while maintaining high rendering efficiency (above 50 frames per second) in a 3D GIS scene containing more than 50,000 polygons and triangles.
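The entry above computes element coordinates with quaternions but does not spell out the formulas. As background, the core operation such methods rest on is rotating a 3D point by a unit quaternion, p' = q (0, p) q*; a minimal sketch (our own illustration, not the paper's code):

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def rotate(point, q):
    """Rotate a 3D point: p' = q * (0, p) * q^-1 (q is unit, so q^-1 is its conjugate)."""
    p = np.concatenate(([0.0], np.asarray(point, float)))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, p), q_conj)[1:]

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives (0, 1, 0).
q = quat_from_axis_angle([0, 0, 1], np.pi / 2)
assert np.allclose(rotate([1.0, 0.0, 0.0], q), [0.0, 1.0, 0.0])
```

Chaining such rotations piecewise along corridor, stair and elevator segments is what lets a path skeleton be extruded into an oriented 3D model.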
40

Thalore, Ranjana, Partha Pratim Bhattacharya, and Manish Kumar Jha. "Performance Comparison of Homogeneous and Heterogeneous 3D Wireless Sensor Networks." Journal of Telecommunications and Information Technology, no. 2 (June 30, 2017): 32–37. http://dx.doi.org/10.26636/jtit.2017.110216.

Abstract:
Recent developments in wireless sensor networks include applications in safety, medical monitoring, environment monitoring and many more. Limited battery energy and efficient data delivery are the most important constraints for sensor nodes: once a node's battery is depleted, the node stops functioning. The network lifetime can be extended with the Multi-Layer MAC protocol (ML-MAC). This paper presents a practical approach with three-dimensional deployment of sensor nodes and analyzes two types of networks, homogeneous and heterogeneous WSNs. To analyze various QoS parameters, two types of nodes are considered in the heterogeneous network. The performance of both networks is compared through simulations. The results show that ML-MAC performs better for 3D heterogeneous WSNs.
41

Nyan, L. T., A. I. Gavrilov, and M. T. Do. "Classification of Hyperspectral Remote Earth Sensing Data using Combined 3D--2D Convolutional Neural Networks." Herald of the Bauman Moscow State Technical University. Series Instrument Engineering, no. 1 (138) (March 2022): 100–118. http://dx.doi.org/10.18698/0236-3933-2022-1-100-118.

Abstract:
Hyperspectral image classification is used for analyzing remote Earth sensing data, and convolutional neural networks are among the most commonly used deep-learning methods for processing visual data. The article proposes a hybrid 3D-2D spectral convolutional neural network for hyperspectral image classification. At the initial stage, a simple combined deep-learning model is constructed by joining 2D and 3D convolutional neural networks to extract deeper spatial-spectral features with fewer 3D-2D convolutions; the 3D network provides a joint spatial-spectral representation of objects from a stack of spectral bands. Principal component analysis is applied to reduce the dimensionality. Classification experiments were performed on the Indian Pines, University of Pavia and Salinas Scene remote sensing datasets. The first feature-map layer is used as input for the subsequent layers when predicting the final label of each hyperspectral pixel. The proposed method not only retains the benefits of advanced feature extraction with convolutional neural networks, but also makes full use of spectral and spatial information. The effectiveness of the method was tested on the three reference datasets; the results show that a multifunctional learning system based on such networks significantly improves classification accuracy (more than 99%).
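The PCA dimension-reduction step mentioned in the entry above is commonly implemented by projecting each pixel's spectrum of an (H, W, B) hyperspectral cube onto the top-k principal components before the cube is fed to the 3D-2D network. A minimal numpy sketch of that step (an assumption about the standard pipeline, not the authors' code):

```python
import numpy as np

def pca_reduce_cube(cube, k):
    """Reduce the spectral dimension of an (H, W, B) hyperspectral cube to k
    bands by projecting each pixel's spectrum onto the first k principal
    components of the band-centered data."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    x -= x.mean(axis=0)                      # center each spectral band
    _, _, vt = np.linalg.svd(x, full_matrices=False)  # rows of vt = components
    return (x @ vt[:k].T).reshape(h, w, k)   # scores of the top-k components

# Toy 4x4 cube with 10 bands reduced to 3 "bands" for the network input.
rng = np.random.default_rng(0)
cube = rng.normal(size=(4, 4, 10))
reduced = pca_reduce_cube(cube, 3)
assert reduced.shape == (4, 4, 3)
```

Because SVD orders singular values from largest to smallest, the first retained band carries the most spectral variance, which is why a small k can keep most of the information the classifier needs.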
42

Luo, Guoliang, Guoming Xiong, Xiaojun Huang, Xin Zhao, Yang Tong, Qiang Chen, Zhiliang Zhu, Haopeng Lei, and Juncong Lin. "Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation." Sensors 23, no. 4 (February 9, 2023): 1937. http://dx.doi.org/10.3390/s23041937.

Abstract:
Despite progress in the past decades, 3D shape acquisition techniques remain a bottleneck for various 3D face-based applications and have therefore attracted extensive research. Moreover, advanced 2D data generation models based on deep networks may not be directly applicable to 3D objects because of the different dimensionality of 2D and 3D data. In this work, we propose two novel sampling methods to represent 3D faces as matrix-like structured data that better fit deep networks: (1) a geometric sampling method based on the intersection of iso-geodesic curves and radial curves, and (2) a depth-like map sampling method using the average depth of grid cells on the front surface. These sampling methods bridge the gap between unstructured 3D face models and powerful deep networks for an unsupervised generative 3D face model. In particular, they yield a structured representation of 3D faces, which enables us to adapt 3D faces to the Deep Convolution Generative Adversarial Network (DCGAN) and generate better 3D faces with different expressions. We demonstrated the effectiveness of our generative model by producing a large variety of 3D faces with different expressions using the two novel down-sampling methods.
43

Mahmoudi Kouhi, Reza, Sylvie Daniel, and Philippe Giguère. "Data Preparation Impact on Semantic Segmentation of 3D Mobile LiDAR Point Clouds Using Deep Neural Networks." Remote Sensing 15, no. 4 (February 10, 2023): 982. http://dx.doi.org/10.3390/rs15040982.

Abstract:
Currently, 3D point clouds are widely used because they represent 3D objects reliably and localize them accurately. However, raw point clouds are unstructured and contain no semantic information about the objects. Recently, dedicated deep neural networks have been proposed for the semantic segmentation of 3D point clouds. The focus has been on network architecture, but the performance of networks such as Kernel Point Convolution (KPConv) shows that the way data are presented at the input of the network also matters. Few prior works have studied the impact of data preparation on the performance of deep neural networks, so our goal was to address this issue. We propose two novel data preparation methods that are compatible with the typical density variations of outdoor 3D LiDAR point clouds, and we also investigate two existing data preparation methods to show their impact on deep neural networks. We compared the four methods with a baseline based on the point cloud partitioning of PointNet++, experimenting with two deep neural networks: PointNet++ and KPConv. The results showed that any of the proposed data preparation methods improved the performance of both networks by a tangible margin over the baseline, and the two novel methods achieved the best results for both networks. For datasets containing many classes with widely varying sizes, KNN-based data preparation outperformed the Fixed Radius (FR) method. Moreover, this research allowed us to identify guidelines for meaningful downsampling and partitioning of large-scale outdoor 3D LiDAR point clouds at the input of deep neural networks.
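The KNN vs Fixed Radius contrast in the entry above comes down to how a neighborhood is selected around each query point. A brute-force numpy sketch of the two policies (illustrative only; real pipelines use spatial indices such as k-d trees):

```python
import numpy as np

def knn_neighbors(points, query, k):
    """Indices of the k nearest points to `query` (set size is constant,
    so the neighborhood radius adapts to the local density)."""
    d = np.linalg.norm(points - query, axis=1)
    return np.argsort(d)[:k]

def fixed_radius_neighbors(points, query, r):
    """Indices of all points within radius r of `query` (radius is constant,
    so the set size shrinks in sparse regions)."""
    d = np.linalg.norm(points - query, axis=1)
    return np.flatnonzero(d <= r)

pts = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [10, 0, 0]])
# Near the dense end both policies agree; near the isolated point at x=10,
# FR with r=1.5 finds only the point itself.
assert set(knn_neighbors(pts, np.array([0.0, 0, 0]), 2)) == {0, 1}
assert len(fixed_radius_neighbors(pts, np.array([9.0, 0, 0]), 1.5)) == 1
```

This is the behavior the paper reports: on datasets with widely varying class sizes, KNN's constant-size neighborhoods give the network a more uniform input than FR's density-dependent ones.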
44

ZHANG, DAKUN, GUOZHI SONG, KUNLIANG LIU, YONG MA, CHENGLONG ZHAO, and XU AN WANG. "Comprehensive Improved Simulated Annealing Optimization for Floorplanning of Heterogeneous 3D Networks-on-Chip." Journal of Interconnection Networks 15, no. 03n04 (September 2015): 1540006. http://dx.doi.org/10.1142/s021926591540006x.

Abstract:
With the rapid development of integrated-circuit manufacturing processes, poor system scalability has become a prominent problem for System on Chip (SoC) designs. To solve bottlenecks such as global synchronization, Networks on Chip (NoC) have emerged as a new design to meet the increasing communication demand among on-chip elements. Research has since expanded from two-dimensional to three-dimensional designs: 3D networks-on-chip combine 3D integration technology with 2D networks-on-chip, inheriting the advantages of both to meet the trend toward diversified chip functions. This paper presents an improved floorplanning optimization algorithm based on simulated annealing (Comprehensive Improved Simulated Annealing, hereinafter CISA) to replace the original simulated-annealing-based floorplanning algorithm (hereinafter SA) and make it more applicable to three-dimensional network-on-chip simulation. The paper describes the ideas behind the CISA improvements and applies the algorithm in an existing 3D network-on-chip simulator with a set of classical simulation tests. The results show that the proposed CISA algorithm outperforms the original SA algorithm and is better suited to simulations of three-dimensional networks-on-chip, especially when dealing with large-scale 3D NoCs.
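The baseline SA loop that CISA-style floorplanners refine can be sketched in a few lines: accept a worse candidate floorplan with probability exp(-delta/T) and cool T geometrically. The paper's actual cost model (wirelength, thermal, TSV count) is not given here, so the sketch minimizes a toy 1D cost instead:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, t_min=1e-3, alpha=0.95, steps=50):
    """Baseline SA: accept worse solutions with probability exp(-delta/T),
    cooling the temperature T geometrically by factor `alpha`."""
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    while t > t_min:
        for _ in range(steps):
            y = neighbor(x)
            cy = cost(y)
            delta = cy - c
            if delta < 0 or random.random() < math.exp(-delta / t):
                x, c = y, cy                 # accept the move
                if c < best_c:
                    best, best_c = x, c      # track the best floorplan seen
        t *= alpha
    return best, best_c

# Toy 1D "placement" cost with its minimum at x = 3.
random.seed(0)
sol, val = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    x0=0.0,
)
assert abs(sol - 3) < 0.5
```

In a real 3D NoC floorplanner, `neighbor` would perturb module positions or layer assignments, and the improvements an algorithm like CISA makes typically target the cooling schedule and the move set rather than this accept/reject core.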
45

Wu, Qiang. "Construction and 3D Simulation of Virtual Animation Instant Network Communication System Based on Convolution Neural Networks." Computational Intelligence and Neuroscience 2021 (August 28, 2021): 1–9. http://dx.doi.org/10.1155/2021/7277733.

Abstract:
In recent years, great progress has been made in 3D simulation modeling for instant network communication systems, such as the application of virtual reality technology and online 3D virtual animation modeling. Facing the growing demands of different industries, how to build an instant network communication system for 3D virtual animation has become a research hotspot. On this basis, this paper studies a construction method for a fast instant network communication system based on convolutional neural networks and a fused morphological 3D simulation model. It reviews the state of research on instant network communication systems, addresses the shortcomings of current virtual-animation communication systems, and takes morphological 3D simulation model fusion as the core of the optimization. The experimental results show that the fused morphological 3D simulation model can reconstruct a standard 3D virtual animation model according to different needs and can quickly adapt the modeling strategy to local differences between animations. The response accuracy of the network communication system reaches 97.7%.
46

Joshi, Abhishek, Sarang Dhongdi, Rishabh Sethunathan, Pritish Nahar, and K. R. Anupama. "Energy Efficient Clustering Based Network Protocol Stack for 3D Airborne Monitoring System." Journal of Computer Networks and Communications 2017 (2017): 1–13. http://dx.doi.org/10.1155/2017/8921261.

Abstract:
Wireless sensor networks consist of a large number of nodes densely deployed in an ad hoc manner. Most WSN applications require a two-dimensional (2D) topology, but emerging areas such as airborne networks and underwater wireless sensor networks are usually deployed with a three-dimensional (3D) topology. In this paper, a static 3D cluster-based network topology is proposed for airborne networks, together with a network protocol stack comprising a TDMA MAC and dynamic routing along with services such as time synchronization, Cluster Head rotation, and power-level management. The proposed stack has been implemented on a hardware platform consisting of a number of TelosB nodes. This 3D airborne network architecture can be used to measure the Air Quality Index (AQI) in an area. Network parameters such as energy consumption, Cluster Head rotation, time synchronization, and Packet Delivery Ratio (PDR) are analyzed, and the paper provides a detailed description of the protocol stack implementation along with the results.
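The Cluster Head rotation service mentioned in the entry above is not specified in the abstract; a common policy in cluster-based WSN stacks is to hand the CH role each round to the node with the most residual energy, since the CH spends more energy than cluster members. A toy sketch of that policy (an assumed illustration, not the paper's protocol):

```python
def rotate_cluster_head(cluster_energy):
    """Pick the node with the most residual energy as the next Cluster Head.
    `cluster_energy` maps node id -> remaining energy."""
    return max(cluster_energy, key=cluster_energy.get)

def run_rounds(energy, rounds, ch_cost=0.5, member_cost=0.1):
    """Simulate rotation: the CH drains more energy per round than members,
    so the role migrates and the load is spread across the cluster."""
    history = []
    for _ in range(rounds):
        ch = rotate_cluster_head(energy)
        history.append(ch)
        for node in energy:
            energy[node] -= ch_cost if node == ch else member_cost
    return history

# With nearly equal initial energies the CH role rotates each round.
assert run_rounds({"a": 1.0, "b": 1.1, "c": 0.9}, 3) == ["b", "a", "c"]
```

Spreading the CH burden this way is what lets cluster-based stacks extend the lifetime of the whole network rather than exhausting one node first.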
47

Vasileiadis, Manolis, Christos-Savvas Bouganis, and Dimitrios Tzovaras. "Multi-person 3D pose estimation from 3D cloud data using 3D convolutional neural networks." Computer Vision and Image Understanding 185 (August 2019): 12–23. http://dx.doi.org/10.1016/j.cviu.2019.04.011.

48

Wang, Hao, Ming-hui Sun, Hao Zhang, and Li-yan Dong. "LHPE-nets: A lightweight 2D and 3D human pose estimation model with well-structural deep networks and multi-view pose sample simplification method." PLOS ONE 17, no. 2 (February 23, 2022): e0264302. http://dx.doi.org/10.1371/journal.pone.0264302.

Abstract:
Cross-view 3D human pose estimation models have made significant progress; through multi-view fusion they handle human joint localization and 3D skeleton modeling well. The multi-view 2D pose estimation part of such models is essential, but its training cost is very high, since deep networks must generate heatmaps for each view. In this article we therefore evaluated several recent deep networks for the pose estimation task, including MobileNetV2, MobileNetV3, EfficientNetV2 and ResNet. Based on the performance and drawbacks of these networks, we built deep networks with better performance, which we call LHPE-nets; they mainly comprise a Low-Span network and an RDNS network. LHPE-nets uses a network structure with evenly distributed channels, inverted residuals, external residual blocks and a framework for processing small-resolution samples to reach training saturation faster. We also designed a static pose sample simplification method for 3D pose data, which enables low-cost sample storage and convenient reading of samples by models. In the experiments we used several recent models and two public evaluation metrics. The results show the advantages of this work in fast start-up and network lightness: training reaches saturation about 1-5 epochs faster than ResNet-34, the estimation accuracy of approximately 60% of the joints is improved, and the overall human pose estimation error is more than 7 mm lower than that of the other networks. The article analyzes in detail the network size, fast start-up and 2D/3D pose estimation performance of the model; compared with other pose estimation models, its performance reaches a higher application level.
49

Tan, Jindong, Fanyu Kong, and Wei Liang. "A 3D object model for wireless camera networks with network constraints." Transactions of the Institute of Measurement and Control 35, no. 7 (September 12, 2012): 866–74. http://dx.doi.org/10.1177/0142331212457584.

50

Fu, Junwei, and Jun Liang. "Virtual View Generation Based on 3D-Dense-Attentive GAN Networks." Sensors 19, no. 2 (January 16, 2019): 344. http://dx.doi.org/10.3390/s19020344.

Abstract:
A binocular vision system is a common perception component of an intelligent vehicle; benefiting from its biomimetic structure, the system is simple and effective. However, such systems are extremely sensitive to external factors, especially missing vision signals. In this paper, a virtual view-generation algorithm based on generative adversarial networks (GAN) is proposed to enhance the robustness of binocular vision systems. The proposed model consists of two parts: a generator network and a discriminator network. To improve the quality of the virtual view, a generator structure based on 3D convolutional neural networks (3D-CNN) and attention mechanisms is introduced to extract time-series features from image sequences. To avoid vanishing gradients during training, a dense-block structure is used in the discriminator network. Meanwhile, three kinds of image features (image edges, depth maps and optical flow) are extracted to constrain the supervised training of the model. Results on the KITTI and Cityscapes datasets demonstrate that our algorithm outperforms conventional methods, and the missing vision signal can be replaced by a generated virtual view.