To view the other types of publications on this topic, follow this link: Single to multiview conversion.

Journal articles on the topic "Single to multiview conversion"


Consult the top 50 journal articles for your research on the topic "Single to multiview conversion".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Lu, Shao-Ping, Sibo Feng, Beerend Ceulemans, Miao Wang, Rui Zhong, and Adrian Munteanu. "Multiview conversion of 2D cartoon images." Communications in Information and Systems 16, no. 4 (2016): 229–54. http://dx.doi.org/10.4310/cis.2016.v16.n4.a2.

2

Chapiro, Alexandre, Simon Heinzle, Tunç Ozan Aydın, Steven Poulakos, Matthias Zwicker, Aljosa Smolic, and Markus Gross. "Optimizing stereo-to-multiview conversion for autostereoscopic displays." Computer Graphics Forum 33, no. 2 (May 2014): 63–72. http://dx.doi.org/10.1111/cgf.12291.

3

Gu, Yi, and Kang Li. "Entropy-Based Multiview Data Clustering Analysis in the Era of Industry 4.0." Wireless Communications and Mobile Computing 2021 (April 30, 2021): 1–8. http://dx.doi.org/10.1155/2021/9963133.

Abstract:
In the era of Industry 4.0, single-view clustering algorithms struggle to cope with complex, i.e., multiview, data. In recent years, multiview clustering technology, an extension of traditional single-view clustering, has become more and more popular. Although multiview clustering algorithms are more effective than single-view ones, almost all current multiview clustering algorithms share two weaknesses: (1) the current multiview collaborative clustering strategy lacks theoretical support, and (2) the weight of each view is simply averaged. To solve these problems, we used the Havrda-Charvat entropy and a fuzzy index to construct a new collaborative multiview fuzzy c-means clustering algorithm with fuzzy weighting, called Co-MVFCM. The corresponding results show that Co-MVFCM has the best clustering performance among all the compared clustering algorithms.
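The view-weighting idea behind this family of algorithms can be illustrated with a generic weighted fuzzy c-means sketch in plain NumPy. This is not the authors' Co-MVFCM; the exponential view-weight update below is just one common entropy-regularized choice, and all parameter values are illustrative:

```python
import numpy as np

def multiview_fcm(views, k, m=2.0, gamma=5.0, n_iter=30, seed=0):
    """Toy view-weighted fuzzy c-means (illustrative, not Co-MVFCM).

    Alternates three updates: fuzzy memberships U, per-view centroids,
    and entropy-regularized view weights w.
    """
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    U = rng.dirichlet(np.ones(k), size=n)      # n x k fuzzy memberships
    w = np.full(len(views), 1.0 / len(views))  # view weights, sum to 1
    for _ in range(n_iter):
        Um = U ** m
        # per-view centroids from the current memberships
        centers = [(Um.T @ X) / Um.sum(axis=0)[:, None] for X in views]
        # view-weighted squared distance of every point to every centroid
        d2 = sum(wv * ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
                 for wv, X, C in zip(w, views, centers))
        d2 = np.maximum(d2, 1e-12)
        # standard fuzzy c-means membership update on the fused distances
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        # entropy-style reweighting: views with lower error gain weight
        errs = np.array([((U ** m) * ((X[:, None, :] - C[None, :, :]) ** 2)
                          .sum(axis=-1)).sum() for X, C in zip(views, centers)])
        w = np.exp(-(errs - errs.min()) / gamma)
        w /= w.sum()
    return U.argmax(axis=1), w
```

On two toy views of the same two well-separated clusters, the hard labels recover the grouping and the weights remain a proper distribution over views.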
4

Kanaan-Izquierdo, Samir, Andrey Ziyatdinov, Maria Araceli Burgueño, and Alexandre Perera-Lluna. "Multiview: a software package for multiview pattern recognition methods." Bioinformatics 35, no. 16 (December 31, 2018): 2877–79. http://dx.doi.org/10.1093/bioinformatics/bty1039.

Abstract:
Summary: Multiview datasets are the norm in bioinformatics, often under the label multi-omics. Multiview data are gathered from several experiments, measurements or feature sets available for the same subjects. Recent studies in pattern recognition have shown the advantage of using multiview methods of clustering and dimensionality reduction; however, to the extent of our knowledge, none of these methods are readily available. Multiview extensions of four well-known pattern recognition methods are proposed here: three multiview dimensionality reduction methods (multiview t-distributed stochastic neighbour embedding, multiview multidimensional scaling and multiview minimum curvilinearity embedding), as well as a multiview spectral clustering method. Tested here on four multiview datasets, they often produce better results than their single-view counterparts. Availability and implementation: R package at the B2SLab site: http://b2slab.upc.edu/software-and-tutorials/ and Python package: https://pypi.python.org/pypi/multiview. Supplementary information: Supplementary data are available at Bioinformatics online.
5

Wang, Qingjun, Haiyan Lv, Jun Yue, and Eugene Mitchell. "Supervised multiview learning based on simultaneous learning of multiview intact and single view classifier." Neural Computing and Applications 28, no. 8 (January 21, 2016): 2293–301. http://dx.doi.org/10.1007/s00521-016-2189-8.

6

Shin, Hong-Chang, Jinwhan Lee, Gwangsoon Lee, and Namho Hur. "Stereo-To-Multiview Conversion System Using FPGA and GPU Device." Journal of Broadcast Engineering 19, no. 5 (September 30, 2014): 616–26. http://dx.doi.org/10.5909/jbe.2014.19.5.616.

7

Kuo, Pin-Chen, Kuan-Lun Lo, Huan-Kai Tseng, Kuan-Ting Lee, Bin-Da Liu, and Jar-Ferr Yang. "Stereoview to Multiview Conversion Architecture for Auto-Stereoscopic 3D Displays." IEEE Transactions on Circuits and Systems for Video Technology 28, no. 11 (November 2018): 3274–87. http://dx.doi.org/10.1109/tcsvt.2017.2732061.

8

Pei, Jifang, Weibo Huo, Chenwei Wang, Yulin Huang, Yin Zhang, Junjie Wu, and Jianyu Yang. "Multiview Deep Feature Learning Network for SAR Automatic Target Recognition." Remote Sensing 13, no. 8 (April 9, 2021): 1455. http://dx.doi.org/10.3390/rs13081455.

Abstract:
Multiview synthetic aperture radar (SAR) images contain much richer information for automatic target recognition (ATR) than a single-view image. It is desirable to establish a reasonable multiview ATR scheme and design an effective ATR algorithm that thoroughly learns and extracts that classification information, so that superior SAR ATR performance can be achieved. Hence, a general processing framework applicable to a multiview SAR ATR pattern is first given in this paper, which can provide an effective approach to ATR system design. Then, a new ATR method using a multiview deep feature learning network is designed based on the proposed multiview ATR framework. The proposed neural network has a multiple-input parallel topology and several distinct deep feature learning modules, with which the significant classification features, i.e., the intra-view and inter-view features of the input multiview SAR images, are learned simultaneously and thoroughly. Therefore, the proposed multiview deep feature learning network can achieve an excellent SAR ATR performance. Experimental results have shown the superiority of the proposed multiview SAR ATR method under various operating conditions.
9

Pei, Jifang, Zhiyong Wang, Xueping Sun, Weibo Huo, Yin Zhang, Yulin Huang, Junjie Wu, and Jianyu Yang. "FEF-Net: A Deep Learning Approach to Multiview SAR Image Target Recognition." Remote Sensing 13, no. 17 (September 2, 2021): 3493. http://dx.doi.org/10.3390/rs13173493.

Abstract:
Synthetic aperture radar (SAR) is an advanced microwave imaging system of great importance. The recognition of real-world targets from SAR images, i.e., automatic target recognition (ATR), is an attractive but challenging issue. The majority of existing SAR ATR methods are designed for single-view SAR images. However, multiview SAR images contain more abundant classification information than single-view SAR images, which benefits automatic target classification and recognition. This paper proposes an end-to-end deep feature extraction and fusion network (FEF-Net) that can effectively exploit recognition information from multiview SAR images and can boost the target recognition performance. The proposed FEF-Net is based on a multiple-input network structure with some distinct and useful learning modules, such as deformable convolution and squeeze-and-excitation (SE). Multiview recognition information can be effectively extracted and fused with these modules. Therefore, excellent multiview SAR target recognition performance can be achieved by the proposed FEF-Net. The superiority of the proposed FEF-Net was validated based on experiments with the moving and stationary target acquisition and recognition (MSTAR) dataset.
10

Guo, Peng, Guoqi Xie, and Renfa Li. "Object Detection Using Multiview CCA-Based Graph Spectral Learning." Journal of Circuits, Systems and Computers 29, no. 02 (April 23, 2019): 2050022. http://dx.doi.org/10.1142/s021812662050022x.

Abstract:
Recent years have witnessed a surge of interest in semi-supervised learning-based object detection. Object detection is usually accomplished by classification methods. Unlike conventional methods, which usually adopt a single feature view or concatenate multiple features into one long feature vector, multiview graph spectral learning can achieve object classification and multiview weight learning simultaneously. However, most existing multiview graph spectral learning (GSL) methods are only concerned with the complementarity between multiple views, not with correlation information. Accurately representing image objects is difficult because an image object has multiple views simultaneously. Thus, we offer a GSL method based on multiview canonical correlation analysis (GSL-MCCA). The method adds an MCCA regularization term to a graph learning framework. To enable MCCA to reveal the nonlinear correlation information hidden in multiview data, manifold local structure information is incorporated into MCCA. Thus, GSL-MCCA leads to simultaneous selection of relevant features and learning of the transformation. Experimental evaluations on the Corel and VOC datasets suggest the effectiveness of GSL-MCCA in object detection.
11

Solimene, Raffaele, Aniello Buonanno, and Rocco Pierri. "Localization of small PEC spheres by multiview/single-frequency data." Near Surface Geophysics 6, no. 6 (August 1, 2008): 371–79. http://dx.doi.org/10.3997/1873-0604.2008025.

12

Rioja, M., R. Dodson, G. Orosz, and H. Imai. "MultiView High Precision VLBI Astrometry at Low Frequencies." Proceedings of the International Astronomical Union 13, S336 (September 2017): 439–42. http://dx.doi.org/10.1017/s1743921317010560.

Abstract:
Observations at low frequencies (<8 GHz) are dominated by distinct direction-dependent ionospheric propagation errors, which place a very tight limit on the angular separation of a suitable phase-referencing calibrator for astrometry. To increase the capability for high-precision astrometric measurements, a widely applicable, effective calibration strategy for the systematic ionospheric propagation effects is required. The MultiView technique holds the key to the compensation of atmospheric spatial-structure errors by using observations of multiple calibrators and two-dimensional interpolation. In this paper we present the first demonstration of the power of MultiView using three calibrators, several degrees from the target, along with a comparative study of the astrometric accuracy between MultiView and phase-referencing techniques. MultiView calibration provides an order-of-magnitude improvement in astrometry with respect to conventional phase referencing, achieving ~100 micro-arcsecond astrometric errors in a single epoch of observations, effectively reaching the thermal noise limit.
13

Sahin, Erdem, Suren Vagharshakyan, Robert Bregovic, Gwangsoon Lee, and Atanas Gotchev. "Conversion of sparsely-captured light field into alias-free full-parallax multiview content." Electronic Imaging 2018, no. 4 (January 28, 2018): 144-1. http://dx.doi.org/10.2352/issn.2470-1173.2018.04.sda-144.

14

Schaffner, Michael, Frank K. Gurkaynak, Pierre Greisen, Hubert Kaeslin, Luca Benini, and Aljosa Smolic. "Hybrid ASIC/FPGA System for Fully Automatic Stereo-to-Multiview Conversion Using IDW." IEEE Transactions on Circuits and Systems for Video Technology 26, no. 11 (November 2016): 2093–108. http://dx.doi.org/10.1109/tcsvt.2015.2501640.

15

Sabah, Ali, Sabrina Tiun, Nor Samsiah Sani, Masri Ayob, and Adil Yaseen Taha. "Enhancing web search result clustering model based on multiview multirepresentation consensus cluster ensemble (mmcc) approach." PLOS ONE 16, no. 1 (January 15, 2021): e0245264. http://dx.doi.org/10.1371/journal.pone.0245264.

Abstract:
Existing text clustering methods utilize only one representation at a time (single view), whereas documents can be represented by multiple views; a multiview, multirepresentation method enhances clustering quality. Moreover, existing clustering methods that utilize more than one representation at a time (multiview) use representations of the same nature. Hence, combining clustering methods with multiple views that represent the data in different representations is a reasonable way to create a diverse set of candidate clustering solutions. On this basis, an effective dynamic clustering method must consider combining multiple views of the data, including the semantic view, the lexical view (word weighting), and the topic view, as well as the number of clusters. The main goal of this study is to develop a new method that can improve the performance of web search result clustering (WSRC). An enhanced multiview multirepresentation consensus clustering ensemble (MMCC) method is proposed to create a set of diverse candidate solutions and select a high-quality overlapping cluster. The overlapping clusters are obtained from the candidate solutions created by different clustering methods. The framework to develop the proposed MMCC includes numerous stages: (1) acquiring the standard datasets (MORESQUE and Open Directory Project-239), which are used to validate search result clustering algorithms, (2) preprocessing the dataset, (3) applying multiview multirepresentation clustering models, (4) using the radius-based cluster number estimation algorithm, and (5) employing the consensus clustering ensemble method. Results show an improvement in clustering methods when multiview multirepresentation is used. More importantly, the proposed MMCC model improves the overall performance of WSRC compared with all single-view clustering models.
16

Cayoren, M., I. Akduman, A. Yapar, and L. Crocco. "Shape Reconstruction of Perfectly Conducting Targets From Single-Frequency Multiview Data." IEEE Geoscience and Remote Sensing Letters 5, no. 3 (July 2008): 383–86. http://dx.doi.org/10.1109/lgrs.2008.916075.

17

Vizsnyiczai, Gaszton, András Búzás, Badri Lakshmanrao Aekbote, Tamás Fekete, István Grexa, Pál Ormos, and Lóránd Kelemen. "Multiview microscopy of single cells through microstructure-based indirect optical manipulation." Biomedical Optics Express 11, no. 2 (January 16, 2020): 945. http://dx.doi.org/10.1364/boe.379233.

18

Peng, Jiansheng, Kui Fu, Qingjin Wei, Yong Qin, and Qiwen He. "Improved Multiview Decomposition for Single-Image High-Resolution 3D Object Reconstruction." Wireless Communications and Mobile Computing 2020 (December 26, 2020): 1–14. http://dx.doi.org/10.1155/2020/8871082.

Abstract:
As a representative technology of artificial intelligence, 3D reconstruction based on deep learning can be integrated into the edge computing framework to form an intelligent edge and then realize the intelligent processing of the edge. Recently, high-resolution representation of 3D objects using multiview decomposition (MVD) architecture is a fast reconstruction method for generating objects with realistic details from a single RGB image. The results of high-resolution 3D object reconstruction are related to two aspects. On the one hand, a low-resolution reconstruction network represents a good 3D object from a single RGB image. On the other hand, a high-resolution reconstruction network maximizes fine low-resolution 3D objects. To improve these two aspects and further enhance the high-resolution reconstruction capabilities of the 3D object generation network, we study and improve the low-resolution 3D generation network and the depth map superresolution network. Eventually, we get an improved multiview decomposition (IMVD) network. First, we use a 2D image encoder with multifeature fusion (MFF) to enhance the feature extraction capability of the model. Second, a 3D decoder using an effective subpixel convolutional neural network (3D ESPCN) improves the decoding speed in the decoding stage. Moreover, we design a multiresidual dense block (MRDB) to optimize the depth map superresolution network, which allows the model to capture more object details and reduce the model parameters by approximately 25% when the number of network layers is doubled. The experimental results show that the proposed IMVD is better than the original MVD in the 3D object superresolution experiment and the high-resolution 3D reconstruction experiment of a single image.
19

Alfaqheri, Taha, Akuha Solomon Aondoakaa, Mohammad Rafiq Swash, and Abdul Hamid Sadka. "Low-delay single holoscopic 3D computer-generated image to multiview images." Journal of Real-Time Image Processing 17, no. 6 (June 19, 2020): 2015–27. http://dx.doi.org/10.1007/s11554-020-00991-y.

Abstract:
Due to the nature of holoscopic 3D (H3D) imaging technology, H3D cameras can capture more angular information than their conventional 2D counterparts. This is mainly attributed to the macrolens array which captures the 3D scene with slightly different viewing angles and generates holoscopic elemental images based on fly's eyes imaging concept. However, this advantage comes at the cost of decreasing the spatial resolution in the reconstructed images. On the other hand, the consumer market is looking to find an efficient multiview capturing solution for the commercially available autostereoscopic displays. The autostereoscopic display provides multiple viewers with the ability to simultaneously enjoy a 3D viewing experience without the need for wearing 3D display glasses. This paper proposes a low-delay content adaptation framework for converting a single holoscopic 3D computer-generated image into multiple viewpoint images. Furthermore, it investigates the effects of varying interpolation step sizes on the converted multiview images using the nearest neighbour and bicubic sampling interpolation techniques. In addition, it evaluates the effects of changing the macrolens array size, using the proposed framework, on the perceived visual quality both objectively and subjectively. The experimental work is conducted on computer-generated H3D images with different macrolens sizes. The experimental results show that the proposed content adaptation framework can be used to capture multiple viewpoint images to be visualised on autostereoscopic displays.
20

Liu, Cuiwei, Zhaokui Li, Xiangbin Shi, and Chong Du. "Learning a Mid-Level Representation for Multiview Action Recognition." Advances in Multimedia 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/3508350.

Abstract:
Recognizing human actions in videos is an active topic with broad commercial potential. Most of the existing action recognition methods assume the same camera view during both training and testing, and thus the performance of these single-view approaches may be severely influenced by camera movement and variation of viewpoint. In this paper, we address the above problem by utilizing videos simultaneously recorded from multiple views. To this end, we propose a learning framework based on multitask random forests to exploit a discriminative mid-level representation for videos from multiple cameras. In the first step, subvolumes of continuous human-centered figures are extracted from the original videos. In the next step, spatiotemporal cuboids sampled from these subvolumes are characterized by multiple low-level descriptors. Then a set of multitask random forests is built upon multiview cuboids sampled at adjacent positions, constructing an integrated mid-level representation for the multiview subvolumes of one action. Finally, a random forest classifier is employed to predict the action category from the learned representation. Experiments conducted on the multiview IXMAS action dataset illustrate that the proposed method can effectively recognize human actions depicted in multiview videos.
21

Castro, F. M., M. J. Marín-Jiménez, N. Guil Mata, and R. Muñoz-Salinas. "Fisher Motion Descriptor for Multiview Gait Recognition." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 01 (January 2017): 1756002. http://dx.doi.org/10.1142/s021800141756002x.

Abstract:
The goal of this paper is to identify individuals by analyzing their gait. Instead of using binary silhouettes as input data (as done in many previous works), we propose and evaluate the use of motion descriptors based on densely sampled short-term trajectories. We take advantage of state-of-the-art people detectors to define custom spatial configurations of the descriptors around the target person, obtaining a rich representation of the gait motion. The local motion features (described by the Divergence-Curl-Shear descriptor [M. Jain, H. Jegou and P. Bouthemy, Better exploiting motion for better action recognition, in Proc. IEEE Conf. Computer Vision Pattern Recognition (CVPR) (2013), pp. 2555–2562]) extracted on the different spatial areas of the person are combined into a single high-level gait descriptor by using the Fisher Vector encoding [F. Perronnin, J. Sánchez and T. Mensink, Improving the Fisher kernel for large-scale image classification, in Proc. European Conf. Computer Vision (ECCV) (2010), pp. 143–156]. The proposed approach, coined Pyramidal Fisher Motion, is experimentally validated on the 'CASIA' dataset (parts B and C) [S. Yu, D. Tan and T. Tan, A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition, in Proc. Int. Conf. Pattern Recognition, Vol. 4 (2006), pp. 441–444], the 'TUM GAID' dataset [M. Hofmann, J. Geiger, S. Bachmann, B. Schuller and G. Rigoll, The TUM Gait from Audio, Image and Depth (GAID) database: Multimodal recognition of subjects and traits, J. Vis. Commun. Image Represent. 25(1) (2014) 195–206], the 'CMU MoBo' dataset [R. Gross and J. Shi, The CMU Motion of Body (MoBo) database, Technical Report CMU-RI-TR-01-18, Robotics Institute (2001)], and the recent 'AVA Multiview Gait' dataset [D. López-Fernández, F. Madrid-Cuevas, A. Carmona-Poyato, M. Marín-Jiménez and R. Muñoz-Salinas, The AVA multi-view dataset for gait recognition, in Activity Monitoring by Multiple Distributed Sensing, Lecture Notes in Computer Science (Springer, 2014), pp. 26–39]. The results show that this new approach achieves state-of-the-art results in the problem of gait recognition, recognizing walking people from diverse viewpoints on single- and multiple-camera setups, wearing different clothes, carrying bags, walking at diverse speeds, and not limited to straight walking paths.
22

Johnston, K. B., S. M. Caballero-Nieves, V. Petit, A. M. Peter, and R. Haber. "Variable star classification using multiview metric learning." Monthly Notices of the Royal Astronomical Society 491, no. 3 (November 14, 2019): 3805–19. http://dx.doi.org/10.1093/mnras/stz3165.

Abstract:
Comprehensive observations of variable stars can include time domain photometry in a multitude of filters, spectroscopy, estimates of colour (e.g. U-B), etc. When the objective is to classify variable stars, traditional machine learning techniques distill these various representations (or views) into a single feature vector and attempt to discriminate among desired categories. In this work, we propose an alternative approach that inherently leverages multiple views of the same variable star. Our multiview metric learning framework enables robust characterization of star categories by directly learning to discriminate in a multifaceted feature space, thus eliminating the need to combine feature representations prior to fitting the machine learning model. We also demonstrate how to extend standard multiview learning, which employs multiple vectorized views, to the matrix-variate case, which allows very novel variable star signature representations. The performance of our proposed methods is evaluated on the UCR Starlight and LINEAR data sets. Both the vector and matrix-variate versions of our multiview learning framework perform favourably, demonstrating the ability to discriminate variable star categories.
23

Chandler, Talon, Shalin Mehta, Hari Shroff, Rudolf Oldenbourg, and Patrick J. La Rivière. "Single-fluorophore orientation determination with multiview polarized illumination: modeling and microscope design." Optics Express 25, no. 25 (December 1, 2017): 31309. http://dx.doi.org/10.1364/oe.25.031309.

24

Jin, Yi, Jiuwen Cao, Qiuqi Ruan, and Xueqiao Wang. "Cross-Modality 2D-3D Face Recognition via Multiview Smooth Discriminant Analysis Based on ELM." Journal of Electrical and Computer Engineering 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/584241.

Abstract:
In recent years, 3D face recognition has attracted increasing attention from worldwide researchers. Rather than homogeneous face data, more and more applications require flexible input face data nowadays. In this paper, we propose a new approach for cross-modality 2D-3D face recognition (FR), called Multiview Smooth Discriminant Analysis (MSDA) based on Extreme Learning Machines (ELM). Adding a Laplacian penalty constraint to the multiview feature learning, the proposed MSDA is first used to extract the cross-modality 2D-3D face features. MSDA aims at finding a common discriminative feature space based on multiview learning, and it can then fully utilize the underlying relationship of features from different views. To speed up the learning phase of the classifier, the recently popular algorithm named Extreme Learning Machine (ELM) is adopted to train the single hidden layer feedforward neural networks (SLFNs). To evaluate the effectiveness of our proposed FR framework, experimental results on a benchmark face recognition dataset are presented. Simulations show that our new proposed method generally outperforms several recent approaches with a fast training speed.
25

Kim, Joohyung, Janghun Hyeon, and Nakju Doh. "Generative multiview inpainting for object removal in large indoor spaces." International Journal of Advanced Robotic Systems 18, no. 2 (March 1, 2021): 172988142199654. http://dx.doi.org/10.1177/1729881421996544.

Abstract:
As interest in image-based rendering increases, the need for multiview inpainting is emerging. Despite rapid progress in single-image inpainting based on deep learning approaches, such methods impose no constraint ensuring color consistency across multiple inpainted images. We target object removal in large-scale indoor spaces and propose a novel multiview inpainting pipeline to achieve color consistency and boundary consistency across multiple images. The first step of the pipeline is to create color prior information on the masks by coloring point clouds from multiple images and projecting the colored point clouds onto the image planes. Next, a generative inpainting network accepts a masked image, a color prior image, an imperfect guideline, and two different masks as inputs and yields the refined guideline and inpainted image as outputs. The color prior and guideline inputs ensure color and boundary consistency across multiple images. We validate our pipeline on real indoor datasets both quantitatively, using consistency distance and similarity distance (metrics we define for comparing multiview inpainting results), and qualitatively.
26

Wang, Ziqiang, Xia Sun, Lijun Sun, and Yuchun Huang. "Multiview Discriminative Geometry Preserving Projection for Image Classification." Scientific World Journal 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/924090.

Abstract:
In many image classification applications, it is common to extract multiple visual features from different views to describe an image. Since different visual features have their own specific statistical properties and discriminative powers for image classification, the conventional solution for multiple view data is to concatenate these feature vectors as a new feature vector. However, this simple concatenation strategy not only ignores the complementary nature of different views, but also ends up with “curse of dimensionality.” To address this problem, we propose a novel multiview subspace learning algorithm in this paper, named multiview discriminative geometry preserving projection (MDGPP) for feature extraction and classification. MDGPP can not only preserve the intraclass geometry and interclass discrimination information under a single view, but also explore the complementary property of different views to obtain a low-dimensional optimal consensus embedding by using an alternating-optimization-based iterative algorithm. Experimental results on face recognition and facial expression recognition demonstrate the effectiveness of the proposed algorithm.
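The concatenation baseline criticized above, and the low-dimensional embedding it motivates, can be contrasted in a few lines. PCA stands in here for the projection step; MDGPP itself is a supervised, geometry-preserving projection rather than PCA, and the view sizes below are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

# Three hypothetical feature "views" of 100 images (e.g. color, texture, shape).
rng = np.random.default_rng(0)
views = [rng.normal(size=(100, d)) for d in (64, 128, 32)]

# Naive strategy: concatenate into one long vector -> high-dimensional.
concat = np.hstack(views)
# Subspace-learning remedy: project to a low-dimensional embedding.
embed = PCA(n_components=10).fit_transform(concat)
print(concat.shape, embed.shape)  # (100, 224) (100, 10)
```

The concatenated vector grows with every added view (the "curse of dimensionality" the abstract mentions), while the learned embedding keeps a fixed, small dimensionality regardless of how many views are combined.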
27

Preble, Stefan, Liang Cao, Ali Elshaari, Abdelsalam Aboketaf, and Donald Adams. "Single photon adiabatic wavelength conversion." Applied Physics Letters 101, no. 17 (October 22, 2012): 171110. http://dx.doi.org/10.1063/1.4764068.

28

Wan, Yong, Xiaoyu Zhang, Yongshou Dai, and Xiaolei Shi. "Research on a Method for Simulating Multiview Ocean Wave Synchronization Data by Networked SAR Satellites." Journal of Marine Science and Engineering 7, no. 6 (June 7, 2019): 180. http://dx.doi.org/10.3390/jmse7060180.

Abstract:
It is expected that the problem of the azimuth cutoff wavelength in single-satellite synthetic aperture radar (SAR) observations can be solved by means of cooperative observation with networked SAR satellites. Multiview SAR wave synchronization data are required in the process. However, most current orbiting satellites are geosynchronous orbit satellites; simultaneous observation of the same sea area by multiple SARs cannot be achieved, and multiview synchronization data cannot be obtained. Therefore, this paper studies the simulation of multiview SAR wave synchronization data. Ocean wave spectra were simulated using the Pierson-Moskowitz (PM) spectrum. The Monte Carlo method was used to simulate two-dimensional (2D) ocean surfaces at different wind speeds. The two-scale electromagnetic scattering model was used to calculate the ocean surface backscattering coefficient, and the time-domain echo algorithm was used to generate echo signals. The echo signals were processed by the Range-Doppler (RD) imaging algorithm to obtain ocean SAR data. Based on the obtained single-SAR wave data, networked satellites consisting of three SARs were simulated, and the SAR wave data were synchronized. The results show that when SARs are used to observe the same sea area from different observation directions, the clarity of the wave fringes in the SAR images differs. For different azimuth angles, the degrees of azimuth cutoff are different. These results reflect the influence of different degrees of azimuth cutoff on SAR images. The simulated wave synchronization data can be used as the basic data source for subsequent azimuth cutoff wavelength compensation.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
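For readers implementing a simulation pipeline like the one this abstract describes, the Pierson–Moskowitz spectrum it relies on has a simple closed form. A minimal sketch, using the standard PM constants and the usual 19.5 m wind-speed convention (not values taken from the paper):

```python
import math

def pm_spectrum(omega, wind_speed):
    """One-sided Pierson-Moskowitz frequency spectrum S(omega), in m^2 s.

    omega: angular frequency in rad/s; wind_speed: speed at 19.5 m height, m/s.
    """
    g = 9.81        # gravitational acceleration, m/s^2
    alpha = 8.1e-3  # Phillips constant
    beta = 0.74
    return (alpha * g**2 / omega**5) * math.exp(-beta * (g / (wind_speed * omega))**4)

# The spectral peak sits at omega_p = 0.877 * g / U; energy grows with wind speed.
omega_p = 0.877 * 9.81 / 10.0
```

Sampling this spectrum at random phases is the usual starting point for the Monte Carlo ocean-surface generation the abstract mentions.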
29

Saito, Toyohiro, Yuto Kobayashi, Keita Takahashi and Toshiaki Fujii. "Displaying Real-World Light Fields With Stacked Multiplicative Layers: Requirement and Data Conversion for Input Multiview Images". Journal of Display Technology 12, no. 11 (November 2016): 1290–300. http://dx.doi.org/10.1109/jdt.2016.2594804.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
30

Bérubé-Lauzière, Yves, Matteo Crotti, Simon Boucher, Seyedrohollah Ettehadi, Julien Pichette and Ivan Rech. "Prospects on Time-Domain Diffuse Optical Tomography Based on Time-Correlated Single Photon Counting for Small Animal Imaging". Journal of Spectroscopy 2016 (2016): 1–23. http://dx.doi.org/10.1155/2016/1947613.

The full text of the source
Annotation:
This paper discusses instrumentation based on multiview parallel high temporal resolution (<50 ps) time-domain (TD) measurements for diffuse optical tomography (DOT) and a prospective view on the steps to undertake as regards such instrumentation to make TD-DOT a viable technology for small animal molecular imaging. TD measurements provide the most information-rich data, and we briefly review the interaction of light with biological tissues to provide an understanding of this. This data richness is yet to be exploited to its full potential to increase the spatial resolution of DOT imaging and to allow probing, via the fluorescence lifetime, tissue biochemical parameters and processes that are otherwise not accessible in fluorescence DOT. TD data acquisition time is, however, the main factor that currently compromises the viability of TD-DOT. Current high temporal resolution TD-DOT scanners simply do not integrate sufficient detection channels. Based on our past experience in developing TD-DOT instrumentation, we review and discuss promising technologies to overcome this difficulty. These are single photon avalanche diode (SPAD) detectors and fully parallel, highly integrated electronics for time-correlated single photon counting (TCSPC). We present experimental results obtained with such technologies demonstrating the feasibility of next-generation multiview TD-DOT therewith.
APA, Harvard, Vancouver, ISO and other citation styles
31

Pimont, François, Maxime Soma and Jean-Luc Dupuy. "Accounting for Wood, Foliage Properties, and Laser Effective Footprint in Estimations of Leaf Area Density from Multiview-LiDAR Data". Remote Sensing 11, no. 13 (July 3, 2019): 1580. http://dx.doi.org/10.3390/rs11131580.

The full text of the source
Annotation:
The spatial distribution of Leaf Area Density (LAD) in a tree canopy has fundamental functions in ecosystems. It can be measured through a variety of methods, including voxel-based methods applied to LiDAR point clouds. A theoretical study recently compared the numerical errors of these methods and showed that the bias-corrected Maximum Likelihood Estimator was the most efficient. However, it ignored (i) wood volumes, (ii) vegetation sub-grid clumping, (iii) the instrument effective footprint, and (iv) was limited to a single viewpoint. In practice, retrieving LAD is not straightforward, because vegetation is not randomly distributed in sub-grids, beams are divergent, and forestry plots are sampled from more than one viewpoint to mitigate occlusion. In the present article, we extend the previous formulation to (i) account for both wood volumes and hits, (ii) rigorously include correction terms for vegetation and instrument characteristics, and (iii) integrate multiview data. Two numerical experiments showed that the new approach entailed reduction of bias and errors, especially in the presence of wood volumes or when multiview data are available for poorly-explored volumes. In addition to its conciseness, completeness, and efficiency, this new formulation can be applied to multiview TLS—and also potentially to UAV LiDAR scanning—to reduce errors in LAD estimation.
APA, Harvard, Vancouver, ISO and other citation styles
32

Sase, I., T. Okinaga, M. Hoshi, G. W. Feigenson and K. Kinosita. "Regulatory mechanisms of the acrosome reaction revealed by multiview microscopy of single starfish sperm." Journal of Cell Biology 131, no. 4 (November 15, 1995): 963–73. http://dx.doi.org/10.1083/jcb.131.4.963.

The full text of the source
Annotation:
The acrosome reaction in many animals is a coupled reaction involving an exocytotic step and a dramatic change in cell shape. It has been proposed that these morphological changes are regulated by intracellular ions such as Ca2+ and H+. We report here simultaneous visualization, under a multiview microscope, of intracellular free Ca2+ concentration ([Ca2+]i), intracellular pH (pHi), and morphological changes in a single starfish sperm (Asterina pectinifera). [Ca2+]i and pHi were monitored with the fluorescent probes indo-1 and SNARF-1, respectively. The acrosome reaction was induced with ionomycin. After the introduction of ionomycin in the medium, [Ca2+]i increased gradually and reached a plateau in approximately 30 s. The fusion of the acrosomal vacuole took place abruptly before the plateau, during the rising phase. Although the speed of the [Ca2+]i increase varied among the many sperm tested, exocytosis in all cases occurred at the same [Ca2+]i of approximately 2 microM (estimated using the dissociation constant of indo-1 for Ca2+ of 1.1 microM). This result suggests that the exocytotic mechanism in starfish sperm responds to [Ca2+]i rapidly, with a reaction time of the order of one second or less. Unlike the change in [Ca2+]i, an abrupt increase in pHi was observed immediately after exocytosis, suggesting the presence of a proton mobilizing system that is triggered by exocytosis. The rapid increase in pHi coincided with the formation of the acrosomal rod and the beginning of vigorous movement of the flagellum, both of which have been proposed to be pHi dependent. The exocytotic event itself was visualized with the fluorescent membrane probe RH292. The membrane of the acrosomal vacuole, concealed from the external medium in an unreacted sperm, was seen to fuse with the plasma membrane.
APA, Harvard, Vancouver, ISO and other citation styles
33

Guo, Jingwei, and Lihong Xu. "Automatic Segmentation for Plant Leaves via Multiview Stereo Reconstruction". Mathematical Problems in Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/9845815.

The full text of the source
Annotation:
This paper presented a new method for automatic plant point cloud acquisition and leaves segmentation. Quasi-dense point cloud of the plant is obtained from multiview stereo reconstruction based on surface expansion. In order to overcome the negative effects from complex natural light changes and to obtain a more accurate plant point cloud, the Adaptive Normalized Cross-Correlation algorithm is used in calculating the matching cost between two images, which is robust to radiometric factors and can reduce the fattening effect around boundaries. In the stage of segmentation for each single leaf, an improved region growing method based on fully connected conditional random field (CRF) is proposed to separate the connected leaves with similar color. The method has three steps: boundary erosion, initial segmentation, and segmentation refinement. First, the edge of each leaf point cloud is eroded to remove the connectivity between leaves. Then leaves will be initially segmented by region growing algorithm based on local surface normal and curvature. Finally an efficient CRF inference method based on mean field approximation is applied to remove small isolated regions. Experimental results show that our multiview stereo reconstruction method is robust to illumination changes and can obtain accurate color point clouds. And the improved region growing method based on CRF can effectively separate the connected leaves in obtained point cloud.
APA, Harvard, Vancouver, ISO and other citation styles
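The improved region growing this abstract describes builds on a standard normal-and-curvature growing step over a point cloud. A much-simplified sketch of that building block, omitting the boundary erosion and CRF refinement; all thresholds and the brute-force neighbour search are illustrative, not the paper's:

```python
import numpy as np

def region_grow(points, normals, curvature, angle_thresh_deg=10.0,
                curv_thresh=0.05, k=8):
    """Cluster a point cloud by growing regions over smoothly varying normals.

    Seeds start at the flattest (lowest-curvature) points; a neighbour joins a
    region when its normal is close to the current point's normal, and keeps
    the region growing only if it is itself smooth.
    """
    points, normals = np.asarray(points, float), np.asarray(normals, float)
    n = len(points)
    labels = np.full(n, -1)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    region = 0
    for seed in np.argsort(curvature):        # flattest points seed first
        if labels[seed] != -1:
            continue
        labels[seed] = region
        front = [seed]
        while front:
            p = front.pop()
            # brute-force k nearest neighbours (fine for small clouds)
            d = np.linalg.norm(points - points[p], axis=1)
            for q in np.argsort(d)[1:k + 1]:
                if labels[q] != -1:
                    continue
                if abs(np.dot(normals[p], normals[q])) >= cos_thresh:
                    labels[q] = region
                    if curvature[q] < curv_thresh:
                        front.append(q)       # smooth points keep growing
        region += 1
    return labels
```

In practice a k-d tree replaces the brute-force distance scan, and the CRF step then cleans up small isolated regions.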
34

Liang, T. J., Y. C. Kuo and J. F. Chen. "Single-stage photovoltaic energy conversion system". IEE Proceedings - Electric Power Applications 148, no. 4 (2001): 339. http://dx.doi.org/10.1049/ip-epa:20010436.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
35

Li, Yuanhang, Xinzhu Sang, Shujun Xing, Yanxin Guan, Shenwu Yang and Binbin Yan. "Real-time volume data three-dimensional display with a modified single-pass multiview rendering method". Optical Engineering 59, no. 10 (March 3, 2020): 1. http://dx.doi.org/10.1117/1.oe.59.10.102412.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
36

Lee, Harim, Y. J. Moon, Hyeonock Na, Soojeong Jang and Jae-Ok Lee. "Are 3-D coronal mass ejection parameters from single-view observations consistent with multiview ones?" Journal of Geophysical Research: Space Physics 120, no. 12 (December 2015): 10,237–10,249. http://dx.doi.org/10.1002/2015ja021118.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
37

Ge, Shuguang, Xuesong Wang, Yuhu Cheng and Jian Liu. "Cancer Subtype Recognition Based on Laplacian Rank Constrained Multiview Clustering". Genes 12, no. 4 (April 3, 2021): 526. http://dx.doi.org/10.3390/genes12040526.

The full text of the source
Annotation:
Integrating multigenomic data to recognize cancer subtypes is an important task in bioinformatics. In recent years, several multiview clustering algorithms have been proposed and applied to identify cancer subtypes. However, these clustering algorithms ignore the fact that each data source contributes differently to the clustering result during the fusion process, and they require additional clustering steps to generate the final labels. In this paper, a new one-step method for cancer subtype recognition based on a graph learning framework is designed, called Laplacian Rank Constrained Multiview Clustering (LRCMC). LRCMC first forms a graph for each single biological data source to reveal the relationship between data points and uses an affinity matrix to encode the graph structure. Then, it adds weights to measure the contribution of each graph and finally merges these individual graphs into a consensus graph. In addition, LRCMC constructs adaptive neighbors to adjust the similarity of sample points, and it uses a rank constraint on the Laplacian matrix to ensure that each graph structure has the same connected components. Experiments on several benchmark datasets and The Cancer Genome Atlas (TCGA) datasets have demonstrated the effectiveness of the proposed algorithm compared with state-of-the-art methods.
APA, Harvard, Vancouver, ISO and other citation styles
38

Roth, Lukas, Moritz Camenzind, Helge Aasen, Lukas Kronenberg, Christoph Barendregt, Karl-Heinz Camp, Achim Walter, Norbert Kirchgessner and Andreas Hund. "Repeated Multiview Imaging for Estimating Seedling Tiller Counts of Wheat Genotypes Using Drones". Plant Phenomics 2020 (September 7, 2020): 1–20. http://dx.doi.org/10.34133/2020/3729715.

The full text of the source
Annotation:
Early generation breeding nurseries with thousands of genotypes in single-row plots are well suited to capitalize on high-throughput phenotyping. Nevertheless, methods to monitor the intrinsically hard-to-phenotype early development of wheat are still rare. We aimed to develop proxy measures for the rate of plant emergence, the number of tillers, and the beginning of stem elongation using drone-based imagery. We used RGB images (ground sampling distance of 3 mm pixel⁻¹) acquired by repeated flights (≥2 flights per week) to quantify temporal changes of visible leaf area. To exploit the information contained in the multitude of viewing angles within the RGB images, we processed them to multiview ground cover images showing plant pixel fractions. Based on these images, we trained a support vector machine for the beginning of stem elongation (GS30). Using GS30 as a key point, we subsequently extracted plant and tiller counts using a watershed algorithm and growth modeling, respectively. Our results show that determination coefficients of predictions are moderate for plant count (R² = 0.52), but strong for tiller count (R² = 0.86) and GS30 (R² = 0.77). Heritabilities are superior to manual measurements for plant count and tiller count, but inferior for GS30 measurements. Increasing the selection intensity due to throughput may overcome this limitation. Multiview image traits can replace hand measurements with high efficiency (85–223%). We therefore conclude that multiview images have a high potential to become a standard tool in plant phenomics.
APA, Harvard, Vancouver, ISO and other citation styles
39

Zhu, Tiantian, Zhengqiu Weng, Lei Fu and Linqi Ruan. "A Web Shell Detection Method Based on Multiview Feature Fusion". Applied Sciences 10, no. 18 (September 9, 2020): 6274. http://dx.doi.org/10.3390/app10186274.

The full text of the source
Annotation:
A web shell is a malicious script file that can harm web servers. Web shells are often used by intruders to perform a series of malicious operations on website servers, such as privilege escalation and sensitive information leakage. Existing web shell detection methods have shortcomings such as viewing only a single network traffic behavior, using simple signature comparisons, and adopting easily bypassed regex matches. In view of these deficiencies, a web shell detection method based on multiview feature fusion is proposed, focused on PHP web shells. Firstly, lexical features, syntactic features, and abstract features that can effectively represent the internal meaning of web shells at multiple levels are extracted and integrated. Secondly, the Fisher score is utilized to rank and filter the most representative features according to the importance of each feature. Finally, an optimized support vector machine (SVM) is used to establish a model that can effectively distinguish between web shells and normal scripts. In large-scale experiments, the final classification accuracy of the model on 1056 web shells and 1056 benign web scripts reached 92.18%. The results also surpassed well-known web shell detection tools such as VirusTotal, ClamAV, LOKI, and CloudWalker, as well as state-of-the-art web shell detection methods.
APA, Harvard, Vancouver, ISO and other citation styles
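The Fisher score feature ranking this abstract relies on has a compact generic form: between-class scatter over within-class scatter, computed per feature. A minimal sketch (not the paper's exact code; the feature set and SVM are assumed to come from elsewhere):

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score per feature: between-class scatter over within-class scatter.

    Higher scores mark features whose class means are well separated relative
    to their in-class variance.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    overall_mean = X.mean(axis=0)
    num, den = np.zeros(X.shape[1]), np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)      # guard against zero within-class variance

# Rank features and keep the top m before training a classifier such as an SVM:
# top = np.argsort(fisher_scores(X, y))[::-1][:m]
```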
40

Kim, Kwang-Seop, Jung-Min Kwon and Bong-Hwan Kwon. "Single-Switch Single Power-Conversion PFC Converter Using Regenerative Snubber". IEEE Transactions on Industrial Electronics 65, no. 7 (July 2018): 5436–44. http://dx.doi.org/10.1109/tie.2017.2774765.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
41

Hao, Yaojun, Peng Zhang and Fuzhi Zhang. "Multiview Ensemble Method for Detecting Shilling Attacks in Collaborative Recommender Systems". Security and Communication Networks 2018 (October 11, 2018): 1–33. http://dx.doi.org/10.1155/2018/8174603.

The full text of the source
Annotation:
Faced with the evolving attacks in collaborative recommender systems, the conventional shilling detection methods rely mainly on one kind of user-generated information (i.e., single view) such as rating values, rating time, and item popularity. However, these methods often suffer from poor precision when detecting different attacks due to ignoring other potentially relevant information. To address this limitation, in this paper we propose a multiview ensemble method to detect shilling attacks in collaborative recommender systems. Firstly, we extract 17 user features by considering the temporal effects of item popularity and rating values in different popular item sets. Secondly, we devise a multiview ensemble detection framework by integrating base classifiers from different classification views. Particularly, we use a feature set partition algorithm to divide the features into several subsets to construct multiple optimal classification views. We introduce a repartition strategy to increase the diversity of views and reduce the influence of feature order. Finally, the experimental results on the Netflix and Amazon review datasets indicate that the proposed method has better performance than benchmark methods when detecting various synthetic attacks and real-world attacks.
APA, Harvard, Vancouver, ISO and other citation styles
42

Senthilnathan, R., R. Anand, Roshan Suresh Kumar and Shreyansh Keseri. "A Mechatronics Approach for Design of Multiview Image Acquisition Setup for Scene Reconstruction with Single Camera". Applied Mechanics and Materials 852 (September 2016): 776–81. http://dx.doi.org/10.4028/www.scientific.net/amm.852.776.

The full text of the source
Annotation:
The conventional design approach is being actively replaced by the concurrent design approach in the context of interdisciplinary systems. The proposed research work develops a single-moving-camera stereo vision system for scene reconstruction with the intrinsic advantage of multidirectional fields of view. The conventional stereo vision setup uses two stationary passive cameras to capture images of a scene from different vantage points. The proposed system imparts varying mechanical degrees of freedom to both the object and the camera, which aids in acquiring a sequence of images covering all the visible regions of the object of interest. This gives better detail of the scene under consideration compared to conventional two-image stereopsis. A mechatronics design approach is presented which carefully integrates various elements of the system such as the mechanisms, actuators, sensors, and the electronic controller. The paper pinpoints the cues for the design of the mechanical system, which are obtained from the requirements of the computer vision system. The relative pose between the camera and the scene is governed by three independent degrees of freedom, namely the rotation angle of the object and the tilt and working distance of the camera. The selection of the aforementioned parameters is decided by specifications such as field of view, size of the object and sensor, and spatial resolution. The proposed design is predicted to enjoy the benefits of reduced cost and improved flexibility.
APA, Harvard, Vancouver, ISO and other citation styles
43

Bouchard, Jonathan, Arnaud Samson, William Lemaire, Caroline Paulin, Jean-Francois Pratte, Yves Berube-Lauziere and Rejean Fontaine. "A Low-Cost Time-Correlated Single Photon Counting System for Multiview Time-Domain Diffuse Optical Tomography". IEEE Transactions on Instrumentation and Measurement 66, no. 10 (October 2017): 2505–15. http://dx.doi.org/10.1109/tim.2017.2666458.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
44

Wang, H. H., J. Y. Huang, S. H. Chu, Y. J. Chiang, K. L. Liu and P. C. Lai. "Sirolimus Conversion Experience in a Single Center". Transplantation Proceedings 40, no. 7 (September 2008): 2209–10. http://dx.doi.org/10.1016/j.transproceed.2008.07.052.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
45

Watanabe, Wataru, Tomoko Shimada, Sachihiro Matsunaga, Daisuke Kurihara, Kiichi Fukui, Shin-ichi Arimura, Nobuhiro Tsutsumi, Keisuke Isobe and Kazuyoshi Itoh. "Single-organelle tracking by two-photon conversion". Optics Express 15, no. 5 (March 5, 2007): 2490. http://dx.doi.org/10.1364/oe.15.002490.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
46

Flack, H. D., E. Blanc and D. Schwarzenbach. "DIFRAC, single-crystal diffractometer output-conversion software". Journal of Applied Crystallography 25, no. 3 (June 1, 1992): 455–59. http://dx.doi.org/10.1107/s0021889892000384.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
47

Tao, Wang, and Liu Yang. "Multiview Community Discovery Algorithm via Nonnegative Factorization Matrix in Heterogeneous Networks". Mathematical Problems in Engineering 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/8596893.

The full text of the source
Annotation:
With the rapid development of the Internet and communication technologies, a large number of multimode or multidimensional networks widely emerge in real-world applications. Traditional community detection methods usually focus on homogeneous networks and simply treat different modes of nodes and connections in the same way, thus ignoring the inherent complexity and diversity of heterogeneous networks. It is challenging to effectively integrate the multiple modes of network information to discover the hidden community structure underlying heterogeneous interactions. In our work, a joint nonnegative matrix factorization (Joint-NMF) algorithm is proposed to discover the complex structure in heterogeneous networks. Our method transforms the heterogeneous dataset into a series of bipartite graphs correlated. Taking inspiration from the multiview method, we extend the semisupervised learning from single graph to several bipartite graphs with multiple views. In this way, it provides mutual information between different bipartite graphs to realize the collaborative learning of different classifiers, thus comprehensively considers the internal structure of all bipartite graphs, and makes all the classifiers tend to reach a consensus on the clustering results of the target-mode nodes. The experimental results show that Joint-NMF algorithm is efficient and well-behaved in real-world heterogeneous networks and can better explore the community structure of multimode nodes in heterogeneous networks.
APA, Harvard, Vancouver, ISO and other citation styles
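The Joint-NMF method in this abstract couples several nonnegative matrix factorizations across bipartite graphs. Its single-matrix building block is standard NMF with multiplicative updates, which can be sketched as follows (the cross-graph coupling the paper adds is omitted; iteration count and initialization are illustrative):

```python
import numpy as np

def nmf(V, rank, iters=200, seed=0, eps=1e-10):
    """Frobenius-norm NMF via Lee-Seung multiplicative updates: V ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update keeps H nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update keeps W nonnegative
    return W, H
```

Because the updates are multiplicative, nonnegativity of W and H is preserved automatically, which is what makes the factors interpretable as soft community memberships.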
48

Itoh, R., K. Ishizaka, H. Oishi and H. Okada. "Single-switch single-phase rectifier for step-down AC-DC conversion". Electronics Letters 37, no. 5 (2001): 276. http://dx.doi.org/10.1049/el:20010219.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
49

Xiao, Yang, Zhiguo Cao, Wen Zhuo, Liang Ye and Lei Zhu. "mCLOUD: A Multiview Visual Feature Extraction Mechanism for Ground-Based Cloud Image Categorization". Journal of Atmospheric and Oceanic Technology 33, no. 4 (April 2016): 789–801. http://dx.doi.org/10.1175/jtech-d-15-0015.1.

The full text of the source
Annotation:
In this paper, a novel Multiview CLOUD (mCLOUD) visual feature extraction mechanism is proposed for the task of categorizing clouds based on ground-based images. To completely characterize the different types of clouds, mCLOUD first extracts the raw visual descriptors from the views of texture, structure, and color simultaneously, in a densely sampled way: specifically, the scale-invariant feature transform (SIFT), the census transform histogram (CENTRIST), and statistical color features are extracted, respectively. To obtain a more descriptive cloud representation, feature encoding of the raw descriptors is realized using the Fisher vector, followed by a feature aggregation procedure. A linear support vector machine (SVM) is employed as the classifier to yield the final cloud image categorization result. Experiments on a challenging cloud dataset, the six-class Huazhong University of Science and Technology (HUST) cloud dataset, demonstrate that mCLOUD consistently outperforms the state-of-the-art cloud classification approaches by large margins (at least 6.9%) under all the different experimental settings. It has also been verified that, compared to the single view, the multiview cloud representation generally enhances the performance.
APA, Harvard, Vancouver, ISO and other citation styles
50

Tay, Chee Wei, Liang Shen, Mikael Hartman, Shridhar Ganpathi Iyer, Krishnakumar Madhavan and Stephen Kin Yong Chang. "SILC for SILC: Single Institution Learning Curve for Single-Incision Laparoscopic Cholecystectomy". Minimally Invasive Surgery 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/381628.

The full text of the source
Annotation:
Objectives. We report the single-incision laparoscopic cholecystectomy (SILC) learning experience of 2 hepatobiliary surgeons and the factors that could influence the learning curve of SILC. Methods. Patients who underwent SILC performed by Surgeons A and B were studied retrospectively. Operating time, conversion rate, reason for conversion, identity of first assistants, and their experience with previous laparoscopic cholecystectomy (LC) were analysed. CUSUM analysis was used to identify the learning curve. Results. One hundred and nineteen SILC cases were performed by Surgeons A and B. Eight cases required an additional port. In the CUSUM analysis, most conversions occurred during the first 19 cases. Operating time was significantly lower (62.5 versus 90.6 min, P = 0.04) after the learning curve had been overcome. Operating time decreased as experience increased, especially for Surgeon B. Most conversions were due to adhesions at Calot's triangle. Acute cholecystitis, patients' BMI, and previous surgery did not seem to influence the conversion rate. Mean operating times of cases assisted by a first assistant with and without LC experience were 48 and 74 minutes, respectively (P = 0.004). Conclusion. Nineteen cases are needed to overcome the learning curve of SILC. Teamwork, an assistant with LC experience, and appropriate equipment and technique are the important factors in performing SILC.
APA, Harvard, Vancouver, ISO and other citation styles
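The CUSUM learning-curve analysis used in this study plots the cumulative sum of deviations from a reference value against case number. A minimal sketch; the reference value and the paper's exact CUSUM variant are not specified here, so the target is illustrative:

```python
def cusum(values, target):
    """Cumulative sum of deviations from a target value.

    Plotted against case number, a sustained downward slope signals that
    performance has settled below the target (learning curve overcome).
    """
    out, running = [], 0.0
    for v in values:
        running += v - target
        out.append(running)
    return out

# e.g. operating times (min) drifting below a 75-minute reference:
# cusum([90, 88, 80, 70, 65, 62], 75) -> rises first, then falls
```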