To view the other types of publications on this topic, follow the link: Feature Map Scaling.

Journal articles on the topic "Feature Map Scaling"

Consult the top 50 journal articles for research on the topic "Feature Map Scaling".

Next to each work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be generated automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and put together your bibliography correctly.

1. Leem, Choon Seong, D. A. Dornfeld, and S. E. Dreyfus. "A Customized Neural Network for Sensor Fusion in On-Line Monitoring of Cutting Tool Wear". Journal of Engineering for Industry 117, no. 2 (May 1, 1995): 152–59. http://dx.doi.org/10.1115/1.2803289.

Abstract:
A customized neural network for sensor fusion of acoustic emission and force in on-line detection of tool wear is developed. Based on two critical concerns regarding practical and reliable tool-wear monitoring systems, the maximal utilization of "unsupervised" sensor data and the avoidance of off-line feature analysis, the neural network is trained by the unsupervised Kohonen's Feature Map procedure followed by an Input Feature Scaling algorithm. After levels of tool wear are topologically ordered by Kohonen's Feature Map, input features of AE and force sensor signals are transformed via Input Feature Scaling so that the resulting decision boundaries of the neural network approximate those of an error-minimizing Bayes classifier. In a machining experiment, the customized neural network achieved high accuracy rates in the classification of levels of tool wear. The neural network also exhibits several practical and reliable properties for the implementation of the monitoring system in manufacturing industries.
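A minimal sketch of the unsupervised stage described above: a 1-D Kohonen feature map trained on standardized two-channel sensor features. The data, map size, and schedules below are invented for illustration; this is not the authors' implementation.

```python
# 1-D Kohonen self-organizing map on standardized sensor features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))              # stand-in for AE/force features
X = (X - X.mean(axis=0)) / X.std(axis=0)   # input feature scaling step

n_nodes, n_iter = 10, 2000
W = rng.normal(size=(n_nodes, 2))          # codebook of the 1-D map

for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best-matching unit
    lr = 0.5 * (1 - t / n_iter)                        # decaying learning rate
    sigma = max(1e-3, n_nodes / 2 * (1 - t / n_iter))  # neighborhood width
    h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)                     # pull neighbors toward x

print("ordered codebook:", np.round(W, 2))
```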

2. Li, Mao Hai, and Li Ning Sun. "Monocular Vision Based Mobile Robot 3D Map Building". Applied Mechanics and Materials 43 (December 2010): 49–52. http://dx.doi.org/10.4028/www.scientific.net/amm.43.49.

Abstract:
A robust dense 3D feature map is built with monocular vision and odometry only. Monocular vision mounted on the robot front-end tracks 3D natural landmarks, which are structured from matched Scale Invariant Feature Transform (SIFT) feature pairs. SIFT features are highly distinctive and invariant to image scaling, rotation, and change in 3D viewpoint. A fast SIFT feature matching algorithm is implemented with a KD-tree-based nearest-neighbor search at a time cost of O(log2 N), and matches with large error are eliminated by the epipolar line restriction. A map building algorithm based on 3D spatial SIFT landmarks is designed and implemented. Experimental results on a Pioneer mobile robot in a real indoor environment show the superior performance of the proposed method.
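For readers who want to try the matching step the abstract summarizes, here is a sketch using OpenCV's SIFT detector and a FLANN KD-tree matcher. The image paths are placeholders, and the ratio-test threshold is a conventional choice rather than the paper's.

```python
# SIFT feature matching with a KD-tree index (requires OpenCV >= 4.4
# or opencv-contrib-python for SIFT).
import cv2

img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN with a KD-tree index gives approximate nearest neighbors in
# logarithmic time, matching the paper's O(log2 N) search step.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4), dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test rejects ambiguous matches; epipolar filtering
# (cv2.findFundamentalMat with RANSAC) could further remove outliers.
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} putative SIFT matches")
```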

3. El Chakik, Abdallah, Abdul Rahman El Sayed, Hassan Alabboud, and Amer Bakkach. "An invariant descriptor map for 3D objects matching". International Journal of Engineering & Technology 9, no. 1 (January 23, 2020): 59. http://dx.doi.org/10.14419/ijet.v9i1.29918.

Abstract:
Meshes and point clouds are traditionally used to represent and match 3D shapes. The matching problem can be formulated as finding the best one-to-one correspondence between featured regions of two shapes. This paper presents an efficient and robust 3D matching method using vertex descriptor detection to define feature regions and an optimization approach for region matching. To do so, we compute an invariant shape descriptor map based on 3D surface patches calculated using Zernike coefficients. Then, we propose a multi-scale descriptor map to improve the measured descriptor map quality and to deal with noise. In addition, we introduce a linear algorithm for feature region segmentation according to the descriptor map. The matching problem is then modelled as a sub-graph isomorphism problem, a combinatorial optimization problem that matches feature regions while preserving the geometric structure. Finally, we show the robustness and stability of our method through many experimental results with respect to scaling, noise, rotation, and translation.

4. Ma, Ding, Zhigang Zhao, Ye Zheng, Renzhong Guo, and Wei Zhu. "PolySimp: A Tool for Polygon Simplification Based on the Underlying Scaling Hierarchy". ISPRS International Journal of Geo-Information 9, no. 10 (October 10, 2020): 594. http://dx.doi.org/10.3390/ijgi9100594.

Abstract:
Map generalization is a process of reducing the contents of a map or data to properly show geographic features at a smaller extent. Over the past few years, the fractal way of thinking has emerged as a new paradigm for map generalization. A geographic feature can be deemed a fractal from the perspective of scaling, as its rough, irregular, and unsmooth shape inherently holds a striking scaling hierarchy of far more small elements than large ones. The pattern of far more small things than large ones is a de facto heavy-tailed distribution. In this paper, we apply the scaling hierarchy for map generalization to polygonal features. To do this, we first revisit the scaling hierarchy of a classic fractal: the Koch Snowflake. We then review previous work that used the Douglas–Peucker algorithm, which identifies characteristic points on a line to derive three types of measures that are long-tail distributed: the baseline length (d), the perpendicular distance to the baseline (x), and the area formed by x and d (area). More importantly, we extend the usage of the three measures to other popular cartographic generalization methods, i.e., the bend simplify method, the Visvalingam–Whyatt method, and the hierarchical decomposition method, each of which decomposes any polygon into a set of bends, triangles, or convex hulls as basic geometric units for simplification. The different levels of detail of the polygon can then be derived by recursively selecting the head part of the geometric units and omitting the tail part using head/tail breaks, a classification scheme for data with a heavy-tailed distribution. Since there are currently few tools with which to readily conduct polygon simplification from such a fractal perspective, we have developed PolySimp, a tool that integrates the four mentioned algorithms for polygon simplification based on the polygon's underlying scaling hierarchy. The British coastline was selected to demonstrate the tool's usefulness. The developed tool can be expected to showcase the applicability of the fractal way of thinking and to contribute to the development of map generalization.
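Head/tail breaks, the classification scheme the abstract relies on, is compact enough to sketch directly; the Pareto sample and the 40% head limit below are illustrative assumptions.

```python
# Head/tail breaks: recursively split heavy-tailed values at the mean
# and keep recursing into the head (values above the mean).
import numpy as np

def head_tail_breaks(values, breaks=None, head_limit=0.4):
    breaks = [] if breaks is None else breaks
    m = values.mean()
    breaks.append(m)
    head = values[values > m]
    # Recurse only while the head stays a small minority of the data.
    if 1 < len(head) and len(head) / len(values) <= head_limit:
        head_tail_breaks(head, breaks, head_limit)
    return breaks

# Pareto-distributed sample: far more small values than large ones.
rng = np.random.default_rng(1)
data = rng.pareto(1.5, 10_000)
print([round(b, 2) for b in head_tail_breaks(data)])
```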

5. Chang, Chin Chen, I. Ta Lee, Tsung Ta Ke, and Wen Kai Tai. "An Object-Based Image Reducing Approach". Advanced Materials Research 1044-1045 (October 2014): 1049–52. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1049.

Abstract:
Common methods for reducing image size include scaling and cropping. However, these two approaches have some quality problems for reduced images. In this paper, we propose an image reducing algorithm by separating the main objects and the background. First, we extract two feature maps, namely, an enhanced visual saliency map and an improved gradient map from an input image. After that, we integrate these two feature maps to an importance map. Finally, we generate the target image using the importance map. The proposed approach can obtain desired results for a wide range of images.
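A toy sketch of fusing two feature maps into an importance map, in the spirit of the approach above. The saliency term here is a crude stand-in (the paper's enhanced visual saliency map is more elaborate), and the file names are placeholders.

```python
# Combine a gradient map and a saliency stand-in into an importance map.
import cv2
import numpy as np

img = cv2.imread("input.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)

# Gradient map from Sobel magnitudes.
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
grad = cv2.magnitude(gx, gy)

# Crude saliency stand-in: distance from the mean intensity.
sal = np.abs(gray - gray.mean())

norm = lambda m: cv2.normalize(m, None, 0.0, 1.0, cv2.NORM_MINMAX)
importance = 0.5 * norm(sal) + 0.5 * norm(grad)  # fused importance map
cv2.imwrite("importance.png", (importance * 255).astype(np.uint8))
```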

6. Li, James Xinzhi. "Visualization of High-Dimensional Data with Relational Perspective Map". Information Visualization 3, no. 1 (March 2004): 49–59. http://dx.doi.org/10.1057/palgrave.ivs.9500051.

Abstract:
This paper introduces a method called relational perspective map (RPM) to visualize distance information in high-dimensional spaces. Like conventional multidimensional scaling, the RPM algorithm aims to produce proximity preserving 2-dimensional (2-D) maps. The main idea of the RPM algorithm is to simulate a multiparticle system on a closed surface: whereas the repulsive forces between the particles reflect the distance information, the closed surface holds the whole system in balance and prevents the resulting map from degeneracy. A special feature of RPM algorithm is its ability to partition a complex dataset into pieces and map them onto a 2-D space without overlapping. Compared to other multidimensional scaling methods, RPM is able to reveal more local details of complex datasets. This paper demonstrates the properties of RPM maps with four examples and provides extensive comparison to other multidimensional scaling methods, such as Sammon Mapping and Curvilinear Principle Analysis.
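As a point of comparison for RPM, a proximity-preserving 2-D map can be produced with off-the-shelf metric MDS in scikit-learn; the dataset below is an arbitrary example.

```python
# Classical metric MDS on a precomputed dissimilarity matrix.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

X = load_digits().data[:300]
D = squareform(pdist(X))                  # pairwise dissimilarities

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
Y = mds.fit_transform(D)                  # 2-D map preserving distances
print("stress:", round(mds.stress_, 1))
```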

7. Ma, Liang, Jihua Zhu, Li Zhu, Shaoyi Du, and Jingru Cui. "Merging grid maps of different resolutions by scaling registration". Robotica 34, no. 11 (March 20, 2015): 2516–31. http://dx.doi.org/10.1017/s0263574715000168.

Abstract:
This paper considers the problem of merging grid maps that have different resolutions. Because the goal of map merging is to find the optimal transformation between two partially overlapping grid maps, it can be viewed as a special image registration issue. To address this special issue, the solution considers the non-common areas and designs an objective function based on the trimmed mean-square error (MSE). The trimmed and scaling iterative closest point (TsICP) algorithm is then proposed to solve this well-designed objective function. As the TsICP algorithm can be proven to be locally convergent in theory, a good initial transformation should be provided. Accordingly, scale-invariant feature transform (SIFT) features are extracted for the maps to be potentially merged, and the random sample consensus (RANSAC) algorithm is employed to find the geometrically consistent feature matches that are used to estimate the initial transformation for the TsICP algorithm. In addition, this paper presents the rules for the fusion of the grid maps based on the estimated transformation. Experimental results carried out with publicly available datasets illustrate the superior performance of this approach at merging grid maps with respect to robustness and accuracy.
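The core sub-problem of scaling registration, estimating a similarity transform (scale, rotation, translation) from matched points, has a closed-form least-squares solution. Below is a sketch of the standard Umeyama estimator on synthetic 2-D data; it is the untrimmed building block, not the paper's TsICP algorithm.

```python
# Closed-form similarity transform estimation (Umeyama, 1991).
import numpy as np

def umeyama(src, dst):
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))   # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflection
    D = np.diag([1.0, d])
    R = U @ D @ Vt                                 # optimal rotation
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()  # optimal scale
    t = mu_d - s * R @ mu_s                        # optimal translation
    return s, R, t

rng = np.random.default_rng(2)
P = rng.normal(size=(100, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = 0.5 * P @ R_true.T + np.array([2.0, -1.0])     # scaled, rotated, shifted
s, R, t = umeyama(P, Q)
print(round(s, 3), np.round(t, 3))                 # ~0.5, [2, -1]
```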

8. Li, Lin, Zheng Min Xia, Sheng Hong Li, Li Pan, and Zhi Hua Huang. "Detecting Overlapping Communities with MDS and Local Expansion FCM". Applied Mechanics and Materials 644-650 (September 2014): 3295–99. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.3295.

Abstract:
Community structure is an important feature for understanding the structural and functional properties of various complex networks. In this paper, we use Multidimensional Scaling (MDS) to map the nodes of a network into Euclidean space so as to preserve the distance information between nodes, and we then use the topological features of communities to propose a local expansion strategy for detecting the initial seeds for FCM. Finally, FCM is used to uncover overlapping communities in complex networks. Test results on real-world and artificial networks show that the proposed algorithm is efficient and robust in uncovering overlapping community structure.

9. Yang, Yang, and Hongmin Deng. "GC-YOLOv3: You Only Look Once with Global Context Block". Electronics 9, no. 8 (July 31, 2020): 1235. http://dx.doi.org/10.3390/electronics9081235.

Abstract:
In order to make the classification and regression of single-stage detectors more accurate, this paper proposes an object detection algorithm named Global Context You-Only-Look-Once v3 (GC-YOLOv3), based on You-Only-Look-Once (YOLO). Firstly, a better cascading model with learnable semantic fusion between a feature extraction network and a feature pyramid network is designed to improve detection accuracy using a global context block. Secondly, the information to be retained is screened by combining three feature maps of different scales. Finally, a global self-attention mechanism is used to highlight the useful information of feature maps while suppressing irrelevant information. Experiments show that GC-YOLOv3 reaches a maximum of 55.5 object detection mean Average Precision (mAP)@0.5 on the Common Objects in Context (COCO) 2017 test-dev set, and that its mAP is 5.1% higher than that of the YOLOv3 algorithm on the Pascal Visual Object Classes (PASCAL VOC) 2007 test set. The experiments therefore indicate that the proposed GC-YOLOv3 model achieves excellent performance on both the PASCAL VOC and COCO datasets.
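The multi-scale combination step mentioned above can be illustrated in a few lines of PyTorch: resize the coarser feature maps to a common resolution and concatenate. Shapes and channel counts below are illustrative, not the GC-YOLOv3 architecture.

```python
# Fuse three feature maps of different scales by resizing and concatenating.
import torch
import torch.nn.functional as F

# Backbone outputs at strides 8, 16, 32 (batch 1, 256 channels each).
p3 = torch.randn(1, 256, 52, 52)
p4 = torch.randn(1, 256, 26, 26)
p5 = torch.randn(1, 256, 13, 13)

# Resize the coarser maps to the finest resolution and concatenate.
p4_up = F.interpolate(p4, size=p3.shape[-2:], mode="nearest")
p5_up = F.interpolate(p5, size=p3.shape[-2:], mode="nearest")
fused = torch.cat([p3, p4_up, p5_up], dim=1)   # (1, 768, 52, 52)
print(fused.shape)
```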

10. Chen, Ying, Lei Zhang, and Xiao Juan Ji. "A New Robust Watermarking Algorithm for Small Vector Data Set". Applied Mechanics and Materials 263-266 (December 2012): 2999–3004. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2999.

Abstract:
Because only a small amount of data can be hidden in a small vector data set, its copyright is difficult to protect and the data are vulnerable to attack; a new double watermarking algorithm is therefore proposed in this paper. Its main features are: 1) it selects feature points from the Douglas point sequence, embeds watermark points on both sides of each feature point, and then embeds watermark points again by wavelet transform; 2) furthermore, it adds a design for controlling map graphic deformation. The algorithm was applied to experimental data, and the test results showed that it is robust to rotation, scaling, and shifting geometric transformations of points in graph layers. However, because the vector data set is small, the algorithm's robustness to deletion and cropping of points in graph layers is limited.

11. Bonanno, Claudio, and Imen Chouari. "Escape Rates for the Farey Map with Approximated Holes". International Journal of Bifurcation and Chaos 26, no. 10 (September 2016): 1650169. http://dx.doi.org/10.1142/s0218127416501698.

Abstract:
We study the escape rate for the Farey map, an infinite measure preserving system, with a hole including the indifferent fixed point. Due to the ergodic properties of the map, the standard theoretical approaches to this problem cannot be applied. It has been recently shown in [Knight & Munday, 2016] how to apply the standard analytical methods to a piecewise-linear version of the Farey map with holes depending on the associated partition, but their results cannot be obtained in the general case we consider here. To overcome these difficulties we propose here to study approximations of the hole by means of real analytic functions. We introduce a particular family of approximations and study numerically the behavior of the escape rate for approximated holes with vanishing measure. The results suggest that the scaling of the escape rate depends on the “shape” of the approximation, and we show that this is a typical feature of systems with an indifferent fixed point, not an artifact of the particular family we consider.

12. Biswas, Soumyajyoti, and Lucas Goehring. "Mapping heterogeneities through avalanche statistics". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 377, no. 2136 (November 26, 2018): 20170388. http://dx.doi.org/10.1098/rsta.2017.0388.

Abstract:
Avalanche statistics of various threshold-activated dynamical systems are known to depend on the magnitude of the drive, or stress, on the system. Such dependences exist for earthquake size distributions, in sheared granular avalanches, laboratory-scale fracture and also in the outage statistics of power grids. In this work, we model threshold-activated avalanche dynamics and investigate the time required to detect local variations in the ability of model elements to bear stress. We show that the detection time follows a scaling law where the scaling exponents depend on whether the feature that is sought is either weaker, or stronger, than its surroundings. We then look at earthquake data from Sumatra and California, demonstrate the trade-off between the spatial resolution of a map of earthquake exponents (i.e. the b-values of the Gutenberg–Richter Law) and the accuracy of those exponents, and suggest a means to maximize both. This article is part of the theme issue 'Statistical physics of fracture and earthquakes'.
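The b-value maps discussed above rest on a simple estimator. A sketch of Aki's maximum-likelihood b-value estimate on a synthetic catalog follows; the completeness magnitude and catalog size are assumptions, and the standard error shows why fewer events per map cell mean less accurate exponents.

```python
# Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965).
import numpy as np

rng = np.random.default_rng(3)
Mc, b_true = 2.0, 1.0
# The G-R law implies exponentially distributed magnitudes above Mc.
mags = Mc + rng.exponential(scale=1 / (b_true * np.log(10)), size=5000)

b_hat = np.log10(np.e) / (mags.mean() - Mc)   # Aki estimator
b_err = b_hat / np.sqrt(len(mags))            # standard error shrinks with N
print(f"b = {b_hat:.2f} +/- {b_err:.2f}")     # resolution/accuracy trade-off
```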

13. Li, Wei, and Nasser M. Nasrabadi. "Invariant Object Recognition Based on a Neural Network of Cascaded RCE Nets". International Journal of Pattern Recognition and Artificial Intelligence 07, no. 04 (August 1993): 815–29. http://dx.doi.org/10.1142/s0218001493000418.

Abstract:
A neural network of cascaded Restricted Coulomb Energy (RCE) nets is constructed for the recognition of two-dimensional objects. A number of RCE nets are cascaded together to form a classifier where the overlapping decision regions are progressively resolved by a set of cascaded networks. Similarities among objects which have complex decision boundaries in the feature space are resolved by this multi-net approach. The generalization ability of an RCE net recognition system, referring to the ability of the system to correctly recognize a new pattern even when the number of learning exemplars is small, is increased by the proposed coarse-to-fine learning strategy. A feature extraction technique is used to map the geometrical shape information of an object into an ordered feature vector of fixed length. This feature vector is then used as an input to the neural network. The feature vector is invariant to object changes such as positional shift, rotation, scaling, illumination variance, variation of camera setup, perspective distortion, and noise distortion. Experimental results for recognition of several objects are also presented. A correct recognition rate of 100% was achieved for both the training and the testing input patterns.

14. Fan, Yuchen, Jiahui Yu, Ding Liu, and Thomas S. Huang. "Scale-Wise Convolution for Image Restoration". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10770–77. http://dx.doi.org/10.1609/aaai.v34i07.6706.

Abstract:
While scale-invariant modeling has substantially boosted the performance of visual recognition tasks, it remains largely under-explored in deep-network-based image restoration. Naively applying scale-invariant techniques (e.g., multi-scale testing, random-scale data augmentation) to image restoration tasks usually leads to inferior performance. In this paper, we show that properly modeling scale-invariance in neural networks can bring significant benefits to image restoration performance. Inspired by spatial-wise convolution for shift-invariance, "scale-wise convolution" is proposed to convolve across multiple scales for scale-invariance. In our scale-wise convolutional network (SCN), we first map the input image to the feature space and then build a feature pyramid representation via progressive bi-linear down-scaling. The feature pyramid is then passed to a residual network with scale-wise convolutions. The proposed scale-wise convolution learns to dynamically activate and aggregate features from different input scales in each residual building block, in order to exploit contextual information on multiple scales. In experiments, we compare the restoration accuracy and parameter efficiency of our model against many different variants of multi-scale neural networks. The proposed network with scale-wise convolution achieves superior performance on multiple image restoration tasks, including image super-resolution, image denoising, and image compression artifact removal. Code and models are available at: https://github.com/ychfan/scn_sr.
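A toy sketch of the pyramid-plus-scale-wise-mixing idea described above, in PyTorch; layer sizes are arbitrary, and this is a stand-in for SCN, not the released code (see the repository linked in the abstract for that).

```python
# Bilinearly down-scaled feature pyramid with one scale-wise mixing step.
import torch
import torch.nn as nn
import torch.nn.functional as F

head = nn.Conv2d(3, 32, 3, padding=1)      # map image to feature space
x = torch.randn(1, 3, 64, 64)
f = head(x)

# Progressive bilinear down-scaling builds the pyramid levels.
pyramid = [f]
for _ in range(2):
    pyramid.append(F.interpolate(pyramid[-1], scale_factor=0.5,
                                 mode="bilinear", align_corners=False))

# Each level aggregates its neighbors, resampled to its own resolution,
# through a shared convolution.
conv = nn.Conv2d(32, 32, 3, padding=1)
mixed = []
for i, level in enumerate(pyramid):
    neighbors = [pyramid[j] for j in (i - 1, i + 1) if 0 <= j < len(pyramid)]
    agg = level + sum(F.interpolate(n, size=level.shape[-2:],
                                    mode="bilinear", align_corners=False)
                      for n in neighbors)
    mixed.append(conv(agg))
print([m.shape for m in mixed])
```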

15. Dresp-Langley, Birgitta, and John M. Wandeto. "Human Symmetry Uncertainty Detected by a Self-Organizing Neural Network Map". Symmetry 13, no. 2 (February 10, 2021): 299. http://dx.doi.org/10.3390/sym13020299.

Abstract:
Symmetry in biological and physical systems is a product of self-organization driven by evolutionary processes, or of mechanical systems under constraints. Symmetry-based feature extraction or representation by neural networks may unravel the most informative contents in large image databases. Despite significant achievements of artificial intelligence in the recognition and classification of regular patterns, the problem of uncertainty remains a major challenge in ambiguous data. In this study, we present an artificial neural network that detects symmetry uncertainty states in human observers. To this end, we exploit a neural network metric in the output of a biologically inspired Self-Organizing Map: the Quantization Error (SOM-QE). Shape pairs with perfect geometric mirror symmetry but a non-homogeneous appearance, caused by local variations in hue, saturation, or lightness within and/or across the shapes in a given pair, produce, as shown here, longer choice response times (RT) for "yes" responses relative to symmetry. These data are consistently mirrored by the variations in the SOM-QE from unsupervised neural network analysis of the same stimulus images. The neural network metric is thus capable of detecting and scaling human symmetry uncertainty in response to patterns. This capacity is tightly linked to the metric's proven selectivity to local contrast and color variations in large and highly complex image data.

16. Vestena, K. M., D. R. Dos Santos, E. M. Oliveira Jr., N. L. Pavan, and K. Khoshelham. "A Weighted Closed-Form Solution for RGB-D Data Registration". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 403–9. http://dx.doi.org/10.5194/isprsarchives-xli-b3-403-2016.

Abstract:
Existing 3D indoor mapping methods for RGB-D data are predominantly point-based and feature-based. In most cases, the iterative closest point (ICP) algorithm and its variants are used for the pairwise registration process. Considering that the ICP algorithm requires a relatively accurate initial transformation and high overlap, a weighted closed-form solution for RGB-D data registration is proposed. In this solution, we weight and normalize the 3D points based on the theoretical random errors, and dual-number quaternions are used to represent the 3D rigid-body motion. Dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one step; it does not need good initial estimates and substantially decreases the demand for computing resources in contrast to the iterative method. Our method first exploits RGB information. We employ the scale-invariant feature transform (SIFT) for extracting, detecting, and matching features; it is able to detect and describe local features that are invariant to scaling and rotation. To detect and filter outliers, we use the random sample consensus (RANSAC) algorithm together with a measure of statistical dispersion, the interquartile range (IQR). Afterwards, a new RGB-D loop-closure solution is implemented based on the volumetric information between pairs of point clouds and the dispersion of the random errors; loop closure recognizes when the sensor revisits a region. Finally, a globally consistent map is created to minimize the registration errors via graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map an indoor environment with an absolute accuracy of around 1.5% of the traveled trajectory.

17. Guo, Hongjun, and Lili Chen. "An Image Similarity Invariant Feature Extraction Method Based on Radon Transform". International Journal of Circuits, Systems and Signal Processing 15 (April 8, 2021): 288–96. http://dx.doi.org/10.46300/9106.2021.15.33.

Abstract:
With the advancement of computer technology, image recognition has been applied ever more widely, and feature extraction is a core problem of image recognition. Image recognition classifies the processed image and identifies the category it belongs to: it selects the features to be extracted, measures the necessary parameters, and classifies according to the result. For better recognition, structural analysis and image description of the entire image are needed, enhancing image understanding through multi-object structural relationships. The essence of the Radon transform is to reconstruct the original N-dimensional image in N-dimensional space from the (N-1)-dimensional projection data of the image in different directions; the Radon transform of an image extracts features in the transform domain and maps the image space to a parameter space. This paper studies the inverse problem of the Radon transform of the upper semicircular curve with compact support, continuous on its support. When the center and radius of the circular curve vary within a certain range, the inversion problem has a unique solution when the Radon transform along the upper semicircular curve is known. To further improve the robustness and discrimination of the extracted features under image translation and proportional scaling, and to remove the impact caused by translation and proportion, this paper proposes an image similarity invariant feature extraction method based on the Radon transform, constructs a Radon moment invariant, and demonstrates the descriptive capacity of the shape feature extraction method by computing the intra-class ratio. The experimental results show that the method overcomes the flaws of cracks, overlapping, fuzziness, and fake edges that arise when features are extracted in isolation; it can accurately extract the corners of a digital image and is robust to noise. It effectively improves the accuracy and continuity of complex image feature extraction.
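A small sketch of Radon-domain feature extraction in the spirit of the abstract: project the image with scikit-image's radon transform, then normalize so the resulting signature is approximately unchanged under translation and uniform scaling. This is a simple illustrative invariant, not the paper's Radon moment construction.

```python
# Approximately translation- and scale-invariant Radon signature.
import numpy as np
from skimage.data import camera
from skimage.transform import radon, rescale

def radon_signature(img, n_angles=90):
    img = img.astype(float)
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = radon(img, theta=theta, circle=False)  # projections per angle
    sig = sino.max(axis=0)      # max over detector axis: translation-invariant
    return sig / sig.sum()      # normalization cancels uniform scaling

a = radon_signature(camera())
b = radon_signature(rescale(camera(), 0.5))       # half-size copy
print("max signature difference:", float(np.abs(a - b).max()))
```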

18. Yu, Jimin, and Wei Zhang. "Face Mask Wearing Detection Algorithm Based on Improved YOLO-v4". Sensors 21, no. 9 (May 8, 2021): 3263. http://dx.doi.org/10.3390/s21093263.

Abstract:
To solve the problems of low accuracy, poor real-time performance, and poor robustness caused by complex environments, this paper proposes a face mask recognition and standard-wearing detection algorithm based on an improved YOLO-v4. Firstly, an improved CSPDarkNet53 is introduced into the trunk feature extraction network, which reduces the computing cost of the network and improves the learning ability of the model. Secondly, an adaptive image scaling algorithm reduces computation and redundancy effectively. Thirdly, an improved PANet structure is introduced so that the network has more semantic information in the feature layer. Finally, a face mask detection data set is made according to the standard wearing of masks. Based on this deep learning object detection algorithm, a variety of evaluation indexes are compared to evaluate the effectiveness of the model. The comparisons show that the mAP of face mask recognition reaches 98.3% and the frame rate is high at 54.57 FPS, which is more accurate than existing algorithms.

19. Pinjarkar, Latika, Manisha Sharma, and Smita Selot. "Novel Relevance Feedback Approach for Color Trademark Recognition Using Optimization and Learning Strategy". Journal of Intelligent Systems 27, no. 1 (January 26, 2018): 67–79. http://dx.doi.org/10.1515/jisys-2017-0022.

Abstract:
The trademark registration process, present in all organizations nowadays, deals with recognition and retrieval of similar trademark images from trademark databases. Trademark retrieval is an important application area of content-based image retrieval. The main challenges in designing and developing this application are reducing the semantic gap, obtaining higher accuracy, and reducing computational complexity and, consequently, execution time. The proposed work focuses on these challenges. This paper proposes a relevance feedback system embedded with an optimization and unsupervised learning technique as the preprocessing stage for trademark recognition. The search space is reduced by using particle swarm optimization to optimize the database feature set, followed by clustering using a self-organizing map. The relevance feedback technique is implemented over this preprocessed feature set. Experimentation is done using the FlickrLogos-32 PLUS dataset. To introduce variations between the training and query images, transformations are applied to each query image, viz. rotation, scaling, and translation. The same query image is tested for various combinations of transformations. The proposed technique is invariant to these transformations, with significant performance as depicted in the results.

20. Smadbeck, Patrick, and Michael P. H. Stumpf. "Coalescent models for developmental biology and the spatio-temporal dynamics of growing tissues". Journal of The Royal Society Interface 13, no. 117 (April 2016): 20160112. http://dx.doi.org/10.1098/rsif.2016.0112.

Abstract:
Development is a process that needs to be tightly coordinated in both space and time. Cell tracking and lineage tracing have become important experimental techniques in developmental biology and allow us to map the fate of cells and their progeny. A generic feature of developing and homeostatic tissues that these analyses have revealed is that relatively few cells give rise to the bulk of the cells in a tissue; the lineages of most cells come to an end quickly. Computational and theoretical biologists/physicists have, in response, developed a range of modelling approaches, most notably agent-based modelling. These models seem to capture features observed in experiments, but can also become computationally expensive. Here, we develop complementary genealogical models of tissue development that trace the ancestry of cells in a tissue back to their most recent common ancestors. We show that, with both bounded and unbounded growth, simple but universal scaling relationships allow us to connect coalescent theory with the fractal growth models extensively used in developmental biology. Using our genealogical perspective, it is possible to study bulk statistical properties of the processes that give rise to tissues of cells, without the need for large-scale simulations.

21. Ball, Robin C., Marina Diakonova, and Robert S. MacKay. "Quantifying Emergence in Terms of Persistent Mutual Information". Advances in Complex Systems 13, no. 03 (June 2010): 327–38. http://dx.doi.org/10.1142/s021952591000258x.

Abstract:
We define Persistent Mutual Information (PMI) as the Mutual (Shannon) Information between the past history of a system and its evolution significantly later in the future. This quantifies how much past observations enable long-term prediction, which we propose as the primary signature of (Strong) Emergent Behavior. The key feature of our definition of PMI is the omission of an interval of "present" time, so that the mutual information between close times is excluded: this renders PMI robust to superposed noise or chaotic behavior or graininess of data, distinguishing it from a range of established Complexity Measures. For the logistic map, we compare predicted with measured long-time PMI data. We show that measured PMI data captures not just the period doubling cascade but also the associated cascade of banded chaos, without confusion by the overlayer of chaotic decoration. We find that the standard map has apparently infinite PMI, but with well-defined fractal scaling which we can interpret in terms of the relative information codimension. Whilst our main focus is in terms of PMI over time, we can also apply the idea to PMI across space in spatially-extended systems as a generalization of the notion of ordered phases.
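PMI is straightforward to estimate numerically for the logistic map mentioned above. The sketch below uses a crude histogram mutual-information estimator with an excluded "present" gap; the bin count, gap length, and parameter r are illustrative choices.

```python
# Histogram estimate of mutual information between past and future
# states of the logistic map, with an omitted "present" interval.
import numpy as np

def logistic_orbit(r, n, x0=0.3, burn=1000):
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def mutual_info(a, b, bins=32):
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

x = logistic_orbit(r=3.58, n=100_000)   # banded-chaos regime
gap = 64                                # omitted interval of "present" time
pmi = mutual_info(x[:-gap], x[gap:])
print(f"PMI estimate at lag {gap}: {pmi:.3f} nats")
```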

22. Lin, Zhiyang, Jihua Zhu, Zutao Jiang, Yujie Li, Yaochen Li, and Zhongyu Li. "Merging Grid Maps in Diverse Resolutions by the Context-based Descriptor". ACM Transactions on Internet Technology 21, no. 4 (July 22, 2021): 1–21. http://dx.doi.org/10.1145/3403948.

Abstract:
Building an accurate map is essential for autonomous robot navigation in environments without GPS. Compared with a single robot, a multiple-robot system performs much better in terms of accuracy, efficiency, and robustness for simultaneous localization and mapping (SLAM). As a critical component of multiple-robot SLAM, the problem of map merging still remains a challenge. To this end, this article casts it as a point set registration problem and proposes an effective map merging method based on context-based descriptors and correspondence expansion. It first extracts interest points from grid maps with the Harris corner detector. By exploiting neighborhood information of the interest points, it automatically calculates the maximum response radius as scale information to compute the context-based descriptor, which includes eigenvalues and normals computed from the local structure around each interest point. Then, it establishes initial matches of low precision by applying nearest neighbor search on the context-based descriptors. Further, it designs a scale-based correspondence expansion strategy to expand each initial match into a set of feature matches, from which one similarity transformation between two grid maps can be estimated by the Random Sample Consensus algorithm. Subsequently, a measure function formulated from the trimmed mean square error is utilized to confirm the best similarity transformation and accomplish coarse map merging. Finally, it utilizes the scaling trimmed iterative closest point algorithm to refine the initial similarity transformation so as to achieve accurate merging. As the proposed method considers scale information in the context-based descriptor, it is able to merge grid maps of diverse resolutions. Experimental results on real robot datasets demonstrate its superior accuracy and robustness over other related methods.
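The first step described above, Harris interest points on an occupancy grid, is nearly a one-liner in OpenCV; the map file name and thresholds below are placeholders.

```python
# Harris corner detection on an occupancy grid rendered as an 8-bit image.
import cv2
import numpy as np

grid = cv2.imread("grid_map.png", cv2.IMREAD_GRAYSCALE)
resp = cv2.cornerHarris(np.float32(grid), blockSize=2, ksize=3, k=0.04)

# Keep strong responses and collapse them to point coordinates.
ys, xs = np.where(resp > 0.01 * resp.max())
points = np.column_stack([xs, ys])
print(f"{len(points)} Harris interest points")
```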

23. Tax, David M. J., and Piotr Juszczak. "Kernel Whitening for One-Class Classification". International Journal of Pattern Recognition and Artificial Intelligence 17, no. 03 (May 2003): 333–47. http://dx.doi.org/10.1142/s021800140300240x.

Abstract:
In one-class classification one tries to describe a class of target data and to distinguish it from all other possible outlier objects. Obvious applications are areas where outliers are very diverse or very difficult or expensive to measure, such as in machine diagnostics or in medical applications. In order to have a good distinction between the target objects and the outliers, good representation of the data is essential. The performance of many one-class classifiers critically depends on the scaling of the data and is often harmed by data distributions in (nonlinear) subspaces. This paper presents a simple preprocessing method which actively tries to map the data to a spherical symmetric cluster and is almost insensitive to data distributed in subspaces. It uses techniques from Kernel PCA to rescale the data in a kernel feature space to unit variance. This transformed data can now be described very well by the Support Vector Data Description, which basically fits a hypersphere around the data. The paper presents the methods and some preliminary experimental results.
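A compact sketch of the preprocessing described above: embed the data with Kernel PCA, rescale each component to unit variance, then fit a hypersphere-style one-class model. The one-class SVM stands in for the Support Vector Data Description, and the kernel parameters are illustrative.

```python
# Kernel whitening: Kernel PCA followed by unit-variance rescaling.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.svm import OneClassSVM

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=2.0)
Z = kpca.fit_transform(X)
Z /= Z.std(axis=0)            # unit variance in kernel feature space

# Stand-in for Support Vector Data Description: a one-class SVM,
# which fits a hypersphere-like boundary around the whitened data.
occ = OneClassSVM(nu=0.05).fit(Z)
print("training inliers:", int((occ.predict(Z) == 1).sum()), "/", len(Z))
```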

24. Wang, Chengyou, Zhi Zhang, and Xiao Zhou. "An Image Copy-Move Forgery Detection Scheme Based on A-KAZE and SURF Features". Symmetry 10, no. 12 (December 3, 2018): 706. http://dx.doi.org/10.3390/sym10120706.

Abstract:
The popularity of image editing software has made it increasingly easy to alter the content of images. These alterations threaten the authenticity and integrity of images, causing misjudgments and possibly even affecting social stability. The copy-move technique is one of the most commonly used approaches for manipulating images. As a defense, the image forensics technique has become popular for judging whether a picture has been tampered with via copy-move, splicing, or other forgery techniques. In this paper, a scheme based on accelerated-KAZE (A-KAZE) and speeded-up robust features (SURF) is proposed for image copy-move forgery detection (CMFD). It is difficult for most keypoint-based CMFD methods to obtain sufficient points in smooth regions. To remedy this defect, the response thresholds for the A-KAZE and SURF feature detection stages are set to small values in the proposed method. In addition, a new correlation coefficient map is presented, in which the duplicated regions are demarcated, combining filtering and mathematical morphology operations. Numerous experiments are conducted to demonstrate the effectiveness of the proposed method in searching for duplicated regions and its robustness against distortions and post-processing techniques, such as noise addition, rotation, scaling, image blurring, joint photographic expert group (JPEG) compression, and hybrid image manipulation. The experimental results demonstrate that the performance of the proposed scheme is superior to that of other tested CMFD methods.

25. Hernández-Roig, Harold A., M. Carmen Aguilera-Morillo, and Rosa E. Lillo. "Functional Modeling of High-Dimensional Data: A Manifold Learning Approach". Mathematics 9, no. 4 (February 19, 2021): 406. http://dx.doi.org/10.3390/math9040406.

Abstract:
This paper introduces stringing via Manifold Learning (ML-stringing), an alternative to the original stringing based on Unidimensional Scaling (UDS). Our proposal is framed within a wider class of methods that map high-dimensional observations to the infinite space of functions, allowing the use of Functional Data Analysis (FDA). Stringing handles general high-dimensional data as scrambled realizations of an unknown stochastic process. Therefore, the essential feature of the method is a rearrangement of the observed values. Motivated by the linear nature of UDS and the increasing number of applications to biosciences (e.g., functional modeling of gene expression arrays and single nucleotide polymorphisms, or the classification of neuroimages) we aim to recover more complex relations between predictors through ML. In simulation studies, it is shown that ML-stringing achieves higher-quality orderings and that, in general, this leads to improvements in the functional representation and modeling of the data. The versatility of our method is also illustrated with an application to a colon cancer study that deals with high-dimensional gene expression arrays. This paper shows that ML-stringing is a feasible alternative to the UDS-based version. Also, it opens a window to new contributions to the field of FDA and the study of high-dimensional data.

26. Guo, Liang, Ruchi Sharma, Lei Yin, Ruodan Lu, and Ke Rong. "Automated competitor analysis using big data analytics". Business Process Management Journal 23, no. 3 (June 5, 2017): 735–62. http://dx.doi.org/10.1108/bpmj-05-2015-0065.

Abstract:
Purpose: Competitor analysis is a key component in operations management. Most business decisions are rooted in the analysis of rival products inferred from market structure. The purpose of this paper is to provide operations managers with an innovative tool to monitor a firm's market position and competitors in real time at higher resolution and lower cost than more traditional competitor analysis methods.
Design/methodology/approach: The authors combine the techniques of Web Crawler, Natural Language Processing, and Machine Learning algorithms with data visualization to develop a big data competitor-analysis system that informs operations managers about competitors and meaningful relationships among them. The authors illustrate the approach using the fitness mobile app business.
Findings: The study shows that the system supports operational decision making both descriptively and prescriptively. In particular, the innovative probabilistic topic modeling algorithm combined with conventional multidimensional scaling, product feature comparison, and market structure analyses reveals an app's position in relation to its peers. The authors also develop a user segment overlapping index based on users' social media data. The authors combine this new index with the product functionality similarity index to map indirect and direct competitors with and without user lock-in.
Originality/value: The approach improves on previous approaches by fully automating information extraction from multiple online sources. The authors believe this is the first system of its kind. With limited human intervention, the methodology can easily be adapted to different settings, giving quicker, more reliable real-time results. The approach is also cost effective for market analysis projects covering different data sources.

27. Mishra, K., A. Siddiqui, and V. Kumar. "A Comparative Assessment of Efficacy of Super Resolved Airborne Hyperspectral Outputs in Urban Material and Land Cover Information Extraction". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-5 (November 19, 2018): 653–58. http://dx.doi.org/10.5194/isprs-archives-xlii-5-653-2018.

Abstract:
Urban areas, despite being heterogeneous in nature, are characterized as mixed pixels in medium- to coarse-resolution imagery, which renders their mapping highly inaccurate. A detailed classification of urban areas therefore needs both high spatial and spectral resolution, marking the essentiality of different satellite data. Hyperspectral sensors with more than 200 contiguous bands over a narrow bandwidth of 1–10 nm can distinguish identical land use classes. However, such sensors possess low spatial resolution. As the exchange of rich spectral and spatial information is difficult at the hardware level, resolution enhancement techniques like super resolution (SR) hold the key. SR preserves the spectral characteristics and enables feature visualization at a higher spatial scale. Two SR algorithms, Anchored Neighbourhood Regression (ANR) and Sparse Regression and Natural Prior (SRP), have been executed on an airborne hyperspectral scene of the Advanced Visible/Near Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) for the mixed environment centred on Kankaria Lake in the city of Ahmedabad, thereby bringing the spatial resolution down from 8.1 m to 4.05 m. The generated super-resolved outputs have then been used to map ten urban material and land cover classes identified in the study area using supervised Spectral Angle Mapper (SAM) and Support Vector Machine (SVM) classification methods. Visual comparison and accuracy assessment on the basis of the confusion matrix and Pearson's Kappa coefficient revealed that the SRP super-resolved output classified using radial basis function (RBF) kernel-based SVM is the best outcome, thereby highlighting the superiority of SR over simple scaling-up and resampling approaches.

28. Mountjoy, Daniel N., Celestine A. Ntuen, Sharolyn A. Converse, and William P. Marshak. "Basing Non-Linear Displays on Vector Map Formats". Journal of Navigation 53, no. 1 (January 2000): 68–78. http://dx.doi.org/10.1017/s0373463399008620.

Abstract:
One approach to increasing the area coverage of small-screen map displays is to exploit the 'elasticity' of the electronic display through non-linear scaling. Although resolution is necessarily lost in areas of smaller scale, bitmap images lose important information when re-scaled. This can render features unrecognisable and text unreadable. Vector maps are more tolerant of non-linear scaling because the critical position information found in the vertices of lines is distorted without being lost. Examples of non-linear scaling are presented to demonstrate the effect on bitmap and vector images. Ongoing research examining the user performance benefits of non-linear displays is also described.
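The point about vector elasticity can be made concrete with a fisheye-style radial transform applied to polyline vertices: every vertex survives, merely displaced. The formula below is the standard Sarkar-Brown distortion; the focus point, gain, and sample polyline are illustrative.

```python
# Fisheye (non-linear) scaling of vector map vertices.
import numpy as np

def fisheye(vertices, center, gain=3.0):
    v = np.asarray(vertices, dtype=float) - center
    r = np.linalg.norm(v, axis=1, keepdims=True)
    rmax = r.max()
    # Magnify near the focus, compress toward the edge (Sarkar-Brown).
    scale = (gain + 1) / (gain * r / rmax + 1)
    return center + v * scale

road = np.array([[0, 0], [2, 1], [4, 1.5], [7, 3], [10, 8]])
print(np.round(fisheye(road, center=np.array([5.0, 2.0])), 2))
```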

29. Codilean, Alexandru T., Henry Munack, Timothy J. Cohen, Wanchese M. Saktura, Andrew Gray, and Simon M. Mudd. "OCTOPUS: an open cosmogenic isotope and luminescence database". Earth System Science Data 10, no. 4 (November 30, 2018): 2123–39. http://dx.doi.org/10.5194/essd-10-2123-2018.

Abstract:
We present a database of cosmogenic radionuclide and luminescence measurements in fluvial sediment. With support from the Australian National Data Service (ANDS) we have built infrastructure for hosting and maintaining the data at the University of Wollongong and making this available to the research community via an Open Geospatial Consortium (OGC)-compliant web service. The cosmogenic radionuclide (CRN) part of the database consists of 10Be and 26Al measurements in modern fluvial sediment samples from across the globe, along with ancillary geospatial vector and raster layers, including sample site, basin outline, digital elevation model, gradient raster, flow-direction and flow-accumulation rasters, atmospheric pressure raster, and CRN production scaling and topographic shielding factor rasters. Sample metadata are comprehensive and include all necessary information for the recalculation of denudation rates using CAIRN, an open-source program for calculating basin-wide denudation rates from 10Be and 26Al data. Further, all data have been recalculated and harmonised using the same program. The luminescence part of the database consists of thermoluminescence (TL) and optically stimulated luminescence (OSL) measurements in fluvial sediment samples from stratigraphic sections and sediment cores from across the Australian continent and includes ancillary vector and raster geospatial data. The database can be interrogated and downloaded via a custom-built web map service. More advanced interrogation and exporting to various data formats, including the ESRI Shapefile and Google Earth's KML, is also possible via the Web Feature Service (WFS) capability running on the OCTOPUS server. Use of open standards also ensures that data layers are visible to other OGC-compliant data-sharing services. OCTOPUS and its associated data curation framework provide the opportunity for researchers to reuse previously published but otherwise unusable CRN and luminescence data. This delivers the potential to harness old but valuable data that would otherwise be lost to the research community. OCTOPUS can be accessed at https://earth.uow.edu.au (last access: 28 November 2018). The individual data collections can also be accessed via the following DOIs: https://doi.org/10.4225/48/5a8367feac9b2 (CRN International), https://doi.org/10.4225/48/5a836cdfac9b5 (CRN Australia), and https://doi.org/10.4225/48/5a836db1ac9b6 (OSL & TL Australia).

30. Nishi, Hayato, and Yasushi Asami. "Bayesian Geographical Multi-Dimensional Scaling". Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-271-2019.

Abstract:
Multi-dimensional scaling (MDS) is a popular method of visualizing the similarity of individuals in a dataset. When dissimilarities between individuals in a dataset are measured, MDS projects these individuals into a (typically two- or three-dimensional) map. In this map, because similar individuals are projected to be close to one another, distances between individuals correspond to their dissimilarities. In other words, MDS makes a similarity map of a dataset.

Some of the dissimilarities and distances have a strong relation to geographical location. For example, time distances are similar to geographical distances, and regional features will be similar if the regions are close together. Therefore, it will be useful to compare the MDS projection and geographical locations. However, because MDS projection is not concerned with rotation, parallel translation, and similarity expansion, it might be difficult to compare the projection to the actual geographical locations. When geographically related similarities are visualized, projected locations should be bound to the geographical locations.

In this article, we propose Bayesian Geographical Multidimensional Scaling (BGMDS), in which geographical restrictions on projections are given from a statistical point of view. BGMDS gives not only geographically bound projections, but also incorporates the uncertainty of the projections.

31. Jiang, Bin. "New Paradigm in Mapping: A Critique on Cartography and GIS". Abstracts of the ICA 1 (July 15, 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-150-2019.

Abstract:
"Two important characteristics of maps should be noticed. A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness. If the map could be ideally correct, it would include, in a reduced scale, the map of the map; the map of the map, of the map; and so on, endlessly, a fact first noticed by Royce." Alfred Korzybski (1933)

As noted in the introductory quotation, a map was long ago seen as the map of the map, the map of the map, of the map, and so on endlessly. This recursive perspective on maps, however, has received little attention in cartography. Cartography, as a scientific discipline, is essentially founded on Euclidean geometry and Gaussian statistics, which deal with respectively regular shapes, and more or less similar things. It is commonly accepted that geographic features are not regular and that the Earth's surface is full of fractal or scaling or living phenomena: far more small things than large ones at different levels of scale. This paper argues for a new paradigm in mapping, based on fractal or living geometry and Paretian statistics, and, more critically, on the new conception of space, conceived and developed by Christopher Alexander, that space is neither lifeless nor neutral, but a living structure capable of being more living or less living. The fractal geometry is not limited to Benoit Mandelbrot's framework, but goes towards Christopher Alexander's living geometry and is based upon the third definition of fractal: A set or pattern is fractal if the scaling of far more small things than large ones recurs multiple times. Paretian statistics deals with far more small things than large ones, so it differs fundamentally from Gaussian statistics, which deals with more or less similar things. Under the new paradigm, I make several claims about maps and mapping: (1) Topology of geometrically coherent things, in addition to that of geometric primitives, enables us to see a scaling or fractal or living structure; (2) Under the third definition, all geographic features are fractal or living, given the right perspective and scope; (3) Exactitude is not truth, to paraphrase Henri Matisse, but the living structure is; and (4) Töpfer's law is not universal, but scaling law is. All these assertions are supported by evidence, drawn from a series of previous studies. This paper demands a monumental shift in perspective and thinking from what we have been used to on the legacy of cartography and GIS.

32. Degret, F., and S. Lespinats. "Circular background decreases misunderstanding of multidimensional scaling results for naive readers". MATEC Web of Conferences 189 (2018): 10002. http://dx.doi.org/10.1051/matecconf/201818910002.

Abstract:
Non-linear multidimensional scaling (NL-MDS) methods are widely used to give insight into the structure of a dataset. Such a technique displays a "map" of data points in a 2-dimensional space, and the reader is expected to have a natural understanding of the proximity relationships between items. In our experience, MDS is especially helpful as a support for collaboration between data analysts and specialists from other fields. Indeed, it often allows understanding of the main issues, major features, how to deal with the data, and so on. However, we observed that the classical rectangular display of the map causes confusion for non-specialists, and a long explanation is often required before reaching the fruitful step of the collaboration. The meaning (or rather, the absence of meaning) of the axes can be the subject of many questions and much skepticism from naive readers. Although it is hardly quantifiable, we observed that using a circle-shaped background for maps improves the understanding of the concept of data mapping by far. We present here subjective feedback that may support the practical contribution of NL-MDS to other scientific fields.
APA, Harvard, Vancouver, ISO, and other citation styles
34

Devi, K. R., and Herb Schwab. „High-resolution seismic signals from band-limited data using scaling laws of wavelet transforms". GEOPHYSICS 74, No. 2 (March 2009): WA143–WA152. http://dx.doi.org/10.1190/1.3077622.

The full text of the source
Annotation:
Time-scale spectra, obtained from wavelet transforms of seismic data, are useful in analyzing the local scaling properties of seismic signals. In particular, the wavelet transform modulus maxima (WTMM) spectra, obtained by following the local extrema of the wavelet transform along a constant-phase line, describe the characteristics of discontinuities such as interfaces. They also show a smooth behavior as a function of scale and thus allow us to derive local scaling laws. We use the scaling behavior of WTMM spectra to enhance the bandwidth of seismic data. An analysis of well-log scaling behaviors and of the seismic data shows that, whereas the WTMM spectrum of well logs at each interface exhibits a power-law behavior as a function of scale, the corresponding seismic signal spectrum shows a more complicated behavior arising from seismic wavelet effects. Under the assumption that the local well-log power-law behavior holds in general, a scaling law for seismic signals can be derived in terms of parameters that describe subsurface scaling effects and the seismic wavelet. A stable estimation of these parameters can be carried out simultaneously, as a function of time and over the seismic bandwidth, using the modified scaling law; no well-log information is needed to derive the seismic wavelet. The wavelet transforms can then be corrected for seismic wavelet effects and a high-resolution signal reconstructed. This reconstructed high-resolution signal can be used to map features that might not be obvious in the original seismic data, such as small faults, fractures, and fine-scale variations within channel margins.
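The core WTMM measurement is a log-log fit of modulus maxima against scale. The following sketch illustrates only that step, under stated assumptions (PyWavelets' cwt, a synthetic step discontinuity, and a crude windowed maximum in place of a proper maxima-line tracker); the fitted slope reflects the singularity's Hoelder exponent up to a normalization-dependent offset set by the CWT convention.

```python
import numpy as np
import pywt  # PyWavelets

# Signal with one step discontinuity at t0 -- a crude stand-in for an interface
n, t0 = 1024, 512
sig = np.zeros(n)
sig[t0:] = 1.0
sig += 0.01 * np.random.default_rng(2).normal(size=n)

scales = np.arange(2, 64)
coef, _ = pywt.cwt(sig, scales, "gaus1")  # first derivative-of-Gaussian wavelet

# Proxy for a WTMM maxima line: the modulus maximum in a window around
# the discontinuity, tracked across scales
win = 32
maxima = np.abs(coef[:, t0 - win : t0 + win]).max(axis=1)

# Power-law fit |W(a)| ~ a^m: the slope m reflects the Hoelder exponent
# of the singularity, shifted by a normalization-dependent constant
m, _ = np.polyfit(np.log(scales), np.log(maxima), 1)
print(f"fitted scaling slope m = {m:.2f}")
```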
APA, Harvard, Vancouver, ISO, and other citation styles
35

HARRISON, MARY ANN F., and YING-CHENG LAI. „CONTINUUM COUPLED-MAP APPROACH TO PATTERN FORMATION IN OSCILLATING GRANULAR LAYERS: ROBUSTNESS AND LIMITATION". International Journal of Bifurcation and Chaos 18, No. 06 (June 2008): 1627–43. http://dx.doi.org/10.1142/s0218127408021245.

The full text of the source
Annotation:
Continuum coupled maps have been proposed as a generic and universal class of models to understand pattern formation in oscillating granular layers. Such models usually involve two features: Temporal period doubling in local maps and spatial coupling. The models can generate various patterns that bear striking similarities to those observed in real experiments. Here we ask two questions: (1) How robust are patterns generated by continuum coupled maps? and (2) Are there limitations, at a quantitative level, to the continuum coupled-map approach? We address the first question by investigating the effects of noise and spatial inhomogeneity on patterns generated. We also propose a measure to characterize the sharpness of the patterns. This allows us to demonstrate that patterns generated by the model are robust to random perturbations in both space and time. For the second question, we investigate the temporal scaling behavior of the disorder function, which has been proposed to characterize experimental patterns in granular layers. We find that patterns generated by continuum coupled maps do not exhibit scaling behaviors observed in experiments, suggesting that the coupled map approach, while insightful at a qualitative level, may not yield behaviors that are of importance to pattern characterization at a more quantitative level.
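The model class is compact enough to state in a few lines. Below is a generic sketch of a continuum coupled map, not the authors' specific model: every lattice site iterates a local map with a period-doubling route (a logistic map here, as an assumed stand-in), followed by diffusive coupling to its four neighbours.

```python
import numpy as np

def step(u, r=3.5, eps=0.2):
    """One tick of a generic continuum coupled map: a logistic local map
    (period-doubling route to chaos) plus diffusive 4-neighbour coupling
    with periodic boundaries."""
    u = r * u * (1.0 - u)                       # temporal period doubling
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    return u + eps * lap / 4.0                  # spatial coupling

rng = np.random.default_rng(3)
u = 0.5 + 0.01 * rng.normal(size=(128, 128))    # nearly uniform initial layer
for _ in range(500):
    u = step(u)
print("pattern amplitude (std of the field):", round(float(u.std()), 4))
```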
APA, Harvard, Vancouver, ISO, and other citation styles
36

Nichols, J. D., and S. W. H. Cowley. „Magnetosphere-ionosphere coupling currents in Jupiter's middle magnetosphere: dependence on the effective ionospheric Pedersen conductivity and iogenic plasma mass outflow rate". Annales Geophysicae 21, No. 7 (31.07.2003): 1419–41. http://dx.doi.org/10.5194/angeo-21-1419-2003.

The full text of the source
Annotation:
Abstract. The amplitude and spatial distribution of the coupling currents that flow between Jupiter's ionosphere and middle magnetosphere, which enforce partial corotation on outward-flowing iogenic plasma, depend on the values of the effective Pedersen conductivity of the jovian ionosphere and the mass outflow rate of iogenic plasma. The values of these parameters are, however, very uncertain. Here we determine how the solutions for the plasma angular velocity and current components depend on these parameters over wide ranges. We consider two models of the poloidal magnetospheric magnetic field, namely the planetary dipole alone, and an empirical current sheet field based on Voyager data. Following work by Hill (2001), we obtain a complete normalized analytic solution for the dipole field, which shows in compact form how the plasma angular velocity and current components scale in space and in amplitude with the system parameters in this case. We then obtain an approximate analytic solution in similar form for a current sheet field in which the equatorial field strength varies with radial distance as a power law. A key feature of the model is that the current sheet field lines map to a narrow latitudinal strip in the ionosphere, at ≈ 15° co-latitude. The approximate current sheet solutions are compared with the results of numerical integrations using the full field model, for which a power law applies beyond ≈ 20 RJ, and are found to agree very well within their regime of applicability. A major distinction between the solutions for the dipole field and the current sheet concerns the behaviour of the field-aligned current. In the dipole model the direction of the current reverses at moderate equatorial distances, and the current system wholly closes if the model is extended to infinity in the equatorial plane and to the pole in the ionosphere. In the approximate current sheet model, however, the field-aligned current is unidirectional, flowing consistently from the ionosphere to the current sheet for the sense of the jovian magnetic field. Current closure must then occur at higher latitudes, on field lines outside the region described by the model. The amplitudes of the currents in the two models are found to scale with the system parameters in similar ways, though the scaling is with a somewhat higher power of the conductivity for the current sheet model than for the dipole, and with a somewhat lower power of the plasma mass outflow rate. The absolute values of the currents are also higher for the current sheet model than for the dipole for given parameters, by factors of ≈ 4 for the field-perpendicular current intensities, ≈ 10 for the total current flowing in the circuit, and ≈ 25 for the field-aligned current densities, factors which do not vary greatly with the system parameters. These results thus confirm that the conclusions drawn previously from a small number of numerical integrations using spot values of the system parameters are generally valid over wide ranges of the parameter values. Key words: Magnetospheric physics (current systems, magnetosphere-ionosphere interactions, planetary magnetospheres)
APA, Harvard, Vancouver, ISO, and other citation styles
37

Rossmanith, G., H. Modest, C. Räth, A. J. Banday, K. M. Górski, and G. Morfill. „Search for Non-Gaussianities in the WMAP Data with the Scaling Index Method". Advances in Astronomy 2011 (2011): 1–21. http://dx.doi.org/10.1155/2011/174873.

The full text of the source
Annotation:
In recent years, the non-Gaussianity and statistical isotropy of the Cosmic Microwave Background (CMB) were investigated with various statistical measures, first and foremost by means of the measurements of the WMAP satellite. In this paper, we focus on the analyses that were accomplished with a measure of local type, the so-called Scaling Index Method (SIM). The SIM is able to detect structural characteristics of a given data set and has proven to be highly valuable in CMB analysis. It was used for comparing the data set with simulations as well as with surrogates, which are full-sky maps generated by randomisation of previously selected features of the original map. During these investigations, strong evidence for non-Gaussianities as well as asymmetries and local features could be detected. In combination with the surrogates approach, the SIM detected the highest significances for non-Gaussianity to date.
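As usually defined in the literature, the scaling index of a point measures how the number of neighbours N(x, r) grows with radius, alpha = d log N / d log r. The sketch below is a minimal two-radius finite-difference estimate on synthetic 3-D point sets (a simplification: unweighted counts, fixed radii), which nonetheless reproduces the characteristic values alpha ≈ 1 for line-like and alpha ≈ 3 for volume-filling structures.

```python
import numpy as np
from scipy.spatial import cKDTree

def scaling_indices(points, r1=0.5, r2=1.0):
    """Scaling index per point: alpha = d log N(x, r) / d log r, estimated
    here with counts at two radii. Line-like structures give alpha ~ 1,
    sheets ~ 2, volume-filling clouds ~ 3."""
    tree = cKDTree(points)
    n1 = np.array([len(tree.query_ball_point(p, r1)) for p in points])
    n2 = np.array([len(tree.query_ball_point(p, r2)) for p in points])
    return (np.log(n2) - np.log(n1)) / (np.log(r2) - np.log(r1))

rng = np.random.default_rng(4)
line = np.c_[np.linspace(0, 20, 2000), np.zeros(2000), np.zeros(2000)]
cloud = rng.uniform(0, 6, size=(5000, 3))
print("line  mean alpha:", scaling_indices(line).mean())    # ~1
print("cloud mean alpha:", scaling_indices(cloud).mean())   # ~3 (edges lower it)
```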
APA, Harvard, Vancouver, ISO, and other citation styles
38

Blowers, Geoffrey H., and John Bacon-Shone. „On Detecting the Differences in Jazz: A Reassessment of Comparative Methods of Measuring Perceptual Veridicality". Empirical Studies of the Arts 12, No. 1 (January 1994): 41–58. http://dx.doi.org/10.2190/xgv5-frf6-56gl-l5lv.

The full text of the source
Annotation:
This article reports an extension of the jazz studies of Holbrook and Huber, which compared individuals' perceptions of music with its objective characteristics. One of their major findings was that a compositional approach to the data, i.e., one that used multiple discriminant or factor analysis, provided a better statistical fit to objective features such as style and key than did a decompositional method, which used a map derived by multidimensional scaling. They concluded that the compositional approach enables a more accurate assessment of perceptual veridicality. However, their studies confound the type of statistical analysis employed with the response mode allowed to the subjects, which draws the validity of their conclusions into question. The study reported here assesses the relative merits of the type of statistic and the mode of response separately. Its findings support the use of the repertory grid as a viable tool in musical perception studies, in which, under appropriate conditions, multiple discriminant analysis and multidimensional scaling are equally viable statistical procedures.
APA, Harvard, Vancouver, ISO, and other citation styles
39

Li, Xingdong, Hewei Gao, Fusheng Zha, Jian Li, Yangwei Wang, Yanling Guo, and Xin Wang. „Learning the Cost Function for Foothold Selection in a Quadruped Robot". Sensors 19, No. 6 (14.03.2019): 1292. http://dx.doi.org/10.3390/s19061292.

The full text of the source
Annotation:
This paper focuses on designing a cost function for selecting a foothold for a physical quadruped robot walking on rough terrain. The quadruped robot is modeled with Denavit–Hartenberg (DH) parameters, and a default foothold is then defined based on the model. A Time of Flight (TOF) camera is used to perceive terrain information and construct a 2.5D elevation map, on which the terrain features are detected. The cost function is defined as the weighted sum of several elements, including terrain features and features of the relative pose between the default foothold and the other candidates. It is nearly impossible to hand-code the weight vector of the function, so the weights are learned using Support Vector Machine (SVM) techniques, and the training data set is generated from the 2.5D elevation map of a real terrain under the guidance of experts. Four candidate footholds around the default foothold are randomly sampled, and the expert ranks the four candidates, rotating and scaling the view to see the terrain clearly. Lastly, the learned cost function is used to select a suitable foothold and drive the quadruped robot to walk autonomously across rough terrain with wooden steps. Compared to the approach with the original standard static gait, the proposed cost function shows better performance.
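Learning a weighted-sum cost from expert orderings is commonly reduced to classifying feature differences of (preferred, rejected) pairs, a ranking-SVM trick. The sketch below illustrates that reduction on synthetic data; the four feature names and the hidden "true" weights are invented for the demonstration and are not the paper's features or values.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)

# Hypothetical per-foothold features (stand-ins, not the paper's list):
# [slope, roughness, distance to default foothold, edge proximity]
def features(n):
    return rng.uniform(0.0, 1.0, size=(n, 4))

# Simulate expert preferences with a hidden "true" weight vector
w_true = np.array([2.0, 1.5, 1.0, 3.0])
A, B = features(500), features(500)
y = np.where(A @ w_true < B @ w_true, -1, 1)   # -1: expert prefers A (lower cost)

# Ranking-SVM reduction: classify the sign of the feature difference
clf = LinearSVC(fit_intercept=False, C=10.0, max_iter=10_000).fit(A - B, y)
w = clf.coef_[0]
print("learned weights (unit norm):", np.round(w / np.linalg.norm(w), 2))
print("true weights    (unit norm):", np.round(w_true / np.linalg.norm(w_true), 2))

# Foothold selection: the candidate with the lowest learned cost wins
candidates = features(4)
print("selected foothold:", np.argmin(candidates @ w))
```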
APA, Harvard, Vancouver, ISO, and other citation styles
40

Aguado, P. L., J. P. Del Monte, R. Moratiel, and A. M. Tarquis. „Spatial Characterization of Landscapes through Multifractal Analysis of DEM". Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/563038.

The full text of the source
Annotation:
Landscape evolution is driven by abiotic, biotic, and anthropic factors. The interactions among these factors and their influence at different scales create a complex dynamic. Landscapes have been shown to exhibit numerous scaling laws, from Horton's laws to more sophisticated scalings of heights in topography and of river network topology. Such scaling and multiscaling analysis has the potential to characterise a landscape in terms of the statistical signature of the selected measure. The study zone is a matrix obtained from a digital elevation model (DEM) (map 10 × 10 m, and height 1 m) that corresponds to a region homogeneous with respect to soil characteristics and climatology, known as "Monte El Pardo", although the water level of a reservoir and the topography play a major role in its organization and evolution. We investigated whether the multifractal analysis of a DEM shows common features that can be used to reveal the underlying patterns and information associated with the landscape of the DEM mapping, and we studied the influence of the water level of the reservoir on the applied analysis. The results show that the multifractal approach with mean absolute gradient data is a useful tool for analysing the topography represented by the DEM.
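The computational core of such an analysis is the moment (partition-function) method: coarse-grain a normalised measure over boxes of increasing size and fit the mass exponents tau(q) from log-log slopes. The sketch below applies this to the paper's choice of measure, the mean absolute gradient, but on a synthetic surface; for a simple (mono)fractal 2-D measure one expects tau(q) near 2(q - 1).

```python
import numpy as np

rng = np.random.default_rng(6)
dem = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)  # toy surface

# The paper's measure: mean absolute gradient, normalised to a probability measure
gy, gx = np.gradient(dem)
mu = np.abs(gx) + np.abs(gy)
mu /= mu.sum()

def tau(q, mu, sizes=(2, 4, 8, 16, 32)):
    """Mass exponent tau(q): slope of log chi(q, s) vs log(s/n), where
    chi(q, s) = sum over s-by-s boxes of (box measure)^q."""
    n = mu.shape[0]
    logs, logchi = [], []
    for s in sizes:
        boxes = mu.reshape(n // s, s, n // s, s).sum(axis=(1, 3))
        boxes = boxes[boxes > 0]
        logs.append(np.log(s / n))
        logchi.append(np.log((boxes ** q).sum()))
    return np.polyfit(logs, logchi, 1)[0]

for q in (0.0, 2.0, 3.0):
    print(f"tau({q}) = {tau(q, mu):+.2f}")   # a simple measure gives ~ 2(q - 1)
```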
APA, Harvard, Vancouver, ISO, and other citation styles
41

González-Bravo, M. I., and Arjola Mecaj. „Structural and Evolutionary Patterns of Companies in a Financial Distress Situation". Advances in Decision Sciences 2011 (03.08.2011): 1–28. http://dx.doi.org/10.1155/2011/928716.

The full text of the source
Annotation:
The present paper studies the evolution of a set of USA firms during the years 1993–2002. The firms that faced a difficult economic and financial situation in 1993 were considered to be in a distress situation. The aim of this study is to explore whether the evolution of this situation depends on the initial features of the distress or on certain characteristics of the firms. If the evolution is independent of both, management decisions become crucial in critical times. For the analysis we used a Multidimensional Scaling methodology in which the firms are represented in a consensus map according to symptom variables, reaction variables, and recovery variables.
APA, Harvard, Vancouver, ISO, and other citation styles
42

MELCHIONNA, SIMONE, MARIA G. FYTA, EFTHIMIOS KAXIRAS, and SAURO SUCCI. „EXPLORING DNA TRANSLOCATION THROUGH A NANOPORE VIA A MULTISCALE LATTICE-BOLTZMANN MOLECULAR-DYNAMICS METHODOLOGY". International Journal of Modern Physics C 18, No. 04 (April 2007): 685–92. http://dx.doi.org/10.1142/s0129183107010942.

The full text of the source
Annotation:
A multiscale approach is used to simulate the translocation of DNA through a nanopore. Within this scheme, the interactions of the molecule with the surrounding fluid (solvent) are explicitly taken into account. By generating polymers of various initial configurations and lengths, we map the probability distributions of the passage times of the DNA through the nanopore. A scaling-law behavior for the most probable of these times with respect to length is derived and shown to exhibit an exponent that is in good agreement with the experimental findings. The essential features of the DNA dynamics as it passes through the pore are explored.
APA, Harvard, Vancouver, ISO, and other citation styles
43

Tenreiro Machado, J. A., and António M. Lopes. „Analysis of Forest Fires by means of Pseudo Phase Plane and Multidimensional Scaling Methods". Mathematical Problems in Engineering 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/575872.

The full text of the source
Annotation:
Forest fire dynamics are often characterized by the absence of a characteristic length scale, long-range correlations in space and time, and long memory, features also associated with fractional-order systems. In this paper a public-domain forest fire catalogue, containing information on events in Portugal and covering the period from 1980 up to 2012, is tackled. The events are modelled as time series of Dirac impulses with amplitude proportional to the burnt area. The time series are viewed as the system output and are interpreted as a manifestation of the system dynamics. In the first phase we use the pseudo phase plane (PPP) technique to describe the forest fire dynamics. In the second phase we use multidimensional scaling (MDS) visualization tools. The PPP allows the representation of forest fire dynamics in a two-dimensional space, by taking time series representative of the phenomena. The MDS approach generates maps in which objects that are perceived to be similar to each other form clusters. The results are analysed in order to extract relationships among the data and to better understand forest fire behaviour.
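A pseudo phase plane is just a delay embedding: the series is plotted against a delayed copy of itself, (x(t), x(t + tau)). The sketch below builds a synthetic impulse catalogue with heavy-tailed amplitudes and draws its PPP; the delay is fixed by hand here, whereas in practice it is usually chosen from the autocorrelation or mutual-information minimum.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# Synthetic catalogue: 33 years of daily records; rare events carry
# heavy-tailed "burnt area" amplitudes (Dirac-like impulses)
n_days = 33 * 365
x = np.zeros(n_days)
events = rng.random(n_days) < 0.01
x[events] = rng.pareto(1.2, events.sum()) + 1.0

# Light smoothing so the trajectory is visible on the plane
xs = np.convolve(x, np.ones(30) / 30, mode="same")

tau = 90  # delay in days, fixed here; usually chosen from autocorrelation
plt.plot(xs[:-tau], xs[tau:], lw=0.3)
plt.xlabel("x(t)")
plt.ylabel(f"x(t + {tau})")
plt.title("Pseudo phase plane of a synthetic fire series")
plt.show()
```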
APA, Harvard, Vancouver, ISO, and other citation styles
44

Khandakar, Amith, Muhammad E. H. Chowdhury, Monzure Khoda Kazi, Kamel Benhmed, Farid Touati, Mohammed Al-Hitmi, and Antonio Jr S. P. Gonzales. „Machine Learning Based Photovoltaics (PV) Power Prediction Using Different Environmental Parameters of Qatar". Energies 12, No. 14 (19.07.2019): 2782. http://dx.doi.org/10.3390/en12142782.

The full text of the source
Annotation:
Photovoltaic (PV) output power is highly sensitive to many environmental parameters, and the power produced by PV systems is significantly affected by harsh environments. The annual PV power density of around 2000 kWh/m² in the Arabian Peninsula is an exploitable wealth of energy. These countries plan to increase the contribution of power from renewable energy (RE) over the years and, due to its abundance, focus on solar energy. Evaluating and analyzing PV performance, in terms of predicting the output PV power with low error, demands investigation of the effects of the relevant environmental parameters on that performance. In this paper, the authors study the effects of the relevant environmental parameters, such as irradiance, relative humidity, ambient temperature, wind speed, PV surface temperature, and accumulated dust, on the output power of the PV panel. The calibration of several sensors for an in-house-built PV system is described. Several multiple regression models and artificial neural network (ANN)-based prediction models were trained and tested to forecast the hourly power output of the PV system. The ANN models with all the features, with features selected using correlation feature selection (CFS), and with features selected using relief feature selection (ReliefF) were found to successfully predict PV output power with Root Mean Square Errors (RMSE) of 2.1436, 6.1555, and 5.5351, respectively. Two different bias-calculation techniques were used to evaluate the instances of biased prediction, which can be utilized to reduce bias and improve accuracy. The ANN model outperforms other regression models, such as a linear regression model, an M5P decision tree, and a Gaussian process regression (GPR) model. This will make a noteworthy contribution to scaling PV deployment in countries like Qatar and to increasing the share of PV power in national power production.
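The pipeline shape (feature selection, then an ANN regressor, scored by RMSE) is easy to prototype. The sketch below uses synthetic data and a plain absolute-correlation ranking as a stand-in for CFS (scikit-learn ships neither CFS nor ReliefF); the six input names merely echo the parameters listed above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(8)
n = 2000
# Hypothetical inputs echoing the parameters above: irradiance, humidity,
# ambient T, wind speed, panel T, dust (synthetic, arbitrary units)
X = rng.uniform(0.0, 1.0, size=(n, 6))
power = 5.0 * X[:, 0] + 0.5 * X[:, 4] - 1.0 * X[:, 5] + 0.1 * rng.normal(size=n)

# Correlation-based ranking as a stand-in for CFS: keep the k features
# most correlated (in absolute value) with the target
corr = np.abs([np.corrcoef(X[:, j], power)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(corr)[-3:]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], power, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, ann.predict(X_te)) ** 0.5
print(f"held-out RMSE of the hourly power forecast: {rmse:.3f}")
```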
APA, Harvard, Vancouver, ISO, and other citation styles
45

Moore, Antoni B., and Bin Jiang. „Multivariate thematic mapping using fractal glyphs". Abstracts of the ICA 1 (15.07.2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-256-2019.

The full text of the source
Annotation:
Abstract. Cartographers often face the task of depicting multivariate data on a single map. As spatial data gets ever more voluminous and, in particular, complex (multifarious), this challenge will have to be overcome with increasing regularity. Many solutions have been implemented in answer to situations like this: choropleths with trivariate colour or texture schemes, or point symbols in the form of star plots, ray glyphs or, more naturalistically, Chernoff faces (all of which could be sized proportionally to a further variable).

Point symbols with a naturalistic basis have a mimetic appearance that makes for unambiguous communication of data. Chernoff faces in particular work well because of their resemblance to, and the human ability in reading, human faces. Importantly, facial features are easily linked with data through manipulation of their size, position or orientation (e.g. small to large nose; frowny to smiley mouth).

Computer-generated fractals offer further potential solutions to the multivariate challenge in the naturalistic category. Examples such as the Barnsley fern leaf exhibit the fractal property of self-similar geometry over multiple spatial scales to create a realistic fern appearance. Fundamentally, the fern figure is under the full control of a few numerical parameters that are simple to link with underlying attribute data. If these fractal glyphs are plotted as points on a map, then we have a novel and potentially rich basis for multivariate mapping to address the big data challenge.

We can generate a Barnsley fern leaf fractal using the Iterated Function System (IFS), driven by parameters for geometric operators: displacement, rotation and scaling (Table 1). Each of these transforms can be linked with a variable, which is the basis for multivariate representation. Figure 1 illustrates how the appearance of the Barnsley fern is affected by scaling, overall orientation and the angle of the fern fronds. In Table 1, cells 2a and 2d are used for scaling, 2b for orientation and 3d/4d for frond angle.

Our demonstration example is based on "natural cities" (and within them, "natural streets" that define the city outline, Figure 2a), which have been calculated through the head/tail breaks method (Jiang, 2013) to capture a size-based emergent organic hierarchy. For natural streets, street network junction points have been triangulated, and the triangular edges divided into two groups: those longer than the mean edge length (the "head") and those shorter (the "tail"). This process is recursively applied to the head, each time adding a tier to the street size hierarchy. Natural cities are calculated through a similar process, but on the basis of mean settlement area. In Figure 2b, natural city parameters have been applied to the top 11 New Zealand cities in the following way: scaling has been linked to the number of breaks (this is the ht-index, which indicates how complex the city network form is; Jiang and Yin, 2014), the angle of the fronds to the percentage size of the head, and the overall orientation to the North Island (pointing right) and South Island (left).

Rigorous testing needs to take place to measure the effect of fractal multivariate symbols compared with conventional cartographic means of representing the same data (which may include separate variable maps and the multivariate conventions listed above). The usability of fractally enhanced maps can be assessed along efficiency (how long it takes to perform a specific map task), effectiveness (how correctly that task is performed) and satisfaction (covering the ease, engagement and enjoyment of the map user in performing map-based tasks).

Beyond multivariate mapping, these fractal glyphs are being investigated as one of the building blocks in generating data-based artworks (through the use of convolutional neural networks for style transfer; Moore and Jiang, 2017). This symbolisation based on self-similar graphics is an extension of a fractally oriented vision for cartography (Jiang, 2018). It also addresses part of the current cartography and Big Data agenda on representing spatial Big Data using artworks (Robinson et al., 2017).
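The abstract's Table 1 (its IFS coefficient table) is not reproduced here, so the sketch below uses the classic published Barnsley-fern coefficients and exposes scale and orientation as the two data-driven hooks the paper describes; linking them to real attribute values (e.g. the ht-index) is left as an assumption of the demonstration.

```python
import numpy as np
import matplotlib.pyplot as plt

def barnsley_fern(n=50_000, scale=1.0, orientation_deg=0.0, seed=0):
    """Chaos-game rendering of the Barnsley fern IFS (classic published
    coefficients). 'scale' and 'orientation_deg' are the hooks a data-
    driven glyph would control with attribute values."""
    # rows: (a, b, c, d, e, f, p) for x' = a x + b y + e, y' = c x + d y + f
    maps = np.array([
        [ 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01],   # stem
        [ 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85],   # ever-smaller leaflets
        [ 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07],   # left frond
        [-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07],   # right frond
    ])
    rng = np.random.default_rng(seed)
    pts = np.empty((n, 2))
    xx = yy = 0.0
    for i in range(n):
        a, b, c, d, e, f, _ = maps[rng.choice(4, p=maps[:, 6])]
        xx, yy = a * xx + b * yy + e, c * xx + d * yy + f
        pts[i] = xx, yy
    t = np.radians(orientation_deg)                       # glyph-level rotation
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return scale * pts @ rot.T                            # glyph-level scaling

pts = barnsley_fern(scale=1.0, orientation_deg=20.0)
plt.scatter(pts[:, 0], pts[:, 1], s=0.05, color="green")
plt.axis("equal")
plt.axis("off")
plt.show()
```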
APA, Harvard, Vancouver, ISO, and other citation styles
46

Fressengeas, Claude, Benoît Beausir, Christophe Kerisit, Anne-Laure Helbert, Thierry Baudin, François Brisset, Marie-Hélène Mathon, Rémy Besnard, and Nathalie Bozzolo. „On the evaluation of dislocation densities in pure tantalum from EBSD orientation data". Matériaux & Techniques 106, No. 6 (2018): 604. http://dx.doi.org/10.1051/mattech/2018058.

The full text of the source
Annotation:
We analyze measurements of dislocation densities carried out independently by several teams using three different methods on orientation maps obtained by Electron Back-Scattered Diffraction on commercially pure tantalum samples in three different microstructural states. The characteristic aspects of the three methods (the Kernel average method, the Dillamore method, and the determination of the lattice-curvature-induced Nye's tensor component fields) are reviewed and their results compared. One of the main features of the uncovered dislocation density distributions is their strong heterogeneity over the analyzed samples. Fluctuations in the dislocation densities, amounting to several times their base level and scaling as power laws of their spatial frequency, are observed along grain boundaries and, to a lesser degree, along sub-grain boundaries. As a result of such scale invariance, defining an average dislocation density over a representative volume element is hardly possible, which leads to questioning the pertinence of such a notion. Field methods that map the dislocation density distributions over the samples therefore appear to be mandatory.
APA, Harvard, Vancouver, ISO, and other citation styles
47

Ibrahim, Najihah, Fadratul Hafinaz Hassan, Nor Muzlifah Mahyuddin, and Noorhazlinda Abd Ra. „Cellular Automaton based Fire Spreading Simulation in Closed Area: Clogging Region Detection". International Journal of Engineering & Technology 7, No. 4.44 (01.12.2018): 37. http://dx.doi.org/10.14419/ijet.v7i4.44.26859.

The full text of the source
Annotation:
Fire-spreading simulation is a visualization technique used to re-enact or envision fire incidents, both to plan post-incident responses and to analyse the incidents for post-mortem purposes. Several current studies of fire-spreading incidents construct simulations focused on fire development, smoke control, the prediction of temperature distribution during fire spreading, emergency response plans, and post-fire damage assessment. However, more features need to be explored in fire-spreading simulation, including the pedestrian movement in the affected area, to inform future space design, arrangement, and structural improvements that affect human safety and to justify and predict pedestrian survival rates in panic situations. Hence, this research focuses on realistic scaling of the spatial layout and implements a Cellular Automata (CA) approach that imitates near-realistic self-organizing pedestrian movement and fire-spreading characteristics at the microstructure level; a heat map of the affected area is then designed to show the clogging regions in the spatial layout and to support a reliable prediction of the pedestrian survival rate. This clogging-region mapping is useful for identifying the existing issues that lead to high casualties. In the experiments, the heat map of the affected area showed heavy congestion, specifically near the ingress/egress points and narrow pathways, which reduced the pedestrian flow rate and caused 75% of the 352 pedestrians in the spatial layout to burn and die during the fire simulation: evacuating the closed-area building took 43.85 seconds longer than the total fire-spreading time (13.42 seconds).
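A fire-spreading CA reduces to a handful of local update rules. The sketch below is a deliberately minimal grid model (burning cells ignite flammable 4-neighbours with some probability, then burn out; periodic boundaries for brevity) and does not include the paper's pedestrian dynamics or spatial-layout scaling.

```python
import numpy as np

FUEL, BURNING, BURNT = 1, 2, 3
rng = np.random.default_rng(9)

def step(grid, p_spread=0.6):
    """One CA tick: burning cells ignite flammable 4-neighbours with
    probability p_spread, then burn out (periodic boundaries for brevity)."""
    burning = grid == BURNING
    neigh = (np.roll(burning, 1, 0) | np.roll(burning, -1, 0) |
             np.roll(burning, 1, 1) | np.roll(burning, -1, 1))
    ignite = (grid == FUEL) & neigh & (rng.random(grid.shape) < p_spread)
    out = grid.copy()
    out[burning] = BURNT
    out[ignite] = BURNING
    return out

grid = np.full((60, 60), FUEL)
grid[30, 30] = BURNING                     # ignition point
for t in range(500):
    grid = step(grid)
    if not (grid == BURNING).any():
        break
print(f"fire out after {t + 1} ticks; {(grid == BURNT).mean():.0%} of cells burnt")
```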
APA, Harvard, Vancouver, ISO, and other citation styles
48

Fedi, Maurizio, and Mahmoud Ahmed Abbas. „A fast interpretation of self-potential data using the depth from extreme points method". GEOPHYSICS 78, No. 2 (01.03.2013): E107–E116. http://dx.doi.org/10.1190/geo2012-0074.1.

The full text of the source
Annotation:
We present a fast method to interpret self-potential data: the depth from extreme points (DEXP) method. This is an imaging method transforming self-potential data, or their derivatives, into a quantity proportional to the source distribution. It is based on upward continuation of the field to a number of altitudes and then multiplying the continued data by a scaling law of those altitudes. The scaling law is a power law of the altitudes, with an exponent equal to half of the structural index, a source parameter related to the type of source. The method is self-consistent because the structural index is determined by analyzing the scaling function, defined as the derivative of the logarithm of the self-potential (or of its nth derivative) with respect to the logarithm of the altitudes. The DEXP method therefore needs no a priori information on the self-potential sources and yields effective information about their depth and shape/typology. Important features of the DEXP method are its high resolving power and stability, resulting from the combined effect of a stable operator (upward continuation) and a high-order differentiation operator. We tested how to estimate the depth to the source in two ways: (1) at the positions of the extreme points in the DEXP-transformed map and (2) at the intersection of the lines of the absolute values of the potential or of its derivative (geometrical method). The method was demonstrated using synthetic data of isolated sources and a multisource model. It is particularly suited to handling noisy data, because it is stable even when high-order derivatives of the self-potential are used. We discuss some real data sets: Malachite Mine, Colorado (USA), the Sariyer area (Turkey), and the Bender area (India). The estimated depths and structural indices agree well with the known information.
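The two core operations (upward continuation and power-law scaling) can be demonstrated in a few lines. The sketch below is a synthetic 1-D illustration under stated assumptions: a pole-like source whose potential falls off as 1/r, taken here to have structural index N = 1, continued upward via the FFT relation (multiplying the spectrum by e^(-|k|h)) and scaled by h^(N/2); the maximum of the transformed field then sits at an altitude equal to the source depth.

```python
import numpy as np

# Synthetic self-potential profile of a pole-like source at depth z0:
# V(x) = K / sqrt(x^2 + z0^2); structural index N = 1 assumed for a pole
x = np.linspace(-500.0, 500.0, 1001)        # profile coordinate, metres
z0, K = 60.0, 1.0e3
V = K / np.hypot(x, z0)

# Upward continuation via the FFT: multiply the spectrum by exp(-|k| h)
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
def continue_up(field, h):
    return np.fft.ifft(np.fft.fft(field) * np.exp(-np.abs(k) * h)).real

# DEXP transform: scale each continued profile by h^(N/2); the maximum
# over altitude then images the source depth
N = 1.0
alts = np.arange(5.0, 200.0, 5.0)
dexp = np.array([continue_up(V, h) * h ** (N / 2.0) for h in alts])
i_alt, i_x = np.unravel_index(np.argmax(dexp), dexp.shape)
print(f"DEXP estimate: depth {alts[i_alt]:.0f} m at x = {x[i_x]:.0f} m "
      f"(true: {z0:.0f} m at x = 0 m)")
```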
APA, Harvard, Vancouver, ISO, and other citation styles
49

Arora, Nikhil, Connor Stone, Stéphane Courteau, and Thomas H. Jarrett. „MaNGA galaxy properties – I. An extensive optical, mid-infrared photometric, and environmental catalogue". Monthly Notices of the Royal Astronomical Society 505, No. 3 (19.05.2021): 3135–56. http://dx.doi.org/10.1093/mnras/stab1430.

The full text of the source
Annotation:
ABSTRACT We present an extensive catalogue of non-parametric structural properties derived from optical and mid-infrared imaging for 4585 galaxies from the MaNGA survey. DESI and Wide-field Infrared Survey Explorer (WISE) imaging are used to extract surface brightness profiles in the g, r, z, W1, W2 photometric bands. Our optical photometry takes advantage of the automated algorithm autoprof and probes surface brightnesses that typically reach below 29 mag arcsec⁻² in the r-band, while our WISE photometry achieves 28 mag arcsec⁻² in the W1-band. Neighbour density measures and central/satellite classifications are also provided for a large subsample of the MaNGA galaxies. Highlights of our analysis of galaxy light profiles include (i) an extensive comparison of galaxian structural properties that illustrates the robustness of non-parametric extraction of light profiles over parametric methods; (ii) the ubiquity of bimodal structural properties, suggesting the existence of galaxy families in multiple dimensions; and (iii) an appreciation that structural properties measured relative to total light, regardless of the fractional levels, are uncertain. We study galaxy scaling relations based on photometric parameters, and present detailed comparisons with literature and theory. Salient features of this analysis include the near-constancy of the slope and scatter of the size–luminosity and size–stellar mass relations for late-type galaxies with wavelength, and the saturation of central surface density, measured within 1 kpc, for elliptical galaxies with $M_* \gt 10^{10.7}\, {\rm M}_{\odot}$ (corresponding to $\Sigma_1 \simeq 10^{10}\, {\rm M}_{\odot}\, {\rm kpc}^{-2}$). The multiband photometry, environmental parameters, and structural scaling relations presented are useful constraints for stellar population and galaxy formation models.
APA, Harvard, Vancouver, ISO, and other citation styles
50

Chang, Jae Seung, Yong Min Ahn, Han Young Yu, Hye Jean Park, Kyu Young Lee, Se Hyun Kim, and Yong Sik Kim. „Exploring Clinical Characteristics of Bipolar Depression: Internal Structure of the Bipolar Depression Rating Scale". Australian & New Zealand Journal of Psychiatry 43, No. 9 (01.01.2009): 830–37. http://dx.doi.org/10.1080/00048670903107666.

The full text of the source
Annotation:
Objective: Due to its pleomorphic phenomenology, the clinical features of bipolar depression are difficult to assess. The objective of the present study was therefore to explore the internal structure of the Bipolar Depression Rating Scale (BDRS) in terms of the phenomenological characteristics of bipolar depression. Methods: Sixty patients with DSM-IV bipolar depression completed the BDRS, the depression and excitement subscales of the Positive and Negative Syndrome Scale (PANSS-D and PANSS-E), the 17-item Hamilton Depression Rating Scale, the Montgomery–Åsberg Depression Rating Scale, the Young Mania Rating Scale (YMRS), and the Drug-Induced Extrapyramidal Symptoms Scale. The internal structure of the BDRS was explored through hierarchical cluster analysis (HCA) using Ward's method and multidimensional scaling (MDS). Results: From the 20-item BDRS data, the HCA yielded two symptom clusters. The first cluster included 12 items of conventional depressive symptoms. The second cluster included eight items of mixed symptoms. The MDS identified a depressive–mixed dimension. The depressive symptom cluster showed a more cohesive and conglomerate cluster structure on the MDS map than the mixed symptom cluster. After controlling for the effects of treatment-emergent extrapyramidal symptoms, strong positive correlations were observed between the BDRS and other depression rating scales, and the BDRS also correlated weakly with the YMRS and the PANSS-E. Conclusions: The internal structure of the BDRS appears to be sensitive to complex features of bipolar depression. Hence, the BDRS may have an advantage in evaluating clinical changes in patients with bipolar depression within the therapeutic process.
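The two analysis steps named here, Ward-linkage hierarchical clustering of scale items and an MDS map built from the same inter-item distances, can be sketched generically. The code below runs them on synthetic ratings with two planted item groups (a stand-in, not the BDRS data); the 12/8 split of the synthetic items simply mirrors the cluster sizes reported above, and Ward linkage is applied to correlation distances, a common if not strictly Euclidean choice.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(10)
# Synthetic ratings: 60 patients x 20 items with two planted item groups
latent = rng.normal(size=(60, 2))
loadings = np.zeros((20, 2))
loadings[:12, 0] = 1.0    # items 1-12: "depressive-like" group
loadings[12:, 1] = 1.0    # items 13-20: "mixed-like" group
ratings = latent @ loadings.T + 0.5 * rng.normal(size=(60, 20))

# Hierarchical cluster analysis of the items with Ward's linkage
item_dist = pdist(ratings.T, metric="correlation")
labels = fcluster(linkage(item_dist, method="ward"), t=2, criterion="maxclust")
print("item cluster labels:", labels)

# MDS map of the items from the same inter-item distances
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(squareform(item_dist))
print("map coordinates of item 1:", np.round(coords[0], 2))
```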
APA, Harvard, Vancouver, ISO, and other citation styles