To see the other types of publications on this topic, follow the link: Depth level-sets.

Journal articles on the topic "Depth level-sets"


Consult the top 50 journal articles for your research on the topic "Depth level-sets".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, if the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Brunel, Victor-Emmanuel. "Concentration of the empirical level sets of Tukey’s halfspace depth". Probability Theory and Related Fields 173, No. 3-4 (11.05.2018): 1165–96. http://dx.doi.org/10.1007/s00440-018-0850-0.

2

Grünewald, T., Y. Bühler and M. Lehning. "Elevation dependency of mountain snow depth". Cryosphere Discussions 8, No. 4 (11.07.2014): 3665–98. http://dx.doi.org/10.5194/tcd-8-3665-2014.

Annotation:
Abstract. Elevation strongly affects quantity and distribution of precipitation and snow. Positive elevation gradients were identified by many studies, usually based on data from sparse precipitation stations or snow depth measurements. We present a systematic evaluation of the elevation–snow depth relationship. We analyse areal snow depth data obtained by remote sensing for seven mountain sites. Snow depths were averaged to 100 m elevation bands and then related to their respective elevation level. The assessment was performed at three scales, ranging from the complete data sets through km-scale sub-catchments to slope transects. We show that most elevation–snow depth curves at all scales are characterised by a single shape. Mean snow depths increase with elevation up to a certain level where they have a distinct peak, followed by a decrease at the highest elevations. We explain this typical shape with a generally positive elevation gradient of snowfall that is modified by the interaction of snow cover and topography. These processes are preferential deposition of precipitation and redistribution of snow by wind, sloughing and avalanching. Furthermore, we show that the elevation level of the peak of mean snow depth correlates with the dominant elevation level of rocks.
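The 100 m band-averaging underlying these curves is simple to reproduce. A minimal sketch, assuming `elev` and `snow_depth` are matching 1-D arrays sampled from a gridded snow-depth product (illustrative only, not the authors' code):

```python
import numpy as np

def banded_mean_depth(elev, snow_depth, band=100.0):
    """Mean snow depth per elevation band (default: 100 m bands)."""
    edges = np.floor(elev / band) * band           # lower edge of each point's band
    levels = np.unique(edges)
    means = np.array([snow_depth[edges == b].mean() for b in levels])
    return levels, means
```

Plotting `means` against `levels` gives the elevation–snow depth curve whose typical shape the study describes.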
3

Bogicevic, Milica, and Milan Merkle. "Approximate calculation of Tukey's depth and median with high-dimensional data". Yugoslav Journal of Operations Research 28, No. 4 (2018): 475–99. http://dx.doi.org/10.2298/yjor180520022b.

Annotation:
We present a new fast approximate algorithm for Tukey (halfspace) depth level sets and its implementation, ABCDepth. Given a d-dimensional data set for any d ≥ 1, the algorithm is based on a representation of level sets as intersections of balls in R^d. Our approach does not need calculations of projections of sample points to directions. This novel idea enables calculations of approximate level sets in very high dimensions with complexity that is linear in d, which provides a great advantage over all other approximate algorithms. Using different versions of this algorithm, we demonstrate approximate calculations of the deepest set of points ("Tukey median") and Tukey's depth of a sample point or out-of-sample point, all with complexity linear in d. An additional theoretical advantage of this approach is that the data points are not assumed to be in "general position". Examples with real and synthetic data show that the executing time of the algorithm in all mentioned versions in high dimensions is much smaller than the time of other implemented algorithms. Also, our algorithms can be used with thousands of multidimensional observations.
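For context, the projection-based approximation that ABCDepth avoids is easy to state: the Tukey depth of a point is the minimum, over all directions, of the fraction of sample points in the closed halfspace through that point, and it is commonly estimated by sampling random directions. A minimal sketch of that classical baseline (our illustration; the paper's ball-intersection algorithm is deliberately different):

```python
import numpy as np

def approx_tukey_depth(x, X, n_dirs=1000, seed=0):
    """Random-direction estimate of the Tukey halfspace depth of x in sample X.

    For each random unit direction, count the fraction of sample points in the
    closed halfspace bounded by the hyperplane through x; the depth estimate is
    the minimum fraction over all sampled directions.
    """
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n_dirs, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # unit directions
    proj = (X - x) @ U.T                           # (n points, n_dirs) projections
    return (proj >= 0).mean(axis=0).min()
```

The number of directions this baseline needs grows quickly in high dimensions, which is exactly the burden the ball-intersection representation sidesteps.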
4

Gupta, Pawan, Lorraine A. Remer, Falguni Patadia, Robert C. Levy and Sundar A. Christopher. "High-Resolution Gridded Level 3 Aerosol Optical Depth Data from MODIS". Remote Sensing 12, No. 17 (02.09.2020): 2847. http://dx.doi.org/10.3390/rs12172847.

Annotation:
The state-of-the-art satellite observations of atmospheric aerosols over the last two decades from NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) instruments have been extensively utilized in climate change and air quality research and applications. The operational algorithms now produce Level 2 aerosol data at varying spatial resolutions (1, 3, and 10 km) and Level 3 data at 1 degree. Local and global applications have benefited from the coarse-resolution gridded data sets (i.e., Level 3, 1 degree), as they are easier to use: the data volume is low, and several online and offline tools are readily available to access and analyze the data with minimal computing resources. At the same time, researchers who require data at much finer spatial scales have to go through a challenging process of obtaining, processing, and analyzing larger volumes of data sets that require high-end computing resources and coding skills. Therefore, we created a high-spatial-resolution (high-resolution gridded (HRG), 0.1 × 0.1 degree) daily and monthly aerosol optical depth (AOD) product by combining two MODIS operational algorithms, namely Deep Blue (DB) and Dark Target (DT). The new HRG AODs meet the accuracy requirements of Level 2 AOD data and provide either the same or more spatial coverage on daily and monthly scales. The data sets are provided as daily and monthly files through an open FTP server, with Python scripts to read and map the data. The reduced data volume, with an easy-to-use format and tools to access the data, will encourage more users to utilize the data for research and applications.
5

Grünewald, T., Y. Bühler and M. Lehning. "Elevation dependency of mountain snow depth". Cryosphere 8, No. 6 (20.12.2014): 2381–94. http://dx.doi.org/10.5194/tc-8-2381-2014.

Annotation:
Abstract. Elevation strongly affects quantity and distribution patterns of precipitation and snow. Positive elevation gradients were identified by many studies, usually based on data from sparse precipitation stations or snow depth measurements. We present a systematic evaluation of the elevation–snow depth relationship. We analyse areal snow depth data obtained by remote sensing for seven mountain sites near to the time of the maximum seasonal snow accumulation. Snow depths were averaged to 100 m elevation bands and then related to their respective elevation level. The assessment was performed at three scales: (i) the complete data sets (10 km scale), (ii) sub-catchments (km scale) and (iii) slope transects (100 m scale). We show that most elevation–snow depth curves at all scales are characterised through a single shape. Mean snow depths increase with elevation up to a certain level where they have a distinct peak followed by a decrease at the highest elevations. We explain this typical shape with a generally positive elevation gradient of snow fall that is modified by the interaction of snow cover and topography. These processes are preferential deposition of precipitation and redistribution of snow by wind, sloughing and avalanching. Furthermore, we show that the elevation level of the peak of mean snow depth correlates with the dominant elevation level of rocks (if present).
6

Tavkhelidze, Avto, Amiran Bibilashvili, Larissa Jangidze and Nima E. Gorji. "Fermi-Level Tuning of G-Doped Layers". Nanomaterials 11, No. 2 (17.02.2021): 505. http://dx.doi.org/10.3390/nano11020505.

Annotation:
Recently, geometry-induced quantum effects were observed in periodic nanostructures. Nanograting (NG) geometry significantly affects the electronic, magnetic, and optical properties of semiconductor layers. Silicon NG layers exhibit geometry-induced doping. In this study, G-doped junctions were fabricated and characterized, and the Fermi-level tuning of the G-doped layers by changing the NG depth was investigated. Samples with various indent depths were fabricated using laser interference lithography and a consecutive series of reactive ion etching steps. Four adjacent areas with NG depths of 10, 20, 30, and 40 nm were prepared on the same chip. A Kelvin probe was used to map the work function and determine the Fermi level of the samples. The G-doping-induced Fermi-level increase was recorded for eight sample sets cut separately from p-, n-, p+-, and n+-type silicon substrates. The maximum increase in the Fermi level was observed at a 10 nm depth, and this decreased with increasing indent depth in the p- and n-type substrates. In particular, this reduction was more pronounced in the p-type substrates. However, the Fermi-level increase in the n+- and p+-type substrates was negligible. The obtained results are explained using the G-doping theory and the G-doped layer formation mechanism introduced in previous works.
7

Lowrey, Wilson, Ryan Broussard and Lindsey A. Sherrill. "Data journalism and black-boxed data sets". Newspaper Research Journal 40, No. 1 (March 2019): 69–82. http://dx.doi.org/10.1177/0739532918814451.

Annotation:
This study explores the level of scrutiny data journalists from national, local, traditional and digital outlets apply to data sets and data categories, and reasons that scrutiny varies. The study applies a sociology of quantification framework that assumes a tendency for data categories to become “black-boxed,” or taken-for-granted and unquestioned. Results of in-depth interviews with 15 data journalists suggested these journalists were more concerned with data accessibility and ease of use than validity of data categories, though this varied across outlet size and level of story complexity.
8

Harms-Ringdahl, Lars. "Analysis of Results from Event Investigations in Industrial and Patient Safety Contexts". Safety 7, No. 1 (05.03.2021): 19. http://dx.doi.org/10.3390/safety7010019.

Annotation:
Accident investigations are probably the most common approach to evaluate the safety of systems. The aim of this study is to analyse event investigations and especially their recommendations for safety reforms. Investigation reports were studied with a methodology based on the characterisation of organisational levels and types of recommendations. Three sets of event investigations from industrial companies and hospitals were analysed. Two sets employed an in-depth approach, while the third was based on the root-cause concept. The in-depth approach functioned in a similar way for both industrial organisations and hospitals. The number of suggested reforms varied between 56 and 143 and was clearly greater for the industry. Two sets were from health care, but with different methodologies. The number of suggestions was eight times higher with the in-depth approach, which also addressed higher levels in the organisational hierarchy and more often safety management issues. The root-cause investigations had a clear emphasis on reforms at the local level and improvement of production. The results indicate a clear need for improvements of event investigations in the health care sector, for which some suggestions are presented.
9

Letang, D. L., and W. J. de Groot. "Forest floor depths and fuel loads in upland Canadian forests". Canadian Journal of Forest Research 42, No. 8 (August 2012): 1551–65. http://dx.doi.org/10.1139/x2012-093.

Annotation:
Forest floor data are important for many forest resource management applications. In terms of fire and forest carbon dynamics, these data are critical for modeling direct carbon emissions from wildfire in Canadian forests because forest floor organic material is usually the greatest emissions source. However, there are very few data available to initialize wildfire emission models. Six data sets representing 41 534 forest stands across Canada were combined to provide summary statistics and to analyze factors controlling forest floor fuel loads and depths. The impacts of dominant tree species, ecozone, drainage-class, and age-class data on forest floor fuel loads and depth were examined using ANOVA and regression. All four parameters were significant factors affecting forest floor fuel load and depth, but only tree species and ecozone were substantially influential. Although forest floor depths summarized in this study are similar to those of previous studies, forest floor fuel loads are higher. Average forest floor fuel loads and depths are summarized by species and ecozone and can be used to initialize dynamic stand-level forest models.
10

Xi, Qingkui, Weiming Wu, Junjie Ji, Zhenghui Zhang and Feng Ni. "Comparing the Level of Commitment to In-Depth Reference and Research Support Services in Two Sets of Chinese Universities". Science & Technology Libraries 38, No. 2 (07.03.2019): 204–23. http://dx.doi.org/10.1080/0194262x.2019.1583624.

11

Cai, Jing, Fang Li, Ning An, Jing Jing Lu and Lu Sun. "The Application of the Minimum Cut Sets in Reliability Evaluation of Power Transmission and Transformation System". Advanced Materials Research 463-464 (February 2012): 1175–81. http://dx.doi.org/10.4028/www.scientific.net/amr.463-464.1175.

Annotation:
As an important part of power systems, a power transmission and transformation system involves a wide variety of equipment, complicated structures and changing operation modes, and its reliability level has a significant influence on the reliability of the whole system. The paper proposes a practical method for the reliability evaluation of power transmission and transformation systems based on minimal cut sets. The algorithm, based on topological structure and reliability data without power flow, analyses the reliability of the system separately for each voltage grade. In each voltage-grade analysis, the method finds the minimal path sets by depth-first search, derives the minimal cut sets from them, and calculates the reliability indices. The method considers three states (normal operation, scheduled repair and fault repair), and it makes the evaluation process more reasonable and effective because it resolves branch-node mixed cut sets. The paper verifies the effectiveness of the algorithm on the IEEE RTS-79 test system.
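The path-set step can be illustrated with a plain depth-first search that enumerates simple source-to-sink paths of a two-terminal reliability network. A minimal sketch, assuming the network is given as an adjacency-list dict (component names are hypothetical):

```python
def minimal_path_sets(graph, source, sink):
    """Enumerate minimal path sets (simple source-sink paths) by depth-first search."""
    paths, stack = [], [(source, [source])]
    while stack:
        node, path = stack.pop()
        if node == sink:
            paths.append(path)
            continue
        for nxt in graph.get(node, ()):
            if nxt not in path:          # no revisits, so every reported path is simple
                stack.append((nxt, path + [nxt]))
    return paths

# Example: a small bridge-like network between terminals "s" and "t"
net = {"s": ["a", "b"], "a": ["t", "b"], "b": ["t"]}
print(minimal_path_sets(net, "s", "t"))
```

Minimal cut sets can then be derived from the enumerated path sets via the path/cut duality, the step the paper extends to branch-node mixed cut sets.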
12

Yao, Huang, Mengting Yang, Tiantian Chen, Yantao Wei and Yu Zhang. "Depth-based human activity recognition via multi-level fused features and fast broad learning system". International Journal of Distributed Sensor Networks 16, No. 2 (February 2020): 155014772090783. http://dx.doi.org/10.1177/1550147720907830.

Annotation:
Human activity recognition using depth videos remains a challenging problem, while in some applications the available training samples are limited. In this article, we propose a new method for human activity recognition by crafting an integrated descriptor called multi-level fused features for depth sequences and devising a fast broad learning system based on matrix decomposition for classification. First, the surface normals are computed from the original depth maps; the histogram of the surface normal orientations is obtained as a low-level feature by accumulating the contributions from normals, then a high-level feature is acquired by sparse coding and pooling on the aggregation of polynormals. After that, principal component analysis is applied to the conjunction of the two-level features in order to obtain a low-dimensional and discriminative fused feature. At last, a fast broad learning system based on matrix decomposition is proposed to accelerate the training process and enhance the classification results. The recognition results on three benchmark data sets show that our method outperforms the state-of-the-art methods in terms of accuracy, especially when the number of training samples is small.
13

Hopkins, DL. "An evaluation of the Hennessy Grading Probe for measuring fat depth in beef carcasses". Australian Journal of Experimental Agriculture 29, No. 6 (1989): 781. http://dx.doi.org/10.1071/ea9890781.

Annotation:
Fat depth at the P8 site on the rump was measured by the cut-and-measure (CM) technique and with the Hennessy Grading Probe (HGP) on 2501 beef carcasses at 1 abattoir over a 12-month period. CM measurements that differed by more than 1 mm between the right and left sides of the carcass were discarded. A subsequent data set of 1850 carcasses was randomly divided so that 2 models could be developed to assess the general validity of the relationship between the 2 methods of measurement. Analysis of measurements of the left side of the carcasses of these 2 subsamples showed the data were not normally distributed. Removal of outliers at the 95% confidence level and also measurements at both extremes of the data range improved the symmetry of the sets of data. From each adjusted data set, regression equations were developed to predict CM measurements from HGP measurements. Linear equations were adequate for predicting CM measurements from HGP measurements, and curvilinear analysis did not improve the predictions. Compared with the curvilinear equations, the linear equations resulted in smaller differences between the 2 data sets for the predicted CM measurements over a range of HGP measurements.
14

Rodríguez-Fernández, Nemesio J., Arnaud Mialon, Stephane Mermoz, Alexandre Bouvet, Philippe Richaume, Ahmad Al Bitar, Amen Al-Yaari et al. "An evaluation of SMOS L-band vegetation optical depth (L-VOD) data sets: high sensitivity of L-VOD to above-ground biomass in Africa". Biogeosciences 15, No. 14 (30.07.2018): 4627–45. http://dx.doi.org/10.5194/bg-15-4627-2018.

Annotation:
Abstract. The vegetation optical depth (VOD) measured at microwave frequencies is related to the vegetation water content and provides information complementary to visible/infrared vegetation indices. This study is devoted to the characterization of a new VOD data set obtained from SMOS (Soil Moisture and Ocean Salinity) satellite observations at L-band (1.4 GHz). Three different SMOS L-band VOD (L-VOD) data sets (SMOS level 2, level 3 and SMOS-IC) were compared with data sets on tree height, visible/infrared indexes (NDVI, EVI), mean annual precipitation and above-ground biomass (AGB) for the African continent. For all relationships, SMOS-IC showed the lowest dispersion and highest correlation. Overall, we found a strong (R > 0.85) correlation with no clear sign of saturation between L-VOD and four AGB data sets. The relationships between L-VOD and the AGB data sets were linear per land cover class but with a changing slope depending on the class type, which makes it a global non-linear relationship. In contrast, the relationship linking L-VOD to tree height (R = 0.87) was close to linear. For vegetation classes other than evergreen broadleaf forest, the annual mean of L-VOD spans a range from 0 to 0.7 and it is linearly correlated with the average annual precipitation. SMOS L-VOD showed higher sensitivity to AGB compared to NDVI and K/X/C-VOD (VOD measured at 19, 10.7 and 6.9 GHz). The results showed that, although the spatial resolution of L-VOD is coarse (∼40 km), the high temporal frequency and sensitivity to AGB makes SMOS L-VOD a very promising indicator for large-scale monitoring of the vegetation status, in particular biomass.
15

Tishchenko, Ilya, Gabor Tari, Mohammad Fallah and Jonathan Floodpage. "Submarine landslide origin of a tsunami at the Black Sea coast: Evidence based on swath bathymetry and 3D seismic reflection data". Interpretation 9, No. 2 (21.04.2021): SB67–SB78. http://dx.doi.org/10.1190/int-2020-0174.1.

Annotation:
Tsunami waves were observed along the Bulgarian Black Sea coastline on 7 May 2007. The maximum rise and fall of the sea level were 1.2 and 2.0 m, respectively, with wave oscillations between 4 and 8 min. At first, submarine landsliding and later atmospheric disturbance were suggested as the cause of the tsunami. Numerical modeling, assuming a landslide displacing 30–60 million m³ of material on the slope with a thickness range of more than 20–40 m, could reproduce the main characteristics of the recorded tsunami. In this early model, the landslide initiated on the shelf at a water depth of 100 m with a runout of approximately 20 km into 1000 m water depth. Subsequent numerical modeling suggested that the failure may have initiated on the slope, anywhere between 200 and 1500 m seafloor depth; the runout of the transported sediments in this latest model was at 1850 m water depth. Just a few years after the tsunami, OMV and its joint-venture partners, TOTAL and Repsol, acquired modern deepwater data sets in the same area where the submarine landsliding was assumed to have occurred. These data sets included multibeam swath bathymetry and 3D reflection seismic data, and they offer the possibility to establish the presence, geometry and nature of the speculated submarine landslide responsible for the tsunami. Our results provide direct evidence for the occurrence of large nonseismic catastrophic sediment failures along the Bulgarian coast. In this study, we illustrate Quaternary submarine landslides on 3D seismic reflection data immediately below the one responsible for the 2007 event; we also briefly point out the potential interpretation pitfall related to sediment waves and mass transport complexes.
16

Zhao, Yuekun, Suyun Luo, Xiaoci Huang and Dan Wei. "A Multi-Sensor 3D Detection Method for Small Objects". World Electric Vehicle Journal 15, No. 5 (10.05.2024): 210. http://dx.doi.org/10.3390/wevj15050210.

Annotation:
In response to the limited accuracy of current three-dimensional (3D) object detection algorithms for small objects, this paper presents a multi-sensor 3D small object detection method based on LiDAR and a camera. Firstly, the LiDAR point cloud is projected onto the image plane to obtain a depth image. Subsequently, we propose a cascaded image fusion module comprising multi-level pooling layers and multi-level convolution layers. This module extracts features from both the camera image and the depth image, addressing the issue of insufficient depth information in the image feature. Considering the non-uniform distribution characteristics of the LiDAR point cloud, we introduce a multi-scale voxel fusion module composed of three sets of VFE (voxel feature encoder) layers. This module partitions the point cloud into grids of different sizes to improve detection ability for small objects. Finally, the multi-level fused point features are associated with the corresponding scale’s initial voxel features to obtain the fused multi-scale voxel features, and the final detection results are obtained based on this feature. To evaluate the effectiveness of this method, experiments are conducted on the KITTI dataset, achieving a 3D AP (average precision) of 73.81% for the hard level of cars and 48.03% for the hard level of persons. The experimental results demonstrate that this method can effectively achieve 3D detection of small objects.
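The first stage described here, projecting the LiDAR point cloud onto the image plane to obtain a depth image, is a standard operation. A minimal sketch, assuming the points are already transformed into the camera frame and `K` is the 3×3 intrinsic matrix (an illustration, not the authors' implementation):

```python
import numpy as np

def lidar_to_depth_image(points_cam, K, height, width):
    """Project 3-D points (camera frame) onto the image plane as a sparse
    depth image, keeping the nearest depth where several points share a pixel."""
    pts = points_cam[points_cam[:, 2] > 0]        # keep points in front of the camera
    uvw = (K @ pts.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.full((height, width), np.inf)
    np.minimum.at(depth, (v[ok], u[ok]), pts[ok, 2])
    depth[np.isinf(depth)] = 0.0                  # pixels hit by no point get 0
    return depth
```

The resulting depth image is what the cascaded fusion module consumes alongside the camera image.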
17

Xu, Zhuo, Fengjiao Zhang, Christopher Juhlin, Björn Lund, Maria Ask and Liguo Han. "Extrapolated supervirtual refraction interferometry". Geophysical Journal International 227, No. 2 (20.07.2021): 1439–63. http://dx.doi.org/10.1093/gji/ggab283.

Annotation:
SUMMARY Accurate picking of head-wave arrival times is an important component of first-arrival traveltime tomography. Far-offset traces in particular have low signal-to-noise ratio (SNR), but picking on these traces is necessary in order to obtain velocity information at depth. Furthermore, there is often an insufficient number of far-offset traces for obtaining reliable models at depth. We present here an extrapolation method for increasing the number of first arrivals beyond the maximum recorded offset, thereby extending the supervirtual refraction interferometry (SVI) method. We refer to the method as extrapolated SVI (ESVI). It is a novel attempt to extrapolate first arrivals using a fully data-driven method. We first test the methodology on synthetic data sets, and we then apply ESVI to two published real data sets over the Pärvie fault system in northern Sweden. These data sets were acquired along the same profile at different times with different acquisition parameters and noise levels. The results show that ESVI enhances the SNR of head waves when the noise level is high. That is the same as the conventional SVI. ESVI also increases the number of pickable first arrivals by extrapolating head waves past the original maximum offset of each shot. We also show that the significant increase in first-arrival traveltime picks is beneficial for improving resolution and penetration depth in the tomographic imaging and, consequently, better revealing the subsurface velocity distribution. The tomographic images show higher velocities in the hanging walls of the main Pärvie fault and another subsidiary fault, as interpreted relative to migrated images from previous seismic reflection processing.
18

Wang, Hong, and Hong Li. "Uncertainty Measure for Multisource Intuitionistic Fuzzy Information System". Complexity 2022 (07.04.2022): 1–21. http://dx.doi.org/10.1155/2022/3605881.

Annotation:
Multisource information systems and multigranulation intuitionistic fuzzy rough sets are important extended types of Pawlak’s classical rough set model. Multigranulation intuitionistic fuzzy rough sets have been investigated in depth in recent years. However, few studies have considered this combination of multisource information systems and intuitionistic fuzzy rough sets. In this paper, we give the uncertainty measure for multisource intuitionistic fuzzy information system. Against the background of multisource intuitionistic fuzzy information system, each information source is regarded as a granularity level. Considering the different importance of information sources, we assign different weights to them. Firstly, the paper proposes an optimal source selection method. Secondly, we study the weighted generalized, weighted optimistic, and weighted pessimistic multigranularity intuitionistic fuzzy rough set models and uncertainty measurement methods in the multisource intuitionistic fuzzy information system, and we further study the relationship between the three models and related properties. Finally, an example is given to verify the validity of the models and methods.
19

Yang, Guoliang, Ziling Nie, Jixiang Wang, Hao Yang and Shuaiying Yu. "MSREA-Net: An Efficient Skin Disease Segmentation Method Based on Multi-Level Resolution Receptive Field". Applied Sciences 13, No. 18 (14.09.2023): 10315. http://dx.doi.org/10.3390/app131810315.

Annotation:
To address the low contrast of skin lesion images and the inaccurate segmentation of lesion boundaries, a skin lesion segmentation method based on multi-level split receptive fields and attention is proposed. Firstly, the depth feature extraction module and the multi-level split receptive field module are used to extract image feature information; secondly, the hybrid pooling module is used to build long-term and short-term dependencies and integrate global and local information. Finally, the reverse residual external attention module is introduced to construct the decoding part, which can mine the potential relationships within data sets and improve the network's segmentation ability. Experiments on the ISBI2017 and ISIC2018 data sets show that the Dice similarity coefficient reaches 88.67% and 91.84% and the Jaccard index 79.25% and 81.48%, respectively, and the accuracy reaches 93.89% and 96.16%. The segmentation method is superior to the existing algorithms as a whole. Simulation experiments show that the network has a good effect on skin lesion image segmentation and provides a new method for skin disease diagnosis.
20

Xu, Wanpeng, Ling Zou, Lingda Wu and Zhipeng Fu. "Self-Supervised Monocular Depth Learning in Low-Texture Areas". Remote Sensing 13, No. 9 (26.04.2021): 1673. http://dx.doi.org/10.3390/rs13091673.

Annotation:
For the task of monocular depth estimation, self-supervised learning supervises training by calculating the pixel difference between the target image and the warped reference image, obtaining results comparable to those with full supervision. However, the problematic pixels in low-texture regions are ignored, since most researchers assume that no pixels violate the camera-motion assumption when taking stereo pairs as the input in self-supervised learning, which leads to an optimization problem in these regions. To tackle this problem, we compute the photometric loss on the lowest-level feature maps instead and apply first- and second-order smoothing to the depth, ensuring consistent gradients during optimization. Given the shortcomings of ResNet as the backbone, we propose a new depth estimation network architecture to improve edge location accuracy and obtain clear outline information even at smoothed low-texture boundaries. To acquire more stable and reliable quantitative evaluation results, we introduce a virtual data set into the self-supervised task because it provides dense depth maps with pixel-by-pixel correspondence. We achieve performance that exceeds that of prior methods on both the Eigen splits of the KITTI and VKITTI2 data sets, taking stereo pairs as the input.
21

Zuschin, Martin, Rafał Nawrot, Mathias Harzhauser, Oleg Mandic and Adam Tomašových. "Taxonomic and numerical sufficiency in depth- and salinity-controlled marine paleocommunities". Paleobiology 43, No. 3 (09.03.2017): 463–78. http://dx.doi.org/10.1017/pab.2016.49.

Annotation:
Abstract. Numerical and taxonomic resolution of compositional data sets affects investigators’ abilities to detect and measure relationships between communities and environmental factors. We test whether varying numerical (untransformed, square-root- and fourth-root-transformed relative abundance and presence–absence data) and taxonomic (species, genera, families) resolutions reveals different insights into early to middle Miocene molluscan communities along bathymetric and salinity gradients. The marine subtidal has a more even species-abundance distribution, a higher number of rare species, and higher species:family and species:genus ratios than the three habitats—marine and estuarine intertidal, estuarine subtidal—with higher fluctuations in salinity and other physical parameters. Taxonomic aggregation and numerical transformation of data result in very different ordinations, although all habitats differ significantly from one another at all taxonomic and numerical levels. Rank correlations between species-level and higher-taxon, among-sample dissimilarities are very high for proportional abundance and decrease strongly with increasing numerical transformation, most notably in the two intertidal habitats. The proportion of variation explained by depth is highest for family-level data, decreases gradually with numerical transformation, and is higher in marine than in estuarine habitats. The proportion of variation explained by salinity is highest for species-level data, increases gradually with numerical transformation, and is higher in subtidal than in intertidal habitats. Therefore, there is no single best numerical and taxonomic resolution for the discrimination of communities along environmental gradients: the “best” resolution depends on the environmental factor considered and the nature of community response to it. Different numerical and taxonomic transformations capture unique aspects of metacommunity assembly along environmental gradients that are not detectable at a single level of resolution. We suggest that simultaneous analyses of community gradients at multiple taxonomic and numerical resolutions provide novel insights into processes responsible for spatial and temporal community stability.
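The numerical resolutions compared here are simple transformations of a sites-by-taxa abundance matrix. A short sketch of the four levels, assuming rows are samples and columns are taxa (illustrative only):

```python
import numpy as np

def numerical_resolution(abund, level):
    """Apply one of the numerical resolutions compared in the study."""
    if level == "presence_absence":
        return (abund > 0).astype(float)
    rel = abund / abund.sum(axis=1, keepdims=True)   # proportional abundances
    if level == "untransformed":
        return rel
    if level == "sqrt":
        return np.sqrt(rel)
    if level == "fourth_root":
        return rel ** 0.25
    raise ValueError(f"unknown level: {level}")
```

Taxonomic resolution is varied independently by summing the columns of `abund` within genera or families before transforming.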
22

Özdenvar, Turgut, George A. McMechan and Preston Chaney. "Simulation of complete seismic surveys for evaluation of experiment design and processing". GEOPHYSICS 61, No. 2 (March 1996): 496–508. http://dx.doi.org/10.1190/1.1443976.

Annotation:
Synthesis of complete seismic survey data sets allows analysis and optimization of all stages in an acquisition/processing sequence. The characteristics of available survey designs, parameter choices, and processing algorithms may be evaluated prior to field acquisition to produce a composite system in which all stages have compatible performance; this maximizes the cost effectiveness for a given level of accuracy, or for targets with specific characteristics. Data sets synthesized for three salt structures provide representative comparisons of time and depth migration, post‐stack and prestack processing, and illustrate effects of varying recording aperture and shot spacing, iterative focusing analysis, and the interaction of migration algorithms with recording aperture. A final example demonstrates successful simulation of both 2-D acquisition and processing of a real data line over a salt pod in the Gulf of Mexico.
23

Sarkkola, Sakari, Hannu Hökkä, Harri Koivusalo, Mika Nieminen, Erkki Ahti, Juhani Päivänen and Jukka Laine. "Role of tree stand evapotranspiration in maintaining satisfactory drainage conditions in drained peatlands". Canadian Journal of Forest Research 40, No. 8 (August 2010): 1485–96. http://dx.doi.org/10.1139/x10-084.

Annotation:
Ditch networks in drained peatland forests are maintained regularly to prevent water table rise and subsequent decrease in tree growth. The growing tree stand itself affects the level of water table through evapotranspiration, the magnitude of which is closely related to the living stand volume. In this study, regression analysis was applied to quantify the relationship between the late summer water table depth (DWT) and tree stand volume, mean monthly summertime precipitation (Ps), drainage network condition, and latitude. The analysis was based on several large data sets from southern to northern Finland, including concurrent measurements of stand volume and summer water table depth. The identified model demonstrated a nonlinear effect of stand volume on DWT, a linear effect of Ps on DWT, and an interactive effect of both stand volume and Ps. Latitude and ditch depth showed only marginal influence on DWT. A separate analysis indicated that an increase of 10 m3·ha–1 in stand volume corresponded with a drop of 1 cm in water table level during the growing season. In a subsample of the data, high bulk density peat showed deeper DWT than peat with low bulk density at the same stand volume.
24

Cormier, Emily C., Danielle R. Sisson, Kathleen M. Rühland, John P. Smol and Joseph R. Bennett. "A morphological trait-based approach to environmental assessment models using diatoms". Canadian Journal of Fisheries and Aquatic Sciences 77, No. 1 (January 2020): 108–12. http://dx.doi.org/10.1139/cjfas-2018-0376.

Annotation:
Diatom assemblages are excellent indicators for environmental monitoring. However, enumerating diatoms using fine-level taxonomy takes considerable effort, which must be undertaken by specialist taxonomists. One alternative is to enumerate assemblages using morphological traits. In this study, we compared the accuracy of models using 20 morphological traits with those using species assemblages to infer lake water pH, salinity, depth, and total phosphorus concentrations in four data sets, each comprising over 200 lakes. Assemblages aggregated by trait combinations were used to predict environmental variables via weighted averaging regressions, and richness of trait combinations was regressed against the environmental variables. Trait-based weighted averaging regressions showed slightly lower accuracy than species-level analyses and higher accuracy than analyses at the family and sometimes genus level. Richness of trait combinations showed relationships with pH, salinity, and lake depth that were marginally stronger than relationships using species richness. Although species-level analyses are the best approach when time and budgets allow, we suggest that trait combinations could provide an alternative method for water quality assessment programs, where funds do not allow the use of specialist taxonomists or where diatoms are being used as part of a multi-indicator analysis.
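Weighted averaging (WA) regression, the inference engine behind these models, reduces to two abundance-weighted means. A minimal sketch, assuming `Y` is a samples-by-taxa relative-abundance matrix and `env` the measured variable (e.g. pH); the usual deshrinking step of WA calibration is omitted:

```python
import numpy as np

def wa_optima(Y, env):
    """Taxon optimum: abundance-weighted mean of the environmental variable."""
    return (Y * env[:, None]).sum(axis=0) / Y.sum(axis=0)

def wa_infer(Y_new, optima):
    """Inferred variable per sample: abundance-weighted mean of taxon optima."""
    return (Y_new * optima).sum(axis=1) / Y_new.sum(axis=1)
```

Replacing taxa by trait combinations only changes how the columns of `Y` are defined, which is why the trait-based variant of the model is straightforward to set up.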
25

Zhao, Xinlei, Shuang Wu, Nan Fang, Xiao Sun and Jue Fan. "Evaluation of single-cell classifiers for single-cell RNA sequencing data sets". Briefings in Bioinformatics 21, No. 5 (23.10.2019): 1581–95. http://dx.doi.org/10.1093/bib/bbz096.

Annotation:
Abstract. Single-cell RNA sequencing (scRNA-seq) has been rapidly developing and widely applied in biological and medical research. Identification of cell types in scRNA-seq data sets is an essential step before in-depth investigations of their functional and pathological roles. However, the conventional workflow based on clustering and marker genes is not scalable for an increasingly large number of scRNA-seq data sets due to complicated procedures and manual annotation. Therefore, a number of tools have been developed recently to predict cell types in new data sets using reference data sets. These methods have not been generally adopted due to a lack of tool benchmarking and user guidance. In this article, we performed a comprehensive and impartial evaluation of nine classification software tools specifically designed for scRNA-seq data sets. Results showed that Seurat based on random forest, SingleR based on correlation analysis and CaSTLe based on XGBoost performed better than others. A simple ensemble voting of all tools can improve the predictive accuracy. Under nonideal situations, such as small-sized and class-imbalanced reference data sets, tools based on cluster-level similarities have superior performance. However, even with the function of assigning ‘unassigned’ labels, it is still challenging to catch novel cell types by solely using any of the single-cell classifiers. This article provides a guideline for researchers to select and apply suitable classification tools in their analysis workflows and sheds some light on potential directions for future improvement of classification tools.
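The "simple ensemble voting of all tools" can be as plain as a per-cell majority vote over the predicted labels. An illustrative sketch (tool outputs and label names are hypothetical):

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote across classifiers.

    `predictions` holds one label list per tool, all of equal length
    (one label per cell); returns the winning label for each cell.
    """
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*predictions)]

# Three hypothetical tools labelling two cells
print(ensemble_vote([["T cell", "B cell"], ["T cell", "NK cell"], ["T cell", "B cell"]]))
# -> ['T cell', 'B cell']
```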
26

Blank, Daniel, Annette Eicker, Laura Jensen and Andreas Güntner. "A global analysis of water storage variations from remotely sensed soil moisture and daily satellite gravimetry". Hydrology and Earth System Sciences 27, No. 13 (04.07.2023): 2413–35. http://dx.doi.org/10.5194/hess-27-2413-2023.

Annotation:
Abstract. Water storage changes in the soil can be observed on a global scale with different types of satellite remote sensing. While active or passive microwave sensors are limited to the upper few centimeters of the soil, satellite gravimetry can detect changes in the terrestrial water storage (TWS) in an integrative way, but it cannot distinguish between storage variations in different compartments or soil depths. Jointly analyzing both data types promises novel insights into the dynamics of subsurface water storage and of related hydrological processes. In this study, we investigate the global relationship of (1) several satellite soil moisture products and (2) non-standard daily TWS data from the Gravity Recovery and Climate Experiment/Follow-On (GRACE/GRACE-FO) satellite gravimetry missions on different timescales. The six soil moisture products analyzed in this study differ in the post-processing and the considered soil depth. Level 3 surface soil moisture data sets of the Soil Moisture Active Passive (SMAP) and Soil Moisture and Ocean Salinity (SMOS) missions are compared to post-processed Level 4 data products (surface and root zone soil moisture) and the European Space Agency Climate Change Initiative (ESA CCI) multi-satellite product. On a common global 1° grid, we decompose all TWS and soil moisture data into seasonal to sub-monthly signal components and compare their spatial patterns and temporal variability. We find larger correlations between TWS and soil moisture for soil moisture products with deeper integration depths (root zone vs. surface layer) and for Level 4 data products. Even for high-pass filtered sub-monthly variations, significant correlations of up to 0.6 can be found in regions with a large, high-frequency storage variability. A time shift analysis of TWS versus soil moisture data reveals the differences in water storage dynamics with integration depth.
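The time shift analysis mentioned at the end can be pictured as scanning lagged correlations between the two anomaly series of a grid cell. A small sketch (sign convention and window handling are our assumptions):

```python
import numpy as np

def best_lag(tws, sm, max_lag=30):
    """Lag (in samples) at which two equally sampled series correlate best.

    Positive lag means `tws` trails `sm`.
    """
    lags = np.arange(-max_lag, max_lag + 1)
    corr = []
    for l in lags:
        if l > 0:
            a, b = tws[l:], sm[:-l]
        elif l < 0:
            a, b = tws[:l], sm[-l:]
        else:
            a, b = tws, sm
        corr.append(np.corrcoef(a, b)[0, 1])
    return int(lags[np.nanargmax(corr)])
```

Applied cell by cell, the lag of maximum correlation maps how soil moisture leads or lags the integrated storage signal with depth.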
27

Bulbul, Mohammad Farhad, Amin Ullah, Hazrat Ali and Daijin Kim. "A Deep Sequence Learning Framework for Action Recognition in Small-Scale Depth Video Dataset". Sensors 22, No. 18 (09.09.2022): 6841. http://dx.doi.org/10.3390/s22186841.

Annotation:
Depth video sequence-based deep models for recognizing human actions are scarce compared to RGB and skeleton video sequence-based models. This scarcity limits research advancements based on depth data, as training deep models with small-scale data is challenging. In this work, we propose a sequence classification deep model using depth video data for scenarios in which the video data are limited. Unlike methods that summarize the content of each frame into a single class, our method can directly classify a depth video, i.e., a sequence of depth frames. Firstly, the proposed system transforms an input depth video into three sequences of multi-view temporal motion frames. Together with the three temporal motion sequences, the input depth frame sequence offers a four-stream representation of the input depth action video. Next, the DenseNet121 architecture is employed along with ImageNet pre-trained weights to extract the discriminating frame-level action features of the depth and temporal motion frames. The extracted four sets of feature vectors for the frames of the four streams are fed into four bi-directional LSTM (BLSTM) networks. The temporal features are further analyzed through multi-head self-attention (MHSA) to capture multi-view sequence correlations. Finally, the concatenation of their outputs is processed through dense layers to classify the input depth video. The experimental results on two small-scale benchmark depth datasets, MSRAction3D and DHA, demonstrate that the proposed framework is efficacious even with insufficient training samples and superior to the existing depth data-based action recognition methods.
28

Wang, Li. "English Speech Recognition and Pronunciation Quality Evaluation Model Based on Neural Network". Scientific Programming 2022 (30.06.2022): 1–10. http://dx.doi.org/10.1155/2022/2249722.

Annotation:
A deep neural network-based approach is proposed to better develop an assessment model for English speech recognition and pronunciation quality evaluation. By studying the structure of a deep nonlinear network, one can approximate complex functions, define distributed representations of input data, demonstrate a strong ability to learn important data set characteristics from small sample sets, and better simulate the analysis and learning of the human brain. The author applies deep learning technology to English speech recognition and has developed a speech recognition model with a deep belief network using Mel-frequency cepstral features based on human hearing patterns. The test results show that the evaluation comprised 210 machine and manual assessments, of which 30 samples differed by one grade. The overall consistency between machine and human evaluation is 90.65%, the adjacent consistency is 100%, and the correlation coefficient is 0.798, indicating a strong correlation between machine and human evaluations of English speech and pronunciation quality.
29

Karpiah, Arvin Boutik, Maxwell Azuka Meju, Roger Vernon Miller, Xavier Legrand, Prabal Shankar Das and Raja Natasha Bt Raja Musafarudin. "Crustal structure and basement-cover relationship in the Dangerous Grounds, offshore North-West Borneo, from 3D joint CSEM and MT imaging". Interpretation 8, No. 4 (01.11.2020): SS97–SS111. http://dx.doi.org/10.1190/int-2019-0261.1.

Annotation:
Accurate mapping of crustal thickness variations and the boundary relationships between sedimentary cover rocks and the crystalline basement is very important for heat-flow prediction and petroleum system modeling of a basin. Using legacy industry 3D data sets, we investigated the potential of 3D joint inversion of marine controlled-source electromagnetic (CSEM) and magnetotelluric (MT) data incorporating resistivity anisotropy to map these parameters across subbasins in the Dangerous Grounds in the southwestern rifted margin of the South China Sea, where limited previous seismic and potential field basement interpretations are available for comparison. We have reconstructed 3D horizontal and vertical resistivity models from the seabed down to [Formula: see text] depth for a [Formula: see text] area. The resistivity-versus-depth profile extracted from our 3D joint inversion models satisfactorily matched the resistivity and lithologic well logs at a wildcat exploration well location chosen for model validation. We found that the maximum resistivity gradients in the computed first derivative of the 3D resistivity volumes predict a depth to basement that matches the acoustic basement. The models predict the presence of 2 to approximately 5 km thick electrically conductive ([Formula: see text]) sedimentary cover atop an electrically resistive ([Formula: see text]) crystalline crust that is underlain by an electrically conductive ([Formula: see text]) upper mantle at depths that vary laterally from approximately 25 to 30 km below sea level in our study area. Our resistivity variation with depth is found to be remarkably consistent with the density distribution at Moho depth from recent independent 3D gravity/gradiometry inversion studies in this region. We suggest that 3D joint inversion of CSEM-MT, seismic, and potential field data is the way forward for understanding the deep structure of such rifted margins.
30

Cooper, Gordon R. J., and Rob C. Whitehead. "Determining the distance to magnetic sources". GEOPHYSICS 81, No. 2 (01.03.2016): J25–J34. http://dx.doi.org/10.1190/geo2015-0142.1.

Annotation:
The distance to sources of magnetic field anomalies of a known structural index can be determined by using ratios of the analytic signal amplitudes ([Formula: see text]) of different orders, and this can be performed in several different ways. Local minima of the distance correspond to the source depth. If an incorrect structural index has been used, then the different methods will yield different depths. Hence, a comparison of the results obtained from the different methods can help us to differentiate between valid and invalid source depths. These methods are computationally straightforward, and in some of the methods, the orders of the [Formula: see text] that are used can be chosen based on the noise level of the data. Some approaches that do not require the a priori specification of the structural index of the source are introduced, including methods that use data from two different altitudes (obtained by vertical continuation). We have applied the methods to aeromagnetic data sets from the Giyani and Kuruman regions of South Africa with plausible results.
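The principle behind the amplitude ratios can be sketched for an idealized pole-type source, for which the field decays as a power of the source distance set by the structural index $N$: the peak amplitude of the $n$-th-order analytic signal then scales as $1/r^{N+n}$, so ratios of successive orders isolate $r$. A hedged outline (our paraphrase; the paper's exact expressions and its several method variants may differ):

$$AS_n \propto \frac{\prod_{j=0}^{n-1}(N+j)}{r^{\,N+n}} \qquad\Longrightarrow\qquad r \approx (N+n)\,\frac{AS_n}{AS_{n+1}},$$

which also shows why an incorrect choice of $N$ shifts the estimated depths differently for different order pairs $n$, the inconsistency the authors exploit to reject invalid structural indices.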
31

Liu, Hongwei, and Mustafa Naser Al-Ali. "Common-focus point-based target-oriented imaging approach for continuous seismic reservoir monitoring". GEOPHYSICS 83, No. 4 (01.07.2018): M41–M48. http://dx.doi.org/10.1190/geo2017-0842.1.

Annotation:
The ideal approach for continuous reservoir monitoring allows generation of fast and accurate images to cope with the massive data sets acquired for such a task. Conventionally, rigorous depth-oriented velocity-estimation methods are performed to produce sufficiently accurate velocity models. Unlike the traditional way, the target-oriented imaging technology based on the common-focus point (CFP) theory can be an alternative for continuous reservoir monitoring. The solution is based on a robust data-driven iterative operator updating strategy without deriving a detailed velocity model. The same focusing operator is applied on successive 3D seismic data sets for the first time to generate efficient and accurate 4D target-oriented seismic stacked images from time-lapse field seismic data sets acquired in a [Formula: see text] injection project in Saudi Arabia. Using the focusing operator, target-oriented prestack angle domain common-image gathers (ADCIGs) could be derived to perform amplitude-versus-angle analysis. To preserve the amplitude information in the ADCIGs, an amplitude-balancing factor is applied by embedding a synthetic data set using the real acquisition geometry to remove the geometry imprint artifact. Applying the CFP-based target-oriented imaging to time-lapse data sets revealed changes at the reservoir level in the poststack and prestack time-lapse signals, which is consistent with the [Formula: see text] injection history and rock physics.
32

TRAGOUDAS, SPYROS. "BOARD LEVEL PARTITIONING FOR IMPROVED PARTIAL SCAN". International Journal of High Speed Electronics and Systems 06, No. 04 (December 1995): 573–94. http://dx.doi.org/10.1142/s0129156495000201.

Annotation:
We present a two-phase board-level partitioning scheme for improved partial scan on the resulting Integrated Circuits (ICs). The first phase clusters the nodes of the synchronous sequential PCB system into sets of bounded capacity. Each set represents an IC. The main objective function is to minimize the maximum number of inputs to a set. This considerably affects the test generation and response verification phases while testing the ICs. The second phase repositions the flip-flops so that we minimize the partial-scan-related hardware overhead for each IC, maintain a small sequential depth for all chips, and minimize the period of the global clock. We present an efficient iterative improvement heuristic for the partitioning problem of the first phase, whose performance is tested on benchmarks. We also employ provably good algorithms for the second phase, which result in reduced hardware overhead for partial scan. The proposed tool may also be applied to the system-level partitioning problem, where we partition the input circuit into Printed Circuit Boards or Multi-Chip Modules.
33

Dimitrijevic, T., B. Kahler, G. Evans, M. Collins and A. Moule. "Depth and Distance Perception of Dentists and Dental Students". Operative Dentistry 36, No. 5 (01.10.2011): 467–77. http://dx.doi.org/10.2341/10-290-l.

Annotation:
SUMMARY. The quality of work carried out by dentists is dependent, among other things, on experience, training, and manual dexterity. Historical focus on the latter as a predictor of dental performance has failed to recognize that dental competence also requires good perceptual and visual skills, not only for gathering information but also for judging positions, distances, and the size of objects and shapes. Most predictive tests ignore visual and interpretative deficiencies that could make individual acquisition of skills and interpretation of instructions difficult. Ability to estimate depth and distance, the manner in which students learn this ability, whether and how it can be taught, or whether there is an association among ability, stereopsis, and dental performance has not been thoroughly examined; nor has the perception that dental students fully understand verbal and written instruction relating to depth and distance. This study investigated the ability of dentists and dental students to estimate and reproduce small depths and distances and the relationship of this ability to stereopsis, dental experience, and student performance. A total of 163 undergraduate dental students from three year groups and 20 experienced dentists and specialists performed three tasks. A depth-perception task involved estimation of the depth of two sets (2-mm or 4-mm wide) of nine computer-milled slots ranging in depth from 0.5 to 4.0 mm. A distance task involved estimation of the width of specially prepared printed square blocks. In a writing task, participants recorded distances across a printed line on separate sheets of paper. All tasks were conducted at set positions in custom-made transportable light boxes. Stereopsis and visual acuity were also measured. Ability to perform perceptual tasks varied enormously, with the level of accuracy dependent on the type of task and dental experience. Many students had considerable difficulty in estimating depth. Inexperienced students performed poorly. Most participants overestimated depth and distance estimation tasks, but underestimated when required to draw distances. Smaller depths and distances were easier to estimate than larger ones. All groups overestimated depth more in 4-mm-wide blocks than in 2-mm-wide blocks. There was no correlation found between depth and distance estimation and stereopsis scores or with the overall grades tested. This study highlights that some dentists and many dental students, particularly early in their course, have great difficulty in accurately gauging depths and distances. It is proposed that this could impact significantly on a student's ability to interpret verbal and written preclinical instruction and could make the acquisition of manual skills and interpretation of clinical instruction difficult. Routine testing of all undergraduate dental students for perceptual and visual difficulties is recommended, so that those with difficulties can be identified and problems remedied, if possible, early in their course.
34

Perec, Andrzej. "Desirability Function Analysis (DFA) in Multiple Responses Optimization of Abrasive Water Jet Cutting Process". Reports in Mechanical Engineering 3, No. 1 (15.12.2022): 11–19. http://dx.doi.org/10.31181/rme200103011p.

Annotation:
This paper introduces the optimization of machining parameters for high-pressure abrasive water jet cutting of Hardox 500 steel utilizing desirability function analysis (DFA). The tests were carried out according to the orthogonal (Taguchi) L9 matrix. The control parameters of the process, such as pressure, abrasive flow rate, and traverse speed, were optimized under multi-response conditions, namely cutting depth and surface roughness. The optimal set of control parameters was established on the basis of the composite desirability value obtained from desirability function analysis, and the significance of these parameters was determined by analysis of variance (ANOVA). The results show that the optimal setting for a high cutting depth and small surface roughness is high pressure, a middle abrasive flow rate, and a small traverse speed. A confirmation test was also conducted to validate the test results. The research has shown that this approach can improve machining efficiency while keeping the quality of the cut surface at a good level.
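The composite desirability referred to here is conventionally built from Derringer-type individual desirabilities, presumably larger-the-better for cutting depth and smaller-the-better for roughness; schematically (our outline, not the paper's exact weights):

$$d_{\text{depth}} = \left(\frac{y - y_{\min}}{y_{\max} - y_{\min}}\right)^{w}, \qquad d_{\text{rough}} = \left(\frac{y_{\max} - y}{y_{\max} - y_{\min}}\right)^{w}, \qquad D = \left(d_{\text{depth}}\, d_{\text{rough}}\right)^{1/2},$$

and the L9 run maximizing the composite desirability $D$ gives the reported optimal parameter set.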
APA, Harvard, Vancouver, ISO and other citation styles
35

Xie, Zhen, Jianhua Zhang and Pengfei Wang. "Event-based stereo matching using semiglobal matching". International Journal of Advanced Robotic Systems 15, No. 1 (01.01.2018): 172988141775275. http://dx.doi.org/10.1177/1729881417752759.

The full text of the source
Annotation:
In this article, we focus on the problem of depth estimation from a stereo pair of event-based sensors. These sensors asynchronously capture pixel-level brightness changes (events) instead of standard intensity images at a fixed frame rate, providing sparse data at low latency and high temporal resolution over a wide intrascene dynamic range. However, new asynchronous, event-based processing algorithms are required to process the event streams. We propose a fully event-based stereo three-dimensional depth estimation algorithm inspired by semiglobal matching. Our algorithm uses smoothness constraints between nearby events to remove the ambiguous and wrong matches that arise when only the properties of a single event or local features are used. Experimental validation and comparison with several state-of-the-art event-based stereo matching methods are provided on five different scenes of event-based stereo data sets. The results show that our method operates well in an event-driven way and has higher estimation accuracy.
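As an illustration of the smoothness constraint that semiglobal matching contributes, the sketch below aggregates matching costs along one scanline with the standard SGM recurrence. It is a toy version under assumptions: the event-specific matching cost (e.g., timestamp and polarity similarity) is assumed to be precomputed into the cost array, which is not how the authors' full event-driven pipeline is organized.

```python
import numpy as np

def aggregate_scanline(cost, p1=0.5, p2=2.0):
    """Semiglobal cost aggregation along one scanline (one path direction).

    cost: (width, ndisp) matching costs for one row; for an event camera
    these would come from event timestamp/polarity similarity (assumed
    precomputed here). Returns smoothness-regularised costs L.
    """
    w, nd = cost.shape
    L = np.zeros((w, nd))
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        best_prev = prev.min()
        up = np.roll(prev, 1)
        up[0] = np.inf                    # no disparity d-1 exists for d = 0
        down = np.roll(prev, -1)
        down[-1] = np.inf                 # no disparity d+1 exists for d = nd-1
        # same disparity: no penalty; +/-1 disparity: P1; any jump: P2
        L[x] = cost[x] - best_prev + np.minimum.reduce(
            [prev, up + p1, down + p1, np.full(nd, best_prev + p2)])
    return L

# toy usage: random costs; disparity = argmin over the aggregated costs
rng = np.random.default_rng(0)
L = aggregate_scanline(rng.random((64, 16)))
disparity = L.argmin(axis=1)
```

In full SGM, such aggregations from several path directions are summed before taking the argmin.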
APA, Harvard, Vancouver, ISO and other citation styles
36

Wenck, Soeren, Marina Creydt, Jule Hansen, Florian Gärber, Markus Fischer and Stephan Seifert. "Opening the Random Forest Black Box of the Metabolome by the Application of Surrogate Minimal Depth". Metabolites 12, No. 1 (21.12.2021): 5. http://dx.doi.org/10.3390/metabo12010005.

The full text of the source
Annotation:
For the untargeted analysis of the metabolome of biological samples with liquid chromatography–mass spectrometry (LC-MS), high-dimensional data sets containing many different metabolites are obtained. Since the utilization of these complex data is challenging, different machine learning approaches have been developed. These methods are usually applied as black-box classification tools, and detailed information about the class differences that result from the complex interplay of the metabolites is not obtained. Here, we demonstrate that this information is accessible through random forest (RF) approaches, in particular surrogate minimal depth (SMD), which is applied to metabolomics data for the first time. We show this by selecting important features and evaluating their mutual impact on the multi-level classification of white asparagus regarding provenance and biological identity. SMD enables the identification of multiple features from the same metabolites and reveals meaningful biological relations, proving its high potential for the comprehensive utilization of high-dimensional metabolomics data.
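The idea behind minimal depth is that important features split close to the root of many trees. The sketch below computes classical (non-surrogate) minimal depth from a scikit-learn forest on synthetic data; SMD additionally evaluates surrogate splits, which scikit-learn does not expose, so this is an approximation of the underlying concept rather than the authors' method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def minimal_depths(tree, n_features):
    """Depth of the first (shallowest) split on each feature in one tree."""
    t = tree.tree_
    depths = np.full(n_features, np.inf)
    stack = [(0, 0)]                      # (node id, depth of node)
    while stack:
        node, d = stack.pop()
        if t.children_left[node] != -1:   # internal node
            f = t.feature[node]
            depths[f] = min(depths[f], d)
            stack.append((t.children_left[node], d + 1))
            stack.append((t.children_right[node], d + 1))
    return depths

depths = np.array([minimal_depths(est, X.shape[1])
                   for est in forest.estimators_])
depths[np.isinf(depths)] = np.nan         # feature never used in that tree
md = np.nanmean(depths, axis=0)           # mean minimal depth per feature
print("features ranked by mean minimal depth:", np.argsort(md)[:5])
```

Features with a small mean minimal depth are the candidates for important variables.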
APA, Harvard, Vancouver, ISO and other citation styles
37

Duan, Kangkang, and Shuangyin Cao. "Data-Driven Parameter Selection and Modeling for Concrete Carbonation". Materials 15, No. 9 (07.05.2022): 3351. http://dx.doi.org/10.3390/ma15093351.

The full text of the source
Annotation:
Concrete carbonation is known to be a stochastic process. Its uncertainties mainly result from parameters that are not considered in prediction models; parameter selection, therefore, is important. In this paper, based on 8204 sets of data, statistical methods and machine learning techniques were applied to choose appropriate influence factors in terms of three aspects: (1) the correlation between factors and concrete carbonation; (2) the factors' influence on the uncertainties of carbonation depth; and (3) the correlation between factors. Both single parameters and parameter groups were evaluated quantitatively. The results showed that compressive strength had the highest correlation with carbonation depth and that using the aggregate–cement ratio as a parameter significantly reduced the dispersion of carbonation depth. Machine learning models showed that the selected parameter groups have large potential for improving the performance of models with fewer parameters. This paper also developed machine learning carbonation models and simplified them into a practical model. This concise model had high accuracy on both accelerated and natural carbonation test datasets; for the natural carbonation datasets, the mean absolute error of the practical model was 1.56 mm.
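A minimal sketch of the first and third selection aspects, correlation between factors and carbonation depth combined with a check on inter-factor correlation, is given below. The data frame, factor names, coefficients, and thresholds are all hypothetical; the paper's actual analysis uses 8204 observations and several statistical and machine learning criteria.

```python
import numpy as np
import pandas as pd

# Hypothetical carbonation data: names and coefficients are invented.
rng = np.random.default_rng(0)
n = 500
strength = rng.normal(40, 8, n)           # compressive strength (MPa)
agg_cem = rng.normal(4.5, 0.6, n)         # aggregate-cement ratio
humidity = rng.uniform(40, 90, n)         # relative humidity (%)
depth = (25 - 0.4 * strength + 2.0 * agg_cem - 0.05 * humidity
         + rng.normal(0, 1, n))           # carbonation depth (mm)
df = pd.DataFrame({"strength": strength, "agg_cem": agg_cem,
                   "humidity": humidity, "depth": depth})

def select_parameters(df, target="depth", keep=2, redundancy=0.8):
    """Rank factors by |correlation| with the target and greedily keep
    those not too correlated with an already selected factor."""
    corr = df.corr()
    ranked = corr[target].drop(target).abs().sort_values(ascending=False)
    chosen = []
    for f in ranked.index:
        if all(abs(corr.loc[f, c]) < redundancy for c in chosen):
            chosen.append(f)
        if len(chosen) == keep:
            break
    return chosen

print(select_parameters(df))  # e.g. ['strength', 'agg_cem']
```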
APA, Harvard, Vancouver, ISO and other citation styles
38

Qin, Li-Xuan, Jian Zou, Jiejun Shi, Ann Lee, Aleksandra Mihailovic, Thalia A. Farazi, Thomas Tuschl and Samuel Singer. "Statistical Assessment of Depth Normalization for Small RNA Sequencing". JCO Clinical Cancer Informatics, No. 4 (September 2020): 567–82. http://dx.doi.org/10.1200/cci.19.00118.

The full text of the source
Annotation:
PURPOSE Methods for depth normalization have been assessed primarily with simulated data or cell-line–mixture data. There is a pressing need for benchmark data enabling a more realistic and objective assessment, especially in the context of small RNA sequencing. METHODS We collected a unique pair of microRNA sequencing data sets for the same set of tumor samples; one data set was collected with and the other without uniform handling and balanced design. The former provided a benchmark for evaluating evidence of differential expression and the latter served as a test bed for normalization. Next, we developed a data perturbation algorithm to simulate additional data set pairs. Last, we assembled a set of computational tools to visualize and quantify the assessment. RESULTS We validated the quality of the benchmark data and showed the need for normalization of the test data. For illustration, we applied the data and tools to assess the performance of 9 existing normalization methods. Among them, trimmed mean of M-values was a better scaling method, whereas the median and the upper quartiles were consistently the worst performers; one variation of remove unwanted variation had the best chance of capturing true positives but at the cost of increased false positives. In general, these methods were, at best, moderately helpful when the level of differential expression was extensive and asymmetric. CONCLUSION Our study (1) provides the much-needed benchmark data and computational tools for assessing depth normalization, (2) shows the dependence of normalization performance on the underlying pattern of differential expression, and (3) calls for continued research efforts to develop more effective normalization methods.
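For intuition, the sketch below implements two of the simpler scaling methods compared in the study, the median and the upper quartile, for a genes-by-samples count matrix. Methods such as trimmed mean of M-values are available in established packages (e.g., edgeR) and are not reimplemented here; the synthetic counts are illustrative only.

```python
import numpy as np

def scale_factors(counts, method="uq"):
    """Per-sample scaling factors for a genes x samples count matrix.

    'median' and 'uq' (upper-quartile) scaling divide each sample by a
    quantile of its nonzero counts; the factors are then rescaled so
    their geometric mean is 1, preserving the overall scale.
    """
    f = []
    for j in range(counts.shape[1]):
        nz = counts[:, j][counts[:, j] > 0]
        f.append(np.median(nz) if method == "median"
                 else np.quantile(nz, 0.75))
    f = np.asarray(f, dtype=float)
    return f / np.exp(np.mean(np.log(f)))

# illustrative usage on synthetic counts with sample-specific depths
rng = np.random.default_rng(0)
counts = rng.poisson(lam=rng.uniform(20, 80, size=6), size=(1000, 6))
normalised = counts / scale_factors(counts)
```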
APA, Harvard, Vancouver, ISO and other citation styles
39

Abu Bakar, Rubiah, Najdah Abd Aziz and Rafhanah Zakirah Rosley. "AWARENESS OF GHARAR IN SALE AND PURCHASE CONTRACTS AMONG KUALA NERUS SOCIETY". Journal of Islamic, Social, Economics and Development 7, No. 43 (15.03.2022): 29–39. http://dx.doi.org/10.55573/jised.074303.

The full text of the source
Annotation:
This study was conducted to identify the awareness of gharar in sale and purchase contracts in the society of Kuala Nerus, Terengganu. Any uncertainty, doubt, or shortcoming in a sale and purchase contract is considered gharar. Using a questionnaire as the research instrument, 120 sets were distributed through online forms among randomly selected members of the Kuala Nerus community. The study uses quantitative methods; descriptive analysis is used to analyse the data in terms of percentage, frequency, mean, mode, and median. The investigation found that the mean value for each variable is high: the mean for Part B is 4.233 and for Part C is 4.3833. The researchers suggest conducting a more in-depth questionnaire on the understanding of gharar in sale and purchase contracts, and recommend that other researchers broaden the scope of such research beyond the district level to comparisons between states.
APA, Harvard, Vancouver, ISO and other citation styles
40

Ponte, Aurélien L. "Periodic Wind-Driven Circulation in an Elongated and Rotating Basin". Journal of Physical Oceanography 40, No. 9 (01.09.2010): 2043–58. http://dx.doi.org/10.1175/2010jpo4235.1.

The full text of the source
Annotation:
Abstract An idealized model is developed for the three-dimensional response of a coastal basin (e.g., lagoon, bay, or estuary) to time-periodic wind stress. This model handles basins that are deeper and/or shallower than an Ekman depth with wind forcing frequencies ranging from subinertial to superinertial. Here the model is used to describe how the response (current and sea level) of a basin deeper than one Ekman depth depends on the wind forcing frequency. At low subinertial frequencies, the response is similar to the steady wind case and is hence called “quasi steady.” There is a near-surface Ekman transport to the right of the wind balanced by a return flow at depth. Lateral bathymetric variations introduce an along-basin circulation that decays with increasing frequency and sets the extent of the quasi-steady response in the frequency domain. At the inertial frequency, the wind forces a damped resonant response with large vertical shear and weak depth-integrated flow. This result is potentially important for coastal basins located near ±30° latitude and forced by a diurnal breeze. At superinertial frequencies, the response becomes irrotational and is amplified near seiche frequencies. The response to a sudden onset of wind is computed in the time domain and confirms that the slow growth of the along-basin circulation controls the spinup process. The response of basins shallower than an Ekman depth, and the validity of the model inside semienclosed basins, are also discussed.
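Two standard relations make the abstract's thresholds concrete; both are textbook formulas rather than results specific to this paper. The Ekman depth sets the deep/shallow basin distinction, and matching the diurnal forcing frequency to the inertial frequency recovers the ±30° latitude quoted above:

```latex
% Ekman depth for vertical eddy viscosity A_z and Coriolis parameter f:
D_E = \pi \sqrt{\frac{2 A_z}{|f|}}, \qquad f = 2\Omega \sin\varphi .

% Diurnal wind forcing is resonant where its frequency matches |f|:
\omega_{\mathrm{diurnal}} = \frac{2\pi}{1\,\mathrm{day}} = 2\Omega \sin\varphi
\;\Rightarrow\; \sin\varphi \approx \tfrac{1}{2}
\;\Rightarrow\; \varphi \approx \pm 30^\circ .
```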
APA, Harvard, Vancouver, ISO and other citation styles
41

Warren, Joseph D., and Peter H. Wiebe. "Accounting for biological and physical sources of acoustic backscatter improves estimates of zooplankton biomass". Canadian Journal of Fisheries and Aquatic Sciences 65, No. 7 (July 2008): 1321–33. http://dx.doi.org/10.1139/f08-047.

The full text of the source
Annotation:
To convert measurements of backscattered acoustic energy to estimates of abundance and taxonomic information about the zooplankton community, all of the scattering processes in the water column need to be identified and their scattering contributions quantified. Zooplankton populations in the eastern edge of Wilkinson Basin in the Gulf of Maine in the Northwest Atlantic were surveyed in October 1997. Net tow samples at different depths, temperature and salinity profiles, and multiple frequency acoustic backscatter measurements from the upper 200 m of the water column were collected. Zooplankton samples were identified, enumerated, and measured. Temperature and salinity profiles were used to estimate the amount of turbulent microstructure in the water column. These data sets were used with theoretical acoustic scattering models to calculate the contributions of both biological and physical scatterers to the overall measured scattering level. The output of these predictions shows that the dominant source of acoustic backscatter varies with depth and acoustic frequency in this region. By quantifying the contributions from multiple scattering sources, acoustic backscatter becomes a better measure of net-collected zooplankton biomass.
APA, Harvard, Vancouver, ISO and other citation styles
42

Polonsky, A. B., and P. A. Sukhonos. "Comparison of ocean datasets by their ability to adequately reproduce winter anomalies in the characteristics of the upper layer of the north-eastern part of the North Atlantic". Monitoring systems of environment, No. 1 (25.03.2021): 137–46. http://dx.doi.org/10.33075/2220-5861-2021-1-137-146.

The full text of the source
Annotation:
This article analyzes the reproducibility of the reemergence of temperature and upper mixed layer (UML) depth anomalies in the northeastern North Atlantic during the severe weather conditions observed in the Atlantic-European region in the winters of 2009/2010 and 2010/2011. Data from the ORA-S3, GFDL, GODAS, and GLORYS2v4 reanalyses and the Ishii and EN4.1.1 objective analyses are used. It is confirmed that the formation of the negative temperature anomaly in the UML in winter 2010/2011 is largely due to the reemergence of the ocean temperature anomaly that occurred in the winter of 2009/2010. Interannual UML depth anomalies in the northeastern North Atlantic from the ORA-S3 and GODAS reanalysis datasets from March 2009 to November 2011 are in satisfactory agreement. The best description of the evolution of temperature anomalies in the 10–550 m layer in 2010, consistent with the hypothesis of the reemergence of the ocean temperature anomaly, was obtained for the UML depth from these data sets. An assessment of the statistical features of this reemergence case showed, at a significant level, that the UML depth anomaly of winter 2010/2011 was formed in the preceding autumn-winter period; moreover, such specific conditions could not have formed in the early 2000s.
APA, Harvard, Vancouver, ISO and other citation styles
43

Panboonyuen, Teerapong, Kulsawasd Jitkajornwanich, Siam Lawawirojwong, Panu Srestasathiern and Peerapon Vateekul. "Semantic Labeling in Remote Sensing Corpora Using Feature Fusion-Based Enhanced Global Convolutional Network with High-Resolution Representations and Depthwise Atrous Convolution". Remote Sensing 12, No. 8 (12.04.2020): 1233. http://dx.doi.org/10.3390/rs12081233.

The full text of the source
Annotation:
One of the fundamental tasks in remote sensing is semantic segmentation of aerial and satellite images. It plays a vital role in applications such as agriculture planning, map updates, route optimization, and navigation. The state-of-the-art model is the Enhanced Global Convolutional Network (GCN152-TL-A) from our previous work. It comprises two main components: (i) a backbone network to extract features and (ii) a segmentation network to annotate labels. However, the accuracy can be further improved, since the deep learning network is not designed for recovering low-level features (e.g., river, low vegetation). In this paper, we improve the semantic segmentation network in three aspects designed explicitly for the remotely sensed domain. First, we employ a modern backbone network called "High-Resolution Representation (HR)" to extract features of higher quality. It repeatedly fuses the representations generated by the high-to-low subnetworks with the restoration of the low-resolution representations to the same depth and level. Second, "Feature Fusion (FF)" is added to our network to capture low-level features (e.g., lines, dots, or gradient orientation). It fuses the features from the backbone and segmentation models, which helps to prevent the loss of these low-level features. Finally, "Depthwise Atrous Convolution (DA)" is introduced to refine the extracted features by using four multi-resolution layers in collaboration with a dilated convolution strategy. The experiments were conducted on three data sets: two private corpora from the Landsat-8 satellite and one public benchmark from the "ISPRS Vaihingen" challenge. There are two baseline models: the Deep Encoder-Decoder Network (DCED) and our previous model. The results show that the proposed model significantly outperforms all baselines. It is the winner in all data sets, exceeding an F1 of 90% in every case: 0.9114 and 0.9362 on the two Landsat-8 data sets and 0.9111 on the ISPRS Vaihingen data set. Furthermore, it achieves an accuracy beyond 90% on almost all classes.
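A minimal PyTorch sketch of the "DA" building block is shown below. The depthwise 3x3 convolution with a dilation rate followed by a 1x1 pointwise convolution is the standard construction; the specific rates (1, 2, 4, 8) and the parallel-sum fusion are assumptions for illustration, not necessarily the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DepthwiseAtrousConv(nn.Module):
    """Depthwise (groups=channels) 3x3 convolution with a dilation rate,
    followed by a 1x1 pointwise convolution to mix channels."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=channels, bias=False)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Multi-rate refinement: run several dilation rates in parallel and sum.
x = torch.randn(1, 64, 128, 128)
branches = [DepthwiseAtrousConv(64, d) for d in (1, 2, 4, 8)]
refined = sum(b(x) for b in branches)
print(refined.shape)  # torch.Size([1, 64, 128, 128])
```

Because padding equals the dilation rate for a 3x3 kernel, each branch preserves the spatial resolution while enlarging the receptive field.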
APA, Harvard, Vancouver, ISO and other citation styles
44

Péter, László, Kálmán Vad, Attila Csik, Rocío Muñíz, Lara Lobo, Rosario Pereiro, Sašo Šturm et al. "In-depth component distribution in electrodeposited alloys and multilayers". Journal of Electrochemical Science and Engineering 8, No. 1 (03.03.2018): 49–71. http://dx.doi.org/10.5599/jese.480.

The full text of the source
Annotation:
This overview shows that modern composition depth profiling methods such as secondary neutral mass spectrometry (SNMS) and glow-discharge time-of-flight mass spectrometry (GD-ToFMS) can be used to gain highly specific composition depth profile information on electrodeposited alloys. In some cases, cross-sectional transmission electron microscopy was also used to gain complementary information; nevertheless, the component distributions derived with each method exhibited the same basic features. When applying the reverse sputtering direction to SNMS analysis, the near-substrate composition evolution can be revealed with unprecedented precision. Results are presented for several specific cases of electrodeposited alloys and multilayers. It is shown that upon d.c. plating from an unstirred solution, the preferentially deposited metal accumulates in the near-substrate zone, and the steady-state alloy composition sets in only at about 150-200 nm deposit thickness. If there is more than one preferentially deposited metal in the alloy, the accumulation zones of these metals occur in the order of deposition preference. This accumulation zone can be eliminated by well-controlled hydrodynamic conditions (such as rotating disc electrodes) or by pulse plating, where a systematic decrease in the duty cycle provides a gradual transition from a graded to a uniform composition depth profile. Composition depth profile measurements also enabled the detection of coincidences in the occurrence of some deposit components down to the impurity level, exemplified by GD-ToFMS measurements of Ni-Cu/Cu multilayers in which all detected impurities accumulated in the Cu layer. The wealth of information obtained by these methods provides a much more detailed picture than results normally obtained with bulk analysis through conventional integral depth profiling and helps in the elucidation of the side reactions taking place during the plating processes.
APA, Harvard, Vancouver, ISO and other citation styles
45

Dwi Laksono, David Gilang, and Amrie Firmansyah. "THE ROLE OF MANAGERIAL ABILITY IN INDONESIA: INVESTMENT OPPORTUNITY SETS, ENVIRONMENTAL UNCERTAINTY, TAX AVOIDANCE". Humanities & Social Sciences Reviews 8, No. 4 (25.09.2020): 1305–18. http://dx.doi.org/10.18510/hssr.2020.84123.

The full text of the source
Annotation:
Purpose of the study: This study aims to obtain empirical evidence on the effect of investment factors, consisting of investment opportunity sets and environmental uncertainty, on tax avoidance, and on the role of managerial ability in moderating these effects. Methodology: The analysis was conducted on 49 manufacturing companies listed on the Indonesia Stock Exchange from 2012 to 2018, chosen through a purposive sampling method, yielding 343 observations. This study employs two panel-data regression models, with and without managerial-ability moderation. It also employs factor analysis to produce an investment opportunity sets measure that can represent this variable. Main Findings: This study reveals that investment opportunity sets and environmental uncertainty positively affect tax avoidance. Meanwhile, managerial ability failed to moderate the effect of investment opportunity sets and environmental uncertainty on tax avoidance. Implications: The profiling results can be used as an early warning, especially for account representatives and tax auditors at the Indonesia Tax Authority, so that potential tax exploration and examination can be more in-depth for firms that fulfill these characteristics. This study also advises the Government of Indonesia to provide tax holidays for firms with high IOS that invest in the real sector, and tax incentives for firms facing an environment with high uncertainty. Novelty: This study deploys managerial ability as a moderating variable between investment opportunity sets and environmental uncertainty on the one hand and tax avoidance on the other. Managerial ability plays an important role in a firm's IOS and in the environmental uncertainty it faces, because differences in managerial skill produce differences in economic outcomes and in the effectiveness of discretion.
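For readers unfamiliar with moderation, it enters a regression as an interaction term; a non-significant interaction coefficient corresponds to the "failed to moderate" finding. The sketch below is a plain pooled OLS on synthetic data, not the study's panel-data estimator, and all variable constructions are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 343  # firm-year observations, matching the study's sample size

df = pd.DataFrame({
    "ios": rng.normal(0, 1, n),        # investment opportunity set factor
    "uncert": rng.normal(0, 1, n),     # environmental uncertainty proxy
    "ability": rng.normal(0, 1, n),    # managerial ability score
})
# synthetic outcome: main effects only, so the interactions are truly null
df["tax_avoid"] = 0.3 * df.ios + 0.2 * df.uncert + rng.normal(0, 1, n)

# 'ios * ability' expands to ios + ability + ios:ability (the moderation term)
m = smf.ols("tax_avoid ~ ios * ability + uncert * ability", data=df).fit()
print(m.summary().tables[1])
```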
APA, Harvard, Vancouver, ISO and other citation styles
46

Trivedy, R. K., and S. M. Pattanshetty. "Treatment of dairy waste by using water hyacinth". Water Science and Technology 45, No. 12 (01.06.2002): 329–34. http://dx.doi.org/10.2166/wst.2002.0442.

The full text of the source
Annotation:
In the present study, the treatment of wastewater from a large dairy using water hyacinth was studied in laboratory experiments. The effects of system depth, variations in area coverage, prior settling, and daily renewal of the plants on the efficacy of hyacinth in treating the dairy waste were also studied. Water hyacinth (Eichhornia crassipes) was found to grow exceptionally well in the waste (BOD 840.0 mg/L) and, within 4 days, brought down BOD from 840.0 to 121.0 mg/L, COD from 1,160.0 to 164.0 mg/L, total suspended solids from 359.0 to 245.0 mg/L, TDS from 848.0 to 352.0 mg/L, and total nitrogen from 26.6 to 8.9 mg/L. There was, however, very little reduction in calcium, sodium, and potassium concentrations. The different experiments showed that systems with shallow depth were more efficient in removing dissolved solids, suspended solids, BOD, COD, nitrogen, and phosphorus. Daily renewal of the plants led to slightly better reduction of suspended and dissolved solids, BOD, COD, and nitrogen. Water hyacinth coverage was found to have a direct bearing on the treatment efficiency. Pretreatment (settling) of the waste was also favourable, as the dissolved oxygen content increased rapidly in the experimental sets with pretreatment, and removal efficiencies for the various parameters were also good in these sets. It can be concluded that dairy waste can be effectively treated by water hyacinth, and incorporating the above parameters into design factors can greatly increase the efficiency of the system.
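The removal efficiencies implied by the reported concentrations can be computed directly as (in - out)/in; the snippet below uses only the figures quoted in the abstract.

```python
# Percent removal over the 4-day water-hyacinth treatment, from the
# influent/effluent values reported in the abstract (mg/L).
pairs = {"BOD": (840.0, 121.0), "COD": (1160.0, 164.0),
         "TSS": (359.0, 245.0), "TDS": (848.0, 352.0),
         "total N": (26.6, 8.9)}
for name, (c_in, c_out) in pairs.items():
    print(f"{name}: {100 * (c_in - c_out) / c_in:.1f} % removal")
# BOD and COD removal work out to roughly 86 %; TSS to about 32 %.
```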
APA, Harvard, Vancouver, ISO and other citation styles
47

David, Rakesh, Caitlin S. Byrt, Stephen D. Tyerman, Matthew Gilliham and Stefanie Wege. "Roles of membrane transporters: connecting the dots from sequence to phenotype". Annals of Botany 124, No. 2 (01.06.2019): 201–8. http://dx.doi.org/10.1093/aob/mcz066.

The full text of the source
Annotation:
Abstract Background Plant membrane transporters are involved in diverse cellular processes underpinning plant physiology, such as nutrient acquisition, hormone movement, resource allocation, exclusion or sequestration of various solutes from cells and tissues, and environmental and developmental signalling. A comprehensive characterization of transporter function is therefore key to understanding and improving plant performance. Scope and Conclusions In this review, we focus on the complexities involved in characterizing transporter function and the impact that this has on current genomic annotations. Specific examples are provided that demonstrate why sequence homology alone cannot be relied upon to annotate and classify transporter function, and to show how even single amino acid residue variations can influence transporter activity and specificity. Misleading nomenclature of transporters is often a source of confusion in transporter characterization, especially for people new to or outside the field. Here, to aid researchers dealing with interpretation of large data sets that include transporter proteins, we provide examples of transporters that have been assigned names that misrepresent their cellular functions. Finally, we discuss the challenges in connecting transporter function at the molecular level with physiological data, and propose a solution through the creation of new databases. Further fundamental in-depth research on specific transport (and other) proteins is still required; without it, significant deficiencies in large-scale data sets and systems biology approaches will persist. Reliable characterization of transporter function requires integration of data at multiple levels, from amino acid residue sequence annotation to more in-depth biochemical, structural and physiological studies.
APA, Harvard, Vancouver, ISO and other citation styles
48

Jalayer, Fatemeh, Hossein Ebrahimian, Konstantinos Trevlopoulos and Brendon Bradley. "Empirical tsunami fragility modelling for hierarchical damage levels". Natural Hazards and Earth System Sciences 23, No. 2 (02.03.2023): 909–31. http://dx.doi.org/10.5194/nhess-23-909-2023.

The full text of the source
Annotation:
Abstract. The present work proposes a simulation-based Bayesian method for parameter estimation and fragility model selection for mutually exclusive and collectively exhaustive (MECE) damage states. This method uses an adaptive Markov chain Monte Carlo simulation (MCMC) based on likelihood estimation using point-wise intensity values. It identifies the simplest model that fits the data best, among the set of viable fragility models considered. The proposed methodology is demonstrated for empirical fragility assessments for two different tsunami events and different classes of buildings with varying numbers of observed damage and flow depth data pairs. As case studies, observed pairs of data for flow depth and the corresponding damage level from the South Pacific tsunami on 29 September 2009 and the Sulawesi–Palu tsunami on 28 September 2018 are used. Damage data related to a total of five different building classes are analysed. It is shown that the proposed methodology is stable and efficient for data sets with a very low number of damage versus intensity data pairs and cases in which observed data are missing for some of the damage levels.
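A minimal sketch of the likelihood construction for MECE damage states is given below: lognormal fragility curves define the exceedance probabilities, the state probabilities are their successive differences, and a plain random-walk Metropolis sampler (rather than the paper's adaptive MCMC, and with implicit flat priors) explores the parameters. The depth/damage pairs are synthetic.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic (flow depth in m, damage state) pairs; states 0 < 1 < 2 are MECE.
depth = np.array([0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
state = np.array([0,   0,   1,   1,   1,   2,   1,   2,   2,   2])

def log_like(theta1, theta2, beta):
    """Point-wise log likelihood of MECE states under lognormal fragilities."""
    if not (0 < theta1 < theta2 and beta > 0):
        return -np.inf                       # enforce ordered medians
    f1 = norm.cdf(np.log(depth / theta1) / beta)  # P(DS >= 1 | depth)
    f2 = norm.cdf(np.log(depth / theta2) / beta)  # P(DS >= 2 | depth)
    p = np.stack([1 - f1, f1 - f2, f2])           # MECE state probabilities
    return np.sum(np.log(p[state, np.arange(len(depth))] + 1e-12))

# Plain random-walk Metropolis; the paper uses an adaptive MCMC variant.
x = np.array([0.7, 2.0, 0.5])                # initial (theta1, theta2, beta)
ll, chain = log_like(*x), []
for _ in range(20000):
    prop = x + rng.normal(0, 0.05, 3)
    llp = log_like(*prop)
    if np.log(rng.uniform()) < llp - ll:     # accept/reject step
        x, ll = prop, llp
    chain.append(x)
print("posterior medians:", np.median(chain[5000:], axis=0))
```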
APA, Harvard, Vancouver, ISO and other citation styles
49

Crawford, Mark W., Denise Rohan, Christopher K. Macgowan, Shi-Joon Yoo and Bruce A. Macpherson. "Effect of Propofol Anesthesia and Continuous Positive Airway Pressure on Upper Airway Size and Configuration in Infants". Anesthesiology 105, No. 1 (01.07.2006): 45–50. http://dx.doi.org/10.1097/00000542-200607000-00011.

The full text of the source
Annotation:
Background Infants are prone to obstruction of the upper airway during general anesthesia. Continuous positive airway pressure (CPAP) is often used to prevent or treat anesthesia-induced airway obstruction. The authors studied the interaction of propofol anesthesia and CPAP on airway caliber in infants using magnetic resonance imaging. Methods Nine infants undergoing elective magnetic resonance imaging of the brain were studied. Head position was standardized. Spin echo magnetic resonance images of the airway were acquired at the level of the soft palate, base of the tongue, and tip of the epiglottis. Four sets of images were acquired in sequence: (1) during light propofol anesthesia at an infusion rate of 80 µg·kg⁻¹·min⁻¹; (2) after increasing the depth of propofol anesthesia by administering a bolus dose (2.0 mg/kg) and increasing the infusion rate to 240 µg·kg⁻¹·min⁻¹; (3) during continued infusion of 240 µg·kg⁻¹·min⁻¹ propofol and application of 10 cm H2O CPAP; and (4) after removal of CPAP and continued infusion of 240 µg·kg⁻¹·min⁻¹ propofol. Results Increasing depth of propofol anesthesia decreased airway caliber at each anatomical level, predominantly due to anteroposterior narrowing. Application of CPAP completely reversed the propofol-induced decrease in airway caliber, primarily by increasing the transverse dimension. Conclusions Airway narrowing with increasing depth of propofol anesthesia results predominantly from a reduction in the anteroposterior dimension, whereas CPAP acts primarily to increase the transverse dimension. Although airway caliber during deep propofol anesthesia with application of CPAP was similar to that during light propofol anesthesia, there were significant configurational differences.
APA, Harvard, Vancouver, ISO and other citation styles
50

Adesta, Erry Yulian Triblas, Muataz H. F. Al Hazza, M. Y. Suprianto and Muhammad Riza. "Predicting Surface Roughness with Respect to Process Parameters Using Regression Analysis Models in End Milling". Advanced Materials Research 576 (October 2012): 99–102. http://dx.doi.org/10.4028/www.scientific.net/amr.576.99.

The full text of the source
Annotation:
Surface roughness affects the functional attributes of finished parts; predicting the finished surface is therefore important for selecting cutting levels that reach the required quality. In this research, an experimental investigation was conducted to predict the surface roughness in the finish end milling process at higher cutting speed. Twenty sets of data for finish end milling of AISI H13 at a hardness of 48 HRC were collected based on a five-level central composite design (CCD). All experiments were done using a Sandvik Coromill R490 indexable tool holder with PVD-coated TiAlN carbide inserts. The experimental work was performed to predict four different roughness parameters: arithmetic mean roughness (Ra), total roughness (Rt), mean depth of roughness (Rz), and root mean square roughness (Rq).
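Prediction models of this kind are typically second-order response-surface regressions over the CCD factors. The sketch below fits such a quadratic model for Ra on synthetic data; the factor ranges and coefficients are invented for illustration, and only Ra (of the four parameters) is modeled.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)

# Hypothetical CCD-style runs: cutting speed (m/min), feed (mm/tooth),
# axial depth of cut (mm) -> measured Ra (um). All values invented.
X = rng.uniform([150, 0.05, 0.2], [250, 0.15, 1.0], size=(20, 3))
ra = (0.2 + 0.002 * X[:, 0] + 8 * X[:, 1] + 0.3 * X[:, 2]
      + 20 * X[:, 1] ** 2 + rng.normal(0, 0.02, 20))

# Second-order (quadratic) regression model, the usual choice for a CCD.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, ra)
print("R^2 on the fitted runs:", round(model.score(X, ra), 3))
print("predicted Ra at (200, 0.1, 0.5):", model.predict([[200, 0.1, 0.5]]))
```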
APA, Harvard, Vancouver, ISO and other citation styles