Journal articles on the topic 'Component-wise Models'


Consult the top 50 journal articles for your research on the topic 'Component-wise Models.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Schmid, Matthias, and Torsten Hothorn. "Boosting additive models using component-wise P-Splines." Computational Statistics & Data Analysis 53, no. 2 (December 2008): 298–311. http://dx.doi.org/10.1016/j.csda.2008.09.009.

2

Reddy, Chandan K., and Bala Rajaratnam. "Learning mixture models via component-wise parameter smoothing." Computational Statistics & Data Analysis 54, no. 3 (March 2010): 732–49. http://dx.doi.org/10.1016/j.csda.2009.04.012.

3

Proietti, Tommaso. "Component-wise Representations of Long-memory Models and Volatility Prediction." Journal of Financial Econometrics 14, no. 4 (April 25, 2016): 668–92. http://dx.doi.org/10.1093/jjfinec/nbw004.

4

GUNDERSON, R. W., and R. CANFIELD. "PIECE-WISE MULTILINEAR PREDICTION FROM FCV DISJOINT PRINCIPAL COMPONENT MODELS." International Journal of General Systems 16, no. 4 (May 1990): 373–83. http://dx.doi.org/10.1080/03081079008935089.

5

Carrera, Erasmo, and Alfonso Pagani. "Free vibration analysis of civil engineering structures by component-wise models." Journal of Sound and Vibration 333, no. 19 (September 2014): 4597–620. http://dx.doi.org/10.1016/j.jsv.2014.04.063.

6

Feng, Lei, Susu Zhu, Chu Zhang, Yidan Bao, Pan Gao, and Yong He. "Variety Identification of Raisins Using Near-Infrared Hyperspectral Imaging." Molecules 23, no. 11 (November 8, 2018): 2907. http://dx.doi.org/10.3390/molecules23112907.

Abstract:
Different varieties of raisins have different nutritional properties and vary in commercial value. An identification method of raisin varieties using hyperspectral imaging was explored. Hyperspectral images of two different varieties of raisins (Wuhebai and Xiangfei) at spectral range of 874–1734 nm were acquired, and each variety contained three grades. Pixel-wise spectra were extracted and preprocessed by wavelet transform and standard normal variate, and object-wise spectra (sample average spectra) were calculated. Principal component analysis (PCA) and independent component analysis (ICA) of object-wise spectra and pixel-wise spectra were conducted to select effective wavelengths. Pixel-wise PCA scores images indicated differences between two varieties and among different grades. SVM (Support Vector Machine), k-NN (k-nearest Neighbors Algorithm), and RBFNN (Radial Basis Function Neural Network) models were built to discriminate two varieties of raisins. Results indicated that both SVM and RBFNN models based on object-wise spectra using optimal wavelengths selected by PCA could be used for raisin variety identification. The visualization maps verified the effectiveness of using hyperspectral imaging to identify raisin varieties.
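The workflow summarized in the abstract above (wavelength selection guided by PCA, then classification of object-wise spectra with SVM or k-NN) follows a common chemometrics pattern. The sketch below illustrates it with scikit-learn on synthetic spectra; the array sizes, the number of selected bands, and the simulated class difference are assumptions for illustration, not the authors' data or code.

```python
# Sketch: pick "effective wavelengths" from PCA loadings, then classify
# object-wise spectra with an SVM (synthetic data, top-10 bands assumed).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_bands = 300, 200           # e.g. spectra resampled to 200 bands
labels = rng.integers(0, 2, n_samples)  # two hypothetical varieties
spectra = rng.normal(size=(n_samples, n_bands))
spectra[labels == 1, 60:70] += 0.8      # class-dependent absorption feature

# Wavelength selection: keep the bands with the largest loading magnitudes
# on the leading principal components (one common heuristic).
pca = PCA(n_components=5).fit(spectra)
importance = np.abs(pca.components_).max(axis=0)
effective_bands = np.argsort(importance)[-10:]

X_train, X_test, y_train, y_test = train_test_split(
    spectra[:, effective_bands], labels, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```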
7

McBane, Sean, Youngsoo Choi, and Karen Willcox. "Stress-constrained topology optimization of lattice-like structures using component-wise reduced order models." Computer Methods in Applied Mechanics and Engineering 400 (October 2022): 115525. http://dx.doi.org/10.1016/j.cma.2022.115525.

8

Viglietti, A., E. Zappino, and E. Carrera. "Free vibration analysis of locally damaged aerospace tapered composite structures using component-wise models." Composite Structures 192 (May 2018): 38–51. http://dx.doi.org/10.1016/j.compstruct.2018.02.054.

9

Petrolo, Marco, and Erasmo Carrera. "High-Fidelity and Computationally Efficient Component-Wise Structural Models: An Overview of Applications and Perspectives." Applied Mechanics and Materials 828 (March 2016): 175–96. http://dx.doi.org/10.4028/www.scientific.net/amm.828.175.

Abstract:
The Component-Wise approach (CW) is a novel structural modeling strategy that stemmed from the Carrera Unified Formulation (CUF). This work presents an overview of the enhanced capabilities of the CW for the static and dynamic analysis of structures, such as aircraft wings, civil buildings, and composite plates. The CW makes use of the advanced 1D CUF models. Such models exploit Lagrange polynomial expansions (LE) to model the displacement field over the cross-section of the structure. The use of LE improves the capabilities of 1D models: LE models provide 3D-like accuracy at a far lower computational cost. The use of LE leads to the CW. Although LE are 1D elements, every component of an engineering structure can be modeled via LE elements independently of its geometry, e.g. 2D transverse stiffeners and panels, and of its scale, e.g. fiber/matrix cells. The use of the same type of finite elements facilitates the finite element modeling to a great extent; for instance, no interface techniques are necessary. Moreover, in a CW model, the displacement unknowns are placed along the physical surfaces of the structure with no need for artificial lines and surfaces. Such a feature is promising in a CAD/FEM coupling scenario. The CW approach can be considered an accurate and computationally cheap analysis tool for many structural problems, such as progressive failure analyses, multiscale and impact problems, and health monitoring.
10

Yamada, Tomonori, Noriyuki Kushida, Fumimasa Araya, Akemi Nishida, and Norihiro Nakajima. "Component-Wise Meshing Approach and Evaluation of Bonding Strategy on the Interface of Components for Assembled Finite Element Analysis of Structures." Key Engineering Materials 452-453 (November 2010): 701–4. http://dx.doi.org/10.4028/www.scientific.net/kem.452-453.701.

Abstract:
Finite elements are extensively utilized to solve various problems in engineering fields, owing to the growth of computing technologies. However, there is a lack of methodology for the analysis of huge assembled structures. The mechanics at the interfaces of components (for instance, contact, bolted joints and welding in assembly) is a key issue for large, important structures such as nuclear power plants. On the other hand, it is well known that as finite element models become large and complex, the construction of detailed meshes becomes a bottleneck in CAE procedures. To solve these problems, the authors introduce a component-wise meshing approach and a bonding strategy for the interfaces of components. In order to assemble component-wise meshes, the penalty method is introduced not only to constrain the displacements but also to provide a classical spring connection on the joint interface, although the penalty method is often claimed to be unsuitable for iterative solvers. In this paper, the convergence performance of an iterative solver with the penalty method is investigated, and the detailed component-wise distributed computation scheme is described with numerical examples.
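The penalty-based bonding described in the abstract above can be illustrated with a toy 1D example: two independently meshed bar components are coupled by adding a penalty stiffness between their interface degrees of freedom. The bar properties, penalty value, and load below are illustrative assumptions, not values from the paper.

```python
# Sketch: two 1D bar "components" meshed independently and bonded at their
# interface nodes with a penalty spring energy k_pen * (u_a - u_b)^2 / 2.
import numpy as np

def bar_stiffness(n_elem, EA, L):
    """Assemble the stiffness matrix of a uniform 1D bar with n_elem elements."""
    k = EA / (L / n_elem)
    K = np.zeros((n_elem + 1, n_elem + 1))
    for e in range(n_elem):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

Ka = bar_stiffness(n_elem=4, EA=1.0, L=1.0)   # component A
Kb = bar_stiffness(n_elem=3, EA=1.0, L=1.0)   # component B (different mesh)

# Global system: DOFs 0..4 belong to A, DOFs 5..8 to B.
n = Ka.shape[0] + Kb.shape[0]
K = np.zeros((n, n))
K[:5, :5] += Ka
K[5:, 5:] += Kb

# Penalty bonding of A's last node (DOF 4) to B's first node (DOF 5).
k_pen = 1e6
for i, j, s in [(4, 4, 1), (5, 5, 1), (4, 5, -1), (5, 4, -1)]:
    K[i, j] += s * k_pen

f = np.zeros(n)
f[-1] = 1.0                      # tip load on component B
free = np.arange(1, n)           # clamp DOF 0 of component A
u = np.zeros(n)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
print("interface jump:", u[4] - u[5])   # ~0 for a large penalty
print("tip displacement:", u[-1])
```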
11

Zappino, E., A. Viglietti, and E. Carrera. "The analysis of tapered structures using a component-wise approach based on refined one-dimensional models." Aerospace Science and Technology 65 (June 2017): 141–56. http://dx.doi.org/10.1016/j.ast.2017.02.004.

12

Abdalla, A., H. Fashir, A. Ali, and D. Fairhead. "Validation of recent GOCE/GRACE geopotential models over Khartoum state - Sudan." Journal of Geodetic Science 2, no. 2 (January 1, 2012): 88–97. http://dx.doi.org/10.2478/v10156-011-0035-6.

Abstract:
This paper evaluates a number of the latest releases of GOCE/GRACE global geopotential models (GGMs) using GPS-levelling geometric geoid heights, terrestrial gravity data and existing local gravimetric models. We investigate each global model at every 5 degrees of spherical harmonics. Our analysis shows that the satellite-only models derived by space-wise and time-wise approaches (SPW_R1, SPW_R2, TIM_R1 and TIM_R2) and GOCO01S, together with EGM08 (a combined model), are very distinct and consistent with the local data, which allows one of them to be selected as the best candidate model and then utilized in our further geoid studies. One of the satellite-only models will be employed to acquire the long-wavelength geoid component, which is one of the major steps in geoid determination. EGM08 will be used to compensate for and restore the missing gravity data points in the unsurveyed parts of the target area. We expect further improvements in geoid studies in Sudan due to the improved medium-wavelength part of the gravity field from the GOCE mission.
13

Martí, M. C., and P. Mulet. "Some techniques for improving the resolution of finite difference component-wise WENO schemes for polydisperse sedimentation models." Applied Numerical Mathematics 78 (April 2014): 1–13. http://dx.doi.org/10.1016/j.apnum.2013.11.005.

14

Faghani, Shahriar, Bardia Khosravi, Mana Moassefi, Gian Marco Conte, and Bradley Erickson. "NIMG-55. A COMPARISON OF THREE DIFFERENT DEEP LEARNING-BASED MODELS TO PREDICT THE MGMT PROMOTER METHYLATION STATUS IN GLIOBLASTOMA USING BRAIN MRI." Neuro-Oncology 24, Supplement_7 (November 1, 2022): vii176. http://dx.doi.org/10.1093/neuonc/noac209.673.

Abstract:
GBM is the most common primary malignant brain tumor in adults. O6-methylguanine DNA methyltransferase (MGMT) promoter methylation status is an important prognostic biomarker that predicts the response to temozolomide and guides treatment decisions. At present, the only reliable way to determine MGMT promoter methylation status is through the analysis of tumor tissue. Given the limitations and complications of histology-based methods, an imaging-based approach for non-invasive prediction of MGMT promoter methylation status is beneficial. This study aimed to develop and compare three different deep-learning-based approaches for predicting MGMT promoter methylation status non-invasively. We obtained 576 T2-weighted images (T2WIs) with their corresponding tumor masks and MGMT promoter methylation status from the Brain Tumor Segmentation (BraTS) 2021 dataset. The dataset was split into five folds at the patient level, stratified by MGMT promoter methylation status, to perform 5-fold cross-validation. We developed three different models: voxel-wise, slice-wise, and whole-brain. For voxel-wise classification, methylated and unmethylated MGMT tumor masks were labeled 1 and 2, respectively, with 0 as background. We converted each T2WI into 32x32x32 patches. We trained a 3D-Vnet model for tumor segmentation. After inference, we constructed the whole brain volume based on the patch coordinates. The final prediction of MGMT methylation status was made by majority voting among the predicted voxel values of the largest connected component. For slice-wise classification, we trained an object detection model for tumor detection and MGMT methylation status prediction; then, we used majority voting for the final prediction. For the whole-brain approach, we trained a 3D DenseNet121 for prediction. Whole-brain, slice-wise, and voxel-wise accuracies were 65.42% (SD 3.97%), 61.37% (SD 1.48%), and 56.84% (SD 4.38%), respectively. Across the whole-brain, slice-wise, and voxel-wise deep learning approaches, the whole-brain approach was the most effective for MGMT methylation status prediction on the BraTS 2021 dataset.
15

Cann, Matthew, Ryley McConkey, Fue-Sang Lien, William Melek, and Eugene Yee. "Mode classification for vortex shedding from an oscillating wind turbine using machine learning." Journal of Physics: Conference Series 2141, no. 1 (December 1, 2021): 012009. http://dx.doi.org/10.1088/1742-6596/2141/1/012009.

Abstract:
This study presents an effective strategy that applies machine learning methods to classify vortex shedding modes produced by the oscillating cylinder of a bladeless wind turbine. A 2-dimensional computational fluid dynamics (CFD) simulation using OpenFOAMv2006 was developed to simulate a bladeless wind turbine's vortex shedding behavior. The simulations were conducted at two wake modes (2S, 2P) and a transition mode (2PO). The local flow measurements were recorded using four sensors: vorticity, flow speed, and the stream-wise and transverse stream-wise velocity components. The time-series data were transformed into the frequency domain to generate a reduced feature vector. A variety of supervised machine learning models were quantitatively compared based on classification accuracy. The best-performing models were then re-evaluated based on the effects of artificially noisy experimental data on the models' performance. The velocity sensor oriented transverse to the predominant flow (u_y) achieved a testing accuracy 15% higher than the next best sensor. The random forest and k-nearest neighbor models, using u_y, achieved 99.3% and 99.8% classification accuracy, respectively. The feature-noise analysis showed that classification accuracy was reduced by 11.7% and 21.2% at the highest noise level for the respective models. The random forest algorithm trained using the transverse stream-wise component of the velocity vector provided the best balance of testing accuracy and robustness to data corruption. The results highlight the proposed method's ability to accurately identify vortex structures in the wake of an oscillating cylinder using feature extraction.
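The classification strategy in the abstract above (transform probe time series to the frequency domain, form a reduced feature vector, train a supervised classifier such as a random forest) can be sketched as follows. The synthetic signals, sampling rate, feature length, and mode frequencies are assumptions standing in for the CFD probe data, not values from the study.

```python
# Sketch: frequency-domain features from a transverse-velocity time series,
# classified with a random forest (synthetic data in place of CFD probes).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs, n_t = 200.0, 1024                     # assumed sampling rate and length

def synthetic_probe(mode):
    """Toy u_y signal: each shedding 'mode' gets a different dominant frequency."""
    t = np.arange(n_t) / fs
    f0 = {0: 2.0, 1: 3.5, 2: 5.0}[mode]   # hypothetical 2S / 2P / 2PO frequencies
    return np.sin(2 * np.pi * f0 * t) + 0.3 * rng.normal(size=n_t)

def features(signal, n_keep=32):
    """Reduced feature vector: magnitudes of the lowest FFT bins, peak-normalized."""
    spec = np.abs(np.fft.rfft(signal))
    return spec[:n_keep] / spec.max()

modes = rng.integers(0, 3, 240)
X = np.array([features(synthetic_probe(m)) for m in modes])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, modes, cv=5).mean())
```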
16

Wu, Jiann-Ming. "Natural Discriminant Analysis Using Interactive Potts Models." Neural Computation 14, no. 3 (March 1, 2002): 689–713. http://dx.doi.org/10.1162/089976602317250951.

Abstract:
Natural discriminant analysis based on interactive Potts models is developed in this work. A generative model composed of piece-wise multivariate gaussian distributions is used to characterize the input space, exploring the embedded clustering and mixing structures and developing proper internal representations of input parameters. The maximization of a log-likelihood function measuring the fitness of all input parameters to the generative model, and the minimization of a design cost summing up square errors between posterior outputs and desired outputs constitutes a mathematical framework for discriminant analysis. We apply a hybrid of the mean-field annealing and the gradient-descent methods to the optimization of this framework and obtain multiple sets of interactive dynamics, which realize coupled Potts models for discriminant analysis. The new learning process is a whole process of component analysis, clustering analysis, and labeling analysis. Its major improvement compared to the radial basis function and the support vector machine is described by using some artificial examples and a real-world application to breast cancer diagnosis.
17

Reil, Matthias, David Morin, Magnus Langseth, and Octavian Knoll. "Connections between steel and aluminium using adhesive bonding combined with self-piercing riveting." EPJ Web of Conferences 183 (2018): 04010. http://dx.doi.org/10.1051/epjconf/201818304010.

Abstract:
The multi-material design of modern car bodies requires joining technologies for dissimilar materials. Adhesive bonding in combination with self-piercing riveting is widely used for joining steel and aluminium structures. To guarantee crashworthiness and reliability of a car body, accurate and efficient numerical models of its materials and connections are required. Suitable component test setups are necessary for the development and validation of such models. In this work, a novel test setup for adhesively bonded and point-wise connected components is presented. Here, load combinations comparable to a vehicle crash are introduced into the connections. The developed setup facilitates successive failure of multiple connections and enables a broad validation of numerical connection models.
18

Zhang, Tao, and Shuyu Sun. "Thermodynamics-Informed Neural Network (TINN) for Phase Equilibrium Calculations Considering Capillary Pressure." Energies 14, no. 22 (November 18, 2021): 7724. http://dx.doi.org/10.3390/en14227724.

Abstract:
The thermodynamic properties of fluid mixtures play a crucial role in designing physically meaningful models and robust algorithms for simulating multi-component multi-phase flow in subsurface, which is needed for many subsurface applications. In this context, the equation-of-state-based flash calculation used to predict the equilibrium properties of each phase for a given fluid mixture going through phase splitting is a crucial component, and often a bottleneck, of multi-phase flow simulations. In this paper, a capillarity-wise Thermodynamics-Informed Neural Network is developed for the first time to propose a fast, accurate and robust approach calculating phase equilibrium properties for unconventional reservoirs. The trained model performs well in both phase stability tests and phase splitting calculations in a large range of reservoir conditions, which enables further multi-component multi-phase flow simulations with a strong thermodynamic basis.
19

Knecht, Stefan, Michal Repisky, Hans Jørgen Aagaard Jensen, and Trond Saue. "Exact two-component Hamiltonians for relativistic quantum chemistry: Two-electron picture-change corrections made simple." Journal of Chemical Physics 157, no. 11 (September 21, 2022): 114106. http://dx.doi.org/10.1063/5.0095112.

Abstract:
Based on self-consistent field (SCF) atomic mean-field (amf) quantities, we present two simple yet computationally efficient and numerically accurate matrix-algebraic approaches to correct both scalar-relativistic and spin–orbit two-electron picture-change effects (PCEs) arising within an exact two-component (X2C) Hamiltonian framework. Both approaches, dubbed amfX2C and e(xtended)amfX2C, allow us to uniquely tailor PCE corrections to mean-field models, viz. Hartree–Fock or Kohn–Sham DFT, in the latter case also avoiding the need for a point-wise calculation of exchange–correlation PCE corrections. We assess the numerical performance of these PCE correction models on spinor energies of group 18 (closed-shell) and group 16 (open-shell) diatomic molecules, achieving a consistent [Formula: see text] Hartree accuracy compared to reference four-component data. Additional tests include SCF calculations of molecular properties such as absolute contact density and contact density shifts in copernicium fluoride compounds ([Formula: see text], n = 2,4,6), as well as equation-of-motion coupled-cluster calculations of x-ray core-ionization energies of [Formula: see text]- and [Formula: see text]-containing molecules, where we observe an excellent agreement with reference data. To conclude, we are confident that our (e)amfX2C PCE correction models constitute a fundamental milestone toward a universal and reliable relativistic two-component quantum-chemical approach, maintaining the accuracy of the parent four-component one at a fraction of its computational cost.
20

Rovilos, E., I. Georgantopoulos, A. Akylas, J. Aird, D. M. Alexander, A. Comastri, A. Del Moro, et al. "A wide search of obscured Active Galactic Nuclei using XMM-Newton and WISE." Proceedings of the International Astronomical Union 9, S304 (October 2013): 245–46. http://dx.doi.org/10.1017/s1743921314003950.

Abstract:
We use the WISE all-sky survey observations to look for counterparts of hard X-ray selected sources from the XMM-Newton-SDSS survey. We then measure the 12 μm luminosity of the AGN by decomposing their optical to infrared SEDs with a host and an AGN component and compare it to the X-ray luminosity and their expected intrinsic relation. This way we select 20 X-ray under-luminous heavily obscured candidates and examine their X-ray and optical properties in more detail. We find evidence for a Compton-thick nucleus for six sources, a number lower than expected from X-ray background synthesis models, which shows the limitations of our method.
21

Ma, Xianghua, Zhenkun Yang, and Shining Chen. "Multiscale Feature Filtering Network for Image Recognition System in Unmanned Aerial Vehicle." Complexity 2021 (February 18, 2021): 1–11. http://dx.doi.org/10.1155/2021/6663851.

Abstract:
For unmanned aerial vehicle (UAV), object detection at different scales is an important component for the visual recognition. Recent advances in convolutional neural networks (CNNs) have demonstrated that attention mechanism remarkably enhances multiscale representation of CNNs. However, most existing multiscale feature representation methods simply employ several attention blocks in the attention mechanism to adaptively recalibrate the feature response, which overlooks the context information at a multiscale level. To solve this problem, a multiscale feature filtering network (MFFNet) is proposed in this paper for image recognition system in the UAV. A novel building block, namely, multiscale feature filtering (MFF) module, is proposed for ResNet-like backbones and it allows feature-selective learning for multiscale context information across multiparallel branches. These branches employ multiple atrous convolutions at different scales, respectively, and further adaptively generate channel-wise feature responses by emphasizing channel-wise dependencies. Experimental results on CIFAR100 and Tiny ImageNet datasets reflect that the MFFNet achieves very competitive results in comparison with previous baseline models. Further ablation experiments verify that the MFFNet can achieve consistent performance gains in image classification and object detection tasks.
22

Carrera, E., A. G. de Miguel, and A. Pagani. "Component-wise analysis of laminated structures by hierarchical refined models with mapping features and enhanced accuracy at layer to fiber-matrix scales." Mechanics of Advanced Materials and Structures 25, no. 14 (December 27, 2017): 1224–38. http://dx.doi.org/10.1080/15376494.2017.1396631.

23

Zhang, Congyi, Mohamed Elgharib, Gereon Fox, Min Gu, Christian Theobalt, and Wenping Wang. "An Implicit Parametric Morphable Dental Model." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–13. http://dx.doi.org/10.1145/3550454.3555469.

Abstract:
3D Morphable models of the human body capture variations among subjects and are useful in reconstruction and editing applications. Current dental models use an explicit mesh scene representation and model only the teeth, ignoring the gum. In this work, we present the first parametric 3D morphable dental model for both teeth and gum. Our model uses an implicit scene representation and is learned from rigidly aligned scans. It is based on a component-wise representation for each tooth and the gum, together with a learnable latent code for each of such components. It also learns a template shape thus enabling several applications such as segmentation, interpolation and tooth replacement. Our reconstruction quality is on par with the most advanced global implicit representations while enabling novel applications. The code will be available at https://github.com/cong-yi/DMM
24

Mao, Jiachen, Huanrui Yang, Ang Li, Hai Li, and Yiran Chen. "TPrune." ACM Transactions on Cyber-Physical Systems 5, no. 3 (July 2021): 1–22. http://dx.doi.org/10.1145/3446640.

Abstract:
The invention of the Transformer model structure has boosted the performance of Neural Machine Translation (NMT) tasks to an unprecedented level. Much previous work has been done to make the Transformer model more execution-friendly on resource-constrained platforms. These efforts can be categorized into three key fields: Model Pruning, Transfer Learning, and Efficient Transformer Variants. The family of model pruning methods is popular for its simplicity in practice and promising compression rate and has achieved great success in the field of convolutional neural networks (CNNs) for many vision tasks. Nonetheless, previous Transformer pruning works did not perform a thorough model analysis and evaluation of each Transformer component on off-the-shelf mobile devices. In this work, we analyze and prune Transformer models at the line-wise granularity and also implement our pruning method on real mobile platforms. We explore the properties of all Transformer components as well as their sparsity features, which are leveraged to guide Transformer model pruning. We name our whole Transformer analysis and pruning pipeline TPrune. In TPrune, we first propose Block-wise Structured Sparsity Learning (BSSL) to analyze Transformer model properties. Then, based on the characteristics derived from BSSL, we apply Structured Hoyer Square (SHS) to derive the final pruned models. Compared with state-of-the-art Transformer pruning methods, TPrune is able to achieve a higher model compression rate with less performance degradation. Experimental results show that our pruned models achieve 1.16×–1.92× speedup on mobile devices with 0%–8% BLEU score degradation compared with the original Transformer model.
25

Xiong, Lie, Pei-Fen Kuan, Jianan Tian, Sunduz Keles, and Sijian Wang. "Multivariate Boosting for Integrative Analysis of High-Dimensional Cancer Genomic Data." Cancer Informatics 13s7 (January 2014): CIN.S16353. http://dx.doi.org/10.4137/cin.s16353.

Abstract:
In this paper, we propose a novel multivariate component-wise boosting method for fitting multivariate response regression models under the high-dimension, low sample size setting. Our method is motivated by modeling the association among different biological molecules based on multiple types of high-dimensional genomic data. Particularly, we are interested in two applications: studying the influence of DNA copy number alterations on RNA transcript levels and investigating the association between DNA methylation and gene expression. For this purpose, we model the dependence of the RNA expression levels on DNA copy number alterations and the dependence of gene expression on DNA methylation through multivariate regression models and utilize boosting-type method to handle the high dimensionality as well as model the possible nonlinear associations. The performance of the proposed method is demonstrated through simulation studies. Finally, our multivariate boosting method is applied to two breast cancer studies.
26

Pagani, Alfonso, Stefano Valvano, and Erasmo Carrera. "Analysis of laminated composites and sandwich structures by variable-kinematic MITC9 plate elements." Journal of Sandwich Structures & Materials 20, no. 1 (May 26, 2016): 4–41. http://dx.doi.org/10.1177/1099636216650988.

Abstract:
In this paper, classical as well as various refined plate finite elements for the analysis of laminates and sandwich structures are discussed. The attention is particularly focussed on a new variable-kinematic plate element. According to the proposed modelling approach, the plate kinematics can vary through the thickness within the same finite element. Therefore, refined approximations and layer-wise descriptions of the primary mechanical variables can be adopted in selected portions of the structures that require a more accurate analysis. The variable-kinematic model is implemented in the framework of the Carrera unified formulation, which is a hierarchical approach allowing for the straightforward implementation of the theories of structures. In particular, Legendre-like polynomial expansions are adopted to approximate the through-the-thickness unknowns and develop equivalent single layer, layer-wise, as well as variable-kinematic theories. In this paper, the principle of virtual displacements is used to derive the governing equations of the generic plate theory and a mixed interpolation of tensorial components technique is employed to avoid locking phenomena. Various problems are addressed in order to validate and assess the proposed formulation, including multi-layer plates and sandwich structures subjected to different loadings and boundary conditions. The results are compared with those from the elasticity theory given in the literature and from layer-wise solutions. The discussion clearly underlines the enhanced capabilities of the proposed variable-kinematic mixed interpolation of tensorial component plate elements, which allows, if used properly, to obtain formally correct solutions in critical areas of the structure with a considerable reduction of the computational costs with respect to more complex, full layer-wise models. This aspect results particularly advantageous in problems where localized phenomena within complex structures play a major role.
27

Zhu, Zhou, Gao, Bao, He, and Feng. "Near-Infrared Hyperspectral Imaging Combined with Deep Learning to Identify Cotton Seed Varieties." Molecules 24, no. 18 (September 7, 2019): 3268. http://dx.doi.org/10.3390/molecules24183268.

Abstract:
Cotton seed purity is a critical factor influencing the cotton yield. In this study, near-infrared hyperspectral imaging was used to identify seven varieties of cotton seeds. Score images formed by pixel-wise principal component analysis (PCA) showed that there were differences among different varieties of cotton seeds. Effective wavelengths were selected according to PCA loadings. A self-designed convolutional neural network (CNN) and a Residual Network (ResNet) were used to establish classification models. Partial least squares discriminant analysis (PLS-DA), logistic regression (LR) and support vector machine (SVM) were used as direct classifiers based on full spectra and effective wavelengths for comparison. Furthermore, PLS-DA, LR and SVM models were used for cotton seed classification based on deep features extracted by the self-designed CNN and ResNet models. LR and PLS-DA models using deep features as input performed slightly better than those using full spectra and effective wavelengths directly. Self-designed CNN-based models performed slightly better than ResNet-based models. Classification models using full spectra performed better than those using effective wavelengths, with classification accuracies of the calibration, validation and prediction sets all over 80% for most models. The overall results illustrated that near-infrared hyperspectral imaging with deep learning is feasible for identifying cotton seed varieties.
28

Zhao, Jian-Qiang, Yan-Yong Zhao, Jin-Guan Lin, Zhang-Xiao Miao, and Waled Khaled. "Estimation and testing for panel data partially linear single-index models with errors correlated in space and time." Random Matrices: Theory and Applications 09, no. 02 (November 7, 2019): 2150005. http://dx.doi.org/10.1142/s2010326321500052.

Abstract:
We consider a panel data partially linear single-index models (PDPLSIM) with errors correlated in space and time. A serially correlated error structure is adopted for the correlation in time. We propose using a semiparametric minimum average variance estimation (SMAVE) to obtain estimators for both the parameters and unknown link function. We not only establish an asymptotically normal distribution for the estimators of the parameters in the single index and the linear component of the model, but also obtain an asymptotically normal distribution for the nonparametric local linear estimator of the unknown link function. Then, a fitting of spatial and time-wise correlation structures is investigated. Based on the estimators, we propose a generalized F-type test method to deal with testing problems of index parameters of PDPLSIM with errors correlated in space and time. It is shown that under the null hypothesis, the proposed test statistic follows asymptotically a [Formula: see text]-distribution with the scale constant and degrees of freedom being independent of nuisance parameters or functions. Simulated studies and real data examples have been used to illustrate our proposed methodology.
29

Zinzuvadiya, Milan, and Vahid Behzadan. "State-Wise Adaptive Discounting from Experience (SADE): A Novel Discounting Scheme for Reinforcement Learning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15953–54. http://dx.doi.org/10.1609/aaai.v35i18.17973.

Abstract:
In Markov Decision Process (MDP) models of sequential decision-making, it is common practice to account for temporal discounting by incorporating a constant discount factor. While the effectiveness of fixed-rate discounting in various Reinforcement Learning (RL) settings is well-established, the efficiency of this scheme has been questioned in recent studies. Another notable shortcoming of fixed-rate discounting stems from abstracting away the experiential information of the agent, which is shown to be a significant component of delay discounting in human cognition. To address this issue, we propose State-wise Adaptive Discounting from Experience (SADE) as a novel adaptive discounting scheme for RL agents. SADE leverages the experiential observations of state values in episodic trajectories to iteratively adjust state-specific discount rates. We report experimental evaluations of SADE in Q-learning agents, which demonstrate significant enhancement of sample complexity and convergence rate compared to fixed-rate discounting.
30

Griesbach, Colin, Andreas Groll, and Elisabeth Bergherr. "Joint Modelling Approaches to Survival Analysis via Likelihood-Based Boosting Techniques." Computational and Mathematical Methods in Medicine 2021 (November 15, 2021): 1–11. http://dx.doi.org/10.1155/2021/4384035.

Abstract:
Joint models are a powerful class of statistical models which apply to any data where event times are recorded alongside a longitudinal outcome by connecting longitudinal and time-to-event data within a joint likelihood allowing for quantification of the association between the two outcomes without possible bias. In order to make joint models feasible for regularization and variable selection, a statistical boosting algorithm has been proposed, which fits joint models using component-wise gradient boosting techniques. However, these methods have well-known limitations, i.e., they provide no balanced updating procedure for random effects in longitudinal analysis and tend to return biased effect estimation for time-dependent covariates in survival analysis. In this manuscript, we adapt likelihood-based boosting techniques to the framework of joint models and propose a novel algorithm in order to improve inference where gradient boosting has said limitations. The algorithm represents a novel boosting approach allowing for time-dependent covariates in survival analysis and in addition offers variable selection for joint models, which is evaluated via simulations and real world application modelling CD4 cell counts of patients infected with human immunodeficiency virus (HIV). Overall, the method stands out with respect to variable selection properties and represents an accessible way to boosting for time-dependent covariates in survival analysis, which lays a foundation for all kinds of possible extensions.
31

Fu, Zhiying, Rui Xu, Shiqing Xin, Shuangmin Chen, Changhe Tu, Chenglei Yang, and Lin Lu. "EasyVRModeling." Proceedings of the ACM on Computer Graphics and Interactive Techniques 5, no. 1 (May 4, 2022): 1–14. http://dx.doi.org/10.1145/3522613.

Abstract:
The latest innovations of VR make it possible to construct 3D models in a holographic immersive simulation environment. In this paper, we develop a user-friendly mid-air interactive modeling system named EasyVRModeling. We first prepare a dataset consisting of diverse components and precompute the discrete signed distance function (SDF) for each component. During the modeling phase, users can freely design complicated shapes with a pair of VR controllers. Based on the discrete SDF representation, any CSG-like operation (union, intersect, subtract) can be performed voxel-wise. Throughout the modeling process, we maintain one single dynamic SDF for the whole scene so that the zero-level set surface of the SDF exactly encodes the up-to-date constructed shape. Both SDF fusion and surface extraction are implemented via GPU to allow for smooth user experience. We asked 34 volunteers to create their favorite models using EasyVRModeling. With a simple training process for several minutes, most of them can create a fascinating shape or even a descriptive scene very quickly.
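The voxel-wise CSG operations mentioned in the abstract above have a standard formulation on signed distance grids: union is a point-wise minimum, intersection a point-wise maximum, and subtraction a maximum against the negated operand. The minimal NumPy sketch below illustrates this on analytic sphere and box SDFs; the grid resolution and shapes are illustrative assumptions, not the system's own component library.

```python
# Sketch: voxel-wise CSG on discrete signed distance functions (SDFs).
# Union/intersection/subtraction reduce to element-wise min/max operations.
import numpy as np

n = 64                                   # assumed grid resolution
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")

# Analytic SDFs sampled on the voxel grid (the box SDF is the usual
# max-of-axis-distances approximation, exact inside the box).
sphere = np.sqrt(x**2 + y**2 + z**2) - 0.6
box = np.maximum.reduce([np.abs(x) - 0.5, np.abs(y) - 0.5, np.abs(z) - 0.5])

union = np.minimum(sphere, box)          # inside either shape
intersection = np.maximum(sphere, box)   # inside both shapes
subtraction = np.maximum(box, -sphere)   # box with the sphere carved out

# The constructed surface is the zero-level set of the resulting SDF;
# a quick occupancy count stands in for surface extraction here.
for name, d in [("union", union), ("intersection", intersection),
                ("subtraction", subtraction)]:
    print(name, "occupied voxels:", int((d < 0).sum()))
```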
32

Paulovičová, Lucia. "The Optimization of Mechanized Earthwork Processes." Applied Mechanics and Materials 820 (January 2016): 96–101. http://dx.doi.org/10.4028/www.scientific.net/amm.820.96.

Abstract:
Earthwork processes are among the most costly and time-consuming components of construction today, and they are characterized by the powerful heavy machinery that takes part in them. The current pressure to minimize cost and maximize productivity highlights the need to optimize earthworks. In this paper, the optimization process in the area of earthwork processes is described. Selecting the right types of machines for earthwork and its implements has become very difficult because of the availability of a wide variety of machine models, and therefore a multicriteria method is presented to tackle the problem. This paper describes a methodology for optimizing the earthwork process according to selected optimal criteria. The methodology is focused on the proposal phase of optimization, where the decision maker has to choose the right type of excavator. To overcome the problem of comparing the chosen machines, a mathematical modeling approach leading to multicriteria optimization was adopted to make the step-wise decision. The methodology provides mathematical models by which this problem can be solved.
33

Berger, Moritz, and Matthias Schmid. "Flexible modeling of ratio outcomes in clinical and epidemiological research." Statistical Methods in Medical Research 29, no. 8 (December 9, 2019): 2250–68. http://dx.doi.org/10.1177/0962280219891195.

Abstract:
In medical studies one frequently encounters ratio outcomes. For modeling these right-skewed positive variables, two approaches are in common use. The first one assumes that the outcome follows a normal distribution after transformation (e.g. a log-normal distribution), and the second one assumes gamma distributed outcome values. Classical regression approaches relate the mean ratio to a set of explanatory variables and treat the other parameters of the underlying distribution as nuisance parameters. Here, more flexible extensions for modeling ratio outcomes are proposed that allow to relate all the distribution parameters to explanatory variables. The models are embedded into the framework of generalized additive models for location, scale and shape (GAMLSS), and can be fitted using a component-wise gradient boosting algorithm. The added value of the new modeling approach is demonstrated by the analysis of the LDL/HDL cholesterol ratio, which is a strong predictor of cardiovascular events, using data from the German Chronic Kidney Disease Study. Particularly, our results confirm various important findings on risk factors for cardiovascular events.
34

Schroedter-Homscheidt, M., H. Elbern, and T. Holzer-Popp. "Observation operator for the assimilation of aerosol type resolving satellite measurements into a chemical transport model." Atmospheric Chemistry and Physics Discussions 10, no. 6 (June 7, 2010): 13855–900. http://dx.doi.org/10.5194/acpd-10-13855-2010.

Abstract:
Abstract. Modelling of aerosol particles with chemical transport models is still based mainly on static emission databases while episodic emissions can not be treated sufficiently. To overcome this situation, a coupling of chemical mass concentration modelling with satellite-based measurements relying on physical and optical principles has been developed. This study deals with the observation operator for a component-wise assimilation of satellite measurements. It treats aerosol particles classified into water soluble, water insoluble, soot, sea salt and mineral dust containing aerosol particles in the atmospheric boundary layer as separately assimilated aerosol components. It builds on a mapping of aerosol classes used both in observation and model space taking their optical and chemical properties into account. Refractive indices for primary organic carbon particles, anthropogenic particles, and secondary organic species have been defined based on a literature review. Together with a treatment of different size distributions in observations and model state, this allows transforming the background from mass concentrations into aerosol optical depths. A two-dimensional, variational assimilation is applied for component-wise aerosol optical depths. Error covariance matrices are defined based on a validation against AERONET sun photometer measurements. Analysis fields are assessed threefold: (1) through validation against AERONET especially in Saharan dust outbreak situations, (2) through comparison with the British Black Smoke and Sulphur Dioxide Network for soot-containing particles, and (3) through comparison with measurements of the water soluble components SO4, NH4, and NO3 conducted by the EMEP (European Monitoring and Evaluation Programme) network. Separately, for the water soluble, the soot and the mineral dust aerosol components a bias reduction and subsequent a root mean square error reduction is observed in the analysis for a test period from July to November 2003. Additionally, examples of an improved analysis during wildfire and dust outbreak situations are shown.
35

Schroedter-Homscheidt, M., H. Elbern, and T. Holzer-Popp. "Observation operator for the assimilation of aerosol type resolving satellite measurements into a chemical transport model." Atmospheric Chemistry and Physics 10, no. 21 (November 8, 2010): 10435–52. http://dx.doi.org/10.5194/acp-10-10435-2010.

Abstract:
Abstract. Modelling of aerosol particles with chemical transport models is still based mainly on static emission databases while episodic emissions cannot be treated sufficiently. To overcome this situation, a coupling of chemical mass concentration modelling with satellite-based measurements relying on physical and optical principles has been developed. This study deals with the observation operator for a component-wise assimilation of satellite measurements. It treats aerosol particles classified into water soluble, water insoluble, soot, sea salt and mineral dust containing aerosol particles in the atmospheric boundary layer as separately assimilated aerosol components. It builds on a mapping of aerosol classes used both in observation and model space taking their optical and chemical properties into account. Refractive indices for primary organic carbon particles, anthropogenic particles, and secondary organic species have been defined based on a literature review. Together with a treatment of different size distributions in observations and model state, this allows transforming the background from mass concentrations into aerosol optical depths. A two-dimensional, variational assimilation is applied for component-wise aerosol optical depths. Error covariance matrices are defined based on a validation against AERONET sun photometer measurements. Analysis fields are assessed threefold: (1) through validation against AERONET especially in Saharan dust outbreak situations, (2) through comparison with the British Black Smoke and Sulphur Dioxide Network for soot-containing particles, and (3) through comparison with measurements of the water soluble components SO4, NH4, and NO3 conducted by the EMEP (European Monitoring and Evaluation Programme) network. Separately, for the water soluble, the soot and the mineral dust aerosol components a bias reduction and subsequent a root mean square error reduction is observed in the analysis for a test period from July to November 2003. Additionally, examples of an improved analysis during wildfire and dust outbreak situations are shown.
36

KUMAR, RAKESH, M. K. PATRA, A. THIRUGNANAVEL, BIDYUT C. DEKA, DIBYENDU CHATTERJEE, T. R. BORAH, G. RAJESHA, et al. "Comparative evaluation of different integrated farming system models for small and marginal farmers under the Eastern Himalayas." Indian Journal of Agricultural Sciences 88, no. 11 (November 16, 2018): 1722–29. http://dx.doi.org/10.56093/ijas.v88i11.84913.

Abstract:
Integrated farming system (IFS) ensures efficient utilization of available farm resources and increases unit productivity and income, which are prerequisites for the sustainable livelihood of small and marginal farmers. The present study was conducted to evaluate the performance of four IFS models developed in a ~1.0 acre area at the ICAR Research Complex for North Eastern Hill Region, Nagaland Centre, Jharnapani, Medziphema, Nagaland. The major components in the IFS models were agriculture, horticulture, livestock and subsidiary components like fishery, vermicompost, mushroom and azolla. The field crops, vegetables and livestock components were included in the IFS models considering the topography of the land, soil texture and preferences for tribal livelihoods. The performance in terms of component-wise productivity, profitability, employment generation and sustainability value index (SVI) was evaluated over three consecutive years (2012–2015). The combination of agriculture + horticulture + poultry + fishery (model 4) gave the highest net returns (Rupees 32040), followed by the model with agriculture + horticulture + fishery + piggery + vermicompost (model 3), with net profits of Rupees 21230. In the field crops component, the rice-toria-mungbean cropping sequence was found to be the best in terms of productivity among the tested IFS models, except in model 1. In terms of employment generation, IFS model 4 showed the maximum man-days engagement (395 days), followed by 350 days in model 3. Based on the sustainability value index (SVI) derived from the different IFS models, the maximum SVI was recorded in model 4 (0.71), followed by model 3 (0.47). Therefore, the intensification of IFS models with crop, horticulture, fishery and livestock or poultry components should be popularized among small and marginal farmers on a larger scale, as it provides scope for higher returns, year-round employment and sustainable livelihoods over the long term in the Eastern Himalayas.
37

Altuwaijri, Ghadir Ali, Ghulam Muhammad, Hamdi Altaheri, and Mansour Alsulaiman. "A Multi-Branch Convolutional Neural Network with Squeeze-and-Excitation Attention Blocks for EEG-Based Motor Imagery Signals Classification." Diagnostics 12, no. 4 (April 15, 2022): 995. http://dx.doi.org/10.3390/diagnostics12040995.

Abstract:
Electroencephalography-based motor imagery (EEG-MI) classification is a critical component of the brain-computer interface (BCI), which enables people with physical limitations to communicate with the outside world via assistive technology. Regrettably, EEG decoding is challenging because of the complexity, dynamic nature, and low signal-to-noise ratio of the EEG signal. Developing an end-to-end architecture capable of correctly extracting EEG data’s high-level features remains a difficulty. This study introduces a new model for decoding MI known as a Multi-Branch EEGNet with squeeze-and-excitation blocks (MBEEGSE). By clearly specifying channel interdependencies, a multi-branch CNN model with attention blocks is employed to adaptively change channel-wise feature responses. When compared to existing state-of-the-art EEG motor imagery classification models, the suggested model achieves good accuracy (82.87%) with reduced parameters in the BCI-IV2a motor imagery dataset and (96.15%) in the high gamma dataset.
38

Billman, Dorrit, Richard Catrambone, Jolene Feldman, Zachary Caddick, Sky Eurich, Jonathan Leventhal, Rob Martin, and Kasia Sliwinska. "Training for Generalization: The Role of Integrated Skills and Knowledge in Technology Domains." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 1434–38. http://dx.doi.org/10.1177/1541931218621326.

Abstract:
Training is of little value if trainees can only do the exact tasks on which they were trained, in the identical context of training. Rather, the value typically comes from the ability to apply skills and knowledge across novel variation in contexts and tasks. Training in dynamic technical domains can be particularly challenging because the future tasks can rarely be fully anticipated. We hypothesize that generalization in technology domains will be facilitated when principles (such as device models) are taught in addition to operational procedures, and, particularly, when principles and procedures are integrated. We conducted an exploratory study, including method development, using a micro-world with simulated International Space Station Habitat systems. We compared the effects of Integrated versus Component-wise Training Conditions on generalization to varied tasks, quite different from those in training. Exploratory analyses suggested better generalization and transfer in the Integrated Condition.
39

Korsaga, M., B. Epinat, P. Amram, C. Carignan, P. Adamczyk, and A. Sorgho. "GHASP: an Hα kinematical survey of spiral galaxies – XIII. Distribution of luminous and dark matter in spiral and irregular nearby galaxies using Hα and H I rotation curves and WISE photometry." Monthly Notices of the Royal Astronomical Society 490, no. 3 (September 27, 2019): 2977–3024. http://dx.doi.org/10.1093/mnras/stz2678.

Abstract:
We present the mass models of 31 spiral and irregular nearby galaxies obtained using hybrid rotation curves (RCs) combining high-resolution GHASP Fabry–Perot Hα RCs and extended WHISP H I ones together with 3.4 μm WISE photometry. The aim is to compare the dark matter (DM) halo properties within the optical radius using only Hα RCs with the effect of including and excluding the mass contribution of the neutral gas component, and when using H I or hybrid RCs. Pseudo-isothermal (ISO) core and Navarro–Frenk–White (NFW) cuspy DM halo profiles are used with various fiducial fitting procedures. Mass models using Hα RCs including or excluding the H I gas component provide compatible disc M/L. The correlations between DM halo and baryon parameters do not strongly depend on the RC. Clearly, the differences between the fitting procedures are larger than between the different data sets. Hybrid and H I RCs lead to higher M/L values for both ISO and NFW best-fitting models but lower central densities for ISO haloes and higher concentration for NFW haloes than when using Hα RCs only. The agreement with the mass model parameters deduced using hybrid RCs, considered as a reference, is better for H I than for Hα RCs. ISO density profiles better fit the RCs than the NFW ones, especially when using Hα or hybrid RCs. Halo masses at the optical radius determined using the various data sets are compatible even if they tend to be overestimated with Hα RCs. Hybrid RCs are thus ideal to study the mass distribution within the optical radius.
40

Voigt, W., and D. Zeng. "Solid–liquid equilibria in mixtures of molten salt hydrates for the design of heat storage materials." Pure and Applied Chemistry 74, no. 10 (January 1, 2002): 1909–20. http://dx.doi.org/10.1351/pac200274101909.

Abstract:
Enthalpy of melting can be used to store heat in a simple way for time periods of hours and days. Knowledge of the solid –liquid equilibria represents the most important presumption for systematic evaluations of the suitability of hydrated salt mixtures. In this paper, two approaches for predicting solid–liquid equilibria in ternary or higher component systems are discussed using the limited amount of thermodynamic data available for such systems. One method is based on the modified Brunauer–Emmett–Teller (BET) model as formulated by Ally and Braunstein. In cases of a strong tendency toward complex formation of salt components, the BET model is no longer applicable. Reaction chain models have been used to treat such systems. Thereby, the reaction chain represents a method to correlate step-wise hydration or complexation enthalpies and entropies and, thus, reduce the number of adjustable parameters. Results are discussed for systems containing MgCl2, CaCl2, ZnCl2, and alkali metal chlorides.
41

Benetos, Emmanouil, and Simon Dixon. "A Shift-Invariant Latent Variable Model for Automatic Music Transcription." Computer Music Journal 36, no. 4 (December 2012): 81–94. http://dx.doi.org/10.1162/comj_a_00146.

Abstract:
In this work, a probabilistic model for multiple-instrument automatic music transcription is proposed. The model extends the shift-invariant probabilistic latent component analysis method, which is used for spectrogram factorization. Proposed extensions support the use of multiple spectral templates per pitch and per instrument source, as well as a time-varying pitch contribution for each source. Thus, this method can effectively be used for multiple-instrument automatic transcription. In addition, the shift-invariant aspect of the method can be exploited for detecting tuning changes and frequency modulations, as well as for visualizing pitch content. For note tracking and smoothing, pitch-wise hidden Markov models are used. For training, pitch templates from eight orchestral instruments were extracted, covering their complete note range. The transcription system was tested on multiple-instrument polyphonic recordings from the RWC database, a Disklavier data set, and the MIREX 2007 multi-F0 data set. Results demonstrate that the proposed method outperforms leading approaches from the transcription literature, using several error metrics.
42

Zhao, Qiang, Qizhen Du, Qamar Yasin, Qingqing Li, and Liyun Fu. "Quaternion-based sparse tight frame for multicomponent signal recovery." GEOPHYSICS 85, no. 2 (March 1, 2020): V143–V156. http://dx.doi.org/10.1190/geo2019-0541.1.

Abstract:
Multicomponent noise attenuation often presents more severe processing challenges than scalar data owing to the uncorrelated random noise in each component. Meanwhile, weak signals merged in the noise are easier to degrade using the scalar processing workflows while ignoring their possible supplement from other components. For seismic data preprocessing, transform-based approaches have achieved improved performance on mitigating noise while preserving the signal of interest, especially when using an adaptive basis trained by dictionary-learning methods. We have developed a quaternion-based sparse tight frame (QSTF) with the help of quaternion matrix and tight frame analyses, which can be used to process the vector-valued multicomponent data by following a vectorial processing workflow. The QSTF is conveniently trained through iterative sparsity-based regularization and quaternion singular-value decomposition. In the quaternion-based sparse domain, multicomponent signals are orthogonally represented, which preserve the nonlinear relationships among multicomponent data to a greater extent as compared with the scalar approaches. We test the performance of our method on synthetic and field multicomponent data, in which component-wise, concatenated, and long-vector models of multicomponent data are used as comparisons. Our results indicate that more features, specifically the weak signals merged in the noise, are better recovered using our method than others.
APA, Harvard, Vancouver, ISO, and other styles
43

Zabihi, Samad, Mehran Yazdi, and Eghbal Mansoori. "Deep Global-Local Gazing: Including Global Scene Properties in Local Saliency Computation." Mobile Information Systems 2022 (August 25, 2022): 1–15. http://dx.doi.org/10.1155/2022/3279325.

Full text
Abstract:
Visual saliency models imitate the attentive mechanism of the human visual system (HVS) to detect objects that stand out from their neighbors in a scene. Some biological phenomena in the HVS, such as contextual cueing effects, suggest that the contextual information of the whole scene guides the attentive mechanism. The saliency value of each image patch is influenced by its local visual features as well as by the contextual information of the whole scene. Modern saliency models are based on deep convolutional neural networks. Because convolutional operators act locally and use weight sharing, such networks inherently have difficulty capturing global and location-dependent features. In addition, these models calculate the saliency value pixel-wise using local features. It is therefore necessary to provide global features along with local features. In this regard, we propose two approaches for capturing the contextual information of the scene. In our first method, we introduce a shift-variant fully connected component to capture global and location-dependent information. In our second method, instead of using the native CNN of our base model, we use a VGGNet to capture the global and contextual information of the scene. To show the effectiveness of our methods, we use them to extend the SAM-ResNet saliency model. Four challenging saliency benchmark datasets were used to evaluate the proposed approaches. The experimental results showed that our methods outperform existing state-of-the-art saliency prediction models.
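As a rough illustration of the first idea described above, the following sketch combines local convolutional features with a shift-variant fully connected branch that summarises the whole feature map before pixel-wise saliency prediction. All layer sizes, and indeed the architecture itself, are illustrative assumptions rather than the configuration used in the cited model.

import torch
import torch.nn as nn

class GlobalLocalSaliency(nn.Module):
    """Schematic saliency head: local conv features plus a shift-variant
    fully connected branch that summarises the whole (pooled) feature map."""

    def __init__(self, in_ch=3, feat_ch=32, global_dim=64, grid=16):
        super().__init__()
        self.grid = grid
        self.local = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # Fully connected layer over the flattened, pooled feature map:
        # its weights depend on spatial position, so the mapping is shift-variant.
        self.global_fc = nn.Linear(feat_ch * grid * grid, global_dim)
        self.head = nn.Conv2d(feat_ch + global_dim, 1, 1)

    def forward(self, x):
        f = self.local(x)                                      # (B, C, H, W)
        pooled = nn.functional.adaptive_avg_pool2d(f, self.grid)
        g = self.global_fc(pooled.flatten(1))                  # (B, global_dim)
        # Broadcast the global context vector to every spatial location
        g_map = g[:, :, None, None].expand(-1, -1, f.shape[2], f.shape[3])
        sal = self.head(torch.cat([f, g_map], dim=1))          # (B, 1, H, W)
        return torch.sigmoid(sal)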
APA, Harvard, Vancouver, ISO, and other styles
44

Strömer, Annika, Christian Staerk, Nadja Klein, Leonie Weinhold, Stephanie Titze, and Andreas Mayr. "Deselection of base-learners for statistical boosting—with an application to distributional regression." Statistical Methods in Medical Research 31, no. 2 (December 9, 2021): 207–24. http://dx.doi.org/10.1177/09622802211051088.

Full text
Abstract:
We present a new procedure for enhanced variable selection in component-wise gradient boosting. Statistical boosting is a computational approach that emerged from machine learning and makes it possible to fit regression models in the presence of high-dimensional data. Furthermore, the algorithm can lead to data-driven variable selection. In practice, however, the final models tend to include too many variables in some situations. This occurs particularly for low-dimensional data, where the number of candidate variables is small relative to the sample size and boosting overfits only slowly. As a result, more variables are included in the final model without altering the prediction accuracy. Many of these false positives enter with a small coefficient and therefore have little impact, but they lead to a larger model. We try to overcome this issue by giving the algorithm the chance to deselect base-learners of minor importance. We analyze the impact of the new approach on variable selection and prediction performance in comparison to alternative methods, including boosting with earlier stopping as well as twin boosting. We illustrate our approach with data from an ongoing cohort study of chronic kidney disease patients, in which the most influential predictors of a health-related quality-of-life measure are selected in a distributional regression approach based on beta regression.
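A minimal sketch of component-wise gradient boosting with a subsequent deselection step is given below. The simple linear base-learners, the risk-reduction bookkeeping, and the deselection threshold are simplifications assumed for illustration, not the exact procedure of the cited paper; there, boosting is rerun using only the retained base-learners after deselection.

import numpy as np

def componentwise_l2_boost(X, y, n_iter=200, nu=0.1):
    """Component-wise L2 gradient boosting with simple linear base-learners.
    Returns the offset, the coefficient vector, and the empirical risk
    reduction attributed to each covariate (a simplified importance measure)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    coef = np.zeros(p)
    offset = y.mean()
    resid = y - offset
    risk_reduction = np.zeros(p)
    for _ in range(n_iter):
        # Fit every base-learner to the current residuals (negative gradient)
        betas = Xc.T @ resid / ((Xc ** 2).sum(axis=0) + 1e-12)
        rss = ((resid[:, None] - Xc * betas) ** 2).sum(axis=0)
        j = np.argmin(rss)                      # best-fitting component this round
        old_risk = (resid ** 2).mean()
        coef[j] += nu * betas[j]
        resid -= nu * betas[j] * Xc[:, j]
        risk_reduction[j] += old_risk - (resid ** 2).mean()
    return offset, coef, risk_reduction

def deselect(risk_reduction, tau=0.01):
    """Keep only base-learners whose share of the total risk reduction
    exceeds the threshold tau (simplified deselection step)."""
    share = risk_reduction / risk_reduction.sum()
    return np.where(share > tau)[0]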
APA, Harvard, Vancouver, ISO, and other styles
45

Zhang, Liu, Zhenhong Rao, and Haiyan Ji. "NIR Hyperspectral Imaging Technology Combined with Multivariate Methods to Study the Residues of Different Concentrations of Omethoate on Wheat Grain Surface." Sensors 19, no. 14 (July 17, 2019): 3147. http://dx.doi.org/10.3390/s19143147.

Full text
Abstract:
In this study, a hyperspectral imaging system covering 866.4–1701.0 nm was combined with multivariate methods to identify wheat kernels with different concentrations of omethoate on the surface. To obtain the optimal model combination, three preprocessing methods (standard normal variate (SNV), Savitzky–Golay first derivative (SG1), and multivariate scatter correction (MSC)), three feature extraction algorithms (successive projections algorithm (SPA), random frog (RF), and neighborhood component analysis (NCA)), and three classifier models (decision tree (DT), k-nearest neighbor (KNN), and support vector machine (SVM)) were compared. Firstly, modeling with the full wavelength range showed that the spectral data after MSC processing performed best in all three classifier models. Secondly, the three feature extraction algorithms were used to extract feature wavelengths from the MSC-processed data, and models were built on these feature wavelengths. As a result, the MSC–NCA–SVM model performed best and was selected as the final model. Finally, to verify the reliability of the selected model, the hyperspectral images were fed into the MSC–NCA–SVM model and the object-wise method was used to visualize the classification. The overall classification accuracy for the four types of wheat kernels reached 98.75%, which indicates that the selected model is reliable.
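The preprocessing-selection-classification pipeline described here can be sketched as follows. The MSC implementation (scatter correction against the mean spectrum), the use of a simple univariate selector in place of NCA, and the SVM hyperparameters are illustrative assumptions, not the settings of the cited study.

import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

def msc(spectra, reference=None):
    """Scatter correction: regress each spectrum on a reference (here the
    mean spectrum of the batch) and remove the fitted offset and slope.
    In practice the reference should be fixed on the training set."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)
        corrected[i] = (s - offset) / slope
    return corrected

# Stand-in pipeline: MSC-style preprocessing, univariate wavelength selection
# (used here instead of NCA purely for brevity), and an SVM classifier.
pipeline = make_pipeline(
    FunctionTransformer(msc),
    SelectKBest(f_classif, k=20),   # k is an illustrative choice
    SVC(kernel="rbf", C=10.0),
)
# Usage: pipeline.fit(X_train, y_train); pipeline.predict(X_test)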
APA, Harvard, Vancouver, ISO, and other styles
46

Yu, Wennian, Chris Mechefske, and Il Yong Kim. "Identifying optimal features for cutting tool condition monitoring using recurrent neural networks." Advances in Mechanical Engineering 12, no. 12 (December 2020): 168781402098438. http://dx.doi.org/10.1177/1687814020984388.

Full text
Abstract:
Identification of optimal features is necessary for decision-making models such as artificial neural networks to achieve effective and robust on-line monitoring of cutting tool condition. Most feature selection strategies proposed in the literature are intended for pattern recognition or classification problems and are not suitable for prognostic problems. This paper applies three parameter suitability metrics introduced in previous similar studies for failure-time analysis and modifies them for failure-process analysis, which allows for the unit-wise variation of a component within a population. The suitability of a feature for cutting tool condition monitoring is determined by its fitness value, calculated from the three metrics. Two types of recurrent neural network are employed to analyze the prognostic ability of the features extracted from multi-sensor signals (acoustic emission, motor current, and vibration) collected from a milling machine under various operating conditions. The analysis results confirm that the fitness value of a feature reflects its prognostic ability. It is found that adding more features that share largely redundant information does not improve prediction performance but does increase the burden on the decision-making models. In addition, adding features with low fitness values may even degrade the predictions.
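Typical parameter suitability metrics in the prognostics literature include monotonicity and trendability; the abstract does not name the three metrics used in the cited paper, so the following sketch computes two common ones and a weighted fitness value purely as an illustration, with the combination weights being arbitrary assumptions.

import numpy as np

def monotonicity(feature_runs):
    """Absolute fraction of same-sign differences, averaged over
    run-to-failure histories; 1 means strictly monotonic, 0 means no trend."""
    scores = []
    for f in feature_runs:
        d = np.diff(f)
        scores.append(abs(np.sum(d > 0) - np.sum(d < 0)) / max(len(d), 1))
    return float(np.mean(scores))

def trendability(feature_runs):
    """Absolute correlation of the feature with time, averaged over histories."""
    scores = []
    for f in feature_runs:
        t = np.arange(len(f))
        scores.append(abs(np.corrcoef(f, t)[0, 1]))
    return float(np.mean(scores))

def fitness(feature_runs, weights=(0.5, 0.5)):
    """Illustrative combined fitness value for ranking candidate features."""
    return (weights[0] * monotonicity(feature_runs)
            + weights[1] * trendability(feature_runs))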
APA, Harvard, Vancouver, ISO, and other styles
47

Sarsito, Dina Anggreni, and Brian Bramanto. "DIGITAL ELEVATION MODEL ALTERNATIVES ASSESSMENT FOR DEFORMATION ANALYSIS PURPOSES USING GNSS AND INSAR." Jurnal Meteorologi dan Geofisika 23, no. 1 (February 18, 2022): 27. http://dx.doi.org/10.31172/jmg.v23i1.845.

Full text
Abstract:
The Digital Elevation Model (DEM) is the starting point for analyses that describe deformation patterns of the Earth's surface. Deformation estimates based on point-wise GPS and InSAR data with better spatial resolution must be defined in a reference frame that reflects the deformation of the real physical world, e.g., orthometric height for the vertical component. Therefore, this study provides alternative DEM models based on a suitable combination of the global geopotential model Earth Geopotential Model 2008 (EGM2008) and global terrain models, giving position changes with respect to orthometric height. The alternative DEM models are (i) the global elevation model ETOPO1 (DEM1), (ii) the modified global elevation model SRTM30_PLUS (DEM2), and (iii) the regional elevation model DEMNAS (DEM3). These alternative models agree with each other over land areas, with mean differences below 1 meter. Over ocean areas, DEM1 and DEM2 show apparent differences due to the different types of data used; a similar assessment could not be performed for DEM3, which covers only land areas. Additionally, we compared the orthometric heights from these terrain models with leveling observations at coinciding locations. DEM3 achieves the highest accuracy, with an estimated standard deviation of 11.2745 meters, followed by DEM2 and DEM1 with standard deviations of 29.4498 and 37.6872 meters, respectively. We found that these models can be used for initial position determination for horizontal and vertical deformation analysis.
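A minimal sketch of the kind of grid comparison and leveling-based accuracy assessment described above is given below; the function names and the simple mean/standard-deviation statistics are illustrative assumptions, not the cited study's processing chain.

import numpy as np

def dem_comparison(dem_a, dem_b, mask=None):
    """Mean difference and standard deviation between two co-registered
    elevation grids, optionally restricted to a (land) mask."""
    diff = dem_a - dem_b
    if mask is not None:
        diff = diff[mask]
    return float(np.nanmean(diff)), float(np.nanstd(diff))

def accuracy_vs_leveling(dem_heights, leveling_heights):
    """Standard deviation of DEM heights against leveling benchmarks
    at coinciding locations."""
    residuals = np.asarray(dem_heights) - np.asarray(leveling_heights)
    return float(np.std(residuals))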
APA, Harvard, Vancouver, ISO, and other styles
48

Suchting, Robert, Michael S. Businelle, Stephen W. Hwang, Nikhil S. Padhye, Yijiong Yang, and Diane M. Santa Maria. "Predicting Daily Sheltering Arrangements among Youth Experiencing Homelessness Using Diary Measurements Collected by Ecological Momentary Assessment." International Journal of Environmental Research and Public Health 17, no. 18 (September 20, 2020): 6873. http://dx.doi.org/10.3390/ijerph17186873.

Full text
Abstract:
Youths experiencing homelessness (YEH) often cycle between various sheltering arrangements, including nights spent on the streets, in shelters, and with others. Few studies have explored patterns of daily sheltering over time. A total of 66 participants completed 724 ecological momentary assessments that recorded daily sleeping arrangements. Analyses applied a hypothesis-generating machine learning algorithm (component-wise gradient boosting) to build interpretable models that select only the best predictors of daily sheltering from a large set of 92 variables while accounting for the correlated nature of the data. Sheltering was examined as a three-category outcome comparing nights spent literally homeless, unstably housed, or at a shelter. The final model retained 15 predictors, including specific stressors (e.g., not having a place to stay, parenting, and hunger), discrimination (by a friend or unspecified other; due to race or homelessness), being arrested, and synthetic cannabinoid use (a.k.a. "kush"). The final model demonstrated success in classifying the categorical outcome. These results have implications for developing just-in-time adaptive interventions to improve the lives of YEH.
APA, Harvard, Vancouver, ISO, and other styles
49

K.N.S. Perera, H.M.N.B. Herath, D.P.S.T.G. Attanayaka, and S.A.C.N. Perera. "Assessment of the Diversity in Fruit Yield and Fruit Components among Sri Lanka Tall Coconut Accessions Conserved Ex-Situ." CORD 31, no. 2 (October 1, 2015): 9. http://dx.doi.org/10.37833/cord.v31i2.60.

Full text
Abstract:
Characterization of conserved coconut germplasm has been undertaken globally to identify important features of different accessions so that they can be used effectively in coconut breeding. One hundred and fifty-seven accessions comprising local and exotic material have been conserved in ex-situ field genebanks of the Coconut Research Institute in Sri Lanka. The objective of this study was to quantitatively characterize nut yield and fruit component weights among Sri Lanka Tall (Typica) coconut accessions. Twenty local tall coconut accessions were characterized for nut yield and fruit components following the Bioversity International descriptors for coconut. Bunch-wise nut yield was recorded for all coconut phenotypes in the six most mature bunches of 25 randomly selected palms from each accession. Sampled nuts were scored for the weights of the fresh nut, husked nut, split nut, and kernel, and the weights of the husk, water, and shell of each nut were derived from the scored data. Analysis of variance by the general linear models procedure and mean separation by Duncan's multiple range test were performed in SAS v8, and principal component analysis and cluster analysis using squared Euclidean distances were performed in Minitab v17. The general linear models procedure revealed significant differences in nut yield and all fruit components at the 5% probability level. Walahapitiya recorded the highest average nut yield, followed by Razeena with statistically equal performance. Clovis recorded the highest values for most fruit component parameters, followed by the accession Margaret, the two grouping together in the dendrogram and the scatter plot. The highest per-nut kernel producer, Clovis, was followed by Margaret with statistically equal performance; this is important because kernel is the main economically important component, followed by the husk. The results revealed no significant correlation between nut yield and the fruit components in the tall accessions, indicating the importance of treating these two sets of parameters separately when formulating germplasm conservation strategies.
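The multivariate part of the analysis (principal component analysis followed by hierarchical clustering on squared Euclidean distances) can be sketched as follows. The standardisation, the linkage method, and the number of clusters are illustrative choices, not the settings of the cited study, which was carried out in Minitab.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def group_accessions(trait_table, n_components=2, n_clusters=3):
    """PCA on standardised trait means per accession, then hierarchical
    clustering on squared Euclidean distances between accessions."""
    X = StandardScaler().fit_transform(trait_table)
    scores = PCA(n_components=n_components).fit_transform(X)
    dist = pdist(X, metric="sqeuclidean")          # squared Euclidean distances
    tree = linkage(dist, method="average")          # linkage choice is illustrative
    clusters = fcluster(tree, t=n_clusters, criterion="maxclust")
    return scores, clusters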
APA, Harvard, Vancouver, ISO, and other styles
50

Berg, Christoph, Nina Ihling, Maurice Finger, Olivier Paquet-Durand, Bernd Hitzmann, and Jochen Büchs. "Online 2D Fluorescence Monitoring in Microtiter Plates Allows Prediction of Cultivation Parameters and Considerable Reduction in Sampling Efforts for Parallel Cultivations of Hansenula polymorpha." Bioengineering 9, no. 9 (September 4, 2022): 438. http://dx.doi.org/10.3390/bioengineering9090438.

Full text
Abstract:
Multi-wavelength (2D) fluorescence spectroscopy represents an important step towards exploiting the monitoring potential of microtiter plates (MTPs) during early-stage bioprocess development. In combination with multivariate data analysis (MVDA), important process information can be obtained while repetitive, cost-intensive sample analytics is reduced. This study provides a comprehensive experimental dataset of online and offline measurements for batch cultures of Hansenula polymorpha. In a first step, principal component analysis (PCA) was used to assess spectral data quality. Secondly, partial least-squares (PLS) regression models were generated based on the spectral data of two cultivation conditions and offline samples for glycerol, cell dry weight, and pH value. In this way, the time-wise resolution increased 12-fold compared with the offline sampling interval of 6 h. The PLS models were validated using offline samples taken at a shorter sampling interval. Very good model transferability was shown when the PLS models were applied to the spectral data of cultures with six different initial cultivation conditions. For all predicted variables, a relative root-mean-square error (RMSE) below 6% was obtained. Based on these findings, the initial experimental strategy was re-evaluated and a more practical approach with minimal sampling effort and elevated experimental throughput was proposed. In conclusion, the study underlines the high potential of multi-wavelength (2D) fluorescence spectroscopy and provides an evaluation workflow for PLS modelling in microtiter plates.
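A minimal sketch of the PLS modelling step, predicting an offline cultivation parameter from unfolded 2D fluorescence spectra and reporting a relative RMSE, is given below. The number of latent variables and the normalisation of the RMSE by the training range are illustrative assumptions, not the cited study's settings.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

def fit_pls(spectra_train, y_train, spectra_test, y_test, n_components=5):
    """PLS regression from unfolded 2D fluorescence spectra to an offline
    cultivation parameter (e.g. glycerol concentration)."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(spectra_train, y_train)
    y_pred = pls.predict(spectra_test).ravel()
    rmse = np.sqrt(mean_squared_error(y_test, y_pred))
    # Relative RMSE expressed as a percentage of the training-set range
    relative_rmse = 100 * rmse / (np.max(y_train) - np.min(y_train))
    return pls, rmse, relative_rmse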
APA, Harvard, Vancouver, ISO, and other styles