
Journal articles on the topic 'Latent code optimization'



Consult the top 50 journal articles for your research on the topic 'Latent code optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Taicai, Yue Duan, Dong Li, Lei Qi, Yinghuan Shi, and Yang Gao. "PG-LBO: Enhancing High-Dimensional Bayesian Optimization with Pseudo-Label and Gaussian Process Guidance." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11381–89. http://dx.doi.org/10.1609/aaai.v38i10.29018.

Abstract:
Variational Autoencoder-based Bayesian Optimization (VAE-BO) has demonstrated excellent performance in addressing high-dimensional structured optimization problems. However, current mainstream methods overlook the potential of utilizing a pool of unlabeled data to construct the latent space, while only concentrating on designing sophisticated models to leverage the labeled data. Despite their effective usage of labeled data, these methods often require extra network structures and additional procedures, resulting in computational inefficiency. To address this issue, we propose a novel method to effectively utilize unlabeled data with the guidance of labeled data. Specifically, we tailor the pseudo-labeling technique from semi-supervised learning to explicitly reveal the relative magnitudes of optimization objective values hidden within the unlabeled data. Based on this technique, we assign appropriate training weights to unlabeled data to enhance the construction of a discriminative latent space. Furthermore, we treat the VAE encoder and the Gaussian Process (GP) in Bayesian optimization as a unified deep kernel learning process, allowing the direct utilization of labeled data, which we term Gaussian Process guidance. This directly and effectively integrates the goal of improving GP accuracy into the VAE training, thereby guiding the construction of the latent space. Extensive experiments demonstrate that our proposed method outperforms existing VAE-BO algorithms in various optimization scenarios. Our code will be published at https://github.com/TaicaiChen/PG-LBO.
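For orientation, the outer loop that VAE-BO methods build on is ordinary Bayesian optimization run in the latent space of a pre-trained autoencoder. The sketch below is a generic illustration of that loop only, not the PG-LBO pseudo-labeling or GP-guidance steps; the `decode` function, the black-box objective `f`, and the latent search box are assumed placeholders.

```python
# Minimal latent-space Bayesian optimization loop (illustrative only, not PG-LBO).
# Assumes a pre-trained VAE exposing decode(z) and a black-box objective f
# evaluated on decoded candidates.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best):
    # EI acquisition for maximization; guard against zero predictive variance.
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def latent_bo(f, decode, z_init, y_init, n_iter=20, n_cand=512, dim=8):
    Z, y = list(z_init), list(y_init)
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(np.array(Z), np.array(y))
        cand = np.random.uniform(-3, 3, size=(n_cand, dim))  # search box in latent space
        mu, sigma = gp.predict(cand, return_std=True)
        z_next = cand[np.argmax(expected_improvement(mu, sigma, max(y)))]
        Z.append(z_next)
        y.append(f(decode(z_next)))                          # evaluate decoded candidate
    best = int(np.argmax(y))
    return Z[best], y[best]
```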
2

Yuan, Xue, Guanjun Lin, Yonghang Tai, and Jun Zhang. "Deep Neural Embedding for Software Vulnerability Discovery: Comparison and Optimization." Security and Communication Networks 2022 (January 18, 2022): 1–12. http://dx.doi.org/10.1155/2022/5203217.

Abstract:
Due to multitudinous vulnerabilities in sophisticated software programs, the detection performance of existing approaches requires further improvement. Multiple vulnerability detection approaches have been proposed to aid code inspection. Among them, there is a line of approaches that apply deep learning (DL) techniques and achieve promising results. This paper attempts to utilize CodeBERT, a deep contextualized model, as an embedding solution to facilitate the detection of vulnerabilities in C open-source projects. The application of CodeBERT for code analysis allows the rich and latent patterns within software code to be revealed, having the potential to facilitate various downstream tasks such as the detection of software vulnerability. CodeBERT inherits the architecture of BERT, providing a stacked transformer encoder in a bidirectional structure. This facilitates the learning of vulnerable code patterns, which requires long-range dependency analysis. Additionally, the multihead attention mechanism of the transformer enables multiple key variables of a data flow to be focused on, which is crucial for analyzing and tracing potentially vulnerable data flows, eventually resulting in optimized detection performance. To evaluate the effectiveness of the proposed CodeBERT-based embedding solution, four mainstream embedding methods are compared for generating software code embeddings, including Word2Vec, GloVe, and FastText. Experimental results show that CodeBERT-based embedding outperforms other embedding models on the downstream vulnerability detection tasks. To further boost performance, we propose to include synthetic vulnerable functions and perform fine-tuning on synthetic and real-world data to facilitate the model's learning of C-related vulnerable code patterns. Meanwhile, we explore the suitable configuration of CodeBERT. The evaluation results show that the model with the new parameters outperforms some state-of-the-art detection methods on our dataset.
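As a concrete picture of using CodeBERT as an embedding backbone, the sketch below pools the last hidden states of the public `microsoft/codebert-base` checkpoint into one vector per function and feeds them to a linear classifier. Mean pooling and logistic regression are assumptions for illustration; the paper's exact pooling, fine-tuning setup, and classifier are not specified in the abstract.

```python
# Sketch: embed C functions with CodeBERT and train a simple classifier.
# Mean pooling and logistic regression are illustrative choices, not the
# paper's exact pipeline.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

def embed(code_snippets):
    with torch.no_grad():
        batch = tokenizer(code_snippets, padding=True, truncation=True,
                          max_length=512, return_tensors="pt")
        hidden = model(**batch).last_hidden_state          # (batch, seq, 768)
        mask = batch["attention_mask"].unsqueeze(-1)
        return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# X_train: list of C function bodies, y_train: 1 = vulnerable, 0 = benign
# clf = LogisticRegression(max_iter=1000).fit(embed(X_train), y_train)
```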
3

Sankar, E., L. Karthik, and Kuppa Venkatasriram Sastry. "Quantization of Product using Collaborative Filtering Based on Cluster." International Journal for Research in Applied Science and Engineering Technology 10, no. 3 (March 31, 2022): 876–82. http://dx.doi.org/10.22214/ijraset.2022.40753.

Abstract:
Because of strict response-time constraints, the efficiency of top-k recommendation is crucial for real-world recommender systems. Locality-sensitive hashing and index-based methods usually store both index data and item feature vectors in main memory, so they can handle only a limited number of items. Hashing-based recommendation methods enjoy low memory cost and fast retrieval of items but suffer from large accuracy degradation. In this paper, we propose product Quantized Collaborative Filtering (pQCF) for a better trade-off between efficiency and accuracy. pQCF decomposes a joint latent space of users and items into a Cartesian product of low-dimensional subspaces and learns a clustered representation within each subspace. A latent factor is then represented by a short code, which is composed of subspace cluster indexes. A user's preference for an item can be efficiently calculated via table lookup. We then develop block coordinate descent for efficient optimization and reveal that the learning of latent factors is seamlessly integrated with quantization. We also propose a similarity method that can exploit multiple correlation structures between users who express their preferences for objects that are likely to have similar properties. For this, we use a clustering method to find groups of similar objects. Index Terms: Product Quantization, Clustering, Product Search, Collaborative Filtering
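The table-lookup scoring the abstract mentions can be pictured in a few lines of NumPy. This is a generic product-quantization sketch under assumed shapes (M subspaces, K centroids per subspace), not the pQCF learning algorithm itself:

```python
# Generic product-quantization scoring sketch (illustration of table lookup,
# not the pQCF training procedure). The joint latent space is split into M
# subspaces, each with K centroids; an item is stored as M small integer indexes.
import numpy as np

def build_lookup(user_vec, codebooks):
    # codebooks: (M, K, d_sub); user_vec: (M * d_sub,)
    M, K, d_sub = codebooks.shape
    user_sub = user_vec.reshape(M, d_sub)
    # table[m, k] = <user sub-vector m, centroid k of subspace m>
    return np.einsum("md,mkd->mk", user_sub, codebooks)

def score_items(table, item_codes):
    # item_codes: (n_items, M) integer cluster indexes; score by table lookup only.
    M = table.shape[0]
    return table[np.arange(M), item_codes].sum(axis=1)

# Example with random data:
rng = np.random.default_rng(0)
codebooks = rng.normal(size=(4, 256, 8))            # M=4 subspaces, K=256 centroids
item_codes = rng.integers(0, 256, size=(1000, 4))   # 1000 items stored as short codes
table = build_lookup(rng.normal(size=32), codebooks)
top10 = np.argsort(-score_items(table, item_codes))[:10]
```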
4

Chennappan, R., and Vidyaa Thulasiraman. "Multicriteria Cuckoo search optimized latent Dirichlet allocation based Ruzchika indexive regression for software quality management." Indonesian Journal of Electrical Engineering and Computer Science 24, no. 3 (December 1, 2021): 1804. http://dx.doi.org/10.11591/ijeecs.v24.i3.pp1804-1813.

Abstract:
Software quality management is highly significant for ensuring the quality and reviewing the reliability of software products. To improve software quality by predicting software failures and enhancing scalability, in this paper we present a novel reinforced Cuckoo search optimized latent Dirichlet allocation based Ruzchika indexive regression (RCSOLDA-RIR) technique. First, multicriteria reinforced Cuckoo search optimization is used to perform test case selection and find the optimal solution, considering multiple criteria and selecting the optimal test cases for testing the software quality. Next, the generative latent Dirichlet allocation model is applied to predict the software failure density with the selected optimal test cases in minimum time. Finally, Ruzchika indexive regression is applied to measure the similarity between preceding versions and the new version of software products. Based on this similarity estimation, the software failure density of the new version is also predicted. In this way, software error prediction is performed effectively, improving the reliability of software code, and the service provisioning time between software versions in software systems is also minimized. An experimental assessment shows that the RCSOLDA-RIR technique achieves better reliability and scalability than existing methods.
5

Kim, Ha Young, and Dongsup Kim. "Prediction of mutation effects using a deep temporal convolutional network." Bioinformatics 36, no. 7 (November 20, 2019): 2047–52. http://dx.doi.org/10.1093/bioinformatics/btz873.

Abstract:
Motivation: Accurate prediction of the effects of genetic variation is a major goal in biological research. Towards this goal, numerous machine learning models have been developed to learn information from evolutionary sequence data. The most effective method so far is a deep generative model based on the variational autoencoder (VAE) that models the distributions using a latent variable. In this study, we propose a deep autoregressive generative model named mutationTCN, which employs dilated causal convolutions and an attention mechanism for the modeling of inter-residue correlations in a biological sequence. Results: We show that this model is competitive with the VAE model when tested against a set of 42 high-throughput mutation scan experiments, with a mean improvement in Spearman rank correlation of ∼0.023. In particular, our model can more efficiently capture information from multiple sequence alignments with a lower effective number of sequences, such as viral sequence families, compared with the latent variable model. Also, we extend this architecture to a semi-supervised learning framework, which shows high prediction accuracy. We show that our model enables a direct optimization of the data likelihood and allows for a simple and stable training process. Availability and implementation: Source code is available at https://github.com/ha01994/mutationTCN. Supplementary information: Supplementary data are available at Bioinformatics online.
6

Balelli, Irene, Santiago Silva, and Marco Lorenzi. "A Differentially Private Probabilistic Framework for Modeling the Variability Across Federated Datasets of Heterogeneous Multi-View Observations." Machine Learning for Biomedical Imaging 1, IPMI 2021 (April 22, 2022): 1–36. http://dx.doi.org/10.59275/j.melba.2022-7175.

Abstract:
We propose a novel federated learning paradigm to model data variability among heterogeneous clients in multi-centric studies. Our method is expressed through a hierarchical Bayesian latent variable model, where client-specific parameters are assumed to be realizations from a global distribution at the master level, which is in turn estimated to account for data bias and variability across clients. We show that our framework can be effectively optimized through expectation maximization (EM) over the latent master distribution and the clients' parameters. We also introduce formal differential privacy (DP) guarantees compatible with our EM optimization scheme. We tested our method on the analysis of multi-modal medical imaging data and clinical scores from distributed clinical datasets of patients affected by Alzheimer's disease. We demonstrate that our method is robust when data is distributed in either iid or non-iid manners, even when local parameter perturbation is included to provide DP guarantees. Our approach allows us to quantify the variability of data, views, and centers, while guaranteeing high-quality data reconstruction as compared to state-of-the-art autoencoding models and federated learning schemes. The code is available at https://gitlab.inria.fr/epione/federated-multi-views-ppca.
7

Zhang, Yujia, Lai-Man Po, Xuyuan Xu, Mengyang Liu, Yexin Wang, Weifeng Ou, Yuzhi Zhao, and Wing-Yin Yu. "Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3380–89. http://dx.doi.org/10.1609/aaai.v36i3.20248.

Abstract:
Spatio-temporal representation learning is critical for video self-supervised representation. Recent approaches mainly use contrastive learning and pretext tasks. However, these approaches learn representation by discriminating sampled instances via feature similarity in the latent space while ignoring the intermediate state of the learned representations, which limits the overall performance. In this work, taking into account the degree of similarity of sampled instances as the intermediate state, we propose a novel pretext task - spatio-temporal overlap rate (STOR) prediction. It stems from the observation that humans are capable of discriminating the overlap rates of videos in space and time. This task encourages the model to discriminate the STOR of two generated samples to learn the representations. Moreover, we employ a joint optimization combining pretext tasks with contrastive learning to further enhance the spatio-temporal representation learning. We also study the mutual influence of each component in the proposed scheme. Extensive experiments demonstrate that our proposed STOR task can favor both contrastive learning and pretext tasks and the joint optimization scheme can significantly improve the spatio-temporal representation in video understanding. The code is available at https://github.com/Katou2/CSTP.
8

Esztergár-Kiss, Domokos. "Horizon 2020 Project Analysis by Using Topic Modelling Techniques in the Field of Transport." Transport and Telecommunication Journal 25, no. 3 (June 15, 2024): 266–77. http://dx.doi.org/10.2478/ttj-2024-0019.

Abstract:
Understanding the main research directions in transport is crucial to provide useful and relevant insights. The analysis of Horizon 2020, the largest research and innovation framework, has already been carried out in a few publications, but rarely for the field of transport. Thus, this article is devoted to filling this gap by introducing a novel application of topic modelling techniques, specifically Latent Dirichlet Allocation (LDA), to Horizon 2020 transport projects. The method uses the Mallet software with pre-examined code optimizations. As the first step, a corpus is created by collecting 310 project abstracts; afterward, the texts of the abstracts are prepared for the LDA analysis by introducing stop words, optimization criteria, the number of words per topic, and the number of topics. The study successfully uncovers the following five main underlying topics: road and traffic safety, aviation and aircraft, mobility and urban transport, maritime industry and shipping, and open and real-time data in transport. Besides that, the main trends in transport are identified based on the frequency of words and their occurrence in the corpus. The applied approach maximizes the added value of the Horizon 2020 initiatives by revealing insights that may be overlooked using traditional analysis methods.
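The corpus-preparation and topic-fitting steps described here follow the usual LDA workflow. The sketch below uses gensim's in-process LdaModel instead of Mallet (an assumed substitution for brevity); the stop-word list, sample abstracts, and number of topics are placeholders.

```python
# Sketch of LDA topic modelling over project abstracts. Uses gensim's LdaModel
# rather than Mallet (a substitution for illustration); stop-word list and
# number of topics are placeholders.
from gensim import corpora, models

stop_words = {"the", "and", "of", "to", "in", "for", "a", "on", "with"}

def preprocess(doc):
    return [w for w in doc.lower().split() if w.isalpha() and w not in stop_words]

abstracts = ["Road safety and traffic management for urban mobility projects",
             "Aircraft noise reduction and aviation emissions research"]   # 310 in the study
texts = [preprocess(d) for d in abstracts]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=5,
                      passes=10, random_state=0)
for topic_id, words in lda.print_topics(num_words=8):
    print(topic_id, words)
```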
9

G, Ranganathan, and Bindhu V. "Learned Image Compression with Discretized Gaussian Mixture Likelihoods and Attention Modules." December 2020 2, no. 4 (February 23, 2021): 162–67. http://dx.doi.org/10.36548/jeea.2020.4.004.

Abstract:
There have been many compression standards developed during the past few decades, and technological advances have resulted in many methodologies with promising results. As far as the PSNR metric is concerned, there is a performance gap between reigning compression standards and learned compression algorithms. Based on this research, we experimented with an accurate entropy model on learned compression algorithms to determine the rate-distortion performance. In this paper, a discretized Gaussian mixture likelihood is proposed for the latent codes in order to attain a more flexible and accurate entropy model. Moreover, we have also enhanced the performance of the work by introducing recent attention modules into the network architecture. Simulation results indicate that, when compared with previously existing techniques on high-resolution and Kodak datasets, the proposed work achieves better rate-distortion performance. When MS-SSIM is used for optimization, our work generates more visually pleasing images.
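The central quantity in such an entropy model is the probability mass that the mixture assigns to each integer-quantized latent value, obtained by differencing the Gaussian CDF over a unit-width bin. The sketch below shows only that likelihood computation with made-up mixture parameters, not the full learned codec:

```python
# Probability mass of an integer-quantized latent y under a discretized
# Gaussian mixture: sum_k w_k * (Phi((y+0.5-mu_k)/s_k) - Phi((y-0.5-mu_k)/s_k)).
# Generic illustration of the entropy-model likelihood only.
import math

def gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def discretized_mixture_pmf(y, weights, means, sigmas, floor=1e-9):
    p = sum(w * (gaussian_cdf(y + 0.5, m, s) - gaussian_cdf(y - 0.5, m, s))
            for w, m, s in zip(weights, means, sigmas))
    return max(p, floor)          # floor avoids log(0) in the rate term

# Rate (in bits) contributed by one latent value under a 3-component mixture:
p = discretized_mixture_pmf(2, weights=[0.6, 0.3, 0.1],
                            means=[1.8, -0.5, 4.0], sigmas=[0.7, 1.2, 2.0])
rate_bits = -math.log2(p)
```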
10

Zhang, Dewei, Yin Liu, and Sam Davanloo Tajbakhsh. "A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure." INFORMS Journal on Computing 34, no. 2 (March 2022): 1126–40. http://dx.doi.org/10.1287/ijoc.2021.1069.

Abstract:
In many statistical learning problems, it is desired that the optimal solution conform to an a priori known sparsity structure represented by a directed acyclic graph. Inducing such structures by means of convex regularizers requires nonsmooth penalty functions that exploit group overlapping. Our study focuses on evaluating the proximal operator of the latent overlapping group lasso developed by Jacob et al. in 2009. We implemented an alternating direction method of multipliers with a sharing scheme to solve large-scale instances of the underlying optimization problem efficiently. In the absence of strong convexity, global linear convergence of the algorithm is established using the error bound theory. More specifically, the paper contributes to establishing primal and dual error bounds when the nonsmooth component in the objective function does not have a polyhedral epigraph. We also investigate the effect of the graph structure on the speed of convergence of the algorithm. Detailed numerical simulation studies over different graph structures supporting the proposed algorithm and two applications in learning are provided. Summary of Contribution: The paper proposes a computationally efficient optimization algorithm to evaluate the proximal operator of a nonsmooth hierarchical sparsity-inducing regularizer and establishes its convergence properties. The computationally intensive subproblem of the proposed algorithm can be fully parallelized, which allows solving large-scale instances of the underlying problem. Comprehensive numerical simulation studies benchmarking the proposed algorithm against five other methods on the speed of convergence to optimality are provided. Furthermore, performance of the algorithm is demonstrated on two statistical learning applications related to topic modeling and breast cancer classification. The code along with the simulation studies and benchmarks are available on the corresponding author's GitHub website for evaluation and future use.
11

Zhong, Jiajun, Ning Gui, and Weiwei Ye. "Data Imputation with Iterative Graph Reconstruction." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11399–407. http://dx.doi.org/10.1609/aaai.v37i9.26348.

Abstract:
Effective data imputation demands rich latent "structure" discovery capabilities from "plain" tabular data. Recent advances in graph neural network-based data imputation solutions show their structure learning potential by translating tabular data into bipartite graphs. However, due to a lack of relations between samples, they treat all samples equally, which is against one important observation: "similar samples should give more information about missing values." This paper presents a novel Iterative graph Generation and Reconstruction framework for Missing data imputation (IGRM). Instead of treating all samples equally, we introduce the concept of "friend networks" to represent different relations among samples. To generate an accurate friend network with missing data, an end-to-end friend network reconstruction solution is designed to allow for continuous friend network optimization during imputation learning. The representation of the optimized friend network, in turn, is used to further optimize the data imputation process with differentiated message passing. Experiment results on eight benchmark datasets show that IGRM yields 39.13% lower mean absolute error compared with nine baselines and 9.04% lower than the second-best. Our code is available at https://github.com/G-AILab/IGRM.
12

Mellouli, Sofiene, Talal Alqahtani, and Salem Algarni. "Parametric Analysis of a Solar Water Heater Integrated with PCM for Load Shifting." Energies 15, no. 22 (November 21, 2022): 8741. http://dx.doi.org/10.3390/en15228741.

Abstract:
Integrating a solar water heater (SWH) with a phase change material (PCM)-based latent heat storage is an attractive method for transferring load from peak to off-peak hours. This load transfer varies as the physical parameters of the PCM change. Thus, the aim of this study is to perform a parametric analysis of the SWH on the basis of the PCM's thermophysical properties. A mathematical model was established, and a computation code was developed to describe the physical phenomenon of heat storage/release in/from the SWH system. The thermal energy stored and the energy efficiency are used as key performance indicators of the new SWH–PCM system. The obtained numerical results demonstrate that these key performance indicators were significantly impacted by the PCM thermophysical properties (melting temperature, density, and latent heat). Using this model, various numerical simulations were performed, and the results indicate that, for the SWH with PCM, 20.2% of the on-peak thermal energy load is shifted to the off-peak period. In addition, by increasing the PCM's density and enthalpy, higher load shifting is observed. Moreover, a PCM with a lower melting point can help the SWH retain water temperature for a longer period of time. There are optimal PCM thermophysical properties that give the best specific energy recovery and thermal efficiency of the SWH–PCM system. For the proposed SWH–PCM system, the optimal PCM thermophysical properties are a melting temperature of 313 K, a density of 3200 kg/m3, and a latent heat of 520 kJ/kg.
13

Manimehalai, N., P. Karthickumar, and K. Rathnakumar. "Thermophysical Properties of Brackish Water Shrimp (Litopenaeus vannamei) for Process Design and Optimization." International Journal of Nutrition, Pharmacology, Neurological Diseases 11, no. 4 (October 2021): 298–302. http://dx.doi.org/10.4103/ijnpnd.ijnpnd_28_21.

Abstract:
The proximate compositions, moisture (74.2%), protein (28.07%), ash (2.62%), lipid (3.39%), and carbohydrate (4.39%) of brackish water shrimp (Litopenaeus vannamei) were determined. The density, specific heat, thermal conductivity, thermal diffusivity, and latent heat obtained as functions of the proximate composition of the shrimp and found to be 1110 kg/m3, 394 kJ/kgK, 0.5113 W/mK, 1.1773 × 10−6 m2/s, and 256.39 kJ/kg, respectively. The thermophysical values obtained were correlated with the proximate composition value. Correlation analysis reveals that specific heat (0.83), thermal conductivity (perpendicular model) (0.99), and latent heat (0.12) have positive correlation with protein content of shrimp. On the contrary, thermal diffusivity (−0.92) and thermal conductivity (parallel model) (−0.99) have negative correlation with protein content of shrimp. Further, latent heat of shrimp has a weak positive correlation (0.12) with protein content of shrimp and strong positive correlation with carbohydrate (0.82), fat (0.94), and water (1.0). Thermal conductivity (perpendicular model) has weak positive correlation with fat (0.18) and water (0.19), and has strong positive correlation with protein (0.99) and ash (0.81). Shrimp density has strong correlation with protein (0.98) followed by ash (0.86). Heat may be transferred better across the fibers of the shrimp than along the fiber. Context: Shrimp is one of the most valuable sources of high-grade protein among the seafood category. Shrimp has healthy fat a unique source of essential nutrients, including long-chain omega-3 fatty acids, iodine, vitamin D, and calcium. The global shrimp production has increased at a compound annual growth rate (CAGR) of 3.2% during the period 2011 to 2017. High market demand and consumer preference for shrimp food are attributed due to the attractive and sleek appearance, substantial flesh, ease of preparation for processing (mainly due to absence of scales), etc. With the potential and promise of further increases in production, it is essential to provide a firm base for development of technologies suitable for the value-added products from shrimp to further enhance its market expansion. Understanding on thermal properties of foods plays an important role in the design and prediction of heat transfer operations during the handling, processing, canning, storing, and distribution of foods. In addition, they are fundamentally important in mathematical modeling studies for the design and optimization of food-processing operation involving heat and mass transfer. Aims: The aim of the present study is to determine some thermophysical properties of shrimp as a function of its proximate composition to provide data for the development of appropriate equipment and processing technology for brackish water shrimp (L. vannamei). Settings and design: Proximate composition of the brackish water shrimp (L. vannamei) was measured. Prediction equations were developed to predict the thermophysical properties of shrimp. Correlation matrix was prepared to understand the dependence of proximate composition and thermophysical properties of shrimp. Materials and methods: Shrimp (L. vannamei) were obtained from an aquaculture farm in Nagapattinam. Shrimp were handled in accordance with the Codex General Principles of Food Hygiene (CAC/RCP 1-1969) and Code of Practice for Fish and Fishery Products (CAC/RCP 52-2003). The Kjeldahl method was performed according to method 981.10 of the AOAC International. 
Total lipids in the tissue sample were extracted and analyzed by the method. Water content was determined by oven drying at 105°C, and ashing by incineration in a muffle furnace at 525°C. Carbohydrate content was determined by the difference method as given in the following equation: % Carbohydrate = 100 − (% Crude protein + % Total fat + % Ash). Comprehensive models were used to predict volume and thermal properties. Statistical analysis used: A correlation matrix of the proximate composition and the thermophysical properties of shrimp was prepared to understand the dependence of the thermophysical properties on the proximate composition of shrimp. Results: The predicted values of the thermophysical properties of farmed shrimp were in accordance with already published values. The density of the shrimp is on the slightly higher side when compared with water alone, indicating the influence of the proximate composition. The correlation matrix thus prepared better explains the dependence of the thermophysical properties (specific heat, thermal conductivity, thermal diffusivity, latent heat, and density) on the proximate composition (moisture, protein, ash, lipid, and carbohydrate) of brackish water shrimp (L. vannamei). Shrimp is "nature's superfood," an important source of proteins and healthy fat, and a unique source of essential nutrients, including long-chain omega-3 fatty acids, iodine, vitamin D, and calcium. The knowledge of its engineering properties is essential to its processing and preservation to increase its value as food. The thermophysical data obtained in this study could be used as input in heat transfer calculations and to establish critical control points during the drying, freezing, and thermal processing of shrimp meat.
14

Everett, Tim, Trey Taylor, Dong-Kyeong Lee, and Dennis M. Akos. "Optimizing the Use of RTKLIB for Smartphone-Based GNSS Measurements." Sensors 22, no. 10 (May 18, 2022): 3825. http://dx.doi.org/10.3390/s22103825.

Abstract:
The Google Smartphone Decimeter Challenge (GSDC) was a competition held in 2021, where data from a variety of instruments useful for determining a phone’s position (signals from GPS satellites, accelerometer readings, gyroscope readings, etc.) using Android smartphones were provided to be processed/assessed in regard to the most accurate determination of the longitude and latitude of user positions. One of the tools that can be utilized to process the GNSS measurements is RTKLIB. RTKLIB is an open-source GNSS processing software tool that can be used with the GNSS measurements, including code, carrier, and doppler measurements, to provide real-time kinematic (RTK), precise point positioning (PPP), and post-processed kinematic (PPK) solutions. In the GSDC, we focused on the PPK capabilities of RTKLIB, as the challenge only required post-processing of past data. Although PPK positioning is expected to provide sub-meter level accuracies, the lower quality of the Android measurements compared to geodetic receivers makes this performance difficult to achieve consistently. Another latent issue is that the original RTKLIB created by Tomoji Takasu is aimed at commercial GNSS receivers rather than smartphones. Therefore, the performance of the original RTKLIB for the GSDC is limited. Consequently, adjustments to both the code-base and the default settings are suggested. When implemented, these changes allowed RTKLIB processing to score 5th place, based on the performance submissions of the prior GSDC competition. Detailed information on what was changed, and the steps to replicate the final results, are presented in the paper. Moreover, the updated code-base, with all the implemented changes, is provided in the public repository. This paper outlines a procedure to optimize the use of RTKLIB for Android smartphone measurements, highlighting the changes needed given the low-quality measurements from the mobile phone platform (relative to the survey grade GNSS receiver), which can be used as a basis point for further optimization for future GSDC competitions.
15

Bi, Sirui, Victor Fung, and Jiaxin Zhang. "Accelerating Inverse Learning via Intelligent Localization with Exploratory Sampling." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14711–19. http://dx.doi.org/10.1609/aaai.v37i12.26719.

Abstract:
In the scope of "AI for Science", solving inverse problems is a longstanding challenge in materials and drug discovery, where the goal is to determine the hidden structures given a set of desirable properties. Deep generative models have recently been proposed to solve inverse problems, but they currently struggle with expensive forward operators, with precisely localizing the exact solutions, and with fully exploring the parameter space without missing solutions. In this work, we propose a novel approach (called iPage) to accelerate the inverse learning process by leveraging probabilistic inference from deep invertible models and deterministic optimization via fast gradient descent. Given a target property, the learned invertible model provides a posterior over the parameter space; we identify these posterior samples as an intelligent prior initialization which enables us to narrow down the search space. We then perform gradient descent to calibrate the inverse solutions within a local region. Meanwhile, a space-filling sampling is imposed on the latent space to better explore and capture all possible solutions. We evaluate our approach on three benchmark tasks and on two datasets of real-world applications that we create from quantum chemistry and additive manufacturing, and find that our method achieves superior performance compared to several state-of-the-art baseline methods. The iPage code is available at https://github.com/jxzhangjhu/MatDesINNe.
16

Chao, Guoqing, Yi Jiang, and Dianhui Chu. "Incomplete Contrastive Multi-View Clustering with High-Confidence Guiding." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11221–29. http://dx.doi.org/10.1609/aaai.v38i10.29000.

Abstract:
Incomplete multi-view clustering has become an important research problem, since multi-view data with missing values are ubiquitous in real-world applications. Although great efforts have been made for incomplete multi-view clustering, there are still some challenges: 1) most existing methods do not make full use of multi-view information to deal with missing values; 2) most methods employ only the consistent information within multi-view data but ignore the complementary information; and 3) in existing incomplete multi-view clustering methods, incomplete multi-view representation learning and clustering are treated as independent processes, which leads to a performance gap. In this work, we propose a novel Incomplete Contrastive Multi-View Clustering method with high-confidence guiding (ICMVC). First, we propose multi-view consistency relation transfer plus a graph convolutional network to tackle the missing values problem. Second, instance-level attention fusion and high-confidence guiding are proposed to exploit the complementary information, while instance-level contrastive learning for the latent representation is designed to employ the consistent information. Third, an end-to-end framework is proposed to integrate multi-view missing value handling, multi-view representation learning, and clustering assignment for joint optimization. Experiments compared with state-of-the-art approaches demonstrate the effectiveness and superiority of our method. Our code is publicly available at https://github.com/liunian-Jay/ICMVC. The version with supplementary material can be found at http://arxiv.org/abs/2312.08697.
17

Ye, Fei, and Adrian G. Bors. "Task-Free Continual Generation and Representation Learning via Dynamic Expansionable Memory Cluster." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16451–59. http://dx.doi.org/10.1609/aaai.v38i15.29582.

Abstract:
Human brains can continually acquire and learn new skills and knowledge over time from a dynamically changing environment without forgetting previously learnt information. Such a capacity can selectively transfer some important and recently seen information to the persistent knowledge regions of the brain. Inspired by this intuition, we propose a new memory-based approach for image reconstruction and generation in continual learning, consisting of a temporary and evolving memory, with two different storage strategies, corresponding to the temporary and permanent memorisation. The temporary memory aims to preserve up-to-date information while the evolving memory can dynamically increase its capacity in order to preserve permanent knowledge information. This is achieved by the proposed memory expansion mechanism that selectively transfers those data samples deemed as important from the temporary memory to new clusters defined within the evolved memory according to an information novelty criterion. Such a mechanism promotes the knowledge diversity among clusters in the evolved memory, resulting in capturing more diverse information by using a compact memory capacity. Furthermore, we propose a two-step optimization strategy for training a Variational Autoencoder (VAE) to implement generation and representation learning tasks, which updates the generator and inference models separately using two optimisation paths. This approach leads to a better trade-off between generation and reconstruction performance. We show empirically and theoretically that the proposed approach can learn meaningful latent representations while generating diverse images from different domains. The source code and supplementary material (SM) are available at https://github.com/dtuzi123/DEMC.
18

Elagina, E. V. "Certain Issues of Judicial Evaluation of an Expert’s Opinion." Siberian Law Review 19, no. 4 (November 15, 2022): 384–98. http://dx.doi.org/10.19073/2658-7602-2022-19-4-384-398.

Abstract:
The article deals with local issues of assessing an expert’s opinion: the legality of the subject of initiating a forensic examination and the direct subject of its production. The scope of the Author’s interests, teaching experience and scientific activity make it possible to classify these issues as “latent”, since their non-obviousness is determined both by the established law judicial practice and by the imperfection of the legal regulation of forensic examination in criminal proceedings. Along with the main issues, attention is paid to the genesis of the institution of forensic examination in criminal proceedings, which aims to demonstrate the continuity of its legal regulation, and also to draw attention to the fact that some approaches used by the legislator in the Code of Criminal Procedure of the RSFSR could be preserved in the current criminal procedure law, which would not only prevent the occurrence of a number of issues with the law enforcement officer, but would also serve as an optimization of the production of comprehensive examinations. Evaluation of an expert opinion is a multifaceted intellectual activity that involves the resolution of a complex of procedural and substantive issues. The objects of assessment, along with the expert's opinion, are also procedural documents, which reflect all the actions, the production of which ensured the preparation and appointment of the examination. The list of criteria that the expert opinion must satisfy is standard for all evidence – admissibility, relevance and reliability (part 1 of article 88 of the Criminal Procedure Code of the Russian Federation), but at the same time, the assessment of each of the properties of the evidence “expert’s opinion” should be carried out taking into account the specifics of its formations. At the same time, when evaluating the admissibility of the expert’s opinion, it is necessary to pay attention to data indicating the legality / illegality of the subject of initiating a forensic examination and the direct subject of its production. The Author insists on the need for the subject of the assessment to know the content of the expert's opinion of the entire required set of normative sources and emphasizes that there should be constant monitoring of changes to existing regulations, as well as the publication of new ones related to the area under consideration.
19

Palkowski, Marek, and Mateusz Gruzewski. "Time and Energy Benefits of Using Automatic Optimization Compilers for NPDP Tasks." Electronics 12, no. 17 (August 24, 2023): 3579. http://dx.doi.org/10.3390/electronics12173579.

Abstract:
In this article, we analyze the program codes generated automatically using three advanced optimizers: Pluto, Traco, and Dapt, which are specifically tailored for the NPDP benchmark set. This benchmark set comprises ten program loops, predominantly from the field of bioinformatics. The codes exemplify dynamic programming, a challenging task for well-known tools used in program loop optimization. Given the intricacy involved, we opted for three automatic compilers based on the polyhedral model and various loop-tiling strategies. During our evaluation of the code’s performance, we meticulously considered locality and concurrency to accurately estimate time and energy efficiency. Notably, we dedicated significant attention to the latest Dapt compiler, which applies space–time loop tiling to generate highly efficient code for the NPDP benchmark suite loops. By employing the aforementioned optimizers and conducting an in-depth analysis, we aim to demonstrate the effectiveness and potential of automatic transformation techniques in enhancing the performance and energy efficiency of dynamic programming codes.
20

Andrade-Campos, A. "Development of an Optimization Framework for Parameter Identification and Shape Optimization Problems in Engineering." International Journal of Manufacturing, Materials, and Mechanical Engineering 1, no. 1 (January 2011): 57–79. http://dx.doi.org/10.4018/ijmmme.2011010105.

Abstract:
The use of optimization methods in engineering is increasing. Process and product optimization, inverse problems, shape optimization, and topology optimization are frequent problems both in industry and in science communities. In this paper, an optimization framework for engineering inverse problems such as parameter identification and shape optimization problems is presented. It inherits the large body of experience gained in such problems with the SiDoLo code and adds the latest developments in direct search optimization algorithms. User subroutines in Sdl allow the program to be customized for particular applications. Several applications in parameter identification and shape optimization topics using Sdl Lab are presented. The use of commercial and non-commercial (in-house) Finite Element Method codes to evaluate the objective function can be achieved using the interfaces pre-developed in Sdl Lab. The shape optimization problem of determining the initial geometry of a blank in a deep drawing square cup problem is analysed and discussed. The main goal of this problem is to determine the optimum shape of the initial blank in order to save later trimming operations and costs.
21

Kurbet, Pavlo M., and Oleksandr A. Rudenok. "The method of parametric adaptation of the check polynomials of the component recursive systematic convulsion code turbo code." Environmental safety and natural resources 50, no. 2 (June 28, 2024): 157–72. http://dx.doi.org/10.32347/2411-4049.2024.2.157-172.

Abstract:
The article is devoted to increasing the efficiency of wireless information transmission systems through the adaptation of the check polynomials of the component recursive systematic convolutional turbo code by solving an optimization problem with the gradient method. After the appearance of the extremely important work of C. Shannon, huge efforts have been made to find new transmission methods in order to approach the capacity of the channel. Channel coding is one of the main methods that enables operation at almost full channel capacity. The probability of a bit error in information decoding is chosen as the objective function. To calculate the probability of a bit error in information decoding, it is proposed to use cyclic codes. Adaptation schemes of these codes are used to improve the reliability of information. At the same time, during adaptation, in the vast majority of works only one parameter changes – the coding rate – which does not fully exploit the potential of corrective coding schemes. The purpose of the article is to increase the efficiency of wireless information transmission systems by adapting the check polynomials of the component recursive systematic convolutional turbo code by solving the optimization problem. The article consists of an introduction, which highlights the problem, analyzes the latest research and publications on this topic, and formulates the purpose of the article. The results of the research are shown, and conclusions and prospects for further research are drawn. The article ends with a list of used sources. As a result of the proposed method, the effective check polynomials of the RSCC turbo code are given, found using the method for a channel with additive white Gaussian noise and different sizes of the input data block. We consider the direction of further research to be expanding the search range to take into account a larger number of turbo code parameters during adaptation; the following can be foreseen: the number of bits in a block, types of interleavers, decoding algorithms, decoding iterations, etc.
22

Arzumanyan, R. V. "Arithmetic coder optimization for compressing images obtained through remote probing of water bodies." Vestnik of Don State Technical University 19, no. 1 (April 1, 2019): 86–92. http://dx.doi.org/10.23947/1992-5980-2019-19-1-86-92.

Abstract:
Introduction. The paper proposes a fast software algorithm of arithmetic coding for the compression of digital images. It is shown how the complexity of the arithmetic coder algorithm depends on the chosen complexity measures (the input size is not considered). In the course of the work, the most computationally complex parts of the arithmetic coder algorithm are determined, and the performance of their software implementation is optimized. Codecs with the new algorithm compress photo and video records obtained through the remote probing of water bodies without frame-to-frame difference. Materials and Methods. In the presented paper, a selection of satellite images of the Azov Sea area was used. The software algorithm of the arithmetic coder was optimized, a theoretical study was conducted, and a computational experiment was performed. Research Results. The performance of the software implementation of the arithmetic coder is increased by the example of the VP9 video codec. Numerous launches of the reference and modified codecs were made to measure the runtime. Comparison of their average execution times showed that the modified codec performance is 5.21% higher. The overall performance improvement for arithmetic decoding was 7.33%. Discussion and Conclusions. The increase in the speed of the latest digital photo and video image compression algorithms allows them to be used on mobile computing platforms, including as part of the onboard electronics of unmanned aerial vehicles. The theoretical results of this work extend the tools of average-case complexity analysis of algorithms. They can be used in cases where the number of algorithm steps depends not only on the input size but also on non-measurable criteria (for example, on the common RAM access scheme from parallel processors).
23

Kuznetsov, Maksim, and Daniil Polykovskiy. "MolGrow: A Graph Normalizing Flow for Hierarchical Molecular Generation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8226–34. http://dx.doi.org/10.1609/aaai.v35i9.17001.

Abstract:
We propose a hierarchical normalizing flow model for generating molecular graphs. The model produces new molecular structures from a single-node graph by recursively splitting every node into two. All operations are invertible and can be used as plug-and-play modules. The hierarchical nature of the latent codes allows for precise changes in the resulting graph: perturbations in the first layer cause global structural changes, while perturbations in the subsequent layers change the resulting molecule only marginally. The proposed model outperforms existing generative graph models on the distribution learning task. We also show successful experiments on global and constrained optimization of chemical properties using the latent codes of the model.
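Property optimization over latent codes, as mentioned in the last sentence, is typically done by ascending the gradient of a property predictor with respect to the code and then decoding the optimized code. The sketch below is a generic version of that idea with placeholder `decoder` and `property_predictor` modules; it is not the MolGrow-specific procedure.

```python
# Generic latent-code optimization sketch: maximize a differentiable property
# predictor over a latent vector, then decode the optimized code. The
# `decoder` and `property_predictor` modules are placeholders, not MolGrow.
import torch

def optimize_latent(decoder, property_predictor, z_dim=64, steps=200, lr=0.05):
    # Start from a random latent code and ascend the predicted property.
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = property_predictor(decoder(z)).sum()   # scalar objective to maximize
        (-score).backward()                            # minimizing the negative = ascent
        opt.step()
    with torch.no_grad():
        return decoder(z), float(property_predictor(decoder(z)).sum())
```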
24

Ivanov, Boyan D., and David J. Kropaczek. "ASSESSMENT OF PARALLEL SIMULATED ANNEALING PERFORMANCE WITH THE NEXUS/ANC9 CORE DESIGN CODE SYSTEM." EPJ Web of Conferences 247 (2021): 02019. http://dx.doi.org/10.1051/epjconf/202124702019.

Abstract:
The method of parallel simulated annealing is being considered as a loading pattern optimization method to be used within the framework of the latest Westinghouse core design code system NEXUS/ANC9. A prototype version of NEXUS/ANC9 that incorporates the parallel simulated annealing method was developed. The prototype version was evaluated in terms of robustness, performance, and results. The prototype code was used to optimize loading patterns for several plants and cycles, including 2-loop, 3-loop, and 4-loop Westinghouse plants. Different fuel assembly lattices with IFBA, WABA, and Gadolinium burnable absorbers were also exercised in these cores. Different strategies were evaluated using different options in the code. Special attention was paid to robustness and performance when different numbers of parallel processes were used with different Markov chain sizes.
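For context, the sketch below shows a plain single-chain simulated annealing loop of the kind that such tools parallelize across Markov chains. The `neighbor` and `cost` callables (e.g., a loading-pattern shuffle and a core-simulator evaluation) and the cooling schedule are placeholders, since the NEXUS/ANC9 internals are not described in the abstract.

```python
# Generic simulated annealing loop (illustrative only; not the NEXUS/ANC9
# implementation). `neighbor` proposes a perturbed candidate and `cost`
# evaluates it; lower cost is better.
import math, random

def simulated_annealing(x0, cost, neighbor, t0=1.0, alpha=0.95,
                        chain_len=50, n_cooling_steps=100):
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_cooling_steps):
        for _ in range(chain_len):               # Markov chain at fixed temperature
            y = neighbor(x)
            fy = cost(y)
            if fy < fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha                               # geometric cooling schedule
    return best, fbest
```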
25

Yen, Haw, Seonggyu Park, Jeffrey G. Arnold, Raghavan Srinivasan, Celray James Chawanda, Ruoyu Wang, Qingyu Feng, et al. "IPEAT+: A Built-In Optimization and Automatic Calibration Tool of SWAT+." Water 11, no. 8 (August 14, 2019): 1681. http://dx.doi.org/10.3390/w11081681.

Abstract:
For almost 30 years, the Soil and Water Assessment Tool (SWAT) has been successfully implemented to address issues around various scientific subjects in the world. On the other hand, it has been reaching the limit of its potential flexibility for further development under the current structure. The new-generation SWAT, dubbed SWAT+, was released recently with entirely new coding features. SWAT+ is designed to have far more advanced functions and capacities to handle challenging watershed modeling tasks for hydrologic and water quality processes. However, model calibration is still required before the SWAT+ model is applied to engineering projects and research programs. The primary goal of this study is to develop an open-source, easy-to-operate automatic calibration tool for SWAT+, dubbed IPEAT+ (Integrated Parameter Estimation and Uncertainty Analysis Tool Plus). There are four major advantages: (i) open-source code available to general users; (ii) compiled and integrated directly with the SWAT+ source code as a single executable; (iii) supported by the SWAT developer group; and (iv) built with an efficient optimization technique. The coupling work between IPEAT+ and SWAT+ is fairly simple and can be conducted by users with minor effort. IPEAT+ will be regularly updated with the latest SWAT+ revision. If users would like to integrate IPEAT+ with various versions of SWAT+, only a few lines in the SWAT+ source code need to be updated. IPEAT+ is the first automatic calibration tool integrated with the SWAT+ source code. Users can take advantage of the tool to pursue more cutting-edge and forward-thinking scientific questions.
26

Facchini, Bruno, Daniele Fiaschi, and Giampaolo Manfrida. "Exergy Analysis of Combined Cycles Using Latest Generation Gas Turbines." Journal of Engineering for Gas Turbines and Power 122, no. 2 (January 3, 2000): 233–38. http://dx.doi.org/10.1115/1.483200.

Abstract:
The potential performance of optimized gas-steam combined cycles built around latest-generation gas turbine engines is analyzed, by means of energy/exergy balances. The options here considered are the reheat gas turbine and the H-series with closed-loop steam blade cooling. Simulations of performance were run using a well-tested Modular Code developed at the Department of Energy Engineering of Florence and subsequently improved to include the calculation of exergy destruction of all types (heat transfer, friction, mixing, and chemical irreversibilities). The blade cooling process is analyzed in detail as it is recognized to be of capital importance for performance optimization. The distributions of the relative exergy destruction for the two solutions—both capable of achieving energy/exergy efficiencies in the range of 60 percent—are compared and the potential for improvement is discussed. [S0742-4795(00)00902-9]
27

Sherstnev, Pavel. "Thefittest: evolutionary machine learning in Python." ITM Web of Conferences 59 (2024): 02020. http://dx.doi.org/10.1051/itmconf/20245902020.

Abstract:
Thefittest is a new Python library specializing in evolutionary optimization methods and in machine learning methods that use evolutionary optimization. Thefittest provides both classical evolutionary algorithms and efficient modifications of these algorithms that do not have openly available implementations. Among the advantages of the library are the performance of the implemented methods, accessibility, and ease of use. The paper discusses the motivation for developing and leading the project, describes the structure of the library with examples of use, and provides a comparison with other projects with a similar development goal. Thefittest is an open-source project published on GitHub and PyPI, developed using modern methods of code analysis and testing. At the time of writing, the latest version of the library is 0.2.3.
28

Xiao, Lei, Huaikou Miao, and Ying Zhong. "Test case prioritization and selection technique in continuous integration development environments: a case study." International Journal of Engineering & Technology 7, no. 2.28 (May 16, 2018): 332. http://dx.doi.org/10.14419/ijet.v7i2.28.13207.

Abstract:
Regression testing is a very important activity in continuous integration development environments. Software engineers frequently integrate new or changed code, which triggers new regression testing. Furthermore, regression testing in continuous integration development environments comes with tight time constraints, and it is impossible to re-run all the test cases in regression testing. Test case prioritization and selection techniques are often used to render continuous integration processes more cost-effective. Based on multi-objective optimization, we present a test case prioritization and selection technique, TCPSCI, to satisfy time constraints and achieve testing goals in continuous integration development environments. We order and select test cases based on historical failure data, testing coverage, code size, and testing execution time. Within the same change request, test cases that maximize code coverage, have shorter execution times, and reveal the latest faults have higher priority. The case study results show that using TCPSCI is more cost-effective compared to manual prioritization.
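A simple way to picture multi-criteria prioritization under a CI time budget is a weighted score over failure history, coverage, and execution time, followed by greedy selection. The sketch below is only that illustrative baseline; the weights, field names, and budget are assumptions, not the TCPSCI formulation.

```python
# Illustrative multi-criteria test-case prioritization: score each test by
# historical failures, coverage, and (inverse) execution time, then keep as
# many top-ranked tests as fit the CI time budget. Weights are placeholders.
def prioritize(tests, time_budget, w_fail=0.5, w_cov=0.3, w_time=0.2):
    def score(t):
        return (w_fail * t["recent_failures"]
                + w_cov * t["coverage"]                  # fraction of changed code covered
                + w_time * 1.0 / (1.0 + t["exec_time"]))
    selected, used = [], 0.0
    for t in sorted(tests, key=score, reverse=True):
        if used + t["exec_time"] <= time_budget:
            selected.append(t["name"])
            used += t["exec_time"]
    return selected

tests = [{"name": "t1", "recent_failures": 3, "coverage": 0.4, "exec_time": 12.0},
         {"name": "t2", "recent_failures": 0, "coverage": 0.9, "exec_time": 30.0},
         {"name": "t3", "recent_failures": 1, "coverage": 0.2, "exec_time": 5.0}]
print(prioritize(tests, time_budget=40.0))
```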
29

Annam, Sri Nikhil. "Optimizing IT Infrastructure for Business Continuity." Stallion Journal for Multidisciplinary Associated Research Studies 1, no. 5 (October 31, 2022): 31–42. https://doi.org/10.55544/sjmars.1.5.7.

Abstract:
IT infrastructure is critical to ensuring business continuity. This paper describes means of optimizing IT infrastructure for resilience, scalability, security, and cost-effectiveness. Based on current adoption trends in hybrid clouds, edge computing, and automation, the research outlines approaches for improving disaster recovery, fault-tolerant systems, and the use of emerging technologies in AI and IoT. It gives concrete recommendations and also emphasizes performance metrics for sustainable IT infrastructure planning. The study relies on the latest technical information, as well as tables and code snippets, to illustrate optimization methods.
30

Yang, FengLei, Fei Liu, and ShanShan Liu. "Collaborative Filtering Based on a Variational Gaussian Mixture Model." Future Internet 13, no. 2 (February 1, 2021): 37. http://dx.doi.org/10.3390/fi13020037.

Abstract:
Collaborative filtering (CF) is a widely used method in recommendation systems. Linear models are still the mainstream of collaborative filtering research methods, but non-linear probabilistic models go beyond the limits of linear model capacity. For example, variational autoencoders (VAEs) have been extensively used in CF and have achieved excellent results. Aiming at the problem that the prior distribution for the latent codes of VAEs in traditional CF is too simple, which makes the latent variable representations of users and items too poor, this paper proposes a variational autoencoder that uses a Gaussian mixture model for the latent factor distribution for CF, GVAE-CF. On this basis, an optimization function suitable for GVAE-CF is proposed. In our experimental evaluation, we show that the recommendation performance of GVAE-CF outperforms the previously proposed VAE-based models on several popular benchmark datasets in terms of recall and normalized discounted cumulative gain (NDCG), thus proving the effectiveness of the algorithm.
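The modelling change relative to a plain VAE is replacing the single standard-normal prior over latent codes with a Gaussian mixture. The sketch below shows only that prior's log-density (the term that would enter the ELBO), with placeholder component parameters rather than anything learned by GVAE-CF:

```python
# Log-density of a latent code under a diagonal Gaussian mixture prior,
# the ingredient that replaces the standard-normal prior of a plain VAE.
# Component parameters here are placeholders, not learned GVAE-CF values.
import math
import torch

def gmm_log_prob(z, weights, means, log_vars):
    # z: (batch, d); weights: (K,); means, log_vars: (K, d)
    z = z.unsqueeze(1)                                                    # (batch, 1, d)
    log_comp = -0.5 * ((z - means) ** 2 / log_vars.exp()
                       + log_vars + math.log(2 * math.pi)).sum(dim=-1)   # (batch, K)
    return torch.logsumexp(log_comp + weights.log(), dim=-1)             # (batch,)

K, d = 5, 32
weights = torch.full((K,), 1.0 / K)
means, log_vars = torch.randn(K, d), torch.zeros(K, d)
z = torch.randn(8, d)
print(gmm_log_prob(z, weights, means, log_vars).shape)   # torch.Size([8])
```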
APA, Harvard, Vancouver, ISO, and other styles
31

Zhong, Jing, Qin Li, Chunming Deng, and Lijun Zhang. "Automated Development of an Accurate Diffusion Database in Fcc AlCoCrFeNi High-Entropy Alloys from a Big Dataset of Composition Profiles." Materials 15, no. 9 (April 30, 2022): 3240. http://dx.doi.org/10.3390/ma15093240.

Full text
Abstract:
This study aims to incorporate a big dataset of composition profiles of fcc AlCoCrFeNi alloys, in addition to those of the related subsystem, to develop a self-consistent kinetic description for quinary high-entropy alloys. The latest feature of the HitDIC (High-throughput Determination of Interdiffusion Coefficients) code was adopted in a high-throughput and automatic manner for accommodating a dataset of composition profiles with up to 87 diffusion couples. A good convergence for the optimization process was achieved, while satisfactory results regarding the composition profiles and previously evaluated diffusion properties were obtained. Here, we present an investigation into the elemental effect of Al towards interdiffusion and tracer diffusion, and their potential effect on creep and precipitation processes.
APA, Harvard, Vancouver, ISO, and other styles
32

Liu, Chenghao, Xin Wang, Tao Lu, Wenwu Zhu, Jianling Sun, and Steven Hoi. "Discrete Social Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 208–15. http://dx.doi.org/10.1609/aaai.v33i01.3301208.

Full text
Abstract:
Social recommendation, which aims at improving the performance of traditional recommender systems by considering social information, has attracted a broad range of interest. As one of the most widely used methods, matrix factorization typically uses continuous vectors to represent user/item latent features. However, the large volume of user/item latent features results in expensive storage and computation costs, particularly on end-user devices where the computational resources available to run the model are very limited. Thus, when taking extra social information into account, precisely extracting the K most relevant items for a given user from massive candidates tends to consume even more time and memory, which imposes formidable challenges for efficient and accurate recommendations. A promising way is to simply binarize the latent features (obtained in the training phase) and then compute the relevance score through Hamming distance. However, such a two-stage hashing-based learning procedure cannot preserve the original data geometry of the real-valued space and may result in a severe quantization loss. To address these issues, this work proposes a novel discrete social recommendation (DSR) method which learns binary codes for users and items in a unified framework that takes social information into account. We further put balanced and uncorrelated constraints on the objective to ensure that the learned binary codes are informative yet compact, and finally develop an efficient optimization algorithm to estimate the model parameters. Extensive experiments on three real-world datasets demonstrate that DSR runs nearly 5 times faster and consumes only 1/37 of the memory of its real-valued competitor, at the cost of almost no loss in accuracy.
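A minimal sketch of the retrieval step such a discrete model enables is shown below: once user/item features are binary codes, relevance reduces to a Hamming similarity. The code length and random codes are illustrative assumptions, not DSR's learned codes.

```python
# Top-K retrieval from binary codes via Hamming similarity; the codes here are
# random placeholders standing in for learned user/item codes.
import numpy as np

def hamming_similarity(user_code: np.ndarray, item_codes: np.ndarray) -> np.ndarray:
    # Codes are {0,1} vectors; similarity = number of matching bits.
    return (user_code == item_codes).sum(axis=1)

def top_k_items(user_code, item_codes, k=3):
    sims = hamming_similarity(user_code, item_codes)
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(1)
item_codes = rng.integers(0, 2, size=(1000, 64))   # 1000 items, 64-bit codes
user_code = rng.integers(0, 2, size=64)
print(top_k_items(user_code, item_codes, k=5))
```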
APA, Harvard, Vancouver, ISO, and other styles
33

Mehta, Shikha. "Memetic Algorithm with Constrained Local Search for Large-Scale Global Optimization." Journal of Intelligent Systems 26, no. 2 (April 1, 2017): 287–300. http://dx.doi.org/10.1515/jisys-2015-0103.

Full text
Abstract:
Nature-inspired algorithms are seen as potential tools to solve large-scale global optimization problems. Memetic algorithms (MAs) are nature-inspired techniques based on evolutionary computation. MAs are considered modified genetic algorithms integrated with a local search mechanism. Conventional MAs perform well for small dimensions; however, their performance starts declining with the increase in dimensions, a phenomenon popularly known as the "curse of dimensionality". In order to solve this problem, MA with constrained local search (MACLS) is proposed for single-objective optimization problems. MACLS constrains the local search performed after every generation, and this controlled local search enhances the optimization capability of the MA. MACLS has been evaluated against GS-MPSO (the latest modification of MA) and the MLCC, EPUS-PSO, JDEdynNP-F, MTS, DewSAcc, DMS-PSO, LSEDA-gl, UEP, ALPSEA, classical DE (differential evolution), and real-coded CHC algorithms that participated in the Congress on Evolutionary Computation 2008 competition. The results establish that MACLS significantly outperforms these algorithms in attaining global optima for unimodal and multimodal single-objective optimization problems for small as well as large dimensions.
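A minimal sketch of a memetic loop with a constrained local-search step is given below; the operators, rates, and sphere objective are illustrative assumptions and not the paper's exact MACLS configuration.

```python
# Memetic-algorithm sketch: a genetic loop where simple hill climbing is
# applied only to the current best individual once per generation.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # objective to minimize
    return float(np.sum(x ** 2))

def local_search(x, step=0.1, tries=20):
    best = x.copy()
    for _ in range(tries):           # simple hill climbing
        cand = best + rng.normal(0, step, size=x.size)
        if sphere(cand) < sphere(best):
            best = cand
    return best

def macls(dim=30, pop_size=40, generations=100):
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([sphere(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[:pop_size // 2]]            # truncation selection
        children = parents + rng.normal(0, 0.3, size=parents.shape)   # Gaussian mutation
        pop = np.vstack([parents, children])
        best_idx = int(np.argmin([sphere(ind) for ind in pop]))
        pop[best_idx] = local_search(pop[best_idx])                   # constrained local search
    return min(sphere(ind) for ind in pop)

print(macls())
```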
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Yudong, Lenan Wu, and Shuihua Wang. "Solving Two-Dimensional HP Model by Firefly Algorithm and Simplified Energy Function." Mathematical Problems in Engineering 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/398141.

Full text
Abstract:
In order to solve the HP model of the protein folding problem, we investigated the traditional energy function and pointed out that its discrete nature gives the search point no direction for the next step, posing a challenge to optimization algorithms. We therefore introduced a simplified energy function that turns the traditional discrete energy function into a continuous one: the simplified energy totals the distance between all pairs of hydrophobic amino acids. To optimize the simplified energy function, we introduced a recent swarm intelligence method, the firefly algorithm (FA), a nature-inspired technique that has been used for solving nonlinear multimodal optimization problems in dynamic environments. We also proposed a coding scheme, together with a clash test strategy, to apply FA to the simplified HP model. The experiment took 14 sequences of different chain lengths from 18 to 100 as the dataset and compared FA with a standard genetic algorithm and an immune genetic algorithm. Each algorithm ran 20 times. The averaged energy convergence results show that FA achieves the lowest values. We conclude that the firefly algorithm combined with the simplified energy function is effective for solving the 2D HP model.
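A minimal sketch of the simplified energy described in the abstract, the total pairwise distance between hydrophobic residues of a 2D conformation, is shown below; the example sequence and coordinates are illustrative.

```python
# Simplified, continuous HP-model energy: sum of Euclidean distances between
# all pairs of hydrophobic (H) residues; lower means a more compact core.
import numpy as np
from itertools import combinations

def simplified_energy(sequence: str, coords: np.ndarray) -> float:
    h_positions = [coords[i] for i, aa in enumerate(sequence) if aa == "H"]
    return sum(float(np.linalg.norm(a - b)) for a, b in combinations(h_positions, 2))

sequence = "HPHPPHHPHH"
coords = np.array([[0, 0], [1, 0], [1, 1], [2, 1], [2, 0], [3, 0],
                   [3, 1], [4, 1], [4, 0], [5, 0]], dtype=float)
print(simplified_energy(sequence, coords))
```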
APA, Harvard, Vancouver, ISO, and other styles
35

Yoo, Yongseok, and Sriram Vishwanath. "On Resolving Simultaneous Congruences Using Belief Propagation." Neural Computation 27, no. 3 (March 2015): 748–70. http://dx.doi.org/10.1162/neco_a_00702.

Full text
Abstract:
Graphical models and related algorithmic tools such as belief propagation have proven to be useful tools in (approximately) solving combinatorial optimization problems across many application domains. A particularly combinatorially challenging problem is that of determining solutions to a set of simultaneous congruences. Specifically, a continuous source is encoded into multiple residues with respect to distinct moduli, and the goal is to recover the source efficiently from noisy measurements of these residues. This problem is of interest in multiple disciplines, including neural codes, decentralized compression in sensor networks, and distributed consensus in information and social networks. This letter reformulates the recovery problem as an optimization over binary latent variables. Then we present a belief propagation algorithm, a layered variant of affinity propagation, to solve the problem. The underlying encoding structure of multiple congruences naturally results in a layered graphical model for the problem, over which the algorithms are deployed, resulting in a layered affinity propagation (LAP) solution. First, the convergence of LAP to an approximation of the maximum likelihood (ML) estimate is shown. Second, numerical simulations show that LAP converges within a few iterations and that the mean square error of LAP approaches that of the ML estimation at high signal-to-noise ratios.
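To make the recovery problem concrete, the toy sketch below encodes an integer source as residues modulo distinct moduli, perturbs them, and recovers the source by brute-force maximum likelihood; the paper's layered affinity propagation replaces this exhaustive search, and the moduli and noise values here are illustrative.

```python
# Toy residue-recovery problem: observe noisy residues of a source modulo
# coprime moduli and recover the source as the ML estimate under Gaussian noise.
import numpy as np

moduli = [7, 11, 13]                               # pairwise coprime, range = 1001
source = 425
residues = np.array([source % m for m in moduli], dtype=float)
noisy = residues + np.array([0.2, -0.3, 0.1])      # one illustrative noise realization

def neg_log_likelihood(candidate):
    # Gaussian noise => ML estimate minimizes squared error between the
    # candidate's residues and the observations.
    return sum((candidate % m - r) ** 2 for m, r in zip(moduli, noisy))

estimate = min(range(int(np.prod(moduli))), key=neg_log_likelihood)
print(source, estimate)   # with small noise the estimate matches the source
```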
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Yufeng, Hui Zhou, Xuebin Zhao, Qingchen Zhang, Poru Zhao, Xiance Yu, and Yangkang Chen. "CuQ-RTM: A CUDA-based code package for stable and efficient Q-compensated reverse time migration." GEOPHYSICS 84, no. 1 (January 1, 2019): F1—F15. http://dx.doi.org/10.1190/geo2017-0624.1.

Full text
Abstract:
Reverse time migration (RTM) in attenuating media should take absorption and dispersion effects into consideration. The recently proposed viscoacoustic wave equation with decoupled fractional Laplacians facilitates separate amplitude compensation and phase correction in Q-compensated RTM (Q-RTM). However, the intensive computation and enormous storage requirements of Q-RTM prevent it from being extended into practical application, especially for large-scale 2D or 3D cases. The emerging graphics processing unit (GPU) computing technology, built around a scalable array of multithreaded streaming multiprocessors, presents an opportunity for greatly accelerating Q-RTM by appropriately exploiting the GPU's architectural characteristics. We have developed cuQ-RTM, a CUDA-based code package that implements Q-RTM based on a set of stable and efficient strategies, such as streamed CUDA fast Fourier transform, checkpointing-assisted time-reversal reconstruction, and adaptive stabilization. The cuQ-RTM code package can run in a multilevel parallelism fashion, either synchronously or asynchronously, to take advantage of all the CPUs and GPUs available, while maintaining impressively good stability and flexibility. We mainly outline the architecture of the cuQ-RTM code package and some program optimization schemes. The speedup ratio on a single GeForce GTX760 GPU card relative to a single core of an Intel Core i5-4460 CPU can exceed 80 in a large-scale simulation. The strong scaling property of multi-GPU parallelism is demonstrated by performing Q-RTM on a Marmousi model with one to six GPUs involved. Finally, we further verified the feasibility and efficiency of cuQ-RTM on a field data set.
APA, Harvard, Vancouver, ISO, and other styles
37

Shulgina, I. V., O. A. Lobovikova, O. A. Voloh, O. V. Gromova, A. K. Nikiforov, A. V. Komissarov, V. A. Demchenko, et al. "Review of Post-registration Changes in the Life Cycle of Сholera Bivalent Chemical Vaccine (Review)." Drug development & registration 9, no. 1 (February 26, 2020): 109–14. http://dx.doi.org/10.33380/2305-2066-2020-9-1-109-114.

Full text
Abstract:
Introduction. Post-registration variations are inevitable owing to the improvement of production processes and quality control associated with the integration of modern technological solutions, the replacement of equipment and of suppliers of raw materials, consumables and packaging materials, the improvement of the release form or composition, administrative changes, as well as new data on the clinical efficacy and safety of immunobiological medicinal products (IMP) obtained during post-marketing studies. Text. The purpose of this work is to analyze the post-registration changes in the life cycle of the IMP «Cholera bivalent chemical Vaccine» produced by the Russian State Anti-Plague Research Institute «Microbe», reflecting the harmonization of the registration dossier documents with changes in Russian legislation and the optimization of production and quality control. First of all, the changes made to the registration documentation for the IMP concerned updating the dossier in accordance with the adopted Federal laws and Resolutions of the Government of the Russian Federation. Further changes related to the optimization of methods for controlling the limit content of impurities of substances used at different stages of antigen production. Other changes were associated with improving the consumer properties of the drug, namely the introduction of modern polymer packaging and of several release forms convenient for use in practical health care institutions. The latest change to the Pharmacopoeia enterprise article (PEA) R N001465/01-111119 regulates the application of identification means in the form of a two-dimensional bar code (QR code) on the packaging of medicines, in compliance with the requirements of Article 67 «Information on medicines. The system of monitoring the movement of medicines» of Federal law N 61 «On medicines circulation». Conclusion. Maintaining the required level of quality of an IMP when changing production technology or control requires a comprehensive analysis of the proposed changes in order to incorporate them into the documents of the registration dossier. At the same time, harmonization of the registration dossier documents with changes in Russian legislation is a necessary condition for carrying out production activity within the legal field.
APA, Harvard, Vancouver, ISO, and other styles
38

Ongpeng, Jason Maximino C., Ernesto J. Guades, and Michael Angelo B. Promentilla. "Cross-Organizational Learning Approach in the Sustainable Use of Fly Ash for Geopolymer in the Philippine Construction Industry." Sustainability 13, no. 5 (February 24, 2021): 2454. http://dx.doi.org/10.3390/su13052454.

Full text
Abstract:
The construction industry faces a challenging situation in attaining sustainable development goals. The carbon footprint of the production and use of construction materials, such as ordinary Portland cement in concrete products, is still on the rise despite many alternatives and technologies. In this paper, the local cross-organizational learning approach (COLA) and a systematic review of academic and professional literature were applied to analyze the use of fly ash as a geopolymer in the Philippine construction industry. Three primary stakeholders were considered: academe, professional organizations, and industry. Documents from each stakeholder were collected, with keywords including sustainability, fly ash, and geopolymer. These documents included published materials, newsletters, department orders, codes, and policies. Text analytics was applied across the documents using the Latent Dirichlet Allocation model, a hierarchical Bayesian modelling process that groups sets of items into topics, to determine the maturity level of organizational learning. An adoption framework is proposed aligning COLA with the awareness, interest, desire, and action (AIDA) funnel model. Results show that the organizational maturity of academe, up to the optimization level, is sufficient towards interest and desire, while industry is highly encouraged to increase organizational maturity from managed to optimization towards desire and action. Factors such as organizational intelligence (OI) and organizational stupidity (OS) are to be considered in balancing critical thinking across organizations. Further studies are recommended considering the use of COLA with ASEAN organizations in the development of sustainable construction materials.
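As a rough illustration of the text-analytics step, the sketch below fits a Latent Dirichlet Allocation model to a toy corpus and prints the top terms per topic; the documents, topic count, and preprocessing are assumptions for demonstration only, not the study's corpus.

```python
# Topic extraction with LDA on an illustrative toy corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "fly ash geopolymer concrete reduces cement carbon footprint",
    "department order updates building code for sustainable materials",
    "curriculum introduces geopolymer research on fly ash binders",
    "industry newsletter reports cement demand and carbon emissions",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)                  # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```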
APA, Harvard, Vancouver, ISO, and other styles
39

Vrugt, J. A. "DREAM<sub>(D)</sub>: an adaptive markov chain monte carlo simulation algorithm to solve discrete, noncontinuous, posterior parameter estimation problems." Hydrology and Earth System Sciences Discussions 8, no. 2 (April 26, 2011): 4025–52. http://dx.doi.org/10.5194/hessd-8-4025-2011.

Full text
Abstract:
Abstract. Formal and informal Bayesian approaches are increasingly being used to treat forcing, model structural, parameter and calibration data uncertainty, and summarize hydrologic prediction uncertainty. This requires posterior sampling methods that approximate the (evolving) posterior distribution. We recently introduced the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, an adaptive Markov Chain Monte Carlo (MCMC) method that is especially designed to solve complex, high-dimensional and multimodal posterior probability density functions. The method runs multiple chains in parallel, and maintains detailed balance and ergodicity. Here, I present the latest algorithmic developments, and introduce a discrete sampling variant of DREAM that samples the parameter space at fixed points. The development of this new code, DREAM(D), has been inspired by the existing class of integer optimization problems, and emerging class of experimental design problems. Such non-continuous parameter estimation problems are of considerable theoretical and practical interest. The theory developed herein is applicable to DREAM(ZS) (Vrugt et al., 2011) and MT-DREAM(ZS) (Laloy and Vrugt, 2011) as well. Two case studies involving a sudoku puzzle and rainfall – runoff model calibration problem are used to illustrate DREAM(D).
APA, Harvard, Vancouver, ISO, and other styles
40

D'Alberto, Paolo, Victor Wu, Aaron Ng, Rahul Nimaiyar, Elliott Delaye, and Ashish Sirasao. "xDNN: Inference for Deep Convolutional Neural Networks." ACM Transactions on Reconfigurable Technology and Systems 15, no. 2 (June 30, 2022): 1–29. http://dx.doi.org/10.1145/3473334.

Full text
Abstract:
We present xDNN, an end-to-end system for deep-learning inference based on a family of specialized hardware processors synthesized on Field-Programmable Gate Arrays (FPGAs) and Convolutional Neural Networks (CNNs). We present a design optimized for low latency, high throughput, and high compute efficiency with no batching. The design is scalable and a parametric function of the number of multiply-accumulate units, the on-chip memory hierarchy, and the numerical precision. The design can be scaled down to produce a processor for embedded devices, replicated to produce more cores for larger devices, or resized to optimize efficiency. On a Xilinx Virtex Ultrascale+ VU13P FPGA, we achieve 800 MHz, which is close to the Digital Signal Processing maximum frequency, and above 80% efficiency of on-chip compute resources. On top of our processor family, we present a runtime system enabling the execution of different networks for different input sizes (i.e., from 224×224 to 2048×1024). We present a compiler that reads CNNs from native frameworks (i.e., MXNet, Caffe, Keras, and TensorFlow), optimizes them, generates code, and provides performance estimates. The compiler combines quantization information from the native environment with optimizations to feed the runtime with code as efficient as any hardware expert could write. We present tools for partitioning a CNN into subgraphs for the division of work between CPU cores and FPGAs. Notice that the software will not change when or if the FPGA design becomes an ASIC, making our work vertical and not just a proof-of-concept FPGA project. We show experimental results for accuracy, latency, and power for several networks: in summary, we can achieve up to 4 times higher throughput and 3 times better power efficiency than the GPUs, and up to 20 times higher throughput than the latest CPUs. To our knowledge, we provide solutions faster than any previous FPGA-based solutions and comparable to other top-of-the-shelf solutions.
APA, Harvard, Vancouver, ISO, and other styles
41

Rohanda, A., A. Waris, R. Kurniadi, and S. Bakhri. "Gamma Heating Evaluation of Silicide RSG-GAS Multipurpose Reactor." Journal of Physics: Conference Series 2328, no. 1 (August 1, 2022): 012004. http://dx.doi.org/10.1088/1742-6596/2328/1/012004.

Full text
Abstract:
In research reactors, the gamma heating deposited in samples is an important issue because it relates to the safety of the samples and of reactor operation. The Multi Purpose Reactor (MPR) 30 MWth, also called Reaktor Serba Guna G.A. Siwabessy (RSG-GAS), is a research reactor that serves as a facility for irradiating various types of target material. RSG-GAS has been operating since 1987, and the last measurement of gamma heating in the core was conducted around twenty years ago in the oxide core. In 1996, the RSG-GAS core was converted from oxide fuel to silicide fuel in order to improve the performance and efficiency of RSG-GAS. As part of the safety analysis of the RSG-GAS irradiation facilities, the gamma heating measurement was re-conducted in order to obtain up-to-date benchmark data for the silicide core. This paper presents the results of gamma heating measurements of the LEU silicide RSG-GAS core in the Central Irradiation Position (CIP) at the 15 MW and 30 MW power levels using a gamma calorimeter. Four types of calorimeter were used, with graphite (C), iron (Fe), aluminum (Al), and zirconium (Zr) samples. Gamma heating calculations using the GAMSET code were performed to verify the measurement results. The measured values are lower than the GAMSET results, and the gamma heating value increases in proportion to the atomic number of the calorimeter sample. These results correspond to the gamma heating benchmarking results of the RSG-GAS oxide core. Several optimization efforts, both in measurement and in modeling with GAMSET, were conducted as an evaluation and justification of the results. The best optimization results are achieved using the maximum value of the measurement and adjusting the power peaking factor (PPF) distribution. The optimized calculated gamma heating values at 15 MW power are 2.78 W/g (C sample), 2.74 W/g (Al sample), 3.36 W/g (Fe sample) and 4.60 W/g (Zr sample), while at the 30 MW power level they are 5.57 W/g (C sample), 5.49 W/g (Al sample), 6.75 W/g (Fe sample) and 9.23 W/g (Zr sample). The best optimization results serve as benchmark data for developing new gamma heating calculation programs based on 18 gamma energy groups.
APA, Harvard, Vancouver, ISO, and other styles
42

Ahn, Jung Min, Jungwook Kim, and Kyunghyun Kim. "Ensemble Machine Learning of Gradient Boosting (XGBoost, LightGBM, CatBoost) and Attention-Based CNN-LSTM for Harmful Algal Blooms Forecasting." Toxins 15, no. 10 (October 10, 2023): 608. http://dx.doi.org/10.3390/toxins15100608.

Full text
Abstract:
Harmful algal blooms (HABs) are a serious threat to ecosystems and human health. Accurate prediction of HABs is crucial for their proactive preparation and management. While mechanism-based numerical modeling, such as the Environmental Fluid Dynamics Code (EFDC), has been widely used in the past, the recent development of machine learning technology with data-based processing capabilities has opened up new possibilities for HABs prediction. In this study, we developed and evaluated two types of machine learning-based models for HABs prediction: gradient boosting models (XGBoost, LightGBM, CatBoost) and attention-based CNN-LSTM models. We used Bayesian optimization for hyperparameter tuning and applied bagging and stacking ensemble techniques to obtain the final prediction results, and we evaluated the applicability of the approach to HABs prediction. When predicting HABs with an ensemble technique, the overall prediction performance can be improved because the strengths of the individual models complement one another and errors such as the overfitting of individual models are averaged out. Our study highlights the potential of machine learning-based models for HABs prediction and emphasizes the need to incorporate the latest technology into this important field.
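A minimal sketch of the ensembling step, averaging the predictions of individual members, is given below; sklearn regressors stand in for XGBoost/LightGBM/CatBoost, and the synthetic data is illustrative only.

```python
# Bagging-style ensembling: average the member predictions to smooth out
# individual-model overfitting. Members and data are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [GradientBoostingRegressor(random_state=0),
          RandomForestRegressor(random_state=0)]
preds = [m.fit(X_tr, y_tr).predict(X_te) for m in models]

ensemble_pred = np.mean(preds, axis=0)         # average the member predictions
for m, p in zip(models, preds):
    print(type(m).__name__, mean_absolute_error(y_te, p))
print("ensemble", mean_absolute_error(y_te, ensemble_pred))
```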
APA, Harvard, Vancouver, ISO, and other styles
43

Goettig, Peter, Nikolaj G. Koch, and Nediljko Budisa. "Non-Canonical Amino Acids in Analyses of Protease Structure and Function." International Journal of Molecular Sciences 24, no. 18 (September 13, 2023): 14035. http://dx.doi.org/10.3390/ijms241814035.

Full text
Abstract:
All known organisms encode 20 canonical amino acids by base triplets in the genetic code. The cellular translational machinery produces proteins consisting mainly of these amino acids. Several hundred natural amino acids serve important functions in metabolism, as scaffold molecules, and in signal transduction. New side chains are generated mainly by post-translational modifications, while others have altered backbones, such as the β- or γ-amino acids, or they undergo stereochemical inversion, e.g., in the case of D-amino acids. In addition, the number of non-canonical amino acids has further increased by chemical syntheses. Since many of these non-canonical amino acids confer resistance to proteolytic degradation, they are potential protease inhibitors and tools for specificity profiling studies in substrate optimization and enzyme inhibition. Other applications include in vitro and in vivo studies of enzyme kinetics, molecular interactions and bioimaging, to name a few. Amino acids with bio-orthogonal labels are particularly attractive, enabling various cross-link and click reactions for structure-functional studies. Here, we cover the latest developments in protease research with non-canonical amino acids, which opens up a great potential, e.g., for novel prodrugs activated by proteases or for other pharmaceutical compounds, some of which have already reached the clinical trial stage.
APA, Harvard, Vancouver, ISO, and other styles
44

BARKER, KEVIN J., KEI DAVIS, ADOLFY HOISIE, DARREN J. KERBYSON, MIKE LANG, SCOTT PAKIN, and JOSE CARLOS SANCHO. "A PERFORMANCE EVALUATION OF THE NEHALEM QUAD-CORE PROCESSOR FOR SCIENTIFIC COMPUTING." Parallel Processing Letters 18, no. 04 (December 2008): 453–69. http://dx.doi.org/10.1142/s012962640800351x.

Full text
Abstract:
In this work we present an initial performance evaluation of Intel's latest, second-generation quad-core processor, Nehalem, and provide a comparison to first-generation AMD and Intel quad-core processors Barcelona and Tigerton. Nehalem is the first Intel processor to implement a NUMA architecture incorporating QuickPath Interconnect for interconnecting processors within a node, and the first to incorporate an integrated memory controller. We evaluate the suitability of these processors in quad-socket compute nodes as building blocks for large-scale scientific computing clusters. Our analysis of intra-processor and intra-node scalability of microbenchmarks, and a range of large-scale scientific applications, indicates that quad-core processors can deliver an improvement in performance of up to 4x over a single core depending on the workload being processed. However, scalability can be less when considering a full node. We show that Nehalem outperforms Barcelona on memory-intensive codes by a factor of two for a Nehalem node with 8 cores and a Barcelona node containing 16 cores. Further optimizations are possible with Nehalem, including the use of Simultaneous Multithreading, which improves the performance of some applications by up to 50%.
APA, Harvard, Vancouver, ISO, and other styles
45

Finkbeiner, Bernd, Christopher Hahn, Marvin Stenger, and Leander Tentrup. "Efficient monitoring of hyperproperties using prefix trees." International Journal on Software Tools for Technology Transfer 22, no. 6 (February 20, 2020): 729–40. http://dx.doi.org/10.1007/s10009-020-00552-5.

Full text
Abstract:
Hyperproperties, such as non-interference and observational determinism, relate multiple computation traces with each other and are thus not monitorable by tools that consider computations in isolation. We present the monitoring approach implemented in the latest version of RVHyper, a runtime verification tool for hyperproperties. The input to the tool is a specification given in the temporal logic HyperLTL, which extends linear-time temporal logic (LTL) with trace quantifiers and trace variables. RVHyper processes execution traces sequentially until a violation of the specification is detected. In this case, a counterexample, in the form of a set of traces, is returned. RVHyper employs a range of optimizations: a preprocessing analysis of the specification and a procedure that minimizes the traces that need to be stored during the monitoring process. In this article, we introduce a novel trace storage technique that arranges the traces in a tree-like structure to exploit partially equal traces. We evaluate RVHyper on existing benchmarks on secure information flow control, error correcting codes, and symmetry in hardware designs. As an example application outside of security, we show how RVHyper can be used to detect spurious dependencies in hardware designs.
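A minimal sketch of the prefix-tree idea is given below: traces that share a prefix are stored once in a trie, so the monitor keeps only distinct suffixes. The event representation is an assumption; RVHyper's actual storage and minimization are more involved.

```python
# Trie-based trace storage: a shared prefix is stored only once.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.ends_trace = False

class TraceStore:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, trace):
        node = self.root
        for event in trace:
            node = node.children.setdefault(event, TrieNode())
        node.ends_trace = True

    def count_nodes(self, node=None):
        node = node or self.root
        return 1 + sum(self.count_nodes(c) for c in node.children.values())

store = TraceStore()
store.insert(["init", "read", "write", "halt"])
store.insert(["init", "read", "read", "halt"])   # shares the "init read" prefix
print(store.count_nodes())                        # the shared prefix is stored only once
```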
APA, Harvard, Vancouver, ISO, and other styles
46

Ramasamy, Sai Vigness, and Leonid Bazyma. "Progress in electric propulsion numerical simulation." Aerospace Technic and Technology, no. 4 (August 24, 2023): 62–67. http://dx.doi.org/10.32620/aktt.2023.4.07.

Full text
Abstract:
Electric propulsion has been developed since the early 1960s, and its use onboard satellites, orbiting platforms, and interplanetary probes has increased significantly in the 21st century. The need for a detailed understanding of the working physics and for a more accurate assessment of performance to create innovative designs has stimulated the development of several numerical simulation codes. The choice of method for modelling a specific thruster should be dictated by the physical characteristics of the flow in the device and by the level of accuracy required from the simulation. Flow conditions vary across thruster types, which means that different methods and computer codes must be developed for each of them. The successful development of physically accurate numerical methods for simulating gas and plasma flows in electric propulsion thrusters can significantly improve the design process of these devices. In recent years, numerical simulations have increasingly benefited the basic understanding and engineering optimization of electric thrusters. This is due to several concurrent contributions: the evolution of computer hardware, which has allowed the representation of multidimensional geometries and multiscale phenomena; the implementation of sophisticated new algorithms and numerical diagnostic tools; and the availability of new collisional and surface interaction data. There are two main directions for future work to continue improving the numerical modelling of electric thrusters. First, the numerical methods themselves must be improved in terms of their physical accuracy and computational speed. Second, the simulations require more accurate determination of the physical parameters used by the numerical formulations. This paper outlines efforts to develop models of various electric propulsion concepts, from the first attempts in the early 90s to the latest sophisticated multidimensional simulations.
APA, Harvard, Vancouver, ISO, and other styles
47

Voloshchenko, Olga. "MODERNIZATION AND FUNCTIONING OF PRINCIPLES OF CIVIL PROCEDURAL LAW OF UKRAINE IN THE CONTEXT OF EUROPEAN INTEGRATION PROCESSES." Journal of V. N. Karazin Kharkiv National University, Series "Law", no. 37 (May 28, 2024): 105–11. http://dx.doi.org/10.26565/2075-1834-2024-37-11.

Full text
Abstract:
Introduction. The signing of the Association Agreement between Ukraine and the European Union became a powerful impetus for optimizing law-making processes aimed at improving domestic legislation. The commitments adopted by Ukraine regarding the incorporation of European legislative values into the domestic legal field have become a proper reference point for the legislator in modernizing and supplementing existing provisions in the leading branches of law in general, and in civil procedural law in particular. The adoption of the new version of the Civil Procedure Code of Ukraine in 2017 was a major step in bringing the field of civil justice closer to the standards of the European Union and improving it. A review of the revised provisions concerning the functioning and implementation of the principles of civil procedural law makes a positive impression in terms of compliance with the latest requirements of the practice of the European Court of Human Rights. The renewal of approaches to understanding the essence of some principles of civil justice led to the formation of new doctrinal conclusions on the basic meaning of such principles and their interaction. Summary of the main results. The doctrinal study of the novelization of the principles of civil procedural law in the light of European integration processes made it possible to formulate the following theses: expanding the list of sources of civil procedural law, as a way of revising the principle of the rule of law, strengthened the latter both by expanding the tools for ensuring the adversarial nature of the process and in law-making and law-application practice; the precedent practice of the Grand Chamber of the Supreme Court is considered an element of legal certainty within the principle of the rule of law; the legally enshrined possibility of filing a claim using the "Electronic Court" services strengthened the rule of law in terms of ensuring human rights and expanded the mechanisms for implementing the principle of dispositiveness (in terms of the possibility of choosing how to file claims on the merits of the case, etc.); it was established that the enshrinement of the principle of proportionality in Article 11 of the Civil Procedure Code of Ukraine is an additional guarantee of the proper justification of the decision made by the judge (in the context of writing the motivational part of the court decision). Conclusion. The conducted doctrinal study of the content of the Civil Procedure Code of Ukraine through the prism of the practice of the ECtHR provided an opportunity to draw conclusions of general theoretical importance, which can be used in further scientific development of issues of the functioning and interaction of the principles of civil procedural law.
APA, Harvard, Vancouver, ISO, and other styles
48

Behdani, Amir M., Jessica Lai, Christina Kim, Lama Basalelah, Trey Halsey, Krista L. Donohoe, and Dayanjan Wijesinghe. "Optimizing pharmacogenomic decision-making by data science." PLOS Digital Health 3, no. 2 (February 8, 2024): e0000451. http://dx.doi.org/10.1371/journal.pdig.0000451.

Full text
Abstract:
Healthcare systems have made rapid progress towards combining data science with precision medicine, particularly in pharmacogenomics. With the lack of predictability in medication effectiveness from patient to patient, acquiring the specifics of their genotype would be highly advantageous for patient treatment. Genotype-guided dosing adjustment improves clinical decision-making and helps optimize doses to deliver medications with greater efficacy and within safe margins. Current databases demand extensive effort to locate relevant genetic dosing information. To address this problem, Patient Optimization Pharmacogenomics (POPGx) was constructed. The objective of this paper is to describe the development of POPGx, a tool to simplify the approach for healthcare providers to determine pharmacogenomic dosing recommendations for patients taking multiple medications. Additionally, this tool educates patients on how their allele variations may impact gene function in case they need further healthcare consultations. POPGx was created on Konstanz Information Miner (KNIME). KNIME is a modular environment that allows users to conduct code-free data analysis. The POPGx workflow can access Clinical Pharmacogenomics Implementation Consortium (CPIC) guidelines and subsequently be able to present relevant dosing and counseling information. A KNIME representational state transfer (REST) application program interface (API) node was established to retrieve information from CPIC and drugs that are exclusively metabolized through CYP450, and these drugs were processed simultaneously to demonstrate competency of the workflow. The POPGx program provides a time-efficient method for users to retrieve relevant, patient-specific medication selection and dosing recommendations. Users input metabolizer gene, genetic allele data, and medication list to retrieve clear dosing information. The program is automated to display current guideline recommendations from CPIC. The integration of this program into healthcare systems has the potential to revolutionize patient care by giving healthcare practitioners an easy way to prescribe medications with greater efficacy and safety by utilizing the latest advancements in the field of pharmacogenomics.
APA, Harvard, Vancouver, ISO, and other styles
49

Mietielov, Volodymyr, Oleksiі Marusenko, and Glib Brodskiy. "DEVELOPMENT OF AN APPLICATION FOR THE ANALYSIS AND RESEARCH OF THE PARAMETERS OF SCIENTIFIC TEXT DOCUMENTS CAPABLE OF WORKING WITH SEVERAL OPERATING SYSTEMS." Bulletin of the National Technical University «KhPI» Series: New solutions in modern technologies, no. 4(14) (December 28, 2022): 41–45. http://dx.doi.org/10.20998/2413-4295.2022.04.06.

Full text
Abstract:
In this work, a cross-platform application was designed, developed, and implemented for retrieving information from a repository, displaying it correctly, generating reference descriptions in various styles, and opening the document inside the cross-platform application. The MS Windows 11 operating system, the Dart programming language, and the Flutter framework were used in the development. Optimization of the program was ensured by using the Flutter BLoC architecture, which made it possible to structure the code, separate the interface from the logic, and visually describe the operation of the program depending on various states. The input data are information about the scientific work in the repository and the user's request in the search field in the form of a link. The result of the work is a list of processed links, reference descriptions in different styles, the document of the scientific work, and a link for opening it in a browser. Instructions containing all the necessary information about the main aspects of the application have been added to the cross-platform application. A page was also created where users can get the necessary help or report problems with the application. The application was tested on devices running Android 11 and above, devices running iOS, and a web browser. To run on an Android-based device, you need to install the apk file and run the app; to run on an iOS-based device, you need to install the app using a development environment; to run in a web browser, you need to launch it from the development environment. All these functions make the application relevant for use by students and teachers in the creation of scientific, diploma, and research works, and for getting acquainted with the latest research of colleagues in a large number of areas. The cross-platform application allows you to quickly and conveniently get all available information about a work, as well as view the work itself on your device without using third-party programs.
APA, Harvard, Vancouver, ISO, and other styles
50

Slipachyk, Andriana. "Military novels of labour legislation through the prism of judicial practice." ScienceRise: Juridical Science, no. 4(22) (December 30, 2022): 11–23. http://dx.doi.org/10.15587/2523-4153.2022.268985.

Full text
Abstract:
An analysis of the peculiarities of the legal regulation of labour relations under martial law is presented. Certain aspects of the newly adopted laws on the organization and optimization of labour relations under the special regime, as well as the latest changes to the Labour Code of Ukraine and other labour laws regulating remuneration, suspension, and termination of labour relations in the realities of war, are considered and analysed. The practical implementation, through the prism of judicial practice, of both the innovations in labour legislation and the individual problematic issues that arise during the settlement of labour conflicts (disputes) is demonstrated. Given that the number of cases related to labour conflicts (disputes) that arose after February 24, 2022 will only increase, judges should consider that a formal reference to martial law is not a sufficient justification for the non-fulfilment of obligations assigned to the parties of a labour contract; moreover, when assessing the factual circumstances of a case, it is necessary to take into account the geographic position of the region where the labour activity is performed. The author analyses the activity of the highest bodies of state authority, which is accompanied by the introduction of a number of programs aimed at supporting the national economy and business and stimulating the growth of employment in extremely difficult conditions. The further tendencies and prospects for the development of labour legislation in modern conditions are clarified. The author emphasizes the importance of continuing the economic development of Ukraine, because the more successes there are on the economic front, the more opportunities will appear to improve defence capabilities on the military front, which in turn will be an important step towards our joint victory, in which the participation of each of us is extremely important.
APA, Harvard, Vancouver, ISO, and other styles