Academic literature on the topic 'Functional bootstrapping'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Functional bootstrapping.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Functional bootstrapping"

1

Sharipov, Olimjon Sh, and Martin Wendler. "Bootstrapping covariance operators of functional time series." Journal of Nonparametric Statistics 32, no. 3 (June 1, 2020): 648–66. http://dx.doi.org/10.1080/10485252.2020.1771334.

2

Shang, Han Lin. "Bootstrapping Long-Run Covariance of Stationary Functional Time Series." Forecasting 6, no. 1 (February 5, 2024): 138–51. http://dx.doi.org/10.3390/forecast6010008.

Abstract:
A key summary statistic in a stationary functional time series is the long-run covariance function that measures serial dependence. It can be consistently estimated via a kernel sandwich estimator, which is the core of dynamic functional principal component regression for forecasting functional time series. To measure the uncertainty of the long-run covariance estimation, we consider sieve and functional autoregressive (FAR) bootstrap methods to generate pseudo-functional time series and study variability associated with the long-run covariance. The sieve bootstrap method is nonparametric (i.e., model-free), while the FAR bootstrap method is semi-parametric. The sieve bootstrap method relies on functional principal component analysis to decompose a functional time series into a set of estimated functional principal components and their associated scores. The scores can be bootstrapped via a vector autoregressive representation. The bootstrapped functional time series are obtained by multiplying the bootstrapped scores by the estimated functional principal components. The FAR bootstrap method relies on the FAR of order 1 to model the conditional mean of a functional time series, while residual functions can be bootstrapped via independent and identically distributed resampling. Through a series of Monte Carlo simulations, we evaluate and compare the finite-sample accuracy between the sieve and FAR bootstrap methods for quantifying the estimation uncertainty of the long-run covariance of a stationary functional time series.
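To make the sieve-bootstrap procedure summarized above concrete, here is a minimal NumPy sketch, not the paper's code: the FPCA truncation, the least-squares VAR(1) fit to the scores, and all function and parameter names are illustrative assumptions.

```python
import numpy as np

def sieve_bootstrap_fts(X, n_components=3, n_boot=200, rng=None):
    """Generate pseudo-functional time series: FPCA decomposition, a VAR(1)
    fit to the scores, and i.i.d. resampling of the score residuals.
    X has shape (n_curves, n_gridpoints); each row is one curve on a common grid."""
    rng = np.random.default_rng(rng)
    n, _ = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    # Empirical FPCA via SVD: rows of comps are the estimated principal components.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:n_components]                      # (k, p) eigenfunctions
    scores = Xc @ comps.T                          # (n, k) principal component scores
    # Least-squares VAR(1) for the scores: S_t = S_{t-1} A + e_t.
    Y, Z = scores[1:], scores[:-1]
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Y - Z @ A
    pseudo_series = []
    for _ in range(n_boot):
        e = resid[rng.integers(0, len(resid), size=n)]   # resample residual functions' scores
        S = np.zeros_like(scores)
        S[0] = scores[rng.integers(0, n)]                # random observed starting score
        for t in range(1, n):
            S[t] = S[t - 1] @ A + e[t]
        pseudo_series.append(mu + S @ comps)             # multiply scores back onto components
    return pseudo_series
```

Each returned pseudo-series can then be fed to the same long-run covariance estimator to study its variability.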
3

Beutner, Eric, and Henryk Zähle. "Bootstrapping Average Value at Risk of Single and Collective Risks." Risks 6, no. 3 (September 12, 2018): 96. http://dx.doi.org/10.3390/risks6030096.

Abstract:
Almost sure bootstrap consistency of the blockwise bootstrap for the Average Value at Risk of single risks is established for strictly stationary β-mixing observations. Moreover, almost sure bootstrap consistency of a multiplier bootstrap for the Average Value at Risk of collective risks is established for independent observations. The main results rely on a new functional delta-method for the almost sure bootstrap of uniformly quasi-Hadamard differentiable statistical functionals, to be presented here. The latter seems to be interesting in its own right.
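As an illustration of the blockwise idea discussed above, the following is a hedged moving-block-bootstrap sketch for the Average Value at Risk of a stationary loss series; it is not the authors' construction, and the block length and quantile level are arbitrary choices.

```python
import numpy as np

def avar(losses, level=0.95):
    """Average Value at Risk (expected shortfall): mean loss beyond the VaR quantile."""
    var = np.quantile(losses, level)
    return losses[losses >= var].mean()

def block_bootstrap_avar(losses, block_len=20, n_boot=1000, level=0.95, rng=None):
    """Resample overlapping blocks to preserve serial dependence, then recompute AVaR."""
    rng = np.random.default_rng(rng)
    n = len(losses)
    n_blocks = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        xb = np.concatenate([losses[s:s + block_len] for s in starts])[:n]
        stats[b] = avar(xb, level)
    return stats   # e.g. np.percentile(stats, [2.5, 97.5]) gives a bootstrap interval
```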
4

Okada, Hiroki, Shinsaku Kiyomoto, and Carlos Cid. "Integer-Wise Functional Bootstrapping on TFHE: Applications in Secure Integer Arithmetics." Information 12, no. 8 (July 26, 2021): 297. http://dx.doi.org/10.3390/info12080297.

Abstract:
TFHE is a fast fully homomorphic encryption scheme proposed by Chillotti et al. at Asiacrypt 2016. Integer-wise TFHE is a generalized version of TFHE that can encrypt an integer plaintext; it was implicitly presented by Chillotti et al., and Bourse et al. presented the actual form of the scheme at CRYPTO 2018. However, Bourse et al.'s scheme provides only homomorphic integer additions and homomorphic evaluation of a sign function. In this paper, we construct a technique for evaluating any univariate function in only one bootstrapping of integer-wise TFHE. As applications of the scheme, we also construct useful homomorphic evaluations of several integer arithmetic operations: division, equality testing, and multiplication between integer and binary numbers. Our implementation results show that our homomorphic division is approximately 3.4 times faster than any existing work and that its run time is less than 1 second for 4-bit integer inputs.
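The core idea, encoding a univariate function as a lookup table that a single bootstrapping evaluates while removing noise, can be mimicked at plaintext level. The sketch below is a toy model only (no encryption, no TFHE API is used); all names and parameters are illustrative assumptions.

```python
# Toy, plaintext-only model of functional (programmable) bootstrapping.
# Real TFHE rotates an encrypted test polynomial by a noisy phase; here we only
# mimic the effect: rounding the noisy value selects one entry of f's lookup table.
def functional_bootstrap_model(noisy_value, f, p):
    """noisy_value: a real number close to an integer message m in Z_p.
    Returns f(m) mod p, i.e. noise removal and evaluation of f in one step."""
    lut = [f(m) % p for m in range(p)]       # "test vector" encoding the function f
    m_hat = round(noisy_value) % p           # bootstrapping rounds away the noise
    return lut[m_hat]

# Example: division by 3 on 4-bit integers (p = 16), evaluated via the lookup table.
print(functional_bootstrap_model(13.2, lambda m: m // 3, 16))   # -> 4
```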
5

Rondal, Jean A., and Anne Cession. "Input evidence regarding the semantic bootstrapping hypothesis." Journal of Child Language 17, no. 3 (October 1990): 711–17. http://dx.doi.org/10.1017/s0305000900010965.

Abstract:
The input language addressed to 18 language-learning children (MLU 1.00–3.00) was analysed so as to assess the quality of the semantic-syntactic correspondence posited by the semantic bootstrapping hypothesis. The correspondence appears to be quite satisfactory with little variation from the lower to the higher MLUs. All the persons and things referred to in the corpora were labelled by the mothers using nouns. All the actions referred to were labelled using verbs. Most of the attributive information was conveyed by adjectives. Spatial information was expressed through the use of spatial prepositions. As to the functional categories, all agents of actions and causes of events were encoded as subjects of sentences. All patients, themes, sources, goals, locations, and instruments were encoded as objects of sentences (either direct or oblique). This good semantic-syntactic correspondence may make the child's construction of grammatical categories easier.
6

Shang, Han Lin. "Double bootstrapping for visualizing the distribution of descriptive statistics of functional data." Journal of Statistical Computation and Simulation 91, no. 10 (February 10, 2021): 2116–32. http://dx.doi.org/10.1080/00949655.2021.1885670.

7

Silaban, Daniel Ebenezer, and Irsad Lubis. "BOOTSTRAPPING ANALYSIS OF NORTH SUMATRA PROVINCE INSPECTORATE PERFORMANCE." Jurnal Riset Bisnis dan Manajemen 16, no. 1 (February 22, 2023): 83–90. http://dx.doi.org/10.23969/jrbm.v16i1.7228.

Abstract:
The goal of this study is to analyze the factors that influence how well the inspectorate in the province of North Sumatra performs. This research is important because it aims to evaluate the performance of the inspectorate's officials and staff in North Sumatra Province so that measures may be taken to raise employee professionalism and quality. The Smart PLS program was employed as the basis for the quantitative research approach. The research subjects were 48 officials and workers in the functional position of auditors who were taken using the Slovin technique. The results indicate that the inspectorate of the North Sumatra Province performs poorly, particularly when it comes to the implementation of good governance and leadership commitment, where indirect good governance has not been able to mediate leadership commitment.
8

Brodal, Gerth Stølting, and Chris Okasaki. "Optimal purely functional priority queues." Journal of Functional Programming 6, no. 6 (November 1996): 839–57. http://dx.doi.org/10.1017/s095679680000201x.

Abstract:
Brodal recently introduced the first implementation of imperative priority queues to support findMin, insert, and meld in O(1) worst-case time, and deleteMin in O(log n) worst-case time. These bounds are asymptotically optimal among all comparison-based priority queues. In this paper, we adapt Brodal's data structure to a purely functional setting. In doing so, we both simplify the data structure and clarify its relationship to the binomial queues of Vuillemin, which support all four operations in O(log n) time. Specifically, we derive our implementation from binomial queues in three steps: first, we reduce the running time of insert to O(1) by eliminating the possibility of cascading links; second, we reduce the running time of findMin to O(1) by adding a global root to hold the minimum element; and finally, we reduce the running time of meld to O(1) by allowing priority queues to contain other priority queues. Each of these steps is expressed using ML-style functors. The last transformation, known as data-structural bootstrapping, is an interesting application of higher-order functors and recursive structures.
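A rough flavour of "priority queues containing other priority queues" can be given in a few lines of Python. This is emphatically not the Brodal-Okasaki structure (there are no skew binomial queues here, so deleteMin is not O(log n) worst case); it only illustrates why findMin and meld become O(1) once the global minimum sits at a root.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Heap:
    root: int                          # global minimum of this heap
    children: Tuple["Heap", ...] = ()  # heaps stored inside a heap

def meld(a: Optional[Heap], b: Optional[Heap]) -> Optional[Heap]:
    if a is None:
        return b
    if b is None:
        return a
    # O(1): the heap with the larger root simply becomes a child of the other.
    if a.root <= b.root:
        return Heap(a.root, (b,) + a.children)
    return Heap(b.root, (a,) + b.children)

def insert(h: Optional[Heap], x: int) -> Heap:
    return meld(h, Heap(x))

def find_min(h: Heap) -> int:
    return h.root                      # O(1): the minimum is at the root

def delete_min(h: Heap) -> Optional[Heap]:
    out = None
    for child in h.children:           # meld the orphaned sub-heaps back together
        out = meld(out, child)
    return out
```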
9

LÖFQVIST, LARS. "PRODUCT INNOVATION IN SMALL COMPANIES: MANAGING RESOURCE SCARCITY THROUGH FINANCIAL BOOTSTRAPPING." International Journal of Innovation Management 21, no. 02 (February 2017): 1750020. http://dx.doi.org/10.1142/s1363919617500207.

Abstract:
Researchers have proposed that scarce resources are the main factor hindering product innovation in small companies. However, despite scarce resources, small companies do innovate, so the research question is: How do small companies manage resource scarcity in product innovation? To answer the research question a multiple case study of three small established companies and their product innovation was used, including interviews and observations over a period of five months. The small companies were found to use many different bootstrapping methods in combination within their product innovation. The methods can be classified into three different functional categories: bootstrapping methods for increasing resources, for using existing resources more efficiently, and those for securing a fast payback on resources put into product innovation. Due to their resource scarcity, the studied companies also favoured an innovation strategy only involving new products done with known technology and targeting existing markets. This strategy seems to avoid unsuccessful innovation but at the same time exclude technologically radical innovation.
10

Dar, Davood, Lionel Lacombe, and Neepa T. Maitra. "The exact exchange–correlation potential in time-dependent density functional theory: Choreographing electrons with steps and peaks." Chemical Physics Reviews 3, no. 3 (September 2022): 031307. http://dx.doi.org/10.1063/5.0096627.

Abstract:
The time-dependent exchange–correlation potential has the unusual task of directing fictitious non-interacting electrons to move with exactly the same probability density as true interacting electrons. This has intriguing implications for its structure, especially in the non-perturbative regime, leading to step and peak features that cannot be captured by bootstrapping any ground-state functional approximation. We review what has been learned about these features in the exact exchange–correlation potential of time-dependent density functional theory in the past decade or so and implications for the performance of simulations when electrons are driven far from any ground state.

Dissertations / Theses on the topic "Functional bootstrapping"

1

Zhan, Yihui. "Bootstrapping functional M-estimators /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/8958.

2

Clet, Pierre-Emmanuel. "Contributions to the optimization of TFHE's functional bootstrapping for the evaluation of non-polynomial operators." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG001.

Abstract:
In recent years, concerns about sensitive and personal data have grown with the ever-increasing creation and use of digital data. New laws, such as the General Data Protection Regulation, have been introduced to ensure that the confidentiality of individuals' data is respected. However, the growing outsourcing of data processing, particularly with the emergence of "machine learning as a service", raises the following question: is it possible to let a third party process our data while keeping it confidential? One solution to this problem comes in the form of Fully Homomorphic Encryption, or FHE for short. Using FHE cryptosystems, operations can be applied directly to encrypted messages, without ever revealing either the original message or the message resulting from the operations. In theory, this collection of techniques makes it possible to outsource computations without compromising the confidentiality of the data used in those computations. This could pave the way for numerous applications, such as online medical diagnostic services that ensure the total confidentiality of patients' medical data. Despite this promise, the high computational cost of FHE operators limits their practical scope: a computation on encrypted data can take several million times longer than its equivalent on unencrypted data, which makes the evaluation of highly complex algorithms on encrypted data impractical. In addition, the memory overhead of FHE encryption amounts to a multiplicative factor of several thousand, which may prove prohibitive for applications on low-memory platforms such as embedded systems. In this thesis we develop a new primitive for computing on encrypted data based on the "functional bootstrapping" operation supported by the TFHE cryptosystem. This primitive provides gains in latency and memory compared with other comparable state-of-the-art techniques. We also introduce a second primitive enabling computations to be performed as logic circuits, providing a significant gain in computation speed compared with the state of the art; this approach may be of particular interest to designers of homomorphic compilers as an alternative to binary encryption. Both tools are intended to be generic enough to apply to a wide range of use cases and are therefore not limited to those presented in this manuscript. As an illustration, we apply our operators to the confidential evaluation of outsourced neural networks, demonstrating that neural networks, including recurrent ones, can be evaluated with relatively low latency. Finally, we apply our operators to a technique known as transciphering, which overcomes the client-side memory limitations caused by the large size of FHE ciphertexts.
3

Kumbhare, Deepak. "3D FUNCTIONAL MODELING OF DBS EFFICACY AND DEVELOPMENT OF ANALYTICAL TOOLS TO EXPLORE FUNCTIONAL STN." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/2531.

Abstract:
Introduction: Exploring the brain for optimal locations for deep brain stimulation (DBS) therapy is a challenging task, which can be facilitated by analysis of DBS efficacy in a large number of patients with Parkinson’s disease (PD). The Unified Parkinson's Disease Rating Scale (UPDRS) scores indicate the DBS efficacy of the corresponding stimulation location in a particular patient. The spatial distribution of these clinical scores can be used to construct a functional model which closely models the expected efficacy of stimulation in the region. Designs and Methods: In this study, different interpolation techniques were investigated that can appropriately model the DBS efficacy for Parkinson’s disease patients. These techniques are linear triangulation based interpolation, ‘roving window’ interpolation and ‘Monopolar inverse weighted distance’ (MIDW) interpolation. The MIDW interpolation technique is developed on the basis of electric field geometry of the monopolar DBS stimulation electrodes, based on the DBS model of monopolar cathodic stimulation of brain tissues. Each of these models was evaluated for their predictability, interpolation accuracy, as well as other benefits and limitations. The bootstrapping based optimization method was proposed to minimize the observational and patient variability in the collected database. A simulation study was performed to validate that the statistically optimized interpolated models were capable to produce reliable efficacy contour plots and reduced false effect due to outliers. Some additional visualization and analysis tools including a graphic user interface (GUI) were also developed for better understanding of the scenario. Results: The interpolation performance of the MIDW interpolation, the linear triangulation method and Roving window method was evaluated as interpolation error as 0.0903, 0.1219 and0.3006 respectively. Degree of prediction for the above methods was found to be 0.0822, 0.2986 and 0.0367 respectively. The simulation study demonstrate that the mean improvement in outlier handling and increased reliability after bootstrapping based optimization (performed on Linear triangulation interpolation method) is 6.192% and 12.8775% respectively. The different interpolation techniques used to model monopolar and bipolar stimulation data is found to be useful to study the corresponding efficacy distribution. A user friendly GUI (PDRP_GUI) and other utility tools are developed. Conclusion: Our investigation demonstrated that the MIDW and linear triangulation methods provided better degree of prediction, whereas the MIDW interpolation with appropriate configuration provided better interpolation accuracy. The simulation study suggests that the bootstrapping-based optimization can be used as an efficient tool to reduce outlier effects and increase interpolated reliability of the functional model of DBS efficacy. Additionally, the differential interpolation techniques used for monopolar and bipolar stimulation modeling facilitate study of overall DBS efficacy using the entire dataset.
4

Kleyn, Judith. "The performance of the preliminary test estimator under different loss functions." Thesis, University of Pretoria, 2014. http://hdl.handle.net/2263/43132.

Abstract:
In this thesis different situations are considered in which the preliminary test estimator is applied and the performance of the preliminary test estimator under different proposed loss functions, namely the reflected normal , linear exponential (LINEX) and bounded LINEX (BLINEX) loss functions is evaluated. In order to motivate the use of the BLINEX loss function rather than the reflected normal loss or the LINEX loss function, the risk for the preliminary test estimator and its component estimators derived under BLINEX loss is compared to the risk of the preliminary test estimator and its components estimators derived under both reflected normal loss and LINEX loss analytically (in some sections) and computationally. It is shown that both the risk under reflected normal loss and the risk under LINEX loss is higher than the risk under BLINEX loss. The key focus point under consideration is the estimation of the regression coefficients of a multiple regression model under two conditions, namely the presence of multicollinearity and linear restrictions imposed on the regression coefficients. In order to address the multicollinearity problem, the regression coefficients were adjusted by making use of Hoerl and Kennard’s (1970) approach in ridge regression. Furthermore, in situations where under- or overestimation exist, symmetric loss functions will not give optimal results and it was necessary to consider asymmetric loss functions. In the economic application, it was shown that a loss function which is both asymmetric and bounded to ensure a maximum upper bound for the loss, is the most appropriate function to use. In order to evaluate the effect that different ridge parameters have on the estimation, the risk values were calculated for all three ridge regression estimators under different conditions, namely an increase in variance, an increase in the level of multicollinearity, an increase in the number of parameters to be estimated in the regression model and an increase in the sample size. These results were compared to each other and summarised for all the proposed estimators and proposed loss functions. The comparison of the three proposed ridge regression estimators under all the proposed loss functions was also summarised for an increase in the sample size and an increase in variance.
Thesis (PhD)--University of Pretoria, 2014.
5

Cardozo, Sandra Vergara. "Função da probabilidade da seleção do recurso (RSPF) na seleção de habitat usando modelos de escolha discreta." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-11032009-143806/.

Abstract:
In ecology, the behavior of animals is often studied to better understand their preferences for different types of habitat and food. The present work is concerned with this topic and is divided into three chapters. The first concerns the estimation of a resource selection probability function (RSPF) compared with a single-choice discrete choice model (DCM), using chi-squared statistics to obtain the estimates; the best estimates were obtained by the single-choice DCM. However, animals do not make their selection based on only one choice, and with the RSPF the maximum likelihood estimates used in the logistic regression still did not meet the objectives, since the animals have more than one choice. R, Minitab, and the FORTRAN programming language were used for the computations in this chapter. The second chapter discusses further the likelihood introduced in the first chapter. A new likelihood for the RSPF is presented, which takes into account the used and unused units, and parametric and non-parametric bootstrapping are employed to study the bias and variance of the parameter estimators, using a FORTRAN program for the calculations. In the third chapter, the new likelihood presented in chapter 2 is used with a discrete choice model to resolve part of the problem raised in the first chapter. A nested structure is proposed for modelling habitat selection by 28 spotted owls (Strix occidentalis), together with a generalized nested logit model using random utility maximization and a random RSPF. Numerical optimization methods and the SAS system were employed to estimate the nested structural parameters.
6

Wu, Mengjiao. "Equivalence testing for identity authentication using pulse waves from photoplethysmograph." Diss., 2019. http://hdl.handle.net/2097/39461.

Abstract:
Doctor of Philosophy
Department of Statistics
Suzanne Dubnicka
Christopher Vahl
Photoplethysmograph sensors use a light-based technology to sense the rate of blood flow as controlled by the heart’s pumping action. This allows for a graphical display of a patient’s pulse wave form and the description of its key features. A person’s pulse wave has been proposed as a tool in a wide variety of applications. For example, it could be used to diagnose the cause of coldness felt in the extremities or to measure stress levels while performing certain tasks. It could also be applied to quantify the risk of heart disease in the general population. In the present work, we explore its use for identity authentication. First, we visualize the pulse waves from individual patients using functional boxplots which assess the overall behavior and identify unusual observations. Functional boxplots are also shown to be helpful in preprocessing the data by shifting individual pulse waves to a proper starting point. We then employ functional analysis of variance (FANOVA) and permutation tests to demonstrate that the identities of a group of subjects could be differentiated and compared by their pulse wave forms. One of the primary tasks of the project is to confirm the identity of a person, i.e., we must decide if a given person is whom they claim to be. We used an equivalence test to determine whether the pulse wave of the person under verification and the actual person were close enough to be considered equivalent. A nonparametric bootstrap functional equivalence test was applied to evaluate equivalence by constructing point-wise confidence intervals for the metric of identity assurance. We also proposed new testing procedures, including the way of building the equivalence hypothesis and test statistics, determination of evaluation range and equivalence bands, to authenticate the identity.
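A minimal sketch of the pointwise bootstrap band underlying such an equivalence check might look as follows. This is illustrative only and not the thesis procedure; the equivalence margin, grid, and all names are assumptions.

```python
import numpy as np

def pointwise_bootstrap_band(curves_a, curves_b, n_boot=1000, alpha=0.05, rng=None):
    """Pointwise bootstrap band for the difference between two groups of curves
    (rows = replicate pulse waves sampled on a common grid)."""
    rng = np.random.default_rng(rng)
    diffs = np.empty((n_boot, curves_a.shape[1]))
    for b in range(n_boot):
        ia = rng.integers(0, len(curves_a), len(curves_a))  # resample whole curves
        ib = rng.integers(0, len(curves_b), len(curves_b))
        diffs[b] = curves_a[ia].mean(axis=0) - curves_b[ib].mean(axis=0)
    lower, upper = np.quantile(diffs, [alpha / 2, 1 - alpha / 2], axis=0)
    return lower, upper

# Equivalence (in this toy sense) would be declared where the whole band lies
# inside a pre-specified margin: np.all((lower > -margin) & (upper < margin)).
```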
7

Chen, Li-Jie, and 陳立杰. "A Bootstrapping Approach to Cluster Analysis for Gene Expression Data with Incorporation of Gene Functional Similarity." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/72309351459073201386.

Abstract:
Master's thesis
National University of Tainan
Master's Program, Department of Computer Science and Information Engineering
103 (ROC academic year)
This thesis addressed the problem of incorporating gene semantic similarity into cluster analysis of gene expression data. The purpose was to effectively enhance the biological relevance of the clustering results by simultaneously considering the two features transformed from the gene expression data and gene functional annotations. The key issue of this problem was then on the determination of appropriate feature weights. The method proposed by past related studies needed to manually adjust the feature weights to select the best results. This thesis proposed an automatic feature-weights-determination clustering algorithm that integrated bootstrap and K-medoids methods for clustering gene expression data. The proposed method was validated by applying the method to two sets of frequently used real-life experimental gene expression data of budding yeast Saccharomyces cerevisiae obtained from past related studies. The results were analyzed and compared with the method proposed by past related studies using three well-known external criteria for clustering evaluation. The results indicated that the proposed algorithm can produce comparable or more valid gene expression clustering results than the method proposed by past related studies.
8

Jabbari, Arfaee Shahab. "Bootstrap Learning of Heuristic Functions." Master's thesis, 2010. http://hdl.handle.net/10048/1589.

Abstract:
We investigate the use of machine learning to create effective heuristics for single-agent search. Our method aims to generate a sequence of heuristics from a given weak heuristic h_0 and a set of unlabeled training instances using a bootstrapping procedure. The training instances that can be solved using h_0 provide training examples for a learning algorithm that produces a heuristic h_1 that is expected to be stronger than h_0. If h_0 is so weak that it cannot solve any of the given instances, we use random walks backward from the goal state to create a sequence of successively more difficult training instances, starting with ones that are guaranteed to be solvable by h_0. The bootstrap process is then repeated using h_i instead of h_{i-1} until a sufficiently strong heuristic is produced. We test this method on the 15- and 24-sliding tile puzzles, the 17-, 24-, and 35-pancake puzzles, Rubik's Cube, and the 15- and 20-blocks world. In every case our method produces heuristics that allow IDA* to solve randomly generated problem instances quickly, with solutions very close to optimal. The total time for the bootstrap process to create strong heuristics for large problems is several days. To make the process efficient when only a single test instance needs to be solved, we look for a balance between the time spent on learning better heuristics and the time needed to solve the test instance using the current set of learned heuristics. We alternate between the execution of two threads, namely the learning thread (to learn better heuristics) and the solving thread (to solve the test instance). The solving thread is split up into sub-threads. The first solving sub-thread aims at solving the instance using the initial heuristic. When a new heuristic is learned in the learning thread, an additional solving sub-thread is started which uses the new heuristic to try to solve the instance. The total time by which we evaluate this process is the sum of the times used by both threads up to the point when the instance is solved in one sub-thread. The experimental results of this method on large search spaces demonstrate that single instances of large problems are solved substantially faster than the total time needed for the bootstrap process, while the solutions obtained are still very close to optimal.
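Schematically, the bootstrap loop described above can be written as follows. This is not the author's code: ida_star_solve and fit_regressor are hypothetical placeholders for a time-bounded solver and a supervised learner, and a returned solution is assumed to be a list of (state, cost-to-go) pairs.

```python
def bootstrap_heuristics(h0, instances, rounds, time_limit, ida_star_solve, fit_regressor):
    """Grow a sequence of heuristics: instances solvable under the current heuristic
    become labelled training data (true cost-to-go) for the next, stronger one."""
    h, heuristics = h0, [h0]
    for _ in range(rounds):
        examples = []
        for inst in instances:
            solution = ida_star_solve(inst, heuristic=h, time_limit=time_limit)
            if solution is not None:
                # label every state on the solution path with its remaining cost
                examples.extend(solution)
        if not examples:
            break  # heuristic too weak; the thesis falls back to random-walk instances
        h = fit_regressor(examples)   # learn h_i from instances solved with h_{i-1}
        heuristics.append(h)
    return heuristics
```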
9

Roach, Lisa Aretha Nyala. "Temporal Variations in the Compliance of Gas Hydrate Formations." Thesis, 2012. http://hdl.handle.net/1807/44081.

Abstract:
Seafloor compliance is a non-intrusive geophysical method sensitive to the shear modulus of the sediments below the seafloor. A compliance analysis requires the computation of the frequency dependent transfer function between the vertical stress, produced at the seafloor by the ultra low frequency passive source, infra-gravity waves, and the resulting displacement, related to velocity through the frequency. The displacement of the ocean floor is dependent on the elastic structure of the sediments and the compliance function is tuned to different depths, i.e., a change in the elastic parameters at a given depth is sensed by the compliance function at a particular frequency. In a gas hydrate system, the magnitude of the stiffness is a measure of the quantity of gas hydrates present. Gas hydrates contain immense stores of greenhouse gases making them relevant to climate change science, and represent an important potential alternative source of energy. Bullseye Vent is a gas hydrate system located in an area that has been intensively studied for over 2 decades and research results suggest that this system is evolving over time. A partnership with NEPTUNE Canada allowed for the investigation of this possible evolution. This thesis describes a compliance experiment configured for NEPTUNE Canada's seafloor observatory and its failure. It also describes the use of 203 days of simultaneously logged pressure and velocity time-series data, measured by a Scripps differential pressure gauge, and a Güralp CMG-1T broadband seismometer on NEPTUNE Canada's seismic station, respectively, to evaluate variations in sediment stiffness near Bullseye. The evaluation resulted in a (−4.49 × 10⁻³ ± 3.52 × 10⁻³)% change of the transfer function of 3rd October, 2010 and represents a 2.88% decrease in the stiffness of the sediments over the period. This thesis also outlines a new algorithm for calculating the static compliance of isotropic layered sediments.

Books on the topic "Functional bootstrapping"

1

McMurry, Timothy, and Dimitris Politis. Resampling methods for functional data. Edited by Frédéric Ferraty and Yves Romain. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199568444.013.7.

Abstract:
This article examines the current state of methodological and practical developments for resampling inference techniques in functional data analysis, paying special attention to situations where either the data and/or the parameters being estimated take values in a space of functions. It first provides the basic background and notation before discussing bootstrap results from nonparametric smoothing, taking into account confidence bands in density estimation as well as confidence bands in nonparametric regression and autoregression. It then considers the major results in subsampling and what is known about bootstraps, along with a few recent real-data applications of bootstrapping with functional data. Finally, it highlights possible directions for further research and exploration.
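As one example of the bootstrap uses surveyed in this chapter, a pairs-bootstrap sketch of pointwise confidence bands for a kernel regression curve could look like this; it is illustrative only, and the Gaussian kernel, bandwidth, and names are arbitrary choices.

```python
import numpy as np

def nw_smooth(x, y, grid, h):
    """Nadaraya-Watson kernel regression estimate on `grid` with bandwidth h."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def bootstrap_regression_band(x, y, grid, h=0.2, n_boot=500, alpha=0.05, rng=None):
    """Pairs bootstrap: resample (x, y) pairs, refit the smoother, take pointwise quantiles."""
    rng = np.random.default_rng(rng)
    curves = np.empty((n_boot, len(grid)))
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        curves[b] = nw_smooth(x[idx], y[idx], grid, h)
    return np.quantile(curves, [alpha / 2, 1 - alpha / 2], axis=0)
```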
2

Cheng, Russell. Bootstrap Analysis. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0004.

Abstract:
Parametric bootstrapping (BS) provides an attractive alternative, both theoretically and numerically, to asymptotic theory for estimating sampling distributions. This chapter summarizes its use not only for calculating confidence intervals for estimated parameters and functions of parameters, but also to obtain log-likelihood-based confidence regions from which confidence bands for cumulative distribution and regression functions can be obtained. All such BS calculations are very easy to implement. Details are also given for calculating critical values of EDF statistics used in goodness-of-fit (GoF) tests, such as the Anderson-Darling A² statistic, whose null distribution is otherwise difficult to obtain, as it varies with different null hypotheses. A simple proof is given showing that the parametric BS is probabilistically exact for location-scale models. A formal regression lack-of-fit test employing parametric BS is given that can be used even when the regression data has no replications. Two real data examples are given.
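A minimal parametric-bootstrap sketch in the spirit of this chapter, assuming a simple normal model and percentile intervals for the fitted parameters; it is not the book's own code, and the model choice is an assumption.

```python
import numpy as np

def parametric_bootstrap_ci(x, n_boot=2000, alpha=0.05, rng=None):
    """Fit a normal model by maximum likelihood, simulate from the fitted model,
    refit on each pseudo-sample, and read off percentile confidence intervals."""
    rng = np.random.default_rng(rng)
    mu_hat, sigma_hat = x.mean(), x.std(ddof=0)            # MLEs under N(mu, sigma^2)
    boots = np.empty((n_boot, 2))
    for b in range(n_boot):
        xb = rng.normal(mu_hat, sigma_hat, size=len(x))    # simulate from the fitted model
        boots[b] = xb.mean(), xb.std(ddof=0)               # refit on each pseudo-sample
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2], axis=0)
    return {"mu": (lo[0], hi[0]), "sigma": (lo[1], hi[1])}
```

The same simulate-and-refit loop, with a goodness-of-fit statistic computed on each pseudo-sample, yields bootstrap critical values for EDF tests.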

Book chapters on the topic "Functional bootstrapping"

1

Okada, Hiroki, Shinsaku Kiyomoto, and Carlos Cid. "Integerwise Functional Bootstrapping on TFHE." In Lecture Notes in Computer Science, 107–25. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62974-8_7.

2

Li, Zhihao, Benqiang Wei, Ruida Wang, Xianhui Lu, and Kunpeng Wang. "Full Domain Functional Bootstrapping with Least Significant Bit Encoding." In Information Security and Cryptology, 203–23. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-0942-7_11.

3

Bendoukha, Adda-Akram, Pierre-Emmanuel Clet, Aymen Boudguiga, and Renaud Sirdey. "Optimized Stream-Cipher-Based Transciphering by Means of Functional-Bootstrapping." In Data and Applications Security and Privacy XXXVII, 91–109. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37586-6_6.

4

Clet, Pierre-Emmanuel, Aymen Boudguiga, Renaud Sirdey, and Martin Zuber. "ComBo: A Novel Functional Bootstrapping Method for Efficient Evaluation of Nonlinear Functions in the Encrypted Domain." In Progress in Cryptology - AFRICACRYPT 2023, 317–43. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37679-5_14.

5

Liu, Zeyu, and Yunhao Wang. "Amortized Functional Bootstrapping in Less than 7 ms, with Õ(1) Polynomial Multiplications." In Advances in Cryptology – ASIACRYPT 2023, 101–32. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8736-8_4.

6

Csörgő, Miklós, Sándor Csörgő, and Lajos Horváth. "Bootstrapping Empirical Functions." In An Asymptotic Theory for Empirical Reliability and Concentration Processes, 150–64. New York, NY: Springer New York, 1986. http://dx.doi.org/10.1007/978-1-4615-6420-1_17.

7

Applebaum, Benny. "Bootstrapping Obfuscators via Fast Pseudorandom Functions." In Lecture Notes in Computer Science, 162–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45608-8_9.

8

Romero, Alejandro, Francisco Bellas, Jose A. Becerra, and Richard J. Duro. "Bootstrapping Autonomous Skill Learning in the MDB Cognitive Architecture." In Understanding the Brain Function and Emotions, 120–29. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-19591-5_13.

9

Yang, Kaifeng, and Michael Affenzeller. "Surrogate-assisted Multi-objective Optimization via Genetic Programming Based Symbolic Regression." In Lecture Notes in Computer Science, 176–90. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27250-9_13.

Abstract:
Surrogate-assisted optimization algorithms are a commonly used technique to solve expensive-evaluation problems, in which a regression model is built to replace an expensive function. In some acquisition functions, the only requirement for a regression model is the predictions. However, some other acquisition functions also require a regression model to estimate the "uncertainty" of the prediction, instead of merely providing predictions. Unfortunately, very few statistical modeling techniques can achieve this, such as Kriging/Gaussian processes, and recently proposed genetic programming-based (GP-based) symbolic regression with Kriging (GP2). Another method is to use a bootstrapping technique in GP-based symbolic regression to estimate prediction and its corresponding uncertainty. This paper proposes to use GP-based symbolic regression and its variants to solve multi-objective optimization problems (MOPs), which are under the framework of a surrogate-assisted multi-objective optimization algorithm (SMOA). Kriging and random forest are also compared with GP-based symbolic regression and GP2. Experiment results demonstrate that the surrogate models using the GP2 strategy can improve SMOA's performance.
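The bootstrapping technique mentioned in the abstract, fitting an ensemble of models to resampled data and using the spread of their predictions as an uncertainty estimate, can be sketched generically. Here fit and predict are hypothetical placeholders for any regressor, for instance a GP-based symbolic regression model; this is not the authors' implementation.

```python
import numpy as np

def bootstrap_uncertainty(x, y, x_new, fit, predict, n_boot=100, rng=None):
    """Train `fit` on resampled data and use the spread of `predict` outputs as the
    uncertainty needed by acquisition functions such as expected improvement."""
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        preds.append(predict(fit(x[idx], y[idx]), x_new))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)   # point prediction, uncertainty
```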
10

Geelen, Robin, Ilia Iliashenko, Jiayi Kang, and Frederik Vercauteren. "On Polynomial Functions Modulo p^e and Faster Bootstrapping for Homomorphic Encryption." In Advances in Cryptology – EUROCRYPT 2023, 257–86. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30620-4_9.


Conference papers on the topic "Functional bootstrapping"

1

"Bootstrapping functional data: a study of distributional property of sample eigenvalues." In 19th International Congress on Modelling and Simulation. Modelling and Simulation Society of Australia and New Zealand (MSSANZ), Inc., 2011. http://dx.doi.org/10.36334/modsim.2011.aa.shang.

2

Weiss, Benjamin M., Joshua M. Hamel, Mark A. Ganter, and Duane W. Storti. "Data-Driven Additive Manufacturing Constraints for Topology Optimization." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85391.

Abstract:
The topology optimization (TO) of structures to be produced using additive manufacturing (AM) is explored using a data-driven constraint function that predicts the minimum producible size of small features in different shapes and orientations. This shape- and orientation-dependent manufacturing constraint, derived from experimental data, is implemented within a TO framework using a modified version of the Moving Morphable Components (MMC) approach. Because the analytic constraint function is fully differentiable, gradient-based optimization can be used. The MMC approach is extended in this work to include a “bootstrapping” step, which provides initial component layouts to the MMC algorithm based on intermediate Solid Isotropic Material with Penalization (SIMP) topology optimization results. This “bootstrapping” approach improves convergence compared to reference MMC implementations. Results from two compliance design optimization example problems demonstrate the successful integration of the manufacturability constraint in the MMC approach, and the optimal designs produced show minor changes in topology and shape compared to designs produced using fixed-radius filters in the traditional SIMP approach. The use of this data-driven manufacturability constraint makes it possible to take better advantage of the achievable complexity in additive manufacturing processes, while resulting in typical penalties to the design objective function of around only 2% when compared to the unconstrained case.
3

Page, Alvaro, Noelia López, William Ricardo Venegas, and Pilar Serra. "Comparación de la normalización lineal de la escala de tiempos con el registro funcional continuo en movimientos cíclicos del cuello." In 11 Simposio CEA de Bioingeniería. València: Editorial Universitat Politècnica de València, 2019. http://dx.doi.org/10.4995/ceabioing.2019.10027.

Abstract:
Time-scale normalization is a necessary step for applying functional data analysis techniques to the study of human movements. The standard technique is linear normalization which, despite its simplicity, can be ineffective at reducing variability in the duration of events [1]. An alternative is continuous registration, which involves a nonlinear adjustment of the time scale [2]. In this case, the temporal information is retained in the warping functions, which relate the modified time scale to the average one. However, this procedure is complex and computationally expensive. Moreover, it has been noted that in cyclic movements such as chewing there are hardly any differences between the methods [3]. In this work, both methods are compared for the cyclic flexion-extension movement of the neck, analysing the angle and angular velocity functions. Using a database of 437 complete neck extension-flexion cycles, both types of time-scale normalization were applied. Differences in the mean curves and the functional standard deviations were analysed, with their mean values and confidence intervals established through a bootstrapping procedure. The results show hardly any differences in the mean curves obtained by the two procedures, although there are differences in the functional standard deviations, which are somewhat smaller for nonlinear registration. Furthermore, the results obtained with nonlinear registration differ depending on whether the position curves or the velocity curves are used as the reference for rescaling. These results suggest that nonlinear registration, although it may be useful for analysing non-periodic signals or when functions and their derivatives do not have to be analysed together, offers no important advantages over linear normalization in the case of cyclic movements. REFERENCES [1] Page, A., & Epifanio, I. (2007). A simple model to analyze the effectiveness of linear time normalization to reduce variability in human movement analysis. Gait & posture, 25(1), 153-156. [2] Page, A., et al. (2006). Normalizing temporal patterns to analyze sit-to-stand movements by using registration of functional data. Journal of biomechanics, 39(13), 2526-2534. [3] Crane, E. et al. (2010). Effect of registration on cyclical kinematic data. Journal of biomechanics, 43(12), 2444-2447.
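For orientation, linear time normalization and a bootstrap of the pointwise (functional) standard deviation can be sketched as follows; this is illustrative only, not the authors' implementation, and the grid size and confidence level are assumptions.

```python
import numpy as np

def linear_time_normalize(cycle, n_points=101):
    """Linearly rescale one movement cycle onto a common 0-100% time grid."""
    t_old = np.linspace(0.0, 1.0, len(cycle))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(t_new, t_old, cycle)

def bootstrap_functional_sd(cycles, n_boot=1000, rng=None):
    """Bootstrap the pointwise standard deviation of the normalized cycles (rows)
    to obtain a confidence interval for the functional SD."""
    rng = np.random.default_rng(rng)
    sds = np.array([cycles[rng.integers(0, len(cycles), len(cycles))].std(axis=0)
                    for _ in range(n_boot)])
    return np.quantile(sds, [0.025, 0.975], axis=0)
```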
4

Vahdat, Kimia, and Sara Shashaani. "Non-Parametric Uncertainty Bias and Variance Estimation via Nested Bootstrapping and Influence Functions." In 2021 Winter Simulation Conference (WSC). IEEE, 2021. http://dx.doi.org/10.1109/wsc52266.2021.9715420.

5

Luo, Zhenjun, and Jian S. Dai. "Patterned Bootstrap: A New Method Which Gives Efficiency for Precision Position Synthesis of Planar Linkages." In ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/detc2005-84658.

Abstract:
This paper presents a new method, termed the patterned bootstrap (PB), which is suitable for precision position synthesis of planar linkages. The method solves a determined system of equations using a new bootstrapping strategy. In principle, a randomly generated starting point is advanced to a final solution by solving a number of intermediate systems. The structure and the associated parameters of each intermediate system are defined as a pattern. In practice, a PB procedure generally consists of two levels: an upper level which controls the transition of patterns, and a lower level which solves intermediate systems using globally convergent root-finding algorithms. Besides introducing the new method, tunnelling functions have been added to several systems of polynomials derived by previous researchers in order to exclude degenerate solutions. Our numerical experiments demonstrate that many precision position synthesis problems can be solved efficiently without resorting to time-consuming polynomial homotopy continuation methods or interval methods. Finding over 95 percent of the complete solutions of the 11-precision-position function generation problem of a Stephenson-III linkage has been achieved for the first time.
6

Wei, Hua, Deheng Ye, Zhao Liu, Hao Wu, Bo Yuan, Qiang Fu, Wei Yang, and Zhenhui Li. "Boosting Offline Reinforcement Learning with Residual Generative Modeling." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/492.

Abstract:
Offline reinforcement learning (RL) tries to learn a near-optimal policy from recorded offline experience without online exploration. Current offline RL research includes: 1) generative modeling, i.e., approximating a policy using fixed data; and 2) learning the state-action value function. While most research focuses on the state-action value function through reducing the bootstrapping error in value function approximation induced by the distribution shift of the training data, the effects of error propagation in generative modeling have been neglected. In this paper, we analyze the error in generative modeling. We propose AQL (action-conditioned Q-learning), a residual generative model to reduce policy approximation error for offline RL. We show that our method can learn more accurate policy approximations on different benchmark datasets. In addition, we show that the proposed offline RL method can learn more competitive AI agents in complex control tasks under the multiplayer online battle arena (MOBA) game, Honor of Kings.
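For context, the bootstrapping error mentioned in this abstract arises from the max over actions in the temporal-difference target. The toy sketch below is not the paper's AQL method; q, dataset_actions, and all names are illustrative, and it only shows the common mitigation of restricting that max to actions supported by the dataset.

```python
def td_target(q, reward, s_next, gamma, dataset_actions):
    """One-step bootstrapped target r + gamma * max_a' Q(s', a'). Restricting the
    max to actions observed in the dataset (dataset_actions) is one simple way to
    limit the bootstrapping error caused by out-of-distribution actions."""
    return reward + gamma * max(q[(s_next, a)] for a in dataset_actions)

# Here q is any mapping from (state, action) pairs to values, e.g. a Python dict.
```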
7

Lacaze, Sylvain, and Samy Missoum. "A Generalized “Max-Min” Sample for Reliability Assessment With Dependent Variables." In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34051.

Abstract:
This paper introduces a novel approach for reliability assessment with dependent variables. In this work, the boundary of the failure domain, for a computational problem with expensive function evaluations, is approximated using a Support Vector Machine and an adaptive sampling scheme. The approximation is sequentially refined using a new adaptive sampling scheme referred to as generalized “max-min”. This technique efficiently targets high probability density regions of the random space. This is achieved by modifying an adaptive sampling scheme originally tailored for deterministic spaces (Explicit Space Design Decomposition). In particular, the approach can handle any joint probability density function, even if the variables are dependent. In the latter case, the joint distribution might be obtained from copula. In addition, uncertainty on the probability of failure estimate are estimated using bootstrapping. A bootstrapped coefficient of variation of the probability of failure is used as an estimate of the true error to determine convergence. The proposed method is then applied to analytical examples and a beam bending reliability assessment using copulas.
8

Voeikova, Maria D. "MORPHONOLOGICAL PROPERTIES OF NOUNS WITH -KA ELEMENT IN THE FINAL PART." In 49th International Philological Conference in Memory of Professor Ludmila Verbitskaya (1936–2019). St. Petersburg State University, 2022. http://dx.doi.org/10.21638/11701/9785288062353.13.

Abstract:
Nouns containing -ka in their final part have some distinct common properties: they have homogeneous and salient inflectional endings and belong to most frequent (1st and 2nd) declension classes, they are mostly built with several productive derivation patterns and therefore, gain in type and token frequency. This paper addresses some other, not so obvious particularities of this morphonological group. Their accentual types in several case forms tend to be connected with certain semantic groups (e. g., four syllabic masculine nouns in the genitive mostly denote masculine nouns meaning occupation: ljubovnika ‘lover-Gen’, nachal’nika ‘chief-Gen’), thus, helping to disambiguate the usual case syncretism. The data is taken from the frequency lists of word forms of the Corpus of Russian literary language. Accentual types were defined by the number of syllables and by the stress placement. The observations are interpreted from the point of view of systemic approach to modern Russian grammar including the further elaboration of bootstrapping mechanisms that are usually in play during speech perception by adults and language acquisition by children. In this case we deal with the morphonological bootstrapping determining the hidden preferences of speakers. These observations serve as a base for future experimental study of this and similar morphonological groups that should be taken into consideration during the analysis of productive derivation patterns and their structural functions, the correlation of semantic, pragmatic and structure, as well as their perceptive capacities. The prediction for experimental study is that nouns with -ka in the final part should be earlier recognized in the reaction time experiments, easier processed and, probably, earlier acquired by children. Another development of this topic is a similar description of the final -k- of the stem followed by other vowels marking oblique cases, like e, u, i and o, that, however, would be not as promising because of poorer syncretism in the sphere of other inflectional endings. Refs 20.
9

Zeng, Zizhen, Shanpu Shen, Bo Wang, Johan J. Estrada-Lopez, Ross Murch, and Edgar Sanchez-Sinencio. "An Ultra-low-power Power Management Circuit with Output Bootstrapping and Reverse Leakage Reduction Function for RF Energy Harvesting." In 2020 IEEE/MTT-S International Microwave Symposium (IMS). IEEE, 2020. http://dx.doi.org/10.1109/ims30576.2020.9224098.

10

Zheng, Zheng, Hui Li, and Mengqi Wang. "Application of a 3D Discrete Ordinates-Monte Carlo Coupling Method on CAP1400 Cavity Streaming Calculation." In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-66401.

Abstract:
Neutrons and photons produced in the reactor core during operation pass through the pressure vessel, reach the reactor cavity, and form the reactor cavity streaming. Calculating reactor cavity streaming dose rates during normal operation is important for evaluating and controlling equipment dose rates in a nuclear power plant. Because the reactor is large and geometrically complex, neutron and photon fluence rates decline by several orders of magnitude from the reactor core to the outside, so the cavity streaming calculation is a deep-penetration problem with a heavy computational load that is difficult to converge. The three-dimensional Discrete Ordinates and Monte Carlo (SN-MC) coupling method combines the advantages of the SN method (high efficiency) and the MC method (fine geometrical modeling). The SN-MC coupling method effectively decreases the tally errors and increases the efficiency of the MC method by using an MC surface source generated from the SN fluence rates. In this paper, the theoretical model of the 3D SN-MC coupling method is presented. To carry out the coupling calculation, a 3D Discrete Ordinates code is modified to output angular fluence rates, a link code DO2MC is developed to calculate cumulative distribution functions of the source particle variables on the surface source, and a source subroutine is written for a 3D Monte Carlo code. The 3D SN-MC coupling method is applied to the calculation of the CAP1400 cavity streaming neutron and photon dose rates. Numerical results show that the 3D SN-MC coupling codes are correct, the relative errors of the results are less than 20% compared with those of the MC bootstrapping method, and the efficiency is greatly enhanced.