Selected scientific literature on the topic "Moduli randomization"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Moduli randomization".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Journal articles on the topic "Moduli randomization"

1

Csorgő, M., Z. Y. Lin, and Q. M. Shao. "Randomization Moduli of Continuity for ℓ2-Norm Squared Ornstein-Uhlenbeck Processes". Canadian Journal of Mathematics 45, no. 2 (April 1, 1993): 269–83. http://dx.doi.org/10.4153/cjm-1993-013-3.

2

Stankiewicz, Anna, and Sławomir Juściński. "How to Make the Stress Relaxation Experiment for Polymers More Informative". Polymers 15, no. 23 (December 2, 2023): 4605. http://dx.doi.org/10.3390/polym15234605.

Abstract:
Different viscoelastic models and characteristics are commonly used to describe, analyze, compare and improve the mechanical properties of polymers. A time-dependent linear relaxation modulus next to frequency-domain storage and loss moduli are the basic rheological material functions of polymers. The exponential Maxwell model and the exponential stretched Kohlrausch–Williams–Watts model are, probably, the most known linear rheological models of polymers. There are different identification methods for such models, some of which are dedicated to specific models, while others are general in nature. However, the identification result, i.e., the best model, always depends on the specific experimental data on the basis of which it was determined. When the rheological stress relaxation test is performed, the data are composed of the sampling instants used in the test and on the measurements of the relaxation modulus of the real material. To build a relaxation modulus model that does not depend on sampling instants is a fundamental concern. The problem of weighted least-squares approximation of the real relaxation modulus is discussed when only the noise-corrupted time-measurements of the relaxation modulus are accessible for identification. A wide class of models, that are continuous, differentiable and Lipschitz with respect to parameters, is considered for the relaxation modulus approximation. The main results concern the models that are selected asymptotically as the number of measurements tends to infinity. It is shown that even when the true relaxation modulus description is completely unknown, the approximate optimal model parameters can be derived from the measurement data that are obtained for sampling instants that are selected randomly due to the appropriate randomization introduced whenever certain conditions regarding the adopted class of models are satisfied. 
It is shown that the most commonly used stress relaxation models, the Maxwell and Kohlrausch–Williams–Watts models, satisfy these conditions. Since the practical problems of the identification of relaxation modulus models are usually ill posed, Tikhonov regularization is applied to guarantee the stability of the regularized solutions. The approximate optimal model is a strongly consistent estimate of the regularized model that is optimal in the sense of the deterministic integral weighted square error. An identification algorithm leading to the best regularized model is presented. The stochastic-type convergence analysis is conducted for noise-corrupted relaxation modulus measurements, and the exponential convergence rate is proved. Numerical studies for different models of the relaxation modulus used in the polymer rheology are presented for the material described by a bimodal Gauss-like relaxation spectrum. Numerical studies have shown that if appropriate randomization is introduced in the selection of sampling instants, then optimal regularized models of the relaxation modulus being asymptotically independent of these time instants can be recovered from the stress relaxation experiment data. The robustness of the identification algorithm to measurement noises was demonstrated both by analytical and numerical analyses.
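The randomized-sampling idea in this abstract can be illustrated with a small numpy sketch. All model choices below (the two-mode "true" modulus, the fixed relaxation-time grid, the noise level) are illustrative assumptions, not the paper's setup: sampling instants are drawn at random, and a Prony-series model that is linear in its weights is fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" relaxation modulus: a two-mode Maxwell (Prony) series.
def true_modulus(t):
    return 1.5 * np.exp(-t / 0.1) + 0.5 * np.exp(-t / 2.0)

# Randomized sampling instants (log-uniform over the test window), echoing
# the paper's idea of randomizing when the modulus is measured.
n = 200
t = np.exp(rng.uniform(np.log(1e-3), np.log(10.0), size=n))
g_meas = true_modulus(t) + rng.normal(0.0, 0.01, size=n)  # noisy measurements

# Model class: Prony series with a fixed grid of relaxation times, so the
# weights enter linearly and least squares reduces to a single lstsq call.
taus = np.array([0.03, 0.1, 0.3, 1.0, 3.0])
A = np.exp(-t[:, None] / taus[None, :])
w, *_ = np.linalg.lstsq(A, g_meas, rcond=None)

# The fitted model generalizes to time points that were never sampled.
t_check = np.logspace(-2, 1, 50)
fit = np.exp(-t_check[:, None] / taus[None, :]) @ w
err = np.max(np.abs(fit - true_modulus(t_check)))
print(f"max abs error of fitted modulus: {err:.4f}")
```

Because the instants are drawn afresh from a distribution rather than fixed by the experimenter, the recovered model does not hinge on any particular sampling schedule, which is the property the paper establishes asymptotically.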
3

Schwamb, Megan E., Jeremy Kubica, Mario Jurić, Drew Oldag, Maxine West, Melissa DeLucchi, and Matthew J. Holman. "Controlling Randomization in Astronomy Simulations". Research Notes of the AAS 8, no. 1 (January 19, 2024): 25. http://dx.doi.org/10.3847/2515-5172/ad1f6b.

Abstract:
As the primary requirement, correctly implementing and controlling random number generation is vital for a range of scientific analyses and simulations across astronomy and planetary science. Beyond advice on how to set the seed, there is little guidance in the literature for how best to handle pseudo-random number generation in the current era of open-source astronomical software development. We present our methodology for implementing pseudo-random number generation in astronomy simulations and software and share the short lines of python code that create the generator. Without sacrificing randomization at run time, our strategy ensures reproducibility on a per function/module basis for unit testing and for run time debugging where there may be multiple functions requiring lists of randomly generated values that occur before a specific function is executed.
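The research note shares its own short Python snippet, which is not reproduced here; the general pattern it describes can be sketched as follows. Keying `SeedSequence` by a stable hash of the module name is an assumption of this sketch, not necessarily the authors' implementation:

```python
import zlib

import numpy as np

BASE_SEED = 2023  # one base seed for the whole simulation run

def generator_for(name: str) -> np.random.Generator:
    """Return a reproducible, independent Generator for a named module."""
    # zlib.crc32 gives a stable (non-salted) hash of the module name, so
    # the child seed, and hence the stream, is the same in every run.
    child = np.random.SeedSequence([BASE_SEED, zlib.crc32(name.encode())])
    return np.random.default_rng(child)

# Two requests for the same module give identical streams...
a = generator_for("photometry").normal(size=3)
b = generator_for("photometry").normal(size=3)
# ...while different modules get statistically independent streams.
c = generator_for("astrometry").normal(size=3)
print(a, c)
```

With this pattern a unit test of one module always sees the same draws, regardless of how many random values other modules consumed earlier in the run, which is the per-function reproducibility the note argues for.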
4

Sun, Shi-Hai, and Lin-Mei Liang. "Experimental demonstration of an active phase randomization and monitor module for quantum key distribution". Applied Physics Letters 101, no. 7 (August 13, 2012): 071107. http://dx.doi.org/10.1063/1.4746402.

5

Wu, Jiayao, Chen He, Jiahui Xie, Xiaopeng Liu, and Minghui Zhang. "Twin-Field Quantum Digital Signature with Fully Discrete Phase Randomization". Entropy 24, no. 6 (June 18, 2022): 839. http://dx.doi.org/10.3390/e24060839.

Abstract:
Quantum digital signatures (QDS) are able to verify the authenticity and integrity of a message in modern communication. However, the current QDS protocols are restricted by the fundamental rate-loss bound and the secure signature distance cannot be further improved. We propose a twin-field quantum digital signature (TF-QDS) protocol with fully discrete phase randomization and investigate its performance under the two-intensity decoy-state setting. For better performance, we optimize intensities of the signal state and the decoy state for each given distance. Numerical simulation results show that our TF-QDS with as few as six discrete random phases can give a higher signature rate and a longer secure transmission distance compared with current quantum digital signatures (QDSs), such as BB84-QDS and measurement-device-independent QDS (MDI-QDS). Moreover, we provide a clear comparison among some possible TF-QDSs constructed by different twin-field key generation protocols (TF-KGPs) and find that the proposed TF-QDS exhibits the best performance. Conclusively, the advantages of the proposed TF-QDS protocol in signature rate and secure transmission distance are mainly due to the single-photon interference applied in the measurement module and precise matching of discrete phases. Besides, our TF-QDS shows the feasibility of experimental implementation with current devices in practical QDS system.
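As a toy illustration of discrete phase randomization (not the TF-QDS protocol itself, whose decoy-state optimization and security analysis are far richer), one can check that with M discrete phases the fraction of rounds in which Alice's and Bob's independently chosen phase slices coincide is about 1/M:

```python
import numpy as np

rng = np.random.default_rng(42)
M = 6  # number of discrete random phases, the value highlighted in the paper
phases = 2 * np.pi * np.arange(M) / M  # the allowed phase values

# Alice and Bob each pick one of the M phases independently per round;
# only rounds where the choices coincide survive phase post-selection.
rounds = 100_000
ka = rng.integers(0, M, size=rounds)
kb = rng.integers(0, M, size=rounds)
match_rate = np.mean(ka == kb)
print(f"phase-matching rate: {match_rate:.4f} (ideal 1/{M} = {1/M:.4f})")
```

This is the trade-off behind "as few as six discrete random phases": a smaller M keeps more rounds after phase matching, while a larger M approximates continuous phase randomization more closely.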
6

Zhao, Yizhou, and Hua Sun. "Expand-and-Randomize: An Algebraic Approach to Secure Computation". Entropy 23, no. 11 (November 4, 2021): 1461. http://dx.doi.org/10.3390/e23111461.

Abstract:
We consider the secure computation problem in a minimal model, where Alice and Bob each holds an input and wish to securely compute a function of their inputs at Carol without revealing any additional information about the inputs. For this minimal secure computation problem, we propose a novel coding scheme built from two steps. First, the function to be computed is expanded such that it can be recovered while additional information might be leaked. Second, a randomization step is applied to the expanded function such that the leaked information is protected. We implement this expand-and-randomize coding scheme with two algebraic structures—the finite field and the modulo ring of integers, where the expansion step is realized with the addition operation and the randomization step is realized with the multiplication operation over the respective algebraic structures.
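A minimal worked instance of the expand-and-randomize idea over a finite field (a toy chosen for this summary, not an example taken from the paper): Carol should learn only whether Alice's and Bob's inputs sum to zero mod p. The expansion step is the addition a + b; the randomization step multiplies by a shared uniform nonzero r, which hides everything about a nonzero sum:

```python
import random

p = 101  # a prime, so nonzero residues of GF(p) are invertible

def carol_view(a: int, b: int, r: int) -> int:
    """What Carol computes: r*a + r*b = r*(a+b) mod p."""
    msg_alice = (r * a) % p           # Alice sends her randomized share
    msg_bob = (r * b) % p             # Bob sends his randomized share
    return (msg_alice + msg_bob) % p  # expansion step: addition at Carol

rng = random.Random(7)
r = rng.randrange(1, p)          # shared uniform nonzero randomness

a, b = 40, 61                    # a + b = 101 = 0 (mod p)
assert carol_view(a, b, r) == 0  # Carol learns that the sum is zero...

a2, b2 = 40, 60                  # a2 + b2 = 100 != 0 (mod p)
views = {carol_view(a2, b2, s) for s in range(1, p)}
# ...but for any nonzero sum, r*(a2+b2) ranges uniformly over the nonzero
# residues, so Carol learns nothing beyond "the sum is nonzero".
assert views == set(range(1, p))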
7

Steffen, Alana D., Larisa A. Burke, Heather A. Pauls, Marie L. Suarez, Yingwei Yao, William H. Kobak, Miho Takayama, et al. "Double-blinding of an acupuncture randomized controlled trial optimized with clinical translational science award resources". Clinical Trials 17, no. 5 (July 10, 2020): 545–51. http://dx.doi.org/10.1177/1740774520934910.

Abstract:
Background Clinical trial articles often lack detailed descriptions of the methods used to randomize participants, conceal allocation, and blind subjects and investigators to group assignment. We describe our systematic approach to implement and measure blinding success in a double-blind phase 2 randomized controlled trial testing the efficacy of acupuncture for the treatment of vulvodynia. Methods Randomization stratified by vulvodynia subtype is managed by Research Electronic Data Capture software’s randomization module adapted to achieve complete masking of group allocation. Subject and acupuncturist blinding assessments are conducted multiple times to identify possible correlates of unblinding. Results At present, 48 subjects have been randomized and completed the protocol resulting in 87 subject and 206 acupuncturist blinding assessments. Discussion Our approach to blinding and blinding assessment has the potential to improve our understanding of unblinding over time in the presence of possible clinical improvement.
8

Didier, Gilles, Alberto Valdeolivas, and Anaïs Baudot. "Identifying communities from multiplex biological networks by randomized optimization of modularity". F1000Research 7 (July 10, 2018): 1042. http://dx.doi.org/10.12688/f1000research.15486.1.

Abstract:
The identification of communities, or modules, is a common operation in the analysis of large biological networks. The Disease Module Identification DREAM challenge established a framework to evaluate clustering approaches in a biomedical context, by testing the association of communities with GWAS-derived common trait and disease genes. We implemented here several extensions of the MolTi software that detects communities by optimizing multiplex (and monoplex) network modularity. In particular, MolTi now runs a randomized version of the Louvain algorithm, can consider edge and layer weights, and performs recursive clustering. On simulated networks, the randomization procedure clearly improves the detection of communities. On the DREAM challenge benchmark, the results strongly depend on the selected GWAS dataset and enrichment p-value threshold. However, the randomization procedure, as well as the consideration of weighted edges and layers, generally increases the number of trait and disease communities detected. The new version of MolTi and the scripts used for the DMI DREAM challenge are available at: https://github.com/gilles-didier/MolTi-DREAM.
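The role randomization plays in modularity optimization can be sketched in plain numpy. This is a minimal single-layer caricature of the first (local-moving) phase of a Louvain-style algorithm with random restarts; MolTi itself additionally handles multiplex networks, edge/layer weights, and recursive clustering:

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a partition of an undirected graph."""
    m2 = A.sum()                      # 2m for an undirected adjacency matrix
    k = A.sum(axis=1)                 # node degrees
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / m2) * same).sum() / m2

def louvain_like(A, seed):
    """Randomized local moving: greedily relocate nodes until Q stalls."""
    rng = np.random.default_rng(seed)
    n = len(A)
    labels = np.arange(n)             # each node starts in its own community
    improved = True
    while improved:
        improved = False
        for i in rng.permutation(n):  # randomized node order per sweep
            best_q, best_c = modularity(A, labels), labels[i]
            for c in set(labels[A[i] > 0]):   # neighbouring communities
                old = labels[i]
                labels[i] = c
                q = modularity(A, labels)
                if q > best_q + 1e-12:
                    best_q, best_c = q, c
                labels[i] = old
            if best_c != labels[i]:
                labels[i] = best_c
                improved = True
    return labels, modularity(A, labels)

# Two 4-cliques joined by a single bridge edge: the planted communities.
A = np.zeros((8, 8))
A[:4, :4] = 1; A[4:, 4:] = 1; A[0, 4] = A[4, 0] = 1
np.fill_diagonal(A, 0)

# Randomized restarts: keep the best-modularity partition over seeds.
labels, q = max((louvain_like(A, s) for s in range(10)), key=lambda r: r[1])
print("communities:", labels, "modularity:", round(q, 3))
```

Running the randomized pass several times and keeping the best partition is the same rationale as MolTi's randomization option: each restart explores a different greedy trajectory, and the maximum over restarts is less likely to be a poor local optimum.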
9

Didier, Gilles, Alberto Valdeolivas, and Anaïs Baudot. "Identifying communities from multiplex biological networks by randomized optimization of modularity". F1000Research 7 (November 22, 2018): 1042. http://dx.doi.org/10.12688/f1000research.15486.2.

Abstract:
The identification of communities, or modules, is a common operation in the analysis of large biological networks. The Disease Module Identification DREAM challenge established a framework to evaluate clustering approaches in a biomedical context, by testing the association of communities with GWAS-derived common trait and disease genes. We implemented here several extensions of the MolTi software that detects communities by optimizing multiplex (and monoplex) network modularity. In particular, MolTi now runs a randomized version of the Louvain algorithm, can consider edge and layer weights, and performs recursive clustering. On simulated networks, the randomization procedure clearly improves the detection of communities. On the DREAM challenge benchmark, the results strongly depend on the selected GWAS dataset and enrichment p-value threshold. However, the randomization procedure, as well as the consideration of weighted edges and layers, generally increases the number of trait and disease communities detected. The new version of MolTi and the scripts used for the DMI DREAM challenge are available at: https://github.com/gilles-didier/MolTi-DREAM.
10

Baharsyah, Baharudin Adi, Endang Dian Setioningsih, Sari Luthfiyah, and Wahyu Caesarendra. "Analyzing the Relationship between Dialysate Flow Rate Stability and Hemodialysis Machine Efficiency". Indonesian Journal of Electronics, Electromedical Engineering, and Medical Informatics 5, no. 2 (May 30, 2023): 86–91. http://dx.doi.org/10.35882/ijeeemi.v5i2.276.

Abstract:
Chronic kidney disease (CKD) is a condition characterized by impaired kidney function, leading to disruptions in metabolism, fluid balance, and electrolyte regulation. Hemodialysis serves as a supportive therapy for individuals with CKD, prolonging life but unable to fully restore kidney function. Factors influencing urea and creatinine levels in hemodialysis patients include blood flow velocity, dialysis duration, and dialyzer selection. This research aims to establish a standard for calculating the dialysate flow rate, thereby enhancing dialysis efficiency. The study employs a pre-experimental "one group post-test" design, lacking baseline measurements and randomization, although a control group was utilized. The design's weakness lies in the absence of an initial condition assessment, making conclusive results challenging. Measurement comparisons between the module and the instrument yielded a 5.30% difference, while the difference between the hemodialysis machine and standard equipment was 4.02%. Furthermore, six module measurements against three comparison tools showed a 0.17% difference for the hemodialysis machine with standard equipment, and a 0.18% difference for the module with standard equipment, with a 0.23% discrepancy between the two. Further analysis is necessary to understand the clinical significance and implications of these measurement variations on overall dialysis efficacy.

Theses / dissertations on the topic "Moduli randomization"

1

Courtois, Jérôme. "Leak study of cryptosystem implementations in randomized RNS arithmetic". Electronic thesis, Sorbonne université, 2020. http://www.theses.fr/2020SORUS290.

Abstract:
We use the term strong analysis for an analysis that recovers the key of a cryptographic system, and weak analysis for one that eliminates candidate keys. The goal of this thesis is essentially to understand the behavior of the randomness of the Hamming distances produced by an ECC (Elliptic Curve Cryptography) system when an RNS (Residue Number System) representation is used with the random moduli method. Chapter 2 introduces the notions needed to understand this document. It briefly presents the modular multiplication algorithm (Montgomery's algorithm for RNS) that inspired the random moduli method, then describes the algorithm that generates the Hamming distance sequences needed for our analysis. It then shows what level of resistance the random moduli method provides against various classical attacks such as DPA (Differential Power Analysis), CPA (Correlation Power Analysis), second-order DPA, and MIA (Mutual Information Analysis). We develop an understanding of the distribution of the Hamming distances viewed as random variables, then add a Gaussian hypothesis on them. We use the MLE (Maximum Likelihood Estimator) and a strong analysis, as in Template Attacks, to gain a fine-grained understanding of the level of randomness provided by the random moduli method. Chapter 3 follows naturally from the conclusions of Chapter 2 on the Gaussian hypothesis: if attacks are possible with the MLE, there probably exist strong relations between the Hamming distances viewed as random variables. Section 3.2 seeks to quantify this level of dependence between the Hamming distances. Then, still under the Gaussian hypothesis, we observe that it is possible to build a variant of DPA, which we call DPA square, based on the covariance instead of the mean as in classical DPA. This remains very demanding in observation traces, all the more so since many protocols using an ECC use a key only once. Section 3.4 sets out to show that information is present even in few Hamming distance traces despite the randomization of the moduli; to this end, we perform an MLE conditioned on one of the Hamming distances, with a weak analysis. The final Chapter 4 begins by briefly introducing the algorithmic choices made to solve the problems of inverting the (symmetric positive definite) covariance matrices of Section 3.2 and analyzing the strong relations between Hamming distances in Section 3.2. Here we use Graphics Processing Unit (GPU) tools on a very large number of small matrices; we speak of batch computing. The LDLt method presented at the beginning of that chapter proved insufficient to fully solve the conditioned-MLE problem presented in Section 3.4. We present work on improving a diagonalization code for tridiagonal matrices based on the Divide & Conquer principle developed by Lokmane Abbas-Turki and Stéphane Graillat, including a generalization of this code, optimizations in computation time, and improved robustness of single-precision computations for matrices of size below 32.
We will speak of strong analysis for an analysis which makes it possible to find the key of a cryptographic system. We define a weak analysis as one in which candidate keys are eliminated. The goal of this thesis is to understand the behavior of the randomness of Hamming distances produced by an ECC (Elliptic Curve Cryptography) cryptographic system when using an RNS (Residue Number System) representation with the random moduli method. Chapter 2 introduces the different concepts needed to understand this document. It briefly introduces the modular multiplication algorithm (Montgomery algorithm for RNS) which inspired the method of random moduli. It then describes the algorithm which generates the Hamming distance sequences necessary for our analysis, and shows what level of resistance the method of random moduli brings against different classic attacks like DPA (Differential Power Analysis), CPA (Correlation Power Analysis), second-order DPA, and MIA (Mutual Information Analysis). We provide an understanding of the distribution of Hamming distances considered as random variables. Following this, we add the Gaussian hypothesis on Hamming distances. We use the MLE (Maximum Likelihood Estimator) and a strong analysis, as in Template Attacks, to gain a fine understanding of the level of randomness brought by the method of random moduli. The last Chapter 4 begins by briefly introducing the algorithmic choices which have been made to solve the problems of inversion of covariance matrices (symmetric positive definite) of Section 2.5 and the analysis of strong relationships between Hamming distances in Section 3.2. We use Graphics Processing Unit (GPU) tools on a very large number of small matrices; we speak of batch computing. The LDLt method presented at the beginning of this chapter proved insufficient to completely solve the problem of the conditioned MLE presented in Section 3.4. We present work on the improvement of a diagonalization code for tridiagonal matrices using the Divide & Conquer principle developed by Lokmane Abbas-Turki and Stéphane Graillat. We present a generalization of this code, optimizations in computation time, and an improvement of the accuracy of single-precision computations for matrices of size below 32.
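The random moduli method at the heart of this thesis can be illustrated with a short Python toy. The moduli pool, their sizes, and the choice of five moduli per draw are assumptions of this sketch, and it does not model the side-channel leakage the thesis analyzes: an intermediate value is carried as residues modulo a randomly drawn coprime set, and any such set recovers it exactly via the Chinese Remainder Theorem, while the residue pattern (and hence what a power trace would see) changes from draw to draw.

```python
import random
from functools import reduce

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction for pairwise-coprime moduli."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the inverse mod m
    return x % M

rng = random.Random(1)
POOL = [233, 239, 241, 251, 257, 263, 269, 271]  # pairwise-coprime primes

def randomized_rns(x, k=5):
    """Encode x as residues on k moduli drawn at random from the pool."""
    moduli = rng.sample(POOL, k)
    return [x % m for m in moduli], moduli

x = 123_456_789
res, mods = randomized_rns(x)
res2, mods2 = randomized_rns(x)   # an independently drawn moduli set
assert crt(res, mods) == x        # either drawing recovers x exactly,
assert crt(res2, mods2) == x      # while the residue pattern presented
print(mods, "->", res)            # to a side-channel observer varies
print(mods2, "->", res2)          # with the drawn moduli
```

Correctness never depends on which moduli were drawn (any k-subset whose product exceeds x works), which is what makes the representation a randomization countermeasure rather than a change of the computed function.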
2

Nadeem, Muhammad Hassan. "Linux Kernel Module Continuous Address Space Re-Randomization". Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/104685.

Abstract:
Address space layout randomization (ASLR) is a technique employed to prevent exploitation of memory corruption vulnerabilities in user-space programs. While this technique is widely studied, its kernel space counterpart known as kernel address space layout randomization (KASLR) has received less attention in the research community. KASLR, as it is implemented today is limited in entropy of randomization. Specifically, the kernel image and its modules can only be randomized within a narrow 1GB range. Moreover, KASLR does not protect against memory disclosure vulnerabilities, the presence of which reduces or completely eliminates the benefits of KASLR. In this thesis, we make two major contributions. First, we add support for position-independent kernel modules to Linux so that the modules can be placed anywhere in the 64-bit virtual address space and at any distance apart from each other. Second, we enable continuous KASLR re-randomization for Linux kernel modules by leveraging the position-independent model. Both contributions increase the entropy and reduce the chance of successful ROP attacks. Since prior art tackles only user-space programs, we also solve a number of challenges unique to the kernel code. Our experimental evaluation shows that the overhead of position-independent code is very low. Likewise, the cost of re-randomization is also small even at very high re-randomization frequencies.
Master of Science
Address space layout randomization (ASLR) is a computer security technique used to prevent attacks that exploit memory disclosure and corruption vulnerabilities. ASLR works by randomly arranging the locations of key areas of a process such as the stack, heap, shared libraries and base address of the executable in the address space. This prevents an attacker from jumping to vulnerable code in memory and thus making it hard to launch control flow hijacking and code reuse attacks. ASLR makes it impossible for the attacker to leverage return-oriented programming (ROP) by pre-computing the location of code gadgets. Unfortunately, ASLR can be defeated by using memory disclosure vulnerabilities to unravel static randomization in an attack known as Just-In-Time ROP (JIT-ROP) attack. There exist techniques that extend the idea of ASLR by continually re-randomizing the program at run-time. With re-randomization, any leaked memory location is quickly obsoleted by rapidly and continuously rearranging memory. If the period of re-randomization is kept shorter than the time it takes for an attacker to create and launch their attack, then JIT-ROP attacks can be prevented. Unfortunately, there exists no continuous re-randomization implementation for the Linux kernel. To make matters worse, the ASLR implementation for the Linux kernel (KASLR) is limited. Specifically, for x86-64 CPUs, due to architectural restrictions, the Linux kernel is loaded in a narrow 1GB region of the memory. Likewise, all the kernel modules are loaded within the 1GB range of the kernel image. Due to this relatively low entropy, the Linux kernel is vulnerable to brute-force ROP attacks. In this thesis, we make two major contributions. First, we add support for position-independent kernel modules to Linux so that the modules can be placed anywhere in the 64-bit virtual address space and at any distance apart from each other. 
Second, we enable continuous KASLR re-randomization for Linux kernel modules by leveraging the position-independent model. Both contributions increase the entropy and reduce the chance of successful ROP attacks. Since prior art tackles only user-space programs, we also solve a number of challenges unique to the kernel code. We demonstrate the mechanism and the generality of our proposed re-randomization technique using several different, widely used device drivers, compiled as re-randomizable modules. Our experimental evaluation shows that the overhead of position-independent code is very low. Likewise, the cost of re-randomization is also small even at very high re-randomization frequencies.
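The entropy gap the thesis targets can be made concrete with back-of-the-envelope arithmetic. The 2 MiB placement granularity and the 46-bit usable region below are illustrative assumptions for this sketch; the thesis's own entropy accounting may differ:

```python
import math

ALIGN = 2 * 1024 * 1024  # assumed 2 MiB placement granularity

# Stock KASLR: modules land inside a narrow 1 GiB window near the kernel.
narrow_slots = (1 << 30) // ALIGN
# Position-independent modules: anywhere in an (assumed) 46-bit usable
# region of the 64-bit virtual address space.
wide_slots = (1 << 46) // ALIGN

print(f"1 GiB window : {narrow_slots} slots, "
      f"{math.log2(narrow_slots):.0f} bits of entropy")
print(f"46-bit region: {wide_slots} slots, "
      f"{math.log2(wide_slots):.0f} bits of entropy")
```

Under these assumptions the narrow window offers only a few hundred candidate positions, which is why the thesis describes stock KASLR as vulnerable to brute-force ROP attacks, whereas full 64-bit placement adds tens of bits of entropy per module and re-randomization invalidates any leaked address besides.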
3

Morris, David Dry. "Randomization analysis of experimental designs under non standard conditions". Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/53649.

Abstract:
Often the basic assumptions of the ANOVA for an experimental design are not met or the statistical model is incorrectly specified. Randomization of treatments to experimental units is expected to protect against such shortcomings. This paper uses randomization theory to examine the impact on the expectations of mean squares, treatment means, and treatment differences for two model misspecifications: systematic response shifts and correlated experimental units. Systematic response shifts are presented in the context of the randomized complete block design (RCBD). In particular, fixed shifts are added to the responses of experimental units in the initial and final positions of each block. The fixed shifts are called border shifts. It is shown that the RCBD is an unbiased design under randomization theory when border shifts are present. Treatment means are biased but treatment differences are unbiased. However the estimate of error is biased upwards and the power of the F test is reduced. Alternative designs to the RCBD under border shifts are the Latin square, semi-Latin square, and two-column designs. Randomization analysis demonstrates that the Latin square is an unbiased design with an unbiased estimate of error and of treatment differences. The semi-Latin square has each of the t treatments occurring only once per row and column, but t is a multiple of the number of rows or columns. Thus each row-column combination contains more than one experimental unit. The semi-Latin square is a biased design with a biased estimate of error even when no border shifts are present. Row-column interaction is responsible for the bias. Border shifts do not contaminate the expected mean squares or treatment differences, and thus the semi-Latin square is a viable alternative when the border shift overwhelms the row-column interaction. The two columns of the two-column design correspond to the border and interior experimental units respectively.
Results similar to those for the semi-Latin square are obtained. Simulation studies for the RCBD and its alternatives indicate that the power of the F test is reduced for the RCBD when border shifts are present. When no row-column interaction is present, the semi-Latin square and two-column designs provide good alternatives to the RCBD. Similar results are found for the split plot design when border shifts occur in the sub plots. A main effects plan is presented for situations when the number of whole plot units equals the number of sub plot units per whole plot. The analysis of designs in which the experimental units occur in a sequence and exhibit correlation is considered next. The Williams Type II(a) design is examined in conjunction with the usual ANOVA and with the method of first differencing. Expected mean squares, treatment means, and treatment differences are obtained under randomization theory for each analysis. When only adjacent experimental units have non negligible correlation, the Type II(a) design provides an unbiased error estimate for the usual ANOVA. However the expectation of the treatment mean square is biased downwards for a positive correlation. First differencing results in a biased test and a biased error estimate. The test is approximately unbiased if the correlation between units is close to a half.
Ph. D.
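The central RCBD claim above, that border shifts leave treatment comparisons sound but inflate the error estimate, can be checked with a small Monte Carlo sketch (the block/treatment counts, shift size, and noise level below are arbitrary choices for illustration, not the dissertation's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
t, b, delta, sigma = 4, 6, 2.0, 1.0        # treatments, blocks, shift, noise
tau = np.array([0.0, 0.5, 1.0, 1.5])       # true treatment effects

def error_ms(shift):
    """One RCBD realization: return the ANOVA error mean square."""
    y = np.empty((b, t))
    trt = np.empty((b, t), dtype=int)
    for i in range(b):
        order = rng.permutation(t)          # randomization within the block
        trt[i] = order
        y[i] = tau[order] + rng.normal(0, sigma, t)
        if shift:
            y[i, 0] += delta                # border units: first and last
            y[i, -1] += delta               # position in each block
    # Two-way ANOVA for the RCBD: error SS = total - blocks - treatments.
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_block = t * ((y.mean(axis=1) - grand) ** 2).sum()
    trt_means = np.array([y[trt == j].mean() for j in range(t)])
    ss_trt = b * ((trt_means - grand) ** 2).sum()
    return (ss_total - ss_block - ss_trt) / ((b - 1) * (t - 1))

reps = 2000
ems_clean = np.mean([error_ms(False) for _ in range(reps)])
ems_shift = np.mean([error_ms(True) for _ in range(reps)])
print(f"E[error MS] no shift: {ems_clean:.2f}  "
      f"with border shifts: {ems_shift:.2f}")
```

Without shifts the error mean square averages near sigma squared; with border shifts it is inflated well above that, mirroring the upward bias of the error estimate and the reduced F-test power reported in the abstract, while the randomization of treatments within blocks keeps treatment differences unbiased.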
4

Martelotte, Marcela Cohen. "Using Linear Mixed Models on Data from Experiments with Restriction in Randomization". Pontifícia Universidade Católica do Rio de Janeiro, 2010. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=16422@1.

Abstract:
This dissertation deals with the application of linear mixed models to data from experiments with restriction in randomization. The experiment used in this work aimed to determine which control factors of the cold-rolling process most affected the thickness of the material used to manufacture cans for carbonated beverages. From the experiment, data were obtained to model the mean and the variance of the material's thickness. The goal of the modeling was to identify which factors made the mean thickness reach the desired value (0.248 mm), and, in addition, which combination of the levels of these factors produced the minimum variance in the thickness of the material. The experiment had replications, but these were not run in random order, and the factor levels were not reset between runs. Because of these restrictions, mixed models were used to fit the mean and the variance of the thickness, since such models can handle autocorrelated and heteroscedastic data. The models fit the data well, indicating that the use of mixed models is appropriate in situations with restricted randomization.
This dissertation presents an application of linear mixed models on data from an experiment with restriction in randomization. The experiment used in this study was aimed to verify which were the controlling factors, in the cold-rolling process, that most affected the thickness of the material used in the carbonated beverages market segment. From the experiment, data were obtained to model the mean and variance of the thickness of the material. The goal of modeling was to identify which factors were significant for the thickness reaches the desired value (0.248 mm). Furthermore, it was necessary to identify which combination of levels, of these factors, produced the minimum variance in the thickness of the material. There were replications of this experiment, but these were not performed randomly. In addition, the levels of factors used were not restarted during the trials. Due to these limitations, mixed models were used to adjust the mean and the variance of the thickness. The models showed a good fit to the data, indicating that for situations where there is restriction on randomization, the use of mixed models is suitable.
5

Hossain, Mohammad Zakir. "A small-sample randomization-based approach to semi-parametric estimation and misspecification in generalized linear mixed models". Thesis, Queen Mary, University of London, 2017. http://qmro.qmul.ac.uk/xmlui/handle/123456789/24641.

Abstract:
In a generalized linear mixed model (GLMM), the random effects are typically uncorrelated and assumed to follow a normal distribution. However, findings from recent studies on how misspecification of the random effects distribution affects the estimated model parameters are inconclusive. In the thesis, we extend the randomization approach for deriving linear models to the GLMM framework. Based on this approach, we develop an algorithm for estimating the model parameters of the randomization-based GLMM (RB-GLMM) for the completely randomized design (CRD) that does not require normally distributed random effects. Instead, the discrete uniform distribution on the symmetric group of permutations is used for the random effects. Our simulation results suggest that the randomization-based algorithm may be an alternative when the assumption of normality is violated. In the second part of the thesis, we consider an RB-GLMM for the randomized complete block design (RCBD) with random block effects. We investigate the effect of misspecification of the correlation structure and of the random effects distribution via simulation studies. In the simulation, we use the variance-covariance matrices derived from the randomization approach. The misspecified model with uncorrelated random effects is fitted to data generated from the model with correlated random effects. We also fit the model with normally distributed random effects to data simulated from models with different random effects distributions. The simulation results show that misspecification of both the correlation structure and the random effects distribution has hardly any effect on the estimates of the fixed effects parameters. However, the estimated variance components are frequently severely biased, and the standard errors of these estimates are substantially higher.
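The completely randomized design at the heart of this randomization approach can be illustrated with a minimal Monte Carlo randomization (permutation) test, which re-randomizes unit labels to build the null distribution of the treatment difference. This is a generic sketch, not the thesis's RB-GLMM algorithm; the function name, data, and the add-one p-value convention are illustrative.

```python
import random
import statistics

def randomization_test(y_a, y_b, n_perm=2000, seed=0):
    """Monte Carlo randomization test for a completely randomized design:
    repeatedly re-randomize unit labels and recompute the mean difference,
    then count how often the re-randomized difference is as extreme as the
    observed one."""
    rng = random.Random(seed)
    pooled = list(y_a) + list(y_b)
    observed = statistics.mean(y_a) - statistics.mean(y_b)
    n_a = len(y_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one Monte Carlo p-value
```

Because the null distribution comes from the randomization procedure itself, no normality assumption on the random effects is needed, which is the motivation shared by the randomization-based estimation above.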
6

Di, Pace Brian S. "Site- and Location-Adjusted Approaches to Adaptive Allocation Clinical Trial Designs". VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5706.

Abstract:
Response-Adaptive (RA) designs are used to adaptively allocate patients in clinical trials. These methods have been generalized to Covariate-Adjusted Response-Adaptive (CARA) designs, which adjust treatment assignments for a set of covariates while maintaining features of the RA designs. Challenges may arise in multi-center trials if differential treatment responses and/or effects exist among sites. We propose Site-Adjusted Response-Adaptive (SARA) approaches to account for inter-center variability in treatment response and/or effectiveness, including either a fixed site effect or both random site and treatment-by-site interaction effects to calculate conditional probabilities of success. These success probabilities are used to update the assignment probabilities for allocating patients between treatment groups as subjects accrue. Both frequentist and Bayesian models are considered. Treatment differences could also be attributed to differences in social determinants of health (SDH), which often manifest, especially if unmeasured, as spatial heterogeneity amongst the patient population. In these cases, patient residential location can be used as a proxy for these difficult-to-measure SDH. We propose the Location-Adjusted Response-Adaptive (LARA) approach to account for location-based variability in both treatment response and/or effectiveness. A Bayesian low-rank kriging model interpolates spatially varying joint treatment random effects to calculate the conditional probabilities of success, utilizing patient outcomes, treatment assignments and residential information. We compare the proposed methods with several existing allocation strategies that ignore site, for a variety of scenarios in which treatment success probabilities vary.
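The core response-adaptive mechanism, before any site or location adjustment, can be sketched as follows: the posterior probability that arm A outperforms arm B under independent Beta posteriors is estimated by simulation and can drive the next assignment probability. This is a generic Bayesian RA sketch, not the SARA/LARA models of the dissertation; the function name, uniform Beta(1, 1) priors, and draw count are illustrative.

```python
import random

def allocation_probability(successes_a, failures_a, successes_b, failures_b,
                           n_draws=4000, seed=1):
    """Estimate P(p_A > p_B) under independent Beta(1 + s, 1 + f) posteriors
    by Monte Carlo; a response-adaptive rule can assign the next patient to
    arm A with (a tempered version of) this probability."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_draws):
        draw_a = rng.betavariate(1 + successes_a, 1 + failures_a)
        draw_b = rng.betavariate(1 + successes_b, 1 + failures_b)
        if draw_a > draw_b:
            wins += 1
    return wins / n_draws
```

In practice such probabilities are tempered (e.g. raised to a fractional power) to protect the randomness of the allocation; the site- and location-adjusted approaches above replace these pooled posteriors with conditional success probabilities from hierarchical models.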
7

Jamal, Aygul. "A parallel iterative solver for large sparse linear systems enhanced with randomization and GPU accelerator, and its resilience to soft errors". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS269/document.

Abstract:
In this PhD thesis, we address three challenges faced by linear algebra solvers in the perspective of future exascale systems: accelerating convergence using innovative algorithm-level techniques, taking advantage of GPU (Graphics Processing Units) accelerators to enhance the performance of computations on hybrid CPU/GPU systems, and evaluating the impact of errors in the context of an increasing level of parallelism in supercomputers. We are interested in studying methods that accelerate the convergence and execution time of iterative solvers for large sparse linear systems. The solver specifically considered in this work is the parallel Algebraic Recursive Multilevel Solver (pARMS), a distributed-memory parallel solver based on Krylov subspace methods. First, we integrate a randomization technique referred to as Random Butterfly Transformations (RBT), which has been successfully applied to remove the cost of pivoting in the solution of dense linear systems. Our objective is to apply this method in the ARMS preconditioner to solve more efficiently the last Schur complement system in the application of the recursive multilevel process in pARMS. The experimental results show an improvement in convergence and accuracy. Due to memory concerns for some test problems, we also propose to use a sparse variant of RBT followed by a sparse direct solver (SuperLU), resulting in an improvement of the execution time. Then, we explain how a non-intrusive approach can be applied to implement GPU computing in the pARMS solver, in particular for the local preconditioning phase, which represents a significant part of the time needed to compute the solution. We compare the CPU-only and hybrid CPU/GPU variants of the solver on several test problems coming from physical applications. The performance results of the hybrid CPU/GPU solver using ARMS preconditioning combined with RBT, or ILU(0) preconditioning, show a performance gain of up to 30% on the test problems considered in our experiments. Finally, we study the effect of soft fault errors on the convergence of the commonly used flexible GMRES (FGMRES) algorithm, which is also used to solve the preconditioned system in pARMS. The test problem in our experiments is an elliptic PDE problem on a regular grid. We consider two types of preconditioners: an incomplete LU factorization with dual threshold (ILUT), and the ARMS preconditioner combined with RBT randomization. We consider two soft fault error modeling approaches, in which we perturb the matrix-vector multiplication and the application of the preconditioner, and we compare their potential impact on the convergence of the solver.
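The butterfly construction underlying RBT can be sketched at depth one: a butterfly matrix U = (1/√2)[[R, S], [R, −S]] with random diagonal blocks R and S, applied as a two-sided transformation of the system matrix. Using ±1 diagonal entries (an illustrative simplification; actual RBT uses random positive scalings and recurses to greater depth) makes U orthogonal, which the test below checks.

```python
import math
import random

def random_butterfly(n, seed=0):
    """Depth-1 random butterfly matrix U = (1/sqrt(2)) [[R, S], [R, -S]]
    with random +/-1 diagonal blocks R and S (n must be even).  Two-sided
    application U^T A U 'randomizes' A so that factorizations can often
    proceed without pivoting."""
    assert n % 2 == 0
    rng = random.Random(seed)
    h = n // 2
    r = [rng.choice((-1.0, 1.0)) for _ in range(h)]
    s = [rng.choice((-1.0, 1.0)) for _ in range(h)]
    c = 1.0 / math.sqrt(2.0)
    u = [[0.0] * n for _ in range(n)]
    for i in range(h):
        u[i][i] = c * r[i]           # top-left block:  R
        u[i][h + i] = c * s[i]       # top-right block: S
        u[h + i][i] = c * r[i]       # bottom-left block:  R
        u[h + i][h + i] = -c * s[i]  # bottom-right block: -S
    return u

def matmul(a, b):
    """Plain dense matrix product on nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]
```

Because U is orthogonal here, the transformed system has the same conditioning as the original, which is the property that lets RBT replace pivoting without degrading accuracy.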
8

Pokhilko, Victoria V. "Statistical Designs for Network A/B Testing". VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/6101.

Abstract:
A/B testing refers to the statistical procedure of experimental design and analysis to compare two treatments, A and B, applied to different testing subjects. It is widely used by technology companies such as Facebook, LinkedIn, and Netflix to compare different algorithms, web designs, and other online products and services. The subjects participating in these online A/B testing experiments are users who are connected in social networks of different scales. Two connected subjects are similar in terms of their social behaviors, education, financial background, and other demographic aspects. Hence, it is natural to assume that their reactions to online products and services are related to their network adjacency. In this research, we propose to use the conditional autoregressive (CAR) model to represent the network structure and include the network effects in the estimation and inference of the treatment effect. The following statistical designs are presented: a D-optimal design for network A/B testing, a re-randomization experimental design approach for network A/B testing, and a covariate-assisted Bayesian sequential design for network A/B testing. The effectiveness of the proposed methods is shown through numerical results with synthetic networks and real social networks.
9

Han, Baoguang. "Statistical analysis of clinical trial data using Monte Carlo methods". Thesis, 2014. http://hdl.handle.net/1805/4650.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
In medical research, data analysis often requires complex statistical methods for which no closed-form solutions are available. Under such circumstances, Monte Carlo (MC) methods have found many applications. In this dissertation, we proposed several novel statistical models in which MC methods are utilized. For the first part, we focused on semicompeting risks data, in which a non-terminal event is subject to dependent censoring by a terminal event. Based on an illness-death multistate survival model, we proposed flexible random effects models. Further, we extended our model to the setting of joint modeling, in which both semicompeting risks data and repeated marker data are analyzed simultaneously. Since the proposed methods involve high-dimensional integrations, Bayesian Markov chain Monte Carlo (MCMC) methods were utilized for estimation. The use of Bayesian methods also facilitates the prediction of individual patient outcomes. The proposed methods were demonstrated in both simulation and case studies. For the second part, we focused on the re-randomization test, a nonparametric method that makes inferences solely based on the randomization procedure used in clinical trials. With this type of inference, the Monte Carlo method is often used for generating null distributions of the treatment difference. However, an issue was recently discovered when subjects in a clinical trial were randomized with unbalanced treatment allocation to two treatments according to the minimization algorithm, a randomization procedure frequently used in practice. The null distribution of the re-randomization test statistic was found not to be centered at zero, which compromised the power of the test. In this dissertation, we investigated the properties of the re-randomization test and proposed a weighted re-randomization method to overcome this issue. The proposed method was demonstrated through extensive simulation studies.
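The minimization algorithm mentioned above can be sketched as a Pocock–Simon-style biased-coin rule: for each prognostic factor, compute the marginal imbalance each candidate arm would create and prefer the lower-imbalance arm with high probability. The data structure, parameter names, and the unweighted two-arm form are illustrative simplifications, not the dissertation's exact procedure.

```python
import random

def minimize_assign(counts_by_level, new_subject_levels, p_high=0.8, seed=None):
    """Pocock-Simon-style minimization sketch for two arms (0 and 1).
    counts_by_level[factor][level] is a two-element list of current arm
    counts among subjects at that level.  The arm creating less marginal
    imbalance is chosen with probability p_high (a biased coin keeps some
    randomness in the allocation)."""
    rng = random.Random(seed)
    imbalance = [0, 0]
    for factor, level in new_subject_levels.items():
        counts = counts_by_level[factor][level]
        for arm in (0, 1):
            trial = list(counts)
            trial[arm] += 1  # imbalance if the new subject joined this arm
            imbalance[arm] += abs(trial[0] - trial[1])
    if imbalance[0] == imbalance[1]:
        return rng.choice((0, 1))
    preferred = 0 if imbalance[0] < imbalance[1] else 1
    return preferred if rng.random() < p_high else 1 - preferred
```

Because assignments depend on the accrued history rather than a fixed randomization list, the re-randomization null distribution must re-run this rule, which is what creates the off-center null distribution the dissertation corrects with weighting.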

Books on the topic "Moduli randomization"

1

Nelson, Trisalyn. Using conditional spatial randomization to identify insect infestation hot spots. Victoria, B.C: Pacific Forestry Centre, 2007.

2

Bianconi, Ginestra. Multilayer Network Models. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198753919.003.0010.

Abstract:
This chapter presents the existing modelling frameworks for multiplex and multilayer networks. Multiplex network models are divided into growing multiplex network models and null models of multiplex networks. Growing multiplex network models are shown to capture the main dynamical rules responsible for the emergent properties of multiplex networks, including the scale-free degree distribution, interlayer degree correlations and multilayer communities. Null models of multiplex networks are described in the context of maximum-entropy multiplex network ensembles. Randomization algorithms for testing the relevance of network properties against null models are described. Moreover, multi-slice temporal network models capturing the main properties of real temporal network data are presented. Finally, null models of general multilayer networks and networks of networks are characterized.
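A standard randomization algorithm of the kind used to test network properties against a null model is the double edge swap, which rewires edges while preserving every node's degree. This is a generic single-layer sketch (shown here for a directed edge list), not the multiplex ensembles of the chapter; the function name and rejection rules are illustrative.

```python
import random

def degree_preserving_randomization(edges, n_swaps, seed=0):
    """Null-model sketch: double edge swaps (a,b),(c,d) -> (a,d),(c,b) on a
    directed edge list, preserving every node's in- and out-degree.  Swaps
    that would create a duplicate edge or touch fewer than four distinct
    nodes are rejected."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(edges)
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue
        new1, new2 = (a, d), (c, b)
        if new1 in edge_set or new2 in edge_set:
            continue
        edge_set.difference_update((edges[i], edges[j]))
        edge_set.update((new1, new2))
        edges[i], edges[j] = new1, new2
        done += 1
    return edges
```

Comparing an observed property (e.g. an interlayer correlation) against its distribution over many such randomized replicas is what "testing against a null model" means operationally.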
3

Puranam, Phanish. Methodologies for Microstructures. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199672363.003.0009.

Abstract:
I review developments in theory and methodology that may allow us to begin creating innovative forms of organizing, rather than rest content with studying them after they have emerged. We now have the conceptual and technical apparatus to prototype organization designs at small scale, cheaply and fast. The process of organization re-design can be seen in terms of multiple stages. It begins with careful observation of phenomena. Qualitative or indeed quantitative induction (i.e. data mining) can play a critical role here. Once we have some understanding or at least conjectures about underlying mechanisms, we can use the behavioral lab or an agent-based model to run cheap experiments to adjust the design. Once we have formulated a new design, we may want to run a field experiment with randomization. If the results look satisfactory, we can scale up and implement.

Book chapters on the topic "Moduli randomization"

1

Ahlswede, Rudolf. "Identification Without Randomization". In Identification and Other Probabilistic Models, 83–101. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-65072-8_4.

2

Popkov, Yuri S., Alexey Yu Popkov, Yuri A. Dubnov and Alexander Yu Mazurov. "Randomized Parametric Models". In Entropy Randomization in Machine Learning, 113–56. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003306566-4.

3

Popkov, Yuri S., Alexey Yu Popkov, Yuri A. Dubnov and Alexander Yu Mazurov. "Data Sources and Models". In Entropy Randomization in Machine Learning, 17–64. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003306566-2.

4

Popkov, Yuri S., Alexey Yu Popkov, Yuri A. Dubnov and Alexander Yu Mazurov. "Entropy-Robust Estimation Procedures for Randomized Models and Measurement Noises". In Entropy Randomization in Machine Learning, 157–82. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003306566-5.

5

McArdle, John J., and John R. Nesselroade. "Notes on the inclusion of randomization in longitudinal studies." In Longitudinal data analysis using structural equation models, 323–27. Washington: American Psychological Association, 2014. http://dx.doi.org/10.1037/14440-030.

6

Dvir, Zeev, and Guangda Hu. "Matching-Vector Families and LDCs over Large Modulo". In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 513–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40328-6_36.

7

Friedrich, Tobias, and Lionel Levine. "Fast Simulation of Large-Scale Growth Models". In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 555–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22935-0_47.

8

Lindemann, Christoph. "Employing The Randomization Technique for Solving Stochastic Petri Net Models". In Informatik-Fachberichte, 306–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-642-76934-4_21.

9

Mathews, Ky L., and José Crossa. "Experimental Design for Plant Improvement". In Wheat Improvement, 215–35. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90673-3_13.

Abstract:
Sound experimental design underpins successful plant improvement research. Robust experimental designs respect fundamental principles including replication, randomization and blocking, and avoid bias and pseudo-replication. Classical experimental designs seek to mitigate the effects of spatial variability with resolvable block plot structures. Recent developments in experimental design theory and software enable optimal model-based designs tailored to the experimental purpose. Optimal model-based designs anticipate the analytical model and incorporate information previously used only in the analysis. New technologies, such as genomics, rapid-cycle breeding and high-throughput phenotyping, require flexible design solutions that optimize resources whilst upholding fundamental design principles. This chapter describes experimental design principles in the context of classical designs and introduces the burgeoning field of model-based design in the context of plant improvement science.
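The replication, randomization and blocking principles above combine in the classical randomized complete block design (RCBD), which can be sketched in a few lines: every block receives each treatment once, in an independently randomized order. The function name and arguments are illustrative.

```python
import random

def rcbd_layout(treatments, n_blocks, seed=0):
    """Randomized complete block design sketch: each block contains every
    treatment exactly once (replication + blocking), and the within-block
    order is randomized independently per block (randomization)."""
    rng = random.Random(seed)
    layout = []
    for _ in range(n_blocks):
        order = list(treatments)
        rng.shuffle(order)
        layout.append(order)
    return layout
```

Model-based designs go further by optimizing the layout against the anticipated analysis model, but they still produce arrangements that respect these same principles.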
10

von Eckardstein, Arnold. "High Density Lipoproteins: Is There a Comeback as a Therapeutic Target?" In Prevention and Treatment of Atherosclerosis, 157–200. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/164_2021_536.

Abstract:
Low plasma levels of High Density Lipoprotein (HDL) cholesterol (HDL-C) are associated with increased risks of atherosclerotic cardiovascular disease (ASCVD). In cell culture and animal models, HDL particles exert multiple potentially anti-atherogenic effects. However, drugs increasing HDL-C have failed to prevent cardiovascular endpoints, and Mendelian randomization studies have not found any genetic causality for the associations of HDL-C levels with differences in cardiovascular risk. Therefore, the causal role of HDL, and hence its utility as a therapeutic target, has been questioned. However, the biomarker HDL-C, as well as the interpretation of previous data, has several important limitations. First, the inverse relationship of HDL-C with risk of ASCVD is neither linear nor continuous; hence, neither the-higher-the-better strategies of previous drug development nor the linear cause-effect relationships assumed by Mendelian randomization approaches appear appropriate. Second, most of the drugs previously tested do not target HDL metabolism specifically, so that the futile trials question the clinical utility of the investigated drugs rather than the causal role of HDL in ASCVD. Third, the cholesterol of HDL measured as HDL-C neither exerts nor reports any HDL function. Comprehensive knowledge of the structure-function-disease relationships of HDL particles and associated molecules will be a prerequisite for testing their physiological and pathogenic relevance, and for exploiting them in the diagnostic and therapeutic management of individuals at HDL-associated risk of ASCVD, but also of other diseases, for example diabetes, chronic kidney disease, infections, and autoimmune and neurodegenerative diseases.

Conference papers on the topic "Moduli randomization"

1

Tobin, Josh, Lukas Biewald, Rocky Duan, Marcin Andrychowicz, Ankur Handa, Vikash Kumar, Bob McGrew et al. "Domain Randomization and Generative Models for Robotic Grasping". In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8593933.

2

Shamsuddin, Abdul Fathaah, Abhijith P, Krupasankari Ragunathan, Deepak Raja Sekar P. M and Praveen Sankaran. "Domain Randomization on Deep Learning Models for Image Dehazing". In 2021 National Conference on Communications (NCC). IEEE, 2021. http://dx.doi.org/10.1109/ncc52529.2021.9530031.

3

Deiab, Ibrahim M., and Mohamed A. Elbestawi. "Tribological Aspects of Workpiece/Fixture Contact in Machining Processes". In ASME 2002 International Mechanical Engineering Congress and Exposition. ASMEDC, 2002. http://dx.doi.org/10.1115/imece2002-39092.

Abstract:
This work presents an experimental investigation of the effect of different factors, including normal load, workpiece surface roughness, and fixture roughness, on the coefficient of friction along the workpiece/fixture contact in machining processes. A full factorial design of experiments was used. Randomization, blocking, and averaging schemes were utilized to reduce experimental error and variability. The data were statistically analyzed, and explanations for the observed trends are given. This investigation is part of an ongoing research project to develop an integrated modular dynamic model of the milling process, including the effect of fixture dynamics.
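The full factorial design with a randomized run order described above can be sketched generically: enumerate every combination of factor levels, then shuffle the run order so that time-ordered nuisance effects do not confound any factor. The function name and factor levels are illustrative, not the paper's actual factors.

```python
import itertools
import random

def full_factorial_run_order(levels, seed=0):
    """Full factorial design sketch: all combinations of the given factor
    levels, returned in a randomized run order (the standard guard against
    drift and other time-ordered nuisance effects)."""
    runs = list(itertools.product(*levels))
    random.Random(seed).shuffle(runs)
    return runs
```

Blocking and averaging, as used in the paper, would partition and replicate this run list rather than change how it is generated.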
4

Feklisov, Egor, Mihail Zinderenko e Vladimir Frolov. "Procedural interior generation for artificial intelligence training and computer graphics". In International Conference "Computing for Physics and Technology - CPT2020". Bryansk State Technical University, 2020. http://dx.doi.org/10.30987/conferencearticle_5fce2771c14fa7.77481925.

Abstract:
Since the creation of computers, there has been a lingering problem of storing and creating data for various tasks. In computer graphics and video games, there has been a constant need for assets. Although storage space is no longer one of developers' prime concerns, the need to automate asset creation is still relevant. The graphical fidelity that modern audiences and applications demand requires a great deal of costly work from artists and designers. Automatic generation of 3D scenes is of critical importance in Artificial Intelligence (AI) robotics training, where the amount of data generated during training cannot even be viewed by a single person, given the volume required by machine learning algorithms. A completely separate, but nevertheless necessary, task for an integrated solution is furniture generation and placement, together with material and lighting randomization. In this paper we propose an interior generator for computer graphics and robotics learning applications. The suggested framework generates and renders interiors with furniture at photorealistic quality. We combine existing algorithms for generating plans and arranging interiors, and then add material and lighting randomization. Our solution contains a semantic database of 3D models and materials, which allows the generator to produce realistic randomized scenes along with per-pixel masks for training detection and segmentation algorithms.
5

Sanzharov, Vadim, Vladimir Frolov and Alexey Voloboy. "Variable photorealistic image synthesis for training dataset generation". In International Conference "Computing for Physics and Technology - CPT2020". Bryansk State Technical University, 2020. http://dx.doi.org/10.30987/conferencearticle_5fce27723872e5.04814843.

Abstract:
Photorealistic rendering systems have recently found new applications in artificial intelligence, specifically in computer vision, for the purpose of generating image and video datasets. The problem associated with this application is producing a large number of photorealistic images with high variability of 3D models and their appearance. In this work, we propose an approach based on combining existing procedural texture generation techniques and domain randomization to generate a large number of highly varied digital assets during the rendering process. This eliminates the need for a large pre-existing database of digital assets (only a small set of 3D models is required) and generates objects with a unique appearance during the rendering stage, reducing the needed post-processing of images and the storage requirements. Our approach uses procedural texturing and material substitution to rapidly produce a large number of variations of digital assets. The proposed solution can be used to produce training datasets for artificial intelligence applications and can be combined with most state-of-the-art methods of scene generation.
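The material-substitution side of domain randomization can be sketched as sampling appearance and lighting parameters from hand-picked ranges before each render. Every key name and range below is illustrative (loosely modeled on common physically-based shading parameters), not the paper's actual parameterization.

```python
import random

def sample_scene_params(seed=None):
    """Domain-randomization sketch: draw material and lighting parameters
    uniformly from illustrative ranges; one draw per rendered image."""
    rng = random.Random(seed)
    return {
        "base_color": [rng.random() for _ in range(3)],     # RGB in [0, 1]
        "roughness": rng.uniform(0.05, 0.95),
        "metallic": rng.choice([0.0, 1.0]),
        "light_intensity": rng.uniform(100.0, 1000.0),
        "light_azimuth_deg": rng.uniform(0.0, 360.0),
    }
```

Feeding each sampled dictionary to the renderer yields the appearance variability that lets models trained on synthetic images transfer to real ones.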
6

SINGH, NAND KISHORE, KAZI ZAHIR UDDIN, RATNESHWAR JHA and BEHRAD KOOHBOR. "ANALYZING MICRO-MACRO TRANSITIONAL LENGTH SCALE IN UNIDIRECTIONAL COMPOSITES". In Thirty-sixth Technical Conference. Destech Publications, Inc., 2021. http://dx.doi.org/10.12783/asc36/35927.

Abstract:
Understanding the hierarchy in the mechanical behavior of heterogeneous materials requires a systematic characterization of the material response at different length scales, as well as of the nature and characteristics of the transitional scales. Characterization of such transitional length scales has been carried out in the past with analytical models that calculate and compare stiffness values at the micro and macro scales. The convergence of the material stiffness at the two scales has been used as the criterion for quantifying the so-called transitional length scales. These stiffness calculation approaches are based on the local strain and stress distributions obtained from complex finite element models. Recent advancements in full-field experimental strain measurement have made it possible to identify the transitional length scales in fiber composites from purely experimental measurements, without the requirement of local stress analysis. In this work, we study the validity of such 'strain-based' approaches used to identify the representative volume element (RVE) size in unidirectional fiber composites. Our modeling platform replicates the realistic conditions present in experimental measurements through the randomization of fiber locations and volume fraction within an epoxy matrix.
7

Zhang, Wentai, Quan Chen, Can Koz, Liuyue Xie, Amit Regmi, Soji Yamakawa, Tomotake Furuhata, Kenji Shimada and Levent Burak Kara. "Data Augmentation of Engineering Drawings for Data-Driven Component Segmentation". In ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/detc2022-91043.

Abstract:
We present a new data generation method to facilitate automatic machine interpretation of 2D engineering part drawings. While such drawings are a common medium for clients to encode design and manufacturing requirements, the lack of computer support to automatically interpret these drawings forces part manufacturers to resort to laborious manual interpretation, which, in turn, severely limits processing capacity. Although recent advances in trainable computer vision methods may enable automatic machine interpretation, it remains challenging to apply such methods to engineering drawings due to a lack of labeled training data. As one step toward this challenge, we propose a constrained data synthesis method to generate an arbitrarily large set of synthetic training drawings using only a handful of labeled examples. Our method is based on the randomization of the dimension sets, subject to two major constraints that ensure the validity of the synthetic drawings. The effectiveness of our method is demonstrated in the context of a binary component segmentation task with a proposed list of descriptors. An evaluation of several image segmentation methods trained on our synthetic dataset shows that our approach to data generation can boost segmentation accuracy and the generalizability of machine learning models to unseen drawings.
8

Tsai, Cheng-Han (Lance), and Jen-Yuan (James) Chang. "A New Approach to Enhance Artificial Intelligence for Robot Picking System Using Auto Picking Point Annotation". In ASME 2021 30th Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/isps2021-65218.

Abstract:
Artificial Intelligence (AI) has been widely used in domains such as self-driving, automated optical inspection, and the detection of object locations for robotic pick-and-place operations. Although the current results of using AI in these fields are good, the biggest bottleneck is the need for a vast amount of data, labeled with the corresponding answers, for sufficient training. Evidently, these efforts still require significant manpower. If the quality of the labeling is unstable, the trained AI model becomes unstable, and as a consequence so do the results. To resolve this issue, an auto annotation system is proposed in this paper, with methods including (1) highly realistic model generation with real texture, (2) a domain randomization algorithm in the simulator to automatically generate abundant and diverse images, and (3) a visibility tracking algorithm to calculate the occlusion effect objects cause on each other for different picking strategy labels. In our experiments, 10,000 images can be generated per hour, each containing multiple objects, with each object labeled in different classes based on its visibility. Instance segmentation AI models can also be trained with these methods to verify the gap between synthetic training data and real test data, with the mean average precision (mAP) reaching 70%.
ABNT, Harvard, Vancouver, APA, etc. styles
9

Murthy, Raghavendra, Marc P. Mignolet e Aly El-Shafei. "Nonparametric Stochastic Modeling of Uncertainty in Rotordynamics". In ASME Turbo Expo 2009: Power for Land, Sea, and Air. ASMEDC, 2009. http://dx.doi.org/10.1115/gt2009-59700.

Full text of the source
Abstract:
A systematic and rational approach is presented for the consideration of uncertainty in rotordynamics systems, i.e. in rotor mass and gyroscopic matrices, stiffness matrix, and bearing coefficients. The approach is based on the nonparametric stochastic modeling technique which permits the consideration of both data and modeling uncertainty. The former is induced by a lack of exact knowledge of properties such as density, Young’s modulus, etc. The latter occurs in the generation of the computational model from the physical structure as some of its features are invariably ignored, e.g. small anisotropies, or approximately represented, e.g. detailed meshing of gears. The nonparametric stochastic modeling approach, which is briefly reviewed first, introduces uncertainty in reduced order models through the randomization of their system matrices (e.g. stiffness, mass, and damping matrices of nonrotating structural dynamic systems). Here, this methodology is first extended to permit the consideration of uncertainty in symmetric and asymmetric rotor dynamic systems. Its application is next demonstrated on a symmetric rotor on linear bearings and uncertainties on the rotor stiffness (stiffness matrix) and/or mass properties (mass and gyroscopic matrices) are introduced that maintain the symmetry of the rotor. The effects of these uncertainties on the Campbell diagram, damping ratios, mode shapes, forced unbalance response, and oil whip instability threshold are analyzed. The generalization of these concepts to uncertainty in the bearing coefficients is achieved next. Finally, the consideration of uncertainty in asymmetric rotors is addressed and exemplified.
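The randomization of reduced-order system matrices can be illustrated by drawing a random symmetric positive-definite matrix with identity mean and sandwiching it between Cholesky factors of the mean matrix. Note that the sketch below uses a simple Wishart-type construction as a stand-in; the nonparametric approach reviewed in the paper (Soize's construction) prescribes a specific matrix probability law with a controlled dispersion parameter.

```python
import numpy as np

def randomize_reduced_matrix(K_mean, n_cols=None, rng=None):
    """Illustrative randomization of a symmetric positive-definite
    reduced-order matrix (e.g. stiffness): K = L G L^T with
    K_mean = L L^T and G a random SPD matrix whose mean is the identity.
    Wishart-type stand-in, not the paper's exact probability model."""
    rng = np.random.default_rng(rng)
    n = K_mean.shape[0]
    n_cols = n_cols or 10 * n            # more columns -> smaller dispersion
    L = np.linalg.cholesky(K_mean)
    A = rng.standard_normal((n, n_cols))
    G = A @ A.T / n_cols                 # E[G] = I, symmetric positive definite
    return L @ G @ L.T                   # random SPD matrix with mean K_mean

# Example: randomize a 3-DOF stiffness matrix.
K_mean = np.array([[ 2.0, -1.0,  0.0],
                   [-1.0,  2.0, -1.0],
                   [ 0.0, -1.0,  2.0]])
K_rand = randomize_reduced_matrix(K_mean, rng=0)
print(np.allclose(K_rand, K_rand.T), np.all(np.linalg.eigvalsh(K_rand) > 0))
```

The construction guarantees each sample remains a physically admissible (symmetric, positive-definite) matrix, which is the key property when propagating uncertainty through Campbell diagrams or stability analyses.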
ABNT, Harvard, Vancouver, APA, etc. styles
10

Pecci, Filippo, Ivan Stoianov e Avi Ostfeld. "Optimal Design-for-Control of Water Distribution Networks via Convex Relaxation". In 2nd WDSA/CCWI Joint Conference. València: Editorial Universitat Politècnica de València, 2022. http://dx.doi.org/10.4995/wdsa-ccwi2022.2022.14267.

Full text of the source
Abstract:
This paper considers joint design-for-control problems in water distribution networks (WDNs), where locations and operational settings of control actuators are simultaneously optimized. We study two classes of optimal design-for-control problems, with the objectives of controlling pressure and managing drinking-water quality. First, we formulate the problem of optimal placement and operation of valves in water networks with the objective of minimizing average zone pressure, while satisfying minimum service requirements. The resulting mixed-integer non-linear optimization problem includes binary variables representing the unknown valve locations, and continuous variables modelling the valves’ operational settings. In addition, water utilities aim to maintain optimal target chlorine concentrations, sufficient to prevent microbial contamination, without affecting water taste and odour, or causing the formation of disinfection by-products. We consider the problem of optimal placement and operation of chlorine booster stations, which reapply disinfectant at selected locations within WDNs. The objective is to minimize deviations from target chlorine concentrations, while satisfying lower and upper bounds on the levels of chlorine residuals. The problem formulation includes discretized linear PDEs modelling advective transport of chlorine concentrations along network pipes. Moreover, binary variables model the placement of chlorine boosters, while continuous variables include the boosters’ operational settings. Computing an exact solution for the considered mixed-integer optimization problems can be computationally impractical when large water network models are considered. We investigate scalable heuristic methods to enable the solution of optimal design-for-control problems in large WDNs. As a first step, we solve a convex relaxation of the considered mixed-integer optimization problem.
Then, starting from the relaxed solution, we implement randomization and local search to generate candidate design configurations. Each configuration is evaluated by implementing continuous optimization methods to optimize the actuators’ control settings and compute feasible solutions for the mixed-integer optimization problem. Moreover, the solution of the convex relaxation yields a lower bound on the optimal value of the original problem, resulting in worst-case estimates of the level of sub-optimality of the computed solutions. We evaluate the considered heuristics on problems of optimal placement and operation of valves and chlorine boosters in water networks. As case studies, we utilize operational water network models from the UK with varying sizes and levels of connectivity and complexity. The convex heuristics are shown to generate good-quality feasible solutions in all problem instances, with bounds on the optimality gap comparable to the level of uncertainty inherent in hydraulic and water quality models. Future work should investigate the formulation and solution of multiobjective optimization problems for the optimization of pressure and water quality, to evaluate the trade-offs between these two objectives. Moreover, the formulation and solution of robust optimization problems for the design of water networks under uncertainty is also the subject of future work.
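The randomization step over a relaxed solution can be sketched as randomized rounding: sample binary placements with probabilities proportional to the fractional values from the relaxation, then keep the best candidate. The cost function and numbers below are hypothetical toys; in the paper each candidate is instead evaluated by a full continuous optimization of the actuators' control settings.

```python
import random

def randomized_rounding(relaxed, n_select, evaluate, n_trials=200, seed=0):
    """Draw candidate binary placements from a relaxed (fractional) solution.

    Each trial samples `n_select` distinct locations with probabilities
    proportional to their relaxed values; the best candidate under
    `evaluate` (lower is better) is kept. A real workflow would follow
    this with local search and a continuous solve per candidate.
    """
    rng = random.Random(seed)
    locations = list(relaxed)
    weights = [relaxed[l] for l in locations]
    best, best_cost = None, float("inf")
    for _ in range(n_trials):
        pick = set()
        while len(pick) < n_select:
            pick.add(rng.choices(locations, weights=weights)[0])
        cost = evaluate(pick)
        if cost < best_cost:
            best, best_cost = set(pick), cost
    return best, best_cost

# Toy instance: fractional valve-placement values from a convex relaxation
# (hypothetical numbers); cost rewards picking high relaxed values.
relaxed = {"A": 0.9, "B": 0.8, "C": 0.1}
best, cost = randomized_rounding(relaxed, n_select=2,
                                 evaluate=lambda s: -sum(relaxed[l] for l in s))
print(best, round(cost, 2))
```

Comparing `cost` against the relaxation's objective value then gives the worst-case optimality-gap estimate mentioned in the abstract.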
ABNT, Harvard, Vancouver, APA, etc. styles

Organization reports on the topic "Moduli randomization"

1

Hauer, Klaus, Ilona Dutzi, Christian Werner, Jürgen M. Bauer e Phoebe Ullrich. Implementation of intervention programs specifically tailored for patients with CI in early rehabilitation during acute hospitalization: a scoping review protocol. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, outubro de 2022. http://dx.doi.org/10.37766/inplasy2022.10.0067.

Full text of the source
Abstract:
Review question / Objective: What is the current status of implementation of intervention programs for early functional rehabilitation during acute, hospital-based medical care, specifically tailored to older patients with CI, and what are the most appropriate programs or program components to support early rehabilitation in this specific population? This study combines a systematic umbrella review with a scoping review. While an umbrella review synthesizes knowledge by summarizing existing review papers, a scoping review aims to provide an overview of an emerging area, extracting concepts and identifying gaps in knowledge. The study focuses on older hospitalized adults (>65 yrs.) receiving ward-based early rehabilitation. The focus within this review is on study participants with cognitive impairment or dementia. The study targets controlled trials, independent of their randomization procedure, reporting on early functional rehabilitation during hospitalization. Trials that were conducted in different or mixed settings (e.g. inpatient and aftercare intervention) without a clear focus on hospital-based rehabilitation were excluded. The study aim is to identify the presence of CI-specific features for early rehabilitation, including: CI/dementia assessment, sub-analysis of results according to cognitive status, sample description defined by cognitive impairment, and program modules specific to geriatric patients with CI.
ABNT, Harvard, Vancouver, APA, etc. styles
2

Reimer, David, Astrid Olsen, Bent Sortkær e Rie Thomsen. Reducing inequality in access to Higher Education in Denmark: Technical report for Nextstep 1.0 intervention and data collection. Aarhus University, janeiro de 2024. http://dx.doi.org/10.7146/aul.511.

Full text of the source
Abstract:
The aim of the project Reducing Inequality in Access to Higher Education was to raise the university application rate among upper secondary students whose parents did not themselves have a university degree. The project implemented an information intervention, and this technical report outlines the procedures involved in designing that intervention, called NextStep 1.0. It covers the selection and recruitment of schools, the development of a survey for both students and counsellors, and the creation of role model videos used in the intervention. The project Reducing Inequality in Access to Higher Education also included NextStep 2.0 and a nudge experiment, which are not covered in this technical report. For readability, we call the project NextStep throughout this report. The NextStep study is funded by Independent Research Fund Denmark, Grant No. 8019-00100B, in a project running from 2019 to 2024. The target group for the intervention was upper secondary students in the spring of 2020, when they were just three to five months from graduation. In this report, we will address the following topics:
• The design of the intervention
• Randomization
• Recruitment of schools
• Data management, including linking data to register data
• Intervention videos and home page activity
• Nudges
• Appendix with transcripts of intervention material, including the survey
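As general context for the randomization topic listed above (not the report's actual procedure, which the report itself documents), school-level assignment balanced within strata can be sketched as:

```python
import random

def block_randomize(units, block_key, seed=0):
    """Randomly assign units (e.g. schools) to treatment or control,
    balanced within each stratum given by `block_key`. Illustrative
    sketch of stratified (block) randomization in general terms."""
    rng = random.Random(seed)
    blocks = {}
    for u in units:
        blocks.setdefault(block_key(u), []).append(u)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)
        cut = len(members) // 2
        for u in members[:cut]:
            assignment[u] = "treatment"
        for u in members[cut:]:
            assignment[u] = "control"
    return assignment

# Hypothetical example: 8 schools in two strata (general vs. vocational).
schools = [("school_%d" % i, "general" if i < 4 else "vocational")
           for i in range(8)]
groups = block_randomize([name for name, _ in schools], dict(schools).get)
print(sorted(groups.values()).count("treatment"))
```

Balancing within strata keeps the treatment and control arms comparable on school type even in small samples.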
ABNT, Harvard, Vancouver, APA, etc. styles