Selected scientific literature on the topic "532.001 51"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Choose the type of source:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "532.001 51".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read the abstract (summary) of the work online, if it is included in the metadata.

Journal articles on the topic "532.001 51":

1

Marcucci, Guido, C. D. Baldus, A. S. Ruppert, M. D. Radmacher, K. Mrózek, S. P. Whitman, J. E. Kolitz et al. "Overexpression of the ERG Gene Is an Adverse Prognostic Factor in Acute Myeloid Leukemia (AML) with Normal Cytogenetics (NC): A Cancer and Leukemia Group B Study (CALGB)." Blood 106, no. 11 (November 16, 2005): 335. http://dx.doi.org/10.1182/blood.v106.11.335.335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Abstract Around 45% of adults with AML have NC and are included in an intermediate-risk group. However, their 5-year (yr) overall survival (OS) rates vary between 24 and 42%, likely due to the prognostic impact of submicroscopic genetic alterations, e.g., mutations in FLT3, CEBPA, MLL and NPM genes and overexpression of BAALC. We recently showed that ERG, an ETS-Related Gene mapped to 21q22, is often overexpressed in AML with unfavorable complex karyotypes, and in a subset of NC AML (PNAS 2004;101:3915), suggesting that ERG overexpression contributes to an aggressive phenotype in AML. Here we analyzed ERG expression levels by real-time RT-PCR in pretreatment blood from 84 adults with NC AML, aged <60 yrs, treated on CALGB 9621 and characterized for BAALC expression (Blood 2003;102:1613). Patients (pts) were divided into quartiles according to ERG levels and dichotomized into 2 groups: the uppermost quartile of ERG expression values (Q4) and a group comprising the 3 lower quartiles (Q1-3), as relapse risk was significantly different for Q4 compared with Q1 (P=.024), Q2 (P=.002) and Q3 (P=.009). The complete remission rates were similar for the 2 groups (76% vs. 83%; P=.532). With a median follow-up of 5.7 yrs, Q4 pts had a worse cumulative incidence of relapse (CIR; P<.001) and OS (P=.011) than Q1-3 pts. For Q4 pts, the estimated 5-yr CIR and OS rates were, respectively, 81% and 19%, compared with 33% and 51% for Q1-3 pts. In multivariable models, high ERG expression (Q4) adversely impacted CIR (P<.001), whereas an interaction between ERG and BAALC expression (P=.013) was observed for OS, where Q4 predicted shorter survival only in low BAALC expressers (P=.002; Table 1).
Table 1. Multivariable analysis for pts divided into Q4 and Q1-3 groups according to ERG expression
Endpoint | Variable | Hazard ratio (95% CI) | P
CIR | ERG expression (Q4 vs. Q1-3) | 3.71 (1.88 to 7.31) | <.001
CIR | MLL PTD (present vs. absent) | 2.70 (1.12 to 6.52) | .027
OS | Interaction of ERG and BAALC | - | .013
OS | ERG expression (Q4 vs. Q1-3), pts with low BAALC expression | 5.40 (1.87 to 15.64) | .002
OS | ERG expression (Q4 vs. Q1-3), pts with high BAALC expression | 1.04 (0.50 to 2.16) | .922
OS | Log[WBC] | 1.35 (1.07 to 1.70) | .012
When ERG expression was evaluated in the context of pts with known FLT3 internal tandem duplication (ITD) status, including those with the very unfavorable FLT3 ITD/- genotype, i.e., lacking the FLT3 wild-type allele, a multivariable analysis showed that higher risk of relapse and death was independently predicted by both high ERG expression values (i.e., Q4) (P=.023 and P=.002, respectively) and FLT3 ITD mutations (P≤.001 and P=.002, respectively). We also used Affymetrix U133 plus 2.0 GeneChips to identify genes differentially expressed (P≤.001) between Q4 and Q1-3 pts. Q4 pts displayed a signature characterized by overexpression of 63 genes and 49 expressed sequence tags. Fourteen genes, including general (GTF2H2) and lineage-specific (BCL11A, HEMGN) transcription regulators and genes involved in cell proliferation (RAB10, GAS5) and apoptosis (IKIP, DAPK), had at least a two-fold difference in expression levels between the Q4 and Q1-3 groups. In conclusion, we show for the first time that ERG overexpression in NC AML constitutes an adverse prognostic factor and is associated with a distinct gene-expression signature.
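As a purely illustrative aside (not the authors' code or data), the quartile dichotomization described in this abstract can be sketched in a few lines; the synthetic expression values and variable names below are assumptions for demonstration only.

```python
# Illustrative only: split patients at the 75th percentile of ERG expression into a
# top-quartile group (Q4) and the rest (Q1-3), as the abstract describes.
import numpy as np

def top_quartile_mask(erg_levels):
    """True for patients whose ERG expression falls in the uppermost quartile (Q4)."""
    erg_levels = np.asarray(erg_levels, dtype=float)
    return erg_levels > np.quantile(erg_levels, 0.75)

rng = np.random.default_rng(0)
erg = rng.lognormal(mean=0.0, sigma=1.0, size=84)   # 84 patients, as in the study
q4 = top_quartile_mask(erg)
print(f"Q4: {q4.sum()} patients, Q1-3: {(~q4).sum()} patients")
```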
2

Gunderson, Leonard L., Daniel J. Sargent, Joel E. Tepper, Norman Wolmark, Michael J. O'Connell, Mirsada Begovic, Cristine Allmer et al. "Impact of T and N Stage and Treatment on Survival and Relapse in Adjuvant Rectal Cancer". Journal of Clinical Oncology 22, no. 10 (May 15, 2004): 1785–96. http://dx.doi.org/10.1200/jco.2004.08.173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Purpose: To determine survival and relapse rates by T and N stage and treatment method in five randomized phase III North American rectal adjuvant studies. Patients and Methods: Data were pooled from 3,791 eligible patients enrolled onto North Central Cancer Treatment Group (NCCTG) 79-47-51, NCCTG 86-47-51, US Gastrointestinal Intergroup 0114, National Surgical Adjuvant Breast and Bowel Project (NSABP) R01, and NSABP R02. Surgery alone (S) was the treatment arm in 179 patients. The remaining patients received adjuvant treatment as follows: irradiation (RT) alone (n = 281), RT + fluorouracil (FU) ± semustine bolus chemotherapy (CT; n = 779), RT + protracted venous infusion CT (n = 325), RT + FU ± leucovorin or levamisole bolus CT (n = 1,695), or CT alone (n = 532). Five-year follow-up was available in 94% of surviving patients, and 8-year follow-up, in 62%. Results: Overall (OS) and disease-free survival were dependent on TN stage, NT stage, and treatment method. Even among N2 patients, T substage influenced 5-year OS (T1-2, 67%; T3, 44%; T4, 37%; P < .001). Three risk groups of patients were defined: (1) intermediate (T1-2/N1, T3/N0), (2) moderately high (T1-2/N2, T3/N1, T4/N0), and (3) high (T3/N2, T4/N1, T4/N2). For intermediate-risk patients, those receiving S plus CT had 5-year OS rates of 85% (T1-2/N1) and 84% (T3/N0), which was similar to results with S plus RT plus CT (T1-2/N1, 78% to 83%; T3/N0, 74% to 80%). For moderately high-risk lesions, 5-year OS ranged from 43% to 70% with S plus CT, and 44% to 80% with S plus RT plus CT. For high-risk lesions, 5-year OS ranged from 25% to 45% with S plus CT, and 29% to 57% with S plus RT plus CT. Conclusion: Different treatment strategies may be indicated for intermediate-risk versus moderately high- or high-risk patients based on differential survival rates and rates of relapse. Use of trimodality treatment for all patients with intermediate-risk lesions may be excessive, since S plus CT resulted in 5-year OS of approximately 85%; however, 5-year disease-free survival rates with S plus CT were 78% (T1-2/N1) and 69% (T3/N0), indicating room for improvement.
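For readers who want to apply the three risk groups defined in this abstract programmatically, here is a minimal, hedged sketch (not from the paper). The stage labels are assumed to be pre-normalized strings, and T1-2/N0 combinations are intentionally left unmapped because the abstract does not assign them to a group.

```python
# Illustrative helper: map pathologic T and N stage to the risk groups defined in
# the pooled analysis. Stage labels are assumed normalized to "T1-2", "T3", "T4"
# and "N0", "N1", "N2".
RISK_GROUPS = {
    ("T1-2", "N1"): "intermediate",
    ("T3",   "N0"): "intermediate",
    ("T1-2", "N2"): "moderately high",
    ("T3",   "N1"): "moderately high",
    ("T4",   "N0"): "moderately high",
    ("T3",   "N2"): "high",
    ("T4",   "N1"): "high",
    ("T4",   "N2"): "high",
}

def rectal_risk_group(t_stage: str, n_stage: str) -> str:
    """Return the risk group for a T/N combination, or raise if it is not covered."""
    try:
        return RISK_GROUPS[(t_stage, n_stage)]
    except KeyError:
        raise ValueError(f"No risk group defined for {t_stage}/{n_stage}")

print(rectal_risk_group("T3", "N1"))  # -> "moderately high"
```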
3

Neben, Kai, Christian Giesecke, Silja Schweizer, Anthony D. Ho, and Alwin Krämer. "Centrosome aberrations in acute myeloid leukemia are correlated with cytogenetic risk profile". Blood 101, no. 1 (January 1, 2003): 289–91. http://dx.doi.org/10.1182/blood-2002-04-1188.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Abstract Genetic instability is a common feature in acute myeloid leukemia (AML). Centrosome aberrations have been described as a possible cause of aneuploidy in many human tumors. To investigate whether centrosome aberrations correlate with cytogenetic findings in AML, we examined a set of 51 AML samples by using a centrosome-specific antibody to pericentrin. All 51 AML samples analyzed displayed numerical and structural centrosome aberrations (36.0% ± 16.6%) as compared with peripheral blood mononuclear cells from 21 healthy volunteers (5.2% ± 2.0%; P < .0001). In comparison to AML samples with normal chromosome count, the extent of numerical and structural centrosome aberrations was higher in samples with numerical chromosome changes (50.5% ± 14.2% versus 34.3% ± 12.2%; P < .0001). When the frequency of centrosome aberrations was analyzed within cytogenetically defined risk groups, we found a correlation of the extent of centrosome abnormalities to all 3 risk groups (P = .0015), defined as favorable (22.5% ± 7.3%), intermediate (35.3% ± 13.1%), and adverse (50.3% ± 15.6%). These results indicate that centrosome defects may contribute to the acquisition of chromosome aberrations and thereby to the prognosis in AML.
4

Robertson, Sarah E., Nina R. Joyce, Jon A. Steingrimsson, Elizabeth A. Stuart, Denise R. Aberle, Constantine A. Gatsonis, and Issa J. Dahabreh. "Comparing Lung Cancer Screening Strategies in a Nationally Representative US Population Using Transportability Methods for the National Lung Cancer Screening Trial". JAMA Network Open 7, no. 1 (January 30, 2024): e2346295. http://dx.doi.org/10.1001/jamanetworkopen.2023.46295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Importance: The National Lung Screening Trial (NLST) found that screening for lung cancer with low-dose computed tomography (CT) reduced lung cancer–specific and all-cause mortality compared with chest radiography. It is uncertain whether these results apply to a nationally representative target population. Objective: To extend inferences about the effects of lung cancer screening strategies from the NLST to a nationally representative target population of NLST-eligible US adults. Design, Setting, and Participants: This comparative effectiveness study included NLST data from US adults at 33 participating centers enrolled between August 2002 and April 2004 with follow-up through 2009, along with National Health Interview Survey (NHIS) cross-sectional household interview survey data from 2010. Eligible participants were adults aged 55 to 74 years, and were current or former smokers with at least 30 pack-years of smoking (former smokers were required to have quit within the last 15 years). Transportability analyses combined baseline covariate, treatment, and outcome data from the NLST with covariate data from the NHIS and reweighted the trial data to the target population. Data were analyzed from March 2020 to May 2023. Interventions: Low-dose CT or chest radiography screening with a screening assessment at baseline, then yearly for 2 more years. Main Outcomes and Measures: For the outcomes of lung cancer–specific and all-cause death, mortality rates, rate differences, and ratios were calculated at a median (25th percentile and 75th percentile) follow-up of 5.5 (5.2-5.9) years for lung cancer–specific mortality and 6.5 (6.1-6.9) years for all-cause mortality. Results: The transportability analysis included 51 274 NLST participants and 685 NHIS participants representing the target population (of approximately 5 700 000 individuals after survey-weighting). Compared with the target population, NLST participants were younger (median [25th percentile and 75th percentile] age, 60 [57 to 65] years vs 63 [58 to 67] years), had fewer comorbidities (eg, heart disease, 6551 of 51 274 [12.8%] vs 1 025 951 of 5 739 532 [17.9%]), and were more educated (bachelor’s degree or higher, 16 349 of 51 274 [31.9%] vs 859 812 of 5 739 532 [15.0%]). In the target population, for lung cancer–specific mortality, the estimated relative rate reduction was 18% (95% CI, 1% to 33%) and the estimated absolute rate reduction with low-dose CT vs chest radiography was 71 deaths per 100 000 person-years (95% CI, 4 to 138 deaths per 100 000 person-years); for all-cause mortality, the estimated relative rate reduction was 6% (95% CI, −2% to 12%). In the NLST, for lung cancer–specific mortality, the estimated relative rate reduction was 21% (95% CI, 9% to 32%) and the estimated absolute rate reduction was 67 deaths per 100 000 person-years (95% CI, 27 to 106 deaths per 100 000 person-years); for all-cause mortality, the estimated relative rate reduction was 7% (95% CI, 0% to 12%). Conclusions and Relevance: Estimates of the comparative effectiveness of low-dose CT screening compared with chest radiography in a nationally representative target population were similar to those from unweighted NLST analyses, particularly on the relative scale. Increased uncertainty around effect estimates for the target population reflects large differences in the observed characteristics of trial participants and the target population.
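The reweighting step described above ("reweighted the trial data to the target population") is commonly implemented with inverse-odds-of-participation weights. The sketch below illustrates that general approach on synthetic data; it is not the authors' code, and the covariates, model, and simplified estimator shown here are assumptions.

```python
# Hedged sketch of inverse-odds-of-participation weighting, one common way to
# reweight trial data toward a target population; synthetic data only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trial, n_target = 1000, 500

# Stacked covariates X for trial participants (S=1) and the target-population sample (S=0).
X = pd.DataFrame({
    "age": np.concatenate([rng.normal(60, 5, n_trial), rng.normal(63, 5, n_target)]),
    "pack_years": np.concatenate([rng.normal(45, 10, n_trial), rng.normal(50, 12, n_target)]),
})
S = np.concatenate([np.ones(n_trial), np.zeros(n_target)])

# Model P(S=1 | X); trial rows get weight (1-p)/p, which shifts the trial sample
# toward the covariate distribution of the target population.
p = LogisticRegression(max_iter=1000).fit(X, S).predict_proba(X)[:, 1]
weights = np.where(S == 1, (1 - p) / p, 0.0)

# Weighted arm contrast within the trial (arm and outcome are synthetic).
arm = rng.integers(0, 2, n_trial + n_target)   # 1 = low-dose CT, 0 = chest radiography
y = rng.binomial(1, 0.02 - 0.004 * arm)        # synthetic mortality indicator

def weighted_rate(a):
    mask = (S == 1) & (arm == a)
    return np.average(y[mask], weights=weights[mask])

print("weighted rate difference (CT - radiography):", weighted_rate(1) - weighted_rate(0))
```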
5

Morishima, Satoko, Koichi Kashiwase, Fumihiro Azuma, Toshio Yabe, Aiko Sato-Otsubo, Seishi Ogawa, Takashi Shiina et al. "Impact Of HLA Allele and Haplotype On Acute Graft-Versus-Host Disease and Survival After Hematopoietic Stem Cell Transplantation From Unrelated Donor". Blood 122, no. 21 (November 15, 2013): 708. http://dx.doi.org/10.1182/blood.v122.21.708.708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Abstract Background: Although the effect of allele matching of each HLA locus on the clinical outcome of unrelated hematopoietic stem cell transplantation (UR-HSCT) has been characterized, the effect of the HLA allele or haplotype (HP) itself has not been well elucidated. The HLA region is recognized as one of the most important genetic regions associated with human disease, especially autoimmune and infectious diseases. We therefore hypothesized that the immunological response and the clinical outcome following UR-HSCT depend not only on HLA allele matching but also on the HLA allele itself or the HLA-linked genetic background of the patient and donor. Methods: We analyzed 5237 patients who received T-cell-replete bone marrow transplants from serologically HLA-A, -B, and -DR antigen-matched unrelated donors facilitated by the Japan Marrow Donor Program between 1993 and 2008. HLA-A, -B, -C, -DRB1, -DQB1, and -DPB1 alleles were retrospectively genotyped. HLA allele frequencies were calculated by direct counting, and multi-locus HLA HP frequencies were estimated using the maximum-likelihood method with the EM algorithm of the PyPop software. Patients were stratified by HLA-matching status into a full match (FM) group (12/12-matched, n=733) and a mismatch (MM) group (≤11/12-matched, n=4504). The effect of HLA alleles or HPs with a frequency greater than 5% on acute graft-versus-host disease (aGVHD) and overall survival (OS) was analyzed using a multivariate competing risk regression model. The results are expressed as hazard ratios (HRs) comparing the specific allele/haplotype-positive group to the -negative group. Results: For each locus, the number of HLA alleles significantly associated with aGVHD (p<.01) in the MM group was as follows: HLA-A (1 of 10), HLA-B (2 of 17), HLA-C (3 of 15), HLA-DRB1 (1 of 17), HLA-DQB1 (1 of 11) and HLA-DPB1 (0 of 10). In contrast, only one HLA-DPB1 allele was significantly associated with aGVHD in the FM group. The following patient and donor HLA alleles were significantly associated with a reduced risk of aGVHD in the MM group: HLA-A*33:03, C*14:03, B*44:03, DRB1*13:02, and DQB1*06:04. These alleles are located on a common HP (HP-P2) in the Japanese population, which showed a similar effect on grade II-IV (n=534; HR 0.79; p=.001) and III-IV (HR 0.70; p=.004) aGVHD. Strong linkage disequilibrium (LD) hampered determination of the allele responsible for the reduced risk of aGVHD. A significant association with an increased risk of grade III-IV aGVHD and a poor OS was observed for patient HLA-B*51:01 (n=756; aGVHD: HR 1.51, p<.001; OS: HR 1.19, p=.003) and donor HLA-B*51:01 (n=773; HR 1.46, p<.001; HR 1.15, p=.015), patient HLA-C*14:02 (n=599; HR 1.55, p<.001; HR 1.19, p=.007), and donor HLA-C*15:02 (n=226; HR 1.62, p<.001; HR 1.38, p=.001) in the MM group. HLA-B*51:01 demonstrated strong positive LD with HLA-C*14:02 and -C*15:02. A significant association with an increased risk of grade III-IV aGVHD and a poor OS was also observed for patient HLA-C*14:02-B*51:01 (n=586; HR 1.52, p<.001; HR 1.19; p=.007) and donor HLA-C*15:02-B*51:01 (n=106; HR 1.98, p<.001; HR 1.53, p=.001). HLA-DPB1*04:02 was the only allele associated with an increased risk of grade II-IV aGVHD in the FM group (n=173; HR 1.64; p=.001). HLA-DPB1*04:02 was linked to two distinctive extended HPs, and the effect of these HPs on aGVHD was stronger in patients with HLA-DRB1*04:05-DQB1*04:01-DPB1*04:02 (n=60; HR 2.15; p<.001) than in those with HLA-DRB1*01:01-DQB1*05:01-DPB1*04:02 (n=125; HR 1.40; p=.035). HLA-DRB1*04:05-DQB1*04:01-DPB1*04:02 was also significantly associated with poor OS in the FM group (HR 1.65; p=.01). HP-P2 showed a tendency to reduce the risk of grade II-IV aGVHD in the FM group (n=119; HR 0.70; p=.075). Conclusion: Patient- and donor-specific HLA alleles and HPs themselves contribute to the risk of aGVHD and survival after UR-HSCT. In addition to HLA-B*51:01 being strongly associated with Behçet's disease, we found this allele to be associated with an increased risk of aGVHD in UR-HSCT. Given that different HLA alleles and HPs were identified in the FM and MM groups, multiple mechanisms, including HLA-mismatch-induced alloreactivity, might be involved in the development or exacerbation of aGVHD. These findings suggest that, in addition to HLA-matching status, consideration of patient and donor HLA alleles and haplotypes will provide predictive risk factors for UR-HSCT. Disclosures: No relevant conflicts of interest to declare.
6

Prochaska, Judith J., Erin A. Vogel, Amy Chieng, Matthew Kendra, Michael Baiocchi, Sarah Pajarito, and Athena Robinson. "A Therapeutic Relational Agent for Reducing Problematic Substance Use (Woebot): Development and Usability Study". Journal of Medical Internet Research 23, no. 3 (March 23, 2021): e24850. http://dx.doi.org/10.2196/24850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Background Misuse of substances is common, can be serious and costly to society, and often goes untreated due to barriers to accessing care. Woebot is a mental health digital solution informed by cognitive behavioral therapy and built upon an artificial intelligence–driven platform to deliver tailored content to users. In a previous 2-week randomized controlled trial, Woebot alleviated depressive symptoms. Objective This study aims to adapt Woebot for the treatment of substance use disorders (W-SUDs) and examine its feasibility, acceptability, and preliminary efficacy. Methods American adults (aged 18-65 years) who screened positive for substance misuse without major health contraindications were recruited from online sources and flyers and enrolled between March 27 and May 6, 2020. In a single-group pre/postdesign, all participants received W-SUDs for 8 weeks. W-SUDs provided mood, craving, and pain tracking and modules (psychoeducational lessons and psychotherapeutic tools) using elements of dialectical behavior therapy and motivational interviewing. Paired samples t tests and McNemar nonparametric tests were used to examine within-subject changes from pre- to posttreatment on measures of substance use, confidence, cravings, mood, and pain. Results The sample (N=101) had a mean age of 36.8 years (SD 10.0), and 75.2% (76/101) of the participants were female, 78.2% (79/101) were non-Hispanic White, and 72.3% (73/101) were employed. Participants’ W-SUDs use averaged 15.7 (SD 14.2) days, 12.1 (SD 8.3) modules, and 600.7 (SD 556.5) sent messages. About 94% (562/598) of all completed psychoeducational lessons were rated positively. From treatment start to end, in-app craving ratings were reduced by half (87/101, 86.1% reporting cravings in the app; odds ratio 0.48, 95% CI 0.32-0.73). Posttreatment assessment completion was 50.5% (51/101), with better retention among those who initially screened higher on substance misuse. From pre- to posttreatment, confidence to resist urges to use substances significantly increased (mean score change +16.9, SD 21.4; P<.001), whereas past month substance use occasions (mean change −9.3, SD 14.1; P<.001) and scores on the Alcohol Use Disorders Identification Test-Concise (mean change −1.3, SD 2.6; P<.001), 10-item Drug Abuse Screening Test (mean change −1.2, SD 2.0; P<.001), Patient Health Questionnaire-8 item (mean change 2.1, SD 5.2; P=.005), Generalized Anxiety Disorder-7 (mean change −2.3, SD 4.7; P=.001), and cravings scale (68.6% vs 47.1% moderate to extreme; P=.01) significantly decreased. Most participants would recommend W-SUDs to a friend (39/51, 76%) and reported receiving the service they desired (41/51, 80%). Fewer felt W-SUDs met most or all of their needs (22/51, 43%). Conclusions W-SUDs was feasible to deliver, engaging, and acceptable and was associated with significant improvements in substance use, confidence, cravings, depression, and anxiety. Study attrition was high. Future research will evaluate W-SUDs in a randomized controlled trial with a more diverse sample and with the use of greater study retention strategies. Trial Registration ClinicalTrials.gov NCT04096001; http://clinicaltrials.gov/ct2/show/NCT04096001.
7

Vollbrecht, Hanna, Vineet Arora, Sebastian Otero, Kyle Carey, David Meltzer, and Valerie G. Press. "Evaluating the Need to Address Digital Literacy Among Hospitalized Patients: Cross-Sectional Observational Study". Journal of Medical Internet Research 22, no. 6 (June 4, 2020): e17519. http://dx.doi.org/10.2196/17519.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Background Technology is a potentially powerful tool to assist patients with transitions of care during and after hospitalization. Patients with low health literacy who are predisposed to poor health outcomes are particularly poised to benefit from such interventions. However, this population may lack the ability to effectively engage with technology. Although prior research studied the role of health literacy in technology access/use among outpatients, hospitalized patient populations have not been investigated in this context. Further, with the rapid uptake of technology, access may no longer be pertinent, and differences in technological capabilities may drive the current digital divide. Thus, characterizing the digital literacy of hospitalized patients across health literacy levels is paramount. Objective We sought to determine the relationship between health literacy level and technological access, use, and capability among hospitalized patients. Methods Adult inpatients completed a technology survey that asked about technology access/use and online capabilities as part of an ongoing quality of care study. Participants’ health literacy level was assessed utilizing the 3-question Brief Health Literacy Screen. Descriptive statistics, bivariate chi-squared analyses, and multivariate logistic regression analyses (adjusting for age, race, gender, and education level) were performed. Using Bonferroni correction for the 18 tests, the threshold P value for significance was <.003. Results Among 502 enrolled participants, the mean age was 51 years, 71.3% (358/502) were African American, half (265/502, 52.8%) were female, and half (253/502, 50.4%) had at least some college education. Over one-third (191/502, 38.0%) of participants had low health literacy. The majority of participants owned devices (owned a smartphone: 116/173, 67.1% low health literacy versus 235/300, 78.3% adequate health literacy, P=.007) and had used the Internet previously (143/189, 75.7% low health literacy versus 281/309, 90.9% adequate health literacy, P<.001). Participants with low health literacy were more likely to report needing help performing online tasks (133/189, 70.4% low health literacy versus 135/303, 44.6% adequate health literacy, P<.001). In the multivariate analysis, when adjusting for age, race, gender, and education level, we found that low health literacy was not significantly associated with a lower likelihood of owning smartphones (OR: 0.8, 95% CI 0.5-1.4; P=.52) or using the internet ever (OR: 0.5, 95% CI 0.2-0.9; P=.02). However, low health literacy remained significantly associated with a higher likelihood of needing help performing any online task (OR: 2.2, 95% CI 1.3-3.6; P=.002). Conclusions The majority of participants with low health literacy had access to technological devices and had used the internet previously, but they were unable to perform online tasks without assistance. The barriers patients face in using online health information and other health information technology may be more related to online capabilities rather than to technology access. When designing and implementing technological tools for hospitalized patients, it is important to ensure that patients across digital literacy levels can both understand and use them.
8

Li, Shan, Xianglu Zhu, Lin Zhu, Xin Hu, and Shujuan Wen. "Associations between serum carotenoid levels and the risk of non-Hodgkin lymphoma: a case–control study". British Journal of Nutrition 124, no. 12 (April 30, 2020): 1311–19. http://dx.doi.org/10.1017/s000711452000152x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Abstract Limited studies have investigated the effects of serum carotenoids on the risk of non-Hodgkin lymphoma (NHL), and the findings have been inconclusive. This study aims to assess the association between serum total or specific carotenoid levels and NHL risk. This 1:1 matched, hospital-based case–control study enrolled 512 newly diagnosed (within 1 month) NHL patients and 512 healthy controls who were matched by age (±5 years) and sex in Urumqi, China. Serum carotenoid levels were measured by HPLC. Conditional logistic regression showed that higher serum total carotenoid levels and their subtypes (e.g. α-carotene, β-carotene, β-cryptoxanthin and lycopene) were dose-dependently associated with decreased NHL risk. The multivariable-adjusted OR and their 95 % CI for NHL risk for quartile 4 (v. quartile 1) were 0·31 (95 % CI 0·22, 0·48; P for trend < 0·001) for total carotenoids, 0·52 (95 % CI 0·33, 0·79; P for trend: 0·003) for α-carotene, 0·63 (95 % CI 0·42, 0·94; P for trend: 0·031) for β-carotene, 0·73 (95 % CI 0·49, 1·05; P for trend: 0·034) for β-cryptoxanthin and 0·51 (95 % CI 0·34, 0·75; P for trend: 0·001) for lycopene. A null association was observed between serum lutein + zeaxanthin and NHL risk (OR 0·89, 95 % CI 0·57, 1·38; P for trend: 0·556). Significant interactions were observed after stratifying according to smoking status, and inverse associations were more evident among current smokers than past or never smokers for total carotenoids, α-carotene and lycopene (P for heterogeneity: 0·047, 0·042 and 0·046). This study indicates that higher serum carotenoid levels might be inversely associated with NHL risk, especially among current smokers.
9

Shi, Qiuling, Xin Shelley Wang, James M. Reuben, Evan N. Cohen, Loretta A. Williams, Tito R. Mendoza, Mary L. Sailors, Venus M. Ilagan, and Charles S. Cleeland. "Chemotherapy-induced peripheral neuropathy in multiple myeloma patients undergoing maintenance therapy." Journal of Clinical Oncology 31, no. 15_suppl (May 20, 2013): 9646. http://dx.doi.org/10.1200/jco.2013.31.15_suppl.9646.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
9646 Background: Three months after autologous stem cell transplant (AuSCT), a percentage of multiple myeloma (MM) patients on maintenance therapy continue to experience a complex of symptoms related to peripheral neuropathy. This longitudinal study examined these self-reported neuropathy symptoms and identified circulating inflammatory markers associated with a high neuropathy-related symptom burden. Methods: MM patients (N=51) rated symptom severity on a 0-10 scale via the M. D. Anderson Symptom Inventory (MDASI) weekly from 3 to 9 months post AuSCT during maintenance therapy. Patients also rated pain in the hands or feet at routine clinic visits. A panel of pro- and anti-inflammatory cytokines, receptors, and chemokines was evaluated in serum samples by Luminex. Mixed-effect analysis was used to describe the changes in cytokines and symptom outcomes across time. Trajectory analysis identified patients who persistently reported higher or lower symptom severity over time. Results: During the study period, there was no significant reduction in pain in general or in the hands/feet, or change in neuropathic symptoms such as numbness/tingling and muscle weakness. Among the third (33%) of patients who were consistently in high pain (mean 5.5), MIP-1a (p=.001) and MCP-1 (p=.032) showed a significant decrease. Approximately 40% had persistently high numbness/tingling (mean 5.2) across the observation period. Compared to patients in the low-symptom group, this high-numbness group had significantly higher IL-6 (p=.019) and TNF-alpha (p=.006). High muscle weakness (mean 3.1) was reported by 69% of the sample; this group had significantly higher CRP (p=.005) and TNF-alpha (p=.001). Conclusions: This is the first longitudinal study to track persistent neuropathy-related symptoms in MM patients post AuSCT. Approximately one third reported painful neuropathy, whether from induction therapy or ongoing maintenance therapy. High levels of these neuropathy symptoms were associated with higher levels of specific pro-inflammatory markers. This study provides a rationale for examining the effectiveness of anti-inflammatory, mechanism-driven interventions for peripheral neuropathy in this cohort of MM patients.
10

Liu, Bennett M., Kelley Paskov, Jack Kent, Maya McNealis, Soren Sutaria, Olivia Dods, Christopher Harjadi, Nate Stockham, Andrey Ostrovsky, and Dennis P. Wall. "Racial and Ethnic Disparities in Geographic Access to Autism Resources Across the US". JAMA Network Open 6, no. 1 (January 23, 2023): e2251182. http://dx.doi.org/10.1001/jamanetworkopen.2022.51182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Importance: While research has identified racial and ethnic disparities in access to autism services, the size, extent, and specific locations of these access gaps have not yet been characterized on a national scale. Mapping comprehensive national listings of autism health care services together with the prevalence of autistic children of various races and ethnicities and evaluating geographic regions defined by localized commuting patterns may help to identify areas within the US where families who belong to minoritized racial and ethnic groups have disproportionally lower access to services. Objective: To evaluate differences in access to autism health care services among autistic children of various races and ethnicities within precisely defined geographic regions encompassing all serviceable areas within the US. Design, Setting, and Participants: This population-based cross-sectional study was conducted from October 5, 2021, to June 3, 2022, and involved 530 965 autistic children in kindergarten through grade 12. Core-based statistical areas (CBSAs; defined as areas containing a city and its surrounding commuter region), the Civil Rights Data Collection (CRDC) data set, and 51 071 autism resources (collected from October 1, 2015, to December 18, 2022) geographically distributed into 912 CBSAs were combined and analyzed to understand variation in access to autism health care services among autistic children of different races and ethnicities. Six racial and ethnic categories (American Indian or Alaska Native, Asian, Black or African American, Hispanic or Latino, Native Hawaiian or other Pacific Islander, and White) assigned by the US Department of Education were included in the analysis. Main Outcomes and Measures: A regularized least-squares regression analysis was used to measure differences in nationwide resource allocation between racial and ethnic groups. The number of autism resources allocated per autistic child was estimated based on the child's racial and ethnic group. To evaluate how the CBSA population size may have altered the results, the least-squares regression analysis was run on CBSAs divided into metropolitan (>50 000 inhabitants) and micropolitan (10 000-50 000 inhabitants) groups. A Mann-Whitney U test was used to compare the model-estimated ratio of autism resources to autistic children among specific racial and ethnic groups comprising the proportions of autistic children in each CBSA. Results: Among 530 965 autistic children aged 5 to 18 years, 83.9% were male and 16.1% were female; 0.7% of children were American Indian or Alaska Native, 5.9% were Asian, 14.3% were Black or African American, 22.9% were Hispanic or Latino, 0.2% were Native Hawaiian or other Pacific Islander, 51.7% were White, and 4.2% were of 2 or more races and/or ethnicities. At a national scale, American Indian or Alaska Native autistic children (β = 0; 95% CI, 0-0; P = .01) and Hispanic autistic children (β = 0.02; 95% CI, 0-0.06; P = .02) had significant disparities in access to autism resources in comparison with White autistic children. When evaluating the proportion of autistic children in each racial and ethnic group, areas in which Black autistic children (>50% of the population: β = 0.05; <50% of the population: β = 0.07; P = .002) or Hispanic autistic children (>50% of the population: β = 0.04; <50% of the population: β = 0.07; P < .001) comprised greater than 50% of the total population of autistic children had significantly fewer resources than areas in which Black or Hispanic autistic children comprised less than 50% of the total population. Comparing metropolitan vs micropolitan CBSAs revealed that in micropolitan CBSAs, Black autistic children (β = 0; 95% CI, 0-0; P < .001) and Hispanic autistic children (β = 0; 95% CI, 0-0.02; P < .001) had the greatest disparities in access to autism resources compared with White autistic children. In metropolitan CBSAs, American Indian or Alaska Native autistic children (β = 0; 95% CI, 0-0; P = .005) and Hispanic autistic children (β = 0.01; 95% CI, 0-0.06; P = .02) had the greatest disparities compared with White autistic children. Conclusions and Relevance: In this study, autistic children from several minoritized racial and ethnic groups, including Black and Hispanic autistic children, had access to significantly fewer autism resources than White autistic children in the US. This study pinpointed the specific geographic regions with the greatest disparities, where increases in the number and types of treatment options are warranted. These findings suggest that a prioritized response strategy to address these racial and ethnic disparities is needed.
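One plausible reading of the "regularized least-squares regression" described above is a ridge regression of each region's resource count on the number of autistic children per racial and ethnic group, whose coefficients estimate resources per child by group. The sketch below illustrates that interpretation on synthetic CBSA-level data; the column names, parameters, and numbers are assumptions, not the study's data or code.

```python
# Hedged sketch: ridge regression of regional resource counts on group-level counts
# of autistic children, so each coefficient reads as "estimated resources per
# autistic child" for that group. Synthetic data only.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
groups = ["aian", "asian", "black", "hispanic", "nhpi", "white"]

n_cbsa = 912  # number of core-based statistical areas, as in the abstract
counts = pd.DataFrame(
    rng.poisson(lam=[20, 80, 200, 320, 5, 700], size=(n_cbsa, len(groups))),
    columns=groups,
)
true_beta = np.array([0.01, 0.05, 0.05, 0.03, 0.01, 0.08])        # synthetic ground truth
resources = counts.values @ true_beta + rng.normal(0, 5, n_cbsa)   # resources per CBSA

# No intercept, so coefficients stay interpretable as resources per child.
model = Ridge(alpha=1.0, fit_intercept=False).fit(counts, resources)
for g, b in zip(groups, model.coef_):
    print(f"{g:9s} {b: .3f} estimated resources per autistic child")
```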

Theses on the topic "532.001 51":

1

Andujar, Moreno Rabindranath. "Variational mechanics and stochastic methods applied to structural design". Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/286230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
This thesis explores a very well understood area of physics: computational structural dynamics. The aim is to stretch its boundaries by merging it with another very well established discipline, structural design and optimization. In the recent past both of them have made significant advances, often unaware of each other for different reasons. It is the aim of this thesis to serve as a bridging tool between the realms of physics and engineering. The work is divided into three parts: variational mechanics, structural optimization and implementation. The initial part deals with deterministic variational mechanics. Two chapters are dedicated to probing the applicability of energy functionals in structural analysis: first, by mapping the state of the art regarding the vast field of numerical methods for structural dynamics; second, by using those functionals as a tool to compare the methods. It is shown how, once the methods are grouped according to the kind of differential equations they integrate, it is easy to establish a framework for benchmarking. Moreover, if this comparison is made using the balance of energy, the only parameter that needs to be observed is a relatively easy-to-obtain scalar value. The second part, where structural optimization is treated, also has two chapters. In the first one, the non-deterministic tools employed by structural designers are presented and examined. An important distinction between tools for optimization and tools for analysis is highlighted. In the following chapter, a framework for the objective characterization of structural systems is developed. This characterization is made on the basis of the thermodynamic and energetic characteristics of the system. Finally, it is successfully applied to drive a sample simulated annealing algorithm. In the third part the resulting code employed in the numerical experiments is shown and explained. This code was developed by means of a visual programming environment and allows for the fast implementation of programs within a consolidated CAD application. It was used to interconnect simultaneously with other applications to seamlessly share simulation data and process it. Those applications were, respectively, a spreadsheet and a general-purpose finite element package.
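As a generic illustration of the final step mentioned in the abstract (an energy-based characterization used "to drive a sample simulated annealing algorithm"), here is a minimal simulated annealing sketch; the toy objective and parameters are assumptions, not the thesis's structural energy functional.

```python
# Generic simulated-annealing loop, shown only to illustrate how a scalar "energy"
# measure can drive a structural optimization search.
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
    """Minimize energy(x) from x0 with a geometric cooling schedule."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        x_new = neighbor(x)
        e_new = energy(x_new)
        # Always accept improvements; accept worse states with Boltzmann probability.
        if e_new < e or random.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

def toy_energy(area):
    # Made-up trade-off: compliance-like term (~1/area) vs. material cost (~area).
    return 1.0 / area + 0.2 * area

def toy_neighbor(area):
    return max(1e-3, area + random.uniform(-0.1, 0.1))

best_area, best_value = simulated_annealing(toy_energy, toy_neighbor, x0=1.0)
print(f"best area {best_area:.3f}, energy {best_value:.3f}")
```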
2

Starnini, Michele. "Time-varying networks approach to social dynamics : from individual to collective behavior". Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
The data revolution experienced by social science has revealed complex patterns of interactions in human dynamics, such as the heterogeneity and burstiness of social contacts. The recently uncovered temporal dimension of social interactions calls for a renewed effort in the analysis and modeling of empirical time-varying networks. This thesis contributes to pursue this program through a twofold track: the modeling of dynamical social systems and the study of the impact of temporally evolving substrates on dynamical processes running on top of them. Firstly, we introduce some basic concepts and definitions of the time-varying networks formalism, and we present and analyze some empirical data of face-to-face interactions, discussing their main statistical properties, such as the bursty dynamics of social interactions. The main body of the exposition is then split into two parts. In the first part we focus on the modeling of social dynamics, with a twofold aim: reproduction of empirical data properties and analytic treatment of the models considered. We present and discuss the behavior of a simple model able to replicate the main statistical properties of empirical face-to-face interactions, at different levels of aggregation, such as individual, group and collective scales. The model considers individuals involved in a social gathering as performing a random walk in space, and it is based on the concept of social "attractiveness": socially attractive people (due to their status or role in the gathering) are more likely to make people stop around them, so they start to interact. We also devote attention to the analytic study of the activity-driven model, a model aimed to capture the relation between the dynamics of time-varying networks and the topological properties of their corresponding aggregated social networks. Through a mapping to the hidden variable model, we obtained analytic expressions for both topological properties of the time-integrated networks and connectivity properties of the evolving network, as a function of the integration time and the form of the activity potential. In the second part of the thesis we study the behavior of diffusive processes taking place on temporal networks constituted by empirical face-to-face interaction data. We first consider random walks, and thanks to the different randomization strategies we introduced, we are able to single out the crucial role of temporal correlations in slowing down the random walk exploration. Then we address spreading dynamics, focusing on the case of a simple SI model taking place on temporal networks, complemented by the study of the impact of different immunization strategies on the infection outbreak. We tackle in particular the effect of the length of the temporal window used to gather information in order to design the immunization strategy, finding that a limited amount of information about the contact patterns is sufficient to identify the individuals to immunize so as to maximize the effect of the vaccination protocol. Our work opens interesting perspectives for further research, in particular regarding the possibility to extend the time-varying networks approach to multiplex systems, composed of several layers of interrelated networks, in which the same individuals interact with one another on different layers. Empirical analysis of multiplex networks is still in its infancy; indeed, while the data mining of large, social, multi-layered systems is mature enough to be exploited, it calls for an effort in analysis and modeling. Our understanding of the impact of the temporal dimension of networked structures on the behavior of dynamical processes running on top of them can be applied to more complex multi-layered systems, with particular attention to the effect of temporal correlation between the layers in the diffusion dynamics.
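The activity-driven model analyzed in the thesis has a standard formulation in the literature: each node i has an activity a_i drawn from a heavy-tailed distribution, and at every time step an active node creates m links to randomly chosen nodes. The sketch below generates a time-aggregated network under that formulation; parameter values are illustrative assumptions, not those used in the thesis.

```python
# Standard formulation of the activity-driven model: node i has activity a_i drawn
# from a power-law distribution on [eps, 1]; at each time step an active node fires
# m links to uniformly chosen nodes.
import random

def activity_driven_aggregate(n=1000, m=2, gamma=2.1, eps=1e-3, steps=200, dt=1.0, seed=0):
    """Return the edge set of the time-aggregated network after `steps` snapshots."""
    random.seed(seed)
    # Inverse-transform sampling of a_i with density proportional to a^(-gamma) on [eps, 1].
    c = eps ** (1 - gamma)
    activities = [(c + random.random() * (1 - c)) ** (1 / (1 - gamma)) for _ in range(n)]
    edges = set()
    for _ in range(steps):
        for i, a in enumerate(activities):
            if random.random() < a * dt:                    # node i becomes active
                for j in random.sample(range(n), m):        # and contacts m random nodes
                    if j != i:
                        edges.add((min(i, j), max(i, j)))   # store as an undirected edge
    return edges

agg = activity_driven_aggregate()
print(f"{len(agg)} distinct edges in the time-aggregated network")
```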
3

Frawley, John Thomas. "A historical and theological examination of the resurrections of the saints in Matthew 27:51-53". Dallas, TX: Dallas Theological Seminary, 2008. http://dx.doi.org/10.2986/tren.001-1251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sáez Pous, Xavier. "Particle-in-cell algorithms for plasma simulations on heterogeneous architectures". Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/381258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
During the last two decades, High-Performance Computing (HPC) has grown rapidly in performance by improving single-core processors at the cost of a similar growth in power consumption. The single-core processor improvement has led many scientists to exploit mainly the process-level parallelism in their codes. However, the performance of HPC systems is becoming increasingly limited by power consumption and power density, which have become a primary concern for the design of new computer systems. As a result, new supercomputers are designed based on the power efficiency provided by new homogeneous and heterogeneous architectures. The growth in computational power has introduced a new approach to science, Computational Physics. Its impact on the study of nuclear fusion and plasma physics has been very significant. This is because the experiments are difficult and expensive to perform, whereas computer simulations of plasma are an efficient way to progress. Particle-In-Cell (PIC) is one of the most used methods to simulate plasma. The improvement in processing power has enabled an increase in the size and complexity of PIC simulations. Most PIC codes have been designed with a strong emphasis on the physics and have traditionally included only process-level parallelism. This approach has not taken advantage of multiprocessor platforms. Therefore, these codes exploit the new computing platforms inefficiently and, as a consequence, they are still limited to using simplified models. The aim of this thesis is to incorporate into a PIC code the latest technologies available in computer science in order to take advantage of the upcoming multiprocessor supercomputers. This will enable an improvement in the simulations, either by introducing more physics in the code or by incorporating more detail to the simulations. This thesis analyses a PIC code named EUTERPE on different computing platforms. EUTERPE is a production code used to simulate fusion plasma instabilities in fusion reactors. It has been implemented for traditional HPC clusters and it had been parallelized prior to this work using only the Message Passing Interface (MPI). Our study of its scalability has reached up to tens of thousands of processors, which is several orders of magnitude higher than the scalability achieved when this thesis was initiated. This thesis also describes the strategies adopted for porting a PIC code to a multi-core architecture, such as introducing thread-level parallelism, distributing the work among different computing devices, and developing a new thread-safe solver. These strategies have been evaluated by applying them to the EUTERPE code. With respect to heterogeneous architectures, it has been possible to port this kind of plasma physics code by rewriting part of the code or by using a programming model called OmpSs. This programming model is specially designed to make this computing power easily available to scientists without requiring expert knowledge of computing. Last but not least, this thesis should not be seen as the end of a road, but rather as the beginning of an effort to extend the physics simulated in fusion codes by exploiting available HPC resources.
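For readers unfamiliar with the particle-in-cell method that EUTERPE implements, the following is a minimal, generic 1D electrostatic PIC cycle (charge deposition, field solve, particle push) on synthetic particles. It only illustrates the general scheme; it bears no relation to EUTERPE's gyrokinetic algorithms or its MPI/OmpSs parallelization.

```python
# Minimal 1D electrostatic particle-in-cell (PIC) cycle on a periodic grid,
# purely for illustration of the method's structure.
import numpy as np

def pic_step(x, v, grid_n, length, dt, qm=-1.0):
    """Advance particle positions x and velocities v by one PIC step (periodic box)."""
    dx = length / grid_n
    # 1) Deposit charge on the grid with nearest-grid-point weighting.
    cells = np.floor(x / dx).astype(int) % grid_n
    rho = np.bincount(cells, minlength=grid_n).astype(float) / dx
    rho -= rho.mean()                           # neutralizing background
    # 2) Solve Poisson's equation in Fourier space and get E = -d(phi)/dx.
    k = 2 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E_grid = np.real(np.fft.ifft(-1j * k * phi_k))
    # 3) Gather the field at the particles and push them.
    v = v + qm * E_grid[cells] * dt
    x = (x + v * dt) % length
    return x, v

# Tiny demo with random particles in a periodic box of length 2*pi.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 10000)
v = rng.normal(0, 1, 10000)
for _ in range(10):
    x, v = pic_step(x, v, grid_n=64, length=2 * np.pi, dt=0.05)
print("mean kinetic energy:", 0.5 * np.mean(v ** 2))
```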
5

Herrera Ramírez, Jorge A. "Diseño e implementación de un sistema multiespectral en el rango ultravioleta, visible e infrarrojo: aplicación al estudio y conservación de obras de arte". Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/277543.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Multispectral systems have several applications and can be implemented in different configurations. Different characteristics make them useful, but the basic one is access to the spectral information of a scene or sample with high spatial resolution. In this thesis, the main objective has been the design and implementation of a multispectral system that covers part of the UV, the visible, and part of the IR ranges of the electromagnetic spectrum, to be applied to the study of artworks. The main part of the thesis is dedicated to the design, characterization and application of a multispectral system based on multiplexed illumination using light-emitting diodes (LEDs). This system comprises two modules: Module 1 UV-Vis, with a CCD camera sensitive between 370 nm and 930 nm, coupled to an LED source with 16 different channels, i.e. 16 emission wavelengths; and Module 2 IR, with an InGaAs camera sensitive between 930 nm and 1650 nm, coupled to an LED source with 7 emission wavelengths. Thus, the complete system covers from 370 nm to 1650 nm with a total of 23 channels obtained with LED illumination. The system elements were characterized and simulations were performed to assess their performance in the reconstruction of spectral reflectance under ideal conditions, and also under conditions of quantization noise and additive noise. Performance was evaluated using the CIEDE2000 color-difference formula, the root mean square error (RMSE) and the goodness-of-fit coefficient (GFC). The simulation results showed good overall system performance, with better results for Module 1 UV-Vis due to the larger number of LED channels in its spectral range. Computer programs with their respective graphical interfaces were implemented to control the hardware and to process the information provided by the system. For the spectral reconstruction we employed a method based on direct interpolation using splines, and two methods based on a training set of samples with known digital system responses and spectral reflectances: the undetermined pseudo-inverse (PSE-1) and the simple pseudo-inverse (PSE). The equipment was evaluated on real samples of the Color Checker CCCR chart and on a series of fresco patches painted with pigments used in artworks. The results of the CIEDE2000, RMSE and GFC metrics showed that the PSE-1 and PSE methods have similar performance, with slightly better results for the latter. The interpolation method presented a slightly lower performance, but it has the practical value of not needing training. The results for the PSE method were similar to those obtained through simulation, and confirmed that Module 2 IR has lower performance. It was concluded that overall system performance was good, with average CIEDE2000 and RMSE values for the PSE-based methods on the order of 1 unit. The developed system was applied to artworks in the museum of the Pedralbes Monastery in Barcelona and in the churches of Sant Pere in Terrassa. Different images of the murals of the chapel of San Miguel in the Monastery of Pedralbes were captured. The evaluation of the system for this museum application showed performance similar to that reported in the laboratory. We also captured a large-format oil-on-wood painting, La Virgen de la Leche; for this artwork the modular design and easy portability of the system were used to generate a complete picture by composing several smaller images.
At the churches of Sant Pere, we explored wall paintings dating from the Visigothic (6th-7th centuries) and Romanesque (12th-13th centuries) periods to assess whether there were features in the paintings that were not evident in the visible range but were evident in other spectral ranges. Enhancement algorithms were implemented for this task. The results obtained in this thesis demonstrate the potential of the developed multispectral system for obtaining spectral information in the ultraviolet, visible and infrared regions.
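As background, the training-based PSE reconstruction described above amounts to learning a linear operator from camera responses to reflectances on a training set and applying it to new measurements. The sketch below illustrates that idea together with the RMSE and GFC quality metrics; all variable names, dimensions and the synthetic data are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch of pseudo-inverse (PSE-style) spectral reconstruction.
import numpy as np

rng = np.random.default_rng(0)

n_channels = 23          # LED channels of the combined UV-Vis + IR system
n_wavelengths = 128      # sampling of the reflectance spectra
n_train = 60             # training patches with known reflectances

# Synthetic stand-ins for measured data: camera responses D (n_train x n_channels)
# and known reflectances R (n_train x n_wavelengths).
R_train = rng.uniform(0.05, 0.95, size=(n_train, n_wavelengths))
M = rng.uniform(0.0, 1.0, size=(n_wavelengths, n_channels))   # unknown system matrix
D_train = R_train @ M + rng.normal(0, 1e-3, size=(n_train, n_channels))

# Training: least-squares operator W mapping responses to reflectances,
# obtained with the Moore-Penrose pseudo-inverse of the response matrix.
W = np.linalg.pinv(D_train) @ R_train            # (n_channels x n_wavelengths)

# Reconstruction of an unseen sample and two of the quality metrics mentioned above.
r_true = rng.uniform(0.05, 0.95, size=n_wavelengths)
d_meas = r_true @ M
r_rec = d_meas @ W

rmse = np.sqrt(np.mean((r_rec - r_true) ** 2))
gfc = np.abs(r_true @ r_rec) / (np.linalg.norm(r_true) * np.linalg.norm(r_rec))
print(f"RMSE = {rmse:.4f}, GFC = {gfc:.4f}")
```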
6

Leupold, Klaus-Peter. "Languages Generated by Iterated Idempotencies". Doctoral thesis, Universitat Rovira i Virgili, 2006. http://hdl.handle.net/10803/8791.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The rewrite relation with parameters m and n, and with the possible length limit = k or ≤ k, we denote by ω_m^n, =kω_m^n or ≤kω_m^n, respectively. The idempotency languages generated from a starting word w by the respective operations are wD_m^n, w=kD_m^n and w≤kD_m^n. Also other special cases of idempotency languages besides duplication have come up in different contexts. The investigations of Ito et al. about insertion and deletion, i.e., operations that are also observed in DNA molecules, have established that both of these operations preserve regularity.
Our investigations about idempotency relations and languages start out from the case of a uniform length bound. For these relations =kω_m^n the conditions for confluence are characterized completely. Also the question of regularity is answered for all the languages w=kD_m^n; those that are not regular are more complicated and belong to the class of context-free languages.
For a general length bound, i.e., for the relations ≤kω_m^n, confluence does not hold so frequently. This complicatedness of the relations results also in more complicated languages, which are often non-regular, as for example the languages w≤kD_m^n. Without any length bound, idempotency relations have a very complicated structure. Over alphabets of one or two letters we still characterize the conditions for confluence. Over three or more letters, in contrast, only a few cases are solved. We determine the combinations of parameters that result in the regularity of the languages wD_m^n. In a second chapter some more involved questions are solved for the special case of duplication. First we shed some light on the reasons why it is so difficult to determine the context-freeness of duplication languages. We show that they fulfil all pumping properties and that they are very dense. Therefore all the standard tools to prove non-context-freeness do not apply here.
The concept of root in Formal Language Theory is frequently used to describe the reduction of a word to another one, which is in some sense elementary.
For example, there are primitive roots, periodicity roots, etc. Elementary in connection with duplication are square-free words, i.e., words that do not contain any repetition. Thus we define the duplication root of w to consist of all the square-free words from which w can be reached via the duplication relation.
Besides some general observations we prove the decidability of the question whether the duplication root of a language is finite.
Then we devise a code which is robust under duplication of its code words.
This would keep the result of a computation from being destroyed by duplications in the code words. We determine the exact conditions under which infinite such codes exist: over an alphabet of two letters they exist for a length bound of 2, over three letters already for a length bound of 1.
Also we apply duplication to entire languages rather than to single words; then it is interesting to determine whether regular and context-free languages are closed under this operation. We show that the regular languages are closed under uniformly bounded duplication, while they are not closed under duplication with a general length bound. The context-free languages are closed under both operations.
The thesis concludes with a list of open problems related with the thesis' topics.
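As an illustration of the duplication root defined above, the following brute-force sketch repeatedly contracts squares (the inverse of a duplication step) until only square-free words remain. It is a naive search for small words under assumed semantics, not the decision procedure developed in the thesis.

```python
# Brute-force computation of the set of square-free "duplication roots" of a word.
from functools import lru_cache

def is_square_free(w: str) -> bool:
    """True if w contains no factor of the form xx with x non-empty."""
    n = len(w)
    for i in range(n):
        for l in range(1, (n - i) // 2 + 1):
            if w[i:i + l] == w[i + l:i + 2 * l]:
                return False
    return True

@lru_cache(maxsize=None)
def duplication_root(w: str) -> frozenset:
    """All square-free words reachable from w by repeatedly contracting a square."""
    if is_square_free(w):
        return frozenset({w})
    roots = set()
    n = len(w)
    for i in range(n):
        for l in range(1, (n - i) // 2 + 1):
            if w[i:i + l] == w[i + l:i + 2 * l]:
                # undo one duplication: replace the factor xx by x
                roots |= duplication_root(w[:i + l] + w[i + 2 * l:])
    return frozenset(roots)

print(sorted(duplication_root("abcbcacbc")))
```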
7

Li, Bin. "The variational approach to brittle fracture in materials with anisotropic surface energy and in thin sheets". Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/393861.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Fracture mechanics of brittle materials has focused on bulk materials with isotropic surface energy. In this situation different physical principles for crack path selection are very similar or even equivalent. The situation is radically different when considering crack propagation in brittle materials with anisotropic surface energy. Such materials are important in applications involving single crystals, extruded polymers, or geological and organic materials. When this anisotropy is strong, the phenomenology of crack propagation becomes very rich, with forbidden crack propagation directions or complex sawtooth crack patterns. Thus, this situation interrogates fundamental issues in fracture mechanics, including the principles behind the selection of crack direction. Similarly, tearing of brittle thin elastic sheets, ubiquitous in nature, technology and daily life, challenges our understanding of fracture. Since tearing typically involves large geometric nonlinearity, it is not clear whether the stress intensity factors are meaningful or if and how they determine crack propagation. Geometry, together with the interplay between stretching and bending deformation, leads to complex behaviors, restricting analytical approximate solutions to very simplified settings and specific parameter regimes. In both situations, a rich and nontrivial experimental record has been successfully understood in terms of simple energetic models. However, general modeling approaches to either fracture in the presence of strong surface energy anisotropy or to tearing, capable of exploring new physics, have been lacking. The success of energetic simple models suggests that variational theories of brittle fracture may provide a unifying and general framework capable of dealing with the more general situations considered here. To address fracture in materials with strongly anisotropic surface energy, we propose a variational phase-field model resorting to the extended Cahn-Hilliard framework proposed in the context of crystal growth. Previous phase-field models for anisotropic fracture were formulated in a framework only allowing for weak anisotropy. We implement numerically our higher-order phase-field model with smooth local maximum entropy approximants in a direct Galerkin method. The numerical results exhibit all the features of strongly anisotropic fracture, and reproduce strikingly well recent experimental observations. To explore tearing of thin films, we develop a geometrically exact model and a computational framework coupling elasticity (stretching and bending), fracture, and adhesion to a substrate. We numerically implement the model with subdivision surface finite elements. Our simulations qualitatively and quantitatively reproduced the crack patterns observed in tearing experiments. Finally, we examine how shell geometry affects fracture. As suggested by previous results and our own phase-field simulations, shell shape dramatically affects crack evolution and the effective toughness of the shell structure. To gain insight and eventually develop new concepts for optimizing the design of thin shell structures, we derive the configurational force conjugate to crack extension for Koiter's linear thin shell theory. We identify the conservative contribution to this force through an Eshelby tensor, as well as non-conservative contributions arising from curvature.
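For context, the variational approach referred to above minimizes a Griffith-type energy over displacements and crack sets. A standard second-order phase-field (Ambrosio-Tortorelli-type) regularization with a direction-dependent fracture energy is sketched below; this is only the generic, weakly anisotropic form, whereas the thesis uses a higher-order, extended Cahn-Hilliard functional that is not reproduced here.

```latex
% Generic variational (Griffith) energy and a second-order phase-field
% regularization with direction-dependent fracture toughness G_c(n).
% Illustrative background only, not the higher-order model of the thesis.
E(u,\Gamma) \;=\; \int_{\Omega\setminus\Gamma} \psi\big(\varepsilon(u)\big)\,\mathrm{d}x
\;+\; \int_{\Gamma} G_c\big(n(x)\big)\,\mathrm{d}\mathcal{H}^{d-1},
\qquad
E_\ell(u,v) \;=\; \int_{\Omega} \big(v^2+\eta_\ell\big)\,\psi\big(\varepsilon(u)\big)\,\mathrm{d}x
\;+\; \int_{\Omega} G_c\!\left(\tfrac{\nabla v}{|\nabla v|}\right)
\left(\frac{(1-v)^2}{4\ell} + \ell\,|\nabla v|^2\right)\mathrm{d}x .
```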
8

Vico, Oton Albert. "Collected results on semigroups, graphs and codes". Doctoral thesis, Universitat Rovira i Virgili, 2012. http://hdl.handle.net/10803/97214.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
In this thesis we present a compendium of five works where discrete mathematics plays a key role. The first three works describe different developments and applications of semigroup theory, while the other two have more independent topics. First we present a result on semigroups and code efficiency, where we introduce our results on the so-called Geil-Matsumoto bound and Lewittes' bound for algebraic geometry codes. Following that, we work on semigroup ideals and their relation with the Feng-Rao numbers; those numbers, in turn, are used to describe the Hamming weights which are used in a broad spectrum of applications, e.g. the wire-tap channel of type II or the t-resilient functions used in cryptography. The third work describes non-homogeneous patterns for semigroups, explains three different scenarios where these patterns arise and gives some results on their admissibility. The last two works are not as related as the first three but still use discrete mathematics. One of them is a work on the applications of coding theory to fingerprinting, where we give results on the traitor-tracing problem and bound the number of colluders in a colluder set trying to hack a fingerprinting mark made with a Reed-Solomon code. Finally, in the last work we present our results on scientometrics and graphs, modeling the scientific community as a co-citation graph, where nodes represent authors and two nodes are connected if there is a paper citing both authors simultaneously. We use it to present three new indices to evaluate an author's impact in the community.
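The co-citation graph described above can be built directly from the definition given in the abstract. The sketch below does so with toy data; the paper lists, author names and the simple degree-based "impact" score are illustrative assumptions, not the indices proposed in the thesis.

```python
# Build a co-citation graph: authors are nodes, two authors are linked whenever
# some paper cites both of them, and edge weights count co-citing papers.
from itertools import combinations
from collections import defaultdict

# Each entry is the set of authors cited by one (hypothetical) paper.
cited_authors_per_paper = [
    {"Author A", "Author B", "Author C"},
    {"Author B", "Author C"},
    {"Author A", "Author D"},
]

cocitations = defaultdict(int)          # edge -> number of co-citing papers
for cited in cited_authors_per_paper:
    for a, b in combinations(sorted(cited), 2):
        cocitations[(a, b)] += 1

degree = defaultdict(int)               # a crude centrality: weighted co-citation degree
for (a, b), w in cocitations.items():
    degree[a] += w
    degree[b] += w

for author, score in sorted(degree.items(), key=lambda kv: -kv[1]):
    print(f"{author}: co-citation degree {score}")
```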
9

Tirnauca, Catalin Ionut. "Syntax-directed translations, tree transformations and bimorphisms". Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/381246.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Syntax-based machine translation arose from the demanding need for systems used in practical translation between natural languages. Such systems should, among other things, model tree transformations, re-order parts of sentences, be symmetric, and possess composability or forward and backward application. There are several formal ways to define tree transformations: synchronous grammars, tree transducers and tree bimorphisms. Synchronous grammars handle all kinds of rotations, but their mathematical properties are harder to prove. Tree transducers are operational and easy to implement, but closure under composition does not hold for the main types. Tree bimorphisms are difficult to implement, but they provide a natural tool for proving composability or symmetry. To improve the translation process, synchronous grammars have been related to tree bimorphisms and tree transducers. Following this lead, we give a comprehensive study of the theory and properties of syntax-directed translation systems seen from these three very different perspectives that perfectly complement each other: as generating devices (synchronous grammars), as acceptors (transducer machines) and as algebraic structures (bimorphisms). They are investigated and compared both as tree transformation and as translation defining devices. The focus is on bimorphisms, as they have only recently come back into the spotlight, especially given their applications to natural language processing. Moreover, we propose a complete and up-to-date overview of the tree transformation classes defined by bimorphisms, linking them with well-known types of synchronous grammars and tree transducers. We prove or recall all the interesting properties such classes possess, thus improving the mathematical knowledge on synchronous grammars and/or tree transducers. Also, inclusion relations between the main classes of bimorphisms, both as translation devices and as tree transformation mechanisms, are given for the first time through a Hasse diagram. Directions for future work are suggested by exhibiting how to extend previous results to more general classes of bimorphisms and synchronous grammars.
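A minimal sketch of the bimorphism view of translation discussed above: a center tree language L and two tree homomorphisms h1, h2 define the translation {(h1(t), h2(t)) : t in L}. The homomorphisms here are simplified to relabeling and child permutation, and the grammar-like rules and sample trees are toy assumptions for illustration only.

```python
# Trees are nested tuples (label, child, ...).
def homomorphism(rules):
    """Build a simple tree homomorphism from per-label rules: label -> (new label, child order)."""
    def h(tree):
        label, *children = tree
        new_label, order = rules.get(label, (label, range(len(children))))
        return (new_label, *(h(children[i]) for i in order))
    return h

# h1 keeps the source structure; h2 relabels and swaps the children of 'S'.
h1 = homomorphism({})
h2 = homomorphism({"S": ("S'", [1, 0]), "NP": ("NP'", [0]), "VP": ("VP'", [0])})

# A tiny finite center language L of derivation trees.
L = [
    ("S", ("NP", ("john",)), ("VP", ("sleeps",))),
    ("S", ("NP", ("mary",)), ("VP", ("runs",))),
]

translation = [(h1(t), h2(t)) for t in L]
for src, tgt in translation:
    print(src, "->", tgt)
```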
10

Ruiz, Nicolas. "Toward a universal privacy and information-preserving framework for individual data exchange". Doctoral thesis, Universitat Rovira i Virgili, 2019. http://hdl.handle.net/10803/666489.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Data on individual subjects, which are increasingly gathered and exchanged, provide a rich amount of information that can inform statistical and policy analysis in a meaningful way. However, due to the legal obligations surrounding such data, this wealth of information is often not fully exploited in order to protect the confidentiality of respondents. The issue is thus the following: how to ensure a sufficient level of data protection to meet releasers’ concerns in terms of legal and ethical requirements, while still offering users a reasonable level of information. This question has raised a range of concerns about the privacy/information trade-off and has driven a quest for best practices that are both useful to users and respectful of individuals’ privacy. Statistical disclosure control research has historically provided the analytical apparatus through which the privacy/information trade-off can be assessed and implemented. In recent years, the literature has burgeoned in many directions. In particular, techniques applicable to micro data offer a wide variety of tools to protect the confidentiality of respondents while maximizing the information content of the data released, for the benefit of society at large. Such diversity is undoubtedly useful but has several major drawbacks. In fact, there is currently a clear lack of agreement and clarity as to the appropriate choice of tools in a given context, and as a consequence, there is no comprehensive view (or at best an incomplete one) of the relative performances of the techniques available. The practical scope of current micro data protection methods is not fully exploited precisely because there is no overarching framework: all methods generally carry their own analytical environment, underlying approaches and definitions of privacy and information. Moreover, the evaluation of utility and privacy for each method is metric and data-dependent, meaning that comparisons across different methods and datasets is a daunting task. Against this backdrop, this thesis focuses on establishing some common grounds for individual data anonymization by developing a new, universal approach. Recent contributions to the literature point to the fact that permutations happen to be the essential principle upon which individual data anonymization can be based. In this thesis, we demonstrate that this principle allows for the proposal of a universal analytical environment for data anonymization. The first contribution of this thesis takes an ex-post approach by proposing some universal measures of disclosure risk and information loss that can be computed in a simple fashion and used for the evaluation of any anonymization method, independently of the context under which they operate. In particular, they exhibit distributional independence. These measures establish a common language for comparing different mechanisms, all with potentially varying parametrizations applied to the same data set or to different data sets. The second contribution of this thesis takes an ex-ante approach by developing a new approach to data anonymization. Bringing data anonymization closer to cryptography, it formulates a general cipher based on permutation keys which appears to be equivalent to a general form of rank swapping. Beyond all the existing methods that this cipher can universally reproduce, it also offers a new way to practice data anonymization based on the ex-ante exploration of different permutation structures.
The subsequent study of the cipher’s properties additionally reveals new insights as to the nature of the task of anonymization taken at a general level of functioning. The final two contributions of this thesis aim at exploring two specific areas using the above results. The first area is longitudinal data anonymization. Despite the fact that the SDC literature offers a wide variety of tools suited to different contexts and data types, there have been very few attempts to deal with the challenges posed by longitudinal data. This thesis thus develops a general framework and some associated metrics of disclosure risk and information loss, tailored to the specific challenges posed by longitudinal data anonymization. Notably, it builds on a permutation approach where the effect of time on time-variant attributes can be seen as an anonymization method that can be captured by temporal permutations. The second area considered is synthetic data. By challenging the information and privacy guarantees of synthetic data, it is shown that any synthetic data set can always be expressed as a permutation of the original data, in a way similar to non-synthetic SDC techniques. In fact, releasing synthetic data sets with the same privacy properties but with an improved level of information appears to be invariably possible as the marginal distributions can always be preserved without increasing risk. On the privacy front, this leads to the consequence that the distinction drawn in the literature between non-synthetic and synthetic data is not so clear-cut. Indeed, it is shown that the practice of releasing several synthetic data sets for a single original data set entails privacy issues that do not arise in non-synthetic anonymization.
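The permutation principle discussed above can be made concrete with a small rank-swapping sketch: attribute values are exchanged within a window of ranks, so the released marginal distribution is exactly preserved while individual records are displaced. The parameter names and the crude permutation-distance indicator below are assumptions for illustration, not the universal measures proposed in the thesis.

```python
# Rank-swapping style anonymization of a single numeric attribute.
import numpy as np

rng = np.random.default_rng(42)

def rank_swap(values: np.ndarray, p: int) -> np.ndarray:
    """Swap each value with another whose rank lies within +/- p positions."""
    order = np.argsort(values)                 # record indices sorted by value
    permuted = order.copy()
    for i in range(len(order)):
        j = rng.integers(max(0, i - p), min(len(order), i + p + 1))
        permuted[i], permuted[j] = permuted[j], permuted[i]
    out = np.empty_like(values)
    out[order] = values[permuted]              # reassign values along the permuted ranks
    return out

income = rng.lognormal(mean=10, sigma=0.5, size=1000)
anonymized = rank_swap(income, p=25)

# A crude permutation-distance indicator: how far, in ranks, each record moved.
rank_original = np.argsort(np.argsort(income))
rank_released = np.argsort(np.argsort(anonymized))
print("mean absolute rank displacement:", np.mean(np.abs(rank_original - rank_released)))
print("marginal mean preserved:", np.isclose(income.mean(), anonymized.mean()))
```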

Capitoli di libri sul tema "532.001 51":

1

Rappuoli, R., e M. G. Pizza. "Pertussis toxin (Bordetella pertussis)". In Guidebook to Protein Toxins and Their Use in Cell Biology, 34–35. Oxford University PressOxford, 1997. http://dx.doi.org/10.1093/oso/9780198599555.003.0011.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Abstract PT (Sekura et al. 1985) is a protein of 105 000 daltons composed of five noncovalently linked subunits named S1 through S5, and organized into two functional domains called A and B (Tamura et al. 1982). The A domain, which is composed of the S1 subunit, is an enzyme that intoxicates eukaryotic cells by ADP-ribosylating their GTP-binding proteins (Rappuoli and Pizza 1991). The enzyme binds NAD and transfers the ADP-ribose group to a cysteine residue present in an ...XCGLX motif, located at the carboxy-terminal region of the alpha subunit of many G proteins such as Gi, Go, Gt, Ggust, and others. Gs and Golf, which in this position have a tyrosine instead of the cysteine, are not substrates for PT (Domenighini et al. 1995). The B domain is a nontoxic oligomer formed by subunits S2, S3, S4, and S5, which are present in a 1:1:2:1 ratio. This domain binds the toxin receptor on the surface of eukaryotic cells and facilitates the translocation of the S1 subunit across the cellular membrane, so that it can reach the target proteins.
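The substrate rule described above (a cysteine in the C-terminal ...XCGLX-type motif versus a tyrosine at that position) can be stated as a trivial sequence check. The schematic "sequences" in the sketch below are placeholders built from the motif quoted in the abstract, not real G-protein sequences.

```python
# Toy check of the PT substrate rule: cysteine four residues from the C-terminus.
def is_pt_substrate(c_terminus: str) -> bool:
    """Simplistic rule: a cysteine at position -4 of the C-terminal tail."""
    return len(c_terminus) >= 4 and c_terminus[-4] == "C"

examples = {
    "schematic Gi/Go/Gt/Ggust-like tail": "XXXXXCGLX",  # cysteine -> ADP-ribosylated
    "schematic Gs/Golf-like tail":        "XXXXXYGLX",  # tyrosine -> not a substrate
}

for name, tail in examples.items():
    verdict = "candidate PT substrate" if is_pt_substrate(tail) else "not a PT substrate"
    print(f"{name}: ...{tail} -> {verdict}")
```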

Atti di convegni sul tema "532.001 51":

1

Wu, Bin, Priybrat Sharma, Tao Yu, Lucia Palombi, Hao Wu, Moez Ben Houidi, Niraj Panthi, William Roberts e Gaetano Magnotti. "High-Speed 2-D Raman and Rayleigh Imaging of a Hydrogen Jet Issued from a Hollow-Cone Piezo Injector". In 16th International Conference on Engines & Vehicles. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2023. http://dx.doi.org/10.4271/2023-24-0019.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This paper reports high-speed (10 kHz and 100 kHz) 2-D Raman/Rayleigh measurements of a hydrogen (H2) jet issued from a Bosch HDEV4 hollow-cone piezo injector in a high-volume constant-pressure vessel. During the experiments, a Pa = 10 bar ambient environment of pure nitrogen (N2) is created in the chamber at T = 298 K, and pure H2 is injected vertically with an injection pressure of Pi = 51 bar. To accommodate the transient nature of the injections, a kHz-rate burst-mode laser system with second-harmonic output at λ = 532 nm and high-speed CMOS cameras are employed. By sequentially separating the scattered light using dichroic mirrors and bandpass filters, both elastic Rayleigh (λ = 532 nm) and inelastic N2 (λ = 607 nm) and H2 (λ = 683 nm) Raman signals are recorded on individual cameras. With the help of a wavelet denoising algorithm, the detection limit of 2-D Raman imaging is greatly expanded. The H2 mole fraction distribution is then derived directly from the scattering signals at 10 kHz for Raman and 100 kHz for Rayleigh, with a spatial resolution of approximately 200 μm (5.0 lp/mm). The current work successfully demonstrates the feasibility of high-speed 2-D Raman and Rayleigh imaging in gaseous fuel injection, and the experimental technique could potentially contribute to the design of next-generation high-pressure, high-flow-rate H2 injectors.
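Deriving a binary H2/N2 mole-fraction field from spectrally separated Raman signals, as described above, reduces to calibrated signal ratios per pixel. The sketch below shows that step only; the calibration constants and synthetic "images" are assumptions for illustration, not the paper's processing chain.

```python
# Mole fraction from calibrated Raman signals: X_H2 = (S_H2/k_H2) / (S_H2/k_H2 + S_N2/k_N2)
import numpy as np

rng = np.random.default_rng(1)

# Per-species Raman calibration factors (counts per unit number density), hypothetical values.
k_h2, k_n2 = 1.7, 1.0

# Synthetic 64 x 64 "signal images" for H2 and N2 built from a known mole-fraction map.
true_x_h2 = np.clip(rng.normal(0.3, 0.15, size=(64, 64)), 0.0, 1.0)
n_total = 1.0                                   # arbitrary total number density
s_h2 = k_h2 * true_x_h2 * n_total + rng.normal(0, 1e-3, size=(64, 64))
s_n2 = k_n2 * (1.0 - true_x_h2) * n_total + rng.normal(0, 1e-3, size=(64, 64))

# Per-pixel mole fraction from the ratio of calibrated signals.
n_h2 = s_h2 / k_h2
n_n2 = s_n2 / k_n2
x_h2 = n_h2 / (n_h2 + n_n2)

print("mean reconstruction error:", float(np.mean(np.abs(x_h2 - true_x_h2))))
```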
2

Chu, Bryan, Eklavya Singh, Johnson Samuel e Nikhil Koratkar. "Graphene Oxide Colloidal Suspensions as Cutting Fluids for Micromachining: Part 1 — Fabrication and Performance Evaluation". In ASME 2015 International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/msec2015-9372.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This paper is aimed at investigating the effects of graphene oxide platelet (GOP) geometry (i.e., lateral size and thickness) and oxygen functionalization on the cooling and lubrication performance of GOP colloidal suspensions. The techniques of thermal reduction and ultrasonic exfoliation were used to manufacture three different types of GOPs. For each of these three types of GOPs, colloidal solutions with GOP concentrations varying between 0.1–1 wt% were evaluated for their dynamic viscosity, thermal conductivity and micromachining performance. The ultrasonically-exfoliated GOPs (with 2–3 graphene layers and lowest in-solution characteristic lateral length of 120 nm) appear to be the most favorable for micromachining applications. Even at the lowest concentration of 0.1 wt%, they are capable of providing a 51% reduction in the cutting temperature and a 25% reduction in the surface roughness value over that of the baseline semi-synthetic cutting fluid. For the thermally-reduced GOPs (with 4–8 graphene layers and in-solution characteristic lateral length of 562–2780 nm), a concentration of 0.2 wt% appears to be optimal. The findings suggest that the differences seen between the colloidal suspensions in terms of their droplet spreading, evaporation and the subsequent GOP film-formation characteristics may be better indicators of their machining performance, as opposed to their bulk fluid properties.
3

Nakka, Thejeswar, Prasanth Ganesan, Luxitaa Goenka, Biswajit Dubashi, Smita Kayal, Latha Chaturvedula, Dasari Papa, Prasanth Penumadu, Narendran Krishnamoorthy e Divya B. Thumaty. "Epithelial Ovarian Cancer: Real-World Outcomes". In Annual Conference of Indian Society of Medical and Paediatric Oncology (ISMPO). Thieme Medical and Scientific Publishers Pvt. Ltd., 2021. http://dx.doi.org/10.1055/s-0041-1735369.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Abstract Introduction Ovarian cancer is the third most common cancer and the second most common cause of death among gynecological cancers in Indian women. Ovarian cancer is heterogeneous; among its subtypes, epithelial ovarian cancer (EOC) is the most common. Primary cytoreductive surgery along with six to eight cycles of combination platinum and taxane chemotherapy is the cornerstone of first-line treatment in EOC. This study was done to find clinicopathological factors affecting survival outcomes with first-line therapy in EOC in a real-world setting. Objectives This study aimed to identify factors affecting progression-free survival (PFS) and overall survival (OS) with first-line treatment in EOC. Materials and Methods We conducted a single-center retrospective study. We screened all the patients diagnosed with ovarian cancer from January 2015 till December 2019. We locked data in August 2019. Eligible patients had histologically confirmed EOC and underwent primary cytoreduction, received two or more cycles of chemotherapy, or both. Patients who had received first-line treatment at another hospital were excluded. Results Patient demographics and clinical characteristics: between January 5, 2015 and August 31, 2019, 435 patients with a diagnosis of ovarian malignancy were registered at our center. Among them, 406 (82%) had EOC; 290 (64%) were newly diagnosed, fulfilled the eligibility criteria and were included in the final analysis. The median age of the cohort was 53 years (range: 21–89 years) and 157 patients (54%) were >50 years of age (the Eastern Cooperative Oncology Group performance status was ≥2 in 124 patients [43%]; median duration of symptoms was 3 months; stage III/IV: 240 [83%]). Tumor grade was available in 240 patients, of which 219 (91%) were high grade. Subtyping was available in 272 patients (94%), of which the serous subtype was the most common, constituting 228 patients (79%). Treatment: Most patients received chemotherapy (n = 283 [98%]) as the first modality of treatment (neoadjuvant/adjuvant and palliative): as neoadjuvant therapy (NACT) in 130 patients (45%) and as adjuvant therapy following surgery in 81 patients (29%). The most common chemotherapy regimen was a combination of carboplatin and paclitaxel, used in 256 patients (88%). Among the 290 patients, 218 (75%) underwent cytoreductive surgery; among them, optimal cytoreduction was achieved in 108 patients (52%). The optimal cytoreduction rate (OCR) with upfront surgery and after NACT was 44% and 53%, respectively (Chi-square test: 0.86; p = 0.35). Survival: The median follow-up of the study was 17 months (range: 10–28 months), and it was 20 months (range: 12–35 months) for patients who were alive. At last follow-up, 149 patients (51%) had progressed and 109 (38%) had died. The estimated median PFS and OS were 19 months (95% CI: 16.1–21.0) and 39 months (95% CI: 29.0–48.8), respectively. On multivariate analysis, primary surgery (HR: 0.1, 95% CI: 0.06–0.21; p < 0.001) and early-stage disease (HR: 0.2, 95% CI: 0.1–0.6; p = 0.04) were associated with superior PFS, and primary surgery (HR: 0.1, 95% CI: 0.09–0.2; p < 0.001) was associated with superior OS. Conclusion Primary surgery (upfront or interval) was associated with improved survival. Newer agents such as bevacizumab, poly(ADP-ribose) polymerase inhibitors and HIPEC should be incorporated appropriately into first-line therapy to improve outcomes.
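The multivariable analysis reported above (Cox proportional hazards with hazard ratios and 95% confidence intervals) can be sketched as follows with the lifelines library on synthetic data. The column names, covariates and effect sizes are invented for illustration; this is not the study's dataset or code.

```python
# Hedged sketch of a Cox proportional-hazards fit for PFS-style outcomes.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 290

df = pd.DataFrame({
    "primary_surgery": rng.integers(0, 2, n),   # 1 = underwent cytoreductive surgery
    "early_stage":     rng.integers(0, 2, n),   # 1 = stage I/II disease
})
# Synthetic PFS times: surgery and early stage lengthen time-to-progression.
baseline = rng.exponential(scale=12, size=n)
df["pfs_months"] = baseline * (1 + 1.5 * df["primary_surgery"] + 0.8 * df["early_stage"])
df["progressed"] = rng.integers(0, 2, n)        # 1 = event observed, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()   # reports hazard ratios (exp(coef)) with 95% confidence intervals
```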
4

de Abreu, Karina Gatti, Genehom Nunes de Neto Farias Farias, Thaís Alves de Sousa, José Adriano Soares Lopes e Maria Verônyca Coelho Melo. "INCIDÊNCIA DE OVOS E LARVAS DE ANCILOSTOMATÍDEO EM ÁREA DE ALIMENTAÇÃO NA CIDADE DE FORTALEZA-CE". In I Congresso Brasileiro de Parasitologia Humana On-line. Revista Multidisciplinar em Saúde, 2021. http://dx.doi.org/10.51161/rems/703.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Introduction: The soil of public squares has been identified as an indicator of human contamination, owing to the large number of stray dogs and cats that circulate freely. Objective: to verify contamination by eggs and larvae of Ancylostoma sp. in soil samples collected in a food-service area in the city of Fortaleza-CE, Brazil. Material and methods: 480 soil samples were collected, during the dry season (August to December 2018) and the rainy season (January to July 2019). The 320 m2 sampling area surrounds the food-service and circulation area of a public institution in the city of Fortaleza-CE. It was divided into four 80 x 80 quadrants, and 30 soil samples were collected in each quadrant. A total of 240 soil samples were taken in the dry season and 240 in the rainy season, giving 480 samples in all. To remove the soil, an aluminium rectangle ten centimetres wide and three centimetres high was used, at a radius of five metres from any faecal contamination. Collections were carried out at 9:00 a.m., with the aid of gloves, a gardening trowel and duly labelled plastic containers. The aluminium rectangle was pressed into the soil to a depth of approximately three centimetres below the surface. About 300 g of soil were removed, placed in the plastic containers and kept refrigerated at 4°C. The samples were processed by the Willis and Baermann-Moraes methods. Results: The helminth investigated in this work was Ancylostoma sp.; positivity was 53% (258/480) for eggs and 51% (244/480) for larvae. Of the 480 samples, 53% were positive for eggs and 51% for larvae of Ancylostoma sp. The highest counts were concentrated in the rainy season, with 517 larvae found. Conclusion: The results showed that dogs and cats play an important role as a source of environmental contamination and as disseminators and carriers of parasites with zoonotic potential, requiring greater attention from the population in order to reduce the risk of infection for humans and for the animals themselves.
5

Kasuga, Willy, e Wadrine Maro. "The Influence of Problem-Based Learning on Students’ Motivation in Learning Biology in Tanzania Secondary Schools". In Proceedings of the 1st International Conference of Education. Dar es Salaam University Press, 2023. http://dx.doi.org/10.37759/ice01.2023.16.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The aim of this study was to examine the influence of problem-based learning (PBL) on students’ motivation in learning the Biology subject in Tanzanian secondary schools, in terms of the teaching and learning (T/L) materials and assessment activities employed. The study was conducted in Njombe Region employing a quasi-experimental pre-test post-test control group design, framed within social constructivism theory. The participants were Form 1E, 2D and 3B students (n = 95) in the experimental school and Form 1D, 2C and 3D students (n = 126) in the control school, who covered the topics Safety in our environment, Balance of nature and Coordination in living things. Data were collected using a structured questionnaire and analysed using a paired-sample t-test. The findings show that the use of PBL increased the mean motivation scores from pre-test to post-test in terms of T/L materials, a change that was statistically significant at p = .00 with a large effect size (e2 = 0.13), compared to traditional methods, for which the change was not statistically significant at p = .51 with no effect size (e2 = .00). Moreover, the use of PBL increased students’ motivation scores in terms of assessment activities from pre-test to post-test, a change that was statistically significant at p = .00 with a large effect size (e2 = 0.14), compared to traditional methods, for which the change was not statistically significant at p = .52 with no effect size (e2 = .00). The study recommends the continued use of learner-centred approaches such as PBL in learning Biology so as to increase motivation.
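The analysis technique named above, a paired-sample t-test with an eta-squared effect size, is sketched below on synthetic pre/post motivation scores; the data and sample size are illustrative assumptions, not the study's measurements.

```python
# Paired-sample t-test with eta-squared effect size on synthetic pre/post scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 95                                               # e.g. the experimental-school sample size

pre = rng.normal(loc=3.2, scale=0.6, size=n)         # hypothetical pre-test motivation scores
post = pre + rng.normal(loc=0.35, scale=0.5, size=n) # hypothetical post-test improvement

t_stat, p_value = stats.ttest_rel(post, pre)
df = n - 1
eta_squared = t_stat**2 / (t_stat**2 + df)           # common effect-size estimate for a paired t-test

print(f"t({df}) = {t_stat:.2f}, p = {p_value:.4f}, eta^2 = {eta_squared:.3f}")
```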
6

Hansen, J. B., L. Wilsgard, J. O. Olsen e B. Østerud. "A MODEL TO EVALUATE BLOOD CELLS BEHAVIOUR TO CELL ACTIVATION: A DIFFERENCE BETWEEN MEN AND WOMEN". In XIth International Congress on Thrombosis and Haemostasis. Schattauer GmbH, 1987. http://dx.doi.org/10.1055/s-0038-1644627.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This study was carried out to see what kind of response blood cells have to a weak stimulus of lipopolysaccharide (LPS), a substance that may be present in small quantities in the blood. The parameters tested under this condition were: thromboxane A2 (TxA2) (produced in platelets), prostacyclin (PGI2) produced in white cells (mainly monocytes), and induced thromboplastin synthesis in monocytes. Heparinized blood from 40 men and 40 women was incubated with 2 ng LPS/ml blood for 2 hours. Blood cells were then either spun down to obtain plasma, or mononuclear cells were isolated from the incubated blood followed by the quantitation of thromboplastin. In order to obtain measurable PGI2 production, liposomes of soya lecithin were added to amplify this production in monocytes (see abstract by Østerud et al., "Monocyte stimulation--"). The quantitation of TxB2 in the resultant plasma samples revealed a highly significant difference in the production of TxA2 between men and women in this system. In the group of men a value of 13.0 ± 5.9 ng/ml was found, compared to 7.6 ± 5.8 for women (p<0.01). The liposomes had no effect on the TxA2 production. In contrast, the PGI2 production in women was higher than in men. By quantitating the 6-keto-PGF1α concentration in the plasma samples, it was found that women had 148 ± 53 pg/ml whereas men had 105 ± 5 pg/ml (p<0.001). A low TxA2/PGI2 index is supposed to be beneficial and associated with a low frequency of coronary heart disease. In the present study this value was estimated to be 51 for women and 124 for men. Slightly but not significantly higher thromboplastin activity was found in the stimulated monocytes of men as compared to women (91.4 ± 40.8 × 10^-3 per 10^6 cells for men and 74.9 ± 45.6 × 10^-3 per 10^6 cells for women). It is concluded that blood cell activation in women is less harmful than in men, and this may reflect the lower rate of CHD in women as compared to men.
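The reported index values of 51 and 124 are consistent with taking the ratio of mean TxA2 to mean 6-keto-PGF1α after converting ng to pg; the short check below makes that arithmetic explicit. Treating the index as this ratio is an assumption made here to verify the reported figures.

```python
# Verify the TxA2/PGI2 index quoted in the abstract from the reported group means.
tx_a2_ng_ml = {"men": 13.0, "women": 7.6}    # TxA2 (via TxB2), ng/ml
pgi2_pg_ml = {"men": 105.0, "women": 148.0}  # PGI2 (via 6-keto-PGF1alpha), pg/ml

for group in ("women", "men"):
    index = (tx_a2_ng_ml[group] * 1000.0) / pgi2_pg_ml[group]   # ng -> pg, then ratio
    print(f"{group}: TxA2/PGI2 index = {index:.0f}")
# women = 51, men = 124, matching the values reported in the abstract.
```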
7

Гудкова, Е. П. "Видовой состав накипных лишайников национального парка «Лосиный остров»". In III молодёжная всероссийская научная конференция с международным участием «PLANTAE & FUNGI». Botanical Garden-Institute FEB RAS, 2023. http://dx.doi.org/10.17581/paf2023.50.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Losiny Ostrov National Park is one of the oldest in Russia and covers almost 13,000 ha. A third of its territory lies within the city of Moscow (the Losinoostrovsky and Yauzsky forest parks), while two thirds belong to the Moscow Region (the Alekseevsky, Losinopogonny, Mytishchinsky and Shchyolkovsky forest parks). The regional part of the park is densely surrounded by settlements, so the entire forest massif is exposed to a high level of anthropogenic pressure. Targeted lichenological studies in Losiny Ostrov have been developing since the 1990s, and as of 2022, 81 species of crustose lichens and allied fungi were known for the park, of which 74 were recorded in the urban part and only 28 in the regional part [1]. This difference is explained by the fact that most of the work was carried out within the city limits. We surveyed the regional part of the park, as a result of which the list of crustose lichens of the regional part has been expanded to 51 species and the overall list to 89 species. The find of Arthonia dispuncta is particularly important, since we record this lichen for the first time for the whole Moscow Region [2]. In Russia, A. dispuncta was previously known only from the Central Chernozem region, Karelia and Sakhalin. Finds of calicioid lichens are also of interest: Chaenotheca hispidula and Ch. stemonea, which are indicator species of old-growth forest and park communities of the North-West of European Russia and/or of biologically valuable forest landscapes in the coniferous-broadleaved forest subzone of Central Russia [3]. The leading genera include Lecanora, Chaenotheca and Lecania, which are characteristic of forest ecology. The most widespread species, however, are Lecanora symmicta, Lepraria elobata and L. finkii, found in all the forest parks, as well as Hypocenomyce scalaris and Graphis scripta, found in three of the four forest parks; this is related to their easily noticeable appearance. The individual forest parks differ from each other both in the set of recorded species and in species richness: the smallest number of species (9) was recorded in the Losinopogonny forest park and the largest (35) in the Alekseevsky forest park. The difference in species numbers has a substantial effect on the percentage similarity. The Sørensen coefficient for the regional forest parks of the national park ranges from 36% to 53%, which is rather low for such closely located areas. This pattern may be explained both by insufficient survey coverage of the territory and by differences in the sets of habitats of the forest parks.
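The Sørensen coefficient used above to compare the species lists of the forest parks is QS = 2|A ∩ B| / (|A| + |B|). A minimal sketch follows; the species sets are invented placeholders drawn from the names mentioned in the abstract, not the actual survey data.

```python
# Sørensen similarity between two species lists, expressed as a percentage.
def sorensen(a: set, b: set) -> float:
    if not a and not b:
        return 0.0
    return 100.0 * 2 * len(a & b) / (len(a) + len(b))

park_1 = {"Lecanora symmicta", "Lepraria elobata", "Graphis scripta", "Hypocenomyce scalaris"}
park_2 = {"Lecanora symmicta", "Lepraria finkii", "Graphis scripta"}

print(f"Sørensen similarity: {sorensen(park_1, park_2):.0f}%")   # 2*2/(4+3) -> about 57%
```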
