Journal articles on the topic 'Mixture missing mechanisms'

Consult the top 33 journal articles for your research on the topic 'Mixture missing mechanisms.'

1

Paiva, Thais, and Jerome P. Reiter. "Stop or Continue Data Collection: A Nonignorable Missing Data Approach for Continuous Variables." Journal of Official Statistics 33, no. 3 (September 1, 2017): 579–99. http://dx.doi.org/10.1515/jos-2017-0028.

Abstract:
We present an approach to inform decisions about nonresponse follow-up sampling. The basic idea is to (i) create completed samples by imputing nonrespondents’ data under various assumptions about the nonresponse mechanisms, (ii) take hypothetical samples of varying sizes from the completed samples, and (iii) compute and compare measures of accuracy and cost for different proposed sample sizes. As part of the methodology, we present a new approach for generating imputations for multivariate continuous data with nonignorable unit nonresponse. We fit mixtures of multivariate normal distributions to the respondents’ data, and adjust the probabilities of the mixture components to generate nonrespondents’ distributions with desired features. We illustrate the approaches using data from the 2007 U.S. Census of Manufactures.
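
A minimal sketch of the component-reweighting step, assuming placeholder data and tilt factors rather than the paper's actual adjustment scheme:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
respondents = rng.normal(size=(500, 3))          # placeholder survey data

# Fit a mixture of multivariate normals to the respondents' data.
gm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gm.fit(respondents)

# Tilt the component probabilities to encode an assumed nonignorable
# mechanism, e.g. nonrespondents over-represent one component.
tilt = np.array([2.0, 1.0, 0.5])                 # assumed, not from the paper
weights = gm.weights_ * tilt
weights /= weights.sum()

# Draw nonrespondents' imputations from the reweighted mixture.
comp = rng.choice(3, size=200, p=weights)
imputed = np.array([rng.multivariate_normal(gm.means_[k], gm.covariances_[k])
                    for k in comp])
```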
2

Mehrabi, Fereshteh, and François Béland. "THE LONGITUDINAL RELATIONSHIPS BETWEEN SOCIAL ISOLATION AND HEALTH OUTCOMES: THE ROLE OF PHYSICAL FRAILTY." Innovation in Aging 6, Supplement_1 (November 1, 2022): 141. http://dx.doi.org/10.1093/geroni/igac059.560.

Abstract:
Social isolation is a public health issue that is linked to poor health outcomes. However, the mechanisms underlying this association remain unclear. The main objective of this study was to explore whether changes in frailty moderated the relationship between changes in social isolation and changes in health outcomes over two years. We examined the mediating role of changes in frailty when the moderation hypothesis was not supported. A series of latent growth models (LGMs) were used to test our objectives using data from three waves of the FRéLE study among 1643 Canadian community-dwelling older adults aged 65 years and over. Missing data were handled by pattern mixture models with the assumption of missing not at random. We measured social isolation through social participation, social networks, and social support from different sources of social ties. We assessed frailty using the Fried frailty phenotype. Our moderation results revealed that high levels of changes in social participation, support from friends, nuclear, and extended family members, and social contacts with friends were associated with greater changes in cognitive and mental health among frail older adults with diminished physiological reserves compared to robust older adults. Additionally, changes in frailty mediated the effects of changes in social participation and social contacts and support from friends on changes in chronic conditions. This longitudinal study suggests that frailty moderated the relationships between social isolation and mental and cognitive health but not physical health. Overall, social support and strong friendship ties are key determinants of frail older adults’ health.
3

Cesaria, Maura, Marco Mazzeo, Gianluca Quarta, Muhammad Rizwan Aziz, Concetta Nobile, Sonia Carallo, Maurizio Martino, Lucio Calcagnile, and Anna Paola Caricato. "Pulsed Laser Deposition of CsPbBr3 Films: Impact of the Composition of the Target and Mass Distribution in the Plasma Plume." Nanomaterials 11, no. 12 (November 26, 2021): 3210. http://dx.doi.org/10.3390/nano11123210.

Abstract:
All-inorganic cesium lead bromide (CsPbBr3) perovskites have gained tremendous potential in optoelectronics due to their interesting photophysical properties and much better stability than their hybrid counterparts. Although pulsed laser deposition (PLD) is a promising alternative to solvent-based and/or thermal deposition approaches due to its versatility in depositing multi-elemental materials, a deep understanding of the implications of both target composition and PLD mechanisms for the properties of CsPbBr3 films is still missing. In this paper, we deal with the thermally assisted preparation of mechano-chemically synthesized CsPbBr3 ablation targets to grow CsPbBr3 films by PLD at a fluence of 2 J/cm2. We study both Cs-rich and stoichiometric PbBr2-CsBr mixture-based ablation targets and point out compositional deviations of the associated films resulting from the mass distribution of the PLD-generated plasma plume. Contrary to the conventional view that PLD guarantees congruent elemental transfer from the target to the substrate, our study demonstrates cation off-stoichiometry of PLD-grown CsPbBr3 films depending on the composition and thermal treatment of the ablation target. The implications of the observed enrichment in the heavier element (Pb) and deficiency in the lighter element (Br) of the PLD-grown films are discussed in terms of optical response and with the perspective of providing operative guidelines and future PLD-deposition strategies for inorganic perovskites.
4

Lin, Tsung-I., Wan-Lun Wang, Geoffrey J. McLachlan, and Sharon X. Lee. "Robust mixtures of factor analysis models using the restricted multivariate skew-t distribution." Statistical Modelling 18, no. 1 (September 4, 2017): 50–72. http://dx.doi.org/10.1177/1471082x17718119.

Abstract:
This article introduces a robust extension of the mixture of factor analysis models based on the restricted multivariate skew-t distribution, called the mixtures of skew-t factor analysis (MSTFA) model. This model can be viewed as a powerful tool for model-based clustering of high-dimensional data where observations in each cluster exhibit non-normal features such as heavy-tailed noise and extreme skewness. Missing values may frequently be present due to incomplete collection of data. A computationally feasible EM-type algorithm is developed to carry out maximum likelihood estimation and create single imputations of possible missing values under a missing at random mechanism. The numbers of factors and mixture components are determined via penalized likelihood criteria. The utility of our proposed methodology is illustrated through analysing both simulated and real datasets. Numerical results are shown to perform favourably compared to existing approaches.
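
The imputation step can be sketched with Gaussian components standing in for the paper's skew-t factor-analytic ones; under MAR, each missing block is filled by its posterior-weighted conditional mean:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def impute_conditional_mean(x, gm):
    """Fill the NaN entries of x with their posterior-weighted conditional
    mean under a fitted Gaussian mixture (MAR assumed)."""
    m, o = np.isnan(x), ~np.isnan(x)
    num, den = np.zeros(m.sum()), 0.0
    for w, mu, S in zip(gm.weights_, gm.means_, gm.covariances_):
        # component responsibility given the observed block
        resp = w * multivariate_normal(mu[o], S[np.ix_(o, o)]).pdf(x[o])
        # conditional mean of the missing block given the observed block
        cond = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(S[np.ix_(o, o)],
                                                         x[o] - mu[o])
        num += resp * cond
        den += resp
    out = x.copy()
    out[m] = num / den
    return out
```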
5

Kari, Eetu, Liqing Hao, Arttu Ylisirniö, Angela Buchholz, Ari Leskinen, Pasi Yli-Pirilä, Ilpo Nuutinen, et al. "Potential dual effect of anthropogenic emissions on the formation of biogenic secondary organic aerosol (BSOA)." Atmospheric Chemistry and Physics 19, no. 24 (December 20, 2019): 15651–71. http://dx.doi.org/10.5194/acp-19-15651-2019.

Abstract:
The fraction of gasoline direct-injection (GDI) vehicles comprising the total vehicle pool is projected to increase in the future. However, thorough knowledge about the influence of GDI engines on important atmospheric chemistry processes is missing – namely, their contribution to secondary organic aerosol (SOA) precursor emissions, contribution to SOA formation, and potential role in biogenic–anthropogenic interactions. The objectives of this study were to (1) characterize emissions from modern GDI vehicles and investigate their role in SOA formation chemistry and (2) investigate biogenic–anthropogenic interactions related to SOA formation from a mixture of GDI-vehicle emissions and a model biogenic compound, α-pinene. Specifically, we studied SOA formation from modern GDI-vehicle emissions during constant-load driving. In this study we show that SOA formation from GDI-vehicle emissions was observed in each experiment. Volatile organic compounds (VOCs) measured with the proton-transfer-reaction time-of-flight mass spectrometer (PTR-ToF-MS) could account for 19 %–42 % of total SOA mass generated in each experiment. This suggests that there were lower-volatility intermediate VOCs (IVOCs) and semi-volatile organic compounds (SVOCs) in the GDI-vehicle exhaust that likely contributed to SOA production but were not detected with the instrumentation used in this study. This study also demonstrates that two distinct mechanisms caused by anthropogenic emissions suppress the α-pinene SOA mass yield. The first suppressing effect was the presence of NOx. This mechanism is consistent with previous reports demonstrating suppression of biogenic SOA formation in the presence of anthropogenic emissions. Our results indicate a possible second suppressing effect, and we suggest that the presence of anthropogenic gas-phase species may have suppressed biogenic SOA formation through alterations to the gas-phase chemistry of α-pinene. This hypothesized change in oxidation pathways led to the formation of α-pinene oxidation products that most likely did not have vapor pressures low enough to partition into the particle phase. Overall, the presence of gasoline-vehicle exhaust caused a more than 50 % suppression in the α-pinene SOA mass yield compared to the yield measured in the absence of any anthropogenic influence.
6

Pennarossa, G., G. Tettamanti, F. Gandolfi, M. deEguileor, and T. A. L. Brevini. "5 PARTHENOGENETIC EMBRYONIC STEM CELLS ARE CONNECTED BY FUNCTIONAL INTERCELLULAR BRIDGES." Reproduction, Fertility and Development 24, no. 1 (2012): 114. http://dx.doi.org/10.1071/rdv24n1ab5.

Abstract:
We previously reported that parthenogenetic stem cells display abnormal centrosome and spindle formation that results in severe chromosome missegregation, with a high incidence of hypoploid karyotypes. Unexpectedly, this is not accompanied by a correspondingly high rate of apoptosis and, by contrast, parthenogenetic cells share the pluripotency, self-renewal and in vitro differentiation properties of their bi-parental counterparts. We hypothesise that this is possible through a series of adaptive mechanisms that include the presence of intercellular bridges similar to those that connect germ cells during spermatogenesis. This would provide a way for mutual exchange of missing cell products, thus alleviating the unbalanced chromosome distribution that would otherwise hamper normal cell functions. The presence of intercellular bridges was investigated in pig parthenogenetic embryonic stem cells (PESC) by transmission electron microscopy (TEM). Cultured cells were fixed in 2% glutaraldehyde and post-fixed in 1% osmic acid. After standard dehydration in ethanol series, samples were embedded in an Epon-Araldite 812 mixture and sectioned with a Reichert Ultracut S ultratome (Leica). Thin sections were stained and observed with a Jeol 1010 electron microscope. Pig PESC were also subjected to scanning electron microscopy (SEM). To this purpose, they were fixed and dehydrated as described above, covered with a 9-nm gold film by flash evaporation of carbon in an Emitech K 250 sputter coater (Emitech) and examined with an SEM-FEG Philips XL-30 microscope. To demonstrate functional trafficking activity through intercellular canals, fluorescent 10-kDa dextran was injected into the cytoplasm of a single cell with FemtoJet Microinjector (Eppendorf). Movement of the molecule from the injected cell to others was observed with a Nikon Eclipse TE200 microscope. Ultra-structural analysis of PESC demonstrated the existence of intercellular bridges that ensured cytoplasmic continuity among cells. These canals appeared variable in size and were characterised by the presence of stabilising actin patches. Furthermore, extensive movement of 10-kDa dextran among cells demonstrated functional intercellular trafficking through these communication canals, suggesting their use for transfer of mRNA, proteins and ribosomes among cells. Our results demonstrate that PESC present a wide network of functional intercellular bridges that may constitute an adaptive mechanism to support normal cell functions. This process is commonly observed in transformed cells and gives further support to the recent hypothesis that suggests the existence of common features and links between oncogenesis and self-renewal in pluripotent cell lines. Supported by AIRC IG 10376. PG was supported by INGM.
7

Ulrich, Bernhard. "The history and possible causes of forest decline in central Europe, with particular attention to the German situation." Environmental Reviews 3, no. 3-4 (July 1, 1995): 262–76. http://dx.doi.org/10.1139/a95-013.

Abstract:
The elasticity (nutrient storage, litter decomposition, bioturbation of soil) and diversity of central European forest ecosystems have been reduced by centuries of overutilization. Since the middle of the nineteenth century, their development has been influenced by silvicultural measures, as well as by the deposition of acids and nutrients, especially nitrogen from anthropogenic sources, i.e., by a mixture of stabilizing and destabilizing external influences. During recent decades, most forest soils have been acidified by acid deposition, resulting in low levels of nutrient cations and negative alkalinity in the soil solution. Widespread acute acidification of soil in the rooting zone is indicated by extremely high manganese (Mn) contents in leaves (fingerprint). Soil acidification has caused drastic losses of fine roots in subsoil, indicated by denuded structural root systems where adventitious fine root complexes exist only sporadically. Research at the organ (leaf, fine root, mycorrhiza) and cellular levels has provided much information on the effects of air pollutants and soil acidification on leaves and roots. There are considerable uncertainties, however, as to how changes in the status of leaves or roots are processed within the tree and ecosystem from one level of hierarchy to the next on an increasing spatial and time scale, and how these lead to decline symptoms like crown thinning, stand opening (as a consequence of dieback or perturbations), and changes in species composition (soil biota, ground vegetation, tree regeneration). At the tree level, nutrient imbalances (due to cation losses from soil, changes in the acid/base status of the soil, proton buffering in leaves, and N deposition), as well as disturbances in the transport system of assimilates and water, are suspected of causing the decline symptoms. Information on the filtering mechanisms at various hierarchical levels, especially in the case of a break in the hierarchy, is missing. The null hypothesis (no effects of air pollutants on forest ecosystems) can be considered to be falsified. Forest ecosystems are in transition. The current state of knowledge is not sufficient to define precisely the final state that will be reached, given continuously changing environmental conditions and human impacts. The hypothesis, however, of large-scale forest dieback in the near future is not backed by data and can be discarded. Key words: forest ecosystem, process hierarchy, air pollution, deposition, acidity, nitrogen.
8

Hill, Jennifer L. "Accommodating Missing Data in Mixture Models for Classification by Opinion-Changing Behavior." Journal of Educational and Behavioral Statistics 26, no. 2 (June 2001): 233–68. http://dx.doi.org/10.3102/10769986026002233.

Abstract:
Popular theories in political science regarding opinion-changing behavior postulate the existence of one or both of two broad categories of people: those with stable opinions over time; and those who appear to hold no solid opinion and, when asked to make a choice, do so seemingly at random. The model presented here explores evidence for a third category: durable changers. People in this group will change their opinions in a rational, informed manner, after being exposed to new information. Survey data collected at four time points over nearly two years track Swiss citizens' readiness to support pollution-reduction policies. We analyzed the data using finite mixture models that allow estimation of the percentage of the population falling in each category for each question, as well as the frequency of certain types of relevant behaviors within each category. These models extend the finite mixture model structure used in Hill and Kriesi (2001a,b) to accommodate missing response data. This extension increases the sample size by nearly 60% and weakens the missing-data assumptions required. We describe augmented models and fitting algorithms corresponding to different assumptions about the missing-data mechanism, as well as the differences in results obtained.
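
A toy version of the classification step: the class profiles and proportions below are invented, and missing waves are simply skipped in the likelihood, which is what a missing-at-random assumption licenses:

```python
import numpy as np

# Class-conditional probabilities of supporting the policy at each of 4 waves
# (illustrative numbers): stable supporters, random answerers, durable changers.
theta = np.array([
    [0.9, 0.9, 0.9, 0.9],   # stable
    [0.5, 0.5, 0.5, 0.5],   # random
    [0.2, 0.4, 0.7, 0.9],   # durable changers
])
pi = np.array([0.5, 0.2, 0.3])  # class proportions (illustrative)

def class_posterior(y):
    """Posterior class probabilities for one respondent's 0/1 answers;
    np.nan marks a missing wave and is skipped in the likelihood."""
    obs = ~np.isnan(y)
    lik = np.prod(theta[:, obs] ** y[obs] * (1 - theta[:, obs]) ** (1 - y[obs]),
                  axis=1)
    post = pi * lik
    return post / post.sum()

print(class_posterior(np.array([1.0, np.nan, 1.0, 1.0])))
```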
9

Khalagi, Kazem, Mohammad Ali Mansournia, Seyed-Abbas Motevalian, Keramat Nourijelyani, Afarin Rahimi-Movaghar, and Mahmood Bakhtiyari. "An ad hoc method for dual adjusting for measurement errors and nonresponse bias for estimating prevalence in survey data: Application to Iranian mental health survey on any illicit drug use." Statistical Methods in Medical Research 27, no. 10 (February 23, 2017): 3062–76. http://dx.doi.org/10.1177/0962280217690939.

Abstract:
Purpose: The prevalence estimates of binary variables in sample surveys are often subject to two systematic errors: measurement error and nonresponse bias. A multiple-bias analysis is essential to adjust for both biases. Methods: In this paper, we linked the latent class log-linear and proxy pattern-mixture models to adjust jointly for measurement errors and nonresponse bias under a missing-not-at-random mechanism. These methods were employed to estimate the prevalence of any illicit drug use based on Iranian Mental Health Survey data. Results: After jointly adjusting for measurement errors and nonresponse bias in these data, the prevalence (95% confidence interval) estimate of any illicit drug use changed from 3.41 (3.00, 3.81)% to 27.03 (9.02, 38.76)%, 27.42 (9.04, 38.91)%, and 27.18 (9.03, 38.82)% under “missing at random,” “missing not at random,” and an intermediate mode, respectively. Conclusions: Under certain assumptions, a combination of the latent class log-linear and binary-outcome proxy pattern-mixture models can be used to jointly adjust for both measurement errors and nonresponse bias in the prevalence estimation of binary variables in surveys.
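
The pattern-mixture identity behind such estimates is one line of arithmetic; the numbers below are illustrative placeholders, not the survey's values:

```python
# Overall prevalence is a response-rate-weighted mix of the respondent
# prevalence and an assumed nonrespondent prevalence.
r = 0.85            # response rate (illustrative)
p_resp = 0.034      # measurement-error-corrected prevalence among respondents

for label, p_nonresp in [("MAR", p_resp), ("MNAR", 0.15), ("intermediate", 0.09)]:
    p_overall = r * p_resp + (1 - r) * p_nonresp
    print(f"{label}: {p_overall:.3f}")
```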
10

Arciniegas-Alarcón, Sergio, Marisol García-Peña, Wojtek Janusz Krzanowski, and Carlos Tadeu dos Santos Dias. "An alternative methodology for imputing missing data in trials with genotype-by-environment interaction: some new aspects." Biometrical Letters 51, no. 2 (December 1, 2014): 75–88. http://dx.doi.org/10.2478/bile-2014-0006.

Abstract:
A common problem in multi-environment trials arises when some genotype-by-environment combinations are missing. In Arciniegas-Alarcón et al. (2010) we outlined a method of data imputation to estimate the missing values, the computational algorithm for which was a mixture of regression and lower-rank approximation of a matrix based on its singular value decomposition (SVD). In the present paper we provide two extensions to this methodology, by including weights chosen by cross-validation and allowing multiple as well as simple imputation. The three methods are assessed and compared in a simulation study, using a complete set of real data in which values are deleted randomly at different rates. The quality of the imputations is evaluated using three measures: the Procrustes statistic, the squared correlation between matrices and the normalised root mean squared error between these estimates and the true observed values. None of the methods makes any distributional or structural assumptions, and all of them can be used for any pattern or mechanism of the missing values.
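
A minimal sketch of the SVD-imputation core, without the cross-validated weights or the multiple-imputation extension the paper adds:

```python
import numpy as np

def svd_impute(Y, rank=2, n_iter=200, tol=1e-6):
    """Fill NaN cells of a genotype-by-environment matrix by iterating:
    fill, take a rank-`rank` SVD approximation, refill, until convergence."""
    miss = np.isnan(Y)
    X = np.where(miss, np.nanmean(Y, axis=0), Y)    # start from column means
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r reconstruction
        new = np.where(miss, low, Y)                # keep observed cells fixed
        if np.max(np.abs(new - X)) < tol:
            return new
        X = new
    return X
```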
11

Zhou, Guoqing, Xinghui Wang, and Xinrong Li. "Applying the Technology of Moving Target Detection in Missile Training Equipment." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 06 (March 30, 2017): 1750017. http://dx.doi.org/10.1142/s0218001417500173.

Abstract:
In the past, missile launch training was confined to virtual scenes. Cooperating with an artillery college, our group applied moving-target detection technology to missile training equipment so that training can be carried out under field conditions. This paper presents a frame-difference mapping algorithm for detecting moving targets against the background of a moving video frame. Students carry out missile launch training according to the target region that the system marks in the graphical interface. The proposed moving-target detection algorithm, which offers low complexity and high accuracy, is based on a Gaussian mixture model and frame-difference mapping. The system design uses a layered-graphics mechanism and a message agent that make the system's modules independent of one another, so module coupling is lower than before. This mechanism simplifies system maintenance and upgrades, and in particular future transplantation to a real missile launch system.
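
A rough sketch of such a detector using OpenCV's Gaussian-mixture background subtractor combined with frame differencing; the video file and thresholds are illustrative, not taken from the paper:

```python
import cv2

cap = cv2.VideoCapture("training_range.mp4")   # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = mog.apply(frame)                       # Gaussian-mixture foreground
    diff = cv2.absdiff(gray, prev_gray)         # frame-difference map
    _, diff = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
    moving = cv2.bitwise_and(fg, diff)          # agreement of both cues
    prev_gray = gray                            # `moving` masks the targets
```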
12

Shi, Shaoyan, Yi Jiang, and Xiang Li. "Transient influence of water injection on the flow field of a hot launch in a W-shaped silo." Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 234, no. 3 (September 25, 2019): 665–80. http://dx.doi.org/10.1177/0954410019878535.

Abstract:
This study attempts to determine the effect of and mechanisms for the suppression of the ignition pressure and back-flowing exhaust during a hot launch process in a W-shaped silo, considering the missile movement via dynamic mesh technology. The multiphase mixture model and the turbulence model employed were experimentally shown to be accurate and reliable. The numerical results show that the injection of water into the missile exhaust can effectively suppress the ignition overpressure, thereby reducing the oscillation time and amplitude, and reducing the maximum pressure at the missile bottom from 3.68 × 10^5 Pa to 2.39 × 10^5 Pa. In addition, water injection keeps the back-flowing exhaust at the bottom of the launch duct instead of filling the duct, and it cools the missile bottom, leading to a reduction in the maximum temperature from 3000 K to approximately 500 K.
13

Boulangier, Jels, D. Gobrecht, L. Decin, A. de Koter, and J. Yates. "Developing a self-consistent AGB wind model – II. Non-classical, non-equilibrium polymer nucleation in a chemical mixture." Monthly Notices of the Royal Astronomical Society 489, no. 4 (September 10, 2019): 4890–911. http://dx.doi.org/10.1093/mnras/stz2358.

Abstract:
Unravelling the composition and characteristics of gas and dust lost by asymptotic giant branch (AGB) stars is important as these stars play a vital role in the chemical life cycle of galaxies. The general hypothesis of their mass-loss mechanism is a combination of stellar pulsations and radiative pressure on dust grains. However, current models simplify dust formation, which starts as a microscopic phase transition called nucleation. Various nucleation theories exist, yet all assume chemical equilibrium, growth restricted by monomers, and commonly use macroscopic properties for a microscopic process. Such simplifications for initial dust formation can have large repercussions on the type, amount, and formation time of dust. By abandoning equilibrium assumptions, discarding growth restrictions, and using quantum mechanical properties, we have constructed and investigated an improved nucleation theory in AGB wind conditions for four dust candidates, TiO2, MgO, SiO, and Al2O3. This paper reports the viability of these candidates as first dust precursors and reveals implications of simplified nucleation theories. Monomer-restricted growth underpredicts large clusters at low temperatures and overpredicts formation times. Assuming the candidates are present, Al2O3 is the favoured precursor due to its rapid growth at the highest considered temperatures. However, when considering an initially atomic chemical mixture, only TiO2 clusters form. Still, we believe Al2O3 to be the prime candidate, given the substantial physical evidence in presolar grains, observations of dust around AGB stars at high temperatures, and its ability to form at high temperatures; we attribute the missing link to insufficient quantitative data on Al reactions.
14

Fu, Xiaojing, Joaquin Jimenez-Martinez, Thanh Phong Nguyen, J. William Carey, Hari Viswanathan, Luis Cueto-Felgueroso, and Ruben Juanes. "Crustal fingering facilitates free-gas methane migration through the hydrate stability zone." Proceedings of the National Academy of Sciences 117, no. 50 (November 30, 2020): 31660–64. http://dx.doi.org/10.1073/pnas.2011064117.

Abstract:
Widespread seafloor methane venting has been reported in many regions of the world oceans in the past decade. Identifying and quantifying where and how much methane is being released into the ocean remains a major challenge and a critical gap in assessing the global carbon budget and predicting future climate [C. Ruppel, J. D. Kessler. Rev. Geophys. 55, 126–168 (2017)]. Methane hydrate (CH4⋅5.75H2O) is an ice-like solid that forms from a methane–water mixture under elevated-pressure and low-temperature conditions typical of deep marine settings (>600-m depth), often referred to as the hydrate stability zone (HSZ). Wide-ranging field evidence indicates that methane seepage often coexists with hydrate-bearing sediments within the HSZ, suggesting that hydrate formation may play an important role during the gas-migration process. At a depth that is too shallow for hydrate formation, existing theories suggest that gas migration occurs via capillary invasion and/or initiation and propagation of fractures (Fig. 1). Within the HSZ, however, a theoretical mechanism that addresses the way in which hydrate formation participates in the gas-percolation process is missing. Here, we study, experimentally and computationally, the mechanics of gas percolation under hydrate-forming conditions. We uncover a phenomenon—crustal fingering—and demonstrate how it may control methane-gas migration in ocean sediments within the HSZ.
15

Compernolle, S., K. Ceulemans, and J. F. Müller. "Influence of non-ideality on condensation to aerosol." Atmospheric Chemistry and Physics 9, no. 4 (February 19, 2009): 1325–37. http://dx.doi.org/10.5194/acp-9-1325-2009.

Abstract:
Secondary organic aerosol (SOA) is a complex mixture of water and organic molecules. Its composition is determined by the presence of semi-volatile or non-volatile compounds, their saturation vapor pressure and their activity coefficient. The activity coefficient is a non-ideality effect and is a complex function of SOA composition. In a previous publication, the detailed chemical mechanism (DCM) for α-pinene oxidation and subsequent aerosol formation, BOREAM, was presented. In this work, we investigate with this DCM the impact of non-ideality by simulating smog chamber experiments for α-pinene degradation and aerosol formation, taking into account the activity coefficients of all molecules in the aerosol phase. Several versions of the UNIFAC method are tested for this purpose, and missing parameters for e.g. hydroperoxides and nitrates are inferred from fittings to activity coefficient data generated using the SPARC model. Alternative approaches to deal with these missing parameters are also tested, as well as an activity coefficient calculation method based on Hansen solubility parameters (HSP). It turns out that for most experiments, non-ideality has only a limited impact on the interaction between the organic molecules, and therefore on SOA yields and composition, when water uptake is ignored. The reason is that the activity coefficient is often on average close to 1 and, specifically for high-VOC experiments, partitioning is not very sensitive to the activity coefficient because the equilibrium is shifted strongly towards condensation. Still, for ozonolysis experiments with low amounts of volatile organic carbon (low-VOC), the UNIFAC parameterization of Raatikainen et al. leads to significantly higher SOA yields (by up to a factor of 1.6) compared to the ideal case and to other parameterizations. Water uptake is model dependent, in the order: ideal > UNIFAC-Raatikainen > UNIFAC-Peng > UNIFAC-Hansen ≈ UNIFAC-Magnussen ≈ UNIFAC-Ming. In the absence of salt dissolution, phase splitting from pure SOA is unlikely.
16

Compernolle, S., K. Ceulemans, and J. F. Müller. "Influence of non-ideality on aerosol growth." Atmospheric Chemistry and Physics Discussions 8, no. 5 (September 10, 2008): 17061–93. http://dx.doi.org/10.5194/acpd-8-17061-2008.

Abstract:
Secondary organic aerosol (SOA) is a complex mixture of water and organic molecules. Its composition is determined by the presence of semi-volatile or non-volatile compounds, their vapor pressure and their activity coefficient. The activity coefficient is a non-ideality effect and is a complex function of SOA composition. In a previous publication, the detailed chemical mechanism (DCM) for α-pinene oxidation and subsequent aerosol formation, BOREAM, was presented. In this work, we investigate with this DCM the impact of non-ideality by simulating smog chamber experiments for α-pinene degradation and aerosol formation. Several versions of the UNIFAC method are tested for this purpose, and missing parameters for e.g. hydroperoxides and nitrates are inferred from fittings to activity coefficient data generated using the SPARC model. It turns out that for most experiments, non-ideality has only a limited impact on the interaction between the organic molecules, and therefore on SOA yields and composition, when water uptake is ignored. Still, for ozonolysis experiments with low amounts of volatile organic carbon (low-VOC), the UNIFAC parameterization of Raatikainen et al. leads to significantly higher SOA yields (by up to a factor of 1.6) compared to the ideal case and to other parameterizations. Water uptake is model dependent, in the order: ideal > UNIFAC-Raatikainen > UNIFAC-Peng > UNIFAC-Hansen ≈ UNIFAC-Magnussen ≈ UNIFAC-Ming. In the absence of salt dissolution, phase splitting from pure SOA is unlikely.
17

Nagata, Yasunobu, Hiromichi Suzuki, Vera Grossmann, Genta Nagae, Yusuke Okuno, Ulrike Bacher, Susanne Schnittger, et al. "Landscape of DNA Methylation and Genetic Profiles in 291 Patients with Myelodysplastic Syndromes." Blood 126, no. 23 (December 3, 2015): 5205. http://dx.doi.org/10.1182/blood.v126.23.5205.5205.

Abstract:
DNA hypermethylation has long been implicated in the pathogenesis of myelodysplastic syndromes (MDS), and is also highlighted by the frequent efficacy of demethylating agents in this disease. Meanwhile, recent genetic studies in MDS have revealed a high frequency of somatic mutations involving epigenetic regulators, suggesting a causative link between gene mutations and epigenetic alterations in MDS. The accumulation of genetic and epigenetic alterations promotes tumorigenesis; hypomethylating agents such as azacitidine exert their therapeutic effect through inhibition of DNA methylation. However, the relationship between patterns of epigenetic phenotypes and mutations, as well as their impact on therapy, has not been clarified. To address this issue, we performed genome-wide DNA methylation profiling (Infinium 450K) in combination with targeted deep sequencing of 104 genes for somatic mutations in 291 patients with MDS. Beta-mixture quantile normalization was performed to correct probe design bias in the Illumina Infinium 450K DNA methylation data. Of the >480,000 probes on the methylation chip, we selected probes using the following steps: (i) probes annotated with "Promoter_Associated" or "Promoter_Associated_Cell_type_specific"; (ii) probes designed in "Island", "N_Shore" or "S_Shore"; (iii) removing probes designed on the X and Y chromosomes; (iv) removing probes with >10% missing values. Consensus clustering was performed using hierarchical clustering based on Ward and Pearson correlation algorithms with 1,000 iterations on the top 0.5% (2,000) of probes showing high variation by median absolute deviation across the dataset, using the Bioconductor package ConsensusClusterPlus. The number of clusters was determined by the relative change in the area under the cumulative distribution function curve from consensus clustering. Unsupervised clustering analysis of DNA methylation revealed 3 subtypes of MDS, M1-M3, showing discrete methylation profiles with characteristic gene mutations and cytogenetics. The M1 subtype (n=121) showed a high frequency of SF3B1 mutations, exhibiting the best clinical outcome, whereas the M2 subtype (n=106), characterized by frequent ASXL1 and TP53 mutations and high-risk cytogenetics, showed the shortest overall survival, with hazard ratios of 3.4 (95% CI: 1.9-6.0) and 2.2 (95% CI: 1.2-4.0) compared to M1 and M3, respectively. Finally, the M3 subtype (n=64) was highly enriched (70% of cases) for biallelic alterations of TET2, showed the highest level of CpG island methylation, and showed an intermediate survival. In the current cohort, we had 47 patients who were treated with demethylating agents, including 11 responders and 36 non-responders. When DNA methylation status at diagnosis was evaluated in terms of response to demethylating agents, we identified 54 differentially methylated genes showing >20% difference in mean methylation levels between responders and non-responders (q < 0.1). Twenty-five genes more methylated in responders were enriched in functional pathways such as chemokine receptors and genes with EGF-like domains, whereas 29 genes less methylated in responders were in gene sets related to regulation of cell proliferation. Genetic alterations were also assessed for how they affected treatment response. TET2-mutated patients tended to respond more frequently (45% vs 34%), whereas IDH1/2 and DNMT3A mutations were less frequent in responders than in non-responders (0% vs 14% and 9% vs 14%, respectively).
In conclusion, our combined genetic and methylation analysis unmasked previously unrecognized associations between gene mutations and DNA methylation, suggesting a causative link between them. We identified correlations between genetic/epigenetic profiles and the response to demethylating agents, which, however, need further investigation to clarify the mechanism of, and predict, response to demethylating agents in MDS. Disclosures: Alpermann: MLL Munich Leukemia Laboratory: Employment. Nadarajah: MLL Munich Leukemia Laboratory: Employment. Haferlach: MLL Munich Leukemia Laboratory: Employment, Equity Ownership. Kiyoi: Taisho Toyama Pharmaceutical Co., Ltd.: Research Funding; Novartis Pharma K.K.: Research Funding; Pfizer Inc.: Research Funding; Takeda Pharmaceutical Co., Ltd.: Research Funding; MSD K.K.: Research Funding; Sumitomo Dainippon Pharma Co., Ltd.: Research Funding; Alexion Pharmaceuticals: Research Funding; Teijin Ltd.: Research Funding; Zenyaku Kogyo Company, Ltd.: Research Funding; FUJIFILM RI Pharma Co., Ltd.: Patents & Royalties, Research Funding; Nippon Shinyaku Co., Ltd.: Research Funding; Japan Blood Products Organization: Research Funding; Eisai Co., Ltd.: Research Funding; Yakult Honsha Co., Ltd.: Research Funding; Astellas Pharma Inc.: Consultancy, Research Funding; Kyowa Hakko Kirin Co., Ltd.: Consultancy, Research Funding; Fujifilm Corporation: Patents & Royalties, Research Funding; Nippon Boehringer Ingelheim Co., Ltd.: Research Funding; Bristol-Myers Squibb: Research Funding; Chugai Pharmaceutical Co., Ltd.: Research Funding; Mochida Pharmaceutical Co., Ltd.: Research Funding. Kobayashi: Gilead Sciences: Research Funding. Naoe: Toyama Chemical Co., Ltd.: Research Funding; Otsuka Pharmaceutical Co., Ltd.: Research Funding; Nippon Boehringer Ingelheim Co., Ltd.: Research Funding; Kyowa Hakko Kirin Co., Ltd.: Patents & Royalties, Research Funding; Pfizer Inc.: Research Funding; Astellas Pharma Inc.: Research Funding; FUJIFILM Corporation: Patents & Royalties, Research Funding; Celgene K.K.: Research Funding; Chugai Pharmaceutical Co., Ltd.: Patents & Royalties. Kern: MLL Munich Leukemia Laboratory: Employment, Equity Ownership. Miyazaki: Chugai: Honoraria, Research Funding; Shin-bio: Honoraria; Sumitomo Dainippon: Honoraria; Celgene Japan: Honoraria; Kyowa-Kirin: Honoraria, Research Funding.
18

Staudt, Andreas, Jennis Freyer-Adam, Till Ittermann, Christian Meyer, Gallus Bischof, Ulrich John, and Sophie Baumann. "Sensitivity analyses for data missing at random versus missing not at random using latent growth modelling: a practical guide for randomised controlled trials." BMC Medical Research Methodology 22, no. 1 (September 24, 2022). http://dx.doi.org/10.1186/s12874-022-01727-1.

Abstract:
Background: Missing data are ubiquitous in randomised controlled trials. Although sensitivity analyses for different missing data mechanisms (missing at random vs. missing not at random) are widely recommended, they are rarely conducted in practice. The aim of the present study was to demonstrate sensitivity analyses for different assumptions regarding the missing data mechanism for randomised controlled trials using latent growth modelling (LGM). Methods: Data from a randomised controlled brief alcohol intervention trial were used. The sample included 1646 adults (56% female; mean age = 31.0 years) from the general population who had received up to three individualized alcohol feedback letters or assessment-only. Follow-up interviews were conducted after 12 and 36 months via telephone. The main outcome for the analysis was change in alcohol use over time. A three-step LGM approach was used. First, evidence about the process that generated the missing data was accumulated by analysing the extent of missing values in both study conditions, missing data patterns, and baseline variables that predicted participation in the two follow-up assessments using logistic regression. Second, growth models were calculated to analyse intervention effects over time. These models assumed that data were missing at random and applied full-information maximum likelihood estimation. Third, the findings were safeguarded by incorporating model components to account for the possibility that data were missing not at random. For that purpose, Diggle-Kenward selection, Wu-Carroll shared parameter and pattern mixture models were implemented. Results: Although the true data generating process remained unknown, the evidence was unequivocal: both the intervention and control group reduced their alcohol use over time, but no significant group differences emerged. There was no clear evidence for intervention efficacy, neither in the growth models that assumed the missing data to be at random nor in those that assumed the missing data to be not at random. Conclusion: The illustrated approach allows the assessment of how sensitive conclusions about the efficacy of an intervention are to different assumptions regarding the missing data mechanism. For researchers familiar with LGM, it is a valuable statistical supplement to safeguard their findings against the possibility of nonignorable missingness. Trial registration: The PRINT trial was prospectively registered at the German Clinical Trials Register (DRKS00014274, date of registration: 12 March 2018).
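
A deliberately simplified, single-fill version of such a sensitivity analysis: impute under MAR, shift the filled values by a range of MNAR departures delta, and track the group-by-time estimate. File and column names are hypothetical; the paper itself fits Diggle-Kenward selection, shared-parameter and pattern-mixture LGMs:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df has columns: id, group (0/1), time, alcohol (NaN after dropout).
df = pd.read_csv("trial_long.csv")            # hypothetical long-format data

# A crude MAR fill: the group-by-time mean among observed cases.
mar_fill = df.groupby(["group", "time"])["alcohol"].transform("mean")

for delta in [0.0, 0.5, 1.0]:                 # assumed MNAR departures
    d = df.copy()
    miss = d["alcohol"].isna()
    # Under MNAR, assume dropouts drink `delta` more than MAR predicts.
    d.loc[miss, "alcohol"] = mar_fill[miss] + delta
    fit = smf.mixedlm("alcohol ~ group * time", d, groups=d["id"]).fit()
    print(delta, fit.params["group:time"])    # stability across deltas
```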
19

Si, Yajuan, Roderick J. A. Little, Ya Mo, and Nell Sedransk. "A Case Study of Nonresponse Bias Analysis in Educational Assessment Surveys." Journal of Educational and Behavioral Statistics, December 15, 2022, 107699862211410. http://dx.doi.org/10.3102/10769986221141074.

Abstract:
Nonresponse bias is a widely prevalent problem for data on education. We develop a ten-step exemplar to guide nonresponse bias analysis (NRBA) in cross-sectional studies and apply these steps to the Early Childhood Longitudinal Study, Kindergarten Class of 2010–2011. A key step is the construction of indices of nonresponse bias based on proxy pattern-mixture models for survey variables of interest. A novel feature is to characterize the strength of evidence about nonresponse bias contained in these indices, based on the strength of the relationship between the characteristics in the nonresponse adjustment and the key survey variables. Our NRBA improves the existing methods by incorporating both missing at random and missing not at random mechanisms, and all analyses can be done straightforwardly with standard statistical software.
20

Zhang, Chao, Shaojie Hu, Zemin Qiu, and Ning Lu. "A Poroelasticity Theory for Soil Incorporating Adsorption and Capillarity." Géotechnique, October 31, 2022, 1–57. http://dx.doi.org/10.1680/jgeot.22.00097.

Abstract:
Adsorption and capillarity, in order from high free energy to low, are the two soil-water interaction mechanisms controlling the hydro-mechanical behavior of soils. Yet most poroelasticity theories of soil are based on capillarity only, leading to misrepresentations of hydro-mechanical behavior in the low free energy regime beyond vaporization. This limitation is attributed to two major shortcomings in the existing theories: missing interparticle attraction energy and an incomplete definition of adsorption-induced pore water pressure. A poroelasticity theory is formulated to incorporate the two soil-water interaction mechanisms, and the transition between them, i.e., condensation/vaporization, by expanding the classical three-phase mixture system to a four-phase mixture system with adsorptive water as an additional phase. An interparticle attractive stress is identified as one of the key sources of adsorption-induced deformation and strength of soils and is implemented in the poroelasticity theory. A recent breakthrough concept of soil sorptive potential is utilized to establish the physical link between adsorption-induced pore water pressure and matric suction. The proposed poroelasticity theory can be reduced to several previous theories when interparticle attractive stress is ignored. The new theory is used to derive the effective stress equation for variably-saturated soil by identifying energy-conjugated pairs. The derived effective stress equation leads to Zhang and Lu's unified effective stress equation, and can be reduced to Bishop's effective stress equation when only the capillary mechanism is considered and to Terzaghi's effective stress equation when saturated conditions are imposed. The derived effective stress equation is experimentally validated for a variety of soils over the full matric suction range, substantiating the validity and accuracy of the poroelasticity theory for soil under variably-saturated conditions.
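
For reference, the two classical limits mentioned in the abstract, where \chi is Bishop's effective stress parameter and u_a, u_w are the pore air and water pressures:

```latex
% Bishop's effective stress for variably saturated soil
\sigma' = (\sigma - u_a) + \chi \, (u_a - u_w)

% With \chi = 1 (saturated conditions) this reduces to Terzaghi's equation
\sigma' = \sigma - u_w
```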
21

Yu, Fangtang, Chao Xu, Hong-Wen Deng, and Hui Shen. "A novel computational strategy for DNA methylation imputation using mixture regression model (MRM)." BMC Bioinformatics 21, no. 1 (December 2020). http://dx.doi.org/10.1186/s12859-020-03865-z.

Abstract:
Background: DNA methylation is an important heritable epigenetic mark that plays a crucial role in transcriptional regulation and the pathogenesis of various human disorders. The commonly used DNA methylation measurement approaches, e.g., Illumina Infinium HumanMethylation-27 and -450 BeadChip arrays (27K and 450K arrays) and reduced representation bisulfite sequencing (RRBS), cover only a small proportion of the total CpG sites in the human genome, which considerably limits the scope of DNA methylation analysis in those studies. Results: We proposed a new computational strategy to impute the methylation value at unmeasured CpG sites using a mixture of regression models (MRM) of radial basis functions, integrating information from neighboring CpGs and the similarities in local methylation patterns across subjects and across multiple genomic regions. Our method achieved better imputation accuracy than a set of competing methods on both simulated and empirical data, particularly when the missing rate is high. By applying MRM to an RRBS dataset from subjects with low versus high bone mineral density (BMD), we recovered methylation values of ~300 K CpGs in the promoter regions of chromosome 17 and identified some novel differentially methylated CpGs that are significantly associated with BMD. Conclusions: Our method is well applicable to numerous methylation studies. By expanding the coverage of the methylation dataset to unmeasured sites, it can significantly enhance the discovery of novel differential methylation signals and thus reveal the mechanisms underlying various human disorders/traits.
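
A stripped-down stand-in for the MRM idea: predict an unmeasured CpG from measured neighbours with a single RBF regression (the full method mixes several such regressions; all data below are simulated):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
neighbours = rng.uniform(0, 1, size=(300, 5))   # methylation of 5 nearby CpGs
target = neighbours.mean(axis=1) + rng.normal(0, 0.05, 300)

# RBF-kernel regression from neighbouring sites to the target site.
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0)
model.fit(neighbours[:200], target[:200])       # subjects with the site measured
imputed = model.predict(neighbours[200:])       # subjects missing the site
```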
22

Gomer, Brenna, and Ke-Hai Yuan. "A Realistic Evaluation of Methods for Handling Missing Data When There is a Mixture of MCAR, MAR, and MNAR Mechanisms in the Same Dataset." Multivariate Behavioral Research, January 4, 2023, 1–26. http://dx.doi.org/10.1080/00273171.2022.2158776.

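
A small simulation of the setting the title describes, with each case's missingness governed by one of the three mechanisms (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                      # always-observed covariate
y = x + rng.normal(size=n)                  # variable to receive missingness

mech = rng.choice(3, size=n)                # assign each case one mechanism
p = np.empty(n)
p[mech == 0] = 0.2                                  # MCAR: constant rate
p[mech == 1] = 1 / (1 + np.exp(-x[mech == 1]))      # MAR: depends on x
p[mech == 2] = 1 / (1 + np.exp(-y[mech == 2]))      # MNAR: depends on y itself
miss = rng.random(n) < p
y_obs = np.where(miss, np.nan, y)
```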
23

Marrocco, Francesco, Mary Delli Carpini, Stefano Garofalo, Ottavia Giampaoli, Eleonora De Felice, Maria Amalia Di Castro, Laura Maggi, et al. "Short-chain fatty acids promote the effect of environmental signals on the gut microbiome and metabolome in mice." Communications Biology 5, no. 1 (May 31, 2022). http://dx.doi.org/10.1038/s42003-022-03468-9.

Abstract:
Gut microorganisms and the products of their metabolism thoroughly affect host brain development, function and behavior. Since alterations of brain plasticity and cognition have been demonstrated upon motor, sensorial and social enrichment of housing conditions, we hypothesized that the gut microbiota and metabolome could be altered by environmental stimuli, providing part of the missing link between environmental signals and brain effects. In this preliminary study, metagenomic and metabolomic analyses of mice housed in different environmental conditions, standard and enriched, identify environment-specific microbial communities and metabolic profiles. We show that mice housed in an enriched environment have a distinctive microbiota composition, with a reduction in gut bacterial richness and biodiversity, and are characterized by a metabolomic fingerprint with an increase of formate and acetate and a decrease of bile salts. We demonstrate that mice treated with a mixture of formate and acetate recapitulate some of the brain plasticity effects modulated by environmental enrichment, such as hippocampal neurogenesis, neurotrophin production, short-term plasticity and cognitive behaviors, which can be further exploited to decipher the mechanisms involved in experience-dependent brain plasticity.
24

Ma, Zhihua, and Guanghui Chen. "Bayesian joint analysis using a semiparametric latent variable model with non-ignorable missing covariates for CHNS data." Statistical Modelling, February 19, 2020, 1471082X1989668. http://dx.doi.org/10.1177/1471082x19896688.

Abstract:
Motivated by the China Health and Nutrition Survey (CHNS) data, a semiparametric latent variable model with a Dirichlet process (DP) mixture prior on the latent variable is proposed to jointly analyse mixed binary and continuous responses. Non-ignorable missing covariates are considered through a selection model framework in which a missing covariate model and a missing data mechanism model are included. The logarithm of the pseudo-marginal likelihood (LPML) is applied for selecting the priors, and the deviance information criterion measure, focusing on the missing data mechanism model only, is used for selecting among different missing data mechanisms. A Bayesian index of local sensitivity to non-ignorability (ISNI) is extended to explore the local sensitivity of the parameters in our model. A simulation study is carried out to examine the empirical performance of the proposed methodology. Finally, the proposed model and the ISNI index are applied to analyse the CHNS data in the motivating example.
25

Shen, Minjie, Yi-Tan Chang, Chiung-Ting Wu, Sarah J. Parker, Georgia Saylor, Yizhi Wang, Guoqiang Yu, et al. "Comparative assessment and novel strategy on methods for imputing proteomics data." Scientific Reports 12, no. 1 (January 20, 2022). http://dx.doi.org/10.1038/s41598-022-04938-0.

Abstract:
Missing values are a major issue in quantitative proteomics analysis. While many methods have been developed for imputing missing values in high-throughput proteomics data, a comparative assessment of imputation accuracy remains inconclusive, mainly because the mechanisms contributing to true missing values are complex and existing evaluation methodologies are imperfect. Moreover, few studies have provided an outlook of future methodological development. We first re-evaluate the performance of eight representative methods targeting three typical missing mechanisms. These methods are compared on both simulated and masked missing values embedded within real proteomics datasets, and performance is evaluated using three quantitative measures. We then introduce fused regularization matrix factorization, a low-rank global matrix factorization framework capable of integrating local similarity derived from additional data types. We also explore a biologically inspired latent variable modeling strategy, convex analysis of mixtures, for missing value imputation and present preliminary experimental results. While some winners emerged from our comparative assessment, the evaluation is intrinsically imperfect because performance is evaluated indirectly on artificial missing or masked values, not authentic missing values. Nevertheless, we show that our fused regularization matrix factorization provides a novel incorporation of external and local information, and the exploratory implementation of convex analysis of mixtures presents a biologically plausible new approach.
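
A basic regularized matrix factorization imputer of the kind assessed here, fitted by alternating ridge updates on the observed entries only; the fused local-similarity term of the paper's proposal is omitted:

```python
import numpy as np

def mf_impute(Y, rank=5, lam=0.1, n_iter=50, seed=0):
    """Minimize ||P * (Y - U V^T)||^2 + lam (||U||^2 + ||V||^2) over the
    observed-entry mask P, then fill the missing cells from U V^T."""
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(Y)
    n, p = Y.shape
    U = rng.normal(scale=0.1, size=(n, rank))
    V = rng.normal(scale=0.1, size=(p, rank))
    I = lam * np.eye(rank)
    for _ in range(n_iter):
        for i in range(n):                    # ridge update for each row factor
            o = obs[i]
            U[i] = np.linalg.solve(V[o].T @ V[o] + I, V[o].T @ Y[i, o])
        for j in range(p):                    # ridge update for each column factor
            o = obs[:, j]
            V[j] = np.linalg.solve(U[o].T @ U[o] + I, U[o].T @ Y[o, j])
    return np.where(obs, Y, U @ V.T)
```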
26

Pugh, Stephanie L., Paul D. Brown, and Danielle Enserro. "Missing repeated measures data in clinical trials." Neuro-Oncology Practice, July 16, 2021. http://dx.doi.org/10.1093/nop/npab043.

Abstract:
Clinical trials typically collect longitudinal data, i.e., data that are collected repeatedly over time, such as laboratories, scans, or patient-reported outcomes. For a variety of reasons, these data can be missing, whether because a patient stops attending clinical visits (ie, dropout) or misses assessments intermittently. Understanding the reasons for missing data as well as predictors of missing data can aid in determination of the missing data mechanism. The analysis methods used depend on the missing data mechanism and may make certain assumptions about the missing data itself. Methods for nonignorable missing data, which assume that the missingness depends on the missing data itself, make stronger assumptions and include pattern-mixture models and shared parameter models. Missing data that are ignorable after adjusting for other covariates can be analyzed using methods that adjust for covariates, such as mixed-effects models or multiple imputation. Missing data that are ignorable can be analyzed using standard approaches that require complete case data, such as change from baseline or the proportion of patients who declined at a specified time point. In clinical trials, truly ignorable data are rare, so additional analysis methods are required for proper interpretation of the results. Conducting several analyses under different assumptions, called sensitivity analyses, can determine the extent of the impact of the missing data.
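
For the ignorable case, a likelihood-based mixed-effects analysis of all available visits is the standard tool; a minimal sketch with hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient visit; missed visits simply contribute no row.
long = pd.read_csv("qol_long.csv")
long = long.dropna(subset=["qol"])
fit = smf.mixedlm("qol ~ arm * month", long, groups=long["patient_id"]).fit()
print(fit.summary())
```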
27

Papageorgiou, Grigorios, and Dimitris Rizopoulos. "An alternative characterization of MAR in shared parameter models for incomplete longitudinal data and its utilization for sensitivity analysis." Statistical Modelling, June 10, 2020, 1471082X2092711. http://dx.doi.org/10.1177/1471082x20927114.

Abstract:
Dropout is a common complication in longitudinal studies, especially since the distinction between missing not at random (MNAR) and missing at random (MAR) dropout is intractable. Consequently, one starts with an analysis that is valid under MAR and then performs a sensitivity analysis by considering MNAR departures from it. To this end, specific classes of joint models, such as pattern-mixture models (PMMs) and selection models (SeMs), have been proposed. In contrast, shared-parameter models (SPMs) have received less attention, possibly because they do not embody a characterization of MAR. A few approaches to achieve MAR in SPMs exist, but they are difficult to implement in existing software. In this article, we focus on SPMs for incomplete longitudinal and time-to-dropout data and propose an alternative characterization of MAR by exploiting the conditional independence assumption, under which outcome and missingness are independent given a set of random effects. By doing so, the censoring distribution can be utilized to cover a wide range of assumptions for the missing data mechanism on the subject-specific level. This approach offers substantial advantages over its counterparts and can be easily implemented in existing software. More specifically, it offers flexibility over the assumption for the missing data generating mechanism that governs dropout by allowing subject-specific perturbations of the censoring distribution, whereas in PMMs and SeMs dropout is considered strictly MNAR.
28

Quintano, Claudio, Rosalia Castellano, and Antonella Rocca. "Influence of outliers on some multiple imputation methods." Advances in Methodology and Statistics 7, no. 1 (January 1, 2010). http://dx.doi.org/10.51936/tuki4538.

Abstract:
In the field of data quality, imputation is the most widely used method for handling missing data. The performance of imputation techniques is influenced by various factors, especially when the data represent only a sample of the population, for example the survey design characteristics. In this paper, we compare the results of different multiple imputation methods in terms of final estimates when outliers occur in a dataset. In order to evaluate the influence of outliers on the performance of these methods, each procedure is applied both before and after identifying and removing the outliers. For this purpose, missing data were simulated on data coming from the ISTAT annual sample survey on Small and Medium Enterprises. A MAR mechanism is assumed for the missing data. The methods are based on multiple imputation through Markov Chain Monte Carlo (MCMC), the propensity score and mixture models. The results highlight the strong influence of data characteristics on the final estimates.
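
A small sketch of the experiment's logic with scikit-learn's chained-equations imputer: pool estimates over several stochastic imputations, then repeat after screening outliers (here crudely flagged by z-score and set to missing; the file name is hypothetical):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def pooled_means(X, n_draws=5):
    """Pool column means over several stochastic imputations (simple MI)."""
    draws = [IterativeImputer(sample_posterior=True,
                              random_state=s).fit_transform(X)
             for s in range(n_draws)]
    return np.mean([d.mean(axis=0) for d in draws], axis=0)

X = np.genfromtxt("sme_survey.csv", delimiter=",")   # hypothetical data, NaNs
before = pooled_means(X)

# Crude z-score screen; flagged cells become missing and are re-imputed.
z = np.abs((X - np.nanmean(X, axis=0)) / np.nanstd(X, axis=0))
after = pooled_means(np.where(z > 4, np.nan, X))
print(before, after)
```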
29

Sacco, Chiara, Cinzia Viroli, and Mario Falchi. "A statistical test for detecting parent-of-origin effects when parental information is missing." Statistical Applications in Genetics and Molecular Biology 16, no. 4 (January 1, 2017). http://dx.doi.org/10.1515/sagmb-2017-0007.

Full text
Abstract:
Genomic imprinting is an epigenetic mechanism that leads to differential contributions of maternal and paternal alleles to offspring gene expression in a parent-of-origin manner. We propose a novel test for detecting parent-of-origin effects (POEs) in genome-wide genotype data from related individuals (twins) when the parental origin cannot be inferred. The proposed method exploits a finite mixture of linear mixed models: the key idea is that, in the presence of POEs, the population can be clustered into two groups in which the reference allele is inherited from a different parent. A further advantage of this approach is the possibility of estimating the parental effect when the parental information is missing. We also show that the approach is flexible enough to be applicable to the general scenario of independent data. The performance of the proposed test is evaluated through a wide simulation study, and the method is finally applied to known imprinted genes in the MuTHER twin study data.
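The mixture construction can be sketched in a formula; the notation below is ours and reduces the authors' linear mixed model to its fixed-effect core.

```latex
% Two-component mixture for a parent-of-origin effect when the parental
% origin of the reference allele is unknown (schematic, our notation):
\[
  f(y_i) = \pi\,\phi\!\left(y_i;\; x_i^{\top}\beta + \gamma,\; \sigma^2\right)
         + (1-\pi)\,\phi\!\left(y_i;\; x_i^{\top}\beta - \gamma,\; \sigma^2\right)
\]
% Heterozygous individuals fall into two latent clusters with opposite
% allelic effects (+gamma or -gamma, depending on which parent
% transmitted the reference allele); a POE is detected by testing
% H0: gamma = 0 against the mixture alternative.
```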
APA, Harvard, Vancouver, ISO, and other styles
30

Jonuks, Tõnno, Atko Remmel, Lona Päll, and Ulla Kadakas. "Sõjakas kaitse – konfliktid loodus- ja kultuuripärandi hoiu kujundamisel / Warlike Protection – Conflicts in Shaping the Preservation of Natural and Cultural Heritage." Methis. Studia humaniora Estonica 24, no. 30 (December 13, 2022). http://dx.doi.org/10.7592/methis.v24i30.22112.

Full text
Abstract:
In this article we examine the sharp confrontations that have arisen over the protection of Estonian nature and culture, which the participants have sometimes called wars. The case studies range from the protection of sacred sites to that of forests and urban nature. We consider how these confrontations have developed, which arguments are used, and which parties to the conflicts can be distinguished. Our aim is not to find solutions to the conflicts or to prevent them; instead, we want to understand why the protection of nature and culture sometimes turns into a conflict conducted in warlike rhetoric. Summary This paper studies examples of the protection of natural or cultural objects in Estonia developing into sharp conflicts during the past couple of decades. Various mechanisms have been developed to avoid, prevent and solve conflicts, yet sharp oppositions still occur. Our aim is not to provide yet another methodology of conflict solving, but rather to look behind it: who participates in such conflicts, what their reasons and arguments are, and what kind of rhetoric they use. Such an approach proceeds from Juri Lotman’s suggestion that it is not agreements but contradictions that make a dialogue fruitful. The case studies discussed in the paper range from folkloric sacred sites to the protection of forests and natural objects in urban environments. In all examples, we observed the presence of two parties, whom we call the ‘developers’ and the ‘protectors’. In all cases, the developers found themselves in the middle of a conflict they had not foreseen and could not handle, as their only purpose was to carry out the initial project, be it a building, forest clearing or the like. In terms of conflict management, the ‘developers’ have always been followers of the conflict, reacting to it but not leading it. The other side, the ‘protectors’, consists of an amorphous group of people, some of whom are local inhabitants, while others participate in the protection for worldview-related, moral or ideological reasons. In all the cases observed, it is the ‘protectors’ who escalate matters into a conflict, mostly because they are un-institutionalised and thus less visible; in order to become an equal partner and force the developers into a discussion, they resort to conflict rhetoric and methods. Conflicts are usually played out in public and on social media in the form of short, easy-to-read messages. Mediatisation is the main characteristic of contemporary conflicts and is adopted by both sides. Our cases demonstrate that a clear and uniform narrative is important in order to control a conflict and make the other side accept it. The protection of folkloric sacred sites has been guided by Maavalla Koda, the representative body of a leading contemporary pagan organisation in Estonia. Likewise, protecting forests from clear-cutting has been directed by grass-roots organisations. In the case of the folkloric sacred sites, the protectors have been successful and the developments have been stopped in almost all cases. Preventing forest clear-cutting has not been so unambiguously successful, but the aggressive rhetoric and active public campaigns have certainly influenced public opinion in Estonia. Other cases, in which there have been no organisations in the background and which have lacked a common narrative, e.g. the protection of a white willow in Tallinn’s suburb of Haabersti, have not been successful.
Due to the missing common narrative and the lack of a leader, several persons or groups tried to act as leaders and spread their own message, which resulted in a mixture of dissimilar statements and eventually led to the protectors losing their credibility. The core of such conflicts lies in a collision of different worldviews, characterised by opposing rhetoric, in which one party uses economic reasoning while the arguments of the other are based on nature conservation and the protection of cultural and national values, mixed with spiritual claims. Such different standpoints lead any discussion into an opposition in which compromises and solutions are difficult or even impossible to find. However, in Metsapoole the local dwellers, who acted against the State Forest Management Centre, deliberately excluded any spiritual arguments. Choosing rational rhetoric let them speak the same “language” as the forest management authorities, and the conflict ended with a reappraisal of the plans of the State Forest Management Centre. There are certainly multiple reasons why conflicts arise in the protection of natural and cultural objects. In addition to differences in worldviews, the effects of NIMBY attitudes and personal disagreements are obvious. Still, the cases often follow a similar pattern in which the conflict is brought before the public and is guided by social media and media rules. In this process, emotional arguments become more important than rational ones, which deepens the gap between the two sides involved in the conflict.
APA, Harvard, Vancouver, ISO, and other styles
31

"Phase instability of the topmost surface layer of Inconel-718 superalloy in low-temperature gas carburization atmosphere, revealed by means of (HR)(S)TEM." Proceedings International 1, no. 1 (October 15, 2019): 11–12. http://dx.doi.org/10.33263/proceedings11.00110012.

Full text
Abstract:
The surface microstructure modifications induced by industrial treatments that enhance the surface engineering parameters of metallic alloys (hardness, wear, corrosion resistance) are extremely important. The recently invented, industrially successful surface hardening by carburization of 316L stainless steel (SS) in a gas atmosphere at low temperature (LT) [1] was extended, with similar results, to the Ni-Fe base Inconel-718 superalloy (IN-718) [3] and to several other alloys. The accepted model of the surface microstructure modification due to processing is not based on full data about the microstructure of the modified topmost layer: it lacks visualization and analysis of the new microstructure by means of (HR)(S)TEM. The effect of LT gas carburization is currently explained by a model of surface grain lattice dilatation due to the so-called colossal carbon supersaturation (LTCSS) [2], which does not conform to the classical phase diagrams and is claimed to be valid in all alloys, including IN-718 [4]. Our work reports new findings that reveal the hitherto unknown nanostructure of the topmost carburized (at 570 °C) layer, ~5 µm thick, of IN-718 and cast doubt on the general validity of the LTCSS. (HR)(S)TEM and nanoscale analysis were used to reveal the nanostructure of samples prepared in a special way by ion beam milling. We found no evidence of a classical crystalline microstructure of the carburized IN-718 uppermost layer that could match the requirements of the LTCSS model. Instead, we show that the nanostructure of the topmost ~5 µm thick layer of the carburized surface consists of both (i) nanocrystalline austenite and carbides and (ii) an amorphous phase that contains the main alloy elements, with or without carbon (Fig. 1). Segregated Ni nanoclusters (Fig. 2) occur over large areas, apparently without the other main alloy elements. A fragmentation mechanism of the topmost surface layer (consisting initially of µm-sized grains) can be inferred from the observed segregation of the main alloy elements in the initial alloy grains, agglomerated both inside grains close to grain boundaries (GBs) and at the GBs (Fig. 3), and from the segregation of carbon both inside grains along GBs (Fig. 4) and outside grains at GBs. These chemical segregates can be considered precursors in the fragmentation process and prove the phase instability of the surface alloy grains in the LT carburizing gas atmosphere, thereby revealing the chemical reactivity of the IN-718 surface grains with the LT carburization gas mixture. This surface reactivity can be exploited industrially by developing new research into novel compositions of LT gaseous mixtures able to modify the engineering properties of alloy surfaces.
APA, Harvard, Vancouver, ISO, and other styles
32

Ojovan, M. I., O. K. Karlina, V. L. Klimov, G. A. Bergman, G. Yu Pavlova, A. Yu Yurchenko, and S. A. Dmitriev. "Investigation of reactor graphite processing with the carbon-14 retention." MRS Proceedings 757 (2002). http://dx.doi.org/10.1557/proc-757-ii3.13.

Full text
Abstract:
The system C–Al–TiO2 has been demonstrated to be a strong candidate for the processing of irradiated reactor graphite waste with retention of the biologically hazardous carbon-14 in chemically and thermally stable corundum-carbide ceramics. The corundum-carbide ceramics are obtained from a blend of powdered precursors through self-sustaining thermochemical reactions. The system C–Al–TiO2 was investigated both theoretically and experimentally. Refined thermodynamic calculations of the phase composition of the resulting end product were performed for a wide range of component contents in the system under investigation, taking the production of aluminium oxycarbides into account. For this purpose, the thermodynamic functions of the aluminium oxycarbides Al4O4C and Al2OC were calculated using currently available literature data and our own assessments of the missing data. On the basis of the thermodynamic simulation, the proportions of the source substances that result in aluminium oxycarbide production were determined. These simulation results were supported by XRD analysis of the produced specimens. The experimental processing of reactor graphite was conducted using self-sustaining reactions in C–Al–TiO2 powder blends. Test specimens with masses ranging from 0.1 to 3 kg were produced in an argon atmosphere, and various techniques were used to characterize them. The compressive strength of the corundum-carbide matrix specimens ranges from 7 to 13 MPa. The leaching rates of Cs-137 and Sr-90 from the specimens ranged between 10⁻⁴ and 10⁻⁵ g/(cm²·day). The carry-over of carbon combined in carbon monoxide from the reacting mixtures during the exothermic process may reach 1 wt%, which corresponds roughly to less than 0.01 wt% of the carbon-14 in the irradiated reactor graphite.
APA, Harvard, Vancouver, ISO, and other styles
33

Leder, Kerstin, Angelina Karpovich, Maria Burke, Chris Speed, Andrew Hudson-Smith, Simone O'Callaghan, Morna Simpson, et al. "Tagging is Connecting: Shared Object Memories as Channels for Sociocultural Cohesion." M/C Journal 13, no. 1 (March 22, 2010). http://dx.doi.org/10.5204/mcj.209.

Full text
Abstract:
Connections In Small Pieces Loosely Joined, David Weinberger identifies some of the obvious changes which the Web has brought to human relations. Social connections, he argues, used to be exclusively defined and constrained by the physics and physicality of the “real” world, or by geographical and material facts: it’s … true that we generally have to travel longer to get to places that are farther away; that to be heard at the back of the theater, you have to speak louder; that when a couple moves apart, their relationship changes; that if I give you something, I no longer have it. (xi) The Web, however, is a place (or many places) where the boundaries of space, time, and presence are being reworked. Further, since we built this virtual world ourselves and are constantly involved in its evolution, the Web can tell us much about who we are and how we relate to others. In Weinberger’s view, it demonstrates that “we are creatures who care about ourselves and the world we share with others”, and that “we live within a context of meaning” beyond what we had previously cared to imagine (xi-xii). Before the establishment of computer-mediated communication (CMC), we already had multiple means of connecting people commonly separated by space (Gitelman and Pingree). Yet the Web has allowed us to see each other whilst separated by great distances, to share stories, images and other media online, to co-construct or “produse” (Bruns) content and, importantly, to do so within groups, rather than merely between individuals (Weinberger 108). This optimistic evaluation of the Web and social relations is a response to some of the more cautious public voices that have accompanied recent technological developments. In the 1990s, Jan van Dijk raised concerns about what he anticipated as wide-reaching social consequences in the new “age of networks” (2). The network society, as van Dijk described it, was defined by new interconnections (chiefly via the World Wide Web), increased media convergence and narrowcasting, a spread of both social and media networks and the decline of traditional communities and forms of communication. Modern-day communities now consisted both of “organic” (physical) and “virtual” communities, with mediated communication seemingly beginning to replace, or at least supplement, face-to-face interaction (24). Recently, we have found ourselves on the verge of even more “interconnectedness” as the future seems determined by ubiquitous computing (ubicomp) and a new technological and cultural development known as the “Internet of Things” (Greenfield). Ubicomp refers to the integration of information technology into everyday objects and processes, to such an extent that the end-users are often unaware of the technology. According to Greenfield, ubicomp has significant potential to alter not only our relationship with technology, but the very fabric of our existence: A mobile phone … can be switched off or left at home. A computer … can be shut down, unplugged, walked away from. But the technology we're discussing here–ambient, ubiquitous, capable of insinuating itself into all the apertures everyday life affords it–will form our environment in a way neither of those technologies can. (6) Greenfield's ideas are neither hypothesis, nor hyperbole. Ubicomp is already a reality. Dodson notes, Ubicomp isn't just part of our ... future. Its devices and services are already here. 
Think of the use of prepaid smart cards for use of public transport or the tags displayed in our cars to help regulate congestion charge pricing or the way in which corporations track and move goods around the world. (7) The Internet of Things advances the ubicomp notion of objects embedded with the capacity to receive and transmit data and anticipates a move towards a society in which every device is “on” and in some way connected to the Internet; in other words, objects become networked. Information contained within and transmitted among networked objects becomes a “digital overlay” (Valhouli 2) over the physical world. Valhouli explains that objects, as well as geographical sites, become part of the Internet of Things in two ways. Information may become associated with a specific location using GPS coordinates or a street address. Alternatively, embedding sensors and transmitters into objects enables them to be addressed by Internet protocols, and to sense and react to their environments, as well as communicate with users or with other objects. (2) The Internet of Things is not a theoretical paradigm. It is a framework for describing contemporary technological processes, in which communication moves beyond the established realm of human interaction, to enable a whole range of potential communications: “person-to-device (e.g. scheduling, remote control, or status update), device-to-device, or device-to-grid” (Valhouli 2). Are these newer forms of communication in any sense meaningful? Currently, ubicomp's applications are largely functional, used in transport, security, and stock control. Yet, the possibilities afforded by the technology can be employed to enhance “connectedness” and “togetherness” in the broadest social sense. Most forms of technology have at least some social impact; this is particularly true of communication technology. How can that impact be made explicit? Here, we discuss one such potential application of ubicomp with reference to a new UK research project: TOTeM–Tales of Things and Electronic Memory. TOTeM aims to draw on personal narratives, digital media, and tagging to create an “Internet” of people, things, and object memories via Web 2.0 and mobile technologies. Communicating through Objects The TOTeM project, began in August 2009 and funded by Research Councils UK's Digital Economy Programme, is concerned with eliciting the memory and value of “old” artefacts, which are generally excluded from the discourse of the Internet of Things, which focuses on new and future objects produced with embedded sensors and transmitters. We focus instead on existing artefacts that hold significant personal resonance, not because they are particularly expensive or useful, but because they contain or “evoke” (Turkle) memories of people, places, times, events, or ideas. Objects across a mantelpiece can become conduits between events that happened in the past and people who will occupy the future (Miller 30). TOTeM will draw on user-generated content and innovative tagging technology to study the personal relationships between people and objects, and between people through objects. Our hypothesis is that the stories that are connected to particular objects can become binding ties between individuals, as they provide insights into personal histories and values that are usually not shared, not because they are somehow too personal or uninteresting, but because there is currently little systematic context for sharing them. 
Even in families, where objects routinely pass down through generations, the stories associated with these objects are generally either reduced to a vague anecdote or lost entirely. Beyond families, there are some objects whose stories are deemed culturally-significant: monuments, the possessions of historical figures, religious artefacts, and archaeological finds. The current value system which defines an object’s cultural significance appears to replicate Bourdieu's assessment of the hierarchies which define aesthetic concepts such as taste. In both cases, the popular, everyday, or otherwise mundane is deemed to possess less cultural capital than that which is less accessible or otherwise associated with the social elites. As a result, objects whose histories are well-known are mostly found in museums, untouchable and unused, whereas objects which are within reach, all around us, tend to travel from owner to owner without anyone considering what histories they might contain. TOTeM’s aim is to provide both a context and a mechanism for enabling individuals and community groups to share object-related stories and memories through digital media, via a custom-built platform of “tales of things”. Participants will be able to use real-life objects as conduits for memory, by producing “tales” about the object's personal significance, told through digital video, photographs, audio, or a mixture of media. These tales will be hosted on the TOTeM project's website. Through specifically-developed TOTeM technology, each object tale will generate a unique physical tag, initially in the form of RFID (Radio Frequency Identification) and QR (Quick Response) codes. TOTeM participants will be able to attach these tags/codes to their objects. When scanned with a mobile phone equipped with free TOTeM software or an RFID tag reader, each tag will access the individual object's tale online, playing the media files telling that object’s story on the mobile phone or computer. The object's user-created tale will be persistently accessible via both the Internet and 3G (third generation) mobile phones. The market share of 3G and 4G mobile networks is expanding, with some analysts predicting that they will account for 30% of the global mobile phone market by 2014 (Kawamoto). As the market for mobile phones with fast data transfer rates keeps growing, TOTeM will become accessible to an ever-growing number of mobile, as well as Internet, users. The TOTeM platform will serve two primary functions. It will become an archive for object memories and thus grow to become an “archaeology for the future”. We hope that future generations will be able to return to this repository and learn about the things that are meaningful to groups and individuals right now. The platform will also serve as an arena for contemporary communication. As the project develops, object memories will be directly accessible through tagged artefacts, as well as through browsing and keyword searches on the project website. Participants will be able to communicate via the TOTeM platform. On a practical level, the platform can bring together people who already share an interest in certain objects, times, or places (e.g. collectors, amateur historians, genealogists, as well as academics). In addition, we hope that the novelty of TOTeM’s approach to objects may encourage some of those individuals for whom non-participation in the digital world is not a question of access but one of apathy and perceived irrelevance (Ofcom 3). 
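As a purely illustrative aside, a QR tag that resolves to a tale's web address takes only a few lines to generate with the open-source `qrcode` Python package; the URL scheme below is invented for the example and is not TOTeM's documented one.

```python
# Generate a printable QR tag pointing at a (hypothetical) tale URL.
# Scanning the tag with any QR reader opens the object's tale page.
import qrcode

tale_url = "http://www.youtotem.com/tales/12345"  # hypothetical tale ID
img = qrcode.make(tale_url)
img.save("totem_tag_12345.png")
print("Tag image written; scanning it resolves to", tale_url)
```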
Tales of Things: Pilots Since the beginning of this research project, we have begun to construct the TOTeM platform and develop the associated tagging technology. While the TOTeM platform is being built, we have also used this time to conduct a pilot “tale-telling” phase, with the aim of exploring how people might choose to communicate object stories and how this might make them feel. In this initial phase, we focus on eliciting and constructing object tales, without the use of the TOTeM platform or the tagging technology, which will be tested in a future trial. Following Thomson and Holland’s autoethnographic approach, in the first instance, the TOTeM team and advisors shared their own tales with each other (some of these can be viewed on the TOTeM Website). Each of us chose an object that was personally significant to us, digitally recorded our object memories, and uploaded videos to a YouTube channel for discussion amongst the group. Team members in Edinburgh subsequently involved a group of undergraduate students in the pilot. Here, we offer some initial reflections on what we have learned from recording and sharing these early TOTeM tales. The objects the TOTeM team and advisors chose independently from each other included a birth tag, a box of slides, a tile, a block of surf wax, a sweet jar from Japan, a mobile phone, a concert ticket, a wrist band, a cricket bat, a watch, an iPhone, a piece of the Berlin Wall, an antique pocket sundial, and a daughter’s childhood toy. The sheer variety of the objects we selected as being personally significant was intriguing, as were the varying reasons for choosing the objects. Even where there was some overlap in object choice, for instance between the mobile and the iPhone, the two items (one relatively old, one new) told conspicuously different stories. The mobile held the memory of a lost friend via an old text message; the iPhone was valued not only for its practical uses, but because it symbolised the incarnation of two childhood sci-fi fantasies: a James Bond-inspired tracking device (GPS) and the “Hitchhiker’s Guide to the Galaxy”. While the memories and stories linked to these objects were in many ways idiosyncratic, some patterns have emerged even at this early stage. Stories broadly differed in terms of whether they related to an individual’s personal experience (e.g. memorable moments or times in one’s life) or to their connection with other people. They could also relate to the memory of particular events, from football matches, concerts and festivals on a relatively local basis, to globally significant milestones, such as the fall of the Berlin Wall. In many cases, objects had been kept as tokens and reminders of particularly “colourful” and happy times. One student presented a wooden stick which he had picked up from a beach on his first parent-free “lads’ holiday”. Engraved on the stick were the names of the friends who had accompanied him on this memorable trip. Objects could also mark the beginning or end of a personal life stretch: for one student, his Dub Child vinyl record symbolised the moment he discovered and began to understand experimental music; it also constituted a reminder of the influence his brother had had on his musical taste. At other times, objects were significant because they served as mementos for people who had been “lost” in one way or another, either because they had moved to different places, or because they had gone missing or passed away.
With some, there was a sense that the very nature of the object enabled the act of holding on to a memory in a particular way. The aforementioned mobile phone, though usually out of use, was actively recharged for the purposes of remembering. Similarly, an unused wind-up watch was kept going to simultaneously keep alive the memory of its former owner. It is commonly understood that the sharing of insights into one’s personal life provides one way of building and maintaining social relationships (Greene et al.). Self-disclosure, as it is known in psychological terms, carries some negative connotations, such as making oneself vulnerable to the judgement of others or giving away “too much too soon”. Often its achievement is dependent on timing and context. We were surprised by the extent to which some of us chose to disclose quite sensitive information with full knowledge of eventually making these stories public online. At the same time, as both researchers and, in a sense, as an audience, we found it a humbling experience to be allowed into people’s and objects’ meaningful pasts and presents. It is obvious that the invitation to talk about meaningful objects also results in stories about things and people we deeply care about. We have yet to see what shape the TOTeM platform will take as more people share their stories and learn about those of others. We don’t know whether it will be taken up as a fully-fledged communication platform or merely as an archive for object memories, whether people will continue to share what seem like deep insights into personal life stories, or if they choose to make more subversive (no less meaningful) contributions. Likewise, it is yet to be seen how the linking of objects with personal stories through tagging could impact people’s relationships with both the objects and the stories they contain. To us, this initial trial phase, while small in scale, has re-emphasised the potential of sharing object memories in the emerging network of symbolic meaning (Weinberger’s “context of meaning”). Seemingly everyday objects did turn out to contain stories behind them, personal stories which people were willing to share. Returning to Weinberger’s quote with which we began this article, TOTeM will enable the traces of material experiences and relationships to become persistently accessible: giving something away would no longer mean entirely not having it, as the narrative of the object’s significance would persist, and can be added to by future participants. Indeed, TOTeM would enable participants to “give away” more than just the object, while retaining access to the tale which would augment the object. Greenfield ends his discussion of the potential of ubicomp by listing multiple experiences which he does not believe would benefit from any technological augmentation: Going for a long run in the warm gentle rain, gratefully and carefully easing my body into the swelter of a hot springs, listening to the first snowfall of winter, savouring the texture of my wife’s lips … these are all things that require little or no added value by virtue of being networked, relational, correlated to my other activities. They’re already perfect, just as they stand. (258) It is a resonant set of images, and most people would be able to produce a similar list of meaningful personal experiences. Yet, as we have already suggested, technology and meaning need not be mutually exclusive. 
Indeed, as the discussion of TOTeM begins to illustrate, the use of new technologies in new contexts can augment the commercial applications of ubiquitous computing with meaningful human communication. At the time of writing, the TOTeM platform is in the later stages of development. We envisage the website taking shape and its content becoming more and more meaningful over time. However, some initial object memories should be available from April 2010, and the TOTeM platform and mobile tagging applications will be fully operational in the summer of 2010. Our progress can be followed on www.youtotem.com and http://twitter.com/talesofthings. TOTeM looks forward to receiving “tales of things” from across the world.
References
Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. London: Routledge, 1984.
Bruns, Axel. “The Future is User-Led: The Path towards Widespread Produsage.” fibreculture 11 (2008). 20 Mar. 2010 ‹http://www.journal.fibreculture.org/issue11/issue11_bruns_print.html›.
Dodson, Sean. “Forward: A Tale of Two Cities.” Rob van Kranenburg. The Internet of Things: A Critique of Ambient Technology and the All-Seeing Network of RFID. Amsterdam: Institute of Network Cultures, Network Notebooks 02, 2008. 5-9. 20 Mar. 2010 ‹http://www.networkcultures.org/_uploads/notebook2_theinternetofthings.pdf›.
Gitelman, Lisa, and Geoffrey B. Pingree, eds. New Media: 1740-1915. Cambridge, MA: MIT Press, 2003.
Greene, Kathryn, Valerian Derlega, and Alicia Mathews. “Self-Disclosure in Personal Relationships.” Eds. Anita L. Vangelisti and Daniel Perlman. Cambridge Handbook of Personal Relationships. Cambridge: Cambridge UP, 2006. 409-28.
Greenfield, Adam. Everyware: The Dawning Age of Ubiquitous Computing. Berkeley, CA: New Riders, 2006.
Kawamoto, Dawn. “Report: 3G and 4G Market Share on the Rise.” CNET News 2009. 20 Mar. 2010 ‹http://news.cnet.com/8301-1035_3-10199185-94.html›.
Kwint, Marius, Christopher Breward, and Jeremy Aynsley. Material Memories: Design and Evocation. Oxford: Berg, 1999.
Miller, Daniel. The Comfort of Things. Cambridge: Polity Press, 2008.
Ofcom. “Accessing the Internet at Home.” 2009. 20 Mar. 2010 ‹http://www.ofcom.org.uk/research/telecoms/reports/bbresearch/bbathome.pdf›.
Thomson, Rachel, and Janet Holland. “‘Thanks for the Memory’: Memory Books as a Methodological Resource in Biographical Research.” Qualitative Research 5.2 (2005): 201-19.
Turkle, Sherry. Evocative Objects: Things We Think With. Cambridge, MA: MIT Press, 2007.
Valhouli, Constantine A. The Internet of Things: Networked Objects and Smart Devices. The Hammersmith Group Research Report, 2010. 20 Mar. 2010 ‹http://thehammersmithgroup.com/images/reports/networked_objects.pdf›.
Van Dijk, Jan. The Network Society: Social Aspects of New Media. London: SAGE, 1999.
Weinberger, David. Small Pieces Loosely Joined: How the Web Shows Us Who We Really Are. Oxford: Perseus Press, 2002.
APA, Harvard, Vancouver, ISO, and other styles
