Journal articles on the topic "3-D context modelling"

Consult the top 50 journal articles for your research on the topic "3-D context modelling".

1

Rey, Federica, Bianca Barzaghini, Alessandra Nardini, Matteo Bordoni, Gian Vincenzo Zuccotti, Cristina Cereda, Manuela Teresa Raimondi and Stephana Carelli. "Advances in Tissue Engineering and Innovative Fabrication Techniques for 3-D-Structures: Translational Applications in Neurodegenerative Diseases". Cells 9, no. 7 (July 7, 2020): 1636. http://dx.doi.org/10.3390/cells9071636.

In the field of regenerative medicine applied to neurodegenerative diseases, one of the most important challenges is the obtainment of innovative scaffolds aimed at improving the development of new frontiers in stem-cell therapy. In recent years, additive manufacturing techniques have gained more and more relevance proving the great potential of the fabrication of precision 3-D scaffolds. In this review, recent advances in additive manufacturing techniques are presented and discussed, with an overview on stimulus-triggered approaches, such as 3-D Printing and laser-based techniques, and deposition-based approaches. Innovative 3-D bioprinting techniques, which allow the production of cell/molecule-laden scaffolds, are becoming a promising frontier in disease modelling and therapy. In this context, the specific biomaterial, stiffness, precise geometrical patterns, and structural properties are to be considered of great relevance for their subsequent translational applications. Moreover, this work reports numerous recent advances in neural diseases modelling and specifically focuses on pre-clinical and clinical translation for scaffolding technology in multiple neurodegenerative diseases.
2

Sochaczewski, Łukasz, Anthony Stockdale, William Davison, Wlodek Tych and Hao Zhang. "A three-dimensional reactive transport model for sediments, incorporating microniches". Environmental Chemistry 5, no. 3 (2008): 218. http://dx.doi.org/10.1071/en08006.

Environmental context. Modelling of discrete sites of diagenesis in sediments (microniches) has typically been performed in 1-D and has involved a limited set of components. Here we present a new 3-D model for microniches within a traditional vertical sequence of redox reactions, and show example modelled niches of a range of sizes, close to the sediment–water interface. Microniche processes may have implications for understanding trace metal diagenesis, via formation of sulfides. The model provides a quantitative framework for examining microniche data and concepts. Abstract. Most reactive transport models have represented sediments as one-dimensional (1-D) systems and have solely considered the development of vertical concentration gradients. However, application of recently developed microscale and 2-D measurement techniques have demonstrated more complicated solute structures in some sediments, including discrete localised sites of depleted oxygen, and elevated trace metals and sulfide, referred to as microniches. A model of transport and reaction in sediments that can simulate the dynamic development of concentration gradients occurring in 3-D was developed. Its graphical user interface allows easy input of user-specified reactions and provides flexible schemes that prioritise their execution. The 3-D capability was demonstrated by quantitative modelling of hypothetical solute behaviour at organic matter microniches covering a range of sizes. Significant effects of microniches on the profiles of oxygen and nitrate are demonstrated. Sulfide is shown to be readily generated in microniches within 1 cm of the sediment surface, provided the diameter of the reactive organic material is greater than 1 mm. These modelling results illustrate the geochemical complexities that arise when processes occur in 3-D and demonstrate the need for such a model. Future use of high-resolution measurement techniques should include the collection of data for relevant major components, such as reactive iron and manganese oxides, to allow full, multicomponent modelling of microniche processes.
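For readers who want a feel for what a 3-D reactive transport calculation of this kind involves, the following is a minimal, hedged sketch (not the authors' model; the grid size, rates and niche geometry are placeholder values): explicit finite-difference diffusion of a normalised solute that is consumed at first order only inside a small spherical "microniche", with a fixed concentration at the upper boundary standing in for the sediment–water interface.

    import numpy as np

    # Illustrative 3-D diffusion with first-order consumption inside a spherical
    # "microniche"; all numbers are placeholders, not values from the paper.
    n, dx = 40, 1e-3              # 40^3 grid, 1 mm spacing (m)
    D, k, dt = 1e-9, 1e-4, 50.0   # diffusivity (m^2/s), rate (1/s), time step (s)
    assert dt <= dx**2 / (6 * D), "explicit scheme stability limit"

    c = np.ones((n, n, n))        # normalised solute (e.g. oxygen) concentration
    x = (np.arange(n) - n / 2) * dx
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    niche = X**2 + Y**2 + Z**2 < (5 * dx) ** 2   # reactive organic-matter blob

    for _ in range(2000):
        lap = (-6 * c
               + np.roll(c, 1, 0) + np.roll(c, -1, 0)
               + np.roll(c, 1, 1) + np.roll(c, -1, 1)
               + np.roll(c, 1, 2) + np.roll(c, -1, 2)) / dx**2
        c += dt * (D * lap - k * c * niche)   # consumption only inside the niche
        c[0, :, :] = 1.0                      # fixed value at the top face

    print("minimum concentration in the niche:", c[niche].min())

A production model of the kind described above would add user-specified reaction networks, prioritised reaction schemes and proper boundary conditions rather than this single hard-coded sink.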
3

Mueller, Christian Olaf, Jacob Wächter, Christoph Jahnke, Emilio L. Pueyo Morer, Florian Riefstahl and Alexander Malz. "Integrated geological and gravity modelling to improve 3-D model harmonization—Methods and benefits for the Saxony-Anhalt/Brandenburg cross-border region (North German Basin)". Geophysical Journal International 227, no. 2 (July 2, 2021): 1295–321. http://dx.doi.org/10.1093/gji/ggab256.

SUMMARY As 3-D geological models become more numerous and widely available, the opportunity arises to combine them into large regional compilations. One of the biggest challenges facing these compilations is the connection and alignment of individual models, especially in less explored areas or across political borders. In this regard, gravity modelling is suitable for revealing additional subsurface information that can support a harmonization of structural models. Here, we present an integrated geological and gravity modelling approach to support the harmonization process of two geological 3-D models of the North German Basin in the cross-border region between the federal states of Saxony-Anhalt and Brandenburg. Gravity gradient calculation, filtering and Euler deconvolution are utilized to reveal new insights into the local fault system and gravity anomaly sources. The independent models are merged and harmonized during 3-D forward and inverse gravity modelling. Herein, density gradients for individual layers are incorporated in the framework of model parametrization. The resulting geological 3-D model consists of harmonized interfaces and is consistent with the observed gravity field. To demonstrate the plausibility of the derived model, we discuss the new geophysical findings on the sedimentary and crustal structures of the cross-border region in the context of the regional geological setting. The cross-border region is dominated by an NW–SE oriented fault system that coincides with the Elbe Fault System. We interpret a low-density zone within the basement of the Mid-German Crystalline Rise as a northward continuation of the Pretzsch–Prettin Crystalline Complex into the basement of the North German Basin. Additionally, we observe two types of anticlines within the basin, which we link to provinces of contrasting basement rigidity. Our gravity modelling implies that the Zechstein salt has mostly migrated into the deeper parts of the basin west of the Seyda Fault. Finally, we identify a pronounced syncline that accommodates a narrow and up to 800 m deep Cenozoic basin.
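For context, the Euler deconvolution step mentioned above estimates source positions from a gridded anomaly via Euler's homogeneity equation; in the standard form used in the potential-field literature (a textbook relation, not a formula quoted from the paper), $(x - x_0)\,\partial T/\partial x + (y - y_0)\,\partial T/\partial y + (z - z_0)\,\partial T/\partial z = N\,(B - T)$, where $T$ is the observed field, $B$ its regional background, $(x_0, y_0, z_0)$ the source location and $N$ the structural index encoding the assumed source geometry.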
4

Pini, Ronny, Nicholas T. Vandehey, Jennifer Druhan, James P. O’Neil and Sally M. Benson. "Quantifying solute spreading and mixing in reservoir rocks using 3-D PET imaging". Journal of Fluid Mechanics 796 (May 10, 2016): 558–87. http://dx.doi.org/10.1017/jfm.2016.262.

We report results of an experimental investigation into the effects of small-scale (mm–cm) heterogeneities on solute spreading and mixing in a Berea sandstone core. Pulse-tracer tests have been carried out in the Péclet number regime $Pe=6{-}40$ and are supplemented by a unique combination of two imaging techniques. X-ray computed tomography (CT) is used to quantify subcore-scale heterogeneities in terms of permeability contrasts at a spatial resolution of approximately $10~\text{mm}^{3}$, while [11C] positron emission tomography (PET) is applied to image the spatial and temporal evolution of the full tracer plume non-invasively. To account for both advective spreading and local (Fickian) mixing as driving mechanisms for solute transport, a streamtube model is applied that is based on the one-dimensional advection–dispersion equation. We refer to our modelling approach as semideterministic, because the spatial arrangement of the streamtubes and the corresponding solute travel times are known from the measured rock’s permeability map, which required only small adjustments to match the measured tracer breakthrough curve. The model reproduces the three-dimensional PET measurements accurately by capturing the larger-scale tracer plume deformation as well as subcore-scale mixing, while confirming negligible transverse dispersion over the scale of the experiment. We suggest that the obtained longitudinal dispersivity ($0.10\pm 0.02$ cm) is rock rather than sample specific, because of the ability of the model to decouple subcore-scale permeability heterogeneity effects from those of local dispersion. As such, the approach presented here proves to be very valuable, if not necessary, in the context of reservoir core analyses, because rock samples can rarely be regarded as ‘uniformly heterogeneous’.
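As a hedged illustration of the streamtube building block described above (a generic sketch, not the authors' code; all parameter values are placeholders), each streamtube obeys the one-dimensional advection–dispersion equation, whose approximate breakthrough response to a finite tracer pulse can be written as the difference of two step-injection solutions:

    import numpy as np
    from scipy.special import erfc

    def ade_step(x, t, v, D):
        """Normalised breakthrough C/C0 for a continuous step injection
        (approximate solution of the 1-D advection-dispersion equation)."""
        t = np.asarray(t, dtype=float)
        return 0.5 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

    def ade_pulse(x, t, v, D, t_pulse):
        """Breakthrough for a finite pulse = step injection minus a delayed step."""
        c = ade_step(x, t, v, D)
        late = t > t_pulse
        c[late] -= ade_step(x, t[late] - t_pulse, v, D)
        return c

    # Placeholder streamtube parameters (not values from the experiments):
    x, v, D, t_pulse = 0.1, 1e-4, 1e-7, 120.0   # m, m/s, m^2/s, s
    t = np.linspace(1.0, 4000.0, 400)
    btc = ade_pulse(x, t, v, D, t_pulse)
    print("peak C/C0 = %.2f at t = %.0f s" % (btc.max(), t[btc.argmax()]))

In a semideterministic streamtube approach of the kind described above, one such solution would be evaluated per streamtube, with travel times assigned from the measured permeability map.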
5

Freitag, J., and W. Mathis. "Numerical modelling of nonlinear electromechanical coupling of an atomic force microscope with finite element method". Advances in Radio Science 8 (September 30, 2010): 33–36. http://dx.doi.org/10.5194/ars-8-33-2010.

Abstract. In this contribution, an atomic force microscope is modelled and, in this context, a nonlinear coupled 3-D boundary value problem is solved numerically using the finite element method. The coupling of this system is done by using the Maxwell stress tensor. In general, an iterative weak coupling is used, where the two physical problems are solved separately. However, this method does not necessarily guarantee convergence of the nonlinear computation. Hence, this contribution shows the possibility of solving the multiphysical problem by a strong coupling, which is also referred to as the monolithic approach. The electrostatic field and the mechanical displacements are calculated simultaneously by solving only one system of equations. Since the Maxwell stress tensor depends nonlinearly on the potential, the coupled system is solved iteratively by Newton's method.
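For reference, the electrostatic Maxwell stress tensor that couples the two fields is, in its standard vacuum form, $T_{ij} = \varepsilon_0 \big( E_i E_j - \tfrac{1}{2}\,\delta_{ij} E_k E_k \big)$, and the traction $t_i = T_{ij} n_j$ on a surface with outward normal $n_j$ enters the mechanical problem as a load; because $T_{ij}$ is quadratic in the field, and hence nonlinear in the potential, the monolithic system is linearised with Newton's method as described above.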
6

Kramer, Stephan C., D. Rhodri Davies and Cian R. Wilson. "Analytical solutions for mantle flow in cylindrical and spherical shells". Geoscientific Model Development 14, no. 4 (April 9, 2021): 1899–919. http://dx.doi.org/10.5194/gmd-14-1899-2021.

Abstract. Computational models of mantle convection must accurately represent curved boundaries and the associated boundary conditions of a 3-D spherical shell, bounded by Earth's surface and the core–mantle boundary. This is also true for comparable models in a simplified 2-D cylindrical geometry. It is of fundamental importance that the codes underlying these models are carefully verified prior to their application in a geodynamical context, for which comparisons against analytical solutions are an indispensable tool. However, analytical solutions for the Stokes equations in these geometries, based upon simple source terms that adhere to physically realistic boundary conditions, are often complex and difficult to derive. In this paper, we present the analytical solutions for a smooth polynomial source and a delta-function forcing, in combination with free-slip and zero-slip boundary conditions, for both 2-D cylindrical- and 3-D spherical-shell domains. We study the convergence of the Taylor–Hood (P2–P1) discretisation with respect to these solutions, within the finite element computational modelling framework Fluidity, and discuss an issue of suboptimal convergence in the presence of discontinuities. To facilitate the verification of numerical codes across the wider community, we provide a Python package, Assess, that evaluates the analytical solutions at arbitrary points of the domain.
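As a hedged sketch of the kind of mesh-convergence check such analytical solutions enable (a generic two-level estimate, not the interface of the Assess package mentioned above; the error values are hypothetical):

    import math

    def observed_order(err_coarse, err_fine, refinement=2.0):
        """Observed convergence order p from errors on two meshes, e ~ C h^p."""
        return math.log(err_coarse / err_fine) / math.log(refinement)

    # Hypothetical L2 velocity errors on successively halved meshes:
    errors = [4.1e-3, 5.3e-4, 6.8e-5]
    for e1, e2 in zip(errors, errors[1:]):
        print("observed order: %.2f" % observed_order(e1, e2))

    # Taylor-Hood (P2-P1) velocities are expected to converge at third order in
    # the L2 norm for smooth forcing; discontinuous (delta-function) forcing can
    # reduce this, which is the suboptimal-convergence issue discussed above.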
7

Rouquet, Simon, Pierre Boivin, Patrick Lachassagne and Emmanuel Ledoux. "A 3-D genetic approach to high-resolution geological modelling of the volcanic infill of a paleovalley system. Application to the Volvic catchment (Chaîne des Puys, France)". Bulletin de la Société Géologique de France 183, no. 5 (September 1, 2012): 395–407. http://dx.doi.org/10.2113/gssgfbull.183.5.395.

Abstract. The Volvic natural mineral water is captured in a complex volcanic aquifer located in the northern part of the “Chaîne des Puys” volcanic system (Auvergne, France). In the watershed, water transits through scoria cones and basaltic to trachybasaltic lava flows. These aa lava flows, emitted by strombolian cones between 75,000 and 10,000 years ago, are emplaced in deep paleovalleys incised within the Variscan crystalline bedrock. The volcanic infill is highly heterogeneous. In order to build a hydrogeological model of the watershed, a simple but robust methodology was developed to reconstruct the bedrock morphology and the volcanic infill in this paleovalley context. This methodology, based on the combination of genetic and geometric approaches, appears to be rather efficient at defining both the substratum and the lava flow geometry. A 3-D geological model is then proposed. It synthesizes the data from 99 borehole logs, 2-D geoelectric profiles, morphologic clues, age determinations and petrographic data. A genetic approach, integrating aa lava flow morphology and emplacement behaviour, was used to reconstruct the chronology of the volcanic events and lava flow emplacement from the upper part of the Dômes plateau to the Limagne plain. The precision of the volcanic reconstruction is discussed: the main limitations of the methodology are related to the homogeneity of the petrographic and geochemical composition of the lava flow succession (except for the trachyandesitic Nugere lava), the spatially variable borehole density, and the lack of real petrographic and geological descriptions in most of the available geological logs. Nevertheless, the developed methodology combining spatial and genetic approaches appears to be well adapted to constrain complex lava flow infill geometries in a paleovalley context.
8

Pereira-Santos, Marcos, Gisele Queiroz Carvalho, Djanilson Barbosa dos Santos and Ana Marlucia Oliveira. "Influence of vitamin D serum concentration, prenatal care and social determinants on birth weight: a northeastern Brazilian cohort study". British Journal of Nutrition 122, no. 03 (June 11, 2019): 284–92. http://dx.doi.org/10.1017/s0007114519001004.

Abstract. The relationship among social determinants, vitamin D serum concentration and the health and nutrition conditions is an important issue in the healthcare of pregnant women and newborns. Thus, the present study analyses how vitamin D, prenatal monitoring and social determinants are associated with birth weight. The cohort comprised 329 pregnant women, up to 34 weeks gestational age at the time of admission, who were receiving care through the prenatal services of Family Health Units. Structural equation modelling was used in the statistical analysis. The mean birth weight was 3340 (sd 0·545) g. Each nmol increase in maternal vitamin D serum concentration was associated with an increase in birth weight of 3·06 g. Prenatal healthcare with fewer appointments (β −41·49 g, 95 % CI −79·27, −3·71) and late onset of care in the second trimester or third trimester (β −39·24 g, 95 % CI −73·31, −5·16) favoured decreased birth weight. In addition, low socio-economic class and the practice of Afro-Brazilian religions showed a direct association with high vitamin D serum concentrations and an indirect association with high birth weight, respectively. High gestational BMI (β 23·84, 95 % CI 4·37, 43·31), maternal education level (β 24·52 g, 95 % CI 1·82, 47·23) and length of gestation (β 79·71, 95 % CI 52·81, 106·6) resulted in high birth weight. In conclusion, maternal vitamin D serum concentration, social determinants and prenatal care, evaluated in the context of primary healthcare, directly determined birth weight.
9

Sutter, Johannes, Hubertus Fischer, Klaus Grosfeld, Nanna B. Karlsson, Thomas Kleiner, Brice Van Liefferinge and Olaf Eisen. "Modelling the Antarctic Ice Sheet across the mid-Pleistocene transition – implications for Oldest Ice". Cryosphere 13, no. 7 (July 19, 2019): 2023–41. http://dx.doi.org/10.5194/tc-13-2023-2019.

Abstract. The international endeavour to retrieve a continuous ice core, which spans the middle Pleistocene climate transition ca. 1.2–0.9 Myr ago, encompasses a multitude of field and model-based pre-site surveys. We expand on the current efforts to locate a suitable drilling site for the oldest Antarctic ice core by means of 3-D continental ice-sheet modelling. To this end, we present an ensemble of ice-sheet simulations spanning the last 2 Myr, employing transient boundary conditions derived from climate modelling and climate proxy records. We discuss the imprint of changing climate conditions, sea level and geothermal heat flux on the ice thickness, and basal conditions around previously identified sites with continuous records of old ice. Our modelling results show a range of configurational ice-sheet changes across the middle Pleistocene transition, suggesting a potential shift of the West Antarctic Ice Sheet to a marine-based configuration. Despite the middle Pleistocene climate reorganisation and associated ice-dynamic changes, we identify several regions conducive to conditions maintaining 1.5 Myr (million years) old ice, particularly around Dome Fuji, Dome C and Ridge B, which is in agreement with previous studies. This finding strengthens the notion that continuous records with such old ice do exist in previously identified regions, while we are also providing a dynamic continental ice-sheet context.
10

Federico, S. "Implementation of a 3-D-Var system for atmospheric profiling data assimilation into the RAMS model: initial results". Atmospheric Measurement Techniques Discussions 6, no. 2 (April 12, 2013): 3581–610. http://dx.doi.org/10.5194/amtd-6-3581-2013.

Abstract. This paper presents the current status of development of a three-dimensional variational data assimilation system. The system can be used with different numerical weather prediction models, but it is mainly designed to be coupled with the Regional Atmospheric Modelling System (RAMS). Analyses are given for the following parameters: zonal and meridional wind components, temperature, relative humidity, and geopotential height. Important features of the data assimilation system are the use of incremental formulation of the cost-function, and the use of an analysis space represented by recursive filters and eigenmodes of the vertical background error matrix. This matrix and the length-scale of the recursive filters are estimated by the National Meteorological Center (NMC) method. The data assimilation and forecasting system is applied to the real context of atmospheric profiling data assimilation, and in particular to the short-term wind prediction. The analyses are produced at 20 km horizontal resolution over central Europe and extend over the whole troposphere. Assimilated data are vertical soundings of wind, temperature, and relative humidity from radiosondes, and wind measurements of the European wind profiler network. Results show the validity of the analysis solutions because they are closer to the observations (lower RMSE) compared to the background (higher RMSE), and the differences of the RMSEs are consistent with the data assimilation settings. To quantify the impact of improved initial conditions on the short-term forecast, the analyses are used as initial conditions of a three-hours forecast of the RAMS model. In particular two sets of forecasts are produced: (a) the first uses the ECMWF analysis/forecast cycle as initial and boundary conditions; (b) the second uses the analyses produced by the 3-D-Var scheme as initial conditions, then is driven by the ECMWF forecast. The improvement is quantified by considering the horizontal components of the wind, which are measured at a-synoptic times by the European wind profiler network. The results show that the RMSE is effectively reduced at the short range (1–2 h). The results are in agreement with the set-up of the numerical experiment.
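For readers unfamiliar with the incremental formulation mentioned above, the 3-D-Var analysis increment $\delta\mathbf{x}$ minimises the standard quadratic cost function $J(\delta\mathbf{x}) = \tfrac{1}{2}\,\delta\mathbf{x}^{T}\mathbf{B}^{-1}\delta\mathbf{x} + \tfrac{1}{2}\,(\mathbf{H}\delta\mathbf{x} - \mathbf{d})^{T}\mathbf{R}^{-1}(\mathbf{H}\delta\mathbf{x} - \mathbf{d})$ with innovation $\mathbf{d} = \mathbf{y}^{o} - H(\mathbf{x}^{b})$, where $\mathbf{B}$ and $\mathbf{R}$ are the background- and observation-error covariance matrices and $\mathbf{H}$ is the linearised observation operator; in the system described above, $\mathbf{B}$ is represented through recursive filters and the eigenmodes of the vertical background error matrix.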
11

Trivizas, Dionyssios. "TMSIM: A Runway Capacity Study for Frankfurt and Chicago O'Hare Airports". Journal of Navigation 47, no. 1 (January 1994): 70–88. http://dx.doi.org/10.1017/s0373463300011140.

A realistic runway capacity study for two major airports, namely Frankfurt (EDDF) and Chicago O'Hare (ORD) is presented, assessing the effect of optimal scheduling on the runway capacity and air traffic delays. The maximum position shift (MPS) runway scheduling algorithm, used in the study, was developed by Trivizas at the Massachusetts Institute of Technology (1987). EDDF is studied in the context of 160 major European airports, with a real traffic sample from 6 July, 1990. ORD is studied in the context of 26 major US airports using a large traffic sample from 1 March, 1989. Secondary airport traffic has been assigned to the geographically nearest major hub and time compression has been used to extrapolate to an artificially denser scenario. The results show that optimal scheduling can bring about capacity improvements of the order of 20 percent, which in turn reduce delays up to 70 percent. These results are the product of a dynamic traffic management process which has been visually validated by observing animated runway operations and monitor functions. The study has been conducted with TMSIM, a comprehensive, object-oriented simulation tool that allows one to build an understanding of the structure and functionality of the air traffic control system, by modelling its components, their functionality and interactions, and measuring component and system performance. It features interactive route network editing (using menu/mouse techniques), complete route and airport structure modelling, independent flight and ATC objects, 3-D animation, advanced algorithms for scheduling, routeing, flow management, airspace restructuring (sectorization) and performance (capacity and communications workload) analysis.
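The TMSIM scheduler itself is not detailed in the abstract; as a hedged, purely illustrative sketch of the maximum position shift (MPS) idea (Trivizas's actual algorithm is a dynamic program, not the brute-force search below, and the separation matrix and queue are toy numbers), a sequence is admissible if no aircraft moves more than a fixed number of positions away from its first-come-first-served slot:

    from itertools import permutations

    # Toy wake-separation matrix in seconds, indexed [leader][follower];
    # categories: 0 = Heavy, 1 = Medium, 2 = Light (illustrative values only).
    SEP = [[96, 120, 150],
           [60,  72, 100],
           [60,  60,  72]]

    def sequence_cost(order, categories):
        """Total separation time for a given landing order."""
        return sum(SEP[categories[a]][categories[b]]
                   for a, b in zip(order, order[1:]))

    def best_mps_sequence(categories, mps=2):
        """Brute-force search over sequences respecting a maximum position shift."""
        n = len(categories)
        best = None
        for order in permutations(range(n)):
            if all(abs(pos - ac) <= mps for pos, ac in enumerate(order)):
                cost = sequence_cost(order, categories)
                if best is None or cost < best[0]:
                    best = (cost, order)
        return best

    fcfs = [2, 0, 2, 1, 0, 1]          # FCFS queue of aircraft categories
    print("FCFS cost:", sequence_cost(range(len(fcfs)), fcfs))
    print("best within MPS=2:", best_mps_sequence(fcfs))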
12

Qin, X., R. D. Müller, J. Cannon, T. C. W. Landgrebe, C. Heine, R. J. Watson and M. Turner. "The GPlates Geological Information Model and Markup Language". Geoscientific Instrumentation, Methods and Data Systems 1, no. 2 (October 8, 2012): 111–34. http://dx.doi.org/10.5194/gi-1-111-2012.

Abstract. Understanding tectonic and geodynamic processes leading to the present-day configuration of the Earth involves studying data and models across a variety of disciplines, from geochemistry, geochronology and geophysics, to plate kinematics and mantle dynamics. All these data represent a 3-D spatial and 1-D temporal framework, a formalism which is not exploited by traditional spatial analysis tools. This is arguably a fundamental limit in both the rigour and sophistication in which datasets can be combined for geological deep time analysis, and often confines the extent of data analyses to the present-day configurations of geological objects. The GPlates Geological Information Model (GPGIM) represents a formal specification of geological and geophysical data in a time-varying plate tectonics context, used by the GPlates virtual-globe software. It provides a framework in which relevant types of geological data are attached to a common plate tectonic reference frame, allowing the data to be reconstructed in a time-dependent spatio-temporal plate reference frame. The GPlates Markup Language (GPML), being an extension of the open standard Geography Markup Language (GML), is both the modelling language for the GPGIM and an XML-based data format for the interoperable storage and exchange of data modelled by it. The GPlates software implements the GPGIM allowing researchers to query, visualise, reconstruct and analyse a rich set of geological data including numerical raster data. The GPGIM has recently been extended to support time-dependent geo-referenced numerical raster data by wrapping GML primitives into the time-dependent framework of the GPGIM. Coupled with GPlates' ability to reconstruct numerical raster data and import/export from/to a variety of raster file formats, as well as its handling of time-dependent plate boundary topologies, interoperability with geodynamic softwares is established, leading to a new generation of deep-time spatio-temporal data analysis and modelling, including a variety of new functionalities, such as 4-D data-mining.
13

Tampubolon, W., and W. Reinhardt. "UAV DATA PROCESSING FOR RAPID MAPPING ACTIVITIES". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W3 (August 19, 2015): 371–77. http://dx.doi.org/10.5194/isprsarchives-xl-3-w3-371-2015.

During disaster and emergency situations, geospatial data plays an important role as a framework for decision support systems. As one component of basic geospatial data, large scale topographical maps are mandatory in order to enable geospatial analysis within quite a number of societal challenges. The increasing role of geo-information in disaster management nowadays consequently needs to include geospatial aspects in its analysis. Therefore different geospatial datasets can be combined in order to produce reliable geospatial analysis, especially in the context of disaster preparedness and emergency response. A very well-known issue in this context is the fast delivery of geospatially relevant data, which is expressed by the term “Rapid Mapping”. The Unmanned Aerial Vehicle (UAV) is a rising geospatial data platform that can be attractive for modelling and monitoring the disaster area with low-cost and timely acquisition in such a critical period of time. Disaster-related object extraction is of special interest for many applications. In this paper, UAV-borne data has been used for supporting rapid mapping activities in combination with high resolution airborne Interferometric Synthetic Aperture Radar (IFSAR) data. A real disaster instance from 2013, the Mount Sinabung eruption in Northern Sumatra, Indonesia, is used as the benchmark test for the rapid mapping activities presented in this paper. In this context, the reliable IFSAR dataset from an airborne data acquisition in 2011 has been used as a comparable dataset for accuracy investigation and assessment purposes in 3-D reconstruction. Finally, this paper presents a proper geo-referencing and feature extraction method for UAV data to support rapid mapping activities.
14

Treiber, G., T. Wex, S. Eberhard, C. Hosius and P. Malfertheiner. "Imatinib for hepatocellular cancer (HCC)—Focus on PK/PD modelling and liver function". Journal of Clinical Oncology 24, no. 18_suppl (June 20, 2006): 13088. http://dx.doi.org/10.1200/jco.2006.24.18_suppl.13088.

Background: Imatinib represents standard medical care for treatment of gastrointestinal stromal tumors (GIST) and chronic myeloid leukemia (CML). Indications for other malignancies are evolving, especially in the context of antiangiogenesis, mediated by the inhibition of platelet-derived growth factor (PDGF). Liver metastasis occurs often in GIST patients, usually not affecting liver function. No reliable data about imatinib metabolisation under conditions of true hepatic impairment are available. Methods: In a phase-II trial involving 10 patients with hepatocellular cancer (HCC) as a model disease, we attempted (1) to study short-term changes in serum biomarkers following treatment with octreotide (targeting VEGF) alone or octreotide plus imatinib (400 mg/d, additionally targeting PDGF); (2) to explore a potentially decreased metabolisation of imatinib because of hepatic impairment; and (3) to correlate pharmacokinetic and pharmacodynamic findings (PK/PD modelling). Results: Compared to literature results, PK parameters for imatinib are similar to those of patients with normal liver function; however, the main active metabolite N-DMI shows a more prolonged half-life. The AUC of N-DMI depends on liver function as expressed by CPT score (r = −0.67) and serum bilirubin (r = −0.70), p < 0.05 each. During short-term imatinib treatment (4 weeks), plasma PDGF significantly decreased at week 2 compared to controls only receiving octreotide treatment. VEGF secretion was unaffected by imatinib. The AUC of N-DMI could be attributed to the pharmacodynamic effect of PDGF inhibition (r = −0.679 [−0.917 to −0.0868], p = 0.031). Conclusions: In HCC patients with different stages of liver cirrhosis, the metabolisation of N-DMI, but not imatinib, is impaired. PK of imatinib is closely correlated to PDGF inhibition. Whether this translates into antiangiogenesis and tumor regression must await long-term clinical studies. [Table: see text] No significant financial relationships to disclose.
15

van Maanen, Rosanne, Frans H. Rutten, Frederikus A. Klok, Menno V. Huisman, Jeanet W. Blom, Karel G. M. Moons and Geert-Jan Geersing. "Validation and impact of a simplified clinical decision rule for diagnosing pulmonary embolism in primary care: design of the PECAN prospective diagnostic cohort management study". BMJ Open 9, no. 10 (October 2019): e031639. http://dx.doi.org/10.1136/bmjopen-2019-031639.

Introduction: Combined with patient history and physical examination, a negative D-dimer can safely rule out pulmonary embolism (PE). However, the D-dimer test is frequently false positive, leading to many (with hindsight) ‘unneeded’ referrals to secondary care. Recently, the novel YEARS algorithm, incorporating flexible D-dimer thresholds depending on pretest risk, was developed and validated, showing its ability to safely exclude PE in the hospital environment. Importantly, this was accompanied by 14% fewer computed tomography pulmonary angiography examinations than the standard, fixed D-dimer threshold. Although promising, in primary care this algorithm has not been validated yet. Methods and analysis: The PECAN (Diagnosing Pulmonary Embolism in the context of Common Alternative diagNoses in primary care) study is a prospective diagnostic study performed in Dutch primary care. Included patients with suspected acute PE will be managed by their general practitioner according to the YEARS diagnostic algorithm and followed up in primary care for 3 months to establish the final diagnosis. To study the impact of the use of the YEARS algorithm, the primary endpoints are the safety and efficiency of the YEARS algorithm in primary care. Safety is defined as the proportion of false-negative test results in those not referred. Efficiency denotes the proportion of patients classified in this non-referred category. Additionally, we quantify whether C reactive protein measurement has added diagnostic value to the YEARS algorithm, using multivariable logistic and polytomous regression modelling. Furthermore, we will investigate which factors contribute to the subjective YEARS item ‘PE most likely diagnosis’. Ethics and dissemination: The study protocol was approved by the Medical Ethical Committee Utrecht, the Netherlands. Patients eligible for inclusion will be asked for their consent. Results will be disseminated by publication in peer-reviewed journals and presented at (inter)national meetings and congresses. Trial registration: NTR 7431.
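As a hedged sketch of the flexible-threshold idea being validated here (thresholds as published for the original hospital-based YEARS studies; whether they transfer to primary care is exactly what PECAN is designed to test, and this toy function is not the study protocol):

    def years_rule(clinical_dvt: bool, haemoptysis: bool,
                   pe_most_likely: bool, d_dimer_ng_ml: float) -> str:
        """Simplified YEARS decision rule: with no YEARS items the D-dimer
        threshold is 1000 ng/mL, with one or more items it is 500 ng/mL."""
        items = sum([clinical_dvt, haemoptysis, pe_most_likely])
        threshold = 1000 if items == 0 else 500
        if d_dimer_ng_ml < threshold:
            return "PE excluded without imaging"
        return "refer for CT pulmonary angiography"

    # Example: no YEARS items and D-dimer 800 ng/mL -> excluded under YEARS,
    # but would have been referred under a fixed 500 ng/mL threshold.
    print(years_rule(False, False, False, 800.0))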
16

Zhang, Haofei, Harshal M. Parikh, Jyoti Bapat, Ying-Hsuan Lin, Jason D. Surratt and Richard M. Kamens. "Modelling of secondary organic aerosol formation from isoprene photooxidation chamber studies using different approaches". Environmental Chemistry 10, no. 3 (2013): 194. http://dx.doi.org/10.1071/en13029.

Environmental context Fine particulate matter (PM2.5) in the Earth’s atmosphere plays an important role in climate change and human health, in which secondary organic aerosol (SOA) that forms from the photooxidation of volatile organic compounds (VOCs) has a significant contribution. SOA derived from isoprene, the most abundant non-methane VOC emitted into the Earth’s atmosphere, has been widely studied to interpret its formation mechanisms. However, the ability to predict isoprene SOA using current models remains difficult due to the lack of understanding of isoprene chemistry. Abstract Secondary organic aerosol (SOA) formation from the photooxidation of isoprene was simulated against smog chamber experiments with varied concentrations of isoprene, nitrogen oxides (NOx=NO + NO2) and ammonium sulfate seed aerosols. A semi-condensed gas-phase isoprene chemical mechanism (ISO-UNC) was coupled with different aerosol-phase modelling frameworks to simulate SOA formation, including: (1) the Odum two-product approach, (2) the 1-D volatility basis-set (VBS) approach and (3) a new condensed kinetic model based upon the gas-particle partitioning theory and reactive uptake processes. The first two approaches are based upon empirical parameterisations from previous studies. The kinetic model uses a gas-phase mechanism to explicitly predict the major intermediate precursors, namely the isoprene-derived epoxides, and hence simulate SOA formation. In general, they all tend to significantly over predict SOA formation when semivolatile concentrations are higher because more semivolatiles are forced to produce SOA in the models to maintain gas-particle equilibrium; yet the data indicate otherwise. Consequently, modified dynamic parameterised models, assuming non-equilibrium partitioning, were incorporated and could improve the model performance. In addition, the condensed kinetic model was expanded by including an uptake limitation representation so that reactive uptake processes slow down or even stop; this assumes reactive uptake reactions saturate seed aerosols. The results from this study suggest that isoprene SOA formation by reactive uptake of gas-phase precursors is likely limited by certain particle-phase features, and at high gas-phase epoxide levels, gas-particle equilibrium is not obtained. The real cause of the limitation needs further investigation; however, the modified kinetic model in this study could tentatively be incorporated in large-scale SOA models given its predictive ability.
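For reference, the Odum two-product parameterisation mentioned in approach (1) expresses the aerosol yield as $Y = M_o \sum_{i=1}^{2} \alpha_i K_{om,i} / (1 + K_{om,i} M_o)$, where $M_o$ is the absorbing organic aerosol mass concentration and $\alpha_i$, $K_{om,i}$ are empirically fitted stoichiometric and partitioning coefficients for two surrogate products (a standard textbook form, not the specific fit used in the paper); the volatility basis-set approach of (2) generalises this to a set of fixed-volatility bins.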
17

Zubair, Fazlul R., and Haris J. Catrakis. "On separated shear layers and the fractal geometry of turbulent scalar interfaces at large Reynolds numbers". Journal of Fluid Mechanics 624 (April 10, 2009): 389–411. http://dx.doi.org/10.1017/s0022112008005612.

This work explores fractal geometrical properties of scalar turbulent interfaces derived from experimental two-dimensional spatial images of the scalar field in separated shear layers at large Reynolds numbers. The resolution of the data captures the upper three decades of scales enabling examination of multiscale geometrical properties ranging from the largest energy-containing scales to inertial scales. The data show a −5/3 spectral exponent over a wide range of scales corresponding to the inertial range in fully developed turbulent flows. For the fractal aspects, we utilize two methods as it is known that different methods may lead to different fractal aspects. We use the recently developed method for fractal analysis known as the Multiscale-Minima Meshless (M3) method because it does not require the use of grids. We also use the conventional box-counting approach as it has been frequently employed in various past studies. The outer scalar interfaces are identified on the basis of the probability density function (p.d.f.) of the scalar field. For the outer interfaces, the M3 method shows strong scale dependence of the generalized fractal dimension with approximately linear variation of the dimension as a function of logarithmic scale, for interface-fitting reference areas, but there is evidence of a plateau near a dimension D ~ 1.3 for larger reference areas. The conventional box-counting approach shows evidence of a plateau with a constant dimension also of D ~ 1.3, for the same reference areas. In both methods, the observed plateau dimension value agrees with other studies in different flow geometries. Scalar threshold effects are also examined and show that the internal scalar interfaces exhibit qualitatively similar behaviour to the outer interfaces. The overall range of box-counting fractal dimension values exhibited by outer and internal interfaces is D ~ 1.2–1.4. The present findings show that the fractal aspects of scalar interfaces in separated shear layers at large Reynolds number with −5/3 spectral behaviour can depend on the method used for evaluating the dimension and on the reference area. These findings as well as the utilities and distinctions of these two different definitions of the dimension are discussed in the context of multiscale modelling of mixing and the interfacial geometry.
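For readers unfamiliar with the conventional box-counting approach mentioned above, a minimal sketch follows (a generic implementation on a synthetic test image, not the authors' code and not the M3 method): count the boxes of side s that intersect the interface and estimate D from N(s) ~ s^(-D).

    import numpy as np

    def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
        """Estimate the box-counting dimension of a 2-D binary interface mask."""
        counts = []
        for s in sizes:
            h = (mask.shape[0] // s) * s
            w = (mask.shape[1] // s) * s
            blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())
        # N(s) ~ s^(-D)  =>  slope of log N versus log(1/s) gives D
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes, float)),
                              np.log(np.array(counts, float)), 1)
        return slope

    # Synthetic test: the boundary of a filled disk should give D close to 1.
    y, x = np.mgrid[0:512, 0:512]
    disk = (x - 256) ** 2 + (y - 256) ** 2 < 200 ** 2
    edge = (disk ^ np.roll(disk, 1, axis=0)) | (disk ^ np.roll(disk, 1, axis=1))
    print("estimated dimension: %.2f" % box_counting_dimension(edge))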
18

Guillemant, S., V. Génot, J. C. Matéo-Vélez, R. Ergun and P. Louarn. "Solar wind plasma interaction with solar probe plus spacecraft". Annales Geophysicae 30, no. 7 (July 24, 2012): 1075–92. http://dx.doi.org/10.5194/angeo-30-1075-2012.

Abstract. 3-D PIC (Particle In Cell) simulations of spacecraft-plasma interactions in the solar wind context of the Solar Probe Plus mission are presented. The SPIS software is used to simulate a simplified probe in the near-Sun environment (at a distance of 0.044 AU or 9.5 RS from the Sun surface). We begin this study with a cross comparison of SPIS with another PIC code, aiming at providing the static potential structure surrounding a spacecraft in a high photoelectron environment. This paper then presents a sensitivity study using generic SPIS capabilities, investigating the role of some physical phenomena and numerical models. It confirms that in the near-Sun environment, the Solar Probe Plus spacecraft would rather be negatively charged, despite the high yield of photoemission. This negative potential is explained through the dense sheath of photoelectrons and secondary electrons both emitted with low energies (2–3 eV). Due to this low energy of emission, these particles are not ejected at an infinite distance of the spacecraft and would rather surround it. As involved densities of photoelectrons can reach 10⁶ cm⁻³ (compared to ambient ion and electron densities of about 7 × 10³ cm⁻³), those populations affect the surrounding plasma potential generating potential barriers for low energy electrons, leading to high recollection. This charging could interfere with the low energy (up to a few tens of eV) plasma sensors and particle detectors, by biasing the particle distribution functions measured by the instruments. Moreover, if the spacecraft charges to large negative potentials, the problem will be more severe as low energy electrons will not be seen at all. The importance of the modelling requirements in terms of precise prediction of spacecraft potential is also discussed.
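For context, the floating potential $\phi_s$ that such simulations converge to is set by current balance on the spacecraft surface, schematically $I_e(\phi_s) = I_i(\phi_s) + I_{ph}(\phi_s) + I_{sec}(\phi_s)$, i.e. collected ambient electrons balanced by collected ions plus net escaping photoelectrons and secondaries; this is a textbook statement of the charging problem rather than a formula taken from the paper, and the negative charging described above arises because potential barriers cause much of the low-energy emitted population to be recollected instead of escaping.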
19

Mohan, M. "GEOSPATIAL INFORMATION FROM SATELLITE IMAGERY FOR GEOVISUALISATION OF SMART CITIES IN INDIA". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B8 (June 24, 2016): 979–85. http://dx.doi.org/10.5194/isprsarchives-xli-b8-979-2016.

In the recent past, there has been a large emphasis on the extraction of geospatial information from satellite imagery. Geospatial information is being processed through geospatial technologies, which are playing an important role in the development of smart cities, particularly in developing countries of the world like India. The study is based on the latest geospatial satellite imagery available in multi-date, multi-stage, multi-sensor and multi-resolution form. In addition to this, the latest geospatial technologies have been used for digital image processing of remote sensing satellite imagery, and the latest geographic information systems have been used for 3-D geovisualisation, geospatial digital mapping and geospatial analysis for the development of smart cities in India. The geospatial information obtained from RS and GPS systems has a complex structure involving space, time and presentation. Such information helps in 3-dimensional digital modelling for smart cities, which involves the integration of spatial and non-spatial information for geographic visualisation of smart cities in the context of the real world. In other words, the geospatial database provides a platform for information visualisation, which is also known as geovisualisation. As a result, there has been increasing research interest directed at geospatial analysis, digital mapping, geovisualisation, monitoring and the development of smart cities using geospatial technologies. However, the present research attempts to support the development of cities in a real-world scenario, particularly to help local, regional and state-level planners and policy makers to better understand and address issues attributed to cities, using geospatial information from satellite imagery for geovisualisation of smart cities in an emerging and developing country, India.
20

Mohan, M. "GEOSPATIAL INFORMATION FROM SATELLITE IMAGERY FOR GEOVISUALISATION OF SMART CITIES IN INDIA". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B8 (June 24, 2016): 979–85. http://dx.doi.org/10.5194/isprs-archives-xli-b8-979-2016.

In the recent past, there has been a large emphasis on the extraction of geospatial information from satellite imagery. Geospatial information is being processed through geospatial technologies, which are playing an important role in the development of smart cities, particularly in developing countries of the world like India. The study is based on the latest geospatial satellite imagery available in multi-date, multi-stage, multi-sensor and multi-resolution form. In addition to this, the latest geospatial technologies have been used for digital image processing of remote sensing satellite imagery, and the latest geographic information systems have been used for 3-D geovisualisation, geospatial digital mapping and geospatial analysis for the development of smart cities in India. The geospatial information obtained from RS and GPS systems has a complex structure involving space, time and presentation. Such information helps in 3-dimensional digital modelling for smart cities, which involves the integration of spatial and non-spatial information for geographic visualisation of smart cities in the context of the real world. In other words, the geospatial database provides a platform for information visualisation, which is also known as geovisualisation. As a result, there has been increasing research interest directed at geospatial analysis, digital mapping, geovisualisation, monitoring and the development of smart cities using geospatial technologies. However, the present research attempts to support the development of cities in a real-world scenario, particularly to help local, regional and state-level planners and policy makers to better understand and address issues attributed to cities, using geospatial information from satellite imagery for geovisualisation of smart cities in an emerging and developing country, India.
21

Chraka, Anas, Ihssane Raissouni, Nordin Ben Seddik, Said Khayar, Soukaina El Amrani, Mustapha El Hadri, Faiza Chaouket and Dounia Bouchta. "Croweacin and Ammi visnaga (L.) Lam Essential Oil derivatives as green corrosion inhibitors for brass in 3% NaCl medium: Quantum Mechanics investigation and Molecular Dynamics Simulation Approaches". Mediterranean Journal of Chemistry 10, no. 4 (April 28, 2020): 378. http://dx.doi.org/10.13171/mjc10402004281338ac.

The computational study was carried out to understand the anti-corrosion properties of Croweacin, a major chemical component of two essential oils of Ammi visnaga (L.) Lam collected from northern Morocco in 2016 (EO16) and 2018 (EO18), against brass corrosion in a 3% NaCl medium. The study, moreover, considers the inhibitory effect of some minor compounds of EO18 such as Eugenol, Trans-Anethole, α-Isophorone, and Thymol. In this context, quantum mechanics modelling using the density functional theory (DFT) method with B3LYP/6-31G(d,p) was conducted in the aqueous medium by the use of the IEFPCM model and SCRF theory. The DFT method was adopted to identify, analyze and interpret several elements such as the electronic features, the Frontier Molecular Orbitals (FMO) diagram, the Molecular Electrostatic Potential (MEP), contour maps of the electrostatic potential (ESP), and the Mulliken population analysis. The DFT demonstrated that the studied compounds are excellent corrosion inhibitors. Furthermore, a Monte Carlo (MC) type simulation of molecular dynamics (MD) was carried out to provide information on the adsorption mechanism of the studied inhibitors through the active sites on the metal surface. This method informed us that the studied inhibitors have high adsorption energy when interacting with the metal surface, especially for Croweacin (−68.63 kcal/mol). The results obtained from DFT and the MC type simulations are in good agreement.
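As a hedged sketch of the kind of Koopmans-type global reactivity descriptors commonly derived from such frontier-orbital output in corrosion-inhibitor DFT studies (the descriptor conventions, the work-function value and all numbers below are illustrative assumptions, not results from the paper):

    def reactivity_descriptors(e_homo_ev: float, e_lumo_ev: float) -> dict:
        """Koopmans-type global reactivity descriptors from frontier orbital
        energies (eV), as commonly used in corrosion-inhibitor DFT studies."""
        ionisation = -e_homo_ev          # I ~ -E_HOMO
        affinity = -e_lumo_ev            # A ~ -E_LUMO
        chi = (ionisation + affinity) / 2.0      # electronegativity
        eta = (ionisation - affinity) / 2.0      # chemical hardness
        return {
            "gap_eV": e_lumo_ev - e_homo_ev,
            "electronegativity_eV": chi,
            "hardness_eV": eta,
            "softness_per_eV": 1.0 / eta,        # one common convention
            # Fraction of electrons transferred to the metal surface, using the
            # common approximation with a hypothetical work function of 4.3 eV
            # and negligible metal hardness:
            "delta_N": (4.3 - chi) / (2.0 * eta),
        }

    # Hypothetical frontier energies for an inhibitor molecule (not from the paper):
    print(reactivity_descriptors(e_homo_ev=-5.8, e_lumo_ev=-0.9))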
22

Qin, X., R. D. Müller, J. Cannon, T. C. W. Landgrebe, C. Heine, R. J. Watson and M. Turner. "The GPlates Geological Information Model and Markup Language". Geoscientific Instrumentation, Methods and Data Systems Discussions 2, no. 2 (July 4, 2012): 365–428. http://dx.doi.org/10.5194/gid-2-365-2012.

Abstract. Understanding tectonic and geodynamic processes leading to the present-day configuration of the Earth involves studying data and models across a variety of disciplines, from geochemistry, geochronology and geophysics, to plate kinematics and mantle dynamics. All these data represent a 3-dimensional spatial and 1-dimensional temporal framework, a formalism which is not exploited by traditional spatial analysis tools. This is arguably a fundamental limit in both the rigour and sophistication in which datasets can be combined for geological "deep time" analysis, and often confines the extent of data analyses to the present-day configurations of geological objects. The GPlates Geological Information Model (GPGIM) represents a formal specification of geological and geophysical data in a time-varying plate tectonics context, used by the GPlates virtual-globe software. It provides a framework in which relevant types of geological data are attached to a common plate tectonic reference frame, allowing the data to be reconstructed in a time-dependent spatio-temporal plate reference frame. The GPlates Markup Language (GPML), being an extension of the open standard Geography Markup Language (GML), is both the modelling language for the GPGIM and an XML-based data format for the interoperable storage and exchange of data modelled by it. The GPlates software implements the GPGIM allowing researchers to query, visualise, reconstruct and analyse a rich set of geological data including numerical raster data. The GPGIM has recently been extended to support time-dependent geo-referenced numerical raster data by wrapping GML primitives into the time-dependent framework of the GPGIM. Coupled with GPlates' ability to reconstruct numerical raster data and import/export from/to a variety of raster file formats, as well as its handling of time-dependent plate boundary topologies, interoperability with geodynamic softwares is established, leading to a new generation of deep-time spatio-temporal data analysis and modelling, including a variety of new functionalities such as 4-D data-mining.
23

Michoud, V., R. F. Hansen, N. Locoge, P. S. Stevens and S. Dusanter. "Detailed characterizations of a Comparative Reactivity Method (CRM) instrument: experiments vs. modelling". Atmospheric Measurement Techniques Discussions 8, no. 4 (April 16, 2015): 3803–50. http://dx.doi.org/10.5194/amtd-8-3803-2015.

Abstract. The Hydroxyl radical (OH) is an important oxidant in the daytime troposphere that controls the lifetime of most trace gases, whose oxidation leads to the formation of harmful secondary pollutants such as ozone (O3) and Secondary Organic Aerosols (SOA). In spite of the importance of OH, uncertainties remain concerning its atmospheric budget and integrated measurements of the total sink of OH can help reducing these uncertainties. In this context, several methods have been developed to measure the first-order loss rate of ambient OH, called total OH reactivity. Among these techniques, the Comparative Reactivity Method (CRM) is promising and has already been widely used in the field and in atmospheric simulation chambers. This technique relies on monitoring competitive OH reactions between a reference molecule (pyrrole) and compounds present in ambient air inside a sampling reactor. However, artefacts and interferences exist for this method and a thorough characterization of the CRM technique is needed. In this study, we present a detailed characterization of a CRM instrument, assessing the corrections that need to be applied on ambient measurements. The main corrections are, in the order of their integration in the data processing: (1) a correction for a change in relative humidity between zero air and ambient air, (2) a correction for the formation of spurious OH when artificially produced HO2 react with NO in the sampling reactor, and (3) a correction for a deviation from pseudo first-order kinetics. The dependences of these artefacts to various measurable parameters, such as the pyrrole-to-OH ratio or the bimolecular reaction rate constants of ambient trace gases with OH are also studied. From these dependences, parameterizations are proposed to correct the OH reactivity measurements from the abovementioned artefacts. A comparison of experimental and simulation results is then discussed. The simulations were performed using a 0-D box model including either (1) a simple chemical mechanism, taking into account the inorganic chemistry from IUPAC 2001 and a simple organic chemistry scheme including only a generic RO2 compounds for all oxidized organic trace gases; and (2) a more exhaustive chemical mechanism, based on the Master Chemical Mechanism (MCM), including the chemistry of the different trace gases used during laboratory experiments. Both mechanisms take into account self- and cross-reactions of radical species. The simulations using these mechanisms allow reproducing the magnitude of the corrections needed to account for NO interferences and a deviation from pseudo first-order kinetics, as well as their dependence on the Pyrrole-to-OH ratio and on bimolecular reaction rate constants of trace gases. The reasonable agreement found between laboratory experiments and model simulations gives confidence in the parameterizations proposed to correct the Total OH reactivity measured by CRM. However, it must be noted that the parameterizations presented in this paper are suitable for the CRM instrument used during the laboratory characterization and may be not appropriate for other CRM instruments, even if similar behaviours should be observed. It is therefore recommended that each group characterizes its own instrument following the recommendations given in this study. Finally, the assessment of the limit of detection and total uncertainties is discussed and an example of field deployment of this CRM instrument is presented.
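For context, the CRM derives the total OH reactivity of ambient air from three measured pyrrole levels, $C_1$ (no OH), $C_2$ (OH in zero air) and $C_3$ (OH with ambient air); in the commonly used pseudo-first-order formulation from the CRM literature (not a formula quoted from this paper), $R_{air} = k_{pyr+OH}\, C_1\, (C_3 - C_2)/(C_1 - C_3)$, and correction (3) above addresses precisely the conditions under which this pseudo-first-order assumption breaks down.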
24

Bodensteiner, J., T. Shenar, L. Mahy, M. Fabry, P. Marchant, M. Abdul-Masih, G. Banyard et al. "Is HR 6819 a triple system containing a black hole?" Astronomy & Astrophysics 641 (September 2020): A43. http://dx.doi.org/10.1051/0004-6361/202038682.

Context. HR 6819 was recently proposed to be a triple system consisting of an inner B-type giant plus black hole (BH) binary with an orbital period of 40 d and an outer Be tertiary. This interpretation is mainly based on two inferences: that the emission attributed to the outer Be star is stationary and that the inner star, which is used as mass calibrator for the BH, is a B-type giant. Aims. We re-investigate the properties of HR 6819 to search for a possibly simpler alternative explanation for HR 6819, which does not invoke the presence of a triple system with a BH in the inner binary. Methods. Based on an orbital analysis, the disentangling of the spectra of the two visible components and the atmosphere analysis of the disentangled spectra, we investigate the configuration of the system and the nature of its components. Results. Disentangling implies that the Be component is not a static tertiary, but rather a component of the binary in the 40 d orbit. The inferred radial velocity amplitudes of K1 = 60.4 ± 1.0 km s⁻¹ for the B-type primary and K2 = 4.0 ± 0.8 km s⁻¹ for the Be-type secondary imply an extreme mass ratio of M2/M1 = 15 ± 3. We find that the B-type primary, which we estimate to contribute about 45% to the optical flux, has an effective temperature of Teff = 16 ± 1 kK and a surface gravity of log g = 2.8 ± 0.2 [cgs], while the Be secondary, which contributes about 55% to the optical flux, has Teff = 20 ± 2 kK and log g = 4.0 ± 0.3 [cgs]. We infer spectroscopic masses of 0.4 (+0.3/−0.1) M⊙ and 6 (+5/−3) M⊙ for the primary and secondary, which agree well with the dynamical masses for an inclination of i = 32°. This indicates that the primary might be a stripped star rather than a B-type giant. Evolutionary modelling suggests that a possible progenitor system would be a tight (Pᵢ ≈ 2 d) B+B binary system that experienced conservative mass transfer. While the observed nitrogen enrichment of the primary conforms with the predictions of the evolutionary models, we find no indications for the predicted He enrichment. Conclusions. We suggest that HR 6819 is a binary system consisting of a stripped B-type primary and a rapidly-rotating Be star that formed from a previous mass-transfer event. In the framework of this interpretation, HR 6819 does not contain a BH. Interferometry can distinguish between these two scenarios by providing an independent measurement of the separation between the visible components.
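The extreme mass ratio quoted above follows directly from the radial-velocity semi-amplitudes of the double-lined solution, $M_2/M_1 = K_1/K_2 = (60.4 \pm 1.0)/(4.0 \pm 0.8) \approx 15$, which is why a precise $K_2$ for the Be star is the critical measurement in this reinterpretation.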
25

Ding, Xuesong, Tristan Salles, Nicolas Flament and Patrice Rey. "Quantitative stratigraphic analysis in a source-to-sink numerical framework". Geoscientific Model Development 12, no. 6 (June 28, 2019): 2571–85. http://dx.doi.org/10.5194/gmd-12-2571-2019.

Abstract. The sedimentary architecture at continental margins reflects the interplay between the rate of change of accommodation creation (δA) and the rate of change of sediment supply (δS). Stratigraphic interpretation increasingly focuses on understanding the link between deposition patterns and changes in δA∕δS, with an attempt to reconstruct the contributing factors. Here, we use the landscape modelling code pyBadlands to (1) investigate the development of stratigraphic sequences in a source-to-sink context; (2) assess the respective performance of two well-established stratigraphic interpretation techniques: the trajectory analysis method and the accommodation succession method; and (3) propose quantitative stratigraphic interpretations based on those two techniques. In contrast to most stratigraphic forward models (SFMs), pyBadlands provides self-consistent sediment supply to basin margins as it simulates erosion, sediment transport and deposition in a source-to-sink context. We present a generic case of landscape evolution that takes into account periodic sea level variations and passive margin thermal subsidence over 30 million years, under uniform rainfall. A set of post-processing tools are provided to analyse the predicted stratigraphic architecture. We first reconstruct the temporal evolution of the depositional cycles and identify key stratigraphic surfaces based on observations of stratal geometries and facies relationships, which we use for comparison to stratigraphic interpretations. We then apply both the trajectory analysis and the accommodation succession methods to manually map key stratigraphic surfaces and define sequence units on the final model output. Finally, we calculate shoreline and shelf-edge trajectories, the temporal evolution of changes in relative sea level (proxy for δA) and sedimentation rate (proxy for δS) at the shoreline, and automatically produce stratigraphic interpretations. Our results suggest that the analysis of the presented model is more robust with the accommodation succession method than with the trajectory analysis method. Stratigraphic analysis based on manually extracted shoreline and shelf-edge trajectory requires calibrations of time-dependent processes such as thermal subsidence or additional constraints from stratal terminations to obtain reliable interpretations. The 3-D stratigraphic analysis of the presented model reveals small lateral variations of sequence formations. Our work provides an efficient and flexible quantitative sequence stratigraphic framework to evaluate the main drivers (climate, sea level and tectonics) controlling sedimentary architectures and investigate their respective roles in sedimentary basin development.
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Jones, Valerie M., Hermie J. Hermens y Nick L. S. Fung. "The MADE Reference Information Model for Interoperable Pervasive Telemedicine Systems". Methods of Information in Medicine 56, n.º 02 (2017): 180–87. http://dx.doi.org/10.3414/me15-02-0013.

Texto completo
Resumen
Summary. Objectives: The main objective is to develop and validate a reference information model (RIM) to support semantic interoperability of pervasive telemedicine systems. The RIM is one component within a larger, computer-interpretable "MADE language" developed by the authors in the context of the MobiGuide project. To validate our RIM, we applied it to a clinical guideline for patients with gestational diabetes mellitus (GDM). Methods: The RIM is derived from a generic data flow model of disease management which comprises a network of four types of concurrent processes: Monitoring (M), Analysis (A), Decision (D) and Effectuation (E). The resulting MADE RIM, which was specified using the formal Vienna Development Method (VDM), includes six main, high-level data types representing measurements, observations, abstractions, action plans, action instructions and control instructions. Results: The authors applied the MADE RIM to the complete GDM guideline and derived from it a domain information model (DIM) comprising 61 archetypes, specifically 1 measurement, 8 observation, 10 abstraction, 18 action plan, 3 action instruction and 21 control instruction archetypes. It was observed that there are six generic patterns for transforming different guideline elements into MADE archetypes, although a direct mapping does not exist in some cases. The most notable examples are notifications to the patient and/or clinician as well as decision conditions which pertain to specific stages in the therapy. Conclusions: The results provide evidence that the MADE RIM is suitable for modelling clinical data in the design of pervasive telemedicine systems. Together with the other components of the MADE language, the MADE RIM supports development of pervasive telemedicine systems that are interoperable and independent of particular clinical applications.
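To make the six high-level data types concrete, here is a sketch of what they might look like as plain record types. The field names are invented for illustration only; the actual RIM is formally specified in VDM, not Python.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, List

@dataclass
class Measurement:            # raw data produced by the Monitoring (M) process
    timestamp: datetime
    quantity: str             # e.g. "blood glucose"
    value: float
    unit: str

@dataclass
class Observation:            # validated / interpreted measurement
    timestamp: datetime
    code: str
    value: Any

@dataclass
class Abstraction:            # higher-level state derived by the Analysis (A) process
    interval_start: datetime
    interval_end: datetime
    label: str                # e.g. "persistent hyperglycaemia"

@dataclass
class ActionPlan:             # output of the Decision (D) process
    valid_from: datetime
    goal: str
    steps: List[str] = field(default_factory=list)

@dataclass
class ActionInstruction:      # concrete instruction for the Effectuation (E) process
    due: datetime
    instruction: str

@dataclass
class ControlInstruction:     # instruction that reconfigures another MADE process
    target_process: str       # "M", "A", "D" or "E"
    parameter: str
    new_value: Any
```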
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Zhi-Hua, zhong y Larsgunnar Nilsson. "A contact searching algorithm for general 3-D contact-impact problems". Computers & Structures 34, n.º 2 (enero de 1990): 327–35. http://dx.doi.org/10.1016/0045-7949(90)90377-e.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

KORKUT, L., D. ŽUBRINIĆ y V. ŽUPANOVIĆ. "BOX DIMENSION AND MINKOWSKI CONTENT OF THE CLOTHOID". Fractals 17, n.º 04 (diciembre de 2009): 485–92. http://dx.doi.org/10.1142/s0218348x09004570.

Texto completo
Resumen
We prove that the box dimension of the standard clothoid is equal to d = 4/3. Furthermore, this curve is Minkowski measurable, and we compute its d-dimensional Minkowski content. Oscillatory dimensions of component functions of the clothoid are also equal to 4/3.
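The value d = 4/3 can be made plausible with a known heuristic for power spirals; the sketch below is only a back-of-the-envelope argument based on the Fresnel-integral parametrisation, not the authors' proof.

```latex
\[
x(t) = \int_0^t \cos(s^2)\,\mathrm{d}s, \qquad
y(t) = \int_0^t \sin(s^2)\,\mathrm{d}s ,
\]
\[
r(t) \sim \frac{1}{2t}, \qquad \varphi(t) \sim t^{2}
\quad\Longrightarrow\quad r \sim \varphi^{-1/2},
\]
\[
\dim_B = \frac{2}{1+\alpha}\Big|_{\alpha = 1/2} = \frac{4}{3}
\qquad \text{(box dimension of a power spiral } r = \varphi^{-\alpha},\ 0<\alpha<1\text{)} .
\]
```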
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Tesoniero, Andrea, Kuangdai Leng, Maureen D. Long y Tarje Nissen-Meyer. "Full wave sensitivity of SK(K)S phases to arbitrary anisotropy in the upper and lower mantle". Geophysical Journal International 222, n.º 1 (11 de abril de 2020): 412–35. http://dx.doi.org/10.1093/gji/ggaa171.

Texto completo
Resumen
SUMMARY Core-refracted phases such as SKS and SKKS are commonly used to probe seismic anisotropy in the upper and lowermost portions of the Earth’s mantle. Measurements of SK(K)S splitting are often interpreted in the context of ray theory, and their frequency dependent sensitivity to anisotropy remains imperfectly understood, particularly for anisotropy in the lowermost mantle. The goal of this work is to obtain constraints on the frequency dependent sensitivity of SK(K)S phases to mantle anisotropy, particularly at the base of the mantle, through global wavefield simulations. We present results from a new numerical approach to modelling the effects of seismic anisotropy of arbitrary geometry on seismic wave propagation in global 3-D earth models using the spectral element solver AxiSEM3D. While previous versions of AxiSEM3D were capable of handling radially anisotropic input models, here we take advantage of the ability of the solver to handle the full fourth-order elasticity tensor, with 21 independent coefficients. We take advantage of the computational efficiency of the method to compute wavefields at the relatively short periods (5 s) that are needed to simulate SK(K)S phases. We benchmark the code for simple, single-layer anisotropic models by measuring the splitting (via both the splitting intensity and the traditional splitting parameters ϕ and δt) of synthetic waveforms and comparing them to well-understood analytical solutions. We then carry out a series of numerical experiments for laterally homogeneous upper mantle anisotropic models with different symmetry classes, and compare the splitting of synthetic waveforms to predictions from ray theory. We next investigate the full wave sensitivity of SK(K)S phases to lowermost mantle anisotropy, using elasticity models based on crystallographic preferred orientation of bridgmanite and post-perovskite. We find that SK(K)S phases have significant sensitivity to anisotropy at the base of the mantle, and while ray theoretical approximations capture the first-order aspects of the splitting behaviour, full wavefield simulations will allow for more accurate modelling of SK(K)S splitting data, particularly in the presence of lateral heterogeneity. Lastly, we present a cross-verification test of AxiSEM3D against the SPECFEM3D_GLOBE spectral element solver for global seismic waves in an anisotropic earth model that includes both radial and azimuthal anisotropy. A nearly perfect agreement is achieved, with a significantly lower computational cost for AxiSEM3D. Our results highlight the capability of AxiSEM3D to handle arbitrary anisotropy geometries and its potential for future studies aimed at unraveling the details of anisotropy at the base of the mantle.
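For readers unfamiliar with the splitting parameters (ϕ, δt) measured on the synthetics, the following numpy sketch applies the textbook single-layer splitting operator to a pair of horizontal traces: rotate into the fast/slow frame, delay the slow component by δt, and rotate back. It is a generic illustration, not the AxiSEM3D or measurement code used in the paper.

```python
import numpy as np

def apply_splitting(north, east, dt, phi_deg, delay_s):
    """Apply one anisotropic splitting operator (fast azimuth phi_deg measured
    from north, delay time delay_s) to horizontal traces sampled at dt seconds."""
    phi = np.radians(phi_deg)
    # rotate N/E into fast/slow coordinates
    fast = np.cos(phi) * north + np.sin(phi) * east
    slow = -np.sin(phi) * north + np.cos(phi) * east
    # delay the slow component by the nearest whole number of samples
    shift = int(round(delay_s / dt))
    slow = np.roll(slow, shift)
    slow[:shift] = 0.0
    # rotate back to N/E
    n_out = np.cos(phi) * fast - np.sin(phi) * slow
    e_out = np.sin(phi) * fast + np.cos(phi) * slow
    return n_out, e_out

# toy example: split an 8 s period wavelet with phi = 30 deg and dt = 1.2 s
dt = 0.05
t = np.arange(0.0, 60.0, dt)
pulse = np.exp(-((t - 30.0) / 2.0) ** 2) * np.sin(2.0 * np.pi * t / 8.0)
n, e = apply_splitting(pulse, np.zeros_like(pulse), dt, phi_deg=30.0, delay_s=1.2)
```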
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Burello, Nicole, Hua Zhang, Weijun Wang, Tania Archbold, Rong Tsao y Ming Z. Fan. "PSVIII-12 Comparative characterization of intestinal alkaline phosphatase kinetics in young piglets and human Caco-2 cells". Journal of Animal Science 97, Supplement_3 (diciembre de 2019): 282–83. http://dx.doi.org/10.1093/jas/skz258.572.

Texto completo
Resumen
Abstract This study reports the kinetics of young porcine jejunal alkaline phosphatase (IAP) towards the dephosphorylation of ATP, lipopolysaccharides (LPS) and p-nitrophenyl phosphate (pNPP) at physiological conditions (pH 7.4 and 37 °C) in comparison with the IAP from human Caco-2 cells. The 10-day suckling young porcine jejunal IAP displayed Km values of 1.26±0.50 mM, 1.35±0.64 mg/mL and 0.290±0.072 mM for the hydrolyses of ATP, LPS and pNPP, respectively, while the respective Km values were 0.030±0.007 mM, 0.66±0.22 mg/mL and 0.033±0.006 mM for the IAP in the Caco-2 cells towards the same set of substrates. In comparison, the Km values of the young porcine jejunal IAP were 2–40 times higher than those of IAP from the human Caco-2 cells. In addition, Pearson correlation analyses showed tight positive correlations (P < 0.001) between IAP activities towards pNPP, ATP and LPS in the piglet and the Caco-2 cells, suggesting that the IAP activity towards pNPP could be used to predict the IAP activities on the physiological substrates ATP and LPS. Lastly, four AP genes were identified in the genome of Sus scrofa. Three of these porcine AP genes are annotated as intestinal-type alkaline phosphatase (IAP) genes and clustered at the distal end of chromosome 15, namely IAPX1, IAPX2, and IAPX3. The genomic context of APs in the pig genome is highly similar to that in the human genome. We predict that pig IAPX3 (XP_003133777.1) is likely an IAP gene for pigs. Further comparisons in post-translational modification and protein 3-D structure modelling were performed, indicating that the observed differences in kinetic affinity between the young porcine jejunal IAP and the human Caco-2 cell IAP in hydrolyzing ATP, LPS and pNPP might relate to their differences in the coding sequences of the IAP protein, and/or the IAP protein N-glycosylation.
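The Km values quoted above come from Michaelis–Menten kinetics. As a generic illustration of how such constants are recovered from initial-rate data (not the authors' analysis pipeline; the substrate concentrations and velocities below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    """Initial velocity v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

# hypothetical pNPP initial-rate data (substrate in mM, velocity in arbitrary units)
S = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
v = np.array([0.14, 0.25, 0.40, 0.63, 0.76, 0.87, 0.94])

(Vmax, Km), _ = curve_fit(michaelis_menten, S, v, p0=[1.0, 0.3])
print(f"Vmax = {Vmax:.2f}, Km = {Km:.2f} mM")  # Km comparable to the ~0.3 mM quoted for piglet pNPP
```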
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Louis-Napoléon, Aurélie, Muriel Gerbault, Thomas Bonometti, Cédric Thieulot, Roland Martin y Olivier Vanderhaeghe. "3-D numerical modelling of crustal polydiapirs with volume-of-fluid methods". Geophysical Journal International 222, n.º 1 (20 de marzo de 2020): 474–506. http://dx.doi.org/10.1093/gji/ggaa141.

Texto completo
Resumen
SUMMARY Gravitational instabilities play a crucial role in the Earth's dynamics and in particular in its differentiation. The Earth's crust can be considered as a multilayered fluid with different densities and viscosities, which may become unstable, in particular with variations in temperature. With the specific aim of quantifying crustal-scale polydiapiric instabilities, we test here two codes, JADIM and OpenFOAM, which use a volume-of-fluid (VOF) method without interface reconstruction, and compare them with the geodynamics community code ASPECT, which uses a tracking algorithm based on compositional fields. The VOF method is well known to preserve strongly deforming interfaces. Both JADIM and OpenFOAM are first tested against documented two- and three-layer Rayleigh–Taylor instability configurations in 2-D and 3-D. The 2-D and 3-D results show diapiric growth rates that fit the analytical theory and are found to be slightly more accurate than those obtained with ASPECT. We subsequently compare the results from VOF simulations with previously published Rayleigh–Bénard analogue and numerical experiments. We show that the VOF method is a robust method adapted to the study of diapirism and convection in the Earth's crust, although it is not computationally as fast as ASPECT. OpenFOAM is found to run faster than, and conserve mass as well as, JADIM. Finally, we provide a preliminary application to the polydiapiric dynamics of the orogenic crust of Naxos Island (Greece) at about 16 Myr, and propose a two-stage scenario of convection and diapirism. The timing and dimensions of the modelled gravitational instabilities not only corroborate previous estimates of the timing and dimensions associated with the dynamics of this hot crustal domain, but also provide preliminary insight into its rheological and tectonic context.
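To illustrate the volume-of-fluid idea referred to here (an algebraic colour-function method with no interface reconstruction) and the mass-conservation property highlighted in the abstract, the following 1-D donor-cell sketch advects a colour function conservatively. It is a toy scheme for illustration only, not the JADIM or OpenFOAM implementation.

```python
import numpy as np

def vof_upwind_step(C, u, dx, dt):
    """One explicit donor-cell (upwind) advection step for a VOF colour function C
    (0 <= C <= 1) on a periodic 1-D grid with constant velocity u."""
    flux = u * (C if u >= 0 else np.roll(C, -1))        # donor-cell face fluxes
    dC = -(flux - np.roll(flux, 1)) * dt / dx            # conservative update
    return np.clip(C + dC, 0.0, 1.0)

# toy example: advect a blob of "heavy" fluid across a periodic domain
nx = 200
dx, u = 1.0 / nx, 1.0
dt = 0.5 * dx / u                                        # CFL = 0.5
C = np.zeros(nx)
C[40:80] = 1.0
total0 = C.sum()
for _ in range(400):
    C = vof_upwind_step(C, u, dx, dt)
print(f"mass drift = {abs(C.sum() - total0):.2e} cells") # ~0: fluxes are conservative
```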
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Al-Azawei, Ahmed. "Predicting the Adoption of Social Media: An Integrated Model and Empirical Study on Facebook Usage". Interdisciplinary Journal of Information, Knowledge, and Management 13 (2018): 233–58. http://dx.doi.org/10.28945/4106.

Texto completo
Resumen
Aim/Purpose: This study aims at (1) extending an existing theoretical framework to gain a deeper understanding of the technology acceptance process, notably of the Facebook social network in an unexplored Middle East context, (2) investigating the influence of social support theory on Facebook adoption outside the work context, (3) validating the effectiveness of the proposed research model for enhancing Facebook adoption, and (4) determining the effect of individual differences (gender, age, experience, and educational level) amongst Facebook users on the associated path between the proposed model constructs. Background: Social networking sites (SNSs) are widely adopted to facilitate social interaction in the Web-based medium. As such, this present work contends that there is a gap in the existing literature, particularly in the Middle East context, as regards an empirical investigation of the relationship between the social, psychological, individual, and cognitive constructs potentially affecting users’ intention to accept SNSs. The present research, therefore, attempts to address this deficit. The relevance of this work is also considered in light of the scarcity of empirical evidence and lack of detailed research on the effect of social support theory with regard to SNS adoption in a non-work context. Methodology: A quantitative research approach was adopted for this study. The corresponding analysis was carried out based on structural equation modelling (SEM), more specifically, partial least squares (PLS), using SmartPLS software. Earlier research recommended the PLS approach for exploratory studies when extending an existing model or developing a new theory. PLS is also a superior method of complex causal modelling. Moreover, a multi-group analysis technique was adopted to investigate the moderating influence of individual differences. This method divides the dataset into two groups and then computes the cause and effect relationships between the research model variables for each set. The analysis of an in-person survey with a sample of Facebook users (N=369) subsequently suggested four significant predictors of continuous Facebook use. Contribution: This study contributes to the body of knowledge relating to SNSs by providing empirical evidence of constructs that influence Facebook acceptance in the case of a developing country. It raises awareness of antecedents of Facebook acceptance at a time when SNSs are widely used in Arab nations and worldwide. It also contributes to previous literature on the effectiveness of the unified theory of acceptance and use of technology (UTAUT) in different cultural contexts. Another significant contribution of this study is that it has reported on the relevance of social support theory to Facebook adoption, with this theory demonstrating a significant and direct ability to predict Facebook acceptance. Finally, the present research identified the significant moderating effect of individual differences on the associated path between the proposed model constructs. This means that regardless of technological development, individual gaps still appeared to exist among users. Findings: The findings suggested four significant predictors of continuous Facebook use, namely, (a) performance expectancy, (b) peer support, (c) family support, and (d) perceived playfulness. 
Furthermore, behavioral intention and facilitating conditions were found to be significant determinants of actual Facebook use, while individual differences were shown to moderate the path strength between several variables in the proposed research model. Recommendations for Practitioners: The results of the present study make practical contributions to SNS organizations. For example, this research revealed that users do not adopt Facebook because of its usefulness alone; instead, users’ acceptance is developed through a sequence of variables such as individual differences, psychological factors, and social and organizational beliefs. Accordingly, social media organizations should not consider only strategies that apply to just one context, but also to other contexts characterized by different beliefs, perceptions, and cultures. Moreover, the evidence provided here is that social support theory has a significant influence on SNSs acceptance. This suggests that social media organizations should provide services to support this concept. Furthermore, the significant positive effect of perceived playfulness on the intention to use SNSs implied that designers and organizations should pay further attention to the entertainment services provided by social networks. Recommendation for Researchers: To validate the proposed conceptual framework, researchers from different countries and cultures are invited to apply the model. Moreover, a longitudinal research design could be implemented to gather data over a longer period, in order to investigate whether users have changed their attitudes, beliefs, perceptions, and intention by the end of the study period. Other constructs, such as individual experience, compatibility, and quality of working life could be included to improve the power of the proposed model. Impact on Society: Middle Eastern Facebook users regard the network as an important tool for interacting with others. The increasing number of Facebook users renders it a tool of universal communication and enjoyment, as well as a marketing network. However, knowledge of the constructs affecting the application of SNSs is valuable for ensuring that such sites have the various functions required to suit different types of user. Future Research: It is hoped that our future research will build on the results of this work and attempt to provide further explanation of why users accept SNSs. In this future research, the proposed research model could be adopted to explore SNSs acceptance in other developing countries. Researchers might also include other factors of potential influence on SNSs acceptance. The constructs influencing acceptance of other social networks could then be compared to the present research findings and thus, the differences and similarities would be highlighted.
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Ladevèze, P., H. Lemoussu y P. A. Boucard. "A modular approach to 3-D impact computation with frictional contact". Computers & Structures 78, n.º 1-3 (noviembre de 2000): 45–51. http://dx.doi.org/10.1016/s0045-7949(00)00094-8.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Kiwelu, Henry M. "Moisture Induced Deformations in Glulam Members - Experiments and 3-D Finite Element Model". Tanzania Journal of Engineering and Technology 36, n.º 2 (31 de diciembre de 2017): 35–45. http://dx.doi.org/10.52339/tjet.v36i2.536.

Texto completo
Resumen
Experiments were performed on scaled glue laminated bending specimens to observe the time-dependent development of deformations during drying and wetting. Measurements determined changes in the average moisture content and external shape and dimensions between when specimens were placed into constant or variable climates. Alterations in the external shape and dimensions reflected changes in the average value and distribution of moisture and mechanosorptive creep in the glulam. The results are being used to develop a sequentially-coupled three-dimensional hygrothermal Finite Element (FE) model for predicting temporally varying internal strains and external deformations of drying or wetting solid wood structural components. The model implies temporally varying, and eventually steady, internal stress distributions in members based on elastic and creep compliances that represent wood within glulam as a continuous orthotropic homogenised material. Thus, predictions are consistent with smeared engineering stress analysis methods rather than being a physically correct analogue of how solid wood behaves. This paper discusses limitations of, and intended improvements to, the FE modelling. Complementary investigations are underway to address other aspects of the hygrothermal behaviour of structural members of wood and other materials (e.g. reinforced concrete) embedded within superstructure frameworks of multi-storey hybrid buildings.
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Monga, O., P. Garnier, V. Pot, E. Coucheney, N. Nunan, W. Otten y C. Chenu. "Simulating microbial degradation of organic matter in a simple porous system using the 3-D diffusion-based model MOSAIC". Biogeosciences 11, n.º 8 (22 de abril de 2014): 2201–9. http://dx.doi.org/10.5194/bg-11-2201-2014.

Texto completo
Resumen
Abstract. This paper deals with the simulation of microbial degradation of organic matter in soil within the pore space at a microscopic scale. The pore space was analysed with micro-computed tomography and described using a sphere network derived from a geometrical modelling algorithm. The biological model was improved relative to previous work in order to include the transformation of dissolved organic compounds and diffusion processes. We tested our model using experimental results of a simple substrate decomposition experiment (fructose) within a simple medium (sand) in the presence of different bacterial strains. Separate incubations were carried out in microcosms using five different bacterial communities at two different water potentials of −10 and −100 cm of water. We calibrated the biological parameters by means of experimental data obtained at high water content, and we tested the model without changing any parameters at low water content. As for the experimental data, our simulation results showed that the decrease in water content caused a decrease of the mineralization rate. The model was able to simulate the decrease of connectivity between substrate and microorganisms due to the decrease of water content.
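A toy sketch of the coupled diffusion-plus-uptake idea described above: substrate diffuses between connected pores and is mineralised only where biomass is present, so lower connectivity (drier conditions) slows degradation. This is purely illustrative; it is not the MOSAIC pore-network geometry or its biological model, and all values are invented.

```python
import numpy as np

def step(C, biomass, conn, k_uptake, dt):
    """One explicit time step of substrate dynamics on a small pore network.
    C        : substrate concentration per pore
    biomass  : microbial biomass indicator per pore (uptake only where > 0)
    conn     : symmetric matrix of diffusive conductances between pores
    k_uptake : first-order uptake rate constant
    """
    diffusion = conn @ C - conn.sum(axis=1) * C    # graph-Laplacian exchange
    uptake = k_uptake * biomass * C                # microbial mineralisation
    return C + dt * (diffusion - uptake)

# three pores in a chain; biomass only in pore 2; conductances shrink as pores drain
conn = np.array([[0.0, 0.5, 0.0],
                 [0.5, 0.0, 0.5],
                 [0.0, 0.5, 0.0]])
C = np.array([1.0, 0.0, 0.0])          # fructose initially in pore 0
biomass = np.array([0.0, 0.0, 1.0])
for _ in range(2000):
    C = step(C, biomass, conn, k_uptake=0.2, dt=0.01)
print(C)   # substrate is drawn down only as fast as it can diffuse to the biomass
```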
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Gun, H. "Boundary element analysis of 3-D elasto-plastic contact problems with friction". Computers & Structures 82, n.º 7-8 (marzo de 2004): 555–66. http://dx.doi.org/10.1016/j.compstruc.2004.02.002.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

McNorton, Joe R., Nicolas Bousserez, Anna Agustí-Panareda, Gianpaolo Balsamo, Margarita Choulga, Andrew Dawson, Richard Engelen, Zak Kipling y Simon Lang. "Representing model uncertainty for global atmospheric CO<sub>2</sub> flux inversions using ECMWF-IFS-46R1". Geoscientific Model Development 13, n.º 5 (15 de mayo de 2020): 2297–313. http://dx.doi.org/10.5194/gmd-13-2297-2020.

Texto completo
Resumen
Abstract. Atmospheric flux inversions use observations of atmospheric CO2 to provide anthropogenic and biogenic CO2 flux estimates at a range of spatio-temporal scales. Inversions require prior flux, a forward model and observation errors to estimate posterior fluxes and uncertainties. Here, we investigate the forward transport error and the associated biogenic feedback in an Earth system model (ESM) context. These errors can occur from uncertainty in the initial meteorology, the analysis fields used, or the advection schemes and physical parameterisation of the model. We also explore the spatio-temporal variability and flow-dependent error covariances. We then compare the error with the atmospheric response to uncertainty in the prior anthropogenic emissions. Although transport errors are variable, average total-column CO2 (XCO2) transport errors over anthropogenic emission hotspots (0.1–0.8 ppm) are comparable to, and often exceed, prior monthly anthropogenic flux uncertainties projected onto the same space (0.1–1.4 ppm). Average near-surface transport errors at three sites (Paris, Caltech and Tsukuba) range from 1.7 to 7.2 ppm. The global average XCO2 transport error standard deviation plateaus at ∼0.1 ppm after 2–3 d, after which atmospheric mixing significantly dampens the concentration gradients. Error correlations are found to be highly flow dependent, with XCO2 spatio-temporal correlation length scales ranging from 0 to 700 km and 0 to 260 min. Globally, the average model error caused by the biogenic response to atmospheric meteorological uncertainties is small (<0.01 ppm); however, this increases over high flux regions and is seasonally dependent (e.g. the Amazon; January and July: 0.24±0.18 ppm and 0.13±0.07 ppm). In general, flux hotspots are well-correlated with model transport errors. Our model error estimates, combined with the atmospheric response to anthropogenic flux uncertainty, are validated against three Total Carbon Column Observing Network (TCCON) XCO2 sites. Results indicate that our model and flux uncertainty account for 21 %–65 % of the total uncertainty. The remaining uncertainty originates from additional sources, such as observation, numerical and representation errors, as well as structural errors in the biogenic model. An underrepresentation of transport and flux uncertainties could also contribute to the remaining uncertainty. Our quantification of CO2 transport error can be used to help derive accurate posterior fluxes and error reductions in future inversion systems. The model uncertainty diagnosed here can be used with varying degrees of complexity and with different modelling techniques by the inversion community.
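For context, the standard linear-Gaussian update that such inversion systems build on is shown below, with the transport error entering through the observation-error covariance R alongside measurement error. This is a generic textbook sketch, not the ECMWF IFS inversion system; all numbers are invented.

```python
import numpy as np

def bayesian_flux_inversion(x_prior, B, H, y, R):
    """Analytical posterior for a linear-Gaussian flux inversion.
    x_prior : prior flux vector           B : prior flux error covariance
    H       : Jacobian of the transport model (n_obs x n_fluxes)
    y       : observed concentrations     R : observation-error covariance
              (measurement AND model transport error)
    """
    S = H @ B @ H.T + R                           # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)                # gain matrix
    x_post = x_prior + K @ (y - H @ x_prior)      # posterior fluxes
    A_post = B - K @ H @ B                        # posterior flux covariance
    return x_post, A_post

# toy example: 3 flux regions observed through 2 column-average measurements
x_prior = np.array([10.0, 5.0, 2.0])
B = np.diag([4.0, 1.0, 0.25])
H = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3]])
R = np.diag([0.2**2 + 0.3**2] * 2)                # measurement^2 + transport^2 [ppm^2]
y = np.array([8.5, 5.1])
x_post, A_post = bayesian_flux_inversion(x_prior, B, H, y, R)
```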
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Mansoor, Wafaa, Graeme Hocking y Duncan Farrow. "Modelling of hydrogen diffusion in the retina". ANZIAM Journal 61 (7 de julio de 2020): C119—C136. http://dx.doi.org/10.21914/anziamj.v61i0.14995.

Texto completo
Resumen
A simple mathematical model for diffusion of hydrogen within the retina has been developed. The model consists of three, well-mixed, one dimensional layers that exchange hydrogen via a diffusion process. A Fourier series method is applied to compute the hydrogen concentration. The effect of important parameters is examined and discussed. The results may contribute to an understanding of the hydrogen clearance technique to estimate blood flow. A two dimensional numerical method for the hydrogen diffusion is also presented. It is shown that the predominant features of the process are captured quite well by the simpler model.
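A minimal finite-difference sketch of 1-D diffusion through a stack of layers with different diffusivities, the kind of layered clearance problem described above. The layer thicknesses and coefficients are invented, and the paper itself uses a Fourier-series solution for well-mixed layers rather than this grid scheme; the sketch is only meant to make the setup tangible.

```python
import numpy as np

def diffuse_layered(c, D, dx, dt, nsteps):
    """Explicit FTCS step for 1-D diffusion with spatially varying diffusivity D
    (harmonic mean at cell faces) and zero-flux outer boundaries."""
    for _ in range(nsteps):
        D_face = 2.0 * D[:-1] * D[1:] / (D[:-1] + D[1:])   # interface diffusivities
        flux = -D_face * np.diff(c) / dx                    # Fick's law at interior faces
        flux = np.concatenate(([0.0], flux, [0.0]))         # no flux at the boundaries
        c = c - dt * np.diff(flux) / dx
    return c

# three layers of 30 cells each with different (illustrative) diffusivities
nx, dx, dt = 90, 1.0e-5, 1.0e-4
D = np.concatenate([np.full(30, 1.0e-9), np.full(30, 5.0e-10), np.full(30, 2.0e-9)])
c = np.zeros(nx)
c[:30] = 1.0                                                # hydrogen loaded in layer 1
c = diffuse_layered(c, D, dx, dt, nsteps=5000)
```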
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Thomson, Barbara M. "Nutritional modelling: distributions of salt intake from processed foods in New Zealand". British Journal of Nutrition 102, n.º 5 (14 de septiembre de 2009): 757–65. http://dx.doi.org/10.1017/s000711450928901x.

Texto completo
Resumen
The salt content of processed foods is important because of the high intake of Na by most New Zealanders. A database of Na concentrations in fifty-eight processed foods was compiled from existing and new data and combined with 24 h diet recall data from two national nutrition surveys (5771 respondents) to derive salt intakes for seven population groups. Mean salt intakes from processed foods ranged from 6·9 g/d for young males aged 19–24 years to 3·5 g/d for children aged 5–6 years. A total of ≥ 50 % of children aged 5–6 years, boys aged 11–14 years and young males aged 19–24 years had salt intakes that exceeded the upper limit for Na, calculated as salt (3·2–5·3 g/d), from processed foods only. Bread accounted for the greatest contribution to salt intake for each population group (35–43 % of total salt intake). Other foods that contributed 2 % or more and common across most age groups were sausage, meat pies, pizza, instant noodles and cheese. The Na concentrations of key foods have changed little over the 16-year period from 1987 to 2003 except for corned beef and whole milk that have decreased by 34 and 50 % respectively. Bread is an obvious target for salt reduction but the implication on iodine intake needs consideration as salt is used as a vehicle for iodine fortification of bread.
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Davis, C. P., K. F. Evans, S. A. Buehler, D. L. Wu y H. C. Pumphrey. "3-D polarised simulations of space-borne passive mm/sub-mm midlatitude cirrus observations: a case study". Atmospheric Chemistry and Physics 7, n.º 15 (7 de agosto de 2007): 4149–58. http://dx.doi.org/10.5194/acp-7-4149-2007.

Texto completo
Resumen
Abstract. Global observations of ice clouds are needed to improve our understanding of their impact on earth's radiation balance and the water-cycle. Passive mm/sub-mm has some advantages compared to other space-borne cloud-ice remote sensing techniques. The physics of scattering makes forward radiative transfer modelling for such instruments challenging. This paper demonstrates the ability of a recently developed RT code, ARTS-MC, to accurately simulate observations of this type for a variety of viewing geometries corresponding to operational (AMSU-B, EOS-MLS) and proposed (CIWSIR) instruments. ARTS-MC employs an adjoint Monte-Carlo method, makes proper account of polarisation, and uses 3-D spherical geometry. The actual field of view characteristics for each instrument are also accounted for. A 3-D midlatitude cirrus scenario is used, which is derived from Chilbolton cloud radar data and a stochastic method for generating 3-D ice water content fields. These demonstration simulations clearly demonstrate the beamfilling effect, significant polarisation effects for non-spherical particles, and also a beamfilling effect with regard to polarisation.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Hu, Z. M., J. W. Brooks y T. A. Dean. "Three-Dimensional Finite Element Modelling of Forging of a Titanium Alloy Aerofoil Sectioned Blade". Journal of Manufacturing Science and Engineering 121, n.º 3 (1 de agosto de 1999): 366–71. http://dx.doi.org/10.1115/1.2832690.

Texto completo
Resumen
Analysis of the forging of aerofoil blades, particularly those for aeroengines, is a complex operation because of the complicated three-dimensional geometry and the non steady-state contact between the workpiece and the die surface. In this paper a three-dimensional analysis of the hot forging of a titanium compressor blade, currently being made and used commercially, using the finite element method is presented and validated by the results from hot-die forging tests. The process is modelled assuming isothermal conditions, the main interest being the mechanics of the deformation. Abaqus/Explicit FE software has been used for process simulation in which the complicated die form, assumed rigid, has been modelled using smoothed Bezier surfaces which enable die/workpiece contact phenomena to be handled effectively. Predicted strain patterns in sections of forging, using either 2-D or 3-D analysis are compared. It is shown that the experimental forging load and the overall flow pattern are predicted by the analysis with good accuracy.
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Dwiningsih, Kusumawati y Nurlaily Yulia Safitri. "Development Interactive Multimedia Using 3D Virtual Modelling on Intermolecular Forces Matter". International Journal of Chemistry Education Research 3, n.º 3 (22 de mayo de 2020): 17–25. http://dx.doi.org/10.20885/ijcer.vol4.iss1.art3.

Texto completo
Resumen
One characteristic of chemistry is the abstractness of its concepts; thus, models or analogies are needed to represent the microscopic and symbolic representations that cannot be seen by the eye. Intermolecular forces is one chemistry topic that requires microscopic and symbolic representation. Modelling of the microscopic and symbolic representations can be used to give students a better learning experience. Against this background, the aim of this study is to determine the validity of interactive multimedia using 3D modelling. The 3D intermolecular forces multimedia contains explanations of intermolecular forces, equipped with 3D illustrations, animation, and text to describe the microscopic and symbolic representations. This study used the R&D method of Thiagarajan, and data gathering was conducted at SMA Negeri 1 Waru Sidoarjo. The instruments used to assess the validity of the multimedia are a content validity sheet and a construct validity sheet, assessed by three academics and practitioners. The average validity scores were 86.66% for content validity and 85.78% for construct validity, both of which are categorised as "Very Valid".
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Kacimov, Anvar, Ali Al-Maktoumi, Said Al-Ismaily y Hamed Al-Busaidi. "Moisture and temperature in a proppant-enveloped silt block of a recharge dam reservoir: Laboratory experiment and 1-D mathematical modelling". Journal of Agricultural and Marine Sciences [JAMS] 22, n.º 1 (17 de enero de 2018): 8. http://dx.doi.org/10.24200/jams.vol22iss1pp8-17.

Texto completo
Resumen
A mosaic 3-D cascade of parallelepiped-shaped silt blocks, which sandwich sand-filled cracks, has been discovered in the field and tested in lab experiments. Controlled wetting-drying of these blocks, collected from a dam reservoir, mimics field ponding-desiccation conditions of the topsoil layer subject to caustic solar radiation, high temperature and wind, typical in the Batinah region of Oman. In 1-D analytical modelling of a transient Richards' equation for vertical evaporation, the method of small perturbations is applied, assuming that the relative permeability is Averyanov's 3.5-power function of the moisture content and that capillary pressure is a given (measured) function. A linearized advective dispersion equation is solved with respect to the second term in the series expansion of the moisture content as a function of spatial coordinates and time. For a single block of finite thickness we solve a boundary value problem with a no-flow condition at the bottom and a constant moisture content at the surface. Preliminary comparisons with theta- and TDR-probes measuring the moisture content and temperature at several in-block points are made. Results corroborate that a 3-D heterogeneity of soil physical properties, in particular horizontal and vertical capillary barriers emerging on the interfaces between silt and sand, generates eco-niches with stored soil water compartments favourable for lush vegetation in desert conditions. Desiccation significantly increases the temperature in the blocks, and re-wetting of the blocks reduces the daily average and peak temperatures, the latter by almost 15°C. This is important for planning irrigation in smartly designed soil substrates and for the sustainability of wild plants in the region, where the topsoil peak temperature in the study area exceeds 70°C in summer while smartly structured soils maintain lush vegetation. The layer of dry top blocks acts as a thermal insulator for the subjacent layers of wet blocks that may host the root zone of woody species.
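A small sketch of the Averyanov-type 3.5-power relative permeability mentioned above, as it would enter an unsaturated (Richards-type) flow calculation. The residual and saturated moisture contents and the saturated conductivity are illustrative values, not those of the paper.

```python
def averyanov_kr(theta, theta_r=0.05, theta_s=0.45, exponent=3.5):
    """Averyanov-type relative permeability kr = Se**exponent, where Se is the
    effective saturation (theta_r and theta_s are illustrative)."""
    Se = (theta - theta_r) / (theta_s - theta_r)
    Se = min(max(Se, 0.0), 1.0)
    return Se ** exponent

def unsat_conductivity(theta, K_sat=1.0e-6):
    """Unsaturated hydraulic conductivity K(theta) = K_sat * kr(theta) [m/s]."""
    return K_sat * averyanov_kr(theta)

for theta in (0.10, 0.20, 0.30, 0.45):
    print(f"theta = {theta:.2f}  ->  K = {unsat_conductivity(theta):.2e} m/s")
```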
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Franceschini, A., P. Teatini, C. Janna, M. Ferronato, G. Gambolati, S. Ye y D. Carreón-Freyre. "Modelling ground rupture due to groundwater withdrawal: applications to test cases in China and Mexico". Proceedings of the International Association of Hydrological Sciences 372 (12 de noviembre de 2015): 63–68. http://dx.doi.org/10.5194/piahs-372-63-2015.

Texto completo
Resumen
Abstract. The stress variation induced by aquifer overdraft in sedimentary basins with shallow bedrock may cause rupture in the form of pre-existing fault activation or earth fissure generation. The process is causing major detrimental effects in many areas of China and Mexico. Ruptures yield discontinuities in both the displacement and stress fields that classic continuous finite element (FE) models cannot address. Interface finite elements (IEs), typically used in contact mechanics, may be of great help and are implemented herein to simulate the fault geomechanical behaviour. Two main approaches, i.e. Penalty and Lagrangian, are developed to enforce the contact condition on the element interface. The incorporation of IEs into a three-dimensional (3-D) FE geomechanical simulator shows that the Lagrangian approach is numerically more robust and stable than the Penalty approach, thus providing more reliable solutions. Furthermore, the use of a Newton-Raphson scheme to deal with the non-linear elasto-plastic fault behaviour allows for quadratic convergence. The FE-IE model is applied to investigate the likely ground rupture in realistic 3-D geologic settings. The case studies are representative of the City of Wuxi in the Jiangsu Province (China), and of the City of Queretaro, Mexico, where significant land subsidence has been accompanied by the generation of several earth fissures jeopardizing the stability and integrity of the overland structures and infrastructure.
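To contrast the two enforcement strategies named above, the following 1-D sketch treats a single frictionless contact constraint (gap g ≥ 0): a pure penalty traction versus an augmented-Lagrangian (Uzawa-style) multiplier update. It is purely schematic and is not the authors' 3-D interface-element formulation; in a real solver the mechanical problem is re-solved between augmentations so that the gap closes.

```python
def penalty_traction(gap, kappa=1.0e9):
    """Penalty enforcement: traction proportional to the penetration (negative gap).
    Leaves a small residual interpenetration of order traction/kappa."""
    penetration = max(-gap, 0.0)
    return kappa * penetration

def augmented_lagrangian_update(gap, lam, kappa=1.0e9):
    """One Uzawa augmentation: the multiplier lam absorbs the contact traction so
    the converged solution satisfies the constraint (nearly) exactly."""
    return max(lam + kappa * (-gap), 0.0)       # projected multiplier update

# toy usage: a node penetrating the fault plane by 1e-5 m
gap = -1.0e-5
t_penalty = penalty_traction(gap)               # depends directly on kappa
lam = 0.0
for _ in range(3):                              # a few augmentation sweeps
    lam = augmented_lagrangian_update(gap, lam) # gap would shrink between sweeps in practice
print(t_penalty, lam)
```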
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Della Bruna, Lorenza, Angela Adamo, Arjan Bik, Michele Fumagalli, Rene Walterbos, Göran Östlin, Gustavo Bruzual et al. "Studying the ISM at ∼10 pc scale in NGC 7793 with MUSE". Astronomy & Astrophysics 635 (marzo de 2020): A134. http://dx.doi.org/10.1051/0004-6361/201937173.

Texto completo
Resumen
Context. Studies of nearby galaxies reveal that around 50% of the total Hα luminosity in late-type spirals originates from diffuse ionised gas (DIG), which is a warm, diffuse component of the interstellar medium that can be associated with various mechanisms, the most important ones being "leaking" HII regions, evolved field stars, and shocks. Aims. Using MUSE Wide Field Mode adaptive optics-assisted data, we study the condition of the ionised medium in the nearby (D = 3.4 Mpc) flocculent spiral galaxy NGC 7793 at a spatial resolution of ∼10 pc. We construct a sample of HII regions and investigate the properties and origin of the DIG component. Methods. We obtained stellar and gas kinematics by modelling the stellar continuum and fitting the Hα emission line. We identified the boundaries of resolved HII regions based on their Hα surface brightness. As a way of comparison, we also selected regions according to the Hα/[SII] line ratio; this results in more conservative boundaries. Using characteristic line ratios and the gas velocity dispersion, we excluded potential contaminants, such as supernova remnants (SNRs) and planetary nebulae (PNe). The continuum-subtracted HeII map was used to spectroscopically identify Wolf-Rayet stars (WR) in our field of view. Finally, we computed electron densities and temperatures using the line ratios [SII]6716/6731 and [SIII]6312/9069, respectively. We studied the properties of the ionised gas through "BPT" emission line diagrams combined with the velocity dispersion of the gas. Results. We spectroscopically confirm two previously detected WR and SNR candidates and report the discovery of a further seven WR candidates, one SNR, and two PNe within our field of view. The resulting DIG fraction is between ∼27 and 42% depending on the method used to define the boundaries of the HII regions (flux brightness cut in Hα = 6.7 × 10⁻¹⁸ erg s⁻¹ cm⁻² or Hα/[SII] = 2.1, respectively). In agreement with previous studies, we find that the DIG exhibits enhanced [SII]/Hα and [NII]/Hα ratios and a median temperature that is ∼3000 K higher than in HII regions. We also observe an apparent inverse correlation between temperature and Hα surface brightness. In the majority of our field of view, the observed [SII]6716/6731 ratio is consistent within 1σ with ne < 30 cm⁻³, with an almost identical distribution for the DIG and HII regions. The velocity dispersion of the ionised gas indicates that the DIG has a higher degree of turbulence than the HII regions. Comparison with photoionisation and shock models reveals that, overall, the diffuse component can only partially be explained via shocks and that it is most likely consistent with photons leaking from density bounded HII regions or with radiation from evolved field stars. Further investigation will be conducted in a follow-up paper.
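For reference, the demarcation curves most commonly used on the [NII]/Hα BPT diagram (the empirical Kauffmann et al. 2003 line and the theoretical Kewley et al. 2001 "maximum starburst" line) are sketched below. These are the standard literature curves; the exact model grids and cuts used by the authors may differ.

```python
import numpy as np

def kauffmann_2003(log_nii_ha):
    """Empirical star-formation demarcation on the [NII]/Ha BPT diagram."""
    return 0.61 / (log_nii_ha - 0.05) + 1.3

def kewley_2001(log_nii_ha):
    """Theoretical 'maximum starburst' line on the [NII]/Ha BPT diagram."""
    return 0.61 / (log_nii_ha - 0.47) + 1.19

def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a spaxel from its line ratios (valid left of the curve asymptotes)."""
    if log_nii_ha < 0.05 and log_oiii_hb < kauffmann_2003(log_nii_ha):
        return "star-forming / photoionised"
    if log_nii_ha < 0.47 and log_oiii_hb < kewley_2001(log_nii_ha):
        return "composite"
    return "harder ionisation (e.g. shocks)"

print(bpt_class(-0.5, 0.0))   # typical HII-region-like ratios -> photoionised
```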
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Sullivan, Timothy. "Cell Shape and Surface Colonisation in the Diatom Genus Cocconeis—An Opportunity to Explore Bio-Inspired Shape Packing?" Biomimetics 4, n.º 2 (31 de marzo de 2019): 29. http://dx.doi.org/10.3390/biomimetics4020029.

Texto completo
Resumen
Optimal packing of 2- and 3-D shapes in confined spaces has long been of practical and theoretical interest, particularly as it has been discovered that rotatable ellipses (or ellipsoids in the 3-D case) can, for example, have higher packing densities than disks (or spheres in the 3-D case). Benthic diatoms, particularly those of the genus Cocconeis (Ehr.)—which are widely regarded as prolific colonisers of immersed surfaces—often have a flattened (adnate) cell shape and an approximately elliptical outline or "footprint" that allows them to closely contact the substratum. Adoption of this shape may give these cells a number of advantages as they colonise surfaces, such as a higher packing fraction for colonies on a surface for more efficient use of limited space, or an increased contact between individual cells when cell abundances are high, enabling the cells to minimize energy use and maximize packing (and biofilm) stability on a surface. Here, the outline shapes of individual diatom cells are measured using scanning electron and epifluorescence microscopy to discover if the average cell shape compares favourably with those predicted by theoretical modelling of efficient 2-D ellipse packing. It is found that the aspect ratios of measured cells in close association in a biofilm—which are broadly elliptical in shape—do indeed fall within the range theoretically predicted for optimal packing, but that the shape of individual diatoms also differs subtly from that of a true ellipse. The significance of these differences for optimal packing of 2-D shapes on surfaces is not understood at present, but may represent an opportunity to further explore bio-inspired design shapes for the optimal packing of shapes on surfaces.
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Li, Yong, Mark F. Seifert, Sun-Young Lim, Norman Salem y Bruce A. Watkins. "Bone mineral content is positively correlated to n-3 fatty acids in the femur of growing rats". British Journal of Nutrition 104, n.º 5 (27 de abril de 2010): 674–85. http://dx.doi.org/10.1017/s0007114510001133.

Texto completo
Resumen
The present study was conducted to determine whether provision of preformed dietary docosapentaenoic acid (DPAn-6) can replace DHA for normal long bone growth as assessed by dual-energy X-ray absorptiometry for mineral content (BMC). A newly modified artificial rearing method was employed to generate n-3 fatty acid-deficient rats. Except for the dam-reared (DR; 3·1 % α-linolenic acid) group, newborn pups were separated from their mothers at age 2 d and given artificial rat milk containing linoleic acid (LA), or LA supplemented with 1 % DHA (22 : 6n-3; DHA), 1 % DPAn-6 (DPA), or 1 % DHA plus 0·4 % DPAn-6 (DHA/DPA). The rats were later weaned onto similar pelleted diets. At adulthood, the rats were euthanised and bones (femur, tibia, and lumbar vertebrae) collected for tissue fatty acid analysis and bone mineral density (BMD) determination. The analyses showed that long bones such as femur and tibia in DPAn-6-treated rats contained higher DPAn-6 content and generally had the lowest BMC and BMD values. Hence, DPAn-6 did not replace DHA for normal bone growth and maximal BMC in femur, indicating an indispensable role of DHA in bone health. In conclusion, DHA accumulates in the osteoblast-rich and nerve-abundant periosteum of femur; DHA but not EPA appears to be a vital constituent of marrow and periosteum of healthy modelling bone; and both DHA and total n-3 PUFA strongly correlate to BMC.
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Yusoh, Rais, Md Azlin Md Said, Mohd Ashraf Mohamad Ismail y Mohd Firdaus Abdul Razak. "Subsurface Characterization Using Ground and Underwater Resistivity Techniques for Groundwater Abstraction". Applied Mechanics and Materials 802 (octubre de 2015): 629–33. http://dx.doi.org/10.4028/www.scientific.net/amm.802.629.

Texto completo
Resumen
A geophysical method was used to study the subsurface profile and investigate the existence of an aquifer at Jenderam Hilir, Selangor, Malaysia. The 2-D electrical resistivity technique was applied to determine the presence of an aquifer suitable for groundwater abstraction. Resistivity was measured with an ABEM SAS 4000 Terrameter and an ABEM Terrameter LS. The 2-D electrical-imaging resistivity data for each survey line were inverted and validated against borehole data, which showed the lithology: sandy clay to sandy silt sediments more than 3 m deep, composed of alternating layers of silt and sand. The aquifer potential lies mostly in silty sand zones, whose resistivity values fall within 60-800 ohm-m. Based on this interpretation, a potential water-bearing aquifer was located at a depth of 3 m and below, in good agreement with the borehole results. 2-D underwater resistivity survey lines were conducted across the ground and the river. The resistivity image was interpreted as silty sand under the river bed, indicating that the subsurface aquifer on land is in physical contact with the surface water. The results show that ground and underwater resistivity techniques can be used as an alternative method for finding a good location for groundwater abstraction.
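A tiny sketch of the final interpretation step described above: flagging cells of an inverted resistivity section whose values fall in the 60-800 ohm-m silty-sand window quoted in the abstract as potential aquifer zones. The thresholds come from the abstract; the section values are invented.

```python
import numpy as np

def flag_aquifer_zones(resistivity_ohm_m, lo=60.0, hi=800.0):
    """Boolean mask of model cells whose resistivity falls in the silty-sand
    window (60-800 ohm-m) interpreted as potential water-bearing zones."""
    r = np.asarray(resistivity_ohm_m)
    return (r >= lo) & (r <= hi)

# invented 2-D inverted section (rows = depth levels, columns = positions along the line)
section = np.array([[1200.0, 950.0, 400.0, 150.0],
                    [ 300.0, 120.0,  80.0,  40.0],
                    [  25.0,  70.0, 500.0,  20.0]])
mask = flag_aquifer_zones(section)
print(mask.sum(), "of", mask.size, "cells fall in the aquifer window")
```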
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Yuliati, Siti Rohmi y Ika Lestari. "HIGHER-ORDER THINKING SKILLS (HOTS) ANALYSIS OF STUDENTS IN SOLVING HOTS QUESTION IN HIGHER EDUCATION". Perspektif Ilmu Pendidikan 32, n.º 2 (10 de octubre de 2018): 181–88. http://dx.doi.org/10.21009/pip.322.10.

Texto completo
Resumen
Students of Elementary School Teacher Education programs must be able to have higher-order thinking skills (HOTS) so that they can train students to have HOTS through learning activities created when they have become elementary school teachers. This study aims to explain students' high-level thinking skills in solving HOTS-oriented questions in Instructional Evaluation courses. This study uses qualitative research methods with data collection techniques using cognitive test instruments in the form of descriptions. Data analysis techniques use simple descriptive statistics. The results showed the level of thinking ability of students in answering HOTS practice questions still needed improvement. Students who have high learning abilities are better at answering HOTS-oriented questions compared to students in the medium and low categories. Recommendations for future research are required learning modules that can facilitate learning activities that lead to HOTS so that students are skilled in answering and making HOTS-oriented practice questions for elementary school students when they become a teacher.
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Wade, David C., Nathan Luke Abraham, Alexander Farnsworth, Paul J. Valdes, Fran Bragg y Alexander T. Archibald. "Simulating the climate response to atmospheric oxygen variability in the Phanerozoic: a focus on the Holocene, Cretaceous and Permian". Climate of the Past 15, n.º 4 (5 de agosto de 2019): 1463–83. http://dx.doi.org/10.5194/cp-15-1463-2019.

Texto completo
Resumen
Abstract. The amount of dioxygen (O2) in the atmosphere may have varied from as little as 5 % to as much as 35 % during the Phanerozoic eon (541 Ma–present). These changes in the amount of O2 are large enough to have led to changes in atmospheric mass, which may alter the radiative budget of the atmosphere, leading to this mechanism being invoked to explain discrepancies between climate model simulations and proxy reconstructions of past climates. Here, we present the first fully 3-D numerical model simulations to investigate the climate impacts of changes in O2 under different climate states using the coupled atmosphere–ocean Hadley Centre Global Environmental Model version 3 (HadGEM3-AO) and Hadley Centre Coupled Model version 3 (HadCM3-BL) models. We show that simulations with an increase in O2 content result in increased global-mean surface air temperature under conditions of a pre-industrial Holocene climate state, in agreement with idealised 1-D and 2-D modelling studies. We demonstrate that the mechanism behind the warming is complex and involves a trade-off between a number of factors. Increasing atmospheric O2 leads to a reduction in incident shortwave radiation at the Earth's surface due to Rayleigh scattering, a cooling effect. However, there is a competing warming effect due to an increase in the pressure broadening of greenhouse gas absorption lines and dynamical feedbacks, which alter the meridional heat transport of the ocean, warming polar regions and cooling tropical regions. Case studies from past climates are investigated using HadCM3-BL and show that, in the warmest climate states in the Maastrichtian (72.1–66.0 Ma), increasing oxygen may lead to a temperature decrease, as the equilibrium climate sensitivity is lower. For the Asselian (298.9–295.0 Ma), increasing oxygen content leads to a warmer global-mean surface temperature and reduced carbon storage on land, suggesting that high oxygen content may have been a contributing factor in preventing a "Snowball Earth" during this period of the early Permian. These climate model simulations reconcile the surface temperature response to oxygen content changes across the hierarchy of model complexity and highlight the broad range of Earth system feedbacks that need to be accounted for when considering the climate response to changes in atmospheric oxygen content.
Los estilos APA, Harvard, Vancouver, ISO, etc.