Journal articles on the topic "Models and model making, 1952"


Consult the top 50 journal articles for your research on the topic "Models and model making, 1952".


You can also download the full text of each publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles across a wide variety of disciplines and organize your bibliography correctly.

1

Düppe, Till. "Arrow and Debreu De-Homogenized". Journal of the History of Economic Thought 34, no. 4 (November 14, 2012): 491–514. http://dx.doi.org/10.1017/s1053837212000491.

Abstract
To this day, the so-called Arrow–Debreu model represents a trademark of rigorous economic research—be it as a benchmark for extending the model, for weakening its assumptions, for structuring data sets, or for providing alternative models. But who should earn the credit? Arrow or Debreu? This essay presents “the making of” Arrow’s and Debreu’s joint article of 1954 as documented in their extensive letter exchange between their first contact in February 1952 and submission in May 1953. I show, pivotally, that Arrow and Debreu did not share the same interest in their work, that they played different roles, and drew different lessons from it. Moreover, neither Arrow nor Debreu can be identified with the way the profession would later refer to the Arrow–Debreu model. To the contrary, both, in their own ways, sought to counter what others perceived as limitations when placing their hopes in the model.
2

Ankeny, Rachel A., Sabina Leonelli, Nicole C. Nelson, and Edmund Ramsden. "Making Organisms Model Human Behavior: Situated Models in North-American Alcohol Research, since 1950". Science in Context 27, no. 3 (July 28, 2014): 485–509. http://dx.doi.org/10.1017/s0269889714000155.

Abstract
Argument: We examine the criteria used to validate the use of nonhuman organisms in North-American alcohol addiction research from the 1950s to the present day. We argue that this field, where the similarities between behaviors in humans and non-humans are particularly difficult to assess, has addressed questions of model validity by transforming the situatedness of non-human organisms into an experimental tool. We demonstrate that model validity does not hinge on the standardization of one type of organism in isolation, as is often the case with genetic model organisms. Rather, organisms are viewed as necessarily situated: they cannot be understood as a model for human behavior in isolation from their environmental conditions. Hence the environment itself is standardized as part of the modeling process, and model validity is assessed with reference to the environmental conditions under which organisms are studied.
3

Heiderman, Ryan R., and Mark J. Kimsey. "A species-specific, site-sensitive maximum stand density index model for Pacific Northwest conifer forests". Canadian Journal of Forest Research 51, no. 8 (August 2021): 1166–77. http://dx.doi.org/10.1139/cjfr-2020-0426.

Abstract
Maximum stand density index (SDIMAX) models were developed for important Pacific Northwest conifers of western Oregon and Washington, USA, based on site and species influences and interactions. Inventory and monitoring data from numerous federal, state, and private forest management groups were obtained throughout the region to ensure a wide coverage of site characteristics. These observations include information on tree size, number, and species composition. The effects and influence on the self-thinning frontier of plot-specific factors such as climate, topography, soils, and geology, as well as species composition, were evaluated based on geographic location using a multistep approach to analysis involving linear quantile mixed models, random forest, and stochastic frontier functions. The self-thinning slope of forest stands dominated by Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) was found to be –1.517 and that of stands dominated by western hemlock (Tsuga heterophylla (Raf.) Sarg.) was found to be –1.461, leading to regionwide modelled SDIMAX values at the 95th percentile of 1728 and 1952 trees per hectare, respectively. The regional model of site-specific SDIMAX will support forest managers in decision-making regarding density management and species selection to more efficiently utilize site resources toward healthy, productive forests.
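The self-thinning frontier described in this abstract is, in Reineke's formulation, an upper quantile of a straight line in log(density) versus log(mean diameter) space. The sketch below illustrates that core idea with plain quantile regression on synthetic stand data; it is not the authors' multistep pipeline (linear quantile mixed models, random forest, stochastic frontier functions), and the variable names and data are assumptions.

```python
# Illustrative sketch: estimating a self-thinning frontier as the 95th-percentile
# line of log(trees per hectare) against log(quadratic mean diameter).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
qmd = rng.uniform(10, 60, 500)                   # quadratic mean diameter, cm (synthetic)
log_tph = 12.0 - 1.5 * np.log(qmd) - rng.exponential(0.4, 500)  # stands scattered below the frontier
df = pd.DataFrame({"log_tph": log_tph, "log_qmd": np.log(qmd)})

fit = smf.quantreg("log_tph ~ log_qmd", df).fit(q=0.95)  # 95th-percentile frontier
a, b = fit.params["Intercept"], fit.params["log_qmd"]
print(f"self-thinning slope: {b:.3f}")                   # should recover roughly -1.5
print(f"SDImax at a 25.4 cm reference diameter: {np.exp(a + b * np.log(25.4)):.0f} trees/ha")
```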
4

Ramadhan, Ali J., Tufleuddin Biswas, Soumik Ray, S. R. Anjanawe, Deepa Rawat, Binita Kumari, Shikha Yadav et al. "Modeling and Forecasting of Coconut Area, Production, and Productivity Using a Time Series Model". BIO Web of Conferences 97 (2024): 00113. http://dx.doi.org/10.1051/bioconf/20249700113.

Abstract
The study aimed to compare ARIMA and Holt's models for predicting coconut metrics in Kerala. The coconut data series covers the period 1957 to 2019. Of this, 80% of the data (1957 to 2007) was treated as training data and the remaining 20% (2008 to 2019) as testing data. Candidate models were selected based on lower AIC and BIC values. Their accuracy was evaluated through error estimation on the testing data, revealing Holt's exponential, Holt's linear, and ARIMA (0,1,0) models as the best-fit choices for predicting coconut area, production, and productivity, respectively. Using these models, forecasts were then produced for 2020–2024, and the DM test confirmed their significant forecasting accuracy. This comprehensive analysis provides valuable insights into effective prediction models for coconut-related metrics, offering a foundation for informed decision-making and future projections.
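As a minimal illustration of the workflow this abstract describes (a chronological 80/20 split, an ARIMA(0,1,0) fit, Holt's linear method, and test-set error), here is a sketch on a synthetic annual series; the Kerala coconut data and the paper's full model-selection procedure are not reproduced.

```python
# Illustrative comparison of ARIMA(0,1,0) and Holt's linear method on a
# synthetic annual series, mimicking the train/test protocol described above.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import Holt

rng = np.random.default_rng(1)
years = pd.period_range("1957", "2019", freq="Y")
y = pd.Series(100 + np.cumsum(rng.normal(1.0, 3.0, len(years))), index=years)

train, test = y[:"2007"], y["2008":]          # roughly the 80%/20% split in the study

models = {
    "ARIMA(0,1,0)": ARIMA(train, order=(0, 1, 0)).fit(),
    "Holt linear": Holt(train).fit(),
}
for name, model in models.items():
    pred = model.forecast(len(test))
    rmse = float(np.sqrt(np.mean((pred.values - test.values) ** 2)))
    print(f"{name}: AIC = {model.aic:.1f}, test RMSE = {rmse:.2f}")
```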
5

Costillas, Juanita M., and Lorelie P. Duarte. "Modelling Predictors of English Proficiency via Reasoning Attributes among College Freshmen". Journal of Educational and Human Resource Development (JEHRD) 2 (December 10, 2014): 133–40. http://dx.doi.org/10.61569/av09wv04.

Abstract
This study determined the model that best predicts English proficiency among freshmen. Through secondary data analysis of 277 out of 554 randomly selected freshmen, the majority were found to be below average in both verbal and non-verbal reasoning attributes. Linear regression and general linear models show that the independent variables that best predict English proficiency are verbal comprehension, verbal reasoning, figural reasoning, sex, and course enrolled, specifically engineering courses. The study thus supports James (1950), Freud (1953), and Donges (2001), who pointed out the importance of reasoning attributes for students' mathematics skills. This is where the dual process and filtering observation theories are substantiated relative to predicting English proficiency.
6

Cooke, William N., Aneil K. Mishra, Gretchen M. Spreitzer, and Mary Tschirhart. "The Determinants of NLRB Decision-Making Revisited". ILR Review 48, no. 2 (January 1995): 237–57. http://dx.doi.org/10.1177/001979399504800203.

Abstract
The authors develop a model of NLRB decision-making that, unlike the models employed in previous studies, distinguishes between decision-making in more important, complex cases and less important, simpler cases. Using a representative sample of Board decisions over 1957–86, they find that in deciding the minority (20%) of disputes that were particularly important or complex, Board members were influenced by their personal preferences and those of Presidents who appointed them—a finding consistent with the results of previous studies. In the remaining cases (about 80%), however, Board members were influenced in their decisions by little more than the recommendations of regional offices and administrative law judges. Another finding that substantially modifies the conclusions of earlier studies is that Board members appear to have been highly influenced by their accountability to the public when deciding more important, complex cases.
7

McCarty, Nolan M., and Keith T. Poole. "An Empirical Spatial Model of Congressional Campaigns". Political Analysis 7 (1998): 1–30. http://dx.doi.org/10.1093/pan/7.1.1.

Abstract
Testing and estimating formal models of political behavior has not advanced as far as theoretical applications. One of the major literatures in formal theory is the spatial model of electoral competition, which has its origins in the work of Black (1948) and Downs (1957). These models are used to make predictions about the policy positions candidates take in order to win elections. A lack of data on these candidate positions, especially challengers who never serve in Congress, has made direct testing of these models on congressional elections difficult.

Recently, researchers have begun to incorporate campaign finance into the standard Downsian model. These models of position-induced contributions examine the tradeoff that candidates make between choosing positions favorable to interest group contributors and positions favorable to voters. A major premise of these models is that interest group contributions are based on the policy positions of candidates. This has been borne out empirically in the case of incumbents, but not challengers.

To test key hypotheses of these models, we develop a simple spatial model of position-induced campaign contributions where the PAC's decision to contribute or abstain from each race is a function of the policy distance between the PAC and the candidates. We use data from political action committee contributions in order to estimate the locations of incumbents, challengers and PACs. Our model reliably estimates the spatial positions as well as correctly predicts nearly 74 percent of the contribution and abstention decisions of the PACs. Conditional upon making a contribution, we correctly predict the contribution in 94 percent of the cases. These results are strong evidence for position-induced campaign contributions. Furthermore, our estimates of candidate positions allow us to address issues of platform convergence between candidates.
8

Collantes, Fernando. "Dairy Products and Shifts in Western Models of Food Consumption since 1950: A Spanish Perspective". Rural History 26, no. 2 (September 2, 2015): 249–68. http://dx.doi.org/10.1017/s0956793315000060.

Abstract
Through a case study of dairy products in Spain, this article discusses the evolution of what economist Louis Malassis called 'food consumption models' in the West from the Second World War. Two distinct consumption models are identified: a first model based on the massification of milk consumption, and a second model featuring decreasing dairy consumption, an increasing role for second-degree processed products, and the emergence of new consumer segmentations. Rather than a sudden shift from the first to the second model, there was a punctuated sequence comprising an intermediate transition period in the last two decades of the twentieth century. Using an evolutionary political economy approach, I argue that the key to this transition was a transformation in consumer preferences resulting not only from changes in nutritional discourse, but also from changes in the profit-making strategies of dairy agribusinesses and from the interaction of both trajectories of structural change with consumer agency.
9

Schembs, Katharina. "Staging Work in the Corporatist State. Visual Propaganda in Fascist Italy and Peronist Argentina (1922-1955)". Anuario de Historia de América Latina 58 (December 28, 2021): 270–314. http://dx.doi.org/10.15460/jbla.58.162.

Abstract
Starting in 1922, Benito Mussolini (1922-1943) reformed Italian labour relations by adopting corporatism. As such, he served as a model for many other heads of state in search of ways out of economic crisis. When the corporatist model spread throughout Latin America in the 1930s and 1940s, the Argentine president Juan Domingo Perón (1946-1955) drew significantly on the Italian precedent. Adhering to an aestheticised concept of politics and making use of modern mass media, both regimes advertised corporatism in their respective visual propaganda, in which the worker came to play a prominent role. The article analyses parallels and differences in the formation of political identities in fascist and Peronist visual media that under both corporatist regimes centred around work. Comparing different role models as they were designed for different members of society, I argue that – apart from gender roles where Peronism resorted to similarly traditional images – Peronist propaganda messages were more future-oriented and inclusive. Racist exclusions of parts of the population from the central worker identity that increasingly characterised fascist propaganda over the course of the 1930s were not adopted in Argentina after 1945. Instead, in state visual media the category of work in its inclusionary dimension served as a promise of belonging to the Peronist community.
10

Bhusal, Chhabi Lal. "Models and Algorithms of Abstract Flows in Evacuation Planning". Prāgyik Prabāha 11, no. 1 (June 12, 2023): 1–10. http://dx.doi.org/10.3126/pp.v11i1.55501.

Abstract
Flows over time generalize classical network flows by introducing a notion of time. Each arc is equipped with a transit time that specifies how long flow takes to traverse it, while flow rates may vary over time within the given edge capacities. Ford and Fulkerson's original 1956 max-flow/min-cut paper formulated max flow in terms of flows on paths. In 1974, Hoffman pointed out that Ford and Fulkerson's original proof was quite abstract and applied to a wide range of max-flow-like problems. In this abstract model we have capacitated elements and linearly ordered subsets of elements, called paths, that satisfy the switching property: when two paths P and Q cross at an element (node), there must be a path that is a subset of the first path up to the crossing element and a subset of the second path after the crossing element. Contraflow is a widely accepted solution approach that increases the flow and decreases the evacuation time, making traffic smooth during evacuation by reversing the required road directions from the risk areas to the safe places. In this paper, we integrate abstract flow with contraflow, give mathematical formulations of these models, and present efficient polynomial-time algorithms for solving the abstract contraflow problems.
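The effect of contraflow summarized here can be seen with an ordinary max-flow computation: reversing an inbound lane adds its capacity to the evacuation direction and increases the maximum flow. A toy sketch follows; the network, capacities, and node names are assumptions, and the paper's abstract-flow and flows-over-time machinery is not modeled.

```python
# Toy contraflow illustration: reversing the inbound lane (safe -> a) lets its
# capacity serve the evacuation direction, raising the max flow from 4 to 10.
import networkx as nx

G = nx.DiGraph()
G.add_edge("risk", "a", capacity=10)
G.add_edge("a", "safe", capacity=4)
G.add_edge("safe", "a", capacity=6)      # inbound lane, idle during evacuation

base, _ = nx.maximum_flow(G, "risk", "safe")

H = nx.DiGraph()
H.add_edge("risk", "a", capacity=10)
H.add_edge("a", "safe", capacity=4 + 6)  # own capacity plus the reversed lane
contra, _ = nx.maximum_flow(H, "risk", "safe")

print(f"max flow without contraflow: {base}; with contraflow: {contra}")
```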
11

Firmansyah, Boy, and Nuraini Purwandari. "Utilization of Big Data Technology in the Analysis of Academic Data for Students of the Faculty of Computer Science IBI Kosgoro 1957 for Decision Making". International Journal of Advanced Technology and Social Sciences 2, no. 2 (February 29, 2024): 207–26. http://dx.doi.org/10.59890/ijatss.v2i2.1325.

Abstract
This research discusses the use of Big Data technology in analyzing student academic data at the IBI Kosgoro 1957 Faculty of Computer Science, with the main aim of optimizing the decision-making process. The main focus of the article is to create a predictive model that can predict student academic success based on extensive data analysis. The research steps involve collecting and processing academic data, including grades, number of courses taken, and other variables that may influence student performance. The collected data are then used to train predictive models using machine learning techniques. The predictive model that is built aims to provide decision-making recommendations to academics. By utilizing Big Data, this article explores deep insights into academic patterns that may be difficult to detect with conventional methods. It is hoped that the research results can make a positive contribution to increasing the efficiency of academic management and help related parties design more targeted intervention strategies. In addition, it is hoped that the implementation of this predictive model can support efforts to increase student academic success at the IBI Kosgoro 1957 Faculty of Computer Science.
12

Shafiei Shiva, Javad, and David G. Chandler. "Projection of Future Heat Waves in the United States. Part I: Selecting a Climate Model Subset". Atmosphere 11, no. 6 (June 3, 2020): 587. http://dx.doi.org/10.3390/atmos11060587.

Abstract
The widespread increase in global temperature is driving more frequent and more severe local heatwaves within the contiguous United States (CONUS). General circulation models (GCMs) show increasing, but spatially uneven trends in heatwave properties. However, the wide range of model outputs raises the question of the suitability of this method for indicating the future impacts of heatwaves on human health and well-being. This work examines the fitness of 32 models from CMIP5 and their ensemble median to predict a set of heatwave descriptors across the CONUS, by analyzing their capabilities in the simulation of historical heatwaves during 1950–2005. Then, we use a multi-criteria decision-making tool and rank the overall performance of each model for 10 locations with different climates. We found GCMs have different capabilities in the simulation of historical heatwave characteristics. In addition, we observed similar performances for GCMs over the areas with a partially similar climate. The ensemble model showed better performance in simulation of historical heatwave intensity in some locations, while other individual GCMs represented heatwave time-related components more similar to observations. These results are a step towards the use of contemporary weather models to guide heatwave impact predictions.
13

Wang, Muyin, James E. Overland, Vladimir Kattsov, John E. Walsh, Xiangdong Zhang, and Tatyana Pavlova. "Intrinsic versus Forced Variation in Coupled Climate Model Simulations over the Arctic during the Twentieth Century". Journal of Climate 20, no. 6 (March 15, 2007): 1093–107. http://dx.doi.org/10.1175/jcli4043.1.

Abstract
There were two major multiyear, Arctic-wide (60°–90°N) warm anomalies (>0.7°C) in land surface air temperature (LSAT) during the twentieth century, between 1920 and 1950 and again at the end of the century after 1979. Reproducing this decadal and longer variability in coupled general circulation models (GCMs) is a critical test for understanding processes in the Arctic climate system and increasing the confidence in the Intergovernmental Panel on Climate Change (IPCC) model projections. This study evaluated 63 realizations generated by 20 coupled GCMs made available for the IPCC Fourth Assessment for their twentieth-century climate in coupled models (20C3M) and corresponding control runs (PIcntrl). Warm anomalies in the Arctic during the last two decades are reproduced by all ensemble members, with considerable variability in amplitude among models. In contrast, only eight models generated warm anomaly amplitude of at least two-thirds of the observed midcentury warm event in at least one realization, but not its timing. The durations of the midcentury warm events in all the models are decadal, while that of the observed was interdecadal. The variance of the control runs in nine models was comparable with the variance in the observations. The random timing of midcentury warm anomalies in 20C3M simulations and the similar variance of the control runs in about half of the models suggest that the observed midcentury warm period is consistent with intrinsic climate variability. Five models were considered to compare somewhat favorably to Arctic observations in both matching the variance of the observed temperature record in their control runs and representing the decadal mean temperature anomaly amplitude in their 20C3M simulations. Seven additional models could be given further consideration. Results support selecting a subset of GCMs when making predictions for future climate by using performance criteria based on comparison with retrospective data.
14

Rupp, T. Scott, Xi Chen, Mark Olson, and A. David McGuire. "Sensitivity of Simulated Boreal Fire Dynamics to Uncertainties in Climate Drivers". Earth Interactions 11, no. 3 (January 1, 2007): 1–21. http://dx.doi.org/10.1175/ei189.1.

Abstract
Projected climatic warming has direct implications for future disturbance regimes, particularly fire-dominated ecosystems at high latitudes, where climate warming is expected to be most dramatic. It is important to ascertain the potential range of climate change impacts on terrestrial ecosystems, which is relevant to making projections of the response of the Earth system and to decisions by policymakers and land managers. Computer simulation models that explicitly model climate–fire relationships represent an important research tool for understanding and projecting future relationships. Retrospective model analyses of ecological models are important for evaluating how to effectively couple ecological models of fire dynamics with climate system models. This paper uses a transient landscape-level model of vegetation dynamics, Alaskan Frame-based Ecosystem Code (ALFRESCO), to evaluate the influence of different driving datasets of climate on simulation results. Our analysis included the use of climate data based on first-order weather station observations from the Climate Research Unit (CRU), a statistical reanalysis from the NCEP–NCAR reanalysis project (NCEP), and the fifth-generation Pennsylvania State University–NCAR Mesoscale Model (MM5). Model simulations of annual area burned for Alaska and western Canada were compared to historical fire activity (1950–2000). ALFRESCO was only able to generate reasonable simulation results when driven by the CRU climate data. Simulations driven by the NCEP and MM5 climate data produced almost no annual area burned because of substantially colder and wetter growing seasons (May–September) in comparison with the CRU climate data. The results of this study identify the importance of conducting retrospective analyses prior to coupling ecological models of fire dynamics with climate system models. The authors' suggestion is to develop coupling methodologies that involve the use of anomalies from future climate model simulations to alter the climate data of more trusted historical climate datasets.
15

Al-Otaibi, Shaha, Amjad Rehman, Muhammad Mujahid, Sarah Alotaibi, and Tanzila Saba. "Efficient-gastro: optimized EfficientNet model for the detection of gastrointestinal disorders using transfer learning and wireless capsule endoscopy images". PeerJ Computer Science 10 (March 11, 2024): e1902. http://dx.doi.org/10.7717/peerj-cs.1902.

Abstract
Gastrointestinal diseases cause around two million deaths globally. Wireless capsule endoscopy is a recent advancement in medical imaging, but manual diagnosis is challenging due to the large number of images generated. This has led to research into computer-assisted methodologies for diagnosing these images. Endoscopy produces thousands of frames for each patient, making manual examination difficult, laborious, and error-prone. An automated approach is essential to speed up the diagnosis process, reduce costs, and potentially save lives. This study proposes transfer learning-based efficient deep learning methods for detecting gastrointestinal disorders from multiple modalities, aiming to detect gastrointestinal diseases with superior accuracy and reduce the efforts and costs of medical experts. The Kvasir eight-class dataset was used for the experiment, where endoscopic images were preprocessed and enriched with augmentation techniques. An EfficientNet model was optimized via transfer learning and fine-tuning, and the model was compared to the most widely used pre-trained deep learning models. The model's efficacy was tested on another independent endoscopic dataset to prove its robustness and reliability.
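A rough Keras sketch of the transfer-learning setup this abstract describes: an ImageNet-pretrained EfficientNet backbone with a new eight-class head (Kvasir has eight classes), trained frozen and then fine-tuned at a lower learning rate. Input size, dropout, and optimizer settings are assumptions, not the paper's configuration.

```python
# Illustrative transfer learning: frozen EfficientNet backbone, new 8-class head.
import tensorflow as tf

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                      # freeze backbone for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(8, activation="softmax"),  # 8 Kvasir classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# ... train the head on the endoscopy images here ...

# Fine-tuning step: unfreeze the backbone and continue at a lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```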
16

Massironi, Carlo. "Philip Fisher's sense of numbers". Qualitative Research in Financial Markets 6, no. 3 (November 10, 2014): 302–31. http://dx.doi.org/10.1108/qrfm-01-2013-0004.

Abstract
Purpose – This paper aims to propose an account of the use of numbers and mathematical formulae and, more generally, of the quantitative aspects in the qualitative equity valuation model of the American investor Philip A. Fisher, who is considered to be one of the fathers of the qualitative equity valuation models. Design/methodology/approach – A conceptual analysis was conducted (Glasersfeld, 1992) of the four volumes published by Fisher between 1954 and 1980 (1958, 1960, 1975, 1980) in relation to his equity valuation process. On the basis of this analysis, a modelization of this author's perspective on quantitative instruments was built. Findings – A modelization to use quantitative data in a qualitative equity valuation model that is sufficiently detailed and useful for an asset manager is proposed. Originality/value – What is proposed is a qualitative analysis of quantitative elements in the thought of a qualitative author on the subject of equity valuation. It is believed that this paper could be of interest to all those who use or are involved in the development of qualitative models of equity valuation or business valuation. This work is also an example of how conceptual analysis – generally employed in the field of mathematics education research – can be used to build descriptive models of decision-making processes of individual investors, models designed to enable the reproduction/approximation of the conceptual operations of the investor.
17

Pearson, Alastair William. "‘Heaping Offa upon Pelion, and Olympus upon Offa’: An assessment of the role of model making in the development of relief portrayal from 1780 to 1900". Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-292-2019.

Abstract
By 1800, national surveys had become a priority for regimes around Europe, keen to centralise government and secure territories during a period of significant political upheaval. Military requirements were paramount, but the representation of relief remained woefully inadequate. Commanders, not content with simple rough impressions of relief, demanded effective representations from which absolute altitudes and gradients could be derived. However, innovative methods of relief depiction were unlikely to be spearheaded by new national mapping institutions, already committed to long-term mapping programmes. Conversely, for those independent cartographers and model makers, unfettered by the constraints that characterised national institutions, the pursuit of the optimum depiction of relief became a preoccupation verging on obsession. Inspired by early map and model makers, Swiss, German and Austrian cartographers embarked on a phase of developing more artistic, naturalistic means to create an illusion of the third dimension on the two-dimensional face of the map. Chromolithography had made possible the replacement of hachures by shading tones and the production of multicolour printed maps. As a result, a wide variety of maps appeared during the second half of the 19th century with hypsometric tints generating images of naturalistic and symbolic landscapes. Alternative and often competing methods of assigning colour in sequence were developed, most notably in central Europe. This culminated in the publication of Schattenplastik and Farbenplastik in 1898 in Vienna by Karl Peucker (1859–1940), a work that injected new life and debate into the pursuit of an optimum colour sequence for layered relief maps that would last well into the next century.

This paper aims to assess the role of model making in initiating and fuelling a period of experimentation and development of relief portrayal. The increasing fascination with the natural wonders of the world, combined with the growth of Alpine tourism, kick-started a period of private enterprise in which the production of relief models became a highly valued activity. Starting with the remarkable model of the Relief of Central Switzerland by Franz Ludwig Pfyffer von Wyher (1716–1802), through the exploits of Joachim Eugen Müller (1752–1833) (Figure 1), to the later models crafted by Xaver Imfeld (1853–1909), Simon Simon (1857–1925) and Fridolin Becker (1854–1922), this period witnessed a level of artistry and craftsmanship that has arguably never been surpassed.

Opportunity is taken to assess the accuracy of one of the key models produced by Joachim Eugen Müller. This clearly demonstrates that early model making achieved standards of accuracy that were extraordinary for the time. Of course, such feats were not the preserve of European model makers. For example, readers of reports and newspaper articles from expeditions to the interior of the United States had thrilled at the photographs, drawings, sketches and maps of Niagara Falls, Yosemite Valley and the Grand Canyon. No sooner had John Wesley Powell completed his expedition to the Grand Canyon in 1874 and published a detailed report, than its true magnificence was brought to public attention through a model of the Grand Canyon constructed by Edwin Howell in 1875 (McCalmont, 2015).

The nineteenth century was characterized by great endeavour and craftsmanship that fashioned some of the most remarkable and visually stunning maps ever published. This paper pulls together the various strands of this complex story into a coherent narrative and assesses the role of model makers in underpinning this 'golden age'.
18

Urban, Timothy L. "Home advantage in elimination games and the NBA play-in tournament". International Sports Studies 45, no. 1 (September 14, 2023): 42–58. http://dx.doi.org/10.30819/iss.45-1.05.

Abstract
Some of the most exciting contests in US professional basketball are the win-or-go-home game sevens in the playoffs. The National Basketball Association instituted a play-in tournament for the 2021 and 2022 playoffs consisting solely of one-game series, which has sparked considerable controversy among the league's executives and players. To understand the effect of the play-in tournament on playoff participation, we developed a model to determine the home-court advantage in elimination games. Various solution techniques—including the log-binomial and robust Poisson regression models—are used to estimate the model parameters using elimination-game data from the 1955–2019 playoffs. These models are appropriate alternatives to logistic regression as probabilistic classifiers with dichotomous response variables and provide risk ratios (in terms of probabilities) that are easier to interpret for someone unfamiliar with odds ratios. Results indicate that the home-court advantage for equally matched teams would be in the 0.50 to 0.55 range; when considering that the games are played at the home arena of the team with the better regular-season record, the home team is expected to win 65 per cent of elimination playoff games. These models can be used to estimate the likelihood of each play-in participant making the playoffs.
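Here is a minimal sketch of the two estimation techniques named above on synthetic game data: a log-binomial GLM, whose exponentiated coefficients are risk ratios, and a Poisson GLM with robust standard errors, the usual fallback when the log-binomial fit fails to converge. The data-generating assumptions are illustrative only.

```python
# Illustrative log-binomial and robust Poisson models for a binary outcome
# (home-team win); exp(coefficient) is interpreted as a risk ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
better_record = rng.integers(0, 2, n)          # 1 = home team had better record
p = 0.52 + 0.13 * better_record                # assumed true win probabilities
win = rng.binomial(1, p)
X = sm.add_constant(better_record.astype(float))

# Log-binomial: binomial family with a log link (can fail to converge on some data).
logbin = sm.GLM(win, X, family=sm.families.Binomial(sm.families.links.Log())).fit()

# Robust Poisson: Poisson family with heteroskedasticity-robust (HC0) errors.
poisson = sm.GLM(win, X, family=sm.families.Poisson()).fit(cov_type="HC0")

print("risk ratio (log-binomial):", float(np.exp(logbin.params[1])))
print("risk ratio (robust Poisson):", float(np.exp(poisson.params[1])))
```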
19

Raval, Amit D., Jordi Casanellas, Orsolya Lunacsek, Niculae Constantinovici, and Per Sandstrom. "Prediction model for real-world survival in men with castration-resistant prostate cancer and bone metastases in the United States: A real-world database study." JCO Oncology Practice 19, no. 11_suppl (November 2023): 507. http://dx.doi.org/10.1200/op.2023.19.11_suppl.507.

Abstract
Background: Bone metastases (bm) are common in men with castration-resistant prostate cancer (CRPC) and are associated with poor prognosis. Therefore, identifying prognostic factors for survival is key to aid decision-making. While several models exist for predicting survival in men with mCRPC, most were derived using randomized controlled trial (RCT) data, which may have limited application to real-world (RW) populations, and utilized traditional linear methods. We aimed to develop and validate a prediction model of real-world survival (rwOS) in men with bmCRPC using a national RW electronic medical record (EMR) database with traditional and advanced machine learning (ML) methods. Methods: A retrospective cohort of men diagnosed with bmCRPC between 2010 and 2021 was identified using the Optum EMR. rwOS was identified as documented evidence of death in the EMR, derived through either the Social Security Death Index or the EMR-reported death date. Independent variables were demographics (age, race), clinical conditions (Charlson comorbidity index, pain, prior phases of PC), PC-related medications, and laboratory parameters during a 6-month baseline period. A dynamic model was used to predict survival at 1, 2, and 3 years after diagnosis of bmCRPC using traditional (logistic, Cox regression) and ML (light gradient boosting (LGB)) models. Training, calibration, and validation were performed by splitting the data and utilizing 5-fold cross-validation on the training sample and calibration plots. Diagnostic performances were evaluated using area under the curve (AUC), precision-recall curves (PRC), and calibration curves across the three models. Results: The study cohort included 4,097 men with a median age of 76 years (interquartile range: 68-82), predominantly white (81.2%), and residing in the Midwest (48.2%). Over a median follow-up period of 19.2 months, 2,220 (54.2%) men died, with a median rwOS of 31.0 months (95% CI: 29.6-32.9). The AUCs for the Cox, logistic, and LGB models at 1 year were well within the acceptable range (≥0.70), at 0.78, 0.79, and 0.81, respectively. Similar findings were observed at the 2- and 3-year landmark analyses, with marginal improvement in model performance with LGB. Top predictors of rwOS were consistent across all time points in the LGB models and included laboratory parameters (prostate-specific antigen level, alkaline phosphatase, albumin, hemoglobin), age, presence of pain, transition from mHSPC to mCRPC, surgical castration, and baseline use of androgen receptor inhibitors. Conclusions: Study findings highlight consistency in the predictors of rwOS in a large cohort of men with bmCRPC using a national EMR database. ML-based models predicted rwOS with modest performance improvement compared to traditional models and could provide rwOS prediction to aid treatment decision-making.
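As a sketch of the landmark-style ML classification described above (predict death within one year, evaluate with AUC), the snippet below fits a gradient-boosting classifier on synthetic data; the features, cohort, and tuning are assumptions, not the Optum EMR study.

```python
# Illustrative 1-year mortality classifier with gradient boosting, scored by AUC.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 4000
X = np.column_stack([
    rng.normal(70, 8, n),        # age (synthetic)
    rng.lognormal(3, 1, n),      # PSA (synthetic)
    rng.normal(13, 2, n),        # hemoglobin (synthetic)
])
logit = -8 + 0.06 * X[:, 0] + 0.01 * X[:, 1] - 0.15 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # died within 1 year (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LGBMClassifier(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)
print("1-year AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```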
20

Nakaegawa, Tosiyuki. "High-Performance Computing in Meteorology under a Context of an Era of Graphical Processing Units". Computers 11, no. 7 (July 13, 2022): 114. http://dx.doi.org/10.3390/computers11070114.

Abstract
This short review shows how innovative processing units—including graphical processing units (GPUs)—are used in high-performance computing (HPC) in meteorology, introduces current scientific studies relevant to HPC, and discusses the latest topics in meteorology accelerated by HPC computers. The current landscape of HPC is distinctly complicated in both hardware and software terms and changes in fast cascades. It is difficult for beginners to understand and follow; they need to overcome the obstacle of catching up on HPC information and connecting it to their studies. HPC systems have accelerated weather forecasts with physics-based models since Richardson's dream in 1922. Meteorological scientists and model developers have written the codes of the models by making the most of the latest HPC technologies available at the time. Several of the leading HPC systems used for weather forecast models are introduced. Each institute chose an HPC system from many possible alternatives to best match its purposes. Six selected topics in high-performance computing in meteorology are also reviewed: floating points; spectral transform in global weather models; heterogeneous computing; exascale computing; co-design; and data-driven weather forecasts.
21

Denroche, Charles. "The Three Grammars and the sign". Review of Cognitive Linguistics 19, no. 1 (April 28, 2021): 206–31. http://dx.doi.org/10.1075/rcl.00081.den.

Abstract
This article presents an original three-component model of the linguistic sign. It shares with the established triadic models of Peirce (1955 [1897]) and Ogden and Richards (1923/1949) the identification of thought, word and thing as essential components, but differs in being linear, with thought and thing at opposite poles. It is argued that this arrangement reflects the way the components of the sign relate to reality and thereby serves well as an explanatory tool for linguistic research. The model is further modified at each of the ontological realms using concepts from cognitive linguistics, renamed cognition, language and reality. The new model is employed as a research tool in two case studies: one illustrates its use in making sense of the complex field of language grammar; the other does the same for figurative language – metaphor and metonymy. The article's conclusions include that interrogating established cornerstones of linguistic theory in the light of new theory can lead to the development of improved research tools.
22

Bott, Simon R. J., Mark Emberton, and Matthew R. Sydes. "Prostate Cancer Staging Tables—A Predictive Model for the UK". British Journal of Medical and Surgical Urology 1, no. 3 (November 2008): 107–19. http://dx.doi.org/10.1016/j.bjmsu.2008.08.002.

Abstract
Introduction: The use of accurate risk stratification is a prerequisite to informed decision-making when considering potentially curative treatments for prostate cancer. Most models are derived from cases managed in the United States. The validity of these methods may be compromised when used on a population other than that used for generating the predicted outcomes. We present predictive tables derived from the observed outcomes of men treated by radical prostatectomy in the United Kingdom. Methods: Using logistic regression, a pilot study identified the best predictors of pathological stage from eight pre-operative variables. All full BAUS members were asked to submit their consecutive radical prostatectomy (RP) patients' age, biopsy Gleason score, pre-operative PSA, number of biopsy cores, number of biopsy cores containing cancer (% positive cores), and pathological stage. Predictive tables were constructed using these data to predict pT2, pT3a, or pT3b/4/N1 disease at radical prostatectomy and assessed using internal cross-validation methods. Results: 1912 patients undergoing radical prostatectomy by 39 consultant urologists in 19 centres were included. The impact of age was equivocal, but a robust model was developed to predict outcomes based on Gleason sum score, pre-operative PSA, and positive biopsy cores. A series of tables has been constructed to allow for use in practice. Conclusions: In this study we have generated a validated prostate cancer predictive table derived entirely from a UK surgical cohort and which is simple to use.
23

Shen, Ciyue, Collin Schlager, Deepta Rajan, Maryam Pouryahya, Mary Lin, Victoria Mountain, Ilan Wapinski et al. "Abstract 1922: Application of an interpretable graph neural network to predict gene expression signatures associated with tertiary lymphoid structures in histopathological images". Cancer Research 82, no. 12_Supplement (June 15, 2022): 1922. http://dx.doi.org/10.1158/1538-7445.am2022-1922.

Abstract
Background: Tertiary lymphoid structures (TLS) are vascularized lymphocyte aggregates in the tumor microenvironment (TME) that correlate with better patient outcomes. Previous studies identified a 12-chemokine gene expression signature associated with disease progression and the type and degree of TLS. These signatures could provide insight important for clinical decision-making during pathologic evaluation, but predicting gene expression from whole slide images (WSI) may be impeded by low prediction accuracy and lack of interpretability. Here we report an artificial intelligence (AI)-based, state-of-the-art workflow to predict the 12-chemokine TLS gene signature from lung cancer WSI and identify histological features relevant to model predictions. Methods: Models were trained using 538 cases of paired lung cancer WSI and mRNA-seq expression data (The Cancer Genome Atlas). Cell and tissue classifiers based on convolutional neural networks (CNN) were trained on WSI, and a graph neural network (GNN) model that leverages the relative spatial arrangement of the CNN-identified cells and tissues was used to predict gene expression. GNN predictions of TLS signature genes were compared with the predictions of models trained using hand-crafted, task-specific features (TLS feature models) describing the number, size, and cellular composition of identified TLS. The Pearson correlation coefficient was used to assess the accuracy of GNN and TLS feature model predictions. GNNExplainer (Ying, R., et al. 2019. arXiv:1903.03894v4), a tool that simultaneously identifies a subgraph and a subset of node features important for predictions, was applied to interpret the GNN model predictions. Results: GNN model predictions show reasonable accuracy: GNN models significantly predicted mRNA expression of all 12 genes (p < 0.05), and the predicted expression of six genes was moderately correlated with ground-truth measurements (Pearson r > 0.5). The correlation of GNN predictions was higher than that of the TLS feature models for all 12 signature genes. The GNNExplainer identified relevant features including the mean and standard deviation of lymphocyte count and the fraction of lymphocytes in cancer stroma. Subgraphs selected by the GNNExplainer focus on, but extend beyond, regions of human-annotated TLS objects, indicating that TLS may influence gene expression and the TME in regions beyond their immediate vicinity. Conclusion: Here, we show a comparison of two interpretable AI methods for the prediction of TLS-induced gene expression from WSI. The outperforming GNN-based approach is highly reproducible and accurate, predicting histopathology features relevant to TLS that may be used to inform patient prognosis and treatment. These methods could be applied to predict additional clinically relevant transcriptomic signatures.
24

Caplin, Andrew, and John Leahy. "Economic Theory and the World of Practice: A Celebration of the (S, s) Model". Journal of Economic Perspectives 24, no. 1 (February 1, 2010): 183–202. http://dx.doi.org/10.1257/jep.24.1.183.

Abstract
It was the question of how best to balance the costs of ordering and of running out of stock against the costs of holding excess inventory that inspired Kenneth Arrow, Theodore Harris, and Jacob Marschak to introduce the (S, s) model in 1951. In this celebratory article, we show how this model not only answered important practical questions, but also opened the door to a quite startling range of important and challenging follow-up questions, many of great practical importance and analytic depth. The (S, s) model has become one of the touchstone models of economics, opening new vistas of applied economic theory to all who internalize its structure. Today it is universally applied to solve questions faced in inventory control. The core model elements, uncertainty and fixed costs of adjustment, are ubiquitous, which has resulted in its becoming the general purpose economic model of discrete adjustment. The (S, s) model has also become a profound source of inspiration for macroeconomists seeking to understand the role that discrete microeconomic adjustments play in macroeconomic fluctuations. Looking forward, we foresee rapid growth in the use of (S, s) modeling to aid households making complex and costly financial decisions, such as when and how to terminate a mortgage. In the projected era of “household operations research,” new modeling challenges will arise due to enriched feedback from the world of practice.
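For concreteness, a minimal simulation of an (S, s) policy as characterized above: stock is replenished up to S whenever it falls to or below the reorder point s. The demand distribution, horizon, and parameter values are assumptions for illustration.

```python
# Minimal (S, s) inventory policy simulation: order up to S when stock <= s.
import numpy as np

rng = np.random.default_rng(4)
S, s = 100, 30                       # order-up-to level and reorder point
stock, orders, stockouts = S, 0, 0

for day in range(365):
    demand = rng.poisson(8)          # daily demand (assumed Poisson)
    stock -= demand
    if stock < 0:                    # unmet demand is lost in this toy model
        stockouts += -stock
        stock = 0
    if stock <= s:                   # fixed ordering cost makes batching optimal
        orders += 1
        stock = S                    # instantaneous replenishment (no lead time)

print(f"orders placed: {orders}, units of unmet demand: {stockouts}")
```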
25

Aslan, Antonio, José-Luis Díez, Alejandro José Laguna Sanz, and Jorge Bondia. "On the Use of Population Data for Training Seasonal Local Models-Based Glucose Predictors: An In Silico Study". Applied Sciences 13, no. 9 (April 25, 2023): 5348. http://dx.doi.org/10.3390/app13095348.

Abstract
Most advanced technologies for the treatment of type 1 diabetes, such as sensor-pump integrated systems or the artificial pancreas, require accurate glucose predictions on a given future time horizon as a basis for decision-making support systems. Seasonal stochastic models are data-driven algebraic models that use recent history data and periodic trends to accurately estimate time series data, such as glucose concentration in diabetes. These models have been proven to be a good option to provide accurate blood glucose predictions under free-living conditions, and they can cope with patient variability under variable-length time-stamped daily events in supervision and control applications. However, the seasonal-models-based framework usually needs several months of data per patient to be fed into the system to adequately train a personalized glucose predictor for each patient. In this work, an in silico analysis of the accuracy of prediction is presented, considering the effect of training a glucose predictor with data from a cohort of patients (population) instead of data from a single patient (individual). Feasibility of population data as an input to the model is asserted, and the effect of the dataset size in the determination of the minimum amount of data for a valid training of the models is studied. Results show that glucose predictors trained with population data can provide predictions of similar magnitude as those trained with individualized data. Overall median root mean squared error (RMSE) (including 25% and 75% percentiles) for the predictor trained with population data are {6.96 [4.87, 8.67], 12.49 [7.96, 14.23], 19.52 [10.62, 23.37], 28.79 [12.96, 34.57], 32.3 [16.20, 41.59], 28.8 [15.13, 37.18]} mg/dL for prediction horizons (PH) of {15, 30, 60, 120, 180, 240} min, respectively, while the baseline RMSE results for individually trained predictors are {6.37 [5.07, 6.70], 11.27 [8.35, 12.65], 17.44 [11.08, 20.93], 22.72 [14.29, 28.19], 28.45 [14.79, 34.38], 25.58 [13.10, 36.60]} mg/dL, both training with 16 weeks of data. Results also show that the use of the population approach reduces the required training data by half, without losing any prediction capability.
26

McCannon, Desdemona. "Pattern and pedagogy in print: Art and Craft Education in the mid twentieth-century classroom". Journal of Illustration 6, no. 2 (December 1, 2019): 241–63. http://dx.doi.org/10.1386/jill_00013_1.

Abstract
In this article I compare a set of early and mid-twentieth-century print publications supportive of the 'new' art teaching in schools. The educator Marion Richardson's reflections on her use of pattern in the classroom in Art and the Child (1948) are considered alongside publications by artist-teachers such as Robin Tanner's Children's Work in Block Printing (1936) and Gwen White's A World of Pattern (1957). The monthly publication Art and Craft Education, first published in 1936, was a magazine for teachers of art which showcased the work being done in schools around Britain that were involved in the 'new' art instruction. Pattern-making in schools in these publications is positioned as a modular and constructivist form of learning encouraging multisensory and exploratory ways of looking at and making sense of the world. Ackerman (2004), outlining theories of constructivist models for learning, stresses the need for children to be 'builders of their own cognitive tools', and I argue that the exploration of pattern offers multiple strategies for children to explore their phenomenological experience of the world. Pattern-making is also presented as a democratic form of creativity and a means of introducing the concept of art into everyday life, inculcating an appreciation of well-made things in daily life. I argue that through the lens of this pedagogic print culture, with its emphasis on the benefits of teaching pattern-making in schools, a nostalgic and pastoral English arts and crafts sensibility can be seen meeting a modernist cultural agenda via psychological theories of child development, creating a distinctively egalitarian, child-centred and craft-led model for learning. Revisiting this moment in children's education in Britain offers a timely insight into alternatives to the current educational landscape, with its emphasis on measuring pupils' achievement and downgrading of creative subjects in the school curriculum.
27

Yuan, Yawen, Zhihong Li, Ke Wang, Shunguo Zhang, Qingfeng He, Lucy Liu, Zhijia Tang et al. "Pharmacokinetics of Novel Furoxan/Coumarin Hybrids in Rats Using LC-MS/MS Method and Physiologically Based Pharmacokinetic Model". Molecules 28, no. 2 (January 13, 2023): 837. http://dx.doi.org/10.3390/molecules28020837.

Abstract
Novel furoxan/coumarin hybrids were synthesized, and pharmacologic studies showed that the compounds displayed potent antiproliferation activities via downregulating both the phosphatidylinositide 3-kinase (PI3K) pathway and the mitogen-activated protein kinase (MAPK) pathway. To investigate the preclinical pharmacokinetic (PK) properties of three candidate compounds (CY-14S-4A83, CY-16S-4A43, and CY-16S-4A93), a liquid chromatography–tandem mass spectrometry (LC-MS/MS) method was developed and validated for the simultaneous determination of these compounds. The absorption, distribution, metabolism, and excretion (ADME) properties were investigated in in vitro studies and in rats. Meanwhile, physiologically based pharmacokinetic (PBPK) models were constructed using only in vitro data to obtain detailed PK information. Good linearity was observed over the concentration range of 0.01–1.0 μg/mL. The free drug fraction (fu) values of the compounds were less than 3%, and the clearance (CL) values were 414.5 ± 145.7 mL/h/kg, 2624.6 ± 648.4 mL/h/kg, and 500.6 ± 195.2 mL/h/kg, respectively. The predicted peak plasma concentration (Cmax) and area under the concentration-time curve (AUC) were overestimated by the CY-16S-4A43 PBPK model compared with the experimental values (fold error > 2), suggesting that tissue accumulation and additional elimination pathways may exist. In conclusion, the LC-MS/MS method was successfully applied in the preclinical PK studies, and the detailed information from PBPK modeling may improve decision-making in subsequent new drug development.
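The "fold error > 2" criterion cited above is the conventional two-fold acceptance test for PBPK predictions; below is a short sketch using a symmetric fold-error definition and made-up exposure values.

```python
# Two-fold acceptance check for a PBPK prediction (values are hypothetical).
def fold_error(predicted: float, observed: float) -> float:
    """Symmetric fold error: always >= 1, equals 1 for a perfect prediction."""
    ratio = predicted / observed
    return max(ratio, 1.0 / ratio)

auc_pred, auc_obs = 95.0, 40.0       # hypothetical AUC values, ug*h/mL
fe = fold_error(auc_pred, auc_obs)
print(f"fold error = {fe:.2f} -> {'within' if fe <= 2 else 'outside'} the 2-fold range")
```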
28

Durand, Yves, Martin Laternser, Gérald Giraud, Pierre Etchevers, Bernard Lesaffre, and Laurent Mérindol. "Reanalysis of 44 Yr of Climate in the French Alps (1958–2002): Methodology, Model Validation, Climatology, and Trends for Air Temperature and Precipitation". Journal of Applied Meteorology and Climatology 48, no. 3 (March 1, 2009): 429–49. http://dx.doi.org/10.1175/2008jamc1808.1.

Abstract
Since the early 1990s, Météo-France has used an automatic system combining three numerical models to simulate meteorological parameters, snow cover stratification, and avalanche risk at various altitudes, aspects, and slopes for a number of mountainous regions in France. Given the lack of sufficient directly observed long-term snow data, this “SAFRAN”–Crocus–“MEPRA” (SCM) model chain, usually applied to operational avalanche forecasting, has been used to carry out and validate retrospective snow and weather climate analyses for the 1958–2002 period. The SAFRAN 2-m air temperature and precipitation climatology shows that the climate of the French Alps is temperate and is mainly determined by atmospheric westerly flow conditions. Vertical profiles of temperature and precipitation averaged over the whole period for altitudes up to 3000 m MSL show a relatively linear variation with altitude for different mountain areas with no constraint of that kind imposed by the analysis scheme itself. Over the observation period 1958–2002, the overall trend corresponds to an increase in the annual near-surface air temperature of about 1°C. However, variations are large at different altitudes and for different seasons and regions. This significantly positive trend is most obvious in the 1500–2000-m MSL altitude range, especially in the northwest regions, and exhibits a significant relationship with the North Atlantic Oscillation index over long periods. Precipitation data are diverse, making it hard to identify clear trends within the high year-to-year variability.
29

Ghiggi, Gionata, Vincent Humphrey, Sonia I. Seneviratne, and Lukas Gudmundsson. "GRUN: an observation-based global gridded runoff dataset from 1902 to 2014". Earth System Science Data 11, no. 4 (November 13, 2019): 1655–74. http://dx.doi.org/10.5194/essd-11-1655-2019.

Abstract
Freshwater resources are of high societal relevance, and understanding their past variability is vital to water management in the context of ongoing climate change. This study introduces a global gridded monthly reconstruction of runoff covering the period from 1902 to 2014. In situ streamflow observations are used to train a machine learning algorithm that predicts monthly runoff rates based on antecedent precipitation and temperature from an atmospheric reanalysis. The accuracy of this reconstruction is assessed with cross-validation and compared with an independent set of discharge observations for large river basins. The presented dataset agrees on average better with the streamflow observations than an ensemble of 13 state-of-the-art global hydrological model runoff simulations. We estimate a global long-term mean runoff of 38,452 km³ yr⁻¹, in agreement with previous assessments. The temporal coverage of the reconstruction offers an unprecedented view on large-scale features of runoff variability in regions with limited data coverage, making it an ideal candidate for large-scale hydro-climatic process studies, water resource assessments, and evaluating and refining existing hydrological models. The paper closes with example applications fostering the understanding of global freshwater dynamics, interannual variability, drought propagation, and the response of runoff to atmospheric teleconnections. The GRUN dataset is available at https://doi.org/10.6084/m9.figshare.9228176 (Ghiggi et al., 2019).
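A minimal sketch of the reconstruction strategy described above (learn a mapping from antecedent precipitation and temperature to monthly runoff, then assess it with cross-validation), using a random forest on synthetic data; GRUN's actual forcing data and training setup differ.

```python
# Illustrative runoff regression from antecedent precipitation and temperature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 2000
precip = rng.gamma(2.0, 40.0, (n, 3))   # precipitation, current + 2 prior months (synthetic)
temp = rng.normal(10, 8, (n, 3))        # temperature, same 3-month window (synthetic)
X = np.hstack([precip, temp])
runoff = 0.5 * precip[:, 0] + 0.3 * precip[:, 1] - 1.5 * temp[:, 0] + rng.normal(0, 5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, runoff, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", scores.round(2))
```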
30

ÇELİKTOPUZ, Eser. "Forecasting some climate parameters of Türkiye using the SSP3-7.0 scenario for the years 2040–2059". International Journal of Agriculture, Environment and Food Sciences 8, no. 1 (January 7, 2024): 62–71. http://dx.doi.org/10.31015/jaefs.2024.1.7.

This study employs the Coupled Model Intercomparison Projects (CMIPs), and specifically the sixth phase (CMIP6), to unravel the multifaceted impacts of global climate change on the climate of Türkiye. The CMIP6 data, fundamental to the Intergovernmental Panel on Climate Change (IPCC) Assessment Reports, form the basis for projecting future climate scenarios, specifically under the medium-high reference scenario SSP3-7.0. Utilizing a suite of global climate models, including a Multi-Model Ensemble (MME) approach, the study combines predictions to enhance the precision of climate projections for Türkiye. Historical data spanning 1951 to 2020 were subjected to rigorous statistical analysis, including descriptive statistics and regression analysis. The findings reveal an unequivocal upward trajectory in Türkiye’s annual mean temperature, with an accelerated pace in recent decades. Despite the lack of a significant long-term trend in annual precipitation from 1951 to 2020, the rate of change in precipitation is accelerating, indicating potential future challenges. Projections for 2040–2059 under the SSP3-7.0 scenario indicate a non-uniform increase in mean temperature across Türkiye, with the southern and western regions facing the most significant impact. This warming trend poses imminent threats to agriculture, altering crop yields and increasing the risk of heat stress for livestock. Additionally, the projected decrease in precipitation, alongside a surge in hot days and tropical nights, underscores the urgency of adaptive measures. As Türkiye navigates the complex terrain of climate change, this study provides valuable insights, emphasizing the significance of robust climate modeling for informed decision-making. The results underscore the imminent challenges Türkiye faces and the critical importance of proactive climate action on both national and global fronts.
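
Where the abstract mentions combining predictions through a Multi-Model Ensemble, the basic operation is an (often equal-weight) average across models, with the inter-model spread as a rough uncertainty band. A toy sketch, with hypothetical anomaly series standing in for real CMIP6 output:

```python
# Illustrative MME combination: equal-weight mean and spread of projected
# temperature anomalies from several (here invented) CMIP6 models.
import numpy as np

years = np.arange(2040, 2060)
model_runs = np.array([                        # hypothetical anomalies (degC) under SSP3-7.0
    1.6 + 0.020 * (years - 2040),
    1.9 + 0.030 * (years - 2040),
    1.4 + 0.025 * (years - 2040),
])
mme_mean = model_runs.mean(axis=0)             # equal-weight ensemble mean
mme_spread = model_runs.std(axis=0)            # inter-model spread as a crude uncertainty band
print(f"2040-2059 mean warming: {mme_mean.mean():.2f} +/- {mme_spread.mean():.2f} degC")
```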
31

Atanane, Othmane, Asmaa Mourhir, Nabil Benamar and Marco Zennaro. "Smart Buildings: Water Leakage Detection Using TinyML". Sensors 23, no. 22 (November 16, 2023): 9210. http://dx.doi.org/10.3390/s23229210.

The escalating global water usage and the increasing strain on major cities due to water shortages highlight the critical need for efficient water management practices. In water-stressed regions worldwide, significant water wastage is primarily attributed to leakages, inefficient use, and aging infrastructure. Undetected water leakages in buildings’ pipelines contribute to the water waste problem, so an effective water leak detection method is required. In this paper, we explore the application of edge computing in smart buildings to enhance water management. By integrating sensors and embedded machine learning models, known as TinyML, smart water management systems can collect real-time data, analyze it, and make accurate decisions for efficient water utilization. The transition to TinyML enables faster and more cost-effective local decision-making, reducing the dependence on centralized entities. In this work, we propose a solution that can be adapted for effective leakage detection in real-world scenarios with minimal human intervention using TinyML. We follow an approach similar to a typical machine learning lifecycle in production, spanning data collection, training, hyperparameter tuning, offline evaluation, and model optimization for on-device resource efficiency before deployment. We considered an existing water leakage acoustic dataset for polyvinyl chloride pipelines and, to prepare the acoustic data for analysis, performed preprocessing to transform it into scalograms. We devised a water leak detection method by applying transfer learning to five distinct Convolutional Neural Network (CNN) variants, namely EfficientNet, ResNet, AlexNet, MobileNet V1, and MobileNet V2. The CNN models were able to detect leakages, with a maximum testing accuracy, recall, precision, and F1 score of 97.45%, 98.57%, 96.70%, and 97.63%, respectively, observed for the EfficientNet model. To enable seamless deployment on the Arduino Nano 33 BLE edge device, the EfficientNet model is compressed using quantization, resulting in a low inference time of 1932 ms, a peak RAM usage of 255.3 kilobytes, and a flash requirement of merely 48.7 kilobytes.
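
The deployment step described above, compressing a trained CNN so it fits a microcontroller such as the Arduino Nano 33 BLE, is typically done with post-training integer quantization. The sketch below shows the TensorFlow Lite route on a stand-in network; the tiny model, input shape, and random calibration samples are placeholders for the fine-tuned EfficientNet and real scalograms, not the authors' actual pipeline.

```python
# Sketch: full-integer post-training quantization with TensorFlow Lite.
import numpy as np
import tensorflow as tf

# stand-in for the trained CNN; real work would load the fine-tuned EfficientNet
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),            # scalogram "image"
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),      # leak / no-leak
])

def representative_data():
    for _ in range(100):                                  # calibration samples for int8 ranges
        yield [np.random.rand(1, 64, 64, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
with open("leak_detector.tflite", "wb") as f:             # then flash as a C array
    f.write(converter.convert())
```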
32

Jervis, Emma. "Why Rape Law Revisions should be Consistent with Anderson’s Negotiation Model". St Andrews Law Journal 3, no. 1 (August 22, 2023): 59–68. http://dx.doi.org/10.15664/stalj.v3i1.2648.

In this essay I argue that the current structure of the law fails to protect women in cases of rape and needs reform. I further maintain that Anderson’s suggestion of ‘negotiation consent’ is the most appropriate line of reform, and I defend her proposal in the face of potential objections. The current rape law in the UK was implemented in 2003, revising previous laws first defined in the Sexual Offences Act of 1953. Despite the ostensibly ‘objective’ nature of this law, which is further examined in this essay, many feminist philosophers have noted biases within the law which favour male interests. This essay explores the present issues within UK law, as well as our current understandings of what constitutes ‘a reasonable belief of consent’, that fail to protect women in instances of rape. These foundational attitudes influence performative revision models, such as the No Model and the Yes Model, which I consider within this essay. Yet the inadequacies of such approaches, as I demonstrate, mirror some of the current issues with rape law in the UK today, such as the lack of recognition of men’s frequent inability to interpret women’s nonverbal behaviour and the disregard for instances where one person changes their mind. Furthermore, I advocate for Anderson’s negotiation model as a reform of both the law and society’s attitude towards sex and how consent can be clearly obtained. This model, when applied in law, will not only protect women legally in cases of rape but eventually protect them from the societal norms that perpetuate the risk of rape and sexual exploitation. By making the act of negotiation a legal requirement, I maintain that there would be a ‘ripple effect’ throughout society that would eventually lead to a change in public expectations of men and women. Anderson’s emphasis on either party being able to initiate the negotiation establishes a much more open-minded attitude towards gender roles and expectations of individuals based on their gender. This is the greatest strength of Anderson’s argument, as this equality-driven initiative would eventually seep into society’s wider expectations of individuals when initiating sex, and create a world where understanding what the other person anticipates in a sexual situation is the norm.
33

Linke, Olivia, Johannes Quaas, Finja Baumer, Sebastian Becker, Jan Chylik, Sandro Dahlke, André Ehrlich et al. "Constraints on simulated past Arctic amplification and lapse rate feedback from observations". Atmospheric Chemistry and Physics 23, no. 17 (September 7, 2023): 9963–92. http://dx.doi.org/10.5194/acp-23-9963-2023.

Abstract. The Arctic has warmed more rapidly than the global mean during the past few decades. The lapse rate feedback (LRF) has been identified as a large contributor to the Arctic amplification (AA) of climate change. This particular feedback arises from the vertically non-uniform warming of the troposphere, which in the Arctic emerges as strong near-surface and muted free-tropospheric warming. Stable stratification and meridional energy transport are two characteristic processes that are evoked as causes for this vertical warming structure. Our aim is to constrain these governing processes by making use of detailed observations in combination with the large climate model ensemble of the sixth Coupled Model Intercomparison Project (CMIP6). We build on the result that CMIP6 models show a large spread in AA and Arctic LRF, which are positively correlated for the historical period of 1951–2014. Thus, we present process-oriented constraints by linking characteristics of the current climate to historical climate simulations. In particular, we compare a large consortium of present-day observations to co-located model data from subsets that show a weak and strong simulated AA and Arctic LRF in the past. Our analyses suggest that the vertical temperature structure of the Arctic boundary layer is more realistically depicted in climate models with weak (w) AA and Arctic LRF (CMIP6/w) in the past. In particular, CMIP6/w models show stronger inversions in the present climate for boreal autumn and winter and over sea ice, which is more consistent with the observations. These results are based on observations from the year-long Multidisciplinary Drifting Observatory for the Study of Arctic Climate (MOSAiC) expedition in the central Arctic, long-term measurements at the Utqiaġvik site in Alaska, USA, and dropsonde temperature profiling from aircraft campaigns in the Fram Strait. In addition, the atmospheric energy transport from lower latitudes that can further mediate the warming structure in the free troposphere is more realistically represented by CMIP6/w models. In particular, CMIP6/w models systematically simulate a weaker Arctic atmospheric energy transport convergence in the present climate for boreal autumn and winter, which is more consistent with the fifth-generation reanalysis of the European Centre for Medium-Range Weather Forecasts (ERA5). We further show a positive relationship between the magnitude of the present-day transport convergence and the strength of past AA. With respect to the Arctic LRF, we find links between the changes in transport pathways that drive vertical warming structures and local differences in the LRF. This highlights the mediating influence of advection on the Arctic LRF and motivates deeper studies to explicitly link spatial patterns of Arctic feedbacks to changes in the large-scale circulation.
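
The starting point of the constraint, a positive across-ensemble correlation between past AA and the Arctic LRF that licenses subsetting models by present-day observables, can be sketched in a few lines. All numbers below are synthetic stand-ins for CMIP6 diagnostics.

```python
# Toy emergent-constraint setup: correlate past AA with LRF across an
# ensemble, then form a "weak" subset analogous to CMIP6/w.
import numpy as np

rng = np.random.default_rng(1)
n_models = 30
lrf = rng.normal(1.0, 0.3, n_models)                          # W m-2 K-1, illustrative
aa = 2.0 + 1.5 * (lrf - 1.0) + rng.normal(0, 0.2, n_models)   # AA ratio, built to correlate

r = np.corrcoef(lrf, aa)[0, 1]
print(f"ensemble correlation r = {r:.2f}")

weak = aa < np.median(aa)                                     # models with weak past AA
print("weak-AA subset mean LRF:", lrf[weak].mean())
```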
34

Chaigneau, Alisée A., Guillaume Reffray, Aurore Voldoire and Angélique Melet. "IBI-CCS: a regional high-resolution model to simulate sea level in western Europe". Geoscientific Model Development 15, no. 5 (March 10, 2022): 2035–62. http://dx.doi.org/10.5194/gmd-15-2035-2022.

Abstract. Projections of coastal sea level (SL) changes are of great interest for coastal risk assessment and decision making. SL projections are typically produced using global climate models (GCMs), which cannot fully resolve SL changes at the coast due to their coarse resolution and lack of representation of some relevant processes (tides, atmospheric surface pressure forcing, waves). To overcome these limitations and refine projections at regional scales, GCMs can be dynamically downscaled through the implementation of a high-resolution regional climate model (RCM). In this study, we developed the IBI-CCS (Iberian–Biscay–Ireland Climate Change Scenarios) regional ocean model based on a 1/12° northeastern Atlantic Nucleus for European Modelling of the Ocean (NEMO) model configuration to dynamically downscale CNRM-CM6-1-HR, a GCM with a 1/4° resolution ocean model component participating in the sixth phase of the Coupled Model Intercomparison Project (CMIP6) by the Centre National de Recherches Météorologiques (CNRM). For a more complete representation of the processes driving coastal SL changes, tides and atmospheric surface pressure forcing are explicitly resolved in IBI-CCS in addition to the ocean general circulation. To limit the propagation of climate drifts and biases from the GCM into the regional simulations, several corrections are applied to the GCM fields used to force the RCM. The regional simulations are performed over the 1950 to 2100 period for two climate change scenarios (SSP1-2.6 and SSP5-8.5). To validate the dynamical downscaling method, the RCM and GCM simulations are compared to reanalyses and observations over the 1993–2014 period for a selection of ocean variables including SL. Results indicate that large-scale performance of IBI-CCS is better than that of the GCM thanks to the corrections applied to the RCM. Extreme SLs are also satisfactorily represented in the IBI-CCS historical simulation. Comparison of the RCM and GCM 21st century projections shows a limited impact of increased resolution (1/4 to 1/12°) on SL changes. Overall, bias corrections have a moderate impact on projected coastal SL changes, except in the Mediterranean Sea, where GCM biases were substantial.
35

Trinh, Mi Kieu, Matthew T. Wayland and Sudhakaran Prabakaran. "Behavioural analysis of single-cell aneural ciliate, Stentor roeseli, using machine learning approaches". Journal of The Royal Society Interface 16, no. 161 (December 2019): 20190410. http://dx.doi.org/10.1098/rsif.2019.0410.

There is still a significant gap between our understanding of neural circuits and the behaviours they compute—i.e. the computations performed by these neural networks (Carandini 2012 Nat. Neurosci. 15, 507–509. (doi:10.1038/nn.3043)). Cellular decision-making processes, learning, behaviour and memory formation—all of which have been associated only with animals with neural systems—have also been observed in many unicellular aneural organisms, namely Physarum, Paramecium and Stentor (Tang & Marshall 2018 Curr. Biol. 28, R1180–R1184. (doi:10.1016/j.cub.2018.09.015)). As these are fully functioning organisms, yet unicellular, there is a much better chance of elucidating the detailed mechanisms underlying these learning processes without the complications of highly interconnected neural circuits. An intriguing learning behaviour observed in Stentor roeseli (Jennings 1902 Am. J. Physiol. Legacy Content 8, 23–60. (doi:10.1152/ajplegacy.1902.8.1.23)) when stimulated with carmine has left scientists puzzled for more than a century. So far, none of the existing learning paradigms can fully encapsulate this particular series of five characteristic avoidance reactions. Although we were able to observe all responses described in the literature and in a previous study (Dexter et al. 2019), they do not conform to any particular learning model. We then investigated whether models inferred from machine learning approaches, including decision trees, random forests and feed-forward artificial neural networks, could infer and predict the behaviour of S. roeseli. Our results showed that an artificial neural network with multiple ‘computational’ neurons is inefficient at modelling the single-celled ciliate's avoidance reactions. This highlights the complexity of behaviours in aneural organisms. Additionally, this report discusses the significance of elucidating molecular details underlying learning and decision-making processes in these unicellular organisms, which could offer valuable insights applicable to higher animals.
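
The model-comparison step can be reproduced in outline with scikit-learn: fit each candidate learner to stimulus features and compare cross-validated predictions of which avoidance reaction follows. The features and labels below are synthetic placeholders, not the study's data.

```python
# Sketch: compare a decision tree, a random forest, and a small feed-forward
# network at predicting one of five avoidance reactions from stimulus features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))        # e.g. stimulus intensity, duration, prior responses
y = rng.integers(0, 5, size=300)     # five avoidance reactions (toy labels)

for name, clf in [
    ("decision tree", DecisionTreeClassifier(random_state=0)),
    ("random forest", RandomForestClassifier(random_state=0)),
    ("feed-forward net", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```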
36

Hazim K Khalaf and Laith A Mahgoob. "Study the effect of notches on bending fatigue strength of 2024 aluminum sheets". Global Journal of Engineering and Technology Advances 14, no. 1 (January 30, 2023): 014–32. http://dx.doi.org/10.30574/gjeta.2023.14.1.0217.

In the present study, the effect of holes on the bending fatigue strength of 2024 aluminum sheets has been studied by taking a sheet of this type and cutting it into specimens with the dimensions required for testing, using a central hole (CH) of 1 mm diameter as the main variable. The device used in this research is a reversed bending machine of type HI-TECH of British origin; all tests were done at a constant stress ratio (R = -1), a rotational speed of 6000 rpm and a frequency of 100 Hz. The study was carried out both experimentally and theoretically (using the ANSYS 15.0 program). The results showed that stress is affected by the presence of holes: stress is higher in the specimens that contain holes than in the plain specimens, and fatigue strength consequently decreases in the specimens containing a central hole, by 32% relative to the plain specimens on the experimental side and by 43.7% on the theoretical side. The yield strength found by tensile test was 73.6 MPa, the ultimate tensile strength 195.2 MPa, and the hardness value 59.7. The results also show that notches act as stress concentrators, raising local stress and thereby reducing the number of cycles to failure.
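
For readers who want the standard back-of-envelope account of why a hole lowers fatigue strength, the usual relation is the fatigue notch factor Kf = 1 + q(Kt - 1). The numbers below are illustrative assumptions, not the paper's measurements.

```python
# Sketch: estimate the notched fatigue strength from the elastic stress
# concentration factor Kt and the notch sensitivity q.
def fatigue_notch_factor(kt: float, q: float) -> float:
    """Kf from Kt and notch sensitivity q (0 = insensitive, 1 = fully sensitive)."""
    return 1.0 + q * (kt - 1.0)

kt = 3.0             # classic Kt for a small circular hole in a wide plate in tension
q = 0.8              # assumed notch sensitivity for a 2024 aluminium alloy
kf = fatigue_notch_factor(kt, q)
smooth_limit = 100.0                       # MPa, hypothetical smooth-specimen fatigue strength
print(f"Kf = {kf:.2f}, notched fatigue strength ~ {smooth_limit / kf:.1f} MPa")
```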
37

Kindyuk, Boris, Mykhailo Kelman, Vasyl Patlachuk and Olexander Patlachuk. "History and Socio-Political Conditions of Preparation of the Polish Constitutions from 1919 to 1997". Universum Historiae et Archeologiae 2, no. 2 (October 11, 2020): 176. http://dx.doi.org/10.15421/26190212.

The purpose of the article is to study the history of the preparation and the reasons for the adoption of the Polish Constitutions in the period from 1919 to 1997. Research methods: dialectical, chronological, comparative, system-structural. Main results. The article shows that the Polish Constitutions of 1919 to 1997 were prepared under conditions of constantly changing socio-political factors, which was reflected in the state system, in political, economic and social relations, and in the rights and freedoms of the population. It is proved that Polish constitutionalism evolved along a complex path, from the Little Constitution of 1919, modest in volume and scientific level and adopted amid armed confrontation with Soviet Russia, to the 1997 Constitution, which complies with European standards. The influence of the historical personality of Marshal Jozef Pilsudski, who became the sponsor of the rebirth of independent Poland, on the preparation and adoption of the Polish Constitutions of 1919 and 1921 and the Constitution of 1935, in which the President of the country was given dictatorial powers in time of war, is investigated. It is shown that the Constitution of 1952, which was written according to Soviet models and based on instructions received from Moscow, was meant to consolidate in Poland a socialist model in which the Polish United Workers' Party had the leading role in society. It is shown that the collapse of the Soviet Union led to the elimination of the communist system in Poland and the rise to power of democratic forces, which resulted in the adoption of the Constitution of 1997. A peculiarity of the constitutional process was that, for the first time in the history of Poland, a referendum on its adoption was held, on 25 May 1997. The Constitution of 1997 was adopted in the context of a transition from a command-administrative to a democratic system of government, so its content is marked by a democratic character that ensured the creation of private ownership of all means of production and free trade. The historical reasons for the drafting of the Polish Constitutions underwent complex dynamics connected with political changes in the country, reflected in the content of the ideas, doctrinal views and Basic Laws. The practical significance of the study lies in the use of Polish historical experience in developing scenarios for Ukraine in order to prevent errors in modern state-making. Originality. A comprehensive study of the history of Polish constitutionalism, taking into account socio-political reasons. Article type: descriptive.
38

Sazonova, Svetlana. "QUANTITATIVE DETERMINATION OF PARAMETERS OF MATHEMATICAL MODELS OF FLOW DISTRIBUTION IN THE SECURITY SYSTEMS OF FUNCTIONING OF HEAT-SUPPLY". Modeling of systems and processes 8, no. 4 (May 11, 2016): 53–57. http://dx.doi.org/10.12737/19525.

The paper considers the problem of static estimation of regime parameters, together with the tasks involved in the numerical implementation of mathematical models for load-flow calculation and static state estimation of heating systems. Practical application is possible for the complex task of monitoring the technical condition of such systems, providing information to aid decision making in the event of accidents and to ensure safety of operation.
39

DONALDSON, JAMES. "Second‐Order Topics and Prokofiev's String Quartets". Music Analysis 42, no. 2 (July 2023): 227–61. http://dx.doi.org/10.1111/musa.12218.

ABSTRACT: In his 1957 Mythologies, Roland Barthes proposes a second‐order semiological system, in which a familiar sign is repurposed to become a new signifier and in the process mythologises the previous sign system. This article adapts this principle to musical topics in twentieth‐century music, using the case study of the eighteenth‐century topical universe as it appears in Sergey Prokofiev's two String Quartets. I approach this from two directions, exploring in turn the mechanics of internal relationships and wider historical‐political implications. First, drawing upon William Caplin's concept of formal functions and their various possible associations with topics (Caplin 2005), I propose that topics with formal associations can express formal functions without the requirement for clarity in the primary parameters of harmony, tonality, grouping and cadence. Accordingly, they signify on a second‐order plane. The recurring Mannheim rocket in the first movement of Prokofiev's String Quartet No. 1 (1931) provides a case study. Second, I demonstrate the parallels between a second‐order system and key tenets of socialist realism. Concepts such as dostupnost′ (making the work understandable to everyone) and opora na klassiku (based on past Classical models) echo Barthes's model, as meaning is drawn from reference to Enlightenment‐associated art as a whole, rather than the individual topics. I demonstrate these principles in practice with reference to Prokofiev's Quartet No. 2 (1941). Together, these second‐order perspectives show the deeply rooted reliance on eighteenth‐century semantics in Prokofiev's music and their wider aesthetic implications.
40

Väänänen, Niko. "Are ageing Nordic welfare states sustainable? An analysis of pension and care policies in Finland and Sweden". Ubezpieczenia Społeczne. Teoria i praktyka 157, no. 2 (November 27, 2023): 1–27. http://dx.doi.org/10.5604/01.3001.0054.0861.

Introduction: The article discusses the ageing population in the Nordic countries, focusing on Sweden and Finland, where the median age has steadily increased since 1950. It emphasizes the impact of demographic change on the old-age dependency ratio and the subsequent implications for the welfare state. Objective: To examine and compare the ageing policies of the Nordic countries, with a specific focus on Finland and Sweden. The author aims to shed light on the differences in pension and long-term care systems between these two nations, challenging the perception of a common "Nordic pension model". Materials and methods: The article employs a theoretical background based on the "intergenerational reciprocity trichotomy" developed by André Masson. The methodological approach is that of a comparative case study: the author analyzes the pension and long-term care systems of Finland and Sweden, reviewing key indicators, policy documents, and relevant research literature. Results: While the Swedish system is financially robust, it encounters political challenges due to low public pension levels, prompting discussions about potential reforms, such as increasing contribution rates for higher benefits. Finland's public pension system, characterized by stable political support, raises concerns about long-term financial sustainability. The decision-making model, led by the social partners, may shift to a more parliamentary approach as trade union density decreases and ageing-related issues become more significant for the electorate. Both countries have successfully promoted high employment rates among older workers, but long-term care policies pose a greater challenge to the sustainability of their welfare models. The growing importance of family and informal care, coupled with a reliance on migrant workers for healthcare, highlights the strain on the welfare systems. Demographic change increases pressure on pro-old welfare policies, particularly in long-term care, with Sweden better positioned than Finland to sustain elevated spending on the elderly.
41

HERR, DONALD L. "MODERN MODELS AND MODEL-MAKING". Journal of the American Society for Naval Engineers 72, no. 4 (March 18, 2009): 673–74. http://dx.doi.org/10.1111/j.1559-3584.1960.tb04079.x.

42

Kholostov, K. M. "Robotic systems implementation into law enforcement practice and peculiarities of decision-making models in such systems". Journal of Physics: Conference Series 1958, no. 1 (June 1, 2021): 012021. http://dx.doi.org/10.1088/1742-6596/1958/1/012021.

43

Vakhrameeva, Zoya V. "PUBLIC COMMUNICATION OF SCIENCE AS PRESENTED IN THE INTERNATIONAL RESEARCHERS' WORKS (IN THE GLOBAL DATABASE PROQUEST DISSERTATIONS & THESES)". Sign problematic field in mediaeducation 48, no. 2 (September 30, 2023): 87–99. http://dx.doi.org/10.47475/2070-0695-2023-48-2-87-99.

Science communication, or the communication between science and society, has become very important in the 21st century because of the ever-growing role of science and technology in people's lives. People themselves have in turn been increasingly engaged in science and technology decision-making. Science communication has been researched abroad for several decades and has meanwhile become an independent field of study, of which dissertations and theses are a part. This article describes the collection of international doctoral dissertations included in the world's most comprehensive repository, ProQuest Dissertations & Theses Global (PQDT). Taking into consideration the global terminological controversy and the lack of a unified definition of "science communication", the first stage of the study was a combined keyword search using search terms selected from the PQDT index: attitudes towards science, citizen science, popularization of science, post-normal science, public engagement with science, public understanding of science, science communication, scientific literacy. The search resulted in 2213 dissertations written in 1950–2022 in 11 languages from 19 countries. Further analysis showed that the most active research is being carried out in the USA, China, and the UK; 77 % of the works were written in English and 22 % in Chinese. The first works date back to the 1950s, but an exponential increase in the number of dissertations began only in the 1980s, which can be explained by new policies formulated in many countries in the second half of the 1980s to develop and improve science communication. At the second stage, another search was carried out for each term separately to obtain a picture of trends. It is revealed that until the early 2000s the main dissertation topics were attitudes towards science and scientific literacy. In the 2000s, such developing topics as public engagement with science, citizen science, and post-normal science reflected the changing nature of science communication and the transition from the one-way communication model "from scientists to the public" to models of public participation and engagement. Since the 2010s, research interests have shifted to public engagement and new ways for scientists and non-scientists to interact. One of the most actively developing directions is the co-production of knowledge, aka citizen science, but the problem of scientific literacy also remains relevant.
44

Guyennon, N., E. Romano, I. Portoghese, F. Salerno, S. Calmanti, A. B. Petrangeli, G. Tartari and D. Copetti. "Comparing dynamical, stochastic and combined downscaling approaches – lessons from a case study in the Mediterranean region". Hydrology and Earth System Sciences Discussions 9, no. 8 (August 30, 2012): 9847–84. http://dx.doi.org/10.5194/hessd-9-9847-2012.

Abstract. Various downscaling techniques have been developed to bridge the scale gap between global climate models (GCMs) and the finer scales required to assess hydrological impacts of climate change. Such techniques may be grouped into two downscaling approaches: deterministic dynamical downscaling (DD) and stochastic statistical downscaling (SD). Although SD has traditionally been seen as an alternative to DD, recent works on statistical downscaling have aimed to combine the benefits of the two approaches. The overall objective of this study is to examine the relative benefits of each downscaling approach, and of their combination, in making GCM scenarios suitable for basin-scale hydrological applications. The case study presented here focuses on the Apulia region (South East Italy, surface area about 20 000 km²), characterized by a typical Mediterranean climate; the monthly cumulated precipitation and monthly means of daily minimum and maximum temperature were examined for the period 1953–2000. The fifth-generation ECHAM model from the Max-Planck-Institute for Meteorology was adopted as the GCM. The DD was carried out with the Protheus system (ENEA), while the SD was performed through a monthly quantile-quantile transform. The SD proved efficient in reducing the mean bias in the spatial distribution at both annual and seasonal scales, but it was not able to correct the mis-modeled non-stationary components of the GCM dynamics. The DD provided a partial correction by enhancing the spatial heterogeneity and time evolution of the trends predicted by the GCM, although the comparison with observations still underperformed. The best results were obtained through the combination of both DD and SD approaches.
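
The SD step named above, a monthly quantile-quantile transform, amounts to empirical quantile mapping: each simulated value is assigned its rank in the simulated climatology and replaced by the observed value at the same rank. A minimal sketch on synthetic precipitation follows; the distributions and the wet bias are invented for illustration.

```python
# Sketch: empirical quantile mapping (quantile-quantile transform).
import numpy as np

def quantile_map(sim_hist, obs_hist, sim_future):
    """Correct sim_future using the sim_hist -> obs_hist quantile transfer."""
    ranks = np.searchsorted(np.sort(sim_hist), sim_future) / len(sim_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs_hist, ranks)

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 30.0, 600)   # observed monthly precipitation (mm)
sim = rng.gamma(2.0, 40.0, 600)   # GCM output, deliberately wet-biased
print("raw mean bias:   ", sim.mean() - obs.mean())
print("mapped mean bias:", quantile_map(sim, obs, sim).mean() - obs.mean())
```

In operational use the transfer function is fitted per calendar month, which is what makes the transform "monthly"; the mapping corrects the distribution but, as the abstract notes, not mis-modeled non-stationary dynamics.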
45

Danby, Colin. "Meade, Phillips, and the Two-Country Model". History of Political Economy 53, no. 4 (June 23, 2021): 673–95. http://dx.doi.org/10.1215/00182702-9308911.

James Meade played an important role in the 1951 development of the “Mark II” Newlyn-Phillips machine, including making it fit for connection to a mirror-image machine so that policy interactions between two countries could be demonstrated. In 1952 Phillips built, for Meade, a “foreign exchange market” to link two machines. This article reconstructs the linking device from archival evidence, and places the resulting two-country model in the context of Meade’s thought.
46

Garner, Michelle D. "Advancing Discussion of Federal Faith-based Social Service Policies through Overview and Application of Established Health Services Research Models". Advances in Social Work 13, no. 3 (August 2, 2012): 484–509. http://dx.doi.org/10.18060/1952.

Since the 1990s, federal policies have allowed public funds to support social services provided through pervasively faith-based organizations (FBOs). Public and academic discourse on these policies tends to be marked by limited data, narrow scope, and the lack of an appropriate analytic framework to adequately consider and critique the merits of the policies, as social workers are compelled to do. The goals of this study are to identify, and preliminarily apply, an established policy analysis model appropriate for use with FBO policy in order to progress discussion. Health service researchers Aday, Begley, Lairson, and Balkrishnan (2004) provide a theoretically based policy analysis framework, which is appropriate for this task and for use by social workers. Their effectiveness, efficiency, and equity policy analysis model is presented along with data and analysis intended to help frame and progress productive discussions on FBO policies within and beyond the profession.
47

Li, Jianan, Shiyue Liao, Jingtong Liu and Yue Lyu. "Economic Policy Uncertainty and Corporate Inefficient Investment: A Research Based on Fixed Effects Model with Evidence from China". BCP Business & Management 27 (September 6, 2022): 57–68. http://dx.doi.org/10.54691/bcpbm.v27i.1952.

Investment decision-making is the core of corporate financial decision-making and has a vital influence on investment income and shareholder wealth. This paper uses a fixed effects model to study the relationship between economic policy uncertainty and inefficient investment by enterprises; the Chinese company data used are from the CSMAR database. The article first studies the correlation between inefficient investment and policy uncertainty. Based on this analysis, the paper finds that economic policy uncertainty has a negative impact on the investment efficiency of enterprises. The conclusion remains valid after a series of robustness tests, including adding previously omitted variables and fixed effects for research year and industry. Through heterogeneity analysis, the paper further finds that the investment efficiency of small-scale companies and of companies with low-quality auditing is more strongly affected by economic policy uncertainty.
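
A minimal version of the estimation strategy, regressing an investment-inefficiency measure on an economic policy uncertainty index with year and industry fixed effects, might look as follows. All variable names and the synthetic data are illustrative, not the paper's CSMAR variables.

```python
# Sketch: two-way fixed effects via dummies, with clustered standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "epu": rng.normal(100, 20, n),          # economic policy uncertainty index
    "year": rng.integers(2010, 2020, n),
    "industry": rng.integers(0, 8, n),
    "size": rng.normal(22, 1.5, n),         # log assets, a typical control
})
# synthetic outcome: higher EPU raises investment inefficiency
df["ineff_inv"] = 0.01 * df["epu"] - 0.05 * df["size"] + rng.normal(0, 0.5, n)

fe = smf.ols("ineff_inv ~ epu + size + C(year) + C(industry)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["industry"]})
print("EPU coefficient:", fe.params["epu"], "p-value:", fe.pvalues["epu"])
```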
48

Scott, Tony. "The Embodiment of Buddhist History: Interpretive Methods and Models of Sāsana Decline in Burmese Debates about Female Higher Ordination". Religions 14, no. 1 (December 23, 2022): 31. http://dx.doi.org/10.3390/rel14010031.

The mid-twentieth century was celebrated in Theravāda civilizations as the halfway point in the five-thousand-year history of the Buddha’s dispensation, the sāsana. Around this time in Burma, fierce debates arose concerning the re-establishment of the extinct order of Theravāda nuns. While women were understood as having a crucial role in supporting and maintaining the sāsana, without a sanctioned means of higher ordination, they were excluded from its centre, that is, as active agents in sāsana history. In this paper, I explore what was at stake in these debates by examining the arguments of two monks who publicly called for the reintroduction of the order of nuns, the Mingun Jetavana Sayadaw (1868–1955) and Ashin Ādiccavaṃsa (1881–1950). I will show that both used the enigmatic Milindapañha (Questions of Milinda) to present their arguments, but more than this, by drawing from their writings and biographies, it will be seen that their methods of interpreting the Pāli canon depended on their unique models of sāsana history, models which understood this halfway point as ushering in a new era of emancipatory promise. This promise was premised on the practice of vipassanā meditation by both lay men and especially women, the latter who, through their participation in the mass lay meditation movement, were making strong claims as dynamic players in the unfolding of sāsana history. The question of whether the order of nuns should be revived therefore hinged on the larger question of what was and was not possible in the current age of sāsana decline. Beyond this, what I aim to show is that mid-twentieth-century debates around female ordination concerned the very nature of the sāsana itself, as either a transcendent, timeless ideal, or as a bounded history embodied in the practice of both monks and nuns.
49

Udoh, Salem. "REVIEW OF FINANCIAL MANAGEMENT IN PRIVATE FIRMS: UNLOCKING THE CASH MANAGEMENT MODEL". International Journal of Entrepreneurial Knowledge 10, no. 2 (December 22, 2022): 95–106. http://dx.doi.org/10.37335/ijek.v10i2.172.

Financial management is a complex body of knowledge that is still evolving without any successful template for its practice, especially in private firms. This paper reviews research on actual financial management in private firms to see whether cash management models critical to working capital management are incorporated. The approach is to review all the models in the extant literature used for cash management, itself a component of working capital. However, the search results show that only Pugmire (1952) outlines the activities involved in actual financial management practice, for local schools in the United States. He identified the generalizable constituents of the financial management process, including budgeting, accounting, auditing, records and reports, and cost analysis. The choice of cash as the review focus is because it is the essence of financial management in private firms: determining and sourcing capital as cash, and utilizing it through an allocation process to generate more cash and maximize the firm's value to stakeholders. Therefore, this paper contributes to the financial management literature by extending Pugmire's (1952) template for public schools with cash management models from the extant literature that can also be adapted for private firms, with potential for further research.
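
As a concrete taste of the kind of cash management model such a review surveys, one classic example from the literature is the Baumol square-root rule, which treats cash conversions like an economic-order-quantity problem, trading fixed conversion costs against forgone interest. The abstract does not single out this model, and the inputs below are hypothetical.

```python
# Sketch: Baumol square-root rule, C* = sqrt(2 * b * T / i).
from math import sqrt

def baumol_optimal_cash(b: float, T: float, i: float) -> float:
    """b: fixed cost per cash conversion; T: total annual cash need; i: interest rate."""
    return sqrt(2 * b * T / i)

T = 1_200_000.0                               # hypothetical annual cash requirement
c_star = baumol_optimal_cash(b=50.0, T=T, i=0.06)
print(f"optimal conversion size: {c_star:,.0f}; conversions per year: {T / c_star:.1f}")
```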
50

Berry, David C. and Christine Noller. "Change Management and Athletic Training: A Primer for Athletic Training Educators". Athletic Training Education Journal 15, no. 4 (October 1, 2020): 269–77. http://dx.doi.org/10.4085/1947-380x-19-89.

Context: Change management is a discipline guiding how organizations prepare, equip, and support people to adopt a change in order to drive organizational success and outcomes. Objective: To introduce the concept of change management and create a primer document for athletic training educators to use in the classroom. Background: While Lean and Six Sigma methodologies are essential for achieving a high-reliability organization, human resistance to change is inevitable. Change management provides a structured approach, via different theoretical methods, specific principles, and tools, to guide organizations through growth and development, and serves an essential role during process improvement initiatives. Synthesis: There are several theories or models of change management, 3 of which are specifically relevant in health care. Kotter and Rathgeber believe change has both an emotional and a situational component and use an 8-step approach: increase urgency, guide teams, have the right vision, communicate for buy-in, enable action, create short-term wins, and make it stick [Kotter J, Rathgeber H. Our Iceberg is Melting: Changing and Succeeding Under Any Circumstances. New York, NY: St. Martin's Press; 2006]. Bridges' Transitional Model focuses on the premise that change itself does not determine project success; rather, the transition does [Bridges W. Managing Transitions: Making the Most of Change. Reading, MA: Addison-Wesley Publishing; 1991]. Lewin's model suggests that restraining forces influence organizations and that driving forces cause change to happen [Lewin K. Problems of research in social psychology. In: Cartwright D, ed. Field Theory in Social Science: Selected Theoretical Papers. New York, NY: Harpers; 1951]. Recommendation(s): Whether athletic trainers approach change management in a leadership role or as stakeholders, newly transitioning professionals and those seeking leadership roles should value and appreciate change management theories and tools. Moreover, while no best-practice statement exists on incorporating change management into a curriculum, addressing the subject early may allow immersive-experience students an opportunity to use change management during a process improvement initiative, facilitating a greater appreciation of the content. Conclusion(s): Athletic training curricula should consider including change management course content, whether separately or in combination with other process-improvement content, thereby familiarizing athletic trainers with a common language for organizational and professional change.