Academic literature on the topic 'Agency approach to imitative modeling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Agency approach to imitative modeling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Agency approach to imitative modeling"

1

Toiviainen, Petri. "Modeling the Target-Note Technique of Bebop-Style Jazz Improvisation: An Artificial Neural Network Approach." Music Perception 12, no. 4 (1995): 399–413. http://dx.doi.org/10.2307/40285674.

Full text
Abstract:
In cognitive science and research on artificial intelligence, there are two central paradigms: symbolic and analogical. Within the analogical paradigm, artificial neural networks (ANNs) have recently been successfully used to model and simulate cognitive phenomena. One of the most prominent features of ANNs is their ability to learn by example and, to a certain extent, generalize what they have learned. Improvisation, the art of spontaneously creating music while playing or singing, fundamentally has an imitative nature. Regardless of how much one studies and analyzes, the art of improvisation is learned mostly by example. Instead of memorizing explicit rules, the student mimics the playing of other musicians. This kind of learning procedure cannot be easily modeled with rule-based symbolic systems. ANNs, on the other hand, provide an effective means of modeling and simulating this kind of imitative learning. In this article, a model of jazz improvisation that is based on supervised learning ANNs is described. Some results, achieved by simulations with the model, are presented. The simulations show that the model is able to apply the material it has learned in a new context. It can even create new melodic patterns based on the learned patterns. This kind of adaptability is a direct consequence of the fact that the knowledge resides in a distributed form in the network.
APA, Harvard, Vancouver, ISO, and other styles
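Toiviainen's network is not specified in this listing, so the following is only a rough, hedged sketch of the general idea the abstract describes: a small supervised network learns next-note continuations purely from example melodies and can then imitate them from a new seed. The note encoding, network size, training patterns, and hyperparameters below are invented for illustration, not taken from the cited article.

```python
# Hedged sketch: imitative next-note prediction with a tiny neural network.
import numpy as np

rng = np.random.default_rng(0)
N_PITCH = 12          # pitch classes 0..11
CONTEXT = 3           # notes of context used to predict the next note

def one_hot(seq):
    x = np.zeros((len(seq), N_PITCH))
    x[np.arange(len(seq)), seq] = 1.0
    return x

# Toy "licks" the network imitates (purely illustrative training material).
licks = [[0, 2, 4, 5, 7, 9, 11, 0], [7, 5, 4, 2, 0, 11, 9, 7]]
X, Y = [], []
for lick in licks:
    for i in range(len(lick) - CONTEXT):
        X.append(one_hot(lick[i:i + CONTEXT]).ravel())
        Y.append(lick[i + CONTEXT])
X, Y = np.array(X), np.array(Y)

# One hidden layer, softmax output, trained by plain gradient descent.
W1 = rng.normal(0, 0.1, (CONTEXT * N_PITCH, 24))
W2 = rng.normal(0, 0.1, (24, N_PITCH))
for _ in range(2000):
    H = np.tanh(X @ W1)
    logits = H @ W2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    G = P.copy()
    G[np.arange(len(Y)), Y] -= 1.0        # softmax cross-entropy gradient
    W2 -= 0.05 * H.T @ G / len(Y)
    W1 -= 0.05 * X.T @ ((G @ W2.T) * (1.0 - H ** 2)) / len(Y)

# Imitative generation: given a seed, pick the most probable learned continuation.
ctx = one_hot([0, 2, 4]).ravel()
print("predicted next pitch class:", int((np.tanh(ctx @ W1) @ W2).argmax()))
```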
2

Fader, Peter S., and Bruce G. S. Hardie. "Modeling Consumer Choice among SKUs." Journal of Marketing Research 33, no. 4 (November 1996): 442–52. http://dx.doi.org/10.1177/002224379603300406.

Full text
Abstract:
Most choice models in marketing implicitly assume that the fundamental unit of analysis is the brand. In reality, however, many more of the decisions made by consumers, manufacturers, and retailers occur at the level of the stock-keeping unit (SKU). The authors address a variety of issues involved in defining and using SKUs in a choice model, as well as the unique benefits that arise from doing so. They discuss how a set of discrete attributes (e.g., brand name, package size, type) can be used to characterize a large set of SKUs in a parsimonious manner. They postulate that consumers do not form preferences for each individual SKU, per se, but instead evaluate the underlying attributes that describe each item. The model is shown to be substantially superior to a more traditional framework that does not emphasize the complete use of SKU attribute information. Their analysis also highlights several other benefits associated with the proposed modeling approach, such as the ability to forecast sales for imitative line extensions that enter the market in a future period. Other implications and extensions also are discussed.
APA, Harvard, Vancouver, ISO, and other styles
3

Gaus, Gerald. "CONSTRUCTIVIST AND ECOLOGICAL MODELING OF GROUP RATIONALITY." Episteme 9, no. 3 (September 2012): 245–54. http://dx.doi.org/10.1017/epi.2012.14.

Full text
Abstract:
These brief remarks highlight three aspects of Christian List and Philip Pettit's Group Agency: The Possibility, Design, and Status of Corporate Agents that illustrate its constructivist nature: (i) its stress on the discursive dilemma as a primary challenge to group rationality and reasoning; (ii) its general though qualified support for premise-based decision-making as the preferred way to cope with the problems of judgment aggregation; and (iii) its account of rational agency and moral responsibility. The essay contrasts List and Pettit's constructivist analysis of group rationality with an ecological approach, inspired by social theorists such as F. A. Hayek, Vernon L. Smith and Gerd Gigerenzer.
APA, Harvard, Vancouver, ISO, and other styles
4

Fung, Juan F., Siamak Sattar, David T. Butry, and Steven L. McCabe. "A predictive modeling approach to estimating seismic retrofit costs." Earthquake Spectra 36, no. 2 (February 2, 2020): 579–98. http://dx.doi.org/10.1177/8755293019891716.

Full text
Abstract:
This article presents a methodology for estimating seismic retrofit costs from historical data. In particular, historical retrofit-cost data from Federal Emergency Management Agency (FEMA) 156 is used to build a generalized linear model (GLM) to predict retrofit costs as a function of building characteristics. While not as accurate as an engineering professional’s estimate, this methodology is easy to apply to generate quick estimates and is especially useful for decision makers with large building portfolios. Moreover, the predictive modeling approach provides a measure of uncertainty in terms of prediction error. The article uses prediction error to compare different modeling choices, including the choice of distribution for costs. Finally, the proposed retrofit-cost model is implemented to estimate the cost to retrofit a portfolio of federal buildings. The application illustrates how the choice of distribution affects cost estimates.
APA, Harvard, Vancouver, ISO, and other styles
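The FEMA 156 data set and the exact GLM specification used by Fung et al. are not reproduced in this listing. As a hedged illustration of the general approach the abstract describes (regressing retrofit cost on building characteristics and reporting prediction error), a minimal sketch on synthetic data might look like the following; the predictors, coefficients, and log-linear simplification are assumptions, not the published model.

```python
# Hedged sketch: predicting seismic retrofit cost from building attributes.
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Invented building characteristics (stand-ins for the FEMA 156 covariates).
area_sqft = rng.uniform(5_000, 200_000, n)
num_stories = rng.integers(1, 20, n)
high_seismicity = rng.integers(0, 2, n)          # 1 = high-hazard region

# Synthetic "true" cost model used only to generate toy data.
log_cost = (2.0 + 0.8 * np.log(area_sqft) + 0.05 * num_stories
            + 0.3 * high_seismicity + rng.normal(0, 0.25, n))
cost = np.exp(log_cost)

# Fit a log-linear regression (a simplification of the paper's log-link GLM).
X = np.column_stack([np.ones(n), np.log(area_sqft), num_stories, high_seismicity])
beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)

# In-sample prediction error on the log scale as a rough uncertainty measure.
rmse = np.sqrt(np.mean((X @ beta - np.log(cost)) ** 2))

# Predict the cost of one hypothetical building in a portfolio.
x_new = np.array([1.0, np.log(50_000), 6, 1])
print("fitted coefficients:", np.round(beta, 3))
print("log-scale RMSE:", round(float(rmse), 3))
print("predicted retrofit cost ($):", round(float(np.exp(x_new @ beta)), 2))
```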
5

Qura-tul-ain Khan and Tahir Alyas. "Modeling Emotional Mutation and Evolvement Using Genetic Algorithm in Agency." Lahore Garrison University Research Journal of Computer Science and Information Technology 1, no. 2 (June 30, 2017): 52–61. http://dx.doi.org/10.54692/lgurjcsit.2017.010228.

Full text
Abstract:
The human mind has the ability to generate emotions based on the internal and external environment. These emotions are based on past experiences and the current situation. Mutation of emotions in humans is a change in the intensity of an emotion, and the more intense an emotion is, the greater its chances of persisting. In the mutative state, two emotions are crossed over, and of the resulting emotions only the fittest and strongest survives. Emotional mutation and evolvement help the human mind in decision making and in generating responses. In an agency, the phenomenon of emotional modeling can be accomplished by mutation and evolvement for generating output. A genetic algorithm is a computational model inspired by the evolution of biological populations; by using the mutation and crossover operators of a genetic algorithm, the agency is able to generate output. This paper presents an algorithmic approach to emotional mutation and evolvement using a genetic algorithm for generating output in an agency.
APA, Harvard, Vancouver, ISO, and other styles
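The abstract above describes emotions being crossed over and mutated, with only the most intense surviving, but gives no concrete algorithm. The snippet below is a minimal, assumption-laden sketch of one such generational loop over an "emotion intensity" vector; the emotion set, fitness definition, and parameters are illustrative, not taken from the paper.

```python
# Hedged sketch: one evolutionary loop of crossover/mutation over emotion intensities.
import random

EMOTIONS = ["joy", "anger", "fear", "sadness"]

def crossover(a, b):
    """Blend two emotion-intensity vectors (uniform crossover)."""
    return {e: random.choice([a[e], b[e]]) for e in EMOTIONS}

def mutate(emotions, rate=0.2):
    """Randomly perturb intensities, clipped to [0, 1]."""
    return {e: min(1.0, max(0.0, v + random.uniform(-rate, rate)))
            for e, v in emotions.items()}

def fitness(emotions):
    # Assumption: the "fittest" emotional state is the most intense one overall.
    return max(emotions.values())

random.seed(0)
population = [{e: random.random() for e in EMOTIONS} for _ in range(10)]
for generation in range(20):
    parents = sorted(population, key=fitness, reverse=True)[:4]   # selection
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(len(population))]
best = max(population, key=fitness)
print("surviving dominant emotion:", max(best, key=best.get))
```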
6

SCHÜTTE, TINO. "INVESTMENT ADJUSTMENTS IN PRODUCT MARKET COMPETITION." International Journal of Innovation and Technology Management 10, no. 05 (October 2013): 1340017. http://dx.doi.org/10.1142/s0219877013400178.

Full text
Abstract:
Situated in the research field of market structure and strategic behavior, a model is developed that shows the impacts of investment adjustments on product market competition. Placed in a multi-firm, multi-product setting, the consequences of decisions to split budgets into (i) marketing and development activities and (ii) development expenditures on innovative or imitative activities are investigated. The model is validated with empirical data from the pharmaceutical industry, especially the drug market in Germany. An agent-based modeling and simulation approach is used to explain how the freedom of firms to adjust their investment according to an absolute (individual aspiration level) or relative comparison (success of competitors) can change market performance. The results show that investment strategies adjusted to the behavior of direct competitors outperform adjustments based on individual aspiration levels.
APA, Harvard, Vancouver, ISO, and other styles
7

Sukhonosenko, Zakhar Viktorovich, and Luybov Vadimovna Gajkova. "AGENT MODELING AS ANALYSIS TOOL IMPLEMENTATION OF INFORMATION AND ANALYTICAL ECOSYSTEMS." Krasnoyarsk Science 8, no. 4 (December 25, 2019): 124–38. http://dx.doi.org/10.12731/2070-7568-2019-4-124-138.

Full text
Abstract:
The use of agent modeling in the early analysis of the expected results of integrating designed software into business processes is considered using the example of an information and analytical model that implements a new approach to the promotion of financial products and addresses the problem of low conversion of Internet and mobile advertising. Purpose: determination of scientifically grounded approaches to ecosystem performance growth under different values of the efficiency of financial product promotion in the consumer market. Methodology: the dialectical method is used as a general scientific method of cognition, together with techniques and tools for systemic, comparative, statistical, economic, and financial analysis, and simulation methods and tools. Results: a new approach to promotion is described that addresses the problem of low conversion of Internet and mobile advertising of financial products and is characterized by the use of agent simulation as a basis for the analytical processing of research results. Scope of results: heads of companies deciding on favorable and unfavorable conditions of the information and analytical financial ecosystem as one of the components of a business development strategy.
APA, Harvard, Vancouver, ISO, and other styles
8

Ertsen, M. W., J. T. Murphy, L. E. Purdue, and T. Zhu. "A journey of a thousand miles begins with one small step – human agency, hydrological processes and time in socio-hydrology." Hydrology and Earth System Sciences 18, no. 4 (April 8, 2014): 1369–82. http://dx.doi.org/10.5194/hess-18-1369-2014.

Full text
Abstract:
Abstract. When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.
APA, Harvard, Vancouver, ISO, and other styles
9

Ertsen, M. W., J. T. Murphy, L. E. Purdue, and T. Zhu. "A journey of a thousand miles begins with one small step – human agency, hydrological processes and time in socio-hydrology." Hydrology and Earth System Sciences Discussions 10, no. 11 (November 21, 2013): 14265–304. http://dx.doi.org/10.5194/hessd-10-14265-2013.

Full text
Abstract:
Abstract. When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, the other way out is developed: to face human agency squarely, and direct the modeling approach to the human agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.
APA, Harvard, Vancouver, ISO, and other styles
10

Sperry, Richard C., and Antonie J. Jetter. "A Systems Approach to Project Stakeholder Management: Fuzzy Cognitive Map Modeling." Project Management Journal 50, no. 6 (July 10, 2019): 699–715. http://dx.doi.org/10.1177/8756972819847870.

Full text
Abstract:
Projects that make effective use of project stakeholder management (PSM) tend to run smoothly and be successful because stakeholders understand and agree with the project approaches and outcomes. Projects with ineffective stakeholder management, on the other hand, frequently experience delays and cost overruns or may even be terminated. To date, project teams have limited methodological support for PSM: Existing methods are dominantly static and internally focused, making it difficult to manage so-called external stakeholders, who are not under the authority of the project manager. This work aims to improve PSM practice by closing the methodological gap. We developed a novel decision-support methodology, based on Fuzzy Cognitive Map (FCM) modeling that leverages stakeholders’ public comments to anticipate the project’s impacts on them and to make conflicts between stakeholder interests and project objectives transparent. A demonstration of the method is provided using a single case—namely, a longitudinal case study at Bonneville Power Administration (BPA), a federal agency that provides power to the Pacific Northwest.
APA, Harvard, Vancouver, ISO, and other styles
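Fuzzy cognitive map inference itself follows a standard pattern: concept activations are repeatedly propagated through a signed weight matrix and squashed by a sigmoid until they settle. The concepts, weights, and scenario below are invented placeholders, not the Bonneville Power Administration case-study map from the article.

```python
# Hedged sketch: fuzzy cognitive map inference (iterate activations to a fixed point).
import numpy as np

concepts = ["project_schedule", "stakeholder_support", "public_comments", "cost"]
# W[i, j]: causal influence of concept i on concept j, in [-1, 1] (illustrative values).
W = np.array([
    [ 0.0, -0.4,  0.0,  0.5],
    [ 0.6,  0.0,  0.0, -0.3],
    [ 0.0,  0.7,  0.0,  0.0],
    [-0.5, -0.2,  0.0,  0.0],
])

def sigmoid(x, lam=2.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

state = np.array([0.5, 0.5, 1.0, 0.5])     # scenario: strong public comments
for _ in range(30):                         # iterate until activations settle
    state = sigmoid(state @ W + state)      # common FCM update with self-memory
for name, value in zip(concepts, np.round(state, 2)):
    print(f"{name}: {value}")
```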

Dissertations / Theses on the topic "Agency approach to imitative modeling"

1

Чумак, Анна Вадимівна. "Моделі фінансового ринку на основі агентського підходу" [Financial market models based on an agent approach]. Master's thesis, КПІ ім. Ігоря Сікорського, 2019. https://ela.kpi.ua/handle/123456789/33084.

Full text
Abstract:
The master's dissertation comprises 90 pages and contains 12 illustrations, 22 tables, and 20 sources. The dissertation investigates the problem of modeling financial markets. Modeling is a complex problem, especially in cases where financial time series exhibit fractal properties. Owing to the disagreement between Walrasian equilibrium theory and the statistical regularities that appear when studying modern data, new methods of financial market analysis are considered: agent-oriented modeling and fractal analysis of time series. The purpose of the study is to investigate agent-oriented modeling and to demonstrate the expediency of its further use as a tool for analysts, traders, and other decision-makers. The object of study is modern financial market models. The subject of study is agent-based approaches to the financial market. The research methods are model analysis and comparison of the developed behavior with real financial data. The work considers two agent-oriented stock market models: the Sato-Takayasu model and the generalized Ising model.
The master's thesis consists of 88 pages and contains 12 illustrations, 25 tables, and 20 sources. The dissertation explores the problem of financial market modeling. Modeling is a complex problem, especially in cases where financial time series exhibit fractal properties. Owing to the disagreement between Walrasian equilibrium theory and the statistical regularities that emerge in the study of current data, new methods of financial market analysis are considered: agent-oriented modeling and fractal time series analysis. The purpose of the study is to investigate agent-oriented modeling and to prove the feasibility of its further use as a tool by analysts, traders, and other decision-makers. The object of study is modern models of the financial market. The subject of the study is agent-based approaches to the financial market. The research methods are model analysis and comparison of the developed behavior with real financial data. Two agent-oriented stock market models are considered in the paper: the Sato-Takayasu model and the generalized Ising model.
APA, Harvard, Vancouver, ISO, and other styles
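Neither the Sato-Takayasu model nor the thesis's generalized Ising model is reproduced in this listing; the fragment below is a bare-bones, hedged sketch of an Ising-type trader model in which each agent imitates its neighbours' buy/sell state and excess demand moves the price. Topology, parameters, and the price rule are assumptions for illustration only.

```python
# Hedged sketch: Ising-type agent-based market model (imitation of neighbours).
import numpy as np

rng = np.random.default_rng(2)
N, STEPS = 200, 500
imitation_strength, noise = 1.2, 0.1
spins = rng.choice([-1, 1], size=N)          # +1 = buy, -1 = sell
log_price, prices = 0.0, []

for _ in range(STEPS):
    i = rng.integers(N)
    neighbours = spins[[(i - 1) % N, (i + 1) % N]]      # ring topology
    field = imitation_strength * neighbours.mean()
    p_up = 1.0 / (1.0 + np.exp(-2.0 * field))           # Glauber-style update
    spins[i] = 1 if rng.random() < p_up else -1
    if rng.random() < noise:                             # idiosyncratic flip
        spins[i] *= -1
    log_price += 0.01 * spins.mean()                     # excess demand moves price
    prices.append(np.exp(log_price))

returns = np.diff(np.log(prices))
print("volatility of simulated returns:", float(returns.std()))
```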
2

Demirel, Hande. "An integrated approach to the conceptual data modeling of an entire highway agency geographic information system (GIS)." [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=963981048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jensen, Jonathan A. "The Path to Global Sport Sponsorship Success: An Event History Analysis Modeling Approach." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1426070279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Demirel, Hande [Verfasser]. "An integrated approach to the conceptual data modeling of an entire highway agency geographic information system (GIS) / vorgelegt von Hande Demirel." 2002. http://d-nb.info/963981048/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Agency approach to imitative modeling"

1

Trajkovski, Goran. Imitation-Based Approach to Modeling Homogenous Agents Societies. IGI Global, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

An Imitation-Based Approach to Modeling Homogenous Agents Societies (Computational Intelligence and Its Applications Series). IGI Global, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Trajkovski, Goran. An Imitation-based Approach to Modeling Homogenous Agents Societies (Computational Intelligence and Its Applications Series) (Computational Intelligence and Its Applications Series). Idea Group Publishing, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Agency approach to imitative modeling"

1

Trajkovski, Goran. "An Imitation-Based Approach to Modeling Homogenous Agents Societies." In Progress in Artificial Intelligence, 246–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45329-6_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bwana, K. M. "Modeling Efficiency in the Presence of Volunteering Agency Hospitals and Council Designated Hospitals in Tanzania: An Application of Non Parametric Approach." In Sustainable Education and Development, 173–82. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68836-3_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Aboushouk, Mahmoud Ahmed, Hala Hilaly, and Nashwa Fouad Attallah. "Organizational Barriers to Knowledge-Sharing." In Handbook of Research on International Travel Agency and Tour Operation Management, 184–200. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8434-6.ch012.

Full text
Abstract:
This chapter aimed at identifying and removing knowledge-sharing organizational barriers in the Egyptian tourism companies. The deductive approach and quantitative method were employed by this study. Moreover, a semi-structured questionnaire distributed to a sample of 278 tourism companies is used for data collection purposes. Structural equation modeling (SEM) is used for data analysis. Findings revealed significant effect of organizational barriers on knowledge-sharing behavior in tourism companies' context. A set of recommendations to overcome the perceived barriers of knowledge-sharing in tourism companies was introduced.
APA, Harvard, Vancouver, ISO, and other styles
4

Martinello, Magnos, Mohamed Kaâniche, and Karama Kanoun. "Performability Evaluation of Web-Based Services." In Performance and Dependability in Service Computing, 243–64. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-794-4.ch011.

Full text
Abstract:
The joint evaluation of performance and dependability in a unique approach leads to the notion of performability which usually combines different analytical modeling formalisms (Markov chains, queueing models, etc.) for assessing systems behaviors in the presence of faults. This chapter presents a systematic modeling approach allowing designers of web-based services to evaluate the performability of the service provided to the users. We have developed a multi-level modeling framework for analyzing the user perceived performability. Multiple sources of service unavailability are taken into account, particularly i) hardware and software failures affecting the servers, and ii) performance degradation due to e.g. overload of servers and probability of loss. The main concepts and the feasibility of the proposed framework are illustrated using a web-based travel agency. Various analytical models and sensitivity studies are presented considering different assumptions with respect to users profiles, architecture, faults, recovery strategies, and traffic characteristics.
APA, Harvard, Vancouver, ISO, and other styles
5

Posada, Marta. "Emissions Permits Auctions." In Social Simulation, 180–91. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-522-1.ch014.

Full text
Abstract:
In this chapter the authors demonstrate with three relevant issues that Agent Based Modeling (ABM) is very useful to design emissions permits auctions and to forecast emission permits prices. They argue that ABM offers a more efficient approach to auction design than the usual mechanistic models. The authors set up the essential components of any market institution far beyond supply and demand. They build an ABM for the emissions permits auction of the Environment Protection Agency (EPA), and demonstrate why the EPA failed. In the second experiment they show that in a competitive and efficient auction, the Continuous Double Auction, there is room for traders learning and strategic behavior, thus clearing the perfect market paradox. In the third experiment they build an ABM of the Spanish electricity market to get CO2 emissions prices forecasts that are more accurate than those obtained with econometric or mechanistic models.
APA, Harvard, Vancouver, ISO, and other styles
6

Vinayakumar, R., K. P. Soman, and Prabaharan Poornachandran. "Evaluation of Recurrent Neural Network and its Variants for Intrusion Detection System (IDS)." In Deep Learning and Neural Networks, 295–316. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0414-7.ch018.

Full text
Abstract:
This article describes how sequential data modeling is a relevant task in Cybersecurity. Sequences are attributed temporal characteristics either explicitly or implicitly. Recurrent neural networks (RNNs) are a subset of artificial neural networks (ANNs) which have appeared as a powerful, principled approach to learn dynamic temporal behaviors in an arbitrary length of large-scale sequence data. Furthermore, stacked recurrent neural networks (S-RNNs) have the potential to learn complex temporal behaviors quickly, including sparse representations. To leverage this, the authors model network traffic as a time series, particularly transmission control protocol / internet protocol (TCP/IP) packets in a predefined time range with a supervised learning method, using millions of known good and bad network connections. To find out the best architecture, the authors complete a comprehensive review of various RNN architectures with their network parameters and network structures. Ideally, as a test bed, they use the existing benchmark Defense Advanced Research Projects Agency (DARPA) / Knowledge Discovery and Data Mining (KDD) Cup '99 intrusion detection (ID) contest data set to show the efficacy of these various RNN architectures. All the experiments of deep learning architectures are run up to 1000 epochs with a learning rate in the range [0.01-0.5] on a GPU-enabled TensorFlow and experiments of traditional machine learning algorithms are done using Scikit-learn. Experiments of families of RNN architecture achieved a low false positive rate in comparison to the traditional machine learning classifiers. The primary reason is that RNN architectures are able to store information for long-term dependencies over time-lags and to adjust with successive connection sequence information. In addition, the effectiveness of RNN architectures is shown for the UNSW-NB15 data set.
APA, Harvard, Vancouver, ISO, and other styles
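The chapter's full experimental setup (KDD Cup '99 / UNSW-NB15 preprocessing, the range of RNN variants, 1000-epoch runs) is not reproduced here. A minimal, hedged sketch of the basic ingredient, an LSTM binary classifier over connection-record sequences, is shown below on random stand-in data; the window length, feature count, and hyperparameters are assumptions rather than the chapter's configuration.

```python
# Hedged sketch: an LSTM classifier over connection-record sequences (toy data).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
timesteps, n_features, n_samples = 10, 41, 512     # 41 mirrors KDD-style features
X = rng.normal(size=(n_samples, timesteps, n_features)).astype("float32")
y = rng.integers(0, 2, size=n_samples)             # 0 = normal, 1 = attack

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(timesteps, n_features)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print("training accuracy on the toy data:",
      float(model.evaluate(X, y, verbose=0)[1]))
```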
7

Paton, Ray, and Michael Fisher. "Proteins and Information Processing." In Cellular Computing. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780195155396.003.0006.

Full text
Abstract:
This chapter reviews and briefly discusses a set of computational methods that can assist biologists when seeking to model interactions between components in spatially heterogeneous and changing environments. The approach can be applied to many scales of biological organization, and the illustrations we have selected apply to networks of interaction among proteins. Biological populations, whether ecological or molecular, homogeneous or heterogeneous, moving or stationary, can be modeled at different scales of organization. Some models can be constructed that focus on factors or patterns that characterize the population as a whole such as population size, average mass or length, and so forth. Other models focus on values associated with individuals such as age, energy reserve, and spatial association with other individuals. A distinction can be made between population (p-state) and individual (i-state) variables and models. We seek to develop a general approach to modeling biosystems based on individuals. Individual-based models (IBMs) typically consist of an environment or framework in which interactions occur and a number of individuals defined in terms of their behaviors (such as procedural rules) and characteristic parameters. The actions of each individual can be tracked through time. IBMs represent heterogeneous systems as sets of nonidentical, discrete, interacting, autonomous, adaptive agents (e.g., Devine and Paton [5]). They have been used to model the dynamics of population interaction over time in ecological systems, but IBMs can equally be applied to biological systems at other levels of scale. The IBM approach can be used to simulate the emergence of global information processing from individual, local interactions in a population of agents. When it is sensible and appropriate, we seek to incorporate an ecological and social view of inter-agent interactions to all scales of the biological hierarchy. In this case we distinguish among individual “devices” (agents), networks (societies or communities), and networks in habitats (ecologies). In that they are able to interact with other molecules in subtle and varied ways, we may say that many proteins have social abilities. This social dimension to protein agency also presupposes that proteins have an underlying ecology in that they interact with other molecules including substrates, products, regulators, cytoskeleton, membranes, water, and local electric fields.
APA, Harvard, Vancouver, ISO, and other styles
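The chapter argues for individual-based models (IBMs) of interacting "protein agents" rather than population-level equations. A minimal, hedged sketch of the IBM pattern, an environment plus autonomous individuals with local interaction rules, each tracked through time, is given below; the binding rule and parameters are invented for illustration and are not from the chapter.

```python
# Hedged sketch: individual-based model (IBM) skeleton with local interaction rules.
import random
from dataclasses import dataclass

@dataclass
class ProteinAgent:
    x: float
    y: float
    bound: bool = False

    def step(self, agents, interaction_radius=1.0):
        # Random walk (diffusion) ...
        self.x += random.uniform(-0.5, 0.5)
        self.y += random.uniform(-0.5, 0.5)
        # ... plus a local "social" rule: bind if another agent is nearby.
        for other in agents:
            if other is not self and not self.bound:
                if (self.x - other.x) ** 2 + (self.y - other.y) ** 2 < interaction_radius ** 2:
                    self.bound = other.bound = True

random.seed(3)
agents = [ProteinAgent(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]
for t in range(100):                 # track each individual through time
    for agent in agents:
        agent.step(agents)
print("bound fraction:", sum(a.bound for a in agents) / len(agents))
```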

Conference papers on the topic "Agency approach to imitative modeling"

1

Raina, Ayush, Christopher McComb, and Jonathan Cagan. "Learning to Design From Humans: Imitating Human Designers Through Deep Learning." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-97399.

Full text
Abstract:
Abstract Humans as designers have quite versatile problem-solving strategies. Computer agents on the other hand can access large scale computational resources to solve certain design problems. Hence, if agents can learn from human behavior, a synergetic human-agent problem solving team can be created. This paper presents an approach to extract human design strategies and implicit rules, purely from historical human data, and use that for design generation. A two-step framework that learns to imitate human design strategies from observation is proposed and implemented. This framework makes use of deep learning constructs to learn to generate designs without any explicit information about objective and performance metrics. The framework is designed to interact with the problem through a visual interface as humans did when solving the problem. It is trained to imitate a set of human designers by observing their design state sequences without inducing problem-specific modelling bias or extra information about the problem. Furthermore, an end-to-end agent is developed that uses this deep learning framework as its core in conjunction with image processing to map pixel-to-design moves as a mechanism to generate designs. Finally, the designs generated by a computational team of these agents are then compared to actual human data for teams solving a truss design problem. Results demonstrate that these agents are able to create feasible and efficient truss designs without guidance, showing that this methodology allows agents to learn effective design strategies.
APA, Harvard, Vancouver, ISO, and other styles
2

Brown, Ronald, Shannon White, Jennifer Goode, Prachi Pradeep, and Stephen Merrill. "Use of QSAR Modeling to Predict the Carcinogenicity of Color Additives." In ASME 2013 Conference on Frontiers in Medical Devices: Applications of Computer Modeling and Simulation. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/fmd2013-16161.

Full text
Abstract:
Patients may be exposed to potentially carcinogenic color additives released from polymers used to manufacture medical devices; therefore, the need exists to adequately assess the safety of these compounds. The US FDA Center for Devices and Radiological Health (CDRH) recently issued draft guidance that, when final, will include FDA’s recommendations for the safety evaluation of color additives and other potentially toxic chemical entities that may be released from device materials. Specifically, the draft guidance outlines an approach that calls for evaluating the potential for the color additive to be released from the device in concert with available toxicity information about the additive to determine what types of toxicity information, if any, are necessary. However, when toxicity data are not available from the literature for the compounds of interest, a scientific rationale can sometimes be provided for omission of these tests. Although the FDA has issued draft guidance on this topic, the Agency continues to explore alternative approaches to understand when additional toxicity testing is needed to assure the safety of medical devices that contain color additives. An emerging approach that may be useful for determining the need for further testing of compounds released from device materials is Quantitative Structure Activity Relationship (QSAR) modeling. In this paper, we have shown how three publicly available QSAR models (OpenTox/Lazar, Toxtree, and the OECD Toolbox) are able to successfully predict the carcinogenic potential of a set of color additives with a wide range of structures. As a result, this computational modeling approach may serve as a useful tool for determining the need to conduct carcinogenicity testing of color additives intended for use in medical devices.
APA, Harvard, Vancouver, ISO, and other styles
3

Abe, Satoshi, Etienne Studer, Masahiro Ishigaki, Yasuteru Sibamoto, and Taisuke Yonomoto. "Applicability of Dynamic Modeling for Turbulent Schmidt and Prandtl Numbers on Density Stratification Breakup in Several Flow Conditions." In 2020 International Conference on Nuclear Engineering collocated with the ASME 2020 Power Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/icone2020-16837.

Full text
Abstract:
Abstract Many experiments on density stratification breakup in several flow conditions have been performed with large- and small-scale experimental facilities to understand the mechanism underlying hydrogen behavior in a nuclear containment vessel during a severe accident. To improve the predictability of the RANS (Reynolds-averaged Navier-Stokes) approach, we implemented dynamic modeling for the turbulent Schmidt (Sc_t) and Prandtl (Pr_t) numbers. In this paper, the capability of the RANS analysis with dynamic Sc_t modeling is assessed with several sets of experimental data obtained using the MISTRA (Commissariat à l’énergie atomique et aux énergies alternatives, CEA, France), CIGMA and VIMES (Japan Atomic Energy Agency, Japan) facilities. For the quantitative assessment, the focus is on the completion time of the stratification breakup, defined as the moment when the helium concentration in the upper region decreases to the same value as in the lower region. The comparison study shows the good performance of the dynamic modeling for Sc_t and Pr_t. Besides, in the case with a low jet Froude number, the CFD accuracy declines significantly because the upward bending of the jet is over-estimated.
APA, Harvard, Vancouver, ISO, and other styles
4

Bartashevich, M. V., V. V. Kuznetsov, and O. A. Kabov. "Mathematical Modeling of Rivulet Flow Driven by Variable Gravity and Gas Flow in a Minichannel." In ASME 2008 6th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2008. http://dx.doi.org/10.1115/icnmm2008-62135.

Full text
Abstract:
Rivulet flows are a special type of thin film flow with a bounded width. The use of rivulet flows is very promising in various types of process equipment such as evaporators. The flow dynamics in rivulets has some peculiarities, the investigation of which helps in understanding possible mechanisms of heat transfer enhancement. In the present work, a mathematical model for rivulet flow under conditions of variable gravity has been elaborated. The liquid flow takes place in a slot between two plates and is driven by a co-current gas flow. Numerical calculations of the flow parameters as a function of the gravity force have been made. An analytical formula connecting the main rivulet parameters (width, contact angle, liquid flow rate ...) in a linearized approach has been derived. A comparison of the numerical results with experimental data obtained during the 44th Parabolic Flight campaign of the European Space Agency has been carried out. A liquid film of FC-72 driven by nitrogen gas was studied in the experiments. The force balance changes during a parabolic flight, and due to the surface tension effect the liquid film in a horizontal minichannel 40 mm wide became a flattened rivulet 9 mm wide at microgravity.
APA, Harvard, Vancouver, ISO, and other styles
5

Biggs, Simon, Michael Fairweather, James Young, Robin W. Grimes, Neil Milestone, and Francis Livens. "The KNOO Research Consortium: Work Package 3—An Integrated Approach to Waste Immobilisation and Management." In ASME 2009 12th International Conference on Environmental Remediation and Radioactive Waste Management. ASMEDC, 2009. http://dx.doi.org/10.1115/icem2009-16375.

Full text
Abstract:
The Keeping the Nuclear Option Open (KNOO) research consortium is a four-year research council funded initiative addressing the challenges related to increasing the safety, reliability and sustainability of nuclear power in the UK. Through collaboration between key industrial and governmental stakeholders, and with international partners, KNOO was established to maintain and develop skills relevant to nuclear power generation. Funded by a research grant of £6.1M from the “Towards a Sustainable Energy Economy Programme” of the UK Research Councils, it represents the single largest university-based nuclear research programme in the UK for more than 30 years. The programme is led by Imperial College London, in collaboration with the universities of Manchester, Sheffield, Leeds, Bristol, Cardiff and the Open University. These universities are working with the UK nuclear industry, who contributed a further £0.4M in funding. The industry/government stakeholders include AWE, British Energy, the Department for Environment, Food and Rural Affairs, the Environment Agency, the Health and Safety Executive, Doosan Babcock, the Ministry of Defence, Nirex, AMEC NNC, Rolls-Royce PLC and the UK Atomic Energy Authority. Work Package 3 of this consortium, led by the University of Leeds, concerns “An Integrated Approach to Waste Immobilisation and Management”, and involves Imperial College London, and the Universities of Manchester and Sheffield. The aims of this work package are: to study the re-mobilisation, transport, solid-liquid separation and immobilisation of particulate wastes; to develop predictive models for particle behaviour based on atomic scale, thermodynamic and process scale simulations; to develop a fundamental understanding of selective adsorption of nuclides onto filter systems and their immobilisation; and to consider mechanisms of nuclide leaving and transport. The paper describes highlights from this work in the key areas of multi-scale modeling (using atomic scale, thermodynamic and process scale models), the engineering properties of waste (linking microscopic and macroscopic behaviour, and transport and rheology), and waste reactivity (considering waste hosts and wasteforms, generation IV wastes, and waste interactions).
APA, Harvard, Vancouver, ISO, and other styles
6

Nanstad, Randy K., and Marc Scibetta. "IAEA Coordinated Research Project on Master Curve Approach to Monitor Fracture Toughness of RPV Steels: Effects of Bias, Constraint, and Geometry." In ASME 2007 Pressure Vessels and Piping Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/pvp2007-26231.

Full text
Abstract:
There is strong interest from the nuclear industry to use the precracked Charpy single-edge notched bend, SE(B), specimen (PCVN) to enable determination of the reference temperature, T0, with reactor pressure vessel surveillance specimens. Unfortunately, for many different ferritic steels, tests with the PCVN specimen (10×10×55 mm) have resulted in T0 temperatures up to 25°C lower than T0 values obtained using data from 25-mm thick compact specimens [1TC(T)]. This difference in T0 reference temperature has often been designated a specimen bias effect, and the primary focus for explaining this effect is loss of constraint in the PCVN specimen. The International Atomic Energy Agency has developed a three-part coordinated research project (CRP) to evaluate various issues associated with the fracture toughness Master Curve for application to light-water reactor pressure vessels. One part of the CRP is focused on the issue of test specimen geometry effects, with emphasis on the PCVN bias. Participating organizations for this part of the CRP are performing fracture toughness testing of various steels, including the reference steel JRQ (A533-B-1) often used for IAEA studies, with various types of specimens under various conditions. Additionally, many of the participants are taking part in a round robin exercise on finite element modeling of the PCVN specimen. Some preliminary results from fracture toughness tests are compared with regard to effects of specimen size and type on the reference temperature T0. In agreement with a number of published results, the results do generally show lower values of T0 from the PCVN specimen compared with the compact and larger bend specimens. They also clearly show higher apparent fracture toughness for the shallow crack compared with the deep crack configuration. Moreover, the SE(B) specimens exhibit a tendency for decreasing T0 with decreasing specimen size (thickness and/or remaining ligament). Additionally, as shown in previous CRPs, the results also exhibit a dependence on test temperature. Following completion of all testing, the results will be evaluated relative to existing proposed models with a view towards developing an understanding of the reasons for the observed differences.
APA, Harvard, Vancouver, ISO, and other styles
7

McNally, Amanda D. "A Tiered Approach for Evaluating the Sustainability of Remediation Activities at Rail Sites." In 2018 Joint Rail Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/jrc2018-6163.

Full text
Abstract:
Remediation of environmental sites is of concern across the rail industry. Impacted sites may result from releases of chemicals to the environment along active rail lines or in rail yards; historical activities; or through acquisition of impacted property. Management of these liabilities may require investigation, planning, design, and remediation to reduce risks to human health and the environment and meet regulatory requirements. However, these investigation and remediation activities may generate unintended environmental, community, or economic impacts. To address these impacts, many organizations are focusing on the incorporation of sustainability concepts into the remediation paradigm. Sustainable remediation is defined as the use of sustainable practices during the investigation, construction, redevelopment, and monitoring of remediation sites, with the objective of balancing economic viability, conservation of natural resources and biodiversity, and the enhancement of the quality of life in surrounding communities (Sustainable Remediation Forum [SURF]). Benefits of considering and implementing measures to balance the three pillars of sustainability (i.e., society, economics, and environment) may include lower project implementation costs, reduced cleanup timeframes, and maximizing beneficial while alleviating detrimental impacts to surrounding communities. Sustainable remediation has evolved from discussions of environmental impacts of cleanups (with considerable greenwashing), to quantifying and minimizing the environmental footprint and subsequent long-term global impacts of a remedy, and currently, incorporating strategies to address all three components of sustainability — environmental, social, and economic. As organizations expand their use of more sustainable approaches to site cleanup, it is beneficial to establish consistent objectives and metrics that will guide implementation across a portfolio of sites. Sustainable remediation objectives should be consistent with corporate sustainability goals for environmental performance (e.g., greenhouse gas emissions, resource consumption, or waste generation), economic improvements (i.e., reduction of long term liability), and community engagement. In the last decade, there have been several Executive Orders (13423, 13514, 13693) that provide incrementally advanced protocols for achieving sustainability in government agency and corporate programs. Resources for remediation practitioners are available to assist in developing sustainable approaches, including SURF’s 2009 White Paper and subsequent issue papers, ITRC’s Green and Sustainable Remediation: State of the Science and Practice (GSR-1) and A Practical Framework (GSR-2), and ASTM’s Standard Guide for Greener Cleanups (E2893-16) and Standard Guide for Integrating Sustainable Objectives into Cleanup (E2876-13). These documents discuss frameworks that may be applied to projects of any size and during any phase of the remediation life cycle, and many provide best management practices (BMPs) that may be implemented to improve the environmental, social, or economic aspects of a project. Many of these frameworks encourage a tiered approach that matches the complexity of a sustainability assessment to the cost and scope of the remediation. For small remediation sites, a sustainability program may include the selection, implementation, or tracking of BMPs. 
A medium-sized remediation site may warrant the quantification of environmental impacts (e.g., air emissions, waste generation, etc.) during the evaluation and selection of remedial alternatives. Often, only large and costly remediation sites demand detailed quantitative assessment of environmental impacts (e.g., life cycle assessment), economic modeling, or extensive community or stakeholder outreach. However, if a tiered approach is adopted by an organization, components of each of these assessments can be incorporated into projects where it makes sense to meet the needs of the stakeholders.
APA, Harvard, Vancouver, ISO, and other styles
8

Nanstad, Randy K., Milan Brumovsky, Rogelio Herna´ndez Callejas, Ferenc Gillemot, Mikhail Korshunov, Bong Sang Lee, Enrico Lucon, et al. "IAEA Coordinated Research Project on Master Curve Approach to Monitor Fracture Toughness of RPV Steels: Final Results of the Experimental Exercise to Support Constraint Effects." In ASME 2009 Pressure Vessels and Piping Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/pvp2009-78022.

Full text
Abstract:
The precracked Charpy single-edge notched bend, SE(B), specimen (PCC) is the most likely specimen type to be used for determination of the reference temperature, T0, with reactor pressure vessel (RPV) surveillance specimens. Unfortunately, for many RPV steels, significant differences have been observed between the T0 temperature for the PCC specimen and that obtained from the 25-mm thick compact specimen [1TC(T)], generally considered the standard reference specimen for T0. This difference in T0 has often been designated a specimen bias effect, and the primary focus for explaining this effect is loss of constraint in the PCC specimen. The International Atomic Energy Agency (IAEA) has developed a coordinated research project (CRP) to evaluate various issues associated with the fracture toughness Master Curve for application to light-water RPVs. Topic Area 1 of the CRP is focused on the issue of test specimen geometry effects, with emphasis on determination of T0 with the PCC specimen and the bias effect. Topic Area 1 has an experimental part and an analytical part. Participating organizations for the experimental part of the CRP performed fracture toughness testing of various steels, including the reference steel JRQ (A533-B-1) often used for IAEA studies, with various types of specimens under various conditions. Additionally, many of the participants took part in a round robin exercise on finite element modeling of the PCVN specimen, discussed in a separate paper. Results from fracture toughness tests are compared with regard to effects of specimen size and type on the reference temperature T0. It is apparent from the results presented that the bias observed between the PCC specimen and larger specimens for Plate JRQ is not nearly as large as that obtained for Plate 13B (−11°C vs −37°C) and for some of the results in the literature (bias values as much as −45°C). This observation is consistent with observations in the literature that show significant variations in the bias that are dependent on the specific materials being tested. There are various methods for constraint adjustments and two methods were used that reduced the bias for Plate 13B from −37°C to −13°C in one case and to −11°C in the second case. Unfortunately, there is not a consensus methodology available that accounts for the differences observed with different materials. Increasing the Mlim value in the ASTM E-1921 to ensure no loss of constraint for the PCC specimen is not a practicable solution because the PCC specimen is derived from CVN specimens in RPV surveillance capsules and larger specimens are normally not available. Resolution of these differences is needed for application of the master curve procedure to operating RPVs, but the research needed for such resolution is beyond the scope of this CRP.
APA, Harvard, Vancouver, ISO, and other styles
9

Saegusa, Hiromitsu, Hironori Onoe, Shinji Takeuchi, Ryuji Takeuchi, and Takuya Ohyama. "Hydrogeological Characterization on Surface-Based Investigation Phase in the Mizunami Underground Research Laboratory Project, in Japan." In The 11th International Conference on Environmental Remediation and Radioactive Waste Management. ASMEDC, 2007. http://dx.doi.org/10.1115/icem2007-7117.

Full text
Abstract:
The Mizunami Underground Research Laboratory (MIU) project is being carried out by Japan Atomic Energy Agency in the Cretaceous Toki granite in the Tono area, central Japan. The MIU project is a purpose-built generic underground research laboratory project that is planned for a broad scientific study of the deep geological environment as a basis of research and development for geological disposal of nuclear wastes. One of the main goals of the MIU project is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment. The MIU project has three overlapping phases: Surface-based Investigation (Phase I), Construction (Phase II) and Operation (Phase III). Hydrogeological investigations using a stepwise process in Phase I have been carried out in order to obtain information on important properties such as, location of water conducting features, hydraulic conductivity and so on. Hydrogeological modeling and groundwater flow simulations in Phase I have been carried out in order to synthesize these investigation results, to evaluate the uncertainty of the hydrogeological model and to identify the main issues for further investigations. Using the stepwise hydrogeological characterization approach and combining the investigation with modeling and simulation, understanding of the hydrogeological environment has been progressively improved.
APA, Harvard, Vancouver, ISO, and other styles
10

Thompson, Jason, and Christopher Boyd. "CFD Verification and Validation Exercise: Turbulent Mixing of Stratified Layer." In ASME 2020 Verification and Validation Symposium. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/vvs2020-8821.

Full text
Abstract:
Abstract The US Nuclear Regulatory Commission (NRC) participated in an Organization for Economic Cooperation and Development / Nuclear Energy Agency (OECD/NEA) benchmark activity based on testing in the PANDA facility located at the Paul Scherrer Institute in Switzerland. In this test, a stratified helium layer was eroded by a turbulent jet from below. NRC participated in this benchmark to develop expertise and modeling guidelines for computational fluid dynamics (CFD) in anticipation of utilizing these methods for future safety and confirmatory analyses. CFD predictions using ANSYS FLUENT V19.0 are benchmarked using the PANDA test data, and sensitivity studies are used to evaluate the significance of key phenomena, such as boundary conditions and modeling options, that impact the helium erosion rates and jet velocity distribution. The k-epsilon realizable approach with second order differencing resulted in the best prediction of the test data. The most significant phenomena are found to be the inlet mass flowrate and turbulent Schmidt number. CFD uncertainty for helium and velocity due to numerical error and input parameter uncertainty are predicted using a sensitivity coefficient approach. Numerical uncertainty resulting from the mesh design is estimated using a Grid Convergence Index (GCI) approach. Meshes of 0.5, 1.5 (base mesh), and 4.5 million cells are used for the GCI. Approximately second order grid convergence was observed but p (order of convergence) values from 1 to 5 were common. The final helium predictions with one-sigma uncertainty interval generally bounded the experimental data. The predicted jet centerline velocity was approximately 50% of the measured value at multiple measurement locations. This velocity benchmark is likely affected by the difference in the He content between the experiment and prediction. The predicted jet centerline velocity with the one-sigma uncertainty interval did not bound the experimental data.
APA, Harvard, Vancouver, ISO, and other styles
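The Grid Convergence Index estimate mentioned in the abstract follows the standard three-grid, Roache-style procedure; the short calculation below illustrates it with made-up solution values and the 0.5/1.5/4.5-million-cell meshes quoted in the abstract. The numbers and the safety factor are assumptions, not the benchmark results.

```python
# Hedged sketch: Grid Convergence Index (GCI) from solutions on three meshes.
import math

f_coarse, f_medium, f_fine = 0.412, 0.394, 0.388   # e.g. helium molar fraction (made up)
r = (4.5e6 / 1.5e6) ** (1.0 / 3.0)                 # effective refinement ratio (3D meshes)

# Observed order of convergence (constant-refinement-ratio form).
p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# GCI on the fine grid with safety factor Fs = 1.25 (three-grid study).
Fs = 1.25
rel_error = abs((f_fine - f_medium) / f_fine)
gci_fine = Fs * rel_error / (r ** p - 1.0)

print(f"observed order p = {p:.2f}")
print(f"fine-grid GCI    = {100 * gci_fine:.2f}%")
```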