Academic literature on the topic 'Deep Equilibrium Models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep Equilibrium Models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Deep Equilibrium Models"

1

Lafond, Patrick G., R. Gary Grim, and Amadeu K. Sum. "Clathrate hydrate equilibrium modeling: Do self-consistent cell models provide unique equilibrium solutions?" Canadian Journal of Chemistry 93, no. 8 (August 2015): 826–30. http://dx.doi.org/10.1139/cjc-2014-0558.

Full text
Abstract:
When clathrate hydrates of xenon gas are formed deep within the stability field, anomalous melting behavior is readily observed in differential scanning calorimetry (DSC). In the DSC thermograms, multiple dissociation events may be observed, suggesting the presence of more than one solid phase. Following a suite of diffraction and NMR measurements, we are only able to detect the presence of simple structure I hydrate. Recognizing that hydrates are nonstoichiometric compounds, we look back to how the molar composition of a hydrate phase is determined. Making a mean-field improvement to current equilibrium models, we find that some conditions yield multiple solutions to the cage filling of the hydrate phase. Though the solutions are not truly stable, they would result in a kinetically trapped system. If such a case existed experimentally, this could explain the dissociation behavior observed for xenon hydrates. More importantly, this raises the question of how well defined the equilibrium condition is for a cell potential model, and whether or not multiple equilibrium solutions could exist.
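
The multiplicity discussed in this abstract can be illustrated with the standard van der Waals–Platteeuw cage-filling relation; a schematic sketch, assuming a simple mean-field coupling for illustration (not the authors' exact closure):

```latex
% Langmuir-type cage occupancy for a single guest (van der Waals--Platteeuw):
\theta = \frac{C f}{1 + C f}
% A mean-field correction couples the Langmuir constant to the occupancy
% itself, e.g. C(\theta) = C_0 \exp(a\theta), turning the occupancy into a
% fixed-point problem:
\theta = \frac{C(\theta)\, f}{1 + C(\theta)\, f}
% For sufficiently strong coupling a, this equation can admit more than one
% solution at the same fugacity f --- the multiple equilibrium cage fillings
% raised in the abstract.
```
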
2

Plant, R. S., and G. C. Craig. "A Stochastic Parameterization for Deep Convection Based on Equilibrium Statistics." Journal of the Atmospheric Sciences 65, no. 1 (January 1, 2008): 87–105. http://dx.doi.org/10.1175/2007jas2263.1.

Full text
Abstract:
A stochastic parameterization scheme for deep convection is described, suitable for use in both climate and NWP models. Theoretical arguments and the results of cloud-resolving models are discussed in order to motivate the form of the scheme. In the deterministic limit, it tends to a spectrum of entraining/detraining plumes and is similar to other current parameterizations. The stochastic variability describes the local fluctuations about a large-scale equilibrium state. Plumes are drawn at random from a probability distribution function (PDF) that defines the chance of finding a plume of given cloud-base mass flux within each model grid box. The normalization of the PDF is given by the ensemble-mean mass flux, and this is computed with a CAPE closure method. The characteristics of each plume produced are determined using an adaptation of the plume model from the Kain–Fritsch parameterization. Initial tests in the single-column version of the Unified Model verify that the scheme is effective in producing the desired distributions of convective variability without adversely affecting the mean state.
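
As a rough sketch of the sampling step this abstract describes, one can draw a plume ensemble from an exponential mass-flux PDF normalized by the ensemble-mean mass flux; the exponential form and the Poisson plume count below are illustrative assumptions, not necessarily the scheme's exact choices:

```python
import numpy as np

def sample_plumes(mean_total_flux, mean_plume_flux, seed=None):
    """Draw one random plume ensemble for a grid box.

    mean_total_flux : ensemble-mean mass flux from a CAPE-type closure
    mean_plume_flux : assumed mean cloud-base mass flux of a single plume
    """
    rng = np.random.default_rng(seed)
    expected_n = mean_total_flux / mean_plume_flux  # <N> so that <N><m> = <M>
    n = rng.poisson(expected_n)                      # stochastic plume count
    return rng.exponential(mean_plume_flux, size=n)  # per-plume mass fluxes

fluxes = sample_plumes(mean_total_flux=0.05, mean_plume_flux=0.005, seed=1)
print(len(fluxes), fluxes.sum())  # fluctuates about 10 plumes and 0.05 total
```
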
3

Azmoon, Behnam, Aynaz Biniyaz, and Zhen (Leo) Liu. "Evaluation of Deep Learning against Conventional Limit Equilibrium Methods for Slope Stability Analysis." Applied Sciences 11, no. 13 (June 29, 2021): 6060. http://dx.doi.org/10.3390/app11136060.

Full text
Abstract:
This paper presents a comparison study between deep learning methods, a new category of slope stability analysis built upon recent advances in artificial intelligence, and conventional limit equilibrium analysis methods. For this purpose, computer code was developed to calculate the factor of safety (FS) using four limit equilibrium methods: Bishop's simplified method, the Fellenius method, Janbu's simplified method, and Janbu's corrected method. The code was verified against Slide2 in RocScience. Using this code, a comprehensive dataset of slope images with wide ranges of geometries and soil properties was created, and the average FS values were used to approximate the “true” FS of each slope and label the images for two deep learning models: a multiclass classification model and a regression model. After training, the deep learning models were used to predict the FS of an independent set of slope images. Finally, the performance of the models was compared to that of the conventional methods. This study found that deep learning methods can reach accuracies as high as 99.71% while improving computational efficiency by more than 18 times compared with conventional methods.
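
For reference, the Fellenius (ordinary method of slices) factor of safety named in this abstract has a simple closed form; a minimal sketch with hypothetical slice data (not the paper's code or geometry):

```python
import numpy as np

def fellenius_fs(W, alpha, c, phi, l):
    """Ordinary-method-of-slices factor of safety.

    W     : slice weights (kN/m)
    alpha : slice base inclinations (rad)
    c     : cohesion along the slip surface (kPa)
    phi   : soil friction angle (rad)
    l     : slice base lengths (m)
    """
    resisting = np.sum(c * l + W * np.cos(alpha) * np.tan(phi))
    driving = np.sum(W * np.sin(alpha))
    return resisting / driving

# Hypothetical five-slice circular failure surface:
W = np.array([50.0, 120.0, 160.0, 120.0, 50.0])
alpha = np.radians([-10.0, 5.0, 15.0, 30.0, 45.0])
print(f"FS = {fellenius_fs(W, alpha, c=10.0, phi=np.radians(25.0), l=2.0):.2f}")
```
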
4

Kollau, Laura J. B. M., Mark Vis, Adriaan van den Bruinhorst, Gijsbertus de With, and Remco Tuinier. "Activity modelling of the solid–liquid equilibrium of deep eutectic solvents." Pure and Applied Chemistry 91, no. 8 (August 27, 2019): 1341–49. http://dx.doi.org/10.1515/pac-2018-1014.

Full text
Abstract:
Compared to conventional solvents used in the chemical industry, deep eutectic solvents (DESs) are considered promising, potentially sustainable solvents. DESs are binary mixtures, and the resulting liquid mixture is characterized by a large melting point depression with respect to the melting temperatures of its constituents. The relative melting point depression becomes larger as the two components have stronger attractive interactions, resulting in non-ideal behavior. The compositional range over which such binary mixtures are liquids is set by the location of the solid–liquid phase boundary. Here we present experimental phase diagrams of various recent and new DESs that vary in the degree of non-ideality. We investigate whether thermodynamic models are able to describe the solid–liquid equilibria and focus on relating the parameters of these models to the non-ideal behavior, including asymmetric behavior of the activity coefficients. It is shown that the orthogonal Redlich–Kister-like polynomial (OP) expansion, including an additional first order term, provides an accurate description. This theory can be considered an extension of regular solution theory and enables physical interpretation of the fit parameters.
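
For orientation, the solid–liquid equilibrium fitted in work like this usually starts from the ideal-solubility relation with activity-coefficient corrections; a sketch of the standard forms (the paper's orthogonal polynomial plays the role of the excess Gibbs energy below):

```latex
% Liquidus of component i, neglecting heat-capacity terms:
\ln\left(x_i \gamma_i\right) = \frac{\Delta H_{m,i}}{R}\left(\frac{1}{T_{m,i}} - \frac{1}{T}\right)
% Activity coefficients from an excess Gibbs energy, e.g. a
% Redlich--Kister expansion:
\frac{G^E}{RT} = x_1 x_2 \sum_{k} A_k \,(x_1 - x_2)^k
% Attractive (negative) non-ideality deepens the eutectic relative to the
% ideal prediction, which is the "deep" in deep eutectic solvents.
```
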
5

Yano, Jun-Ichi, and Robert Plant. "Interactions between Shallow and Deep Convection under a Finite Departure from Convective Quasi Equilibrium." Journal of the Atmospheric Sciences 69, no. 12 (December 1, 2012): 3463–70. http://dx.doi.org/10.1175/jas-d-12-0108.1.

Full text
Abstract:
The present paper presents a simple theory for the transformation of nonprecipitating, shallow convection into precipitating, deep convective clouds. To make the pertinent point, a much-idealized system is considered, consisting only of shallow and deep convection without large-scale forcing. The transformation is described by an explicit coupling between these two types of convection. Shallow convection moistens and cools the atmosphere, whereas deep convection dries and warms the atmosphere, leading to destabilization and stabilization, respectively. Consequently, in their own stand-alone modes, shallow convection perpetually grows, whereas deep convection simply damps: the former never reaches equilibrium, and the latter is never spontaneously generated. Coupling the modes together is the only way to reconcile these undesirable separate tendencies, so that the convective system as a whole can remain in a stable periodic state under this idealized setting. Such coupling is a key missing element in current global atmospheric models. The energy cycle description used herein is fully consistent with the original formulation by Arakawa and Schubert, and is suitable for direct implementation into models using a mass flux parameterization. The coupling would alleviate current problems with the representation of these two types of convection in numerical models. The present theory also provides a pertinent framework for analyzing large-eddy simulations and cloud-resolving modeling.
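
The coupling logic of this abstract can be caricatured by a linear two-mode system; an illustrative sketch only (not the paper's energy-cycle equations), with K_s and K_d standing for the shallow and deep convective energies:

```latex
\frac{dK_s}{dt} = +\sigma_s K_s - c_{sd} K_d \qquad \text{(stand-alone growth, damped by deep convection)}
\frac{dK_d}{dt} = +c_{ds} K_s - \sigma_d K_d \qquad \text{(stand-alone decay, forced by shallow convection)}
% With \sigma_s = \sigma_d and c_{sd} c_{ds} > \sigma_s \sigma_d, the
% eigenvalues are purely imaginary, so the coupled system oscillates
% periodically instead of growing or decaying without bound.
```
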
6

Nick, F. M., and J. Oerlemans. "Dynamics of tidewater glaciers: comparison of three models." Journal of Glaciology 52, no. 177 (2006): 183–90. http://dx.doi.org/10.3189/172756506781828755.

Full text
Abstract:
A minimal model of a tidewater glacier based solely on mass conservation is compared with two one-dimensional numerical flowline models, one with the calving rate proportional to water depth, and the other with the flotation criterion as a boundary condition at the glacier terminus. The models were run with two simplified bed geometries and two mass-balance formulations. The models simulate the full cycle of length variations and the equilibrium states for a tidewater glacier. This study shows that the branching of the equilibrium states depends significantly on the bed geometry. The similarity between the results of the three models indicates that if there is a submarine undulation at the terminus of a tidewater glacier, any model in which the frontal ice loss is related to the water depth yields qualitatively the same non-linear behaviour. For large glaciers extending into deep water, the flotation model causes unrealistic behaviour.
7

Sladkowski, A. V., Y. O. Kyrychenko, P. I. Kogut, and V. I. Samusya. "Innovative designs of pumping deep-water hydrolifts based on progressive multiphase non-equilibrium models." Naukovyi Visnyk Natsionalnoho Hirnychoho Universytetu, no. 2 (April 2019): 51–57. http://dx.doi.org/10.29202/nvngu/2019-2/6.

Full text
8

Latash, Mark L. "Equilibrium-point control? Yes! Deterministic mechanisms of control? No!" Behavioral and Brain Sciences 18, no. 4 (December 1995): 765–66. http://dx.doi.org/10.1017/s0140525x00040899.

Full text
Abstract:
The equilibrium-point hypothesis (the λ-model) is superior to all other models of single-joint control and provides deep insights into the mechanisms of control of multi-joint movements. Attempts at associating control variables with neurophysiological variables look confusing rather than promising. Probabilistic mechanisms may play an important role in movement generation in redundant systems.
9

Zalai, Ernő. "The von Neumann Model and the Early Models of General Equilibrium." Acta Oeconomica 54, no. 1 (May 1, 2004): 3–38. http://dx.doi.org/10.1556/aoecon.54.2004.1.2.

Full text
Abstract:
The paper reconstructs the von Neumann model, comments on its salient features and critically reviews some of its generalisations. The issues related to the treatment of consumption, decomposability and uniqueness of the rate of growth and interest will be especially scrutinised. The most prominent models of general equilibrium that appeared before or roughly at the same time as von Neumann's model will also be reviewed in the paper and compared with it. It will be demonstrated that none of them had any noticeable influence on von Neumann's model, which is genuinely distinct, ideologically free and methodologically fresh and forward-looking. It will be argued that the model can be viewed as a brilliant mathematical metaphor of some deep-rooted old vision, pertaining to the core issues of commodity production.
10

Tawfik, Abdel Nasser. "Equilibrium statistical–thermal models in high-energy physics." International Journal of Modern Physics A 29, no. 17 (June 26, 2014): 1430021. http://dx.doi.org/10.1142/s0217751x1430021x.

Full text
Abstract:
We review some recent highlights from the applications of statistical–thermal models to different experimental measurements and lattice QCD thermodynamics that have been made during the last decade. We start with a short review of the historical milestones on the path of constructing statistical–thermal models for heavy-ion physics. We discovered that Heinz Koppe formulated, in 1948, an almost complete recipe for the statistical–thermal models. In 1950, Enrico Fermi generalized this statistical approach, in which he started with a general cross-section formula and inserted into it the simplifying assumptions about the matrix element of the interaction process that likely reflects many features of the high-energy reactions dominated by density in the phase space of final states. In 1964, Hagedorn systematically analyzed the high-energy phenomena using all tools of statistical physics and introduced the concept of limiting temperature based on the statistical bootstrap model. It turns out that many-particle systems can quite often be studied with the help of statistical–thermal methods. The analysis of yield multiplicities in high-energy collisions gives overwhelming evidence for the chemical equilibrium in the final state. The strange particles might be an exception, as they are suppressed at lower beam energies; however, their relative yields fulfill statistical equilibrium as well. We review the equilibrium statistical–thermal models for particle production, fluctuations and collective flow in heavy-ion experiments. We also review their reproduction of the lattice QCD thermodynamics at vanishing and finite chemical potential. During the last decade, five conditions have been suggested to describe the universal behavior of the chemical freeze-out parameters. The higher order moments of multiplicity have been discussed; they offer deep insights into particle production and critical fluctuations. Therefore, we use them to describe the freeze-out parameters and suggest the location of the QCD critical endpoint. Various extensions have been proposed in order to take into consideration possible deviations from the ideal hadron gas. We highlight various types of interactions, dissipative properties and location-dependences (spatial rapidity). Furthermore, we review three models combining hadronic with partonic phases: the quasi-particle model, the linear sigma model with Polyakov potentials, and the compressible bag model.
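
For reference, the primary particle densities underlying such statistical–thermal fits follow from the grand-canonical ideal hadron gas; the standard textbook form is (natural units; the reviewed extensions add interactions on top of this):

```latex
n_i = \frac{g_i}{2\pi^2} \int_0^\infty \frac{p^2 \, dp}{\exp\!\left[\left(\sqrt{p^2 + m_i^2} - \mu_i\right)/T\right] \pm 1}
% +: fermions, -: bosons;  \mu_i = B_i\,\mu_B + S_i\,\mu_S + Q_i\,\mu_Q.
% Fitting measured yield ratios then fixes the chemical freeze-out
% parameters (T, \mu_B).
```
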

Dissertations / Theses on the topic "Deep Equilibrium Models"

1

Lee, Charles Kai-Wu. "Eurythermalism of a deep-sea symbiosis system from an enzymological aspect." The University of Waikato, 2007. http://hdl.handle.net/10289/2588.

Full text
Abstract:
The recently proposed and experimentally validated Equilibrium Model provides the most detailed description of temperature's effect on enzyme catalytic activity to date. By introducing an equilibrium between Eact, the active form of enzyme, and Einact, a reversibly inactivated form of enzyme, the Equilibrium Model explains apparent enzyme activity loss at high temperatures that cannot be accounted for by irreversible thermal denaturation. The Equilibrium Model describes enzyme behavior in the presence of substrates and under assay conditions; thus its associated parameters, ΔHeq and Teq, may have physiological significance. The Equilibrium Model parameters have been determined for twenty-one enzymes of diverse origins. The results demonstrated the wide applicability of the Equilibrium Model to enzymes of different types and temperature affinity. The study has also established ΔHeq as the first quantitative measure of enzyme eurythermalism and demonstrated the relationship between Teq and the optimal growth temperature of organisms. The Equilibrium Model is therefore a useful tool for studying enzyme temperature adaptation and its role in adaptations to thermophily and eurythermalism. Moreover, it potentially enables a description of the originating environment from the properties of the enzymes. The Equilibrium Model has been employed to characterize enzymes isolated from bacterial episymbionts of Alvinella pompejana. A. pompejana inhabits one of the most extreme environments known to science and has been proposed as an extremely eurythermal organism. A metagenomic study of the A. pompejana episymbionts has unveiled new information related to the adaptive and metabolic properties of the bacterial consortium; the availability of metagenomic sequences has also enabled targeted retrieval and heterologous expression of A. pompejana episymbiont genes. By inspecting enzymes derived from the unique episymbiotic microbial consortium intimately associated with A. pompejana, the study has shed light on temperature adaptations in this unique symbiotic relationship. The findings suggested that eurythermal enzymes are one of the mechanisms used by the microbial consortium to achieve its adaptations. By combining metagenomic and enzymological studies, the research described in this thesis has led to insights into the eurythermalism of a complex microbial system from an enzymological aspect. The findings have enhanced our knowledge of how life adapts to extreme environments, and the validation of the Equilibrium Model as a tool for studying enzyme temperature adaptation paves the way for future studies.
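
A compact statement of the Equilibrium Model sketched in this abstract, as commonly written (a simplified form that omits the irreversible denaturation step the full model includes):

```latex
E_{act} \rightleftharpoons E_{inact}, \qquad
K_{eq} = \frac{[E_{inact}]}{[E_{act}]} = \exp\!\left[\frac{\Delta H_{eq}}{R}\left(\frac{1}{T_{eq}} - \frac{1}{T}\right)\right]
% Observed activity before denaturation:
V_{max}(T) = \frac{k_{cat}(T)\, E_0}{1 + K_{eq}}
% At T = T_{eq} the active and inactive forms are equally populated; a small
% \Delta H_{eq} gives a broad activity peak, hence its use as a quantitative
% measure of eurythermalism.
```
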
2

Son, Karen N. "Improved Prediction of Adsorption-Based Life Support for Deep Space Exploration." Thesis, 2019.

Find full text
Abstract:
Adsorbent technology is widely used in many industrial applications including waste heat recovery, water purification, and atmospheric revitalization in confined habitations. Astronauts depend on adsorbent-based systems to remove metabolic carbon dioxide (CO2) from the cabin atmosphere; as NASA prepares for the journey to Mars, engineers are redesigning the adsorbent-based system for reduced weight and optimal efficiency. These efforts hinge upon the development of accurate, predictive models, as simulations are increasingly relied upon to save cost and time over the traditional design-build-test approach. Engineers rely on simplified models to reduce computational cost and enable parametric optimizations. Amongst these simplified models is the axially dispersed plug-flow model for predicting the adsorbate concentration during flow through an adsorbent bed. This model is ubiquitously used in designing fixed-bed adsorption systems. The current work aims to improve the accuracy of the axially dispersed plug-flow model because of its wide-spread use. This dissertation identifies the critical model inputs that drive the overall uncertainty in important output quantities and then systematically improves the measurement and prediction of these input parameters. Limitations of the axially dispersed plug-flow model are also discussed, and recommendations are made for identifying failure of the plug-flow assumption.

An uncertainty and sensitivity analysis of an axially dispersed plug-flow model is first presented. Upper and lower uncertainty bounds for each of the model inputs are found by comparing empirical correlations against experimental data from the literature. Model uncertainty is then investigated by independently varying each model input between its individual upper and lower uncertainty bounds and then observing the relative change in predicted effluent concentration and temperature (e.g., breakthrough time, bed capacity, and effluent temperature). This analysis showed that the LDF mass transfer coefficient is the largest source of uncertainty. Furthermore, the uncertainty analysis reveals that ignoring the effect of wall-channeling on apparent axial dispersion can cause significant error in the predicted breakthrough times of small-diameter beds.

In addition to the LDF mass transfer coefficient and axial dispersion, equilibrium isotherms are known to be strong lever arms and a potentially dominant source of model error. As such, detailed analysis of the equilibrium adsorption isotherms for zeolite 13X was conducted to improve the fidelity of the CO2 and H2O equilibrium isotherms compared with extant data. These two adsorbent/adsorbate pairs are of great interest as NASA plans to use zeolite 13X in the next generation atmospheric revitalization system. Equilibrium isotherms describe a sorbent’s maximum capacity at a given temperature and adsorbate (e.g., CO2 or H2O) partial pressure. New isotherm data from NASA Ames Research Center and NASA Marshall Space Flight Center for CO2 and H2O adsorption on zeolite 13X are presented. These measurements were carefully collected to eliminate sources of bias in previous data from the literature, where incomplete activation resulted in a reduced capacity. Several models are fit to the new equilibrium isotherm data and recommendations for the best-fitting model are made. The best-fit isotherm models from this analysis are used in all subsequent modeling efforts discussed in this dissertation.

The last two chapters examine the limitations of the axially disperse plug-flow model for predicting breakthrough in confined geometries. When a bed of pellets is confined in a rigid container, packing heterogeneities near the wall lead to faster flow around the periphery of the bed (i.e., wall channeling). Wall-channeling effects have long been considered negligible for beds which hold more than 20 pellets across; however, the present work shows that neglecting wall-channeling effects on dispersion can yield significant errors in model predictions. There is a fundamental gap in understanding the mechanisms which control wall-channeling driven dispersion. Furthermore, there is currently no way to predict wall channeling effects a priori or even to identify what systems will be impacted by it. This dissertation aims to fill this gap using both experimental measurements and simulations to identify mechanisms which cause the plug-flow assumption to fail.

First, experimental evidence of wall-channeling in beds, even at large bed-to-pellet diameter ratios (d_bed/d_p = 48), is presented. These experiments are then used to validate a method for accurately extracting mass transfer coefficients from data affected by significant wall channeling. The relative magnitudes of wall-channeling effects are shown to be a function of the adsorbent/adsorbate pair and geometric confinement (i.e., bed size). Ultimately, the axially dispersed plug-flow model fails to capture the physics of breakthrough when non-plug-flow conditions prevail in the bed.

The final chapter of this dissertation develops a two-dimensional (2-D) adsorption model to examine the interplay of wall-channeling, adsorption kinetics, and the adsorbent equilibrium capacity on breakthrough in confined geometries. The 2-D model incorporates the effect of radial variations in porosity on the velocity profile and is shown to accurately capture the effect of wall-channeling on adsorption behavior. The 2-D model is validated against experimental data and then used to investigate whether capacity or adsorption kinetics cause certain adsorbates to exhibit more significant radial variations in concentration than others. This work explains how channeling effects can vary for different adsorbate/adsorbent pairs, even under otherwise identical conditions, and highlights the importance of considering adsorption kinetics in addition to the traditional d_bed/d_p criterion.

This dissertation investigates key gaps in our understanding of fixed-bed adsorption. It will deliver insight into how these missing pieces impact the accuracy of predictive models and provide a means for reconciling these errors. The culmination of this work will be an accurate, predictive model that assists in the simulation-based design of the next-generation atmospheric revitalization system for humans’ journey to Mars.
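
For readers unfamiliar with the model this dissertation revolves around, the axially dispersed plug-flow equation with a linear-driving-force (LDF) uptake term takes the following standard form (notation assumed here, not copied from the dissertation):

```latex
\frac{\partial c}{\partial t} + u\,\frac{\partial c}{\partial z}
  = D_L \frac{\partial^2 c}{\partial z^2}
  - \frac{1-\varepsilon}{\varepsilon}\,\rho_p\,\frac{\partial \bar{q}}{\partial t}
% LDF approximation for the adsorbed-phase loading:
\frac{\partial \bar{q}}{\partial t} = k_{LDF}\left(q^*(c,T) - \bar{q}\right)
% q^*: equilibrium isotherm (the zeolite 13X fits above); D_L: axial
% dispersion, the term biased by wall channeling; k_{LDF}: the dominant
% uncertainty identified in the sensitivity analysis.
```
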
3

Scellier, Benjamin. "A deep learning theory for neural networks grounded in physics." Thesis, 2020. http://hdl.handle.net/1866/25593.

Full text
Abstract:
In the last decade, deep learning has become a major component of artificial intelligence, leading to a series of breakthroughs across a wide variety of domains. The workhorse of deep learning is the optimization of loss functions by stochastic gradient descent (SGD). Traditionally in deep learning, neural networks are differentiable mathematical functions, and the loss gradients required for SGD are computed with the backpropagation algorithm. However, the computer architectures on which these neural networks are implemented and trained suffer from speed and energy inefficiency issues, due to the separation of memory and processing in these architectures. To solve these problems, the field of neuromorphic computing aims at implementing neural networks on hardware architectures that merge memory and processing, just like brains do. In this thesis, we argue that building large, fast and efficient neural networks on neuromorphic architectures also requires rethinking the algorithms to implement and train them. We present an alternative mathematical framework, also compatible with SGD, which offers the possibility to design neural networks in substrates that directly exploit the laws of physics. Our framework applies to a very broad class of models, namely those whose state or dynamics are described by variational equations. This includes physical systems whose equilibrium state minimizes an energy function, and physical systems whose trajectory minimizes an action functional (principle of least action). We present a simple procedure to compute the loss gradients in such systems, called equilibrium propagation (EqProp), which requires solely locally available information for each trainable parameter. Since many models in physics and engineering can be described by variational principles, our framework has the potential to be applied to a broad variety of physical systems, whose applications extend to various fields of engineering, beyond neuromorphic computing.
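
A minimal sketch of the two-phase EqProp update for an energy-based network follows; the Hopfield-style energy, relaxation dynamics, and parameter names are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

rho = np.tanh  # neuron nonlinearity

def settle(W, b, x, s, beta=0.0, y=None, steps=500, lr=0.05):
    """Relax the state s toward an equilibrium of a Hopfield-style energy
    (inputs x held clamped); beta > 0 nudges the state toward target y."""
    for _ in range(steps):
        drive = W @ rho(s) + b + x          # schematic recurrent drive
        if beta:
            drive = drive + beta * (y - s)  # weak clamping toward the target
        s = s + lr * (drive - s)            # leaky relaxation dynamics
    return s

def eqprop_step(W, b, x, y, beta=0.5, eta=0.01):
    """One EqProp parameter update: contrast free and nudged equilibria."""
    s_free = settle(W, b, x, np.zeros_like(b))                 # free phase
    s_nudged = settle(W, b, x, s_free.copy(), beta=beta, y=y)  # nudged phase
    # Gradient estimate ~ (1/beta) * [dE/dW(nudged) - dE/dW(free)],
    # with E = -0.5 rho(s)^T W rho(s), so dE/dW = -rho rho^T:
    dW = (np.outer(rho(s_nudged), rho(s_nudged))
          - np.outer(rho(s_free), rho(s_free))) / beta
    return W + eta * dW
```

Each update needs only quantities measurable locally at equilibrium, which is what makes the scheme attractive for physical (neuromorphic) substrates.
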

Books on the topic "Deep Equilibrium Models"

1

Johnsen, Bredo. Nelson Goodman. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190662776.003.0008.

Full text
Abstract:
Goodman addressed the problem of induction twice. His first approach is famous, centers on his “new riddle of induction,” and is the locus classicus of modern reflective equilibrium theory. In it the focus is on inductive inferences and rules of inductive inference. In his second approach, the focus is instead on the conclusions of inductive inferences to explanations of the available data. Here reflective equilibrium theory is more fully developed. The author in this chapter argues that Goodman’s two accounts of inductive justification in terms of reflective equilibrium share a deep commonality.

Book chapters on the topic "Deep Equilibrium Models"

1

Ertenli, Can Ufuk, Emre Akbas, and Ramazan Gokberk Cinbis. "Streaming Multiscale Deep Equilibrium Models." In Lecture Notes in Computer Science, 189–205. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20083-0_12.

Full text
2

Chau, Nguyen Minh, Le Truong Giang, and Dinh Viet Sang. "PolypDEQ: Towards Effective Transformer-Based Deep Equilibrium Models for Colon Polyp Segmentation." In Advances in Visual Computing, 456–67. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20713-6_35.

Full text
3

Valeyeva, Nailya Sh, Roman V. Kupriyanov, Julia N. Ziyatdinova, and Farida F. Frolova. "Self-Sustaining Ecosystem for Learning and Communication." In Handbook of Research on Ecosystem-Based Theoretical Models of Learning and Communication, 211–32. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7853-6.ch013.

Full text
Abstract:
The global society faces a number of challenges and risks. To adapt, it is important to have liquid learning and communication skills so as to remain flexible and open to new knowledge. Therefore, there is a growing demand for smart individuals demonstrating a deep desire for self-directed professional and life development. Scientific research into self-directed professional development strategies is of crucial importance. A person who aims at self-directed professional development is one who knows and uses certain mechanisms, methods, and techniques to build and update work-related knowledge, qualities, and skills, thus planning and tracking his or her own career growth to become competitive in the global market and to keep the static and dynamic equilibrium of the global society in balance. In real life, students seldom recognize the importance of self-directed development as a learning skill for their personal independence, and for the static and dynamic equilibrium of the global society as a whole; thus, they do it spontaneously, inconsistently, and inefficiently.
4

Mishra, Prakash Chandra, and Anil Kumar Giri. "Prediction of Biosorption Capacity Using Artificial Neural Network Modeling and Genetic Algorithm." In Deep Learning and Neural Networks, 144–58. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0414-7.ch010.

Full text
Abstract:
An artificial neural network model is applied for the prediction of the biosorption capacity of living cells of Bacillus cereus for the removal of chromium (VI) ions from aqueous solution. The maximum biosorption capacity of living cells of Bacillus cereus for chromium (VI) was found to be 89.24% at pH 7.5, an equilibrium time of 60 min, a biomass dosage of 6 g/L, and a temperature of 30 ± 2 °C. The biosorption data for chromium (VI) ions collected from a laboratory-scale experimental setup are used to train a backpropagation (BP) learning algorithm with a 4-7-1 architecture. The model uses a tangent sigmoid transfer function from the input to the hidden layer, whereas a linear transfer function is used at the output layer. The data are divided into training (75%) and testing (25%) sets. Comparison between the model results and experimental data gives a high degree of correlation, R2 = 0.984, indicating that the model is able to predict the sorption efficiency with reasonable accuracy. The Bacillus cereus biomass is characterized using AFM and FTIR.
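
The 4-7-1 network described here is straightforward to reproduce; a sketch using scikit-learn with a tanh hidden layer and linear output, trained on synthetic stand-in data (the original dataset and training details are not reproduced):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in inputs: [pH, contact time (min), dosage (g/L), temp (C)]
rng = np.random.default_rng(0)
X = rng.uniform([2, 10, 1, 20], [9, 120, 10, 40], size=(200, 4))
y = 50 + 5 * X[:, 0] - 0.05 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 2, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)

# 4 inputs -> 7 tanh hidden units -> 1 linear output, trained by backpropagation
model = MLPRegressor(hidden_layer_sizes=(7,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on the held-out 25%:", model.score(X_te, y_te))
```
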
5

Moyar, Dean. "Value and the Expressive Conditions of the Subjective Will." In Hegel's Value, 150–88. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780197532539.003.0005.

Full text
Abstract:
This chapter analyzes the pivotal “Morality” section that makes subjective rights and universal welfare essential to the overall conception of justice. It is shown that Hegel’s analysis of the “deed” motivates the move to intentional action in which subjective value and the right to satisfaction come to the fore. The tension between objective and subjective value in the intention leads to the decisive conflict of abstract right and morality in the “right of necessity.” With the Basic Argument template it is shown why the right of necessity leads to an all-encompassing conception of value, the Good, that Hegel calls “the final purpose of the world.” The treatment of formal and true conscience is read in dialogue with the theory of justification that John Rawls calls reflective equilibrium. The chapter argues that conscience is the individual justification akin to reflective equilibrium, and that the transition out of “Morality” highlights the deficiencies of the individual (as opposed to institutional) reflective equilibrium model.
6

Carreño, Ana Luisa, and Javier Helenes. "Geology and Ages of the Islands." In Island Biogeography in the Sea of Cortés II. Oxford University Press, 2002. http://dx.doi.org/10.1093/oso/9780195133462.003.0007.

Full text
Abstract:
Before middle Miocene times, Baja California was attached to the rest of the North American continent. Consequently, most of the terrestrial fauna and flora of the peninsula had its origins in mainland Mexico. However, the separation of the peninsula and its northwestward displacement resulted in a variety of distribution patterns, isolations, extinctions, origins and ultimate evolution of fauna and flora in several ways. The islands in the Gulf of California have been colonized by species from Baja California and mainland Mexico. Some workers (Soulé and Sloan 1966; Wilcox 1978) consider that many of these islands originated as landbridges. Geographically, most of the islands are closer to the peninsula than to the mainland. Therefore, it has been assumed that the Baja California Peninsula was the origin of most of the organisms inhabiting them (Murphy 1983). Islands separated by depths of 110 m or less from the peninsula or mainland Mexico apparently owe their current insular existence to a rise in sea level during the current interglacial period (Soulé and Sloan 1966). In contrast, little information exists for deep-water islands. Any complete analysis of the distribution and origin of several organic groups inhabiting the Gulf of California islands should involve the consideration of several contrasting models arguing in favor of or against the equilibrium theory (MacArthur and Wilson 1967). In any model, one of the most important features to consider is the relationship between the species inhabiting the gulf islands and the physical and geological processes of formation of the islands, as well as their age, size, and distance from either the peninsula or the mainland. Understanding colonization, migration, and distribution, particularly in some groups, requires information on whether a particular island was ever connected to a continental source. For example, to explain some characteristics of the populations of any island which presumably had a recent (<10,000-15,000 years) connection to a continental source, it is necessary to evaluate coastal erosion or the relative rise in sea level. These factors might contribute to effectively isolating an insular habitat or to forming landbridges.
7

Bethke, Craig M. "Geothermometry." In Geochemical Reaction Modeling. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195094756.003.0021.

Full text
Abstract:
Geothermometry is the use of a fluid’s (or, although not discussed here, a rock’s) chemical composition to estimate the temperature at which it equilibrated in the subsurface. The specialty is important, for example, in exploring for and exploiting geothermal fields, characterizing deep groundwater flow systems, and understanding the genesis of ore deposits. Several chemical geothermometers are in widespread use. The silica geothermometer (Fournier and Rowe, 1966) works because the solubilities of the various silica minerals (e.g., quartz and chalcedony, SiO2) increase monotonically with temperature. The concentration of dissolved silica, therefore, defines a unique equilibrium temperature for each silica mineral. The Na-K (White, 1970) and Na-K-Ca (Fournier and Truesdell, 1973) geothermometers take advantage of the fact that the equilibrium points of cation exchange reactions among various minerals (principally, the feldspars) vary with temperature. In applying these methods, it is necessary to make a number of assumptions or corrections (e.g., Fournier, 1977). First, the minerals with which the fluid reacted must be known. Applying the silica geothermometer assuming equilibrium with quartz, for example, would not give the correct result if the fluid’s silica content is controlled by reaction with chalcedony. Second, the fluid must have attained equilibrium with these minerals. Many studies have suggested that equilibrium is commonly approached in geothermal systems, especially for ancient waters at high temperature, but this may not be the case in young sedimentary basins like the Gulf of Mexico basin (Land and Macpherson, 1992). Third, the fluid’s composition must not have been altered by separation of a gas phase, mineral precipitation, or mixing with other fluids. Finally, corrections may be needed to account for the influence of certain dissolved components, including CO2 and Mg++, which affect the equilibrium composition (Paces, 1975; Fournier and Potter, 1979; Giggenbach, 1988). Using geochemical modeling, we can apply chemical geothermometry in a more generalized manner. By utilizing the entire chemical analysis rather than just a portion of it, we avoid some of the restricting assumptions mentioned in the preceding paragraph (see Michard et al., 1981; Michard and Roekens, 1983; and especially Reed and Spycher, 1984). Having constructed a theoretical model of the fluid in question, we can calculate the saturation state of each mineral in the database, noting the temperature at which each is in equilibrium with the fluid.
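
As a concrete example of the silica geothermometer discussed above, the commonly quoted quartz (no steam loss) correlation attributed to Fournier can be evaluated directly; the coefficients below are the widely circulated ones and should be checked against the original source before any real use:

```python
import math

def quartz_geothermometer(silica_mg_per_kg):
    """Equilibration temperature estimate (deg C) from dissolved silica,
    using the commonly quoted quartz, no-steam-loss form:
        T = 1309 / (5.19 - log10(SiO2)) - 273.15,  SiO2 in mg/kg."""
    return 1309.0 / (5.19 - math.log10(silica_mg_per_kg)) - 273.15

print(f"{quartz_geothermometer(300.0):.0f} deg C")  # ~209 for 300 mg/kg silica
```
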

Conference papers on the topic "Deep Equilibrium Models"

1

Koyama, Yuichiro, Naoki Murata, Stefan Uhlich, Giorgio Fabbro, Shusuke Takahashi, and Yuki Mitsufuji. "Music Source Separation With Deep Equilibrium Models." In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022. http://dx.doi.org/10.1109/icassp43922.2022.9746317.

Full text
2

Czechowski, Aleksander, and Frans A. Oliehoek. "Decentralized MCTS via Learned Teammate Models." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/12.

Full text
Abstract:
Decentralized online planning can be an attractive paradigm for cooperative multi-agent systems, due to improved scalability and robustness. A key difficulty of such an approach lies in making accurate predictions about the decisions of other agents. In this paper, we present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search, combined with models of teammates learned from previous episodic runs. By only allowing one agent to adapt its models at a time, under the assumption of ideal policy approximation, successive iterations of our method are guaranteed to improve joint policies, and eventually lead to convergence to a Nash equilibrium. We test the efficiency of the algorithm by performing experiments in several scenarios of the spatial task allocation environment introduced in [Claes et al., 2015]. We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators which exploit the spatial features of the problem, and that the proposed algorithm improves over the baseline planning performance for particularly challenging domain configurations.
3

Croce, Giulio, Giulio Mori, Viatcheslav V. Anisimov, and Joa˜o Parente. "Assessment of Traditional and Flamelets Models for Micro Turbine Combustion Chamber Optimisation." In ASME Turbo Expo 2003, collocated with the 2003 International Joint Power Generation Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/gt2003-38385.

Full text
Abstract:
Different approaches for the numerical simulation of premixed combustion are considered, in order to assess their usefulness as design tools for micro gas turbine systems. In particular, a flamelet-concept routine following N. Peters has been developed, taking into account both the mixture fraction Z and the G function as scalar flame locators, thus allowing computation of complex fully or partially premixed flame structures. The model can also be used in the thin reaction zones regime. Scalar transport equations for G, Z, and their variances are added to the standard Navier-Stokes and turbulence set of equations in order to track the flame position. However, no chemical term appears explicitly in these equations, since the chemical effects are taken into account via pre-computed, locally one-dimensional flamelet solutions. Here, the deep interaction between chemistry and turbulence has been introduced through a flamelet library built under non-equilibrium conditions using CHEMKIN modules. The results of this model are compared with the data obtained with a standard EBU model and different reaction mechanisms. Model validation has been carried out against experimental data from Aachen University for an axisymmetric Bunsen flame; finally, the code was applied to the analysis of a newly designed micro gas turbine combustor.
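
The two scalar flame locators named in this abstract obey standard transport equations; for reference, written schematically in Peters' notation:

```latex
% Level-set (G-)equation for the premixed flame front at G = G_0:
\frac{\partial G}{\partial t} + \mathbf{u}\cdot\nabla G = s_L \,\lvert \nabla G \rvert
% Mixture-fraction transport for the partially premixed structure:
\frac{\partial (\rho Z)}{\partial t} + \nabla\cdot(\rho\, \mathbf{u}\, Z)
  = \nabla\cdot\left(\rho D\, \nabla Z\right)
% Chemistry enters only through the precomputed flamelet library,
% looked up from (Z, its variance) and the flame position given by G.
```
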
4

Mros, Catherine, Kavic Rason, and Brad Kinsey. "Thin Film Superplastic Forming Model for Nanoscale Bulk Metallic Glass Forming." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-68759.

Full text
Abstract:
Geometrically complex, high aspect ratio microstructures have been successfully formed in Bulk Metallic Glass (BMG) via superplastic forming against silicon dies [1–3]. Although nanoscale features have been created in a similar fashion, there exists a demand to develop these metallic nanofeatures into high aspect ratio nanostructures with controlled geometries. In past research, a process model was created to predict the achievable nanoscale feature sizes and aspect ratios through a flow model [4]. The flow model assumes force equilibrium, with a viscous term to account for the force required to produce flow and a capillary pressure term required to overcome surface effects, which are significant at the nanoscale. In this paper, a thin film model is presented to predict the pressure distribution across the BMG during the forming process when it is in the supercooled liquid state. Silicon molds with various nanofeatures were produced using Deep Reactive Ion Etching to achieve high aspect ratio dies over a relatively large area in order to validate these models.
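
A back-of-the-envelope version of the force balance described in this abstract, for a cylindrical die cavity; an illustrative sketch only, not the authors' exact model:

```latex
% Pressure to drive supercooled BMG a depth L into a channel of diameter d
% at velocity v (Hagen--Poiseuille), against a non-wetting capillary
% pressure (Young--Laplace, contact angle \theta > 90^\circ):
P \;\approx\; \underbrace{\frac{32\,\mu L v}{d^{2}}}_{\text{viscous}}
  \;+\; \underbrace{\frac{4\gamma\,\lvert\cos\theta\rvert}{d}}_{\text{capillary}}
% Both terms grow as d shrinks, which is what limits the achievable
% nanoscale aspect ratio L/d at a given forming pressure.
```
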
5

Hou, Ming, Brahim Chaib-draa, Chao Li, and Qibin Zhao. "Generative Adversarial Positive-Unlabelled Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/312.

Full text
Abstract:
In this work, we consider the task of classifying binary positive-unlabeled (PU) data. The existing discriminative learning based PU models attempt to seek an optimal reweighting strategy for U data, so that a decent decision boundary can be found. However, given limited P data, the conventional PU models tend to suffer from overfitting when adapted to very flexible deep neural networks. In contrast, we are the first to innovate a totally new paradigm to attack the binary PU task, from the perspective of generative learning, by leveraging the powerful generative adversarial networks (GAN). Our generative positive-unlabeled (GenPU) framework incorporates an array of discriminators and generators that are endowed with different roles in simultaneously producing positive and negative realistic samples. We provide theoretical analysis to justify that, at equilibrium, GenPU is capable of recovering both positive and negative data distributions. Moreover, we show GenPU is generalizable and closely related to semi-supervised classification. Given rather limited P data, experiments on both synthetic and real-world datasets demonstrate the effectiveness of our proposed framework. With infinite realistic and diverse sample streams generated from GenPU, a very flexible classifier can then be trained using deep neural networks.
6

Zhou, Zhifu, Hui Xin, Bin Chen, and Guo-Xiang Wang. "Theoretical Evaporation Model of a Single Droplet in Laser Treatment of PWS in Conjunction With Cryogen Spray Cooling." In ASME 2008 Heat Transfer Summer Conference collocated with the Fluids Engineering, Energy Sustainability, and 3rd Energy Nanotechnology Conferences. ASMEDC, 2008. http://dx.doi.org/10.1115/ht2008-56063.

Full text
Abstract:
Cryogen spray is an effective cooling technique used during laser treatment of Port Wine Stain. The cooling process involves complex droplet evaporation and strong convective heat and mass transfer; therefore, a deep understanding of spray characteristics is essential in order to optimize the nozzle design and improve the cooling efficiency of the spray. This paper improves a theoretical model describing the equilibrium evaporation process of a single droplet in a cryogen spray. The results of a comparative analysis of gas-phase models for single droplet heating and evaporation are presented. Six different semi-theoretical models based on various assumptions are considered, and their effects on the heating and evaporation characteristics of a single droplet in the process of cryogen R-134a spray cooling are compared. It is pointed out that the gas-phase model in which the effect of superheat is taken into account predicts the evaporation process closest to the experimental data. Finally, a parametric study of the influences of initial diameter and velocity on droplet evaporation is carried out. The results can be used to guide cryogen spray cooling in laser therapy.
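
For scale, the classical d²-law is the simplest equilibrium-evaporation estimate that models like these refine; a standard reference form (the paper's six gas-phase models add superheat and convective corrections):

```latex
d^2(t) = d_0^2 - K\,t, \qquad K = \frac{8\,\rho_g D_v}{\rho_l}\,\ln\!\left(1 + B_M\right)
% d_0: initial diameter; B_M: Spalding mass-transfer number;
% the droplet lifetime then scales as t_{life} = d_0^2 / K.
```
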
7

Saidu Mohamed, Anwarudin, Syafiq Effendi Jalis, Intiran Raman, Kumanan Sanmugam, Dhanaraj Turunawarasu, Mohd Firdaus Samsudin, Al Ashraf Zharif Al Bakri, and Kassim Selamat. "Restoring Technical Potential of Deep-Water Well Impaired by Hydrate Plug Embedded with Wax Deposit with Improved Characterization and Innovative Chemistry." In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/31232-ms.

Full text
Abstract:
Hydrate occurrence is common in deep-water wells, notably when the well experiences a significant reduction in fluid temperature during production. Hence, the operating philosophy must take into consideration the ability to keep the well fluid outside the hydrate or wax phase envelope and ensure that contingencies are in place to mitigate any plug, deposit, or gel formation. This paper illustrates the characterization of the hydrate and wax plugs encountered in two wells in Sabah waters, which were plugged due to cooling of the wells during an unplanned shutdown, and the design of an innovative solution to remediate the blockage. The solution devised sets a precedent for managing temperature-dependent blockages in similar deep-water wells or facilities. Hydrate and wax models were created to predict blockage severity and location. Nodal analysis was used to model thermodynamic equilibrium at the target location of the plug, where the temperature is below the melting point, and ultimately to predict the heat required to dissolve the blockages. A thermochemical system was identified, selected, and customized, and then injected into the well to ensure that the temperature generated at the location of the plug was above the melting point of the hydrate and wax. Thermochemical injection was identified as a viable in-situ heat-generating technique for delivering heat at the desired location. The chemical solution was injected via capillary tubing to transmit the heat by conduction and convection and melt the hydrate and paraffinic plugs in these two wells. An arriving temperature of 40 °C at the target zones was required to melt the plugs. A positive pressure was maintained in the production tubing during chemical injection to avoid a rapid pressure increase as the hydrate plugs dissolved. A temperature of 100 °C was recorded at the wellhead throughout the injection. The downhole gauge indicated a positive response, suggesting that the heat generated was transmitted effectively. After a short duration of injection, communication was established. Hydrate inhibitor was injected to secure the well prior to unloading. The wells were successfully relieved and stabilized at production rates of 1,200 bopd and 800 bopd, respectively. The simulation was redesigned based on data collected from the operation to improve the model for use in future work. The ability to integrate laboratory analysis, computer-aided simulation, and operational data was integral to this work, demonstrating an effective way to characterize temperature-dependent blockages in a production system. Design of experiments provided better insight into the problem. The innovative use of novel chemistry to generate heat in situ solved hydrate- and wax-related issues in a cost-effective manner. The process of customizing a chemical system based on laboratory and simulation results was effective in ensuring delivery of the results. The bull-heading operation to inject the chemical system proved to be a cost-effective remedial method to unlock the barrels and can be considered a preventive or contingency measure for dealing with temperature-dependent blockages or plugs in the future.
8

Rafieepour, Saeed, and Stefan Z. Miska. "Spatio-Temporal Stress Path Prediction Under Different Deformational Conditions." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-61597.

Full text
Abstract:
Drilling new infill wells in depleted reservoirs is extremely problematic and costly due to low formation fracture pressure and the narrow mud window resulting from in-situ stress changes caused by fluid extraction. This is of paramount importance especially for drilling operations in deep-water reservoirs, which require precise prediction of formation fracture pressure. In turn, this entails accurate prediction of reservoir stress changes with pore pressure depletion, i.e., the stress path. Currently used models assume a transient flow regime with reservoir depletion. However, the flow regime in depleted reservoirs is dominantly pseudo-steady state (PSS). Shahri and Miska (2013) proposed a model under the plane-strain assumption. However, subsea subsidence measurements confirm that depletion-induced reservoir deformation occurs mainly in the axial direction. We provide analytical solutions for stress path prediction under different deformational conditions, namely plane strain with traction and displacement boundary conditions, generalized plane stress, generalized uniaxial strain, and uniaxial strain. For this purpose, the constitutive relations of poroelasticity are combined with the equilibrium equations, and the pore pressure profile is described by a PSS flow regime. In a numerical example, we examine the effects of different deformational conditions on depletion-induced in-situ stress changes. Interestingly, the results indicate that the stress path in a reservoir is significantly affected by the reservoir's boundary conditions. The stress path under the plane strain-displacement assumption overestimates the stress path predicted under the uniaxial strain state by almost a factor of two, whereas the generalized plane stress and traction plane strain conditions underestimate the results of the uniaxial strain assumption. The order of stress path values for different boundary conditions can be summarized as: SP_ps-disp > SP_uniaxial > SP_ps-trac > SP_gps.
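
The uniaxial-strain benchmark against which the abstract's comparisons are framed has a familiar closed form in linear poroelasticity; for orientation (notation assumed):

```latex
% Uniaxial-strain (laterally constrained) depletion stress path:
\gamma \;\equiv\; \frac{\Delta\sigma_h}{\Delta P_p} \;=\; \alpha\,\frac{1-2\nu}{1-\nu}
% \alpha: Biot coefficient; \nu: drained Poisson's ratio. Per the abstract,
% the plane strain-displacement condition gives roughly twice this value,
% consistent with SP_ps-disp > SP_uniaxial > SP_ps-trac > SP_gps.
```
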
9

Yu, Youhao, and Richard M. Dansereau. "STP-DEQ-Net: A Deep Equilibrium Model Based on ISTA Method for Image Compressive Sensing." In 2022 30th European Signal Processing Conference (EUSIPCO). IEEE, 2022. http://dx.doi.org/10.23919/eusipco55093.2022.9909837.

Full text
10

Ghiasi, MohammadAmin, MohammadTaghi Hajiaghayi, Sébastien Lahaie, and Hadi Yami. "On the Efficiency and Equilibria of Rich Ads." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/43.

Full text
Abstract:
Search ads have evolved in recent years from simple text formats to rich ads that allow deep site links, rating, images and videos. In this paper, we consider a model where several slots are available on the search results page, as in the classic generalized second-price auction (GSP), but now a bidder can be allocated several consecutive slots, which are interpreted as a rich ad. As in the GSP, each bidder submits a bid-per-click, but the click-through rate (CTR) function is generalized from a simple CTR for each slot to a general CTR function over sets of consecutive slots. We study allocation and pricing in this model under subadditive and fractionally subadditive CTRs. We design and analyze a constant-factor approximation algorithm for the efficient allocation problem under fractionally subadditive CTRs, and a log-approximation algorithm for the subadditive case. Building on these results, we show that approximate competitive equilibrium prices exist and can be computed for subadditive and fractionally subadditive CTRs, with the same guarantees as for allocation.

Reports on the topic "Deep Equilibrium Models"

1

Foroni, Claudia, Paolo Gelain, and Massimiliano Marcellino. The financial accelerator mechanism: does frequency matter? Federal Reserve Bank of Cleveland, November 2022. http://dx.doi.org/10.26509/frbc-wp-202229.

Full text
Abstract:
We use mixed-frequency (quarterly-monthly) data to estimate a dynamic stochastic general equilibrium model embedded with the financial accelerator mechanism à la Bernanke et al. (1999). We find that the financial accelerator can work very differently at monthly frequency than at quarterly frequency; in fact, we document its inversion. This is because aggregating monthly data into quarterly data leads to large biases in the estimated quarterly parameters and, as a consequence, to a deep change in the transmission of shocks.