Selected scientific literature on the topic "Many-body methods"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Many-body methods."

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.
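The automatic style formatting described above can be sketched in a few lines of Python. This is a toy illustration only: the field names and the two style templates are simplifying assumptions, not the site's actual API or rules.

```python
# Hypothetical citation formatter; field names and style rules are assumptions.
def format_citation(ref, style="APA"):
    if style == "APA":
        return (f"{ref['authors']} ({ref['year']}). {ref['title']}. "
                f"{ref['journal']}, {ref['volume']}({ref['issue']}), "
                f"{ref['pages']}. {ref['doi']}")
    if style == "MLA":
        return (f"{ref['authors']}. \"{ref['title']}.\" {ref['journal']}, "
                f"vol. {ref['volume']}, no. {ref['issue']}, {ref['year']}, "
                f"pp. {ref['pages']}.")
    raise ValueError(f"unsupported style: {style}")

# Example record (first journal article below).
schaefer = {
    "authors": "Schäfer, T., C. W. Kao, and S. R. Cotanch",
    "year": 2005,
    "title": "Many body methods and effective field theory",
    "journal": "Nuclear Physics A",
    "volume": 762, "issue": "1-2", "pages": "82-101",
    "doi": "http://dx.doi.org/10.1016/j.nuclphysa.2005.08.006",
}
```

The same record renders differently per style, which is all the "Add to bibliography" button does conceptually.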

Journal articles on the topic "Many-body methods"

1. Schäfer, T., C. W. Kao, and S. R. Cotanch. "Many body methods and effective field theory." Nuclear Physics A 762, no. 1-2 (November 2005): 82–101. http://dx.doi.org/10.1016/j.nuclphysa.2005.08.006.

2. Stewart, I. "Symmetry methods in collisionless many-body problems." Journal of Nonlinear Science 6, no. 6 (November 1996): 543–63. http://dx.doi.org/10.1007/bf02434056.

3. Cardy, John. "Exact results for many-body problems using few-body methods." International Journal of Modern Physics B 20, no. 19 (July 30, 2006): 2595–602. http://dx.doi.org/10.1142/s0217979206035072.

Abstract: Recently there has been developed a new approach to the study of critical quantum systems in 1+1 dimensions which reduces them to problems in one-dimensional Brownian motion. This goes under the name of stochastic, or Schramm, Loewner Evolution (SLE). I review some of the recent progress in this area, from the point of view of many-body theory. Connections to random matrices also emerge.

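The abstract above reduces critical quantum systems to one-dimensional Brownian motion via the Loewner evolution. A minimal numerical sketch of that idea: the driving function is W_t = sqrt(kappa)·B_t, and a point g of the upper half-plane evolves by the chordal Loewner ODE dg/dt = 2/(g − W_t). This is a hedged toy illustration, not the paper's calculation; kappa and the starting point are arbitrary choices.

```python
import math
import random

def loewner_step_demo(kappa=2.0, T=1.0, n=20000, seed=7):
    """Euler-discretized chordal Loewner flow with Brownian driving function."""
    rng = random.Random(seed)
    dt = T / n
    w = 0.0                      # driving function W_t
    g = complex(0.0, 3.0)        # a point in the upper half-plane
    qv = 0.0                     # quadratic variation of W_t
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(kappa * dt))
        qv += dw * dw
        g += 2.0 * dt / (g - w)  # Euler step of dg/dt = 2 / (g - W_t)
        w += dw
    return qv, g
```

The quadratic variation of the driving function recovers kappa·T, and the imaginary part of the tracked point shrinks monotonically under the flow, as it must.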
4. Kaldor, Uzi. "Multireference many-body methods. Perspective on 'Linked-cluster expansions for the nuclear many-body problem.'" Theoretical Chemistry Accounts: Theory, Computation, and Modeling (Theoretica Chimica Acta) 103, no. 3-4 (February 9, 2000): 276–77. http://dx.doi.org/10.1007/s002149900014.

5. Viviani, M. "Few- and many-body methods in nuclear physics." European Physical Journal A 31, no. 4 (March 2007): 429–34. http://dx.doi.org/10.1140/epja/i2006-10263-9.

6. Drut, Joaquín E., and Amy N. Nicholson. "Lattice methods for strongly interacting many-body systems." Journal of Physics G: Nuclear and Particle Physics 40, no. 4 (March 12, 2013): 043101. http://dx.doi.org/10.1088/0954-3899/40/4/043101.

7. Pulay, P., and S. Sæbø. "Variational CEPA: Comparison with different many-body methods." Chemical Physics Letters 117, no. 1 (May 1985): 37–41. http://dx.doi.org/10.1016/0009-2614(85)80400-0.

8. Nieves, J. "Quantum field theoretical methods in many body systems." Czechoslovak Journal of Physics 46, no. 7-8 (July 1996): 673–720. http://dx.doi.org/10.1007/bf01692562.

9. Lewin, Mathieu. "Geometric methods for nonlinear many-body quantum systems." Journal of Functional Analysis 260, no. 12 (June 2011): 3535–95. http://dx.doi.org/10.1016/j.jfa.2010.11.017.

10. Gutfreund, H. "Applications of many body methods to large molecules." Journal of Polymer Science Part C: Polymer Symposia 29, no. 1 (March 7, 2007): 95–108. http://dx.doi.org/10.1002/polc.5070290113.

Theses on the topic "Many-body methods"

1. Wilson, Mark. "Many-body effects in ionic systems." Thesis, University of Oxford, 1994. http://ora.ox.ac.uk/objects/uuid:3c66daa2-5318-40d2-a445-15296d598a57.

Abstract:
The electron density of an ion is strongly influenced by its environment in a condensed phase. When the environment changes, for example due to thermal motion, non-trivial changes in the electron density, and hence the interionic interactions, occur. These interactions give rise to many-body effects in the potential. In order to represent this phenomenon in molecular dynamics (MD) simulations, a method has been developed in which the environmentally-induced changes in the ionic properties are represented by extra dynamical variables. These extra variables are handled in an extended Lagrangian formalism by techniques analogous to those used in Car and Parrinello's ab initio MD method. At its simplest level (the polarizable-ion model or PIM) induced dipoles are represented. With the PIM it has proven possible to quantitatively account for numerous properties of divalent metal halides, which had previously been attributed to unspecific "covalent" effects. In the solid state the prevalence of layered crystal structures is explained. Analogous non-coulombic features in liquid structures, in particular network formation in "strong" liquids like ZnCl2, have been studied, as has network disruption by "modifiers" like RbCl. This work leads to an understanding of the relationship between the microscopic structure and anomalous peaks ("prepeaks") seen in diffraction data of such materials. The PIM was extended to include induced quadrupoles and their effect studied in simulations of AgCl. In the solid state it is found that both are crucial in improving the phonon dispersion curves with respect to experiment. In the liquid state polarization effects lower the melting point markedly. For oxides the short-range energy has been further partitioned into overlap and rearrangement energies, and electronic structure calculations are used to parameterize a model in which the radius of the anion is included as an additional degree of freedom.
The B1 → B2 phase transition is studied in MgO and CaO and the differences between the new model and a rigid-ion model are analysed.

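The induction energy at the heart of the polarizable-ion model sketched in the abstract above depends on the total electric field at an ion, which is exactly what makes it a many-body effect: a symmetric environment can produce zero net field, and hence zero induction, where a pairwise-additive potential would predict a large stabilization. The following toy (charges on a line, arbitrary polarizability; not fitted PIM parameters) makes that point numerically.

```python
def field_at_origin(charges):
    """Axial field at the origin from point charges given as (q, x) on a line."""
    return sum(-q * x / abs(x) ** 3 for q, x in charges)

def induction_energy(charges, alpha):
    """Energy of the dipole induced on a polarizable ion at the origin:
    mu = alpha * E, energy = -alpha * E^2 / 2 (atomic units)."""
    e_field = field_at_origin(charges)
    return -0.5 * alpha * e_field ** 2

# One neighbouring cation lowers the energy by induction...
single = induction_energy([(1.0, 2.0)], alpha=10.0)
# ...but a symmetric pair of cations gives zero net field and zero induction,
# whereas a pairwise-additive model would predict 2 * single.
pair = induction_energy([(1.0, 2.0), (1.0, -2.0)], alpha=10.0)
```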
2. Steiger, Don. "Numerical n-body methods in computational chemistry." Free to MU campus, to others for purchase, 1998. http://wwwlib.umi.com/cr/mo/fullcit?p9924930.

3. Dinh, Thi Hanh. "Application of many-body theory methods to atomic problems." Thesis, University of New South Wales, Physics, 2009. http://handle.unsw.edu.au/1959.4/43734.

Abstract:
There is strong interest in atomic and nuclear physics in the study of superheavy elements, driven by the search for the island of stability in the region Z=104 to Z=126. Many experimental efforts and theoretical works are devoted to these studies, measuring the spectra and chemical properties. In this thesis, calculations of the spectra and the hyperfine structure of some superheavy elements have been performed in an attempt to enrich our knowledge of these elements and perhaps even help in their detection. We perform high-precision relativistic calculations to determine the spectra of the superheavy element Z=119 (eka-Fr) and the singly-ionized superheavy element Z=120+ (eka-Ra+). Dominating correlation corrections beyond relativistic Hartree-Fock are included to all orders in the residual electron interaction using the Feynman diagram technique and the correlation potential method. The Breit interaction and quantum electrodynamics radiative corrections are considered. Also, the volume isotope shift is determined. We present the relativistic calculations for the energy levels of the superheavy element Z=120. The relativistic Hartree-Fock and configuration interaction techniques are employed. The correlations between core and valence electrons are treated by means of the correlation potential method and many-body perturbation theory. We also address the absence of experimental data on the electron structure and energy spectrum of the Uub element (Z=112) by calculating its energy levels. The relativistic Hartree-Fock and configuration interaction methods are combined with many-body perturbation theory to construct the many-electron wave function for valence electrons and to include core-valence correlations. The hyperfine structure constants of the lowest s and p1/2 states of superheavy elements Z=119 and Z=120+ are calculated. Core polarization, dominating correlation, Breit and quantum electrodynamic effects are considered.
The dependence of the hyperfine structure constants on nuclear radius is discussed. Measurements of the hyperfine structure combined with our calculations will allow one to study nuclear properties and the distribution of the magnetic moment inside the nucleus. Finally, we discuss the possibility of measuring nuclear anapole moments in atomic Zeeman transitions and perform the necessary calculations. Advantages of using Zeeman transitions include variable transition frequencies and the possibility of enhancement of parity nonconservation effects.

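A back-of-the-envelope illustration of why the fully relativistic treatment in the thesis above is unavoidable at Z ≈ 119: for a hydrogen-like ion, the point-nucleus Dirac 1s binding energy is c²(1 − √(1 − (Zα)²)), which grows much faster with Z than the nonrelativistic Z²/2. This is a textbook one-electron formula, not the thesis's many-body calculation.

```python
import math

ALPHA = 1.0 / 137.035999   # fine-structure constant
C2 = 1.0 / ALPHA ** 2      # c^2 in hartree (atomic units)

def dirac_1s_binding(Z):
    """Point-nucleus Dirac 1s binding energy (hartree), valid for Z*alpha < 1."""
    gamma = math.sqrt(1.0 - (Z * ALPHA) ** 2)
    return C2 * (1.0 - gamma)

def nonrel_1s_binding(Z):
    """Nonrelativistic hydrogen-like 1s binding energy (hartree)."""
    return 0.5 * Z * Z
```

For Z = 1 the two agree to a few parts per million, while at Z = 119 the relativistic binding exceeds the nonrelativistic value by more than 30 percent.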
4. Gerster, Matthias. "Tensor network methods for quantum many-body simulations." Thesis, Universität Ulm, 2021. http://d-nb.info/1233737406/34.

5. Richard, Ryan. "Increasing the computational efficiency of ab initio methods with generalized many-body expansions." Thesis, The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1385570237.

6. Molnar, Andras. "Tensor network methods in many-body physics." Thesis, Ludwig-Maximilians-Universität München, 2019. Supervisor: Jan von Delft. http://d-nb.info/1185979328/34.

7. Blandon, Juan. "Development of theoretical and computational methods for few-body processes in ultracold quantum gases." Master's thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2881.

Abstract:
We are developing theoretical and computational methods to study two related three-body processes in ultracold quantum gases: three-body resonances and three-body recombination. Three-body recombination causes the ultracold gas to heat up and atoms to leave the trap where they are confined. Therefore, it is an undesirable effect in the process of forming ultracold quantum gases. Metastable three-body states (resonances) are formed in the ultracold gas. When decaying they also give additional kinetic energy to the gas, which leads to heating as well. In addition, a reliable method to obtain three-body resonances would be useful in a number of problems in other fields of physics, for example, in models of metastable nuclei or to study dissociative recombination of H3+. Our project consists of employing computer modeling to develop a method to obtain three-body resonances. The method uses a novel two-step diagonalization approach to solve the three-body Schrödinger equation. The approach employs the SVD method of Tolstikhin et al. coupled with a complex absorbing potential. We tested this method on a model system of three identical bosons with nucleon mass and compared it to the results of a previous study. This model can be employed to understand the 3He nucleus. We found one three-body bound state and four resonances. We are also studying Efimov resonances using a 4He-based model. In a system of identical spinless bosons, Efimov states are a series of loosely bound three-body states which begin to appear as the energy of the two-body bound state approaches zero. Although they were predicted 35 years ago, recent evidence of Efimov states found by Kraemer et al. in a gas of ultracold Cs atoms has sparked great interest by theorists and experimentalists. Efimov resonances are a kind of pre-dissociated Efimov trimer. To search for Efimov resonances we tune the diatom interaction potential, V(r): V(r) → λV(r), as Esry et al. did.
We calculated the first two values of λ for which there is a "condensation" (infinite number) of Efimov states. They are λEfimov1 = 0.9765 and λEfimov2 = 6.834. We performed calculations for λ = 2.4, but found no evidence of Efimov resonances. For future work we plan to work with λ ≈ 4 and λ ≈ λEfimov2, where we might see d-wave and higher l-wave Efimov resonances. There is also a many-body project that forms part of this thesis and consists of a direct diagonalization of the Bogolyubov Hamiltonian, which describes elementary excitations of a gas of bosons interacting through a pairwise interaction. We would like to reproduce the corresponding energy spectrum. So far we have performed several convergence tests, but have not observed the desired energy spectrum. We show preliminary results.

M.S., Department of Physics
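The complex-absorbing-potential (CAP) approach mentioned in the abstract above turns a metastable state into a complex eigenvalue E − iΓ/2 of a non-Hermitian Hamiltonian H − iηW (with W ≥ 0). A minimal sketch of the idea, using a 2×2 toy model of a bound level weakly coupled to an absorbed continuum level, diagonalized by the quadratic formula; all numbers are illustrative assumptions, not the thesis's model.

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    s = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + s) / 2.0, (tr - s) / 2.0

E_bound, E_cont, coupling, eta = -1.0, 0.5, 0.2, 0.3
# CAP acts only on the continuum level: H_eff = H - i*eta*W, W = diag(0, 1).
lam1, lam2 = eig2(E_bound, coupling, coupling, E_cont - 1j * eta)
# The eigenvalue near the bound level acquires a small width Gamma/2 = -Im(E).
resonance = min((lam1, lam2), key=lambda z: abs(z.real - E_bound))
```

Because W is positive semidefinite, every eigenvalue of H − iηW has a nonpositive imaginary part; the weakly coupled level picks up only a small one, which is read off as the resonance width.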
8. Motta, M. "Dynamical properties of many-body systems from configurational and determinantal quantum Monte Carlo methods." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/345455.

Abstract:
The numerical simulation of quantum many-body systems is an essential instrument in research on condensed matter physics. Recent years have witnessed remarkable progress in studying dynamical properties of non-relativistic Bose systems with quantum Monte Carlo (QMC) methods. On the other hand, the numerical study of Fermi systems is still an open problem of great relevance, as fermions constitute a substantial part of ordinary matter, and methods for the accurate calculation of their ground-state and dynamical properties would be useful instruments for the interpretation of experimental data. In this thesis, a number of approximate schemes for studying ground-state and dynamical properties of quantum many-body systems are presented and employed to calculate ground-state and dynamical properties of Bose and Fermi systems. In particular, the Path Integral Ground State QMC method is used to investigate density fluctuations in one-dimensional systems of helium atoms and hard rods, and the phaseless auxiliary-field QMC is used to investigate the electronic band and effective mass of the two-dimensional homogeneous electron gas.

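A minimal variational Monte Carlo sketch in the spirit of the QMC methods surveyed above: Metropolis sampling of |ψ|² for a 1D harmonic oscillator with trial function ψ(x) = exp(−a x²). The model and parameters are an illustrative assumption, not the thesis's systems.

```python
import math
import random

def local_energy(x, a):
    # E_L = -psi''/(2 psi) + x^2/2 for the trial function psi = exp(-a x^2)
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc_energy(a, n_steps=40000, step=1.0, burn_in=1000, seed=1):
    """Metropolis estimate of <E_L> over |psi|^2 = exp(-2 a x^2)."""
    rng = random.Random(seed)
    x, e_sum, n = 0.0, 0.0, 0
    for i in range(n_steps):
        x_trial = x + rng.uniform(-step, step)
        # Metropolis acceptance ratio |psi(x_trial)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-2.0 * a * (x_trial ** 2 - x ** 2)):
            x = x_trial
        if i >= burn_in:          # discard burn-in samples
            e_sum += local_energy(x, a)
            n += 1
    return e_sum / n
```

With a = 0.5 the trial function is exact and the local energy is constant at 1/2 (zero variance); for general a the analytic expectation value is a/2 + 1/(8a), which the sampler reproduces statistically.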
9. Holtz, Susan Lady. "Liouville resolvent methods applied to highly correlated systems." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/49795.

10. Scalesi, Alberto. "On the characterization of nuclear many-body correlations in the ab initio approach." Electronic thesis, Université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP070.

Abstract:
The 'ab initio' branch of nuclear structure theory has traditionally focused on the study of light to mid-mass nuclei and primarily spherical systems. Current developments aim at extending this focus to heavy-mass nuclei and doubly open-shell systems. The study of such systems is qualitatively and quantitatively challenging. Hence, different strategies must be designed to efficiently capture the dominant correlations that most significantly impact the observables of interest. While in principle exact methods exist to solve the non-relativistic Schrödinger equation for a given nuclear Hamiltonian, practical limitations in numerical simulations make such an approach impossible for most isotopes. This calls for a hierarchical characterization of the main correlations at play in the various nuclear systems. Most ab initio techniques rely on an initial mean-field calculation, typically carried out via the Hartree-Fock (HF) method, which provides a reference state containing the principal part of the correlations contributing to bulk nuclear properties. When tackling open-shell systems, it has proven particularly convenient to break symmetries at the mean-field level to effectively include the static correlations arising in superfluid (via HF-Bogoliubov theory, HFB) or deformed nuclei (via deformed HF, dHF). The present work contributes to this research line by proposing and exploring novel symmetry-breaking many-body techniques applicable to all nuclear systems. The simplest ab initio technique that can be applied on top of the mean field is many-body perturbation theory. The first result of this work is the demonstration that symmetry-breaking perturbation theory (dBMBPT) based on state-of-the-art nuclear interactions can already qualitatively describe the main nuclear observables, such as ground-state energies and radii.
Given that perturbation theory constitutes a cheap and efficient way to perform systematic studies of different nuclei across the nuclear chart, a part of the present work is dedicated to paving the way for such large-scale calculations. In order to push many-body calculations to higher precision, a novel ab initio technique is then introduced, namely the deformed Dyson Self-Consistent Green's function (dDSCGF) method. Such a non-perturbative (i.e., resumming an infinite number of perturbation-theory contributions) approach allows one to compute a wide variety of quantities of interest, both for the ground state of the targeted nucleus and for excited states of neighbouring systems. In addition, it naturally bridges to nuclear reactions, giving access to, e.g., the evaluation of optical potentials. Given the high computational cost of non-perturbative many-body methods, the final section introduces possible approaches to make such calculations more efficient. In particular, the Natural Orbital basis is introduced and investigated in the context of deformed systems. Ultimately, it is proven that this technique enables the use of much smaller basis sets, thus significantly decreasing the final cost of numerical simulations and enlarging their reach. Altogether, the developments reported in the present work open up new and promising possibilities for the ab initio description of heavy-mass and open-shell nuclei.

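The logic behind the perturbation-theory results in the abstract above can be checked on a toy problem: for H(λ) = H0 + λV with a 2×2 model, the second-order correction E2 = |V01|²/(E0 − E1) reproduces the exact ground-state energy up to O(λ⁴). This is a generic textbook example, not the thesis's dBMBPT machinery.

```python
import math

def exact_ground_state(delta, v, lam):
    """Lowest eigenvalue of [[0, lam*v], [lam*v, delta]]."""
    return 0.5 * (delta - math.sqrt(delta ** 2 + 4.0 * (lam * v) ** 2))

def pt2_ground_state(delta, v, lam):
    """E0 + lam^2 * |v|^2 / (E0 - E1), with E0 = 0 and E1 = delta."""
    return -(lam * v) ** 2 / delta
```

For a weak perturbation (e.g. delta = 2.0, v = 0.5, lam = 0.1) the second-order estimate is negative, as it must be for the ground state, and agrees with the exact eigenvalue to well below the O(λ⁴) scale.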

Books on the topic "Many-body methods"

1. Kaldor, U., ed. Many-Body Methods in Quantum Chemistry. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/978-3-642-93424-7.

2. Gibbon, Paul, ed. Many-Body Tree Methods in Physics. Cambridge: Cambridge University Press, 1996.

3. Hubac, Ivan, and Stephen Wilson. Brillouin-Wigner Methods for Many-Body Systems. Dordrecht: Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-3373-4.

4. Wilson, S. (Stephen), ed. Brillouin-Wigner Methods for Many-Body Systems. Dordrecht: Springer, 2010.

5. Schirmer, Jochen. Many-Body Methods for Atoms, Molecules and Clusters. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93602-4.

6. Brewer, William D., ed. Fundamentals of Many-Body Physics: Principles and Methods. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.

7. NATO Advanced Study Institute on Dynamics: Models and Kinetic Methods for Non-equilibrium Many Body Systems (1998: Lorentz Institute, Leiden University). Dynamics: Models and Kinetic Methods for Non-equilibrium Many Body Systems. Dordrecht: Kluwer Academic Publishers, 2000.

8. Mukherjee, Debashis, ed. Applied Many-Body Methods in Spectroscopy and Electronic Structure. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4757-9256-0.

9. Karkheck, John. Dynamics: Models and Kinetic Methods for Non-equilibrium Many Body Systems. Dordrecht: Springer Netherlands, 2002.

10. Karkheck, John, ed. Dynamics: Models and Kinetic Methods for Non-equilibrium Many Body Systems. Dordrecht: Springer Netherlands, 2002. http://dx.doi.org/10.1007/978-94-011-4365-3.

Book chapters on the topic "Many-body methods"

1. Ceperley, D. M., and M. H. Kalos. "Quantum Many-Body Problems." In Monte Carlo Methods in Statistical Physics, 145–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/978-3-642-82803-4_4.

2. Quiney, Harry M. "Relativistic Many-Body Perturbation Theory." In Methods in Computational Chemistry, 227–78. Boston, MA: Springer US, 1988. http://dx.doi.org/10.1007/978-1-4613-0711-2_5.

3. Trugman, S. A. "General Many-Body Systems." In Applications of Statistical and Field Theory Methods to Condensed Matter, 253–63. Boston, MA: Springer US, 1990. http://dx.doi.org/10.1007/978-1-4684-5763-6_22.

4. Martin, Philippe A., and François Rothen. "Perturbative Methods in Many-Body Problems." In Many-Body Problems and Quantum Field Theory, 393–422. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-662-08490-8_10.

5. Martin, Philippe A., and François Rothen. "Perturbative Methods in Many-Body Problems." In Many-Body Problems and Quantum Field Theory, 401–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-662-04894-8_10.

6. Montangero, Simone. "Many-Body Quantum Systems at Equilibrium." In Introduction to Tensor Network Methods, 97–108. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01409-4_7.

7. Birman, Joseph L., and Allan I. Solomon. "Dynamic Symmetry in Many-Body Problem." In Group Theoretical Methods in Physics, Volume II, 61–69. London: CRC Press, 2024. http://dx.doi.org/10.1201/9781003580850-4.

8. Hubač, Ivan, and Stephen Wilson. "Brillouin-Wigner Methods for Many-Body Systems." In Brillouin-Wigner Methods for Many-Body Systems, 133–89. Dordrecht: Springer Netherlands, 2009. http://dx.doi.org/10.1007/978-90-481-3373-4_4.

9. Stewart, I. "Symmetry Methods in Collisionless Many-Body Problems." In Mechanics: From Theory to Computation, 313–33. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1246-1_12.

10. Martin, Philippe A., and François Rothen. "Perturbative Methods in Field Theory." In Many-Body Problems and Quantum Field Theory, 325–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-662-08490-8_9.

Conference papers on the topic "Many-body methods"

1. Markussen, Troels, Petr A. Khomyakov, Brecht Verstichel, Anders Blom, and Rasmus Faber. "Band Alignment in GAA Nanosheet Structures from Density Dependent Hybrid Functional and Many-Body GW Methods." In 2024 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD), 1–4. IEEE, 2024. http://dx.doi.org/10.1109/sispad62626.2024.10732914.

2. Cardy, John. "Exact results for many-body problems using few-body methods." In Proceedings of the 12th International Conference. World Scientific, 2006. http://dx.doi.org/10.1142/9789812772893_0005.

3. Horiuchi, H., M. Kamimura, H. Toki, Y. Fujiwara, M. Matsuo, and Y. Sakuragi. "Innovative Computational Methods in Nuclear Many-Body Problems." In XVII RCNP International Symposium. World Scientific, 1998. http://dx.doi.org/10.1142/9789814528405.

4. Sedrakian, Armen, and John W. Clark. "Many-body methods for nuclear systems at subnuclear densities." In Proceedings of the 14th International Conference. World Scientific, 2008. http://dx.doi.org/10.1142/9789812779885_0017.

5. Das, M. P. "Density functional theory: Many-body effects without tears." In Proceedings of the Miniworkshop on "Methods of Electronic Structure Calculations" and Working Group on "Disordered Alloys". World Scientific, 1995. http://dx.doi.org/10.1142/9789814503778_0001.

6. Schachenmayer, Johannes. "Exploring Quantum Many-Body Spin Dynamics with Truncated Wigner Methods." In Latin America Optics and Photonics Conference. Washington, D.C.: OSA, 2016. http://dx.doi.org/10.1364/laop.2016.ltu5b.4.

7. "Preface: Symmetries and Order: Algebraic Methods in Many-Body Systems." In Symmetries and Order: Algebraic Methods in Many Body Systems: A Symposium in Celebration of the Career of Professor Francesco Iachello. AIP Publishing, 2019. http://dx.doi.org/10.1063/1.5124570.

8. "Dedication: Symmetries and Order: Algebraic Methods in Many-Body Systems." In Symmetries and Order: Algebraic Methods in Many Body Systems: A Symposium in Celebration of the Career of Professor Francesco Iachello. AIP Publishing, 2019. http://dx.doi.org/10.1063/1.5124571.

9. Cejnar, Pavel, Pavel Stránský, Michal Kloc, and Michal Macek. "Static vs. dynamic phases of quantum many-body systems." In Symmetries and Order: Algebraic Methods in Many Body Systems: A Symposium in Celebration of the Career of Professor Francesco Iachello. AIP Publishing, 2019. http://dx.doi.org/10.1063/1.5124589.

10. Draayer, J. P., K. D. Sviratcheva, C. Bahri, and A. I. Georgieva. "On the Physical Significance of q-deformation in Many-body Physics." In Proceedings of the 23rd International Conference of Differential Geometric Methods in Theoretical Physics. World Scientific, 2006. http://dx.doi.org/10.1142/9789812772527_0012.

Organizational reports on the topic "Many-body methods"

1

Bartlett, Rodney J. Molecular Interactions and Properties with Many-Body Methods. Fort Belvoir, VA: Defense Technical Information Center, April 1990. http://dx.doi.org/10.21236/ada222631.

2

Bartlett, Rodney J. Development of Many-Body Methods for Flame Chemistry and Large Molecule Applications. Fort Belvoir, VA: Defense Technical Information Center, May 1987. http://dx.doi.org/10.21236/ada184451.

3

Millis, Andrew. Many Body Methods from Chemistry to Physics: Novel Computational Techniques for Materials-Specific Modelling: A Computational Materials Science and Chemistry Network. Office of Scientific and Technical Information (OSTI), November 2016. http://dx.doi.org/10.2172/1332662.

4

Underwood, H., Madison Hand, and Donald Leopold. Abundance and distribution of white-tailed deer on First State National Historical Park and surrounding lands. National Park Service, 2024. http://dx.doi.org/10.36967/2305428.

Annotation:
We estimated both abundance and distribution of white-tailed deer (Odocoileus virginianus) on the Brandywine Valley unit of First State National Historical Park (FRST) and the Brandywine Creek State Park (BCSP) during 2020 and 2021 with two widely used field methods: a road-based count and a network of camera traps. We conducted 24 road-based counts, covering 260 km of roadway, and deployed up to 16 camera traps, processing over 82,000 images representing over 5,000 independent observations. In both years, we identified bucks based on their body and antler characteristics, tracking their movements between baited camera trap locations. We tested seven estimators commonly reported in the literature, comparing the relative merits for managers of small, protected natural areas like FRST. Deer densities estimated from conventional road-based distance sampling were approximately 10 deer/km2 lower than densities estimated from camera-trapping surveys. We attribute the bias in road-based distance sampling to the difficulty of recording the precise effort expended to obtain the counts. Modifying the distance sampling method addressed many of the issues associated with the conventional approach. Despite few substantive differences in land cover types between the two methods, a clear spatial segregation of male and female deer at camera trap locations could bias road-based counts if the sexes are not encountered in proportion to their abundances. There was a distinct gradient in deer distribution across the study area, with higher proportions of deer recorded in camera traps at FRST than BCSP, which harvests 20–60 deer annually during a regulated hunting season. The most reliable (i.e., low bias, acceptable precision) methods, Spatial Capture Recapture (SCR) and Density Surface Modeling (DSM), produced deer densities of approximately 50 deer/km2 in each year – a number which is consistent with previous estimates for New Castle County, Delaware, and our experience in similar, unhunted natural areas. Across both FRST and BCSP, these densities translated into area-wide (~1000 ha) population sizes between 650–1000 deer, with about one-half to two-thirds comprising the FRST population. Density surface modeling of mapped locations of deer detected during surveys, combined with camera-trapping and a time-to-event data analysis, might be the only practical means of reliably assessing white-tailed deer abundance in small (<2000 ha), protected natural areas like FRST. Most other approaches are either too time-consuming, require identification and tracking of individual deer, the use of bait, or require intervention by a subject-area expert.
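The density estimates discussed above rest on conventional line-transect distance sampling. As a rough illustration of the estimator involved (not the authors' actual analysis), the basic formula is D = n / (2wLp). In the sketch below, only the 260 km of roadway is taken from the report; the detection count, strip half-width, and detection probability are invented for illustration:

```python
# Conventional line-transect density estimator: D = n / (2 * w * L * p),
# where n is the number of animals detected, w the truncation half-width
# of the surveyed strip (km), L the total transect length (km), and p the
# estimated probability of detecting an animal inside the strip.

def transect_density(n_detected, half_width_km, length_km, p_detect=1.0):
    """Density in animals per square kilometre."""
    surveyed_area = 2.0 * half_width_km * length_km  # both sides of the line
    return n_detected / (surveyed_area * p_detect)

# L = 260 km of roadway (from the report); other inputs are hypothetical.
density = transect_density(n_detected=500, half_width_km=0.1,
                           length_km=260.0, p_detect=0.25)
# Mis-recording the effort term (length_km) directly scales the estimate,
# which is the bias the report attributes to road-based counts.
```

The division by effort makes plain why imprecisely recorded survey effort translates one-for-one into density bias.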
5

Ruprah, Inder J., and Luis Marcano. Does Technical Assistance Matter?: An Impact Evaluation Approach to Estimate its Value Added. Inter-American Development Bank, January 2009. http://dx.doi.org/10.18235/0011138.

Annotation:
Many public programs and operations by multilateral organisations include technical assistance to the direct beneficiaries of the program in addition to pure financing. However, there is no substantial body of studies that calculates the additional impact of technical assistance – in the sense of the effect exclusively attributable to it – on the outcome of interest of the program. In this working paper, the authors propose the use of a multi-treatment impact evaluation method – propensity score matching combined with exact matching for dosage and double difference – for estimating technical assistance's impact. The two cases examined in this study correspond to the Neighbourhood Improvement Program (NIP) of Chile and the Social Investment Fund of Guatemala (SIF). Given the small dollar value of technical assistance relative to the dollar value of transfers, not only does technical assistance matter but it is a way of getting more for less.
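The estimator described above — matching treated to untreated units, then taking a double difference of outcomes — can be sketched on synthetic data. This is only a schematic of the idea, not the paper's estimator: it matches on a single covariate by nearest neighbour rather than on a propensity score with exact matching on dosage, and every number is invented:

```python
import numpy as np

# Toy matching + double-difference (DiD) illustration on synthetic data.
# Treatment assignment depends on a confounder x, and the true effect
# of treatment on the post-period outcome is 2.0.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                                # confounder
treated = (x + rng.normal(scale=0.5, size=n)) > 0     # selection on x
pre = 1.0 + 0.8 * x + rng.normal(scale=0.1, size=n)   # baseline outcome
post = pre + 0.5 + 2.0 * treated + rng.normal(scale=0.1, size=n)

# Match each treated unit to its nearest untreated neighbour on x
# (a stand-in for matching on an estimated propensity score).
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
nearest = c_idx[np.abs(x[t_idx][:, None] - x[c_idx][None, :]).argmin(axis=1)]

# Double difference: change among treated minus change among matched controls.
did = (post[t_idx] - pre[t_idx]).mean() - (post[nearest] - pre[nearest]).mean()
# did recovers the true treatment effect of 2.0 up to sampling noise
```

Differencing out each unit's pre-period outcome removes time-invariant confounding, while the matching step handles selection on the observed covariate.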
6

Sappington, Jayne, Esther De León, Sara Schumacher, Kimberly Vardeman, Donell Callender, Marina Oliver, Hillary Veeder, and Laura Heinz. Library Impact Research Report: Educating and Empowering a Diverse Student Body: Supporting Diversity, Equity, and Inclusion Research through Library Collections. Association of Research Libraries, July 2022. http://dx.doi.org/10.29242/report.texastech2022.

Annotation:
As part of ARL’s Research Library Impact Framework initiative, a research team from the Texas Tech University (TTU) Libraries explored methods for assessing collections related to the study and research of diversity, equity, and inclusion (DEI) topics and their discoverability by users. DEI studies have increased in prominence on academic campuses along with calls to question privilege and power structures, making DEI collections assessment critical. The TTU Libraries undertook a two-part project that surveyed user needs, collections usage, cataloging and discoverability, and user behavior in searching for and evaluating DEI resources. While the researchers were not able to identify an effective method for assessing DEI in large-scale collections, key findings indicate the potential for partnering with women’s and gender studies and Mexican American and Latino/a studies and the need for increased attention on cataloging and metadata, particularly table of contents and abstract/summary fields. The research team identified that many users expressed uncertainty in searching and evaluating DEI resources and expressed interest in search enhancements for better filtering and more prominent website presence for DEI research help.
7

Halker Singh, Rashmi B., Juliana H. VanderPluym, Allison S. Morrow, Meritxell Urtecho, Tarek Nayfeh, Victor D. Torres Roldan, Magdoleen H. Farah, et al. Acute Treatments for Episodic Migraine. Agency for Healthcare Research and Quality (AHRQ), December 2020. http://dx.doi.org/10.23970/ahrqepccer239.

Annotation:
Objectives. To evaluate the effectiveness and comparative effectiveness of pharmacologic and nonpharmacologic therapies for the acute treatment of episodic migraine in adults. Data sources. MEDLINE®, Embase®, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, PsycINFO®, Scopus, and various grey literature sources from database inception to July 24, 2020. Comparative effectiveness evidence about triptans and nonsteroidal anti-inflammatory drugs (NSAIDs) was extracted from existing systematic reviews. Review methods. We included randomized controlled trials (RCTs) and comparative observational studies that enrolled adults who received an intervention to acutely treat episodic migraine. Pairs of independent reviewers selected and appraised studies. Results. Data on triptans were derived from 186 RCTs summarized in nine systematic reviews (101,276 patients; most studied was sumatriptan, followed by zolmitriptan, eletriptan, naratriptan, almotriptan, rizatriptan, and frovatriptan). Compared with placebo, triptans resolved pain at 2 hours and 1 day, and increased the risk of mild and transient adverse events (high strength of the body of evidence [SOE]). Data on NSAIDs were derived from five systematic reviews (13,214 patients; most studied was ibuprofen, followed by diclofenac and ketorolac). Compared with placebo, NSAIDs probably resolved pain at 2 hours and 1 day, and increased the risk of mild and transient adverse events (moderate SOE). For other interventions, we included 135 RCTs and 6 comparative observational studies (37,653 patients). Compared with placebo, antiemetics (low SOE), dihydroergotamine (moderate to high SOE), ergotamine plus caffeine (moderate SOE), and acetaminophen (moderate SOE) reduced acute pain.
Opioids were evaluated in 15 studies (2,208 patients). Butorphanol, meperidine, morphine, hydromorphone, and tramadol in combination with acetaminophen may reduce pain at 2 hours and 1 day, compared with placebo (low SOE). Some opioids may be less effective than some antiemetics or dexamethasone (low SOE). No studies evaluated instruments for predicting risk of opioid misuse, opioid use disorder, or overdose, or evaluated risk mitigation strategies to be used when prescribing opioids for the acute treatment of episodic migraine. Calcitonin gene-related peptide (CGRP) receptor antagonists improved headache relief at 2 hours and increased the likelihood of being headache-free at 2 hours, at 1 day, and at 1 week (low to high SOE). Lasmiditan (the first approved 5-HT1F receptor agonist) restored function at 2 hours and resolved pain at 2 hours, 1 day, and 1 week (moderate to high SOE). Sparse and low SOE suggested possible effectiveness of dexamethasone, dipyrone, magnesium sulfate, and octreotide. Compared with placebo, several nonpharmacologic treatments may improve various measures of pain, including remote electrical neuromodulation (moderate SOE), magnetic stimulation (low SOE), acupuncture (low SOE), chamomile oil (low SOE), external trigeminal nerve stimulation (low SOE), and eye movement desensitization and reprocessing (low SOE). However, these interventions, including the noninvasive neuromodulation devices, have been evaluated only by single or very few trials. Conclusions. A number of acute treatments for episodic migraine exist with varying degrees of evidence for effectiveness and harms. Use of triptans, NSAIDs, antiemetics, dihydroergotamine, CGRP antagonists, and lasmiditan is associated with improved pain and function. The evidence base for many other interventions for acute treatment, including opioids, remains limited.
8

Sharp, Sarah M., Michael J. Moore, Craig A. Harms, Sarah M. Wilkin, W. Brian Sharp, Kristen M. Patchett, and Kathryn S. Rose. Report of the live large whale stranding response workshop. Woods Hole Oceanographic Institution, November 2024. http://dx.doi.org/10.1575/1912/70889.

Annotation:
Reasoned triage and management of live large whale stranding events prompted this workshop. Safety is paramount; risks to both humans and whales must be mitigated during responses. Clinical assessment is critical, as releasing emaciated animals with a poor prognosis often worsens their welfare. Accurate length and estimated weight data are essential. Supportive care and treatments depend on understanding the underlying pathophysiology of stranding. Maintaining an airway, monitoring breathing, minimizing stress, protecting eyes, modulating temperature, and preventing sunburn are priorities. Additional strategies can include mild sedation, fluid administration, and flipper excavation to relieve pressure. Tools to indicate post-release survival include photographs, genetic samples, paint sticks, and identification, VHF, or satellite-linked tags. Acceptable rescue techniques included towing offshore with a tackle or lines over or under the body and around the axillae, inflatable lift bags, and trenching. Nets and towing by the flukes are unsuitable. Refloating of stranded large whales can be considered if a clinical examination suggests a favorable prognosis and a release method can be safely undertaken without undue stress and trauma to the animal. However, in many cases euthanasia is the most humane option if practical, or letting nature take its course if need be.
9

Lunn, Pete, Marek Bohacek, Jason Somerville, Áine Ní Choisdealbha, and Féidhlim McGowan. PRICE Lab: An Investigation of Consumers' Capabilities with Complex Products. ESRI, May 2016. https://doi.org/10.26504/bkmnext306.

Annotation:
Executive Summary

This report describes a series of experiments carried out by PRICE Lab, a research programme at the Economic and Social Research Institute (ESRI) jointly funded by the Central Bank of Ireland, the Commission for Energy Regulation, the Competition and Consumer Protection Commission and the Commission for Communications Regulation. The experiments were conducted with samples of Irish consumers aged 18-70 years and were designed to answer the following general research question: At what point do products become too complex for consumers to choose accurately between the good ones and the bad ones?

BACKGROUND AND METHODS

PRICE Lab represents a departure from traditional methods employed for economic research in Ireland. It belongs to the rapidly expanding area of ‘behavioural economics’, which is the application of psychological insights to economic analysis. In recent years, behavioural economics has developed novel methods and generated many new findings, especially in relation to the choices made by consumers. These scientific advances have implications both for economics and for policy. They suggest that consumers often do not make decisions in the way that economists have traditionally assumed. The findings show that consumers have limited capacity for attending to and processing information and that they are prone to systematic biases, all of which may lead to disadvantageous choices. In short, consumers may make costly mistakes. Research has indeed documented that in several key consumer markets, including financial services, utilities and telecommunications, many consumers struggle to choose the best products for themselves. It is often argued that these markets involve ‘complex’ products. The obvious question that arises is whether consumer policy can be used to help them to make better choices when faced with complex products.
Policies are more likely to be successful where they are informed by an accurate understanding of how real consumers make decisions between products. To provide evidence for consumer policy, PRICE Lab has developed a method for measuring the accuracy with which consumers make choices, using techniques adapted from the scientific study of human perception. The method allows researchers to measure how reliably consumers can distinguish a good deal from a bad one. A good deal is defined here as one where the product is more valuable than the price paid. In other words, it offers good value for money or, in the jargon of economics, offers the consumer a ‘surplus’. Conversely, a bad deal offers poor value for money, providing no (or a negative) surplus. PRICE Lab’s main experimental method, which we call the ‘Surplus Identification’ (S-ID) task, allows researchers to measure how accurately consumers can spot a surplus and whether they are prone to systematic biases. Most importantly, the S-ID task can be used to study how the accuracy of consumers’ decisions changes as the type of product changes. For the experiments we report here, samples of consumers arrived at the ESRI one at a time and spent approximately one hour doing the S-ID task with different kinds of products, which were displayed on a computer screen. They had to learn to judge the value of one or more products against prices and were then tested for accuracy. As well as people’s intrinsic motivation to do well when their performance on a task like this is tested, we provided an incentive: one in every ten consumers who attended PRICE Lab won a prize, based on their performance. Across a series of these experiments, we were able to test how the accuracy of consumers’ decisions was affected by the number and nature of the product’s characteristics, or ‘attributes’, which they had to take into account in order to distinguish good deals from bad ones. 
In other words, we were able to study what exactly makes for a ‘complex’ product, in the sense that consumers find it difficult to choose good deals.

FINDINGS

Overall, across all ten experiments described in this report, we found that consumers’ judgements of the value of products against prices were surprisingly inaccurate. Even when the product was simple, meaning that it consisted of just one clearly perceptible attribute (e.g. the product was worth more when it was larger), consumers required a surplus of around 16-26 per cent of the total price range in order to be able to judge accurately that a deal was a good one rather than a bad one. Put another way, when most people have to map a characteristic of a product onto a range of prices, they are able to distinguish at best between five and seven levels of value (e.g. five levels might be thought of as equivalent to ‘very bad’, ‘bad’, ‘average’, ‘good’, ‘very good’). Furthermore, we found that judgements of products against prices were not only imprecise, but systematically biased. Consumers generally overestimated what products at the top end of the range were worth and underestimated what products at the bottom end of the range were worth, typically by as much as 10-15 per cent and sometimes more. We then systematically increased the complexity of the products, first by adding more attributes, so that the consumers had to take into account two, three, then four different characteristics of the product simultaneously. One product might be good on attribute A, not so good on attribute B and available at just above the average price; another might be very good on A, middling on B, but relatively expensive. Each time the consumer’s task was to judge whether the deal was good or bad. We would then add complexity by introducing attribute C, then attribute D, and so on. Thus, consumers had to negotiate multiple trade-offs.
Performance deteriorated quite rapidly once multiple attributes were in play. Even the best performers could not integrate all of the product information efficiently – they became substantially more likely to make mistakes. Once people had to consider four product characteristics simultaneously, all of which contributed equally to the monetary value of the product, a surplus of more than half the price range was required for them to identify a good deal reliably. This was a fundamental finding of the present experiments: once consumers had to take into account more than two or three different factors simultaneously their ability to distinguish good and bad deals became strikingly imprecise. This finding therefore offered a clear answer to our primary research question: a product might be considered ‘complex’ once consumers must take into account more than two or three factors simultaneously in order to judge whether a deal is good or bad. Most of the experiments conducted after we obtained these strong initial findings were designed to test whether consumers could improve on this level of performance, perhaps for certain types of products or with sufficient practice, or whether the performance limits uncovered were likely to apply across many different types of product. An examination of individual differences revealed that some people were significantly better than others at judging good deals from bad ones. However the differences were not large in comparison to the overall effects recorded; everyone tested struggled once there were more than two or three product attributes to contend with. People with high levels of numeracy and educational attainment performed slightly better than those without, but the improvement was small. We also found that both the high level of imprecision and systematic bias were not reduced substantially by giving people substantial practice and opportunities to learn – any improvements were slow and incremental. 
A series of experiments was also designed to test whether consumers’ capability was different depending on the type of product attribute. In our initial experiments the characteristics of the products were all visual (e.g., size, fineness of texture, etc.). We then performed similar experiments where the relevant product information was supplied as numbers (e.g., percentages, amounts) or in categories (e.g., Type A, Rating D, Brand X), to see whether performance might improve. This question is important, as most financial and contractual information is supplied to consumers in a numeric or categorical form. The results showed clearly that the type of product information did not matter for the level of imprecision and bias in consumers’ decisions – the results were essentially the same whether the product attributes were visual, numeric or categorical. What continued to drive performance was how many characteristics the consumer had to judge simultaneously. Thus, our findings were not the result of people failing to perceive or take in information accurately. Rather, the limiting factor in consumers’ capability was how many different factors they had to weigh against each other at the same time. In most of our experiments the characteristics of the product and its monetary value were related by a one-to-one mapping; each extra unit of an attribute added the same amount of monetary value. In other words, the relationships were all linear. Because other findings in behavioural economics suggest that consumers might struggle more with non-linear relationships, we designed experiments to test them. For example, the monetary value of a product might increase more when the amount of one attribute moves from very low to low, than when it moves from high to very high. We found that this made no difference to either the imprecision or bias in consumers’ decisions provided that the relationship was monotonic (i.e. 
the direction of the relationship was consistent, so that more or less of the attribute always meant more or less monetary value respectively). When the relationship involved a turning point (i.e. more of the attribute meant higher monetary value but only up to a certain point, after which more of the attribute meant less value) consumers’ judgements were more imprecise still. Finally, we tested whether familiarity with the type of product improved performance. In most of the experiments we intentionally used products that were new to the experimental participants. This was done to ensure experimental control and so that we could monitor learning. In the final experiment reported here, we used two familiar products (Dublin houses and residential broadband packages) and tested whether consumers could distinguish good deals from bad deals any better among these familiar products than they could among products that they had never seen before, but which had the same number and type of attributes and price range. We found that consumers’ performance was the same for these familiar products as for unfamiliar ones. Again, what primarily determined the amount of imprecision and bias in consumers’ judgments was the number of attributes that they had to balance against each other, regardless of whether these were familiar or novel.

POLICY IMPLICATIONS

There is a menu of consumer policies designed to assist consumers in negotiating complex products. A review, including international examples, is given in the main body of the report. The primary aim is often to simplify the consumer’s task. Potential policies, versions of which already exist in various forms and which cover a spectrum of interventionist strength, might include: the provision and endorsement of independent, transparent price comparison websites and other choice engines (e.g.
mobile applications, decision software); the provision of high quality independent consumer advice; ‘mandated simplification’, whereby regulations stipulate that providers must present product information in a simplified and standardised format specifically determined by regulation; and more strident interventions such as devising and enforcing prescriptive rules and regulations in relation to permissible product descriptions, product features or price structures. The present findings have implications for such policies. However, while the experimental findings have implications for policy, it needs to be borne in mind that the evidence supplied here is only one factor in determining whether any given intervention in markets is likely to be beneficial. The findings imply that consumers are likely to struggle to choose well in markets with products consisting of multiple important attributes that must all be factored in when making a choice. Interventions that reduce this kind of complexity for consumers may therefore be beneficial, but nothing in the present research addresses the potential costs of such interventions, or how providers are likely to respond to them. The findings are also general in nature and are intended to give insights into consumer choices across markets. There are likely to be additional factors specific to certain markets that need to be considered in any analysis of the costs and benefits of a potential policy change. Most importantly, the policy implications discussed here are not specific to Ireland or to any particular product market. Furthermore, they should not be read as criticisms of existing regulatory regimes, which already go to some lengths in assisting consumers to deal with complex products. Ireland currently has extensive regulations designed to protect consumers, both in general and in specific markets, descriptions of which can be found in Section 9.1 of the main report. 
Nevertheless, the experiments described here do offer relevant guidance for future policy designs. For instance, they imply that while policies that make it easier for consumers to switch providers may be necessary to encourage active consumers, they may not be sufficient, especially in markets where products are complex. In order for consumers to benefit, policies that help them to identify better deals reliably may also be required, given the scale of inaccuracy in consumers’ decisions that we record in this report when products have multiple important attributes. Where policies are designed to assist consumer decisions, the present findings imply quite severe limits in relation to the volume of information consumers can simultaneously take into account. Good impartial consumer advice may limit the volume of information and focus on ensuring that the most important product attributes are recognised by consumers. The findings also have implications for the role of competition. While consumers may obtain substantial potential benefits from competition, their capabilities when faced with more complex products are likely to reduce such benefits. Pressure from competition requires sufficient numbers of consumers to spot and exploit better value offerings. Given our results, providers with larger market shares may face incentives to increase the complexity of products in an effort to dampen competitive pressure and generate more market power. Where marketing or pricing practices result in prices or attributes with multiple components, our findings imply that consumer choices are likely to become less accurate. Policymakers must of course be careful in determining whether such practices amount to legitimate innovations with potential consumer benefit. Yet there is a genuine danger that spurious complexity can be generated that confuses consumers and protects market power.
The results described here provide backing for the promotion and/or provision by policymakers of high-quality independent choice engines, including but not limited to price comparison sites, especially in circumstances where the number of relevant product attributes is high. A longer discussion of the potential benefits and caveats associated with such policies is contained in the main body of the report. Mandated simplification policies are gaining in popularity internationally. Examples include limiting the number of tariffs a single energy company can offer or standardising health insurance products, both of which are designed to simplify the comparisons between prices and/or product attributes. The present research has some implications for what might make a good mandate. Consumer decisions are likely to be improved where a mandate brings to the consumer’s attention the most important product attributes at the point of decision. The present results offer guidance with respect to how many key attributes consumers are able simultaneously to trade off, with implications for the design of standardised disclosures. While bearing in mind the potential for imposing costs, the results also suggest benefits to compulsory ‘meta-attributes’ (such as APRs, energy ratings, total costs, etc.), which may help consumers to integrate otherwise separate sources of information.

FUTURE RESEARCH

The experiments described here were designed to produce findings that generalise across multiple product markets. However, in addition to the results outlined in this report, the work has resulted in new experimental methods that can be applied to more specific consumer policy issues. This is possible because the methods generate experimental measures of the accuracy of consumers’ decision-making. As such, they can be adapted to assess the quality of consumers’ decisions in relation to specific products, pricing and marketing practices.
Work is underway in PRICE Lab that applies these methods to issues in specific markets, including those for personal loans, energy and mobile phones.
10

Beason, Scott, Taylor Kenyon, Robert Jost, and Laurent Walker. Changes in glacier extents and estimated changes in glacial volume at Mount Rainier National Park, Washington, USA from 1896 to 2021. National Park Service, June 2023. http://dx.doi.org/10.36967/2299328.

Annotation:
Surface areas of glaciers and perennial snow within Mount Rainier National Park were delineated based on 2021 aerial Structure-from-Motion (SfM) and satellite imagery to document changes to glaciers over the last 125 years. These extents were compared with previously completed databases from 1896, 1913, 1971, 1994, 2009, and 2015. In addition to the glacial features mapped at the Park, any snow patches noted in satellite- and fixed-wing-acquired aerial images in September 2021 were mapped as perennial snowfields. In 2021, Mount Rainier National Park contained a total of 28 named glaciers which covered a total of 75.496 ± 4.109 km2 (29.149 ± 1.587 mi2). Perennial snowfields added another 1.938 ± 0.112 km2 (0.748 ± 0.043 mi2), bringing the total perennial snow and glacier cover within the Park in 2021 to 77.434 ± 4.221 km2 (29.897 ± 1.630 mi2). The largest glacier at Mount Rainier was the Emmons Glacier, which encompasses 10.959 ± 0.575 km2 (4.231 ± 0.222 mi2). The change in glacial area from 1896 to 2021 was -53.812 km2 (-20.777 mi2), a total reduction of 41.6%. This corresponds to an average rate of -0.430 km2 per year (-0.166 mi2 × yr-1) during the 125-year period. Recent changes (between the 6-year period of 2015 to 2021) showed a change of -3.262 km2 (-1.260 mi2) in glacial area, or a 4.14% reduction at a rate of -0.544 km2 per year (-0.210 mi2 × yr-1). This rate is 2.23 times that estimated in 2015 (2009-2015) of -0.244 km2 per year (-0.094 mi2 × yr-1). Changes in ice volume at Mount Rainier and estimates of total volumes were calculated for 1896, 1913, 1971, 1994, 2009, 2015, and 2021. Volume change between 1971 and 2007/8 was -0.65 km3 (-0.16 mi3; Sisson et al., 2011). We used the 2007/8 LiDAR digital elevation model and our 2021 SfM digital surface model to estimate a further loss of -0.404 km3 (-0.097 mi3).
In the 50-year period between 1971 and 2021, the glaciers and perennial snowfields of Mount Rainier lost a total of 1.058 km3 (0.254 mi3) at a rate of 0.021 km3 per year (0.005 mi3 × yr-1). The calculation of the total volume of the glaciers during the various glacier extent inventories at Mount Rainier is not straightforward, and various methods are explored in this paper. Using back-calculated scaling parameters derived from a single volume measurement in 1971 and estimates completed by other authors, we have developed an estimate of glacial mass over the last 125 years at Mount Rainier that mostly agrees with the volumetric changes observed in the last 50 years. Because of the high uncertainty of these methods, a relatively modest 35% error is chosen. In 2021, Mount Rainier’s 28 glaciers contained about 3.516 ± 1.231 km3 (0.844 ± 0.295 mi3) of glacial ice, snow, and firn. The change in glacial mass over the 125-year period from 1896 to 2021 was -3.742 km3 (-0.898 mi3), a total reduction of 51.6%, at an average rate of -0.030 km3 per year (-0.007 mi3 × yr-1). The volume change over the 6-year period from 2015 to 2021 was -0.175 km3 (-0.042 mi3), or a 4.75% reduction, at a rate of -0.029 km3 per year (-0.007 mi3 × yr-1). This survey officially removes one glacier from the Park’s inventory and highlights several other glaciers in a critical state. The Stevens Glacier, an offshoot of the Paradise Glacier on the Park’s south face, was removed due to its lack of features indicating flow; it is therefore no longer a glacier but a perennial snowfield. Two other south-facing glaciers – the Pyramid and Van Trump glaciers – are in serious peril. In the six-year period between 2015 and 2021, these two glaciers lost 32.9% and 33.6% of their area and 42.0% and 42.9% of their volume, respectively. These glaciers are also becoming exceedingly fragmented and no longer possess what can be called a main body of ice.
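The "back-calculated scaling parameters" mentioned in the abstract typically refer to coefficients of the volume-area scaling relation V = c·A^γ widely used in glaciology (Bahr et al.). A minimal sketch of how a single measured (area, volume) pair fixes the coefficient; the exponent, function names, and sample numbers here are assumptions for illustration, not the authors' actual parameters:

```python
# Volume-area scaling V = c * A**gamma; gamma ~1.375 is a commonly cited
# value for mountain glaciers. All specific numbers here are hypothetical.

def back_calculate_c(area_km2: float, volume_km3: float,
                     gamma: float = 1.375) -> float:
    """Solve V = c * A**gamma for c, given one measured (area, volume) pair."""
    return volume_km3 / area_km2 ** gamma

def scaled_volume(area_km2: float, c: float, gamma: float = 1.375) -> float:
    """Estimate a glacier volume from its area once c is fixed."""
    return c * area_km2 ** gamma

# Example: calibrate c on a hypothetical 1971-style measurement, then
# apply it to a later, smaller extent.
c = back_calculate_c(100.0, 4.0)          # one (A, V) pair fixes c
later_volume = scaled_volume(80.0, c)     # volume estimate for a shrunken area
```

The single 1971 measurement plays the role of the calibration pair; the large (35%) error the authors adopt reflects how sensitive such one-point calibrations are.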
Continued losses will quickly lead to the demise of these glaciers in the coming decades. Overall, the glaciers on the south face of the mountain have been rapidly shrinking over the last 125 years. Our data shows a continuation of gradual yet accelerating loss of glacial ice at Mount Rainier, resulting in significant changes in regional ice volume over the last century. The long-term impacts of this loss will be widespread and impact many facets of the Park ecosystem. Additionally, rapidly retreating south-facing glaciers are exposing large areas of loose sediment that can be mobilized to proglacial rivers during rainstorms, outburst floods, and debris flows. Regional climate change is affecting all glaciers at Mount Rainier, but especially those smaller cirque glaciers and discontinuous glaciers on the south side of the volcano. If the regional climate trend continues, further loss in glacial area and volume parkwide is anticipated, as well as the complete loss of small glaciers at lower elevations with surface areas less than 0.2 km2 (0.08 mi2) in the next few decades.
APA, Harvard, Vancouver, ISO and other citation styles
