To see the other types of publications on this topic, follow the link: Fractal modelling.

Dissertations / Theses on the topic 'Fractal modelling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 25 dissertations / theses for your research on the topic 'Fractal modelling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Gregotski, Mark Edward. "Fractal stochastic modelling of airborne magnetic data." Thesis, McGill University, 1989. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=74300.

Full text
Abstract:
Airborne magnetic field data exhibit downward continued power spectra of the form 1/f^β (where f is the spatial frequency and β is a non-negative real number). This form of spectrum is observed for magnetic data recorded over a range of sampling scales from various areas of the Canadian Shield. Two scaling regimes have been discovered. The first has a β value near 3 for wavelengths ≲ 25 km. These results suggest a "variable fractal" description of the distribution of near-surface magnetic sources.
From a data modelling viewpoint, the magnetic measurements are derived from a linear superposition of a deterministic system function and a stochastic excitation process. A symmetric operator corresponds to the system function, and the near-surface magnetic source distribution represents the excitation process. The deconvolution procedure assumes an autoregressive (AR) system function and proceeds iteratively using bi-directional AR (BDAR) filtering in one dimension, which is extended to four-pass AR filtering in two dimensions. The traditional assumption of a spectrally white innovation is used in the deconvolution procedure. The data are modified prior to deconvolution by a Fourier domain prewhitening technique, to account for the long wavelength content of the fractal innovation. Deconvolution of the modified data produces the system function, which is removed from the original data to produce the near-surface magnetic source distribution. This distribution serves as a susceptibility map which can be used for enhancing magnetic field anomalies and geological mapping. Thus, the statistical descriptions of near-surface magnetic sources are useful for modelling airborne magnetic data in "shield-type" geologic environments.
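As a rough illustration of the spectral scaling described above (not the BDAR deconvolution itself), the exponent β of a 1/f^β spectrum can be estimated from a sampled profile by a log-log least-squares fit; the synthetic profile below is an assumption used only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesise a 1/f^beta profile as a stand-in for a magnetic traverse.
n, beta_true = 4096, 3.0
freqs = np.fft.rfftfreq(n, d=1.0)
amplitude = np.zeros_like(freqs)
amplitude[1:] = freqs[1:] ** (-beta_true / 2.0)  # power ~ f^-beta => amplitude ~ f^(-beta/2)
phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
signal = np.fft.irfft(amplitude * np.exp(1j * phases), n)

# Estimate beta as minus the slope of log(power) against log(frequency).
power = np.abs(np.fft.rfft(signal)) ** 2
mask = freqs > 0
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
print(f"estimated beta = {-slope:.2f} (true value {beta_true})")
```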
APA, Harvard, Vancouver, ISO, and other styles
2

Nilsen, Christopher. "Fractal modelling of turbulent flows : Subgrid modelling for the Burgers equation." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for energi- og prosessteknikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-13916.

Full text
Abstract:
The stochastically forced Burgers equation shares some of the same characteristics as the three-dimensional Navier-Stokes equations. Because of this it is sometimes used as a model equation for turbulence. Simulating the stochastically forced Burgers equation with low resolution can be considered a one-dimensional model of a three-dimensional large eddy simulation, and can be used to evaluate subgrid models. Modified versions of subgrid models using the fractal interpolation technique are presented here and tested in low resolution simulations of the stochastically forced Burgers equation. The results are compared with high resolution simulations, with low resolution simulations using the dynamic Smagorinsky model, and with low resolution simulations using no subgrid model other than the numerical dissipation of the convective flux discretisation scheme. The fractal models perform reasonably well, and most of the large scale features of the high resolution simulations are reproduced by corresponding simulations at low resolution. The performance of the fractal models is not, however, better than the performance of the dynamic Smagorinsky model. Therefore one might say that although the fractal models give promising results, it is not obvious that they are in any way superior to the traditional models. The low resolution simulation with the dissipative convective scheme also performs well, suggesting that numerical dissipation can be sufficient as a subgrid model in one dimension. The solutions to the stochastically forced Burgers equation follow a k^(-5/3) energy spectrum, but high order statistics are not similar to real turbulence, due to the complete domination of shocks. Thus the stochastically forced Burgers equation might not be a suitable model for turbulence. It is not likely that the complexity of three-dimensional subgrid modelling is sufficiently represented by the one-dimensional case either.
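A minimal sketch of the kind of low-resolution simulation described, under assumed parameters (grid size, viscosity, forcing amplitude) and with only a dissipative upwind convective scheme as an implicit subgrid model; it is not the solver used in the thesis.

```python
import numpy as np

# Minimal low-resolution solver for the stochastically forced Burgers equation
#   du/dt + u du/dx = nu d2u/dx2 + f,   periodic in x,
# using a dissipative first-order upwind convective scheme and no explicit
# subgrid model. Resolution, viscosity and forcing amplitude are assumptions.
rng = np.random.default_rng(1)
n, nu, dt, steps, amp = 256, 5e-3, 1e-3, 20_000, 0.1
dx = 2.0 * np.pi / n
u = np.zeros(n)

for _ in range(steps):
    dudx = np.where(u > 0, (u - np.roll(u, 1)) / dx, (np.roll(u, -1) - u) / dx)
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    f = amp * rng.normal(size=n) / np.sqrt(dt)      # white-in-time forcing
    u = u + dt * (-u * dudx + nu * lap + f)

# Energy spectrum of the final state (shocks steepen the high-wavenumber end).
E = np.abs(np.fft.rfft(u)) ** 2
print(np.round(E[1:9] / E[1], 3))
```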
APA, Harvard, Vancouver, ISO, and other styles
3

Lauren, Michael Kyle. "The fractal modelling of turbulent surface-layer winds." Thesis, University of Auckland, 1999. http://hdl.handle.net/2292/1106.

Full text
Abstract:
Multiscaling analysis and cascade simulation techniques, which form part of the more general field of fractals, are introduced as a method for characterising and simulating surface-layer winds, particularly for time scales associated with the energy-containing range. This type of analysis consists of determining the power-law parameter of the spectrum of the data, and the scaling of the statistical moments. These techniques were applied to determine how the statistics depended on the duration (or scale) of the fluctuations in wind speed, the atmospheric conditions, and the topography of the site. It was found that the parameterisations produced using multiscaling analysis characterised differences in the statistics for each of these cases. Furthermore, the fractal cascade simulation techniques used provided simple methods for reproducing these statistics. This analysis is followed by an investigation into the robustness of some of these results. In particular, the data is examined for the existence of self-similar distributions of the cascade weighting factor, W. Such self-similar analysis allows the direct simulation of the data via a cascade. Cascade models have the virtue of being able to reproduce statistical properties such as intermittency, and in particular, the nesting of intermittency from different wavenumber bands in the same region of space. The existence of these properties in both the experimental and simulated data is investigated, with consideration given to the consequence of the results for simulation techniques. One notable discovery is the failure of these methods to reproduce the bias in the distribution of the gradients in the wind velocity field. This result has important implications for all workers dealing with simulation of geophysical data by fractal cascades. Finally, a brief numerical experiment is carried out to both demonstrate how this bias may be exploited to construct a model, and to test some of the analysis techniques presented on non-cascade based data. While not a particularly convincing simulator of turbulence, the model nevertheless displays some interesting turbulence-like characteristics.
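A generic sketch of the cascade simulation idea (not Lauren's specific parameterisation): a quantity is distributed down successive halvings of an interval by random weighting factors W, here assumed lognormal.

```python
import numpy as np

def multiplicative_cascade(levels: int, rng) -> np.ndarray:
    """Build a 1-D multiplicative cascade with 2**levels cells."""
    field = np.ones(1)
    for _ in range(levels):
        # Each cell splits in two and each half is multiplied by a random weight W.
        w = rng.lognormal(mean=0.0, sigma=0.3, size=2 * field.size)
        field = np.repeat(field, 2) * w
    return field / field.mean()

rng = np.random.default_rng(42)
field = multiplicative_cascade(12, rng)
# Intermittency shows up as heavy tails in the cell values.
print(f"max = {field.max():.1f}, 99th percentile = {np.percentile(field, 99):.2f}")
```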
APA, Harvard, Vancouver, ISO, and other styles
4

Vera, Epiphany. "Fractal modelling of residual in linear predictive coding of speech." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0006/MQ41642.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Arfeen, Muhammad Asad. "Contributions to modelling of internet traffic by fractal renewal processes." Thesis, University of Canterbury. Department of Computer Science & Software Engineering, 2014. http://hdl.handle.net/10092/10194.

Full text
Abstract:
The principle of parsimonious modelling of Internet traffic states that a minimal number of descriptors should be used for its characterization. Until the early 1990s, the conventional Markovian models for voice traffic had been considered suitable and parsimonious for data traffic as well. Later, with the discovery of strong correlations and increased burstiness in Internet traffic, various self-similar count models were proposed. But, in fact, such models are strictly mono-fractal and applicable at coarse time scales, whereas Internet traffic modelling is about modelling traffic at fine and coarse time scales; modelling traffic which can be mono- and multi-fractal; modelling traffic at interarrival time and count levels; modelling traffic at access and core tiers; and modelling all three structural components of Internet traffic, that is, packets, flows and sessions. The philosophy of this thesis can be described as: “the renewal of renewal theory in Internet traffic modelling”. Renewal theory has great potential for modelling the statistical characteristics of Internet traffic belonging to individual users, access and core networks. In this thesis, we develop an Internet traffic modelling framework based on fractal renewal processes, that is, renewal processes whose underlying distribution of interarrival times is heavy-tailed. The proposed renewal framework covers packets, flows and sessions as structural components of Internet traffic and is applicable for modelling the traffic at fine and coarse time scales. The properties of superposition of renewal processes can be used to model traffic in higher tiers of the Internet hierarchy. As the framework is based on renewal processes, Internet traffic can be modelled at both the interarrival time and count levels.
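A minimal sketch of a fractal renewal process as defined in the abstract: a renewal process whose interarrival times are heavy-tailed, here Pareto with assumed parameters. The window diagnostics illustrate burstiness persisting across time scales.

```python
import numpy as np

rng = np.random.default_rng(7)

# Pareto-distributed interarrival times: P(X > x) = (x / xm) ** (-alpha).
alpha, xm, n = 1.5, 1.0, 200_000            # 1 < alpha < 2 gives heavy tails
interarrivals = xm * (1.0 - rng.uniform(size=n)) ** (-1.0 / alpha)
arrivals = np.cumsum(interarrivals)

# Counts in windows of increasing size stay bursty across scales
# (for a Poisson process the std/mean ratio would shrink much faster).
for window in (10.0, 100.0, 1000.0):
    counts, _ = np.histogram(arrivals, bins=np.arange(0.0, arrivals[-1], window))
    print(f"window {window:6.0f}: mean {counts.mean():8.1f}, std {counts.std():8.1f}")
```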
APA, Harvard, Vancouver, ISO, and other styles
6

Yasrebi, Amir Bijan. "Determination of an ultimate pit limit utilising fractal modelling to optimise NPV." Thesis, University of Exeter, 2014. http://hdl.handle.net/10871/18449.

Full text
Abstract:
The speed and complexity of globalisation and the reduction of natural resources on the one hand, and the interests of large multinational corporations on the other, necessitate proper management of mineral resources and consumption. Scientific research and the application of new methodologies and approaches to maximise Net Present Value (NPV) within mining operations are therefore essential. In some cases, drill core logging in the field may result in an inadequate level of information and subsequent poor diagnosis of geological phenomena, which may undermine the delineation or separation of mineralised zones. This is because the interpretation of individual loggers is subjective. However, modelling based on logging data is absolutely essential to determine the architecture of an orebody, including ore distribution and geomechanical features. For instance, ore grades, density and RQD values are not included in conventional geological models, whilst variations in a mineral deposit are an obvious and salient feature. Given the problems mentioned above, a series of new mathematical methods based on fractal modelling, which provide a more objective approach, have been developed. These have been established and tested in a case study of the Kahang Cu-Mo porphyry deposit, central Iran. Recognition of different types of mineralised zone in an ore deposit is important for mine planning. As a result, it is felt that the most important outcome of this thesis is the development of an innovative approach to the delineation of major mineralised (supergene and hypogene) zones from ‘barren’ host rock. This is based on subsurface data and the utilisation of the Concentration-Volume (C-V) fractal model, proposed by Afzal et al. (2011), to optimise a Cu-Mo block model for better determination of an ultimate pit limit. Drawing on this, new approaches, referred to as Density–Volume (D–V) and RQD-Volume (RQD-V) fractal modelling, have been developed and used to delineate rock characteristics in terms of density and RQD within the Kahang deposit (Yasrebi et al., 2013b; Yasrebi et al., 2014). From the results of this modelling, the density and RQD populations of rock types from the studied deposit showed a relationship between density and rock quality based on RQD values, which can be used to predict the final pit slope. Finally, the study introduces a Present Value-Volume (PV-V) fractal model to identify an accurate excavation orientation with respect to economic principles and the ore grades of all determined voxels within the obtained ultimate pit limit, in order to achieve an earlier pay-back period.
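A sketch of the Concentration-Volume (C-V) idea on synthetic data: plot the volume of blocks whose grade exceeds a threshold against that threshold on log-log axes, and read zone boundaries off the breaks in slope. The grades and block size below are assumptions, not Kahang data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in block-model grades (lognormal); real inputs would come from kriging.
grades = rng.lognormal(mean=-1.0, sigma=0.8, size=50_000)   # e.g. Cu %
block_volume = 10.0 ** 3                                    # assumed 10 m blocks

# C-V model: volume occupied by blocks with grade >= threshold, vs threshold.
thresholds = np.quantile(grades, np.linspace(0.05, 0.99, 40))
volumes = np.array([(grades >= t).sum() * block_volume for t in thresholds])

# Straight-line segments on log-log axes correspond to distinct populations;
# breaks in slope give the thresholds separating zones (e.g. supergene/hypogene).
slope, _ = np.polyfit(np.log10(thresholds), np.log10(volumes), 1)
print(f"overall log-log slope ~ {slope:.2f}")
```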
APA, Harvard, Vancouver, ISO, and other styles
7

Wohlberg, Brendt. "Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective." Doctoral thesis, University of Cape Town, 1996. http://hdl.handle.net/11427/9475.

Full text
Abstract:
Bibliography: p. 208-225.
Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
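A toy sketch of the block matching at the heart of fractal compression, under assumptions (greyscale image, fixed block sizes, no isometries, no decoder): each range block is matched to the best downsampled domain block under an affine map s·D + o. Practical coders also clamp |s| so the decoding iteration stays contractive.

```python
import numpy as np

def encode(img: np.ndarray, rb: int = 4):
    """Toy fractal encoder: map each rb x rb range block to the best
    2rb x 2rb domain block (downsampled) under an affine map s*D + o."""
    h, w = img.shape
    domains, positions = [], []
    for y in range(0, h - 2 * rb + 1, rb):
        for x in range(0, w - 2 * rb + 1, rb):
            d = img[y:y + 2 * rb, x:x + 2 * rb]
            d = d.reshape(rb, 2, rb, 2).mean(axis=(1, 3))   # 2x2 averaging
            domains.append(d.ravel())
            positions.append((y, x))

    code = []
    for y in range(0, h, rb):
        for x in range(0, w, rb):
            r = img[y:y + rb, x:x + rb].ravel()
            best = None
            for k, d in enumerate(domains):
                s, o = np.polyfit(d, r, 1)      # least-squares contrast/brightness
                err = float(np.sum((s * d + o - r) ** 2))
                if best is None or err < best[0]:
                    best = (err, positions[k], s, o)
            code.append(((y, x), *best[1:]))    # range pos, domain pos, s, o
    return code

rng = np.random.default_rng(0)
img = rng.random((16, 16))
print(len(encode(img)))   # 16 range blocks for a 16x16 image with rb = 4
```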
APA, Harvard, Vancouver, ISO, and other styles
8

Cottet, Arnaud J. "Modelling of ceramic matrix composite microstructure using a 2-D fractal spatial particle distribution." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/12928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

McClean, Colin John. "The scale-free and scale-bound properties of land surfaces : fractal analysis and specific geomorphometry from digital terrain models." Thesis, Durham University, 1990. http://etheses.dur.ac.uk/5999/.

Full text
Abstract:
The scale-bound view of landsurfaces, being an assemblage of certain landforms occurring within limited scale ranges, has been challenged by the scale-free characteristics of fractal geometry. This thesis assesses the fractal model by examining the irregularity of landsurface form for the self-affine behaviour present in fractional Brownian surfaces. Different methods for detecting self-affine behaviour in surfaces are considered, and of these the variogram technique is shown to be the most effective. It produces the best results of two methods tested on simulated surfaces with known fractal properties. The algorithm used has been adapted to consider log (altitude variance) over a sample of log (distances) for: complete surfaces; subareas within surfaces; separate directions within surfaces. Twenty-seven digital elevation models of landsurfaces are re-examined for self-affine behaviour. The variogram results for complete surfaces show that none of these are self-affine over the scale range considered. This is because of dominant slope lengths and regular valley spacing within areas. For similar reasons, subarea analysis produces the non-fractal behaviour of markedly different variograms for separate subareas. The linearity of landforms in many areas is detected by the variograms for separate directions. This indicates that the roughness of landsurfaces is anisotropic, unlike that of fractal surfaces. Because of difficulties in extracting particular landforms from their landsurfaces, no clear links between fractal behaviour and landform size distribution could be established. A comparative study shows the geomorphometric parameters of fractal surfaces to vary with fractal dimension, while the geomorphometry of landsurfaces varies with the landforms present. Fractal dimensions estimated from landsurfaces do not correlate with geomorphometric parameters. From the results of this study, real landsurfaces would not appear to be scale-free. Therefore, a scale-bound approach towards landsurfaces would seem to be more appropriate to geomorphology than the fractal alternative.
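A minimal sketch of the variogram technique the thesis favours, for a 1-D profile: the slope of log(altitude variance) against log(lag) gives the Hurst exponent H, and D = 2 - H for a profile (3 - H for a surface). The Brownian test profile is an assumption.

```python
import numpy as np

def variogram_dimension(z: np.ndarray, max_lag: int = 64) -> float:
    """Fractal dimension of a 1-D profile from the slope of
    log(altitude variance) against log(lag distance)."""
    lags = np.arange(1, max_lag)
    gamma = np.array([np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    hurst = slope / 2.0       # self-affine: variance ~ lag**(2H)
    return 2.0 - hurst        # D = 2 - H for a profile (3 - H for a surface)

# Brownian test profile (H = 0.5): the estimate should be close to D = 1.5.
rng = np.random.default_rng(5)
profile = np.cumsum(rng.normal(size=4096))
print(f"estimated D ~ {variogram_dimension(profile):.2f}")
```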
APA, Harvard, Vancouver, ISO, and other styles
10

Sithebe, Nomcebo Princess. "Flocculation modelling of differential sedimentation based on fundamental physics of settling particles and fractal theory." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/40841.

Full text
Abstract:
Sedimentation is a fundamental operation in wastewater treatment works. A rational design of sedimentation tanks is currently achieved by plotting iso-percentile (iso-percentage) concentration removal profiles from flocculent settling data. A major drawback of the graphical iso-percentage method is that the iso-percentile lines are often manually interpolated and are mere hand-drawn estimations. This is because the settling behaviour of sludge particles is highly non-linear. The manual analytical process is therefore very tedious, inaccurate and subjective. Hence, an optimised design of sedimentation tanks is necessary in order to eliminate the errors incurred during data analysis. In this study, a mechanistic iso-percentile flocculent model (referred to as the velocity flocculation model) is developed to simulate the behaviour of flocculating colloidal particles in turbid water. This model is based on the physics of flocculent settling particles and on fractal theory. It is formulated to produce automated iso-percentile curves, which are fundamental in the design of sedimentation tanks. The iso-percentile model was vertically integrated into a velocity model to produce a model expressing the velocity of particles as a function of removal rate. The velocity model has an obvious advantage over the iso-percentile model in that it is easy to contextualize. It can be reverted back to the iso-percentile trajectory analysis, eliminating the need for extensive data interpolation, and may in future eliminate the need for settling column analysis altogether. In the current study, the integrated velocity form is used to predict the instantaneous flocculent settling velocity of fine suspended particles under near-quiescent conditions. This is vital since it is difficult to obtain velocity values in situ or directly from sedimentation tanks. Model validity and competency were tested by direct comparison with existing literature models, such as Ozer's model and Ramatsoma and Chirwa's model. Model comparison was based on the goodness of fit, the least sum of square errors and mathematical consistency with known flocculent settling behaviour. The newly developed iso-percentile model achieved a more accurate simulation of physical experimental data, did not violate any of the mathematical constraints and yielded lower sums of square errors than originally achieved by Ozer and by Ramatsoma and Chirwa. Notably, the proposed velocity model offers a distinctive advantage over conventional interpolated iso-percentile based models, which are prone to numerical errors during interpolation. The performance of the velocity model was compared against Je and Chang's velocity model. Higher velocity values were observed for the new model than for Je and Chang's model, implying that empirically based models tend to under-predict velocity values. The model developed in this study brings us one step closer to achieving full automation of settling tank and clarifier design.
Dissertation (MEng)--University of Pretoria, 2013.
Chemical Engineering
APA, Harvard, Vancouver, ISO, and other styles
11

Geiselhart, Karin. "Does democracy scale?: a fractal model for the role of interactive technologies in democratic policy processes." University of Canberra. Information Management and Tourism, 1999. http://erl.canberra.edu.au./public/adt-AUC20050628.122941.

Full text
Abstract:
The development of interactive communication technologies, such as email and the Internet, has stimulated much discussion about their potential to assist in the renewal of democracy. Globalisation, intimately connected with technological advance, has likewise led to considerations of 'governance' that transcend the nation state. This thesis poses the general question: Can the efficiencies and economies which drive commercial applications of interactive technology be applied to democratic policy processes? What conditions will facilitate this? The perspective taken here views policy as a communication process between networks of actors who seek to use resources and decisions to promote their value systems (Considine 1994). Democracy is also seen as a communication process, with a set of criteria which ensure equal access to information and agenda setting among participants (Dahl 1989). These definitions and criteria have been chosen for their generality. They apply at all levels of analysis, and thus may be used for comparison across scales. These are aligned with concepts from complexity theory as applied to human nonlinear systems and organisational dynamics, drawing mostly on Kiel (1994) and Stacey (1996). The inherent unpredictability of these systems and their capacity for complex learning and adaptive self-organisation offers an alternative paradigm to linear, hierarchical management models. A combination of literature analysis and case study evidence leads to three extensions of complexity theory as applied to human systems: Firstly, interactive technology is proposed as a technique for the potential re-pluralisation of democratic processes in complex human systems. The possibility of adding many new non-elite voices, and of making these available to all others, relates to both chaotic forms of self-organisation and the benefits of electronic democracy. Secondly, evidence is presented for the existence of fractal, or self-similar, patterns in the ways information technology is applied at different levels of governance. It is shown that instrumental approaches which emphasise efficiency dominate technology use at the global, national and organisational levels, and these are interdependent. Alternative patterns which encourage participation also exist at every level. Thirdly, it is argued that the values of the dominant actors are the main determinants of whether interactive technologies will be structured to favour democratic forms of communication at every level. Three theory chapters in Part I develop these arguments by extensive reviews of relevant literatures. On the global level, convergent media, telecommunications and technology conglomerates underpin a global 'nervous system' which discourages government intervention, promotes a global monoculture, inhibits pluralistic debate by minimising access to alternative forms of information, and emphasises individualism and consumption. Within nations, widespread uniformity of public sector reform is sympathetic and responsive to these globalising pressures. Deregulation, privatisation, retreat from public broadcasting, down-sizing and outsourcing have become standard approaches, and are reflected in Australian information technology policy and programs. Several exceptions demonstrate more participatory approaches. At the organisational level, instrumental approaches to management and computerisation also prevail.
In each case, a shift towards globalising values corresponds to applications of information technology which dampen the complex interactivity required for democratic policy processes. Part II supplements this analysis with case study evidence. The organisational data were collected primarily during a two-year qualitative study of interactive technology use in the Australian Department of Finance and Administration. The researcher found technology use was inseparable from other change processes, and these were found to have strong elements inhibiting participation in internal policy. An instrumental approach to interactive technology use reinforced hierarchical decision processes. Three minor case studies looked at an internal mailing list in a federal agency, a mostly national list on Internet and telecommunications policy, and an experiment in electronic democracy at the local level. These offered additional insights into the ways interactive technologies can contribute to complex but adaptive policy processes, if normative democratic values guide their design. The researcher proposes a set of communication protocols for the use of interactive technologies in democratic policy processes. These would enable the forms of communication necessary to reinvigorate democracy in an information age. It is also argued that these protocols, if applied at the organisational level, and particularly within the public sector, could become part of a reaffirmation of industrial democracy. This is necessary to ensure the integrity and accountability of the public sector, given the progressive intermingling of these institutions with private enterprise. Additional suggestions are made for research into government uses of information technology as an important focus for policy analysts.
APA, Harvard, Vancouver, ISO, and other styles
12

Treifi, Muhammad. "Fractal-like finite element method and strain energy approach for computational modelling and analysis of geometrically V-notched plates." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/fractallike-finite-element-method-and-strain-energy-approach-for-computational-modelling-and-analysisof-geometrically-vnotched-plates(93e63366-8eef-4a29-88a4-0c89cf13ec1f).html.

Full text
Abstract:
The fractal-like finite element method (FFEM) is developed to compute stress intensity factors (SIFs) for isotropic homogeneous and bi-material V-notched plates. The method is semi-analytical, because analytical expressions of the displacement fields are used as global interpolation functions (GIFs) to carry out a transformation of the nodal displacements within a singular region to a small set of generalised coordinates. The concept of the GIFs in reducing the number of unknowns is similar to the concept of the local interpolation functions of a finite element. Therefore, the singularity at a notch-tip is modelled accurately in the FFEM using a few unknowns, leading to reduction of the computational cost. The analytical expressions of displacements and stresses around a notch tip are derived for different cases of notch problems: in-plane (modes I and II) conditions and out-of-plane (mode III) conditions for isotropic and bi-material notches. These expressions, which are eigenfunction series expansions, are then incorporated into the FFEM to carry out the transformation of the displacements of the singular nodes and to compute the notch SIFs directly without the need for post-processing. Different numerical examples of notch problems are presented and results are compared to available published results and solutions obtained by using other numerical methods. A strain energy approach (SEA) is also developed to extract the notch SIFs from finite element (FE) solutions. The approach is based on the strain energy of a control volume around the notch-tip. The strain energy may be computed using commercial FE packages, which are only capable of computing SIFs for crack problems and not for notch problems. Therefore, this approach is a strong tool for enabling analysts to compute notch SIFs using current commercial FE packages. This approach is developed for comparison of the FFEM results for notch problems where available published results are scarce, especially for the bi-material notch cases. A very good agreement between the SEA results and the FFEM results is illustrated. In addition, the accuracy of the results of both procedures is shown to be very good compared to the available results in the literature. Therefore, the FFEM as a stand-alone procedure and the SEA as a post-processing technique, developed in this research, are proved to be very accurate and reliable numerical tools for computing the SIFs of a general notch in isotropic homogeneous and bi-material plates.
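For context on the singular fields from which the FFEM's global interpolation functions are built: the Williams characteristic equation for a sharp V-notch under mode I loading can be solved numerically. This is a standard textbook calculation, not code from the thesis; the root bracket is an assumption.

```python
import numpy as np
from scipy.optimize import brentq

def mode_I_eigenvalue(notch_opening_deg: float) -> float:
    """Smallest Williams eigenvalue lambda in (0, 1) for a sharp V-notch;
    stresses at the tip behave like r**(lambda - 1)."""
    # The material occupies a wedge of angle 2*alpha = 360 deg - opening angle.
    alpha = np.deg2rad((360.0 - notch_opening_deg) / 2.0)
    f = lambda lam: lam * np.sin(2.0 * alpha) + np.sin(2.0 * lam * alpha)
    return brentq(f, 0.4, 0.999)

for opening in (0.0, 30.0, 60.0, 90.0):
    lam = mode_I_eigenvalue(opening)
    print(f"opening {opening:5.1f} deg: lambda = {lam:.4f} "
          f"(singularity r^{lam - 1.0:+.4f})")
```

For a crack (opening 0°) this recovers the classical square-root singularity, lambda = 0.5.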
APA, Harvard, Vancouver, ISO, and other styles
13

Madison, Guy. "Functional modelling of the human timing mechanism." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2001. http://publications.uu.se/theses/91-554-5012-1/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Zabzina, Natalia. "Mathematical modelling approach to collective decision-making." Doctoral thesis, Uppsala universitet, Tillämpad matematik och statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-314903.

Full text
Abstract:
In everyday situations individuals make decisions. For example, a tourist usually chooses a crowded or recommended restaurant to have dinner. Perhaps it is an individual decision, but the observed pattern of decision-making is a collective phenomenon. Collective behaviour emerges from the local interactions that give rise to a complex pattern at the group level. In our example, the recommendations or simply copying the choices of others make a crowded restaurant even more crowded. The rules of interaction between individuals are important to study. Such studies should be complemented by biological experiments. Recent studies of collective phenomena in animal groups help us to understand these rules and develop mathematical models of collective behaviour. The most important communication mechanism is positive feedback between group members, which we observe in our example. In this thesis, we use a generic experimentally validated model of positive feedback to study collective decision-making. The first part of the thesis is based on the modelling of decision-making associated with the selection of feeding sites. This has been extensively studied for ants and slime moulds. The main contribution of our research is to demonstrate how such aspects as "irrationality", speed and quality of decisions can be modelled using differential equations. We study bifurcation phenomena and describe collective patterns above the critical values of bifurcation points in mathematical and biological terms. In the second part, we demonstrate how the primitive unicellular slime mould Physarum polycephalum provides an easy test-bed for theoretical assumptions and model predictions about decision-making. We study its searching strategies and model decision-making associated with the selection of food options. We also consider the aggregation model to investigate the fractal structure of Physarum polycephalum plasmodia.
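A sketch of a generic positive-feedback choice model in the spirit of the experimentally validated models mentioned above; the choice function and parameters are assumptions, not the thesis equations.

```python
import numpy as np

def choose(n_agents=1000, k=5.0, n=2.0, rng=None):
    """Sequential binary choice with positive feedback: each arrival picks
    option i with probability proportional to (k + x_i) ** n (assumed form)."""
    rng = rng or np.random.default_rng()
    x = np.zeros(2)
    for _ in range(n_agents):
        w = (k + x) ** n
        x[rng.choice(2, p=w / w.sum())] += 1.0
    return x

rng = np.random.default_rng(11)
runs = np.array([choose(rng=rng) for _ in range(20)])
# With n > 1 the feedback is strong enough that most runs commit to one option,
# even though the two options are identical: a symmetry-breaking bifurcation.
print((runs.max(axis=1) / runs.sum(axis=1)).round(2))
```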

Wrong series in the printed book

APA, Harvard, Vancouver, ISO, and other styles
15

Kuklik, Robert G. "Capital Asset Prices Modelling - Concept VAPM." Doctoral thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-196945.

Full text
Abstract:
The key objective of this thesis is the outline of an alternative capital market modelling framework, the Volatility Asset Pricing Model (VAPM), inspired by the innovative dual approach of Mandelbrot and Hudson, using a method based on the synthesis of two seemingly antagonistic factors: the volatility of market prices and their serial dependence, which together determine the capital markets' dynamics. The pilot tests of this model in various periods, using the market index as well as a portfolio of selected securities, delivered generally satisfactory results. Firstly, the work delivers a brief recapitulation of the concepts of consumer/investor choice under general conditions of hypothetical certainty. Secondly, this outline is followed by a description of the "classical" methodologies in the risky environment of uncertainty, with an assessment of their corresponding key models, i.e. the CAPM, SIM, MIM, APTM, etc., notwithstanding results of the related testing approaches. Thirdly, this assessment is based on an evaluation of the underlying doctrine of the Efficient Market Hypothesis in relation to the so-called Random Walk Model. Fourthly, in this context the work also offers a brief exposure to a few selected tests of these controversial concepts. Fifthly, the main points of contemporary approaches such as the Fractal Dimension and the Hurst Exponent in the dynamic framework of information entropy are subsequently described as the theoretical tools leading to the development of the abovementioned VAPM model. The major contribution of this thesis is considered to be its attempt to apply the abovementioned concepts in practice, with the intention of possibly inspiring further analytical research.
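A sketch of Hurst exponent estimation by rescaled-range (R/S) analysis, one of the contemporary tools named above; the chunk sizes and the test series are assumptions.

```python
import numpy as np

def hurst_rs(series: np.ndarray) -> float:
    """Hurst exponent by rescaled-range (R/S) analysis."""
    n = series.size
    sizes = [2 ** k for k in range(4, 20) if 2 ** k <= n // 4]
    log_s, log_rs = [], []
    for s in sizes:
        chunks = series[: n - n % s].reshape(-1, s)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)     # range of cumulative deviations
        sd = chunks.std(axis=1)
        ok = sd > 0
        log_s.append(np.log(s))
        log_rs.append(np.log(np.mean(r[ok] / sd[ok])))
    slope, _ = np.polyfit(log_s, log_rs, 1)       # R/S ~ s**H
    return slope

rng = np.random.default_rng(2)
returns = rng.normal(size=2 ** 14)
# i.i.d. Gaussian returns: expect H near 0.5 (finite-sample bias pushes it up);
# persistent (trending) series give H > 0.5, anti-persistent ones H < 0.5.
print(f"H ~ {hurst_rs(returns):.2f}")
```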
APA, Harvard, Vancouver, ISO, and other styles
16

Mishkinis, Anton. "Extension des méthodes de géométrie algorithmique aux structures fractales." Phd thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00991384.

Full text
Abstract:
The definition of shapes by these iterative processes generates structures with interesting specific properties: roughness, lacunarity, etc. However, classical geometric models are not suited to describing such shapes. With the aim of developing an iterative modeller for designing fractal objects described by means of a BCIFS, we have developed a set of generic tools and algorithms that allow us to evaluate, characterise and analyse the various geometric properties of fractals (localisation, computation of the convex hull, of the distance from a point, etc.). We have identified the properties of standard operations (intersection, union, offset, ...) that make it possible to compute an approximation of the image of a fractal, and furthermore to optimise these approximation algorithms. In some cases, it is possible to construct a CIFS with a generalised Hutchinson operator whose attractor is sufficiently close to the result of the operation with respect to the Hausdorff metric. We have developed a generic algorithm to compute such CIFS for a given precision. We have defined the self-similarity property of an operation, which defines the set of transformations used in the resulting iterative system. To construct an exact CIFS of the image, if one exists, all the necessary similarities must be proved manually. We also make explicit the condition on the operation under which the result can be represented by an IFS with a generalised Hutchinson operator. In that case, only this condition needs to be proved manually.
APA, Harvard, Vancouver, ISO, and other styles
17

Adegoke, Elijah. "Radio frequency channel characterization for energy harvesting in factory environments." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/33412.

Full text
Abstract:
This thesis presents ambient energy data obtained from a measurement campaign carried out at an automobile plant. At the automobile plant, ambient light, ambient temperature and ambient radio frequency were measured during the daytime over two days. The measurement results showed that ambient light generated the highest DC power. For plant and operations managers at the automobile plant, the measurement data can be used in system design considerations for future energy harvesting wireless sensor nodes at the plant. In addition, wideband measurements obtained from a machine workshop are presented in this thesis. The power delay profile of the wireless channel was obtained by using a frequency domain channel sounding technique. The measurements were compared with an equivalent ray tracing model in order to validate the suitability of the commercial propagation software used in this work. Furthermore, a novel technique for mathematically recreating the time dispersion created by factory inventory in a radio frequency channel is discussed. As a wireless receiver design parameter, delay spread characterizes the amplitude and phase response of the radio channel. In wireless sensor devices, this becomes paramount, as it determines the complexity of the receiver. In reality, it is sometimes difficult to obtain full-detail floor plans of factories for deterministic modelling or to carry out spot measurements during building construction. As a result, radio provision may be suboptimal. The method presented in this thesis is based on 3-D fractal geometry. By employing the fractal overlaying algorithm presented, metallic objects can be placed on a floor plan so as to obtain similar radio frequency channel effects. The environment created using the fractal approach was used to estimate the amount of energy a harvesting device can accumulate in a university machine workshop space.
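For reference, the RMS delay spread mentioned above is the second central moment of the power delay profile; the profile values below are assumptions, not measurements from the plant.

```python
import numpy as np

def rms_delay_spread(delays_ns: np.ndarray, powers_mw: np.ndarray) -> float:
    """RMS delay spread of a power delay profile (power-weighted moments)."""
    p = powers_mw / powers_mw.sum()
    mean_delay = np.sum(p * delays_ns)
    return np.sqrt(np.sum(p * (delays_ns - mean_delay) ** 2))

# Illustrative multipath profile (assumed values, linear power scale).
delays = np.array([0.0, 50.0, 120.0, 300.0])        # ns
powers = np.array([1.0, 0.5, 0.25, 0.05])
print(f"RMS delay spread = {rms_delay_spread(delays, powers):.1f} ns")
```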
APA, Harvard, Vancouver, ISO, and other styles
18

Islam, Zahurul. "Fractals and fuzzy sets for modelling the heterogenity and spatial complexity of urban landscapes using multiscale remote sensing data." Curtin University of Technology, Department of Spatial Sciences, 2004. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=15414.

Full text
Abstract:
This research presents models for the analysis of the textural and contextual information content of multiscale remote sensing data to select an appropriate scale for the correct interpretation and mapping of heterogeneous urban land cover types. Spatial complexity measures such as the fractal model and Moran's I index of spatial autocorrelation were applied to address the issue of scale, while fuzzy set theory was applied to map heterogeneous urban land cover types. Three local government areas (the City of Perth, the City of Melville and the City of Armadale) of the Perth metropolitan area were selected for the analysis of spatial complexity and the mapping of complex land covers, as the dominant land covers of these areas are representative of the whole metropolitan area. Characterisation of the spatial complexity of the study areas computed from SPOT, Landsat-7 ETM+ and Landsat MSS data was used to assess the appropriateness of a scale for urban analysis. Associated with this outcome, the effect of spectral resolution and land cover heterogeneity on spatial complexity, the performance of fractal measurement algorithms and the relationship between the fractal dimension and Moran's I were identified. A fuzzy supervised approach of the fuzzy c-means algorithm was used to generate fuzzy memberships of the selected bands of a Landsat-7 ETM+ scene, based on the highest spectral separability among different urban land covers (e.g. forest, grassland, urban and dense urban) as determined by a transformed divergence analysis. Fuzzy land cover maps resulting from the application of fuzzy operators (e.g. maximum, minimum, algebraic sum, algebraic product and gamma operators) were evaluated against fuzzy memberships derived from the virtual field reference database (VFRDB).
The performance of the fuzzy operators in generating fuzzy categorical maps, the effect of land cover heterogeneity on fuzzy accuracy measures and the sources of classification error were assessed. The analysis of spatial complexity computed from remote sensing images using a fractal model indicated that the various urban land cover types of the Perth metropolitan area are best represented at a resolution of 20 m (SPOT), as the fractal dimension (D) was found to be higher than at the 25 m and 50 m resolutions of the Landsat-7 ETM+ and Landsat MSS, respectively. The analysis also demonstrated the ability of the fractal model to distinguish variations in the composition of built-up areas in the green and red bands of the satellite data, while forested areas typical of the urban fringe appear better characterised in the NIR band. Moran's I of spatial autocorrelation was found useful in describing the spatial pattern of urban land cover types. A comparison between the D and Moran's I of the study areas revealed a negative correlation, indicating that the higher the Moran's I, the lower the fractal dimension and hence the lower the spatial complexity. The accuracy of the fuzzy categorical maps associated with multiple spectral bands of a Landsat-7 ETM+ scene using various fuzzy operators reveals that the fuzzy gamma operator (γ = 0.90) outperformed the fuzzy algebraic sum and other fuzzy operators for the City of Perth, while the accuracy measures for a γ value of 0.95 were found to be highest for the City of Melville and the City of Armadale.
A comparison of the accuracy measures of the fuzzy land cover maps of the study areas indicated that the overall accuracy for the City of Perth was up to 13% higher than the overall accuracies for the City of Melville and the City of Armadale, which were 69% and 71%, respectively. The lower accuracy measures for the City of Melville and the City of Armadale were attributed to highly mixed land cover classes resulting in mixed pixels in the Landsat-7 ETM+ scene. In addition, the spectral similarity between the forest and grassland classes, and between the urban and dense urban classes, was identified as a source of classification error. The analysis of spatial complexity using multiscale and multisource remote sensing data and the application of fuzzy set theory provided a viable methodology for assessing the appropriateness of scale selection for an urban analysis and for generating fuzzy urban land cover maps from a multispectral image. It also illustrated the longstanding issue of assessing the accuracy of a fuzzy land cover map by considering the fuzzy memberships of the classified data and the reference data using a fuzzy error matrix.
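A sketch of the fuzzy c-means membership calculation underlying the fuzzy maps described above, for a single spectral band with assumed class centres (not the thesis data); each pixel's memberships across classes sum to one.

```python
import numpy as np

def fcm_memberships(x: np.ndarray, centers: np.ndarray, m: float = 2.0) -> np.ndarray:
    """Fuzzy c-means membership of each 1-D sample in each cluster centre:
    u_ik = 1 / sum_j (d_ik / d_jk) ** (2 / (m - 1))."""
    d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # distances to centres
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

# Toy single-band reflectance values and assumed class centres.
pixels = np.array([0.05, 0.20, 0.35, 0.60, 0.62])
centers = np.array([0.10, 0.40, 0.65])     # e.g. forest, grassland, urban
u = fcm_memberships(pixels, centers)
print(u.round(2))
print(u.sum(axis=1))                       # memberships per pixel sum to 1
```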
APA, Harvard, Vancouver, ISO, and other styles
19

Lagree, Bertrand. "Modelling of two-phase flow in porous media with volume-of-fluid method." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066199/document.

Full text
Abstract:
Understanding multiphase flow in porous media is of tremendous importance for many industrial and environmental applications at various spatial and temporal scales. The present study consequently focuses on modelling multiphase flows by the Volume-of-Fluid method in porous media and shows simulations of Saffman-Taylor fingering motivated by the analysis of waterflooding experiments of extra-heavy oils in quasi-2D square slab geometries of Bentheimer sandstone. The Gerris code, which allows efficient parallel computations with octree mesh refinement, is used. It is tested for accuracy and computational speed using several levels of refinement and comparing to reference simulations in the literature. Simulations of real rocks are realised in three dimensions with very promising results. Though it is not yet possible to attain realistic capillary numbers, it is possible to simulate flows in domains of physical size up to 1 mm³ in reasonable CPU time. 2D simulations of viscous fingering with both central and lateral injection are also presented in this study, based on Darcy's law. The fractal aspect of this fingering is studied by considering both its fractal dimension and the variation of the area of the resulting pattern with respect to its arclength. Finally, polymer flooding following waterflooding in a two-step process is simulated with Darcy modelling.
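A sketch of one way the fractal aspect of a fingering pattern can be quantified: a box-counting estimate of fractal dimension on a binary image. The test pattern is an assumption; the thesis applies such measures to simulated fingering patterns.

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Box-counting estimate of the fractal dimension of a binary 2-D pattern."""
    n = mask.shape[0]
    sizes = [s for s in (2 ** k for k in range(0, int(np.log2(n)))) if s < n]
    counts = []
    for s in sizes:
        view = mask[: n - n % s, : n - n % s].reshape(n // s, s, -1, s)
        counts.append(view.any(axis=(1, 3)).sum())   # boxes touching the pattern
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Toy test pattern: a thin diagonal band, for which D should be close to 1.
n = 256
y, x = np.indices((n, n))
mask = np.abs(x - y) < 2
print(f"D ~ {box_counting_dimension(mask):.2f}")
```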
APA, Harvard, Vancouver, ISO, and other styles
20

Lanaro, Flavio. "Geometry, Mechanics and Transmissivity of Rock Fractures." Doctoral thesis, Stockholm : Tekniska högsk, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Stacey, Geoff. "Stochastic fractal modelling of dielectric discharges." 1995. http://hdl.handle.net/1993/19010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Zhijing. "GIS-based fractal/multifractal modelling of texture in mylonites and banded sphalerite ores /." 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:NR46019.

Full text
Abstract:
Thesis (Ph.D.)--York University, 2008. Graduate Programme in Earth and Space Science and Engineering.
Typescript. Includes bibliographical references (leaves 123-134). Also available on the Internet.
APA, Harvard, Vancouver, ISO, and other styles
23

Akinlabi, Emmanuel Olutayo. "Analysis and Modelling of Small-Scale Turbulence." Doctoral thesis, 2020. https://depotuw.ceon.pl/handle/item/3669.

Full text
Abstract:
The analysis and modelling of small-scale turbulence in the atmosphere play a significant role in improving our understanding of cloud processes, thereby contributing to the development of better parameterizations of climate models. Advancement in our understanding of turbulence can be fuelled by a more in-depth study of small-scale turbulence, which is the subject of this thesis. Within this thesis, small scales are understood as turbulent structures affected by viscosity, as well as scales from the high-wavenumber part of the inertial range, of O(0.1 m–1 m), which are typically neglected in numerical simulations of atmospheric turbulence. This work is divided into two parts. In the first part, various approaches to estimate the turbulence kinetic energy (TKE) dissipation rate ε from one-dimensional (1D) intersections that resemble experimental series are tested using direct numerical simulation (DNS) of the stratocumulus cloud-top mixing layer and the free convective boundary layer. Results of these estimates are compared with "true" DNS values of ε in buoyant and inhomogeneous atmospheric flows. This research focuses on recently proposed methods of TKE dissipation-rate retrieval based on the signal's zero crossings and on recovering the missing part of the spectrum. The methods are tested on fully resolved turbulence fields and compared to standard retrievals from power spectra and structure functions. Anisotropy of turbulence due to buoyancy is shown to influence retrievals based on the vertical velocity component. TKE dissipation-rate estimates from the number of crossings correspond well to spectral estimates. As far as the recovery of the missing part of the spectrum is concerned, different models for the dissipation spectrum were investigated, and the best one was chosen for further study. Results were improved when the Taylor microscale was used in the iterative method, instead of the Liepmann scale based on the number of the signal's zero crossings. This also allowed for the characterization of external intermittency by the Taylor-to-Liepmann scale ratio. It was shown that the new methods of TKE dissipation-rate retrieval from 1D series provide a valuable complement to standard approaches. The second part of this study addresses the reconstruction of sub-grid scales in large eddy simulation (LES) of turbulent flows in the stratocumulus cloud-top. The approach is based on the fractality assumption of the turbulent velocity field. The fractal model reconstructs sub-grid velocity fields from known filtered values on the LES grid, using fractal interpolation, proposed by Scotti and Meneveau [Physica D 127, 198–232, 1999]. The characteristics of the reconstructed signal depend on the stretching parameter d, which is related to the fractal dimension of the signal. In many previous studies, the stretching parameter values were assumed to be constant in space and time. To improve the fractal interpolation approach, the stretching parameter variability is accounted for. The local stretching parameter is calculated from DNS data with an algorithm proposed by Mazel and Hayes [IEEE Trans. Signal Process 40(7), 1724–1734, 1992], and its probability density function (PDF) is determined. It is found that the PDFs of d have a universal form when the velocity field is filtered to wave-numbers within the inertial range.
The inertial-range PDFs of d in DNS and LES of the stratocumulus cloud-top and in experimental airborne data from the Physics of Stratocumulus Top (POST) research campaign were compared in order to investigate their Reynolds number (Re) dependence. Next, fractal reconstruction of the subgrid velocity is performed, and energy spectra and statistics of velocity increments are compared with DNS data. It is assumed that the stretching parameter d is a random variable with the prescribed PDF. Moreover, the autocorrelation of d in time is examined. It was discovered that d decorrelates with a characteristic timescale of the order of the Kolmogorov time scale and hence can be chosen randomly after each time step in LES. This follows from the fact that the time steps used in LES are typically considerably larger than the Kolmogorov timescale. The implemented fractal model gives good agreement with the DNS and the POST airborne data in terms of their spectra and PDFs of velocity increments. The error in mass conservation is smaller compared to the use of constant values of d. In conclusion, possible applications of the fractal model were addressed. An a priori LES test shows that the fractal model can reconstruct the resolved stresses and residual kinetic energy. Also, based on a preliminary test, the fractal model can improve LES velocity fields used in the Lagrangian tracking of droplets for the simulation of cloud microphysics. Both parts of the thesis are based on the assumptions of Kolmogorov scale self-similarity and local isotropy, which may not be satisfied in real atmospheric conditions. Since the standard methods for TKE dissipation-rate retrieval are derived from these assumptions, the level of discrepancy is investigated by comparing the actual value of ε from DNS with estimates from these methods. Also, in the case of the modelling of small (subgrid) scales, the improved fractal model relies on scale-similarity. The range of scales in which this assumption is sufficiently satisfied (i.e. inertial-range scales) is reconstructed. Statistical tools from the Kolmogorov similarity hypotheses are used to assess the performance of the improved fractal model.
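The thesis builds on Scotti and Meneveau's fractal interpolation, so a minimal sketch may help: two affine maps with vertical stretching factor d are iterated on three coarse-grid points to generate sub-grid detail. The coarse values below are assumptions, and d is held fixed at ±2^(-1/3) (the magnitude Scotti and Meneveau associate with a k^(-5/3) spectrum), whereas the thesis draws d from a measured PDF at each step.

```python
import numpy as np

def fractal_interpolate(x, u, d=(-(2 ** (-1 / 3)), 2 ** (-1 / 3)), iters=6):
    """Fractal interpolation of three points (x[i], u[i]): two affine maps with
    vertical stretching factors d are iterated on the graph of the signal."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    pts = np.column_stack([x, u])
    for _ in range(iters):
        new = []
        for i, (xa, ua, xb, ub) in enumerate(
            ((x[0], u[0], x[1], u[1]), (x[1], u[1], x[2], u[2]))
        ):
            # Map i sends the whole graph onto segment [xa, xb]:
            #   L(t) scales [x0, x2] onto [xa, xb];
            #   F(t, y) = d_i*y + c*t + e, fixed by F(x0, u0) = ua, F(x2, u2) = ub.
            c = (ub - ua - d[i] * (u[2] - u[0])) / (x[2] - x[0])
            e = ua - d[i] * u[0] - c * x[0]
            lx = xa + (pts[:, 0] - x[0]) * (xb - xa) / (x[2] - x[0])
            ly = d[i] * pts[:, 1] + c * pts[:, 0] + e
            new.append(np.column_stack([lx, ly]))
        pts = np.unique(np.vstack(new), axis=0)
    return pts[np.argsort(pts[:, 0])]

# Three coarse LES-like values (assumed) refined to ~2**iters + 1 samples.
pts = fractal_interpolate([0.0, 0.5, 1.0], [0.0, 0.8, 0.2])
print(pts.shape)
```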
APA, Harvard, Vancouver, ISO, and other styles
24

Arendt, Lawrence H. "Stochastic modelling and multifractal characterization of dielectric discharges using Laplacian fractals." 1996. http://hdl.handle.net/1993/19113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Cordeiro, Margarida Machado. "Permeation of Weak Acids and Bases Through Lipid Bilayers – Modelling and Validation of a pH Variation Assay." Master's thesis, 2022. http://hdl.handle.net/10316/99393.

Full text
Abstract:
Master's dissertation in Chemistry presented to the Faculty of Sciences and Technology
Drug discovery and development is an iterative and very complex process. The poor absorption, distribution, clearance, efficacy, and safety of drug candidates are the major pitfalls in the development of new therapies. Lipid membranes represent the main barrier to the free diffusion of solutes and determine the availability of these compounds in the tissues. Predicting the rate at which solutes permeate in vivo barriers is crucial, and there are several in vitro studies valuable for this goal. The pH-variation assay is particularly relevant because it allows following the permeation of weak acids and bases even when they do not exhibit optical properties. However, there are some artefacts, its validity is not widely accepted, and the permeability coefficients are not always consistent with those from other methods. In this work, a kinetic model was developed for the permeation of weak acids and bases through lipid membrane barriers that considers explicitly the two membrane leaflets. The simulations of these processes were able to identify some experimental design principles needed to avoid compromising the accuracy of the method in predicting permeability coefficients. The assay must be employed with large vesicles, and the pH variation must be under 0.25 units. These conclusions were achieved by analysing the effect of the topology of the system, solute lipophilicity, and solute and fluorescent pH probe concentrations on the occupancy numbers per vesicle, and by comparing the dynamics of solute accumulation and fluorescence variation. When analysing the effect of these parameters on the permeability coefficient, it was found that the widely used equation Papp = β × r/3 is inappropriate to assess the permeability coefficient of drug-like weak acids and bases. This results from the failure of several assumptions and approximations considered in the derivation of this equation. This work also examined the effect of several parameters (flip-flop rate constant, solute's pKa, proton, and potassium permeabilities) on the kinetics of solute permeation and the resulting pH variation inside the vesicles. The permeation of weak acids leads to a fast decrease of the pH, which is followed by a slow recovery to the initial pH value, and a symmetric effect is observed for the permeation of weak bases. If only the neutral solute species may permeate the membrane, the solute equilibration is well described by a mono-exponential function. However, if permeation of charged species is included (albeit as a slower process), the accumulation of solute may follow biphasic kinetics. In this case, the solute apparent permeability should be calculated from a weighted characteristic constant (α1 β1 + α2 β2). However, this cannot be performed accurately from the fluorescence dynamics, since there is no direct relationship between the characteristic constants and the pre-exponential terms. When using only the characteristic constant of the fast process, the solute permeability coefficient is overestimated. It was observed that the slow phase in solute accumulation is not influenced only by the permeability of the charged solute species, but also by the permeability of other charged species in solution, such as H+/OH‒ and the ions responsible for the dissipation of electrostatic potentials generated by charge unbalance. Some pH equilibration experiments were performed to estimate the permeability of H+/OH‒ and assess the effect of valinomycin, an ionophore with high specificity for K+.
However, our objectives were not successfully achieved, as the experimental results obtained were quite different from the time courses predicted by our kinetic model. We concluded that the main reason for the discrepancies was the additional pH buffer capacity present inside the vesicles, possibly due to the presence of carbonic acid. The increased buffer capacity leads to a higher amount of H+/OH‒ required to achieve pH equilibration, which in turn leads to the development of a larger charge unbalance between the aqueous media inside and outside the vesicles. The electrostatic potential thus generated hinders the movement of additional H+/OH‒ and prevents pH equalisation. The full equalisation requires the countermovement of additional charges, such as K+ in the presence of valinomycin, which explains the strong effect of valinomycin observed experimentally.
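A worked numerical example of the two quantities discussed above, with assumed values: the conventional Papp = β × r/3 conversion the thesis argues against, and the weighted characteristic constant (α1 β1 + α2 β2) it recommends for biphasic kinetics.

```python
import numpy as np

# Mono-exponential permeation into vesicles: S_in(t) = S_eq * (1 - exp(-beta*t)).
# The apparent permeability is conventionally taken as Papp = beta * r / 3;
# the radius and rate constant below are assumed, illustrative values.
r = 50e-9                 # vesicle radius (m), i.e. 100 nm diameter LUVs
beta = 0.05               # characteristic rate constant (1/s)
print(f"Papp = {beta * r / 3.0:.2e} m/s")

# Biphasic kinetics: weighted characteristic constant alpha1*beta1 + alpha2*beta2.
alpha = np.array([0.7, 0.3])        # pre-exponential weights (assumed)
betas = np.array([0.05, 0.001])     # fast and slow constants (1/s, assumed)
beta_avg = np.sum(alpha * betas)
print(f"weighted constant = {beta_avg:.3e} 1/s; "
      f"using the fast constant alone overestimates by x{betas[0] / beta_avg:.1f}")
```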
FCT
APA, Harvard, Vancouver, ISO, and other styles
