Dissertations on the topic "Comptage de modèles"
Browse the top 50 dissertations for research on the topic "Comptage de modèles".
Alaya, Elmokhtar Ezzahdi. „Segmentation de Processus de Comptage et modèles Dynamiques“. Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066062.
In the first part of this thesis, we deal with the problem of learning the inhomogeneous intensity of a counting process under a sparse segmentation assumption. We introduce a weighted total-variation penalization, using data-driven weights that correctly scale the penalization along the observation interval. In the second part, we study the binarization technique of continuous features, for which we construct a specific regularization. This regularization, called "binarsity", penalizes the different values of a parameter. In the third part, we are interested in the dynamic regression models of Aalen and Cox with time-varying covariates and coefficients in high-dimensional settings. For each proposed estimation procedure, we give theoretical guarantees by proving non-asymptotic oracle inequalities in prediction. We finally present proximal algorithms to solve the underlying convex problems, and we illustrate our methods on simulated and real datasets.
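As a point of reference for the first part, a weighted total-variation penalized estimator of a piecewise-constant intensity can be written schematically as follows (a generic form with illustrative notation, not the exact criterion of the thesis): with \lambda_j the intensity value on the j-th bin, \hat{w}_j the data-driven weights and \ell_n the log-likelihood of the counting process,

\[ \hat{\lambda} \in \operatorname*{arg\,min}_{\lambda \in \mathbb{R}_+^m} \Big\{ -\ell_n(\lambda) + \sum_{j=2}^{m} \hat{w}_j\,|\lambda_j - \lambda_{j-1}| \Big\}, \]

the weighted total-variation term promoting a sparse segmentation of the intensity over the observation interval.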
Karaki, Muath. „Opérateurs de composition sur les espaces modèles“. Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10075/document.
This thesis concerns the study of composition operators on model spaces. Let [Phi] be an analytic function from the unit disk into itself and let [Théta] be an inner function, that is, a holomorphic function bounded by 1 whose radial limits on the unit circle have modulus 1 almost everywhere with respect to Lebesgue measure. With this function [Théta], we associate the model space K[Théta], defined as the set of functions f ∈ H² which are orthogonal to the subspace [Théta]H². Here H² is the Hardy space on the unit disc. These subspaces are important in operator theory because they are used to model a large class of contractions on Hilbert space. The first problem we are interested in concerns the compactness of the composition operator C[Phi] as an operator from H² into H². Recently, Lyubarskii and Malinnikova obtained a nice criterion for the compactness of these operators which is related to the Nevanlinna counting function. This criterion generalizes the classical criterion of Shapiro. In the first part of the thesis, we generalize this result of Lyubarskii-Malinnikova to a more general class of subspaces, known as de Branges-Rovnyak spaces, or some subspaces of them. The techniques used are particular Bernstein-type inequalities for these spaces. The second problem in which we are interested in this thesis concerns the invariance of K[Théta] under C[Phi]. We present a group structure on the unit disc via the automorphisms which fix the point 1. Then, through the induced group action, each point of the unit disc produces an equivalence class which turns out to be a Blaschke sequence. Moreover, the corresponding Blaschke products are minimal solutions of the functional equation [Psi]∘[Phi]=[Lambda][Psi], where [Lambda] is a unimodular constant and [Phi] is an automorphism of the unit disc. These results are applied to the invariance problem of the model spaces under the composition operator.
Diallo, Alpha Oumar. „Inférence statistique dans des modèles de comptage à inflation de zéro. Applications en économie de la santé“. Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0027/document.
Zero-inflated regression models are a very powerful tool for the analysis of count data with excess zeros from various areas such as epidemiology, health economics or ecology. However, the theoretical study of these models has attracted little attention. This manuscript is concerned with the problem of inference in zero-inflated count models. First, we return to the question of the maximum likelihood estimator in the zero-inflated binomial model. We show the existence of the maximum likelihood estimator of the parameters in this model, then prove the consistency of this estimator and establish its asymptotic normality. A comprehensive simulation study on finite sample sizes is conducted to evaluate these results, and an application to real health economics data is carried out. Second, we propose a new model for the statistical analysis of the consumption of medical care. This model allows, among other things, the identification of the causes of the non-use of medical care. We rigorously study the mathematical properties of the model, carry out an exhaustive numerical study using computer simulations, and finally apply it to the analysis of a health-care database covering several thousand patients in the USA. A final aspect of this work focuses on the problem of inference in the zero-inflated binomial model in the presence of missing covariate data. In this case we propose a weighting method, based on the inverse of the selection probabilities, to estimate the parameters of the model. We then establish the consistency and asymptotic normality of the proposed estimator. Finally, a simulation study on several samples of finite sizes is conducted to evaluate the behavior of the estimator.
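For reference, the zero-inflated binomial distribution studied in this abstract mixes a point mass at zero with a binomial component; with mixing weight \pi \in [0,1), success probability p and size n, its probability mass function is

\[ P(Y=0) = \pi + (1-\pi)(1-p)^n, \qquad P(Y=k) = (1-\pi)\binom{n}{k} p^k (1-p)^{n-k}, \quad k = 1,\dots,n, \]

the extra mass at zero accounting for the excess zeros observed in health-economics count data.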
Zou, Tingxiang. „Structures pseudo-finies et dimensions de comptage“. Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1083/document.
This thesis is about the model theory of pseudofinite structures, with a focus on groups and fields. The aim is to deepen our understanding of how pseudofinite counting dimensions can interact with the algebraic properties of the underlying structures and how we could classify certain classes of structures according to their counting dimensions. Our approach is to study examples. We treat three classes of structures. The first one is the class of H-structures, which are generic expansions of existing structures. We give an explicit construction of pseudofinite H-structures as ultraproducts of finite structures. The second one is the class of finite difference fields. We study properties of the coarse pseudofinite dimension in this class, show that it is definable and integer-valued, and build a partial connection between this dimension and the transformal transcendence degree. The third example is the class of pseudofinite primitive permutation groups. We generalise Hrushovski's classical classification theorem for stable permutation groups acting on a strongly minimal set to the case where there exists an abstract notion of dimension, which includes both the classical model-theoretic ranks and pseudofinite counting dimensions. In this thesis, we also generalise Schlichting's theorem for groups to the case of approximate subgroups with a notion of commensurability.
Sim, Tepmony. „Estimation du maximum de vraisemblance dans les modèles de Markov partiellement observés avec des applications aux séries temporelles de comptage“. Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0020/document.
Maximum likelihood estimation is a widespread method for identifying a parametrized model of a time series from a sample of observations. Under the framework of well-specified models, it is of prime interest to obtain consistency of the estimator, that is, its convergence to the true parameter as the sample size of the observations goes to infinity. For many time series models, for instance hidden Markov models (HMMs), such a "strong" consistency property can however be difficult to establish. Alternatively, one can show that the maximum likelihood estimator (MLE) is consistent in a weakened sense, that is, as the sample size goes to infinity, the MLE eventually converges to a set of parameters, all of which are associated with the same probability distribution of the observations as the true one. Consistency in this sense, which remains a preferred property in many time series applications, is referred to as equivalence-class consistency. The task of deriving such a property generally involves two important steps: 1) show that the MLE converges to the maximizing set of the asymptotic normalized log-likelihood; and 2) show that any parameter in this maximizing set yields the same distribution of the observation process as the true parameter. In this thesis, our primary attention is to establish equivalence-class consistency for time series models that belong to the class of partially observed Markov models (PMMs), such as HMMs and observation-driven models (ODMs).
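In symbols (a schematic restatement with illustrative notation): writing \theta_\star for the true parameter and \mathbb{P}_\theta for the law of the observation process under \theta, equivalence-class consistency of the MLE \hat{\theta}_n means

\[ d\big(\hat{\theta}_n, [\theta_\star]\big) \xrightarrow[n \to \infty]{} 0 \quad \text{a.s.}, \qquad \text{where } [\theta_\star] = \{\theta \in \Theta : \mathbb{P}_\theta = \mathbb{P}_{\theta_\star}\}, \]

so the estimator is only required to approach the class of parameters that are observationally equivalent to \theta_\star, not \theta_\star itself.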
Phi, Tien Cuong. „Décomposition de Kalikow pour des processus de comptage à intensité stochastique“. Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4029.
The goal of this thesis is to construct algorithms which are able to simulate the activity of a neural network. The activity of the neural network can be modeled by the spike train of each neuron, represented by a multivariate point process. Most of the known approaches to simulating point processes encounter difficulties when the underlying network is large. In this thesis, we propose new algorithms using a new type of Kalikow decomposition. In particular, we present an algorithm to simulate the behavior of one neuron embedded in an infinite neural network without simulating the whole network. We focus on mathematically proving that our algorithm returns the right point processes and on studying its stopping condition. Then, a constructive proof shows that this new decomposition holds for various point processes. Finally, we propose algorithms that can be parallelized and that enable us to simulate a hundred thousand neurons in a complete interaction graph on a laptop computer. Most notably, the complexity of this algorithm appears to be linear with respect to the number of neurons in the simulation.
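To make the notion of simulating a point process with stochastic intensity concrete, here is a minimal Python sketch of the standard Ogata thinning method for a univariate Hawkes process with exponential self-excitation. This is a classical baseline, not the Kalikow-decomposition algorithm developed in the thesis, and all parameter names are illustrative.

import numpy as np

def simulate_hawkes_thinning(mu, alpha, beta, t_max, rng=None):
    # Univariate Hawkes process with intensity
    #   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
    # simulated by Ogata's thinning (requires alpha/beta < 1 for stability).
    rng = np.random.default_rng() if rng is None else rng
    events, t = [], 0.0
    while t < t_max:
        # Between events the intensity decays, so its current value
        # dominates it on [t, next event): use it as the thinning bound.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:  # accept with probability lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

spikes = simulate_hawkes_thinning(mu=1.0, alpha=0.5, beta=1.0, t_max=100.0)

Thinning proposes candidate points from a dominating rate and accepts each with probability lambda(t)/lam_bar; its cost grows quickly with the size of the network, which is precisely the difficulty the Kalikow-based approach of the thesis is designed to avoid.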
Saint Pierre, Philippe. „Modèles multi-états de type Markovien et application à l'asthme“. Phd thesis, Université Montpellier I, 2005. http://tel.archives-ouvertes.fr/tel-00010146.
Lupeau, Alexandre. „Étude et modélisation du comportement d'un écoulement annulaire dispersé : application à la mesure de débit de gaz humide à l'aide d'un débitmètre à Venturi“. École nationale supérieure de l'aéronautique et de l'espace (Toulouse ; 1972-2007), 2005. http://www.theses.fr/2005ESAE0013.
Der volle Inhalt der QuelleAubert, Julie. „Analyse statistique de données biologiques à haut débit“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS048/document.
The technological progress of the last twenty years has allowed the emergence of a high-throughput biology based on large-scale data obtained in an automated way. Statisticians have an important role to play in the modelling and analysis of these data, which are numerous, noisy, sometimes heterogeneous and collected at various scales. This role can take several forms. The statistician can propose new concepts or new methods inspired by the questions raised by this biology. He can propose a fine modelling of the phenomena observed by means of these technologies. And when methods exist and require only an adaptation, the role of the statistician can be that of an expert who knows the methods, their limits and their advantages. In a first part, I introduce different methods developed with my co-authors for the analysis of high-throughput biological data, based on latent variable models. These models make it possible to explain an observed phenomenon using hidden or latent variables. The simplest latent variable model is the mixture model. The first two methods presented constitute two examples: the first in a context of multiple testing and the second in the framework of the definition of a hybridization threshold for data derived from microarrays. I also present a model of coupled hidden Markov chains for the detection of copy number variations in genomics, taking into account the dependence between individuals, due for example to genetic proximity. For this model we propose an approximate inference based on a variational approximation, exact inference being intractable as the number of individuals increases. We also define a latent block model, describing an underlying structure by blocks of rows and columns, adapted to count data from microbial ecology. Metabarcoding and metagenomic data correspond to the abundance of each microorganism in a microbial community within the environment (plant rhizosphere, human digestive tract, ocean, for example). These data have the particularity of presenting a dispersion stronger than expected under the most conventional models (referred to as over-dispersion). Biclustering is a way to study the interactions between the structure of microbial communities and the biological samples from which they are derived. We propose to model this phenomenon using a Poisson-Gamma distribution and develop another variational approximation for this particular latent block model, as well as a model selection criterion. The model's flexibility and performance are illustrated on three real datasets. A second part is devoted to work dedicated to the analysis of transcriptomic data derived from DNA microarrays and RNA sequencing. The first section is devoted to the normalization of data (detection and correction of technical biases) and presents two new methods that I proposed with my co-authors and a comparison of methods to which I contributed. The second section, devoted to experimental design, presents a method for analyzing so-called dye-switch designs. In the last part, I present two examples of collaboration, derived respectively from an analysis of differentially expressed genes from microarray data and an analysis of the translatome in sea urchins from RNA-sequencing data, showing how statistical skills are mobilized and the added value that statistics brings to genomics projects.
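As a reminder of the building block mentioned above, a finite mixture model writes the density of an observation y as a weighted sum of K component densities,

\[ f(y) = \sum_{k=1}^{K} \pi_k \, f_k(y;\theta_k), \qquad \pi_k \ge 0,\ \sum_{k=1}^{K}\pi_k = 1, \]

where the latent variable is the unobserved component label; the latent block (biclustering) models described in the abstract extend this idea to a simultaneous clustering of the rows and columns of a count matrix.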
Prezotti, Filho Paulo Roberto. „Periodic models and variations applied to health problems“. Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC015.
This manuscript deals with extensions, to integer-valued time series, of the periodic autoregressive parametric model established for real-valued series. The models we consider are based on the operator of Steutel and Van Harn (1979) and generalize the stationary integer-valued autoregressive (INAR) process introduced by Al-Osh & Alzaid (1987) to periodically correlated count series. These generalizations include the introduction of a periodic operator, the handling of a more complex autocorrelation structure of order higher than one, innovations with periodic variances as well as zero inflation with respect to a given discrete law in the exponential family, and the use of explanatory covariates. These extensions greatly enrich the domain of applicability of INAR-type models. On the theoretical level, we establish mathematical properties of our models such as the existence, uniqueness and periodic stationarity of solutions to the equations defining the models. We propose different methods for estimating the model parameters, including a method of moments based on Yule-Walker equations, a conditional least squares method, and a quasi-maximum likelihood method based on the maximization of a Gaussian likelihood. We establish the consistency and asymptotic normality of these estimation procedures. Monte Carlo simulations illustrate their behavior for different finite sample sizes. The models are then fitted to real data and used for prediction purposes. The first extension of the INAR model that we propose consists in introducing two periodic Steutel and Van Harn operators, one modeling the partial autocorrelations of order one on each period and the other capturing the periodic seasonality of the data. Through a vector representation of the process, we establish conditions for the existence and uniqueness of a periodically correlated solution to the equations defining the model. In the case where the innovations follow Poisson distributions, we study the marginal distribution of the process. As an example of real-world application, we fit this model to daily counts of the number of people who received antibiotics for the treatment of respiratory diseases in the Vitória region in Brazil. Because respiratory conditions are strongly correlated with air pollution and weather, the correlation pattern of the daily numbers of people receiving antibiotics shows, among other characteristics, weekly periodicity and seasonality. We then extend this model to data with periodic partial autocorrelations of order higher than one. We study the statistical properties of the model, such as the mean, variance, and marginal and joint distributions. We fit this model to the daily number of people receiving emergency care for asthma at the public hospital of the municipality of Vitória. Finally, our last extension deals with innovations following a zero-inflated Poisson distribution whose parameters vary periodically, and with the addition of covariates explaining the logarithm of the Poisson intensity. We establish some statistical properties of the model and use the conditional maximum likelihood method to estimate its parameters. We then apply this modeling to daily data on the number of people who visited a hospital's emergency department for respiratory problems, using the concentration of a pollutant in the same geographical area as a covariate.
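For illustration, a minimal Python sketch of a periodic INAR(1) recursion built on the binomial-thinning operator of Steutel and Van Harn is given below (a toy parameterization chosen for the example, not the exact specification of the thesis):

import numpy as np

def simulate_periodic_inar1(alpha, lam, n, rng=None):
    # X_t = alpha_{s(t)} o X_{t-1} + eps_t, with s(t) = t mod S,
    # where 'o' is binomial thinning and eps_t ~ Poisson(lam_{s(t)}).
    rng = np.random.default_rng() if rng is None else rng
    alpha, lam = np.asarray(alpha, float), np.asarray(lam, float)
    S = len(alpha)
    x = np.zeros(n, dtype=int)
    x[0] = rng.poisson(lam[0])
    for t in range(1, n):
        s = t % S
        survivors = rng.binomial(x[t - 1], alpha[s])  # alpha o X_{t-1}
        x[t] = survivors + rng.poisson(lam[s])        # periodic innovation
    return x

counts = simulate_periodic_inar1(alpha=[0.3, 0.6, 0.4], lam=[2.0, 1.0, 3.0], n=365)

The thinning operation alpha o X counts how many of the X previous cases "survive" to the next period, which keeps the recursion integer-valued; the periodic index s(t) makes both the survival probability and the innovation mean vary with the period.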
Araldi, Alessandro. „Distribution des commerces et forme urbaine : Modèles orientés-rue pour la Côte d'Azur“. Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR2018.
This doctoral dissertation analyses and discusses the relationship between the spatial distribution of retail and urban form. More precisely, in this work, we focus on the spatial statistical relationships between the localisation of small and average-sized stores and the physical properties of urban form in the metropolitan area of the French Riviera. The underlying hypothesis of this research is that the physical characteristics of the built-up landscape might influence how humans perceive and use urban space, and, ultimately, how stores are distributed and organised within cities. In the last two decades, scholars have increasingly investigated this relationship. Nonetheless, retail and urban form characteristics are often reduced to the simple notions of store density and street-network configuration respectively. Several aspects, such as the retail morpho-functional agglomeration typology, the geometrical characteristics of the streetscape and the contextual influence of the urban fabric, are traditionally excluded from these analyses. These aspects should be even more important when studying highly heterogeneous metropolitan areas like the French Riviera, a combination of differently sized cities and paradigmatic morphological regions: medieval centres, modern and contemporary planned areas, and suburban sprawl. To overcome these limitations, computer-aided, theory-based protocols are carefully selected and developed in this dissertation, allowing for the extraction of quantitative measures of retail and urban form. In particular, starting from traditional theories of retail geography and urban morphology, two location-based, network-constrained procedures are proposed and implemented, providing a fine-grained description of the urban and retail fabrics at the street level. These methodologies are based on innovative combinations of geoprocessing and AI-based protocols (Bayesian networks). The statistical relationship between retail and urban morphological descriptors is investigated through the implementation of several statistical regression models. The decomposition of the study area into morphological subregions, both at the meso- and macro-scale, combined with the implementation of penalised regression procedures, enables the identification of specific combinations of urban morphological characteristics and retail patterns. In the case of the French Riviera, the outcomes of these models confirm the statistical significance of the relationship between street-network configurational properties and retail distribution. Nevertheless, the role of specific streetscape morphometric variables is also shown to be a relevant aspect of urban form when investigating retail distribution. Finally, the morphological context, both at the meso- and macro-scale, is shown to be a key factor in explaining the distribution of retail within a large urban area.
Pardo, Angel. „Comptage d'orbites périodiques dans le modèle de windtree“. Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0173/document.
The Gauss circle problem consists in counting the number of integer points of bounded length in the plane, in other words, the number of closed geodesics of bounded length on a flat two-dimensional torus. Many counting problems in dynamical systems have been inspired by this problem. For 30 years, experts have tried to understand the asymptotic behavior of closed geodesics in translation surfaces. H. Masur proved that this number has quadratic growth rate. Computing the quadratic asymptotics (the Siegel–Veech constant) is a very active research domain these days. The object of study in this thesis is the wind-tree model, a non-compact billiard model. In the classical setting, we place identical rectangular obstacles in the plane at each integer point and play billiards on the complement. We show that the number of periodic trajectories has quadratic asymptotic growth rate and we compute the Siegel–Veech constant for the classical wind-tree model as well as for the Delecroix–Zorich variant. We prove that, for the classical wind-tree model, this constant does not depend on the dimensions of the obstacles (a non-varying phenomenon, analogous to results of Chen–Möller). Finally, when the underlying compact translation surface is a Veech surface, we give a quantitative version of the counting.
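For context, the Gauss circle problem mentioned above asks for the asymptotics of the lattice-point count

\[ N(R) = \#\{(m,n) \in \mathbb{Z}^2 : m^2 + n^2 \le R^2\} = \pi R^2 + O(R), \]

and the counting results for translation surfaces described in the abstract are quadratic asymptotics of the same flavour, with the Siegel–Veech constant governing the coefficient of the quadratic term.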
Gimenez, Olivier. „Estimation et tests d'adéquation pour les modèles de capture-recapture multiétats“. Montpellier 2, 2003. http://www.theses.fr/2003MON20045.
Buczkowska, Sabina. „Quantitative models of establishments location choices : spatial effects and strategic interactions“. Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC0052/document.
This thesis breathes new life into location choice models of establishments. The need for methodological advances in order to model more realistically the complexity of establishment decision-making processes, such as their optimal location choices, is the key motivation of this thesis. First, location choice models use geo-referenced data, for which choice sets have an explicit spatial component. It is thus critical to understand how to represent the spatial aspect in location choice models. The final decision of an establishment seems to be related to the surrounding economic landscape. When accounting for the linkage between neighboring observations, a decision on the spatial weight matrix specification must be made. Yet researchers overwhelmingly apply the Euclidean metric without considering its underlying assumptions and its alternatives. This representation was originally proposed because of scarce data and low computing power, rather than because of its universality. In areas such as the Paris region, where heavy congestion or uncrossable physical barriers clearly arise, distances purely based on topography may not be the most appropriate for the study of intra-urban location. There are insights to be gained by mindfully reconsidering and measuring distance depending on the problem being analyzed. Rather than locking researchers into a restrictive structure of the weight matrix, this thesis proposes a flexible approach to indicate which distance metric is more likely to correctly account for the nearby markets depending on the sector considered. In addition to the standard Euclidean distance, six alternative metrics are tested: travel times by car (for the peak and off-peak periods) and by public transit, and the corresponding network distances. Second, what makes these location choices particularly interesting and challenging to analyze is that the decisions of a particular establishment are interrelated with the choices of other players. These thorny problems posed by the interdependence of decisions generally cannot be assumed away without altering the authenticity of the model of establishment decision making. Conventional approaches to location selection fail by providing only a set of systematic steps for problem-solving without considering strategic interactions between the establishments in the market. One of the goals of the present thesis is to explore how to correctly adapt location choice models to study establishment discrete choices when they are interrelated. Finally, a firm can open a number of units and serve the market from multiple locations. Once again, traditional theory and methods may not be suitable to situations wherein individual establishments, instead of locating independently from each other, form a large organization, such as a chain facing fierce competition from other chains. There is a need to incorporate interactions between units within the same firm and across competing firms. In addition, the need to state a clear difference between the daytime and nighttime population has been emphasized. Demand is represented by pedestrian and car flows, the crowd of potential clients passing through commercial centers, train and subway stations, airports, and highly touristic sites. The Global Transport Survey (EGT 2010), among others, is used to reach this objective. More realistically designed location choice models, accounting for spatial spillovers and strategic interactions and relying on a more appropriate definition of distance and demand, can become a powerful and flexible tool to assist in finding a suitable site. An appropriately chosen location can in turn make a real difference for the newly created business. The contents of this thesis provide useful recommendations for transport analysts, city planners, plan developers, business owners, and shopping center investors.
Galanos, Jean. „Système d'information comptable et modèles de gestion émergents“. Nice, 1991. http://www.theses.fr/1991NICE0003.
This work aims at contributing to the adaptation of accounting information to the evolution of management models as well as to the differentiated needs of the users of such information. The inadequacy between management and accountancy has been formalized in a set of specifications, thanks to the preliminary setting up of methodological and theoretical tools in a systemic framework enlarged with philosophical, linguistic and anthropological contributions. Thus a conceptual framework common to management and accountancy has been devised from the single concept of gap, which incorporates, and becomes more complex through, the notions of movement and flow. A two-level accounting model is derived from these specifications within the established conceptual framework. At the first level, it makes it possible to take into account changes relevant to the firm, without breaks in the sequence and whatever units of measure are required. The second level consists in specific accounting models and is only alluded to in this thesis, being the main opening of this work.
Raybaud-Turrillo, Brigitte. „Le modèle comptable patrimonial : les enjeux d'un droit comptable substantiel“. Nice, 1993. http://www.theses.fr/1993NICE0024.
The French accounting model rests mainly on the patrimonial principle. Under the increasing influence of Anglo-Saxon standard setting and international harmonization, this principle is more and more contested and subjected to restrictions. A fundamental reflection must be carried out on this patrimonial concept. The search for a substantial patrimoniality, beyond a strictly formal vision, makes it possible to overcome many incoherences. From a thorough study of the legal substance of property used as a guarantee, a coherent accounting treatment of agreements such as retention-of-title clauses and financial leases can be proposed, in a field where economics and law are usually opposed. The search for a patrimonial accounting model refers to the definition of an independent accounting law, relying first on a substantial analysis, according to the principle of "substance over form". There are many issues driving this process. The setting of a substantial accounting law appears first as an alternative to the development of Anglo-Saxon conceptual frameworks and as an answer to creative accounting. There are also strategic issues: the definition of an accounting model implies a reflection on the institutional aspects of standard setting and affects the status and development of European accountants in the business advisory market.
Vidard, Arthur. „Vers une prise en compte des erreurs modèles en assimilation de données 4D-variationnelle : application à un modèle réaliste d'océan“. Phd thesis, Université Joseph Fourier (Grenoble), 2001. http://tel.archives-ouvertes.fr/tel-00325349.
Two methods are studied in more detail. First, optimal nudging, which consists in adding to the 4D-Var a Newtonian relaxation of the model state towards the observations, the amplitude of which is estimated by optimal control. Second, the "control of the systematic error" approach considers the model error as a term that does not vary, or varies very little, in time, this term also being estimated by control. These methods are first applied to academic cases of simplified models by assimilating simulated data. The method of controlling the systematic part of the error is then applied to a primitive-equation ocean model in the framework of a realistic experiment, in order to validate the good results obtained for the academic configurations.
Bou, Saleh Pierre. „La Conception comptable : vers un modèle gestionnaire intégré“. Paris, CNAM, 2009. http://www.theses.fr/2009CNAM0646.
Accounting, through the data it produces and the financial statements it provides, plays a significant role in the management of an enterprise and in decision making by users. Does accounting nowadays fulfill the needs of managers and users? Needs change, and accounting evolves accordingly, revealing its relativist nature. Hence, the present thesis examines different aspects of accounting, analyzes its recent evolution and developments, and proposes a new model (regarding integration), or working system, that clears the ground for a better way: an integrated or unified model. The proposed system is an alternative to current evolutions and multidimensional systems while maintaining the double-entry principle.
Smerlak, Matteo. „Divergence des mousses de spins : comptage de puissance et resommation dans le modèle plat“. Phd thesis, Université de la Méditerranée - Aix-Marseille II, 2011. http://tel.archives-ouvertes.fr/tel-00662170.
Smerlak, Matteo. „Divergence des mousses de spins : Comptage de puissances et resommation dans le modèle plat“. Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX22115/document.
In this thesis we study the flat model, the main building block for the spinfoam approach to quantum gravity, with an emphasis on its divergences. Besides a personal introduction to the problem of quantum gravity, the manuscript consists of two parts. In the first one, we establish an exact power-counting formula for the bubble divergences of the flat model, using tools from discrete gauge theory and twisted cohomology. In the second one, we address the issue of the spinfoam continuum limit, both from the lattice field theory and the group field theory perspectives. In particular, we put forward a new proof of the Borel summability of the Boulatov-Freidel-Louapre model, with improved control over the large-spin scaling behaviour. We conclude with an outlook on the renormalization program in spinfoam quantum gravity.
Beverini-Beau, Carole. „Décision comptable et jeux d'acteurs“. Toulon, 2005. http://www.theses.fr/2005TOUL2001.
The objective of our work is to study the chief accounting officer's role in the accounting decision-making process and the extent of his discretionary space, that is, the space where he can choose between several options, but in a context of dependency. In a first part, we highlight the lack of studies related to this decision. We also try to explain this paradox, using delegation and principal-agent theories, whose deficiencies are shown. In a second part, we describe his role in the accounting decision-making process. We define what an accounting decision is, then build a model of the accounting decision-making process. This processual approach leads us to convention theory. We propose a model that has been tested with qualitative methods as well as quantitative tools, namely a survey of chief accounting officers' points of view. Our results show that a discretionary space really exists and that the chief accounting officer is a key actor in financial transparency; it also appears that the convention of representation of the job is changing, especially through the experience of this actor.
Zekri, Inès. „La qualité de service électronique : proposition d'un modèle d'évaluation et application au contexte des cabinets comptables tunisiens“. Versailles-St Quentin en Yvelines, 2009. http://www.theses.fr/2009VERS005S.
Although past works have studied electronic service quality in various contexts, no study has proposed, to date, an integrative model for assessing the electronic service quality offered by accounting professionals. The objective of the present research is to identify the determinants of the electronic service quality of the accounting professional and to propose an evaluation model that explains it. The research model was tested by jointly mobilizing a quantitative approach (a questionnaire survey of 110 customers) and a qualitative approach (30 semi-structured interviews) following a triangulation logic. The results show that electronic service quality is a complex and multidimensional concept. It thus refers to the joint assessment of three multidimensional concepts: electronic system quality, web interface quality and service quality.
Lauvernet, Claire. „Assimilation variationnelle d'observations de télédétection dans les modèles de fonctionnement de la végétation : utilisation du modèle adjoint et prise en compte de contraintes spatiales“. Phd thesis, Université Joseph Fourier (Grenoble), 2005. http://tel.archives-ouvertes.fr/tel-00010443.
Thomas, Frédéric. „Contribution à la prise en compte des plates-formes logicielles d'exécution dans une ingénierie générative dirigée par les modèles“. Phd thesis, Université d'Evry-Val d'Essonne, 2008. http://tel.archives-ouvertes.fr/tel-00382556.
Der volle Inhalt der QuelleMakki, Ahmad. „Étude de modèles en séparation de phase tenant compte d'effets d'anisotropie“. Thesis, Poitiers, 2016. http://www.theses.fr/2016POIT2288/document.
This thesis is situated in the context of the theoretical and numerical analysis of models in phase separation which take anisotropic effects into account. This is relevant, for example, for the development of crystals in their liquid matrix, for which the effects of anisotropy are very strong. We study the existence, uniqueness and regularity of the solution of the Cahn-Hilliard and Allen-Cahn equations and the asymptotic behavior in terms of the existence of a global attractor with finite fractal dimension. The first part of the thesis concerns some models in phase separation which, in particular, describe the formation of dendritic patterns. We start by studying the anisotropic Cahn-Hilliard and Allen-Cahn equations in one space dimension, both associated with Neumann boundary conditions and a regular nonlinearity. In particular, these two models contain an additional term called the Willmore regularization. Furthermore, we study these two models with periodic (respectively, Dirichlet) boundary conditions for the Cahn-Hilliard (respectively, Allen-Cahn) equation, but in higher space dimensions. Finally, we study the dynamics of the viscous Cahn-Hilliard and Allen-Cahn equations with Neumann and Dirichlet boundary conditions respectively and a regular nonlinearity in the presence of the Willmore regularization term, and we also give some numerical simulations which show the effects of the viscosity term on the anisotropic and isotropic Cahn-Hilliard equations. In the last chapter, we study the long-time behavior, in terms of finite-dimensional attractors, of a class of doubly nonlinear Allen-Cahn equations with Dirichlet boundary conditions and singular potentials.
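For reference, the isotropic versions of the two equations discussed above read, with interface-thickness parameter \varepsilon and a double-well nonlinearity f (e.g. f(u) = u^3 - u):

\[ \partial_t u - \varepsilon^2 \Delta u + f(u) = 0 \quad \text{(Allen-Cahn)}, \qquad \partial_t u + \Delta\big(\varepsilon^2 \Delta u - f(u)\big) = 0 \quad \text{(Cahn-Hilliard)}; \]

the anisotropic and Willmore-regularized variants studied in the thesis modify the interfacial energy from which these gradient flows are derived.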
Tomasian, Alain. „Modélisation mathématique de deux plaques perpendiculaires soudées avec prise en compte du cordon de soudure“. Pau, 1993. http://www.theses.fr/1993PAUU3030.
Malachanne, Etienne. „Modèle du remodelage osseux prenant en compte la phase fluide“. Montpellier 2, 2008. http://www.theses.fr/2008MON20222.
Ferrey, Paul. „Modèles aux tensions de Reynolds avec prise en compte de l'intermittence de frontière“. Phd thesis, Université de Poitiers, 2004. http://tel.archives-ouvertes.fr/tel-00010644.
Marque, Sébastien. „Prise en compte de la surdispersion par des modèles à mélange de Poisson“. Phd thesis, Université Victor Segalen - Bordeaux II, 2003. http://tel.archives-ouvertes.fr/tel-00009885.
Der volle Inhalt der QuelleFerrey, Paul. „Modèles aux tensions de Reynolds avec prise en compte de l’intermittence de frontière“. Poitiers, 2004. http://www.theses.fr/2004POIT2331.
The goal of this PhD thesis is to optimise a Reynolds stress model. Some constraints are applied to the length-scale transport equation in order to take into account the effect of a pressure gradient on the structure of the boundary layer. This led us to revisit the analysis of the behaviour of turbulence models in the vicinity of a free-stream edge. The behaviour is different from what has been assumed before, as the expected power-law solution is contained in a more complex solution. This study proved that the constraints for the free-stream edge are incompatible with the other constraints. The intermittent character of the free-stream edge is taken into account in order to decouple the behaviour of the model at the free-stream edge from its behaviour in fully turbulent regions. The model deduced from this analysis fulfils all the constraints and is calibrated on free shear flows and boundary layers, where the constraints significantly improve the quality of the results.
Marque, Sébastien. „Prise en compte de la surdisposition par des modèles à mélange de Poisson“. Bordeaux 2, 2003. https://tel.archives-ouvertes.fr/tel-00009885.
Zaittouni, Fouad. „Modélisation théorique et numérique d'interfaces : Prise en compte du contact et du frottement“. Montpellier 2, 2000. http://www.theses.fr/2000MON20076.
Carbonnell, Frédéric. „Étude de la prise en compte des effets instationnaires dans les simulations numériques (laminaire/turbulent)“. Toulouse, INPT, 1994. http://www.theses.fr/1994INPT110H.
Der volle Inhalt der QuelleReynaud-Bouret, Patricia. „Estimation adaptative de l'intensité de certains processus ponctuels par sélection de modèle“. Phd thesis, Paris 11, 2002. http://tel.archives-ouvertes.fr/tel-00081412.
Der volle Inhalt der Quellede sélection de modèle au cadre particulier de l'estimation d'intensité de
processus ponctuels. Plus précisément, nous voulons montrer que les
estimateurs par projection pénalisés de l'intensité sont adaptatifs soit dans
une famille d'estimateurs par projection, soit pour le risque minimax. Nous
nous sommes restreints à deux cas particuliers : les processus de Poisson
inhomogènes et les processus de comptage à intensité
multiplicative d'Aalen.
Dans les deux cas, nous voulons trouver une inégalité de type
oracle, qui garantit que les estimateurs par projection pénalisés ont un risque
du même ordre de grandeur que le meilleur estimateur par projection pour une
famille de modèles donnés. La clé qui permet de prouver des inégalités de
type oracle est le phénomène de concentration de la mesure ou plus précisément
la connaissance d'inégalités exponentielles, qui permettent de contrôler en
probabilité les déviations de statistiques de type khi-deux au dessus de leur
moyenne. Nous avons prouvé deux types d'inégalités de concentration. La
première n'est valable que pour les processus de Poisson. Elle est comparable
en terme d'ordre de grandeur à l'inégalité de M. Talagrand pour les suprema de
processus empiriques. La deuxième est plus grossière mais elle est valable
pour des processus de comptage beaucoup plus généraux.
Cette dernière inégalité met en oeuvre des techniques de
martingales dont nous nous sommes inspirés pour prouver des inégalités de
concentration pour des U-statistiques dégénérées d'ordre 2 ainsi que pour des
intégrales doubles par rapport à une mesure de Poisson recentrée.
Nous calculons aussi certaines bornes inférieures pour les
risques minimax et montrons que les estimateurs par projection pénalisés
atteignent ces vitesses.
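The oracle-type inequalities mentioned above take, schematically, the following generic form (notation illustrative): for a penalized projection estimator \hat{f}_{\hat{m}} of the intensity f built on a family of models \{S_m, m \in \mathcal{M}\},

\[ \mathbb{E}\,\| \hat{f}_{\hat{m}} - f \|^2 \;\le\; C \,\inf_{m \in \mathcal{M}} \Big( \inf_{g \in S_m} \| g - f \|^2 + \mathrm{pen}(m) \Big), \]

so that, up to the constant C and the penalty term, the estimator performs as well as the best projection estimator in the family.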
Genthon, Pierre. „Modèles pour la subduction égéenne prenant en compte les mesures altimétriques du satellite SEASAT“. Paris 11, 1985. http://www.theses.fr/1985PA112079.
The altimetric surface deduced from SEASAT altimeter data may be equated, for geophysical interpretation, to an equipotential surface of the terrestrial gravity field and is used to constrain tectonic models of the Aegean forearc area. Crustal structure profiles across the Aegean forearc area, based upon geoid undulations and free-air gravity anomalies, are presented. They involve three layers with a geometry adjusted by weighted least squares. The first profile, drawn across the western (convergent) branch of the arc, exhibits a large amount of subducted crustal material and a local trough in the Moho under the so-called Mediterranean Ridge, superimposed on a long-wavelength flexure increasing toward the trench. The Moho topography suggests a lithosphere with no rigidity below the Mediterranean Ridge, while the flexure associated with the subduction implies an elastic thickness of 80 km. This discrepancy leads us to suppose the existence of another load acting downward below the Mediterranean Ridge. On the second profile, drawn across the eastern (strike-slip) branch, a 7 km deepening of the Moho is associated with much less subducted crustal material. This Moho deepening represents a huge isostatic imbalance and requires an active compensation mechanism. A model involving growth at the base of the old and dense African lithosphere is proposed and checked using simple analytical calculations.
Icart, Isabelle. „Modèles d'illumination locaux pour les couches et multicouches prenant en compte les phénomènes interférentiels“. Marne-la-Vallée, 2000. http://www.theses.fr/2000MARN0082.
Obounou, Marcel. „Modélisation de la combustion turbulente non prémélangée avec prise en compte d'une cinétique chimique complexe“. Rouen, 1994. http://www.theses.fr/1994ROUE5011.
Malki, Jamal. „Modélisation des relations spatiales : prise en compte des aspects topologiques et directionnels“. La Rochelle, 2001. http://www.theses.fr/2001LAROS069.
Briquel, Irénée. „Complexité de problèmes de comptage, d'évaluation et de recherche de racines de polynômes“. Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00677977.
Rigault, Cyril. „Etude mathématique de modèles cinétiques pour la gravitation, tenant compte d'effets relativistes : stabilité, solutions autosimilaires“. Phd thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00787487.
Der volle Inhalt der QuelleRigault, Cyril. „Étude mathématique de modèles cinétiques pour la gravitation, tenant compte d'effets relativistes : stabilité, solutions autosimilaires“. Rennes 1, 2012. https://tel.archives-ouvertes.fr/tel-00787487.
This document is concerned with the behavior of solutions near ground states for gravitational kinetic systems of Vlasov type. In the first chapter we build, by variational methods, some stationary states of the Vlasov-Manev system and we prove their orbital stability. The second chapter gives the existence of self-similar blow-up solutions to the "pure Vlasov-Manev" system near ground states. In the third chapter we obtain the orbital stability of a large class of ground states. New methods based on the rigidity of the flow are developed in these three chapters. In particular, they provide the uniqueness of ground states while avoiding the study of non-local Euler-Lagrange equations, they solve a variational problem with non-finite constraints, and they give the orbital stability of ground states which are not necessarily obtained from variational methods. In the fourth chapter, we finish our analysis with a numerical study of the radially symmetric Vlasov-Poisson system: we give numerical finite-difference schemes which conserve the mass and the Hamiltonian of the system.
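For orientation, the gravitational Vlasov-Poisson system referred to in the last chapter reads, in normalized units, for a phase-space density f(t,x,v) ≥ 0:

\[ \partial_t f + v \cdot \nabla_x f - \nabla_x \phi_f \cdot \nabla_v f = 0, \qquad \Delta_x \phi_f = \rho_f(t,x) = \int_{\mathbb{R}^3} f(t,x,v)\, dv, \]

where \phi_f is the self-consistent attractive potential; the Vlasov-Manev system studied in the first chapters modifies this potential by adding a more singular interaction term.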
Rondeau, Laurent. „Identification des systèmes par modèles flous linguistiques : prise en compte des aspects numériques et symboliques“. Nancy 1, 1997. http://www.theses.fr/1997NAN10278.
Building models from numeric and symbolic information relating to system behavior is the subject of this thesis. In the first chapter, the approach used in the domain of identification, which is mainly based upon numeric information, is compared with the expert-system approach, which is based on symbolic information. Then, we propose a strategy which takes into account both types of information by using a linguistic fuzzy model. Parametric estimation of these kinds of models from numeric information leads to two possible methods, classical or fuzzy. The second, which is chosen for our development, has the advantage of highlighting two particular criteria for the choice of the model and of the parametric estimation method. The second chapter presents the analysis of linguistic fuzzy models and parametric estimation methods with respect to the criteria defined in chapter 1. We demonstrate that only one model satisfies all criteria, the single-input single-output gradual rules model. We also emphasize that no parametric estimation method satisfies the specified conditions. In the third chapter, the gradual rules model is extended to the multi-input single-output case. A symbolic form of this model is then proposed in order to define a parametric estimation strategy which fulfills our criteria and is based on the resolution of fuzzy relational equations. A methodology of identification which takes into account numeric and symbolic information is proposed. It is applied to the modelling of a static non-linear system which showcases the main characteristics of the method.
Dao, Thi Hong Phu. „Modèle technique de contrôle externe de la conformité aux normes IFRS (IFRS enforcement)“. Angers, 2006. http://www.theses.fr/2006ANGE0011.
The mandatory adoption of IFRS in Europe is aimed at improving the quality and comparability of the financial information of listed companies. Nevertheless, such a goal cannot be achieved solely by requiring EU companies to use IFRS; it is also necessary to ensure compliance with those standards. While there is recent research that addresses the issues of enforcement of accounting standards in general and of IFRS in particular, there is still a lack of studies which discuss the technical aspects of monitoring compliance with accounting standards. The objective of this research is to elaborate a technical model of IFRS enforcement to be used by regulatory oversight bodies. The methodology adopted is qualitative, based on an exploratory study and empirical observation of the financial information oversight system of the AMF in France, complemented by interviews with controllers at the AMF and by surveys of financial analysts and auditors of listed companies. Our model has been developed following a risk-based approach which consists of assessing the risk of non-compliance with IFRS through the analysis of three risk components (inherent risk, control risk and audit risk), in order to focus the review efforts on those important areas which are more likely to contain a risk of non-compliance. The model was then tested on practical cases (issuers' financial statements) by the controllers of the AMF. The test results indicate that, although some factors proved difficult to assess at the regulatory oversight level, the model constitutes a relevant methodological tool for risk detection which helps to identify important areas of risk of non-compliance with IFRS. In addition, the use of the model can help to make the controllers sensitive to risk analysis.
Payan, Jean-Luc. „Prise en compte de barrages-réservoirs dans un modèle global pluie-débit“. Phd thesis, ENGREF (AgroParisTech), 2007. http://pastel.archives-ouvertes.fr/pastel-00003555.
Moyne, Christian. „Transferts couples chaleur-masse lors du séchage : prise en compte du mouvement de la phase gazeuse“. Vandoeuvre-les-Nancy, INPL, 1987. http://www.theses.fr/1987NAN10348.
Platet Pierrot, Françoise. „L'information financière à la lumière d'un changement de cadre conceptuel comptable : Etude du message du Président des sociétés cotées françaises“. Phd thesis, Université Montpellier I, 2009. http://tel.archives-ouvertes.fr/tel-00480501.
Gao, Yingzhong. „Modèles probabilistes et possibilistes pour la prise en compte de l'incertain dans la sécurité des structures“. Phd thesis, Ecole Nationale des Ponts et Chaussées, 1996. http://pastel.archives-ouvertes.fr/pastel-00569129.
Der volle Inhalt der QuellePrice, Nathaniel Bouton. „Conception sous incertitudes de modèles avec prise en compte des tests futurs et des re-conceptions“. Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEM012/document.
At the initial design stage, engineers often rely on low-fidelity models that have high uncertainty. In a deterministic safety-margin-based design approach, uncertainty is implicitly compensated for by using fixed conservative values in place of aleatory variables and by ensuring the design satisfies a safety margin with respect to design constraints. After an initial design is selected, high-fidelity modeling is performed to reduce epistemic uncertainty and ensure the design achieves the targeted levels of safety. High-fidelity modeling is used to calibrate low-fidelity models and prescribe redesign when tests are not passed. After calibration, reduced epistemic model uncertainty can be leveraged through redesign to restore safety or improve design performance; however, redesign may be associated with substantial costs or delays. In this work, the possible effects of a future test and redesign are considered while the initial design is optimized using only a low-fidelity model. The context of the work and a literature review make up Chapters 1 and 2 of this manuscript. Chapter 3 analyzes the dilemma of whether to start with a more conservative initial design and possibly redesign for performance, or to start with a less conservative initial design and risk redesigning to restore safety. Chapter 4 develops a generalized method for simulating a future test and possible redesign that accounts for spatial correlations in the epistemic model error. Chapter 5 discusses the application of the method to the design of a sounding rocket under mixed epistemic model uncertainty and aleatory parameter uncertainty. Chapter 6 concludes the work.
Laurent, Gautier. „Prise en compte de l'histoire géologique des structures dans la création de modèles numériques 3D compatibles“. Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0057/document.
The main approaches to the modelling of geological structures are essentially geometrical, static and deterministic. In other terms, their geometry and connections are determined by applying criteria based on compatibility with the available data in their current state. The evolution of the geological structures is only integrated indirectly by the modeller, and the kinematical and mechanical compatibility of the produced models remains difficult to assess. This thesis explores different methods which aim at better including the evolution of geological structures in the modelling process. Three complementary approaches are developed. First, a kinematical fault operator based on a 3D curvilinear fault frame is presented. It aims at progressively deforming the structures surrounding faults. The second approach is based on a pseudo-mechanical deformation tool inspired from computer graphics, based on rigid elements. It is used to interactively edit the structures and approximately simulate their deformation history. The last proposal is to compute paleo-geographical coordinates from the restoration of geological structures. In this way, the heterogeneities are characterised based on paleo-geographic distances which are compatible with the structural, kinematical and mechanical hypotheses specified when building the geological model. These different contributions open numerous perspectives to better take into account the evolution of geological structures when modelling the subsurface and its heterogeneities. They help to increase the compatibility of geomodels and to simplify the parameterization of geological deformation, so as to facilitate the characterisation of geological structures by inverse approaches.
Mokhtari, Mohamed. „Amélioration de la prise en compte des aérosols terrigènes dans les modèles atmosphériques à moyenne échelle“. Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1918/.
The goal of this PhD work is to improve the numerical modeling of the processes related to the onset, transport and deposition of ground-originating aerosols, namely desert sand dust. The first part of this work is to integrate a global physical parameterization of dust emissions more compatible with the ECOCLIMAP and FAO databases used in the surface model SURFEX, taking into account the surface soil size distribution and the soil texture, in order to improve the representation of surface fluxes in SURFEX. The second part of this study is devoted to modeling the transport and deposition (wet and dry) in the atmospheric model ALADIN. The aim is ultimately to obtain more reliable predictions of dust concentrations, their optical properties and their feedback on the forecast weather. The evaluation of the coupled ALADIN-SURFEX system on the situation of 6-13 March 2006 demonstrates the ability of this system to simulate dust events both in intensity and in extent. The changes proposed in the dust emission model produce a substantial increase of dust emission in areas that are well known and well documented for these phenomena, like the Bodélé region. They also greatly improve the simulated optical thicknesses observed in the areas of Ilorin and Mbour. This coupled system was then used to establish a simulated climatology of the emission and optical properties of dust aerosols for North Africa. The ALADIN simulations show that this region is a major source of emissions on a global scale, with an average of 878 Mt per year of dust aerosols. The Bodélé region is found to be the area with the highest average emission, with about 2 kg per m² per year. Finally, the results obtained when studying the impact of dust on the behaviour of the numerical weather prediction model are found satisfactory, given that this is the first time desert dust has been introduced into ALADIN. However, these results, when analyzed in more detail, show some defects related to the interaction between radiation and dust. These defects suggest that the dust/radiation interaction requires more analysis, along with experimental tests. Such analysis is part of the perspectives of this study.