Academic literature on the topic 'Logiciel QUASES'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Logiciel QUASES.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Logiciel QUASES":

1

Zyrianov, Aleksandr I., and Inna S. Zyrianova. "Planning of the Interregional Tourist Route in the Urals." Quaestiones Geographicae 40, no. 2 (June 1, 2021): 109–18. http://dx.doi.org/10.2478/quageo-2021-0018.

Abstract:
The article aims to attract the attention of geographers to the development of technology for the design of tourist routes. The world and Russian experience of designing long interregional tourist routes is considered, and the authors' approaches to route design are illustrated by the example of the Urals. Interregional tourist routes in Russia are actively developed with the support of the government; they are initiated by interacting regions and especially by federal districts. Interregional routes are tours covering several adjacent regions, or regions that are close not territorially but thematically in their tourist aspects. Such routes vary widely, but they mainly have cultural and educational goals, with an excursion, transport, and sometimes cruise format. Among all the interregional routes, the 'Golden Ring' and the 'Volga-Kama' river cruises are exemplary, and among these the 'Moskovskaya krugosvetka' stands out due to the unique ring shape of its route. The geographic features of the Urals are at the heart of the logical decisions behind the preparation of the interregional tourist project 'The Great Ural Route'. The Urals offer attractions, image, logistics, and other opportunities for organising a large tourist route. A route should be developed for residents of the country and foreign guests that introduces the most striking and characteristic sites of the macroregion as a whole; a ring-shaped route is preferable. It is advisable to route it through most of the Ural regions and to include the main cities, landscapes of different natural zones, the most significant excursion sites, and distinctive territories. Geographic route-design technologies make it possible to keep such routes relevant for a long time.
2

Garcia, André Luiz Ming. "A natureza lógica e semiótica dos signos de primeiridade / The logical and semiotic nature of the 'primeiridade' signs." Acta Semiótica et Lingvistica 22, no. 1 (October 24, 2017). http://dx.doi.org/10.22478/ufpb.2446-7006.2017v22n1.36049.

Abstract:
On several occasions, Charles Sanders Peirce stated that the only truly genuine types of signs are those of thirdness (symbol, legisign, and argument), since they represent something distinct from themselves and since all the elements that compose them (representamen, object, and interpretant) can themselves be of a sign nature. In his correspondence with Lady Welby, however, Peirce claimed to have lowered the degree of abstraction of his reflections in order to make them more palatable and comprehensible. In view of Peirce's assertion that everything, including man, is a sign, it is argued in this article that signs of firstness (icons, qualisigns, and rhemes) are not merely quasi-signs but signs with complex ontological and logical properties, of great value for the analysis of art, especially self-absorbed, self-referential, and abstract art that speaks about itself and its qualities. Works of visual art and the illustrated book are used as examples to illustrate the argumentation. Keywords: semiotics; signs; firstness; Charles Sanders Peirce.
3

Campos, Alexandre. "Algumas considerações sobre os movimentos dos corpos na antiguidade e na Idade Média: a teoria do ímpeto e a inércia." Ensino & Multidisciplinaridade, April 9, 2022, e0322. http://dx.doi.org/10.18764/2447-5777v8n1.2022.3.

Abstract:
The article presents some considerations from the discussions about the nature of motion between the fourteenth and seventeenth centuries, adopting the perspectives present in the arguments of thinkers such as Aristotle, Buridan, Oresme, and Galileo. The impetus theory of Buridan and Oresme paved the way for Galileo's arguments insofar as it made it possible to explain the persistence of motion after loss of contact with whatever gave rise to it. In other words, the theory of impetus explained the continuity of motion through an intrinsic property of the object (the quantity of matter and the impetuosity of the moving agent at the moment of launch), as opposed to requiring a mover in permanent contact with the moved body (Aristotelian antiperistasis). This perspective allowed terrestrial and celestial motions to be unified in a single theory, and Galileo seems to draw on some aspects of the logical construction of these arguments. The work does not claim to present and discuss historically how each of these aspects came about, their local details, or their social contexts; rather, it is a quasi-description of their central ideas. Keywords: impetus theory; inertia; mechanics; Galileo.

Dissertations / Theses on the topic "Logiciel QUASES":

1

Bure, Taylor Rose. "Inelastic background analysis from lab-based HAXPES spectra for critical interfaces in nano-electronics." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2023. http://www.theses.fr/2023UCFA0125.

Abstract:
This work uses lab-scale hard X-ray photoelectron spectroscopy (HAXPES) from the perspective of inelastic background analysis (IBA) for applications in the metrology field, in order to provide thickness measurements of technologically relevant materials in memory and power devices. We seek to meet the need for a method adapted to inline processes and routine analysis. The samples presented in this work were fabricated by pre-industrial processes and are representative of real device technology, with concerns such as complex interdiffusion properties and deeply buried active layers and interfaces. In this work, we evaluate the HAXPES-IBA technique executed with the QUASES software by studying the free parameters, the operator contributions, and the uncertainty of the resulting depth distribution. We present a self-contained analysis by accessing high-energy photoelectron spectra of elements from each sample layer, recorded with a novel lab-scale HAXPES instrument (PHI Quantes) fitted with a Cr Kα photon source (hv = 5414.72 eV). First, highly controlled reference samples of known thicknesses (Al2O3 and HfO2 thin films) were studied to confirm the accuracy of the IBA method through validation against highly quantitative reference techniques. HAXPES-IBA thickness determinations of bilayer samples with an overlayer up to 25 nm thick and a buried layer of approximately 2.5 nm were found to be in excellent agreement with results from X-ray reflectivity (XRR), with the fitting uncertainty of the IBA solution in the sub-nanometre range. The need to select the appropriate HAXPES excitation energy depending on total film thickness was demonstrated through complementary HAXPES measurements recorded with Ga Kα radiation (hv = 9251.74 eV). Finally, we apply the method to realistic technological samples. In the first study, we present thickness results for a series of Al2O3 films deposited on GaN by atomic layer deposition (ALD), representative of a recessed-gate MOS-channel high electron mobility transistor (HEMT). Quantitative secondary ion mass spectrometry (SIMS) measurements complement the IBA technique by confirming the need for a reference spectrum. In the second study, the HAXPES-IBA method is combined with ion sputtering to confirm the Ti/TiN overlayer thickness in a Ti/HfO2-based structure used for oxide resistive random access memory (OxRRAM) technology. We conclude with a critical summary of the advances needed for an accurate and reliable HAXPES-IBA method fully integrated into an inline process control environment.
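The inelastic background analysis this thesis performs with QUASES is based on Tougaard's method: the measured spectrum is corrected by a background built from a universal inelastic-scattering cross section. Below is a minimal, self-contained sketch of that correction on a synthetic spectrum, assuming the standard two-parameter universal cross section (B ≈ 2866 eV², C = 1643 eV²); it illustrates the principle only and is not the thesis's or QUASES' actual implementation.

```python
import numpy as np

# Tougaard universal inelastic-scattering cross section:
# K(T) = B * T / (C + T**2)**2, with T the energy loss in eV.
B = 2866.0   # eV^2, standard "universal" value
C = 1643.0   # eV^2

def tougaard_background(energies, intensities):
    """Inelastic background at each kinetic energy E:
    F(E) = integral over E' > E of K(E' - E) * J(E') dE'."""
    energies = np.asarray(energies, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    background = np.zeros_like(intensities)
    dE = energies[1] - energies[0]  # assume a uniform energy grid
    for i, E in enumerate(energies):
        T = energies[i + 1:] - E                  # energy losses
        K = B * T / (C + T**2)**2                 # universal cross section
        background[i] = np.sum(K * intensities[i + 1:]) * dE
    return background

# Toy spectrum: a Gaussian peak on a flat baseline, kinetic energy in eV.
E = np.linspace(900.0, 1000.0, 501)
spectrum = 1e3 * np.exp(-0.5 * ((E - 980.0) / 1.5)**2) + 50.0
bg = tougaard_background(E, spectrum)
corrected = spectrum - bg
print(f"max background: {bg.max():.1f} counts")
```

In the real method, the shape of this background as a function of depth is what carries the thickness information that QUASES fits.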
2

Binet, Sébastien. "Environnement logiciel et étalonnage de l'échelle en énergie des jets dans l'expérience ATLAS." Clermont-Ferrand 2, 2006. https://tel.archives-ouvertes.fr/tel-00140524.

Abstract:
This document presents work to equip ATHENA, the software framework of the ATLAS collaboration, with a library of tools for physics analysis and for extracting a jet energy calibration function from physics events (in-situ calibration). The software part describes the components of the software architecture required by the flow of simulated and reconstructed data, and the stages of this flow before and during data taking. The construction of a library of tools easing the reconstruction of physics objects, their association with Monte Carlo entities, and the programming interfaces of these objects is then detailed, with emphasis on the importance of a language and tools common to the whole collaboration, so as to share the effort of validating these tools and thus obtain reproducible physics results. The analysis part covers the implementation in the ATHENA framework of an algorithm for calibrating the energy of light jets using the decay of W bosons into a pair of jets. From the application of this algorithm to data produced with both fast and full simulation, it appears feasible to determine the light-jet energy scale to the percent level. Finally, a feasibility study of extracting the b-jet energy scale using a specific physics process is presented. It is shown that applying sequential cuts does not allow the signal to be extracted from the background; a multivariate approach, however, could improve the selection, making it possible to collect enough Z° pairs to perform the b-jet energy calibration.
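The in-situ calibration described above constrains the light-jet energy scale with W → jj decays. As a rough illustration of the principle (not the thesis's algorithm), the sketch below derives a global jet-energy-scale correction from the position of the reconstructed dijet mass peak relative to the known W mass; all numbers are made up for the toy.

```python
import numpy as np

M_W = 80.4  # GeV, the known W mass used as the in-situ constraint

def dijet_mass(e1, e2, cos_theta):
    """Invariant mass of a pair of (approximately massless) jets."""
    return np.sqrt(2.0 * e1 * e2 * (1.0 - cos_theta))

# Build a toy W -> jj sample whose true dijet mass is exactly M_W,
# then distort the jet energies with a 5% scale bias and 8% smearing.
rng = np.random.default_rng(7)
n = 20_000
e1 = rng.uniform(40.0, 150.0, n)
cos_theta = rng.uniform(-0.8, 0.8, n)
e2 = M_W**2 / (2.0 * e1 * (1.0 - cos_theta))   # enforce m_jj = M_W
scale_bias, resolution = 0.95, 0.08
smear = lambda e: e * scale_bias * (1.0 + resolution * rng.standard_normal(n))
m_rec = dijet_mass(smear(e1), smear(e2), cos_theta)

# m_jj scales linearly with a common jet-energy-scale factor, so the
# correction is simply the ratio of M_W to the observed peak position.
correction = M_W / np.median(m_rec)
print(f"estimated JES correction: {correction:.3f} (true: {1/scale_bias:.3f})")
```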
3

Ledda, Antonio. "Logical and algebraic structures from Quantum Computation." Doctoral thesis, Università degli Studi di Cagliari, 2008. http://hdl.handle.net/11584/265966.

Abstract:
The main motivation for this thesis is given by the open problems regarding the axiomatisation of quantum computational logics. The thesis is structured as follows: in Chapter 2 we review some basics of universal algebra and functional analysis. In Chapters 3 through 6 the fundamentals of quantum gate theory are presented. In Chapter 7 we introduce quasi-MV algebras, a formal study of a suitable selection of algebraic operations associated with quantum gates. In Chapter 8 quasi-MV algebras are expanded by a unary operation, hereby dubbed the square root of the inverse, formalising a quantum gate which allows one to induce entangled states. In Chapter 9 we investigate some categorial dualities for the classes of algebras introduced in Chapters 7 and 8. In Chapter 10 the discriminator variety of linear Heyting quantum computational structures, an algebraic counterpart of the strong quantum computational logic, is considered. In Chapter 11 we list some open problems and, at the same time, draw some tentative conclusions. Lastly, in Chapter 12 we provide a few examples of the previously investigated structures.
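For context, the standard MV-algebra axioms are given below; this is textbook material, not taken from the thesis. As I understand the quasi-MV literature, the quasi-MV algebras of Chapter 7 satisfy analogous equations while relaxing the unit law x ⊕ 0 = x, which may fail there.

```latex
% Chang's axioms for an MV-algebra <A, \oplus, \neg, 0>, with 1 := \neg 0.
\begin{align*}
x \oplus (y \oplus z) &= (x \oplus y) \oplus z\\
x \oplus y &= y \oplus x\\
x \oplus 0 &= x\\
x \oplus 1 &= 1\\
\neg\neg x &= x\\
\neg(\neg x \oplus y) \oplus y &= \neg(\neg y \oplus x) \oplus x
\end{align*}
```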
4

Bousson, Nicolas. "Recherche de nouveaux quarks lourds avec l'expérience ATLAS au LHC. Mise en oeuvre d'algorithmes d'identification de jets issus de quarks b." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4101/document.

Abstract:
The hypothesis of a fourth generation of fermions, the matter particles described in the Standard Model (SM) of particle physics, is one of the simplest models of new physics still not excluded and accessible at the start of the LHC, the world's most powerful hadron collider since 2009. We search for the pair production of up-type t′ quarks, each decaying to a W boson and a b-quark. The search is optimized for the high-mass regime, in which the production can be distinguished from the top-quark background by exploiting kinematic features of the decay products arising from the proton-proton collisions occurring at the centre of the ATLAS detector. We present a novel search strategy that explicitly reconstructs very high-pT W bosons from their collimated decay products. The analysis benefits from the commissioning of algorithms intended to identify jets stemming from the fragmentation of b-quarks; these algorithms are based on the precise reconstruction of the trajectories of charged particles, primary interaction vertices, and secondary vertices in jets. The b-tagging capability allows ATLAS to improve the (re)discovery of the SM and the sensitivity to new physics. It will hence play an important role in the future years of LHC operation, which is why we study its expected performance with an upgrade of the ATLAS pixel detector, called IBL, currently under construction. Our search for the t′ quark, using 4.7 fb⁻¹ of the 7 TeV data collected in 2011, has resulted in the world's most stringent limit from direct searches, excluding t′ masses below 656 GeV, with an interpretation in the framework of vector-like quarks as well.
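The collimation exploited by this search strategy follows from a standard collider rule of thumb, not specific to the thesis: the opening angle of a two-body decay shrinks roughly as ΔR ≈ 2m/pT. A quick illustration:

```python
# Rule of thumb for the decay of a boosted heavy particle: the angular
# separation of its two decay products is roughly Delta R ~ 2*m/pT, so
# the two quarks from a very high-pT W boson merge into a single fat
# jet and call for the dedicated reconstruction described above.
M_W = 80.4  # GeV

for pt in (100.0, 200.0, 400.0, 800.0):  # W transverse momentum, GeV
    print(f"pT = {pt:5.0f} GeV  ->  Delta R ~ {2.0 * M_W / pt:.2f}")
```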
5

Desnos, Nicolas. "Ports composites pour l'assemblage automatique de composants logiciels : application à la construction dynamique et à l'évolution non anticipée." Montpellier 2, 2008. http://www.theses.fr/2008MON20124.

Abstract:
Many software systems are built by reusing existing components, and the global behaviour of the software results from the interaction of the individual behaviours of each component. The goal of this work is to build applications by assembling existing components taken from a component repository. The quality criterion considered is the satisfaction, by the resulting assembly, of functional objectives identified during requirements analysis. While most existing approaches provide languages for describing and verifying the syntactic and behavioural correctness of an assembly, few add verification that functional objectives are satisfied. We define validity as a level of verification combining the correctness of the assembly and the satisfaction of the functional objectives. The main contribution of this thesis is an automatic process for building component assemblies that satisfy functional objectives. We propose a component meta-model in which potential collaborations are documented by composite ports. This information makes it possible to define an autonomic strategy for building potentially valid assemblies through a search over all possible assemblies, whose complexity is kept under control by heuristic optimizations. The same mechanism is also used to rebuild the missing part of an assembly during its dynamic evolution. Our proposal is more flexible than comparable work because it allows n-to-1 substitutions to compensate for the unavailability of a component offering exactly the expected functionality. A prototype implementation, built as an extension of the Fractal component model, supports various experiments on simulated component repositories and shows the efficiency of our algorithms.
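To make the idea of objective-driven assembly search concrete, here is a deliberately tiny sketch: components declare provided and required interfaces, and a brute-force search returns the smallest closed assembly that provides the stated functional objectives. All names are illustrative, not from the thesis; the thesis's contribution is precisely to replace this exhaustive search with heuristics guided by composite ports.

```python
from itertools import combinations

# Hypothetical components: each provides and requires sets of interface
# names. An assembly is valid when every requirement of every selected
# component is provided within the assembly and the objectives are met.
COMPONENTS = {
    "gui":    {"provides": {"display"}, "requires": {"data"}},
    "cache":  {"provides": {"data"},    "requires": {"store"}},
    "db":     {"provides": {"store"},   "requires": set()},
    "filedb": {"provides": {"store"},   "requires": set()},
}

def valid(assembly, objectives):
    provided = set().union(*(COMPONENTS[c]["provides"] for c in assembly))
    required = set().union(*(COMPONENTS[c]["requires"] for c in assembly))
    return objectives <= provided and required <= provided

def smallest_assembly(objectives):
    """Exhaustive search, smallest assemblies first."""
    names = list(COMPONENTS)
    for size in range(1, len(names) + 1):
        for assembly in combinations(names, size):
            if valid(assembly, objectives):
                return assembly
    return None

print(smallest_assembly({"display"}))  # -> ('gui', 'cache', 'db')
```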
6

Lefort, Vincent. "Un modèle lattice pour simuler la propagation de fissures sous l’effet d’une injection de fluide dans un milieu hétérogène quasi-fragile." Thesis, Pau, 2016. http://www.theses.fr/2016PAUU3011/document.

Abstract:
This research study aims at developing a lattice-type numerical model allowing the simulation of crack propagation under fluid injection in a quasi-brittle heterogeneous medium. The numerical tool is used to gain a better understanding of the initiation and propagation conditions of cracks in rock materials presenting natural joints, where the coupling between mechanical damage and fluid transfer properties is at stake. Although the final goal of the study concerns natural rocks, the model has been validated through comparisons with experimental results obtained on cementitious materials mimicking natural rocks in terms of mechanical and transport behaviour but presenting better-controlled heterogeneities. The first part of the manuscript presents a general state of the art. The second part is dedicated to the study of crack propagation in quasi-brittle materials, in which a significant fracture process zone evolves upon failure. Only the solid phase is studied here, and a statistical tool based on Ripley's functions is adapted in order to extract a characteristic length representative of the correlations appearing between a set of points undergoing mechanical damage. This tool is then used in the context of numerical and experimental three-point bending fracture tests on concrete beams. The results show that the lattice-type numerical model is able to capture both the global fracture process, in terms of force versus crack mouth opening displacement, and the local fracture process, in terms of dissipated energy and of the evolution of the correlation length between damage points. Moreover, this statistical tool shows how the loading mode may influence the development of damage within a structure. The third part presents a new elasto-plastic damage constitutive law for joint modelling. The originality of the model lies in the coupling between mechanical damage under normal strain and plasticity under tangential strain. This new constitutive law is able to reproduce indirect shear tests performed on mortar specimens presenting a plaster joint, where a classical Mohr-Coulomb criterion fails. The fourth part is dedicated to the representation of the full hydro-mechanical coupling within the lattice-type numerical model. The coupling is introduced through a poromechanical framework based on the intrinsic and dual hydro-mechanical description of the lattice model, which rests on a "hydraulic" Voronoï tessellation and a "mechanical" Delaunay triangulation. The total stress links the mechanical stress and the pore pressure through the Biot coefficient of the medium, whereas the local permeability, which drives the hydraulic pressure gradient, depends on the local crack openings. The numerical results are compared with analytical solutions from the literature for "bi-wing" cracks, and it is shown that both approaches give similar results for a perfectly straight crack. Once the lattice model has been validated in the preceding parts of the manuscript, its fifth and last part is dedicated to the numerical simulation of the fully coupled hydro-mechanical problem of a free crack propagating under fluid injection and of its interaction with a natural joint in a heterogeneous rock medium. The crack paths, which are not pre-meshed a priori, and the pressure profiles within the porous matrix are obtained and compared for different joint inclinations.

Finally, our statistical tool, primarily developed for the analysis of the failure behaviour of the solid phase, is used to characterise the evolution of correlation lengths between points undergoing damage during crack propagation and interaction with a natural joint. It is shown that the hydro-mechanical lattice model is able to represent different mechanisms of crack arrest and re-initiation from a joint depending on its inclination.
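The Ripley-function analysis used above for extracting a correlation length is standard spatial statistics. The sketch below computes a naive estimate (edge effects ignored) on a toy point cloud and reads off a crude correlation length where clustering in excess of complete spatial randomness peaks; it illustrates the statistic, not the thesis's exact procedure.

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive Ripley's K for a 2-D point pattern:
    K(r) = area / (n*(n-1)) * number of ordered pairs within distance r."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    mask = ~np.eye(n, dtype=bool)  # exclude self-pairs
    return np.array([area * np.sum(d[mask] <= r) / (n * (n - 1))
                     for r in radii])

# Toy "damage points" in a unit square: random vs. clustered pattern.
rng = np.random.default_rng(1)
uniform = rng.uniform(0.0, 1.0, size=(200, 2))
clustered = 0.5 + 0.05 * rng.standard_normal((200, 2))
radii = np.linspace(0.01, 0.25, 25)

# Under complete spatial randomness K(r) ~ pi*r^2; the radius where the
# excess over that baseline peaks is a crude correlation-length proxy.
for name, pts in (("uniform", uniform), ("clustered", clustered)):
    excess = ripley_k(pts, radii, 1.0) - np.pi * radii**2
    print(f"{name:9s}: peak excess K at r ~ {radii[np.argmax(excess)]:.2f}")
```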
7

Vacher, Camille. "Automates d'arbres à contraintes globales pour la vérification de propriétés de sécurité." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2010. http://tel.archives-ouvertes.fr/tel-00598494.

Abstract:
We study classes of finite-state automata computing on trees, extended with constraints for testing equalities and disequalities between subtrees. We focus on tree automata with global constraints, where the tests are performed according to the states that the automaton reaches during its computations. Such automata were introduced in work on semi-structured documents. We carry out a detailed comparison of expressiveness between these automata and other models allowing similar tests, such as automata with constraints between sibling subtrees or tree automata with auxiliary memory. We show how such automata can be used to verify security properties of cryptographic protocols. Tree automata have already been used to model the messages exchanged during a protocol session; by adding equality constraints, we can precisely describe sessions that reuse the same message several times, thereby avoiding too coarse an approximation. We then answer positively the question of the decidability of emptiness for the languages recognized by automata with global constraints. By showing that their expressiveness is very close to that of automata operating on representations of terms by directed acyclic graphs, we derive an emptiness decision procedure running in non-deterministic doubly exponential time. Finally, we study the emptiness problem for automata with global constraints extended with so-called key constraints, which intuitively express that all subtrees of a certain type in an input tree are pairwise distinct; key types are classically used to represent a unique identifier, such as a social security number. We describe an emptiness decision procedure of non-elementary complexity and show that this procedure is very robust: the automata can be extended with additional constraints, such as counting constraints or local tests, while preserving the decidability of emptiness.
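The link drawn above to automata on DAG representations of terms rests on maximal sharing of equal subterms. Below is a toy hash-consing sketch showing how structural equality of subtrees becomes pointer identity in such a representation; it illustrates the encoding only, not the decision procedure.

```python
# Toy DAG (hash-consed) representation of terms: structurally equal
# subterms share one node, so the equality tests used by global
# constraints reduce to identity comparisons.
class Node:
    _table = {}

    def __new__(cls, symbol, *children):
        key = (symbol, *(id(c) for c in children))
        node = cls._table.get(key)
        if node is None:
            node = super().__new__(cls)
            node.symbol, node.children = symbol, children
            cls._table[key] = node
        return node

# f(g(a), g(a)): both occurrences of g(a) collapse onto one DAG node.
a = Node("a")
t = Node("f", Node("g", a), Node("g", a))
print(t.children[0] is t.children[1])  # True: equality is sharing
```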
8

Meguellati, Fatima. "Estimation par approximation de Laplace dans les modèles GLM Mixtes : application à la gravité corporelle maximale des accidents de la route." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10204/document.

Abstract:
This thesis is a contribution to the construction of statistical methods for the evaluation (modelling and estimation) of some indices used to analyse the injury severity of road crashes. We focus on four points in developing the adopted methodology: the selection of variables (or factors) carrying a random effect, the construction of mixed logistic-normal models, parameter estimation by Laplace approximation and by PQL (penalized quasi-likelihood), and the comparison of the performance of the estimation methods. In a first contribution, a logistic-normal model is constructed with "collision type" as the random-effect variable to analyse the maximum injury severity observed in a sample of crashed vehicles. Estimation methods based on the Laplace approximation of the log-likelihood are proposed to estimate and analyse the contribution of the variables in the model. We compare, by simulation, this Laplace approximation to those based on adaptive Gauss-Hermite quadrature (AGH). We show that the two approaches are equivalent with respect to estimation accuracy, although AGH is slightly superior. A second contribution adapts some algorithms of the PQL family to estimate the parameters of a second model and compares their performance, in terms of bias, to the Laplace and AGH methods. Two examples of simulated data illustrate the results obtained. In a third and dense contribution, we identify several mixed logistic-normal models with more than one random effect. The numerical convergence of the algorithms (Laplace, AGH, PQL) as well as the precision of the estimates are investigated. Simulations, as well as a database of detailed crash data, are used to analyse the performance of the models in detecting vehicles containing occupants with maximum injury severity. R code accompanies all the results obtained. The thesis ends with perspectives on selection criteria for mixed GLM models and on extending these models to the multinomial family.
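The Laplace approximation at the core of these estimation methods replaces the intractable integral over each cluster's random effect with a Gaussian integral around the mode of the joint log-likelihood. A minimal numerical sketch for one cluster of a random-intercept logistic model follows; the data and parameter values are made up, and this is the generic technique, not the thesis's code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_joint(b, y, x, beta, sigma):
    """log of p(y | b) * p(b) for one cluster with random intercept b."""
    eta = beta[0] + beta[1] * x + b
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = -0.5 * (b / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return loglik + logprior

def laplace_marginal(y, x, beta, sigma, eps=1e-4):
    """Laplace approximation to the cluster's marginal log-likelihood:
    log int exp(h(b)) db ~ h(b_hat) + 0.5 * log(2*pi / -h''(b_hat))."""
    h = lambda b: log_joint(b, y, x, beta, sigma)
    b_hat = minimize_scalar(lambda b: -h(b)).x        # mode of h
    h2 = (h(b_hat + eps) - 2 * h(b_hat) + h(b_hat - eps)) / eps**2
    return h(b_hat) + 0.5 * np.log(2 * np.pi / -h2)

# Toy cluster: 8 binary outcomes ("severe injury") and one covariate.
y = np.array([1, 0, 1, 1, 0, 1, 0, 1.0])
x = np.array([0.2, -1.0, 0.5, 1.3, -0.7, 0.9, -0.2, 1.1])
print(f"approx. marginal log-lik: {laplace_marginal(y, x, (0.1, 0.8), 1.0):.3f}")
```

AGH refines this by integrating over several quadrature nodes placed around the same mode instead of a single Gaussian fit, which is why it is slightly more accurate at extra cost.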
9

Neu, Thibault. "Etude expérimentale et modélisation de la compression quasi isotherme d’air pour le stockage d’énergie en mer." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0021/document.

Abstract:
Le stockage d’énergie par air comprimé est une des technologies nécessaires à l’emploi massif des énergies renouvelables intermittentes, d’origine solaire ou éolienne. La compression d’air par piston liquide permet d’augmenter l’efficacité du stockage d’énergie en favorisant un échange thermique intense dans la chambre de compression. La description et l’évaluation de cet échange convectif pour des chambres de compression à faible rapport alésage/course ne sont cependant que peu étudiées dans la littérature scientifique. A l’aide d’une étude expérimentale menée sur deux bancs d’essais, l’échange convectif interne dans la chambre de compression est étudié. Une méthode inverse, couplée à la mesure de la température de l’air comprimé et de la position du piston, est employée afin de déterminer les transferts thermiques pariétaux instantanés au cours des compressions.Après avoir mis en lumière la présence systématique d’une transition du régime convectif de type laminaire vers un régime turbulent dans le volume d’air comprimé, de nouvelles corrélations d’échange convectif sont recherchées. Sur la base de 73 expérimentations, plusieurs formes de corrélations basées sur des nombres sans dimension sont optimisées puis comparées. Deux nouvelles corrélations du nombre de Nusselt, l’une en régime laminaire et l’autre en régime turbulent, sont ensuite sélectionnées. Un modèle instationnaire thermodynamique 1D de la chambre de compression est alors construit dans l’environnement Matlab / Simulink afin de tester la qualité de ces corrélations. Les résultats numériques sont ainsi comparés aux données expérimentales. Finalement, deux essais expérimentaux supplémentaires, réalisés sur un banc d’essai différent, permettent de confirmer la qualité des nouvelles corrélations d’échange convectif proposées
Energy storage by compressed air would be one of the required technologies for enabling massive use of intermittent solar or wind renewable energy sources. Air compression using a liquid piston enables an increase in the energy storage efficiency by inducing an intense heat exchange in the compression chamber. Few studies reported in the literature have focused on the description and evaluation of the convective heat exchange for a low ratio compression chamber (L/D). Using an experimental study and two test benches, the internal convective heat transfer during compression has been studied. In addition to measuring liquid piston position and air pressure, an inverse method was used to determine the instantaneous parietal convective heat flow during compression. After highlighting the presence of a systematic transition from laminar to turbulent convective regime in the compressed air, new convective heat transfer correlations were sought. On the basis of 73 experiments, several correlation forms based on dimensionless numbers were optimized and compared. Two new Nusselt number correlations, one for laminar and the other for turbulent flow, were then selected. A 1D thermodynamic transient model of the compression chamber was built using Matlab / Simulink environment in order to test the quality of these correlations. Thus, numerical results and experimental data were compared. Finally, results from two additional experiments carried out on a different test bench have confirmed the quality of the new proposed correlations for convective heat exchange
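The efficiency argument for quasi-isothermal compression can be made with textbook ideal-gas formulas: compressing through the same volume ratio isothermally takes markedly less work than compressing adiabatically. A small worked comparison using generic thermodynamics, not data from the thesis:

```python
import numpy as np

# Ideal-gas compression work from V1 to V2 = V1/ratio:
#   isothermal: W = P1*V1*ln(ratio)
#   adiabatic:  W = P1*V1/(g-1) * (ratio**(g-1) - 1)
# The difference is heat that a liquid piston helps remove during
# compression instead of losing it after storage.
P1, V1 = 1.0e5, 1.0e-3   # Pa, m^3: one litre of air at 1 bar
ratio = 10.0             # compression ratio V1/V2
gamma = 1.4              # heat-capacity ratio of air

w_iso = P1 * V1 * np.log(ratio)
w_adi = P1 * V1 / (gamma - 1.0) * (ratio ** (gamma - 1.0) - 1.0)
print(f"isothermal work: {w_iso:6.1f} J")   # ~230 J
print(f"adiabatic  work: {w_adi:6.1f} J")   # ~378 J
```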
10

Smith, Edward Charles. "Reconceptualizing mathematics teaching and learning: Teacher learning in a realistic mathematics context." University of the Western Cape, 2000. http://hdl.handle.net/11394/8470.

Abstract:
In this study the construct of personal theories is used to represent the teacher's conceptions, which are interpreted as the consciously held beliefs. The teacher's personal theories encompass beliefs, images, values and attitudes as well as understanding about teaching and learning. This study investigates the influence of the teacher's conceptions of mathematics, of the teaching and learning of mathematics and of the context before and after a structured learning experience. The interest in the teacher's conceptions is derived from the assumption that these serve as a primary component that influences how teachers think about their professional responsibilities and how they act in their classrooms. Furthermore, the extent of implementation of a new curriculum has been linked to the scope of congruence between the teachers' conceptions and the underpinning philosophy of the intended curriculum. The study of the teacher's conceptions is especially relevant during a time of educational reform, such as the current transition to an Outcomes-Based Education curriculum in South Africa. The participants in this study consist of four primary school mathematics teachers with various educational backgrounds, who teach at schools situated in different physical environments. The conceptions that these teachers have of mathematics, of the teaching and learning of mathematics and the influence of the context are investigated using a variety of instruments. Data collection was done with a questionnaire, a repertory grid, a semi-structured interview and lesson observations. The teachers participated in the Teaching Intervention and Support Programme (TISP), as a structured teacher learning experience. The programme is centred on the integration of the developmental and socio-cultural perspectives on teacher learning. With the developmental perspective the focus is on the acquisition of intellectual skills, while the socio-cultural perspective emphasizes participation in social practice. Both are directed at effecting conceptual change. With the developmental approach the process of conceptual change involves the development of new conceptions from existing conceptions. From the socio-cultural perspective the context is paramount, and conceptual change is seen as new ways of being and acting within a particular context. The teachers were invited to attend a two-week intervention session, followed by a six-month support programme that was aimed at establishing a teacher learning community. The learning experiences provided during the intervention session were drawn mainly from Realistic Mathematics Education. On completion of the programme, the teachers' conceptions of mathematics, of the teaching and learning of mathematics and the influence of the context were again investigated. The results of this study show that two of the participants had highly mechanistic conceptions of mathematics and of the teaching and learning of mathematics. The remaining two had a more empiristic approach, with its high focus on environmental activities. After the programme, the teachers with the mechanistic views adopted a mixed conception, with some of the mechanistic conceptions retained but now interspersed with some empiristic and realistic conceptions. The participants with the empiristic conceptions adopted a more realistic conception, but again to varying degrees. Thompson's (1991) hierarchical structure for the development of conceptions was also used to describe the extent of conceptual change.

However, it was found that a concentric, rather than a hierarchical, representation is more appropriate for describing these changes. With regard to the socio-cultural view of conceptual change, all the participants perceived the context differently. The teachers' actions were also more commensurate with the practices associated with teachers who encourage learner autonomy, mathematical investigations and a facilitative role for the teacher.

Books on the topic "Logiciel QUASES":

1

Baldwin, Thomas. Russell on Modality. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198786436.003.0007.

Abstract:
This essay presents a synoptic account of Russell's changing views concerning possibility and necessity. The essay shows how an intuitionist view of logical necessity, according to which it is a fundamental, indefinable property that is 'purely and simply perceived', swiftly gives way in Russell's work to scepticism concerning whether necessity exists at all, since he holds that it cannot be explained by analyticity. The essay then shows how Russell returns, in effect, to both Aristotle and Hume with the thought that necessity is grounded on the universal truth of the relevant propositional function, and an attendant feeling of necessity. The essay also addresses Russell's later suggestion that the domain of quantification of propositional functions is possible worlds (the idiom was familiar to him from his early book on Leibniz) and argues that Russell's commitments point towards what in contemporary modal theory would be called a quasi-linguistic modal ersatzism.
2

Parfit, Derek. On What Matters. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198778608.001.0001.

Abstract:
This third volume of the series develops further the previous treatment of reasons, normativity, the meaning of moral discourse, and the status of morality. It engages with critics and shows the way to a resolution of their differences. The volume is partly about what it is for things to matter, in the sense that we all have reasons to care about these things. Much of the book discusses three of the main kinds of meta-ethical theory: normative naturalism, quasi-realist expressivism, and non-metaphysical non-naturalism, which the book refers to as non-realist cognitivism. This third theory claims that, if we use the word 'reality' in an ontologically weighty sense, irreducibly normative truths have no mysterious or incredible ontological implications. If instead we use 'reality' in a wide sense, according to which all truths are truths about reality, the theory claims that some non-empirically discoverable truths, such as logical, mathematical, modal, and some normative truths, raise no difficult ontological questions. The book discusses these theories partly by commenting on the views of some of the contributors to Peter Singer's collection Does Anything Really Matter? Parfit on Objectivity.

Book chapters on the topic "Logiciel QUASES":

1

Blackburn, Simon. "Attitudes and Contents." In Essays in Quasi-Realism, 182–97. Oxford University PressNew York, NY, 1993. http://dx.doi.org/10.1093/oso/9780195080414.003.0015.

Abstract:
G. F. Schueler's article puts in a forceful way various reservations about my treatment of indirect contexts, on behalf of the position I have called 'quasirealism.' His opposition is, I think, as complete as could be: it is not only that my treatment has been incomplete, which I happily concede, or that its formulation has been defective, which I am prepared to believe, but also that nothing like it could possibly succeed. That at least is the proper consequence of some of his views: on logical form, on validity, and on the nature of commitment. For example, if showing that an inference has 'the logical form' or 'is an instance' of modus ponens involves taking it as 'the realist picture' has it, then no attempt to explain it in other terms will be compatible with its having that form. Again, if validity is ('as it is used in logic') defined in terms of the impossibility of premises being true and conclusions false, then persons reluctant to apply truth and falsity to any of the elements of an inference will have to admit that the inference is not valid, as the term is used in logic. Third, if 'talk of "commitments" is problematic for the antirealist', then antirealism will make no headway by thinking of a class of commitments more general than those with representative or realistic truth conditions. Fortunately, none of these contentions seems to me correct. Since the survival of quasi-realism even in spirit demands their rebuttal, I shall start by considering them in turn.
2

Marcone, Alberto. "On the Logical Strength of Nash-Williams' Theorem on Transfinite Sequences." In Logic: from Foundations to Applications, 327–552. Oxford University Press, Oxford, 1996. http://dx.doi.org/10.1093/oso/9780198538622.003.0014.

Abstract:
We show that Nash-Williams' theorem, asserting that the countable transfinite sequences of elements of a better-quasi-ordering, ordered by embeddability, form a better-quasi-ordering, is provable in the subsystem of second order arithmetic Π¹₁-CA₀ but is not equivalent to Π¹₁-CA₀. We obtain some partial results towards the proof of this theorem in the weaker subsystem ATR₀, and we show that the minimality lemmas typical of wqo and bqo theory imply Π¹₁-CA₀ and hence cannot be used in such a proof.
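For readers outside the area, the key notion can be stated compactly; these are standard definitions, not specific to this chapter.

```latex
% Standard background: wqo, and the statement of Nash-Williams' theorem.
A quasi-order $(Q,\le)$ is a \emph{well-quasi-order} (wqo) if every
infinite sequence $x_0, x_1, x_2, \dots$ from $Q$ admits indices
$i < j$ with $x_i \le x_j$. A \emph{better-quasi-order} (bqo) is a
strengthening of this property. Nash-Williams' theorem: if $Q$ is a
bqo, then the countable transfinite sequences over $Q$, quasi-ordered
by embeddability, again form a bqo.
```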
3

Hill, Christopher S. "Percepts and Concepts." In Perceptual Experience, 189 ff. Oxford University Press, Oxford, 2022. http://dx.doi.org/10.1093/oso/9780192867766.003.0008.

Abstract:
Chapter 8 describes a number of essential differences separating perceptual experiences from high-level cognitive phenomena, and, in particular, from concepts and propositional attitudes. The chapter embraces the widely accepted view that concepts are akin to words in that they have reference and belong to logical categories, along with the companion view that the propositions that are the objects of attitudes like belief and desire are like sentences in that they have truth conditions and logical forms. In sum, concepts and propositions are quasi-linguistic. Not so for perceptual representations. The chapter argues that perceptual experiences are metaphysically independent of conceptualization. It also maintains that, on the one hand, perceptual representations lack the logical properties that are essential to concepts and propositions, and that, on the other hand, the former representations differ from the latter in that they are often isomorphic to the domains and individual entities that they represent.
4

Blackburn, Simon. "Quasi-Realism no Fictionalism." In Fictionalism In Metaphysics, 322–38. Oxford University PressOxford, 2005. http://dx.doi.org/10.1093/oso/9780199282180.003.0012.

Abstract:
Suppose that Simon Blackburn's quasi-realist program has succeeded perfectly on its own terms, something I think not unlikely. Given the controversial nature of the program, this much endorsement from a philosopher and logician of Lewis' stature is pleasant indeed. And for the purpose of this chapter I am going to bask in its light. In other words, I am not going to say very much directly to defend my program, or render it more or less likely to be successful than it is already.
5

Kuusela, Oskari. "Epilogue." In Wittgenstein on Logic as the Method of Philosophy, 245–46. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198829751.003.0008.

Abstract:
In the Introduction I made the bold claim that Wittgenstein transforms Frege’s and Russell’s logical and methodological ideas in a way that ‘can be justifiably described as a second revolution in philosophical methodology and the philosophy of logic, following Frege’s and Russell’s first revolution’. This claim was meant in a specific sense relating to the use of logical methods in philosophy, a discipline where we are often dealing with complex and messy concepts and phenomena, and having to clarify highly complicated and fluid uses of natural language. The situation is not quite the same in metamathematics, for example, and my claim was not intended to concern the employment of logical methods there, i.e. that Wittgenstein’s later philosophy of logic would constitute a revolution in this area too. For, while his later philosophy of logic has no difficulty explaining the possibility of the employment of calculi to clarify other calculi, in metamathematics there is perhaps no similarly pressing need for idealization as in philosophy, when we clarify complex concepts originating in ordinary language, since the targets of clarification in metamathematics are systems governed by strict rules themselves. Thus, this area of the employment of logical methods seems not as significantly affected. But I hope that my claim concerning the use of logical methods in philosophy can now be recognized as justified, or at least worth considering seriously, on the basis of what I have said about 1) the later Wittgenstein’s account of the status of logical clarificatory models, and how this explains the possibility of simple and exact logical descriptions, thus safeguarding the rigour of logic, 2) how his account of the function of logical models makes possible the recognition of the relevance of natural history for logic without compromising the non-empirical character of the discipline of logic, and 3) in the light of Wittgenstein’s introduction of new non-calculus-based logical methods for the purpose of philosophical clarification, such as his methods of grammatical rules, the method of language-games, and quasi-ethnology....
6

Asher, Nicholas. "Commonsense Entailment: A Conditional Logic for Some Generics." In Conditionals: from Philosophy to Computer Science, 103–46. Oxford University Press, Oxford, 1995. http://dx.doi.org/10.1093/oso/9780198538615.003.0005.

Abstract:
'Lions eat zebras', 'Quakers are pacifists', or 'birds fly' are examples of generic statements. I believe that such statements should be represented logically as quantified conditionals and thus that the study of generic statements can reveal something of interest about the logical properties of conditionals that we find expressed in natural languages. One indication that this view may be right is that generic statements behave like quantified conditionals. They license certain patterns of inference that look much like those supported by some conditionals. Like almost all forms of conditionals, generics support an inference of weakening of the consequent: 'lions eat zebras' implies 'lions eat zebras or kangaroos'.
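Read as a quantified conditional, the weakening pattern mentioned here can be schematized as follows; the notation and predicate names are illustrative, not Asher's exact formalism.

```latex
% Right weakening for a generic read as a quantified conditional '>'.
\forall x\,\bigl(\mathrm{lion}(x) > \mathrm{eats\_zebras}(x)\bigr)
\;\models\;
\forall x\,\bigl(\mathrm{lion}(x) > \bigl(\mathrm{eats\_zebras}(x)
\lor \mathrm{eats\_kangaroos}(x)\bigr)\bigr)
```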
7

Kuusela, Oskari. "The Tractatus’ Philosophy of Logic and Carnap." In Wittgenstein on Logic as the Method of Philosophy, 77–108. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198829751.003.0003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This chapter discusses the relation between the Tractatus’ and Carnap’s philosophies of logic, arguing that Carnap’s position in The Logical Syntax of Language is in certain respects much closer to the Tractatus than has been recognized. Explained in Carnapian terms, the goal of the Tractatus is to introduce, by means of quasi-syntactical sentences, logical principles and concepts of a logical language to be used in philosophical clarification in the formal mode. A distinction between the material and formal mode is therefore part of the Tractatus’ view, and contrary to Carnap’s criticism, the sentences of the Tractatus can be clearly distinguished from nonsensical metaphysical statements. Moreover, despite the Tractatus’ rejection of syntactical statements, there is a correspondence between Wittgenstein’s saying–showing distinction and Carnap’s object-language/syntax-language distinction. Both constitute ways to clarify the logical distinction between the logico-syntactical determinations concerning language and the use of language according to such determinations, a distinction absent in Frege and Russell. Wittgenstein’s distinction thus constitutes a precursor of the object-language/syntax-language distinction which the latter in a certain sense affirms. The saying–showing distinction agrees with Carnap’s position also in marking logic as something that is not true/false about either language or reality, a view that underlies Carnap’s principle of tolerance. The standard view that Carnap overcame the philosophy of logic of the Tractatus in the 1930s must therefore be regarded as problematic and misleading.
8

Smith, Gary. "Beat the Market II." In The AI Delusion. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198824305.003.0013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Nowadays, technical analysts are called quants. Being overly impressed by computers, we are overly impressed by quants using computers instead of pencils and graph paper. Quants do not think about whether the patterns they discover make sense. Their mantra is, “Just show me the data.” Indeed, many quants have PhDs in physics or mathematics and only the most rudimentary knowledge of economics or finance. That does not deter them. If anything, their ignorance encourages them to search for patterns in the most unlikely places. The logical conclusion of moving from technical analysts using pencils to quants using computers is to eliminate humans entirely. Just turn the technical analysis over to computers. A 2011 article in the wonderful technology magazine Wired was filled with awe and admiration for computerized stock trading systems. These black-box systems are called algorithmic traders (algos) because the computers decide to buy and sell using computer algorithms in place of human judgment. Humans write the algorithms that guide the computers but, after that, the computers are on their own. Some humans are dumbstruck. After Pepperdine University invested 10 percent of its portfolio in quant funds in 2016, the director of investments argued that “Finding a company with good prospects makes sense, since we look for undervalued things in our daily lives, but quant strategies have nothing to do with our lives.” He thinks that not having the wisdom and common sense acquired by being alive is an argument for computers. He is not alone. Black-box investment algorithms now account for nearly a third of all U.S. stock trades. Some of these systems track stock prices; others look at economic and noneconomic data and dissect news stories. They all look for patterns. A momentum algorithm might notice that when a particular stock trades at a higher price for five straight days, the price is usually higher on the sixth day. A mean-reversion algorithm might notice that when a stock trades at a higher price for eight straight days, the price is usually lower on the ninth day. A pairs-trading algorithm might notice that two stock prices usually move up and down together, suggesting an opportunity when one price moves up and the other doesn’t.
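A minimal Python sketch of the kind of momentum rule described here (illustrative only: the function, window length, and price data are assumptions, not code from the book):

# Toy momentum rule: if a stock closed higher on each of the last
# `window` days, flag the pattern. A sketch of the pattern-hunting
# the chapter criticizes, not a trading system.
from typing import List

def momentum_signal(closes: List[float], window: int = 5) -> bool:
    """True if the last `window` day-over-day changes were all positive."""
    if len(closes) < window + 1:
        return False
    recent = closes[-(window + 1):]
    return all(a < b for a, b in zip(recent, recent[1:]))

prices = [100.0, 101.2, 101.9, 102.4, 103.1, 104.0]  # made-up data
print(momentum_signal(prices))  # True: five straight up days

A mean-reversion or pairs-trading rule would differ only in the pattern tested, which is precisely the chapter's point: the machinery finds patterns whether or not they mean anything.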
9

Hanson, Ann Ellis. "Talking Recipes in the Gynaecological Texts of the Hippocratic Corpus." In Parchments of Gender: Deciphering the Bodies of Antiquity, 71–94. Oxford: Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780198150800.003.0004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Hippocratic writers of the fifth and early fourth centuries BCE were quick to pounce upon what they considered unsophisticated aetiologies for disease and upon remedies they claimed bore no logical relationship to the sickness being treated. The scorn these authors heaped upon those who professed irrational causes and cures is well known. The writer of Sacred Disease criticized witch-doctors, faith-healers, quacks, and charlatans, whose aetiology for epilepsy and sudden seizures invoked attacks from the gods and whose therapies consisted of purifications, incantations, prohibition of baths and specific foods, lying on goatskins and eating the flesh of goats. The writer of Diseases of Young Girls censured women who followed commands from Artemis’ priests to dedicate costly garments to the goddess in the effort to cure madness in the premenarchic young girl.
10

Crouch, Richard. "Ellipsis and Glue Languages." In Fragments, 32–67. New York, NY: Oxford University Press, 1999. http://dx.doi.org/10.1093/oso/9780195123029.003.0003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A treatment of the interactions between ellipsis, quantifiers, and anaphora is presented in Crouch (1995), which gives comparable coverage to the higher-order unification account of Dalrymple et al. (1991) but without the burden of (i) order-sensitive interleaving of scope and ellipsis resolution or (ii) the same degree of potentially undecidable higher-order unification. That paper claims that the semantic formalism in which the treatment is cast (Quasi Logical Form, QLF) represents sets of constraints on permissible semantic compositions (where composition builds up the meaning of a sentence from the meanings of its constituent parts). Ellipsis resolution is to be seen as a matter of making “minimal” changes to the composition of the antecedent in order to accommodate the (previously identified) parallel material in the ellipsis. These minimal changes can be represented by substitutions on the set of constraints determining the permissible compositions of the antecedent.
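A toy sketch of the substitution idea, under a drastic simplification (constraint sets as Python sets of role-value pairs; this is not Crouch's QLF machinery, and all names are illustrative):

# Ellipsis resolution as substitution over a constraint set.  For
# "John slept, and Bill did too", copy the antecedent's constraints,
# substituting the parallel element John -> Bill.
def resolve_ellipsis(antecedent: set, old: str, new: str) -> set:
    """Minimally change the antecedent: swap the parallel term."""
    return {(role, new if value == old else value)
            for role, value in antecedent}

antecedent = {("predicate", "sleep"), ("agent", "John"), ("tense", "past")}
print(sorted(resolve_ellipsis(antecedent, "John", "Bill")))
# [('agent', 'Bill'), ('predicate', 'sleep'), ('tense', 'past')]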

Conference papers on the topic "Logiciel QUASES":

1

Alshawi, Hiyan, David Carter, Manny Rayner, and Björn Gambäck. "Translation by Quasi Logical Form transfer." In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics. Morristown, NJ, USA: Association for Computational Linguistics, 1991. http://dx.doi.org/10.3115/981344.981365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Losev, Vladimir V., and Victor I. Staroselsky. "Regularities of power consumption in quasi-adiabatic logical gates." In SPIE Proceedings, edited by Kamil A. Valiev and Alexander A. Orlikovsky. SPIE, 2004. http://dx.doi.org/10.1117/12.562669.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lima Neto, Clodomir Silva, Thiago Nascimento da Silva, and Umberto Rivieccio. "Quasi-N4-lattices and their logic." In Workshop Brasileiro de Lógica. Sociedade Brasileira de Computação, 2022. http://dx.doi.org/10.5753/wbl.2022.222852.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The variety of quasi-N4-lattices (QN4) was recently introduced as a non-involutive generalization of N4-lattices (algebraic models of Nelson's paraconsistent logic). While research on these algebras is still at a preliminary stage, we know that QN4 is an arithmetical variety which possesses a ternary as well as a quaternary deductive term, enjoys equationally definable principal congruences and the strong congruence extension property. We furthermore have recently introduced an algebraizable logic having QN4 as its equivalent semantics. In this contribution we report on the results obtained so far on this class of algebras and on its logical counterpart.
4

Lager, Torbjörn, and William J. Black. "Bidirectional incremental generation and analysis with Categorial Grammar and indexed quasi-logical form." In Proceedings of the Seventh International Workshop on Natural Language Generation. Morristown, NJ, USA: Association for Computational Linguistics, 1994. http://dx.doi.org/10.3115/1641417.1641444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Godovitsyn, Maxim, Julia Zhivchikova, Nickolay Starostin, and Anton Shtanyuk. "Algorithm for Implementing Logical Operations on Sets of Orthogonal Polygons." In 31st International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-1088-1097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As part of the development of CAD for design rule checks (DRC), it is necessary to use logical operations on the orthogonal polygons that form the layout of an integrated circuit. Operations such as union, intersection, and subtraction are performed over layers that contain orthogonal polygons, and they are subject to stringent execution-time requirements. The traditional representation of polygons as bitmaps does not provide a quasi-linear dependence of time on the size of the processed data, which requires the development of new algorithms and polygon representations. This paper describes a modified sweep-line obscuring algorithm that achieves O(N log N) time. The algorithm uses three properties of the polygon: the separation of the inner region from the outer region by each edge, the membership of every edge in the set of either vertical or horizontal edges, and the dissection of the layer plane into rectangular fragments belonging to either the inner or the outer region of the polygon. Procedures for dissecting input polygon contour representations into sets of vertical and horizontal edges are described. As a result of performing the logical operations, the polygon edges of the resulting layer are formed; these edges, in turn, are converted into contour representations. The results of a computational experiment confirming the theoretically determined time dependences are presented. We also propose the structure of a software system for DRC built with the C++ and Lua programming languages.
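A generic sweep-line sketch in Python (not the authors' algorithm; restricted to axis-aligned rectangles, a special case of orthogonal polygons) showing the event-driven structure such methods share:

# Area of the union of axis-aligned rectangles via an x-sweep.
# Events are sorted once (O(N log N)); for brevity the active
# y-intervals are re-scanned with a sorted list, where a segment
# tree would be used to keep the full O(N log N) bound.
def union_area(rects):
    """rects: iterable of (x1, y1, x2, y2) with x1 < x2, y1 < y2."""
    events = []
    for x1, y1, x2, y2 in rects:
        events.append((x1, 1, y1, y2))   # rectangle opens
        events.append((x2, -1, y1, y2))  # rectangle closes
    events.sort()

    def covered(active):
        # Total length of the union of the active y-intervals.
        total, end = 0.0, float("-inf")
        for y1, y2 in sorted(active):
            if y1 > end:
                total, end = total + (y2 - y1), y2
            elif y2 > end:
                total, end = total + (y2 - end), y2
        return total

    area, active, prev_x = 0.0, [], None
    for x, kind, y1, y2 in events:
        if prev_x is not None:
            area += covered(active) * (x - prev_x)  # strip between events
        if kind == 1:
            active.append((y1, y2))
        else:
            active.remove((y1, y2))
        prev_x = x
    return area

print(union_area([(0, 0, 2, 2), (1, 1, 3, 3)]))  # 7.0 (4 + 4 - 1 overlap)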
6

Zhang, Xueping, Rajiv Shivpuri, and Anil K. Srivastava. "Stress Triaxiality in Chip Segmentation During High Speed Machining of Titanium Alloy." In ASME 2014 International Manufacturing Science and Engineering Conference collocated with the JSME 2014 International Conference on Materials and Processing and the 42nd North American Manufacturing Research Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/msec2014-3915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Besides strain intensity, stress triaxiality (the pressure-stress state) is the most important factor controlling the initiation of ductile fracture in chip segmentation, since it affects both the loading capacity and the strain to failure. The effect of stress triaxiality on failure strain is usually assessed by dynamic Split Hopkinson Pressure Bar (SHPB) or quasi-static tests in tension, compression, torsion, and shear. However, the stress triaxialities produced by these tests differ considerably from those in high-speed machining of titanium alloys, where adiabatic shear bands (ASB) are associated with much higher strains, stresses, and temperatures. This aspect of shear localization and fracture is poorly understood in previous research. This paper aims to demonstrate the role of stress triaxiality in chip segmentation during the machining of titanium alloy using the finite element method. The research promotes a fundamental understanding of the thermo-mechanics of the high-speed machining process and provides a logical insight into the fracture mechanism in discontinuous chips.
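For reference, the standard continuum-mechanics definition of stress triaxiality (an addition for clarity; the paper's notation may differ) is

\[
\eta = \frac{\sigma_m}{\bar{\sigma}}, \qquad \sigma_m = \tfrac{1}{3}\operatorname{tr}(\boldsymbol{\sigma}), \qquad \bar{\sigma} = \sqrt{\tfrac{3}{2}\,\mathbf{s}:\mathbf{s}}, \qquad \mathbf{s} = \boldsymbol{\sigma} - \sigma_m \mathbf{I},
\]

the ratio of mean (hydrostatic) stress to the von Mises equivalent stress; in ductile-fracture models the strain to failure typically decreases as tensile triaxiality increases.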
7

Nan, Emanuela. "Rinaturactivazione: nuove strategie di sviluppo sostenibile dai centri storici mediterranei: Genova città-laboratorio." In International Conference Virtual City and Territory. Roma: Centre de Política de Sòl i Valoracions, 2014. http://dx.doi.org/10.5821/ctv.7984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Territories are now used as a “menu” within which people move ever more freely according to their own needs. Cities have long since taken on the character of integrated systems, appearing more and more like a composite and variable miscellany whose definition involves multiple devices, and whose understanding and operational management seems to lie no longer in the delimitation of registers and formal contexts, but in the identification of rules and tactics capable of guiding and predicting the outcomes and evolutions of the different dynamics and vocations. In this progressive evolution of urban realities, historic centres, particularly those of the Mediterranean area (historically strongly structured, hyper-stratified, and dense), prove inadequate to meet the new needs of present-day life. It therefore becomes fundamental to identify and recognise a new criterion and method for managing these contexts, aimed not at upending them or denaturing them in the name of progress but, on the contrary, at reactivating them as propulsive nodes, starting from the rediscovery of the values and functions intrinsic to their historical dimension and nature, on the basis of an essentially renewal-and-reinterpretation logic. Genoa offers itself as a fantastic laboratory for identifying strategies and actions aimed at tracing new, forward-looking developments that respond to today’s active logics of sustainability, advancement, and local and global interconnection.
8

Maiorano, Massimo, and Enrico Sciubba. "Heat Exchangers Networks Synthesis and Optimisation Performed by an Exergy-Based Expert Assistant." In ASME 1999 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/imece1999-0851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper presents a novel method for the design of “optimal” (or quasi-optimal) HEN. The method consists of an Expert System (“ES”) based on a small number of powerful and strongly selective heuristic rules. The important contribution of this study does not lie in the formulation of the rules, which have been adapted from the existing literature, but in their expression as logical propositions and in their subsequent implementation in a prototype ES that performs interactively with the user. It is not unusual to find chemical processes with as many as 100 interacting streams, and even simple thermal processes, excluding refineries and chemical plants, contain at least a 10-stream HEN: hence the high demand for an “automatic” (in some sense) design procedure that may conveniently be adapted to design-and-optimisation problems. Pinch Technology (“PT”), at present the almost universally adopted design procedure, is very successful in most types of applications (except in cases where mechanical and thermal power must be optimised concurrently), but it constitutes an operative tool and does not improve its user’s comprehension of the problem: it assumes, rather, that the user is already familiar with the design of HEN. The approach we present in this paper is entirely different: we do not “mask” the thermodynamic and thermo-economic principles that guide the engineer on the path towards the “optimal” HEN configuration, and we do not allow concerns about “user friendliness” to impair the necessary participation of the user in the HEN synthesis procedure. In fact, though our ES (which we prefer to call an “Expert Assistant”, to underline its peculiarity of constantly interacting with the user) still lacks many of the capabilities that a good designer possesses, the underlying procedure is, unlike any of the other existing design-and-optimisation procedures, entirely inspectable by the user as far as its decision-making rules are concerned. It can be interrogated about its decision making, so that the logical path followed from the design data to the final solution can be inspected at will, and it can be used to directly compare different alternatives in a logically systematic fashion. The paper begins with a brief review of the HEN design problem, followed by a critical discussion of the heuristic rules that form the basis for the Inference Engine of the Expert System. The formalisation of these rules into logical propositions suitable for Knowledge Based Methods is then presented, and the resulting macrocode developed. As a preliminary validation, two examples of application of the code (named Heat Exchanger Network Expert Assistant, HENEA for short) are presented and discussed: since both cases have been published and their “optimal” solutions are known, the performance of HENEA can be assessed by comparison.
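A hedged sketch of what one such heuristic looks like when written as an inspectable logical proposition (the ΔTmin feasibility check below is a standard rule from the HEN literature, not necessarily one of HENEA's; the stream data and names are illustrative):

# A hot and a cold stream may be matched in a counter-current exchanger
# only if both approach temperatures respect a minimum dt_min.
# Toy encoding; not HENEA's rule base.
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    t_in: float   # inlet temperature, degC
    t_out: float  # target temperature, degC

def match_is_feasible(hot: Stream, cold: Stream, dt_min: float = 10.0) -> bool:
    """feasible(hot, cold) <-> both ends satisfy the dt_min approach."""
    return hot.t_in >= cold.t_out + dt_min and hot.t_out >= cold.t_in + dt_min

h1 = Stream("H1", t_in=180.0, t_out=60.0)
c1 = Stream("C1", t_in=30.0, t_out=120.0)
print(match_is_feasible(h1, c1))  # True: 180 >= 130 and 60 >= 40

Because the rule is an explicit proposition rather than a buried procedure, a user can ask why a match was accepted or rejected, which is the inspectability the paper emphasises.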
9

Trillo, S., and S. Wabnitz. "Generation of Spatio-Temporal Patterns and All-Optical Switching based on Coherently Induced Modulational Instability in Fibers." In Nonlinear Guided-Wave Phenomena. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/nlgwp.1991.mb5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The appearance of modulational sidebands building up from noise has been reported when an intense cw or quasi-cw wave propagates in a fiber in the anomalous dispersion regime [1]. The modulation transforms the input wave into a train of pulses with an ultra-high repetition rate. This process may be stimulated by seeding the modulational instability (MI) incoherently, i.e., by means of a different weak detuned laser [2]. However, the experiments and the early theory of MI [3] have led to the widespread but erroneous belief that in the presence of MI the input wave becomes a train of solitons. On the contrary, (temporally) periodic wave solutions of the nonlinear Schroedinger (NLS) equation have shown that the propagation is also periodic in space (a phenomenon known as Fermi-Pasta-Ulam recurrence [4]), leading to the formation of complex spatio-temporal patterns [5-7]. We show here that the nonlinear dynamics of modulated waves, which in principle includes the interaction of an infinite number of Fourier modes, is essentially locked to the simple interaction between three modes: the pump and the first symmetric sidebands. In this case a simple integrable one-dimensional equivalent oscillator model [8-9] enables one to unfold the role of a coherent modulation at the input in the generation of the spatio-temporal patterns. This also suggests the possibility of new experiments in which the pulse train and the switching between two logical states are controlled by the input phase relation between the pump and the sidebands.
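In a standard normalization (an assumption on my part; the paper's conventions may differ), the truncation described here restricts the NLS field to the pump and the first symmetric sidebands:

\[
i\,\frac{\partial u}{\partial \xi} + \frac{1}{2}\,\frac{\partial^2 u}{\partial \tau^2} + |u|^2 u = 0, \qquad u(\xi,\tau) \approx a_0(\xi) + a_1(\xi)\,e^{i\Omega\tau} + a_{-1}(\xi)\,e^{-i\Omega\tau},
\]

where \(\Omega\) is the modulation frequency; substituting the ansatz and retaining only these three modes yields the integrable one-dimensional oscillator model of refs. [8-9].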
10

Mariano, Carmela. "Il futuro della città è policentrico? Una riflessione sull’area metropolitana romana." In International Conference Virtual City and Territory. Roma: Centre de Política de Sòl i Valoracions, 2014. http://dx.doi.org/10.5821/ctv.7965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The territory of Rome constitutes an “anomalous” metropolitan system (Campos Venuti, 2005), still entirely contained within the municipal boundaries of the city, since the metropolitan area offered no poles around which new mixed productive-residential fabrics could grow, resulting in a totally disordered territorial structure. The process of metropolization appears very weak, while a model of peripheralization seems to persist. The supra-municipal scale, in essence, continues to develop according to an old model that expresses equally old economic logics. It is a metropolitan system in which, in an attempt to rebalance its structure, the strategy of the “new centralities” has taken hold; together with the environmental system and the mobility system, this strategy represents one of the structural elements of the urban plan of Rome approved in 2008, as an alternative both to Rome’s persistent monocentrism and to the unfinished design of the Eastern Directional System (Sistema Direzionale Orientale) envisaged by the 1962 plan. The centralities, conceived as the key to the territorial transformation proposed for Rome, are intended by the plan to correct the anomaly of a metropolitan system in which there are no peripheral centres to enhance in order to advance toward territorial rebalancing, removing functions of excellence from the central pole so as to decongest it and strengthening the numerous peripheral locations. The paper offers a reflection on the future of Rome, its metropolitan territory, and its polycentric development model, starting from an analysis of a series of variables that have contributed to modifying the frame of reference.
