Academic literature on the topic 'Simplification structurale'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Simplification structurale.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Simplification structurale"

1

Haßler, Gerda. "Le tournant sémiotique du début du XXème siècle." Historiographia Linguistica 46, no. 1-2 (September 2, 2019): 88–104. http://dx.doi.org/10.1075/hl.00039.has.

Full text
Abstract:
The centenary of the publication of Ferdinand de Saussure's Cours de linguistique générale (1916) invites us to reconsider the importance of this work and the role of its author in the founding of a linguistics integrated into a semiology. There is no doubt that Saussure was extremely important for the development of structural linguistics in Europe and that, with his concept of the linguistic sign, he did pioneering work for the semiological turn. But the favourable reception of a theory in the scientific community is explained not only by its intrinsic quality, but also by several external conditions. These conditions are analysed on three levels: (1) the neogrammarians' method reaching its limits, which then prompted the study of the unity of signifier and signified; (2) the simplification and overstatement of structural thought in the Cours, published in 1916 by Charles Bally and Albert Sechehaye; and (3) the preparation of the reception of semiological thought by several parallel works.
APA, Harvard, Vancouver, ISO, and other styles
2

Assayag, Jackie. "Homo Hierarchicus, Homo Symbolicus Approche structurale ou herméneutique en anthropologie sociale (de l'Inde) (note critique)." Annales. Histoire, Sciences Sociales 49, no. 1 (February 1994): 133–49. http://dx.doi.org/10.3406/ahess.1994.279249.

Full text
Abstract:
« Before everything else, without caste there is no Hindu » (Max Weber, The Religion in India, 1909). « Modern India, having created a caste of chauffeurs from the menials who tend motorcars, is almost ripe for a Rolls Royce caste rejecting food or marriage with the Fords » (R. E. Enthoven, quoted by J. H. Hutton, Caste in India. Its Nature, Function and Origin, 1946). One can safely wager that this book by C. J. Fuller will be a landmark, not because of its discoveries or the novelty of its claims (to which, moreover, it does not lay claim), but because it offers a luminous anthropological synthesis of the Indian world. With a lively pen and consistently clear ideas, it integrates almost all the ethnological and indological work that has mattered over the past twenty-five years, mainly work situated at the confluence of those two fields, which the author considers the only heuristic ones, and of which this book can be regarded as the culmination. Never yielding to simplification or to the temptation of a catalogue of reading notes, C. J. Fuller scrupulously avoids sacrificing, on the altar of synthetic ambition, the exposition of regional cases that are at once circumscribed and emblematic.
APA, Harvard, Vancouver, ISO, and other styles
3

LI, TANPING, JUN WANG, KE FAN, and WEI WANG. "HOW SIMPLE CAN THE PROTEINS BE: FROM THE PREDICTION OF THE CLASSES OF PROTEIN STRUCTURES." Modern Physics Letters B 17, no. 05n06 (March 10, 2003): 245–52. http://dx.doi.org/10.1142/s0217984903005159.

Full text
Abstract:
The validity of complexity simplifications for proteins with different structural features may be different. In this paper, the simplification for proteins is studied using the ratios of successful prediction of structural class under a presumed amino-acid-grouping scheme with a composition-coupled method. It is found that for the α-class proteins, a two-letter alphabet may cover the degree of freedom to characterize the complexity of the class; for the β-class proteins, a 7-letter alphabet might indicate the minimal number of residue types to reconstruct the class feature of the natural proteins; for the α + β-class proteins and the α/β-class proteins, the redundancy of the compositions is weak and the simplification leads to a great loss of the information related to the corresponding structural classes.
APA, Harvard, Vancouver, ISO, and other styles
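The reduced-alphabet idea in the abstract above can be illustrated with the classic two-letter hydrophobic/polar (HP) grouping. The grouping below is a common textbook choice and only an assumption for illustration; the composition-coupled grouping scheme actually used in the paper may differ.

```python
# Project a protein sequence onto a two-letter hydrophobic (H) /
# polar (P) alphabet. The set of hydrophobic residues is one common
# grouping, assumed here for illustration only.
HYDROPHOBIC = set("AVLIMFWCY")

def reduce_sequence(seq: str) -> str:
    """Map each amino acid to 'H' or 'P' under the assumed grouping."""
    return "".join("H" if aa in HYDROPHOBIC else "P" for aa in seq.upper())

if __name__ == "__main__":
    print(reduce_sequence("MKTAYIAKQR"))  # -> HPPHHHHPPP
```

A larger reduced alphabet (e.g. the 7-letter one the paper suggests for β-class proteins) would simply use a finer partition of the 20 residue types.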
4

Müller, Adeline, Isabelle Clerc, and Thomas François. "Plain language practices of professional writers in Quebec." Discourse and Writing/Rédactologie 31 (May 6, 2021): 49–74. http://dx.doi.org/10.31468/dwr.849.

Full text
Abstract:
This article investigates the plain language practices of professional writers in Quebec, using a survey. We contacted 55 professional writers and asked them to complete an online survey about how they apply plain language in their work and the type of writing assistance they would find useful. We also asked 40 of those writers to carry out a simplification task to see what kinds of simplifications they were actually making. While the writers' feelings about the realities of their work are in line with the literature, their opinions on plain language guidelines are not: most writers in our survey find the guidelines useful and precise enough, which contrasts with reported criticisms of such guides. In the simplification task, we noticed that writers focus on the overall understanding of the text, and not only on certain linguistic characteristics (as emphasized in plain language guidelines). The more experienced the writer, the more changes they make to visual/structural aspects or relational efficiency. Putting the focus on the reader's needs is their main concern.
APA, Harvard, Vancouver, ISO, and other styles
5

Issidorides, Diana C. "Comprehensie van Vreemdtalige Input." Taalverwerving in onderzoek 30 (January 1, 1988): 21–30. http://dx.doi.org/10.1075/ttwia.30.03iss.

Full text
Abstract:
Within a psycholinguistic approach to second language learning, an attempt is made to investigate the question of how morphology, syntax (word order phenomena), semantics and pragmatics affect the comprehension of Dutch sentences for nonnative learners of that language. When talking to nonnative language-learners, native speakers often tend to deliberately modify their speech ('simplify' it) in an attempt to make the target language more comprehensible. Omitting semantically redundant function words and copulas, or deliberately modifying the word order in a sentence, are but a few characteristics of such 'simplifications'. In trying to determine whether, and what kinds of, linguistic simplifications promote comprehension, an important theoretical issue arises, namely, the relationship between linguistic (structural) and cognitive (ease of information processing) simplification. That one form of simplification is by no means a guarantee for the other is an important assumption that forms the backbone of our approach. The results from research on morphological simplifications (omission of redundant function words in utterances) in two parallel experiments, one with an artificial language and one with a natural language (Dutch), are discussed. They suggest that the presence of semantically redundant function words is not experienced as bothersome "noise" in the successful inference of the meaning of unfamiliar utterances, as long as suprasegmental cues are present. The suprasegmental structure provides the listener/learner with cues for locating the potentially meaningful elements of such utterances. Research on syntactic simplifications is also discussed. Its aim was to examine the role and effect of syntactic and semantic cues on sentence interpretation. Two important questions were: (a) What are the processing strategies and cues responsible for the interpretation of Dutch sentences by native speakers, and how do they compare to those employed by nonnative speakers? (b) Are the processing strategies and cues that are responsible and decisive for first language comprehension also those employed in second language comprehension? The performance of Dutch control subjects on a Dutch sentence interpretation task is presented, and hypotheses are put forward as to the locus and cause of eventual performance differences in a nonnative subject population (English learners of Dutch). Some relevant theoretical implications of our findings are also mentioned.
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Shengzheng, Guoqiang Dong, and Chunquan Sheng. "Structural Simplification of Natural Products." Chemical Reviews 119, no. 6 (February 7, 2019): 4180–220. http://dx.doi.org/10.1021/acs.chemrev.8b00504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Atrek, Erdal. "Theorems of structural variation: A simplification." International Journal for Numerical Methods in Engineering 21, no. 3 (March 1985): 481–85. http://dx.doi.org/10.1002/nme.1620210308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhai, Renjian, Anping Li, Jichong Yin, Jiawei Du, and Yue Qiu. "A Progressive Simplification Method for Buildings Based on Structural Subdivision." ISPRS International Journal of Geo-Information 11, no. 7 (July 12, 2022): 393. http://dx.doi.org/10.3390/ijgi11070393.

Full text
Abstract:
Building simplification is an important research area in automatic map generalization, and many approaches have been proposed. However, in the continuous transformation of scales for buildings, keeping the main shape characteristics, area, and orthogonality of buildings remains the key difficulty. This paper therefore proposes a progressive simplification method for buildings based on structural subdivision. Iterative simplification is adopted, which transforms building simplification into the repeated simplification of the minimum detail of a building outline. First, a top priority structure (TPS) is determined, which represents the smallest detail in the outline of the building. Next, according to orthogonality and concave–convex characteristics, the TPS is classified into 62 subdivisions, which cover the local structures of the building polygon, and these subdivisions are grouped into four simplification types. The building is simplified by continuously eliminating the TPS, retaining the right-angle characteristics and area as much as possible, until the result satisfies the constraints and rules of simplification. A topographic dataset (1:1K) collected from Kadaster was used for the experiments. To evaluate the algorithm, many tests were undertaken, including multi-scale simplification and simplification of typical buildings, which indicate that this method can realize multi-scale presentation of buildings. Compared with existing simplification methods, the results show that the proposed method can simplify buildings effectively and has certain advantages in keeping shape characteristics, area, and rectangularity.
APA, Harvard, Vancouver, ISO, and other styles
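The iterative "eliminate the smallest detail" idea in the abstract above can be sketched with a generic outline-simplification routine that repeatedly removes the interior vertex contributing the smallest triangle area (Visvalingam–Whyatt simplification). This is an analogue of the approach, not the paper's TPS method, which additionally classifies local structures and preserves right angles and building area.

```python
# Progressive outline simplification: repeatedly remove the interior
# vertex whose triangle with its two neighbours has the smallest area,
# until every remaining detail contributes at least `min_area`.
# Generic sketch only; the paper's TPS-based method differs.

def triangle_area(a, b, c):
    """Area of the triangle spanned by points a, b, c."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def simplify_outline(points, min_area):
    """Drop the smallest-detail vertex until all details are significant."""
    pts = list(points)
    while len(pts) > 3:
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        smallest = min(range(len(areas)), key=areas.__getitem__)
        if areas[smallest] >= min_area:
            break  # every remaining detail is significant
        del pts[smallest + 1]
    return pts
```

For example, `simplify_outline([(0, 0), (1, 0.01), (2, 0), (3, 1)], 0.1)` removes the near-collinear vertex `(1, 0.01)` and keeps the rest.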
9

Du, Shihong. "Analyzing topological changes for structural shape simplification." Journal of Visual Languages & Computing 25, no. 4 (August 2014): 316–32. http://dx.doi.org/10.1016/j.jvlc.2013.12.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Vermeiren, Jonathan, Selwyn L. Y. Villers, Lieve Wittemans, Wendy Vanlommel, Jeroen van Roy, Herman Marien, Jonas R. Coussement, and Kathy Steppe. "Quantifying the importance of a realistic tomato (Solanum lycopersicum) leaflet shape for 3-D light modelling." Annals of Botany 126, no. 4 (December 16, 2019): 661–70. http://dx.doi.org/10.1093/aob/mcz205.

Full text
Abstract:
Background and Aims Leaflet shapes of tomato plants (Solanum lycopersicum) have been reduced to simple geometric shapes in previous functional–structural plant models (FSPMs) in order to facilitate measurements and reduce the time required to reconstruct the plant virtually. The level of error that such simplifications introduce remains unaddressed. This study therefore aims to quantify the modelling error associated with simplifying leaflet shapes. Methods Realistic shapes were implemented in a static tomato FSPM based on leaflet scans, and simulation results were compared to simple geometric shapes used in previous tomato FSPMs in terms of light absorption and gross photosynthesis, for both a single plant and a glasshouse scenario. Key Results The effect of simplifying leaflet shapes in FSPMs leads to small but significant differences in light absorption, alterations of canopy light conditions and differences in photosynthesis. The magnitude of these differences depends on both the type of leaflet shape simplification used and the canopy shape and density. Incorporation of realistic shapes requires a small increase in initial measurement and modelling work to establish a shape database and comes at the cost of a slight increase in computation time. Conclusions Our findings indicate that the error associated with leaflet shape simplification is small, but often unpredictable, and is affected by plant structure but also lamp placement, which is often a primary optimization goal of these static models. Assessment of the cost–benefit of realistic shape inclusion shows relatively little drawbacks for a decrease in model uncertainty.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Simplification structurale"

1

Farjon, Jonathan. "Nouvelles méthodologies R. M. N. En milieu liquide chiral : contribution à l'analyse stéréochimique en chimie structurale organique." Paris 11, 2003. http://www.theses.fr/2003PA112137.

Full text
Abstract:
This thesis concerns various methodological developments and their applications in NMR in chiral liquid crystal solvents. We first showed that experiments using selective pulses allow information to be extracted from proton and carbon-13 spectra that is inaccessible in standard one-dimensional spectra. We then used other techniques to measure as many dipolar couplings as possible in order to specify, unambiguously, the relative configurations of stereogenic centres in rigid molecules. We showed that for the steroid DHEA, only one relative geometry of the substituents was compatible with all the measured dipolar couplings and led to an acceptable standard deviation between calculated and experimental dipolar couplings. This method is extremely promising because it is much less ambiguous than the traditional methods used in isotropic media. Finally, we extended this method to flexible molecules, where the problem becomes more complicated because of the total correlation between molecular conformation and orientational order. The number of parameters to be iterated explodes rapidly, whereas the number of measurables remains constant, so a general treatment is impossible. Taking 1,2-dibromopropane as an example, we showed that the problem is manageable, provided a parametric model is used to calculate the average value of the order parameters for each conformation. The parameters to be iterated are then much fewer, and the problem can be treated. Thus, we showed that, with four bond indices and the geometries of the different conformers obtained by ab initio calculations, we could obtain excellent agreement between the calculated and experimental dipolar couplings, and determine without ambiguity the relative configuration of the methylenic protons with respect to the configuration of the chiral centre.
APA, Harvard, Vancouver, ISO, and other styles
2

Amrane, Dyhia. "Pharmacomodulation d'hétérocycles α-trichlorométhylés ciblant l'apicoplaste chez P. falciparum." Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0379.

Full text
Abstract:
Malaria remains the leading cause of death among parasitic infections worldwide. There are currently major concerns about the spread of resistance to the artemisinin derivatives that are the basis of first-line antimalarial treatment, so there is an urgent need to develop new antiplasmodial molecules with a novel mechanism of action. To this end, our laboratory previously described the synthesis and biological activities of a chemical library of α-trichloromethylated azaheterocycles, including a hit molecule in the quinazoline series that presents the best biological profile. The first part of this work focused on pharmacomodulation of the 4-carboxamidoquinazoline series. To complete the structure-activity relationship study, a scaffold-hopping strategy provided new compounds in the quinoxaline and phthalazine series, and structural simplification yielded new compounds in the pyrimidine, pyridazine and pyrazine series. Finally, to explore the benzene part of the quinazoline and quinoxaline rings, new derivatives in the thienopyrimidine and pyrido[2,3-b]pyrazine series were also synthesized. Among the more than 110 new original molecules obtained, several new hit molecules were identified. Their physicochemical and in vitro pharmacokinetic properties were determined in order to identify a candidate for in vivo evaluation on Plasmodium berghei. In addition, to elucidate the mechanism of action of these compounds, which differs from those of commercial antimalarials, we recently identified by immunofluorescence that these molecules target the apicoplast of P. falciparum, an organelle essential to parasite survival.
APA, Harvard, Vancouver, ISO, and other styles
3

Claici, Sebastian. "Structure as simplification : transportation tools for understanding data." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127014.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 169-187).
The typical machine learning algorithm looks for a pattern in data and assumes that the pattern's signal-to-noise ratio is high. This approach depends strongly on the quality of the datasets these algorithms operate on, and many complex algorithms fail in spectacular fashion on simple tasks by overfitting noise or outlier examples. These algorithms have training procedures that scale poorly in the size of the dataset, and their outputs are difficult to interpret. This thesis proposes solutions to both problems by leveraging the theory of optimal transport and proposing efficient algorithms for: (1) quantization, with extensions to the Wasserstein barycenter problem and a link to the classical coreset problem; (2) natural language processing, where the hierarchical structure of text allows us to compare documents efficiently; (3) Bayesian inference, where we can impose a hierarchy on the label switching problem to resolve ambiguities.
by Sebastian Claici.
APA, Harvard, Vancouver, ISO, and other styles
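The thesis summarized above builds on optimal transport. As a minimal, generic illustration of that machinery (not one of the thesis's own algorithms), the one-dimensional 1-Wasserstein distance between two equal-size empirical distributions reduces to matching sorted samples:

```python
# 1-D p=1 Wasserstein (earth mover's) distance between two empirical
# distributions with the same number of samples: sort both sample
# sets and average the absolute differences of matched pairs.
def wasserstein_1d(xs, ys):
    """W1 distance between equal-size empirical distributions on the line."""
    if len(xs) != len(ys):
        raise ValueError("expected equal-size sample sets")
    return sum(abs(x - y) for x, y in zip(sorted(xs), sorted(ys))) / len(xs)
```

For instance, shifting every sample by one unit gives a distance of exactly 1, while any reordering of identical samples gives 0.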
4

Zupan, Alexander Martin. "Thin position, bridge structure, and monotonic simplification of knots." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/3420.

Full text
Abstract:
Since its inception, the notion of thin position has played an important role in low-dimensional topology. Thin position for knots in the 3-sphere was first introduced by David Gabai in order to prove the Property R Conjecture. In addition, this theory factored into Cameron Gordon and John Luecke's proof of the knot complement problem and revolutionized the study of Heegaard splittings upon its adaptation by Martin Scharlemann and Abigail Thompson. Let h be a Morse function from the 3-sphere to the real numbers with two critical points. Loosely, thin position of a knot K in the 3-sphere is a particular embedding of K which minimizes the total number of intersections with a maximal collection of regular level sets, where this number of intersections is called the width of the knot. Although not immediately obvious, it has been demonstrated that there is a close relationship between a thin position of a knot K and essential meridional planar surfaces in its exterior E(K). In this thesis, we study the nature of thin position under knot companionship; namely, for several families of knots we establish a lower bound for the width of a satellite knot based on the width of its companion and the wrapping or winding number of its pattern. For one such class of knots, cable knots, in addition to finding thin position for these knots, we establish a criterion under which non-minimal bridge positions of cable knots are stabilized. Finally, we exhibit an embedding of the unknot whose width must be increased before it can be simplified to thin position.
APA, Harvard, Vancouver, ISO, and other styles
5

Chebaro, Omar. "Classification de menaces d'erreurs par analyse statique, simplification syntaxique et test structurel de programmes." Phd thesis, Université de Franche-Comté, 2011. http://tel.archives-ouvertes.fr/tel-00839151.

Full text
Abstract:
Software validation is a crucial part of the software development cycle. Two verification and validation techniques have stood out in recent years: static analysis and dynamic analysis. Their strengths and weaknesses are complementary. This thesis presents an original combination of the two: static analysis reports the statements that risk causing runtime errors, as alarms, some of which may be false alarms, and dynamic analysis (test generation) is then used to confirm or reject these alarms. The objective of this thesis is to make the search for errors automatic, more precise, and more time-efficient. Applied to large programs, test generation may run out of time or memory before confirming certain alarms as real errors, or before concluding that no execution path can reach the error state of certain alarms and hence rejecting them. To overcome this problem, we propose reducing the size of the source code by slicing before launching test generation. Slicing transforms a program into another, simpler program, called a slice, that is equivalent to the initial program with respect to certain criteria. Four usages of slicing are studied. The first, named all, applies slicing once, the simplification criterion being the set of all alarms detected in the program by static analysis. Its drawback is that test generation may run out of time or memory, and the alarms that are easiest to classify are penalized by the analysis of other, more complex alarms. In the second usage, named each, slicing is carried out separately with respect to each alarm. However, test generation is then run for each program, and there is a risk of redundant analysis if some alarms are included in other slices. To overcome these drawbacks, we studied the dependencies between alarms and introduced two advanced usages of slicing, named min and smart, that exploit them. In the min usage, slicing is carried out with respect to a minimal set of subsets of alarms. These subsets are chosen according to the dependencies between alarms, and their union covers the set of all alarms. With this usage, there are fewer slices than with each, and simpler slices than with all. However, the dynamic analysis of some slices may run out of time or memory before classifying certain alarms, whereas the dynamic analysis of a possibly simpler slice would allow them to be classified. The smart usage applies the previous usage iteratively, reducing the size of the subsets when necessary: when an alarm cannot be classified by the dynamic analysis of a slice, simpler slices are computed. We prove the correctness of the proposed method. This work is implemented in sante, our tool linking the test generation tool PathCrawler and the static analysis platform Frama-C. Experiments have shown, first, that our combination performs better than each technique used independently and, second, that verification becomes faster with the use of slicing. Moreover, simplifying the program by slicing makes the detected errors and the remaining alarms easier to analyse.
APA, Harvard, Vancouver, ISO, and other styles
6

Chebaro, Omar. "Classification de menaces d’erreurs par analyse statique, simplification syntaxique et test structurel de programmes." Thesis, Besançon, 2011. http://www.theses.fr/2011BESA2021/document.

Full text
Abstract:
La validation des logiciels est une partie cruciale dans le cycle de leur développement. Deux techniques de vérification et de validation se sont démarquées au cours de ces dernières années : l’analyse statique et l’analyse dynamique. Les points forts et faibles des deux techniques sont complémentaires. Nous présentons dans cette thèse une combinaison originale de ces deux techniques. Dans cette combinaison, l’analyse statique signale les instructions risquant de provoquer des erreurs à l’exécution, par des alarmes dont certaines peuvent être de fausses alarmes, puis l’analyse dynamique (génération de tests) est utilisée pour confirmer ou rejeter ces alarmes. L’objectif de cette thèse est de rendre la recherche d’erreurs automatique, plus précise, et plus efficace en temps. Appliquée à des programmes de grande taille, la génération de tests, peut manquer de temps ou d’espace mémoire avant de confirmer certaines alarmes comme de vraies erreurs ou conclure qu’aucun chemin d’exécution ne peut atteindre l’état d’erreur de certaines alarmes et donc rejeter ces alarmes. Pour surmonter ce problème, nous proposons de réduire la taille du code source par le slicing avant de lancer la génération de tests. Le slicing transforme un programme en un autre programme plus simple, appelé slice, qui est équivalent au programme initial par rapport à certains critères. Quatre utilisations du slicing sont étudiées. La première utilisation est nommée all. Elle consiste à appliquer le slicing une seule fois, le critère de simplification étant l’ensemble de toutes les alarmes du programme qui ont été détectées par l’analyse statique. L’inconvénient de cette utilisation est que la génération de tests peut manquer de temps ou d’espace et les alarmes les plus faciles à classer sont pénalisées par l’analyse d’autres alarmes plus complexes. Dans la deuxième utilisation, nommée each, le slicing est effectué séparément par rapport à chaque alarme. 
Cependant, la génération de tests est exécutée pour chaque programme et il y a un risque de redondance d’analyse si des alarmes sont incluses dans d’autres slices. Pour pallier ces inconvénients, nous avons étudié les dépendances entre les alarmes et nous avons introduit deux utilisations avancées du slicing, nommées min et smart, qui exploitent ces dépendances. Dans l’utilisation min, le slicing est effectué par rapport à un ensemble minimal de sous-ensembles d’alarmes. Ces sous-ensembles sont choisis en fonction de dépendances entre les alarmes et l’union de ces sous-ensembles couvre l’ensemble de toutes les alarmes. Avec cette utilisation, on a moins de slices qu’avec each, et des slices plus simples qu’avec all. Cependant, l’analyse dynamique de certaines slices peut manquer de temps ou d’espace avant de classer certaines alarmes, tandis que l’analyse dynamique d’une slice éventuellement plus simple permettrait de les classer. L’utilisation smart consiste à appliquer l’utilisation précédente itérativement en réduisant la taille des sous-ensembles quand c’est nécessaire. Lorsqu’une alarme ne peut pas être classée par l’analyse dynamique d’une slice, des slices plus simples sont calculées. Nous prouvons la correction de la méthode proposée. Ces travaux sont implantés dans sante, notre outil qui relie l’outil de génération de tests PathCrawler et la plate-forme d’analyse statique Frama-C. Des expérimentations ont montré, d’une part, que notre combinaison est plus performante que chaque technique utilisée indépendamment et, d’autre part, que la vérification devient plus rapide avec l’utilisation du slicing. De plus, la simplification du programme par le slicing rend les erreurs détectées et les alarmes restantes plus faciles à analyser
Software validation remains a crucial part of the software development process. Two major techniques have improved in recent years: dynamic and static analysis. They have complementary strengths and weaknesses. We present in this thesis a new, original combination of these methods to make the search for runtime errors more accurate and automatic and to reduce the number of false alarms. We also prove the correctness of the method. In this combination, static analysis reports alarms of runtime errors, some of which may be false alarms, and test generation is used to confirm or reject these alarms. When applied to large programs, test generation may lack time or space before confirming certain alarms as real bugs or finding that some alarms are unreachable. To overcome this problem, we propose to reduce the source code by program slicing before running test generation. Program slicing transforms a program into another, simpler program, which is equivalent to the original program with respect to a certain criterion. Four usages of program slicing were studied. The first usage is called all. It applies slicing only once, the simplification criterion being the set of all alarms in the program. The disadvantage of this usage is that test generation may lack time or space, and alarms that are easier to classify are penalized by the analysis of other, more complex alarms. In the second usage, called each, program slicing is performed with respect to each alarm separately. However, test generation is executed for each sliced program and there is a risk of redundancy if some alarms are included in many slices. To overcome these drawbacks, we studied dependencies between alarms, on which we rely to introduce two advanced usages of program slicing: min and smart. In the min usage, slicing is performed with respect to subsets of alarms. These subsets are selected based on dependencies between alarms, and the union of these subsets covers the whole set of alarms. 
With this usage, we analyze fewer slices than with each, and simpler slices than with all. However, the dynamic analysis of some slices may lack time or space before classifying some alarms, while the dynamic analysis of a simpler slice might classify them. The smart usage applies the previous usage iteratively, reducing the size of the subsets when necessary. When an alarm cannot be classified by the dynamic analysis of a slice, simpler slices are calculated. These works are implemented in sante, our tool that combines the test generation tool PathCrawler and the static analysis platform Frama-C. Experiments have shown, firstly, that our combination is more effective than each technique used separately and, secondly, that the verification is faster after reducing the code with program slicing. Simplifying the program by program slicing also makes the detected errors and the remaining alarms easier to analyze.
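The min usage described in this abstract amounts to covering the whole alarm set with as few slicing criteria as possible. A minimal sketch, assuming a hypothetical closure map recording which alarms the slice for a given alarm is expected to classify (a greedy set cover, not the thesis's exact algorithm):

```python
def min_slicing_criteria(closure):
    """Greedy choice of slicing criteria covering all alarms.

    closure[a] = set of alarms expected to be classified by analysing
    the slice computed with respect to alarm a (always contains a).
    """
    uncovered = set(closure)
    chosen = []
    while uncovered:
        # pick the alarm whose slice classifies the most uncovered alarms
        best = max(closure, key=lambda a: len(closure[a] & uncovered))
        chosen.append(best)
        uncovered -= closure[best]
    return chosen
```

The smart usage would then re-run this selection on smaller subsets whenever the dynamic analysis of a chosen slice runs out of time or memory.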
APA, Harvard, Vancouver, ISO, and other styles
7

Wandji, Tchami Ornella. "Analyse contrastive des verbes dans des corpus médicaux et création d’une ressource verbale de simplification de textes." Thesis, Lille 3, 2018. http://www.theses.fr/2018LIL3H015/document.

Abstract:
Grâce à l’évolution de la technologie à travers le Web, la documentation relative à la santé est de plus en plus abondante et accessible à tous, plus particulièrement aux patients, qui ont ainsi accès à une panoplie d’informations sanitaires. Malheureusement, la grande disponibilité de l’information médicale ne garantit pas sa bonne compréhension par le public visé, en l’occurrence les non-experts. Notre projet de thèse a pour objectif la création d’une ressource de simplification de textes médicaux, à partir d’une analyse syntaxico-sémantique des verbes dans quatre corpus médicaux en français qui se distinguent par le degré d’expertise de leurs auteurs et celui des publics cibles. La ressource conçue contient 230 patrons syntaxico-sémantiques des verbes (appelés pss), alignés avec leurs équivalents non spécialisés. La méthode semi-automatique d’analyse des verbes appliquée pour atteindre notre objectif est basée sur quatre tâches fondamentales : l’annotation syntaxique des corpus, réalisée grâce à l’analyseur syntaxique Cordial (Laurent et al., 2009) ; l’annotation sémantique des arguments des verbes, à partir des catégories sémantiques de la version française de la terminologie médicale Snomed Internationale (Côté, 1996) ; l’acquisition des patrons syntaxico-sémantiques et l’analyse contrastive du fonctionnement des verbes dans les différents corpus. Les patrons syntaxico-sémantiques des verbes acquis au terme de ce processus subissent une évaluation (par trois équipes d’experts en médecine) qui débouche sur la sélection des candidats constituant la nomenclature de la ressource de simplification. Les pss sont ensuite alignés avec leurs correspondants non spécialisés ; cet alignement débouche sur la création de la ressource de simplification, qui représente le résultat principal de notre travail de thèse. Une évaluation du rendement du contenu de la ressource a été effectuée avec deux groupes d’évaluateurs : des linguistes et des non-linguistes. 
Les résultats montrent que la simplification des pss permet de faciliter la compréhension du sens du verbe en emploi spécialisé, surtout lorsque certains paramètres sont réunis.
With the evolution of Web technology, healthcare documentation is becoming increasingly abundant and accessible to all, especially to patients, who have access to a large amount of health information. Unfortunately, the ease of access to medical information does not guarantee its correct understanding by the intended audience, in this case non-experts. Our PhD work aims at creating a resource for the simplification of medical texts, based on a syntactico-semantic analysis of verbs in four French medical corpora, which are distinguished according to the level of expertise of their authors and that of the target audiences. The resource created in the present thesis contains 230 syntactico-semantic patterns of verbs (called pss), aligned with their non-specialized equivalents. The semi-automatic method applied for the analysis of verbs in order to achieve our goal is based on four fundamental tasks: the syntactic annotation of the corpora, carried out with the Cordial parser (Laurent et al., 2009); the semantic annotation of verb arguments, based on the semantic categories of the French version of a medical terminology known as Snomed International (Côté, 1996); the acquisition of syntactico-semantic patterns of verbs; and the contrastive analysis of verb behaviour in the different corpora. The pss acquired at the end of this process undergo an evaluation (by three teams of medical experts) which leads to the selection of the candidates constituting the nomenclature of our text simplification resource. These pss are then aligned with their non-specialized equivalents; this alignment leads to the creation of the simplification resource, which is the main result of our PhD study. The content of the resource was evaluated by two groups of people: linguists and non-linguists. The results show that the simplification of pss makes it easier for non-experts to understand the meaning of verbs used in a specialized way, especially when certain parameters are met.
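The pss-to-equivalent alignment described above can be pictured as a lookup keyed by a verb and the semantic categories of its arguments. A toy sketch with invented entries (the real resource contains 230 expert-validated patterns; these verbs and category labels are hypothetical):

```python
# Hypothetical miniature of the simplification resource: each pss
# (verb + semantic categories of its arguments) maps to a lay equivalent.
PSS_RESOURCE = {
    ("administrer", ("MEDICAMENT", "PATIENT")): "donner",
    ("exciser", ("STRUCTURE_CORPORELLE",)): "enlever en coupant",
}

def simplify_verb(verb, arg_categories):
    """Return the non-specialized equivalent of a verb pattern, if any;
    fall back to the original verb when no pattern matches."""
    return PSS_RESOURCE.get((verb, tuple(arg_categories)), verb)
```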
8

Anquez, Pierre. "Correction et simplification de modèles géologiques par frontières : impact sur le maillage et la simulation numérique en sismologie et hydrodynamique." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0069/document.

Abstract:
Les modèles géologiques numériques 2D et 3D permettent de comprendre l'organisation spatiale des roches du sous-sol. Ils sont également conçus pour réaliser des simulations numériques afin d’étudier ou de prédire le comportement physique du sous-sol. Pour résoudre les équations qui gouvernent les phénomènes physiques, les structures internes des modèles géologiques peuvent être discrétisées spatialement à l’aide de maillages. Cependant, la qualité des maillages peut être considérablement altérée à cause de l’inadéquation entre, d’une part, la géométrie et la connectivité des objets géologiques à représenter et, d’autre part, les contraintes requises sur le nombre, la forme et la taille des éléments des maillages. Dans ce cas, il est souhaitable de modifier un modèle géologique afin de pouvoir générer des maillages de bonne qualité permettant la réalisation de simulations physiques fidèles en un temps raisonnable. Dans cette thèse, j’ai développé des stratégies de réparation et de simplification de modèles géologiques 2D dans le but de faciliter la génération de maillages et la simulation de processus physiques sur ces modèles. Je propose des outils permettant de détecter les éléments des modèles qui ne respectent pas le niveau de détail et les prérequis de validité spécifiés. Je présente une méthode pour réparer et simplifier des coupes géologiques de manière locale, limitant ainsi l’extension des modifications. Cette méthode fait appel à des opérations d’édition de la géométrie et de la connectivité des entités constitutives des modèles géologiques. Deux stratégies sont ainsi explorées : modifications géométriques (élargissements locaux de l'épaisseur des couches) et modifications topologiques (suppressions de petites composantes et fusions locales de couches fines). Ces opérations d’édition produisent un modèle sur lequel il est possible de générer un maillage et de réaliser des simulations numériques plus rapidement. 
Cependant, la simplification des modèles géologiques conduit inévitablement à la modification des résultats des simulations numériques. Afin de comparer les avantages et les inconvénients des simplifications de modèles sur la réalisation de simulations physiques, je présente trois exemples d'application de cette méthode : (1) la simulation de la propagation d'ondes sismiques sur une coupe au sein du bassin houiller lorrain, (2) l’évaluation des effets de site liés à l'amplification des ondes sismiques dans le bassin de la basse vallée du Var, et (3) la simulation d'écoulements fluides dans un milieu poreux fracturé. Je montre ainsi (1) qu'il est possible d’utiliser les paramètres physiques des simulations, la résolution sismique par exemple, pour contraindre la magnitude des simplifications et limiter leur impact sur les simulations numériques, (2) que ma méthode de simplification de modèles permet de réduire drastiquement le temps de calcul de simulations numériques (jusqu’à un facteur 55 sur une coupe 2D dans le cas de l’étude des effets de site) tout en conservant des réponses physiques équivalentes, et (3) que les résultats de simulations numériques peuvent être modifiés en fonction de la stratégie de simplification employée (en particulier, la modification de la connectivité d’un réseau de fractures peut modifier les écoulements fluides et ainsi surestimer ou sous-estimer la quantité des ressources produites).
Numerical geological models help to understand the spatial organization of the subsurface. They are also designed to perform numerical simulations to study or predict the rocks physical behavior. The internal structures of geological models are commonly discretized using meshes to solve the physical governing equations. The quality of the meshes can be, however, considerably degraded due to the mismatch between, on the one hand, the geometry and the connectivity of the geological objects to be discretized and, on the other hand, the constraints imposed on number, shape and size of the mesh elements. As a consequence, it may be desirable to modify a geological model in order to generate good quality meshes that allow realization of reliable physical simulations in a reasonable amount of time. In this thesis, I developed strategies for repairing and simplifying 2D geological models, with the goal of easing mesh generation and simulation of physical processes on these models. I propose tools to detect model elements that do not meet the specified validity and level of detail requirements. I present a method to repair and simplify geological cross-sections locally, thus limiting the extension of modifications. This method uses operations to edit both the geometry and the connectivity of the geological model features. Two strategies are thus explored: geometric modifications (local enlargements of the layer thickness) and topological modifications (deletions of small components and local fusions of thin layers). These editing operations produce a model on which it is possible to generate a mesh and to realize numerical simulations more efficiently. But the simplifications of geological models inevitably lead to the modification of the numerical simulation results. 
To compare the advantages and disadvantages of model simplifications on the physical simulations, I present three applications of the method: (1) the simulation of seismic wave propagation on a cross-section within the Lorraine coal basin, (2) the evaluation of site effects related to seismic wave amplification in the basin of the lower Var river valley, and (3) the simulation of fluid flows in a fractured porous medium. I show that (1) it is possible to use the physical simulation parameters, such as the seismic resolution, to constrain the magnitude of the simplifications and to limit their impact on the numerical simulations, (2) my method of model simplification is able to drastically reduce the computation time of numerical simulations (up to a factor of 55 in the site effects case study) while preserving an equivalent physical response, and (3) the results of numerical simulations can change depending on the simplification strategy employed (in particular, changing the connectivity of a fracture network can lead to a modification of fluid flow paths and an overestimation or underestimation of the quantity of produced resources).
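The two editing strategies described in this abstract, geometric widening and topological fusion of thin layers, can be illustrated on a toy 1-D column of layer thicknesses. This is a hypothetical simplification of the 2D operations, not the thesis's implementation:

```python
def simplify_layers(thicknesses, min_thickness, strategy="merge"):
    """Toy 1-D analogue of the two editing strategies: widen thin
    layers (geometric) or fuse them into a neighbour (topological)."""
    simplified = []
    for t in thicknesses:
        if t >= min_thickness:
            simplified.append(t)
        elif strategy == "widen":          # geometric enlargement
            simplified.append(min_thickness)
        elif simplified:                   # topological fusion into the
            simplified[-1] += t            # previous (thicker) layer
        else:
            simplified.append(t)
    return simplified
```

Either strategy removes features smaller than the target mesh element size, which is what makes good-quality meshing possible afterwards.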
9

TERLIZZI, VANESSA. "Applications of innovative materials, GFRP and structural adhesives, for the curtain wall: technological and performance verification." Doctoral thesis, Università Politecnica delle Marche, 2018. http://hdl.handle.net/11566/252565.

Abstract:
L’obiettivo del presente lavoro è verificare l’applicabilità di materiali innovativi, quali compositi (GFRP - Glass Fibre Reinforced Polymer) e colle strutturali, per la realizzazione di facciate continue ad alte prestazioni meccaniche e termiche e a basso impatto ambientale. Tale obiettivo è stato verificato anche tramite l’applicazione del principio della “Semplificazione tecnologica” che rappresenta il filo conduttore alla base dello studio e delle sperimentazioni svolte dal gruppo di ricerca, coordinato dal Prof. P.Munafò, che ha sviluppato il brevetto “Sistema per la realizzazione di facciate di edifici” (n.102015000087569) di cui il Professore è inventore. Con tale filosofia di approccio è possibile realizzare componenti edilizi altamente prestazionali e semplici nella loro concezione essendo costituiti con un numero limitato di pezzi implicando così un minor consumo di energia nella produzione, assemblaggio, manutenzione e smaltimento del prodotto, classificandolo quindi come eco-sostenibile. In questa tesi viene verificata la fattibilità di un sistema costruttivo per la realizzazione di facciate continue per edifici studiando preventivamente, con test sperimentali e analisi sul ciclo di vita dei componenti, le prestazioni meccaniche dei profili in GFRP e degli adesivi strutturali in condizioni di invecchiamento accelerato (durabilità) e non, e l’interazione del componente edilizio con l’ambiente, dalla produzione alla dismissione finale (LCA - Life Cycle Assessment). I metodi principalmente usati in questo studio sono di tipo sperimentale al fine di testare le proprietà meccaniche dei materiali, in condizioni ambientali e dopo invecchiamento (accelerato in camera climatica ad elevata umidità e temperatura (ISO 6270-2) e sotto esposizione ai raggi UV (ASTM D904–99)). 
In seguito ai singoli test di invecchiamento precedentemente citati, sono state condotte ulteriori sperimentazioni riguardanti il trattamento di campioni in condizioni di invecchiamento combinato (camera climatica ed esposizione ai raggi UV - Tcc+Tuv - e viceversa - Tuv+Tcc -). Al fine di validare i risultati ottenuti dalle sperimentazioni effettuate sono stati eseguiti test numerici e analitici. Il risultato più significativo è dato proprio dalla validazione dell’idea brevettuale, dimostrando la possibilità di industrializzare componenti (facciate continue) che utilizzano tale materiale composito (pultruso - GFRP), mediante l’accoppiamento a materiali come l’acciaio che possono conferire al componente alte prestazioni meccaniche, soprattutto per quanto riguarda il contenimento delle deformazioni sotto carico. Le soluzioni tecniche studiate inoltre evitano il problema della rottura fragile delle giunzioni bullonate, che è uno dei problemi che riguardano le giunzioni di questo tipo su profili in pultruso. La deformabilità e la rottura fragile delle giunzioni bullonate dei profili in pultruso ne hanno limitato l’utilizzo nel settore dell’ingegneria edile per la realizzazione di facciate continue, specie di grandi dimensioni. A tal fine l’attività di ricerca è stata prevalentemente incentrata a verificare la possibilità di inserire nei montanti in pultruso di tali facciate una lamina d’acciaio incollata, per contenere la deformazione e per migliorare la qualità della giunzione bullonata in modo da evitare rotture di tipo fragile una volta raggiunto il carico di collasso. Le risultanze dei test sperimentali condotti dimostrano le buone performance del sistema ibrido GFRP-acciaio anche in seguito all’esposizione a differenti condizioni di invecchiamento artificiale e verificano la fattibilità di realizzazione di una facciata continua ad alte prestazioni meccaniche e termiche.
The aim of this work is to demonstrate the applicability of innovative materials, such as industrialized Glass Fibre Reinforced Polymer (GFRP) components (profiles) and structural adhesives, for the realization of curtain walls with high mechanical and thermal performance and low environmental impact. This objective is verified through the “Technological Simplification” principle, the guiding thread of the research and the experimental tests carried out by the research group. The team coordinator and patent inventor is Prof. P. Munafò, with whom I developed the “System for the realization of building façades” (n. 102015000087569). The “Technological Simplification” principle allows the realization of building components that are high-performing and easy to assemble, by using a limited number of pieces. All this involves lower energy consumption in the production, assembly, maintenance and disposal phases. For this reason, the construction element can be considered environmentally sustainable. In this thesis, the feasibility of the constructive system for the realization of building façades is verified through experimental tests and component life cycle analysis. The properties of components and materials are tested both in laboratory conditions and after different types of ageing (durability). The interaction between building components and the environment, from production to ultimate disposal (LCA - Life Cycle Assessment), is analysed. The methods used were mostly experimental. The mechanical properties of the materials were analysed both in environmental conditions and under different types of ageing, such as continuous condensation (ISO 6270-2) and UV irradiation (ASTM D904–99). Additional tests with combined artificial ageing (climatic chamber and exposure to UV radiation - Tcc+Tuv - and the other way around - Tuv+Tcc) were performed. 
Numerical and analytical studies were carried out with the objective of checking and validating the results obtained through the experimental tests. The main outcome was the validation of the patent's basic idea, which is a key point in the industrialization process of the construction elements (structural members). This work demonstrates the feasibility of using pultruded Glass Fiber Reinforced Polymer (GFRP) profiles, adhesively joined with other materials (i.e. steel), in the construction sector. The objective is both to reduce the deformation of GFRP profiles under loading conditions and to avoid the brittle fractures that can occur in bolted joints. In the building engineering field, in fact, these issues (deformations and brittle fractures) have prevented the use of pultruded materials. In the research activity, the possibility of adhesively joining a steel laminate to the pultruded mullion profiles of curtain walls was verified. The containment of deformations and the prevention of brittle fractures in the bolted joint were checked, in order to verify the feasibility of a pultruded curtain wall, both constructively and in terms of its structural and energy performance. Experimental results demonstrated that the use of GFRP profiles, bonded with structural adhesives and combined with steel, is successful in curtain walls, even when they are exposed to adverse environmental conditions. The feasibility of implementing a high-performance curtain wall is thereby verified.
10

Curado, Manuel. "Structural Similarity: Applications to Object Recognition and Clustering." Doctoral thesis, Universidad de Alicante, 2018. http://hdl.handle.net/10045/98110.

Abstract:
In this thesis, we propose many developments in the context of Structural Similarity. We address both node (local) similarity and graph (global) similarity. Concerning node similarity, we focus on improving the diffusive process leading to the computation of this similarity (e.g. Commute Times) by means of modifying or rewiring the structure of the graph (Graph Densification), although some advances in Laplacian-based ranking are also included in this document. Graph Densification is a particular case of what we call graph rewiring, i.e. a novel field (similar to image processing) where input graphs are rewired to be better conditioned for subsequent pattern recognition tasks (e.g. clustering). In the thesis, we contribute a scalable and effective method driven by Dirichlet processes. We propose both a completely unsupervised and a semi-supervised approach for Dirichlet densification. We also contribute new random walkers (Return Random Walks) that are useful structural filters as well as asymmetry detectors in directed brain networks used to make early predictions of Alzheimer's disease (AD). Graph similarity is addressed by means of designing structural information channels as a means of measuring the Mutual Information between graphs. To this end, we first embed the graphs by means of Commute Times. Commute time embeddings have good properties for Delaunay triangulations (the typical representation for Graph Matching in computer vision). This means that these embeddings can act as encoders in the channel as well as decoders (since they are invertible). Consequently, structural noise can be modelled by the deformation introduced in one of the manifolds to fit the other one. This methodology leads to a highly discriminative similarity measure, since the Mutual Information is measured on the manifolds (vectorial domain) through copulas and bypass entropy estimators. 
This is consistent with the methodology of decoupling the measurement of graph similarity into two steps: a) linearizing the Quadratic Assignment Problem (QAP) by means of the embedding trick, and b) measuring similarity in vector spaces. The QAP is also investigated in this thesis. More precisely, we analyze the behaviour of m-best Graph Matching methods. These methods usually start from a couple of best solutions and then expand the search space locally by excluding previously clamped variables. The next variable to clamp is usually selected randomly, but we show that this reduces performance when structural noise (outliers) arises. Alternatively, we propose several heuristics for spanning the search space and evaluate all of them, showing that they are usually better than random selection. These heuristics are particularly interesting because they exploit the structure of the affinity matrix. Efficiency is improved as well. Concerning the application domains explored in this thesis, we focus on object recognition (graph similarity), clustering (rewiring), compression/decompression of graphs (links with Extremal Graph Theory), 3D shape simplification (sparsification) and early prediction of AD.
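The commute-time embeddings this abstract relies on can be computed from the pseudoinverse of the graph Laplacian. A short sketch using the standard identity C(u,v) = vol(G)·(L⁺[u,u] + L⁺[v,v] − 2·L⁺[u,v]), not any thesis-specific code:

```python
import numpy as np

def commute_times(adjacency):
    """Commute-time matrix of an undirected graph via the Laplacian
    pseudoinverse: C[u, v] = vol(G) * (L+[u,u] + L+[v,v] - 2 L+[u,v])."""
    A = np.asarray(adjacency, dtype=float)
    degrees = A.sum(axis=1)
    L = np.diag(degrees) - A          # combinatorial Laplacian
    Lp = np.linalg.pinv(L)            # Moore-Penrose pseudoinverse
    vol = degrees.sum()               # graph volume (sum of degrees)
    d = np.diag(Lp)
    return vol * (d[:, None] + d[None, :] - 2.0 * Lp)
```

On a path graph 0-1-2, the commute time between adjacent nodes is 4 and between the endpoints is 8, matching the effective-resistance interpretation C(u,v) = vol(G)·R(u,v).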
Ministerio de Economía, Industria y Competitividad (Referencia TIN2012-32839 BES-2013-064482)

Books on the topic "Simplification structurale"

1

Committee of the Regions, ed. Outlook report of the Committee of the Regions of 2 July 2003 on governance and simplification of the Structural Funds after 2006. Brussels: Committee of the Regions, 2003.

2

Bacior, Stanisław. Optymalizacja wiejskich układów gruntowych – badania eksperymentalne. Publishing House of the University of Agriculture in Krakow, 2019. http://dx.doi.org/10.15576/978-83-66602-37-3.

Abstract:
Rural areas are subject to constant structural, spatial and economic transformations. The main purpose of this monograph was to present a new concept of shaping rural land arrangement that takes into account the land value. The presented optimization methodology for shaping rural areas has a general range of application, not being limited by the time or place of the consolidation object. The only condition for its use is the availability of a specific set of output data enabling the necessary calculations for the implementation of consolidation works. The described method has been successfully applied to the research object of the Mściowojów village, in a registry area located in the Dolnośląskie voivodeship, in the Jaworski district, providing the assumed effects. In order to meet the research objectives, the shaping of rural land arrangement was conducted according to five models. The original arrangement of the existing land division in a given village is considered as the 1st model. The 2nd model uses a rather accurate description of the locations of the lands in the village. To define this feature, the location of farm parcels had to be determined. This model is the most accurate, but also the most labor-intensive of all. In the 3rd model, a fundamental simplification of the land arrangement was adopted, limiting the distance measurement to the entry points from the settlements into the complexes. This simplification means that the location of parcels in the complex does not affect the average distance to the land in the whole village. On the basis of the simplifications applied in the 3rd model, which allow a significant reduction of the distance matrix, the 4th model was developed, using linear programming to minimize the distance to a parcel. Introducing into the linear model an additional condition that eliminates distance growth in farms in relation to the initial state was important for the research. 
This was implemented in the 5th model and had a positive impact on the obtained results. The 6th model was developed by including the landowners' wishes in the 5th model. These had to be taken into account so that the new land arrangement did not cause complaints. The wishes could not be fully included due to their inherently contradictory nature. The wish to have a parcel in a given arrangement was replaced with a guarantee that, after the division, the landowner receives no smaller a share than the prior one. As demonstrated in the work, the solutions of the developed models allowed obtaining land arrangements close to the optimum in terms of distance to land and the shape of parcels and farms with regard to land specifics. The presented results allow the conclusion that the methods and analyses applied in the research can have a wide range of application in shaping rural land arrangement. Developing the most socially accepted optimization of parcel division in the process of land consolidation is important due to the actual needs for the implementation of rural land arrangement works. This may also influence better use of the EU's financial resources for the consolidation of agricultural lands.
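The linear-programming step of the 4th model can be sketched as a transportation-style problem: allocate land areas in complexes to farms so that the area-weighted distance is minimal. The data below are invented for illustration; the monograph's actual model and constraints are richer:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 farms, 3 land complexes; x[f, k] = area of
# complex k allocated to farm f, minimizing area-weighted distance.
dist = np.array([[1.0, 4.0, 2.0],
                 [3.0, 1.0, 5.0]])        # farm-to-complex distances
farm_area = np.array([6.0, 4.0])          # area each farm must receive
complex_area = np.array([5.0, 3.0, 2.0])  # area available per complex

n_f, n_k = dist.shape
A_eq = np.zeros((n_f + n_k, n_f * n_k))
for f in range(n_f):                       # each farm gets exactly its area
    A_eq[f, f * n_k:(f + 1) * n_k] = 1.0
for k in range(n_k):                       # each complex is fully allocated
    A_eq[n_f + k, k::n_k] = 1.0
b_eq = np.concatenate([farm_area, complex_area])

res = linprog(dist.ravel(), A_eq=A_eq, b_eq=b_eq)  # x >= 0 by default
```

The 5th model's extra condition (no farm's distance may grow relative to the initial state) would be added as further inequality rows on the same variables.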
3

Fukuyama, Francis, and Francesca Recanatini. Beyond Measurement. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198817062.003.0003.

Abstract:
Since the 1990s, governance and anti-corruption have become preoccupations of the international development and research communities, leading to the proliferation of sophisticated measures, which, while a critical starting point, have not had significant impact at the country level. This chapter examines approaches to reducing corruption, including structural state reform, simplification and reduction of administrative discretion, transparency and accountability, international agreements and conventions, and anti-corruption bodies. While some approaches have produced results in specific areas, their impact has been limited. The chapter argues that we should think about corruption differently, not as a market distortion or unethical behaviour but as a misallocation of power. To address corruption requires interventions that reallocate power among stakeholders. The limited success in addressing corruption suggests that policy-makers and the international community have not been able to reallocate power, mostly due to lack of political leverage to discipline entrenched local actors.
4

Lobina, David J. The universality and uniqueness of recursion-in-language. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198785156.003.0005.

Abstract:
The role of recursion in language is universal and unique. It is universal because the (Specifier)-Head-Complement(s) geometry is the type of structuring that all phrases and all languages unequivocally adhere to, and complexes of such phrases constitute a general recursive structure. It is unique because the asymmetric nature of [(Specifier)-[Head-Complement(s)]] structures is unattested in other domains of human cognition or in the cognition of other animal species. The common claim that not all languages manifest recursive structures is usually couched in terms of self-embedded sentences, a particular sub-type of the (Specifier)-Head-Complement(s) geometry. The increasingly common claim that certain representations in human general cognition or in the animal kingdom are isomorphic to language’s recursive structures is the result of great simplification of the representations under comparison, which undercuts the force of the argument. Linguistic structures in the form of bundles of (Specifier)-Head-Complement(s) remain quirky through and through—and universal in language.
5

Schmidt, Dieter, and Simon Shorvon. Resecting Epilepsy. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198725909.003.0005.

Abstract:
The evolution of surgery for epilepsy in the late nineteenth century was partly the consequence of new ideas about the localisation of function in the brain and advances in the understanding of the physiological nature of epilepsy. This was an exciting time of discovery, and really fundamental and novel principles were enunciated which have stood the test of time. New techniques of investigation, such as electroencephalography and magnetic resonance imaging, have since led to more accurate ‘targeting’, allowing the elucidation of the anatomical underpinning of epilepsy to be based, not only on semiology as in the earlier years, but also on more objective structural and functional measures. However, the fact remains that most surgery is based on the concept that resecting ‘bad’ tissue, and thus removing the ‘focus’ of epilepsy, will cure the condition—a postulation which has not changed since the time of Jackson (and which has its roots in earlier superstition). Such theories of epilepsy are surely gross simplifications, and the absence of any subsequent paradigm shift is why surgery has really not advanced conceptually much in the last 50 years. Technique and technology have profoundly changed, but the theoretical basis, generally speaking, has not.
APA, Harvard, Vancouver, ISO, and other styles
6

Bowes, Ashley. A Practical Approach to Planning Law. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198833253.001.0001.

Full text
Abstract:
Planning law is one of the most fast-moving legal areas, with major structural changes to the planning system occurring since 2014. Despite these attempts at simplification, it remains one of the most complex fields for both students and practitioners to navigate. In this continually evolving arena, the fourteenth edition of A Practical Approach to Planning Law is an authoritative and reliable resource for all those working in the area, providing a comprehensive and systematic account of the principles and practice of planning law. The text guides the reader through each stage of the planning process, from permission applications through to disputes and appeals, in a clear and accessible style. Containing coverage of all recent cases as well as important legislative and policy developments since the publication of the previous edition, particularly those arising out of the Neighbourhood Planning Act 2017, the Housing and Planning Act 2016, the Infrastructure Act 2015, and the Deregulation Act 2015, this new edition provides an invaluable introduction to the subject for professionals and students alike. The A Practical Approach series is the perfect partner for practice work. Each title provides a comprehensive overview of the subject together with clear, practical advice and tips on issues likely to arise in practice. The books are also an excellent resource for those new to the field, where the expert overview and clear layout promote clarity and ease of understanding.
APA, Harvard, Vancouver, ISO, and other styles
7

Sanderson, Benjamin Mark. Uncertainty Quantification in Multi-Model Ensembles. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190228620.013.707.

Full text
Abstract:
Long-term planning for many sectors of society—including infrastructure, human health, agriculture, food security, water supply, insurance, conflict, and migration—requires an assessment of the range of possible futures which the planet might experience. Unlike short-term forecasts for which validation data exists for comparing forecast to observation, long-term forecasts have almost no validation data. As a result, researchers must rely on supporting evidence to make their projections. A review of methods for quantifying the uncertainty of climate predictions is given. The primary tool for quantifying these uncertainties are climate models, which attempt to model all the relevant processes that are important in climate change. However, neither the construction nor calibration of climate models is perfect, and therefore the uncertainties due to model errors must also be taken into account in the uncertainty quantification. Typically, prediction uncertainty is quantified by generating ensembles of solutions from climate models to span possible futures. For instance, initial condition uncertainty is quantified by generating an ensemble of initial states that are consistent with available observations and then integrating the climate model starting from each initial condition. A climate model is itself subject to uncertain choices in modeling certain physical processes. Some of these choices can be sampled using so-called perturbed physics ensembles, whereby uncertain parameters or structural switches are perturbed within a single climate model framework. For a variety of reasons, there is a strong reliance on so-called ensembles of opportunity, which are multi-model ensembles (MMEs) formed by collecting predictions from different climate modeling centers, each using a potentially different framework to represent relevant processes for climate change. The most extensive collection of these MMEs is associated with the Coupled Model Intercomparison Project (CMIP).
However, the component models have biases, simplifications, and interdependencies that must be taken into account when making formal risk assessments. Techniques and concepts for integrating model projections in MMEs are reviewed, including differing paradigms of ensembles and how they relate to observations and reality. Aspects of these conceptual issues then inform the more practical matters of how to combine and weight model projections to best represent the uncertainties associated with projected climate change.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Simplification structurale"

1

Friedler, Ferenc, Ákos Orosz, and Jean Pimentel Losada. "Simplification of the Maximal Structure." In P-graphs for Process Systems Engineering, 151–58. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-92216-0_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kamiński, Tomasz. "Consequences of Simplifications in Modelling and Analysis of Masonry Arch Bridges." In Structural Integrity, 153–60. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29227-0_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rusakov, A. I. "Simplification Ways of Analysis of Statically Indeterminate Systems." In Fundamentals of Structural Mechanics, Dynamics, and Stability, 165–78. First edition. Boca Raton: CRC Press, 2020. http://dx.doi.org/10.1201/9780429155291-21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zumpe, S., and W. Esswein. "Simplification of Knowledge Discovery using “Structure Classification”." In Studies in Classification, Data Analysis, and Knowledge Organization, 245–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-642-55991-4_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Madelaine, Guillaume, Elisa Tonello, Cédric Lhoussaine, and Joachim Niehren. "Normalizing Chemical Reaction Networks by Confluent Structural Simplification." In Computational Methods in Systems Biology, 201–15. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45177-0_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Su, Jianyuan, and Xiaoming Wang. "Test Requirements Simplification Based on Nonlinear Data Structure." In Lecture Notes in Electrical Engineering, 221–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27311-7_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Madelaine, Guillaume, Cédric Lhoussaine, and Joachim Niehren. "Structural Simplification of Chemical Reaction Networks Preserving Deterministic Semantics." In Computational Methods in Systems Biology, 133–44. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23401-4_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hopej, Marian, and Marcin Kandora. "The Simplification of Organizational Structure: Lessons from Product Design." In Advances in Intelligent Systems and Computing, 188–200. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99993-7_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wei, Liang, John D. Marshall, and J. Renée Brooks. "Process-Based Ecophysiological Models of Tree-Ring Stable Isotopes." In Stable Isotopes in Tree Rings, 737–56. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-92698-4_26.

Full text
Abstract:
Tree-ring stable isotopes can be used to parameterize process-based models by providing long-term data on tree physiological processes on annual or finer time steps. They can also be used to test process-based ecophysiological models for the assumptions, hypotheses, and simplifications embedded within them. However, numerous physiological and biophysical processes influence the stable carbon (δ13C) and oxygen (δ18O) isotopes in tree rings, so the models must simplify how they represent some of these processes to be useful. Which simplifications are appropriate depends on the application to which the model is applied. Fortunately, water and carbon fluxes represented in process-based models often have strong isotopic effects that are recorded in tree-ring signals. In this chapter, we review the status of several tree-ring δ13C and δ18O models simulating processes for trees, stands, catchments, and ecosystems. This review is intended to highlight the structural differences among models with varied objectives and to provide examples of the valuable insights that can come from combining process modeling with tree-ring stable isotope data. We urge that simple stable isotope algorithms be added to any forest model with a process representation of photosynthesis and transpiration as a strict test of model structure and an effective means to constrain the models.
APA, Harvard, Vancouver, ISO, and other styles
10

Raymond, Patricia, Magnus Löf, Phil Comeau, Lars Rytter, Miguel Montoro Girona, and Klaus J. Puettmann. "Silviculture of Mixed-Species and Structurally Complex Boreal Stands." In Advances in Global Change Research, 403–16. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-15988-6_15.

Full text
Abstract:
Understanding structurally complex boreal stands is crucial for designing ecosystem management strategies that promote forest resilience under global change. However, current management practices lead to the homogenization and simplification of forest structures in the boreal biome. In this chapter, we illustrate two options for managing productive and resilient forests: (1) the managing of two-aged mixed-species forests; and (2) the managing of multi-aged, structurally complex stands. Results demonstrate that multi-aged and mixed stand management are powerful silvicultural tools to promote the resilience of boreal forests under global change.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Simplification structurale"

1

Yousefi, A., B. Lohmann, and M. Buttelmann. "Row by row structure simplification." In Proceedings of the 2004 American Control Conference. IEEE, 2004. http://dx.doi.org/10.23919/acc.2004.1383696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sulem, Elior, Omri Abend, and Ari Rappoport. "Semantic Structural Evaluation for Text Simplification." In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/n18-1063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

ADAMS, JR., L., and G. NEVILL, JR. "Heuristic simplification of geometric complexity in structural design." In 27th Structures, Structural Dynamics and Materials Conference. Reston, Virigina: American Institute of Aeronautics and Astronautics, 1986. http://dx.doi.org/10.2514/6.1986-987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Leimer, Kurt, and Przemyslaw Musialski. "Simulation of Flexible Patterns by Structural Simplification." In SIGGRAPH '20: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3388770.3407446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Huang, Zhenjia (Jerry), and Hyun Joe Kim. "Physical Modeling and Simplification of FPSO Topsides Module in Wind Tunnel Model Tests." In ASME 2021 40th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/omae2021-63459.

Full text
Abstract:
Abstract To evaluate wind load on offshore structures, such as FPSOs, wind tunnel model testing is a common industry practice. Configuration of topsides structures and equipment can be very complex, and it is a practical challenge to model all the structural details for wind tunnel model tests. Sometimes, there may be significant modifications to the topsides over the FPSO operation life cycle, and detailed topsides drawings may not be available for the wind tunnel laboratory to use in physical model construction. In practice, wind tunnel laboratories have to simplify physical topsides models. They also use metal meshes to cover the topsides modules to compensate for the force reduction due to the simplification. In order to help establish physical modeling practices for wind tunnel model tests, we performed extensive tests using a single topsides module. The original topsides module without simplification and mesh was tested first. Then, two simplifications were adopted in the physical model construction. The module was covered with and without metal mesh of different porosities. Thorough test quality assurance (QA) and quality control (QC) were performed to ensure data quality. Test setup, QA/QC procedures, and results are presented in the paper. The results can be used not only for appropriate physical modeling practices of complex topsides modules, but also for validation of numerical predictions such as Computational Fluid Dynamics (CFD), as well as empirical formulas.
APA, Harvard, Vancouver, ISO, and other styles
6

Ersal, Tulga, Hosam K. Fathy, and Jeffrey L. Stein. "Orienting Body Coordinate Frames Using Karhunen-Loève Expansion for More Effective Structural Simplification." In ASME 2006 International Mechanical Engineering Congress and Exposition. ASMEDC, 2006. http://dx.doi.org/10.1115/imece2006-14572.

Full text
Abstract:
Previous work by the authors developed a junction-inactivity-based structural simplification technique for bond-graph models. The technique is highly sensitive to the orientation of the body coordinate frames in multibody systems: improper alignment of body coordinate frames may prohibit a significant simplification. This paper demonstrates how the Karhunen-Loève expansion can be used to automatically detect the existence of, and to find the transformation into, body coordinate frames that render the bond-graph of a multibody system more conducive to simplification. The conclusion is that the Karhunen-Loève expansion complements well the junction-inactivity-based structural simplification technique when multibody dynamics are involved in the system.
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Bo, and Ning Li. "SIMPLIFICATION OF A KIND OF COMPOSITE STRUCTURE." In ICHMT International Symposium on Advances in Computational Heat Transfer. Connecticut: Begellhouse, 2017. http://dx.doi.org/10.1615/ichmt.2017.610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Bo, and Ning Li. "SIMPLIFICATION OF A KIND OF COMPOSITE STRUCTURE." In ICHMT International Symposium on Advances in Computational Heat Transfer. Connecticut: Begellhouse, 2017. http://dx.doi.org/10.1615/ichmt.2017.cht-7.610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Xin, Chen, Qin Ye, Yuan Xiguang, Zhang Ping, and Sun Jian. "Updating Finite Element Model of Combined Structures on the Basis of Dynamic Test Results." In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/dac-1061.

Full text
Abstract:
Abstract According to the real situation, a new method of updating the finite element model (FEM) of a combined structure step by step is proposed in this paper. It is assumed that there are two types of error when establishing the FEMs. One of them results from the simplifications, in fact, it is severe for complicated structures, which usually assume many simplifications; the other is from the process of identifying structural joint parameters. For this reason, it is recommended that the FEM should be established in two stages. At the first stage, the local physical parameters relating with the simplifications are corrected by using the dynamic test data of the corresponding substructures. Then, the structural joint parameters that link the substructures are corrected by the dynamic test data of the combined structure as a whole. The updating formula is presented and proved, and its algorithm is also described. And the experimental results show that the efficiency and accuracy of the proposed method are quite satisfactory.
APA, Harvard, Vancouver, ISO, and other styles
10

Xie, Maojin, Weiqun Cao, Gang Yang, and Xinyuan Huang. "Tree Axis Structure Simplification Correspondent to Botanical Properties." In 2009 Third International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA). IEEE, 2009. http://dx.doi.org/10.1109/pma.2009.54.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Simplification structurale"

1

Libby, Margarita H. Business Climate for Competitiveness in the Americas: Simplification of Procedures to Promote Competitiveness. Inter-American Development Bank, November 2011. http://dx.doi.org/10.18235/0006894.

Full text
Abstract:
International organizations most often recommend a virtual one stop shop such as the Single Window for Foreign Trade (Spanish acronym: VUCE). This model is undoubtedly the most successful scheme available. This paper presents the general framework for trade facilitation and shows how VUCEs have triggered a new perspective of cohesiveness as countries seek to facilitate trade and influence competitiveness indexes. In addition, it assesses the current situation in countries of the Americas that are starting to or have already taken the first steps in developing a VUCE, such as Costa Rica, Colombia, Mexico, and Chile, and discusses the conditions required to implement a VUCE with the understanding that there is more than one possible model of implementation and every government must choose one that is suitable to its own institutional structure and technological progress. This paper was presented at the Fifth Americas Competitiveness Forum for the Inter-American Development Bank and Compete Caribbean, Santo Domingo, Dominican Republic, October 5-7, 2011.
APA, Harvard, Vancouver, ISO, and other styles
2

Lokke, Arnkjell, and Anil Chopra. Direct-Finite-Element Method for Nonlinear Earthquake Analysis of Concrete Dams Including Dam–Water–Foundation Rock Interaction. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, March 2019. http://dx.doi.org/10.55461/crjy2161.

Full text
Abstract:
Evaluating the seismic performance of concrete dams requires nonlinear dynamic analysis of two- or three-dimensional dam–water–foundation rock systems that include all the factors known to be significant in the earthquake response of dams. Such analyses are greatly complicated by interaction between the structure, the impounded reservoir and the deformable foundation rock that supports it, and the fact that the fluid and foundation domains extend to large distances. Presented in this report is the development of a direct finite-element (FE) method for nonlinear earthquake analysis of two- and three-dimensional dam–water–foundation rock systems. The analysis procedure applies standard viscous-damper absorbing boundaries to model the semi-unbounded fluid and foundation domains, and specifies at these boundaries effective earthquake forces determined from a ground motion defined at a control point on the ground surface. This report is organized in three parts, with a common notation list, references, and appendices at the end of the report. Part I develops the direct FE method for 2D dam–water–foundation rock systems. The underlying analytical framework of treating dam–water–foundation rock interaction as a scattering problem, wherein the dam perturbs an assumed "free-field" state of the system, is presented, and by applying these concepts to a bounded FE model with viscous-damper boundaries to truncate the semi-unbounded domains, the analysis procedure is derived. Step-by-step procedures for computing effective earthquake forces from analysis of two 1D free-field systems are presented, and the procedure is validated by computing frequency response functions and transient response of an idealized dam–water–foundation rock system and comparing against independent benchmark results. This direct FE method is generalized to 3D systems in Part II of this report. 
While the fundamental concepts of treating interaction as a scattering problem are similar for 2D and 3D systems, the derivation and implementation of the method for 3D systems is much more involved. Effective earthquake forces must now be computed by analyzing a set of 1D and 2D systems derived from the boundaries of the free-field systems, which requires extensive book-keeping and data transfer for large 3D models. To reduce these requirements and facilitate implementation of the direct FE method for 3D systems, convenient simplifications of the procedure are proposed and their effectiveness demonstrated. Part III of the report proposes to use the direct FE method for conducting the large number of nonlinear response history analyses (RHAs) required for performance-based earthquake engineering (PBEE) of concrete dams, and discusses practical modeling considerations for two of the most influential aspects of these analyses: nonlinear mechanisms and energy dissipation (damping). The findings have broad implications for modeling of energy dissipation and calibration of damping values for concrete dam analyses. At the end of Part III, the direct FE method is implemented with a commercial FE program and used to compute the nonlinear response of an actual arch dam. These nonlinear results, although limited in their scope, demonstrate the capabilities and effectiveness of the direct FE method to compute the types of nonlinear engineering response quantities required for PBEE of concrete dams.
APA, Harvard, Vancouver, ISO, and other styles
3

SIMPLIFIED MODELLING OF NOVEL NON-WELDED JOINTS FOR MODULAR STEEL BUILDINGS. The Hong Kong Institute of Steel Construction, December 2021. http://dx.doi.org/10.18057/ijasc.2021.17.4.10.

Full text
Abstract:
Prefabricated modular steel (PFMS) construction is a more efficient and safe method of constructing a high-quality building with less waste material and labour dependency than traditional steel construction. It is indeed critical to have a precise and valuable intermodular joining system that allows for efficient load transfer, safe handling, and optimal use of modular units' strength. Thus, the purpose of this study was to develop joints using tension bolts and solid tenons welded into the gusset plate (GP). These joints ensured rigid and secure connectivity in both horizontal and vertical directions for the modular units. Using the three-dimensional (3D) finite element (FE) analysis software ABAQUS, the study investigated the nonlinear lateral structural performance of the joint and two-storey modular steel building (MSB). The solid element FE models of joints were then simplified by introducing connectors and beam elements to enhance computational efficiency. Numerous parameters indicated that column tenons were important in determining the joint's structural performance. Moreover, with a standard deviation (SD) of 0.025, the developed connectors and beam element models accurately predicted the structural behaviour of the joints. As a result of their simplification, these joints demonstrated effective load distribution, seismic performance, and ductility while reducing computational time, effort, and complexity. The validity of the FE analysis was then determined by comparing the results to the thirteen joint bending tests performed in the reference.
APA, Harvard, Vancouver, ISO, and other styles
4

SEISMIC PERFORMANCE AND REPLACEABILITY OF STEEL FRAME STRUCTURES WITH REPLACEABLE BEAM SEGMENTS. The Hong Kong Institute of Steel Construction, March 2024. http://dx.doi.org/10.18057/ijasc.2024.20.1.8.

Full text
Abstract:
This study assessed the seismic performance and replaceability of steel frame structures incorporating replaceable beam segments. A reduced-beam-section beam-column joint featuring a replaceable energy dissipation beam segment was specifically designed for this purpose. The joint underwent quasi-static analysis subjected to low-cycle reciprocating loading. The study extended to a single-story, single-span plane steel frame, where reduced-beam-section beam-column joints with replaceable energy dissipation beam segments were analyzed for hysteretic and deformation behavior. Moreover, the exploration of parameters such as end-plate opening clearance and rotation deformation was undertaken to inform the simplification of the overall plane frame model. Meanwhile, multi-scale models were developed for an eight-story, four-span, reduced-beam-section steel frame (RBSSF) with a replaceable energy dissipation beam segment and a rigid steel frame (RSF). These models were employed to analyze the elastoplastic time-history characteristics and the replaceability of the beam segment. The results demonstrated that the reduced-beam-section beam-column joint with a replaceable energy dissipation beam segment exhibited a relatively full hysteresis curve, affirming high ductility, energy dissipation, and plastic deformation capacities. Notably, damage and plastic development in the steel beam primarily concentrated in the low-yield-point replaceable energy dissipation beam segment. The small end-plate opening clearance ensured cooperative deformation between the end plates facilitated by the bolts. Comparatively, the RBSSF structure displayed superior seismic performance to the RSF structure during earthquakes, with the replaceable energy dissipation beam segment satisfying replaceability requirements under moderate seismic conditions.
APA, Harvard, Vancouver, ISO, and other styles