Dissertations / Theses on the topic 'Synthetic a priori knowledge'

Consult the top 50 dissertations / theses for your research on the topic 'Synthetic a priori knowledge.'

1

Bhowal, Nabanita. "Kant's notion of synthetic a priori judgement and some later developments on it." Thesis, University of North Bengal, 2019. http://ir.nbu.ac.in/handle/123456789/4042.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Serin, Ismail. "The Quiddity of Knowledge in Kant's Critical Philosophy." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605758/index.pdf.

Full text
Abstract:
In this thesis the quiddity of knowledge in Kant's critical philosophy has been investigated within the historical context of the problem. In order to illustrate the origins of the subject-matter of the dissertation, the historical background of Kant's views on the theory of knowledge has been researched too. As a result of this research, it is concluded that Kant did not invent a new philosophical problem, but tried to provide a decisive solution to one of the oldest questions in the history of philosophy, i.e., "How is synthetic a priori knowledge possible?" The theoretical dimension of Kant's theory of knowledge is reserved for this purpose. The above-mentioned question is new neither for us nor for Kant, but his answer and his philosophical stand have a clearly revolutionary meaning both for us and for him. This thesis claims that his standpoint not only leads to an original epoch for the theory of knowledge, but creates a serious possibility for a new ontology explicating the quiddity of knowledge.
APA, Harvard, Vancouver, ISO, and other styles
3

Zhou, Hao. "La chute du "triangle d'or" : apriorité, analyticité, nécessité : de l'équivalence à l'indépendance." Thesis, Paris 1, 2020. http://www.theses.fr/2020PA01H204.

Full text
Abstract:
Les trois concepts d’apriorité, d’analyticité et de nécessité, qui ont longtemps été considérés comme équivalents, constituent ce que l’on peut appeler le « triangle d’or » ou « triangle d’équivalence ». Or, la conception kantienne du synthétique a priori et les conceptions kripkéennes du contingent a priori et du nécessaire a posteriori représentent des critiques décisives contre ce triangle d’équivalence. Héritant, de manière critique, des idées révolutionnaires de Kant et de Kripke, un nouveau schéma épistémologique intitulé « sujet-connaissance-monde » est ici systématiquement construit. Ce schéma rend totalement caduc le triangle d’or. Les concepts d’apriorité, d’analyticité et de nécessité deviennent indépendants les uns des autres. On aboutit ainsi à un nouvel espace des catégories de la connaissance, issu du libre entrecroisement des trois distinctions a priori-a posteriori, analytique-synthétique et nécessaire-contingent. Ces catégories de la connaissance, dont certaines sont nouvelles, s’appliquent aux sciences exclusivement et exhaustivement.
The three concepts of apriority, analyticity and necessity, which have long been considered equivalent, constitute what could be called the “golden triangle” or “triangle of equivalence”. Yet, the Kantian conception of the synthetic a priori and the Kripkean conceptions of the contingent a priori and the necessary a posteriori represent decisive criticisms against this triangle of equivalence. Inheriting critically these revolutionary thoughts from Kant and Kripke, a new epistemological schema entitled “subject-knowledge-world” is here systematically constructed. This schema renders the golden triangle totally obsolete. The concepts of apriority, analyticity and necessity become independent of each other. This leads to a new space of knowledge categories, resulting from the free intersecting of the three distinctions a priori-a posteriori, analytic-synthetic and necessary-contingent. These knowledge categories, some of which are new, apply to science exclusively and exhaustively.
APA, Harvard, Vancouver, ISO, and other styles
4

Barin, Ozlem. "The Role of Imagination in Kant's Critique of Pure Reason." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/2/1110089/index.pdf.

Full text
Abstract:
The purpose of this study is to examine the role of imagination in Immanuel Kant's Critique of Pure Reason by means of a detailed textual analysis and interpretation. In my systematic reading of the Kantian text, I analyse how the power of imagination comes to the foreground of Kant's investigation into the transcendental conditions of knowledge. This is to explain the mediating function of imagination between the two distinct faculties of the subject: sensibility and understanding. Imagination achieves its mediating function between sensibility and understanding through its activity of synthesis. By means of exploring the features of the activity of synthesis I attempt to display that imagination provides the ground of the unification of sensibility and understanding. The argument of this study resides in the claim that the power of imagination, through its transcendental synthesis, provides the ground of the possibility of all knowledge and experience. This is to announce imagination as the building block of Kant's Copernican Revolution that grounds the objectivity of knowledge in its subjective conditions. Therefore, the goal of this study is to display imagination as a distinctive human capacity that provides the relation of our knowledge to the objects.
APA, Harvard, Vancouver, ISO, and other styles
5

Kroedel, Thomas. "A priori knowledge of modal truths." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Midelfart, Herman. "Knowledge discovery from cDNA microarrays and a priori knowledge." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-912.

Full text
Abstract:
Microarray technology has recently attracted a lot of attention. This technology can measure the behavior (i.e., RNA abundance) of thousands of genes simultaneously, while previous methods have only allowed measurements of single genes. By enabling studies on a genome-wide scale, microarray technology is currently revolutionizing biological research and creating a wide range of research opportunities. However, the technology generates a vast amount of data that cannot be handled manually. Computational analysis is thus a prerequisite for the success of this technology, and research and development of computational tools for microarray analysis are of great importance. This thesis develops supervised learning methods based on Rough Set Theory (RST) for analyzing microarray data together with prior knowledge. Two kinds of microarray studies are considered.

The first is cancer studies where supervised learning may be used for predicting tumor subtypes and clinical parameters. We introduce a general RST approach for classification of tumor samples analyzed by microarrays. This includes a feature selection method for selecting genes that discriminate significantly between a set of classes. RST classifiers are then learned from the selected genes. The approach is applied to a data set of gastric tumors. Classifiers for six clinical parameters are developed and demonstrate that these parameters can be predicted from the expression profile of gastric tumors. Moreover, the performance of the feature selection method as well as several learning and discretization methods implemented in ROSETTA are examined and compared to the performance of linear and quadratic discrimination analysis. The classifiers are also biologically validated. One of the best classifiers is selected for each clinical parameter, and the connection between the genes used in these classifiers and the parameters is compared to the established knowledge in the biomedical literature. Many of these genes have no previously known connection to gastric cancer and provide interesting targets for further biological research.

The second kind of study is prediction of gene function from expression profiles measured with microarrays. A serious problem in this case is that functional classes, which are assigned to genes, are typically organized in an ontology where the classes may be related to each other. One example is the Gene Ontology, where the classes form a Directed Acyclic Graph (DAG). Standard learning methods such as RST assume, however, that the classes are unrelated, and cannot deal with this problem directly. This thesis gives a solution by introducing an extended RST framework and two novel algorithms for learning in a DAG. The DAG also constitutes a problem when a classifier is to be evaluated, since standard performance measures such as accuracy or AUC do not recognize the structure of the DAG. Therefore, several new performance measures are introduced. The algorithms are first tested on a data set that was created from human fibroblast cells by means of microarrays. They are then applied to artificial data in order to obtain a better understanding of their behavior, and their weaknesses and strengths are identified.
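The two-step pipeline this abstract describes (select discriminating genes, then learn classifiers on them) maps onto a few lines of scikit-learn. This is a hedged sketch, not the thesis' ROSETTA rough-set implementation: a decision tree stands in for the rough-set rule learner, and the expression matrix and labels are synthetic stand-ins.

```python
# Sketch of the approach described above: pick genes that discriminate
# between classes, then learn a rule-style classifier on them. A decision
# tree stands in for the ROSETTA rough-set classifiers; data are synthetic.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 2000))    # 90 tumor samples x 2000 genes (stand-in)
y = rng.integers(0, 2, size=90)    # a binary clinical parameter (stand-in)

# Nesting the gene selection inside the pipeline keeps it inside each
# cross-validation fold, avoiding feature-selection bias.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=50)),
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
])
print(cross_val_score(model, X, y, cv=5).mean())
```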
APA, Harvard, Vancouver, ISO, and other styles
7

Lane, Ashley Alexander. "A critique of a priori moral knowledge." Thesis, Birkbeck (University of London), 2018. http://bbktheses.da.ulcc.ac.uk/368/.

Full text
Abstract:
Many ethicists believe that if it is possible to know a true moral proposition, it is always possible to ascertain a priori the normative content of that proposition. I argue that this is wrong; the only way to ascertain the normative content of some moral propositions requires the use of a posteriori information. I examine what I call determinate core moral propositions. I assume that some of these propositions are true and that actual agents are able to know them. Ethicists whom I call core-apriorists believe that it is always possible to ascertain a priori the normative content of such propositions. Core-aposteriorists believe that this is false, and that sometimes a posteriori information must be used to ascertain that normative content. I develop what I call the a posteriori strategy to show that core-apriorists are likely to be wrong, and so core-aposteriorists are correct. The strategy examines the details of particular core-apriorist theories and then shows that the theories have one of two problems: either some of the knowable determinate core moral propositions in the theories are not knowable a priori, or some of the propositions are not determinate, so they cannot perform the epistemological work required of them. Therefore, some knowable determinate core moral propositions are only knowable with the aid of a posteriori information. I apply the strategy to four different core-apriorist theories. The first is Henry Sidgwick's theory of self-evident moral axioms, as recently developed by Katarzyna de Lazari-Radek and Peter Singer. The second is Matthew Kramer's moral realism. I then examine Michael Smith's moral realism, and Frank Jackson and Philip Pettit's moral functionalism. The a posteriori strategy shows that there are serious difficulties with all four theories. I conclude that it provides good evidence that the core-apriorist is mistaken, and that the core-aposteriorist is right.
APA, Harvard, Vancouver, ISO, and other styles
8

Kai, Li. "Neuroanatomical segmentation in MRI exploiting a priori knowledge." View abstract or download file of text, 2007. http://proquest.umi.com/pqdweb?did=1400964181&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2007.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 148-158). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
9

Lynch, Timothy J. "Aquinas, Lonergan, and the a priori." Thesis, Queen's University Belfast, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tozer, Geoffrey D. N. "The nature of synthetic judgements a priori and the categorical imperative." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq25966.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Cozzio-Büeler, Enrico Albert. "The design of neural networks using a priori knowledge /." Zürich, 1995. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=10991.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Young, Benedict. "Naturalising the 'a priori' : reliabilism and experience-independent knowledge." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/26064.

Full text
Abstract:
The thesis defends the view that the concept of a priori knowledge can be naturalised without sacrificing the core aspects of the traditional conception of apriority. I proceed by arguing for three related claims. The first claim is that the adoption of naturalism in philosophy is not automatically inconsistent with belief in the existence of a priori knowledge. A widespread view to the contrary has come about through the joint influence of Quine and the logical empiricists. I hold that by rejecting a key assumption made by the logical empiricists (the assumption that apriority can be explained only by appeal to the concept of analyticity), we can develop an account of naturalism in philosophy which does not automatically rule out the possibility of a priori knowledge, and which retains Quine's proposals that philosophy be seen as continuous with the enterprise of natural science, and that the theory of knowledge be developed within the conceptual framework of psychology. The first attempt to provide a theory of a priori knowledge within such a framework was made by Philip Kitcher. Kitcher's strategy involves giving an account of the idea of "experience-independence" independently of the theory of knowledge in general (he assumes that an appropriate account of the latter will be reliabilist). Later authors in the tradition Kitcher inaugurated have followed him on this, while criticising him for adopting too strong a notion of experience-independence. The second claim I make is a qualified agreement with this: only a weak notion of experience-independence will give a viable account of a priori knowledge, but the reasons why this is so have been obscured by Kitcher's segregation of the issues. Strong reasons for adopting a weak notion are provided by consideration of the theory of knowledge, but these same reasons also highlight severe problems for the project of providing a naturalistic theory of knowledge in general. The third claim is that a plausible naturalistic theory of knowledge in general can be given, and that it provides an appropriate framework within which to give an account of minimally experience-independent knowledge.
APA, Harvard, Vancouver, ISO, and other styles
13

Chan, Tung 1972. "The complexity and a priori knowledge of learning from examples." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Delisle, Sylvain. "Text processing without a priori domain knowledge: Semi-automatic linguistic analysis for incremental knowledge acquisition." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6574.

Full text
Abstract:
Technical texts are an invaluable source of the domain-specific knowledge which plays a crucial role in advanced knowledge-based systems today. However, acquiring such knowledge has always been a major difficulty in the construction of these systems; this critical obstacle is sometimes referred to as the "knowledge acquisition bottleneck". In order to lessen the burden on the knowledge engineer's shoulders, several approaches have been proposed in the literature. A few of these suggest processing texts pertaining to the domain of interest in order to extract the knowledge they contain and thus facilitate the domain modelling. We herein propose a new approach to knowledge acquisition from texts; this approach is comprised of a new methodology and computational framework for the implementation of a linguistic processor which represents the central component of a system for the acquisition of knowledge from text. The system, named TANKA, is not given the complete domain model beforehand. It is designed to process technical texts in order to incrementally build a knowledge base containing a conceptual model of the domain. TANKA is an intelligent assistant to the knowledge engineer; when it cannot proceed entirely on its own, the user is asked to collaborate. In the process, the system acquires knowledge from text; it can be said to learn about the domain. The originality of the research is due mainly to the fact that we do not assume significant a priori domain-specific (semantic) knowledge: this assumption represents a severe constraint on the natural language processor. The only external elements of knowledge we consider in the proposed framework are "off-the-shelf" publicly available and domain-independent repositories, such as a basic dictionary containing surface syntactic information (i.e. The Collins) and a lexical database (i.e. WordNet). Other components of the proposed framework are general-purpose. The parser (DIPETT) is domain-independent with a large coverage of English: our approach relies on full syntactic analysis. The Case-based semantic analyzer (HAIKU) is semi-automatic: it interacts with the user in order to get his¹ approval of the analysis it has just proposed and negotiates refined elements of the analysis when necessary. The combined processing of DIPETT and HAIKU allows TANKA, the encompassing system², to acquire knowledge, based on the conceptual elements produced by HAIKU. The thesis also describes experiments that have been conducted on a Prolog implementation of both of these text analysis components. The approach presented in the thesis is general and in principle portable to any domain in which suitable technical texts are available. The thesis presents theoretical considerations as well as engineering aspects of the many facets of this research work. We also provide a detailed discussion of many future work items that could be added to what has already been accomplished in order to make the framework even more productive. (Abstract shortened by UMI.) ¹ In order to lighten the text, the terms 'he' and 'his' have been used generically to refer equally to persons of either sex. No discrimination is either implied or intended. ² DIPETT and HAIKU constitute a conceptual analyzer that can be used independently of TANKA or within a different encompassing system.
APA, Harvard, Vancouver, ISO, and other styles
15

Melis, Giacomo. "The epistemic defeat of a priori and empirical certainties : a comparison." Thesis, University of Aberdeen, 2014. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=225946.

Full text
Abstract:
I explore the traditional contention that a priori epistemic warrants enjoy some sort of higher epistemic security than empirical warrants. By focusing on warrants that might plausibly be called 'basic', and by availing myself of an original taxonomy of epistemic defeaters, I defend a claim in the vicinity of the traditional contention. By discussing some examples, I argue that basic a priori warrants are immune to some sort of empirical defeaters, which I describe in detail. An important by-product of my investigation is a novel theory of epistemic defeaters, according to which only agents able to engage in higher-order epistemic thinking can suffer undermining defeat, while wholly unreflective agents can, in principle, suffer overriding defeat.
APA, Harvard, Vancouver, ISO, and other styles
16

Kuntjoro, Wahyu. "Expert System for Structural Optimization Exploiting Past Experience and A-priori Knowledge." Thesis, Cranfield University, 1994. http://hdl.handle.net/1826/4534.

Full text
Abstract:
The availability of comprehensive Structural Optimization Systems in the market is allowing designers direct access to software tools previously the domain of the specialist. The use of Structural Optimization is particularly troublesome, requiring knowledge of finite element analysis, numerical optimization algorithms, and the overall design environment. The subject of the research is the application of Expert System methodologies to support nonspecialists when using a Structural Optimization System. The specific target is to produce an Expert System as an adviser for a working structural optimization system. Three types of knowledge are required to use optimization systems effectively: that relating to setting up the structural optimization problem, which is based on logical deduction; past experience; together with run-time and results interpretation knowledge. A knowledge base based on the above is set up, and reasoning mechanisms incorporating case-based and rule-based reasoning, the theory of certainty, and an object-oriented approach are developed. The Expert System described here concentrates on the optimization formulation aspects. It is able to set up an optimization run for the user and monitor the run-time performance. In this second mode the system is able to decide if an optimization run is likely to converge to a solution and advise the user accordingly. The ideas and Expert System techniques presented in this thesis have been implemented in the development of a prototype system written in C++. The prototype has been extended through the development of a user interface which is based on XView.
APA, Harvard, Vancouver, ISO, and other styles
17

Haase, Kristine [Verfasser]. "Maritime Augmented Reality with a priori knowledge of sea charts / Kristine Haase." Kiel : Universitätsbibliothek Kiel, 2013. http://d-nb.info/1034073729/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Paraskevopoulos, Vasileios. "Design of optimal neural network control strategies with minimal a priori knowledge." Thesis, University of Sussex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Christiansen, Jesse G. "Apriority in naturalized epistemology: investigation into a modern defense." unrestricted, 2007. http://etd.gsu.edu/theses/available/etd-11272007-193136/.

Full text
Abstract:
Thesis (M.A.)--Georgia State University, 2007.
Title from file title page. George W. Rainbolt, committee chair; Jessica Berry, Steve Jacobson, committee members. Electronic text (43 p.) : digital, PDF file. Description based on contents viewed Jan 18, 2008. Includes bibliographical references (p. 43).
APA, Harvard, Vancouver, ISO, and other styles
20

Ebert, Philip A. "The context principle and implicit definitions : towards an account of our a priori knowledge of arithmetic." Thesis, University of St Andrews, 2005. http://hdl.handle.net/10023/14916.

Full text
Abstract:
This thesis is concerned with explaining how a subject can acquire a priori knowledge of arithmetic. Every account of arithmetical, and in general mathematical, knowledge faces Benacerraf's well-known challenge, i.e. how to reconcile the truths of mathematics with what can be known by ordinary human thinkers. I suggest four requirements that jointly make up this challenge and discuss and reject four distinct solutions to it. This will motivate a broadly Fregean approach to our knowledge of arithmetic and mathematics in general. Pursuing this strategy appeals to the context principle which, it is proposed, underwrites a form of Platonism and explains how reference to and object-directed thought about abstract entities is, in principle, possible. I discuss this principle and defend it against different criticisms as put forth in recent literature. Moreover, I will offer a general framework for implicit definitions by means of which - without an appeal to a faculty of intuition or purely pragmatic considerations - a priori and non-inferential knowledge of basic mathematical principles can be acquired. In the course of this discussion, I will argue against various types of opposition to this general approach. Also, I will highlight crucial shortcomings in the explanation of how implicit definitions may underwrite a priori knowledge of basic principles in broadly similar conceptions. In the final part, I will offer a general account of how non-inferential mathematical knowledge resulting from implicit definitions is best conceived which avoids these shortcomings.
APA, Harvard, Vancouver, ISO, and other styles
21

Bauer, Patrick Marcel [Verfasser]. "Artificial Bandwidth Extension of Telephone Speech Signals Using Phonetic A Priori Knowledge / Patrick Marcel Bauer." Aachen : Shaker, 2017. http://d-nb.info/1138178519/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Basoukos, Antonios. "Science, practice, and justification : the a priori revisited." Thesis, University of Exeter, 2014. http://hdl.handle.net/10871/17358.

Full text
Abstract:
History is descriptive. Epistemology is conceived as normative. It appears, then, that a historical approach to epistemology, like historical epistemology, might not be epistemically normative. In our context here, epistemology is not a systematic theory of knowledge, truth, or justification. In this thesis I approach epistemic justification from the vantage point of the practice of science. Practice is about reasoning. Reasoning, conceived as the human propensity to order perceptions, beliefs, memories, etc., in ways that permit us to have understanding, is not only about thinking. Reasoning has to do with our actions, too: in the ordering of reasoning we take into account the desires of ourselves and others. Reasoning has to do with tinkering with stuff, physical or abstract. Practice is primarily about skills. Practices are not mere groping. They have a form. Performing according to a practice is an activity with a lot of plasticity. The skilled performer retains the form of the practice in many different situations. Finally, practices are not static in time. Practices develop. People try new things, some of which may work out, others not. The technology involved in how to go about doing things in a particular practice changes, and the concepts concerning understanding what one is doing also may change. This is the point where history enters the picture. In this thesis I explore the interactions between history, reasoning, and skills from the viewpoint of a particular type of epistemic justification: a priori justification. An a priori justified proposition is a proposition which is evident independently of experience. Such propositions are self-evident. We will make sense of a priori justification in the context of regarding science as practice, so that we will be able to demonstrate that the latter accommodates the normative character of science.
APA, Harvard, Vancouver, ISO, and other styles
23

Campbell, Douglas Ian. "A Theory of Consciousness." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/195372.

Full text
Abstract:
It is shown that there is an unconditional requirement on rational beings to adopt “reflexive” beliefs, these being beliefs with a very particular sort of self-referential structure. It is shown that whoever adopts such beliefs will thereby adopt beliefs that imply that a certain proposition, Ψ, is true. From the fact that there is this unconditional requirement on rational beings to adopt beliefs that imply Ψ, it is concluded that Ψ is knowable a priori. Ψ is a proposition that says, in effect, that one’s own point of view is a point in space and time that is the point of view of some being who has reflexive beliefs. It is argued that the information contained in Ψ boils down to the information that one’s point of view is located at a point in the world at which there is something that is “conscious” in a certain natural and philosophically interesting sense of that word. In other words, a theory of consciousness is defended according to which an entity is conscious if and only if it has reflexive beliefs.
APA, Harvard, Vancouver, ISO, and other styles
24

Los, Artem. "Modelling an individual's selection of a partner in a speed-dating experiment using a priori knowledge." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208668.

Full text
Abstract:
Speed dating is a relatively new concept that allows researchers to study various theories related to mate selection. A problem with current research is that it focuses on finding general trends and relationships between the attributes. This report explores the use of machine learning techniques to predict whether an individual will want to meet his partner again after the 4-minute meeting, based on the attributes that were known before they met. We will examine whether Random Forest or Extremely Randomized Trees perform better than Support Vector Machines for both limited attributes (describing appearance only) and extended attributes (including answers to some questions about their preferences). It is shown that Random Forests perform better than Support Vector Machines and that extended attributes give better results for both classifiers. Furthermore, it is observed that the more information is known about the individuals, the better a classifier performs. The partner's clubbing preference stands out as an important attribute, followed by the same preference for the individual.
Speed dating är ett relativt nytt koncept som tillåter forskare att studera olika teorier relaterade till val av partner. Ett problem med nuvarande forskning är att den fokuserar på att hitta generella trender och samband mellan attribut. Den här rapporten utforskar användning av maskinlärningsteknik för att förutsäga om en individ kommer vilja träffa sin partner igen efter ett 4-minuters möte baserat på deras attribut som var tillgängliga innan de träffades. Vi kommer att undersöka om Random Forest eller Extremely Randomized Trees fungerar bättre än Support Vector Machine för både begränsade attribut (beskriver bara utseende) och utökade attribut (inkluderar svar på några frågor om deras preferenser). Det visas att Random Forest fungerar bättre än Support Vector Machines och att utökade attribut ger bättre resultat för båda klassificerarna. Dessutom är det observerat att ju mer information som finns tillgänglig om individerna, desto bättre resultat ger en klassificerare. Partners preferens för att besöka nattklubbar står ut som ett viktigt attribut, följt av individers samma preferens för individen.
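For illustration, the classifier comparison this abstract describes maps directly onto scikit-learn. This is a minimal sketch under invented data; the attributes and labels are hypothetical stand-ins for the real speed-dating dataset, not the thesis code.

```python
# Minimal sketch of the comparison described above: Random Forest,
# Extremely Randomized Trees and an RBF SVM on pre-meeting attributes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 12))                         # 12 attributes per pair (stand-in)
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # "wants to meet again" (stand-in)

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "extra trees": ExtraTreesClassifier(n_estimators=200, random_state=0),
    "svm (rbf)": SVC(kernel="rbf", gamma="scale"),
}
for name, clf in classifiers.items():
    # 5-fold cross-validated accuracy for each classifier
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```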
APA, Harvard, Vancouver, ISO, and other styles
25

Jonker, Anneliene. "Synthetic Lethality and Metabolism in Ewing Sarcoma : Knowledge Through Silence." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA11T039/document.

Full text
Abstract:
Le sarcome de Ewing est la seconde tumeur pédiatrique de l’os la plus fréquente. Elle est caractérisée par une translocation chromosomique résultant à la fusion de EWSR1 avec un membre de la famille ETS. Chez 85% des patients, cette fusion conduit à l’expression de la protéine chimérique EWS-FLI1 qui est l’oncogène majeur de ce sarcome. Ce dernier agit principalement par son action transcriptionnelle sur des cibles qui lui sont propres. Au niveau thérapeutique, le sarcome d’Ewing est traité par chimiothérapie, chirurgie locale et par radiothérapie. La survie à long terme des patients est de l’ordre de 70%, mais beaucoup plus basse pour les patients métastatiques et quasi nulle lors d’une récidive. Parmi maintes caractéristiques, certains cancers présentent une dérégulation énergétique. L’influence d’EWS-FLI1 sur cet aspect n’a fait l’objet d’aucune étude dans le contexte du sarcome d’Ewing. Nous avons donc étudié par profilage métabolomique des cellules de sarcome d’Ewing en présence ou en absence d’EWS-FLI1. En comparant ces deux conditions, des modulations du profil énergétique relatif au cycle de Krebs, des précurseurs de la glycosylation ainsi que des métabolites de la voie de la méthionine et du tryptophane ont été observées. En parallèle, grâce à un crible de banque de shRNAs réalisé dans des conditions expérimentales similaires à l’étude métabolomique (lignée d’Ewing avec ou sans EWS-FLI1), nous avons pu identifier des gènes présentant des caractéristiques « synthétiques létales », c’est-à-dire tuant uniquement les cellules du sarcome d’Ewing en présence de son oncogène.
Ewing sarcoma, the second most commonly occurring pediatric bone tumor, is most often characterized by a chromosomal translocation between EWSR1 and FLI1. The gene fusion EWS-FLI1 accounts for 85% of all Ewing sarcoma and is considered the major oncogene and master regulator of Ewing sarcoma. EWS-FLI1 is a transcriptional modulator of targets, both directly and indirectly. Ewing sarcoma is aggressively treated with chemotherapy, localized surgery and radiation and has an overall survival of about 70%; however, survival for metastatic or relapsed cases remains low. One of the cancer hallmarks, metabolic deregulation, is most likely partly dependent on EWS-FLI1 in Ewing sarcoma cells. In order to get a better understanding of Ewing sarcoma biology and oncogenesis, it might be of high interest to investigate the influence of EWS-FLI1 in Ewing sarcoma cells. We therefore performed global metabolic profiling of Ewing sarcoma cells with or without inhibition of EWS-FLI1. Several changes in the energy metabolism were observed throughout this study; the observed changes were consistent with an energy profile that moved from a cancer cell energy metabolism towards the energy metabolism of a more normal cell upon EWS-FLI1 inhibition, primarily based on the TCA cycle. Levels of TCA intermediates, glycosylation precursors, methionine pathway metabolites and amino acids, especially changes in the tryptophan metabolic pathway, were altered upon EWS-FLI1 inhibition. Parallel to this study, we performed a high-throughput synthetic lethality screen, in order to not only identify essential genes for cell survival and proliferation, but also to identify new synthetic lethal targets that could specifically target Ewing sarcoma cells carrying the EWS-FLI1 fusion gene.
APA, Harvard, Vancouver, ISO, and other styles
26

Denaxas, Spiridon Christoforos. "A novel framework for integrating a priori domain knowledge into traditional data analysis in the context of bioinformatics." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492124.

Full text
Abstract:
Recent advances in experimental technology have given scientists the ability to perform large-scale multidimensional experiments involving large data sets. As a direct implication, the amount of data that is being generated is rising in an exponential manner. However, in order to fully scrutinize and comprehend the results obtained from traditional data analysis approaches, it has been proven that a priori domain knowledge must be taken into consideration. Infusing existing knowledge into data analysis operations, however, is a non-trivial task which presents a number of challenges. This research is concerned with utilizing a structured ontology representing the individual elements composing such large data sets for assessing the results obtained. More specifically, statistical natural language processing and information retrieval methodologies are used in order to provide a seamless integration of existing domain knowledge in the context of cluster analysis experiments on gene product expression patterns. The aim of this research is to produce a framework for integrating a priori domain knowledge into traditional data analysis approaches. This is done in the context of DNA microarrays and gene expression experiments. The value added by the framework to the existing body of research is twofold. First, the framework provides a figure of merit score for assessing and quantifying the biological relatedness between individual gene products. Second, it proposes a mechanism for evaluating the results of data clustering algorithms from a biological point of view.
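As a toy illustration of the kind of ontology-derived "figure of merit" for biological relatedness the abstract describes, the sketch below computes Resnik's information-content similarity over a miniature mock ontology. Every term, gene product and annotation here is invented for the example; it is not the thesis' framework.

```python
# Toy illustration: Resnik information-content similarity over a mock,
# GO-like ontology. All terms, gene products and annotations are invented.
import math

parents = {"leaf_a": ["mid"], "leaf_b": ["mid"], "other": ["root"],
           "mid": ["root"], "root": []}                  # child -> parents (a DAG)
annotations = {"g1": {"leaf_a"}, "g2": {"leaf_b"}, "g3": {"other"}}

def ancestors(term):
    """Return the term together with all of its ancestors."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents[t])
    return seen

# Information content: -log of the fraction of gene products annotated
# at or below each term.
counts = {t: 0 for t in parents}
for terms in annotations.values():
    for t in set().union(*(ancestors(x) for x in terms)):
        counts[t] += 1
ic = {t: -math.log(c / len(annotations)) for t, c in counts.items() if c}

def resnik(g1, g2):
    """IC of the most informative ancestor shared by two gene products."""
    common = set.intersection(*(set().union(*(ancestors(t) for t in annotations[g]))
                                for g in (g1, g2)))
    return max(ic[t] for t in common)

print(resnik("g1", "g2"))  # related via "mid"; higher than resnik("g1", "g3")
```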
APA, Harvard, Vancouver, ISO, and other styles
27

Abruzzo, Vincent G. "Content and Contrastive Self-Knowledge." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/philosophy_theses/108.

Full text
Abstract:
It is widely believed that we have immediate, introspective access to the content of our own thoughts. This access is assumed to be privileged in a way that our access to the thought content of others is not. It is also widely believed that, in many cases, thought content is individuated according to properties that are external to the thinker's head. I will refer to these theses as privileged access and content externalism, respectively. Though both are widely held to be true, various arguments have been put forth to the effect that they are incompatible. This charge of incompatibilism has been met with a variety of compatibilist responses, each of which has received its own share of criticism. In this thesis, I will argue that a contrastive account of self-knowledge is a novel compatibilist response that shows significant promise.
APA, Harvard, Vancouver, ISO, and other styles
28

Barros, Cardoso da Silva André [Verfasser], and A. [Akademischer Betreuer] Moreira. "A Priori Knowledge-Based Post-Doppler STAP for Traffic Monitoring with Airborne Radar / André Barros Cardoso da Silva ; Betreuer: A. Moreira." Karlsruhe : KIT-Bibliothek, 2019. http://d-nb.info/1199458635/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Kaiser, Julius A., and Fredrick W. Herold. "ANTENNA CONTROL FOR TT&C ANTENNA SYSTEMS." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/608253.

Full text
Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
A thinned array sensor system develops error voltages for steering dish antennas from signals arriving over a broad range of angles, thereby eliminating the need for a priori knowledge of signal location.
APA, Harvard, Vancouver, ISO, and other styles
30

Lapine, Lewis A. Commander. "Analytical calibration of the airborne photogrammetric system using a priori knowledge of the exposure station obtained from kinematic global positioning system techniques." The Ohio State University, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487685204967272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Cooke, Jeffrey L. "Techniques and methodologies for intelligent A priori determination and characterisation of information required by decision makers." Thesis, Queensland University of Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
32

Wei, Yulei. "Genetic Knowledge-based Artificial Control over Neurogenesis in Human Cells Using Synthetic Transcription Factor Mimics." Kyoto University, 2018. http://hdl.handle.net/2433/232265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Andrade, Mauren Louise Sguario Coelho de. "Uma nova abordagem do método Level Set baseada em conhecimento a priori da forma." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1686.

Full text
Abstract:
CAPES
A análise do comportamento dos fluidos em escoamentos multifásicos possui grande relevância para garantia de segurança em instalações industriais. O uso de equipamentos para monitorar tal comportamento fica sujeito a fatores tais como alto investimento e mão de obra especializada. Neste contexto, a aplicação de técnicas de processamento de imagens na análise do escoamento seria de grande auxílio; no entanto, poucas pesquisas foram desenvolvidas. Nesta tese, uma nova abordagem para segmentação de imagens baseada no método Level Set, que une contornos ativos e conhecimento a priori, é desenvolvida. Para tanto, um modelo da forma do objeto alvo é treinado e definido por meio do modelo de distribuição de pontos e então inserido como uma função de velocidade de extensão para evolução da curva de nível zero do método Level Set. A abordagem proposta cria um framework que consiste em três termos de energia e uma função de velocidade de extensão: λLg(φ) + νAg(φ) + μP(φ) + θf. Os três primeiros termos desta equação são os mesmos introduzidos em (LI CHENYANG XU; FOX, 2005) e a última parcela, θf, é baseada na representação da forma do objeto proposta nesta tese. Duas variações do método são utilizadas: uma com restrição (Restrict Level Set - RLS) e outra sem restrição (Free Level Set - FLS). A primeira será utilizada na segmentação de imagens que contêm alvos com pouca variação na forma e pose. A segunda deve ser utilizada para a identificação correta da forma de bolhas de gás no escoamento bifásico gás-líquido. A robustez e a eficiência das abordagens RLS e FLS são apresentadas em imagens do escoamento bifásico gás-líquido e na base de dados HTZ (FERRARI et al., 2009). Os resultados promissores confirmam o bom desempenho do algoritmo proposto (RLS e FLS) e indicam que a abordagem pode ser utilizada como um método eficiente para validação e/ou calibração de diversos equipamentos utilizados como medidores das propriedades do escoamento bifásico, bem como em outros problemas de segmentação de imagens.
The analysis of fluid behavior in multiphase flow is very relevant to guarantee system safety. The use of equipment to describe such behavior is subject to factors such as the high level of investment and of specialized labor. The application of image processing techniques to flow analysis can be a good alternative; however, very little research has been developed. On this subject, this study aims at developing a new approach to image segmentation based on the Level Set method that connects active contours and prior knowledge. In order to do that, a shape model of the targeted object is trained and defined through a point distribution model, and this model is later inserted as one of the extension velocity functions for the curve evolution at the zero level of the Level Set method. The proposed approach creates a framework that consists of three energy terms and an extension velocity function: λLg(φ) + νAg(φ) + μP(φ) + θf. The first three terms of the equation are the same ones introduced in (LI CHENYANG XU; FOX, 2005), and the last term, θf, is based on the representation of object shape proposed in this work. Two method variations are used: one restricted (Restrict Level Set - RLS) and the other with no restriction (Free Level Set - FLS). The first is used in the segmentation of images that contain targets with little variation in shape and pose. The second is used to correctly identify the shape of the bubbles in gas-liquid two-phase flows. The efficiency and robustness of the RLS and FLS approaches are demonstrated on images of gas-liquid two-phase flow and on the HTZ image dataset (FERRARI et al., 2009). The results confirm the good performance of the proposed algorithm (RLS and FLS) and indicate that the approach may be used as an efficient method to validate and/or calibrate the various existing equipment used as meters for two-phase flow properties, as well as in other image segmentation problems.
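For reference, the energy expression garbled in the original record is presumably the functional below. This is a reconstruction assuming the notation of the cited Li, Xu, Gui and Fox (2005) formulation, with φ the level set function and θf the shape term added by the thesis:

```latex
% Reconstructed total energy; \phi is the level set function. The first three
% terms follow Li, Xu, Gui & Fox (2005); \theta_f is the thesis' shape term.
\mathcal{E}(\phi) = \lambda\,L_g(\phi) + \nu\,A_g(\phi) + \mu\,P(\phi) + \theta_f
% L_g(\phi): edge-weighted length of the zero level curve
% A_g(\phi): edge-weighted area enclosed by the curve
% P(\phi):   penalty keeping \phi close to a signed distance function
% \theta_f:  shape-prior (extension velocity) term from the trained
%            point distribution model
```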
APA, Harvard, Vancouver, ISO, and other styles
34

Xiang, Bo. "Knowledge-based image segmentation using sparse shape priors and high-order MRFs." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2013. http://www.theses.fr/2013ECAP0066/document.

Full text
Abstract:
Nous présentons dans cette thèse une approche nouvelle de la segmentation d’images, avec des descripteurs a priori utilisant des champs de Markov d’ordre supérieur. Nous représentons le modèle de forme par un graphe de distribution de points qui décrit les informations a priori des invariants de pose grâce à des cliques L1 discrètes d’ordre supérieur. Chaque clique de triplet décrit les variations statistiques locales de forme par des mesures d’angle,ce qui assure l’invariance aux transformations globales (translation, rotation et échelle). L’apprentissage d’une structure de graphe discret d’ordre supérieur est réalisé grâce à l’apprentissage d’un champ de Markov aléatoire utilisant une décomposition duale, ce qui renforce son efficacité tout en préservant sa capacité à rendre compte des variations.Nous introduisons la connaissance a priori d’une manière innovante pour la segmentation basée sur un modèle. Le problème de la segmentation est ici traité par estimation statistique d’un maximum a posteriori (MAP). L’optimisation des paramètres de la modélisation- c’est à dire de la position des points de contrôle - est réalisée par le calcul d’une fonction d’énergie globale de champs de Markov (MRF). On combine ainsi les calculs statistiques régionaux et le suivi des frontières avec la connaissance a priori de la forme.Les descripteurs invariants sont estimés par des potentiels de Markov d’ordre 2, tandis que les caractéristiques régionales sont transposées dans un espace de caractéristiques et calculées grâce au théorème de la Divergence.De plus, nous proposons une nouvelle approche pour la segmentation conjointe de l’image et de sa modélisation ; cette méthode permet d’obtenir une segmentation plus fine lorsque la délimitation précise d’un objet est recherchée. Un modèle graphique combinant l’information a priori et les informations de pixel est développé pour réaliser l’unité des modules "top-down" et "bottom-up". La cohérence entre l’image et sa modélisation est assurée par une décomposition qui associe les parties du modèle avec la labellisation de chaque pixel.Les deux champs de Markov d’ordre supérieur considérés sont optimisés par les algorithmes de l’état de l’art. Les résultats prometteurs dans les domaines de la vision par ordinateur et de l’imagerie médicale montrent le potentiel de cette méthode appliquée à la segmentation
In this thesis, we propose a novel framework for knowledge-based segmentation using high-order Markov Random Fields (MRFs). We represent the shape model as a point distribution graphical model which encodes pose-invariant shape priors through L1 sparse higher-order cliques. Each triplet clique encodes the local shape variation statistics on the angle measurements, which inherit invariance to global transformations (i.e. translation, rotation and scale). A sparse higher-order graph structure is learned through MRF training using dual decomposition, boosting efficiency while preserving its ability to represent the shape variation. We incorporate the prior knowledge in a novel framework for model-based segmentation. We address the segmentation problem as a maximum a posteriori (MAP) estimation in a probabilistic framework. A global MRF energy function is defined to jointly combine regional statistics, boundary support as well as shape prior knowledge for estimating the optimal model parameters (i.e. the positions of the control points). The pose-invariant priors are encoded in second-order MRF potentials, while regional statistics acting on a derived image feature space can be exactly factorized using the Divergence theorem. Furthermore, we propose a novel framework for joint model-pixel segmentation towards a more refined segmentation when exact boundary delineation is of interest. A unified model-based and pixel-driven integrated graphical model is developed to combine both top-down and bottom-up modules simultaneously. The consistency between the model and the image space is introduced by a model decomposition which associates the model parts with pixel labeling. Both of the considered higher-order MRFs are optimized efficiently using state-of-the-art MRF optimization algorithms. Promising results on computer vision and medical image applications demonstrate the potential of the proposed segmentation methods.
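Schematically, the MAP estimation described above can be written as a single energy minimization over the control-point positions. This is a hedged reconstruction; the symbols below are assumptions for illustration, not the thesis' own notation:

```latex
% Generic MAP-as-energy form for the model-based segmentation described above.
% \theta: control-point positions; I: the image. All notation is assumed.
\theta^{*} = \arg\max_{\theta}\, p(\theta \mid I)
           = \arg\min_{\theta}\, \Big[ E_{\mathrm{region}}(\theta; I)
             + E_{\mathrm{boundary}}(\theta; I) + E_{\mathrm{shape}}(\theta) \Big]
% E_{\mathrm{region}}:   regional statistics (factorized via the Divergence theorem)
% E_{\mathrm{boundary}}: boundary support from image edges
% E_{\mathrm{shape}}:    pose-invariant higher-order shape prior (triplet cliques)
```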
APA, Harvard, Vancouver, ISO, and other styles
35

Djintcharadze, Anna. "L'A priori de la connaissance au sein du statut logique et ontologique de l'argument de Dieu de Saint Anselme: La réception médiévale de l'argument (XIIIe-XIVe siècles) = The a priori of knowledge in the context of the logical and ontological status of Saint Anselm’s proof of God: the medieval reception of the argument (13th -14th centuries)." Thesis, Boston College, 2017. http://hdl.handle.net/2345/bc-ir:107407.

Full text
Abstract:
Thesis advisor: Olivier Boulnois
Thesis advisor: Stephen F. Brown
The Dissertation Text has Three Parts. Each paragraph is referred at the end to the Part it summarizes. My dissertation places Saint Anselm’s Ontological Argument within its original Neoplatonic context that should justify its validity. The historical thesis is that Anselm’s epistemology, underlying the Proslogion, the Monologion and De Veritate, was a natural, often unaccounted for, reflection of the essentially Neoplatonic vision that defined the pre-thirteenth century mental culture in Europe. (Introduction and Part I) This thesis is shown through the reception of Anselm’s argument by 27 XIIIth-XIVth century thinkers, whose reading of it exhibits a gradual weakening of Neoplatonic premises up to a complete change of paradigm towards the XIVth century, the first reason being the specificity of the Medieval reception of Aristotle’s teaching on first principles that is the subject of Posterior Analytics (Part II), and the second reason being the specificity of the Medieval reception of Dionysius the Areopagite (Part III, see sub-thesis 4 below). The defense of this main historical thesis aims at proving three systematic sub-theses, including a further historical sub-thesis. The Three Systematic Sub-Theses: 1) The inadequacy of rationalist and idealist epistemology in reaching and providing apodictic truths (the chief one of which is God’s existence) with ultimate ontic grounding, as well as the inadequacy of objectivistic metaphysics that underlies these epistemologies, calls for another, non-objectifying epistemic paradigm offered by the Neoplatonic (Proclian theorem of transcendence) apophatic and supra-discursive logic (kenotic epistemology) that should be a better method to achieve certainty, because of its ability to found logic in its ontic source and thus envisage thought as an experience and a mode of being in which it is grounded. Within such a dialectic, there cannot be any opposition or division either between being and thought, or between faith and reason, faith being an ontic ground of reason’s activity defined as self-transcendence. The argument of the Proslogion is thus an instance of logic that transcends itself into its own principle – into ‘that than which nothing greater can be conceived’. Such an epistemological vision is also supported by contemporary epistemology (Russell’s Paradox and Gödel’s Incompleteness Theorem) (Introduction and Part I) 2) In virtue of this apophatic and supra-discursive vision, God’s existence, thought by human mind (as expressed in the argument of the Proslogion), happens to be a common denominator between God’s inaccessible essence and the created essence of human mind, so that human consciousness can be defined as ‘con-science’ – the mind experiencing its own being as co-knowledge with God that forges being as such. (Part I) 3) However, God’s existence as a common denominator between God’s essence and the created essence of human mind cannot be legitimately accommodated within the XIIIth-XIVth century epistemology and metaphysics because of the specificity of relation between God’s essence and His attributes, typical of Medieval scholasticism and as stated by Peter Lombard and Thomas Aquinas. If this relation is kept, while at the same time God’s existence is affirmed as immanent to the human mind (God as the first object of intellect), God’s transcendence is sacrificed and He becomes subject to metaphysics (Scotus’ nominal univocity of being). 
In order to achieve real univocity between the existence of human thinking and God’s existence, one needs a relation between God’s essence and His attributes that would allow a real participation of the created in the uncreated. The configuration of such a relation, however, needs the distinction between God’s essence and His energies that Western Medieval thought did not know, but that is inherent to the Neoplatonic epistemic tradition persisting through the Eastern Church theologians and Dionysius the Areopagite up to Gregory Palamas. (Part III) Another Historical Sub-Thesis: 4) One of the reasons why Medieval readers of Anselm’s Proslogion misread it in the Aristotelian key, was that they did not have access to the original work of Dionysius the Areopagite, in which the said distinction between God’s essence and His energies is present. This is due to the fact that the Medievals read Dionysius through Eriugena’s translation. However, Eriugena was himself influenced by Augustine’s De Trinitate that exhibits an essentialist theology: in fact, it places ideas within God’s essence, which yields the notion of the created as a mere similitude, not real participation, and which ultimately makes the vision (knowledge) of God possible only in the afterlife. Since already with Augustine the relation between grace and nature is modified (grace becomes a created manifestation of God, instead of being His uncreated energy), God’s essence remains incommunicable. Similarly, God’s existence is not in any way immanent to the created world, of which the created human intellect is a part, so that it remains as transcendent to the human mind as is His incommunicable essence. This should explain why for the Medievals analogy, and eventually univocity, was the only way to say something about God, and also why they mostly could not read Anselm’s Proslogion otherwise than either in terms of propositional or modal logic. (Part III) The dissertation concludes that whilst Anselm’s epistemology in the Proslogion is an instance of Neoplatonic metaphysical tradition, the question of the possibility of certainty in epistemology, as well as the possibility of metaphysics as such, depends on the possibility of real communicability between the immanence of human predicating mind and the transcendence of God’s essence through His trans-immanent existence
APA, Harvard, Vancouver, ISO, and other styles
36

Veneziano, Dario. "Knowledge bases, computational methods and data mining techniques with applications to A-to-I RNA editing, Synthetic Biology and RNA interference." Doctoral thesis, Università di Catania, 2015. http://hdl.handle.net/10761/4085.

Full text
Abstract:
Bioinformatics, also known as Computational Biology, is a relatively new field that aims to solve biological problems through computational approaches. Among its many goals, this interdisciplinary science pursues two in particular: on the one hand, the construction of biological databases to rationally store the ever-growing amounts of data that become available, and, on the other, the development and application of algorithms to extract predictive patterns and infer new knowledge otherwise impossible to obtain from such data. This thesis presents new results on both of these aspects. Indeed, the research described in this doctoral thesis aimed at developing heuristics and data mining techniques for the collection and analysis of data on post-transcriptional regulation and RNA interference mechanisms, as well as at linking the phenomenon of A-to-I RNA editing with miRNA-mediated gene regulation. In particular, efforts were directed at developing a database for the prediction of miRNA binding sites altered by A-to-I RNA editing; an algorithm for the design of synthetic miRNAs with high specificity; and a knowledge base equipped with data mining algorithms for the functional annotation of microRNAs, proposed as a unified resource for miRNA research.
APA, Harvard, Vancouver, ISO, and other styles
37

Mattsson, Nils-Göran. "Den moderata rationalismen : Kommentarer, preciseringar och kritik av några begrepp och teser som framlagts av Laurence Bonjour i dennes In Defense of Pure Reason." Thesis, Linköping University, Department of Religion and Culture, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4543.

Full text
Abstract:

The paper contains comments, clarifications and criticism, including constructive criticism, of some theses put forward by Laurence Bonjour in his In Defense of Pure Reason.

It presents a concept of experience, dealing with the relation between the cognizer and the object of experience, that closely resembles Bonjour's. Analysis shows that Bonjour in fact operates with two concepts of the a priori, a narrow one and a broad one. The narrow one is, in my own words: according to moderate rationalism, a proposition p is a priori justified if and only if one apprehends that p must be true in every possible world. This does not mean that Bonjour does not believe in an epistemological, metaphysical and semantic realm. The broad one makes no mention of possible worlds.

In his A Priori Justification, Casullo rejects Bonjour's argument against Quine's coherentism. A defense is put forward using the concept of 'an ideal of science for apparent rational insights', drawing on the concepts of axiomatic systems and foundationalism. If we assume that the colour proposition 'nothing can be red all over and green all over at the same time' means that we are, at this very moment, representing a property in the world, then we have a superposition argument for the correctness of the proposition. This argumentation rests on the identification of colours with superposing electromagnetic waves.

APA, Harvard, Vancouver, ISO, and other styles
38

Simonato, Pierluigi. "Evaluating and expanding knowledge and awareness of health professionals on the consumption and adverse consequences of Novel Psychoactive Substances (NPS) through innovative information technologic tools." Thesis, University of Hertfordshire, 2015. http://hdl.handle.net/2299/16557.

Full text
Abstract:
Background: The rapid diffusion of Novel Psychoactive Substances (NPS) constitutes an important challenge in terms of public health and a novelty in clinical settings, where these compounds may lead to erratic symptoms, unascertained effects and multi-intoxication scenarios, especially in emergency situations. The number of NPS available on the illicit drug market is astonishing: official reports suggest the appearance of a new drug every week. NPS may be classified into many different families, such as synthetic phenethylamines, tryptamines, cathinones, piperazines, ketamine-like compounds, cannabimimetics and other plant-derived compounds, medical products and derivatives. Healthcare services and professionals are therefore often called to face this unknown 'galaxy', where NPS users seem to perceive traditional services as 'unfitting' for their needs, requiring attention quite different from that given to classic drug abusers. In this context, the Recreational Drugs European Network (ReDNet), a research project funded by the European Commission and led by the University of Hertfordshire, aimed to explore the NPS galaxy and develop information tools for vulnerable individuals and the professionals working with them. This initiative produced specific Technical Folders on new drugs and disseminated the collected information internationally through innovative communication technologies (e.g. multimedia tools, social networking and mobile phone services). Aim and objectives: The aim of this work is to evaluate and help expand the knowledge of health professionals on NPS. The key objectives are: 1) to assess the level of knowledge on NPS amongst a sample of Italian healthcare professionals; 2) to evaluate the effectiveness of dissemination tools developed by ReDNet, including an SMS-Email/mobile service (SMAIL); 3) to understand the clinical impact of NPS by providing four Technical Folders and collecting two clinical cases on NPS. Methodology: In line with these objectives, the methodological approach was articulated in three phases. Phase 1: investigating knowledge and preferred channels of information via an online survey among health professionals in Italy. This first Italian study on NPS awareness was online from February to July 2011, recruiting participants from Departments of Addiction, Psychiatry and other services. Phase 2: evaluating the ReDNet initiative. An evaluation questionnaire was designed and disseminated online to assess the various resources provided by the ReDNet project; it was online from April to July 2013, targeting professionals registered with ReDNet services. This phase also investigated the SMAIL service, a mobile application that was the latest technological tool developed by the ReDNet team. Phase 3: promoting evidence-based work in clinical practice through the preparation of four Technical Folders and two case reports. The Technical Folders followed the methodology optimised during the ReDNet experience, organising NPS data under specific headings tailored to the needs of health professionals. The case reports were collected in a Dual Diagnosis Unit in Italy ('Casa di Cura Parco dei Tigli'); the patients assessed revealed NPS use for the first time; clinical interviews were conducted to collect a full anamnesis and, for the first time, psychopathological characteristics of NPS abusers were measured using a psychometric instrument (MMPI-2).
Results: In Phase 1, Italian services, in particular interviewees (n=243) from Departments of Psychiatry and Addiction, showed a strong interest in the subject but a poor understanding of NPS: 26.7% of respondents did not know whether their patients had ever used NPS; at the same time they considered the phenomenon very relevant to their profession (e.g. psychomotor agitation [75.7%], errors in the assessment [75.7%], management of the clients [72%]); in addition, fewer than a quarter of them had reliable information on new substances. Interviewees also reported the need for easily accessible channels of information to expand their expertise in the field (including emails [70%] and dedicated websites [51.9%]). The ReDNet initiative (Phase 2) reached professionals (n=270) from European countries and various other regions; they appreciated the website above all (48.5%), which provided access to further information (in the form of academic papers, news, Technical Folders, etc.). The integration of technology-based and classic educational resources allowed professionals to self-educate (52.6%) and to supply information for research (33.7%) with up-to-date and reliable information. In the same Phase, the first 557 searches of the SMAIL service were analysed: in the pilot period, 122 professionals used SMS inquiries (95%) to ask for information on NPS, highlighting the increasing number of NPS available on the market. The Technical Folders (Phase 3) described two new phenethylamines (Bromo-dragonfly and 25I-NBOMe), a novel ethno drug (Kratom) and a new synthetic cathinone (alpha-PVP), whose severe effects were also described in one of the clinical cases. The first case report (Alice) involved a clubber who used mephedrone and other NPS, with a severe worsening of her psychiatric disturbances; the second (Marvin) described a patient who was referred by a psychiatric service and revealed himself to be a 'psychonaut' with an intense abuse of alpha-PVP. Conclusions: The exploration of the NPS galaxy is a new challenge for healthcare professionals. In this study, Italian services seemed unprepared to face the emergency and requested rapid access to reliable information; the ReDNet project provided both technology-based and traditional resources to expand knowledge on NPS, making professionals more aware of emerging issues and especially helping clinicians working in the field (e.g. via the SMAIL service and Technical Folders). Overall, effective NPS information services targeted at professionals should include an online interface integrating up-to-date information, describing NPS through specific Technical Folders and disseminating scientific literature; the use of technological tools, including mobile applications, is an important strategy to support health professionals in their activity. Finally, more 'visual' guidelines, possibly in the form of a 'map' of these heterogeneous compounds, could be a useful framework to describe NPS to physicians and other professionals, who are often unprepared and lack confidence to face such an expanding galaxy.
APA, Harvard, Vancouver, ISO, and other styles
39

Danglade, Florence. "Traitement de maquettes numériques pour la préparation de modèles de simulation en conception de produits à l'aide de techniques d'intelligence artificielle." Thesis, Paris, ENSAM, 2015. http://www.theses.fr/2015ENAM0045/document.

Full text
Abstract:
Controlling the well-known triptych of cost, quality and time during the different phases of the Product Development Process (PDP) is an everlasting challenge for industry. Among the numerous issues to be addressed, the development of new methods and tools to adapt the models used all along the PDP to various needs is certainly one of the most challenging and promising areas of improvement. This is particularly true for the adaptation of CAD (Computer-Aided Design) models to CAE (Computer-Aided Engineering) applications. Today, even though methods and tools exist, such a preparation phase still requires deep knowledge and a huge amount of time when considering Digital Mock-Ups (DMU) composed of several hundred thousand parts. Thus, being able to estimate a priori the impact of the DMU preparation process on the simulation results would help identify the best process right from the beginning, ensuring better control of processes and preparation costs. This thesis addresses this difficult problem using Artificial Intelligence (AI) techniques to learn and accurately predict behaviors from carefully selected examples. The main idea is to identify rules from these examples, used as inputs of learning algorithms. Once those rules are obtained, they can be applied a priori to new cases, for which the impact of a preparation process can then be estimated without having to perform it. To reach this objective, a method to build a representative database of examples has been developed, the right input and output variables have been identified, and the learning model and its associated control parameters have been tuned. The performance of a preparation process is assessed by criteria such as preparation costs, analysis costs and the errors induced by the simplifications on the analysis results. The first challenge of the proposed approach is to extract and select the most relevant input variables from the original and prepared 3D models, completed with data characterizing the preparation processes. Another challenge is to configure learning models able to assess with good accuracy the quality of a process, despite a limited number of examples of preparation processes and limited available data (the only data known for a new case are those characterizing the original CAD models and the simulation case). In the end, the estimator of the process performance will help analysts select CAD model preparation operations. This does not exempt analysts from running the numerical simulation, but it allows a better-quality simplified model to be obtained faster. The rules linking the output variables to the input ones are obtained using AI techniques such as the well-known neural networks and decision trees. The proposed approach is illustrated and validated on industrial examples in the context of CFD simulations.
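To make the idea concrete, here is a minimal sketch, on synthetic data, of how a decision-tree classifier (one of the AI techniques the abstract names) could serve as such an a priori estimator of preparation-process quality. The feature names, thresholds and labels are hypothetical illustrations, not variables from the thesis.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(1_000, 1_000_000, n),  # hypothetical: part count of the DMU
    rng.uniform(0.0, 1.0, n),           # hypothetical: fraction of details removed
    rng.uniform(0.1, 10.0, n),          # hypothetical: target mesh size (mm)
])
# Hypothetical label: 1 if the prepared model kept the analysis error acceptable.
y = ((X[:, 1] < 0.6) & (X[:, 2] < 5.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
# A priori prediction for a new CAD model and candidate preparation process:
print("acceptable?", clf.predict([[200_000, 0.3, 2.0]]))
```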
APA, Harvard, Vancouver, ISO, and other styles
40

Dickens, Erik. "Towards automatic detection and visualization of tissues in medical volume rendering." Thesis, Linköping University, Department of Science and Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9800.

Full text
Abstract:

The technique of volume rendering can be a powerful tool when visualizing 3D medical data sets. Its characteristic of capturing 3D internal structures within a 2D rendered image makes it attractive for analysis. However, the applications that implement this technique fail to reach most of the intended end-users at today's clinics and radiology departments. This is primarily due to problems centered on the design of the Transfer Function (TF), the tool that makes tissues visually appear in the rendered image. Interaction with the TF is too complex for the intended end-user, and its capability of separating tissues is often insufficient. This thesis presents methods for detecting the regions in the image volume where tissues are contained. The tissues of interest can furthermore be identified among these regions. This processing and classification is possible thanks to the use of a priori knowledge, i.e. what is known about the data set and its domain in advance. The identified regions can finally be visualized using tissue-adapted TFs that can create cleaner renderings of tissues where a normal TF would fail to separate them. In addition, an intuitive user control is presented that allows the user to easily interact with the detection and the visualization.
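As an illustration of the tissue-adapted TF idea, the following minimal sketch maps voxel intensities to RGBA values using per-tissue intensity ranges; the ranges and colors are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Tissue-adapted TF: each detected tissue gets its own (intensity range, RGBA)
# mapping, so tissues can be rendered with separate colors and opacities.
TISSUE_TF = {
    "soft_tissue": ((20, 80),    (1.0, 0.6, 0.5, 0.05)),
    "bone":        ((300, 2000), (1.0, 1.0, 0.9, 0.80)),
}

def classify_and_shade(intensity: np.ndarray) -> np.ndarray:
    """Map a volume of scalar intensities to RGBA using per-tissue TFs."""
    rgba = np.zeros(intensity.shape + (4,))
    for (lo, hi), color in TISSUE_TF.values():
        mask = (intensity >= lo) & (intensity <= hi)
        rgba[mask] = color
    return rgba

volume = np.random.default_rng(1).integers(-1000, 2000, (8, 8, 8))
print(classify_and_shade(volume).shape)  # (8, 8, 8, 4)
```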

APA, Harvard, Vancouver, ISO, and other styles
41

Azam, Farooq. "Biologically Inspired Modular Neural Networks." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27998.

Full text
Abstract:
This dissertation explores modular learning in artificial neural networks, mainly driven by inspiration from the neurobiological basis of human learning. The presented modularization approaches to neural network design and learning are inspired by engineering, complexity, psychological and neurobiological considerations. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning inspirations that can subsequently be utilized to design artificial neural networks. Artificial neural networks are touted as a neurobiologically inspired paradigm that emulates the functioning of the vertebrate brain. The brain is a highly structured entity, with localized regions of neurons specialized in performing specific tasks. On the other hand, mainstream monolithic feed-forward neural networks are generally unstructured black boxes, which is their major performance-limiting characteristic. The non-explicit structure and monolithic nature of current mainstream artificial neural networks result in the lack of a capability to systematically incorporate functional or task-specific a priori knowledge into the design process. The problems caused by these limitations are discussed in detail in this dissertation, and remedial solutions driven by the functioning of the brain and its structural organization are presented. This dissertation also presents an in-depth study of the currently available modular neural network architectures, highlighting their shortcomings and investigating new modular artificial neural network models in order to overcome them. The resulting proposed modular neural network models have greater accuracy, better generalization, a comprehensible simplified neural structure, ease of training and more user confidence. These benefits are readily apparent for certain problems, depending upon the availability and usage of a priori knowledge about those problems. The modular neural network models presented in this dissertation exploit the capabilities of the principle of divide and conquer in the design and learning of modular artificial neural networks. The strategy of divide and conquer solves a complex computational problem by dividing it into simpler sub-problems and then combining the individual solutions into a solution to the original problem. The divisions of a task considered in this dissertation are the automatic decomposition of the mappings to be learned, decompositions of the artificial neural networks to minimize harmful interaction during the learning process, and explicit decomposition of the application task into sub-tasks that are learned separately. The versatility and capabilities of the proposed modular neural networks are demonstrated by experimental results. A comparison of current modular neural network design techniques with those introduced in this dissertation is also presented for reference. The results presented in this dissertation lay a solid foundation for the design and learning of artificial neural networks with a sound neurobiological basis, leading to superior design techniques. Areas of future research are also presented.
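The divide-and-conquer idea can be illustrated with a minimal sketch of a modular network in which a gate blends the outputs of specialized expert modules; the architecture and sizes below are illustrative assumptions, not the dissertation's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, W2):
    """One small expert module: a single-hidden-layer MLP."""
    h = np.tanh(x @ W1)
    return h @ W2

d_in, d_hidden, d_out, n_experts = 4, 8, 2, 3
experts = [(rng.normal(size=(d_in, d_hidden)) * 0.5,
            rng.normal(size=(d_hidden, d_out)) * 0.5)
           for _ in range(n_experts)]
W_gate = rng.normal(size=(d_in, n_experts)) * 0.5

def modular_forward(x):
    # The gate decides (here: from random initial weights) which sub-problem
    # an input belongs to; the output is the gated sum of expert outputs.
    logits = x @ W_gate
    gate = np.exp(logits) / np.exp(logits).sum()
    outs = np.stack([mlp_forward(x, W1, W2) for W1, W2 in experts])
    return (gate[:, None] * outs).sum(axis=0)

print(modular_forward(rng.normal(size=d_in)))
```

Training would then update the gate and each expert separately, so that learning one sub-task interferes as little as possible with the others.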
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
42

Kimura, Yasuko. "The process of inter-firm acquisition of knowledge through collaboration : with a special emphasis on Japanese JISEDAI fine ceramics and synthetic metals collaborative R and D projects." Thesis, University of Sussex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.394268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Zacharias, Sebastian. "The Darwinian revolution as a knowledge reorganization." Doctoral thesis, Humboldt-Universität zu Berlin, Philosophische Fakultät I, 2015. http://dx.doi.org/10.18452/17145.

Full text
Abstract:
The dissertation makes three contributions to research: (1) It develops a novel 4-level model of scientific theories which combines logical-empirical ideas (Carnap, Popper, Frege) with concepts of metaphors and narratives (Wittgenstein, Burke, Morgan), providing a new powerful toolbox for the analysis and comparison of scientific theories and overcoming or softening contradictions in logical-empirical models (realism vs. empiricism, analytic vs. synthetic statements, holism, theory-laden observations, scientific explanations, demarcation). (2) Based on this model, the dissertation compares six biological theories from Lamarck (1809), via Cuvier (1811), Geoffroy St. Hilaire (1835), Chambers (1844-60), Owen (1848-68) and Wallace (1855/8) to Darwin (1859-1872) and reveals an interesting asymmetry: compared to any one of his predecessors, Darwin's theory appears very original; however, compared to all five predecessor theories together, many of these differences disappear, and only a small original contribution by Darwin remains. Thus, Darwin's is but one in a continuous series of responses to the challenges posed to biology by paleontology and biogeography since the end of the 18th century. (3) A three-level reception analysis, finally, demonstrates why we nevertheless speak of a Darwinian revolution. (i) A quantitative analysis of nearly 2,000 biological articles published in Britain between 1858 and 1876 reveals that Darwinian concepts were indeed an important theoretical innovation, but definitely not the most important of the time. (ii) When leaving the circle of biology and moving to scientists from other disciplines or educated laymen, the landscape changes: the further outside the biological community, the shallower the audience's knowledge, and the more visible Darwin's original contribution. After all, most of Darwin's contribution is found in the narrative and worldview of 19th-century biology: the levels of knowledge which laymen receive.
APA, Harvard, Vancouver, ISO, and other styles
44

Baiardi, Daniel Cerqueira. "Conhecimento, evolução e complexidade na filosofia sintética de Herbert Spencer." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/8/8133/tde-10022009-125210/.

Full text
Abstract:
This thesis is a study of Herbert Spencer's evolutionary doctrine of the gradual development of the mind, especially as it appears in the third part of his Principles of Psychology: General Synthesis (1855). The basic epistemological principles of his system of Synthetic Philosophy are studied, as well as the concepts of complexity, structure, function and teleology in his pre-Darwinian evolutionary conception. Some of the debates Spencer engaged in during the Victorian era are also examined.
APA, Harvard, Vancouver, ISO, and other styles
45

Reining, Stefan. "Apriority and Colour Inclusion." Doctoral thesis, Universitat de Barcelona, 2014. http://hdl.handle.net/10803/246105.

Full text
Abstract:
My central aim in this dissertation is to propose a new version of local scepticism regarding the a priori, namely, a version of scepticism regarding the apriority of (knowledge of) truths about certain relations between colours. The kind of relation in question is, for instance, expressed by sentences like ‘All ultramarine things are blue’ and ‘Nothing is both red all over and green all over’ – sentences that have, among defenders of the a priori, commonly been regarded as expressing paradigm examples of a priori truths. In the course of my argumentation for this kind of local scepticism regarding the a priori, I employ a relatively permissive notion of linguistic understanding (inspired by Timothy Williamson’s recent work on the a priori), according to which it is possible to obtain the relevant kind of understanding of colour terms in a certain non-standard way. The relatively permissive notion of linguistic understanding in question is, in turn, based on considerations in favour of a relatively coarse-grained conception of the primary objects of truth. Furthermore, my argumentation for the kind of local scepticism in question is based on considerations in favour of a certain conception of evidentiality, according to which a single experience-token can play both an enabling and an evidential role in the same instance of knowledge, and according to which some of the experience involved in alleged instances of a priori knowledge of the relations among colours in question plays this kind of double-role. Finally, I consider certain empirical phenomena apparently threatening the possibility of coming to understand colour terms in the non-standard way in question, and I argue that the threat posed by these phenomena is more widespread than hitherto acknowledged, and that all available ways of accommodating these phenomena are compatible with my local scepticism regarding the a priori.
APA, Harvard, Vancouver, ISO, and other styles
46

Meunier, Bogdan. "Complexity, diplomatic relationships and business creation : a cross-regional analysis of the development of productive knowledge, trade facilitation and firm entry in regional markets." Thesis, Paris 1, 2019. http://www.theses.fr/2019PA01E001/document.

Full text
Abstract:
This thesis takes a cross-regional analytical approach across three distinct economic areas to evaluate productive knowledge and diplomacy in the context of regional integration, alongside the determinants of business creation. From the angle of European integration, we introduce a new synthetic control methodology to evaluate the impact of EU accession on the economic complexity index of the new CEE member states. The results indicate that accession to the EU acted as a catalyst for the productive knowledge of countries with low levels of complexity before accession, allowing a higher rate of development in the sophistication of their product export space. Expanding our analysis to include all European countries and North African states, we proceed in a second stage to analyse the institutional and logistical infrastructure determinants of trade by extending the traditional Gravity model to incorporate elements of diplomacy (including the presence of embassies and ambassadors). Our results demonstrate the benefits of soft and hard infrastructure as well as diplomatic activity on the bilateral trade of CEE and North African countries, validating the importance of these variables as powerful drivers of regional integration. In a final part, we turn our analysis to the Russian Federation as a regional geography, with a panel regression analysis of the determinants of firm entry and exit. The empirical evaluation concludes that institutional failures and the politico-economic environment exhibit statistically significant and economically meaningful effects on both the creation and the destruction of Russian firms, with a robust estimate of the world oil price (irrespective of the difference in target regions) suggesting a possibly high exposure of each Russian region to a global crisis.
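The gravity-model extension described above can be sketched, on synthetic data, as a log-linear regression of bilateral trade on economic size, distance and a diplomacy dummy; the variables and coefficients below are invented for illustration and do not reproduce the thesis's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
log_gdp_i = rng.normal(10, 1, n)           # exporter size
log_gdp_j = rng.normal(10, 1, n)           # importer size
log_dist = rng.normal(7, 0.5, n)           # bilateral distance
embassy = rng.integers(0, 2, n)            # 1 if an embassy is present
# Synthetic "true" gravity process with a positive diplomacy effect.
log_trade = (1.0 * log_gdp_i + 1.0 * log_gdp_j - 1.2 * log_dist
             + 0.4 * embassy + rng.normal(0, 0.3, n))

# OLS on the log-linearized gravity equation.
X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist, embassy])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print(dict(zip(["const", "gdp_i", "gdp_j", "dist", "embassy"], beta.round(2))))
```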
APA, Harvard, Vancouver, ISO, and other styles
47

Filippi, Marc. "Séparation de sources en imagerie nucléaire." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT025/document.

Full text
Abstract:
In nuclear imaging (scintigraphy, SPECT, PET), diagnostics are often made using the time-activity curves (TAC) of the organs and tissues under study. These TACs represent the dynamic evolution of the distribution of a radioactive tracer injected into the patient. Their extraction is complicated by the overlapping of organs and tissues in the 2D image sequences, so source separation methods must be used to extract the TACs properly. However, the underlying separation problem is underdetermined. We propose to overcome this difficulty by incorporating spatial and temporal prior knowledge about the sources into the separation process. The main knowledge used in this work consists of the regions of interest (ROI) of the sources, which provide rich spatial information. Unlike previous work, which takes a binary approach, we integrate this knowledge robustly into the separation method, so that it is not sensitive to inter- and intra-user variability in the selection of the ROIs. The proposed generic separation method minimizes an objective function composed of a data-fidelity criterion together with penalizations and constraint relaxations expressing the prior knowledge. A study on synthetic images shows the good results of our approach compared to the state of the art. Two applications, one on the kidneys and one on the heart, illustrate the results on real clinical data.
APA, Harvard, Vancouver, ISO, and other styles
48

Belharbi, Soufiane. "Neural networks regularization through representation learning." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR10/document.

Full text
Abstract:
Neural network models, and deep models in particular, are among the leading state-of-the-art models in machine learning and have been applied in many different domains. The most successful deep neural models are those with many layers, which greatly increases their number of parameters. Training such models requires a large number of training samples, which are not always available. One of the fundamental issues in neural networks is overfitting, the issue tackled in this thesis. This problem often occurs when large models are trained using few training samples. Many approaches have been proposed to prevent the network from overfitting and improve its generalization performance, such as data augmentation, early stopping, parameter sharing, unsupervised learning, dropout and batch normalization. In this thesis, we tackle the neural network overfitting issue from a representation-learning perspective, considering the situation where few training samples are available, which is the case in many real-world applications. We propose three contributions. The first, presented in chapter 2, is dedicated to structured-output problems, performing multivariate regression when the output variable y contains structural dependencies between its components. Our proposal aims mainly at exploiting these dependencies by learning them in an unsupervised way. Validated on a facial landmark detection problem, learning the structure of the output data has been shown to improve the network's generalization and speed up its training. The second contribution, described in chapter 3, deals with the classification task, where we propose to exploit prior knowledge about the internal representation of the hidden layers in neural networks. This prior is based on the idea that samples within the same class should have the same internal representation. We formulate this prior as a penalty added to the training cost to be minimized. Empirical experiments on MNIST and its variants showed an improvement of the network's generalization when using only few training samples. Our last contribution, presented in chapter 4, shows the interest of transfer learning in applications where only few samples are available. The idea consists in re-using the filters of convolutional networks pre-trained on large datasets such as ImageNet. Such pre-trained filters are plugged into a new convolutional network with new dense layers, and the whole network is then trained on a new task. In this contribution, we provide an automatic system based on this learning scheme, with an application to the medical domain in which the task consists in localizing the third lumbar vertebra in a 3D CT scan. A pre-processing of the 3D CT scan to obtain a 2D representation and a post-processing to refine the decision are included in the proposed system. This work was done in collaboration with the clinic "Rouen Henri Becquerel Center", which provided us with data. The use of transfer learning, together with suitable pre- and post-processing, yielded good results, allowing the model to be deployed in clinical routine.
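The chapter-3 prior can be sketched as a penalty term added to the training cost, here computed as the spread of hidden representations around their class centroids; this centroid formulation and the weights are illustrative assumptions rather than the thesis's exact formulation.

```python
import numpy as np

def same_class_penalty(H: np.ndarray, y: np.ndarray) -> float:
    """Mean squared distance of each hidden representation to its class centroid."""
    penalty = 0.0
    for c in np.unique(y):
        Hc = H[y == c]
        penalty += ((Hc - Hc.mean(axis=0)) ** 2).sum()
    return penalty / len(H)

rng = np.random.default_rng(0)
H = rng.normal(size=(32, 16))   # hidden-layer outputs for a mini-batch
y = rng.integers(0, 3, 32)      # class labels
lam = 0.1                       # weight of the prior (illustrative)
task_loss = 1.234               # placeholder for, e.g., a cross-entropy term
total_cost = task_loss + lam * same_class_penalty(H, y)
print(total_cost)
```

Minimizing the total cost therefore pushes same-class samples toward identical internal representations while still fitting the main task.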
APA, Harvard, Vancouver, ISO, and other styles
49

Hejblum, Boris. "Analyse intégrative de données de grande dimension appliquée à la recherche vaccinale." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0049/document.

Full text
Abstract:
Gene expression data is recognized as high-dimensional data that needs specific statistical tools for its analysis. But in the context of vaccine trials, other measures, such as flow-cytometry measurements, are also high-dimensional. In addition, such measurements are often repeated over time. This work is built on the idea that using the maximum of available information, by modeling prior knowledge and integrating all data at hand, will improve the inference and the interpretation of biological results from high-dimensional data. First, we present an original methodological development, Time-course Gene Set Analysis (TcGSA), for the analysis of longitudinal gene expression data, taking into account prior biological knowledge in the form of predefined gene sets. Second, we describe two integrative analyses of two different vaccine studies. The first study reveals lower expression of inflammatory pathways consistently associated with lower viral rebound following an HIV therapeutic vaccine. The second study highlights the role of a testosterone-mediated group of genes linked to lipid metabolism in sex differences in the immunological response to a flu vaccine. Finally, we introduce a new model-based clustering approach for the automated treatment of cell populations from flow-cytometry data, namely a Dirichlet process mixture of skew t-distributions, with a sequential posterior approximation strategy for dealing with repeated measurements. Hence, the automatic recognition of cell populations could allow both a practical improvement of the daily work of immunologists and a better interpretation of gene expression data, after taking into account the frequency of all cell populations.
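As a rough illustration of model-based clustering of cytometry-like data, the sketch below uses scikit-learn's variational Dirichlet-process Gaussian mixture as a stand-in; the thesis's model is a Dirichlet process mixture of skew t-distributions, which this simplified Gaussian version does not implement.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic two-marker "cytometry" data drawn from three cell populations.
X = np.vstack([
    rng.normal([0, 0], 0.3, (300, 2)),
    rng.normal([3, 1], 0.4, (200, 2)),
    rng.normal([1, 4], 0.3, (100, 2)),
])
# A Dirichlet-process prior lets the model infer how many populations to use,
# up to the truncation level n_components.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
).fit(X)
labels = dpgmm.predict(X)
print("populations found:", len(np.unique(labels)))
```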
APA, Harvard, Vancouver, ISO, and other styles
50

Chiu, Hsien-I., and 邱獻儀. "How Is A Priori Knowledge possible?" Thesis, 2004. http://ndltd.ncl.edu.tw/handle/90737166593241084341.

Full text
APA, Harvard, Vancouver, ISO, and other styles