To view other types of publications on this topic, follow the link: Information-Based Complexity.

Dissertations on the topic "Information-Based Complexity"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.


Consult the top 26 dissertations for your research on the topic "Information-Based Complexity".

Next to every work in the list of references you will find the "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, provided that these are available in the record's metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Schmitt, Wagner. "A new 3D shape descriptor based on depth complexity and thickness information." Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/127030.

Full text of the source
Abstract:
Geometric models play a vital role in several fields, from the entertainment industry to scientific applications. To reduce the high cost of model creation, reusing existing models is the solution of choice. Model reuse is supported by content-based shape retrieval (CBR) techniques that help finding the desired models in massive repositories, many publicly available on the Internet. Key to efficient and effective CBR techniques are shape descriptors that accurately capture the characteristics of a shape and are able to discriminate between different shapes. We present a descriptor based on the distribution of two global features measured on a 3D shape, depth complexity and thickness, which respectively capture aspects of the geometry and topology of 3D shapes. The final descriptor, called DCT (depth complexity and thickness histogram), is a 2D histogram that is invariant to the translation, rotation and scale of geometric shapes. We efficiently implement the DCT on the GPU, allowing its use in real-time queries of large model databases. We validate the DCT with the Princeton and Toyohashi Shape Benchmarks, containing 1815 and 10000 models respectively. Results show that DCT can discriminate meaningful classes of these benchmarks, and is fast to compute and robust against shape transformations and different levels of subdivision and smoothness.
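At its core, the DCT descriptor is a joint 2D histogram over per-ray (depth complexity, thickness) samples, normalized so that shapes sampled with different ray counts remain comparable. The following is a minimal sketch of that binning step in NumPy; the sample arrays, bin counts, and the pre-normalization of thickness are illustrative assumptions standing in for the GPU ray-casting stage described in the thesis.

```python
import numpy as np

def dct_descriptor(depth_complexity, thickness, dc_bins=16, th_bins=16, max_dc=16):
    """Joint histogram of (depth complexity, thickness) samples.

    depth_complexity : int array, number of surface crossings per viewing ray
    thickness        : float array, accumulated interior length per ray, assumed
                       pre-normalized by the shape's bounding-sphere diameter
                       (a stand-in for the thesis' scale invariance).
    """
    dc_edges = np.linspace(0, max_dc, dc_bins + 1)
    th_edges = np.linspace(0.0, 1.0, th_bins + 1)
    hist, _, _ = np.histogram2d(depth_complexity, thickness,
                                bins=[dc_edges, th_edges])
    return hist / hist.sum()          # normalize -> comparable across models

# Hypothetical samples from two shapes; descriptors can then be compared with
# any histogram distance (here L1) to rank models in a retrieval query.
rng = np.random.default_rng(0)
a = dct_descriptor(rng.integers(1, 8, 5000), rng.random(5000))
b = dct_descriptor(rng.integers(1, 8, 5000), rng.random(5000))
print("L1 distance:", np.abs(a - b).sum())
```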
2

Alamoudi, Rami Hussain. "Interaction Based Measure of Manufacturing Systems Complexity and Supply Chain Systems Vulnerability Using Information Entropy." Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_dissertations/76.

Full text of the source
Abstract:
The first primary objective of this dissertation is to develop a framework that can quantitatively measure the complexity of manufacturing systems in various configurations, including conjoined and disjoined systems. In this dissertation, an analytical model for manufacturing systems complexity that employs information entropy theory is proposed and verified. The model uses the probability distribution of information regarding resource allocations, described in terms of interactions among resources for part processing and part processing requirements. In the proposed framework, both direct and indirect interactions among resources are modeled using a matrix, called the interaction matrix, which accounts for part processing and waiting times. The proposed complexity model identifies a manufacturing system that has evenly distributed interactions among resources as being more complex, because under a disruption more information is required to identify the source of the disruption. In addition, implicit relationships between system complexity and performance, in terms of resource utilization, waiting time, cycle time and throughput, are studied in this dissertation by developing a computer program for simulating a general job shop environment. The second primary objective of this dissertation is to develop a mathematical model for measuring the vulnerability of supply chain systems. Global supply chains are exposed to different kinds of disruptions, which has pushed the issue of supply chain resilience higher than ever before on business as well as supporting agendas. In this dissertation, an extension of the proposed measure of manufacturing system complexity is used to measure the vulnerability of supply chain systems using information entropy theory and an influence matrix. We define the vulnerability of supply chain systems based on the information required to describe the system in terms of its topology and the interrelationships among components. The proposed framework for vulnerability modeling focuses on disruptive events such as natural disasters, terrorist attacks, or industrial disputes, rather than on deviations such as variations in demand, procurement and transportation.
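The entropy calculation underlying such a measure is compact: normalize the interaction matrix into a probability distribution and take its Shannon entropy, so evenly distributed interactions score as more complex than concentrated ones. A small illustrative sketch (the matrices and the normalization below are assumptions for demonstration, not the dissertation's exact interaction model):

```python
import numpy as np

def interaction_entropy(interaction):
    """Shannon entropy (bits) of a normalized interaction matrix."""
    p = np.asarray(interaction, dtype=float)
    p = p / p.sum()                       # turn interaction weights into probabilities
    p = p[p > 0]                          # 0 * log(0) is treated as 0
    return float(-(p * np.log2(p)).sum())

# Evenly distributed interactions among resources -> maximal entropy (more
# information needed to locate a disruption); concentrated interactions -> lower.
even = np.ones((4, 4))
skew = np.diag([10, 1, 1, 1]) + 0.01
print(interaction_entropy(even), interaction_entropy(skew))
```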
3

Thost, Veronika. "Using Ontology-Based Data Access to Enable Context Recognition in the Presence of Incomplete Information." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-227633.

Full text of the source
Abstract:
Ontology-based data access (OBDA) augments classical query answering in databases by including domain knowledge provided by an ontology. An ontology captures the terminology of an application domain and describes domain knowledge in a machine-processable way. Formal ontology languages additionally provide semantics to these specifications. Systems for OBDA thus may apply logical reasoning to answer queries; they use the ontological knowledge to infer new information, which is only implicitly given in the data. Moreover, they usually employ the open-world assumption, which means that knowledge not stated explicitly in the data or inferred is neither assumed to be true nor false. Classical OBDA, however, regards the knowledge only with respect to a single moment, which means that information about time is not used for reasoning and hence lost; in particular, the queries generally cannot express temporal aspects. We investigate temporal query languages that allow accessing temporal data through classical ontologies. In particular, we study the computational complexity of temporal query answering with respect to ontologies written in lightweight description logics, which are known to allow for efficient reasoning in the atemporal setting and are successfully applied in practice. Furthermore, we present a so-called rewritability result for ontology-based temporal query answering, which suggests ways for implementation. Our results may thus guide the choice of a query language for temporal OBDA in data-intensive applications that require fast processing, such as context recognition.
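As a purely illustrative example (not taken from the thesis) of what such temporal query languages express, LTL-style operators can be wrapped around conjunctive-query atoms that are evaluated against the ontology and data at each time point:

```latex
% "x is located in a server room now, and at some earlier time point
%  the ontology and data entailed that x was overheating."
\varphi(x) \;=\; \exists y\,\bigl(\mathit{locatedIn}(x,y)\wedge \mathit{ServerRoom}(y)\bigr)\;\wedge\;\Diamond^{-}\mathit{Overheating}(x)
```

A rewritability result of the kind mentioned above then allows such a query, together with the ontology, to be compiled into an ordinary temporal query over the data alone.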
4

Dash, Santanu Kumar. "Adaptive constraint solving for information flow analysis." Thesis, University of Hertfordshire, 2015. http://hdl.handle.net/2299/16354.

Full text of the source
Abstract:
In program analysis, unknown properties of terms are typically represented symbolically as variables. Bound constraints on these variables can then specify multiple optimisation goals for computer programs and find application in areas such as type theory, security, alias analysis and resource reasoning. Resolution of bound constraints is a problem steeped in graph theory; interdependencies between the variables are represented as a constraint graph. Additionally, constants are introduced into the system as concrete bounds over these variables, and the constants themselves are ordered over a lattice which is, once again, represented as a graph. Despite graph algorithms being central to bound constraint solving, most approaches to program optimisation that use bound constraint solving have treated their graph-theoretic foundations as a black box. Little has been done to investigate the computational costs or design efficient graph algorithms for constraint resolution. Emerging examples of these lattices and bound constraint graphs, particularly from the domain of language-based security, show that these graphs and lattices are structurally diverse and could be arbitrarily large. Therefore, there is a pressing need to investigate the graph-theoretic foundations of bound constraint solving. In this thesis, we investigate the computational costs of bound constraint solving from a graph-theoretic perspective for Information Flow Analysis (IFA); IFA is a subfield of language-based security which verifies whether the confidentiality and integrity of classified information are preserved as it is manipulated by a program. We present a novel framework based on graph decomposition for solving the (atomic) bound constraint problem for IFA. Our approach enables us to abstract away from connections between individual vertices to those between sets of vertices in both the constraint graph and an accompanying security lattice which defines an ordering over constants. Thereby, we are able to achieve significant speedups compared to state-of-the-art graph algorithms applied to bound constraint solving. More importantly, our algorithms are highly adaptive in nature and seamlessly adapt to the structure of the constraint graph and the lattice. The computational cost of our approach is a function of the latent scope of decomposition in the constraint graph and the lattice; therefore, we enjoy the fastest runtime for every point in the structure-spectrum of these graphs and lattices. While the techniques in this dissertation are developed with IFA in mind, they can be extended to other applications of the bound constraints problem, such as type inference and program analysis frameworks which use annotated type systems, where constants are ordered over a lattice.
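As background for what the decomposition framework speeds up: the baseline for solving atomic bound constraints is a fixpoint propagation of least upper bounds over the constraint graph. The toy sketch below assumes a two-level security lattice and a handful of flow constraints purely for illustration; the thesis' algorithms target the same problem on arbitrary lattices and much larger graphs.

```python
# A tiny two-level security lattice: LOW <= HIGH, with join = max.
LOW, HIGH = 0, 1
join = max

def solve_bounds(variables, flows, fixed_lower_bounds):
    """Least solution of the atomic constraints label(u) <= label(v), (u, v) in flows,
    given fixed lower bounds for some variables (e.g. labelled sources)."""
    label = {v: fixed_lower_bounds.get(v, LOW) for v in variables}
    changed = True
    while changed:                        # naive Kleene iteration to a fixpoint
        changed = False
        for u, v in flows:
            new = join(label[v], label[u])
            if new != label[v]:
                label[v] = new
                changed = True
    return label

# password -> hash -> log is an information flow: 'log' ends up HIGH, which an
# IFA checker would reject if 'log' is required to stay LOW (public).
flows = [("password", "hash"), ("hash", "log"), ("pin", "hash")]
print(solve_bounds({"password", "pin", "hash", "log"}, flows, {"password": HIGH}))
```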
5

Eugénio, António Luís Beja. "The information systems and technology innovation process: a study using an agent-based approach." Master's thesis, Instituto Superior de Economia e Gestão, 2007. http://hdl.handle.net/10400.5/636.

Full text of the source
Abstract:
Master's degree in Information Systems Management
An abstract Agent Based Model is used to study Information Systems and Information Technology innovation in an organizational realm, using a socio-cognitive approach. The conclusion is drawn that the power of the knowledge workers in the decision to adopt an IS/IT innovation within an organization varies with the matching level of ideas between them and the top management, while being dependent on the transactions' depreciation rate, leading to a strong fluctuation of power when the environment is unstable.
6

Syed, Tamseel Mahmood. "Precoder Design Based on Mutual Information for Non-orthogonal Amplify and Forward Wireless Relay Networks." University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1392043776.

Full text of the source
7

Clément, François. "An Optimization Perspective on the Construction of Low-Discrepancy Point Sets." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS138.

Full text of the source
Abstract:
Discrepancy measures are metrics designed to quantify how well spread a point set is in a given space. Among these, the L∞ star discrepancy is arguably one of the most popular. Indeed, by the Koksma-Hlawka inequality, when replacing an integral by the average of function evaluations in specific points, the error made is bounded by a product of two terms, one depending only on the function and the other on the L∞ star discrepancy of the points. This leads to a variety of applications, from computer vision to financial mathematics and to design of experiments, where well-spread points covering a space are essential. Low-discrepancy sets used in such applications usually correspond to number theoretic designs, with a wide variety of possible constructions. Despite the high demand in practice, the design of these point sets remains largely the work of mathematicians, often more interested in finding asymptotic bounds than in adapting the point sets to the desired applications. This results in point sets that, while theoretically excellent, sometimes leave a lot to be desired for applications, in particular high-dimensional ones. Indeed, the constructions are not tailored to the many different settings found in applications and are thus suboptimal. Furthermore, not only do we not know how low the discrepancy of point sets of a given size in a fixed dimension can go, but often we do not even know the discrepancy of existing constructions. This leaves essential questions unanswered in the design of low-discrepancy sets and sequences. In this thesis, we tackle the problem of constructing low-discrepancy sets from a computational perspective. With optimization approaches applied in isolation or on top of existing sets and sequences, we provide a diverse set of methods to generate excellent low-discrepancy sets, largely outperforming the discrepancy of known constructions in a wide variety of contexts. In particular, we describe a number of examples such as provably optimal sets for very few points in dimension 2, or improved sets of hundreds of points in moderate dimensions via subset selection. Finally, we extend recent work on greedy one-dimensional sequence construction to show that greedy L2 construction of point sets provides excellent empirical results with respect to the L∞ star discrepancy.
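The L2 star discrepancy used in the greedy construction has a closed form (Warnock's formula) that is easy to evaluate, which makes quick numerical comparisons possible. A sketch, with a small 2D Halton-style set and a random set as illustrative inputs:

```python
import numpy as np

def l2_star_discrepancy(points):
    """Warnock's closed form for the L2 star discrepancy of points in [0,1)^d."""
    x = np.asarray(points, dtype=float)
    n, d = x.shape
    term1 = 3.0 ** (-d)
    term2 = (2.0 / n) * np.sum(np.prod((1.0 - x ** 2) / 2.0, axis=1))
    mx = np.maximum(x[:, None, :], x[None, :, :])      # pairwise coordinate maxima
    term3 = np.sum(np.prod(1.0 - mx, axis=2)) / n ** 2
    return np.sqrt(term1 - term2 + term3)

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence (1-D low-discrepancy)."""
    out = []
    for i in range(n):
        v, denom, k = 0.0, 1.0, i
        while k:
            k, r = divmod(k, base)
            denom *= base
            v += r / denom
        out.append(v)
    return out

n = 64
halton_2d = np.column_stack([van_der_corput(n, 2), van_der_corput(n, 3)])
random_2d = np.random.default_rng(1).random((n, 2))
print(l2_star_discrepancy(halton_2d), l2_star_discrepancy(random_2d))  # Halton should be lower
```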
8

Domercant, Jean Charles. "ARC-VM: an architecture real options complexity-based valuation methodology for military systems-of-systems acquisitions." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42928.

Full text of the source
Abstract:
An Architecture Real Options Complexity-Based Valuation Methodology (ARC-VM) is developed for use to aid in the acquisition of military systems-of-systems (SoS). ARC-VM is suitable for acquisition-level decision making, where there is a stated desire for more informed tradeoffs between cost, schedule, and performance during the early phases of design. First, a framework is introduced to measure architecture complexity as it directly relates to military SoS. Development of the framework draws upon a diverse set of disciplines, including Complexity Science, software architecting, measurement theory, and utility theory. Next, a Real Options based valuation strategy is developed using techniques established for financial stock options that have recently been adapted for use in business and engineering decisions. The derived complexity measure provides architects with an objective measure of complexity that focuses on relevant complex system attributes. These attributes are related to the organization and distribution of SoS functionality and the sharing and processing of resources. The use of Real Options provides the necessary conceptual and visual framework to quantifiably and traceably combine measured architecture complexity, time-valued performance levels, as well as programmatic risks and uncertainties. An example suppression of enemy air defenses (SEAD) capability demonstrates the development and utility of the resulting architecture complexity and Real Options based valuation methodology. Different portfolios of candidate system types are used to generate an array of architecture alternatives that are then evaluated using an engagement model. This performance data is combined with both measured architecture complexity and programmatic data to assign an acquisition value to each alternative. This proves useful when selecting alternatives most likely to meet current and future capability needs.
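The real-options engine inside such a methodology can be pictured with the standard Cox-Ross-Rubinstein binomial lattice for an American-style option to invest; the numbers below are made up, and the dissertation's contribution lies in what is fed into this machinery (complexity-adjusted values, time-valued performance, programmatic uncertainty), not in the lattice itself.

```python
import math

def binomial_real_option(value, cost, r, sigma, T, steps, kind="call"):
    """Cox-Ross-Rubinstein lattice for an American option to invest.

    value : present value of the capability (underlying asset)
    cost  : investment cost (strike); r: risk-free rate; sigma: volatility.
    """
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)              # risk-neutral up probability
    disc = math.exp(-r * dt)
    payoff = lambda s: max(s - cost, 0.0) if kind == "call" else max(cost - s, 0.0)
    vals = [payoff(value * u ** j * d ** (steps - j)) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):                # backward induction, early exercise allowed
        vals = [max(disc * (p * vals[j + 1] + (1 - p) * vals[j]),
                    payoff(value * u ** j * d ** (i - j)))
                for j in range(i + 1)]
    return vals[0]

# Illustrative numbers only: a $100M capability, $90M cost, 3-year decision window.
print(round(binomial_real_option(100.0, 90.0, 0.03, 0.35, 3.0, 200), 2))
```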
9

Мельничук, Андрій Богданович, та Andrii Melnychuk. "Методи захисту інформації в рамках предметно-орієнтованого проєктування інформаційних систем". Master's thesis, ТНТУ, 2021. http://elartu.tntu.edu.ua/handle/lib/36742.

Full text of the source
Abstract:
"Methods of Information Protection within Domain-Driven Design of Information Systems" // Master's thesis // Andrii Melnychuk // Ternopil Ivan Puluj National Technical University, Faculty of Computer Information Systems and Software Engineering, Department of Cybersecurity, group СБм-61 // Ternopil, 2021 // 61 pages, 3 tables, 10 figures, 1 appendix.
Domain-driven design (DDD) is an approach to software design. It defines practices for communicating with domain experts and a set of design rules under which the final code reflects all the concepts of the domain itself. In some cases, the separation of concerns envisaged by the domain-driven approach is difficult to achieve, namely when considering functionality that is independent of the domain but closely tied to domain-related functionality. These issues concern the part of the application responsible for security. Unfortunately, the founder of the approach did not define exactly how such logic should be combined with the domain, so this work examines different methods of providing data protection and analyses the best implementation options that can be applied in real projects.
CONTENTS: List of abbreviations; Introduction; Chapter 1. Analysis of domain-driven design and its security problem (domain-driven design; architecture and project structure under DDD; aspect-oriented programming; measuring the implementation complexity of a method; the data-protection problem in design); Chapter 2. Information protection within domain-driven design (subject of the study; evaluation criteria for security implementation options; protection embedded in the domain layer; protection moved into a separate bounded context; protection using a domain facade; protection using an aspect-oriented approach, each with its advantages, drawbacks and conclusions); Chapter 3. Implementation of the protection methods and complexity comparison (basic project structure: choice of environment and technology, core of the information system, separate security context, user interface; complexity of implementing and maintaining each method: dependency values, cyclomatic complexity; summary of results); Chapter 4. Occupational safety and safety in emergency situations (occupational safety; improving the resilience of construction-industry enterprises in wartime); Conclusions; References; Appendices.
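One of the options compared in the thesis, keeping authorization out of the domain model and weaving it in aspect-style at the application-service boundary, can be pictured with a simple decorator; the sketch below is schematic Python written for illustration, not code from the thesis.

```python
from dataclasses import dataclass
from functools import wraps

class AccessDenied(Exception):
    pass

def requires_role(role):
    """Aspect-style authorization: the domain code below stays free of security logic."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(self, user, *args, **kwargs):
            if role not in user.roles:
                raise AccessDenied(f"{user.name} lacks role {role!r}")
            return fn(self, user, *args, **kwargs)
        return wrapper
    return decorator

@dataclass
class User:
    name: str
    roles: set

@dataclass
class Order:                      # pure domain entity, no security concerns
    order_id: str
    approved: bool = False

class OrderService:               # application service at the domain boundary
    @requires_role("manager")
    def approve(self, user, order: Order):
        order.approved = True
        return order

svc = OrderService()
print(svc.approve(User("alice", {"manager"}), Order("A-1")).approved)   # True
# svc.approve(User("bob", set()), Order("A-2"))                         # -> AccessDenied
```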
10

Kuhn, John. "A THEORY OF COMPLEX ADAPTIVE INQUIRING ORGANIZATIONS: APPLICATION TO CONTINUOUS ASSURANCE OF CORPORATE FINANCIAL INFORMATION." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2432.

Full text of the source
Abstract:
Drawing upon the theories of complexity and complex adaptive systems and the Singerian Inquiring System from C. West Churchman's seminal work The Design of Inquiring Systems, the dissertation herein develops a systems design theory for continuous auditing systems. The dissertation consists of discussion of the two foundational theories, development of the Theory of Complex Adaptive Inquiring Organizations (CAIO) and associated design principles for a continuous auditing system supporting a CAIO, and instantiation of the CAIO theory. The instantiation consists of an agent-based model depicting the marketplace for Frontier Airlines that generates an anticipated market share used as an integral component in a mock auditor going-concern opinion for the airline. As a whole, the dissertation addresses the lack of an underlying system design theory and the comprehensive view needed to build upon and advance the continuous assurance movement, and addresses the question of how continuous auditing systems should be designed to produce knowledge--knowledge that benefits auditors, clients, and society as a whole.
Ph.D.
Department of Management Information Systems
Business Administration
Business Administration PhD
11

Holm, Cyril. "F. A. Hayek's Critique of Legislation." Doctoral thesis, Uppsala universitet, Juridiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-236890.

Full text of the source
Abstract:
The dissertation concerns F. A. Hayek's (1899–1992) critique of legislation. The purpose of the investigation is to clarify and assess that critique. I argue that there is in Hayek's work a critique of legislation that is distinct from his well-known critique of social planning, and further that the main claim of this critique is what I refer to as Hayek's legislation tenet, namely that legislation that aims to achieve specific aggregate results in complex orders of society will decrease the welfare level. The legislation tenet gains support: (i) from the welfare claim – according to which there is a positive correlation between the utilization of knowledge and the welfare level in society; (ii) from the dispersal of knowledge thesis – according to which the total knowledge of society is dispersed and not available to any one agency; and (iii) from the cultural evolution thesis – according to which evolutionary rules are more favorable to the utilization of knowledge in social cooperation than are legislative rules. More specifically, I argue that these form two lines of argument in support of the legislation tenet. One line of argument is based on the conjunction of the welfare claim and the dispersal of knowledge thesis. I argue that this line of argument is true. The other line of argument is based on the conjunction of the welfare claim and the cultural evolution thesis. I argue that this line of argument is false, mainly because the empirical work of political scientist Elinor Ostrom refutes it. Because the two lines of argument support the legislation tenet independently of each other, I argue that Hayek's critique of legislation is true. In this dissertation, I further develop a legislative policy tool based on the welfare claim and Hayek's conception of coercion. I also consider Hayek's idea that rules and law are instrumental in forging rational individual action and rational social orders, and review this idea in light of the work of experimental economist Vernon Smith and economic historian Avner Greif. I find that Smith and Greif support this idea of Hayek's, and I conjecture that it contributes to our understanding of Adam Smith's notion of the invisible hand: it is rules – not an invisible hand – that prompt subjects to align individual and aggregate rationality in social interaction. Finally, I argue that Hayek's critique is essentially utilitarian, as it is concerned with the negative welfare consequences of certain forms of legislation. And although it may appear that the dispersal of knowledge thesis will undermine the possibility of carrying out the utilitarian calculus, due to the lack of knowledge of the consequences of one's actions – and therefore undermine the legislation tenet itself – I argue that the distinction between utilitarianism conceived as a method of deliberation and utilitarianism conceived as a criterion of correctness may be used to save Hayek's critique from this objection.
12

Tröger, Ralph. "Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Full text of the source
Abstract:
Supply Chain Event Management (SCEM) denotes a sub-discipline of supply chain management and offers companies a way to optimize logistics performance and costs by reacting early to critical exceptional events in the value chain. Owing to conditions such as global logistics structures, a high variety of articles, and volatile business relationships, the fashion industry is among the sectors particularly vulnerable to critical disruptive events. In this vein, after outlining the essential foundations, the present dissertation first examines to what extent there actually is a need for SCEM systems in the fashion industry. Building on this, and after presenting existing SCEM architecture concepts, it sets out design options for a system architecture based on the design principles of service orientation; in this context, SCEM-relevant business services are identified, among other things. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the potential benefits of SCEM systems. After presenting approaches suitable for determining these benefits, the benefit is demonstrated by means of a practical example and, together with the results of a literature review, consolidated into a summary of SCEM benefit effects, including the additional advantages that a service-oriented architecture design offers companies. The concluding section summarizes the main findings and, in an outlook, discusses both the relevance of the results for meeting future challenges and the starting points they offer for subsequent research.
13

Fujdiak, Radek. "Analýza a optimalizace datové komunikace pro telemetrické systémy v energetice." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-358408.

Full text of the source
Abstract:
Telemetry system, Optimisation, Sensoric networks, Smart Grid, Internet of Things, Sensors, Information security, Cryptography, Cryptography algorithms, Cryptosystem, Confidentiality, Integrity, Authentication, Data freshness, Non-Repudiation.
14

"The information-based complexity of dynamic programming." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology], 1989. http://hdl.handle.net/1721.1/3123.

Full text of the source
Abstract:
Chee-Seng Chow, John N. Tsitsiklis.
Cover title.
Includes bibliographical references.
Supported by the NSF, with matching funds from Bellcore and Dupont (grant ECS-8552419), and by the ARO (grant DAAL03-86-K-0171).
15

Petras, Iasonas. "Contributions to Information-Based Complexity and to Quantum Computing." Thesis, 2013. https://doi.org/10.7916/D80V8M04.

Full text of the source
Abstract:
Multivariate continuous problems are widely encountered in physics, chemistry, finance and in computational sciences. Unfortunately, interesting real world multivariate continuous problems can almost never be solved analytically. As a result, they are typically solved numerically and therefore approximately. In this thesis we deal with the approximate solution of multivariate problems. The complexity of such problems in the classical setting has been extensively studied in the literature. On the other hand the quantum computational model presents a promising alternative for dealing with multivariate problems. The idea of using quantum mechanics to simulate quantum physics was initially proposed by Feynman in 1982. Its potential was demonstrated by Shor's integer factorization algorithm, which exponentially improves the cost of the best classical algorithm known. In the first part of this thesis we study the tractability of multivariate problems in the worst and average case settings using the real number model with oracles. We derive necessary and sufficient conditions for weak tractability for linear multivariate tensor product problems in those settings. More specifically, we initially study necessary and sufficient conditions for weak tractability on linear multivariate tensor product problems in the worst case setting under the absolute error criterion. The complexity of such problems depends on the rate of decay of the squares of the singular values of the solution operator for the univariate problem. We show a condition on the singular values that is sufficient for weak tractability. The same condition is known to be necessary for weak tractability. Then, we study linear multivariate tensor product problems in the average case setting under the absolute error criterion. The complexity of such problems depends on the rate of decay of the eigenvalues of the covariance operator of the induced measure of the one dimensional problem. We derive a necessary and sufficient condition on the eigenvalues for such problems to be weakly tractable but not polynomially tractable. In the second part of this thesis we study quantum algorithms for certain eigenvalue problems and the implementation and design of quantum circuits for a modification of the quantum NAND evaluation algorithm on k-ary trees, where k is a constant. First, we study quantum algorithms for the estimation of the ground state energy of the multivariate time-independent Schrodinger equation corresponding to a multiparticle system in a box. The dimension d of the problem depends linearly on the number of particles of the system. We design a quantum algorithm that approximates the lowest eigenvalue with relative error ε for a non-negative potential V, where V as well as its first order partial derivatives are continuous and uniformly bounded by one. The algorithm requires a number of quantum operations that depends polynomially on the inverse of the accuracy and linearly on the number of the particles of the system. We note that the cost of any classical deterministic algorithm grows exponentially in the number of particles. Thus we have an exponential speedup with respect to the dimension of the problem d, when compared to the classical deterministic case. We extend our results to convex non-negative potentials V, where V as well as its first order partial derivatives are continuous and uniformly bounded by constants C and C' respectively.
The algorithm solves the eigenvalue problem for a sequence of convex potentials in order to obtain its final result. More specifically, the quantum algorithm estimates the ground state energy with relative error ε using a number of quantum operations that depends polynomially on the inverse of the accuracy, the uniform bound C on the potential and the dimension d of the problem. In addition, we present a modification of the algorithm that produces a quantum state which approximates the ground state eigenvector of the discretized Hamiltonian within Δ. This algorithm requires a number of quantum operations that depends polynomially on the inverse of ε, the inverse of Δ, the uniform bound C on the potential and the dimension d of the problem. Finally, we consider the algorithm by Ambainis et al. that evaluates balanced binary NAND formulas. We design a quantum circuit that implements a modification of the algorithm for k-ary trees, where k is a constant. Furthermore, we design another quantum circuit that consists exclusively of Clifford and T gates. This circuit approximates the previous one with error ε using the Solovay-Kitaev algorithm.
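For orientation, the notion of weak tractability referred to here is the standard one from information-based complexity: writing n(ε, d) for the information complexity of the d-variate problem at accuracy ε, the problem is weakly tractable when

```latex
\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln n(\varepsilon,d)}{\varepsilon^{-1}+d}=0,
```

i.e., n(ε, d) is exponential neither in d nor in 1/ε; polynomial tractability asks for the stronger bound n(ε, d) ≤ C ε^{-p} d^{q} for some non-negative constants C, p, q.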
16

Wu, Ming-Lu, and 吳明錄. "Low Complexity Antenna Array-Assisted Multiuser Detection Based on Information Theoretic Criteria." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/12168067347673016093.

Full text of the source
Abstract:
Doctoral dissertation
National Taiwan University of Science and Technology
Department of Electronic Engineering
Academic year 90 (ROC calendar, 2001/02)
In this dissertation, a low complexity antenna array-assisted multiuser detection (MUD) scheme based on information theoretic criteria is proposed to detect the desired signal in TDMA systems. The contributions of this dissertation include the following. First, a low complexity antenna array-assisted minimum mean-squared error (MMSE) MUD is addressed, which utilizes only partial information of the co-channel interferences (CCIs) in the demodulation process. In light of the fact that the power of the CCIs is lower than that of the desired user, as the CCIs are out-of-cell in TDMA, the proposed approach truncates the channel length of the CCIs adaptively based on the power of the channel taps of the CCIs. As the truncated channel taps only account for negligible information of the CCIs, the performance does not substantially degrade. Moreover, analytic expressions of the bit error rate (BER) performance for both the full complexity and the proposed low complexity MMSE MUD are derived. Simulation results show that the simulations and the analytic results agree well in various scenarios, and that the performance remains close even if we truncate 60% of the CCI channel length. Second, information theoretic criteria, which include the Akaike information criterion (AIC) and Rissanen's minimum description length (MDL) criterion, are employed to form a theoretical foundation for determining the effective CCI channel length in the developed MUD. Two information theoretic approaches are considered. The first one is a direct extension of previous works. Aiming at keeping the information of the desired user intact in the truncation, we propose modified information theoretic criteria, which first project the received signals onto the CCI subspace and the noise subspace before embarking on the information theoretic analysis. To assess the statistical behavior of the proposed criteria, the consistency property is also investigated. Simulations show that the developed MUD, with the effective CCI channel length determined by the information theoretic criteria, yields performance indistinguishable from the full complexity counterpart. Finally, to enhance spectrum efficiency, we address a simple yet effective dynamic channel assignment (DCA) scheme, which employs angle constraints and distance constraints as the criteria to assign frequency bands. With a combination of the developed low complexity MUD and DCA, which are complementary to each other, we can truncate more redundant information of the CCIs and thus achieve lower complexity, while still maintaining acceptable BER performance.
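For reference, the generic forms of the two order-selection criteria named here are

```latex
\mathrm{AIC}(k) = -2\ln\hat{L}_k + 2k,\qquad
\mathrm{MDL}(k) = -\ln\hat{L}_k + \tfrac{k}{2}\ln N,
```

where L̂_k is the maximized likelihood of the candidate model with k freely adjusted parameters and N is the number of observations; the selected order (here, the effective CCI channel length) is the k that minimizes the chosen criterion. The dissertation's modified criteria apply this machinery after projecting the received signal onto the CCI and noise subspaces.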
17

簡添福. "A Study and Implementation in Software Product Complexity Measurement Based on Information Theory." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/07165585496918960997.

Full text of the source
Abstract:
Master's thesis
Fo Guang College of Humanities and Social Sciences
Graduate Institute of Information Science
Academic year 90 (ROC calendar, 2001/02)
The rapid development of the software industry, fast-changing business environments, and growing consumer power have put software development to a severe test. Developers and purchasers must think more strategically about software quality and cost. If development cost can be predicted successfully at the outset of a project, the software company gains a competitive advantage. Software engineering therefore has an important subject, software complexity metrics, which studies the relationship between software complexity and development cost. Most researchers agree that software complexity and development cost are positively related, and many metrics have been proposed. Among these, the Simple Stack-Based Markov (SSBM) model is a very capable method: it overcomes the main drawback of metrics that can measure only one attribute (e.g., size, control flow, data flow), and it can accommodate newer software development techniques such as object orientation. Although SSBM is powerful, it has a weakness: it cannot express the nesting complexity of control flow, and it treats program control flow the same as expression control flow. Later researchers refined the model with respect to nesting complexity, but they still treated program control flow the same as expression control flow. In this thesis we take a different view and propose a method that uses two kinds of parentheses to solve this problem. We follow a regular and strict process for developing software metrics, which makes our model more complete, and we have carried out an experiment to confirm the model's correctness.
18

Huang, Cheng-Chun, and 黃政鈞. "Lossless Information Hiding Schemes Based on Pixels Complexity Analysis and Histogram of Predicted Coding." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/83578901036736550668.

Full text of the source
Abstract:
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Information Management
Academic year 96 (ROC calendar, 2007/08)
With the rapid spread of the Internet, people can easily share and obtain information, which also quietly increases the probability that transmitted information is intercepted by an illegitimate third party. Information security has therefore become important, especially for military or medical images in which no error can be tolerated. To achieve this goal, the lossless (reversible) property of information hiding is an important subject. Motivated by the above, we first propose an effective lossless information hiding scheme in which a host image is quantized to create spare space for hiding secret messages. The proposed scheme applies a complexity analysis of neighboring pixels to predict the number of secret message bits that can be concealed in a pixel; in other words, the scheme retains the differences between the host image and the quantized image so that the host image can be completely restored. According to the experimental results, the information capacity of the proposed scheme is 0.9 bpp for the standard Lena image, while that of Maniccam and Bourbakis's scheme is only 0.3 bpp. In addition, many lossless information hiding techniques have been proposed, such as difference expansion, integer transformation, and histogram modification. Among them, histogram modification embeds secret messages into a host image by modifying the maximum (peak) pixel value in its histogram; the quality of the stego image generated by histogram modification is good, but its capacity is low. In this thesis we therefore also propose a lossless information hiding scheme that improves on this limitation by using block-based internal and external prediction. According to the experimental results, the proposed scheme increases the information capacity while maintaining image quality.
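The histogram-modification baseline that the second scheme improves on is short enough to sketch: find the peak bin and a zero bin of the histogram, shift the bins between them by one to free a slot next to the peak, and embed one bit per peak-valued pixel. A simplified grayscale version of that classic shifting step (not the thesis' block-based predictor) is shown below.

```python
import numpy as np

def hs_embed(img, bits):
    """Histogram-shifting embedding on a uint8 image; returns stego image and side info."""
    img = img.astype(np.int32)
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                       # capacity = hist[peak] bits
    zero_bins = np.flatnonzero(hist == 0)
    zero = int(zero_bins[zero_bins > peak][0])      # sketch assumes a zero bin right of the peak
    out = img.copy()
    out[(img > peak) & (img < zero)] += 1           # shift bins in (peak, zero) up by one
    idx = np.flatnonzero(img.ravel() == peak)[:len(bits)]
    flat = out.ravel()
    flat[idx] += np.asarray(bits[:len(idx)], dtype=np.int32)   # '1' -> peak+1, '0' -> stays at peak
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

def hs_extract(stego, peak, zero, n_bits):
    st = stego.astype(np.int32)
    marked = st.ravel()[(st.ravel() == peak) | (st.ravel() == peak + 1)]
    bits = (marked[:n_bits] == peak + 1).astype(int).tolist()
    rec = st.copy()
    rec[(st > peak) & (st <= zero)] -= 1            # undo both the embedding and the shift
    return bits, rec.astype(np.uint8)

rng = np.random.default_rng(0)
host = rng.integers(60, 180, (64, 64)).astype(np.uint8)   # leaves empty bins above 180
msg = [1, 0, 1, 1, 0, 0, 1]
stego, peak, zero = hs_embed(host, msg)
bits, restored = hs_extract(stego, peak, zero, len(msg))
print(bits == msg, bool((restored == host).all()))        # -> True True
```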
19

Erar, Bahar. "Mixture model cluster analysis under different covariance structures using information complexity." 2011. http://trace.tennessee.edu/utk_gradthes/968.

Full text of the source
Abstract:
In this thesis, a mixture-model cluster analysis technique under different covariance structures of the component densities is developed and presented. It captures the compactness, orientation, shape, and volume of the component clusters in one expert system for handling Gaussian, high-dimensional, heterogeneous data sets, adding flexibility to currently practiced cluster analysis techniques. Two approaches to parameter estimation are considered and compared: one using the Expectation-Maximization (EM) algorithm and another following a Bayesian framework using the Gibbs sampler. We develop and score several forms of the ICOMP criterion of Bozdogan (1994, 2004) as our fitness function: to choose the number of component clusters, to choose the correct component covariance matrix structure among nine candidate covariance structures, and to select the optimal parameters and the best-fitting mixture model. We demonstrate our approach on simulated datasets and a real large data set, focusing on the early detection of breast cancer. We show that our approach improves the probability of classification error over the existing methods.
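The selection loop itself is simple to sketch; here BIC from scikit-learn stands in for the ICOMP scoring the thesis develops (ICOMP additionally penalizes the covariance structure via its estimated inverse-Fisher-information matrix), and scikit-learn's four covariance types play the role of the candidate parameterizations.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# toy heterogeneous data: two Gaussian clusters with different shapes
data = np.vstack([
    rng.multivariate_normal([0, 0], [[4.0, 1.5], [1.5, 1.0]], 300),
    rng.multivariate_normal([8, 3], [[1.0, 0.0], [0.0, 0.2]], 300),
])

best = None
for k in range(1, 6):                                   # candidate numbers of clusters
    for cov in ("full", "tied", "diag", "spherical"):   # candidate covariance structures
        gm = GaussianMixture(n_components=k, covariance_type=cov,
                             n_init=3, random_state=0).fit(data)
        score = gm.bic(data)          # stand-in for ICOMP: lower is better for both
        if best is None or score < best[0]:
            best = (score, k, cov)

print("selected:", best[1], "components with", best[2], "covariances")
```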
20

Zhang, Rui. "Model selection techniques for kernel-based regression analysis using information complexity measure and genetic algorithms." 2007. http://etd.utk.edu/2007/ZhangRui.pdf.

Full text of the source
21

Broadbent, Anne Lise. "Quantum nonlocality, cryptography and complexity." Thèse, 2008. http://hdl.handle.net/1866/6448.

Full text of the source
22

Kao, Min-Chi, and 高明志. "QMF Banks Optimization Based on Derivative Information and Low-Complexity Design of Two-Channel Subband Filters Using Short Modular Half-Band Filters." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/40212442694390363151.

Full text of the source
Abstract:
Doctoral dissertation
National Chiao Tung University
Department of Electronics Engineering
Academic year 88 (ROC calendar, 1999/2000)
The dissertation is concerned with three key issues of filter bank design, namely, response optimization, low computational complexity, and low finite-precision-error realization of subband filters. In particular, this dissertation is divided into two parts: (I) Quadrature-Mirror-Filter (QMF) bank optimization based on derivative information, and (II) low-complexity design and realization of 1-D/2-D two-channel subband filters using short modular half-band/Nyquist(M) filters. The first part focuses on the optimization of QMF banks. New types of objective functions, utilizing derivative information of the reconstruction error in the z-domain, are proposed. New designs of QMF banks using these objective functions are studied. Efficient design algorithms for low-delay QMF banks and linear-phase QMF banks are developed. Simulations show that the new designs can achieve better results than the conventional design based on the standard least-square-error objective function. The second part focuses on the low-complexity design and realization of subband filters with good numerical properties. We devise novel low-complexity composition schemes for the design and realization of 1-D half-band filters, 1-D two-channel biorthogonal filter banks, 2-D Nyquist(M) filters, and 2-D two-channel diamond/quadrant filter banks, all with narrow transition bands and high frequency selectivity. The existing design methods result either in high-performance but high-complexity subband filters or in low-complexity but low-performance subband filters. The new schemes provide simple and efficient methods for synthesizing high-performance, low-complexity subband filters with good numerical properties for finite-precision realization. The synthesis process involves frequency response sharpening. For the low-complexity design and realization of 1-D half-band filters, the proposed scheme is based on an algebraic iterative composition method using adjustable short modular half-band filters. The modular filters can be chosen by the user to be as simple as desired. Specifically, the designed higher-order half-band filters can be made multiplierless if the modular filters are multiplierless. For the low-complexity design and realization of 1-D biorthogonal linear-phase filter banks, the proposed algebraic iterative composition scheme utilizes the solution of a filter bank with two half-band filters. The resulting analysis filters are not only sharp but also low-complexity, being composed of several short modular half-band filters. The 1-D schemes are extended to the synthesis of 2-D Nyquist(M) filters and two-channel nonseparable diamond/quadrant filter banks with sharp responses, using short modular 2-D Nyquist(M) filters, preferably multiplier-free ones. Based on the proposed schemes, half-band/Nyquist(M) filters and 1-D/2-D filter banks can be synthesized in a tree-like multi-stage cascaded structure with considerably reduced arithmetic operations (which can be made multiplierless). Simulations validate the effectiveness of the proposed schemes.
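The sharpening idea at the heart of these composition schemes can be tried out in a few lines: take a short, dyadic-coefficient half-band prototype and apply the classical sharpening polynomial 3H^2 - 2H^3, which preserves the half-band property while deepening the stopband. The sketch below is a generic illustration of that mechanism, not the thesis' specific modular composition.

```python
import numpy as np

# A short modular half-band prototype with dyadic ("multiplierless") taps.
h = np.array([-1, 0, 9, 16, 9, 0, -1]) / 32.0

def sharpen(h):
    """One sharpening pass, Hs = 3*H^2 - 2*H^3, for a linear-phase FIR filter."""
    h2 = np.convolve(h, h)
    h3 = np.convolve(h2, h)
    pad = (len(h) - 1) // 2                  # align the group delays of H^2 and H^3
    return 3.0 * np.pad(h2, (pad, pad)) - 2.0 * h3

def dtft_mag(taps, w):
    n = np.arange(len(taps))
    return abs(np.sum(taps * np.exp(-1j * w * n)))

hs = sharpen(h)
c = (len(hs) - 1) // 2
alt = hs[c % 2::2]                           # taps an even number of samples from the centre
print("centre tap:", round(hs[c], 6))                                            # stays 0.5
print("other alternate taps:", round(np.abs(np.delete(alt, c // 2)).max(), 12))  # ~0: still half-band
print("gain at 0.75*pi:", dtft_mag(h, 0.75 * np.pi), "->", dtft_mag(hs, 0.75 * np.pi))
```

The last line shows the stopband gain dropping after sharpening, at the price of a longer (but still structured, cascade-realizable) filter.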
23

"Managing Distributed Information: Implications for Energy Infrastructure Co-production." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.49360.

Full text of the source
Abstract:
The Internet and climate change are two forces that are poised to both cause and enable changes in how we provide our energy infrastructure. The Internet has catalyzed enormous changes across many sectors by shifting the feedback and organizational structure of systems towards more decentralized users. Today’s energy systems require colossal shifts toward a more sustainable future. However, energy systems face enormous socio-technical lock-in and, thus far, have been largely unaffected by these destabilizing forces. More distributed information offers not only the ability to craft new markets, but to accelerate learning processes that respond to emerging user or prosumer centered design needs. This may include values and needs such as local reliability, transparency and accountability, integration into the built environment, and reduction of local pollution challenges. The same institutions (rules, norms and strategies) that dominated with the hierarchical infrastructure system of the twentieth century are unlikely to be a good fit if a more distributed infrastructure increases in dominance. As information is produced at more distributed points, it is more difficult to coordinate and manage as an interconnected system. This research examines several aspects of these, historically dominant, infrastructure provisioning strategies to understand the implications of managing more distributed information. The first chapter experimentally examines information search and sharing strategies under different information protection rules. The second and third chapters focus on strategies to model and compare distributed energy production effects on shared electricity grid infrastructure. Finally, the fourth chapter dives into the literature of co-production, and explores connections between concepts in co-production and modularity (an engineering approach to information encapsulation) using the distributed energy resource regulations for San Diego, CA. Each of these sections highlights different aspects of how information rules offer a design space to enable a more adaptive, innovative and sustainable energy system that can more easily react to the shocks of the twenty-first century.
Dissertation/Thesis
Doctoral Dissertation Sustainability 2018
24

Naik, Debendra Kumar. "Fuzzy Rule Based Approach for Quality Analysis of Web Service Composition Using Complexity Metrics." Thesis, 2015. http://ethesis.nitrkl.ac.in/7779/1/2015_Fuzzy_Rule__Naik.pdf.

Full text of the source
Abstract:
Since human needs change rapidly, present-day software tends to be complex, and complexity analysis of software is therefore one of the challenging areas of research. A good number of articles on traditional software complexity analysis are available in the literature, but the complexity analysis of service-oriented-architecture (SOA) based software has not been studied extensively to date. The web service is the basic building block of SOA, and web services are composed through the Business Process Execution Language (BPEL); a large number of web service compositions, however, makes the software more complex, so it is necessary to analyze the complexity of BPEL processes. Business activities govern long-running, complex composed services, which reduces service reliability, performability, and other quality attributes. Business-process complexity metrics are therefore considered for the analysis of composed web services. In this work, different complexity metrics are proposed and fuzzy logic is used for the quality analysis of web service composition. The model relates business complexity metrics such as activity complexity, structural complexity, and control-flow complexity to high-level quality attributes such as functionality, usability, maintainability, reliability, and performability using a fuzzy rule-based approach.
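A Mamdani-style fragment of such a rule base is easy to sketch; the membership functions, the two rules, and the normalized ranges below are invented for illustration and are not the rule base proposed in the thesis.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

# Universe of discourse for the output quality attribute "maintainability" (0..1).
u = np.linspace(0.0, 1.0, 201)
maint_low, maint_high = tri(u, -0.5, 0.0, 0.5), tri(u, 0.5, 1.0, 1.5)

def maintainability(cfc, activity):
    """Two illustrative rules on normalized control-flow and activity complexity:
       R1: IF cfc is HIGH OR  activity is HIGH THEN maintainability is LOW
       R2: IF cfc is LOW  AND activity is LOW  THEN maintainability is HIGH"""
    cfc_high, cfc_low = tri(cfc, 0.4, 1.0, 1.6), tri(cfc, -0.6, 0.0, 0.6)
    act_high, act_low = tri(activity, 0.4, 1.0, 1.6), tri(activity, -0.6, 0.0, 0.6)
    r1 = max(cfc_high, act_high)            # OR  -> max
    r2 = min(cfc_low, act_low)              # AND -> min
    agg = np.maximum(np.minimum(r1, maint_low), np.minimum(r2, maint_high))
    return float(np.sum(agg * u) / (np.sum(agg) + 1e-12))   # centroid defuzzification

print(maintainability(cfc=0.9, activity=0.7))   # complex BPEL process -> low score
print(maintainability(cfc=0.1, activity=0.2))   # simple process       -> high score
```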
25

Howe, John Andrew. "A New Generation of Mixture-Model Cluster Analysis with Information Complexity and the Genetic EM Algorithm." 2009. http://trace.tennessee.edu/utk_graddiss/863.

Full text of the source
Abstract:
In this dissertation, we extend several relatively new developments in statistical model selection and data mining in order to improve one of the workhorse statistical tools - mixture modeling (Pearson, 1894). The traditional mixture model assumes data comes from several populations of Gaussian distributions. Thus, what remains is to determine how many distributions, their population parameters, and the mixing proportions. However, real data often do not fit the restrictions of normality very well. It is likely that data from a single population exhibiting either asymmetrical or nonnormal tail behavior could be erroneously modeled as two populations, resulting in suboptimal decisions. To avoid these pitfalls, we develop the mixture model under a broader distributional assumption by fitting a group of multivariate elliptically-contoured distributions (Anderson and Fang, 1990; Fang et al., 1990). Special cases include the multivariate Gaussian and power exponential distributions, as well as the multivariate generalization of the Student’s T. This gives us the flexibility to model nonnormal tail and peak behavior, though the symmetry restriction still exists. The literature has many examples of research generalizing the Gaussian mixture model to other distributions (Farrell and Mersereau, 2004; Hasselblad, 1966; John, 1970a), but our effort is more general. Further, we generalize the mixture model to be non-parametric, by developing two types of kernel mixture model. First, we generalize the mixture model to use the truly multivariate kernel density estimators (Wand and Jones, 1995). Additionally, we develop the power exponential product kernel mixture model, which allows the density to adjust to the shape of each dimension independently. Because kernel density estimators enforce no functional form, both of these methods can adapt to nonnormal asymmetric, kurtotic, and tail characteristics. Over the past two decades or so, evolutionary algorithms have grown in popularity, as they have provided encouraging results in a variety of optimization problems. Several authors have applied the genetic algorithm - a subset of evolutionary algorithms - to mixture modeling, including Bhuyan et al. (1991), Krishna and Murty (1999), and Wicker (2006). These procedures have the benefit that they bypass computational issues that plague the traditional methods. We extend these initialization and optimization methods by combining them with our updated mixture models. Additionally, we “borrow” results from robust estimation theory (Ledoit and Wolf, 2003; Shurygin, 1983; Thomaz, 2004) in order to data-adaptively regularize population covariance matrices. Numerical instability of the covariance matrix can be a significant problem for mixture modeling, since estimation is typically done on a relatively small subset of the observations. We likewise extend various information criteria (Akaike, 1973; Bozdogan, 1994b; Schwarz, 1978) to the elliptically-contoured and kernel mixture models. Information criteria guide model selection and estimation based on various approximations to the Kullback-Liebler divergence. Following Bozdogan (1994a), we use these tools to sequentially select the best mixture model, select the best subset of variables, and detect influential observations - all without making any subjective decisions. Over the course of this research, we developed a full-featured Matlab toolbox (M3) which implements all the new developments in mixture modeling presented in this dissertation. 
We show results on both simulated and real world datasets. Keywords: mixture modeling, nonparametric estimation, subset selection, influence detection, evidence-based medical diagnostics, unsupervised classification, robust estimation.
26

Baek, Seung Hyun. "Kernel-Based Data Mining Approach with Variable Selection for Nonlinear High-Dimensional Data." 2010. http://trace.tennessee.edu/utk_graddiss/676.

Full text of the source
Abstract:
In statistical data mining research, datasets often have nonlinearity and high-dimensionality. It has become difficult to analyze such datasets in a comprehensive manner using traditional statistical methodologies. Kernel-based data mining is one of the most effective statistical methodologies to investigate a variety of problems in areas including pattern recognition, machine learning, bioinformatics, chemometrics, and statistics. In particular, statistically sophisticated procedures that emphasize the reliability of results and computational efficiency are required for the analysis of high-dimensional data. In this dissertation, first, a novel wrapper method called SVM-ICOMP-RFE, based on a hybridized support vector machine (SVM) and recursive feature elimination (RFE) with an information-theoretic measure of complexity (ICOMP), is introduced and developed to classify high-dimensional data sets and to carry out subset selection of the variables in the original data space for finding the best subset for discriminating between groups. Recursive feature elimination (RFE) ranks variables based on the information-theoretic measure of complexity (ICOMP) criterion. Second, a dual variables functional support vector machine approach is proposed. The proposed approach uses both the first and second derivatives of the degradation profiles. The modified floating search algorithm for the repeated variable selection, with newly-added degradation path points, is presented to find a few good variables while reducing the computation time for on-line implementation. Third, a two-stage scheme for the classification of near infrared (NIR) spectral data is proposed. In the first stage, the proposed multi-scale vertical energy thresholding (MSVET) procedure is used to reduce the dimension of the high-dimensional spectral data. In the second stage, a few important wavelet coefficients are selected using the proposed SVM gradient-recursive feature elimination (RFE). Fourth, a novel methodology based on a human decision making process for discriminant analysis, called PDCM, is proposed. The proposed methodology consists of three basic steps emulating the thinking process: perception, decision, and cognition. In these steps two concepts known as support vector machines for classification and information complexity are integrated to evaluate learning models.
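The recursive-elimination skeleton of such a wrapper is available off the shelf; in the sketch below, scikit-learn's RFE with a linear SVM does the ranking and plain cross-validated accuracy stands in for the ICOMP scoring that the dissertation develops.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# synthetic high-dimensional data: 200 samples, 50 features, only a few informative
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           n_redundant=5, random_state=0)

best = None
for k in (2, 5, 10, 20):                         # candidate subset sizes
    selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=k, step=0.1)
    selector.fit(X, y)
    acc = cross_val_score(SVC(kernel="linear", C=1.0),
                          X[:, selector.support_], y, cv=5).mean()
    if best is None or acc > best[0]:
        best = (acc, k, np.flatnonzero(selector.support_))

print(f"best subset size {best[1]} (cv accuracy {best[0]:.3f}): features {best[2]}")
```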
