Dissertations / Theses on the topic "Design for data"
Create a reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic "Design for data".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Tjärnberg, Cecilia. "BIG DATA DESIGN - Strange but familiar." Thesis, Konstfack, Inredningsarkitektur & Möbeldesign, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:konstfack:diva-6952.
How form is translated as it moves between the physical and the digital has captured my interest. I collect data through different types of 3D scanning and explore a range of techniques. In the digital space, the documented data appears as a messy abstraction of its original, where some information is added while other information is lost. In my design process I adopt complex content-aware auto-fill algorithms - a strategy that becomes central to the project. In my installation, visitors are invited to explore encounters between the real and the virtual. It is my conviction that the traces of physical and digital wear add value by unpacking my process while materialising something strange but familiar.
COSTA, PIETRO. "Human-data experience design : progettare con i personal data." Doctoral thesis, Università IUAV di Venezia, 2015. http://hdl.handle.net/11578/278686.
Liu, Jianhua. "Contemporary data path design optimization." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3214712.
Title from first page of PDF file (viewed July 10, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 80-82).
Pliuskuvienė, Birutė. "Adaptive data models in design." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080627_143940-41525.
The dissertation examines the problem of adapting software tools that implement solutions to applied problems, whose volatility is driven by changes in the content of the primary data, in their structures, and in the algorithms of the applied problems being solved.
張振隆 and Chun-lung Cheung. "Data warehousing mobile code design." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B29872996.
Owen, J. "Data management in engineering design." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/385838/.
Cheung, Chun-lung. "Data warehousing mobile code design." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23001057.
Siththara, Gedara Jagath Senarathne. "Experimental design for dependent data." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/201237/1/Jagath%20Senarathne_Siththara%20Gedara_Thesis.pdf.
Roberg, Abigail M. "Data Visualizations: Guidelines for Gathering, Analyzing, and Designing Data." Ohio University Honors Tutorial College / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1524826335755109.
Herrmann, Amy Elizabeth. "Coupled design decisions in distributed design." Thesis, Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/16656.
Katsura-Gordon, Shigeo. "Democratizing Our Data : Finding Balance Living In A World Of Data Control." Thesis, Umeå universitet, Designhögskolan vid Umeå universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-148942.
Romero, Moral Óscar. "Automating the multidimensional design of data warehouses." Doctoral thesis, Universitat Politècnica de Catalunya, 2010. http://hdl.handle.net/10803/6670.
Previous experiences in the data warehouse field have shown that the data warehouse multidimensional conceptual schema must be derived from a hybrid approach: i.e., by considering both the end-user requirements and the data sources as first-class citizens. As in any other system, requirements guarantee that the system devised meets the end-user's needs. In addition, since the data warehouse design task is a reengineering process, it must consider the underlying data sources of the organization: (i) to guarantee that the data warehouse can be populated from data available within the organization, and (ii) to allow the end-user to discover unknown additional analysis capabilities.
Currently, several methods for supporting the data warehouse modeling task have been proposed. However, they suffer from some significant drawbacks. In short, requirement-driven approaches assume that requirements are exhaustive (and therefore do not consider that the data sources may contain additional interesting evidence for analysis), whereas data-driven approaches (i.e., those leading the design task from a thorough analysis of the data sources) rely on discovering as much multidimensional knowledge as possible from the data sources. As a consequence, data-driven approaches generate too many results, which mislead the user. Furthermore, automation of the design task is essential in this scenario, as it removes the dependency on an expert's ability to properly apply the chosen method, and the need to analyze the data sources, which is a tedious and time-consuming task (and can be unfeasible when working with large databases). In this sense, current automatable methods follow a data-driven approach, whereas current requirement-driven approaches overlook process automation, since they tend to work with requirements at a high level of abstraction. Indeed, this scenario repeats itself in the data-driven and requirement-driven stages of current hybrid approaches, which suffer from the same drawbacks as pure data-driven or requirement-driven approaches.
In this thesis we introduce two different approaches for automating the multidimensional design of the data warehouse: MDBE (Multidimensional Design Based on Examples) and AMDO (Automating the Multidimensional Design from Ontologies). Both approaches were devised to overcome the limitations from which current approaches suffer. Importantly, our approaches consider opposite initial assumptions, but both consider the end-user requirements and the data sources as first-class citizens.
1. MDBE follows a classical approach, in which the end-user requirements are well known beforehand. This approach benefits from the knowledge captured in the data sources, but guides the design task according to the requirements and, consequently, is able to work with semantically poorer data sources. In other words, given high-quality end-user requirements, we can guide the process from the knowledge they contain and overcome the limitations of semantically poor data sources.
2. AMDO, in contrast, assumes a scenario in which the available data sources are semantically richer. Thus, the proposed approach is guided by a thorough analysis of the data sources, which is adapted to shape the output according to the end-user requirements. In this context, given high-quality data sources, we can compensate for the lack of expressive end-user requirements.
Importantly, our methods establish a combined and comprehensive framework that can be used to decide, according to the inputs provided in each scenario, which approach is best to follow. For example, we cannot follow the same approach in a scenario where the end-user requirements are clear and well known and in a scenario in which the end-user requirements are not evident or cannot be easily elicited (e.g., when the users are not aware of the analysis capabilities of their own sources). Interestingly, the need for requirements beforehand is softened by the availability of semantically rich data sources; lacking these, requirements gain relevance for extracting the multidimensional knowledge from the sources.
Thus, we provide two approaches whose combination proves exhaustive with regard to the scenarios discussed in the literature.
Dahlqvist, Thea. "Användargenererad data i tjänstedesignprocessen." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-101012.
The goal of this study is to find out how three different innovative techniques can be used to collect user experience data in a service design process. This is carried out through a design case concerning the user experience of the Norrköping Symphony Orchestra's (SON) concerts. The orchestra has found it increasingly difficult to attract visitors, and new methods are needed to tackle the problem. Therefore, mobile ethnography and innovative methods were used in this study. There were 20 participants, a mix of people who regularly attend SON concerts and people who do not. Their task was to attend two SON concerts and document their experience using three different innovative techniques: a probe, a smartphone application and an automatic camera. The results show that the probe gave a much more detailed look into the participants' view of the concert experience and helped widen the focus of the service design process. The application gave more detailed information in real time, on site. The automatic camera gave a more detailed, automatically captured flow of the concert experience on site, which may reveal certain patterns and behaviours of the participants linked to the concert experience. The study shows that the three innovative techniques put the focus on the user throughout the entire service design process, which is the foundation of user-centred work. Used in combination, the techniques become more effective, as they complement each other in a service design process.
Noaman, Amin Yousef. "Distributed data warehouse architecture and design." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0027/NQ51662.pdf.
Mathew, Michael Ian. "Design of nonlinear sampled-data systems." Thesis, Coventry University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.480606.
Kaczorowski, Kevin J. "Data-driven strategies for vaccine design." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/117327.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Vaccination is one of the greatest achievements in immunology and in medicine generally, and has virtually eradicated many infectious diseases that plagued humans in the past. Vaccination involves injecting an individual with some version of the pathogen in order to allow the individual to develop a memory immune response that will protect them from future challenge with the same pathogen. Until recently, vaccine development has largely followed empirical paradigms that have proven successful against many diseases. However, many pathogens have now evolved that defy success using the traditional approaches. Rational design of vaccines against such pathogens will likely require interdisciplinary approaches spanning engineering, immunology, and the physical sciences. In this thesis, we combine theoretical approaches with protein sequence and clinical data to address two contemporary problems in vaccinology: 1. developing an antibody vaccine against HIV, an example of a highly mutable pathogen; and 2. understanding how the many immune components work collectively to effect a systemic immune response, such as to vaccines.
In HIV-infected individuals, antibodies produced by the immune system bind to specific parts of an HIV protein called Envelope (Env). However, the virus evades the immune response due to its high mutability, thus making effective vaccine design a huge challenge. To predict the mutational vulnerabilities of the virus, we developed a model (a fitness landscape) to translate sequence data into knowledge of viral fitness, a measure of the ability of the virus to replicate and thrive. The landscape accounts explicitly for coupling interactions between mutations at different positions within the protein, which often dictate how the virus evades the immune response. We developed new computational approaches that enabled us to tackle the large size and mutational variability of Env, since previous approaches have been unsuccessful in this case. A small fraction of HIV-infected individuals produce a class of antibodies called broadly neutralizing antibodies (bnAbs), which neutralize a diverse number of HIV strains and can thus tolerate many mutations in Env. To investigate the mechanisms underlying the breadth of these bnAbs, we combined our landscape with 3D protein structures to gain insight into the spatial distribution of binding interactions between bnAbs and Env. Based on this, we designed an optimal set of immunogens (i.e., Env sequences), with mutations at key residues, that are likely to lead to the elicitation of bnAbs via vaccination. We hope that these antigens will soon be tested in animal models.
Even when the right immunogens are included in a vaccine, a potent immune response is not always induced. For example, some individuals do not respond to protective influenza vaccines as desired. The human immune system consists of many different immune cells that coordinate their actions to fight infections and respond to vaccines. The balance between these cell populations is determined by direct interactions and soluble factors such as cytokines, which serve as messengers between cells. A mechanistic understanding of how the various immune components cooperate to bring about the immune response can guide strategies to improve vaccine efficacy.
To investigate whether differences in immune response could be explained by variation in immune cell compositions across individuals, we analyzed experimental measurements of various immune cell population frequencies in a cohort of healthy humans. We demonstrated that human immune variation in these parameters is continuous rather than discrete. Furthermore, we showed that key combinations of these immune parameters can be used to predict immune response to diverse stimulations, namely cytokine stimulation and vaccination. Thus, we defined the concept of an individual's "immunotype" as their location within the space of these key combinations of parameters. This result highlights a previously unappreciated connection between immune cell composition and systemic immune responses, and can guide future development of therapies that aim to collectively, rather than independently, manipulate immune cell frequencies.
by Kevin J. Kaczorowski.
Ph. D.
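A note on the fitness-landscape idea in the abstract above: a landscape with explicit pairwise couplings is commonly evaluated as single-position field terms plus coupling terms over pairs of positions. The Python sketch below is only a toy illustration of that general form; the function name, array shapes and random parameters are invented here, not the thesis's inferred Env model.

```python
import numpy as np

def landscape_fitness(seq, h, J):
    """Evaluate a pairwise fitness landscape: field terms h[i, a] for each
    position's state plus couplings J[i, j, a, b] for pairs of positions."""
    L = len(seq)
    energy = sum(h[i, seq[i]] for i in range(L))
    energy += sum(J[i, j, seq[i], seq[j]]
                  for i in range(L) for j in range(i + 1, L))
    return -energy  # convention: lower energy corresponds to higher fitness

# Toy example: 4 positions, 2 states each (0 = wild type, 1 = mutant).
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 2))
J = rng.normal(scale=0.1, size=(4, 4, 2, 2))
print(landscape_fitness((0, 0, 1, 0), h, J))
```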
Valero, Bresó Alejandro. "Hybrid caches: design and data management." Doctoral thesis, Editorial Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/32663.
Valero Bresó, A. (2013). Hybrid caches: design and data management [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/32663
Acuna, Stamp Annabelen. "Design Study for Variable Data Printing." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin962378632.
Mahood, Christian. "Data center design & enterprise networking /." Online version of thesis, 2009. http://hdl.handle.net/1850/8699.
Lee, Heeseok. "Data allocation design in computer networks." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185435.
Carlsson, Nicole. "Vulnerable data interactions — augmenting agency." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23309.
Cai, Simin. "Systematic Design of Data Management for Real-Time Data-Intensive Applications." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35369.
Lundberg, Agnes. "Dealing with Data." Thesis, KTH, Arkitektur, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298801.
鄭桂懷 and Kwai-wai Cheng. "A collaborative design tool for virtual design studios." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31220526.
Leung, Nim Keung. "Convexity-Preserving Scattered Data Interpolation." Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc277609/.
Pellegrino, Gregory S. "Design of a Low-Cost Data Acquisition System for Rotordynamic Data Collection." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/1978.
Yi, Xin. "Data visualization in conceptual design: developing a prototype for complex data visualization." Thesis, Blekinge Tekniska Högskola, Institutionen för maskinteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15192.
Galanis, Panagiotis. "Designing with data in mind : designer perceptions on visualising data within editorial information design practice." Thesis, University of Portsmouth, 2014. https://researchportal.port.ac.uk/portal/en/theses/designing-with-data-in-mind(1683f801-8926-48fb-850f-f51f9ddce9f0).html.
Mustafa, Mudassir Imran. "Design Principles for Data Export : Action Design Research in U-CARE." Thesis, Uppsala universitet, Institutionen för informatik och media, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-180061.
Uichanco, Joline Ann Villaranda. "Data-driven revenue management." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41728.
Includes bibliographical references (p. 125-127).
In this thesis, we consider the classical newsvendor model and various important extensions. We do not assume that the demand distribution is known; rather, the only information available is a set of independent samples drawn from the demand distribution. In particular, the variants of the model we consider are: the classical profit-maximization newsvendor model, the risk-averse newsvendor model and the price-setting newsvendor model. If the explicit demand distribution is known, then exact solutions to these models can be found either analytically or numerically via simulation methods. However, in most real-life settings the demand distribution is not available, and usually there is only historical demand data from past periods. Thus, data-driven approaches are appealing for solving these problems. In this thesis, we evaluate the theoretical and empirical performance of nonparametric and parametric approaches for solving the variants of the newsvendor model assuming partial information on the distribution. For the classical profit-maximization newsvendor model and the risk-averse newsvendor model we describe general nonparametric approaches that do not make any prior assumption on the true demand distribution. We extend and significantly improve previous theoretical bounds on the number of samples required to guarantee with high probability that the data-driven approach provides a near-optimal solution. By near-optimal we mean that the approximate solution performs arbitrarily close to the optimal solution computed with respect to the true demand distribution.
For the price-setting newsvendor problem, we analyze a previously proposed simulation-based approach for a linear-additive demand model, and again derive bounds on the number of samples required to ensure that the simulation-based approach provides a near-optimal solution. We also perform computational experiments to analyze the empirical performance of these data-driven approaches.
by Joline Ann Villaranda Uichanco.
S.M.
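For the classical profit-maximizing variant discussed above, the data-driven solution reduces to taking an empirical quantile of the demand samples at the critical ratio. A minimal sketch, with invented demand data and prices (an illustration of the standard sample-based recipe, not code from the thesis):

```python
import numpy as np

def data_driven_newsvendor(demand_samples, price, cost):
    """Sample-based newsvendor: order the empirical demand quantile
    at the critical ratio (price - cost) / price."""
    critical_ratio = (price - cost) / price
    return float(np.quantile(demand_samples, critical_ratio))

# Invented example: 200 historical demand observations, sell at 10, buy at 6.
rng = np.random.default_rng(42)
samples = rng.gamma(shape=4.0, scale=25.0, size=200)
print(data_driven_newsvendor(samples, price=10.0, cost=6.0))
```

As the number of samples grows, this empirical quantile converges to the optimal order quantity under the true distribution, which is what the sample-complexity bounds above quantify.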
Duch, Brown Amàlia. "Design and Analysis of Multidimensional Data Structures." Doctoral thesis, Universitat Politècnica de Catalunya, 2004. http://hdl.handle.net/10803/6647.
This thesis is about the design and analysis of point multidimensional data structures: data structures that store $K$-dimensional keys which we may abstract as points in $[0,1]^K$. These data structures are present in many applications of geographical information systems, image processing or robotics, among others. They are also frequently used as indexes of more complex data structures, possibly stored in external memory.
Point multidimensional data structures must support operations such as insertion, deletion and (exact) search of items, but in addition they must support the so-called associative queries. Examples of these queries are orthogonal range queries (which items fall inside a given hyper-rectangle?) and nearest neighbour queries (which item is closest to a given point?).
The contributions of this thesis are two-fold:
Contributions to the design of point multidimensional data structures: the design of randomized $K$-d trees, the design of randomized quad trees and the design of fingered multidimensional search trees;
Contributions to the analysis of the performance of point multidimensional data structures: the average-case analysis of partial match queries in relaxed $K$-d trees and the average-case analysis of orthogonal range queries in various multidimensional data structures.
Concerning the design of randomized point multidimensional data structures, we propose randomized insertion and deletion algorithms for $K$-d trees and quad trees that produce random $K$-d trees and quad trees independently of the order in which items are inserted into them and after any sequence of interleaved insertions and deletions. The use of randomization provides expected performance guarantees, irrespective of any assumption on the data distribution, while retaining the simplicity and flexibility of standard $K$-d trees and quad trees.
Also related to the design of point multidimensional data structures is the proposal of fingered multidimensional search trees, a new technique that enhances point multidimensional data structures to exploit locality of reference in associative queries.
With regard to performance analysis, we start by giving a precise analysis of the cost of partial match queries in randomized $K$-d trees. We use these results as a building block in our analysis of orthogonal range queries, together with combinatorial and geometric arguments, and we provide a tight asymptotic estimate of the cost of orthogonal range search in randomized $K$-d trees. We finally show that the techniques used apply easily to other data structures, so we can provide an analysis of the average cost of orthogonal range search in other data structures such as standard $K$-d trees, quad trees, quad tries, and $K$-d tries.
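To make the queries analyzed above concrete, here is a minimal standard (non-randomized) $K$-d tree with an orthogonal range query in Python; the thesis's randomized update algorithms and average-case analysis are not reproduced here.

```python
import random

def build(points, depth=0):
    """Standard K-d tree: cycle through the K axes as discriminators."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def range_search(node, lo, hi, depth=0, out=None):
    """Orthogonal range query: collect points inside the box [lo, hi]."""
    out = [] if out is None else out
    if node is None:
        return out
    point, left, right = node
    axis = depth % len(lo)
    if all(lo[d] <= point[d] <= hi[d] for d in range(len(lo))):
        out.append(point)
    if lo[axis] <= point[axis]:   # query box reaches into the left subtree
        range_search(left, lo, hi, depth + 1, out)
    if hi[axis] >= point[axis]:   # query box reaches into the right subtree
        range_search(right, lo, hi, depth + 1, out)
    return out

pts = [(random.random(), random.random()) for _ in range(200)]
tree = build(pts)
print(len(range_search(tree, (0.25, 0.25), (0.5, 0.5))))
```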
Munir, Wahab. "Optimization of Data Warehouse Design and Architecture." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37233.
Siqueira, Thiago Luís Lopes. "The design of vague spatial data warehouses." Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/298.
Universidade Federal de Minas Gerais
Spatial data warehouses (SDW) and spatial online analytical processing (SOLAP) enhance decision making by enabling spatial analysis combined with multidimensional analytical queries. A SDW is an integrated and voluminous multidimensional database containing both conventional and spatial data. SOLAP allows querying SDWs with multidimensional queries that select spatial data satisfying a given topological relationship and that aggregate spatial data. Existing SDW and SOLAP applications mostly consider phenomena represented by spatial data having exact locations and sharp boundaries. They neglect the fact that spatial data may be affected by imperfections, such as spatial vagueness, which prevents distinguishing an object from its neighborhood. A vague spatial object does not have a precisely defined boundary and/or interior. Thus, it may have a broad boundary and a blurred interior, and is composed of parts that certainly belong to it and parts that possibly belong to it. Although several real-world phenomena are characterized by spatial vagueness, no approach in the literature addresses spatial vagueness in the design of SDWs or provides multidimensional analysis over vague spatial data. These shortcomings motivated the elaboration of this doctoral thesis, which addresses both vague spatial data warehouses (vague SDWs) and vague spatial online analytical processing (vague SOLAP). A vague SDW is a SDW that comprises vague spatial data, while vague SOLAP allows querying vague SDWs. The major contributions of this doctoral thesis are: (i) the Vague Spatial Cube (VSCube) conceptual model, which enables the creation of conceptual schemata for vague SDWs using data cubes; (ii) the Vague Spatial MultiDim (VSMultiDim) conceptual model, which enables the creation of conceptual schemata for vague SDWs using diagrams; (iii) guidelines for designing relational schemata and integrity constraints for vague SDWs, and for extending the SQL language to enable vague SOLAP; (iv) the Vague Spatial Bitmap Index (VSB-index), which improves the performance of query processing over vague SDWs. The applicability of these contributions is demonstrated in two case studies in the agricultural domain, by creating conceptual schemata for vague SDWs, transforming these conceptual schemata into logical schemata for vague SDWs, and efficiently processing queries over vague SDWs.
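The core notion above - parts that certainly belong to an object versus parts that only possibly belong - can be modelled minimally with two nested regions. The sketch below uses axis-aligned rectangles for brevity and is only an illustration; the thesis's VSCube, VSMultiDim and VSB-index constructs work with general geometries and are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, x: float, y: float) -> bool:
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

@dataclass
class VagueRegion:
    """A vague spatial object: an interior that certainly belongs to it
    and a broad boundary that only possibly belongs to it."""
    certain: Rect    # part that certainly belongs
    possible: Rect   # broad boundary; must enclose `certain`

    def membership(self, x: float, y: float) -> str:
        if self.certain.contains(x, y):
            return "certainly inside"
        if self.possible.contains(x, y):
            return "possibly inside"
        return "outside"

# Invented example: a crop region with a blurred edge.
field = VagueRegion(certain=Rect(2, 2, 8, 8), possible=Rect(1, 1, 9, 9))
print(field.membership(5.0, 5.0))   # certainly inside
print(field.membership(1.5, 4.0))   # possibly inside
```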
Tan, Geok Leng. "Design issues in Trellis Coded Data Modulation." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358810.
Wilke, Achim. "Data-processing devolopment in German design offices." Thesis, Brunel University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292979.
洪宜偉 and Edward Hung. "Data cube system design: an optimization problem." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222730.
Støtvig, Jeanett Gunneklev. "Censored Weibull Distributed Data in Experimental Design." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-24418.
Edgar, John Alexander. "The design of a unified data model." Thesis, University of Aberdeen, 1986. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=185667.
King, Brent. "Automatic extraction of knowledge from design data." Thesis, University of Sunderland, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307964.
Hanson, James A. (James Andrew) 1976. "Improving process capability data access for design." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/88897.
Schanzenberger, Anja. "System design for periodic data production management." Thesis, Middlesex University, 2006. http://eprints.mdx.ac.uk/10697/.
Hughes, Simon. "Geohydrology data model design : South African boreholes." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/2799.
Since mechanised borehole drilling began in South Africa in the late 1800s, over 1 100 000 boreholes have been drilled. As the country's growing population and the perceived impacts of climate change increase pressure on surface water supplies, attention is turning to groundwater to meet the shortfall in water supply. This will mean even more drilling will take place. Until the introduction of the Standard Descriptors for Boreholes, published in 2003, South Africa did not have a set of guidelines for borehole information capture. This document provides a detailed description of the basic information requirements needed to describe and characterise the process of drilling, constructing, developing, managing and monitoring a borehole. However, it stands alone as a specification, with little or no implementation or interpretation to date. Following the development and publication of the ArcHydro data model for water resource management by the CRWR, based at the University of Texas at Austin, there has been a great deal of interest in object-oriented data modelling for natural resource data management. This thesis describes the use of an object-oriented data modelling approach, using UML CASE tools, to design a data model for South African boreholes based on the Standard Descriptors for Boreholes. The data model was converted to a geodatabase schema and implemented in ArcGIS.
Cranley, Nikki, and Diarmuid Corry. "Design Considerations for Networked Data Acquisition Systems." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606149.
Ethernet technology offers numerous benefits for networked Flight Test Instrumentation (FTI) systems, such as increased data rates, flexibility, scalability and, most importantly, interoperability owing to the inherent standardization of interfaces, protocols and technology. However, the best-effort nature of Ethernet is in sharp contrast to the intrinsic determinism of traditional FTI systems. The challenge for network designers is to optimize the configuration of the Ethernet network to meet the data-processing demands in terms of reliability and latency. This paper discusses the necessary planning and design phases to investigate, analyze, fine-tune and optimize the network's performance.
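The determinism-versus-best-effort tension described above comes down to simple arithmetic at each switch port: a frame cannot leave faster than the line rate, and it may queue behind other frames. A back-of-envelope sketch, with invented numbers rather than figures from the paper:

```python
def serialization_delay_us(frame_bytes: int, link_rate_bps: float) -> float:
    """Time to clock one Ethernet frame onto the wire, in microseconds."""
    return frame_bytes * 8 / link_rate_bps * 1e6

def worst_case_wait_us(frames_ahead: int, frame_bytes: int,
                       link_rate_bps: float) -> float:
    """Crude queuing bound: waiting behind `frames_ahead` full-size frames."""
    return frames_ahead * serialization_delay_us(frame_bytes, link_rate_bps)

# Invented FTI link: full-size 1518-byte frames on 100 Mb/s Ethernet.
print(serialization_delay_us(1518, 100e6))   # ~121 us per frame
print(worst_case_wait_us(4, 1518, 100e6))    # ~486 us behind 4 queued frames
```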
O'Shea, Timothy James. "Learning from Data in Radio Algorithm Design." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/89649.
Ph. D.
Shiao, Grace. "Design and Implementation of Data Analysis Components." University of Akron / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=akron1143652311.
Guo, Minzhe. "Algorithmic Mechanism Design for Data Replication Problems." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1470757536.
Fang, Wei Yi, and 魏儀方. "Data Transfer Block Design." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/74630649698468791271.
Tamkang University
Department of Electrical Engineering, In-service Master's Program
92
This thesis focuses on the hardware implementation of an SD card using an FPGA, a microprocessor and flash memory. An SD card contains a controller and flash memory. The controller consists mainly of two parts: the SD host controller and the flash controller. The SD host controller has five units - a command unit, a response unit, a register unit, a data transfer unit, and an input/output unit. Our objective in this thesis is to design the data transfer unit and the input/output unit. The design is realized and tested using Altera's Cyclone EP1C20F324C7 FPGA, and the test results show that the architecture is feasible.
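One job of a data transfer unit in an SD design is generating and checking the CRC16 that protects each data block on the bus; the SD specification uses the CCITT polynomial x^16 + x^12 + x^5 + 1 with a zero initial value. Below is a software model of that checksum as a sketch - it is not the thesis's actual HDL.

```python
def crc16_ccitt(data: bytes, crc: int = 0x0000) -> int:
    """Bitwise CRC16 (poly 0x1021, init 0x0000), as used on SD data lines."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

# Known check value for this CRC variant: b"123456789" -> 0x31C3.
assert crc16_ccitt(b"123456789") == 0x31C3
print(hex(crc16_ccitt(bytes(512))))  # CRC of one all-zero 512-byte block
```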
Tsai, Tzu-Chao, and 蔡子超. "Supporting Data Warehouse Design with Data Mining Approach." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/54821897552356791828.
National Sun Yat-sen University
Graduate Institute of Information Management
89
The traditional relational database model does not have enough capability to cope with a great deal of data in finite time. To address these requirements, data warehouses and online analytical processing (OLAP) have emerged. Data warehouses improve the productivity of corporate decision makers through consolidation, conversion, transformation and integration of operational data, and support online analytical processing (OLAP). Data warehouse design is a complex and knowledge-intensive process. It needs to consider not only the structure of the underlying operational databases (source-driven), but also the information requirements of decision makers (user-driven). Past research focused predominantly on supporting the source-driven data warehouse design process but paid less attention to supporting the user-driven process. Thus, the goal of this research is to propose a user-driven data warehouse design support system based on a knowledge discovery approach. Specifically, a Data Warehouse Design Support System was proposed, with generalization hierarchies and generalized star schemas used as the data warehouse design knowledge. Techniques for learning this design knowledge and reasoning upon it were developed. An empirical evaluation study was conducted to validate the effectiveness of the proposed techniques in supporting the data warehouse design process. The results showed that the technique is useful for supporting data warehouse design, especially in reducing missing designs and enhancing potentially useful designs.
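A generalization hierarchy of the kind mentioned above can be represented as a simple parent map that rolls low-level attribute values up toward candidate dimension levels. A toy sketch with invented values, illustrating the idea rather than the system described in the thesis:

```python
# Invented concept hierarchy over a "city" attribute: city -> country -> region.
PARENT = {
    "Kaohsiung": "Taiwan", "Taipei": "Taiwan",
    "Osaka": "Japan", "Tokyo": "Japan",
    "Taiwan": "Asia", "Japan": "Asia",
}

def generalize(value: str, levels: int = 1) -> str:
    """Climb the concept hierarchy `levels` steps, stopping at the top."""
    for _ in range(levels):
        value = PARENT.get(value, value)
    return value

# Roll raw sales facts up one level before proposing a dimension level.
sales = [("Taipei", 120), ("Osaka", 80), ("Kaohsiung", 95)]
totals = {}
for city, amount in sales:
    country = generalize(city)
    totals[country] = totals.get(country, 0) + amount
print(totals)  # {'Taiwan': 215, 'Japan': 80}
```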
Gibson, Christopher Thomas. "Website optimization, design, and restructuring." 2005. http://etd.louisville.edu/data/UofL0116t2005.pdf.
Title and description from thesis home page (viewed Jan. 30, 2007). Department of Computer Engineering and Computer Science. Vita. "December 2005." Includes bibliographical references (p. 100-102).