Dissertations / Theses on the topic 'Design for data'

To see the other types of publications on this topic, follow the link: Design for data.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Design for data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Tjärnberg, Cecilia. "BIG DATA DESIGN - Strange but familiar." Thesis, Konstfack, Inredningsarkitektur & Möbeldesign, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:konstfack:diva-6952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
How form translates as it moves between the physical and the digital has caught my interest. I collect data through different types of 3D scanning, exploring a range of technologies. In the digital realm, the information captured presents itself as a messy abstraction of the original, where some information is added while other information is lost. Developing the material, I adopt complex content-aware auto-fill algorithms - a strategy that becomes essential for the project. In my installation, visitors can explore thresholds between the real and the virtual. My firm belief is that the traces of physical and digital wear and tear add value in that they unpack my process, birthing something strange yet familiar.
2

COSTA, PIETRO. "Human-data experience design : progettare con i personal data." Doctoral thesis, Università IUAV di Venezia, 2015. http://hdl.handle.net/11578/278686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Jianhua. "Contemporary data path design optimization." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3214712.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed July 10, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 80-82).
4

Pliuskuvienė, Birutė. "Adaptive data models in design." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080627_143940-41525.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The dissertation examines the problem of adapting software whose instability is caused by changes in the content and structure of the primary data, as well as in the algorithms that implement solutions to applied problems. The solution to the problem is based on a methodology for adapting models of data expressed as relational sets.
5

張振隆 and Chun-lung Cheung. "Data warehousing mobile code design." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B29872996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Owen, J. "Data management in engineering design." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/385838/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Engineering design involves the production of large volumes of data. These data are a sophisticated mix of high performance computational and experimental results, and must be managed, shared and distributed across worldwide networks. Given limited storage and networking bandwidth, but rapidly growing rates of data production, effective data management is becoming increasingly critical. Within the context of Airbus, a leading aerospace engineering company, this thesis bridges the gap between academia and industry in the management of engineering data. It explores the high performance computing (HPC) environment used in aerospace engineering design, about which little was previously known, and applies the findings to the specific problem of file system cleaning. The properties of Airbus HPC file systems show many similarities with other environments, such as workstations and academic or public HPC file systems, but there are also some notably unique characteristics. In this research study it was found that Airbus file system volumes exhibit a greater disk usage by a smaller proportion of files than any other case, and a single file type accounts for 65% of the disk space but less than 1% of the files. The characteristics and retention requirements of this file type formed the basis of a new cleaning tool, cognizant of these properties, that we have researched and deployed within Airbus. It yielded disk space savings of 21.1 TB (15.2%) and 37.5 TB (28.2%) over two cleaning studies, and may be able to extend the life of existing storage systems by up to 5.5 years. It was also noted that the financial value of the savings already made exceeds the cost of this entire research programme. Furthermore, log files contain information about these key files, and further analysis reveals that direct associations can be made to infer valuable additional metadata about such files. These additional metadata were shown to be available for a significant proportion of the data, and could be used to improve the effectiveness and efficiency of future data management methods even further.
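The cleaning tool described above rests on first profiling which file types dominate disk usage. As a rough, hypothetical illustration (this is not Owen's actual tool; the root path, the grouping by file extension and the reporting threshold are assumptions), such a profiling pass might look like this:

    import os
    from collections import defaultdict

    def profile_by_extension(root):
        """Aggregate disk usage and file counts per extension under `root`."""
        usage = defaultdict(lambda: {"bytes": 0, "files": 0})
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    size = os.path.getsize(path)
                except OSError:
                    continue  # skip unreadable or vanished files
                ext = os.path.splitext(name)[1].lower() or "<none>"
                usage[ext]["bytes"] += size
                usage[ext]["files"] += 1
        return usage

    if __name__ == "__main__":
        stats = profile_by_extension("/scratch/hpc_results")  # hypothetical mount point
        total = sum(v["bytes"] for v in stats.values()) or 1
        for ext, v in sorted(stats.items(), key=lambda kv: -kv[1]["bytes"])[:10]:
            print(f"{ext:10s} {v['bytes'] / total:6.1%} of space, {v['files']} files")

A report of this kind is what reveals a dominant file type (such as the one accounting for 65% of space in the Airbus study), which a retention-aware cleaning policy can then target.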
7

Cheung, Chun-lung. "Data warehousing mobile code design." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23001057.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Siththara, Gedara Jagath Senarathne. "Experimental design for dependent data." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/201237/1/Jagath%20Senarathne_Siththara%20Gedara_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This PhD focused on developing new methods to design experiments where dependent data are observed. Of primary consideration was Bayesian design, i.e. designs found based on undertaking a Bayesian analysis of the data. The generic design algorithms and the loss functions proposed in this study cater to a wide range of applications, including designing clinical trials and geostatistical experiments. These tools enable informed decisions to be made efficiently through maximizing the information gained from experiments while reducing costs.
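For orientation, Bayesian designs of this kind are typically found by maximising an expected utility (or minimising an expected loss) over the design space; a generic form, with notation assumed here rather than taken from the thesis, is:

    U(d) = \int_{\mathcal{Y}} \int_{\Theta} u(d, y, \theta)\, p(\theta \mid y, d)\, p(y \mid d)\, \mathrm{d}\theta\, \mathrm{d}y,
    \qquad
    d^{*} = \operatorname*{arg\,max}_{d \in \mathcal{D}} U(d)

where u is a utility (or negative loss) such as the Kullback-Leibler divergence between prior and posterior, and the dependence structure of the data enters through the likelihood underlying p(y | d). The generic algorithms and loss functions proposed in the thesis would correspond to particular choices within a framework of this kind.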
9

Roberg, Abigail M. "Data Visualizations: Guidelines for Gathering, Analyzing, and Designing Data." Ohio University Honors Tutorial College / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1524826335755109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Herrmann, Amy Elizabeth. "Coupled design decisions in distributed design." Thesis, Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/16656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Katsura-Gordon, Shigeo. "Democratizing Our Data : Finding Balance Living In A World Of Data Control." Thesis, Umeå universitet, Designhögskolan vid Umeå universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-148942.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The 2018 scandal in which Cambridge Analytica tampered with U.S. elections using targeted ad campaigns driven by illicitly collected Facebook data has shown us that there are consequences to living in a world of technology driven by data. Mark Zuckerberg recently took part in a congressional hearing, making the topic of controlling data an important discussion even at the highest level of government. At the same time, we can recognize the benefits of data in the form of highly personalized technology and services. There's nothing better than a targeted ad that appears at just the right time when you need to make a purchase, or when Spotify provides you with the perfect playlist for a Friday night. This leaves us torn between opposites: to reject data and abandon our technology, returning to the proverbial stone age, or to accept being online all the time, monitored by a vast network of sensors that feed data into algorithms that may know more about our habits than we do. It is the friction of these polar opposites that will lead us on a journey to find balance between the benefits and negatives of having data as part of our everyday lives. To help explore the negatives and positives that will occur on this journey, I developed Data Control Box, a product that asks the question "How would you live in a world where you can control your data?" Found in homes and workplaces, it allows individuals or groups of people to control their data by placing their mobile devices into its 14x22.5x15 cm acrylic container. Where the General Data Protection Regulation (GDPR) regulates and controls data after it has been produced, enforcing that "business processes that handle personal data must be built with data protection by design and by default, meaning that personal data must be stored using pseudonymisation or full anonymisation, and use the highest-possible privacy settings by default, so that the data is not available publicly without explicit consent, and cannot be used to identify a subject without additional information stored separately" (Wikipedia, 2018), Data Control Box limits personal data production through a physical barrier to its user prior to its creation. This physical embodiment of data control disrupts everyday habits when using a mobile device, which in turn creates the opportunity for reflection and questioning about what control of data is and how it works. For example, a person using Data Control Box can still create data using a personal computer despite having placed their mobile device inside Data Control Box. Being faced with this realization reveals aspects of the larger systems that might not have been as apparent without Data Control Box, and can serve as a starting point for answering the question "How would you live in a world where you can control your data?" To further build on this discussion, people using Data Control Box are encouraged to share their reflections by tweeting to the hashtag #DataControlBox. These tweets are displayed through Data Control Box's 1.5 inch OLED breakout board connected to an Arduino microcontroller. Data Control Box can interface with any network-connected computer using a USB cord, which also serves as a power source.
The connected feature of Data Control Box allows units found around the world to become nodes in a real-time discussion about the balance of data as a part of everyday life, but it also serves as a collection of discussions that took place over time, starting in May 2018. As a designer, the deployment of Data Control Box allowed me to probe the lives of real people and to see how they might interact with Data Control Box, but also with their data, in a day-to-day setting. A total of fifteen people interacted with Data Control Box following a single protocol that was read aloud to them beforehand. A number of different contexts for the deployment of Data Control Box were explored, such as at home, on a desk at school, and during a two-hour human-computer interaction lecture. I collected a variety of qualitative research data in the form of photos and informal video interviews during these deployments, which I synthesized into the following insights that can be used by designers when considering how to design for the control of data, but also how to design for complex subjects like data. This paper retraces my arrival at this final prototype, sharing the findings of my initial research collected during desk research, initial participant activities, and the creation of my initial prototype, Data Box /01. It then closes with a deeper dive into the design rationale and process behind building my final prototype, Data Control Box, and summarizes in greater detail the insights I have learned from its deployment through results discussion and creative reflection.
12

Romero, Moral Óscar. "Automating the multidimensional design of data warehouses." Doctoral thesis, Universitat Politècnica de Catalunya, 2010. http://hdl.handle.net/10803/6670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Previous experiences in the data warehouse field have shown that the data warehouse multidimensional conceptual schema must be derived from a hybrid approach: i.e., by considering both the end-user requirements and the data sources as first-class citizens. As in any other system, requirements guarantee that the system devised meets the end-user needs. In addition, since the data warehouse design task is a reengineering process, it must consider the underlying data sources of the organization: (i) to guarantee that the data warehouse can be populated from data available within the organization, and (ii) to allow the end-user to discover unknown additional analysis capabilities.

Currently, several methods for supporting the data warehouse modeling task have been proposed. However, they suffer from some significant drawbacks. In short, requirement-driven approaches assume that requirements are exhaustive (and therefore do not consider that the data sources may contain additional interesting evidence for analysis), whereas data-driven approaches (i.e., those leading the design task from a thorough analysis of the data sources) rely on discovering as much multidimensional knowledge as possible from the data sources. As a consequence, data-driven approaches generate too many results, which mislead the user. Furthermore, automating the design task is essential in this scenario, as it removes the dependency on an expert's ability to properly apply the method chosen, and the need to analyze the data sources, which is a tedious and time-consuming task (and can be unfeasible when working with large databases). In this sense, current automatable methods follow a data-driven approach, whereas current requirement-driven approaches overlook process automation, since they tend to work with requirements at a high level of abstraction. Indeed, this scenario is repeated for the data-driven and requirement-driven stages within current hybrid approaches, which suffer from the same drawbacks as pure data-driven or requirement-driven approaches.
In this thesis we introduce two different approaches for automating the multidimensional design of the data warehouse: MDBE (Multidimensional Design Based on Examples) and AMDO (Automating the Multidimensional Design from Ontologies). Both approaches were devised to overcome the limitations from which current approaches suffer. Importantly, our approaches consider opposite initial assumptions, but both consider the end-user requirements and the data sources as first-class citizens.

1. MDBE follows a classical approach, in which the end-user requirements are well known beforehand. This approach benefits from the knowledge captured in the data sources, but guides the design task according to the requirements and, consequently, is able to work with semantically poorer data sources. In other words, given high-quality end-user requirements, we can guide the process from the knowledge they contain and overcome data sources of poor semantic quality.
2. AMDO, in contrast, assumes a scenario in which the available data sources are semantically richer. Thus, the proposed approach is guided by a thorough analysis of the data sources, which is then adapted to shape the output according to the end-user requirements. In this context, having high-quality data sources compensates for the lack of expressive end-user requirements.

Importantly, our methods establish a combined and comprehensive framework that can be used to decide, according to the inputs provided in each scenario, which is the best approach to follow. For example, we cannot follow the same approach in a scenario where the end-user requirements are clear and well known as in a scenario in which the end-user requirements are not evident or cannot be easily elicited (e.g., this may happen when the users are not aware of the analysis capabilities of their own sources). Interestingly, the need to have requirements beforehand is softened by the availability of semantically rich data sources; in their absence, requirements gain relevance for extracting the multidimensional knowledge from the sources.
Thus, we claim to provide two approaches whose combination is exhaustive with regard to the scenarios discussed in the literature.
13

Romero, Moral Oscar. "Automating the multidimensional design of data warehouses." Doctoral thesis, Universitat Politècnica de Catalunya, 2010. http://hdl.handle.net/10803/6670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Dahlqvist, Thea. "Användargenererad data i tjänstedesignprocessen." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-101012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The goal of the study is to find out how three different innovative techniques can be used to collect user experiences in a service design process. This is carried out using a design case regarding the user experience of the Norrkoping Symphony Orchestra's (SON) concerts. This symphony orchestra has experienced increasing difficulty in attracting more visitors, and new methods are needed to tackle the problem. Therefore, mobile ethnography and innovative methods are used in this study. There were 20 participants in the study, including a mix of people who regularly attend SON concerts and those who do not. The participants' task in this study was to attend two SON concerts and document their experience using three different innovative techniques: a probe, a smartphone application and an automatic camera. The result of the study shows that the probe gave a much more detailed look into the participants' view of the concert experience and contributed to widening the focus of the service design process. The application gave more detailed information in real time, on site. The automatic camera gave a more detailed flow of the concert experience, automatically captured on site, which may show certain patterns and behaviors of the participants linked to the concert experience. The study shows that the three innovative techniques put the focus on the user throughout the entire service design process, which is the foundation for working user-centered. If the techniques are used in combination they become more effective, as they complement each other in a service design process.
15

Noaman, Amin Yousef. "Distributed data warehouse architecture and design." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0027/NQ51662.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Mathew, Michael Ian. "Design of nonlinear sampled-data systems." Thesis, Coventry University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.480606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Kaczorowski, Kevin J. "Data-driven strategies for vaccine design." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/117327.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Vaccination is one of the greatest achievements in immunology and in general medicine, and has virtually eradicated many infectious diseases that plagued humans in the past. Vaccination involves injecting an individual with some version of the pathogen in order to allow the individual to develop a memory immune response that will protect them from future challenge with the same pathogen. Until recently, vaccine development has largely followed empirical paradigms that have proven successful against many diseases. However, many pathogens have now evolved that defy success using the traditional approaches. Rational design of vaccines against such pathogens will likely require interdisciplinary approaches spanning engineering, immunology, and the physical sciences. In this thesis, we combine theoretical approaches with protein sequence and clinical data to address two contemporary problems in vaccinology: 1. Developing an antibody vaccine against HIV, an example of a highly mutable pathogen; and 2. Understanding how the many immune components work collectively to effect a systemic immune response, such as to vaccines. In HIV-infected individuals, antibodies produced by the immune system bind to specific parts of an HIV protein called Envelope (Env). However, the virus evades the immune response due to its high mutability, thus making effective vaccine design a huge challenge. To predict the mutational vulnerabilities of the virus, we developed a model (a fitness landscape) to translate sequence data into knowledge of viral fitness, a measure of the ability of the virus to replicate and thrive. The landscape accounts explicitly for coupling interactions between mutations at different positions within the protein, which often dictate how the virus evades the immune response. We developed new computational approaches that enabled us to tackle the large size and mutational variability of Env, since previous approaches have been unsuccessful in this case. A small fraction of HIV-infected individuals produce a class of antibodies called broadly neutralizing antibodies (bnAbs), which neutralize a diverse number of HIV strains and can thus tolerate many mutations in Env. To investigate the mechanisms underlying breadth of these bnAbs, we combined our landscape with 3D protein structures to gain insight into the spatial distribution of binding interactions between bnAbs and Env. Based on this, we designed an optimal set of immunogens (i.e. Env sequences), with mutations at key residues, that are potentially likely to lead to the elicitation of bnAbs via vaccination. We hope that these antigens will soon be tested in animal models. Even when the right immunogens are included in a vaccine, a potent immune response is not always induced. For example, some individuals do not respond to protective influenza vaccines as desired. The human immune system consists of many different immune cells that coordinate their actions to fight infections and respond to vaccines. The balance between these cell populations is determined by direct interactions and soluble factors such as cytokines, which serve as messengers between cells. A mechanistic understanding of how the various immune components cooperate to bring about the immune response can guide strategies to improve vaccine efficacy. 
To investigate whether differences in immune response could be explained by variation in immune cell compositions across individuals, we analyzed experimental measurements of various immune cell population frequencies in a cohort of healthy humans. We demonstrated that human immune variation in these parameters is continuous rather than discrete. Furthermore, we showed that key combinations of these immune parameters can be used to predict immune response to diverse stimulations, namely cytokine stimulation and vaccination. Thus, we defined the concept of an individual's "immunotype" as their location within the space of these key combinations of parameters. This result highlights a previously unappreciated connection between immune cell composition and systemic immune responses, and can guide future development of therapies that aim to collectively, rather than independently, manipulate immune cell frequencies.
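For context, fitness landscapes of the kind described above are often pairwise (Potts-like) models inferred from sequence data; a generic form, with notation assumed here rather than taken from the thesis, assigns each Env sequence s = (s_1, ..., s_L) an energy whose exponential is taken as proportional to fitness:

    E(s) = -\sum_{i=1}^{L} h_i(s_i) \;-\; \sum_{1 \le i < j \le L} J_{ij}(s_i, s_j),
    \qquad
    f(s) \propto e^{-E(s)}

where the fields h_i capture site-wise mutational costs and the couplings J_ij capture the interactions between positions that the abstract refers to; mutational vulnerabilities then correspond to changes that sharply increase E(s).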
by Kevin J. Kaczorowski.
Ph. D.
18

Valero, Bresó Alejandro. "Hybrid caches: design and data management." Doctoral thesis, Editorial Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/32663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cache memories have usually been implemented with Static Random-Access Memory (SRAM) technology since it is the fastest electronic memory technology. However, this technology consumes a high amount of leakage current, which is a major design concern because leakage energy consumption increases as the transistor size shrinks. Alternative technologies are being considered to reduce this consumption. Among them, embedded Dynamic RAM (eDRAM) technology provides minimal area and leakage by design, but reads are destructive and it is not as fast as SRAM. In this thesis, the SRAM and eDRAM technologies are combined to take advantage of what each of them offers. First, they are combined at cell level to implement an n-bit macrocell consisting of one SRAM cell and n-1 eDRAM cells. The macrocell is used to build n-way set-associative hybrid first-level (L1) data caches having one SRAM way and n-1 eDRAM ways. A single SRAM way is enough to achieve good performance given the high data locality of L1 caches. Architectural mechanisms such as way-prediction, swaps, and scrub operations are considered to avoid unnecessary eDRAM reads, to maintain the Most Recently Used (MRU) data in the fast SRAM way, and to completely avoid refresh logic. Experimental results show that, compared to a conventional SRAM cache, leakage and area are largely reduced with little impact on performance. The study of the benefits of hybrid caches has also been carried out in second-level (L2) caches acting as Last-Level Caches (LLCs). In this case, the technologies are combined at bank level, and the optimal ratio of SRAM and eDRAM banks that achieves the best trade-off among performance, energy, and area is identified. As in L1 caches, the MRU blocks are kept in the SRAM banks and they are accessed first to avoid unnecessary destructive reads. Nevertheless, refresh logic is not removed since data locality differs widely at this cache level. Experimental results show that a hybrid LLC with an eighth of its banks built with SRAM technology is enough to achieve the best target trade-off. This dissertation also deals with the performance of replacement policies in heterogeneous LLCs, mainly focusing on the energy overhead incurred by refresh operations. The thesis defines a new concept, the MRU-Tour (MRUT), that helps estimate the reuse information of cache blocks. Based on this concept, a family of MRUT-based replacement algorithms is proposed that randomly select the victim block among those having a single MRUT. These policies are enhanced to leverage recency information for a few blocks and to adapt to changes in the working set of the benchmarks. Results show that the proposed MRUT policies, with simpler hardware complexity, outperform the Least Recently Used (LRU) policy and a set of the most representative state-of-the-art replacement policies for LLCs. Refresh operations represent an important fraction of the overall dynamic energy consumption of eDRAM LLCs. This fraction increases with the cache capacity, since more blocks have to be refreshed in a given period of time. Prior works have attacked the refresh energy taking into account inter-cell feature variations. Unlike these works, this thesis proposes a selective refresh policy based on the MRUT concept. The devised policy takes into account the number of MRUTs of a block to decide whether the block is refreshed. In this way, many refreshes done in a typical distributed refresh policy are skipped (i.e., in those blocks having a single MRUT).
This refresh mechanism is applied in the hybrid LLC memory. Results show that refresh energy consumption is largely reduced with respect to a conventional eDRAM cache, while the performance degradation is minimal with respect to a conventional SRAM cache.
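As a rough sketch of the MRU-Tour idea (a reconstruction from the abstract under stated assumptions, not the thesis's actual algorithm): each block counts how many times it has entered the MRU position since insertion, and the victim is chosen at random among blocks whose count is one.

    import random

    class MRUTSet:
        """Toy cache set illustrating MRU-Tour (MRUT) based victim selection."""

        def __init__(self, ways):
            self.ways = ways
            self.blocks = {}   # tag -> number of MRU-Tours completed so far
            self.mru = None    # tag currently in the MRU position

        def access(self, tag):
            """Reference a block; insert it (possibly evicting another) on a miss."""
            if tag not in self.blocks:
                if len(self.blocks) >= self.ways:
                    self.evict()
                self.blocks[tag] = 0
            if self.mru != tag:
                self.blocks[tag] += 1   # the block (re)enters the MRU position: one more MRUT
                self.mru = tag

        def evict(self):
            """Pick a victim at random among blocks with a single MRUT (low expected reuse)."""
            single = [t for t, n in self.blocks.items() if n == 1]
            victim = random.choice(single or list(self.blocks))
            del self.blocks[victim]
            if self.mru == victim:
                self.mru = None

    s = MRUTSet(ways=4)
    for t in [1, 2, 3, 4, 1, 5]:   # the reuse of block 1 gives it two MRUTs before the miss on 5
        s.access(t)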
Valero Bresó, A. (2013). Hybrid caches: design and data management [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/32663
Alfresco
Premiado
19

Acuna, Stamp Annabelen. "Design Study for Variable Data Printing." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin962378632.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Mahood, Christian. "Data center design & enterprise networking /." Online version of thesis, 2009. http://hdl.handle.net/1850/8699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Lee, Heeseok. "Data allocation design in computer networks." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Well distributed data can dramatically improve the efficiency and effectiveness of the use of distributed database systems to satisfy geographically dispersed data processing demands. Among several issues related to distribution design in distributed databases, data allocation design is of major importance. Choices of a fragmentation strategy and location of database files are two critical decisions. Thus far, solutions of these design problems, although interdependent, have been attempted separately. Solving both design problems simultaneously in a real design setting is not a trivial task. By formulating typical data allocation design problems, we can analyze the solution space and analytical properties of optimal data allocation design. Based on this, we suggest that clustering data elements into uniform fragments and then allocating these fragments is equivalent to solving the data allocation design as a whole. Such analytical examination of the data allocation design problem has not been attempted by other researchers, but it is essential to provide the theoretical foundation for solving the fragmentation design and fragment allocation design problem. We then extended the research by studying the effect on design issues of such characteristics of distributed processing as database access patterns, network scope, and design objectives. We also propose a generic taxonomy of data allocation design models. We further advance data allocation design skills in the following two directions. The first of these involves developing a design method that guarantees the minimum number of fragments to be considered as units of allocation. This improves upon existing fragment allocation methodologies, which are based on the assumed units of allocation. The second direction involves enhancements in modeling and solution procedures that allow efficient fragment allocation design. Concentration is on information processing environments, which have received little attention in the research literature. We first studied databases connected on local area networks under weak locality of reference. The model proposed is validated by simulation study. We then explored the multiple design objective optimization phase, which involves searching for models where several design objectives are in conflict. We addressed three important design objectives including response time, operating cost and data availability. In conclusion, we submit that the methodology proposed is likely to provide a better understanding of data allocation design problems, the solutions for which are expected to continue providing key design tools as advancing data communication techniques evolve.
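For orientation, fragment allocation problems of this kind are commonly written as cost-minimisation programs; a generic single-objective form, with notation assumed here rather than taken from the dissertation (x_{fs} = 1 if fragment f is stored at site s), is:

    \min_{x \in \{0,1\}^{F \times S}} \; \sum_{f=1}^{F} \sum_{s=1}^{S}
        \Big( C^{\mathrm{store}}_{fs} + \sum_{q=1}^{Q} r_{qs}\, C^{\mathrm{access}}_{qfs} \Big)\, x_{fs}
    \quad \text{s.t.} \quad
    \sum_{s=1}^{S} x_{fs} \ge 1 \;\; \forall f,
    \qquad
    \sum_{f=1}^{F} b_f\, x_{fs} \le B_s \;\; \forall s

where storage costs, query access costs (weighted by query frequencies r_{qs}), fragment sizes b_f and site capacities B_s stand in for the response-time, operating-cost and availability objectives that the dissertation treats, and trades off, explicitly.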
22

Carlsson, Nicole. "Vulnerable data interactions — augmenting agency." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis project opens up an interaction design space in the InfoSec domain concerning raising awareness of common vulnerabilities and facilitating counter-practices through seamful design. This combination of raising awareness coupled with boosting possibilities for deliberate action (or non-action) together accounts for augmenting agency. This augmentation takes the form of bottom-up micro-movements and daily gestures contributing to opportunities for greater agency in the increasingly fraught InfoSec domain.
23

Cai, Simin. "Systematic Design of Data Management for Real-Time Data-Intensive Applications." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Modern real-time data-intensive systems generate large amounts of data that are processed using complex data-related computations such as data aggregation. In order to maintain the consistency of data, such computations must be both logically correct (producing correct and consistent results) and temporally correct (completing before specified deadlines). One solution to ensure logical and temporal correctness is to model these computations as transactions and manage them using a Real-Time Database Management System (RTDBMS). Ideally, depending on the particular system, the transactions are customized with the desired logical and temporal correctness properties, which are achieved by the customized RTDBMS with appropriate run-time mechanisms. However, developing such a data management solution with provided guarantees is not easy, partly due to inadequate support for systematic analysis during the design. Firstly, designers do not have means to identify the characteristics of the computations, especially data aggregation, and to reason about their implications. Design flaws might not be discovered, and thus they may be propagated to the implementation. Secondly, trade-off analysis of conflicting properties, such as conflicts between transaction isolation and temporal correctness, is mainly performed ad-hoc, which increases the risk of unpredictable behavior. In this thesis, we propose a systematic approach to develop transaction-based data management with data aggregation support for real-time systems. Our approach includes the following contributions: (i) a taxonomy of data aggregation, (ii) a process for customizing transaction models and RTDBMS, and (iii) a pattern-based method of modeling transactions in the timed automata framework, which we show how to verify with respect to transaction isolation and temporal correctness. Our proposed taxonomy of data aggregation processes helps in identifying their common and variable characteristics, based on which their implications can be reasoned about. Our proposed process allows designers to derive transaction models with desired properties for the data-related computations from system requirements, and decide the appropriate run-time mechanisms for the customized RTDBMS to achieve the desired properties. To perform systematic trade-off analysis between transaction isolation and temporal correctness specifically, we propose a method to create formal models of transactions with concurrency control, based on which the isolation and temporal correctness properties can be verified by model checking, using the UPPAAL tool. By applying the proposed approach to the development of an industrial demonstrator, we validate the applicability of our approach.
DAGGERS
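For orientation, the isolation and temporal-correctness properties mentioned above are checked in UPPAAL as queries over the network of timed automata; illustrative shapes of such properties (assumed here for illustration, not the thesis's actual specifications) are:

    \mathrm{A}\Box\, \neg \mathit{deadlock},
    \qquad
    \mathrm{A}\Box\, \neg\big(T_1.\mathit{writing} \wedge T_2.\mathit{writing}\big),
    \qquad
    \mathrm{A}\Box\, \big(T_i.\mathit{executing} \Rightarrow c_i \le D_i\big)

i.e., the transaction network never deadlocks, two conflicting transactions are never in their writing locations at the same time (an isolation-style safety property), and a transaction's clock c_i never exceeds its deadline D_i while it executes (temporal correctness).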
24

Lundberg, Agnes. "Dealing with Data." Thesis, KTH, Arkitektur, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Being an architect means dealing with data. All architectural thinking—whether it is done with pen and paper or the most advanced modeling softwares—starts with data, conveying information about the world, and ultimately outputs data, in the form of drawings or models. Reality is neither the input nor the output. All architectural work is abstractions of reality, mediated by data. What if data, the abstractions of reality that are crucial for our work as architects, was to be used more literally? Could data actually be turned into architecture? Could data be turned into, for example, a volume, a texture, or an aperture? What qualities would such an architecture have?  These questions form the basis of this thesis project. The topic was investigated first by developing a simple design method for generating architectural forms from data, through an iterative series of tests. Then, the design method was applied to create a speculative design proposal for a combined data center and museum, located in Södermalm, Stockholm.
25

鄭桂懷 and Kwai-wai Cheng. "A collaborative design tool for virtual design studios." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31220526.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Leung, Nim Keung. "Convexity-Preserving Scattered Data Interpolation." Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc277609/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Surface fitting methods play an important role in many scientific fields as well as in computer aided geometric design. The problem treated here is that of constructing a smooth surface that interpolates data values associated with scattered nodes in the plane. The data is said to be convex if there exists a convex interpolant. The problem of convexity-preserving interpolation is to determine if the data is convex, and construct a convex interpolant if it exists.
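One common way to make the existence question above concrete (a sketch under a general-position assumption, not necessarily the construction used in this thesis): data (x_i, y_i, z_i) admit a convex interpolant exactly when every data point lies on the lower convex hull of the 3D point set, which can be checked directly:

    import numpy as np
    from scipy.spatial import ConvexHull

    def is_convex_data(points):
        """Check whether scattered data (x, y, z) admit a convex interpolant.

        points: (n, 3) array of (x_i, y_i, z_i). Assumes the points are in general
        position; coplanar degeneracies are not handled by this sketch.
        """
        pts = np.asarray(points, dtype=float)
        hull = ConvexHull(pts)
        lower = set()
        for simplex, eq in zip(hull.simplices, hull.equations):
            if eq[2] < 0:            # outward normal points downward: a lower-hull facet
                lower.update(simplex)
        return len(lower) == len(pts)  # convex iff every node is a lower-hull vertex

    # Example: z = x^2 + y^2 sampled at random nodes is convex data.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1, 1, size=(30, 2))
    z = (xy ** 2).sum(axis=1)
    print(is_convex_data(np.column_stack([xy, z])))   # expected: True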
27

Pellegrino, Gregory S. "Design of a Low-Cost Data Acquisition System for Rotordynamic Data Collection." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/1978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A data acquisition system (DAQ) was designed based on the use of an STM32 microcontroller. Its purpose is to provide a transparent and low-cost alternative to commercially available DAQs, giving educators a means to teach students about the process through which data are collected as well as the uses of collected data. The DAQ was designed to collect data from rotating machinery spinning at speeds up to 10,000 RPM and send these data to a computer through a USB 2.0 full-speed connection. Multitasking code was written for the DAQ to allow data to be simultaneously collected and transferred over USB. Additionally, a console application was created to control the DAQ and read data, and MATLAB code was written to analyze the data. The DAQ was compared against a custom-assembled National Instruments CompactDAQ system. Using a Bentley-Nevada RK 4 Rotor Kit, data were simultaneously collected using both DAQs. Analysis of these data shows the capabilities and limitations of the low-cost DAQ compared to the custom CompactDAQ.
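The host-side console application described above essentially streams framed samples off the USB (virtual COM) link; a minimal, hypothetical sketch (the port name, 16-bit little-endian sample framing, and channel count are assumptions, not the thesis's actual protocol) could look like this:

    import struct
    import serial  # pyserial; the STM32 typically enumerates as a virtual COM port

    PORT = "/dev/ttyACM0"   # hypothetical device node; "COMx" on Windows
    CHANNELS = 4            # assumed number of probe channels
    FRAME = struct.Struct("<%dH" % CHANNELS)  # assumed: CHANNELS unsigned 16-bit samples per frame

    def read_frames(n_frames, port=PORT, baudrate=115200):
        """Read n_frames sample frames from the DAQ and return them as a list of tuples."""
        frames = []
        with serial.Serial(port, baudrate=baudrate, timeout=1.0) as ser:
            for _ in range(n_frames):
                raw = ser.read(FRAME.size)
                if len(raw) < FRAME.size:
                    break                   # timeout: stop early rather than mis-align frames
                frames.append(FRAME.unpack(raw))
        return frames

    if __name__ == "__main__":
        data = read_frames(1000)
        print(f"collected {len(data)} frames")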
28

Yi, Xin. "Data visualization in conceptual design: developing a prototype for complex data visualization." Thesis, Blekinge Tekniska Högskola, Institutionen för maskinteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In today's highly competitive industries, engineers are driven not only to design better products that fulfill users' needs but also to develop products quickly in order to capture the market. With the development of data collection and visualization technology, applying data visualization in product development to support better product design is a significant trend. Data visualization becomes more and more important since it can illustrate valuable information, such as tacit needs and patterns hidden in the data, in a communicative way to help engineers get more inspiration for conceptual design. It is not hard to collect data; however, the challenge is to visualize the valuable information in large amounts of data concisely and intuitively. In recent years, some visualization techniques have become available for product design; however, most of them are applied in the later stages of product development, and few methods are applicable to conceptual design. Therefore, this thesis explores appropriate visualization techniques to provide support for conceptual design. The aim of this thesis is, in an engineering environment, to investigate ways to visualize complex data legibly and intuitively to enhance engineers' ability in conceptual design through a better understanding of the current machine. In order to achieve this objective, a conceptual design case on improving wheel loader fuel consumption, consisting of numerous data sets with various parameters, is used to explore how to reveal the hidden information in complex data for engineers. As the result of this thesis, a prototype containing a series of visualization techniques is proposed to demonstrate data information from a wheel loader under several visualization situations. The final prototype has the functions of visualizing different operations separately, visualizing the overall fuel consumption in one operation, visualizing cluster patterns, and visualizing the impact of one variable on the whole value.
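As a rough illustration of the kind of views such a prototype provides (the file name and column names are hypothetical; this is not the thesis's actual implementation), one panel might aggregate fuel use per operation while another colours operating points by cluster:

    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    # Hypothetical log format: one row per time step with an operation label and sensor signals.
    df = pd.read_csv("wheel_loader_log.csv")  # assumed columns: operation, engine_speed, load, fuel_rate

    # View 1: total fuel consumed per operation type (e.g. loading, transport, dumping).
    df.groupby("operation")["fuel_rate"].sum().plot(kind="bar")
    plt.ylabel("fuel consumed")

    # View 2: cluster operating points and colour them to expose recurring usage patterns.
    features = df[["engine_speed", "load", "fuel_rate"]]
    df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    df.plot.scatter(x="engine_speed", y="fuel_rate", c="cluster", colormap="viridis")
    plt.show()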
29

Galanis, Panagiotis. "Designing with data in mind : designer perceptions on visualising data within editorial information design practice." Thesis, University of Portsmouth, 2014. https://researchportal.port.ac.uk/portal/en/theses/designing-with-data-in-mind(1683f801-8926-48fb-850f-f51f9ddce9f0).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This research identifies and addresses a critical knowledge gap in the discipline of editorial information design, a new area of data visualisation within the editorial environment. Owing to the paucity of literature on this specific area at the time of writing, the study aims to bring explicitness to design practices that remain, in research terms, largely unexplored. Literature supporting the emergent research was examined from two key areas: first, general design theory and principles from well-developed design fields; second, selected material from the established area of information design, reviewing essential concepts of information visualisation. Combining both areas ensured breadth and depth of research perspective and sensitised the researcher to critical issues later used to evaluate emerging material. To fulfil the aims of the study, interviewing was the primary method of data acquisition, with the Grounded Theory Method selected as the methodology for analysing the data, as it was perceived as the most effective way to capture tacit and empirical knowledge and connect it with practitioner activity. As a qualitative method it consists of practices that interpret data and make the world visible, encouraging researchers to be active and engaged analysts who apply abductive reasoning to findings even during data collection. Through this iterative process the level of abstraction is raised and the analysis intensified, informing and advancing both collection and analysis. The material highlighted the tacit knowledge, embedded in the act of designing, that practitioners of editorial information design possess, informing the observed knowledge gap. The combined material was coded, juxtaposed and refined through multiple analytic cycles, seeking emergent elements of critical activity in editorial information design with the potential to define practice. The outcomes of the analysis are presented in structured form: emerging codes construct themes of designer activity, delineating essential operations and producing in-depth descriptions grounded in empirical data. Cross-theme conceptual structures also emerge through further analysis, as abstract categories that capture designer operations in continuity and offer insight into how practice transitions between key stages. The study concludes with a set of grounded theories elucidating areas of editorial information design absent from the existing literature. Where the design area previously remained obscure and implicit, leaving much to speculation, this study makes key areas and activities visible: elements directly associated with tacit designer action and design epistemology become explicit, revealing and defining the area under investigation.
30

Mustafa, Mudassir Imran. "Design Principles for Data Export : Action Design Research in U-CARE." Thesis, Uppsala universitet, Institutionen för informatik och media, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-180061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis we report the findings from designing data export functionality in the Uppsala University Psychosocial Care Program (U-CARE) at Uppsala University. The aim was to explore the design space for generic data export functionality for data analysis in data-centric clinical research applications. This was attained through the construction and evaluation of a prototype for a data-centric clinical research application. For this purpose Action Design Research (ADR) was conducted, situated in the domain of clinical research. The results consist of a set of design principles expressing key aspects that need to be addressed when designing data export functionality. The artifacts derived from the development and evaluation process each constitute an example of how to design data export functionality of this kind.
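The U-CARE platform itself is not reproduced here, so the following is only a minimal sketch of one reading of a "generic export" design principle: a single routine, parameterized by an arbitrary query, that writes any result set to CSV. The database, table and column names are hypothetical.

```python
import csv
import sqlite3


def export_query(db_path: str, query: str, out_csv: str, params: tuple = ()) -> int:
    """Run an arbitrary SELECT and write the result, header included, to CSV.

    Keeping the export generic (any query, any column set) rather than
    hard-coding one extract per analysis is one way to read the call for
    configurable, reusable export functionality.
    """
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(query, params)
        columns = [d[0] for d in cur.description]
        rows = cur.fetchall()
    with open(out_csv, "w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        writer.writerow(columns)
        writer.writerows(rows)
    return len(rows)


if __name__ == "__main__":
    # Build a toy database so the sketch runs end to end.
    db = "demo.sqlite"
    with sqlite3.connect(db) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS answers (patient_id, week, score)")
        conn.executemany("INSERT INTO answers VALUES (?, ?, ?)",
                         [(1, 1, 12), (1, 2, 9), (2, 1, 15)])
    n = export_query(db, "SELECT patient_id, week, score FROM answers WHERE week = ?",
                     "week1.csv", (1,))
    print(f"exported {n} rows")
```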
31

Uichanco, Joline Ann Villaranda. "Data-driven revenue management." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41728.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007.
Includes bibliographical references (p. 125-127).
In this thesis, we consider the classical newsvendor model and various important extensions. We do not assume that the demand distribution is known; rather, the only information available is a set of independent samples drawn from the demand distribution. In particular, the variants of the model we consider are: the classical profit-maximization newsvendor model, the risk-averse newsvendor model and the price-setting newsvendor model. If the explicit demand distribution is known, then the exact solutions to these models can be found either analytically or numerically via simulation methods. However, in most real-life settings the demand distribution is not available, and usually there is only historical demand data from past periods. Thus, data-driven approaches are appealing in solving these problems. In this thesis, we evaluate the theoretical and empirical performance of nonparametric and parametric approaches for solving the variants of the newsvendor model assuming partial information on the distribution. For the classical profit-maximization newsvendor model and the risk-averse newsvendor model we describe general non-parametric approaches that do not make any prior assumption on the true demand distribution. We extend and significantly improve previous theoretical bounds on the number of samples required to guarantee with high probability that the data-driven approach provides a near-optimal solution. By near-optimal we mean that the approximate solution performs arbitrarily close to the optimal solution that is computed with respect to the true demand distributions. For the price-setting newsvendor problem, we analyze a previously proposed simulation-based approach for a linear-additive demand model, and again derive bounds on the number of samples required to ensure that the simulation-based approach provides a near-optimal solution. We also perform computational experiments to analyze the empirical performance of these data-driven approaches.
by Joline Ann Villaranda Uichanco.
S.M.
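For the classical profit-maximization variant, the data-driven (sample average approximation) idea can be stated in a few lines: order the empirical quantile of the demand samples at the critical fractile (price − cost)/price. The sketch below illustrates this with synthetic demand data; it is not code from the thesis.

```python
import numpy as np


def saa_newsvendor(demand_samples, price: float, cost: float) -> float:
    """Data-driven order quantity: empirical quantile at the critical fractile.

    For the classical newsvendor, the optimal order quantity is
    F^{-1}((price - cost) / price); with only samples available, the
    sample-average approximation replaces F by the empirical distribution.
    """
    critical_fractile = (price - cost) / price
    return float(np.quantile(demand_samples, critical_fractile))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    demand_samples = rng.lognormal(mean=4.0, sigma=0.5, size=200)  # distribution unknown to the solver
    q = saa_newsvendor(demand_samples, price=10.0, cost=6.0)
    print(f"critical fractile = {(10.0 - 6.0) / 10.0:.2f}, order quantity ≈ {q:.1f}")
```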
32

Duch, Brown Amàlia. "Design and Analysis of Multidimensional Data Structures." Doctoral thesis, Universitat Politècnica de Catalunya, 2004. http://hdl.handle.net/10803/6647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is about the design and analysis of point multidimensional data structures: data structures that store $K$-dimensional keys which we may abstract as points in $[0,1]^K$. These data structures are present in many applications of geographical information systems, image processing or robotics, among others. They are also frequently used as indexes of more complex data structures, possibly stored in external memory.

Point multidimensional data structures must have capabilities such as insertion, deletion and (exact) search of items, but in addition they must support the so called {em associative queries}. Examples of these queries are orthogonal range queries (which are the items that fall inside a given hyper-rectangle?) and nearest neighbour queries (which is the closest item to some given point?).

The contributions of this thesis are two-fold:

Contributions to the design of point multidimensional data structures: the design of randomized $K$-d trees, the design of randomized quad trees and the design of fingered multidimensional search trees;
Contributions to the analysis of the performance of point multidimensional data structures: the average-case analysis of partial match queries in relaxed $K$-d trees and the average-case analysis of orthogonal range queries in various multidimensional data structures.


Concerning the design of randomized point multidimensional data structures, we propose randomized insertion and deletion algorithms for $K$-d trees and quad trees that produce random $K$-d trees and quad trees independently of the order in which items are inserted into them and after any sequence of interleaved insertions and deletions. The use of randomization provides expected performance guarantees, irrespective of any assumption on the data distribution, while retaining the simplicity and flexibility of standard $K$-d trees and quad trees.

Also related to the design of point multidimensional data structures is the proposal of fingered multidimensional search trees, a new technique that enhances point multidimensional data structures to exploit locality of reference in associative queries.

With regards to performance analysis, we start by giving a precise analysis of the cost of partial matches in randomized $K$-d trees. We use these results as a building block in our analysis of orthogonal range queries, together with combinatorial and geometric arguments and we provide a tight asymptotic estimate of the cost of orthogonal range search in randomized $K$-d trees. We finally show that the techniques used apply easily to other data structures, so we can provide an analysis of the average cost of orthogonal range search in other data structures such as standard $K$-d trees, quad trees, quad tries, and $K$-d tries.
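To make the objects of study concrete, here is a minimal, standard (non-randomized) K-d tree with an orthogonal range query in pure Python. It only illustrates the kind of structure and query analysed in the thesis; the randomized and fingered variants are not implemented here.

```python
from typing import List, Optional, Tuple

Point = Tuple[float, ...]


class KdNode:
    __slots__ = ("point", "left", "right")

    def __init__(self, point: Point):
        self.point = point
        self.left: Optional["KdNode"] = None
        self.right: Optional["KdNode"] = None


def insert(root: Optional[KdNode], p: Point, depth: int = 0) -> KdNode:
    """Standard K-d tree insertion, cycling through the discriminants."""
    if root is None:
        return KdNode(p)
    axis = depth % len(p)
    if p[axis] < root.point[axis]:
        root.left = insert(root.left, p, depth + 1)
    else:
        root.right = insert(root.right, p, depth + 1)
    return root


def range_search(root, low: Point, high: Point, depth: int = 0, out=None) -> List[Point]:
    """Orthogonal range query: report points inside the hyper-rectangle [low, high]."""
    if out is None:
        out = []
    if root is None:
        return out
    axis = depth % len(low)
    if all(l <= x <= h for l, x, h in zip(low, root.point, high)):
        out.append(root.point)
    # Visit a subtree only if the query rectangle can overlap it.
    if low[axis] < root.point[axis]:
        range_search(root.left, low, high, depth + 1, out)
    if high[axis] >= root.point[axis]:
        range_search(root.right, low, high, depth + 1, out)
    return out


if __name__ == "__main__":
    import random
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(1000)]
    tree = None
    for p in pts:
        tree = insert(tree, p)
    hits = range_search(tree, (0.2, 0.2), (0.3, 0.3))
    assert sorted(hits) == sorted(p for p in pts
                                  if 0.2 <= p[0] <= 0.3 and 0.2 <= p[1] <= 0.3)
    print(f"{len(hits)} points in the query rectangle")
```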
33

Munir, Wahab. "Optimization of Data Warehouse Design and Architecture." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A huge number of SCANIA trucks and buses are running on the roads. Unlike the trucks and buses of the past, they are high-tech vehicles carrying a lot of technical and operational information on aspects such as load statistics, driving time and engine speed over time. This information is fed into an analysis system where it is organized to support analytical questions. Over time this system has become overloaded and needs to be optimized. A number of areas have been identified that could be improved; however, it is not possible to analyze the whole system within the given constraints, so a subset was selected that was considered sufficient for the purpose of the thesis. The system takes a long time to load new data, data loading is not incremental, there is a lot of redundancy in the storage structure, and query execution takes a long time in some parts of the database. The methods chosen for this thesis include data warehouse design and architecture analysis, a review of end-user queries, and code analysis. A potential solution is presented to reduce the storage space and maintenance time required by the databases, by reducing the number of databases that are maintained in parallel and contain duplicated data. Some optimizations have been made to the storage structure and design to improve query processing time for end users. An example incremental loading strategy is also implemented to demonstrate the idea and how it works; this helps reduce loading time. Moreover, an investigation has been made into a commercially available data warehouse management system, focusing mostly on hardware architecture and how it can contribute to better performance; this part is purely theoretical. Based on the analysis, recommendations are made regarding the architecture and design of the data warehouse.
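As an illustration of the incremental-loading idea mentioned above, the sketch below uses a high-water-mark strategy over SQLite: only staging rows newer than the last successfully loaded timestamp are moved into the fact table. The table and column names are hypothetical and the real SCANIA environment is not reproduced.

```python
import sqlite3


def incremental_load(conn: sqlite3.Connection) -> int:
    """Load only rows newer than the recorded high-water mark into the fact table."""
    watermark = conn.execute(
        "SELECT COALESCE(MAX(loaded_until), '') FROM load_log").fetchone()[0]
    rows = conn.execute(
        "SELECT vehicle_id, recorded_at, engine_hours FROM staging_readings "
        "WHERE recorded_at > ? ORDER BY recorded_at", (watermark,)).fetchall()
    if rows:
        conn.executemany(
            "INSERT INTO fact_readings (vehicle_id, recorded_at, engine_hours) "
            "VALUES (?, ?, ?)", rows)
        conn.execute("INSERT INTO load_log (loaded_until) VALUES (?)", (rows[-1][1],))
        conn.commit()
    return len(rows)


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE staging_readings (vehicle_id, recorded_at TEXT, engine_hours REAL);
        CREATE TABLE fact_readings    (vehicle_id, recorded_at TEXT, engine_hours REAL);
        CREATE TABLE load_log         (loaded_until TEXT);
        INSERT INTO staging_readings VALUES
            (1, '2011-01-01', 100.0), (1, '2011-01-02', 108.5), (2, '2011-01-02', 54.0);
    """)
    print("first run loaded", incremental_load(conn), "rows")
    conn.execute("INSERT INTO staging_readings VALUES (2, '2011-01-03', 60.0)")
    print("second run loaded", incremental_load(conn), "rows")  # only the new row
```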
34

Siqueira, Thiago Luís Lopes. "The design of vague spatial data warehouses." Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Spatial data warehouses (SDW) and spatial online analytical processing (SOLAP) enhance decision making by enabling spatial analysis combined with multidimensional analytical queries. A SDW is an integrated and voluminous multidimensional database containing both conventional and spatial data. SOLAP allows querying SDWs with multidimensional queries that select spatial data satisfying a given topological relationship and that aggregate spatial data. Existing SDW and SOLAP applications mostly consider phenomena represented by spatial data with exact locations and sharp boundaries. They neglect the fact that spatial data may be affected by imperfections, such as spatial vagueness, which prevents distinguishing an object from its neighborhood. A vague spatial object does not have a precisely defined boundary and/or interior. Thus, it may have a broad boundary and a blurred interior, and it is composed of parts that certainly belong to it and parts that possibly belong to it. Although several real-world phenomena are characterized by spatial vagueness, no approach in the literature addresses both spatial vagueness and the design of SDWs, nor provides multidimensional analysis over vague spatial data. These shortcomings motivated this doctoral thesis, which addresses both vague spatial data warehouses (vague SDWs) and vague spatial online analytical processing (vague SOLAP). A vague SDW is a SDW that comprises vague spatial data, while vague SOLAP allows querying vague SDWs. The major contributions of this doctoral thesis are: (i) the Vague Spatial Cube (VSCube) conceptual model, which enables the creation of conceptual schemata for vague SDWs using data cubes; (ii) the Vague Spatial MultiDim (VSMultiDim) conceptual model, which enables the creation of conceptual schemata for vague SDWs using diagrams; (iii) guidelines for designing relational schemata and integrity constraints for vague SDWs, and for extending the SQL language to enable vague SOLAP; (iv) the Vague Spatial Bitmap Index (VSB-index), which improves the performance of query processing over vague SDWs. The applicability of these contributions is demonstrated in two applications in the agricultural domain, by creating conceptual schemata for vague SDWs, transforming these conceptual schemata into logical schemata for vague SDWs, and efficiently processing queries over vague SDWs.
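A common way to picture vague spatial data is the egg-yolk style representation: a kernel that certainly belongs to the object inside an extent that possibly belongs to it. The sketch below is only an illustration of that idea, not the thesis's VSCube model or VSB-index; it uses axis-aligned rectangles to return a three-valued answer to a window (intersection) query.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Rect:
    xmin: float; ymin: float; xmax: float; ymax: float

    def intersects(self, other: "Rect") -> bool:
        return not (self.xmax < other.xmin or other.xmax < self.xmin or
                    self.ymax < other.ymin or other.ymax < self.ymin)


@dataclass(frozen=True)
class VagueRegion:
    """Egg-yolk style representation: a kernel that certainly belongs to the
    region inside an extent that possibly belongs to it (kernel within extent)."""
    kernel: Rect   # certainly part of the region
    extent: Rect   # possibly part of the region (broad boundary included)


def vague_intersects(region: VagueRegion, query: Rect) -> str:
    """Three-valued answer to 'does the region intersect the query window?'."""
    if region.kernel.intersects(query):
        return "certainly"
    if region.extent.intersects(query):
        return "possibly"
    return "no"


if __name__ == "__main__":
    # Hypothetical crop field whose exact boundary is uncertain.
    field = VagueRegion(kernel=Rect(2, 2, 5, 5), extent=Rect(1, 1, 6, 6))
    for window in (Rect(4, 4, 7, 7), Rect(5.5, 5.5, 8, 8), Rect(8, 8, 9, 9)):
        print(window, "->", vague_intersects(field, window))
```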
35

Tan, Geok Leng. "Design issues in Trellis Coded Data Modulation." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wilke, Achim. "Data-processing devolopment in German design offices." Thesis, Brunel University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Hung, Edward (洪宜偉). "Data cube system design: an optimization problem." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Støtvig, Jeanett Gunneklev. "Censored Weibull Distributed Data in Experimental Design." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-24418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis gives an introduction to experimental design and investigates how four methods handle censored Weibull-distributed data: the quick-and-dirty method, the maximum likelihood method, single imputation and multiple imputation.
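Of the four methods, the maximum likelihood method is the easiest to sketch: observed failures contribute the log-density and right-censored observations contribute the log-survival function. The snippet below is an illustrative implementation with SciPy on synthetic data, not code from the thesis.

```python
import numpy as np
from scipy.optimize import minimize


def censored_weibull_mle(times, observed):
    """ML estimates (shape k, scale lam) for right-censored Weibull data.

    Observed failures contribute log f(t); censored observations contribute
    log S(t), the probability of surviving past the censoring time.
    """
    times = np.asarray(times, float)
    observed = np.asarray(observed, bool)

    def neg_loglik(params):
        log_k, log_lam = params          # optimize on the log scale to keep k, lam > 0
        k, lam = np.exp(log_k), np.exp(log_lam)
        z = (times / lam) ** k
        log_f = np.log(k / lam) + (k - 1) * np.log(times / lam) - z
        log_s = -z
        return -(np.sum(log_f[observed]) + np.sum(log_s[~observed]))

    res = minimize(neg_loglik, x0=[0.0, np.log(times.mean())], method="Nelder-Mead")
    return np.exp(res.x)


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    k_true, lam_true, c = 1.8, 10.0, 12.0        # c = fixed censoring time
    t = lam_true * rng.weibull(k_true, 500)
    observed = t <= c
    t = np.minimum(t, c)                          # censored values are recorded as c
    k_hat, lam_hat = censored_weibull_mle(t, observed)
    print(f"shape ≈ {k_hat:.2f} (true {k_true}), scale ≈ {lam_hat:.2f} (true {lam_true})")
```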
39

Edgar, John Alexander. "The design of a unified data model." Thesis, University of Aberdeen, 1986. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=185667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A unified data model is presented which offers a superset of the data modelling constructs and semantic integrity constraints of major existing data models. These semantic integrity constraints are both temporal and non-temporal, and are classified by constraint type (attribute, membership, set, temporal) and semantic integrity category (type, attribute value, intra-tuple, intra-class, inter-class). The unified data model has an onion-skin architecture comprising a DB state, DB state transition and temporal models, the realization of all three providing the facilities of a temporal DB. The DB state model is concerned with object-entities and the DB state transition model deals with event-entities and the non-destructive updating of data. A third species of entity is the rule. The temporal model conveys the times of object existence, event occurrence, retro-/post-active update, data error correction, the historical states of objects, and Conceptual Schema versions. Times are either instantaneous/durational time-points or time-intervals. Object and event classes are organized along the taxonomic axes of aggregation, association, categorization and generalization. Semantic integrity constraints and attribute inheritance are defined for each kind of data abstraction. A predicate logic oriented Conceptual Schema language is outlined for specifying class definitions, abstraction and transformation rules, and semantic integrity constraints. Higher-order abstraction classes are primarily defined in terms of the constraints for their lower-order, definitive classes. Transformation rules specify update dependencies between classes. Support is shown for the major features of the main semantic data models, and a token implementation is presented.
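One idea from the abstract, non-destructive updating with valid-time semantics, can be illustrated compactly: an "update" closes the current version of an attribute and appends a new one, so past states remain queryable. The sketch below illustrates only that single idea, not the full unified data model; the class and attribute names are invented.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass(frozen=True)
class AttributeVersion:
    """One temporal version of an attribute value; valid_to is None while current."""
    value: str
    valid_from: date
    valid_to: Optional[date] = None


class TemporalAttribute:
    """Append-only history: updates close the current version instead of overwriting it."""

    def __init__(self) -> None:
        self.history: List[AttributeVersion] = []

    def update(self, value: str, when: date) -> None:
        if self.history and self.history[-1].valid_to is None:
            last = self.history[-1]
            self.history[-1] = AttributeVersion(last.value, last.valid_from, when)
        self.history.append(AttributeVersion(value, when))

    def as_of(self, when: date) -> Optional[str]:
        for v in self.history:
            if v.valid_from <= when and (v.valid_to is None or when < v.valid_to):
                return v.value
        return None


if __name__ == "__main__":
    address = TemporalAttribute()
    address.update("Aberdeen", date(1984, 1, 1))
    address.update("Edinburgh", date(1986, 6, 1))
    print(address.as_of(date(1985, 3, 1)))   # Aberdeen
    print(address.as_of(date(1987, 1, 1)))   # Edinburgh
```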
40

King, Brent. "Automatic extraction of knowledge from design data." Thesis, University of Sunderland, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307964.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Hanson, James A. (James Andrew) 1976. "Improving process capability data access for design." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/88897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Schanzenberger, Anja. "System design for periodic data production management." Thesis, Middlesex University, 2006. http://eprints.mdx.ac.uk/10697/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This research project introduces a new type of information system, the periodic data production management system, and proposes several innovative system design concepts for this application area. Periodic data production systems are common in the information industry. They process large quantities of data in order to produce statistical reports at predefined intervals. The workflow of such a system is typically distributed world-wide and consists of several semi-computerized production steps which transform data packages. For example, market research companies apply these systems in order to sell marketing information over specified timelines. A lack of concepts for IT-aided management in this area has been identified. This thesis clearly defines the complex requirements of periodic data production management systems, which can be defined as IT support for planning, monitoring and controlling periodic data production processes. Their significant advantages are that the information industry is enabled to increase production performance and to ease (and speed up) the identification of production progress as well as the achievable optimisation potential, in order to pursue rationalisation goals. In addition, this thesis provides solutions for the generic problem of how to introduce such a management system on top of an unchangeable periodic data production system. Two promising system designs for periodic data production management are derived, analysed and compared in order to gain knowledge about appropriate concepts for this application area. Production planning systems are the metaphor model for the so-called closely coupled approach; the metaphor model for the loosely coupled approach is project management. The latter approach is prototyped as an application in the market research industry and used as a case study. The evaluation results are real-world experiences which demonstrate the efficiency of systems based on the loosely coupled approach; of particular note is a scenario-based evaluation that demonstrates the many improvements achievable with this approach. The main results are that production planning and process quality can be substantially improved. Finally, among other propositions, it is suggested that future work concentrate on the development of product lines for periodic data production management systems in order to increase their reuse.
43

Hughes, Simon. "Geohydrology data model design : South African boreholes." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/2799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (MSc (Geography and Environmental Studies))--University of Stellenbosch, 2005.
Since mechanised borehole drilling began in South Africa in the late 1800s, over 1 100 000 boreholes have been drilled. As the country's growing population and the perceived impacts of climate change increase pressure on surface water supplies, attention is turning to groundwater to meet the shortfall in water supply, which will mean even more drilling. Until the introduction of the Standard Descriptors for Boreholes, published in 2003, South Africa did not have a set of guidelines for capturing borehole information. This document provides a detailed description of the basic information requirements needed to describe and characterise the process of drilling, constructing, developing, managing and monitoring a borehole. However, it stands alone as a specification, with little or no implementation or interpretation to date. Following the development and publication of the ArcHydro data model for water resource management by the CRWR at the University of Texas at Austin, there has been a great deal of interest in object-oriented data modelling for natural resource data management. This thesis describes the use of an object-oriented data modelling approach with UML CASE tools to design a data model for South African boreholes, based on the Standard Descriptors for Boreholes. The data model was converted to a geodatabase schema and implemented in ArcGIS.
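The Standard Descriptors for Boreholes are not reproduced here; the sketch below only conveys the flavour of such a data model with a small, hypothetical subset of attributes expressed as Python dataclasses rather than as a UML or geodatabase schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class WaterStrike:
    depth_m: float
    yield_l_per_s: Optional[float] = None


@dataclass
class Borehole:
    """A small, hypothetical subset of the attributes a borehole data model
    might carry (identifier, position, construction, water strikes)."""
    site_id: str
    latitude: float
    longitude: float
    drill_date: str                 # ISO date string for simplicity
    total_depth_m: float
    casing_depth_m: Optional[float] = None
    water_strikes: List[WaterStrike] = field(default_factory=list)

    def deepest_strike(self) -> Optional[WaterStrike]:
        return max(self.water_strikes, key=lambda s: s.depth_m, default=None)


if __name__ == "__main__":
    bh = Borehole("WC-0001", -33.93, 18.86, "2004-08-17", 120.0, casing_depth_m=18.0,
                  water_strikes=[WaterStrike(42.0, 1.2), WaterStrike(96.5, 0.4)])
    strike = bh.deepest_strike()
    print(f"{bh.site_id}: deepest strike at {strike.depth_m} m")
```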
44

Cranley, Nikki, and Diarmuid Corry. "Design Considerations for Networked Data Acquisition Systems." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Ethernet technology offers numerous benefits for networked Flight Test Instrumentation (FTI) systems, such as increased data rates, flexibility, scalability and, most importantly, interoperability owing to the inherent standardization of interfaces, protocols and technology. However, the best-effort nature of Ethernet is in sharp contrast to the intrinsic determinism of traditional FTI systems. The challenge for network designers is to optimize the configuration of the Ethernet network to meet the data processing demands in terms of reliability and latency. This paper discusses the planning and design phases needed to investigate, analyze, fine-tune and optimize the network's performance.
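A first step in such planning is a back-of-the-envelope load estimate. The sketch below computes offered load and packet rate for an illustrative parameter set carried as UDP/IP over 100 Mbit/s Ethernet; all numbers are invented, and the header sizes are the usual Ethernet/IP/UDP values.

```python
# Back-of-the-envelope sizing for a networked FTI link (illustrative numbers,
# not figures from the paper): offered load and packet rate for a set of
# parameters sampled and shipped as UDP/IP over 100 Mbit/s Ethernet.
N_PARAMETERS = 2000                 # parameters being acquired
SAMPLE_RATE_HZ = 512                # samples per second per parameter
BYTES_PER_SAMPLE = 2
SAMPLES_PER_PACKET = 256            # payload samples aggregated into one packet
OVERHEAD_BYTES = 14 + 20 + 8 + 4    # Ethernet + IP + UDP headers + frame check sequence

payload_rate = N_PARAMETERS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE          # bytes/s
packets_per_s = N_PARAMETERS * SAMPLE_RATE_HZ / SAMPLES_PER_PACKET
wire_rate = payload_rate + packets_per_s * OVERHEAD_BYTES                # bytes/s
utilisation = 8 * wire_rate / 100e6

print(f"payload:      {8 * payload_rate / 1e6:6.2f} Mbit/s")
print(f"packet rate:  {packets_per_s:8.0f} packets/s")
print(f"on the wire:  {8 * wire_rate / 1e6:6.2f} Mbit/s "
      f"({utilisation:.0%} of a 100 Mbit/s link)")
```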
45

O'Shea, Timothy James. "Learning from Data in Radio Algorithm Design." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/89649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Algorithm design methods for radio communications systems are poised to undergo a massive disruption over the next several years. Today, such algorithms are typically designed manually using compact analytic problem models. However, they are shifting increasingly to machine learning based methods that use approximate models with high degrees of freedom, are jointly optimized over multiple subsystems, and use real-world data, which may have no simple compact probabilistic analytic form, to drive design. Over the past five years this change has already begun at a rapid pace in several fields. Computer vision led the way, demonstrating that low-level features and entire end-to-end systems could be learned directly from complex imagery datasets when a powerful collection of optimization methods, regularization methods, architecture strategies and efficient implementations was used to train large models with high degrees of freedom. Within this work, we demonstrate that this same class of end-to-end deep neural network based learning can be adapted effectively to physical-layer radio systems in order to optimize sensing, estimation and waveform synthesis systems and achieve state-of-the-art levels of performance in numerous applications. First, we discuss the background and the fundamental tools used; then we discuss effective strategies and approaches to model design and optimization. Finally, we explore a series of applications across estimation, sensing and waveform synthesis where we apply this approach to reformulate classical problems and illustrate the value and impact it can have on several key radio algorithm design problems.
Ph. D.
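A minimal example of the end-to-end learning idea is the "channel autoencoder": an encoder maps one of M messages onto n channel uses, a noise layer models an AWGN channel, and a decoder recovers the message. The sketch below uses Keras (assumed available); the layer sizes, noise level and training settings are illustrative only and do not come from the dissertation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers

# Minimal "channel autoencoder" sketch: encoder -> power normalisation ->
# AWGN channel -> decoder, trained end to end on random one-hot messages.
M, n = 16, 8                      # messages, real-valued channel uses
inputs = layers.Input(shape=(M,))
x = layers.Dense(M, activation="relu")(inputs)
x = layers.Dense(n, activation="linear")(x)
# Normalise to unit average power per block before the channel.
x = layers.Lambda(lambda v: tf.math.l2_normalize(v, axis=1) * np.sqrt(n))(x)
y = layers.GaussianNoise(stddev=0.3)(x)          # AWGN channel (noise applied during training only)
x = layers.Dense(M, activation="relu")(y)
outputs = layers.Dense(M, activation="softmax")(x)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="categorical_crossentropy",
                    metrics=["accuracy"])

msgs = np.eye(M)[np.random.randint(0, M, 20000)]  # random one-hot training messages
autoencoder.fit(msgs, msgs, epochs=5, batch_size=256, verbose=0)
loss, acc = autoencoder.evaluate(msgs[:2000], msgs[:2000], verbose=0)
print(f"block accuracy after a short training run: {acc:.3f}")
```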
46

Shiao, Grace. "Design and Implementation of Data Analysis Components." University of Akron / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=akron1143652311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Guo, Minzhe. "Algorithmic Mechanism Design for Data Replication Problems." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1470757536.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Fang, Wei Yi (魏儀方). "Data Transfer Block Design." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/74630649698468791271.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
Tamkang University
Department of Electrical Engineering, In-service Master's Program
Academic year 92 (ROC calendar, i.e. 2003–04)
This thesis focuses on the hardware implementation of an SD card using an FPGA, a microprocessor and flash memory. An SD card contains a controller and flash memory. The controller consists mainly of two parts: the SD host controller and the flash controller. The SD host controller has five units: a command unit, a response unit, a register unit, a data transfer unit, and an input/output unit. The objective of this thesis is to design the data transfer unit and the input/output unit. The design is realized and tested using Altera's Cyclone EP1C20F324C7 FPGA, and the test results show that the architecture is feasible.
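The thesis implements these units in hardware, which is not reproduced here. As a small software-side illustration of the SD bus framing such a controller must handle, the snippet below computes the 7-bit CRC that protects SD commands and checks it against the well-known CMD0 frame.

```python
def crc7(data: bytes) -> int:
    """Bit-serial CRC-7 used on SD bus commands (polynomial x^7 + x^3 + 1)."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            inbit = (byte >> i) & 1
            feedback = ((crc >> 6) & 1) ^ inbit
            crc = (crc << 1) & 0x7F
            if feedback:
                crc ^= 0x09          # the x^3 + 1 part of the generator
    return crc


if __name__ == "__main__":
    # CMD0 (GO_IDLE_STATE) with a zero argument: 0x40 00 00 00 00.
    cmd0 = bytes([0x40, 0x00, 0x00, 0x00, 0x00])
    crc = crc7(cmd0)
    last_byte = (crc << 1) | 1       # CRC shifted left, end bit = 1
    assert last_byte == 0x95         # the well-known final byte of the CMD0 frame
    print(f"CRC7 = 0x{crc:02X}, last command byte = 0x{last_byte:02X}")
```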
49

Tsai, Tzu-Chao (蔡子超). "Supporting Data Warehouse Design with Data Mining Approach." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/54821897552356791828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Information Management (Graduate Institute)
Academic year 89 (ROC calendar, i.e. 2000–01)
The traditional relational database model does not have enough capability to cope with very large volumes of data in a reasonable time. To address these requirements, data warehouses and online analytical processing (OLAP) have emerged. Data warehouses improve the productivity of corporate decision makers through consolidation, conversion, transformation and integration of operational data, and they support online analytical processing (OLAP). Data warehouse design is a complex and knowledge-intensive process. It needs to consider not only the structure of the underlying operational databases (source-driven), but also the information requirements of decision makers (user-driven). Past research focused predominantly on supporting the source-driven data warehouse design process and paid less attention to supporting the user-driven process. The goal of this research is therefore to propose a user-driven data warehouse design support system based on a knowledge discovery approach. Specifically, a Data Warehouse Design Support System is proposed in which generalization hierarchies and generalized star schemas serve as the data warehouse design knowledge, and techniques for learning this design knowledge and reasoning upon it are developed. An empirical evaluation study was conducted to validate the effectiveness of the proposed techniques in supporting the data warehouse design process. The results showed that the technique was useful in supporting data warehouse design, especially in reducing missing designs and enhancing potentially useful designs.
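As a toy illustration of design knowledge based on generalization hierarchies, the sketch below rolls detailed operational records up a simple concept hierarchy (city → region) with pandas; the hierarchy and the data are invented and this is not the system developed in the thesis.

```python
import pandas as pd

# Tiny illustration of using a concept (generalization) hierarchy to roll
# detailed operational data up to a candidate warehouse granularity.
city_to_region = {"Kaohsiung": "South", "Tainan": "South",
                  "Taipei": "North", "Hsinchu": "North"}

sales = pd.DataFrame({
    "city":    ["Kaohsiung", "Tainan", "Taipei", "Hsinchu", "Taipei"],
    "product": ["A", "A", "A", "B", "B"],
    "amount":  [120, 80, 200, 50, 75],
})

# Climb the hierarchy (city -> region), then aggregate the measure.
sales["region"] = sales["city"].map(city_to_region)
generalized = (sales.groupby(["region", "product"], as_index=False)["amount"]
                    .sum())
print(generalized)
```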
50

Gibson, Christopher Thomas. "Website optimization, design, and restructuring." 2005. http://etd.louisville.edu/data/UofL0116t2005.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.Eng.)--University of Louisville, 2005.
Title and description from thesis home page (viewed Jan. 30, 2007). Department of Computer Engineering and Computer Science. Vita. "December 2005." Includes bibliographical references (p. 100-102).

To the bibliography