Scientific literature on the topic "Creation of data models"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the topical lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Creation of data models."

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Creation of data models"

1

Parvinen, Petri, Essi Pöyry, Robin Gustafsson, Miikka Laitila, and Matti Rossi. "Advancing Data Monetization and the Creation of Data-based Business Models." Communications of the Association for Information Systems 47, no. 1 (October 1, 2020): 25–49. http://dx.doi.org/10.17705/1cais.04702.

2

Schadd, Maarten, Nico de Reus, Sander Uilkema, and Jeroen Voogd. "Data-driven Behavioural Modelling for Military Applications." Journal of Defence & Security Technologies 4, no. 1 (January 2022): 12–36. http://dx.doi.org/10.46713/jdst.004.02.

Abstract:
This article investigates the possibilities for creating behavioural models of military decision making in a data-driven manner. As not much data from actual operations is available, and data cannot easily be created in the military context, most approaches use simulators to learn behaviour. A simulator is however not always available or is difficult to create. This study focusses on the creation of behavioural models from data that was collected during a field exercise. As data in general is limited, noisy and erroneous, this makes the creation of realistic models challenging. Besides using the traditional approach of hand-crafting a model based on data, we investigate the emerging research area of imitation learning. One of its techniques, reward engineering, is applied to learn the behaviour of soldiers in an urban warfare operation. Basic, but realistic, soldier behaviour is learned, which lays the groundwork for more elaborate models in the future.
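The abstract above singles out reward engineering as the imitation-learning technique applied to the field-exercise data. Purely as an illustrative sketch of what a hand-shaped reward function can look like — the state fields, weights, and shaping terms below are invented for illustration and do not come from the paper:

from dataclasses import dataclass

@dataclass
class State:
    distance_to_objective: float  # metres to the current objective
    in_cover: bool                # whether the soldier is behind cover
    exposed: bool                 # whether the soldier is visible to a known threat

def shaped_reward(prev: State, new: State) -> float:
    # Invented shaping terms: reward progress, staying in cover, avoiding exposure.
    reward = 0.1 * (prev.distance_to_objective - new.distance_to_objective)
    reward += 0.5 if new.in_cover else 0.0
    reward -= 1.0 if new.exposed else 0.0
    return reward

print(shaped_reward(State(120.0, False, False), State(100.0, True, False)))  # 2.5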
3

Fisher, Nathan B., John C. Charbonneau, and Stephanie K. Hurst. "Rapid Creation of Three-Dimensional, Tactile Models from Crystallographic Data." Journal of Crystallography 2016 (August 14, 2016): 1–8. http://dx.doi.org/10.1155/2016/3054573.

Abstract:
A method for the conversion of crystallographic information framework (CIF) files to stereo lithographic data files suitable for printing on three-dimensional printers is presented. Crystallographic information framework or CIF files are capable of being manipulated in virtual space by a variety of computer programs, but their visual representations are limited to the two-dimensional surface of the computer screen. Tactile molecular models that demonstrate critical ideas, such as symmetry elements, play a critical role in enabling new students to fully visualize crystallographic concepts. In the past five years, major developments in three-dimensional printing have lowered the cost and complexity of these systems to a level where three-dimensional molecular models may be easily created provided that the data exists in a suitable format. Herein a method is described for the conversion of CIF file data using existing free software that allows for the rapid creation of inexpensive molecular models. This approach has numerous potential applications in basic research, education, visualization, and crystallography.
4

Saarijärvi, Hannu, Christian Grönroos, and Hannu Kuusela. "Reverse Use of Customer Data: Implications for Service-based Business Models." Journal of Services Marketing 28, no. 7 (October 7, 2014): 529–37. http://dx.doi.org/10.1108/jsm-05-2013-0111.

Abstract:
Purpose – The purpose of this study is to explore and analyze the implications of reverse use of customer data for service-based business models. In their quest for competitive advantage, firms traditionally use customer data as resources to redesign and develop new products and services or identify the most profitable customers. However, in the shift from a goods-dominant logic toward customer value creation, the potential of customer data for the benefit of the customer, not just the firm, is an emerging, underexplored area of research. Design/methodology/approach – Business model criteria and three service examples combine to uncover the implications of reverse use of customer data for service-based business models. Findings – Implications of reverse use of customer data for service-based business models are identified and explored. Through reverse use of customer data, a firm can provide customers with additional resources and support customers’ value-creating processes. Accordingly, the firm can move beyond traditional exchanges, take a broader role in supporting customers’ value creation and diversify the value created by the customer through resource integration. The attention shifts from internal to external customer data usage; customer data transform from the firm’s resource to the customer’s, which facilitates the firm’s shift from selling goods to supporting customers’ value creation. Originality/value – Reverse use of customer data represent a new emerging research phenomenon; their implications for service-based business models have not been explored.
5

Senderovich, Arik, Kyle E. C. Booth, and J. Christopher Beck. "Learning Scheduling Models from Event Data." Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 401–9. http://dx.doi.org/10.1609/icaps.v29i1.3504.

Abstract:
A significant challenge in declarative approaches to scheduling is the creation of a model: the set of resources and their capacities and the types of activities and their temporal and resource requirements. In practice, such models are developed manually by skilled consultants and used repeatedly to solve different problem instances. For example, in a factory, the model may be used each day to schedule the current customer orders. In this work, we aim to automate the creation of such models by learning them from event data. We introduce a novel methodology that combines process mining, timed Petri nets (TPNs), and constraint programming (CP). The approach learns a sub-class of TPN from event logs of executions of past schedules and maps the TPN to a broad class of scheduling problems. We show how any problem of the scheduling class can be converted to a CP model. With new instance data (e.g., the day’s orders), the CP model can then be solved by an off-the-shelf solver. Our approach provides an end-to-end solution, going from event logs to model-based optimal schedules. To demonstrate the value of the methodology we conduct experiments in which we learn and solve scheduling models from two types of data: logs generated from job-shop scheduling benchmarks and real-world event logs from an outpatient hospital.
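The end of the pipeline described above is a constraint-programming model handed to an off-the-shelf solver. As a hedged sketch of that final step only — the paper does not name a specific solver, so Google OR-Tools CP-SAT and the toy job durations below are assumptions — a minimal disjunctive scheduling model might look like this:

from ortools.sat.python import cp_model

durations = [3, 2, 4]  # invented job durations standing in for "the day's orders"
horizon = sum(durations)

model = cp_model.CpModel()
starts, ends, intervals = [], [], []
for i, d in enumerate(durations):
    start = model.NewIntVar(0, horizon, f"start_{i}")
    end = model.NewIntVar(0, horizon, f"end_{i}")
    intervals.append(model.NewIntervalVar(start, d, end, f"job_{i}"))
    starts.append(start)
    ends.append(end)

model.AddNoOverlap(intervals)  # one shared resource: jobs may not overlap
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(s) for s in starts], solver.Value(makespan))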
6

Hron, Vojtěch, and Lena Halounová. "Automatic Reconstruction of Roof Models from Building Outlines and Aerial Image Data." Acta Polytechnica 59, no. 5 (November 1, 2019): 448–57. http://dx.doi.org/10.14311/ap.2019.59.0448.

Abstract:
The knowledge of roof shapes is essential for the creation of 3D building models. Many experts and researchers use 3D building models for specialized tasks, such as creating noise maps, estimating the solar potential of roof structures, and planning new wireless infrastructures. Our aim is to introduce a technique for automating the creation of topologically correct roof building models using outlines and aerial image data. In this study, we used building footprints and vertical aerial survey photographs. Aerial survey photographs enabled us to produce an orthophoto and a digital surface model of the analysed area. The developed technique made it possible to detect roof edges from the orthophoto and to categorize the edges using spatial relationships and height information derived from the digital surface model. This method allows buildings with complicated shapes to be decomposed into simple parts that can be processed separately. In our study, a roof type and model were determined for each building part and tested with multiple datasets with different levels of quality. Excellent results were achieved for simple and medium complex roofs. Results for very complex roofs were unsatisfactory. For such structures, we propose using multitemporal images because these can lead to significant improvements and a better roof edge detection. The method used in this study was shared with the Czech national mapping agency and could be used for the creation of new 3D modelling products in the near future.
7

Milkau, Udo. "Value Creation within AI-enabled Data Platforms." Journal of Creating Value 5, no. 1 (October 30, 2018): 25–39. http://dx.doi.org/10.1177/2394964318803244.

Abstract:
With digitalization, new type of firms—the so-called business platforms—emerged as a central hub in two-sided markets. As business platforms do not ‘produce’ products or services, they represent a new model of value creation that raises the question about the core nature of a firm in the twenty-first century, when ‘data is the new oil’. At the end of the twentieth century, the concept of ‘value chains, value shops and value networks’ represented the latest development about internal value creation in a firm, but lacked any discussion about information technology (IT) or even ‘data as raw material’. This digital approach to monetarize aggregated data sets as internal core function of a firm needs more clarification, as value creation ‘without production’ is a shift of paradigm. This article starts with the concept of ‘value chains, value shops and value networks’, extends this to current IT and includes business platforms within an integrated framework of internal value creation in a firm. Based on this framework and the current development of leading-edge artificial intelligence (AI), this framework is applied to forecast the development towards ‘AI-enabled data platforms’, which are not covered by traditional economic theories. This article calls for more research to clarify the impact of such data-based business models compared to production-based models.
8

Abrukov, S. V., E. V. Karlovich, V. N. Afanasyev, Yu V. Semenov, and Victor S. Abrukov. "Creation of Propellant Combustion Models by Means of Data Mining Tools." International Journal of Energetic Materials and Chemical Propulsion 9, no. 5 (2010): 385–96. http://dx.doi.org/10.1615/intjenergeticmaterialschemprop.2011001405.

9

Vycital, Miroslav, and Cenek Jarský. "An Automated nD Model Creation on BIM Models." Organization, Technology and Management in Construction: An International Journal 12, no. 1 (June 22, 2020): 2218–31. http://dx.doi.org/10.2478/otmcj-2020-0018.

Abstract:
The construction technology (CONTEC) method was originally developed for automated CONTEC planning and project management based on the data in the form of a budget or bill of quantities. This article outlines a new approach in an automated creation of the discrete nD building information modeling (BIM) models by using data from the BIM model and their processing by existing CONTEC method through the CONTEC software. This article outlines the discrete modeling approach on BIM models as one of the applicable approaches for nD modeling. It also defines the methodology of interlinking BIM model data and CONTEC software through the classification of items. The interlink enables automation in the production of discrete nD BIM model data, such as schedule (4D) including work distribution and resource planning, budget (5D)—based on integrated pricing system, but also nD data such as health and safety risks (6D) plans (H&S Risk register), quality plans, and quality assurance checklists (7D) including their monitoring and environmental plans (8D). The methodology of the direct application of the selected classification system, as well as means of data transfer and conditions of data transferability, is described. The method was tested on the case study of an office building project, and acquired data were compared to actual construction time and costs. The case study proves the application of the CONTEC method as a usable method in the BIM model environment, enabling the creation of not only 4D, 5D models but also nD discrete models up to 8D models in the perception of the construction management process. In comparison with the existing BIM classification systems, further development of the method will enable fully automated discrete nD model creation in the BIM model environment.
10

Elvas, Luís B., João C. Ferreira, Miguel Sales Dias, and Luís Brás Rosário. "Health Data Sharing towards Knowledge Creation." Systems 11, no. 8 (August 21, 2023): 435. http://dx.doi.org/10.3390/systems11080435.

Abstract:
Data sharing and service reuse in the health sector pose significant privacy and security challenges. The European Commission recognizes health data as a unique and cost-effective resource for research, while the OECD emphasizes the need for privacy-protecting data governance systems. In this paper, we propose a novel approach to health data access in a hospital environment, leveraging homomorphic encryption to ensure privacy and secure sharing of medical data among healthcare entities. Our framework establishes a secure environment that enforces GDPR adoption. We present an Information Sharing Infrastructure (ISI) framework that seamlessly integrates artificial intelligence (AI) capabilities for data analysis. Through our implementation, we demonstrate the ease of applying AI algorithms to treated health data within the ISI environment. Evaluating machine learning models, we achieve high accuracies of 96.88% with logistic regression and 97.62% with random forest. To address privacy concerns, our framework incorporates Data Sharing Agreements (DSAs). Data producers and consumers (prosumers) have the flexibility to express their preferences for sharing and analytics operations. Data-centric policy enforcement mechanisms ensure compliance and privacy preservation. In summary, our comprehensive framework combines homomorphic encryption, secure data sharing, and AI-driven analytics. By fostering collaboration and knowledge creation in a secure environment, our approach contributes to the advancement of medical research and improves healthcare outcomes. A real case application was implemented between Portuguese hospitals and universities for this data sharing.
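As a hedged illustration of the kind of model evaluation quoted above (96.88% for logistic regression, 97.62% for random forest), the sketch below trains and scores both classifiers with scikit-learn; the library choice and the synthetic stand-in data are assumptions, not the authors' ISI pipeline:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for treated health data; not the study's dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=200)):
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(type(model).__name__, f"accuracy: {accuracy:.4f}")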

Theses on the topic "Creation of data models"

1

Wojatzki, Michael Maximilian. "Computer-assisted Understanding of Stance in Social Media: Formalizations, Data Creation, and Prediction Models." Supervised by Torsten Zesch. Duisburg, 2019. http://d-nb.info/1177681471/34.

2

Stienmetz, Jason Lee. "Foundations for a Network Model of Destination Value Creation." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/419874.

Abstract:
Tourism and Sport
Ph.D.
Previous research has demonstrated that a network model of destination value creation (i.e. the Destination Value System model) based on the flows of travelers within a destination can be used to estimate and predict individual attractions’ marginal contributions to total visitor expenditures. While development to date of the Destination Value System (DVS) has focused on the value created from dyadic relationships within the destination network, previous research supports the proposition that system-level network structures significantly influence the total value created within a destination. This study, therefore, builds upon previous DVS research in order to determine the relationships between system-level network structures and total value creation within a destination. To answer this question econometric analysis of panel data covering 43 Florida destinations over the period from 2007 to 2015 was conducted. The panel data was created utilizing volunteered geographic information (VGI) obtained from 4.6 million photographs shared on Flickr. Results of econometric analysis indicate that both seasonal effects and DVS network structures have statistically significant relationships with total tourism-related sales within a destination. Specifically, network density, network out-degree centralization, and network global clustering coefficient are found to have negative and statistically significant effects on destination value creation, while network in-degree centralization, network betweenness centralization, and network subcommunity count are found to have positive and statistically significant effects. Quarterly seasonality is also found to have dynamic and statistically significant effects on total tourism-related sales within a destination. Based on the network structures of destinations and total tourism related sales within destinations, this study also uses k-means cluster analysis to classify tourism destinations into a taxonomy of six different system types (Exploration, Involvement, Development I, Development II, Consolidation, and Stars). This taxonomy of DVS types is found to correspond to Butler’s (1980) conceptualization of the destination life cycle, and additional data visualization and exploration based on the DVS taxonomy finds distinct characteristics in destination structure, dynamics, evolution, and performance that may be useful for benchmarking. Additionally, this study assesses the quality of VGI data for tourism related research by comparing DVS network structures based on Flickr data and visitor intercept survey data. Support for the use of VGI data is found, provided that thousands of observations are available for analysis. When fewer observations are available, aggregation techniques are recommended in order to improve the quality of overall destination network system quantification. This research makes important contributions to both the academic literature and the practical management of destinations by demonstrating that DVS network structures significantly influence the economic value created within the destination, and thus suggests that a strategic network management approach is needed for the governance of competitive destinations. As a result, this study provides a strong foundation for the DVS model and future research in the areas of destination resiliency, “smarter” destination management, and tourism experience design.
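The dissertation abstract above relies on graph-level network metrics (density, centralization, clustering) plus k-means clustering of destinations into system types. A rough sketch of how such metrics can be computed and clustered is shown below; networkx, scikit-learn, the toy flow network, and the random stand-in feature matrix are all assumptions, not the author's Flickr-based pipeline:

import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

# Invented traveler flows between attractions of one destination.
flows = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("D", "A")]
G = nx.DiGraph(flows)

density = nx.density(G)
global_clustering = nx.transitivity(G.to_undirected())  # global clustering coefficient
betweenness_spread = np.std(list(nx.betweenness_centrality(G).values()))
print(density, global_clustering, betweenness_spread)

# Stand-in feature matrix for 43 destinations, clustered into six system types.
features = np.random.default_rng(0).random((43, 3))
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
print(labels)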
Temple University--Theses
3

Khadgi, Vinaya, and Tianyi Wang. "Automatic Creation of Researcher’s Competence Profiles Based on Semantic Integration of Heterogeneous Data Sources." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsområde Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-21904.

Abstract:
Research journals and publications are a great source of knowledge produced by virtue of the hard work done by researchers. Several digital libraries have been maintaining the records of such research publications in order for general people and other researchers to find and study the previous work done in the research field they are interested in. In order to make the search criteria effective and easier, all of these digital libraries keep a record/database to store the meta-data of the publications. These meta-data records are generally well designed to keep the vital records of the publications/articles, which have the potential to give information about the researchers, their research activities, and hence the competence profile. This thesis work is a study and search of a method for building the competence profile of researchers based on the records of their publications in the well-known digital libraries. The publications of researchers are published in different publication houses, so, in order to make a complete profile, the data from several of these heterogeneous digital library sources have to be integrated semantically. Several of the semantic technologies were studied in order to investigate the challenges of integrating the heterogeneous sources and modeling the researchers' competence profile. An approach of on-demand profile creation was chosen where the user of the system could enter some basic name details of the researcher whose profile is to be created. In this thesis work, Design Science Research methodology was used as the research method and, to complement this research method with a working artifact, Scrum, an agile software development methodology, was used to develop a competence profile system as proof of concept.
4

Chawinga, Winner Dominic. "Research Data Management in Public Universities in Malawi." University of the Western Cape, 2019. http://hdl.handle.net/11394/6951.

Abstract:
Philosophiae Doctor - PhD
The emergence and subsequent uptake of Information and Communication Technologies has transformed the research processes in universities and research institutions across the globe. One indelible impact of Information and Communication Technologies on the research process is the increased generation of research data in digital format. This study investigated how research data has been generated, organised, shared, stored, preserved, accessed and re-used in Malawian public universities with a view to proposing a framework for research data management in universities in Malawi. The objectives of the study were: to determine research data creation, sharing and re-use practices in public universities in Malawi; to investigate research data preservation practices in public universities in Malawi; to investigate the competencies that librarians and researchers need to effectively manage research data; and to find out the challenges that affect the management of research data in public universities in Malawi. Apart from being guided by the Community Capability Model Framework (Lyon, Ball, Duke & Day, 2011) and Data Curation Centre Lifecycle Model (Higgins, 2008), the study was inspired by the pragmatic school of thought which is the basis for mixed methods research enabling the collection of quantitative and qualitative data from two purposively selected universities. A census was used to identify researchers and librarians while purposive sampling was used to identify directors of research. Questionnaires were used to collect mostly quantitative and some qualitative data from 36 librarians and 187 researchers while interviews were conducted with directors of research. The Statistical Package for the Social Sciences was used to analyse the quantitative data by producing percentages, means, independent samples t-test and one-way analysis of variance. Thematic analysis was used to analyse the qualitative data.
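The quantitative analyses named above (percentages, means, independent samples t-test, one-way ANOVA) were run in SPSS; purely as an illustration of the statistics involved, the sketch below reproduces the same two tests with SciPy on invented Likert-style scores:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
librarians = rng.normal(3.4, 0.6, 36)    # invented Likert-style scores, n = 36
researchers = rng.normal(3.1, 0.7, 187)  # invented scores, n = 187

t_stat, t_p = stats.ttest_ind(librarians, researchers, equal_var=False)  # independent samples t-test

group_a, group_b, group_c = researchers[:60], researchers[60:120], researchers[120:]
f_stat, f_p = stats.f_oneway(group_a, group_b, group_c)  # one-way ANOVA

print(f"t = {t_stat:.2f}, p = {t_p:.3f}; F = {f_stat:.2f}, p = {f_p:.3f}")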
5

Puerto, Valencia J. (Jose). "Predictive Model Creation Approach Using Layered Subsystems Quantified Data Collection from LTE L2 Software System." Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201907192705.

Abstract:
The road-map to a continuous and efficient complex software system's improvement process has multiple stages and many interrelated on-going transformations, these being direct responses to its always evolving environment. The system's scalability in these on-going transformations depends, to a great extent, on the prediction of resource consumption and systematic emergent properties, implying that, as systems grow bigger in size and complexity, their predictability decreases in accuracy. A predictive model is used to address the inherent complexity growth and to increase the predictability of a complex system's performance. The model creation process is driven by the collection of quantified data from different layers of the Long-term Evolution (LTE) Data-layer (L2) software system. The creation of such a model is possible due to the multiple system analysis tools Nokia has already implemented, allowing a multiple-layer data gathering flow. The process starts with, first, stating the differences between the system layers; second, the use of a layered benchmark approach for data collection at the different levels; third, the design of a process flow organizing the data transformations from collection, filtering and pre-processing to visualization; and fourth, as a proof of concept, a comparison of different Performance Measurement (PM) predictive models trained on the collected, pre-processed data. In parallel to the model creation process, the thesis explores and compares various data visualization techniques that address the non-trivial graphical representation of the relations between the subsystems' data. Finally, the current results of the model creation process are presented and discussed. The models were able to explain 54% and 67% of the variance in the two test configurations used in the instantiation of the model creation process proposed in this thesis.
6

Fadul, Waad. "Data-Driven Health Services: An Empirical Investigation on the Role of Artificial Intelligence and Data Network Effects in Value Creation." Thesis, Uppsala universitet, Informationssystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447507.

Abstract:
The purpose of this study is to produce new knowledge concerning the perceived user’s value generated using machine learning technologies that activate data network effects factors that create value through various business model themes. The data network effects theory represents a set of factors that increase the user’s perceived value for a platform that uses artificial intelligence capabilities. The study followed an abductive research approach where initially found facts were matched against the data network effects theory to be put in context and understood. The study’s data was gathered through semi-structured interviews with experts who were active within the research area and chosen based on their practical experience and their role in the digitization of the healthcare sector. The results show that three out of six factors were fully realized contributing to value creation while two of the factors showed to be partially realized in order to contribute to value creation and that is justified by the exclusion of users' perspectives in the scope of the research. Lastly, only one factor has limited contribution to the value creation due to the heavy regulations limiting its realization in the health sector. It is concluded that data network effects moderators contributed differently in the activation of various business model themes for value creation in a general manner where further studies should apply the theory in the assessment of one specific AI health offering to take full advantage of the theory potential. The theoretical implications showed that the data network factors may not necessarily be equally activated to contribute to value creation which was not initially highlighted by the theory. Additionally, the practical implications of the study’s results may help managers in their decision-making process on which factors to be activated for which business model theme.
7

Taherifard, Ershad. "Open Data in Swedish Municipalities? Value Creation and Innovation in Local Public Sector Organizations." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299877.

Abstract:
Digital transformation is highlighted as a way of solving many of the problems and challenges that the public sector faces in terms of cost developments and increased demands for better public services. Open data is a strategic resource that is necessary for this development to take place, and the municipal sector collects information that could be published to create value in many stages. Previous research holds that economic value is generated through new innovative services and productivity gains, but also social values such as increased civic participation and transparency. But despite previous attempts to stimulate open data, Sweden is far behind comparable countries and there is a lack of research that looks at exactly how these economic values should be captured. To investigate why this is the case and what role open data has in value creation in the municipal sector, this study has identified several themes through qualitative interviews with an inductive approach. The study resulted in a deeper theoretical analysis of open data and its properties. By considering it as a public good, it is possible to use several explanatory models to explain its slow spread but also to understand the difficult conditions for value capture, which result in incentive problems. In addition, there are structural problems linked to legislation and infrastructure that hamper the dissemination of open data and its value-creating role in the municipal sector.
8

Huber, Peter, Harald Oberhofer, and Michael Pfaffermayr. "Who Creates Jobs? Econometric Modeling and Evidence for Austrian Firm Level Data." WU Vienna University of Economics and Business, 2015. http://epub.wu.ac.at/4650/1/wp205.pdf.

Abstract:
This paper offers an empirical analysis of net job creation patterns at the firm level for the Austrian economy between 1993 and 2013 focusing on the impact of firm size and age. We propose a new estimation strategy based on a two-part model. This allows to identify the structural parameters of interest and to decompose behavioral differences between exiting and surviving firms. Our findings suggest that conditional on survival, young Austrian firms experience the largest net job creation rates. Differences in firm size are not able to explain variation in net job creation rates among the group of continuing enterprises. Job destruction induced by market exit, however, is largest among the young and small firms with this effect being even more pronounced during the times of the Great Recession. In order to formulate sensible policy recommendations, a separate treatment of continuing versus exiting firms as proposed by the new two-part model estimation approach seems crucial.(authors' abstract)
Series: Department of Economics Working Paper Series
9

Kenjangada, Kariappa Ganapathy, and Marcus Bjersér. "Value as a Motivating Factor for Collaboration: The Case of a Collaborative Network for Wind Asset Owners for Potential Big Data Sharing." Thesis, Högskolan i Halmstad, Centrum för innovations-, entreprenörskaps- och lärandeforskning (CIEL), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-40699.

Abstract:
The world's need for energy is increasing while we realize the consequences of existing unsustainable methods for energy production. Wind power is a potential partial solution, but it is a relatively new source of energy. Advances in technology and innovation can be one solution, but the wind energy industry is embracing them too slow due to, among other reasons, lack of incentives in terms of the added value provided. Collaboration and big data may possibly provide a key to overcome this. However, to our knowledge, this research area has received little attention, especially in the context of the wind energy industry.   The purpose of this study is to explore value as a motivating factor for potential big data collaboration via a collaborative network. This will be explored within the context of big data collaboration, and the collaborative network for wind asset owners O2O WIND International. A cross sectional, multi-method qualitative single in-depth case study is conducted. The data collected and analyzed is based on four semi-structured interviews and a set of rich documentary secondary data on the 25 of the participants in the collaborative network in the form of 3866 pages and 124 web pages visited.  The main findings are as follows. The 25 participants of the collaborative network were evaluated and their approach to three different types of value were visualized through a novel model: A three-dimensional value approach space. From this visualization clusters of participants resulting in 6 different approaches to value can be distinguished amongst the 25 participants.  Furthermore, 14 different categories of value as the participants express are possible to create through the collaborative network has been identified. These values have been categorized based on fundamental types of value, their dimensions and four value processes. As well as analyzed for patterns and similarities amongst them. The classification results in a unique categorization of participants of a collaborative network. These categories prove as customer  segments that the focal firm of the collaborative network can target.  The interviews resulted in insights about the current state of the industry, existing and future market problems and needs as well as existing and future market opportunities. Then possible business model implications originating from our findings, for the focal firm behind the collaborative network O2O WIND International as well as the participants of the collaboration, has been discussed. We conclude that big data and collaborative networks has potential for value creation in the wind power sector, if the business model of those involved takes it into account. However, more future research is necessary, and suggestions are made.
10

Vieira, Fábio Danilo. "Modelos baseados em técnicas de mineração de dados para suporte à certificação racial de ovinos" [Models based on data mining techniques to support breed certification of sheep]. [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/257128.

Abstract:
Advisors: Stanley Robson de Medeiros Oliveira, Samuel Rezende Paiva
Master's dissertation – Universidade Estadual de Campinas, Faculdade de Engenharia Agrícola
Abstract: The locally adapted breeds of sheep are descended from animals brought in during the colonial period, and for years were subjected to indiscriminate crossbreeding with exotic breeds. These breeds of sheep are considered important by having adaptive characteristics to several Brazilian environmental conditions. To avoid the loss of this important genetic material, the Brazilian Agricultural Research Corporation (Embrapa) decided to include them in its Programme of Research in Genetic Resources, storing them in their genebanks, while those with greater national prominence are Creole breeds, Morada Nova and Santa Ines. The selection of sheep to compose these banks is performed through the evaluation of morphological and productive characteristics. However, this assessment is subject to failures, because some crossbred maintains similar characteristics to those of the local animals. Thus, identifying if the animals deposited in banks belong or not to a breed is a challenging task. In search for solutions in recent years there has been a significant increase in the use of technologies that use molecular markers SNP (Single Nucleotide Polimorphism). However, the large number of markers generated, which can reach hundreds of thousands per animal, becomes a crucial issue. To address this problem, the aim of this study is to develop models based on data mining techniques to select the main SNP markers for Creole, Morada Nova and Santa Ines breeds. The data used in this study were obtained from the International Consortium of Sheep and consist of 72 animals e of these three breeds and 49,034 SNP markers for each sheep. The result obtained with this study was a set of predictive models based on data mining techniques to selected major SNP markers to identify the breeds studied. The intersection of the generated models identified a subset of 15 markers, with greater potential for identification of sheep breeds. The models may be used for certification of sheep breeds already deposited in genebanks and new animals to be included, apart from subsidizing breeders associations interested in certifying their animals, as well as MAPA (Ministry of Agriculture, Livestock and Food Supply) in control registered animals. The proposed models can be extended to other species of production animals
Master's
Planejamento e Desenvolvimento Rural Sustentável
Master in Agricultural Engineering

Books on the topic "Creation of data models"

1

Hughes, Barry. International Futures: Choices in the Creation of a New World Order. 2nd ed. Boulder, Colo.: Westview Press, 1996.

2

Hughes, Barry. International Futures: Choices in the Creation of a New World Order. Boulder, Colo.: Westview Press, 1993.

3

Rodríguez Bolívar, Manuel Pedro, Kelvin Joseph Bwalya, and Christopher G. Reddick, eds. Governance Models for Creating Public Value in Open Data Initiatives. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14446-3.

4

Hughes, Barry. International Futures: Choices in the Creation of a New World Order. 2nd ed. Boulder, Colo.: Westview Press, 1996.

5

Grigorev, Anatoliy. Methods and Algorithms of Data Processing. [In Russian.] INFRA-M Academic Publishing LLC, 2017. http://dx.doi.org/10.12737/22119.

Abstract:
This manual considers methods and algorithms of data processing and the sequence of steps for processing and analysing data in order to build a behaviour model of an object, taking into account all components of its mathematical model. It describes the types of software and hardware methods used to solve tasks in this area, and covers algorithms for distributions and time-series regressions and their transformation into mathematical models and forecasts of the behaviour of information and economic systems (objects). The book conforms to the requirements of the latest generation of the Federal State Educational Standard of Higher Education and is intended for students of economic specialties, specialists, and graduate students.
6

Kaufmann, Manuel. Dreaming Data: Aspekte der Ästhetik, Originalität und Autorschaft in der künstlichen Kreativität [Dreaming Data: Aspects of aesthetics, originality, and authorship in artificial creativity]. Zürich: Chronos, 2022.

7

Boger, Dan C. On the Feasibility of Creating a Comparable Database for Nonrecurring Cost Analysis under Dual Source Competition. Monterey, Calif.: Naval Postgraduate School, 1987.

8

Kyper, Eric. Improved Decision-making in Data Mining: A Heuristic Rule Induction Approach to Decision Tree Creation and Model Selection. Saarbrücken: VDM Verlag Dr. Müller, 2008.

9

Varlamov, Oleg. Mivar Databases and Rules. [In Russian.] INFRA-M Academic Publishing LLC, 2021. http://dx.doi.org/10.12737/1508665.

Abstract:
The multidimensional open epistemological active network MOGAN is the basis for the transition to a qualitatively new level of creating logical artificial intelligence. Mivar databases and rules became the foundation for the creation of MOGAN. The results of the analysis and generalization of data representation structures of various data models are presented: from relational to "Entity — Relationship" (ER-model). On the basis of this generalization, a new model of data and rules is created: the mivar information space "Thing-Property-Relation". The logic-computational processing of data in this new model of data and rules is shown, which has linear computational complexity relative to the number of rules. MOGAN is a development of Rule - Based Systems and allows you to quickly and easily design algorithms and work with logical reasoning in the "If..., Then..." format. An example of creating a mivar expert system for solving problems in the model area "Geometry"is given. Mivar databases and rules can be used to model cause-and-effect relationships in different subject areas and to create knowledge bases of new-generation applied artificial intelligence systems and real-time mivar expert systems with the transition to"Big Knowledge". The textbook in the field of training "Computer Science and Computer Engineering" is intended for students, bachelors, undergraduates, postgraduates studying artificial intelligence methods used in information processing and management systems, as well as for users and specialists who create mivar knowledge models, expert systems, automated control systems and decision support systems. Keywords: cybernetics, artificial intelligence, mivar, mivar networks, databases, data models, expert system, intelligent systems, multidimensional open epistemological active network, MOGAN, MIPRA, KESMI, Wi!Mi, Razumator, knowledge bases, knowledge graphs, knowledge networks, Big knowledge, products, logical inference, decision support systems, decision-making systems, autonomous robots, recommendation systems, universal knowledge tools, expert system designers, logical artificial intelligence.
10

Winkelmann, Rainer. Count Data Models. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/978-3-662-21735-1.


Book chapters on the topic "Creation of data models"

1

Šprogar, Matej, Peter Kokol, Milan Zorman, Vili Podgorelec, Lenka Lhotska, and Jiří Klema. "Notes on Medical Decision Model Creation." In Medical Data Analysis, 270–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45497-7_41.

2

Bressoud, Thomas, and David White. "Relational Model: Design, Constraints, and Creation." In Introduction to Data Systems, 425–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54371-6_14.

3

Jugulum, Rajesh. "Creating and Analyzing Models." In Common Data Sense for Professionals, 49–65. New York: Productivity Press, 2021. http://dx.doi.org/10.4324/9781003165279-4.

4

Congelio, Bradley J. "Advanced Model Creation with NFL Data." In Introduction to NFL Analytics with R, 236–326. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003364320-5.

5

Voss, Vivien, K. Heinke Schlünzen, and David Grawe. "AtMoDat (Atmospheric Model Data)—Creation of a Model Standard for Obstacle Resolving Models." In Springer Proceedings in Complexity, 331–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2021. http://dx.doi.org/10.1007/978-3-662-63760-9_48.

6

Bassett, Debra J. "Losing the Data of the Dead and Expanding Existing Models of Bereavement." In The Creation and Inheritance of Digital Afterlives, 123–45. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-91684-8_6.

7

Patir, Rupam, Shubham Singhal, C. Anantaram, and Vikram Goyal. "Interpretability of Black Box Models Through Data-View Extraction and Shadow Model Creation." In Communications in Computer and Information Science, 378–85. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63823-8_44.

8

Ahle, Ulrich, and Juan Jose Hierro. "FIWARE for Data Spaces." In Designing Data Spaces, 395–417. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93975-5_24.

Abstract:
This chapter describes how smart applications from multiple domains can participate in the creation of data spaces based on FIWARE software building blocks. Smart applications participating in such data spaces share digital twin data in real time using a common standard API like NGSI-LD and relying on standard data models. Each smart solution contributes to build a complete digital twin data representation of the real world sharing their data. At the same time, they can exploit data shared by other applications. Relying on FIWARE Data Marketplace components, smart applications can publish data under concrete terms and conditions which include pricing or data usage/access policies. A federated cloud infrastructure and mechanisms supporting data sovereignty are necessary to create data spaces. However, additional elements have to be added to ease the creation of data value chains and the materialization of a data economy. Standard APIs, combined with standard data models, are crucial to support effective data exchange enabling loose coupling between parties as well as reusability and replaceability of data resources and applications. Similarly, data spaces need to incorporate mechanisms for publication, discovery, and trading of data resources. These are elements that FIWARE implements, and they can be combined with IDSA architecture elements like the IDS Connector to create data spaces supporting trusted and effective data sharing. The GAIA-X project, started in 2020, is aimed at creating a federated form of data infrastructure in Europe which strengthens the ability to both access and share data securely and confidently. FIWARE is bringing mature technologies, compatible with IDS and CEF Building Blocks, which will accelerate the delivery of GAIA-X to the market.
9

Sikora, Marek. "Rule Quality Measures in Creation and Reduction of Data Rule Models." In Rough Sets and Current Trends in Computing, 716–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11908029_74.

10

Wilfling, Sandra. "Augmenting Explainable Data-Driven Models in Energy Systems: A Python Framework for Feature Engineering." In Machine Learning for Cyber-Physical Systems, 121–29. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-47062-2_12.

Abstract:
Data-driven modeling is an approach in energy systems modeling that has been gaining popularity. In data-driven modeling, machine learning methods such as linear regression, neural networks or decision-tree based methods are applied. While these methods do not require domain knowledge, they are sensitive to data quality. Therefore, improving data quality in a dataset is beneficial for creating machine learning-based models. The improvement of data quality can be implemented through preprocessing methods. A selected type of preprocessing is feature engineering, which focuses on evaluating and improving the quality of certain features inside the dataset. Feature engineering includes methods such as feature creation, feature expansion, or feature selection. In this work, a Python framework containing different feature engineering methods is presented. This framework contains different methods for feature creation, expansion and selection; in addition, methods for transforming or filtering data are implemented. The implementation of the framework is based on the Python library scikit-learn. The framework is demonstrated on a use case from energy demand prediction. A data-driven model is created including selected feature engineering methods. The results show an improvement in prediction accuracy through the engineered features.
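The chapter abstract above describes a scikit-learn-based framework for feature expansion and selection in energy demand prediction. The sketch below shows the general pattern such a pipeline can follow; the specific pipeline layout and the synthetic regression data are assumptions, not the chapter's actual framework:

from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in for an energy-demand dataset; not the chapter's use case.
X, y = make_regression(n_samples=500, n_features=6, noise=5.0, random_state=0)

pipeline = Pipeline([
    ("expand", PolynomialFeatures(degree=2, include_bias=False)),  # feature expansion
    ("select", SelectKBest(score_func=f_regression, k=10)),        # feature selection
    ("model", LinearRegression()),                                 # data-driven model
])
pipeline.fit(X, y)
print("R^2 on training data:", round(pipeline.score(X, y), 3))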

Conference papers on the topic "Creation of data models"

1

Danilova, Tatyana V., Alexey O. Manturov, Gleb O. Mareev, Oleg V. Mareev, and Innokentiy K. Alaytsev. "Creation of Anatomical Models from CT Data." In Saratov Fall Meeting 2017: Fifth International Symposium on Optics and Biophotonics: Laser Physics and Photonics XIX; Computational Biophysics and Analysis of Biomedical Data IV, edited by Vladimir L. Derbov and Dmitry E. Postnov. SPIE, 2018. http://dx.doi.org/10.1117/12.2309318.

2

Schmuck, Matthias, and Mircea Georgescu. "Enabling Data Value Creation with Data Governance: A Success Measurement Model." In 12th International Conference on Software Engineering & Trends. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140816.

Abstract:
This paper deals with measuring the success of Data Governance as an information system in corporate environments. Evaluating the success of information systems is an important but controversial topic for corporate management. It is not easy to measure the success or effectiveness of information systems in order to justify past and future investments. Many models have been developed by researchers to support the fulfilment of both tasks. The main objective of our work as ongoing research is to examine and review the most important models of information systems success. We have compared these models and discussed their relevance to the field of Data Governance. Key findings are the adapted and supplemented success factors for Data Governance and a preliminary model for measuring Data Governance based on DeLone and McLean's Information Systems Success Measurement Model.
3

Takkand, G. V., and S. V. Lyagushov. "Problems of Creation of Petrophysical Models When Integrating 3D Seismic Data." In 6th Saint Petersburg International Conference and Exhibition. Netherlands: EAGE Publications BV, 2014. http://dx.doi.org/10.3997/2214-4609.20140170.

4

Martyshko, Petr. "Density Earth's Crust Models Creation Using Gravity and Seismic Data." In 18th International Multidisciplinary Scientific GeoConference SGEM2018. Stef92 Technology, 2018. http://dx.doi.org/10.5593/sgem2018/1.1/s05.094.

5

Cianciosa, Mark, Richard Archibald, Wael Elwasif, Ana Gainaru, Jin Myung Park, and Ross Whitfield. "Adaptive Generation of Training Data for ML Reduced Model Creation." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020884.

6

Adan, Antonio, Xuehan Xiong, Burcu Akinci, and Daniel Huber. "Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data." In 28th International Symposium on Automation and Robotics in Construction. International Association for Automation and Robotics in Construction (IAARC), 2011. http://dx.doi.org/10.22260/isarc2011/0061.

7

Anuyah, Sydney, and Sunandan Chakraborty. "Can Deep Learning Large Language Models Be Used to Unravel Knowledge Graph Creation?" In CMLDS 2024: 2024 International Conference on Computing, Machine Learning and Data Science. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3661725.3661733.

8

Thelen, Michael, Guntram Pressmair, Markus Lassnig, and Veronika Hornung-Prähauser. "Electric Vehicles as Flexibility Assets: Unlocking Ecosystem Collaborations: Leveraging Value Creation Partnerships for Mutually-beneficial Exchanges." In New Business Models 2023. Maastricht University Press, 2023. http://dx.doi.org/10.26481/mup.2302.18.

Abstract:
The trend towards energy decentralization and innovations in data-driven e-mobility have given rise to new types of electric vehicle charging, namely smart charging and vehicle-to-grid technologies. To unlock the full potential of electric mobility's flexibility, an exploratory ecosystem approach is first warranted to uncover stakeholder requirements, activities, and (inter-)dependencies. The purpose of this research is to lay the foundation for future resilient business models in the grid-aware mobility ecosystem, which require novel multi-stakeholder collaborations. Through rigorous exploratory ecosystem modeling, flexibility recipient taxonomies, and a co-creation workshop, we have sought to uncover stakeholder intricacies in order to improve the overall innovation ecosystem value proposition. The results surface several previously unconsidered perspectives (such as the issue of double taxation) and several prospective cross-sector business opportunities for fleet operators, vehicle OEMs, aggregators, and even public parking spaces. Additionally, stakeholders vary considerably in terms of needs, value-adding activities, (inter-)dependencies, risk, and flexibility services provided or requested, which need to be weighed and reconciled at an (inter-)sectoral level.
9

N. Krilov, D. "Models Creation as a Part of Porosity and Lithology Studies Incorporating Seismic Data." In 57th EAEG Meeting. Netherlands: EAGE Publications BV, 1995. http://dx.doi.org/10.3997/2214-4609.201409329.

10

Neydorf, Rudolf, Viktor Polyakh, and Ivan Chernogorov. "Creation of Mathematical Models of Fragmented Data Arrays Using 'Cut-Glue' Approximation Method." In 2018 International Russian Automation Conference (RusAutoCon). IEEE, 2018. http://dx.doi.org/10.1109/rusautocon.2018.8501639.


Reports by organizations on the topic "Creation of data models"

1

Pagliarin, Sofia, Dominik Herrmann, Daniela Nicklas, Hannes Glückert, Jon Meyer, and Patrick Vizitiu. Data policy models in European smart cities: Experiences, opportunities and challenges in data policies in Europe. Otto-Friedrich-Universität, 2022. http://dx.doi.org/10.20378/irb-53583.

Abstract:
The report illustrates why a smart city should develop a data policy. Guiding questions for the creation of such a data policy in the context of the Smart City Bamberg are discussed. Furthermore, the report shows how the smart cities of Barcelona, Hamburg, Helsinki, Stuttgart, Vienna and Zurich proceed. The presented analysis is based on public documents and interviews.
2

Mackley, Rob D., John A. Serkowski, and George V. Last. Status Report on the Creation of a Preliminary Data Model and Dictionary for a New Petrologic Database. Office of Scientific and Technical Information (OSTI), June 2008. http://dx.doi.org/10.2172/971114.

3

Koop, Gary, Stuart McIntyre, James Mitchell, Aubrey Poon, and Ping Wu. Incorporating short data into large mixed-frequency VARs for regional nowcasting. Federal Reserve Bank of Cleveland, May 2023. http://dx.doi.org/10.26509/frbc-wp-202309.

Abstract:
Interest in regional economic issues coupled with advances in administrative data is driving the creation of new regional economic data. Many of these data series could be useful for nowcasting regional economic activity, but they suffer from a short (albeit constantly expanding) time series which makes incorporating them into nowcasting models problematic. Regional nowcasting is already challenging because the release delay on regional data tends to be greater than that at the national level, and "short" data imply a "ragged edge" at both the beginning and the end of regional data sets, which adds a further complication. In this paper, via an application to the UK, we develop methods to include a wide range of short data into a regional mixed-frequency VAR model. These short data include hitherto unexploited regional VAT turnover data. We address the problem of the ragged edge at both the beginning and end of our sample by estimating regional factors using different missing data algorithms that we then incorporate into our mixed-frequency VAR model. We find that nowcasts of regional output growth are generally improved when we condition them on the factors, but only when the regional nowcasts are produced before the national (UK-wide) output growth data are published.
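To illustrate the "ragged edge" issue concretely, the sketch below estimates a single common factor from a panel in which one series starts late and another ends early, by iterating between filling the missing cells and re-estimating the first principal component. This is a generic EM-style illustration on assumed toy data, not the specific algorithms or data used in the paper.

import numpy as np

def em_pca_factor(panel, n_iter=50):
    """Estimate one common factor from a T x N panel with NaNs at the
    beginning and/or end of some series (the 'ragged edge').

    Alternates between (1) filling missing cells with their current fitted
    values and (2) re-estimating the first principal component.
    Illustrative only; standardisation and convergence checks are minimal.
    """
    X = np.array(panel, dtype=float)
    mask = np.isnan(X)
    X = (X - np.nanmean(X, axis=0)) / np.nanstd(X, axis=0)  # standardise each series
    X[mask] = 0.0                      # initial fill: series mean (0 after standardising)

    factor = np.zeros(X.shape[0])
    for _ in range(n_iter):
        U, S, Vt = np.linalg.svd(X, full_matrices=False)  # first PC of completed panel
        factor = U[:, 0] * S[0]
        loadings = Vt[0, :]
        X[mask] = np.outer(factor, loadings)[mask]        # refill only missing cells
    return factor

rng = np.random.default_rng(0)
true_factor = rng.standard_normal(40)
data = np.outer(true_factor, rng.uniform(0.5, 1.5, size=6)) + 0.3 * rng.standard_normal((40, 6))
data[:8, 4] = np.nan                   # a "short" series that starts late
data[-4:, 5] = np.nan                  # a series with a publication delay at the end
f = em_pca_factor(data)
# The sign of a principal-components factor is not identified, so report |corr|.
print("abs. correlation with the true factor:",
      round(abs(np.corrcoef(f, true_factor)[0, 1]), 2))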
4

MacCormack, K. E., C. H. Eyles, and J. C. Maclachlan. Making the most of what you've got: creating 3D subsurface models with data of varying quality. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2006. http://dx.doi.org/10.4095/221887.

5

Li, Honghai, Mitchell Brown, Lihwa Lin, Yan Ding, Tanya Beck, Alejandro Sanchez, Weiming Wu, Christopher Reed, and Alan Zundel. Coastal Modeling System user's manual. Engineer Research and Development Center (U.S.), April 2024. http://dx.doi.org/10.21079/11681/48392.

Abstract:
The Coastal Modeling System (CMS) is a suite of coupled 2D numerical models for simulating nearshore waves, currents, water levels, sediment transport, morphology change, and salinity and temperature. Developed by the Coastal Inlets Research Program of the US Army Corps of Engineers, the CMS provides coastal engineers and scientists a PC-based, easy-to-use, accurate, and efficient tool for understanding coastal processes and for designing and managing coastal inlets research, navigation projects, and sediment exchange between inlets and adjacent beaches. The present technical report acts as a user guide for the CMS and contains comprehensive information on model theory, model setup, and model features. The detailed descriptions include creation of a new project, configuration of the model grid, the various types of boundary conditions, representation of coastal structures, numerical methods, and coupled simulations of waves, hydrodynamics, and sediment transport. Pre- and post-model data processing and CMS modeling procedures are also described through operation within a graphical user interface, the Surface-water Modeling System.
6

Gonzalez-Esteban, Cristina. Black Sea Wreck Virtual Reconstruction to Reinvigorate Archaeological Data and Comparative Studies. Honor Frost Foundation, 2023. http://dx.doi.org/10.33583/mags2021.07.

Abstract:
This short report tests a repeatable methodology for creating detailed virtual reconstructions where the model is a scientific container of the reconstruction information. The project reconstructed a Black Sea shipwreck using a photogrammetry survey and proposed a hypothesis of how it would have looked prior to sinking. To this “shell”, the metadata and paradata were added using BIM: Extended Matrix and Graphic Scale of Evidence. Academically, the “source-based reconstruction” opened a new spectrum of questions related to the ship and its community (chronology, building, propulsion, usage). The models also reported potential as public engagement tools, displaying the scientific background of archaeology.
7

Merkulova, Yuliya. Система цифровых моделей - новая технология для баланса данных [A system of digital models: a new technology for balancing data]. Yuliya Merkulova, April 2021. http://dx.doi.org/10.12731/er0430.26042021.

Abstract:
The use of digital technologies is a new and very productive approach to balancing different data. It is particularly important for balancing supply and demand and for increasing the competitiveness of products. Various types of digital models were developed as a result of this research and are reflected in the article. Digital models that describe the sequences of steps and operations of each stage, and of the process as a whole, make it possible to establish a system of interrelations between operations and steps and to achieve the necessary logic and effectiveness of any process. Object-relational models that link data across different blocks of databases, together with functional models for choosing a data-balancing strategy, form the analytical basis for justifying the direction of data transformation. Models that combine a plurality of product-offer data in the form of multi-purpose optimization matrices have a double effect: they allow various options for combining data to be developed, taking into account the possibility of relocating products across markets and time phases, and they also allow the aggregate useful effect of the products to be estimated. Together with models for comparing the options and choosing optimal solutions, they make it possible to generate compatible strategic and current product-offer programs as a plurality of output data balanced with each other and with demand data, providing the best synergetic result. The developed methodology for creating a system of interconnected digital models for transforming data and generating the output data of a situational-strategic product-offer program is a cornerstone of a new digital economy: an economy of balanced data.
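As a deliberately simplified, single-objective stand-in for the multi-purpose optimization matrices described above (and not the author's own models), balancing product-offer data against demand data across markets and time phases can be written as a small transport-style linear program. All figures below are invented for the example.

import numpy as np
from scipy.optimize import linprog

# Illustrative sketch: allocate supply from 2 product-offer plans to 3
# market/time cells so that demand is met exactly, at minimum cost.
supply = np.array([120.0, 80.0])            # units available per supply source
demand = np.array([60.0, 90.0, 50.0])       # units required per market/time cell
cost = np.array([[4.0, 6.0, 9.0],           # cost of serving cell j from source i
                 [5.0, 3.0, 7.0]])

n_s, n_d = cost.shape
c = cost.ravel()                            # decision variables x[i, j], flattened row-major

# Each supply source cannot ship more than it has: sum_j x[i, j] <= supply[i]
A_ub = np.zeros((n_s, n_s * n_d))
for i in range(n_s):
    A_ub[i, i * n_d:(i + 1) * n_d] = 1.0

# Each demand cell must be met exactly: sum_i x[i, j] == demand[j]
A_eq = np.zeros((n_d, n_s * n_d))
for j in range(n_d):
    A_eq[j, j::n_d] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print("balanced allocation (sources x market/time cells):")
print(res.x.reshape(n_s, n_d).round(1))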
8

Herman, Brook, Paula Whitfield, Jenny Davis, Amanda Tritinger, Becky Raves, S. Dillon, Danielle Szimanski, Todd Swannack, Joseph Gailani, and Jeffery King. Swan Island resilience model development; Phase I: conceptual model. Engineer Research and Development Center (U.S.), January 2023. http://dx.doi.org/10.21079/11681/46402.

Abstract:
This report documents the development of an integrated hydrodynamic and ecological model to test assumptions about island resilience. Swan Island, a 25-acre island in Chesapeake Bay, Maryland, was used as a case study. An interagency, interdisciplinary team of scientists and engineers came together in a series of workshops to develop a simplified resilience model to examine the ability of islands to reduce waves and erosion and the impacts to nearby habitats and shorelines. This report describes the model development process and the results from this first key step: model conceptualization. The final conceptual model identifies four main components: vegetative biomass, island elevation, waves/currents, and sediment supply. These components interact to form and support specific habitat types occurring on the island: coastal dunes, high marsh, low marsh, and submerged aquatic vegetation. The pre- and post-construction field data, coupled with hydrodynamic ecological models, will provide predictive capabilities of island resilience and evaluations of accrued benefits for future island creation and restoration projects. The process and methods described can be applied to island projects in a variety of regions and geographic scales.
9

Shukla, Indu, Rajeev Agrawal, Kelly Ervin, and Jonathan Boone. AI on digital twin of facility captured by reality scans. Engineer Research and Development Center (U.S.), November 2023. http://dx.doi.org/10.21079/11681/47850.

Abstract:
The power of artificial intelligence (AI), coupled with optimization algorithms, can be linked to data-rich digital twin models to perform predictive analysis and make better informed decisions about installation operations and quality of life for the warfighters. In the current research, we developed AI-connected lifecycle building information models through the creation of a data-informed smart digital twin of one of the US Army Corps of Engineers (USACE) buildings as our test case. Digital twin (DT) technology involves creating a virtual representation of a physical entity. A digital twin is created by digitalizing data collected through sensors and, powered by machine learning (ML) algorithms, is a continuously learning system. The exponential advance in digital technologies enables facility spaces to be fully and richly modeled in three dimensions and brought together in virtual space. Advances in reinforcement learning and computer graphics enable AI agents to learn visual navigation and interaction with objects. We have used Habitat AI 2.0 to train an embodied agent in an immersive 3D photorealistic environment. The embodied agent interacts with the 3D environment by receiving RGB, depth, and semantically segmented views of the environment, taking navigational actions, and interacting with objects in the 3D space. Instead of training robots in the physical world, we train embodied agents in a simulated 3D space. Humans are superior at critical thinking, creativity, and managing people, whereas robots are superior at coping with harsh environments and performing highly repetitive work. Training robots in a controlled simulated world is faster and can increase their surveillance, reliability, efficiency, and survivability in physical space.
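The training setup described above amounts to an observation-action loop in which the agent receives RGB, depth, and semantic views and issues navigation actions. The sketch below is schematic only: SimEnv and RandomNavAgent are hypothetical placeholders, and the real Habitat AI 2.0 interfaces are not reproduced here.

import random

# Schematic embodied-agent loop. SimEnv and RandomNavAgent are hypothetical
# stand-ins for a photorealistic simulator and a learning agent.
ACTIONS = ["move_forward", "turn_left", "turn_right", "stop"]

class SimEnv:
    """Toy environment returning RGB / depth / semantic observations."""
    def reset(self):
        self.steps = 0
        return self._observe()

    def step(self, action):
        self.steps += 1
        done = action == "stop" or self.steps >= 50
        # Toy reward: credit for stopping after having explored a while.
        reward = 1.0 if action == "stop" and self.steps > 10 else 0.0
        return self._observe(), reward, done

    def _observe(self):
        # Real simulators return image tensors; placeholders are used here.
        return {"rgb": None, "depth": None, "semantic": None}

class RandomNavAgent:
    """Placeholder policy; a trained policy would map observations to actions."""
    def act(self, observation):
        return random.choice(ACTIONS)

env, agent = SimEnv(), RandomNavAgent()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    obs, reward, done = env.step(agent.act(obs))
    total_reward += reward
print("episode finished, total reward:", total_reward)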
10

Wilson, D., Daniel Breton, Lauren Waldrop, Danney Glaser, Ross Alter, Carl Hart, Wesley Barnes, et al. Signal propagation modeling in complex, three-dimensional environments. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40321.

Abstract:
The Signal Physics Representation in Uncertain and Complex Environments (SPRUCE) work unit, part of the U.S. Army Engineer Research and Development Center (ERDC) Army Terrestrial-Environmental Modeling and Intelligence System (ARTEMIS) work package, focused on the creation of a suite of three-dimensional (3D) signal and sensor performance modeling capabilities that realistically capture propagation physics in urban, mountainous, forested, and other complex terrain environments. This report describes many of the developed technical capabilities. Particular highlights are (1) creation of a Java environmental data abstraction layer for 3D representation of the atmosphere and inhomogeneous terrain that ingests data from many common weather forecast models and terrain data formats, (2) extensions to the Environmental Awareness for Sensor and Emitter Employment (EASEE) software to enable 3D signal propagation modeling, (3) modeling of transmitter and receiver directivity functions in 3D including rotations of the transmitter and receiver platforms, (4) an Extensible Markup Language/JavaScript Object Notation (XML/JSON) interface to facilitate deployment of web services, (5) signal feature definitions and other support for infrasound modeling and for radio-frequency (RF) modeling in the very high frequency (VHF), ultra-high frequency (UHF), and super-high frequency (SHF) frequency ranges, and (6) probabilistic calculations for line-of-sight in complex terrain and vegetation.
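Point (3) above, transmitter and receiver directivity in 3D with platform rotations, comes down to evaluating a gain pattern in a rotated frame. The sketch below rotates a toy dipole-like pattern with a yaw rotation matrix; the pattern, axis, and angles are illustrative assumptions and do not reflect the EASEE implementation.

import numpy as np

def rotation_z(yaw_rad):
    """Rotation matrix for a yaw (heading) rotation about the z axis."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def dipole_gain(direction_platform):
    """Toy directivity for a dipole aligned with the platform x axis:
    zero gain along the axis, maximum gain broadside to it."""
    d = direction_platform / np.linalg.norm(direction_platform)
    return 1.5 * (1.0 - d[0] ** 2)

def rotated_gain(direction_world, platform_yaw_rad):
    """Gain toward a world-frame direction when the platform is yawed.
    The world direction is expressed in the platform frame before the
    unrotated pattern is evaluated."""
    R = rotation_z(platform_yaw_rad)
    return dipole_gain(R.T @ direction_world)

toward_receiver = np.array([1.0, 0.0, 0.0])   # receiver due east of the transmitter
for yaw_deg in (0, 45, 90):
    g = rotated_gain(toward_receiver, np.radians(yaw_deg))
    print(f"platform yaw {yaw_deg:3d} deg -> gain toward receiver {g:.2f}")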
