Journal articles on the topic "Creation of data models"

Click this link to see other types of publications on this topic: Creation of data models.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles.

Check the top 50 journal articles on the topic "Creation of data models".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the corresponding parameters are available in the work's metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Parvinen, Petri, Essi Pöyry, Robin Gustafsson, Miikka Laitila and Matti Rossi. "Advancing Data Monetization and the Creation of Data-based Business Models". Communications of the Association for Information Systems 47, no. 1 (October 1, 2020): 25–49. http://dx.doi.org/10.17705/1cais.04702.

2

Schadd, Maarten, Nico de Reus, Sander Uilkema and Jeroen Voogd. "Data-driven behavioural modelling for military applications". Journal of Defence & Security Technologies 4, no. 1 (January 2022): 12–36. http://dx.doi.org/10.46713/jdst.004.02.

Abstract:
This article investigates the possibilities for creating behavioural models of military decision making in a data-driven manner. As not much data from actual operations is available, and data cannot easily be created in the military context, most approaches use simulators to learn behaviour. A simulator is however not always available or is difficult to create. This study focusses on the creation of behavioural models from data that was collected during a field exercise. As data in general is limited, noisy and erroneous, this makes the creation of realistic models challenging. Besides using the traditional approach of hand-crafting a model based on data, we investigate the emerging research area of imitation learning. One of its techniques, reward engineering, is applied to learn the behaviour of soldiers in an urban warfare operation. Basic, but realistic, soldier behaviour is learned, which lays the groundwork for more elaborate models in the future.
3

Fisher, Nathan B., John C. Charbonneau and Stephanie K. Hurst. "Rapid Creation of Three-Dimensional, Tactile Models from Crystallographic Data". Journal of Crystallography 2016 (August 14, 2016): 1–8. http://dx.doi.org/10.1155/2016/3054573.

Abstract:
A method for the conversion of crystallographic information framework (CIF) files to stereolithographic data files suitable for printing on three-dimensional printers is presented. Crystallographic information framework or CIF files are capable of being manipulated in virtual space by a variety of computer programs, but their visual representations are limited to the two-dimensional surface of the computer screen. Tactile molecular models that demonstrate critical ideas, such as symmetry elements, play a critical role in enabling new students to fully visualize crystallographic concepts. In the past five years, major developments in three-dimensional printing have lowered the cost and complexity of these systems to a level where three-dimensional molecular models may be easily created provided that the data exists in a suitable format. Herein a method is described for the conversion of CIF file data using existing free software that allows for the rapid creation of inexpensive molecular models. This approach has numerous potential applications in basic research, education, visualization, and crystallography.
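The CIF-to-STL conversion this abstract describes can be illustrated with a minimal sketch: each atom becomes a small printable solid in an ASCII STL file. The atom coordinates and the octahedron-per-atom simplification are invented for illustration; they are not the authors' actual pipeline, which relies on existing free software.

```python
# Toy sketch of the CIF -> STL idea: represent each atom as a small
# octahedron and emit an ASCII stereolithography (STL) mesh.
# The atom coordinates below are illustrative, not real CIF data.

def octahedron_facets(cx, cy, cz, r):
    """Return the 8 triangular facets of an octahedron centred at (cx, cy, cz)."""
    top, bot = (cx, cy, cz + r), (cx, cy, cz - r)
    ring = [(cx + r, cy, cz), (cx, cy + r, cz), (cx - r, cy, cz), (cx, cy - r, cz)]
    facets = []
    for i in range(4):
        a, b = ring[i], ring[(i + 1) % 4]
        facets.append((top, a, b))   # upper half
        facets.append((bot, b, a))   # lower half
    return facets

def atoms_to_stl(atoms, radius=0.3):
    """Serialize a list of (x, y, z) atom positions as an ASCII STL solid."""
    lines = ["solid crystal"]
    for (x, y, z) in atoms:
        for tri in octahedron_facets(x, y, z, radius):
            lines.append("  facet normal 0 0 0")  # most slicers recompute normals
            lines.append("    outer loop")
            for (vx, vy, vz) in tri:
                lines.append(f"      vertex {vx:.4f} {vy:.4f} {vz:.4f}")
            lines.append("    endloop")
            lines.append("  endfacet")
    lines.append("endsolid crystal")
    return "\n".join(lines)

# Hypothetical unit-cell contents (coordinates in angstroms)
atoms = [(0.0, 0.0, 0.0), (2.0, 2.0, 2.0)]
stl_text = atoms_to_stl(atoms)
```

The resulting text can be written to a `.stl` file and opened directly in common 3D-printing slicers.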
4

Saarijärvi, Hannu, Christian Grönroos and Hannu Kuusela. "Reverse use of customer data: implications for service-based business models". Journal of Services Marketing 28, no. 7 (October 7, 2014): 529–37. http://dx.doi.org/10.1108/jsm-05-2013-0111.

Abstract:
Purpose – The purpose of this study is to explore and analyze the implications of reverse use of customer data for service-based business models. In their quest for competitive advantage, firms traditionally use customer data as resources to redesign and develop new products and services or identify the most profitable customers. However, in the shift from a goods-dominant logic toward customer value creation, the potential of customer data for the benefit of the customer, not just the firm, is an emerging, underexplored area of research. Design/methodology/approach – Business model criteria and three service examples combine to uncover the implications of reverse use of customer data for service-based business models. Findings – Implications of reverse use of customer data for service-based business models are identified and explored. Through reverse use of customer data, a firm can provide customers with additional resources and support customers’ value-creating processes. Accordingly, the firm can move beyond traditional exchanges, take a broader role in supporting customers’ value creation and diversify the value created by the customer through resource integration. The attention shifts from internal to external customer data usage; customer data transform from the firm’s resource to the customer’s, which facilitates the firm’s shift from selling goods to supporting customers’ value creation. Originality/value – Reverse use of customer data represents a new, emerging research phenomenon; its implications for service-based business models have not been explored.
5

Senderovich, Arik, Kyle E. C. Booth and J. Christopher Beck. "Learning Scheduling Models from Event Data". Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 401–9. http://dx.doi.org/10.1609/icaps.v29i1.3504.

Abstract:
A significant challenge in declarative approaches to scheduling is the creation of a model: the set of resources and their capacities and the types of activities and their temporal and resource requirements. In practice, such models are developed manually by skilled consultants and used repeatedly to solve different problem instances. For example, in a factory, the model may be used each day to schedule the current customer orders. In this work, we aim to automate the creation of such models by learning them from event data. We introduce a novel methodology that combines process mining, timed Petri nets (TPNs), and constraint programming (CP). The approach learns a sub-class of TPN from event logs of executions of past schedules and maps the TPN to a broad class of scheduling problems. We show how any problem of the scheduling class can be converted to a CP model. With new instance data (e.g., the day’s orders), the CP model can then be solved by an off-the-shelf solver. Our approach provides an end-to-end solution, going from event logs to model-based optimal schedules. To demonstrate the value of the methodology we conduct experiments in which we learn and solve scheduling models from two types of data: logs generated from job-shop scheduling benchmarks and real-world event logs from an outpatient hospital.
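The last step of the pipeline the abstract describes, solving a new instance once the scheduling model is known, can be sketched in miniature. The toy job-shop instance below is invented, and exhaustive search over job priorities stands in for the CP solver the paper uses.

```python
from itertools import permutations

# Toy job-shop instance: each job is an ordered list of (machine, duration)
# operations. The data is invented; in the paper such a model is learned
# from event logs and then solved with constraint programming.
jobs = {
    "J1": [("M1", 3), ("M2", 2)],
    "J2": [("M2", 2), ("M1", 4)],
}

def makespan(priority):
    """List-schedule operations by cycling through jobs in priority order."""
    machine_free, job_free = {}, {}
    next_op = {j: 0 for j in jobs}
    total = sum(len(ops) for ops in jobs.values())
    scheduled, end = 0, 0
    while scheduled < total:
        for j in priority:
            if next_op[j] == len(jobs[j]):
                continue  # job already finished
            machine, dur = jobs[j][next_op[j]]
            start = max(machine_free.get(machine, 0), job_free.get(j, 0))
            machine_free[machine] = job_free[j] = start + dur
            next_op[j] += 1
            scheduled += 1
            end = max(end, start + dur)
    return end

# Exhaustive search over priority orders stands in for an off-the-shelf solver.
best = min(makespan(order) for order in permutations(jobs))
```

For this instance machine M1 needs 3 + 4 = 7 time units of work, so a makespan of 7 is optimal.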
6

Hron, Vojtěch, and Lena Halounová. "AUTOMATIC RECONSTRUCTION OF ROOF MODELS FROM BUILDING OUTLINES AND AERIAL IMAGE DATA". Acta Polytechnica 59, no. 5 (November 1, 2019): 448–57. http://dx.doi.org/10.14311/ap.2019.59.0448.

Abstract:
The knowledge of roof shapes is essential for the creation of 3D building models. Many experts and researchers use 3D building models for specialized tasks, such as creating noise maps, estimating the solar potential of roof structures, and planning new wireless infrastructures. Our aim is to introduce a technique for automating the creation of topologically correct roof building models using outlines and aerial image data. In this study, we used building footprints and vertical aerial survey photographs. Aerial survey photographs enabled us to produce an orthophoto and a digital surface model of the analysed area. The developed technique made it possible to detect roof edges from the orthophoto and to categorize the edges using spatial relationships and height information derived from the digital surface model. This method allows buildings with complicated shapes to be decomposed into simple parts that can be processed separately. In our study, a roof type and model were determined for each building part and tested with multiple datasets with different levels of quality. Excellent results were achieved for simple and medium complex roofs. Results for very complex roofs were unsatisfactory. For such structures, we propose using multitemporal images because these can lead to significant improvements and a better roof edge detection. The method used in this study was shared with the Czech national mapping agency and could be used for the creation of new 3D modelling products in the near future.
7

Milkau, Udo. "Value Creation within AI-enabled Data Platforms". Journal of Creating Value 5, no. 1 (October 30, 2018): 25–39. http://dx.doi.org/10.1177/2394964318803244.

Abstract:
With digitalization, a new type of firm—the so-called business platform—emerged as a central hub in two-sided markets. As business platforms do not ‘produce’ products or services, they represent a new model of value creation that raises the question about the core nature of a firm in the twenty-first century, when ‘data is the new oil’. At the end of the twentieth century, the concept of ‘value chains, value shops and value networks’ represented the latest development about internal value creation in a firm, but lacked any discussion about information technology (IT) or even ‘data as raw material’. This digital approach to monetize aggregated data sets as an internal core function of a firm needs more clarification, as value creation ‘without production’ is a shift of paradigm. This article starts with the concept of ‘value chains, value shops and value networks’, extends this to current IT and includes business platforms within an integrated framework of internal value creation in a firm. Based on this framework and the current development of leading-edge artificial intelligence (AI), this framework is applied to forecast the development towards ‘AI-enabled data platforms’, which are not covered by traditional economic theories. This article calls for more research to clarify the impact of such data-based business models compared to production-based models.
8

Abrukov, S. V., E. V. Karlovich, V. N. Afanasyev, Yu V. Semenov and Victor S. Abrukov. "CREATION OF PROPELLANT COMBUSTION MODELS BY MEANS OF DATA MINING TOOLS". International Journal of Energetic Materials and Chemical Propulsion 9, no. 5 (2010): 385–96. http://dx.doi.org/10.1615/intjenergeticmaterialschemprop.2011001405.

9

Vycital, Miroslav, and Cenek Jarský. "An automated nD model creation on BIM models". Organization, Technology and Management in Construction: an International Journal 12, no. 1 (June 22, 2020): 2218–31. http://dx.doi.org/10.2478/otmcj-2020-0018.

Abstract:
The construction technology (CONTEC) method was originally developed for automated construction planning and project management based on data in the form of a budget or bill of quantities. This article outlines a new approach to the automated creation of discrete nD building information modeling (BIM) models by using data from the BIM model and processing them with the existing CONTEC method through the CONTEC software. It outlines the discrete modeling approach on BIM models as one of the applicable approaches for nD modeling. It also defines the methodology of interlinking BIM model data and the CONTEC software through the classification of items. The interlink enables automation in the production of discrete nD BIM model data, such as the schedule (4D), including work distribution and resource planning; the budget (5D), based on an integrated pricing system; and further nD data such as health and safety risk plans (6D) (H&S risk register), quality plans and quality assurance checklists (7D), including their monitoring, and environmental plans (8D). The methodology of the direct application of the selected classification system, as well as the means of data transfer and the conditions of data transferability, is described. The method was tested on the case study of an office building project, and the acquired data were compared to actual construction time and costs. The case study proves the applicability of the CONTEC method in the BIM model environment, enabling the creation of not only 4D and 5D models but also discrete nD models up to 8D in the construction management process. In comparison with existing BIM classification systems, further development of the method will enable fully automated discrete nD model creation in the BIM model environment.
10

Elvas, Luís B., João C. Ferreira, Miguel Sales Dias and Luís Brás Rosário. "Health Data Sharing towards Knowledge Creation". Systems 11, no. 8 (August 21, 2023): 435. http://dx.doi.org/10.3390/systems11080435.

Abstract:
Data sharing and service reuse in the health sector pose significant privacy and security challenges. The European Commission recognizes health data as a unique and cost-effective resource for research, while the OECD emphasizes the need for privacy-protecting data governance systems. In this paper, we propose a novel approach to health data access in a hospital environment, leveraging homomorphic encryption to ensure privacy and secure sharing of medical data among healthcare entities. Our framework establishes a secure environment that enforces GDPR adoption. We present an Information Sharing Infrastructure (ISI) framework that seamlessly integrates artificial intelligence (AI) capabilities for data analysis. Through our implementation, we demonstrate the ease of applying AI algorithms to treated health data within the ISI environment. Evaluating machine learning models, we achieve high accuracies of 96.88% with logistic regression and 97.62% with random forest. To address privacy concerns, our framework incorporates Data Sharing Agreements (DSAs). Data producers and consumers (prosumers) have the flexibility to express their preferences for sharing and analytics operations. Data-centric policy enforcement mechanisms ensure compliance and privacy preservation. In summary, our comprehensive framework combines homomorphic encryption, secure data sharing, and AI-driven analytics. By fostering collaboration and knowledge creation in a secure environment, our approach contributes to the advancement of medical research and improves healthcare outcomes. A real case application was implemented between Portuguese hospitals and universities for this data sharing.
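The homomorphic property that makes computing on encrypted health data possible can be demonstrated with a toy Paillier cryptosystem, a standard additively homomorphic scheme; the abstract does not name the exact scheme used, and the tiny primes here offer no security and are purely illustrative (Python 3.9+ for `math.lcm` and modular-inverse `pow`).

```python
import math
import random

# Toy Paillier cryptosystem illustrating the additive homomorphism behind
# privacy-preserving analytics: multiplying ciphertexts adds plaintexts.
# The 8-bit primes are for demonstration only and provide no security.

p, q = 13, 17
n = p * q
n2 = n * n
g = n + 1                      # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)   # Carmichael function lambda(n)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: an aggregator can sum values it cannot read.
a, b = 41, 59
c_sum = (encrypt(a) * encrypt(b)) % n2
```

Decrypting `c_sum` yields `a + b` even though the two summands were never revealed to whoever combined the ciphertexts.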
11

Klikunova, A., and A. Khoperskov. "Creation of digital elevation models for river floodplains". Information Technology and Nanotechnology, no. 2391 (2019): 275–84. http://dx.doi.org/10.18287/1613-0073-2019-2391-275-284.

Abstract:
A procedure for constructing a digital elevation model (DEM) of the northern part of the Volga-Akhtuba interfluve is described. The basis of our DEM is the elevation matrix of the Shuttle Radar Topography Mission (SRTM), for which we carried out the refinement and updating of spatial data using satellite imagery, GPS data, and depth measurements of the River Volga and River Akhtuba stream beds. The most important source of high-altitude data for the Volga-Akhtuba floodplain (VAF) can be the results of observations of the coastline dynamics of small reservoirs (lakes, eriks, small channels) arising in the process of spring flooding and disappearing during low-flow periods. A set of digitized coastlines at different times of flooding can significantly improve the quality of the DEM. The method of constructing a digital elevation model includes an iterative procedure that uses the results of morphostructural analysis of the DEM and the numerical hydrodynamic simulations of the VAF flooding based on the shallow water model.
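The refinement idea, using shoreline observations at a known water level to correct a coarse elevation grid, can be sketched with inverse-distance weighting of the residuals. The grid and observations below are invented, and this is a simplification of, not a reproduction of, the authors' iterative procedure.

```python
import numpy as np

# Sketch: shoreline points observed at a known flood water level pin the true
# elevation at those cells; nearby DEM cells are corrected by inverse-distance
# weighting of the residuals. Grid values and observations are invented.

dem = np.full((5, 5), 10.0)            # coarse SRTM-like elevations, metres
obs = [((1, 1), 8.0), ((3, 3), 12.0)]  # ((row, col), observed shoreline elevation)

def refine(dem, obs, power=2.0):
    out = dem.copy()
    rows, cols = np.indices(dem.shape)
    residuals = np.zeros_like(dem)
    weights = np.zeros_like(dem)
    for (r, c), z in obs:
        d2 = (rows - r) ** 2 + (cols - c) ** 2
        w = 1.0 / np.maximum(d2, 1e-9) ** (power / 2)   # inverse-distance weight
        residuals += w * (z - dem[r, c])
        weights += w
    out += residuals / weights
    for (r, c), z in obs:
        out[r, c] = z               # observations are honoured exactly
    return out

refined = refine(dem, obs)
```

Cells near a shoreline observation move toward it, while a cell equidistant from opposite-signed residuals is left unchanged.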
12

Abrukov, Victor, Darya Anufrieva, Alexander Lukin, Charlie Oommen, V. R. Sanalkumar and Nichith Chandrasekaran. "Development of the Multifactor Computational Models of the Solid Propellants Combustion by Means of Data Science Methods. Propellant Combustion Genome Conception". MATEC Web of Conferences 330 (2020): 01048. http://dx.doi.org/10.1051/matecconf/202033001048.

Abstract:
The results of using data science methods, in particular artificial neural networks, to create new multifactor computational models of solid propellant (SP) combustion that solve the direct and inverse tasks are presented. The analytical platform Loginom was used for model creation. Models of combustion of double-based SP with nano additives such as metals, metal oxides and thermites were created from experimental data published in the scientific literature. The goal functions of the models were the burning rate (direct task) and the propellant composition (inverse task). The basis (script) of a Data Warehouse of SP combustion was developed. The Data Warehouse can be supplemented with new experimental data and metadata in automated mode and serve as a basis for creating generalized combustion models of SP, and thus as the beginning of work in a new direction of combustion science, which the authors propose to call the "Propellant Combustion Genome" (by analogy with the well-known Materials Genome Initiative, USA). The "Propellant Combustion Genome" opens wide possibilities for accelerating the development of advanced propellants.
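The direct task described here (propellant composition in, burning rate out) is a regression problem. As a simple stand-in for the authors' neural networks, a linear least-squares fit on invented composition/burning-rate rows shows the shape of such a model; all numbers are fabricated for illustration.

```python
import numpy as np

# Invented training rows: two composition mass fractions -> burning rate.
# A linear model stands in for the ANN models described in the abstract.
X = np.array([[0.80, 0.05],
              [0.75, 0.10],
              [0.70, 0.12],
              [0.65, 0.20]])          # e.g. binder and additive fractions
y = np.array([14.0, 14.5, 14.4, 15.5])  # burning rate, mm/s at fixed pressure

A = np.column_stack([X, np.ones(len(X))])   # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x):
    """Burning rate predicted for a new composition (direct task)."""
    return np.append(x, 1.0) @ coef

rate = predict([0.72, 0.13])
```

The inverse task (finding a composition for a target burning rate) would invert this mapping, which is exactly where the paper's neural-network approach earns its keep.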
13

Schwoch, Sebastian, Maximilian Peter Dammann, Johannes Georg Bartl, Maximilian Kretzschmar, Bernhard Saske and Kristin Paetzold-Byhain. "Towards a process for the creation of synthetic training data for AI-computer vision models utilizing engineering data". Proceedings of the Design Society 4 (May 2024): 2237–46. http://dx.doi.org/10.1017/pds.2024.226.

Abstract:
Artificial Intelligence-based Computer Vision models (AI-CV models) for object detection can support various applications over the entire lifecycle of machines and plants such as monitoring or maintenance tasks. Despite ongoing research on using engineering data to synthesize training data for AI-CV model development, there is a lack of process guidelines for the creation of such data. This paper proposes a synthetic training data creation process tailored to the particularities of an engineering context addressing challenges such as the domain gap and methods like domain randomization.
14

Trhan, Ondrej. "The Creation of Space Vector Models of Buildings From RPAS Photogrammetry Data". Slovak Journal of Civil Engineering 25, no. 2 (June 27, 2017): 7–14. http://dx.doi.org/10.1515/sjce-2017-0007.

Abstract:
The results of Remote Piloted Aircraft System (RPAS) photogrammetry are digital surface models and orthophotos. The main problem of the digital surface models obtained is that buildings are not perpendicular and the shape of roofs is deformed. The task of this paper is to obtain a more accurate digital surface model using building reconstructions. The paper discusses the problem of obtaining and approximating building footprints, reconstructing the final spatial vector digital building model, and modifying the buildings on the digital surface model.
15

Xiong, Xuehan, Antonio Adan, Burcu Akinci and Daniel Huber. "Automatic creation of semantically rich 3D building models from laser scanner data". Automation in Construction 31 (May 2013): 325–37. http://dx.doi.org/10.1016/j.autcon.2012.10.006.

16

Katayama, Akihiro, Shinji Uchiyama, Hideyuki Tamura, Takeshi Naemura, Masahide Kaneko and Hiroshi Harashima. "A cyber-space creation by mixing ray space data with geometric models". Systems and Computers in Japan 29, no. 9 (August 1998): 21–31. http://dx.doi.org/10.1002/(sici)1520-684x(199808)29:9<21::aid-scj3>3.0.co;2-l.

17

Gotlib, Dariusz, and Marcin Karabin. "Integration of Models of Building Interiors with Cadastral Data". Reports on Geodesy and Geoinformatics 104, no. 1 (December 20, 2017): 91–102. http://dx.doi.org/10.1515/rgg-2017-0018.

Abstract:
Demand for applications which use models of building interiors is growing and highly diversified. Those models are applied at the stage of designing and constructing a building, in applications which support real estate management, in navigation and marketing systems and, finally, in crisis management and security systems. They are created on the basis of different data: architectural and construction plans, both in analogue form and as CAD files, BIM data files, laser scanning (TLS) and conventional surveys. In this context, the issue of searching for solutions which would integrate the existing models and eliminate data redundancy is becoming more important. The authors analysed the possible input of cadastral data (the legal extent of premises) at the stage of creating and updating different models of building interiors. The paper focuses on one issue: the way of describing the geometry of premises based on the most popular source data, i.e. architectural and construction plans. However, the described rules may be considered universal and may also be applied in practice during the creation and updating of indoor models based on BIM datasets or laser scanning clouds.
18

Smith, Dianna M., Graham P. Clarke and Kirk Harland. "Improving the Synthetic Data Generation Process in Spatial Microsimulation Models". Environment and Planning A: Economy and Space 41, no. 5 (May 2009): 1251–68. http://dx.doi.org/10.1068/a4147.

Abstract:
Simulation models are increasingly used in applied research to create synthetic micro-populations and predict possible individual-level outcomes of policy intervention. Previous research highlights the relevance of simulation techniques in estimating the potential outcomes of changes in areas such as taxation and child benefit policy, crime, education, or health inequalities. To date, however, there is very little published research on the creation, calibration, and testing of such micro-populations and models, and little on the issue of how well synthetic data can fit locally as opposed to globally in such models. This paper discusses improving the process of synthetic micropopulation generation with the aim of improving and extending existing spatial microsimulation models. Experiments using different variable configurations to constrain the models are undertaken with the emphasis on producing a suite of models to match the different sociodemographic conditions found within a typical city. The results show that creating processes to generate area-specific synthetic populations, which reflect the diverse populations within the study area, provides more accurate population estimates for future policy work than the traditional global model configurations.
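The constraint-based population synthesis discussed here is commonly built on iterative proportional fitting (IPF), which reweights a seed cross-tabulation until it matches small-area census marginals; the seed table and marginals below are invented example data, not from the paper.

```python
import numpy as np

# Iterative proportional fitting (IPF), the core reweighting step behind many
# spatial microsimulation models: a seed cross-tabulation is scaled until its
# row and column sums match the small-area constraint totals.

seed = np.array([[10.0, 20.0],
                 [30.0, 40.0]])       # e.g. age band x employment status
row_targets = np.array([40.0, 60.0])  # small-area counts per age band
col_targets = np.array([55.0, 45.0])  # small-area counts per status

def ipf(seed, row_targets, col_targets, iters=100):
    t = seed.copy()
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]   # fit row marginals
        t *= (col_targets / t.sum(axis=0))[None, :]   # fit column marginals
    return t

fitted = ipf(seed, row_targets, col_targets)
```

Changing which constraint tables are used, as the paper's experiments do, amounts to changing the marginals (and dimensions) fed to this fitting step.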
19

Kliment, Marek, Jozef Trojan, Miriam Pekarcíková and Jana Kronová. "Creation of simulation models using the flexsim software module". Advanced Logistic Systems - Theory and Practice 16, no. 1 (July 8, 2022): 41–50. http://dx.doi.org/10.32971/als.2022.004.

Abstract:
In the current trend of digitization, companies innovate their production processes at all levels of operation. This trend is an essential factor in maintaining competitiveness in global markets. Companies that have applied digitization and electronic data collection are intensifying these processes and extending them to further levels of business processes; those that have not used this trend so far are gradually introducing it. Data collection, archiving and transformation are essential to accelerating innovation in every company, and an important factor when working with data is the ability to process it correctly. Elements of digitization are now common not only at the level of modelling products, equipment and production lines, but also in the mapping of older production facilities that are still in use but have not yet been transformed into virtual form.
20

Germak, Oksana, Oksana Gugueva and Natalya Kalacheva. "Creation of digital terrain models using software applications". E3S Web of Conferences 281 (2021): 05008. http://dx.doi.org/10.1051/e3sconf/202128105008.

Abstract:
The article discusses the construction of a digital terrain model (DTM) in various programs, to be used for vertical planning design. One of the major challenges in this area is the creation of an accurate and realistic surface, which makes it possible to design a high-quality, compliant site. To address and analyse this problem, models of the same territory were built from the same initial data. The paper presents DTM construction algorithms, a method of constructing an irregular grid of heights, their graphical implementation, and an analysis.
21

Gao, Yingying, and Marijn Janssen. "The Open Data Canvas–Analyzing Value Creation from Open Data". Digital Government: Research and Practice 3, no. 1 (January 31, 2022): 1–15. http://dx.doi.org/10.1145/3511102.

Abstract:
Expectations to derive value from open data are high. However, how value is created from open data is still largely unknown. Open data value is usually generated in a constellation of actors in which each player has different capabilities and roles. To understand the open data value creation process, the business model canvas is introduced in this article. The typical components of the business model canvas and open data value creation are derived from the literature. By combining these two research streams, the open data value model canvas is created. The case of the Coronavirus disease 2019 (COVID-19) worldwide dashboard developed by the Johns Hopkins University is used to evaluate the model's utility. Key components of the open data value model are creating an overview of various data sources from public and private organizations, having capabilities to combine heterogeneous data, and connecting data and needs. In this way, the open data canvas helps to grasp the value creation logic.
22

Konotop, Dmytro. "Information system for creating a generalized model of complex technical objects". MECHANICS OF GYROSCOPIC SYSTEMS, no. 40 (December 26, 2021): 37–46. http://dx.doi.org/10.20535/0203-3771402020248764.

Abstract:
The creation of complex technical objects (CTOs, such as science-intensive engineering objects characterized by a number of elements and connections of 10⁶ or more) is a process containing long subprocesses, complex objects and models, and it is based on available standards and information technology (IT). It is known from practice that CTO models are created with the help of information systems, the components of CALS and PLM solutions. This process has the following shortcomings: models at different stages of CTO creation are not completely interconnected; CTO modelling takes place using different IT components of CALS and PLM solutions, which creates constant difficulties in data conversion and leads to partial or complete loss of model data; and there is no automated communication with other CTO models. A generalized model of complex technical objects is proposed which, based on a set-theoretic approach, establishes an information connection between the models used in the process of creating complex technical objects. An information system for creating the generalized model of complex technical objects is developed, which automates the processing and construction of models and supplements the technology of parallel PLM design and the CALS and PLM components of information technology for the task of creating models of complex technical objects.
23

De Geyter, S., M. Bassier and M. Vergauwen. "AUTOMATED TRAINING DATA CREATION FOR SEMANTIC SEGMENTATION OF 3D POINT CLOUDS". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-5/W1-2022 (February 3, 2022): 59–67. http://dx.doi.org/10.5194/isprs-archives-xlvi-5-w1-2022-59-2022.

Abstract:
The creation of as-built Building Information Modelling (BIM) models is currently mostly manual, which makes it time-consuming and error-prone. A crucial step that remains to be automated is the interpretation of the point clouds and the modelling of the BIM geometry. Research has shown that despite the advancements in semantic segmentation, the Deep Learning (DL) networks that are used in the interpretation do not achieve the necessary accuracy for market adoption. One of the main reasons is a lack of sufficient and representative labelled data to train these models. In this work, the possibility to use already conducted Scan-to-BIM projects to automatically generate highly needed training data in the form of labelled point clouds is investigated. More specifically, a pipeline is presented that uses real-world point clouds and their corresponding manually created BIM models. In doing so, realistic and representative training data is created. The presented paper is focussed on the semantic segmentation of 6 common structural BIM classes, representing the main structure of a building. The experiments show that the pipeline successfully creates new training data for a recent DL network.
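The core labelling idea, letting scan points inherit the class of the BIM element they fall inside, can be sketched by approximating elements as axis-aligned bounding boxes. The classes, boxes and points below are invented for illustration and are far simpler than the geometry handling in the actual pipeline.

```python
# Sketch of automated training-data creation: points from a real-world scan
# inherit the class of the BIM element whose (simplified) bounding box they
# fall inside, producing labelled points without manual annotation.

bim_elements = [
    ("wall",  (0.0, 0.0,  0.0, 0.2, 5.0, 3.0)),  # (xmin, ymin, zmin, xmax, ymax, zmax)
    ("floor", (0.0, 0.0, -0.1, 5.0, 5.0, 0.0)),
]

def label_points(points, elements, default="clutter"):
    """Assign each (x, y, z) point the class of the first box containing it."""
    labels = []
    for (x, y, z) in points:
        label = default
        for cls, (x0, y0, z0, x1, y1, z1) in elements:
            if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1:
                label = cls
                break
        labels.append(label)
    return labels

points = [(0.1, 2.0, 1.5), (2.5, 2.5, -0.05), (4.0, 4.0, 2.0)]
labels = label_points(points, bim_elements)
```

Points matching no element fall into a default "clutter" class, mirroring how real Scan-to-BIM scenes contain unmodelled objects.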
24

Harshit, Pallavi Chaurasia, Sisi Zlatanova and Kamal Jain. "Low-Cost Data, High-Quality Models: A Semi-Automated Approach to LOD3 Creation". ISPRS International Journal of Geo-Information 13, no. 4 (April 3, 2024): 119. http://dx.doi.org/10.3390/ijgi13040119.

Abstract:
In the dynamic realm of digital twin modeling, where advancements are swiftly unfolding, users now possess the unprecedented ability to capture and generate geospatial data in real time. This article delves into a critical exploration of this landscape by presenting a meticulously devised workflow tailored for the creation of Level of Detail 3 (LOD3) models. Our research methodology capitalizes on the integration of Apple LiDAR technology alongside photogrammetric point clouds acquired from Unmanned Aerial Vehicles (UAVs). The proposed process unfolds with the transformation of point cloud data into Industry Foundation Classes (IFC) models, which are subsequently refined into LOD3 Geographic Information System (GIS) models leveraging the Feature Manipulation Engine (FME) workbench 2022.1.2. This orchestrated synergy among Apple LiDAR, UAV-derived photogrammetric point clouds, and the transformative capabilities of the FME culminates in the development of precise LOD3 GIS models. Our proposed workflow revolutionizes this landscape by integrating multi-source point clouds, imbuing them with accurate semantics derived from IFC models, and culminating in the creation of valid CityGML LOD3 buildings through sophisticated 3D geometric operations. The implications of this technical innovation are profound. Firstly, it elevates the capacity to produce intricate infrastructure models, unlocking new vistas for modeling digital twins. Secondly, it extends the horizons of GIS applications by seamlessly integrating enriched Building Information Modeling (BIM) components, thereby enhancing decision-making processes and facilitating more comprehensive spatial analyses.
APA, Harvard, Vancouver, ISO, etc. styles
25

Pobiruchin, M., S. Bochum, U. M. Martens, M. Kieser and W. Schramm. "Automatic creation of disease models using data mining techniques on data from a clinical cancer registry". Value in Health 17, no. 3 (May 2014): A206. http://dx.doi.org/10.1016/j.jval.2014.03.1203.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
26

Plotnick, Roy E. "Creating Models for Interpreting Data". Paleontological Society Special Publications 11 (2002): 275–88. http://dx.doi.org/10.1017/s2475262200009989.

Full text source
Abstract:
For many of us, the word model may trigger remembrances of long-ago afternoons spent painstakingly gluing a small plastic car or airplane model together (for others, it may conjure up images of a somewhat emaciated young woman staring out of the cover of Vogue or Elle). The model car is not the same as a real car; it is made of different materials, it has many fewer parts, and it does not move. Nevertheless, it resembles a real automobile sufficiently that we recognize it as a realistic representation. Similarly, scientists use the term model to refer to a reconstruction of nature for the purpose of study (Levins, 1966). In other words, in order to understand nature, one may not always want to study it directly. Instead, understanding may come from studying a facsimile of nature that captures what is perceived to be its essential properties. In the same way, a child may learn how a car is built by building a plastic model of it, even if this model contains no moving parts.
APA, Harvard, Vancouver, ISO, etc. styles
27

Plotnick, Roy E. "Creating Models for Interpreting Data". Paleontological Society Special Publications 9 (1999): 343–58. http://dx.doi.org/10.1017/s2475262200014180.

Full text source
Abstract:
For many of us, the word model may trigger remembrances of long-ago afternoons spent painstakingly gluing a small plastic car or airplane model together (for others, it may conjure up images of a somewhat emaciated young woman staring out of the cover of Vogue or Elle). The model car is not the same as a real car; it is made of different materials; it has many fewer parts, and it does not move. Nevertheless, it resembles a real automobile sufficiently that we recognize it as a realistic representation. Similarly, scientists use the term model to refer to a reconstruction of nature for the purpose of study (Levins, 1966). In other words, in order to understand nature, one may not always want to study it directly. Instead, understanding may come from studying a facsimile of nature that captures what is perceived to be its essential properties. In the same way, a child may learn how a car is built by building a plastic model of it, even if this model contains no moving parts.
APA, Harvard, Vancouver, ISO, etc. styles
28

Kadhim, N., A. D. Mhmood and A. H. Abd-Ulabbas. "The creation of 3D building models using laser-scanning data for BIM modelling". IOP Conference Series: Materials Science and Engineering 1105, no. 1 (1.06.2021): 012101. http://dx.doi.org/10.1088/1757-899x/1105/1/012101.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
29

Kazmi, Ismail Khalid, Lihua You, Xiaosong Yang, Xiaogang Jin and Jian J. Zhang. "Efficient sketch-based creation of detailed character models through data-driven mesh deformations". Computer Animation and Virtual Worlds 26, no. 3-4 (May 2015): 469–81. http://dx.doi.org/10.1002/cav.1656.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
30

Fidosova, Lora, and Gergana Antova. "Three-Dimensional Modelling of Spatial Data in Urban Territory". IOP Conference Series: Earth and Environmental Science 906, no. 1 (1.11.2021): 012128. http://dx.doi.org/10.1088/1755-1315/906/1/012128.

Full text source
Abstract:
Abstract The content of the research is divided into four parts. The first part considers the need for 3D model creation: good practices applied in different countries related to Internet services for urban development and planning, preservation of cultural heritage, and scientific research. The second part focuses on the nature of 3D modeling, addressing theoretical issues concerning conceptual modeling, the classification of three-dimensional models, geometry and topology; different data formats are described. The third part provides an overview of the different 3D data sources and 3D modeling methods. The fourth part describes a specific software product for creating, editing and presenting 3D models, City Engine: its functionality, its specific possibilities for additional analysis and extraction of attribute information from the created models, and the programming language used to create three-dimensional models in the software environment. Practical tasks are then performed, which aim to compare the actual state of construction with the project values set in the general development plan for Sofia Municipality. 3D models of buildings in a neighbourhood in the Lozenets region were created, after which an additional analysis of the current state of construction was performed. The possibilities of the software for automatic generation of a street network are considered, as well as the functionality related to the modeling of facades.
APA, Harvard, Vancouver, ISO, etc. styles
31

Cummings, Mary L., and Songpo Li. "Subjectivity in the Creation of Machine Learning Models". Journal of Data and Information Quality 13, no. 2 (6.05.2021): 1–19. http://dx.doi.org/10.1145/3418034.

Full text source
Abstract:
Transportation analysts are inundated with requests to apply popular machine learning modeling techniques to datasets to uncover never-before-seen relationships that could potentially revolutionize safety, congestion, and mobility. However, the results from such models can be influenced not just by biases in underlying data, but also through practitioner-induced biases. To demonstrate the significant number of subjective judgments made in the development and interpretation of machine learning models, we developed Logistic Regression and Neural Network models for transportation-focused datasets including those looking at driving injury/fatalities and pedestrian fatalities. We then developed five different representations of feature importance for each dataset, including different feature interpretations commonly used in the machine learning community. Twelve distinct judgments were highlighted in the development and interpretation of these models, which produced inconsistent results. Such inconsistencies can lead to very different interpretations of the results, which can lead to errors of commission and omission, with significant cost and safety implications if policies are erroneously adapted from such outcomes.
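The kind of practitioner-induced judgment the authors describe can be illustrated with a toy example: two common feature-importance conventions, raw coefficient magnitude versus a one-standard-deviation effect size, rank the same two predictors in opposite orders. The variables, coefficients, and scales below are invented for illustration and are not taken from the paper's datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two predictors of a crash-injury outcome, on very different scales
# (hypothetical variables, chosen only to make the point).
speed_kmh = rng.normal(90.0, 20.0, n)            # continuous, wide scale
seatbelt = rng.integers(0, 2, n).astype(float)   # binary flag

# An assumed logistic model: small per-unit coefficient on the wide-scale feature.
coefs = np.array([0.05, -1.2])  # per-unit effects of speed and seatbelt

# Judgment 1: rank features by raw coefficient magnitude -> seatbelt looks dominant.
raw_rank = np.argsort(-np.abs(coefs))

# Judgment 2: rank by coefficient x feature std (effect of a 1-sigma change)
# -> speed looks dominant, because one sigma of speed is ~20 km/h.
scales = np.array([speed_kmh.std(), seatbelt.std()])
scaled_rank = np.argsort(-np.abs(coefs) * scales)

disagree = not np.array_equal(raw_rank, scaled_rank)
```

Both conventions are defensible, which is exactly the subjectivity the paper documents: the analyst's choice of representation, not the data, determines which feature is reported as "most important".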
APA, Harvard, Vancouver, ISO, etc. styles
32

Radło, Mariusz-Jan, Marek Ignor and Katarzyna Sum. "Regional Development Funds: Creation Rationale and Models of Operation". Economic and Regional Studies / Studia Ekonomiczne i Regionalne 17, no. 1 (1.03.2024): 1–20. http://dx.doi.org/10.2478/ers-2024-0001.

Full text source
Abstract:
Abstract Subject and purpose of work: The purpose of the article is to present the creation of Regional Development Funds (Regionalne Fundusze Rozwoju - RFR) in Poland, their organisational models and factors influencing their development and choice of model. The research is also focused on the analysis of the variety of RFRs and their sources. Materials and methods: The research collects and presents data on RFRs operating in Poland, and analyses these data in the context of selected legal and administrative conditions. Results: The analysis showed significant variation among RFRs in terms of functional/ organisational models and scale of operation. The choice of an operating model for an RDF is influenced by factors such as the maturity of regional financial markets, the amount of RDF funds and the types of financial instruments offered. Conclusions: The study highlights the development of the financial market segment working with local authorities and businesses over the past decade. It draws attention to an increasing importance of financial instruments and institutions in development policies implemented by voivodeship governments.
APA, Harvard, Vancouver, ISO, etc. styles
33

Sultanov, Murodjon, Gayrat Ishankhodjayev, Rano Parpiyeva and Nafisa Norboyeva. "Creation of intelligent information decision support systems". E3S Web of Conferences 365 (2023): 04031. http://dx.doi.org/10.1051/e3sconf/202336504031.

Full text source
Abstract:
The use of intelligent information decision support systems implies considering the specifics of the problem area. The object of study is characterized by the following features: the quality and efficiency of decision-making; vagueness of goals and institutional boundaries; the plurality of subjects involved in solving the problem; randomness; a plurality of mutually influencing factors; weak formalizability and uniqueness of situations; and latency, concealment and implicitness of information. For the efficient and reliable functioning of agricultural facilities and enterprises, it is necessary to create and implement intelligent information systems. Over the past quarter of a century, domestic information systems have undergone a progressive evolution, both in developing the theoretical principles of their construction and in implementing these systems. The restructuring of agriculture and the market conditions under which agricultural objects and enterprises function have their own characteristics and problems. Building the structure of intelligent decision support information systems is primarily associated with building a system model in which both traditional elements of the control system and knowledge processing models are defined. To solve these problems, methods of system analysis were used. The key research method is the optimization of the data representation structures of databases and knowledge bases. The following relational data representation structures have been identified: relations, attributes, and values. In the relational model, no special structures are allocated to represent data about entity relationships. Semantic networks use a three-level representation of data on entities and a four-level representation of data on entity relationships.
The conducted studies have shown that, in terms of data representation structures, entity-relationship models are a generalization and development of the structures of all traditional data models, since only in this data model are there four-level representations of both entities and relationships. All other traditional models are special cases of the most general entity-relationship model.
APA, Harvard, Vancouver, ISO, etc. styles
34

Rudikova, L. V., and E. V. Zhavnerko. "ABOUT DATA MODELING SUBJECT DOMAINS PRACTICE-ORIENTED DIRECTION FOR UNIVERSAL SYSTEM OF STORAGE AND PROCESSING DATA". «System analysis and applied information science», no. 3 (2.11.2017): 4–12. http://dx.doi.org/10.21122/2309-4923-2017-3-4-12.

Full text source
Abstract:
This article describes data modeling for practice-oriented subject domains as the basis of a general data model for data warehouse creation. It briefly characterizes subject domains related to different types of human activity at the current time. Appropriate data models are offered, and the relationships between them are considered for data processing and data warehouse creation. The warehouse can be built on information storage technology with characteristics such as an extensible, complex subject domain; integration of data obtained from any data sources; time-invariant data with the required temporal marks; relatively high data stability; reasonable compromises in data redundancy; modular system blocks; a flexible and extensible architecture; and high requirements for data storage security. A general approach to data collection and storage is proposed; the corresponding data models will later be integrated into one database schema, creating a generalized data warehouse schema of the "constellation of facts" type. Structural methodology and general principles of conceptual design are applied to obtain the data models. A complex system that works with several information sources and presents data in a form convenient for users will be in demand for analyzing data from the selected subject domains and determining possible relationships.
APA, Harvard, Vancouver, ISO, etc. styles
35

Vàzquez, Mercè, Antoni Oliver and Elisabeth Casademont. "Using open data to create the Catalan IATE e-dictionary". Terminology 25, no. 2 (26.11.2019): 175–97. http://dx.doi.org/10.1075/term.00035.vaz.

Full text source
Abstract:
Abstract Linguistic resources available in the form of open data are an essential source of information for creating e-dictionaries, but access to these linguistic resources is still limited. This paper presents a method for maximising use of open access linguistic resources and integrating them into specialised e-dictionaries. The method combines automatic compilation of terminology data with the creation of specialised linguistic corpora to produce a Catalan version of the IATE (InterActive Terminology for Europe) database. The paper presents new methodological advances applied here to the production of terminological e-dictionaries, using open access linguistic resources. We observe that, as a result of this new methodology, the Catalan version of the IATE will be able to include specialised economics, law and health dictionaries. In conclusion, the new methodology presented here permits the creation of new models of specialised e-dictionaries, facilitates the compilation of terminology in any language and unifies the access format for terminology data.
APA, Harvard, Vancouver, ISO, etc. styles
36

Ten, Yuri, Olga Schukina and Albina Valieva. "Creation of topographic plans using unmanned aerial photography". E3S Web of Conferences 381 (2023): 01020. http://dx.doi.org/10.1051/e3sconf/202338101020.

Full text source
Abstract:
The Agisoft Photoscan program is a universal tool for generating 3D surface models of surveyed objects from photographic images. Photoscan is successfully used both for building models of objects at different scales, from miniature archaeological artifacts to large buildings and structures, and for building terrain models from aerial photography data, generating DEMs and orthomosaics from these models. Data processing in Photoscan is highly automated: the operator is entrusted only with monitoring and managing the operating modes of the program. This article discusses the use of aerial photography from an unmanned aerial vehicle (UAV) to create digital orthophotos and topographic plans, and the improvement of applied UAV systems for obtaining aerial survey materials for cartographic purposes. The ease of UAV control, the deadlines and quality achieved, and the timeliness of the transmitted data confirm, in our opinion, the feasibility of using UAVs to create digital topographic maps and plans.
APA, Harvard, Vancouver, ISO, etc. styles
37

STEBELETSKYI, Myroslav, Eduard MANZIUK, Tetyana SKRYPNYK and Ruslan BAHRIY. "METHOD OF BUILDING ENSEMBLES OF MODELS FOR DATA CLASSIFICATION BASED ON DECISION CORRELATIONS". Herald of Khmelnytskyi National University. Technical sciences 315, no. 6(1) (29.12.2022): 224–33. http://dx.doi.org/10.31891/2307-5732-2022-315-6-224-233.

Full text source
Abstract:
The scientific work highlights the problem of increasing the accuracy of binary classification predictions using machine learning algorithms. Over the past few decades, systems that consist of many machine learning algorithms, also called ensemble models, have received increasing attention in the computational intelligence and machine learning community. This attention is well deserved, as ensemble systems have proven to be very effective and extremely versatile in a wide range of problem domains and real-world applications. One algorithm may not make a perfect prediction for a particular data set. Machine learning algorithms have their limitations, so creating a model with high accuracy is a difficult task. By creating and combining several models and aggregating their results, there is a chance to improve the overall accuracy; this is the problem ensembling addresses. The basis of the binary classification information system is the ensemble model. This model, in turn, contains a set of unique combinations of base classifiers, a kind of algorithmic primitive. An ensemble model can be considered a meta-algorithm consisting of unique sets of machine learning (ML) classification algorithms. The task of the ensemble model is to find the combination of base classification algorithms that gives the highest performance, evaluated according to the main ML metrics for classification tasks. Another aspect of the work is the creation of an aggregation mechanism for combining the results of the base classification algorithms: each unique combination within the ensemble consists of a set of base models (predictors) whose results must be aggregated. In this work, a non-hierarchical clustering method is used to aggregate (average) the predictions of the base models. A feature of this study is finding the correlation coefficients of the base models in each combination.
Using the magnitude of these correlations, the relationship between a base classifier's prediction and the true value is established, opening space for further research on improving the ensemble model (meta-algorithm).
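The core idea, measuring how strongly each base classifier's decisions correlate with the truth and then aggregating the combination's predictions, can be sketched in a few lines. The simulated classifiers and flip rates below are invented stand-ins for trained base models, and simple majority voting stands in for the clustering-based aggregation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)  # ground-truth binary labels

def noisy_copy(y, flip_rate, rng):
    """Simulate a base classifier: the truth with a fraction of labels flipped."""
    flips = rng.random(y.size) < flip_rate
    return np.where(flips, 1 - y, y)

# Three base classifiers of decreasing quality (10%, 20%, 40% error).
preds = np.stack([noisy_copy(y_true, r, rng) for r in (0.1, 0.2, 0.4)])

# Correlation of each base model's decisions with the true labels:
# a stronger classifier shows a higher correlation coefficient.
corrs = np.array([np.corrcoef(p, y_true)[0, 1] for p in preds])

# Aggregate the combination by majority vote across base models.
votes = preds.sum(axis=0)
ensemble = (votes >= 2).astype(int)
ensemble_acc = (ensemble == y_true).mean()
```

Ranking candidate combinations by metrics like `ensemble_acc`, and inspecting `corrs` to see which predictors drive a combination, mirrors the search over base-classifier combinations described in the abstract.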
APA, Harvard, Vancouver, ISO, etc. styles
38

Troisi, Orlando, Anna Visvizi and Mara Grimaldi. "Digitalizing business models in hospitality ecosystems: toward data-driven innovation". European Journal of Innovation Management 26, no. 7 (4.04.2023): 242–77. http://dx.doi.org/10.1108/ejim-09-2022-0540.

Full text source
Abstract:
Purpose: Digitalization accelerates the need of tourism and hospitality ecosystems to reframe business models in line with a data-driven orientation that can foster value creation and innovation. Since the question of data-driven business models (DDBMs) in hospitality remains underexplored, this paper aims at (1) revealing the key dimensions of the data-driven redefinition of business models in smart hospitality ecosystems and (2) conceptualizing the key drivers underlying the emergence of innovation in these ecosystems. Design/methodology/approach: The empirical research is based on semi-structured interviews collected from a sample of hospitality managers, employed in three different accommodation services, i.e. hotels, bed and breakfast (B&Bs) and guesthouses, to explore data-driven strategies and practices employed on site. Findings: The findings allow to devise a conceptual framework that classifies the enabling dimensions of DDBMs in smart hospitality ecosystems. Here, the centrality of strategy conducive to the development of data-driven innovation is stressed. Research limitations/implications: The study thus developed a conceptual framework that will serve as a tool to examine the impact of digitalization in other service industries. This study will also be useful for small and medium-sized enterprises (SMEs) managers, who seek to understand the possibilities data-driven management strategies offer in view of stimulating innovation in the managers' companies. Originality/value: The paper reinterprets value creation practices in business models through the lens of data-driven approaches. In this way, this paper offers a new (conceptual and empirical) perspective to investigate how the hospitality sector at large can use the massive amounts of data available to foster innovation in the sector.
APA, Harvard, Vancouver, ISO, etc. styles
39

Kaufmann, Michael. "Big Data Management Canvas: A Reference Model for Value Creation from Data". Big Data and Cognitive Computing 3, no. 1 (11.03.2019): 19. http://dx.doi.org/10.3390/bdcc3010019.

Full text source
Abstract:
Many big data projects are technology-driven and thus, expensive and inefficient. It is often unclear how to exploit existing data resources and map data, systems and analytics results to actual use cases. Existing big data reference models are mostly either technological or business-oriented in nature, but do not consequently align both aspects. To address this issue, a reference model for big data management is proposed that operationalizes value creation from big data by linking business targets with technical implementation. The purpose of this model is to provide a goal- and value-oriented framework to effectively map and plan purposeful big data systems aligned with a clear value proposition. Based on an epistemic model that conceptualizes big data management as a cognitive system, the solution space of data value creation is divided into five layers: preparation, analysis, interaction, effectuation, and intelligence. To operationalize the model, each of these layers is subdivided into corresponding business and IT aspects to create a link from use cases to technological implementation. The resulting reference model, the big data management canvas, can be applied to classify and extend existing big data applications and to derive and plan new big data solutions, visions, and strategies for future projects. To validate the model in the context of existing information systems, the paper describes three cases of big data management in existing companies.
APA, Harvard, Vancouver, ISO, etc. styles
40

Agarwal, Vibhu, Tanya Podchiyska, Juan M. Banda, Veena Goel, Tiffany I. Leung, Evan P. Minty, Timothy E. Sweeney, Elsie Gyang and Nigam H. Shah. "Learning statistical models of phenotypes using noisy labeled training data". Journal of the American Medical Informatics Association 23, no. 6 (12.05.2016): 1166–73. http://dx.doi.org/10.1093/jamia/ocw028.

Full text source
Abstract:
Abstract Objective Traditionally, patient groups with a phenotype are selected through rule-based definitions whose creation and validation are time-consuming. Machine learning approaches to electronic phenotyping are limited by the paucity of labeled training datasets. We demonstrate the feasibility of utilizing semi-automatically labeled training sets to create phenotype models via machine learning, using a comprehensive representation of the patient medical record. Methods We use a list of keywords specific to the phenotype of interest to generate noisy labeled training data. We train L1 penalized logistic regression models for a chronic and an acute disease and evaluate the performance of the models against a gold standard. Results Our models for Type 2 diabetes mellitus and myocardial infarction achieve precision and accuracy of 0.90, 0.89, and 0.86, 0.89, respectively. Local implementations of the previously validated rule-based definitions for Type 2 diabetes mellitus and myocardial infarction achieve precision and accuracy of 0.96, 0.92 and 0.84, 0.87, respectively. We have demonstrated feasibility of learning phenotype models using imperfectly labeled data for a chronic and acute phenotype. Further research in feature engineering and in specification of the keyword list can improve the performance of the models and the scalability of the approach. Conclusions Our method provides an alternative to manual labeling for creating training sets for statistical models of phenotypes. Such an approach can accelerate research with large observational healthcare datasets and may also be used to create local phenotype models.
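The modeling step described above, fitting an L1-penalized logistic regression to imperfect labels and evaluating against a gold standard, can be sketched on synthetic data. Everything below (the features, the 10% noise rate, the penalty strength) is invented for illustration; the paper's actual features come from patient records and its noisy labels from a keyword heuristic.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 2000, 20

# Synthetic standardized patient features; only the first two are predictive.
X = rng.normal(size=(n, d))
gold = (X[:, 0] + X[:, 1] > 0).astype(float)  # unobserved gold-standard phenotype

# "Noisy labeling": as if a keyword heuristic mislabeled ~10% of patients.
flip = rng.random(n) < 0.10
y = np.where(flip, 1.0 - gold, gold)

# L1-penalized logistic regression via proximal gradient descent.
lr, lam = 0.5, 0.05
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * (p - y).mean()
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold (L1)

# Evaluate the noisily trained model against the gold standard.
acc_vs_gold = ((X @ w + b > 0) == (gold > 0)).mean()
n_zero = int((w == 0.0).sum())
```

The L1 penalty zeroes out the irrelevant coefficients while the model trained on noisy labels still recovers the gold-standard boundary well, which is the feasibility claim of the paper in miniature.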
APA, Harvard, Vancouver, ISO, etc. styles
41

Sefercik, U. G., T. Kavzoglu, M. Nazar, C. Atalay and M. Madak. "UAV-BASED 3D VIRTUAL TOUR CREATION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-4/W5-2021 (23.12.2021): 493–99. http://dx.doi.org/10.5194/isprs-archives-xlvi-4-w5-2021-493-2021.

Full text source
Abstract:
Abstract. Lately, improvements in game engines have increased the interest in virtual reality (VR) technologies, that engages users with an artificial environment, and have led to the adoption of VR systems to display geospatial data. Because of the ongoing COVID-19 pandemic, and thus the necessity to stay at home, VR tours became very popular. In this paper, we tried to create a three-dimensional (3D) virtual tour for Gebze Technical University (GTU) Southern Campus by transferring high-resolution unmanned air vehicle (UAV) data into a virtual domain. UAV data is preferred in various applications because of its high spatial resolution, low cost and fast processing time. In this application, the study area was captured from different modes and altitudes of UAV flights with a minimum ground sampling distance (GSD) of 2.18 cm using a 20 MP digital camera. The UAV data was processed in Structure from Motion (SfM) based photogrammetric evaluation software Agisoft Metashape and high-quality 3D textured mesh models were generated. Image orientation was completed using an optimal number of ground control points (GCPs), and the geometric accuracy was calculated as ±8 mm (~0.4 pixels). To create the VR tour, UAV-based mesh models were transferred into the Unity game engine and optimization processes were carried out by applying occlusion culling and space subdivision algorithms. To improve the visualization, 3D object models such as trees, lighting poles and arbours were positioned on VR. Finally, textual metadata about buildings and a player with a first-person camera were added for an informative VR experience.
APA, Harvard, Vancouver, ISO, etc. styles
42

Faroukhi, Abou Zakaria, Imane El Alaoui, Youssef Gahi and Aouatif Amine. "An Adaptable Big Data Value Chain Framework for End-to-End Big Data Monetization". Big Data and Cognitive Computing 4, no. 4 (23.11.2020): 34. http://dx.doi.org/10.3390/bdcc4040034.

Full text source
Abstract:
Today, almost all active organizations manage a large amount of data from their business operations with partners, customers, and even competitors. They rely on Data Value Chain (DVC) models to handle data processes and extract hidden values to obtain reliable insights. With the advent of Big Data, operations have become increasingly more data-driven, facing new challenges related to volume, variety, and velocity, and giving birth to another type of value chain called Big Data Value Chain (BDVC). Organizations have become increasingly interested in this kind of value chain to extract confined knowledge and monetize their data assets efficiently. However, few contributions to this field have addressed the BDVC in a synoptic way by considering Big Data monetization. This paper aims to provide an exhaustive and expanded BDVC framework. This end-to-end framework allows us to handle Big Data monetization to make organizations’ processes entirely data-driven, support decision-making, and facilitate value co-creation. For this, we present a comprehensive review of existing BDVC models relying on some definitions and theoretical foundations of data monetization. Next, we expose research carried out on data monetization strategies and business models. Then, we offer a global and generic BDVC framework that supports most of the required phases to achieve data valorization. Furthermore, we present both a reduced and full monetization model to support many co-creation contexts along the BDVC.
APA, Harvard, Vancouver, ISO, etc. styles
43

Riveiro, B., B. Conde-Carnero, H. González-Jorge, P. Arias and J. C. Caamaño. "AUTOMATIC CREATION OF STRUCTURAL MODELS FROM POINT CLOUD DATA: THE CASE OF MASONRY STRUCTURES". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W5 (19.08.2015): 3–9. http://dx.doi.org/10.5194/isprsannals-ii-3-w5-3-2015.

Full text source
Abstract:
One of the fields where 3D modelling has an important role is in the application of such 3D models to structural engineering purposes. The literature shows an intense activity on the conversion of 3D point cloud data to detailed structural models, which has special relevance in masonry structures where geometry plays a key role. In the work presented in this paper, color data (from Intensity attribute) is used to automatically segment masonry structures with the aim of isolating masonry blocks and defining interfaces in an automatic manner using a 2.5D approach. An algorithm for the automatic processing of laser scanning data based on an improved marker-controlled watershed segmentation was proposed and successful results were found. Geometric accuracy and resolution of point cloud are constrained by the scanning instruments, giving accuracy levels reaching a few millimetres in the case of static instruments and few centimetres in the case of mobile systems. In any case, the algorithm is not significantly sensitive to low quality images because acceptable segmentation results were found in cases where blocks could not be visually segmented.
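A minimal marker-controlled watershed in the spirit of the segmentation described above can be run with SciPy on a synthetic intensity raster. The wall layout and hand-placed seeds below are invented, and SciPy's `watershed_ift` stands in for the paper's improved marker-controlled algorithm, which derives its markers automatically.

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic laser-intensity raster: two bright masonry blocks separated by a
# dark mortar joint (a toy stand-in for the 2.5D intensity image of a wall).
img = np.zeros((60, 120), dtype=np.uint8)
img[10:50, 10:55] = 255   # block 1
img[10:50, 65:110] = 255  # block 2

# Invert so bright blocks become basins and the dark joint becomes a ridge.
elevation = (255 - img).astype(np.uint8)

# Marker-controlled watershed: one hand-placed seed per block plus background.
markers = np.zeros(img.shape, dtype=np.int16)
markers[30, 30] = 1   # seed inside block 1
markers[30, 90] = 2   # seed inside block 2
markers[0, 0] = 3     # background / mortar marker

# Flooding from the markers assigns each block's pixels to its own label,
# splitting the wall along the low-intensity joint.
labels = ndi.watershed_ift(elevation, markers)
```

On real scans, the markers would come from the improved marker-extraction step rather than manual placement, but the flooding principle that isolates individual blocks is the same.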
APA, Harvard, Vancouver, ISO, etc. styles
44

Pokojna, Hana, Caroline Erolin and Christopher Henstridge. "The transparent minds: methods of creation of 3D digital models from patient specific data". Journal of Visual Communication in Medicine 45, no. 2 (12.01.2022): 17–31. http://dx.doi.org/10.1080/17453054.2021.2008230.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
45

Mendelev, M. I. "Creation of two-component liquid alloys computer models from data of two diffraction experiments". Physica B: Condensed Matter 262, no. 1-2 (February 1999): 40–48. http://dx.doi.org/10.1016/s0921-4526(98)00658-9.

Full text source
APA, Harvard, Vancouver, ISO, etc. styles
46

Rudnytskyi, Volodymyr, Nataliia Lada, Dmytro Pidlasyi and Olga Melnyk. "SYNTHESIS OF DISCRETE AND ALGEBRAIC MODELS OF ELEMENTARY FUNCTIONS OF DATA-CONTROLLED OPERATIONS". Cybersecurity: Education, Science, Technique 3, no. 23 (2024): 6–16. http://dx.doi.org/10.28925/2663-4023.2024.23.616.

Full text source
Abstract:
The improvement of modern data exchange applications increases the complexity of cybersecurity, which will render most applicable low-cost cryptographic algorithms ineffective in the near future. On the other hand, CET encryption offers a great opportunity for the development of low-cost cryptography. This article analyzes previously published results of modeling CET operations, which serve as the foundation of CET encryption. These CET operations are built from elementary functions. Our analysis shows that the elementary functions of data-controlled operations have not been researched before. The primary goal of this article is to research these elementary functions and to develop a method for synthesizing a group of them, which can help automate the process of creating CET operations with defined attributes. The article shows that the known discrete models of elementary functions of data-controlled operations do not represent their content and usage specifics during the creation of CET operations. We suggest using discrete and algebraic representations for modeling these functions. Analysis of the synthesized models allows us to develop a method for their synthesis, adapted for use in automated systems for CET-operation modeling. We also provide examples of models of CET operations created from elementary functions of data-controlled operations. The proposed synthesis method expands the possibilities for generating these elementary functions within the automated system used for the research and creation of CET operations.
The presented scientific results can be used for experimental modeling of CET operations, with the implementation algorithms of such operations defined by the operations themselves as well as by the transformed data. Using these operations allows the modification of cryptographic algorithms controlled by the encrypted data.
APA, Harvard, Vancouver, ISO etc. styles
47

Vagizov, Marsel R., Eugenie P. Istomin, Valerie L. Miheev, Artem P. Potapov and Natalya V. Yagotinceva. "Visual Digital Forest Model Based on a Remote Sensing Data and Forest Inventory Data". Remote Sensing 13, no. 20 (13.10.2021): 4092. http://dx.doi.org/10.3390/rs13204092.

Full text source
Abstract:
This article discusses the process of creating a digital forest model based on remote sensing data, three-dimensional modeling, and forest inventory data. Remote sensing data of the Earth provide a fundamental tool for integrating subsequent objects into a digital forest model, enabling the creation of an accurate digital model of a selected forest quarter by using forest inventory data in educational and experimental forestry, and providing a valuable and extensive database of forest characteristics. The formalization and compilation of technologies for connecting forest inventory databases and remote sensing data with the construction of three-dimensional tree models for a dynamic display of changes in forests provide an additional source of data for obtaining new knowledge. The quality of forest resource management can be improved by obtaining the most accurate details of the current state of forests. Using machine learning and regression analysis methods as part of a digital model, it is possible to visually assess the course of planting growth, changes in species composition, and other morphological characteristics of forests. The goal of digital, interactive forest modeling is to create virtual simulations of the future status of forests using a combination of predictive forest inventory models and machine learning technology. The research findings provide a basic idea and technique for developing local digital forest models based on remote sensing and data integration technologies.
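The regression analysis the authors mention for assessing planting growth can be illustrated with a minimal sketch: fitting a straight line to forest inventory records and extrapolating stand height. The inventory figures and the `fit_line` helper below are hypothetical, not taken from the paper:

```python
# Hedged sketch: predicting mean stand height from stand age with ordinary
# least squares, one of the regression tools usable in a digital forest model.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y ~ slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical inventory records: stand age (years) vs. mean height (m).
ages = [10, 20, 30, 40, 50]
heights = [4.0, 9.0, 13.0, 18.0, 22.0]

slope, intercept = fit_line(ages, heights)
print(f"predicted height at age 60: {slope * 60 + intercept:.1f} m")
```

In a real digital forest model the same fit would be repeated per species and site class, with machine-learning models taking over where growth is non-linear.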
APA, Harvard, Vancouver, ISO etc. styles
48

Kashyap, Shubhankar, and Avantika Singh. "Testing Data Feminism in India". Scholars Journal of Arts, Humanities and Social Sciences 9, no. 10 (25.05.2021): 516–30. http://dx.doi.org/10.36347/sjahss.2021.v09i10.006.

Full text source
Abstract:
Catherine D'Ignazio and Lauren F. Klein, in their book Data Feminism, argue that data models reflect existing power structures and social hierarchies. We aim to test this hypothesis in India on Instagram. Instagram is one of the most accessible and politically engaging social media platforms in India, which makes its data models appropriate subjects for our study. The research question of our study is: "Do Instagram data models disproportionately prioritise accounts that publish majoritarian feminist content over intersectional feminist content in India?" The paper employs two methodological approaches: an experimental set-up under controlled settings for primary data collection, following a positivist sociological approach in a time-bound observational study, and qualitative analysis of secondary data. This paper first analyses the biases and preconceived notions that cloud digital data models. It further elaborates upon the concept of data justice, which acknowledges the historical inequalities and power differentials among communities that drive data collection. The paper attempts to test this hypothesis through an experiment. The experiment includes the creation of two Instagram accounts dedicated to two different forms of feminism. Account "A" would publish content ascribing to popular, i.e. non-intersectional, feminist ideals, and account "B" would publish intersectional feminist content. The creation of new accounts is critical for establishing a causal relationship between data models and disparity in account growth, as a pre-established follower count would affect the accounts' engagement. For a period of three months, both accounts will employ the same strategies to increase user engagement and reach. The impact of these strategies on metrics such as follower count, post likes, post reshares, post comments, profile visits, frequency and duration of story views, and duration of post visibility would be documented through a weekly monitoring system. The data collected…
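The weekly monitoring protocol described in the abstract reduces, at its core, to comparing growth metrics across the two accounts. A minimal sketch with invented follower numbers and a hypothetical `growth_rate` helper (nothing here comes from the study itself):

```python
# Illustrative only: week-by-week follower counts for the two hypothetical
# accounts ("A" = majoritarian feminist content, "B" = intersectional content).
weekly_followers = {
    "A": [100, 180, 290, 430],
    "B": [100, 130, 165, 205],
}

def growth_rate(series):
    """Relative follower growth across the whole observation window."""
    return (series[-1] - series[0]) / series[0]

for account, series in weekly_followers.items():
    print(account, f"{growth_rate(series):.0%}")
```

Starting both accounts from the same baseline, as the authors require, is what lets a gap in these rates be attributed to the platform's data models rather than to pre-existing audiences.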
APA, Harvard, Vancouver, ISO etc. styles
49

Dic, Michal, Miriam Pekarčíková, Jozef Trojan and Ján Kopec. "DATA PROCESSING FOR CREATING SIMULATION MODELS". Acta Simulatio 7, no. 2 (30.06.2021): 7–11. http://dx.doi.org/10.22306/asim.v7i2.60.

Full text source
Abstract:
This article is devoted to the main element of the Industry 4.0 concept, which is vertical integration software that connects the necessary parts of manufacturing companies. Software that meets these criteria is required to ensure error-free bi-directional communication and data transfer between IT and OT networks. Competition and technological progress keep pushing the possibilities of MES further. As a result, MES is becoming an integral element that takes businesses to the next level. If we imagine all activities as operations connected by computer networks and analytical tools, the result is even greater efficiency in industry and business. Analytical tools also provide useful support for customer service.
APA, Harvard, Vancouver, ISO etc. styles
50

Janićijević, Stefana, Đorđe Petrović and Miodrag Stefanović. "Sales prediction on e-commerce platform, by using data mining model". Serbian Journal of Engineering Management 5, no. 2 (2020): 60–76. http://dx.doi.org/10.5937/sjem2002060j.

Full text source
Abstract:
In this paper we applied a twinning algorithm to products sold via an e-commerce platform. To establish relatively homogeneous product groups that were on sale on this e-commerce platform during the last year, it was necessary to build a predictive mathematical model. We determined a set of relevant variables to represent group attributes, and we applied the K-means algorithm, a Market Basket model, and a Vector Distance model. Based on an analysis of basic and derived variables, a fixed number of clusters was introduced. The silhouette index was used to detect whether these clusters are compact. Using these cluster separations, we created models that detect similar products and analyze the probability of sales for each product. The analysis results can be used for planning future sales campaigns, optimizing marketing expenses, creating new loyalty programs, and better understanding customer behavior in general.
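The silhouette index the authors use to check cluster compactness can be sketched in a few lines. The implementation below is a generic illustration on hypothetical one-dimensional product data, not the authors' code:

```python
# Minimal silhouette index: for each point, a = mean distance to its own
# cluster, b = mean distance to the nearest other cluster; score = (b-a)/max(a,b).

def silhouette(points, labels):
    """Mean silhouette coefficient over all points (1-D Euclidean distance)."""
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        own = [abs(p - q) for j, (q, l) in enumerate(zip(points, labels))
               if l == lab and j != i]
        a = sum(own) / len(own) if own else 0.0
        b = min(
            sum(abs(p - q) for q, l in zip(points, labels) if l == other)
            / sum(1 for l in labels if l == other)
            for other in set(labels) if other != lab
        )
        scores.append(0.0 if max(a, b) == 0 else (b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated product groups: compact clusters score close to 1.
points = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
labels = [0, 0, 0, 1, 1, 1]
print(round(silhouette(points, labels), 2))  # → 0.95
```

Values near 1 indicate compact, well-separated clusters; values near 0 or below suggest the chosen number of clusters does not fit the data.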
APA, Harvard, Vancouver, ISO etc. styles