Dissertations / Theses on the topic 'Commercial loans Data processing'


Consult the top 25 dissertations and theses for your research on the topic 'Commercial loans Data processing.'


1

Tian, Zhimin. "Essays on economic consequences of inside director reputation." HKBU Institutional Repository, 2014. https://repository.hkbu.edu.hk/etd_oa/62.

Abstract:
This study consists of two essays. The first essay investigates whether non-CEO inside director reputation matters in bank loan contracting. Reputable inside directors (RIDs) can improve borrowers’ financial reporting quality and reduce agency risk in loan contracting. Based on regression analysis of 5,104 loan facilities in the U.S. during 1999-2007, I find that borrowers with RIDs enjoy lower loan interest rates and fewer restrictive covenants, and are less likely to have their loans secured by collateral, compared with borrowers without RIDs. The results still hold after I control for CEO reputation and address the endogeneity of RIDs and the joint determination of various loan contracting terms. These findings shed new light on the impact of director-level reputation in the bank loan market. The second essay investigates whether non-CEO inside directors with reputation incentives affect the effectiveness of a firm’s internal control over financial reporting. Internal control effectiveness is an important indicator of financial reporting quality. Using a large sample of 7,352 firm-year observations from 2004 to 2012, I find that firms with RIDs are less likely to have reported internal control weaknesses (ICWs). I also find that RIDs have a more pervasive impact on account-level ICWs than on company-level ICWs. Empirical results also demonstrate that the association between RIDs and ICWs is more pronounced for firms with lower audit quality, higher CEO entrenchment, and a higher cost of misreporting. Further tests show that RIDs can help improve earnings quality by mitigating ICWs. The study results still hold after I control for CEO reputation, employ alternative proxies and estimation methods, and address the potential endogeneity of RIDs.
The study findings add to the few empirical studies examining the determinants of ICWs and have corporate governance policy implications for regulators by supporting the desirable role of inside directors in efficient contracting.
2

Vuorio, R. (Riikka). "Use of public sector’s open spatial data in commercial applications." Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201311201883.

Abstract:
The objective of this study was to analyse how young Finnish information technology (IT) companies utilize the public sector’s open spatial data. The aim was to find out to what extent companies use the public sector’s open spatial data in products and how they are using it. In addition, defects related to the data and its use, and companies’ awareness of public sector open data, were canvassed; defects and unawareness might prevent or retard the utilization of the public sector’s data. The public sector collects vast amounts of data from various areas when performing public tasks. The major part of this data is spatial, meaning it has a location aspect. The public sector is opening the data for everybody to use freely, and companies could use this open spatial data for commercial purposes. High expectations have been set for the data opening: along with it, innovations and business (new companies and digital products) will be created. The European Union has greatly promoted the opening of public sector data through its legislative actions: first with the PSI Directive (the directive on the re-use of public sector information) and later with the INSPIRE Directive (the directive on establishing an Infrastructure for Spatial Information in the European Community). Both directives aim to facilitate the re-use and dissemination of public sector data, while the INSPIRE Directive focuses on the use of interoperable spatial data by creating a spatial data infrastructure. Even if these developments are still ongoing, they have already created possibilities for companies to use public sector data, especially spatial data. This study was quantitative by nature, and the empirical data was collected through an online survey targeted at randomly selected Finnish IT companies established during the years 2009–2012. The data was analyzed using descriptive statistics, and the results can be generalized to the whole target population in Finland.
The results of this study show that the number of companies utilizing the public sector’s open spatial data is small and that the data has not yet enabled the establishment of new companies. However, companies have developed a few new products with the contribution of the public sector’s open spatial data, and the value of the data to those products is not minor. The thesis concludes that there is a need for greater investment in promoting the public sector’s open data amongst companies: awareness of the public sector’s open spatial data could be increased. In addition, the coverage of datasets and interface services could be improved. By eliminating these defects, the number of utilizers of the public sector’s open spatial data might increase. There are now quiet signs of business awakening to the utilization of public sector data.
3

Hines, Dennis O., Donald C. Rhea, and Guy W. Williams. "ADVANCED DATA ACQUISITION AND PROCESSING SYSTEMS (ADAPS) UPDATE." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/608546.

Abstract:
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California
The rapid technology growth in the aerospace industry continues to manifest itself in increasingly complex computer systems and weapons system platforms. To meet the data processing challenges associated with these new weapons systems, the Air Force Flight Test Center (AFFTC) is developing the next generation of data acquisition and processing systems under the Advanced Data Acquisition and Processing Systems (ADAPS) Program. The ADAPS program has evolved into an approach that utilizes Commercial-Off-The-Shelf (COTS) components as the foundation for Air Force enhancements to meet specific customer requirements. The ADAPS program has transitioned from concept exploration to engineering and manufacturing development (EMD), including the completion of a detailed requirements analysis and an overall system design. This paper discusses the current status of the ADAPS program, including the requirements analysis process, details of the system design, and the results of current COTS acquisitions.
4

McJannet, Lawrence George 1952. "REALIZATION OF A REGULAR FACILITY BLOCK PLAN FROM AN ADJACENCY GRAPH USING GRAPH THEORETIC BASED HEURISTICS." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/275537.

5

Chung, Kit-lun, and 鐘傑麟. "Intelligent agent for Internet Chinese financial news retrieval." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B30106503.

6

Alawady, Amro M. "TURNKEY TELEMETRY DATA ACQUISITION AND PROCESSING SYSTEMS UTILIZING COMMERCIAL OFF THE SHELF (COTS) PRODUCTS." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/608369.

Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
This paper discusses turnkey telemetry data acquisition and analysis systems. A brief history of previous systems used at Lockheed Martin Vought Systems is presented. Then, the paper describes systems that utilize more COTS hardware and software and discusses the time and resources saved by integrating these products into a complete system along with a description of what some newer systems will offer.
7

顧銘培 and Ming-pui Ku. "The essentials of project management in tackling the change of year 2000 on computer systems of an airline." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31267993.

8

Euawatana, Teerapong. "Implementation business-to-business electronic commercial website using ColdFusion 4.5." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1917.

Abstract:
This project used ColdFusion 4.5 to build and implement a commercial web site that presents a real picture of electronic commerce. It is intended to provide enough information for other students who are looking for a guideline for further study and who want to improve their business skills from an information management perspective.
9

Du, Yun Yan. "Legal recognition and implications of electronic bill of lading in international business : international legal developments and the legal status in China." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2487632.

10

Tembo, Rachael. "Information and communication technology usage trends and factors in commercial agriculture in the wine industry." Thesis, [S.l. : s.n.], 2008. http://dk.cput.ac.za/cgi/viewcontent.cgi?article=1066&context=td_cput.

11

Luque, N. E. "Cluster dynamics in the Basque region of Spain." Thesis, Coventry University, 2011. http://curve.coventry.ac.uk/open/items/4f4161ca-11db-4d70-9954-aea64f4fbaa4/1.

Abstract:
Developing and retaining competitive advantage was a major concern for all companies; it fundamentally relied on awareness of the external environment and customer satisfaction. Changes in environmental conditions and unexpected economic events could cause a loss of organisational adjustment and a subsequent loss in competitiveness; only those organisations able to adjust rapidly to these dynamics would remain. In some instances, companies decided to co-locate geographically, seeking economies of scale and benefiting from complementarities. The literature review revealed the strong support that clusters had from government and local authorities, but it also highlighted the limited practical research in the field. The aim of this research was to measure the dynamism of the cluster formed by the geographical concentration of diverse manufacturers within the Mondragon Cooperativa Group in the Basque region of Spain, and to compare it to the individual dynamism of these organisations in order to better understand the actual complementarities and synergies of this industrial co-location. The literature review identified dynamic capabilities as the core enablers of organisations competing in dynamic environments; based on these capabilities, a model was formulated. This model, combined with the primary data collected via questionnaire and interviews, helped measure the dynamism of the individual cluster members and the cluster as a whole, as well as providing insight into the complementarities and synergies of this type of alliance. The findings of the research concluded that the cluster as a whole was more dynamic than its individual members; nevertheless, the model suggested that there were considerable differences in speed among the cluster members. These differences in speed were determined by the size of the company and its performance in dimensions such as marketing, culture and management.
The research also suggested that, despite the clear differences in the level of dynamism among cluster members, all companies benefited in some way from being part of the cluster; these benefits differed in nature depending on each specific member.
12

Camacho, Rodriguez Jesus. "Efficient techniques for large-scale Web data management." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112229/document.

Abstract:
The recent development of commercial cloud computing environments has strongly impacted research and development in distributed software platforms. Cloud providers offer a distributed, shared-nothing infrastructure that may be used for data storage and processing. In parallel with the development of cloud platforms, programming models that seamlessly parallelize the execution of data-intensive tasks over large clusters of commodity machines have received significant attention, starting with the by-now well-known MapReduce model and continuing through other novel and more expressive frameworks. As these models are increasingly used to express analytical-style data processing tasks, the need arises for higher-level languages that ease the burden of writing complex queries for these systems. This thesis investigates the efficient management of Web data on large-scale infrastructures. In particular, we study the performance and cost of exploiting cloud services to build Web data warehouses, and the parallelization and optimization of query languages tailored towards querying Web data declaratively. First, we present AMADA, an architecture for warehousing large-scale Web data in commercial cloud platforms. AMADA operates in a Software as a Service (SaaS) approach, allowing users to upload, store, and query large volumes of Web data. Since cloud users bear monetary costs directly connected to their consumption of resources, our focus is not only on query performance from an execution time perspective, but also on the monetary costs associated with this processing.
In particular, we study the applicability of several content indexing strategies, and show that they lead not only to reduced query evaluation time but also, importantly, to reduced monetary costs associated with the exploitation of the cloud-based warehouse. Second, we consider the efficient parallelization of the execution of complex queries over XML documents, implemented within our system PAXQuery. We provide novel algorithms showing how to translate such queries into plans expressed in the PArallelization ConTracts (PACT) programming model. These plans are then optimized and executed in parallel by the Stratosphere system. We demonstrate the efficiency and scalability of our approach through experiments on hundreds of GB of XML data. Finally, we present a novel approach for identifying and reusing common subexpressions occurring in Pig Latin scripts. In particular, we lay the foundation of our reuse-based algorithms by formalizing the semantics of the Pig Latin query language with an extended nested relational algebra for bags. Our algorithm, named PigReuse, operates on the algebraic representations of Pig Latin scripts, identifies subexpression merging opportunities, selects the best ones to execute based on a cost function, and merges other equivalent expressions to share their results. We bring several extensions to the algorithm to improve its performance. Our experimental results demonstrate the efficiency and effectiveness of our reuse-based algorithms and optimization strategies.
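The subexpression-reuse idea described in this abstract can be illustrated with a toy sketch (a simplification of the general technique, not PigReuse's actual code): represent each script's algebraic plan as a nested tuple, canonicalize every subtree into a hashable key, and flag subtrees that occur in more than one plan as candidates whose result could be computed once and shared. The operator names and plan encoding here are hypothetical.

```python
# Toy common-subexpression detection over operator trees.
# A plan is a nested tuple: (operator, child_or_argument, ...).
from collections import defaultdict

def key(node):
    """Canonical, hashable form of an operator subtree."""
    op, *children = node
    return (op,) + tuple(key(c) if isinstance(c, tuple) else c for c in children)

def shared_subexpressions(plans):
    """Return keys of subtrees that occur in more than one plan."""
    seen = defaultdict(set)  # subtree key -> ids of plans containing it

    def walk(node, plan_id):
        seen[key(node)].add(plan_id)
        for child in node[1:]:
            if isinstance(child, tuple):
                walk(child, plan_id)

    for pid, plan in enumerate(plans):
        walk(plan, pid)
    return [k for k, pids in seen.items() if len(pids) > 1]
```

For example, two plans that both filter the same input would report the shared `FILTER` subtree (and its `LOAD`) as merge candidates; a real optimizer would then pick which candidates to materialize based on a cost function, as the abstract describes.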
13

Knudtson, Kevin M., and Randy Glass. "DIGITAL VOICE DECODING IN TODAY'S TELEMETRY SYSTEM." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/607327.

Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Today’s telemetry systems can reduce spectrum demand and maintain secure voice by encoding analog voice into digital data using the Continuously Variable Slope Delta modulation (CVSD) format and embedding it into a telemetry stream. The model CSC-0390 DvD system is an excellent choice for decoding digital voice, designed with flexibility, efficiency, and simplicity in mind. Flexibility in design enables operation with a wide variety of telemetry systems and data formats without any specialized interfaces. The use of 74HC series circuit technology makes this DvD system efficient in design, low in cost, and low in power consumption. In addition, the front panel display and control functions exemplify simplicity in design and operation.
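As a rough sketch of the CVSD scheme this abstract mentions (all parameter values below are illustrative assumptions, not those of the CSC-0390): the encoder emits one bit per sample indicating whether the input is above or below a running estimate, grows the step size when several consecutive bits agree (slope overload), and lets it decay otherwise; the decoder reconstructs the waveform by mirroring the same adaptation.

```python
def cvsd_encode(samples, min_step=1.0, max_step=64.0, gain=1.2, run=3):
    """Encode samples into a 1-bit stream with an adaptive step size."""
    bits, history = [], []
    estimate, step = 0.0, min_step
    for x in samples:
        bit = 1 if x > estimate else 0
        bits.append(bit)
        history = (history + [bit])[-run:]
        # Slope overload: if the last `run` bits agree, grow the step;
        # otherwise let it decay back toward the minimum.
        if len(history) == run and len(set(history)) == 1:
            step = min(step * gain, max_step)
        else:
            step = max(step / gain, min_step)
        estimate += step if bit else -step
    return bits

def cvsd_decode(bits, min_step=1.0, max_step=64.0, gain=1.2, run=3):
    """Reconstruct samples by tracking the encoder's step adaptation."""
    out, history = [], []
    estimate, step = 0.0, min_step
    for bit in bits:
        history = (history + [bit])[-run:]
        if len(history) == run and len(set(history)) == 1:
            step = min(step * gain, max_step)
        else:
            step = max(step / gain, min_step)
        estimate += step if bit else -step
        out.append(estimate)
    return out
```

Because the decoder derives the step size from the bit stream alone, no side channel is needed, which is what lets a 1-bit-per-sample voice stream be embedded directly into a telemetry frame.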
14

Yao, Yufeng. "Topics in Fractional Airlines." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14563.

Abstract:
Fractional aircraft ownership programs offer companies and individuals all the benefits of owning a private jet, such as safety, consistency, and guaranteed availability, at a fraction of the cost of owning an aircraft. In the fractional ownership model, the partial owners of an aircraft are entitled to a certain number of hours per year, and the management company is responsible for all operational considerations and for making sure an aircraft is available to the owners at the requested time and location. This thesis proposes advanced optimization techniques to help the management company operate its available resources optimally and provides tools for strategic decision making. The contributions of this thesis are: (i) the development of optimization methodologies to assign and schedule aircraft and crews so that all flight requests are covered at the lowest possible cost. First, a simple model is developed to solve the crew pairing and aircraft routing problem with column generation, assuming that a crew stays with one specific aircraft during its duty period. Secondly, this assumption is partially relaxed to improve resource utilization by revising the simple model to allow a crew to use another aircraft when its original aircraft undergoes long maintenance. Thirdly, a new comprehensive model utilizing the Benders decomposition technique and a fleet-station time line is proposed to completely relax the assumption that a crew stays with one specific aircraft; it combines the fleet assignment, aircraft routing, and crew pairing problems. In the proposed methodologies, real-world details are taken into consideration, such as crew transportation and overtime costs, scheduled and unscheduled maintenance effects, crew rules, and the presence of non-crew-compatible fleets. Scheduling with time windows is also discussed. (ii) The analysis of operational strategies to provide decision-making support.
Scenario analyses are performed to provide insights on improving business profitability and aircraft availability, such as the impact of aircraft maintenance, crew swapping, the effect of demand increased by Jet-card sales and geographical business expansion, the size of the company-owned fleet, and strategies to deal with the stochastic nature of unscheduled maintenance and demand.
15

Zampetakis, Stamatis. "Scalable algorithms for cloud-based Semantic Web data management." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112199/document.

Abstract:
In order to build smart systems, where machines are able to reason much as humans do, data with semantics is a major requirement. This need led to the advent of the Semantic Web, which proposes standard ways for representing and querying data with semantics. RDF is the prevalent data model used to describe web resources, and SPARQL is the query language that allows expressing queries over RDF data. Being able to store and query data with semantics triggered the development of many RDF data management systems. The rapid evolution of the Semantic Web provoked the shift from centralized data management systems to distributed ones. The first systems to appear relied on P2P and client-server architectures, while recently the focus has moved to cloud computing. Cloud computing environments have strongly impacted research and development in distributed software platforms. Cloud providers offer distributed, shared-nothing infrastructures that may be used for data storage and processing. The main features of cloud computing involve scalability, fault tolerance, and the elastic allocation of computing and storage resources following the needs of the users. This thesis investigates the design and implementation of scalable algorithms and systems for cloud-based Semantic Web data management. In particular, we study the performance and cost of exploiting commercial cloud infrastructures to build Semantic Web data repositories, and the optimization of SPARQL queries for massively parallel frameworks. First, we introduce the basic concepts around the Semantic Web and the main components and frameworks interacting in massively parallel cloud-based systems. In addition, we provide an extended overview of existing RDF data management systems in the centralized and distributed settings, emphasizing the critical concepts of storage, indexing, query optimization, and infrastructure. Second, we present AMADA, an architecture for RDF data management using public cloud infrastructures.
We follow the Software as a Service (SaaS) model, where the complete platform runs in the cloud and appropriate APIs are provided to end-users for storing and retrieving RDF data. We explore various storage and querying strategies, revealing pros and cons with respect to performance and also to monetary cost, which is an important new dimension to consider in public cloud services. Finally, we present CliqueSquare, a distributed RDF data management system built on top of Hadoop, incorporating a novel optimization algorithm that is able to produce massively parallel plans for SPARQL queries. We present a family of optimization algorithms, relying on n-ary (star) equality joins to build flat plans, and compare their ability to find the flattest plans possible. Inspired by existing partitioning and indexing techniques, we present a generic storage strategy suitable for storing RDF data in HDFS (Hadoop's Distributed File System). Our experimental results validate the efficiency and effectiveness of the optimization algorithm, and also demonstrate the overall performance of the system.
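The flat-plan idea behind the n-ary star joins mentioned above can be sketched roughly as follows (an illustrative simplification under my own assumptions, not CliqueSquare's actual algorithm): triple patterns that share a variable are grouped into a candidate n-ary star join, so that each group can be evaluated in a single parallel join step rather than a deep chain of binary joins. The triple-pattern encoding below is hypothetical.

```python
# Group SPARQL-like triple patterns into candidate n-ary star joins.
from collections import defaultdict

def star_groups(triples):
    """Group triple patterns by the variables they share.

    Each triple is a (subject, predicate, object) tuple; variables are
    strings starting with '?'. Returns {variable: [triples containing it]},
    keeping only variables shared by more than one pattern -- each such
    group is a candidate n-ary (star) equality join.
    """
    groups = defaultdict(list)
    for t in triples:
        for term in t:
            if isinstance(term, str) and term.startswith("?"):
                groups[term].append(t)
    return {v: ts for v, ts in groups.items() if len(ts) > 1}
```

For a query with patterns on `?x` and `?y`, this yields one star per shared variable; a real optimizer would then choose among such groupings to produce the flattest plan, as the abstract describes.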
16

Jafari, Farhang. "The concerns of the shipping industry regarding the application of electronic bills of lading in practice amid technological change." Thesis, University of Stirling, 2015. http://hdl.handle.net/1893/24071.

Abstract:
In the sea trade, the traditional paper-based bill of lading has played an important role across the globe for centuries, but with the advent of advanced commercial modes of transportation and communication, the central position of this document is under threat. The importance of the bill of lading still prevails, as does the need for the functions that this document served in the past, although in a changed format. In the recent past, the world has witnessed much debate about replacing this traditional paper-based document with an electronic equivalent that exhibits all of its functions and characteristics, both commercial and legal. More specifically, unlike many rival transport documents, such as the Sea Waybill, a bill of lading has two prominent features, namely its negotiability and its acceptability as a document of title in certain legal jurisdictions, which need to be retained in an electronic bill of lading so as to preserve the prominence of this document in the future landscape. This thesis is, however, more concerned with the legal aspects of adopting the electronic bill of lading as a successor to the traditional paper-based legal document and as an effective legal document in the present age, with the scope of this debate remaining primarily focused on the US and UK jurisdictions. In the course of this thesis, it is observed that, in the past, the bill of lading has been subject to a variety of international regimes, such as The Hague Rules and The Hague-Visby Rules, and that efforts are presently being made to arrive at a universal agreement under the umbrella of The Rotterdam Rules, but such an agreement has yet to arrive among the comity of nations. On the other hand, efforts made by the business community to introduce an electronic bill of lading are much louder and more evident.
Private efforts, such as the SeaDocs System, the CMI Rules, and the BOLERO Project, were received by the business community with both applause and suspicion. At the same time, the international business community has voiced a number of concerns about legislative adoptability in national and international jurisdictions and about the courts' approach to adjudicating cases involving electronic transactions, all of which make the adoption of the electronic bill of lading in sea-based transactions difficult. Therefore, in the absence of formal legal backing from national and international legislation, these attempts could not achieve the desired results. In this thesis, the present state of acceptance of electronic transactions in general, and of the electronic bill of lading in particular, is also discussed with reference to certain national jurisdictions, such as Australia, India, South Korea and China, in order to present comparative perspectives on the preparedness of these nations. At the regional level, the efforts made by the European Union to promote electronic transactions within its jurisdiction are also discussed. The discussion leads to the conclusion that the acceptability of the electronic bill of lading in the near future depends on official efforts by national governments, directed towards reaching agreement on the Rotterdam Rules as early as possible. The other area of importance revealed in this thesis is the need for a change in the juristic approach of the courts when interpreting and adjudicating cases involving electronic transactions. 
On the whole, this thesis provides a cohesive and systematic review, synthesis and analysis of the history of the bill of lading, its importance as a document of title, and attempts to incorporate its important functions within the fast-paced electronic shipping commerce of today. In this way it offers a valuable contribution to the literature: a comprehensive resource for jurists, policy-makers and the business community alike as they work towards adapting the bill of lading so that it might be successfully applied in electronic form.
APA, Harvard, Vancouver, ISO, and other styles
17

Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.

Full text
Abstract:
As new technologies for energy-intensive industries continue to evolve, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation force these traditional plants towards decommissioning and shutdown. Process improvement and retrofit projects are essential to maintaining the operational performance of such plants. Current approaches to process improvement are mainly process integration, process optimization and process intensification. These fields generally rely on mathematical optimization, the practitioner's experience and operational heuristics, and they serve as the foundation for process improvement. Their performance, however, can be further enhanced by modern computational intelligence. The purpose of this thesis is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The approach taken here addresses the problem by simulating industrial systems and contributes the following: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modelling and optimization of individual units; (ii) application of dimensionality reduction (e.g., principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new tool for analysing problematic parts of a system in order to remove them (bottleneck tree analysis, BOTA), together with a proposed extension that handles multi-dimensional problems through a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks and decision trees for decision-making when integrating a new process technology into existing processes; (v) comparison of hierarchical temporal memory (HTM) and dual optimization with several predictive tools for supporting real-time operations management; 
(vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); (vii) a perspective on the future of artificial intelligence and process engineering in biosystems through a commercially driven multi-omics paradigm.
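One of the techniques this abstract lists, dimensionality reduction via principal component analysis, can be illustrated with a minimal sketch. The 200x5 "process measurement" matrix and its two underlying factors below are entirely hypothetical; this shows the general method, not the thesis's implementation:

```python
import numpy as np

# Hypothetical data: 5 correlated process measurements driven by 2 factors.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))            # 2 true underlying factors
mixing = rng.normal(size=(2, 5))              # map factors to 5 measurements
data = latent @ mixing + 0.01 * rng.normal(size=(200, 5))

# PCA via SVD of the mean-centered data matrix.
centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()         # variance ratio per component

# Keep the smallest number of components covering 95% of the variance.
k = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
reduced = centered @ vt[:k].T                 # low-dimensional projection
print(k, reduced.shape)
```

With near-noiseless two-factor data, the projection recovers the low-dimensional structure, which is what makes the reduced representation useful for subsequent multi-objective optimization.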
APA, Harvard, Vancouver, ISO, and other styles
18

"Use of expert system in consumer lending in Hong Kong." Chinese University of Hong Kong, 1988. http://library.cuhk.edu.hk/record=b5885889.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

"Information extraction and data mining from Chinese financial news." 2002. http://library.cuhk.edu.hk/record=b5891298.

Full text
Abstract:
Ng Anny.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (leaves 139-142).
Abstracts in English and Chinese.
Chapter 1: Introduction
1.1 Problem Definition
1.2 Thesis Organization
Chapter 2: Chinese Text Summarization Using Genetic Algorithm
2.1 Introduction
2.2 Related Work
2.3 Genetic Algorithm Approach
2.3.1 Fitness Function
2.3.2 Genetic operators
2.4 Implementation Details
2.5 Experimental results
2.6 Limitations and Future Work
2.7 Conclusion
Chapter 3: Event Extraction from Chinese Financial News
3.1 Introduction
3.2 Method
3.2.1 Data Set Preparation
3.2.2 Positive Word
3.2.3 Negative Word
3.2.4 Window
3.2.5 Event Extraction
3.3 System Overview
3.4 Implementation
3.4.1 Event Type and Positive Word
3.4.2 Company Name
3.4.3 Negative Word
3.4.4 Event Extraction
3.5 Stock Database
3.5.1 Stock Movements
3.5.2 Implementation
3.5.3 Stock Database Transformation
3.6 Performance Evaluation
3.6.1 Performance measures
3.6.2 Evaluation
3.7 Conclusion
Chapter 4: Mining Frequent Episodes
4.1 Introduction
4.1.1 Definitions
4.2 Related Work
4.3 Double-Part Event Tree for the database
4.3.1 Complexity of tree construction
4.4 Mining Frequent Episodes with the DE-tree
4.4.1 Conditional Event Trees
4.4.2 Single Path Conditional Event Tree
4.4.3 Complexity of Mining Frequent Episodes with DE-Tree
4.4.4 An Example
4.4.5 Completeness of finding frequent episodes
4.5 Implementation of DE-Tree
4.6 Method 2: Node-List Event Tree
4.6.1 Tree construction
4.6.2 Order of Position Bits
4.7 Implementation of NE-tree construction
4.7.1 Complexity of NE-Tree Construction
4.8 Mining Frequent Episodes with NE-tree
4.8.1 Conditional NE-Tree
4.8.2 Single Path Conditional NE-Tree
4.8.3 Complexity of Mining Frequent Episodes with NE-Tree
4.8.4 An Example
4.9 Performance evaluation
4.9.1 Synthetic data
4.9.2 Real data
4.10 Conclusion
Chapter 5: Mining N-most Interesting Episodes
5.1 Introduction
5.2 Method
5.2.1 Threshold Improvement
5.2.2 Pseudocode
5.3 Experimental Results
5.3.1 Synthetic Data
5.3.2 Real Data
5.4 Conclusion
Chapter 6: Mining Frequent Episodes with Event Constraints
6.1 Introduction
6.2 Method
6.3 Experimental Results
6.3.1 Synthetic Data
6.3.2 Real Data
6.4 Conclusion
Chapter 7: Conclusion
Appendix A: Test Cases
A.1 Text 1
A.2 Text 2
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
20

Chang, Yu-Ching, and 張雨青. "A Study of the Information Literacy of Vocational Commercial High School Students-the Department of Data Processing For Example." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/01327189198279640411.

Full text
Abstract:
Master's thesis, National Taiwan Normal University, Department of Industrial Technology Education, 2006 (ROC year 94).
This study investigated the information literacy of students in vocational commercial high schools, covering library literacy, media literacy, computer literacy and network literacy. The information literacy standards used in the study are the National Educational Technology Standards for Students (NETS.S) of the International Society for Technology in Education (ISTE). Questionnaires were administered to first- and third-grade students of the Department of Data Processing in vocational commercial high schools, comparing the information literacy of the third grade with that of the first grade. First, a literature review was conducted to develop the questionnaire; after examination by experts and a pilot test, the questionnaire was adopted as the main research instrument. Second, of the 33 schools in Taipei with a Department of Data Processing, 17 were selected by random sampling. Of the 1,020 questionnaires distributed, 830 were returned, a return rate of about 82%. The results are as follows. 1. Students of the Department of Data Processing rate their library literacy at an average level; library literacy does not improve with years of study. 2. Students rate their media literacy at an average level; media literacy does not improve with years of study. 3. Students rate their computer literacy at a satisfactory level; computer literacy improves with years of study. 4. Students rate their network literacy at a satisfactory level; network literacy does not improve with years of study. 5. 
Among the NETS.S categories, technology research tools is the students' strongest ability, and technology problem-solving and decision-making tools is the weakest. 6. The students of the Department of Data Processing are strong in network and computer literacy and need to strengthen library and media literacy. Based on these results, suggestions are offered to the responsible educational institutions and for further study.
APA, Harvard, Vancouver, ISO, and other styles
21

Hahn, Howard Davis. "Microcomputer-assisted site design in landscape architecture: evaluation of selected commercial software." 1985. http://hdl.handle.net/2097/27452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Lemaire, Alain Philippe. "Essays on the use of computational linguistics in marketing." Thesis, 2020. https://doi.org/10.7916/d8-8k3b-nj91.

Full text
Abstract:
This thesis explores the use of unstructured data, and specifically textual data, in providing consumer insights and improving business decisions. The thesis consists of two essays. In essay I, I examine how the linguistic similarity between the language used by reviewers of a product and a prospective customer’s own writing style can be leveraged to assess the match between customers and products. Applying tools from machine learning, Bayesian statistics, and computational linguistics to a large-scale dataset from Yelp, I find that the closer the writing style of a restaurant’s past reviews is to a prospective customer’s writing style, the more likely that customer is to write a review for that restaurant. This effect holds across restaurant types and is driven by the linguistic similarity between the customer’s own reviews and positive past reviews for the restaurant. Further, I find that similarity with respect to words related to leisure (e.g., family, wine, beer, weekend), biology (e.g., eat, life, love), as well as swear words is most influential in creating a match between customers and restaurants. In essay II, I examine whether borrowers, consciously or not, leave traces of their intentions, circumstances, and personality traits in the text they write when applying for a loan. I find that this textual information has a substantial and significant ability to predict whether borrowers will pay back the loan, above and beyond the financial and demographic variables commonly used in models predicting default. Using text-mining and machine-learning tools to automatically process and analyze the raw text in over 120 thousand loan requests from Prosper.com, an online crowdfunding platform, I find that including the textual information in the loan request significantly helps predict loan default and can have substantial financial implications. 
I find that loan requests written by defaulting borrowers are more likely to include words related to their family, mentions of God, the borrower’s financial and general hardship, pleading lenders for help, and short-term focused words. I further observe that defaulting loan requests are written in a manner consistent with the writing style of extroverts and liars.
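The essay's idea of scoring loan-request text by word categories associated with default (family, hardship, pleading) can be sketched in a few lines. The word lists and example texts below are illustrative placeholders, not the thesis's lexicon or data:

```python
# Hypothetical risk-word categories; the thesis derived its own lists.
CATEGORIES = {
    "family": {"family", "kids", "children"},
    "hardship": {"hardship", "struggling", "bills"},
    "pleading": {"please", "help", "need"},
}

def category_rates(text):
    """Share of a text's words that fall into each risk category."""
    words = text.lower().split()
    return {cat: sum(w.strip(".,!") in vocab for w in words) / len(words)
            for cat, vocab in CATEGORIES.items()}

risky = category_rates(
    "Please help, I am struggling with bills and need this for my family.")
safe = category_rates(
    "Consolidating two cards into one fixed payment at a lower rate.")
print(risky["pleading"] > safe["pleading"])  # prints True
```

In a full model, rates like these would enter a classifier alongside financial and demographic variables; here they simply show the hardship-laden request scoring higher on every risk category.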
APA, Harvard, Vancouver, ISO, and other styles
23

"Two research problems in a 4th party logistics platform: shipment planning in a dynamic environment and e-service platform design." Thesis, 2006. http://library.cuhk.edu.hk/record=b6074134.

Full text
Abstract:
This dissertation studies two research problems in a 4th party logistics platform.
1. Problem one: shipment planning in a dynamic environment. The planning of air cargo logistics is a complex endeavor that involves the collaboration of multiple logistics agents to deliver shipments in a timely, safe, and economic manner. Airfreight forwarders coordinate and manage shipments for their clients, and with the development of Internet logistics platforms, airfreight forwarders can now trade jobs and resources with other participants effectively. The incorporation of trading alternatives significantly complicates the shipment planning process. This study proposes a dynamic decision framework for air cargo shipment planning within this environment of bidding and trading. The framework has three phases: estimation, trading, and execution. Planning proceeds iteratively through the phases until an acceptable plan is obtained and shipments are set and fulfilled. The optimization of shipment planning is formulated as a mixed 0-1 LP model from a portfolio point of view. Unlike the models in previous research, this model targets profit maximization, takes into account the decisions of job selection and resource selection, and can be solved using a Tabu-based approach. We also discuss the rules and strategies that aid the decision-making processes in the framework.
2. Problem two: e-services platform design. The need for business logistics starts with a buyer and a seller. It involves arranging the movement of materials/products from the seller to the buyer and the flow of payment from the buyer to the seller. When the logistics arrangements are made by neither the buyer nor the seller but by a specialist, we call that specialist a 3rd party logistics (3PL) service provider. A typical logistics service/job involves many agents, for instance forwarders, truckers, warehouse operators, and carriers. In the process, a great deal of information is shared and exchanged among the agents, the buyer and the seller. With the advancement of information technologies, an emerging trend is to conduct the business dealing, information sharing and even payment arrangements among the logistics agents, buyers and sellers through e-services on the Internet. In this thesis, we propose a 4th party logistics (4PL) platform, an Internet environment that enables and facilitates 3PL providers in collaboratively providing services to buyers and sellers.
The proposed platform is called a 4PL platform because it facilitates the 3PL agents. To serve its 3PL clients well, the platform should be "neutral", meaning it will not provide logistics services that compete with its clients'. The 4PL platform facilitates its clients through e-services. However, existing e-services technology only allows e-services to be provided to individual clients; the idea of providing an e-service to collaborating clients is new. We call it the 3rd party e-Service. In this thesis, we have conceptualized and further defined the 3rd party e-Service. To realize it, we first proposed a 3rd party service-oriented architecture and then developed a set of new elements for the existing e-Service description technology. To prove the concept, the new architecture, and the new description technology, we put them into action: using the shipment planning model as an example, we are able to offer a shipment planning e-service to collaborating agents on the Internet.
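The 0-1 job- and resource-selection structure of the shipment planning model can be sketched on a toy instance. All revenues, costs and capacities below are hypothetical, and exhaustive search stands in for the Tabu-based approach the thesis actually uses, purely to make the model concrete:

```python
from itertools import product

# Toy instance: binary decisions for which jobs to accept and which
# capacity blocks to buy, maximizing profit subject to total capacity.
jobs = [  # (revenue, weight)
    (50, 3), (60, 4), (30, 2), (40, 3),
]
resources = [  # (cost, capacity)
    (35, 5), (50, 8),
]

best = (float("-inf"), None)
for sel_jobs in product([0, 1], repeat=len(jobs)):
    for sel_res in product([0, 1], repeat=len(resources)):
        cap = sum(c for (cost, c), y in zip(resources, sel_res) if y)
        load = sum(w for (r, w), x in zip(jobs, sel_jobs) if x)
        if load > cap:
            continue  # infeasible: accepted jobs exceed purchased capacity
        profit = (sum(r for (r, w), x in zip(jobs, sel_jobs) if x)
                  - sum(cost for (cost, c), y in zip(resources, sel_res) if y))
        best = max(best, (profit, (sel_jobs, sel_res)))

print(best)  # prints (95, ((1, 1, 1, 1), (1, 1)))
```

On this instance the optimum buys both capacity blocks and accepts all four jobs; a Tabu search would explore the same 0-1 space via neighborhood moves instead of enumeration.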
Chen Gang.
"February 2006."
Advisers: Waiman Cheung; Chi Kin Leung.
Source: Dissertation Abstracts International, Volume: 67-11, Section: A, page: 4358.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2006.
Includes bibliographical references (p. 96-106).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
APA, Harvard, Vancouver, ISO, and other styles
24

Disatapundhu, Suppakorn. "An assessment of computer utilization by graphic design professionals in Thailand." Thesis, 1993. http://hdl.handle.net/1957/35370.

Full text
Abstract:
The uses of computer technology in the fields of art and graphic design in Thailand were investigated for the purpose of identifying levels of current computer use from 280 responses to a specifically designed questionnaire among: 1) full-time graphic design educators, 2) art and design students, and 3) graphic design directors in professional business positions. The study instrument consisted of a questionnaire developed by the researcher, reviewed by a panel of seven experts selected by the Department of Creative Arts, Chulalongkorn University. The panel verified content-related evidence to ensure the validity of the instrument. Appropriate statistical procedures were implemented to develop responses to questions of interest. Analysis of the data showed that a majority of educators, students, and design professionals supported the use of computers in their professions and/or coursework, and that majorities of the same groups made regular use of computers. Subject to differences in rank ordering of computer usage among population groups, majorities from each group agreed that publications and graphics constituted the area of greatest use. A majority of the population agreed that computers helped to improve efficiency within the studio environment, and there were only slight differences among the three groups in generalized support of the use of computers within art and design curricula. All groups agreed that educational emphasis should be placed at the level of the baccalaureate degree, subject to the possible integration of computer training at all educational levels. Students reflected the highest percentage of use frequency, followed in order by professionals and educators. 
Each group reflected its own specific concerns in perceptions of major barriers to the use of computers in graphic design fields: Educators noted the lack of budgetary resources to install and maintain computers; students noted the lack of computer availability for hands-on experience; and design professionals perceived a lack of opportunity to attend training courses. Overall, the results of this study indicated that significant differences existed between groups representing academic fields (i.e., educators and students) and graphic design professionals for all criteria measured.
Graduation date: 1994
APA, Harvard, Vancouver, ISO, and other styles
25

Riba, Evans Mogolo. "Exploring advanced forecasting methods with applications in aviation." Diss., 2021. http://hdl.handle.net/10500/27410.

Full text
Abstract:
Abstracts in English, Afrikaans and Northern Sotho
More time series forecasting methods were researched and made available in recent years. This is mainly due to the emergence of machine learning methods which also found applicability in time series forecasting. The emergence of a variety of methods and their variants presents a challenge when choosing appropriate forecasting methods. This study explored the performance of four advanced forecasting methods: autoregressive integrated moving averages (ARIMA); artificial neural networks (ANN); support vector machines (SVM) and regression models with ARIMA errors. To improve their performance, bagging was also applied. The performance of the different methods was illustrated using South African air passenger data collected for planning purposes by the Airports Company South Africa (ACSA). The dissertation discussed the different forecasting methods at length. Characteristics such as strengths and weaknesses and the applicability of the methods were explored. Some of the most popular forecast accuracy measures were discussed in order to understand how they could be used in the performance evaluation of the methods. It was found that the regression model with ARIMA errors outperformed all the other methods, followed by the ARIMA model. These findings are in line with the general findings in the literature. The ANN method is prone to overfitting and this was evident from the results of the training and the test data sets. The bagged models showed mixed results with marginal improvement on some of the methods for some performance measures. It could be concluded that the traditional statistical forecasting methods (ARIMA and the regression model with ARIMA errors) performed better than the machine learning methods (ANN and SVM) on this data set, based on the measures of accuracy used. 
This calls for more research into the applicability of machine learning methods to time series forecasting, which will assist in understanding and improving their performance relative to the traditional statistical methods.
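The accuracy measures used to compare the forecasting methods can be illustrated with a small, self-contained calculation. The passenger counts below are hypothetical, not the ACSA data:

```python
import math

# Three common forecast accuracy measures: mean absolute error (MAE),
# root mean squared error (RMSE) and mean absolute percentage error (MAPE).
actual = [100.0, 110.0, 120.0, 130.0]
forecast = [98.0, 112.0, 119.0, 135.0]

errors = [a - f for a, f in zip(actual, forecast)]
mae = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
mape = 100 * sum(abs(e) / a for e, a in zip(errors, actual)) / len(errors)
print(round(mae, 2), round(rmse, 2), round(mape, 2))  # prints 2.5 2.92 2.12
```

Because RMSE squares the errors, the one large miss (the last point) pulls it above the MAE, which is one reason studies like this report several measures side by side.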
Decision Sciences
M. Sc. (Operations Research)
APA, Harvard, Vancouver, ISO, and other styles
