Dissertations / Theses on the topic 'Commercial loans Data processing'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 25 dissertations / theses for your research on the topic 'Commercial loans Data processing.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Tian, Zhimin. "Essays on economic consequences of inside director reputation." HKBU Institutional Repository, 2014. https://repository.hkbu.edu.hk/etd_oa/62.
Vuorio, R. (Riikka). "Use of public sector’s open spatial data in commercial applications." Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201311201883.
Hines, Dennis O., Donald C. Rhea, and Guy W. Williams. "ADVANCED DATA ACQUISITION AND PROCESSING SYSTEMS (ADAPS) UPDATE." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/608546.
The rapid technology growth in the aerospace industry continues to manifest itself in increasingly complex computer systems and weapons systems platforms. To meet the data processing challenges associated with these new weapons systems, the Air Force Flight Test Center (AFFTC) is developing the next generation of data acquisition and processing systems under the Advanced Data Acquisition and Processing Systems (ADAPS) Program. The ADAPS program has evolved into an approach that utilizes Commercial-Off-The-Shelf (COTS) components as the foundation for Air Force enhancements to meet specific customer requirements. The ADAPS program has transitioned from concept exploration to engineering and manufacturing development (EMD). This includes the completion of a detailed requirements analysis and an overall system design. This paper will discuss the current status of the ADAPS program, including the requirements analysis process, details of the system design, and the results of current COTS acquisitions.
McJannet, Lawrence George 1952. "REALIZATION OF A REGULAR FACILITY BLOCK PLAN FROM AN ADJACENCY GRAPH USING GRAPH THEORETIC BASED HEURISTICS." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/275537.
Chung, Kit-lun, and 鐘傑麟. "Intelligent agent for Internet Chinese financial news retrieval." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B30106503.
Full textAlawady, Amro M. "TURNKEY TELEMETRY DATA ACQUISITION AND PROCESSING SYSTEMS UTILIZING COMMERCIAL OFF THE SHELF (COTS) PRODUCTS." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/608369.
This paper discusses turnkey telemetry data acquisition and analysis systems. A brief history of previous systems used at Lockheed Martin Vought Systems is presented. Then, the paper describes systems that utilize more COTS hardware and software and discusses the time and resources saved by integrating these products into a complete system along with a description of what some newer systems will offer.
顧銘培 and Ming-pui Ku. "The essentials of project management in tackling the change of year 2000 on computer systems of an airline." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31267993.
Euawatana, Teerapong. "Implementation business-to-business electronic commercial website using ColdFusion 4.5." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1917.
Du, Yun Yan. "Legal recognition and implications of electronic bill of lading in international business : international legal developments and the legal status in China." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2487632.
Tembo, Rachael. "Information and communication technology usage trends and factors in commercial agriculture in the wine industry." Thesis, [S.l. : s.n.], 2008. http://dk.cput.ac.za/cgi/viewcontent.cgi?article=1066&context=td_cput.
Luque, N. E. "Cluster dynamics in the Basque region of Spain." Thesis, Coventry University, 2011. http://curve.coventry.ac.uk/open/items/4f4161ca-11db-4d70-9954-aea64f4fbaa4/1.
Camacho, Rodriguez Jesus. "Efficient techniques for large-scale Web data management." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112229/document.
The recent development of commercial cloud computing environments has strongly impacted research and development in distributed software platforms. Cloud providers offer a distributed, shared-nothing infrastructure that may be used for data storage and processing. In parallel with the development of cloud platforms, programming models that seamlessly parallelize the execution of data-intensive tasks over large clusters of commodity machines have received significant attention, starting with the by-now well-known MapReduce model and continuing through other novel and more expressive frameworks. As these models are increasingly used to express analytical-style data processing tasks, the need arises for higher-level languages that ease the burden of writing complex queries for these systems. This thesis investigates the efficient management of Web data on large-scale infrastructures. In particular, we study the performance and cost of exploiting cloud services to build Web data warehouses, and the parallelization and optimization of query languages tailored towards querying Web data declaratively. First, we present AMADA, an architecture for warehousing large-scale Web data in commercial cloud platforms. AMADA operates in a Software as a Service (SaaS) approach, allowing users to upload, store, and query large volumes of Web data. Since cloud users bear monetary costs directly connected to their consumption of resources, our focus is not only on query performance from an execution time perspective, but also on the monetary costs associated with this processing.
In particular, we study the applicability of several content indexing strategies, and show that they lead not only to reduced query evaluation time but also, importantly, to reduced monetary costs associated with the exploitation of the cloud-based warehouse. Second, we consider the efficient parallelization of the execution of complex queries over XML documents, implemented within our system PAXQuery. We provide novel algorithms showing how to translate such queries into plans expressed in the PArallelization ConTracts (PACT) programming model. These plans are then optimized and executed in parallel by the Stratosphere system. We demonstrate the efficiency and scalability of our approach through experiments on hundreds of GB of XML data. Finally, we present a novel approach for identifying and reusing common subexpressions occurring in Pig Latin scripts. In particular, we lay the foundation of our reuse-based algorithms by formalizing the semantics of the Pig Latin query language with extended nested relational algebra for bags. Our algorithm, named PigReuse, operates on the algebraic representations of Pig Latin scripts, identifies subexpression merging opportunities, selects the best ones to execute based on a cost function, and merges other equivalent expressions to share their results. We bring several extensions to the algorithm to improve its performance. Our experimental results demonstrate the efficiency and effectiveness of our reuse-based algorithms and optimization strategies.
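PigReuse itself operates on an extended nested relational algebra for bags; the following is a loose, much-simplified sketch of the underlying idea (finding subexpressions that occur more than once across query plans, so their result can be computed once and shared), with all operator and file names invented for illustration:

```python
# Minimal illustration of reuse-based optimization: find subexpressions that
# occur in more than one query plan so their result can be computed once and
# shared. Plans here are just nested tuples (operator, child, child, ...).

from collections import Counter

def subexpressions(plan):
    """Yield every subtree of a plan, including the plan itself."""
    yield plan
    for child in plan[1:]:
        if isinstance(child, tuple):
            yield from subexpressions(child)

def shared_subexpressions(plans):
    """Return subtrees occurring at least twice across all plans."""
    counts = Counter()
    for plan in plans:
        counts.update(subexpressions(plan))
    return {expr for expr, n in counts.items() if n > 1}

# Two scripts that both load and filter the same relation before diverging.
load = ("LOAD", "clicks.csv")
filt = ("FILTER", load)           # common work worth sharing
p1 = ("GROUP", filt)
p2 = ("JOIN", filt, ("LOAD", "users.csv"))

shared = shared_subexpressions([p1, p2])
```

In the real algorithm a cost function then decides which of these merging opportunities are actually worth materializing.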
Knudtson, Kevin M., and Randy Glass. "DIGITAL VOICE DECODING IN TODAY'S TELEMETRY SYSTEM." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/607327.
Today’s telemetry systems can reduce spectrum demand and maintain secure voice by encoding analog voice into digital data using the Continuously Variable Slope Delta Modulation (CVSD) format and embedding it into a telemetry stream. The model CSC-0390 DvD system is an excellent choice for decoding digital voice, designed with flexibility, efficiency, and simplicity in mind. Flexibility in design brings forth a capability of operating on a wide variety of telemetry systems and data formats without any specialized interfaces. The utilization of 74HC series circuit technology makes this DvD system efficient in design, low in cost, and low in power consumption. In addition, the front panel display and control function is an example of simplicity in design and operation.
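As a toy illustration of the CVSD principle the paper relies on (one bit per sample, with a step size that grows while recent bits agree and decays otherwise), here is a minimal encoder/decoder sketch; the parameters are invented and do not reflect the CSC-0390 hardware:

```python
# Toy Continuously Variable Slope Delta (CVSD) modulation: each sample is
# encoded as one bit (above/below the running estimate); the step size grows
# while the last three bits agree (slope overload) and decays otherwise.

def cvsd_encode(samples, min_step=1.0, max_step=64.0, gain=1.2, decay=0.9):
    bits, estimate, step, history = [], 0.0, min_step, []
    for s in samples:
        bit = 1 if s > estimate else 0
        bits.append(bit)
        history = (history + [bit])[-3:]
        if len(history) == 3 and len(set(history)) == 1:
            step = min(step * gain, max_step)   # speed up on a steep slope
        else:
            step = max(step * decay, min_step)  # otherwise relax
        estimate += step if bit else -step
    return bits

def cvsd_decode(bits, min_step=1.0, max_step=64.0, gain=1.2, decay=0.9):
    # Mirrors the encoder's estimate, so it needs only the bit stream.
    out, estimate, step, history = [], 0.0, min_step, []
    for bit in bits:
        history = (history + [bit])[-3:]
        if len(history) == 3 and len(set(history)) == 1:
            step = min(step * gain, max_step)
        else:
            step = max(step * decay, min_step)
        estimate += step if bit else -step
        out.append(estimate)
    return out

signal = [20.0 * i for i in range(10)] + [180.0] * 10  # ramp, then hold
decoded = cvsd_decode(cvsd_encode(signal))
```

Because only one bit per sample crosses the link, the decoder's estimate lags steep ramps, which is exactly the slope-overload behaviour the adaptive step is there to mitigate.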
Yao, Yufeng. "Topics in Fractional Airlines." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14563.
Zampetakis, Stamatis. "Scalable algorithms for cloud-based Semantic Web data management." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112199/document.
In order to build smart systems, where machines are able to reason exactly like humans, data with semantics is a major requirement. This need led to the advent of the Semantic Web, proposing standard ways for representing and querying data with semantics. RDF is the prevalent data model used to describe web resources, and SPARQL is the query language that allows expressing queries over RDF data. Being able to store and query data with semantics triggered the development of many RDF data management systems. The rapid evolution of the Semantic Web provoked the shift from centralized data management systems to distributed ones. The first systems to appear relied on P2P and client-server architectures, while recently the focus has moved to cloud computing. Cloud computing environments have strongly impacted research and development in distributed software platforms. Cloud providers offer distributed, shared-nothing infrastructures that may be used for data storage and processing. The main features of cloud computing involve scalability, fault-tolerance, and elastic allocation of computing and storage resources following the needs of the users. This thesis investigates the design and implementation of scalable algorithms and systems for cloud-based Semantic Web data management. In particular, we study the performance and cost of exploiting commercial cloud infrastructures to build Semantic Web data repositories, and the optimization of SPARQL queries for massively parallel frameworks. First, we introduce the basic concepts around the Semantic Web and the main components and frameworks interacting in massively parallel cloud-based systems. In addition, we provide an extended overview of existing RDF data management systems in the centralized and distributed settings, emphasizing the critical concepts of storage, indexing, query optimization, and infrastructure. Second, we present AMADA, an architecture for RDF data management using public cloud infrastructures.
We follow the Software as a Service (SaaS) model, where the complete platform runs in the cloud and appropriate APIs are provided to the end-users for storing and retrieving RDF data. We explore various storage and querying strategies, revealing pros and cons with respect to performance and also to monetary cost, which is an important new dimension to consider in public cloud services. Finally, we present CliqueSquare, a distributed RDF data management system built on top of Hadoop, incorporating a novel optimization algorithm that is able to produce massively parallel plans for SPARQL queries. We present a family of optimization algorithms, relying on n-ary (star) equality joins to build flat plans, and compare their ability to find the flattest plans possible. Inspired by existing partitioning and indexing techniques, we present a generic storage strategy suitable for storing RDF data in HDFS (Hadoop’s Distributed File System). Our experimental results validate the efficiency and effectiveness of the optimization algorithm, and demonstrate the overall performance of the system.
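As a loose sketch of the first step in building such flat plans (grouping the triple patterns of a SPARQL basic graph pattern into n-ary star joins around shared variables), with invented variable and predicate names:

```python
# Group triple patterns into n-ary "star" joins, one per shared join
# variable, in the spirit of the first level of a flat CliqueSquare plan.

from collections import defaultdict

def is_var(term):
    return term.startswith("?")

def star_groups(patterns):
    """Map each join variable to all triple patterns mentioning it."""
    stars = defaultdict(list)
    for pat in patterns:
        for term in pat:
            if is_var(term):
                stars[term].append(pat)
    # keep only variables shared by at least two patterns: real join points
    return {v: pats for v, pats in stars.items() if len(pats) > 1}

bgp = [
    ("?p", "worksFor", "?c"),
    ("?p", "name", "?n"),
    ("?c", "locatedIn", "Paris"),
]
stars = star_groups(bgp)
```

Here `?p` joins the first two patterns and `?c` joins the first and third; evaluating each group as one n-ary star join lets both joins run in a single parallel level instead of a chain of binary joins.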
Jafari, Farhang. "The concerns of the shipping industry regarding the application of electronic bills of lading in practice amid technological change." Thesis, University of Stirling, 2015. http://hdl.handle.net/1893/24071.
Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.
"Use of expert system in consumer lending in Hong Kong." Chinese University of Hong Kong, 1988. http://library.cuhk.edu.hk/record=b5885889.
"Information extraction and data mining from Chinese financial news." 2002. http://library.cuhk.edu.hk/record=b5891298.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (leaves 139-142).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Problem Definition --- p.2
Chapter 1.2 --- Thesis Organization --- p.3
Chapter 2 --- Chinese Text Summarization Using Genetic Algorithm --- p.4
Chapter 2.1 --- Introduction --- p.4
Chapter 2.2 --- Related Work --- p.6
Chapter 2.3 --- Genetic Algorithm Approach --- p.10
Chapter 2.3.1 --- Fitness Function --- p.11
Chapter 2.3.2 --- Genetic operators --- p.14
Chapter 2.4 --- Implementation Details --- p.15
Chapter 2.5 --- Experimental results --- p.19
Chapter 2.6 --- Limitations and Future Work --- p.24
Chapter 2.7 --- Conclusion --- p.26
Chapter 3 --- Event Extraction from Chinese Financial News --- p.27
Chapter 3.1 --- Introduction --- p.28
Chapter 3.2 --- Method --- p.29
Chapter 3.2.1 --- Data Set Preparation --- p.29
Chapter 3.2.2 --- Positive Word --- p.30
Chapter 3.2.3 --- Negative Word --- p.31
Chapter 3.2.4 --- Window --- p.31
Chapter 3.2.5 --- Event Extraction --- p.32
Chapter 3.3 --- System Overview --- p.33
Chapter 3.4 --- Implementation --- p.33
Chapter 3.4.1 --- Event Type and Positive Word --- p.34
Chapter 3.4.2 --- Company Name --- p.34
Chapter 3.4.3 --- Negative Word --- p.36
Chapter 3.4.4 --- Event Extraction --- p.37
Chapter 3.5 --- Stock Database --- p.38
Chapter 3.5.1 --- Stock Movements --- p.39
Chapter 3.5.2 --- Implementation --- p.39
Chapter 3.5.3 --- Stock Database Transformation --- p.39
Chapter 3.6 --- Performance Evaluation --- p.40
Chapter 3.6.1 --- Performance measures --- p.40
Chapter 3.6.2 --- Evaluation --- p.41
Chapter 3.7 --- Conclusion --- p.45
Chapter 4 --- Mining Frequent Episodes --- p.46
Chapter 4.1 --- Introduction --- p.46
Chapter 4.1.1 --- Definitions --- p.48
Chapter 4.2 --- Related Work --- p.50
Chapter 4.3 --- Double-Part Event Tree for the database --- p.56
Chapter 4.3.1 --- Complexity of tree construction --- p.62
Chapter 4.4 --- Mining Frequent Episodes with the DE-tree --- p.63
Chapter 4.4.1 --- Conditional Event Trees --- p.66
Chapter 4.4.2 --- Single Path Conditional Event Tree --- p.67
Chapter 4.4.3 --- Complexity of Mining Frequent Episodes with DE-Tree --- p.67
Chapter 4.4.4 --- An Example --- p.68
Chapter 4.4.5 --- Completeness of finding frequent episodes --- p.71
Chapter 4.5 --- Implementation of DE-Tree --- p.71
Chapter 4.6 --- Method 2: Node-List Event Tree --- p.76
Chapter 4.6.1 --- Tree construction --- p.79
Chapter 4.6.2 --- Order of Position Bits --- p.83
Chapter 4.7 --- Implementation of NE-tree construction --- p.84
Chapter 4.7.1 --- Complexity of NE-Tree Construction --- p.86
Chapter 4.8 --- Mining Frequent Episodes with NE-tree --- p.87
Chapter 4.8.1 --- Conditional NE-Tree --- p.87
Chapter 4.8.2 --- Single Path Conditional NE-Tree --- p.88
Chapter 4.8.3 --- Complexity of Mining Frequent Episodes with NE-Tree --- p.89
Chapter 4.8.4 --- An Example --- p.89
Chapter 4.9 --- Performance evaluation --- p.91
Chapter 4.9.1 --- Synthetic data --- p.91
Chapter 4.9.2 --- Real data --- p.99
Chapter 4.10 --- Conclusion --- p.103
Chapter 5 --- Mining N-most Interesting Episodes --- p.104
Chapter 5.1 --- Introduction --- p.105
Chapter 5.2 --- Method --- p.106
Chapter 5.2.1 --- Threshold Improvement --- p.108
Chapter 5.2.2 --- Pseudocode --- p.112
Chapter 5.3 --- Experimental Results --- p.112
Chapter 5.3.1 --- Synthetic Data --- p.113
Chapter 5.3.2 --- Real Data --- p.119
Chapter 5.4 --- Conclusion --- p.121
Chapter 6 --- Mining Frequent Episodes with Event Constraints --- p.122
Chapter 6.1 --- Introduction --- p.122
Chapter 6.2 --- Method --- p.123
Chapter 6.3 --- Experimental Results --- p.125
Chapter 6.3.1 --- Synthetic Data --- p.126
Chapter 6.3.2 --- Real Data --- p.129
Chapter 6.4 --- Conclusion --- p.131
Chapter 7 --- Conclusion --- p.133
Chapter A --- Test Cases --- p.135
Chapter A.1 --- Text 1 --- p.135
Chapter A.2 --- Text 2 --- p.137
Bibliography --- p.139
Chang, Yu-Ching, and 張雨青. "A Study of the Information Literacy of Vocational Commercial High School Students-the Department of Data Processing For Example." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/01327189198279640411.
National Taiwan Normal University (國立臺灣師範大學)
Department of Industrial Technology Education (工業科技教育學系)
94
This study investigated the information literacy of students in vocational commercial high schools. Information literacy here comprises library literacy, media literacy, computer literacy and network literacy. The information literacy standards used in the study are the National Educational Technology Standards for Students (NETS.S) of the International Society for Technology in Education (ISTE). Questionnaires were administered to first- and third-grade students of the Department of Data Processing in vocational commercial high schools, comparing the information literacy of the third grade with that of the first grade. Firstly, a literature review was conducted to develop the questionnaire; after examination by experts and a pilot test, the questionnaire was finalized as the main research instrument. Secondly, of the 33 schools in Taipei that have a Department of Data Processing, random sampling was used to select 17. Of the 1,020 questionnaires distributed, 830 were returned, a return rate of around 82%. The results are listed below.
1. Students of the Department of Data Processing rate their library literacy at a common level; library literacy does not improve with increasing years of study.
2. Students rate their media literacy at a common level; media literacy does not improve with increasing years of study.
3. Students rate their computer literacy at a conformable level; computer literacy improves with increasing years of study.
4. Students rate their network literacy at a conformable level; network literacy does not improve with increasing years of study.
5. Among the NETS.S abilities, technology research tools is the strongest and technology problem-solving and decision-making tools is the weakest for Department of Data Processing students.
6. Against NETS.S, Department of Data Processing students are strong in network and computer literacy and need to strengthen library and media literacy.
According to the results obtained, suggestions are provided to the responsible educational institutions and for further study.
Hahn, Howard Davis. "Microcomputer-assisted site design in landscape architecture: evaluation of selected commercial software." 1985. http://hdl.handle.net/2097/27452.
Lemaire, Alain Philippe. "Essays on the use of computational linguistics in marketing." Thesis, 2020. https://doi.org/10.7916/d8-8k3b-nj91.
Full text"Two research problems in a 4th party logistics platform: shipment planning in a dynamic environment and e-service platform design." Thesis, 2006. http://library.cuhk.edu.hk/record=b6074134.
2. Problem two: e-services platform design. The need for business logistics starts with a buyer and a seller. It involves arrangements for materials/products moving from the seller to the buyer and payments flowing from the buyer to the seller. When the logistics arrangements are done neither by the buyer nor the seller but rather by a specialist, we call that specialist a 3rd party logistics (3PL) service provider. A typical logistics service/job involves many agents, for instance forwarders, truckers, warehouse operators, carriers, etc. In the process, a lot of information will be shared and exchanged among the agents, the buyer and the seller. With the advancement of information technologies, an emerging trend is to have the business dealing, information sharing and even payment arrangement among the logistics agents, buyers and sellers done through e-services on the Internet. In this thesis, we propose a 4th party logistics (4PL) platform, an Internet environment to enable and facilitate 3PL providers in collaboratively providing services to buyers and sellers.
The proposed platform is called a 4PL platform because it facilitates the 3PL agents. To better serve its 3PL clients, the platform should be "neutral", meaning it will not provide logistics services that compete with its clients'. The 4PL platform will facilitate its clients through e-services. However, existing e-services technology only allows e-services to be provided to individual clients. The idea of providing e-services to collaborating clients is new. We call it the 3rd party e-Service. In this thesis, we have conceptualized and further defined the 3rd party e-Service. To realize it, we have first proposed a 3rd party service-oriented architecture and then developed a set of new elements for the existing e-Service description technology. To prove the concept, the new architecture, and the new description technology, we put them into action. Using the shipment planning model as an example, we are able to offer a shipment planning e-service to collaborating agents on the Internet.
This dissertation studies two research problems in a 4th party logistics platform.
This study proposes a dynamic decision framework for air cargo shipment planning, within the dynamic environment of bidding and trading. The framework has three phases: estimation, trading, and execution. Planning in the phases proceeds iteratively until an acceptable plan is obtained and shipments are set and fulfilled. The optimization of shipment planning is formulated as a mixed 0-1 LP model from a portfolio point of view. Unlike the models in previous research, this model targets profit maximization and takes into account the decisions of job selection and resource selection, and can be solved using a Tabu-based approach. We also discuss the respective rules and strategies that would aid the decision-making processes in the framework.
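The full model is a mixed 0-1 LP over jobs and resources solved with a Tabu-based approach; as a much-reduced illustration of the 0-1 job-selection decision only (profit maximization under a single capacity constraint, with invented figures), a brute-force sketch:

```python
# Much-simplified stand-in for the shipment-planning model: choose a subset
# of cargo jobs (0-1 decisions) to maximize profit subject to one aircraft
# capacity constraint. The thesis model is a richer mixed 0-1 LP with job
# and resource selection, solved with a Tabu-based approach; all figures
# here are invented.

from itertools import combinations

jobs = {          # job: (profit, weight in tons)
    "A": (30, 4),
    "B": (14, 2),
    "C": (16, 3),
    "D": (9, 1),
}
CAPACITY = 6

def best_selection(jobs, capacity):
    best, best_profit = (), 0
    names = list(jobs)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            weight = sum(jobs[j][1] for j in subset)
            profit = sum(jobs[j][0] for j in subset)
            if weight <= capacity and profit > best_profit:
                best, best_profit = subset, profit
    return set(best), best_profit

selected, profit = best_selection(jobs, CAPACITY)
```

Exhaustive enumeration is only viable for a handful of jobs; realistic instances are why the thesis turns to a Tabu-based heuristic.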
Chen Gang.
"February 2006."
Advisers: Waiman Cheung; Chi Kin Leung.
Source: Dissertation Abstracts International, Volume: 67-11, Section: A, page: 4358.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2006.
Includes bibliographical references (p. 96-106).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
Disatapundhu, Suppakorn. "An assessment of computer utilization by graphic design professionals in Thailand." Thesis, 1993. http://hdl.handle.net/1957/35370.
Graduation date: 1994
Riba, Evans Mogolo. "Exploring advanced forecasting methods with applications in aviation." Diss., 2021. http://hdl.handle.net/10500/27410.
In recent years, more time series forecasting methods have been researched and made available, mainly due to the emergence of machine learning methods, which have also found applicability in time series forecasting. The emergence of a variety of methods and their variants presents a challenge when choosing appropriate forecasting methods. This study explored the performance of four advanced forecasting methods: autoregressive integrated moving averages (ARIMA); artificial neural networks (ANN); support vector machines (SVM); and regression models with ARIMA errors. To improve their performance, bagging was also applied. The performance of the different methods was illustrated using South African air passenger data collected for planning purposes by the Airports Company South Africa (ACSA). The dissertation discusses the different forecasting methods at length, exploring characteristics such as strengths, weaknesses and applicability. Some of the most popular forecast accuracy measures were discussed in order to understand how they could be used in the performance evaluation of the methods. It was found that the regression model with ARIMA errors outperformed all the other methods, followed by the ARIMA model. These findings are in line with the general findings in the literature. The ANN method is prone to overfitting, and this was evident from the results on the training and test data sets. The bagged models showed mixed results, with marginal improvement on some of the methods for some performance measures. It could be concluded that the traditional statistical forecasting methods (ARIMA and the regression model with ARIMA errors) performed better than the machine learning methods (ANN and SVM) on this data set, based on the measures of accuracy used.
This calls for more research regarding the applicability of the machine learning methods to time series forecasting, which will assist in understanding and improving their performance relative to the traditional statistical methods.
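As an illustrative sketch of the bagging idea applied to forecasting (bootstrap replicates of the series, one simple forecaster fitted per replicate, forecasts averaged), using an invented AR(1) forecaster and a moving-block bootstrap rather than the study's actual ARIMA/ANN/SVM models:

```python
# Toy bagged forecaster: moving-block bootstrap replicates preserve
# short-range dependence; a simple AR(1) model is fitted to each replicate
# and the one-step-ahead forecasts are averaged. Data values are invented.

import random

def ar1_fit(series):
    """Least-squares slope for x[t] = phi * x[t-1] (no intercept)."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def block_bootstrap(series, block=4, rng=random):
    """Resample contiguous blocks until the replicate matches the length."""
    out = []
    while len(out) < len(series):
        start = rng.randrange(len(series) - block + 1)
        out.extend(series[start:start + block])
    return out[:len(series)]

def bagged_forecast(series, n_boot=50, seed=42):
    rng = random.Random(seed)
    forecasts = []
    for _ in range(n_boot):
        replicate = block_bootstrap(series, rng=rng)
        forecasts.append(ar1_fit(replicate) * series[-1])
    return sum(forecasts) / len(forecasts)

passengers = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]
forecast = bagged_forecast(passengers)
```

Averaging over replicates reduces the variance of the fitted coefficient, which is the mechanism behind the marginal improvements the study observed for some of its bagged models.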
Abstracts in English, Afrikaans and Sepedi.
Decision Sciences
M. Sc. (Operations Research)