Theses on the subject « Billing data »




Browse the 18 best theses for your research on the subject « Billing data ».


You can also download the full text of each publication as a PDF and read its abstract online when this information is included in the metadata.

Browse theses across a wide range of disciplines and organise your bibliography correctly.

1

Seashore, Jonathan. « The automation of obtaining customer billing data ». [Denver, Colo.] : Regis University, 2006. http://165.236.235.140/lib/JSeashore2006.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lukalapu, Sushma. « Billing and receivables database application ». CSUSB ScholarWorks, 2000. https://scholarworks.lib.csusb.edu/etd-project/1618.

Full text
Abstract:
The purpose of this project is to design, build, and implement an information retrieval database system for the Accounting Department at CSUSB. The database will focus on the financial details of the student accounts maintained by the accounting personnel. It offers detailed information pertinent to tuition, parking, housing, boarding, etc.
APA, Harvard, Vancouver, ISO, and other styles
3

Chan, Emily. « Evaluating the use of physician billing data for age and setting specific influenza surveillance ». Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=32402.

Full text
Abstract:
Syndromic surveillance has emerged as a novel, automated approach to monitoring diseases using pre-diagnostic but often non-specific data sources. However, there is little consensus about the best data sources. Using physician billing data from community-based care settings and emergency departments in Quebec, Canada during 1998-2003, we evaluated the lead-lag relationship between ambulatory medical visits for influenza-like illnesses (ILI) and pneumonia and influenza (P&I) hospitalizations by age group, visit setting, and influenza season. To do so, we applied ARIMA modeling methodology and computed the cross-correlation function (CCF) using the residuals. ILI visits in community settings by children aged 5-17 years tended to provide the greatest lead times (at least 2 but up to 3 weeks) over P&I hospitalizations. Lead times varied each season, possibly due to the circulation of different strains each season. These findings have important implications for syndromic surveillance of influenza, as well as epidemic control strategies such as vaccination and school closure policies.
APA, Harvard, Vancouver, ISO, and other styles
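The ARIMA-plus-CCF procedure described in the abstract above can be sketched in a few lines. This is a hedged illustration, not the thesis's code: the weekly series are synthetic, a simple AR(1) prewhitening stands in for a full ARIMA fit, and all names are invented.

```python
# Hedged sketch: estimating the lead time between weekly ILI visit counts and
# P&I hospitalizations via prewhitening and the cross-correlation function.
import numpy as np

def ar1_residuals(x):
    """Prewhiten a series with a simple AR(1) fit (stand-in for full ARIMA)."""
    x = x - x.mean()
    phi = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
    return x[1:] - phi * x[:-1]

def ccf(x, y, max_lag):
    """corr(x[t+k], y[t]) for k = 0..max_lag."""
    x, y = x - x.mean(), y - y.mean()
    denom = np.sqrt((x @ x) * (y @ y))
    return np.array([(x[k:] @ y[:len(y) - k]) / denom for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
ili = rng.poisson(50, 260).astype(float)           # community ILI visits
pi_hosp = np.roll(ili, 2) + rng.normal(0, 2, 260)  # hospitalizations lag by 2 weeks

cc = ccf(ar1_residuals(pi_hosp), ar1_residuals(ili), max_lag=6)
lead_weeks = int(np.argmax(cc))  # lag with the largest cross-correlation
```

On these synthetic series the largest cross-correlation falls at the built-in two-week lag, mirroring how the study estimated the lead of ILI visits over P&I hospitalizations.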
4

Pidlaoan, Victorio P. « Use of Billing and Electronic Health Record Data to define an Alternative Payment Model for the Management of Acute Pancreatitis ». The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511807834086251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Handlon, Lauree E. « The Relationship of the Financial Condition of a Healthcare Organization and the Error Rate of Potentially Missed Coding/Billing of Select Outpatient Services ». The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1204650548.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Strmiska, Roman. « Hotspotový systém pro více operátorů ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-218144.

Full text
Abstract:
The master's thesis deals with the design and realization of a hotspot system for multiple operators; it addresses QoS, billing of transferred data, and distribution of services over a common wireless interface. The theoretical part covers the selection of a suitable technology and explains the legislation that applies to operating a hotspot network. The practical part covers the choice of hardware and the design and realization of the experimental network. Finally, the network's transmission parameters and functionality are tested.
APA, Harvard, Vancouver, ISO, and other styles
7

D'Elboux, Adriano Fogaça. « O impacto da autuação fiscal no comportamento dos contribuintes do ICMS no Estado do Ceará ». Universidade Federal do Ceará, 2012. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=9902.

Full text
Abstract:
Given the significant importance of measuring the effectiveness of tax administrations' enforcement mechanisms and the role of auditing in combating tax evasion, we analyze the impact of tax fines imposed as punishment following tax audits, under the ICMS tax inspection regime, on taxpayer behavior in the state of Ceará. Panel-data models for a treatment group of companies audited and fined for irregularities in meeting their tax obligations between July 2006 and December 2006 were contrasted with a control group of companies that were neither audited nor fined between January 2005 and December 2007. Models for the tax elasticity of taxpayer revenue were also estimated by activity segment to check for sectoral effects of the fines. Across all segments, a moderate impact of the fines on the tax elasticity of the fined taxpayers' revenue was found; among the subgroups, only the wholesale segment shows a positive impact.
APA, Harvard, Vancouver, ISO, and other styles
8

Ericsson, Yvonne. « E-fakturans inverkan på integritetskänslig vårdinformation ». Thesis, Jönköping University, JIBS, Business Informatics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-170.

Full text
Abstract:

This thesis deals with integrity-sensitive healthcare information on invoices, a subject that arose during my work at the county council secretariat in Jönköping. When a person receives treatment in a county other than their home county, an invoice is sent to the home county; before these invoices are paid, they must be checked and attested. This handling was experienced as time-consuming and ineffective, which led me to consider electronic billing (e-billing). The invoices contain integrity-sensitive information, such as personal identity numbers and other data that can be linked to diagnoses and treatments. It is therefore important both that the handling is efficient and that confidentiality can be maintained throughout the billing process.

In this study I examine how the handling of integrity-sensitive healthcare information is affected if invoice handling is digitalised. The purpose was to compare the handling of confidentiality between paper invoices and digital e-invoices. The study followed a qualitative approach in which one organisation, Jönköping county council, was examined in depth; literature studies and interviews were chosen for data collection.

The results showed that confidentiality around paper invoices is a weak point: their handling cannot be traced, and an invoice can easily disappear within an organisation. For e-invoices, by contrast, there is great scope for traceability and control over handling events. There are different kinds of e-billing, and if a system such as EDI invoicing is chosen, many tasks can be automated, which reduces the risk of both intentional and unintentional breaches of confidentiality. Furthermore, if certain personal data can be hidden, patients are safer still. Technically, then, there are many good solutions that can improve confidentiality, but much of this amounts only to after-the-fact verification that laws and rules have been followed, for example stored records of what has happened to a specific invoice.

The biggest threat, however, is the so-called human factor. It is impossible to build an information system with no risk of leaks whatsoever, because it is always the people handling the sensitive data who turn thought into action. Some kinds of mistakes or carelessness can be prevented by technology, but the staff's attitude towards their own work and the responsibility it entails is a fundamental factor with paper invoices as well as e-invoices.

APA, Harvard, Vancouver, ISO, and other styles
9

Тимчук, Микола Іванович, and Mykola Tymchuk. « Методи та засоби побудови білінгових систем з відмовостійкою архітектурою ». Master's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2020. http://elartu.tntu.edu.ua/handle/lib/33328.

Full text
Abstract:
The thesis studies methods and means of constructing a fault-tolerant architecture for billing systems. A comparative analysis of existing software environments and technologies was carried out, and Oracle GoldenGate was selected as a result. Installation of Oracle GoldenGate with the configuration of source and target databases is described, along with the software mechanisms of database replication and the procedure for creating an Oracle database instance. Ways to resolve the issues and conflicts that may arise when using the solution are proposed, and a preliminary Active-Active replication setup was performed. The work covers a number of important aspects, such as high availability and maximum data protection, fault tolerance, performance, and reduced costs of deployment, management, and system maintenance.
APA, Harvard, Vancouver, ISO, and other styles
10

Feuchte, Beate. « Billig Nähen für den Weltmarkt - Lebens- und Arbeitsbedingungen der Beschäftigten der bangladeschischen Bekleidungsindustrie eine sozialgeographische Studie ». Berlin wvb, Wiss. Verl, 2007. http://www.wvberlin.de/data/inhalt/feuchte.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Jang, Jiyong. « Scaling Software Security Analysis to Millions of Malicious Programs and Billions of Lines of Code ». Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/306.

Full text
Abstract:
Software security is a big data problem. The volume of new software artifacts created far outpaces the current capacity of software analysis. This gap has brought an urgent challenge to our security community—scalability. If our techniques cannot cope with an ever increasing volume of software, we will always be one step behind attackers. Thus developing scalable analysis to bridge the gap is essential. In this dissertation, we argue that automatic code reuse detection enables an efficient data reduction of a high volume of incoming malware for downstream analysis and enhances software security by efficiently finding known vulnerabilities across large code bases. In order to demonstrate the benefits of automatic software similarity detection, we discuss two representative problems that are remedied by scalable analysis: malware triage and unpatched code clone detection. First, we tackle the onslaught of malware. Although over one million new malware are reported each day, existing research shows that most malware are not written from scratch; instead, they are automatically generated variants of existing malware. When groups of highly similar variants are clustered together, new malware more easily stands out. Unfortunately, current systems struggle with handling this high volume of malware. We scale clustering using feature hashing and perform semantic analysis using co-clustering. Our evaluation demonstrates that these techniques are an order of magnitude faster than previous systems and automatically discover highly correlated features and malware groups. Furthermore, we design algorithms to infer evolutionary relationships among malware, which helps analysts understand trends over time and make informed decisions about which malware to analyze first. Second, we address the problem of detecting unpatched code clones at scale. When buggy code gets copied from project to project, eventually all projects will need to be patched. 
We call clones of buggy code that have been fixed in only a subset of projects unpatched code clones. Unfortunately, code copying is usually ad-hoc and is often not tracked, which makes it challenging to identify all unpatched vulnerabilities in code bases at the scale of entire OS distributions. We scale unpatched code clone detection to spot over 15,000 latent security vulnerabilities in 2.1 billion lines of code from the Linux kernel, all Debian and Ubuntu packages, and all C/C++ projects in SourceForge in three hours on a single machine. To the best of our knowledge, this is the largest set of bugs ever reported in a single paper.
APA, Harvard, Vancouver, ISO, and other styles
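The feature-hashing idea summarized in the abstract above can be sketched as follows. This is a hedged, minimal illustration rather than the dissertation's implementation: the API-call feature strings are invented, and a simple Jaccard comparison stands in for the full clustering and co-clustering pipeline.

```python
# Hedged sketch: hash variable-size malware feature sets into fixed-size bit
# vectors so that near-duplicate variants score as highly similar.
import hashlib
import numpy as np

def hash_features(features, n_bits=16):
    """Map a set of string features into a fixed-size boolean vector."""
    vec = np.zeros(1 << n_bits, dtype=bool)
    for f in features:
        h = int(hashlib.md5(f.encode()).hexdigest(), 16) % (1 << n_bits)
        vec[h] = True
    return vec

def jaccard(a, b):
    """Similarity between two hashed feature vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# two variants sharing most behavior, plus an unrelated sample
variant_a = hash_features({"CreateRemoteThread", "RegSetValue", "connect:80"})
variant_b = hash_features({"CreateRemoteThread", "RegSetValue", "connect:443"})
unrelated = hash_features({"printf", "fopen"})
```

With enough hash buckets, collisions are rare, so the similarity of the hashed vectors closely tracks the similarity of the original feature sets, at a fixed memory cost per sample.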
12

Galicia, Auyón Jorge Armando. « Revisiting Data Partitioning for Scalable RDF Graph Processing. Combining Graph Exploration and Fragmentation for RDF Processing. Query Optimization for Large Scale Clustered RDF Data. RDFPart-Suite : Bridging Physical and Logical RDF Partitioning. Reverse Partitioning for SPARQL Queries : Principles and Performance Analysis. Should We Be Afraid of Querying Billions of Triples in a Graph-Based Centralized System ? EXGRAF : Exploration et Fragmentation de Graphes au Service du Traitement Scalable de Requêtes RDF ». Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2021. http://www.theses.fr/2021ESMA0001.

Full text
Abstract:
The Resource Description Framework (RDF) and SPARQL are very popular graph-based standards initially designed to represent and query information on the Web. The flexibility offered by RDF motivated its use in other domains, and today RDF datasets are great information sources: they gather billions of triples in Knowledge Graphs that must be stored and efficiently exploited. The first generation of RDF systems was built on top of traditional relational databases. Unfortunately, performance in these systems degrades rapidly, as the relational model is not suitable for handling RDF data inherently represented as a graph. Native and distributed RDF systems seek to overcome this limitation. The former mainly use indexing as an optimization strategy to speed up queries; distributed and parallel RDF systems resort to data partitioning. In the relational model, the logical representation of the database is crucial to designing data partitions. The logical layer defining the explicit schema of the database provides a degree of comfort to database designers: it lets them choose, manually or automatically (through advisors), the tables and attributes to be partitioned, and it allows the core partitioning concepts to remain constant regardless of the database management system. This design scheme is no longer valid for RDF databases, essentially because the RDF model does not explicitly enforce a schema, since RDF data is mostly implicitly structured. The logical layer is thus inexistent, and data partitioning depends strongly on the physical implementation of the triples on disk. This situation leads to different partitioning logics depending on the target system, which is quite different from the relational model's perspective. In this thesis, we promote the novel idea of performing data partitioning at the logical level in RDF databases. We first process the RDF data graph to support logical entity-based partitioning. After this preparation, we present a partitioning framework built upon these logical structures, accompanied by data fragmentation, allocation, and distribution procedures. The framework was incorporated into a centralized (RDF_QDAG) and a distributed (gStoreD) triple store. We conducted several experiments that confirmed the feasibility of integrating our framework into existing systems, improving their performance for certain queries. Finally, we design a set of RDF data partitioning management tools, including a data definition language (DDL) and an automatic partitioning wizard.
APA, Harvard, Vancouver, ISO, and other styles
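The logical, entity-based partitioning promoted in the abstract above can be illustrated with a toy sketch. This is not the thesis's framework: the triples and prefixes are invented, and grouping by rdf:type is used as a stand-in for the logical entities the framework derives.

```python
# Hedged sketch: fragment an RDF triple set by the logical entity (rdf:type)
# of each triple's subject, instead of by physical triple layout.
triples = [
    ("ex:alice", "rdf:type", "ex:Person"),
    ("ex:alice", "ex:name", '"Alice"'),
    ("ex:acme", "rdf:type", "ex:Company"),
    ("ex:acme", "ex:employs", "ex:alice"),
]

# derive each subject's logical entity from its rdf:type triple
entity_of = {s: o for s, p, o in triples if p == "rdf:type"}

# fragment the triple set by logical entity
fragments = {}
for s, p, o in triples:
    fragments.setdefault(entity_of.get(s, "ex:Untyped"), []).append((s, p, o))
```

Each fragment can then be allocated to a node independently, which is the kind of logical-level decision the thesis argues should not depend on the physical triple store.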
13

McNamara, Kathryn. « Defense Data Network : usage sensitive billing ». Thesis, 1986. http://hdl.handle.net/10945/21764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Chien, Se-Fen, and 簡瑟芬. « Mining on the Convergent Billing Services Data for Telecommunication Business ». Thesis, 2003. http://ndltd.ncl.edu.tw/handle/26852064036354613323.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Information Management, Master's Program
91
As deregulation opens up the telecommunications industry, the market has seen a growing number of converging communications services. Suppliers of telecommunication services face heavy competition as customers demand ever more and better converging services, so the delivery of these services must become more suitable and creative. This study attempts to extract hidden knowledge, by means of data mining, from the huge volume of data stored in a telecommunication billing database used to support convergent billing services for large customers. Through this study, we discover two kinds of customer behavioral information: characterization of convergence services and purchasing behavior. Characterizing convergence services lets us understand customers better, increasing customer satisfaction and loyalty, while association patterns in purchasing behavior support the enterprise's marketing strategy. The mining methodology is based on techniques from artificial intelligence, such as decision trees, neural-network classification, and association analysis, which have been widely adopted to acquire customer knowledge patterns. Beyond pattern extraction, we make this knowledge explicit through rules and visual link charts, in an actionable format for business managers, who can then understand customers' needs and patterns and design proper relationship strategies based on the discovered characteristics.
APA, Harvard, Vancouver, ISO, and other styles
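The association-analysis step mentioned in the abstract above can be sketched as a small support/confidence computation. This is a hedged illustration, not the thesis's system: the service bundles and the minimum-support and minimum-confidence thresholds are invented.

```python
# Hedged sketch: mine one-to-one association rules between telecom services
# that appear together on convergent bills.
from itertools import combinations

bills = [  # each bill is the set of services a customer subscribes to
    {"voice", "dsl", "iptv"},
    {"voice", "dsl"},
    {"voice", "mobile"},
    {"voice", "dsl", "mobile"},
]

def support(itemset):
    """Fraction of bills containing every item in itemset."""
    return sum(itemset <= b for b in bills) / len(bills)

# keep rules X -> Y whose support >= 0.5 and confidence >= 0.6
rules = []
items = sorted({i for b in bills for i in b})
for x, y in combinations(items, 2):
    for lhs, rhs in ((x, y), (y, x)):
        s = support({lhs, rhs})
        if s >= 0.5 and s / support({lhs}) >= 0.6:
            rules.append((lhs, rhs, s))
```

On this toy data the procedure surfaces, for example, that DSL subscribers also take voice service, the kind of cross-selling pattern the abstract says supports marketing strategy.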
15

Kan, Kevin, and 甘永勝. « Using Data Mining Technique to improve billing system performance in semiconductor Industry ». Thesis, 2006. http://ndltd.ncl.edu.tw/handle/09701949595597951307.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Information Management Group, Executive Master's Program, College of Management
95
In today's dynamic and changing environment, companies need to be responsive, stay close to their customers, and deliver value-added products and services as quickly as possible. They also need to supply organizational information faster and better than their competitors. In semiconductor testing, besides emphasizing fast reaction to test changes and results, providing real-time and accurate billing data is equally important, both to guide enterprise strategy and to satisfy customer requirements. To build a new intelligent billing system for this purpose, this work mainly uses data-warehousing integration concepts to consolidate billing data and the decision-tree method of data mining to configure billing rules; together, the two methods make up the intelligent billing system. The result of this thesis is the design of the new intelligent billing system, which speeds up billing implementation for new customers and improves billing performance; this matters especially because the company already has many customers for whom the billing system has not yet been implemented. Finally, the new intelligent billing system provides accurate, near-real-time revenue data viewable by customer and by tester machine, and feeds real-time enterprise revenue into EIS, DSS, or BI systems, letting the enterprise make the right strategic decisions in a timely manner.
APA, Harvard, Vancouver, ISO, and other styles
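The decision-tree step the abstract describes for configuring billing rules can be sketched with a one-level tree (a stump). This is a hedged toy example, not the thesis's system: the billing records, features, thresholds, and plan labels are all invented.

```python
# Hedged sketch: learn a one-level decision tree (stump) that infers a
# billing rule from past test-billing records.
records = [  # (test_hours, wafer_lots) -> billing plan
    ((2, 1), "per-hour"),
    ((3, 1), "per-hour"),
    ((40, 8), "per-lot"),
    ((60, 10), "per-lot"),
]

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_stump(data):
    """Pick the (feature, threshold) split minimising weighted Gini impurity."""
    best = None
    for f in range(len(data[0][0])):
        for (x, _) in data:
            thr = x[f]
            left = [y for (v, y) in data if v[f] <= thr]
            right = [y for (v, y) in data if v[f] > thr]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(data)
            if best is None or score < best[0]:
                best = (score, f, thr)
    return best

score, feature, threshold = best_stump(records)
```

The resulting split can be read off as a billing rule ("up to 3 test hours, bill per hour; otherwise per lot"); a full tree learner would recurse on each side of the split.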
16

Firmino, Bruno Manuel Paias. « Smart Monetization - Telecom Revenue Management beyond the traditional invoice ». Master's thesis, 2019. http://hdl.handle.net/10362/113609.

Full text
Abstract:
Nowadays, technology evolves quickly and unpredictably, with new systems constantly emerging on the market that are capable of being monetized. However, these systems are not always fully and flexibly explored: many hardware distributors sell products without a clear view of sustainable business models for them, leaving these as an afterthought. Communications Service Providers are suddenly under pressure to modernize and expand their business models so as to regain the ground claimed by over-the-top service providers, who make use of existing infrastructures to provide their own services, which naturally may lead to substantial revenue loss for the actual infrastructure owners. The Smart Monetization project aims to explore this paradigm through the design and implementation of a reusable asset, making use of Big Data and analytics tools that can ingest and process usage and billing data from customers, detecting event patterns and correlations that can be monetized. This leads to improved and new service experiences and ensures greater transparency in the process of billing and charging for these services.
APA, Harvard, Vancouver, ISO, and other styles
17

De, Wet Dawid Pieter. « A scalable business model for mass customization of broadband services in the emerging Africa market / Dawid Pieter de Wet ». Thesis, 2012. http://hdl.handle.net/10394/15176.

Full text
Abstract:
Africa’s rapid adoption of the mobile phone is quickly closing the digital divide in voice services. But, just as one divide is closing, another one is widening. Consumers almost everywhere are demanding more services and higher Internet access data rates. In the developing world the knowledge gained through access to information is creating unprecedented opportunities and is having a dramatic impact on the way people live and work. Africa, however, has been largely left behind in the shift to broadband. Increasing the availability and affordability of broadband services is thus high on the agenda for policy makers in Africa, though it will require major efforts from both government and the private sector. Fundamental to the all efforts to close the “digital divide” is the need to provide a ubiquitous and affordable access network that will enable distribution of broadband services to anywhere, and anytime throughout Africa. While many kinds of broadband services are being offered to the African population, the currently available services have failed to reach the majority of Africans living in rural areas. This poses a very pertinent question that justifies further investigations: why have the existing broadband services failed to satisfy Africa’s need for a ubiquitous digital communication service. The lack of penetration of the existing services makes it clear that a different technology and service offering is needed, a service offering that is affordable to the large consumer market segment and which can complement the mobile and ADSL broadband networks to provide services to all of Africa on a cost effective basis. 
This research work investigates the current business and technology domains and develops the new knowledge and insights required, first, to understand why existing broadband services are failing to reach rural Africa and, second, to understand what criteria must be satisfied to deliver broadband access services to the mass consumer African market. The research focuses on the interrelationships between markets, technology and business in the consumer broadband market and defines new thinking to guide the future development of more suitable broadband offerings for the rural African market. The study centres on three principal areas of knowledge contribution.
Analysis of the primary factors impacting the delivery of broadband services
First, the study addresses the current market dynamics and technology realities to determine two critical aspects: 1) Can the mass market afford broadband services, or will they remain the privilege of the higher income groups? And 2) Can existing mobile broadband, ADSL and satellite access services meet the demand to service the mass market, or is an alternative technology option required? Through analytical review, the study determines that there is a large, and growing, middle-class market that can afford broadband access services. This market sector is quantified in terms of consumer income levels and demographic user data. The study formulates the commercial and service criteria applicable to a broadband access service in servicing this target market. The study further investigates the availability, affordability and market penetration of the current mobile and ADSL broadband services and finds that the available service options cannot effectively meet current and future demand. This limitation leaves a large under-serviced consumer market in Africa.
The study proposes a unique approach to quantify the specific under-serviced gap that will not be met by currently available broadband technologies. The comparative technology study provides new insight into the limitations of mobile 3G broadband services and why this technology will not be able to meet the future demand for consumer broadband services in Africa. It furthermore quantifies the advantages of using satellite technology to implement a mass consumer broadband service in Africa. The study shows that the ubiquitous nature and rapid deployment capabilities of satellite access networks provide distinct benefits when deploying a mass consumer network, making satellite the technology of choice for consumer broadband services. We then assess the ability of existing satellite broadband offerings to satisfy the needs of African end-users, and find that those offerings have been optimized for the needs and affordability levels of customers from the developed world. The result is that satellite broadband services aimed at the African end-user are primarily used by corporate and institutional customers, with little penetration of the consumer market. This finding provides the motivation for developing a business model that can leverage available technology to effectively service the African consumer market.
Innovation of new concepts to support a viable broadband business strategy
The mobile prepay model as well as the DStv pay-TV subscription service have demonstrated the need for specific business innovation to ensure successful market adoption of new technologies. Both of these industries have shown that innovative approaches in the commercialization of technology solutions are critical to ensuring mass adoption.
The second section of the study therefore focuses on the innovations required to overcome the obstacles identified in section 1, in order to arrive at a business strategy and business model that will prove viable in the delivery of broadband services to the rural African consumer market. The first challenge is the selection of the most appropriate technology platforms and the architectural design of the delivery systems to effectively service the mass consumer market. To adapt the business models employed by existing satellite broadband service providers, the study defines the following two business innovation concepts that contribute to a new business paradigm for mass-market broadband access services: 1) Through applied billing model innovation, the study defines a new billing structure for broadband services and sets a completely new paradigm in which users influence the cost of the service. The new billing model gives end-users the ability to adapt their broadband usage patterns to meet their budget constraints. 2) Successfully delivering a technology service to an emerging market requires a very specific organisational structure that effectively integrates knowledge, capability and funding while minimizing risk and uncertainty. The study proposes a new symbiotic organisational structure that elegantly combines capability and knowledge while minimizing funding requirements to ensure acceptable market development risk.
Development of a business model simulator for satellite broadband service delivery
The deployment of a new type of satellite broadband service to rural Africa on an experimental basis is too expensive to be conducted for research purposes. A more practical approach, also widely used in other domains of engineering, is to construct a simulated model of the system being studied.
The third knowledge contribution area of the study therefore focuses on constructing a mathematical model of the expected behavior of a business operation that provides satellite-based broadband services to the African market. This simulator can be applied to quantitatively analyze various existing or proposed new business strategies. The business model simulation integrates all the business, market, technology and commercial relationships that impact the expected behavior of such an operation and provides a quantified model of expected business behavior based on the underlying dynamics of the satellite broadband industry. The development and validation of the business model simulator represent a unique contribution to this industry, as no results of a similar model representing the operations of a satellite broadband access service provider have been published before. The model empowers service providers and industry stakeholders to analyze different business strategies and to quantify the impact of various business decisions. In general, this research work adds knowledge and insight to the field of applied business strategy as applicable to providing advanced technology-based services for emerging markets. The final outcome of this research study is the business model simulator. It integrates various market and business elements as well as satellite network engineering practices into an integrated financial cost modelling, business scenario planning and engineering network design tool. Through this integration of known disciplines, the study provides an additional extension to the field of satellite business engineering.
PhD (Electronic Engineering), North-West University, Potchefstroom Campus, 2013
Styles APA, Harvard, Vancouver, ISO, etc.
18

Ahmed, Aly. « Complex graph algorithms using relational database ». Thesis, 2021. http://hdl.handle.net/1828/13306.

Texte intégral
Résumé :
Data processing for Big Data plays a vital role for decision-makers in organizations and government, enhances the user experience, and provides quality results in prediction analysis. However, many modern data processing solutions, such as Hadoop and Spark, require a significant investment in hardware and maintenance costs, often neglecting the well-established and widely used relational database management systems (RDBMSs). In this dissertation, we study three fundamental graph problems in RDBMSs. The first problem we tackle is computing shortest paths (SP) from a source to a target in large network graphs. We explore SQL-based solutions and leverage the intelligent scheduling that an RDBMS performs when executing set-at-a-time expansions of graph vertices, in contrast to the vertex-at-a-time expansions of classical SP algorithms. Our algorithms perform orders of magnitude faster than baselines and outperform counterparts in native graph databases. Second, we study the PageRank problem, which is vital in Google Search and social network analysis for determining how to sort search results and identify important nodes in a graph. PageRank is an iterative algorithm, which imposes challenges when implementing it over large graphs. We study computing PageRank with an RDBMS for very large graphs on a consumer-grade machine and compare the results to a dedicated graph database. We show that our RDBMS solution is able to process graphs of more than a billion edges in a few minutes, whereas native graph databases fail to handle graphs of much smaller sizes. Last, we present a carefully engineered RDBMS solution to the problem of triangle enumeration for very large graphs. We show that RDBMSs are suitable tools for enumerating billions of triangles in billion-scale networks on a consumer-grade machine. We also compare our RDBMS solution's performance to a native graph database and show that our solution outperforms it by orders of magnitude.
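The relational approach to triangle enumeration described in this abstract can be illustrated with a minimal sketch: a three-way self-join over an edge table, with each undirected edge stored once as (src, dst) with src < dst so every triangle is reported exactly once. This uses SQLite as a stand-in; the table name, schema, and sample graph are illustrative assumptions, not taken from the dissertation's engineered solution.

```python
import sqlite3

# In-memory database with one undirected edge per row, stored with src < dst.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE edges (src INTEGER, dst INTEGER)")
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (4, 5)]
cur.executemany("INSERT INTO edges VALUES (?, ?)", edges)

# A triangle (a, b, c) with a < b < c is exactly one match of
# edge (a,b) joined with edge (b,c) joined with edge (a,c);
# the optimizer schedules the joins set-at-a-time rather than
# expanding one vertex at a time.
cur.execute("""
    SELECT e1.src, e1.dst, e2.dst
    FROM edges e1
    JOIN edges e2 ON e2.src = e1.dst
    JOIN edges e3 ON e3.src = e1.src AND e3.dst = e2.dst
""")
triangles = cur.fetchall()
print(sorted(triangles))  # [(1, 2, 3), (2, 3, 4)]
conn.close()
```

The same query shape runs unchanged on any SQL engine; at billion-edge scale, indexing `edges(src, dst)` and the join order chosen by the optimizer become the deciding performance factors.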
Graduate
Styles APA, Harvard, Vancouver, ISO, etc.
