Journal articles on the topic 'Client application for Domain Name System'

Consult the top 50 journal articles for your research on the topic 'Client application for Domain Name System.'

1

Ma, Liang, and Xiaole Zhao. "Research on the multi-terminal adapting Web system construction." MATEC Web of Conferences 232 (2018): 01028. http://dx.doi.org/10.1051/matecconf/201823201028.

Abstract:
Among the various deployment patterns for Internet applications, Web systems based on the B/S (browser/server) architecture have an inherent cross-platform advantage: the same application can be accessed via the same domain name from any operating system, as long as the client-side software (a browser) is installed. Thus, when developing an Internet application whose functional requirements can be met this way, a Web system should be given top priority. With the rapid development of information technology such as intelligent terminals, mobile interconnection and cloud computing, how to construct Web systems that adapt to multiple terminals deserves research. Based on a systematic and comprehensive investigation, this article explores feasible ways to construct a multi-terminal adapting Web system, addressing this current concern through feasibility analysis and prototype design.
2

Yunianta, Arda, Norazah Yusof, Arif Bramantoro, Haviluddin Haviluddin, Mohd Shahizan Othman, and Nataniel Dengen. "Data mapping process to handle semantic data problem on student grading system." International Journal of Advances in Intelligent Informatics 2, no. 3 (December 1, 2016): 157. http://dx.doi.org/10.26555/ijain.v2i3.84.

Abstract:
Many applications are developed in the education domain. Information and data for each application are stored in distributed locations with different data representations in each database. This situation leads to heterogeneity at the level of data integration. Heterogeneous data may cause many problems. One major issue concerns the semantic relationships among applications in the education domain, in which learning data may have the same name but a different meaning, or a different name but the same meaning. This paper discusses a semantic data mapping process to handle this semantic relationship problem in the education domain. There are two main parts in the semantic data mapping process. The first part is a semantic data mapping engine that produces a data mapping language in the Turtle (.ttl) file format as a standard schema, which can be used by a local Java application through the Jena library and a triple store. The Turtle file contains detailed information about the data schema of every application inside the database system. The second part provides a D2R server that can be accessed from the outside environment over HTTP, through SPARQL clients, Linked Data clients (RDF formats) and HTML browsers. To implement the semantic data process, this paper focuses on the student grading system in the learning environment of the education domain. Following the proposed semantic data mapping process, a Turtle file is produced by the first part of the process. Finally, this file is combined and integrated with other Turtle files in order to map and link to the data representations of other applications.
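As an illustration of the kind of pipeline the abstract describes (a Turtle mapping file queried over SPARQL, locally via Jena or remotely through a D2R endpoint), here is a minimal Python sketch using rdflib in place of Jena; the file name and vocabulary URIs are hypothetical placeholders, not the paper's actual schema.

```python
from rdflib import Graph

# Load a hypothetical Turtle mapping file produced by the mapping engine.
g = Graph()
g.parse("student_grades.ttl", format="turtle")

# Query the mapped grading data with SPARQL; the vocabulary is a placeholder.
query = """
PREFIX ex: <http://example.org/grading#>
SELECT ?student ?grade
WHERE {
    ?record ex:studentName ?student ;
            ex:finalGrade  ?grade .
}
"""
for student, grade in g.query(query):
    print(student, grade)
```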
3

Khudoyberdiev, Jin, and Kim. "A Novel Approach towards Resource Auto-Registration and Discovery of Embedded Systems Based on DNS." Electronics 8, no. 4 (April 17, 2019): 442. http://dx.doi.org/10.3390/electronics8040442.

Abstract:
The Internet of Things (IoT) is expected to deliver a whole range of new services to all parts of our society, and to improve the way we work and live. The challenges within the Internet of Things are often related to interoperability, device resource constraints, device-to-device connectivity and security. One of the essential elements of identification for each IoT device is its name and address. With a naming system, IoT devices can be discovered by users. In this paper, we propose IoT resource auto-registration and indoor service access based on the Domain Name System (DNS) in the Open Connectivity Foundation (OCF) environment. We use an IoT platform and a DNS server for IoT resource auto-registration and discovery over Internet Protocol version 4 (IPv4). An existing system, Domain Name Auto-Registration in Internet Protocol version 6, can be used by IoT devices for auto-registration and resource discovery. However, that system is not suitable for existing networks, because the majority of networks on the Internet are still configured for IPv4. Through the proposed auto-registration system, clients can discover resources and access services in the OCF network. The Constrained Application Protocol (CoAP) is used for IoT device auto-registration and for accessing services in the OCF network.
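For readers unfamiliar with CoAP, the sketch below shows what a client-side resource discovery request looks like in Python with the aiocoap library, querying the standard /.well-known/core resource; the device host name is an assumption, and the paper's OCF/DNS auto-registration logic is not reproduced here.

```python
import asyncio
from aiocoap import Context, Message, GET

async def discover(host):
    # Resource discovery: ask the device for its /.well-known/core link list.
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri=f"coap://{host}/.well-known/core")
    response = await ctx.request(request).response
    print(response.payload.decode())

# "iot-device.local" is a placeholder for a name registered in DNS.
asyncio.run(discover("iot-device.local"))
```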
4

Victors, Jesse, Ming Li, and Xinwen Fu. "The Onion Name System." Proceedings on Privacy Enhancing Technologies 2017, no. 1 (January 1, 2017): 21–41. http://dx.doi.org/10.1515/popets-2017-0003.

Abstract:
Abstract Tor onion services, also known as hidden services, are anonymous servers of unknown location and ownership that can be accessed through any Tor-enabled client. They have gained popularity over the years, but since their introduction in 2002 they still suffer from major usability challenges, primarily due to their cryptographically generated, non-memorable addresses. In response to this difficulty, in this work we introduce the Onion Name System (OnioNS), a privacy-enhanced decentralized name resolution service. OnioNS allows Tor users to reference an onion service by a meaningful, globally unique, verifiable domain name chosen by the onion service administrator. We construct OnioNS as an optional backwards-compatible plugin for Tor, simplify our design and threat model by embedding OnioNS within the Tor network, and provide mechanisms for authenticated denial-of-existence with minimal networking costs. We introduce a lottery-like system to reduce the threat of land rushes and domain squatting. Finally, we provide a security analysis, integrate our software with the Tor Browser, and conduct performance tests of our prototype.
5

Xiao, Yu Zhi. "On the Research of Internet Equipment Naming Methods." Advanced Materials Research 664 (February 2013): 1021–27. http://dx.doi.org/10.4028/www.scientific.net/amr.664.1021.

Abstract:
This paper analyzes Internet naming problems as well as the problems that currently exist. We discuss the namespace and the major technologies, elaborate in detail the important lines of thinking in research on naming problems, and propose directions for future development. In the paper we propose a new method for device naming that addresses the issues in current Internet naming. This technique can solve the problems of extremely scarce IPv4 addresses and of frequently changing IP addresses on Internet devices. The equipment naming scheme has five major components: an equipment name registration server, an equipment communication server, a client domain name server, a client communication agent, and client login and configuration. It is a pattern in which the mobile device uses configured client software to issue a request to the NG-DNS (new-generation domain name system). We analyze the realization of the key technology on the Windows system. The scheme realizes communication among devices that are mobile, wireless, have dynamic IP addresses, and sit behind NAT and firewalls. The method can be extended to Android devices.
6

Ji, Hong. "Research on Design and Security Strategy of DNS." Applied Mechanics and Materials 378 (August 2013): 510–13. http://dx.doi.org/10.4028/www.scientific.net/amm.378.510.

Abstract:
The DNS service plays a very important role in the Internet: each host must use it to query the IP address of the destination host before the two can communicate with each other. DNS uses a distributed database structure and a server/client mode, in which domain names are stored on servers and clients are allowed to access the required data. DNS can resolve a host name to the corresponding IP address, and it can also resolve an IP address back to a host name. With the development of information technology, network attacks occur more and more frequently on the Internet. The DNS system has therefore suffered a series of attacks, and communication on the Internet has been severely affected. As a result, the security problems of DNS are of increasing concern. The paper describes the composition, features and working principle of the DNS system. Based on the security risks of the system, it proposes corresponding security strategies and designs a reliable and safe DNS.
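The forward and reverse resolution described above can be exercised with nothing more than the Python standard library; note that a reverse (PTR) lookup only succeeds when the address owner has published a PTR record.

```python
import socket

# Forward lookup: resolve a host name to an IPv4 address.
ip = socket.gethostbyname("www.example.com")
print("A record:", ip)

# Reverse lookup: map the address back to a host name via its PTR record.
name, aliases, addresses = socket.gethostbyaddr(ip)
print("PTR record:", name)
```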
7

Díaz-Sánchez, Daniel, Andrés Marín-Lopez, Florina Almenárez Mendoza, and Patricia Arias Cabarcos. "DNS/DANE Collision-Based Distributed and Dynamic Authentication for Microservices in IoT †." Sensors 19, no. 15 (July 26, 2019): 3292. http://dx.doi.org/10.3390/s19153292.

Abstract:
IoT devices provide real-time data to a rich ecosystem of services and applications. The volume of data and the involved subscribe/notify signaling will likely become a challenge also for access and core networks. To alleviate the core of the network, other technologies like fog computing can be used. On the security side, designers of IoT low-cost devices and applications often reuse old versions of development frameworks and software components that contain vulnerabilities. Many server applications today are designed using microservice architectures where components are easier to update. Thus, IoT can benefit from deploying microservices in the fog as it offers the required flexibility for the main players of ubiquitous computing: nomadic users. In such deployments, IoT devices need the dynamic instantiation of microservices. IoT microservices require certificates so they can be accessed securely. Thus, every microservice instance may require a newly-created domain name and a certificate. The DNS-based Authentication of Named Entities (DANE) extension to Domain Name System Security Extensions (DNSSEC) allows linking a certificate to a given domain name. Thus, the combination of DNSSEC and DANE provides microservices’ clients with secure information regarding the domain name, IP address, and server certificate of a given microservice. However, IoT microservices may be short-lived since devices can move from one local fog to another, forcing DNSSEC servers to sign zones whenever new changes occur. Considering DNSSEC and DANE were designed to cope with static services, coping with IoT dynamic microservice instantiation can throttle the scalability in the fog. To overcome this limitation, this article proposes a solution that modifies the DNSSEC/DANE signature mechanism using chameleon signatures and defining a new soft delegation scheme. Chameleon signatures are signatures computed over a chameleon hash, which have a property: a secret trapdoor function can be used to compute collisions to the hash. Since the hash is maintained, the signature does not have to be computed again. In the soft delegation schema, DNS servers obtain a trapdoor that allows performing changes in a constrained zone without affecting normal DNS operation. In this way, a server can receive this soft delegation and modify the DNS zone to cope with frequent changes such as microservice dynamic instantiation. Changes in the soft delegated zone are much faster and do not require the intervention of the DNS primary servers of the zone.
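To make the DANE mechanism concrete, the sketch below fetches the TLSA record that binds a certificate to a TLS service name, using the dnspython library; the domain is a placeholder, and a production client would additionally validate the DNSSEC signatures, which this fragment does not do.

```python
import dns.resolver

# Look up the TLSA record DANE associates with HTTPS on a (placeholder) host.
answers = dns.resolver.resolve("_443._tcp.example.org", "TLSA")
for rdata in answers:
    # Usage, selector and matching type, plus the certificate association data.
    print(rdata.usage, rdata.selector, rdata.mtype, rdata.cert.hex())
```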
8

Zhang, Chen Guang, Yu Lan Zhao, Feng Xian Jiang, and Chao Ji. "The Research and Application of Short Message Name and Address System." Advanced Materials Research 225-226 (April 2011): 1008–11. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.1008.

Abstract:
With the rapid development of mobile communication technology, the short message name and address system [1] has developed swiftly. We use the short message as a carrier for delivering the requests that clients make and returning the answers they seek. Based on research on this system, this paper analyzes the problems and defects that exist in the system, upgrades the transfer mode to a standard one, and improves the efficiency of data query and transfer. Taking advantage of the strengths of this system and innovating on the technology, we have completed business software based on the J2ME platform [2-7]. In the research and development process we designed client-side functions and a structural model combined with functions used on mobile phones; accordingly, the system technology has been put into practice to some extent. We also optimized the software interface with a third-party plug-in called "Polish". This work should provide far-reaching significance for later research on this technology.
9

Bein, Adrian Sean, and Alexander Williams. "Networking IP Restriction filtering and network address." IAIC Transactions on Sustainable Digital Innovation (ITSDI) 1, no. 2 (April 30, 2020): 172–77. http://dx.doi.org/10.34306/itsdi.v1i2.149.

Abstract:
Setting permissions on a computer is necessary, so that users cannot easily change the system configuration or settings. On a computer network, permissions do not need to be set one by one manually, because a network is a collection of many computers connected together. With appropriate permission settings, a client-server application can enforce access restrictions effectively. Such a client-server application can be created using Visual Basic 6.0. This language can access sockets on the Windows operating system through the Winsock API, which supports TCP/IP; this protocol is widely used because of its reliability for client-server application programming. The application is divided into two main programs: a client program named Receiver and a server program named Sender. The Receiver receives access-restriction instructions from the Sender and sends execution reports back to it, while the Sender sends permission-restriction instructions to the Receiver via the Registry. After testing, the application can block important features available in the Windows operating system. It is therefore expected that these applications can help with permission settings on a computer network.
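The Sender/Receiver pattern described in the abstract is a plain TCP exchange; a rough Python analogue of the Sender side is sketched below (the original work uses Visual Basic 6.0 and the Winsock API, and the host, port and message format here are invented for illustration).

```python
import socket

# Hypothetical Sender: push one restriction instruction to the Receiver
# and wait for its execution report.
with socket.create_connection(("receiver-host", 5050), timeout=5) as conn:
    conn.sendall(b"DISABLE_REGISTRY_EDITOR\n")
    report = conn.recv(1024)
    print("Receiver reported:", report.decode())
```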
10

Jia, Bin. "The Research and Application of Short Message Name and Address Technology." Applied Mechanics and Materials 432 (September 2013): 571–74. http://dx.doi.org/10.4028/www.scientific.net/amm.432.571.

Abstract:
Based on research on the short message name and address system, this paper analyzes the problems and defects that exist in the system, upgrades the transfer mode to a standard one, and improves the efficiency of data query and transfer. Taking advantage of the strengths of this system and innovating on the technology, we have completed business software based on the J2ME platform. In the research and development process we designed client-side functions and a structural model combined with functions used on mobile phones; accordingly, the system technology has been put into practice to some extent. We also optimized the software interface with a third-party plug-in called "Polish". This work should provide far-reaching significance for later work on this technology.
11

Tian, Hong Cheng, Hong Wang, and Jin Kui Ma. "Domain Name System during the Transition from IPv4 to IPv6." Applied Mechanics and Materials 687-691 (November 2014): 1912–15. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1912.

Abstract:
IPv4 and IPv6 will coexist for a long time, due to ISPs' inertia in the transition from IPv4 to IPv6. The Domain Name System (DNS) is a very important functional unit of the Internet. This paper describes the hierarchy and operating process of IPv6 DNS and the IPv6 DNS resolver, and presents the DNS transition from IPv4 to IPv6 in particular. We suggest two methods to implement DNS service during the transition period: DNS Application Level Gateway (DNS-ALG) with Network Address Translation-Protocol Translation (NAT-PT), and dual stacks. We also describe their respective operating principles. This paper is a valuable reference for network engineers constructing DNS in the transition phase.
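During the coexistence period the same name can resolve to both address families; the standard-library call below returns A and AAAA results together when the resolver and host support dual stack, which corresponds to the dual-stack option mentioned above.

```python
import socket

# Dual-stack resolution: getaddrinfo yields IPv4 (A) and IPv6 (AAAA) results.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "www.example.com", 80, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```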
12

Auer, Michael, and Alexander Zipf. "3D WebGIS: From Visualization to Analysis. An Efficient Browser-Based 3D Line-of-Sight Analysis." ISPRS International Journal of Geo-Information 7, no. 7 (July 21, 2018): 279. http://dx.doi.org/10.3390/ijgi7070279.

Abstract:
3D WebGIS systems have been mentioned in the literature almost since the beginning of the graphical web era in the late 1990s. The potential use of 3D WebGIS is linked to a wide range of scientific and application domains, such as planning, controlling, tracking or simulation in crisis management, military mission planning, urban information systems, energy facilities or cultural heritage management, just to name a few. Nevertheless, many applications or research prototypes entitled as 3D WebGIS or similar are mainly about 3D visualization of GIS data or the visualization of analysis results, rather than about performing the 3D analysis itself online. This research paper aims to step forward into the direction of web-based 3D geospatial analysis. It describes how to overcome speed and memory restrictions in web-based data management by adapting optimization strategies, developed earlier for web-based 3D visualization. These are applied in a holistic way in the context of a fully 3D line-of-sight computation over several layers with split (tiled) and unsplit (static) data sources. Different optimization approaches are combined and evaluated to enable an efficient client side analysis and a real 3D WebGIS functionality using new web technologies such as HTML5 and WebGL.
13

Schmitt, Paul, Anne Edmundson, Allison Mankin, and Nick Feamster. "Oblivious DNS: Practical Privacy for DNS Queries." Proceedings on Privacy Enhancing Technologies 2019, no. 2 (April 1, 2019): 228–44. http://dx.doi.org/10.2478/popets-2019-0028.

Abstract:
Abstract Virtually every Internet communication typically involves a Domain Name System (DNS) lookup for the destination server that the client wants to communicate with. Operators of DNS recursive resolvers—the machines that receive a client’s query for a domain name and resolve it to a corresponding IP address—can learn significant information about client activity. Past work, for example, indicates that DNS queries reveal information ranging from web browsing activity to the types of devices that a user has in their home. Recognizing the privacy vulnerabilities associated with DNS queries, various third parties have created alternate DNS services that obscure a user’s DNS queries from his or her Internet service provider. Yet, these systems merely transfer trust to a different third party. We argue that no single party ought to be able to associate DNS queries with a client IP address that issues those queries. To this end, we present Oblivious DNS (ODNS), which introduces an additional layer of obfuscation between clients and their queries. To do so, ODNS uses its own authoritative namespace; the authoritative servers for the ODNS namespace act as recursive resolvers for the DNS queries that they receive, but they never see the IP addresses for the clients that initiated these queries. We present an initial deployment of ODNS; our experiments show that ODNS introduces minimal performance overhead, both for individual queries and for web page loads. We design ODNS to be compatible with existing DNS protocols and infrastructure, and we are actively working on an open standard with the IETF.
14

Zhang, Zhi Chun, Song Yan Lu, Kun Xu, Zhuang Xiong, and Yun He. "A Transparent Communication System in Distributed Systems." Advanced Materials Research 909 (March 2014): 311–16. http://dx.doi.org/10.4028/www.scientific.net/amr.909.311.

Abstract:
This paper proposes a communication system that facilitates the development of communication for distributed systems based on TCP/IP networks. The system provides real-time communication service based on UDP, subnetting, broadcast and multi-NIC (network interface card) configuration within the same host. It also provides reliable communication service based on TCP. These services are exposed as application programming interfaces (APIs) that can be called from client applications, with IHBs (Information Harbors) identifying the communication end-points. An IHB is a name defined in a network configuration file (NCF) that makes the network transparent, so that all network concepts are invisible to the users. The NCF is the same for all client applications, which guarantees that the communication end-points are identical. The system is general: it can be applied to any network without modifying any source code, and it is easy to use. To apply it to a new network, all that is required is to provide an NCF. Its use in many flight simulators shows that the system makes communication more efficient to implement.
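The paper's IHB/NCF idea, addressing an end-point by a configured name rather than by host and port, can be sketched roughly as follows in Python; the NCF format, IHB name and addresses are hypothetical, since the actual system's API is not given in the abstract.

```python
import json
import socket

# Hypothetical NCF: maps IHB names to concrete transport end-points.
NCF = json.loads("""
{"flight_state": {"host": "10.0.0.12", "port": 6001, "proto": "udp"}}
""")

def send_to_ihb(ihb_name, payload):
    # Client code names only the IHB; the NCF hides every network detail.
    entry = NCF[ihb_name]
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (entry["host"], entry["port"]))

send_to_ihb("flight_state", b"altitude=1200")
```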
15

Jeong, Taikyeong Ted. "Highly scalable intelligent sensory application and time domain matrix for safety-critical system design." International Journal of Distributed Sensor Networks 14, no. 4 (April 2018): 155014771774110. http://dx.doi.org/10.1177/1550147717741102.

Abstract:
The designs of highly scalable intelligent sensory applications (Ethernet-based communication architectures) are moving toward the integration of fault-recovery and fault-detection algorithms in the automotive industry. In particular, each port on the same network interface card design is required to provide highly scalable, low-latency communication. In this article, we present a study of an intelligent sensory application for the Ethernet-based communication architecture and of the performance of a multi-port configuration, which is mainly used in safety-enhanced applications such as automotive, military, finance, and aerospace, in other words, safety-critical applications. Our contributions and observations on highly scalable intelligent behavior are: (1) the proposed network interface card board design scheme and architecture with multi-port configuration form a stable network configuration; (2) a timing matrix is defined for fault detection and recovery time; (3) experimental and related verification methods using cyclic redundancy checks between client-server and testing platforms provide comparable results for each port configuration; and (4) an application program interface-level algorithm is defined to make the network interface card ready for fault detection.
16

Singanamalla, Sudheesh, Suphanat Chunhapanya, Jonathan Hoyland, Marek Vavruša, Tanya Verma, Peter Wu, Marwan Fayed, Kurtis Heimerl, Nick Sullivan, and Christopher Wood. "Oblivious DNS over HTTPS (ODoH): A Practical Privacy Enhancement to DNS." Proceedings on Privacy Enhancing Technologies 2021, no. 4 (July 23, 2021): 575–92. http://dx.doi.org/10.2478/popets-2021-0085.

Abstract:
Abstract The Internet’s Domain Name System (DNS) responds to client hostname queries with corresponding IP addresses and records. Traditional DNS is unencrypted and leaks user information to on-lookers. Recent efforts to secure DNS using DNS over TLS (DoT) and DNS over HTTPS (DoH) have been gaining traction, ostensibly protecting DNS messages from third parties. However, the small number of available public large-scale DoT and DoH resolvers has reinforced DNS privacy concerns, specifically that DNS operators could use query contents and client IP addresses to link activities with identities. Oblivious DNS over HTTPS (ODoH) safeguards against these problems. In this paper we implement and deploy interoperable instantiations of the protocol, construct a corresponding formal model and analysis, and evaluate the protocols’ performance with wide-scale measurements. Results suggest that ODoH is a practical privacy-enhancing replacement for DNS.
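ODoH builds on ordinary DNS over HTTPS by inserting an oblivious proxy between client and target. The sketch below shows only the underlying DoH step, a resolver queried over HTTPS using the JSON API exposed by some public resolvers; the resolver URL is an example, and the proxy and encryption layer that make the protocol oblivious are not shown.

```python
import requests

# Plain DoH lookup over the JSON API (the building block that ODoH extends).
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```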
17

Hussain, Mohammed Abdulridha, Hai Jin, Zaid Alaa Hussien, Zaid Ameen Abduljabbar, Salah H. Abbdal, and Ayad Ibrahim. "Enc-DNS-HTTP: Utilising DNS Infrastructure to Secure Web Browsing." Security and Communication Networks 2017 (2017): 1–15. http://dx.doi.org/10.1155/2017/9479476.

Abstract:
Online information security is a major concern for both users and companies, since data transferred via the Internet is becoming increasingly sensitive. The World Wide Web uses Hypertext Transfer Protocol (HTTP) to transfer information and Secure Sockets Layer (SSL) to secure the connection between clients and servers. However, Hypertext Transfer Protocol Secure (HTTPS) is vulnerable to attacks that threaten the privacy of information sent between clients and servers. In this paper, we propose Enc-DNS-HTTP for securing client requests, protecting server responses, and withstanding HTTPS attacks. Enc-DNS-HTTP is based on the distribution of a web server public key, which is transferred via a secure communication between client and a Domain Name System (DNS) server. This key is used to encrypt client-server communication. The scheme is implemented in the C programming language and tested on a Linux platform. In comparison with Apache HTTPS, this scheme is shown to have more effective resistance to attacks and improved performance since it does not involve a high number of time-consuming operations.
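The core idea of Enc-DNS-HTTP, distributing a web server's public key through DNS and using it to protect the client's request, can be sketched as follows; the TXT record name, key encoding and padding choices are assumptions made for illustration only (the paper's implementation is in C and its exact message formats differ).

```python
import dns.resolver
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Fetch a PEM-encoded RSA public key from a hypothetical TXT record.
txt = dns.resolver.resolve("_pubkey.example.org", "TXT")
pem = b"".join(txt[0].strings)
public_key = serialization.load_pem_public_key(pem)

# Encrypt the client's request with the server's public key.
ciphertext = public_key.encrypt(
    b"GET /index.html",
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
print(len(ciphertext), "bytes of ciphertext")
```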
18

Setiawan, Widyadi, I. Nyoman Budiastra, and Sri Andriati Asri. "Android Application of Traffic Density Visualization Based on Vehicle Speed." Journal of Electrical, Electronics and Informatics 1, no. 1 (February 2, 2017): 7. http://dx.doi.org/10.24843/jteei.v01i01p02.

Abstract:
This paper presents a system to display traffic density in real time based on the speed of vehicles on the main roads in the city of Denpasar. With this application, users who are in a vehicle can obtain information on road density. The software runs on the Android platform and is built with the help of Google Maps, visualizing the density of the roads under review. Vehicle speed is measured using the frame-difference method, so the computation can run quickly and in real time. In the trials reported in this paper, the user side (vehicle speed measurement) produces the same data as the data received by the client (visualization viewer), displayed in a format consisting of the location name, vehicle speed, and the date and time of data retrieval.
19

Setiawan, Widyadi, I. Nyoman Budiastra, and Sri Andriati Asri. "Android Application of Traffic Density Visualization Based on Vehicle Speed." Journal of Electrical, Electronics and Informatics 1, no. 1 (February 2, 2017): 7. http://dx.doi.org/10.24843/jeei.2017.v01.i01.p02.

Abstract:
This paper presents a system to display traffic density in real time based on the speed of vehicles on the main roads in the city of Denpasar. With this application, users who are in a vehicle can obtain information on road density. The software runs on the Android platform and is built with the help of Google Maps, visualizing the density of the roads under review. Vehicle speed is measured using the frame-difference method, so the computation can run quickly and in real time. In the trials reported in this paper, the user side (vehicle speed measurement) produces the same data as the data received by the client (visualization viewer), displayed in a format consisting of the location name, vehicle speed, and the date and time of data retrieval.
20

Setiawan, Widyadi, I. Nyoman Budiastra, and Sri Andriati Asri. "Android Application of Traffic Density Visualization Based on Vehicle Speed." Journal of Electrical, Electronics and Informatics 1, no. 1 (February 2, 2017): 7. http://dx.doi.org/10.24843/jeei.v01i01p02.

Abstract:
This paper presents a system to display traffic density in real time based on the speed of vehicles on the main roads in the city of Denpasar. With this application, users who are in a vehicle can obtain information on road density. The software runs on the Android platform and is built with the help of Google Maps, visualizing the density of the roads under review. Vehicle speed is measured using the frame-difference method, so the computation can run quickly and in real time. In the trials reported in this paper, the user side (vehicle speed measurement) produces the same data as the data received by the client (visualization viewer), displayed in a format consisting of the location name, vehicle speed, and the date and time of data retrieval.
21

Goularas, Dionysis, Khalifa Djemal, and Yannis Mannoussakis. "3D Image Modelling and Specific Treatments in Orthodontics Domain." Applied Bionics and Biomechanics 4, no. 3 (2007): 111–24. http://dx.doi.org/10.1155/2007/248715.

Abstract:
In this article, we present a 3D dental plaster treatment system specific to orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxillary, based on an adaptive triangulation that allows the management of contours for complex topologies. Secondly, we present two specific treatment methods applied directly to the obtained 3D model: automatic correction for setting the mandible and the maxillary in occlusion, and teeth segmentation allowing more specific dental examinations. Finally, these specific treatments are presented via a client/server application with the aim of allowing telediagnosis and treatment.
22

Ardiyanto, Yudhi, and Muhamad Yusvin Mustar. "Color Blindness Testing Using A Client-Server Based on The Ishihara Method." Journal of Electrical Technology UMY 4, no. 2 (December 8, 2020): 46–52. http://dx.doi.org/10.18196/jet.v4i2.10993.

Abstract:
The Ishihara method is one way to detect whether or not someone suffers from color blindness. The method uses several sets of plates, or circle patterns, containing various combinations of colored dots of different sizes that form numbers visible to people with normal vision. A collection of such plates, known as the Ishihara book, is available on the market. However, the book has the weakness of being easily damaged or fading in color. Several researchers have developed applications to compensate for these weaknesses, but the existing applications have shortcomings of their own; for example, they cannot be used by several people at once and the test history is not properly archived. This study aims to create a client-server-based mobile application for color blindness testing. The system provides real-time test results and presents a detailed test history. Another feature is a push-notification menu that sends messages to all users. The application has been designed and implemented successfully, and can therefore be used for a color blindness test before a medical examination is conducted.
23

J, Ravikumar, and Ramakanth Kumar P. "A framework for named entity recognition of clinical data." Indonesian Journal of Electrical Engineering and Computer Science 18, no. 2 (May 1, 2020): 946. http://dx.doi.org/10.11591/ijeecs.v18.i2.pp946-952.

Abstract:
With the emergence of technologies like big data, healthcare services are also being explored to apply this technology and reap its benefits. Big data analytics can be implemented as a part of e-health, which involves extracting actionable insights from sources such as health knowledge bases and health information systems. Present-day medical practice generates a large amount of data continuously, and the hospital information system is a rapidly developing technology. This data is a major asset for retrieving information from collections of huge amounts of clinical records by issuing queries and keywords. However, there is the problem of retrieving exactly the information the user needs, because a hospital information system contains more than one record related to a particular item, person or episode. Information extraction is one of the data mining techniques used to extract models describing important data classes. The proposed work focuses mainly on achieving good performance in the medical domain. It has two primary purposes: first, extracting significant information from patient text records, and second, tagging named entities such as person, organization, location, disease names and symptoms. This can help improve survival rates, tailor care protocols and review queries to better manage chronic-care populations, lower costs by reducing unnecessary hospitalizations, and shorten the length of stay when admission is necessary.
24

Pallavi, P., and Shaik Salam. "Online Command Area Water Resource Management System." APTIKOM Journal on Computer Science and Information Technologies 5, no. 2 (April 30, 2020): 70–74. http://dx.doi.org/10.34306/csit.v5i2.141.

Abstract:
Water is an important but often ignored element in sustainable development, and by now it is clear that urgent action is needed to avoid a global water crisis. Water resource management is the activity of planning, developing, distributing and managing the optimum use of water resources. Successful management of water resources requires accurate knowledge of their distribution to meet competing demands, together with mechanisms for making good decisions using advanced technologies. Towards evolving a comprehensive management plan for suitable conservation and utilization of water resources, space technology plays a crucial role in managing a country's available water resources. Systematic approaches involving a judicious combination of conventional server-side scripting and remote sensing techniques pave the way for optimum planning and operation of water resources projects. New methodologies and systems accessible around the clock need to be built, thereby reducing the dependency on complex infrastructure and specialist domain knowledge. Open-source web GIS systems have proven to be rich in server-side scripting applications and easy-to-use client application tools. The present study and implementation aims to provide wizard-based, easily driven online tools for command area management practices. In this large endeavour, modules for handling remote sensing data, online raster processing, and statistics and indices generation will be developed.
25

Adler, Jeffrey L., and Eknauth Persaud. "Knowledge Acquisition for Large-Scale Expert Systems in Transportation." Transportation Research Record: Journal of the Transportation Research Board 1651, no. 1 (January 1998): 59–65. http://dx.doi.org/10.3141/1651-09.

Abstract:
One of the greatest challenges in building an expert system is obtaining, representing, and programming the knowledge base. As the size and scope of the problem domain increases, knowledge acquisition and knowledge engineering become more challenging. Methods for knowledge acquisition and engineering for large-scale projects are investigated in this paper. The objective is to provide new insights as to how knowledge engineers play a role in defining the scope and purpose of expert systems and how traditional knowledge acquisition and engineering methods might be recast in cases where the expert system is a component within a larger scale client-server application targeting multiple users.
26

Kart, Özge, Alp Kut, and Vladimir Radevski. "Decision Support System For A Customer Relationship Management Case Study." International Journal of Informatics and Communication Technology (IJ-ICT) 3, no. 2 (August 1, 2014): 88. http://dx.doi.org/10.11591/ijict.v3i2.pp88-96.

Abstract:
Data mining is a computational approach aiming to discover hidden and valuable information in large datasets. It has recently gained importance in a wide range of computational areas, including the domain of business informatics. This paper focuses on applications of data mining in Customer Relationship Management (CRM). The core of our application is a classifier based on naive Bayesian classification. The accuracy rate of the model is determined by cross validation. The results demonstrate the applicability and effectiveness of the proposed model: the naive Bayesian classifier reported high accuracy, so the classification rules can be used to support decision making in the CRM field. The aim of this study is to apply the data mining model to the banking sector as an example case study. The work also uses an example data set of customers to predict whether a client will subscribe to a term deposit. The results of the implementation are available on a mobile platform.
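As a toy illustration of the approach in the abstract (a naive Bayesian classifier whose accuracy is estimated by cross validation), the following Python sketch uses scikit-learn on synthetic stand-in features; it is not the study's bank data or exact pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-ins for client attributes (e.g. age, balance, call duration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 1] + X[:, 2] > 0).astype(int)   # 1 = client subscribes a term deposit

# Cross-validated accuracy of the naive Bayes classifier, as in the paper.
scores = cross_val_score(GaussianNB(), X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```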
27

Rhodus*, Tim, and Bud Witney. "Developing New Approaches and Tools for Improved Management and Delivery of Online Digital Photos." HortScience 39, no. 4 (July 2004): 875C—875. http://dx.doi.org/10.21273/hortsci.39.4.875c.

Abstract:
More and more of the Department's academic and outreach communications on the Internet involve the use of digital photos. While enhancing visual appeal and conveying information that cannot be communicated via text is an obvious benefit, it is critical that digital collections be efficiently and effectively managed at the client level (personal workstation) and also at the server level. To assist faculty and staff who routinely publish on the web and those who contribute to the Ohio State Univ.'s WebGarden online image database, a new client application was developed to assist in viewing and organizing digital photos on their workstations. Based on FileMaker Pro database software, a standalone program named DPM (Digital Photo Manager) was developed that runs without the user having to have FileMaker software installed on their system. DPM allows the user to scan a folder of digital photos, create thumbnails, add appropriate captions and cataloging information, and even display a full-screen slideshow. When users are ready to publish on the web, they upload their file into a portion of the department website managed by Gallery software, a free PHP-based application that integrates with various web server programs and handles any number of user-specific digital albums. Following this, a website was developed that allows the user to select a photo from their online album, add 1-3 lines of captioning, and enter their name for a photo credit. The website automatically applies a standard background, creates four different sizes of the image, renames the files using the standard naming convention used for all images on the server, saves each file into a specific folder, and provides the user with the URL address for the digital files.
28

Cresswell, Stephen N., Thomas L. McCluskey, and Margaret M. West. "Acquiring planning domain models using LOCM." Knowledge Engineering Review 28, no. 2 (February 22, 2013): 195–213. http://dx.doi.org/10.1017/s0269888912000422.

Abstract:
AbstractThe problem of formulating knowledge bases containing action schema is a central concern in knowledge engineering for artificial intelligence (AI) planning. This paper describes Learning Object-Centred Models (LOCM), a system that carries out the automated generation of a planning domain model from example training plans. The novelty of LOCM is that it can induce action schema without being provided with any information about predicates or initial, goal or intermediate state descriptions for the example action sequences. Each plan is assumed to be a sound sequence of actions; each action in a plan is stated as a name and a list of objects that the action refers to. LOCM exploits assumptions about the kinds of domain model it has to generate, rather than handcrafted clues or planner-oriented knowledge. It assumes that actions change the state of objects, and require objects to be in a certain state before they can be executed. In this paper, we describe the implemented LOCM algorithm, the assumptions that it is based on, and an evaluation using plans generated through goal-directed solutions, through random walk, and through logging human-generated plans for the game of freecell. We analyze the performance of LOCM by its application to the induction of domain models from five domains.
29

Santhanavanich, T., P. Wuerstle, J. Silberer, V. Loidl, P. Rodrigues, and V. Coors. "3D SAFE ROUTING NAVIGATION APPLICATION FOR PEDESTRIANS AND CYCLISTS BASED ON OPEN SOURCE TOOLS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences VI-4/W2-2020 (September 15, 2020): 143–47. http://dx.doi.org/10.5194/isprs-annals-vi-4-w2-2020-143-2020.

Abstract:
Abstract. The recent advancement in Information & Communication Technology (ICT) is seen as a critical enabler to design intelligent smart cities targeting different domains. One such domain is modes of transport in a city. Currently, various cities around the world are envisioning innovative ways to reduce emissions in the cities by increasing physically active mobility. However, there is still limited information about the safety of cyclists and pedestrians within city limits. To address this, we develop a 3D web-based safe routing tool called Vision Zero. Our concept prototype used Augsburg city, Germany, as a case study. The implementation is based on open-source tools. In the back-end, the OGC 3D Portrayal Service standard helps to deliver and integrate various 2D and 3D geospatial contents on a web-based client using CesiumJS. The OGC SensorThings API (STAPI) standard is used to manage historical and real-time open road-incident data from the Federal Statistical Office of Germany. The navigation system is built up based on the routing engine pgRouting, which calculates the safest route based on the mentioned STAPI server and the road-network dataset from OpenStreetMap.
30

S, Priyadharshini, and Catherine Joy. R. "Design and Implementation of an Automated Hotel Management System." International Journal of Engineering and Advanced Technology 10, no. 5 (June 30, 2021): 37–42. http://dx.doi.org/10.35940/ijeat.e2569.0610521.

Abstract:
The aim of an automated hotel management system is to handle all aspects of the hotel's information and booking system. This application attempts to cover all operations that occur in residential hotels, from employee management to booking, floor, office, and room-type management, among other things. In our project, the automated Hotel Management System, we sought to demonstrate how data and information are processed in hotels. The hotel management overview was achieved by splitting the project into different modules. Customers are offered facilities such as check-in, check-out, entry editing, and advance payments, and a customer may cancel his or her reservation if desired. Any customer or employee can be searched for by customer ID or name, and it is also possible to inquire about available rooms. The system generates reports for customers and employees (who work in the hotel), and a bill for the customer at check-out. We have only included a few modules, because our aim is to learn more about how hotels are managed; with the addition of several more components, this type of project could be used in a variety of hotels. The efficiency of any hotel depends on the method used to obtain and protect customers' personal data for use in the hotel's various services, and managing this has been a complex and difficult operation, particularly when the information flow is constant. This project focuses on creating a client side and user interface in JavaScript, as well as a backend in Java Spring to support panorama data and images.
31

Pandey, Divyanshu, Adithya Venugopal, and Harry Leib. "Multi-Domain Communication Systems and Networks: A Tensor-Based Approach." Network 1, no. 2 (July 7, 2021): 50–74. http://dx.doi.org/10.3390/network1020005.

Abstract:
Most modern communication systems, such as those intended for deployment in IoT applications or 5G and beyond networks, utilize multiple domains for transmission and reception at the physical layer. Depending on the application, these domains can include space, time, frequency, users, code sequences, and transmission media, to name a few. As such, the design criteria of future communication systems must be cognizant of the opportunities and the challenges that exist in exploiting the multi-domain nature of the signals and systems involved for information transmission. Focussing on the Physical Layer, this paper presents a novel mathematical framework using tensors, to represent, design, and analyze multi-domain systems. Various domains can be integrated into the transceiver design scheme using tensors. Tools from multi-linear algebra can be used to develop simultaneous signal processing techniques across all the domains. In particular, we present tensor partial response signaling (TPRS) which allows the introduction of controlled interference within elements of a domain and also across domains. We develop the TPRS system using the tensor contracted convolution to generate a multi-domain signal with desired spectral and cross-spectral properties across domains. In addition, by studying the information theoretic properties of the multi-domain tensor channel, we present the trade-off between different domains that can be harnessed using this framework. Numerical examples for capacity and mean square error are presented to highlight the domain trade-off revealed by the tensor formulation. Furthermore, an application of the tensor framework to MIMO Generalized Frequency Division Multiplexing (GFDM) is also presented.
32

Munkhdalai, Lkhagvadorj, Tsendsuren Munkhdalai, Oyun-Erdene Namsrai, Jong Lee, and Keun Ryu. "An Empirical Comparison of Machine-Learning Methods on Bank Client Credit Assessments." Sustainability 11, no. 3 (January 29, 2019): 699. http://dx.doi.org/10.3390/su11030699.

Abstract:
Machine learning and artificial intelligence have achieved a human-level performance in many application domains, including image classification, speech recognition and machine translation. However, in the financial domain expert-based credit risk models have still been dominating. Establishing meaningful benchmark and comparisons on machine-learning approaches and human expert-based models is a prerequisite in further introducing novel methods. Therefore, our main goal in this study is to establish a new benchmark using real consumer data and to provide machine-learning approaches that can serve as a baseline on this benchmark. We performed an extensive comparison between the machine-learning approaches and a human expert-based model—FICO credit scoring system—by using a Survey of Consumer Finances (SCF) data. As the SCF data is non-synthetic and consists of a large number of real variables, we applied two variable-selection methods: the first method used hypothesis tests, correlation and random forest-based feature importance measures and the second method was only a random forest-based new approach (NAP), to select the best representative features for effective modelling and to compare them. We then built regression models based on various machine-learning algorithms ranging from logistic regression and support vector machines to an ensemble of gradient boosted trees and deep neural networks. Our results demonstrated that if lending institutions in the 2001s had used their own credit scoring model constructed by machine-learning methods explored in this study, their expected credit losses would have been lower, and they would be more sustainable. In addition, the deep neural networks and XGBoost algorithms trained on the subset selected by NAP achieve the highest area under the curve (AUC) and accuracy, respectively.
33

Alshaer, Abdulaziz, David O’Hare, Philippe Archambault, Mark Shirley, and Holger Regenbrecht. "How to Observe Users’ Movements in Virtual Environments: Viewpoint Control in a Power Wheelchair Simulator." Human Factors: The Journal of the Human Factors and Ergonomics Society 62, no. 4 (July 15, 2019): 656–70. http://dx.doi.org/10.1177/0018720819853682.

Abstract:
Objective We describe a networked, two-user virtual reality (VR) power wheelchair (PWC) simulator system in which an actor (client) and an observer (clinician) meet. We then present a study with 15 observers (expert clinicians) evaluating the effect of three principal forms of viewpoint control (egocentric-egomotion, egocentric-tethered, and client-centric) on the observer’s assessment of driving tasks in a virtual environment (VE). Background VR allows for the simulation and assessment of real-world tasks in a controlled, safe, and repeatable environment. Observing users’ movement behavior in such a VE requires appropriate viewpoint control for the observer. The VR viewpoint user interface should allow an observer to make judgments equivalent or even superior to real-world situations. Method A purpose-built VR PWC simulator was developed. In a series of PWC driving tasks, we measured the perceived ease of use and sense of presence of the observers and compared the virtual assessment with real-world “gold standard” scores, including confidence levels in judgments. Results Findings suggest that with more immersive techniques, such as egomotion and tethered egocentric viewpoints, judgments are both more accurate and more confident. The ability to walk and/or orbit around the view significantly affected the observers’ sense of presence. Conclusion Incorporating the observer into the VE, through egomotion, is an effective method for assessing users’ behavior in VR with implications for the transferability of virtual experiences to the real world. Application Our application domain serves as a representative example for tasks where the movement of users through a VE needs to be evaluated.
34

Telagam, Nagarjuna, Nehru Kandasamy, Menakadevi Nanjundan, and Arulanandth TS. "Smart Sensor Network based Industrial Parameters Monitoring in IOT Environment using Virtual Instrumentation Server." International Journal of Online Engineering (iJOE) 13, no. 11 (November 22, 2017): 111. http://dx.doi.org/10.3991/ijoe.v13i11.7630.

Abstract:
Remote monitoring and control is one of the most important criteria for maximizing production in any industry. With the development of modern industry, the requirements for industrial monitoring systems are becoming higher. This project explains a real-time scenario of monitoring temperature and humidity in industry. A National Instruments myRIO is used, and results are observed in LabVIEW and on a VI server. The server VI program and client VI program are developed as block diagrams for the two sensor data streams. The proposed system develops a sensor interface device essential for sensor data acquisition in industrial Wireless Sensor Networks (WSN) in an Internet of Things (IoT) environment, detecting the values of sensors such as temperature and humidity present in the industrial area. The results are displayed on a web page, and the data can be accessed with an admin name and password. After logging into the web page, an index of files is displayed. After restarting the myRIO kit and initiating the deployment process, the system displays a log.csv file; double-clicking the file opens the spreadsheet on the computer. The VI server was tested for its operation using a data acquisition web application in a standard web browser. Critical situations can thus be avoided, and preventive measures were successfully implemented.
35

Manvi, Sunilkumar S. "Resource Monitoring for Wireless Sensor Networks using ANFIS." Journal of Applied Computer Science Methods 8, no. 1 (June 1, 2016): 41–67. http://dx.doi.org/10.1515/jacsm-2016-0004.

Abstract:
Abstract Wireless sensor networks (WSNs) are usually resource-constrained networks with limited energy, bandwidth, processing power, memory, etc. These networks are now part of the Internet under the name Internet of Things (IoT). To obtain many services from WSNs, we may need to run many applications on the sensor nodes, which consumes resources. Ideally, the resource availability of all sensor nodes should be known to the sink before it requests any further service(s) from the sensor node(s); hence, continuous monitoring of the sensor nodes' resources by the sink is essential. The proposed work is a framework for monitoring certain important resources of a sensor network using an Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Constrained Application Protocol (CoAP). The ANFIS is trained with resource consumption patterns: its input is the resource consumption levels and its output is the consumed resource levels that need to be sent to the sink, which may be individual resources or combinations of them. The trained ANFIS generates this output periodically. ANFIS also continuously learns using a hybrid learning algorithm (a combination of back propagation and the least squares method) and updates its parameters for better results. The CoAP protocol with its observe option is used to transport the resource monitoring data from the sensor nodes to the cluster head, and from the cluster head to the sink. The sensor nodes run a CoAP server, the cluster head runs both a CoAP client and server, and the sink runs a CoAP client. The performance of the proposed work is compared with the LoWPAN Network Management Protocol (LNMP) and the EmNets Network Management Protocol (EMP) in terms of bandwidth and energy overheads. It is observed that the proposed work performs better than the existing works.
36

Hong, Yang, David Gochis, Jiang-tao Cheng, Kuo-lin Hsu, and Soroosh Sorooshian. "Evaluation of PERSIANN-CCS Rainfall Measurement Using the NAME Event Rain Gauge Network." Journal of Hydrometeorology 8, no. 3 (June 1, 2007): 469–82. http://dx.doi.org/10.1175/jhm574.1.

Full text
Abstract:
Abstract Robust validation of the space–time structure of remotely sensed precipitation estimates is critical to improving their quality and confident application in water cycle–related research. In this work, the performance of the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) precipitation product is evaluated against warm season precipitation observations from the North American Monsoon Experiment (NAME) Event Rain Gauge Network (NERN) in the complex terrain region of northwestern Mexico. Analyses of hourly and daily precipitation estimates show that the PERSIANN-CCS captures well active and break periods in the early and mature phases of the monsoon season. While the PERSIANN-CCS generally captures the spatial distribution and timing of diurnal convective rainfall, elevation-dependent biases exist, which are characterized by an underestimate in the occurrence of light precipitation at high elevations and an overestimate in the occurrence of precipitation at low elevations. The elevation-dependent biases contribute to a 1–2-h phase shift of the diurnal cycle of precipitation at various elevation bands. For reasons yet to be determined, the PERSIANN-CCS significantly underestimated a few active periods of precipitation during the late or “senescent” phase of the monsoon. Despite these shortcomings, the continuous domain and relatively high spatial resolution of PERSIANN-CCS quantitative precipitation estimates (QPEs) provide useful characterization of precipitation space–time structures in the North American monsoon region of northwestern Mexico, which should prove useful for hydrological applications.
APA, Harvard, Vancouver, ISO, and other styles
37

Ouedraogo, Moussa, Haralambos Mouratidis, Eric Dubois, and Djamel Khadraoui. "Security Assurance Evaluation and IT Systems’ Context of Use Security Criticality." International Journal of Handheld Computing Research 2, no. 4 (October 2011): 59–81. http://dx.doi.org/10.4018/jhcr.2011100104.

Full text
Abstract:
Today’s IT systems are ubiquitous and take the form of small portable devices, to the convenience of the users. However, the reliance on this technology is increasing faster than the ability to deal with the simultaneously increasing threats to information security. This paper proposes metrics and a methodology for the evaluation of operational systems security assurance that take into account the measurement of security correctness of a safeguarding measure and the analysis of the security criticality of the context in which the system is operating (i.e., where is the system used and/or what for?). In that perspective, the paper also proposes a novel classification scheme for elucidating the security criticality level of an IT system. The advantage of this approach lies in the fact that the assurance level fluctuation based on the correctness of deployed security measures and the criticality of the context of use of the IT system or device, could provide guidance to users without security background on what activities they may or may not perform under certain circumstances. This work is illustrated with an application based on the case study of a Domain Name Server (DNS).
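A minimal sketch of the kind of metric the paper argues for; the weights, scores and criticality levels here are hypothetical, not the authors' formulas. The assurance reported to the user combines the measured correctness of each deployed safeguard with the security criticality of the context in which the system, such as a DNS server, is operating.

```python
# Hypothetical assurance computation: correctness scores in [0, 1] per safeguard,
# discounted as the criticality of the context of use rises. Illustrative only.

CRITICALITY_WEIGHT = {"low": 1.0, "medium": 0.75, "high": 0.5}

def assurance_level(safeguard_scores, context_criticality):
    """Average safeguard correctness, scaled by the criticality of the context of use."""
    if not safeguard_scores:
        return 0.0
    base = sum(safeguard_scores.values()) / len(safeguard_scores)
    return round(base * CRITICALITY_WEIGHT[context_criticality], 3)

if __name__ == "__main__":
    dns_safeguards = {"patch_level": 0.9, "acl_config": 0.8, "logging": 0.6}
    # The same safeguards yield less assurance when the DNS server operates in a
    # highly critical context.
    print(assurance_level(dns_safeguards, "low"))   # 0.767
    print(assurance_level(dns_safeguards, "high"))  # 0.383
```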
APA, Harvard, Vancouver, ISO, and other styles
38

Zain ul Abideen, Muhammad, Shahzad Saleem, and Madiha Ejaz. "VPN Traffic Detection in SSL-Protected Channel." Security and Communication Networks 2019 (October 29, 2019): 1–17. http://dx.doi.org/10.1155/2019/7924690.

Full text
Abstract:
In recent times, secure web communication protocols such as HTTPS (Hypertext Transfer Protocol Secure) have been widely used instead of plain protocols like HTTP (Hypertext Transfer Protocol). HTTPS provides end-to-end encryption between the user and the service. Organizations use network firewalls and/or intrusion detection and prevention systems (IDPS) to analyze network traffic in order to detect and protect against attacks and vulnerabilities; depending on the size of the organization, these devices may differ in their capabilities. Simple network intrusion detection systems (NIDS) and firewalls generally have no feature to inspect HTTPS or encrypted traffic, so they rely on the unencrypted parts of the traffic to manage the encrypted payload of the network. Recent, powerful next-generation firewalls offer a Secure Sockets Layer (SSL) inspection feature, but they are expensive and may not be suitable for every organization. A virtual private network (VPN) is a service that hides real traffic by creating an SSL-protected channel between the user and a server; every Internet activity is then performed inside the established SSL tunnel. A user inside the network may use VPN services with malicious intent or to hide his activity from the organization's network security administration, bypassing the filters or signatures applied on network security devices. Such services may be the source of a new virus or worm injected into the network, or a gateway that facilitates information leakage. In this paper, we propose a novel approach to detect VPN activity inside the network. The proposed system analyzes the communication between the user and the server, extracts the unencrypted features from the network, transport, and application layers, and classifies the incoming traffic as malicious (i.e., VPN traffic) or standard traffic. Network traffic is analyzed and classified using DNS (Domain Name System) packets and HTTPS-based traffic. Once traffic is classified, each connection is analyzed based on the server's IP, the TCP port connected, the domain name, and the server name inside the HTTPS connection; this helps verify legitimate connections and flags the VPN-based traffic. We worked on the top five freely available VPN services and analyzed their traffic patterns; the results show successful detection of the VPN activity performed by the user. We analyzed the activity of five users inside the network who used some form of VPN service in their Internet activity. Out of a total of 729 connections made by the different users, 329 connections were classified as legitimate activity, marking the remaining 400 connections as VPN-based. The proposed system is lightweight enough to keep overhead minimal, in both network and resource utilization, and requires no specialized hardware.
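A minimal sketch of the per-connection feature check described above, with a toy heuristic and made-up connection records rather than the paper's classifier: the DNS-resolved domain, the TLS server name and the TCP port of each connection are compared, and unresolved names or mismatches are flagged as possible VPN traffic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Connection:
    server_ip: str
    tcp_port: int
    dns_domain: Optional[str]       # domain seen in the preceding DNS query, if any
    tls_server_name: Optional[str]  # server name (SNI) observed in the HTTPS handshake

def looks_like_vpn(conn: Connection) -> bool:
    """Flag connections with no DNS resolution, missing/mismatched SNI, or an unusual port."""
    if conn.dns_domain is None or conn.tls_server_name is None:
        return True
    if not conn.tls_server_name.endswith(conn.dns_domain):
        return True
    return conn.tcp_port != 443

if __name__ == "__main__":
    sample = [
        Connection("93.184.216.34", 443, "example.com", "www.example.com"),   # standard
        Connection("198.51.100.7", 443, None, None),                          # flagged
        Connection("203.0.113.9", 8443, "corp.example", "vpn.provider.net"),  # flagged
    ]
    for c in sample:
        print(c.server_ip, "VPN-like" if looks_like_vpn(c) else "standard")
```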
APA, Harvard, Vancouver, ISO, and other styles
39

Fierro Abella, Jorge Alberto. "Nombre de dominios y otros signos distintivos. Aproximación teórica y análisis practico." Revista Jurídica Piélagus 6 (December 3, 2007): 39–63. http://dx.doi.org/10.25054/16576799.579.

Full text
Abstract:
The development of the so-called network of networks, or Internet, has brought about a substantial change in the way commercial relationships are understood. The extension of access to it to an ever larger number of economic agents, together with the continuous growth of content of all kinds available on the network, means that the ease with which the information of a given company can be reached carries a high value. The aim of this reflection paper is to offer, first, a general overview of the theoretical framework in which the activity of domain name registration takes place, both internationally and locally (Spain), and its relationship with trademark law, and then to present a series of cases directly related to Spain, either because they are decisions of Spanish courts or because they concern disputes over the ownership of domain names in which the injured party (actual or alleged) was a Spanish trademark or trade name, or at least one with a presence in that country. Abstract Domain names are the familiar and easy-to-remember names for internet computers. They map to unique Internet Protocol (IP) numbers that serve as routing addresses on the Internet. The domain name system (DNS) translates internet names into the IP numbers needed for transmission of information across the network. The challenge pursued by the following research is to provide a general outlook of the theoretical frame for the technical activity of the domain name registration procedure, as well as the implication of complementary sources of rules. The territorial context of the analysis is only apparent, since the empirical application of the concepts can also be applied in other jurisdictions. Keywords: Domain name, trademarks, internet trademarks, trademark law, unfair competition, trademark usurpation.
APA, Harvard, Vancouver, ISO, and other styles
40

Khan, Basit, Sabine Banzhaf, Edward C. Chan, Renate Forkel, Farah Kanani-Sühring, Klaus Ketelsen, Mona Kurppa, et al. "Development of an atmospheric chemistry model coupled to the PALM model system 6.0: implementation and first applications." Geoscientific Model Development 14, no. 2 (March 1, 2021): 1171–93. http://dx.doi.org/10.5194/gmd-14-1171-2021.

Full text
Abstract:
Abstract. In this article we describe the implementation of an online-coupled gas-phase chemistry model in the turbulence-resolving PALM model system 6.0 (formerly an abbreviation for Parallelized Large-eddy Simulation Model and now an independent name). The new chemistry model is implemented in the PALM model as part of the PALM-4U (PALM for urban applications) components, which are designed for application of the PALM model in the urban environment (Maronga et al., 2020). The latest version of the Kinetic PreProcessor (KPP, 2.2.3) has been utilized for the numerical integration of gas-phase chemical reactions. A number of tropospheric gas-phase chemistry mechanisms of different complexity have been implemented ranging from the photostationary state (PHSTAT) to mechanisms with a strongly simplified volatile organic compound (VOC) chemistry (e.g. the SMOG mechanism from KPP) and the Carbon Bond Mechanism 4 (CBM4; Gery et al., 1989), which includes a more comprehensive, but still simplified VOC chemistry. Further mechanisms can also be easily added by the user. In this work, we provide a detailed description of the chemistry model, its structure and input requirements along with its various features and limitations. A case study is presented to demonstrate the application of the new chemistry model in the urban environment. The computation domain of the case study comprises part of Berlin, Germany. Emissions are considered using street-type-dependent emission factors from traffic sources. Three chemical mechanisms of varying complexity and one no-reaction (passive) case have been applied, and results are compared with observations from two permanent air quality stations in Berlin that fall within the computation domain. Even though the feedback of the model's aerosol concentrations on meteorology is not yet considered in the current version of the model, the results show the importance of online photochemistry and dispersion of air pollutants in the urban boundary layer for high spatial and temporal resolutions. The simulated NOx and O3 species show reasonable agreement with observations. The agreement is better during midday and poorest during the evening transition hours and at night. The CBM4 and SMOG mechanisms show better agreement with observations than the steady-state PHSTAT mechanism.
APA, Harvard, Vancouver, ISO, and other styles
41

Pustišek, Matevž, Dejan Dolenc, and Andrej Kos. "LDAF: Low-Bandwidth Distributed Applications Framework in a Use Case of Blockchain-Enabled IoT Devices." Sensors 19, no. 10 (May 21, 2019): 2337. http://dx.doi.org/10.3390/s19102337.

Full text
Abstract:
In this paper, we present the Low-Bandwidth Distributed Applications Framework (LDAF), an application-aware gateway for communication-constrained Internet of Things (IoT) devices. A modular approach facilitates connecting to existing cloud backend servers and managing message formats and the APIs' native application logic to meet the communication constraints of resource-limited end devices. We investigated options for positioning the LDAF server in fog computing architectures. We demonstrated the approach in three use cases: (i) a simple Domain Name System (DNS) query from the device to a DNS server, (ii) a complex interaction of a blockchain-based IoT device with a blockchain network, and (iii) difference-based patching of binary (system) files at the IoT end devices. In a blockchain smart meter use case we effectively enabled decentralized applications (DApps) for devices that, without our solution, could not participate in a blockchain network. Employing the more efficient binary content encoding, we reduced the periodic traffic from 16 kB/s to ~1.1 kB/s, i.e., 7% of the initial traffic. With additional optimization of the application protocol in the gateway and message filtering, the periodic traffic was reduced to ~1% of the initial traffic, without any tradeoffs in the application's functionality or security. Using a binary difference function, we managed to reduce the size of the communication traffic to the end device, at least when the binary patch was smaller than the file being patched.
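A minimal sketch of the payload reduction such a gateway exploits; the field names and values are hypothetical, not the paper's message schema. The same smart-meter reading is encoded as JSON and as a fixed-layout binary structure, the latter standing in for the more efficient binary content encoding an LDAF-style gateway could forward to a constrained device.

```python
import json
import struct

# Hypothetical meter reading; not the paper's actual message schema.
reading = {"meter_id": 42, "timestamp": 1700000000, "kwh": 1234.5}

json_payload = json.dumps(reading).encode("utf-8")

# Fixed binary layout: unsigned int, unsigned int, 32-bit float (network byte order).
binary_payload = struct.pack("!IIf", reading["meter_id"],
                             reading["timestamp"], reading["kwh"])

print(len(json_payload), "bytes as JSON")    # 56 bytes
print(len(binary_payload), "bytes packed")   # 12 bytes

# The gateway decodes on behalf of the constrained device.
meter_id, timestamp, kwh = struct.unpack("!IIf", binary_payload)
```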
APA, Harvard, Vancouver, ISO, and other styles
42

Vats, Satvik, and B. B. Sagar. "An independent time optimized hybrid infrastructure for big data analytics." Modern Physics Letters B 34, no. 28 (July 21, 2020): 2050311. http://dx.doi.org/10.1142/s021798492050311x.

Full text
Abstract:
In the Big Data domain, platform dependency can alter the behavior of the business because of the different kinds (structured, semi-structured and unstructured) and characteristics of the data. With traditional infrastructure, different kinds of data cannot be processed simultaneously because each kind is tied to a particular platform for a particular task, so the responsibility of selecting suitable tools lies with the user. The variety of data generated by different sources requires the selection of suitable tools without human intervention. Furthermore, these tools face resource limitations when dealing with large volumes of data, and this limitation affects their performance in terms of execution time. Therefore, in this work we propose a model in which different data analytics tools share a common infrastructure to provide a data-independent and resource-sharing environment, i.e. the proposed model shares a common (hybrid) Hadoop Distributed File System (HDFS) between three Name-Nodes (master nodes), three Data-Nodes and one Client-Node, which operate within a demilitarized zone (DMZ). To realize this model, we implemented Mahout, R-Hadoop and Splunk sharing a common HDFS. Using our model, we run k-means clustering, Naïve Bayes and recommender algorithms on three different datasets (movie rating, newsgroup, and spam SMS), representing structured, semi-structured and unstructured data, respectively. Our model selected the appropriate tool, e.g. Mahout to run on the newsgroup dataset, as other tools cannot run on this data; this shows that our model provides data independence. Further, the results of our proposed model are compared with the legacy (individual) model in terms of execution time and scalability. The improved performance of the proposed model supports the hypothesis that our model overcomes the resource limitations of the legacy model.
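A minimal sketch of the data-independence idea; the tool names come from the abstract, but the dispatch rule itself is a hypothetical illustration. The model, rather than the user, picks an analytics tool that can process the kind of data presented.

```python
# Hypothetical dispatcher illustrating tool selection by data kind; the actual model
# selects among Mahout, R-Hadoop and Splunk sharing one hybrid HDFS.

TOOL_BY_KIND = {
    "structured": "R-Hadoop",      # e.g. movie-rating tables (assumed mapping)
    "semi-structured": "Mahout",   # e.g. newsgroup documents (as in the abstract)
    "unstructured": "Splunk",      # e.g. spam SMS text (assumed mapping)
}

def select_tool(data_kind):
    """Pick the analytics tool for a given data kind, instead of leaving it to the user."""
    try:
        return TOOL_BY_KIND[data_kind]
    except KeyError:
        raise ValueError(f"no registered tool for data kind: {data_kind}")

if __name__ == "__main__":
    for kind in ("structured", "semi-structured", "unstructured"):
        print(kind, "->", select_tool(kind))
```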
APA, Harvard, Vancouver, ISO, and other styles
43

Konan, Martin, and Wenyong Wang. "A Secure Mutual Batch Authentication Scheme for Patient Data Privacy Preserving in WBAN." Sensors 19, no. 7 (April 3, 2019): 1608. http://dx.doi.org/10.3390/s19071608.

Full text
Abstract:
The current advances in cloud-based services have significantly enhanced individual satisfaction in numerous areas of modern life. In particular, the recent spectacular innovations in the wireless body area network (WBAN) domain have made e-Care services a promising application field that definitely improves the quality of the medical system. However, the data forwarded from the limited connectivity range of the WBAN via a smart device (e.g., a smartphone) to the application provider (AP) must be secured against unauthorized access and alteration by an attacker, which could lead to catastrophic consequences. Therefore, several schemes have been proposed to guarantee data integrity and privacy during transmission between the client/controller (C) and the AP, and numerous effective cryptosystem solutions based on a bilinear pairing approach are available in the literature to address these security issues. Unfortunately, the related solution presents security shortcomings in which the AP can easily impersonate a given C; hence, this existing scheme cannot fully guarantee C's data privacy and integrity. We therefore propose a contribution that addresses this data security issue (impersonation) through a secure and efficient remote batch authentication scheme that genuinely ascertains the identity of C and the AP. Practically, the proposed cryptosystem is based on an efficient combination of elliptic curve cryptography (ECC) and bilinear pairing schemes. Furthermore, our proposed solution reduces the communication and computational costs by providing efficient data aggregation and batch authentication for the limited device resources in a WBAN. These additional features (data aggregation and batch authentication) are the core improvements of our scheme and have great merit for limited-energy environments like WBANs.
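A minimal sketch of the batch-authentication idea using HMACs as a stand-in; the paper's scheme is built on elliptic curve cryptography and bilinear pairings, which are not reproduced here. The AP checks a batch of client reports in one pass and accepts the batch only if every tag verifies.

```python
import hashlib
import hmac

# Hypothetical shared keys between each client/controller (C) and the AP; the actual
# scheme uses ECC and bilinear-pairing credentials, not HMAC keys.
KEYS = {"client-1": b"k1-secret", "client-2": b"k2-secret"}

def tag(client_id, message):
    return hmac.new(KEYS[client_id], message, hashlib.sha256).digest()

def batch_verify(reports):
    """reports: iterable of (client_id, message, tag); accept only if every tag verifies."""
    return all(hmac.compare_digest(tag(cid, msg), t) for cid, msg, t in reports)

if __name__ == "__main__":
    batch = [("client-1", b"hr=72;spo2=98", tag("client-1", b"hr=72;spo2=98")),
             ("client-2", b"hr=65;spo2=97", tag("client-2", b"hr=65;spo2=97"))]
    print(batch_verify(batch))     # True
    tampered = [batch[0], ("client-2", b"hr=65;spo2=90", batch[1][2])]
    print(batch_verify(tampered))  # False: one altered report rejects the whole batch
```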
APA, Harvard, Vancouver, ISO, and other styles
44

Facco Rodrigues, Vinicius, Ivam Guilherme Wendt, Rodrigo da Rosa Righi, Cristiano André da Costa, Jorge Luis Victória Barbosa, and Antonio Marcos Alberti. "Brokel: Towards enabling multi-level cloud elasticity on publish/subscribe brokers." International Journal of Distributed Sensor Networks 13, no. 8 (August 2017): 155014771772886. http://dx.doi.org/10.1177/1550147717728863.

Full text
Abstract:
Internet of Things networks, together with the data that flow between networked smart devices, are growing at unprecedented rates. Brokers, or intermediary nodes, combined with the publish/subscribe communication model often represent one of the most widely used strategies to enable Internet of Things applications. From a scalability viewpoint, cloud computing and its main feature, named resource elasticity, appear as an alternative to over-provisioned clusters, which normally present a fixed number of resources. However, we perceive that today the elasticity and Pub/Sub duo presents several limitations, mainly related to application rewriting, single-cloud elasticity limited to one level, and false-positive resource reorganization actions. Aiming to bypass the aforesaid problems, this article proposes Brokel, a multi-level elasticity model for Pub/Sub brokers. Users, things, and applications use Brokel as a centralized messaging service broker, but in the back end the middleware provides better performance and cost (used resources × performance) on message delivery using virtual machine (VM) replication. Our scientific contribution concerns the multi-level elasticity, the orchestrator, and the broker, together with the addition of a geolocation Domain Name System service to define the most suitable entry point in the Pub/Sub architecture. Different execution scenarios and metrics were employed to evaluate a Brokel prototype using VMs that encapsulate the functionalities of the Mosquitto and RabbitMQ brokers. The obtained results were encouraging in terms of application time, message throughput, and cost (application time × resource usage) when comparing elastic and non-elastic executions.
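A minimal sketch of the kind of elasticity rule such an orchestrator could apply; the thresholds and replica bounds are hypothetical. Broker VM replicas are added when the observed per-replica message throughput exceeds an upper bound and removed when it falls below a lower one.

```python
# Hypothetical reactive elasticity rule for broker VM replicas; illustrative only.

UPPER_MSGS_PER_REPLICA = 5000   # scale out above this per-replica load
LOWER_MSGS_PER_REPLICA = 1500   # scale in below this per-replica load
MIN_REPLICAS, MAX_REPLICAS = 1, 10

def next_replica_count(current_replicas, total_msgs_per_s):
    """Return the replica count after one evaluation of the elasticity rule."""
    load = total_msgs_per_s / current_replicas
    if load > UPPER_MSGS_PER_REPLICA and current_replicas < MAX_REPLICAS:
        return current_replicas + 1
    if load < LOWER_MSGS_PER_REPLICA and current_replicas > MIN_REPLICAS:
        return current_replicas - 1
    return current_replicas

if __name__ == "__main__":
    replicas = 2
    for throughput in (4000, 12000, 16000, 3000, 2000):
        replicas = next_replica_count(replicas, throughput)
        print(f"throughput={throughput} msgs/s -> {replicas} broker replicas")
```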
APA, Harvard, Vancouver, ISO, and other styles
45

Mekkanen, Mike, and Kimmo Kauhaniemi. "Wireless Light-Weight IEC 61850 Based Loss of Mains Protection for Smart Grid." Open Engineering 8, no. 1 (July 14, 2018): 182–92. http://dx.doi.org/10.1515/eng-2018-0022.

Full text
Abstract:
Abstract This paper presents a novel Loss of Mains (LoM) protection method based on the IEC 61850 Manufacturing Message Specification (MMS) protocol over a wireless Global System for Mobile Communications (GSM) based access point name (APN) mechanism. LoM, or anti-islanding, protection is a key requirement in modern power distribution grids with a significant amount of distributed energy resources (DER). Future Smart Grids are based on extensive communication capabilities, and thus communication-based LoM approaches will also become dominant. IEC 61850 based systems are gaining ground in substation communication, and it is therefore natural to expand this technology deeper into the distribution network. Using this standard for LoM protection also enables some advanced approaches that utilize the large variety of information available in the Smart Grid. A specific part of the standard, IEC 61850-7-420, defines logical nodes (LNs) suitable for this purpose, but no devices applying this part of the standard are available yet. In this research, a lightweight implementation of IEDs (Intelligent Electronic Devices) is developed using a low-cost open microcontroller platform, the BeagleBone, and open-source software. Using this platform, a wireless LoM solution based on the IEC 61850 MMS protocol has been developed and demonstrated. This paper introduces object modelling according to the LNs defined in IEC 61850-7-420 and an implementation applying direct client-server MMS-based communication between lightweight IEDs. The performance of the wireless application using the developed platform is demonstrated by measuring the message latencies. A novel LoM protection concept is proposed based on the standardized communication solution brought by IEC 61850 and the specific LNs for DERs defined in IEC 61850-7-420. A lightweight implementation of an IEC 61850 based IED is developed in order to reduce the large overhead and complexity of the standard. In addition to the LoM function, the developed solution has the ability to monitor DER status, and the available monitoring information can be shared among various distribution management systems (DMS), enabling a distributed decision approach for various purposes.
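A minimal sketch of the communication-based LoM principle; the timeout value and message handling are hypothetical, and the actual solution transports status via IEC 61850 MMS reporting over a GSM APN. The DER-side IED raises a loss-of-mains flag when periodic messages from the substation stop arriving within the timeout.

```python
import time

# Hypothetical timeout; the real solution transports substation status over IEC 61850 MMS.
HEARTBEAT_TIMEOUT_S = 2.0

class LossOfMainsDetector:
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def on_substation_message(self):
        """Called whenever a status message from the substation side is received."""
        self.last_heartbeat = time.monotonic()

    def islanded(self):
        """True if no substation message has arrived within the timeout."""
        return time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S

if __name__ == "__main__":
    detector = LossOfMainsDetector()
    detector.on_substation_message()
    print("islanded?", detector.islanded())  # False right after a message
    time.sleep(2.5)                          # simulate missing heartbeats
    print("islanded?", detector.islanded())  # True -> the DER should disconnect
```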
APA, Harvard, Vancouver, ISO, and other styles
46

Maxwell, Deborah, Chris Speed, and Larissa Pschetz. "Story Blocks." Convergence: The International Journal of Research into New Media Technologies 23, no. 1 (January 24, 2017): 79–97. http://dx.doi.org/10.1177/1354856516675263.

Full text
Abstract:
Digital technology is changing, and has changed the ways we create and consume narratives, from moving images and immersive storyworlds to digital long-form and multi-branched story experiences. At the same time, blockchain, the technology that underpins cryptocurrencies such as Bitcoin, is revolutionizing the way that transactions and exchanges occur. As a globally stored and collaboratively written list of all transactions that have ever taken place within a given system, the blockchain decentralizes money and offers a platform for its creative use. There are already examples of blockchain technologies extending beyond the realm of currency, including the decentralization of domain name servers that are not subject to government takedown and identity management and governance. By framing key blockchain concepts with past and present storytelling practices, this article raises questions as to how the principles and implementation of such distributed ledger technologies might be used within contemporary writing practices – that is, can we imagine stories as a currency or value system? We present three experiments that draw on some of the fundamental principles of blockchain and Bitcoin, as an instantiation of a blockchain implemented application, namely, (1) the ledger, (2) the blocks and (3) the mining process. Each low-fi experiment was intentionally designed to be very accessible to take part in and understand and all were conducted as discrete workshops with different sets of participants. Participants included a cohort of design students, technology industry and design professionals and writing and interaction design academics. Each experiment raised a different set of reflections and subsequent questions on the nature of digital, the linearity (or not) of narratives and collaborative processes.
APA, Harvard, Vancouver, ISO, and other styles
47

Kocsis, Imre. "Blokklánc alkalmazási lehetőségek a biztonság területén és bevezetés-tervezésük." Scientia et Securitas 1, no. 1 (December 17, 2020): 12–20. http://dx.doi.org/10.1556/112.2020.00003.

Full text
Abstract:
Summary (translated from the Hungarian abstract). Blockchain technologies were made famous and infamous by their first successful application, cryptocurrencies. Their true significance, however, lies in the new category of IT systems they have created: database-like distributed ledgers maintained jointly and authentically by multiple participants. The study presents their basic principles, their typical business application patterns, and "blockchainification" as a design principle for introduction strategies. As a new result, in order to make blockchainification applicable in the security domain, a value model based on known examples is established. Summary. Blockchain technologies were made famous – and arguably, infamous – by their first successful application: cryptocurrencies. Their true significance, however, lies in the novel IT system category they established: distributed ledgers, which are electronic systems of records maintained by multiple parties. The paper summarizes the key concepts of distributed ledger technologies, their key business application types and "blockchainification" as an innovation strategy planning methodology. As a novel contribution, the paper proposes the application of "blockchainification" in the complex context of security, and sets up an initial version of the necessary domain-specific value and application type framework. Distributed Ledger Technologies (DLT) have reached a maturity where they can be applied to, and have been demonstrated to be able to, facilitate a very broad range of cross-organizational and client-organization cooperation patterns. For enterprise and industrial usage, DLT key value dimensions, supporting blockchain capabilities and value driver application types have already been collected, facilitating the structured and benefit-based planning of their introduction. One such approach is what we coined "blockchainification". Blockchainification starts with a decomposition of the business architecture of an organization, to the point where specific cooperations can be characterized, both functionally and by the parties involved. Given such a decomposition, the viability of migrating or replacing the functionality with a DLT-based solution can be assessed, on a cooperation by cooperation basis, including the associated risks and benefits. This way, a blockchain introduction strategy can be formulated for the gradual introduction of DLTs. Additionally, blockchainification suggests – at least in the first phases of an introduction strategy – an emphasis on solutions where a DLT essentially just "replaces" the current information system support of already-digitized cooperations. While in the enterprise and industrial sphere blockchainification is already facilitated by an example-based understanding of key value dimensions, blockchain capabilities and value driver applications, for many other domains, these prerequisites are missing. Importantly, what is already available is not readily applicable for organizations involved in security activities in the broad sense; in many aspects, the value these organizations seek from IT systems is markedly different from the enterprise world. Thus, the paper proposes an initial key value dimension and supporting blockchain capability model for organizations involved in providing a select set of security services.
APA, Harvard, Vancouver, ISO, and other styles
48

Klots, Y. P., I. V. Muliar, V. M. Cheshun, and O. V. Burdyug. "USE OF DISTRIBUTED HASH TABLES TO PROVIDE ACCESS TO CLOUD SERVICES." Collection of scientific works of the Military Institute of Kyiv National Taras Shevchenko University, no. 67 (2020): 85–95. http://dx.doi.org/10.17721/2519-481x/2020/67-09.

Full text
Abstract:
The article discusses the urgency of the problem of granting access to the services of a distributed cloud system; in particular, a peer-to-peer distributed cloud system is characterized. The interaction of the main components involved in accessing a web resource by domain name is described. The distribution of resources among the nodes of a peer-to-peer distributed cloud system, with the subsequent provision of services on request, is implemented using the Kademlia protocol over a local network or the Internet, and comprises publishing the resource by its owner at the initial stage, replication, and, finally, providing access to the resource. Modern adaptive information security systems do not allow full control over the information flows of the cloud computing environment, since they operate at the upper levels of the hierarchy. Therefore, to create effective mechanisms for protecting software in a cloud computing environment, it is necessary to develop new threat models and methods for representing computer attacks that make it possible to quickly identify hidden and potentially dangerous processes of information interaction. Access rules form the basis of the security policy and include restrictions on the mechanisms by which access processes are initialized. Under the developed operations model, the formalized description of hidden threats reduces to the appearance of context-dependent transitions in the transaction multigraph. The method of granting access to the services of the distributed cloud system is substantiated. A Distributed Hash Table (DHT) infrastructure is used to find a replication node that holds a replica of the requested resource or part of it. The study identifies the stages of validating a node's authenticity. The processes of adding a new node, validating authenticity, publishing a resource, and accessing a resource are described as step-by-step sequences of actions within the method of granting access to the services of a distributed cloud system, using a graphical description of the information flows and of the interaction between information-processing processes and objects.
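A minimal sketch of the Kademlia lookup idea mentioned above, using toy 8-bit identifiers instead of the protocol's 160-bit key space and full routing tables: node and resource identifiers share one key space, distance is the XOR of two IDs, and a publish or lookup is directed to the nodes closest to the resource key.

```python
# Toy Kademlia-style lookup: XOR distance over a tiny 8-bit identifier space.

def xor_distance(a, b):
    return a ^ b

def closest_nodes(resource_key, node_ids, k=2):
    """Return the k node IDs closest to the resource key by XOR distance."""
    return sorted(node_ids, key=lambda n: xor_distance(n, resource_key))[:k]

if __name__ == "__main__":
    nodes = [0b00010011, 0b01100001, 0b10101010, 0b11110000]
    resource = 0b01100111
    # The resource (or a replica of it) would be published on, and later requested
    # from, the nodes returned here.
    print([bin(n) for n in closest_nodes(resource, nodes)])  # ['0b1100001', '0b10011']
```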
APA, Harvard, Vancouver, ISO, and other styles
49

Shen, Hong. "11 YEARS OF BIOSTEREOLOGY IN CHINA." Image Analysis & Stereology 19, no. 3 (May 3, 2011): 157. http://dx.doi.org/10.5566/ias.v19.p157-161.

Full text
Abstract:
Biostereology in China is very active. Here is a brief summary: Organization: The organization of biostereology in China was founded in Nov. 1988. Its name is Chinese Society of Biomedical Stereology (CSBS), and is affiliated to the Chinese Society for Stereology (CSS). The first joint president of CSS/BMC was Prof. Peixuan Tang, the second and now the third, is Prof. Dewen Wang. There are 556 registered members. Academic Congresses: Sessions of the National Biostereological Congress were convened in 1990, 1992, 1996 and 2000. Publications: Four works were written and published in China. One is "Quantitative Histology" (Luji Shi, 1964), another is "Stereological Morphometry For Cell Morphology" (Fusheng Zheng, 1990), the third one is "Practical Biostereological Techniques" (Hong Shen and Yingzhong Shen, 1991) and the fourth one is "Quantitative Cytology and Cytochemistry Techniques" (Genxing Xu, 1994). A Chinese Journal of Stereology and Image Analysis has been published since 1996. Courses: More than ten national training courses on biostereology were held. In some medical universities or colleges, a biostereology course has been set up. Theoretical studies: Some new concepts, parameters and methods for stereology and morphometry were put forward, such as: regular form factor, volume concavity, surface concavity, area concavity, boundary concavity, curve profile area density, positive university for immunohistochemistry stain etc. Application: Stereological methods have been widely applied in biomedical studies. The applied field covered most of the morphological domain of biology. The main applications of biostereology are quantitative pathological diagnosis and prognosis of tumor cells and histostructures. Most studies utilize classical stereological methods. New stereological methods should be popularized and applied in the future. Image Analysis System: Image analysis systems are widely used in biostereological studies. About ten kinds of image analysis systems have been manufactured in China. The most popular is HPIAS, which is made by Huahai Electronic CO.LTD.
APA, Harvard, Vancouver, ISO, and other styles
50

Jara, Antonio J., Miguel A. Zamora, and Antonio Skarmeta. "Glowbal IP: An Adaptive and Transparent IPv6 Integration in the Internet of Things." Mobile Information Systems 8, no. 3 (2012): 177–97. http://dx.doi.org/10.1155/2012/819250.

Full text
Abstract:
The Internet of Things (IoT) requires scalability, extensibility and transparent multi-technology integration in order to support global communications, discovery and look-up efficiently, as well as access to services and information. To achieve these goals, it is necessary to enable a homogeneous and seamless machine-to-machine (M2M) communication mechanism allowing global access to devices, sensors and smart objects. The proposed answer to these technological requirements is called Glowbal IP, which is based on homogeneous access to the devices/sensors offered by IPv6 addressing and the core network. Glowbal IP's main advantages with regard to 6LoWPAN/IPv6 are not only that it presents a low overhead and thus higher performance on a regular basis, but also that it determines the session and identifies global access by means of a session layer defined over the application layer. Technologies without native IP support, e.g. IEEE 802.15.4 and Bluetooth Low Energy, thereby become adaptable to IP. This extension towards the IPv6 network opens access to the features and methods of the devices through homogeneous access based on Web services (e.g. RESTful/CoAP). In addition, Glowbal IP offers global interoperability among the different devices, as well as interoperability with external servers and user applications. All in all, it allows information related to the devices to be stored in the network by extending the Domain Name System (DNS) of the IPv6 core network with the Service Directory extension (DNS-SD), which stores information about the sensors, their properties and functionality. A step forward in network-based information systems is thereby reached, allowing homogeneous discovery of, and access to, the devices of the IoT. Thus, the IoT capabilities are exploited by allowing an easier and more transparent integration of end-user applications with sensors for future evaluations and use cases.
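A minimal sketch of the service-directory idea; the names, record values and in-memory "zone" are hypothetical, whereas a real deployment would publish DNS-SD PTR/SRV/TXT records in the DNS of the IPv6 core network. A sensor registers its service instance with its address, port and properties, and a client discovers it by service type.

```python
# Hypothetical in-memory DNS-SD-style service directory; Glowbal IP instead extends
# the DNS of the IPv6 core network with DNS-SD records for the sensors.

DIRECTORY = {}  # service type -> list of registered instances

def register(service_type, instance, address, port, txt):
    """Roughly what one PTR + SRV + TXT record set would convey for a sensor service."""
    DIRECTORY.setdefault(service_type, []).append(
        {"instance": instance, "address": address, "port": port, "txt": txt})

def discover(service_type):
    """Return all registered instances of a service type."""
    return DIRECTORY.get(service_type, [])

if __name__ == "__main__":
    register("_temperature._coap", "kitchen-sensor",
             "2001:db8::17", 5683, {"unit": "celsius", "vendor": "example"})
    for svc in discover("_temperature._coap"):
        print(f"coap://[{svc['address']}]:{svc['port']}  {svc['txt']}")
```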
APA, Harvard, Vancouver, ISO, and other styles