Academic literature on the topic 'Computer algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Computer algorithms"

1

Siregar, Yunita Sari, Boni Oktaviana Sembiring, Hasdiana Hasdiana, Arie Rafika Dewi, and Herlina Harahap. "Algortihm C4.5 in mapping the admission patterns of new Students in Engingeering Computer." SinkrOn 6, no. 1 (October 10, 2021): 80–90. http://dx.doi.org/10.33395/sinkron.v6i1.11154.

Full text
Abstract:
Universitas Harapan Medan is a private university in North Sumatra with computer-based study programs such as Informatics Engineering and Information Systems. Every year the university receives many registrations from students who have completed their schooling, and the volume of incoming student data makes it difficult for administrators to select the new students who will enroll. This study uses the C4.5 data mining algorithm to map the pattern of student admission selection in the engineering and computing fields. The attributes used are the average report-card grade (high, sufficient, low), the basic academic ability test (very high, high, medium, low, very low), the basic computer knowledge test (very high, high, sufficient, low, very low), and the interview (good, bad). Data mining is a mathematical calculation process that applies algorithms to large amounts of data, and C4.5 is an algorithm that processes data by calculating entropy and information gain: after the calculation, the attribute with the largest information gain becomes a node with branches. The C4.5 algorithm produces a decision tree that forms a pattern for student selection. The results show that in the mapped selection pattern, the interview attribute becomes the level 1 node, the basic computer knowledge test becomes the level 1 branch, the basic academic ability test becomes the level 2 branch, and the average report-card grade becomes the level 3 branch.
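The entropy and information-gain calculation this abstract describes can be sketched in a few lines of Python. The admission records below are hypothetical, and note that full C4.5 additionally normalizes gain into a gain ratio; plain information gain, as shown here, is the ID3 criterion that C4.5 builds on:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Entropy reduction from splitting the records on one attribute."""
    n = len(labels)
    # Partition the labels by the attribute's value in each row.
    parts = {}
    for row, label in zip(rows, labels):
        parts.setdefault(row[attr_index], []).append(label)
    weighted = sum(len(p) / n * entropy(p) for p in parts.values())
    return entropy(labels) - weighted

# Hypothetical admission records: (report-card grade, interview) -> accepted?
rows = [("high", "good"), ("high", "bad"), ("low", "good"), ("low", "bad")]
labels = ["yes", "no", "yes", "no"]
print(information_gain(rows, labels, 1))  # interview splits perfectly -> 1.0
```

The attribute with the largest gain (here, the interview) becomes the decision-tree node, matching the level-1 node the paper reports.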
APA, Harvard, Vancouver, ISO, and other styles
2

Grafarend, Erik W., Torben Krarup, and Rainer Syffus. "An algortihm for the inverse of a multivariate homogeneous polynomial of degree n." Journal of Geodesy 70, no. 5 (February 1, 1996): 276–86. http://dx.doi.org/10.1007/s001900050018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kusmiran, Amirin. "IMPLEMENTASI ALGORITMA DISCRETE FURIER TRANSFORM UNTUK KARAKTERISASI NADA DARI HURUF VOKAL." Jurnal TAMBORA 1, no. 2 (June 6, 2016): 31–35. http://dx.doi.org/10.36761/jt.v1i2.135.

Full text
Abstract:
A signal or wave is a physical phenomenon that has been widely applied in science and technology to characterize materials, for example cracks, material content, and tones. The physical property used for characterization is frequency. The frequencies produced by humans differ from person to person because the pressure and the vocal cords also differ. Voice pressure can be characterized in the time domain, while frequency is characterized in the frequency domain. For characterization, the tones were recorded with a microphone; the recordings are captured through the sound card integrated into a personal computer (PC) and then analyzed with algorithms implemented in MATLAB: a recording algorithm and the discrete Fourier transform (DFT). Window leakage can be minimized using the Blackman and modified Bartlett-Hann (barthannwin) windows. The frequencies and amplitudes produced by the vowel tones are: i at 190 Hz with an amplitude of 0.14 dB, o at 580 Hz with 0.1 dB, u at 210 Hz with 0.15 dB, e at 200 Hz with 0.13 dB, and a at 310 Hz with 0.1 dB. The noise produced by the tone o during data collection was caused by the personal computer (PC).
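The DFT-plus-window analysis described above can be reproduced in a few lines. The sketch below uses numpy rather than MATLAB, and a synthetic 190 Hz tone stands in for a recorded vowel; the sampling rate is an assumption:

```python
import numpy as np

fs = 8000                       # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)   # one second of samples
tone = 0.14 * np.sin(2 * np.pi * 190 * t)  # synthetic 190 Hz "vowel" tone

# Apply a Blackman window to reduce spectral leakage, then take the DFT.
windowed = tone * np.blackman(len(tone))
spectrum = np.abs(np.fft.rfft(windowed))
freqs = np.fft.rfftfreq(len(tone), 1 / fs)

peak = freqs[np.argmax(spectrum)]
print(peak)  # about 190.0 Hz
```

With one second of samples the DFT bins are 1 Hz apart, so the dominant bin lands on the tone's frequency directly.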
APA, Harvard, Vancouver, ISO, and other styles
4

Yurtay, Yuksel, Nilüfer Yurtay, Arda Yuksel, and Ayca Armay. "Sample application on dramatization in education." New Trends and Issues Proceedings on Humanities and Social Sciences 3, no. 7 (July 23, 2017): 08–13. http://dx.doi.org/10.18844/prosoc.v3i7.1978.

Full text
Abstract:
Developments in technology have a big effect on changing our lifestyles and education methods, and the influence of computer-based games on people's lives is clearly growing. Dramatization in education is one of the latest approaches that combines educational materials with dramatization methodologies; the idea of turning educational materials into games arose as human profiles and interests changed. In this study, the aim is to teach the Gini algorithm, one of the most significant methods in data mining, using dramatization tools. The theoretical steps of the Gini algorithm were dramatized, and the approach was developed and practiced with a group of students. With this application, the effects on students of classical education versus dramatization-based education are measured; after the measurement, the results are discussed, evaluated, and shared. The application is designed for mobile devices, coded for the Android operating system, and developed with computer engineering students in mind. Instead of having users memorize theoretical information and formulas, the application focuses on how people understand the logic, so the information becomes permanent in young minds. The application consists of two stages: the first is preparing the dataset, and the second is running the algorithm with the help of the prepared datasets and formulas. In the last part of the application, all main nodes are presented to users as a decision tree. Keywords: Education, dramatization, data mining, gini algorithm.
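The Gini measure the application teaches is a short computation. Below is a minimal sketch with hypothetical student records; CART-style splitting then chooses the attribute with the lowest weighted impurity:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_split(rows, labels, attr_index):
    """Weighted Gini impurity after splitting the records on one attribute."""
    n = len(labels)
    parts = {}
    for row, label in zip(rows, labels):
        parts.setdefault(row[attr_index], []).append(label)
    return sum(len(p) / n * gini(p) for p in parts.values())

# Hypothetical student records: (attendance, quiz result) -> passed?
rows = [("high", "good"), ("high", "bad"), ("low", "good"), ("low", "bad")]
labels = ["pass", "fail", "pass", "fail"]
print(gini(labels), gini_split(rows, labels, 1))  # 0.5 0.0: quiz splits perfectly
```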
APA, Harvard, Vancouver, ISO, and other styles
5

Buulolo, Novelius, and Anita Sindar. "Analisis dan Perancangan Keamanan Data Teks Menggunakan Algoritma Kriptografi DES (Data Encryption Standard)." Respati 15, no. 3 (November 10, 2020): 61. http://dx.doi.org/10.35842/jtir.v15i3.373.

Full text
Abstract:
In sending messages or exchanging information over an internet connection via communication devices, the data can be read by other people. To address this problem, computer security is needed to protect data from unauthorized parties. Because data and documents held in software are not inherently secure, the DES (Data Encryption Standard) cryptographic algorithm is applied in designing an application that maintains data confidentiality. Data are encrypted in 64-bit blocks using a 56-bit key: DES transforms a 64-bit input through several encryption stages into a 64-bit output, with the same stages and key used throughout. Keywords: Data Security, DES Algorithm, Ciphertext, Plaintext
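DES itself is too long to reproduce here, but its Feistel structure, which makes decryption the same process with the round keys reversed, can be sketched with a toy round function. This is an illustration of the structure only: the real cipher replaces `toy_f` with expansion, S-boxes, and permutations, and derives 16 round keys from the 56-bit key:

```python
def feistel_round(left, right, round_key, f):
    """One Feistel round: the new halves are (right, left XOR f(right, key))."""
    return right, left ^ f(right, round_key)

def toy_f(half, key):
    # Placeholder round function; real DES uses expansion, S-boxes, permutation.
    return (half * 31 + key) & 0xFFFFFFFF

def crypt(block, keys):
    """Run the 64-bit block through the rounds; swap halves at the end as DES does."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in keys:
        left, right = feistel_round(left, right, k, toy_f)
    return (right << 32) | left

def encrypt(block, keys):
    return crypt(block, keys)

def decrypt(block, keys):
    return crypt(block, list(reversed(keys)))  # same structure, keys reversed

keys = [0x1A2B, 0x3C4D, 0x5E6F]   # made-up round keys
msg = 0x0123456789ABCDEF
assert decrypt(encrypt(msg, keys), keys) == msg
```

The round-trip assertion holds for any round function, which is exactly why the Feistel construction lets DES use the same circuitry for encryption and decryption.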
APA, Harvard, Vancouver, ISO, and other styles
6

Bathla, Gourav, Himanshu Aggarwal, and Rinkle Rani. "Migrating From Data Mining to Big Data Mining." International Journal of Engineering & Technology 7, no. 3.4 (June 25, 2018): 13. http://dx.doi.org/10.14419/ijet.v7i3.4.14667.

Full text
Abstract:
Data mining is one of the most researched fields in computer science, and many studies have been carried out to extract and analyse important information from raw data. Traditional data mining algorithms such as classification, clustering and statistical analysis can process small-scale data with great efficiency and accuracy. Social networking interactions, business transactions and other communications produce Big data, data at a scale beyond the competency of traditional data mining techniques: traditional algorithms cannot store and process data at this scale, and where some can, their response time is very high. Big data holds hidden information which, if analysed intelligently, can be highly beneficial to business organizations. In this paper, we analyse the advancement from traditional data mining algorithms to Big data mining algorithms; applications of traditional algorithms can be incorporated straightforwardly into Big data mining. Several studies have compared traditional data mining with Big data mining, but very few have analysed the most important algorithms within one research work, which is the core motive of our paper: readers can easily observe the differences between these algorithms, with their pros and cons. Mathematical concepts underlie the data mining algorithms; real examples are the mean and Euclidean distance calculations in Kmeans, vectors and margins in SVM, and Bayes' theorem and conditional probability in the Naïve Bayes algorithm. Classification and clustering are the most important applications of data mining. In this paper, the Kmeans, SVM and Naïve Bayes algorithms are analysed in detail to observe accuracy and response time, from both a conceptual and an empirical perspective. Big data technologies such as Hadoop and MapReduce are used to implement the Big data mining algorithms. Performance evaluation metrics such as speedup, scaleup and response time are used to compare traditional mining with Big data mining.
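The mean and Euclidean-distance calculations the paper highlights in Kmeans can be sketched in a few lines of numpy. This is plain Lloyd's algorithm on a made-up two-blob dataset, not the paper's Hadoop/MapReduce implementation:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: assign points to the nearest centroid, recompute means."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Euclidean distance from every point to every centroid.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# Two well-separated blobs; Kmeans should recover them as the two clusters.
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])
labels, cents = kmeans(pts, k=2)
print(labels)
```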
APA, Harvard, Vancouver, ISO, and other styles
7

Januar Al Amien and Doni Winarso. "ANALISIS PENINGKATAN KINERJA FTP SERVER MENGGUNAKAN LOAD BALANCING PADA CONTAINER." JURNAL FASILKOM 9, no. 3 (November 14, 2019): 8–18. http://dx.doi.org/10.37859/jf.v9i3.1667.

Full text
Abstract:
Cloud computing is a technology that answers the challenge of the need for efficient computing. Many things can be implemented with cloud computing technologies, such as web services, storage services, and applications. Using container technology, cloud computing can help manage applications and optimize resource use, in the form of memory and processor usage, on the server. In this research, Docker containers were used to provide an FTP (File Transfer Protocol) service, split into 3 containers within a single server computer. To handle performance problems on the FTP server under overly heavy (overload) request conditions, load balancing is used: a method that improves performance while reducing the performance load on the FTP server. Based on the test results, using multiple containers and load balancing on the FTP server to handle load with the two algorithms least connection and round robin yields lower memory usage and more even processor utilization. Both algorithms are recommended for load handling on FTP servers, and they will be more efficient when applied to servers with the same specifications and loads. Keywords: Cloud Computing, Docker, FTP, Load Balancing, HAProxy, Least Connection, Round Robin.
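The two scheduling policies the paper compares can be sketched in plain Python. The backend names are hypothetical, and a real balancer such as HAProxy tracks active connections per TCP session rather than through explicit release calls:

```python
import itertools

class RoundRobin:
    """Cycle through the backends in fixed order, regardless of their load."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)
    def pick(self):
        return next(self._cycle)

class LeastConnection:
    """Send each request to the backend with the fewest active connections."""
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}
    def pick(self):
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend
    def release(self, backend):
        self.active[backend] -= 1

rr = RoundRobin(["ftp1", "ftp2", "ftp3"])
print([rr.pick() for _ in range(4)])  # ['ftp1', 'ftp2', 'ftp3', 'ftp1']

lc = LeastConnection(["ftp1", "ftp2", "ftp3"])
a = lc.pick(); b = lc.pick()
lc.release(a)      # ftp1 finishes its transfer
print(lc.pick())   # ftp1 again: it now has the fewest active connections
```

Round robin spreads requests evenly when transfers are uniform; least connection adapts when some FTP transfers run much longer than others.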
APA, Harvard, Vancouver, ISO, and other styles
8

Anna Hendri Soleliza Jones and Ade Dermawan. "PENERAPAN ALGORITMA ITERATIVE DICHOTOMISER 3 UNTUK DIAGNOSIS PENYAKIT SALURAN PENCERNAAN PADA BALITA." PINTER : Jurnal Pendidikan Teknik Informatika dan Komputer 5, no. 1 (June 1, 2021): 1–9. http://dx.doi.org/10.21009/pinter.5.1.1.

Full text
Abstract:
Digestive tract diseases in toddlers can occur because treatment is often delayed. Other factors contribute, such as the limited number of pediatricians relative to the many toddlers who must be treated, so that diagnosing patients takes doctors a long time. This study aims to provide recommended diagnoses of digestive tract diseases based on the clinical symptoms experienced by patients, using the Iterative Dichotomiser 3 (ID3) algorithm, and to provide information about digestive tract diseases to educate the public and parents. Data were collected through a literature study and interviews with the expert dr. Yolanda Pitra Kusumadewi of Puskesmas Kotagede 1, yielding data on symptoms, diseases, and treatments. The data were then processed, the system was designed, and the software was built according to the system requirements. The ID3 algorithm was implemented with forward chaining, and the system was tested using a confusion matrix. The result is a web-based expert system application that can diagnose 7 digestive tract diseases in toddlers based on 32 clinical symptoms frequently experienced by patients, and that is expected to provide health information and knowledge to the public, especially for handling digestive tract diseases. Testing the ID3 implementation with a confusion matrix produced an application that helps users diagnose digestive tract diseases with a system accuracy of 93.3% on 15 new test samples obtained from the expert, with a precision of 93.3% and a recall of 93.3%. It can be concluded that the ID3 algorithm is usable because of its reasonably high accuracy.
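The confusion-matrix metrics reported above reduce to simple counts. Below is a binary sketch with hypothetical labels whose accuracy matches the paper's 14-of-15 (93.3%) figure; the paper's precision and recall come from a multiclass matrix, so they differ here:

```python
def binary_metrics(actual, predicted, positive):
    """Accuracy, precision and recall from a binary confusion matrix."""
    tp = sum(a == positive and p == positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    tn = len(actual) - tp - fp - fn
    accuracy = (tp + tn) / len(actual)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# 15 hypothetical test cases with a single wrong prediction (14 of 15 correct).
actual = ["sick"] * 8 + ["healthy"] * 7
predicted = ["sick"] * 7 + ["healthy"] * 8
print(binary_metrics(actual, predicted, positive="sick"))
```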
APA, Harvard, Vancouver, ISO, and other styles
9

Retnoningsih, Endang, and Rully Pramudita. "Mengenal Machine Learning Dengan Teknik Supervised Dan Unsupervised Learning Menggunakan Python." BINA INSANI ICT JOURNAL 7, no. 2 (December 28, 2020): 156. http://dx.doi.org/10.51211/biict.v7i2.1422.

Full text
Abstract:
Machine learning is a system that can learn on its own to decide something without being repeatedly programmed by humans, so that computers become smarter by learning from the experience in their data. By learning technique, supervised learning uses a labeled dataset (training data), while unsupervised learning draws conclusions from the dataset alone; the dataset is the input from which machine learning produces a correct analysis. The problem to be solved concerns iris flowers (Iris tectorum), which come in various colors and whose sepals and petals indicate the species: an appropriate method is needed to group these flowers into the species iris-setosa, iris-versicolor, or iris-virginica. Python was used for the solution, as it provides the algorithms and libraries for building machine learning. The KNN classifier algorithm was chosen for the supervised learning technique, and the DBSCAN clustering algorithm for the unsupervised technique. The results show that Python provides a complete set of libraries (numPy, Pandas, matplotlib, sklearn) for machine learning programming: the KNN algorithm, a supervised technique, is imported with from sklearn import neighbors, and DBSCAN, an unsupervised technique, with from sklearn.cluster import DBSCAN. Python produces output matching the input dataset, yielding decisions in the form of classifications or clusterings. Keywords: DBSCAN, KNN, machine learning, python.
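The paper uses scikit-learn's implementations; as a sketch of what the KNN classifier does under the hood, here is a plain-Python majority-vote version with made-up iris-like measurements:

```python
from collections import Counter
from math import dist

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(zip(train_X, train_y), key=lambda pair: dist(pair[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical measurements: (sepal length, petal length) in cm.
train_X = [(5.0, 1.4), (4.9, 1.5), (5.1, 1.3),   # iris-setosa
           (6.3, 4.9), (6.5, 5.1), (6.7, 5.0)]   # iris-virginica
train_y = ["setosa"] * 3 + ["virginica"] * 3
print(knn_predict(train_X, train_y, (5.0, 1.5)))  # setosa
```

sklearn's `neighbors.KNeighborsClassifier` applies the same Euclidean-distance voting, with optimized index structures behind it.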
APA, Harvard, Vancouver, ISO, and other styles
10

Setiawan, Agung Wahyu. "Perbandingan Preskrining Lesi Kulit berbasis Convolutional Neural Network: Citra Asli dan Tersegmentasi." Jurnal Teknologi Informasi dan Ilmu Komputer 8, no. 4 (July 22, 2021): 793. http://dx.doi.org/10.25126/jtiik.2021844411.

Full text
Abstract:
As the prevalence of skin lesions increases, an easy and accurate method for self-prescreening of skin lesions is needed. This study compares the performance of Convolutional Neural Network-based skin lesion prescreening on original images versus images segmented with the GrabCut algorithm. Two performance parameters are evaluated: accuracy and the time to build the model. There is no difference in training and validation accuracy between the original and segmented images: even with the additional step of removing the image background with GrabCut, neither training nor validation accuracy improves significantly. For the second parameter, model-building time is determined by the amount of training and validation data; the less training data used, the faster the model is built, and vice versa. The proportion of training to validation data also affects validation accuracy. In this study, using less training data than validation data, validation accuracy increased from 0.82 to 0.90. The study provides evidence that CNN-based skin lesion prescreening does not require image background removal, and that a CNN-based machine learning model can be built with training data amounting to about 22.41% of the total data. These results are expected to be useful for developing skin lesion prescreening applications with CNN-based machine learning on computers or devices with low computational resources.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Computer algorithms"

1

Mullen, Patrick Bowen. "Learning in Short-Time Horizons with Measurable Costs." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/808.

Full text
Abstract:
Dynamic pricing is a difficult problem for machine learning. The environment is noisy, dynamic and has a measurable cost associated with exploration that necessitates that learning be done in short-time horizons. These short-time horizons force the learning algorithms to make pricing decisions based on scarce data. In this work, various machine learning algorithms are compared in the context of dynamic pricing. These algorithms include the Kalman filter, artificial neural networks, particle swarm optimization and genetic algorithms. The majority of these algorithms have been modified to handle the pricing problem. The results show that these adaptations allow the learning algorithms to handle the noisy dynamic conditions and to learn quickly.
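Of the algorithms the thesis compares, the Kalman filter is the simplest to sketch. Below is a scalar (one-dimensional) update with made-up noisy demand observations, not the thesis's actual pricing model; it shows why the filter suits short-time horizons: each new observation refines the estimate and shrinks its variance:

```python
def kalman_update(est, var, measurement, meas_var):
    """One scalar Kalman filter update: blend the prior with a noisy observation."""
    k = var / (var + meas_var)           # Kalman gain
    new_est = est + k * (measurement - est)
    new_var = (1 - k) * var              # uncertainty shrinks after each update
    return new_est, new_var

# Hypothetical noisy demand observations at a fixed price point.
est, var = 100.0, 25.0                   # prior belief and its variance
for obs in [108.0, 104.0, 106.0]:
    est, var = kalman_update(est, var, obs, meas_var=16.0)
print(round(est, 1), round(var, 2))
```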
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Computer algorithms"

1

Kakde, O. G. Algorithms for Compiler Design (Electrical and Computer Engineering Series). Charles River Media, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Computer algorithms"

1

Amine, Ouardi, and Mestari Mohammed. "Predicting A search algortihm heuristics using neural networks." In 2021 International Conference on Electrical, Computer and Energy Technologies (ICECET). IEEE, 2021. http://dx.doi.org/10.1109/icecet52533.2021.9698700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
