UTS Text Mining: Oleh-Oleh Khas Semarang Using SerpApi and Classification Algorithms

Aqil Ilhanputra
7 min read · Nov 7, 2022


First, we perform the preprocessing step.

We first import the required modules:

import numpy as np 
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style("whitegrid")

Then we load the CSV file, which I have uploaded to github.com, with the following command:

filename = "https://raw.githubusercontent.com/aqililhanputra30/aqililhans/main/latihan.csv"
df = pd.read_csv(filename)
df.head()
Output:


Next, we perform text cleaning.
We first import the modules:

import string
import re

Then enter the following code:

def clean_review(review):
    # keep only letters, replace everything else with a space, then lowercase
    return re.sub('[^a-zA-Z]', ' ', review).lower()

df['cleaned_review'] = df['review'].apply(lambda x: clean_review(str(x)))

# ratings 1-3 become the negative class (0), ratings 4-5 the positive class (1)
df['label'] = df['rating'].map({1.0:0, 2.0:0, 3.0:0, 4.0:1, 5.0:1})
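As a quick sanity check, this is roughly what the cleaner does to a made-up review string (not taken from the dataset):

sample = "Bandeng presto-nya enak banget, 10/10!"
print(clean_review(sample))
# roughly 'bandeng presto nya enak banget', with extra spaces where digits and punctuation used to be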

We add two extra features: the length of the text and the percentage of punctuation characters in it.

def count_punct(review):
    # percentage of punctuation characters, ignoring spaces
    count = sum([1 for char in review if char in string.punctuation])
    return round(count/(len(review) - review.count(" ")), 3)*100

df['review_len'] = df['review'].apply(lambda x: len(str(x)) - str(x).count(" "))
df['punct'] = df['review'].apply(lambda x: count_punct(str(x)))
df

Output:
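
To illustrate the punctuation feature, on a made-up string the calculation works out roughly like this:

sample = "Enak, murah, dan lengkap!"
print(count_punct(sample))
# 3 punctuation marks out of 22 non-space characters, so about 13.6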

Next, we perform tokenizing.

Tokenizing is a method for splitting the words in a sentence so that the text can be analyzed further.

Python's split() function can be used to separate the text. Consider the example below:

def tokenize_review(review):
    tokenized_review = review.split()
    return tokenized_review

df['tokens'] = df['cleaned_review'].apply(lambda x: tokenize_review(x))
df.head()

Output:
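
For intuition, split() simply breaks a string on whitespace. On a made-up example:

print("wingko babat enak dan murah".split())
# ['wingko', 'babat', 'enak', 'dan', 'murah']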

Next, we perform lemmatization and remove stopwords.

Filtering is the stage of taking the important words from the tokenized output, either with a stoplist (discarding less important words) or a wordlist (keeping the important words).

Stopwords are common words that usually appear in large numbers and are considered to carry no meaning. Examples of Indonesian stopwords are "yang", "dan", "di", "dari", and so on.

First, we import the modules:

import nltk
nltk.download('wordnet')
nltk.download('omw-1.4')
nltk.download('stopwords')
from nltk.corpus import stopwords
all_stopwords = stopwords.words('english')
all_stopwords.remove('not')

Output:

[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
[nltk_data] Downloading package omw-1.4 to /root/nltk_data...
[nltk_data] Package omw-1.4 is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!

Run the following code:

def lemmatize_review(token_list):
    return " ".join([lemmatizer.lemmatize(token) for token in token_list if token not in set(all_stopwords)])

lemmatizer = nltk.stem.WordNetLemmatizer()
df['lemmatized_review'] = df['tokens'].apply(lambda x: lemmatize_review(x))
df.head()

Output:
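
To get a feel for what the stopword filter and the lemmatizer do together, a quick check on a made-up token list looks roughly like this:

print(lemmatize_review(["the", "shops", "are", "not", "selling", "cookies"]))
# roughly 'shop not selling cookie': "the" and "are" are dropped as stopwords, "not" is kept, and plural nouns are lemmatized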

Next, we perform Exploratory Data Analysis.

#  Shape of the dataset, and breakdown of the classes
print(f"Input data has { len(df) } rows and { len(df.columns) } columns")
print(f"rating 1.0 = { len(df[df['rating']==1.0]) } rows")
print(f"rating 2.0 = { len(df[df['rating']==2.0]) } rows")
print(f"rating 3.0 = { len(df[df['rating']==3.0]) } rows")
print(f"rating 4.0 = { len(df[df['rating']==4.0]) } rows")
print(f"rating 5.0 = { len(df[df['rating']==5.0]) } rows")

Output:

# Missing values in the dataset
print(f"Number of null in label: { df['rating'].isnull().sum() }")
print(f"Number of null in text: { df['review'].isnull().sum() }")
sns.countplot(x='rating', data=df);

Output:

Now we create a word cloud.

A word cloud is an image showing the words used in a text; generally, the more often a word is used, the larger it appears in the image.

Import the wordcloud library

Before generating the word cloud, we need to import the library used in the program. Type the following code:

from wordcloud import WordCloud

We will display the word cloud as an image with the following commands:

df_negative = df[ (df['rating']==1.0) | (df['rating']==2.0) | (df['rating']==3.0) ]
df_positive = df[ (df['rating']==4.0) | (df['rating']==5.0) ]

#convert to list
negative_list= df_negative['lemmatized_review'].tolist()
positive_list=df_positive['lemmatized_review'].tolist()

filtered_negative = ("").join(str(negative_list)) #convert the list into a string of negative
filtered_negative = filtered_negative.lower()

filtered_positive = ("").join(str(positive_list)) #convert the list into a string of positive
filtered_positive = filtered_positive.lower()

To display the positive-review word cloud, use the following commands:

wordcloud = WordCloud(max_font_size = 160, margin=0, background_color = "white", colormap="Greens").generate(filtered_positive)
plt.figure(figsize=[10,10])
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.margins(x=0, y=0)
plt.title("Positive Reviews Word Cloud")
plt.show()

Output:

To display the negative-review word cloud, use the following commands:

wordcloud = WordCloud(max_font_size = 160, margin=0, background_color = "white", colormap="Reds").generate(filtered_negative)
plt.figure(figsize=[10,10])
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.margins(x=0, y=0)
plt.title("Negative Reviews Word Cloud")
plt.show()

Output:

Next, we perform feature extraction from the text.

X = df[['lemmatized_review', 'review_len', 'punct']]
y = df['label']
print(X.shape)
print(y.shape)
We split the data into training and test sets (70/30):

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)

Then we vectorize the lemmatized reviews with TF-IDF and combine the result with the review_len and punct features:
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(max_df = 0.5, min_df = 2) # ignore terms that occur in more than 50% documents and the ones that occur in less than 2
tfidf_train = tfidf.fit_transform(X_train['lemmatized_review'])
tfidf_test = tfidf.transform(X_test['lemmatized_review'])

X_train_vect = pd.concat([X_train[['review_len', 'punct']].reset_index(drop=True),
pd.DataFrame(tfidf_train.toarray())], axis=1)
X_test_vect = pd.concat([X_test[['review_len', 'punct']].reset_index(drop=True),
pd.DataFrame(tfidf_test.toarray())], axis=1)

X_train_vect.head()
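
If TF-IDF is new to you, here is a tiny, self-contained illustration of what TfidfVectorizer produces, using a toy corpus rather than the review data (it assumes a recent scikit-learn, where get_feature_names_out is available):

from sklearn.feature_extraction.text import TfidfVectorizer

toy = ["lumpia enak", "lumpia murah", "bandeng enak"]
vec = TfidfVectorizer()
mat = vec.fit_transform(toy)

print(vec.get_feature_names_out())  # ['bandeng' 'enak' 'lumpia' 'murah']
print(mat.shape)                    # (3, 4): one row per document, one column per vocabulary term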

Now we run the classification, starting with a Multinomial Naive Bayes classifier:

from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(X_train_vect, y_train)
naive_bayes_pred = classifier.predict(X_test_vect)

# Classification Report
print(classification_report(y_test, naive_bayes_pred))

# Confusion Matrix
class_label = ["negative", "positive"]
df_cm = pd.DataFrame(confusion_matrix(y_test, naive_bayes_pred), index=class_label, columns=class_label)
sns.heatmap(df_cm, annot=True, fmt='d')
plt.title("Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
Next, we try a Random Forest classifier:

from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=150)
classifier.fit(X_train_vect, y_train)
random_forest_pred = classifier.predict(X_test_vect)

# Classification report
print(classification_report(y_test, random_forest_pred))

# Confusion Matrix
class_label = ["negative", "positive"]
df_cm = pd.DataFrame(confusion_matrix(y_test, random_forest_pred), index=class_label, columns=class_label)
sns.heatmap(df_cm, annot=True, fmt='d')
plt.title("Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
Next, Logistic Regression:

from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train_vect, y_train)
log_reg_pred = classifier.predict(X_test_vect)
# Classification report
print(classification_report(y_test, log_reg_pred))

# Confusion Matrix
class_label = ["negative", "positive"]
df_cm = pd.DataFrame(confusion_matrix(y_test, log_reg_pred), index=class_label, columns=class_label)
sns.heatmap(df_cm, annot=True, fmt='d')
plt.title("Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
Next, a Support Vector Machine (SVM) with a linear kernel:

from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', random_state = 0)
classifier.fit(X_train_vect, y_train)
svm_pred = classifier.predict(X_test_vect)
# Classification report
print(classification_report(y_test, svm_pred))

# Confusion Matrix
class_label = ["negative", "positive"]
df_cm = pd.DataFrame(confusion_matrix(y_test, svm_pred), index=class_label, columns=class_label)
sns.heatmap(df_cm, annot=True, fmt='d')
plt.title("Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
Next, K-Nearest Neighbors (KNN):

from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5)
classifier.fit(X_train_vect, y_train)
knn_pred = classifier.predict(X_test_vect)

# Classification report
print(classification_report(y_test, knn_pred))

# Confusion Matrix
class_label = ["negative", "positive"]
df_cm = pd.DataFrame(confusion_matrix(y_test, knn_pred), index=class_label, columns=class_label)
sns.heatmap(df_cm, annot=True, fmt='d')
plt.title("Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
Finally, we compare all five models with 10-fold cross-validation:

from sklearn.model_selection import cross_val_score

models = [MultinomialNB(), LogisticRegression(), RandomForestClassifier(n_estimators = 150),
          SVC(kernel = 'linear'), KNeighborsClassifier(n_neighbors = 5)]
names = ["Naive Bayes", "Logistic Regression", "Random Forest", "SVM", "KNN"]

for model, name in zip(models, names):
    print(name)
    for score in ["accuracy", "precision", "recall", "f1"]:
        print(f" {score} - {cross_val_score(model, X_train_vect, y_train, scoring=score, cv=10).mean()} ")
    print()
Next, we retrain the KNN classifier on the TF-IDF features alone and try it on a few new example sentences:

from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5)

classifier.fit(tfidf_train, y_train)
classifier.score(tfidf_test, y_test)
data = ["Bad", "Good", "I hate the service, it's really bad", "The nurse is so kind"]
vect = tfidf.transform(data).toarray()

my_pred = classifier.predict(vect)
print(my_pred)
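
Note that tfidf was fitted on the lemmatized reviews, while the strings above are passed in raw. A sketch of running new text through the same preprocessing functions defined earlier before vectorizing (an assumption about how you might wire it up, not part of the original notebook) could look like this:

def preprocess(text):
    # same pipeline as the training data: clean, tokenize, remove stopwords, lemmatize
    return lemmatize_review(tokenize_review(clean_review(text)))

data = ["Bad", "Good", "I hate the service, it's really bad", "The nurse is so kind"]
vect = tfidf.transform([preprocess(d) for d in data]).toarray()
print(classifier.predict(vect))
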
As a comparison, we also build a simple bag-of-words model with CountVectorizer and Multinomial Naive Bayes:

from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
X_cv = cv.fit_transform(df['lemmatized_review']) # Fit the Data
y_cv = df['label']

from sklearn.model_selection import train_test_split
X_train_cv, X_test_cv, y_train_cv, y_test_cv = train_test_split(X_cv, y_cv, test_size=0.3, random_state=42)
#Naive Bayes Classifier
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB()

clf.fit(X_train_cv, y_train_cv)
clf.score(X_test_cv, y_test_cv)
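
Before a web app on Heroku can reuse this model, the fitted vectorizer and classifier are typically saved to disk. A minimal sketch with joblib (the file names are only placeholders):

import joblib

joblib.dump(cv, "vectorizer.pkl")  # the fitted CountVectorizer
joblib.dump(clf, "model.pkl")      # the trained Naive Bayes classifier

# inside the web app they can be loaded back with joblib.load("vectorizer.pkl") and joblib.load("model.pkl")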

Uploading the project from GitHub to Heroku

After the sentiment analysis is done, we will deploy the results, which have already been uploaded to GitHub, as a website on Heroku. (A minimal sketch of the deployment files such a project typically needs is shown after the steps below.)

1. First, log in to your Heroku and GitHub accounts. If you do not have accounts yet, create them first.

2. Once logged in, create a new application by clicking the New button in the top-right corner.

3. Enter any application name you like, then click Create App.

4. Next, make sure you are logged in to GitHub and that you already have the repository you want to connect.

5. Find GitHub under Connect to GitHub, then click it.

6. Click Authorize Heroku.

7. Then enter the name of your GitHub repository and click Search.

8. When it appears, click Connect.

9. Once connected, tick Wait for CI to pass before deploy, click the Enable Automatic Deploy button, then click Deploy Branch.
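
As mentioned above, here is a minimal sketch of the files a Python web app usually needs in the repository before Heroku can build and run it. The framework, app name, and package list are only placeholders; adjust them to whatever your project actually uses.

Procfile (tells Heroku how to start the app; this example assumes a Flask app exposed as app in app.py):

web: gunicorn app:app

requirements.txt (the packages Heroku installs; versions omitted here):

flask
gunicorn
pandas
scikit-learn
nltk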

Heroku link:

https://textminninguts.herokuapp.com/
