23 Sun
2021 Dev-Matching: Machine Learning Assignment Test
Prologue
Today is the machine learning test, a full 8 hours. I honestly wanted to prepare a lot more, but I barely managed any preparation, so I had no confidence. I never really believed I could do it; I went in with the mindset of trying it for the experience and giving up if it got too hard.
And, well, nothing movie-like happened. It was hard, as expected, and grinding through small problems with endless searching was frustrating. Wait, this much time has passed and this is all I've done? In the end I couldn't shake the frustration and gave up. Then I slept for an hour and a half.
After waking up, I started over with the mindset of just going as far as I could, pass or fail. The assignment had started at 10; I picked it back up at 2. Could I pull it off in 4 hours?
Problem
It was a 7-class image classification problem: person, horse, house, dog, guitar, giraffe, elephant. 1,698 images were provided, each sitting in a folder named with the English word for its class.
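The folder layout, reconstructed from the loading code further down (so this is my sketch of it, not a verbatim listing):
train/
    dog/
    elephant/
    giraffe/
    guitar/
    horse/
    house/
    person/
Each class folder holds the jpg images for that class; the test images arrive separately in test.zip.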


Deliverables
Code, with explanations
A csv file of predictions for the test data
Implementation
Unzip the archives containing the image files (removing any stale train folder first).
!unzip test.zip
!rm -r train
!unzip train.zip
Import the required libraries.
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import cv2
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPool2D, Activation, Flatten, Dense
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import load_model
Define the image size, and map each class label to an integer.
image_width = 227
image_height = 227
class_names = ['dog', 'elephant', 'giraffe', 'guitar', 'horse', 'house', 'person',]
class_names_label = {class_name:i for i, class_name in enumerate(class_names)}
num_classes = len(class_names)
train_path = './train'
Load the images from the train folder into a list. Each image's label is taken from the name of the folder it lives in.
images = []
labels = []
for dirname, _, filenames in os.walk(train_path):
    category = dirname.replace('./train/', '')
    if category == './train':
        continue
    label = class_names_label[category]
    for filename in filenames:
        path = os.path.join(dirname, filename)
        image = cv2.imread(path)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        images.append(image)
        labels.append(label)
images = np.array(images, dtype='float32')
labels = np.array(labels, dtype='int32')
Shuffle the dataset and split off 20% as a held-out test set for measuring performance.
images, labels = shuffle(images, labels, random_state=25)
images = images / 255.0
train_images, test_images, train_labels, test_labels = train_test_split(images, labels, test_size=0.2)
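As a hedged aside (not what was run above): passing stratify=labels to train_test_split keeps the 7 classes in equal proportion across the two splits, which matters when per-class counts are uneven.
# Hedged variant: a stratified split preserves class proportions
train_images, test_images, train_labels, test_labels = train_test_split(
    images, labels, test_size=0.2, stratify=labels)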
We now have 1,358 training examples and 340 test examples, 1,698 in total.
n_train = train_labels.shape[0]
n_test = test_labels.shape[0]
print("Number of training examples: {}".format(n_train))
print("Number of testing examples: {}".format(n_test))
Number of training examples: 1358
Number of testing examples: 340
Let's build a simple CNN by hand and check its performance, additionally holding out 20% of the training data for validation.
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(image_height, image_width, 3)),
    MaxPool2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPool2D(2, 2),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPool2D(2, 2),
    Flatten(),
    Dense(256, activation=tf.nn.relu),
    Dense(num_classes, activation=tf.nn.softmax)
])
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_images, train_labels, batch_size=64, epochs=20, validation_split = 0.2)
Epoch 1/20
17/17 [==============================] - 38s 240ms/step - loss: 3.6401 - accuracy: 0.1889 - val_loss: 1.9124 - val_accuracy: 0.2243
Epoch 2/20
17/17 [==============================] - 2s 110ms/step - loss: 1.8678 - accuracy: 0.2314 - val_loss: 1.8660 - val_accuracy: 0.2279
Epoch 3/20
17/17 [==============================] - 2s 110ms/step - loss: 1.7636 - accuracy: 0.2779 - val_loss: 1.7660 - val_accuracy: 0.3125
Epoch 4/20
17/17 [==============================] - 2s 111ms/step - loss: 1.6425 - accuracy: 0.3904 - val_loss: 1.6835 - val_accuracy: 0.3566
Epoch 5/20
17/17 [==============================] - 2s 110ms/step - loss: 1.4164 - accuracy: 0.4867 - val_loss: 1.7457 - val_accuracy: 0.3382
Epoch 6/20
17/17 [==============================] - 2s 110ms/step - loss: 1.0570 - accuracy: 0.6507 - val_loss: 1.6682 - val_accuracy: 0.3676
Epoch 7/20
17/17 [==============================] - 2s 109ms/step - loss: 0.7076 - accuracy: 0.8003 - val_loss: 1.8565 - val_accuracy: 0.3787
Epoch 8/20
17/17 [==============================] - 2s 109ms/step - loss: 0.4653 - accuracy: 0.8563 - val_loss: 2.2181 - val_accuracy: 0.3824
Epoch 9/20
17/17 [==============================] - 2s 111ms/step - loss: 0.2103 - accuracy: 0.9435 - val_loss: 2.4412 - val_accuracy: 0.4007
Epoch 10/20
17/17 [==============================] - 2s 112ms/step - loss: 0.0872 - accuracy: 0.9852 - val_loss: 3.1015 - val_accuracy: 0.3787
Epoch 11/20
17/17 [==============================] - 2s 110ms/step - loss: 0.0549 - accuracy: 0.9883 - val_loss: 3.1400 - val_accuracy: 0.3676
Epoch 12/20
17/17 [==============================] - 2s 112ms/step - loss: 0.0546 - accuracy: 0.9836 - val_loss: 3.5330 - val_accuracy: 0.3860
Epoch 13/20
17/17 [==============================] - 2s 111ms/step - loss: 0.0174 - accuracy: 0.9989 - val_loss: 3.5670 - val_accuracy: 0.4118
Epoch 14/20
17/17 [==============================] - 2s 110ms/step - loss: 0.0107 - accuracy: 1.0000 - val_loss: 3.6782 - val_accuracy: 0.4301
Epoch 15/20
17/17 [==============================] - 2s 110ms/step - loss: 0.0028 - accuracy: 1.0000 - val_loss: 3.9017 - val_accuracy: 0.4375
Epoch 16/20
17/17 [==============================] - 2s 111ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 4.0851 - val_accuracy: 0.4375
Epoch 17/20
17/17 [==============================] - 2s 111ms/step - loss: 9.3226e-04 - accuracy: 1.0000 - val_loss: 4.1403 - val_accuracy: 0.4412
Epoch 18/20
17/17 [==============================] - 2s 111ms/step - loss: 7.6645e-04 - accuracy: 1.0000 - val_loss: 4.1752 - val_accuracy: 0.4412
Epoch 19/20
17/17 [==============================] - 2s 111ms/step - loss: 5.6912e-04 - accuracy: 1.0000 - val_loss: 4.2507 - val_accuracy: 0.4412
Epoch 20/20
17/17 [==============================] - 2s 111ms/step - loss: 5.1643e-04 - accuracy: 1.0000 - val_loss: 4.3206 - val_accuracy: 0.4449
Training accuracy is excellent, but validation accuracy lags far behind: the model is overfitting.
def plot_accuracy_loss(history):
    fig = plt.figure(figsize=(10, 5))
    plt.subplot(221)
    plt.plot(history.history['accuracy'], 'bo--', label="accuracy")
    plt.plot(history.history['val_accuracy'], 'ro--', label="val_accuracy")
    plt.title("train_acc vs val_acc")
    plt.ylabel("accuracy")
    plt.xlabel("epochs")
    plt.legend()
    plt.subplot(222)
    plt.plot(history.history['loss'], 'bo--', label="loss")
    plt.plot(history.history['val_loss'], 'ro--', label="val_loss")
    plt.title("train_loss vs val_loss")
    plt.ylabel("loss")
    plt.xlabel("epochs")
    plt.legend()
    plt.show()
plot_accuracy_loss(history)

The plots show that validation performance stops improving after roughly 5 epochs.
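A hedged sketch of one way to act on this (not what was run here): an EarlyStopping callback halts training once val_loss stops improving and restores the best weights, instead of hand-picking an epoch count.
from tensorflow.keras.callbacks import EarlyStopping

# stop once val_loss has not improved for 3 epochs, then keep the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
history = model.fit(train_images, train_labels, batch_size=64, epochs=20,
                    validation_split=0.2, callbacks=[early_stop])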
test_loss = model.evaluate(test_images, test_labels)
11/11 [==============================] - 1s 45ms/step - loss: 5.0795 - accuracy: 0.4118
Test accuracy is around 0.4 (0.35 to 0.45 across several runs).
Image Augmentation
The larger the dataset, the less likely the model is to overfit, which helps performance.
To grow the dataset, we use ImageDataGenerator.
The dataset currently totals 1,698 images, which we multiply through image manipulation. The plan was 3 generated images per original, i.e. 1698 × 4 = 6792 images in total (for reference: 1698 × 2 = 3396, 1698 × 3 = 5094, 1698 × 4 = 6792, 1698 × 5 = 8490).
Anything more than that failed on Colab for lack of resources, so only 2 extra images per original were generated (3 in total per image).
from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img

imageGenerator = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    brightness_range=[.2, .2],
    horizontal_flip=True,
)
for dirname, _, filenames in os.walk(train_path):
    category = dirname.replace('./train/', '')
    if category == './train':
        continue
    for filename in filenames:
        img = load_img(os.path.join(dirname, filename))
        x = img_to_array(img)
        x = x.reshape((1,) + x.shape)
        i = 0
        for batch in imageGenerator.flow(x, batch_size=1,
                                         save_to_dir=os.path.join(train_path, category),
                                         save_format='jpg'):
            i += 1
            if i == 2:
                break
The manipulation parameters are set as follows:
rotation_range = 20
Rotate by up to 20 degrees.
width_shift_range = 0.1
height_shift_range = 0.1
Shift horizontally or vertically by up to 10%.
brightness_range = [.2, .2]
Intended to vary brightness from -20% to +20% (but see the caveat below).
horizontal_flip = True
Flip horizontally.
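The caveat on brightness_range: in Keras these values are multiplicative factors where 1.0 means "unchanged", so [.2, .2] actually fixes every generated image at 20% of its original brightness rather than varying it. A hedged sketch of what the -20% to +20% intent would look like (my correction, not what was run):
# brightness factors around 1.0: [0.8, 1.2] varies brightness by about -20% to +20%
imageGenerator = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    brightness_range=[0.8, 1.2],
    horizontal_flip=True,
)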
train_path = './train'
images = []
labels = []
for dirname, _, filenames in os.walk(train_path):
    category = dirname.split('/')[-1]
    if category == 'train':
        continue
    label = class_names_label[category]
    for filename in filenames:
        path = os.path.join(dirname, filename)
        image = cv2.imread(path)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        images.append(image)
        labels.append(label)
images = np.array(images, dtype='float32')
labels = np.array(labels, dtype='int32')
images, labels = shuffle(images, labels, random_state=25)
images = images / 255.0
train_images, test_images, train_labels, test_labels = train_test_split(images, labels, test_size=0.2)
n_train = train_labels.shape[0]
n_test = test_labels.shape[0]
print("Number of training examples: {}".format(n_train))
print("Number of testing examples: {}".format(n_test))
Number of training examples: 3989
Number of testing examples: 998
Note that the same model instance from above is reused below, so training continues from its already-trained weights.
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_images, train_labels, batch_size=64, epochs=20, validation_split = 0.2)
Epoch 1/20
50/50 [==============================] - 8s 153ms/step - loss: 2.2086 - accuracy: 0.4334 - val_loss: 1.4567 - val_accuracy: 0.4900
Epoch 2/20
50/50 [==============================] - 5s 109ms/step - loss: 1.1767 - accuracy: 0.5928 - val_loss: 1.3364 - val_accuracy: 0.5426
Epoch 3/20
50/50 [==============================] - 5s 110ms/step - loss: 0.6962 - accuracy: 0.7762 - val_loss: 1.4134 - val_accuracy: 0.5564
Epoch 4/20
50/50 [==============================] - 6s 111ms/step - loss: 0.3223 - accuracy: 0.9011 - val_loss: 1.9013 - val_accuracy: 0.5501
Epoch 5/20
50/50 [==============================] - 6s 111ms/step - loss: 0.1151 - accuracy: 0.9695 - val_loss: 2.3870 - val_accuracy: 0.5313
Epoch 6/20
50/50 [==============================] - 6s 110ms/step - loss: 0.0513 - accuracy: 0.9889 - val_loss: 2.5908 - val_accuracy: 0.5602
Epoch 7/20
50/50 [==============================] - 6s 112ms/step - loss: 0.0282 - accuracy: 0.9938 - val_loss: 2.8788 - val_accuracy: 0.5238
Epoch 8/20
50/50 [==============================] - 6s 111ms/step - loss: 0.0187 - accuracy: 0.9968 - val_loss: 3.5123 - val_accuracy: 0.5125
Epoch 9/20
50/50 [==============================] - 6s 111ms/step - loss: 0.0382 - accuracy: 0.9899 - val_loss: 3.8634 - val_accuracy: 0.4561
Epoch 10/20
50/50 [==============================] - 6s 112ms/step - loss: 0.1146 - accuracy: 0.9645 - val_loss: 2.8725 - val_accuracy: 0.5088
Epoch 11/20
50/50 [==============================] - 6s 111ms/step - loss: 0.0543 - accuracy: 0.9849 - val_loss: 2.9627 - val_accuracy: 0.5276
Epoch 12/20
50/50 [==============================] - 6s 111ms/step - loss: 0.0163 - accuracy: 0.9961 - val_loss: 3.2457 - val_accuracy: 0.4987
Epoch 13/20
50/50 [==============================] - 6s 112ms/step - loss: 0.0150 - accuracy: 0.9974 - val_loss: 3.4321 - val_accuracy: 0.5238
Epoch 14/20
50/50 [==============================] - 6s 112ms/step - loss: 0.0058 - accuracy: 0.9990 - val_loss: 3.7935 - val_accuracy: 0.5288
Epoch 15/20
50/50 [==============================] - 6s 111ms/step - loss: 6.9671e-04 - accuracy: 1.0000 - val_loss: 3.8625 - val_accuracy: 0.5401
Epoch 16/20
50/50 [==============================] - 6s 112ms/step - loss: 2.6550e-04 - accuracy: 1.0000 - val_loss: 3.9490 - val_accuracy: 0.5414
Epoch 17/20
50/50 [==============================] - 6s 112ms/step - loss: 1.7575e-04 - accuracy: 1.0000 - val_loss: 3.9936 - val_accuracy: 0.5439
Epoch 18/20
50/50 [==============================] - 6s 112ms/step - loss: 1.4344e-04 - accuracy: 1.0000 - val_loss: 4.0374 - val_accuracy: 0.5439
Epoch 19/20
50/50 [==============================] - 6s 112ms/step - loss: 1.1886e-04 - accuracy: 1.0000 - val_loss: 4.0765 - val_accuracy: 0.5439
Epoch 20/20
50/50 [==============================] - 6s 112ms/step - loss: 1.0780e-04 - accuracy: 1.0000 - val_loss: 4.1103 - val_accuracy: 0.5439
plot_accuracy_loss(history)

test_loss = model.evaluate(test_images, test_labels)
32/32 [==============================] - 1s 25ms/step - loss: 3.8832 - accuracy: 0.5341
Test performance has improved. (One caveat: the augmented images were generated before the train/test split, so some test images have augmented siblings in the training set, which likely flatters this number.)
Simple: 0.41
Data Augmentation: 0.53
VGG16 Model
The classifier built so far is shallow. To do better, we use VGG16, a pretrained model shipped with TensorFlow: images pass through the VGG16 convolutional base, and the resulting features then go through fully connected Dense layers.
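One hedged aside before the code: the features below are computed on images scaled by /255, but VGG16's ImageNet weights were trained with Keras' own preprocess_input (caffe-style: RGB to BGR plus per-channel mean subtraction), which might squeeze out better features. A sketch of that variant, under the assumption that images still holds raw 0-255 pixels:
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

vgg = VGG16(weights='imagenet', include_top=False)
# preprocess_input may modify its argument in place, hence the copy
features = vgg.predict(preprocess_input(images.copy()))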
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
model = VGG16(weights='imagenet', include_top=False)
train_features = model.predict(train_images)
test_features = model.predict(test_images)
n_train, x, y, z = train_features.shape
n_test, x, y, z = test_features.shape
numFeatures = x * y * z
model2 = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(x, y, z)),
    tf.keras.layers.Dense(50, activation=tf.nn.relu),
    tf.keras.layers.Dense(num_classes, activation=tf.nn.softmax)
])
model2.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics=['accuracy'])
history2 = model2.fit(train_features, train_labels, batch_size=64, epochs=15, validation_split = 0.2)
Epoch 1/15
50/50 [==============================] - 1s 16ms/step - loss: 1.8851 - accuracy: 0.4079 - val_loss: 0.6769 - val_accuracy: 0.7581
Epoch 2/15
1/50 [..............................] - ETA: 0s - loss: 0.5036 - accuracy: 0.8438
(Training was interrupted by hand during the second epoch; the KeyboardInterrupt traceback is omitted here.)
plot_accuracy_loss(history)

test_loss = model2.evaluate(test_features, test_labels)
32/32 [==============================] - 0s 3ms/step - loss: 0.3025 - accuracy: 0.8938
Performance has improved dramatically.
Simple: 0.41
Data Augmentation: 0.53
VGG16 + Data Augmentation: 0.89
Ensemble
Ensemble learning trains several classifiers and combines their outputs (by averaging or majority vote) into one final prediction.
An ensemble can be run in three ways:
Get different results from different classifiers on the same dataset.
Get different results from the same classifier on different data.
Or do both at once.
Here, 10 different classifiers each produce results on the same dataset. "Different classifiers" means each one's Dense layer receives its input differently, much like the VGG16 features being fed through 10 randomly Dropout-ed inputs. (Strictly speaking, the code below also resamples 80% of the training examples for each classifier, so both kinds of variation are in play.) The two ways of combining the outputs are sketched just below.
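To make "average or majority vote" concrete, a minimal sketch of the two aggregation strategies, assuming preds is the (n_estimators, n_samples, num_classes) array of softmax outputs (the code further down sums instead of averaging, which yields the same argmax):
import numpy as np

def soft_vote(preds):
    # average the class probabilities, then pick the most likely class
    return preds.mean(axis=0).argmax(axis=1)

def hard_vote(preds):
    # each model casts one vote per sample; the most frequent class wins
    votes = preds.argmax(axis=2)  # shape (n_estimators, n_samples)
    n_classes = preds.shape[2]
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes).argmax(), 0, votes)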
n_estimators = 10
max_samples = 0.8
max_samples *= n_train
max_samples = int(max_samples)
models = list()
random = np.random.randint(50, 100, size = n_estimators)
for i in range(n_estimators):
    model3 = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(x, y, z)),
        tf.keras.layers.Dense(random[i], activation=tf.nn.relu),
        tf.keras.layers.Dense(num_classes, activation=tf.nn.softmax)
    ])
    model3.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    models.append(model3)
histories = []
for i in range(n_estimators):
    train_idx = np.random.choice(len(train_features), size=max_samples)
    histories.append(models[i].fit(train_features[train_idx], train_labels[train_idx],
                                   batch_size=64, epochs=10, validation_split=0.1))
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 1.9823 - accuracy: 0.3059 - val_loss: 1.0176 - val_accuracy: 0.7063
Epoch 2/10
45/45 [==============================] - 0s 6ms/step - loss: 0.9019 - accuracy: 0.7247 - val_loss: 0.5121 - val_accuracy: 0.8875
Epoch 3/10
45/45 [==============================] - 0s 7ms/step - loss: 0.3701 - accuracy: 0.9178 - val_loss: 0.3479 - val_accuracy: 0.9094
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.2173 - accuracy: 0.9613 - val_loss: 0.2734 - val_accuracy: 0.9281
Epoch 5/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1204 - accuracy: 0.9880 - val_loss: 0.2201 - val_accuracy: 0.9375
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0818 - accuracy: 0.9940 - val_loss: 0.1895 - val_accuracy: 0.9406
Epoch 7/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0501 - accuracy: 0.9982 - val_loss: 0.1696 - val_accuracy: 0.9406
Epoch 8/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0344 - accuracy: 0.9996 - val_loss: 0.1632 - val_accuracy: 0.9312
Epoch 9/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0235 - accuracy: 0.9998 - val_loss: 0.1513 - val_accuracy: 0.9563
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0191 - accuracy: 0.9993 - val_loss: 0.1467 - val_accuracy: 0.9625
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.1596 - accuracy: 0.3024 - val_loss: 1.0193 - val_accuracy: 0.6375
Epoch 2/10
45/45 [==============================] - 0s 6ms/step - loss: 0.8358 - accuracy: 0.7512 - val_loss: 0.6540 - val_accuracy: 0.8031
Epoch 3/10
45/45 [==============================] - 0s 6ms/step - loss: 0.5051 - accuracy: 0.8747 - val_loss: 0.4970 - val_accuracy: 0.8375
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.3183 - accuracy: 0.9158 - val_loss: 0.3074 - val_accuracy: 0.9125
Epoch 5/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1145 - accuracy: 0.9833 - val_loss: 0.2050 - val_accuracy: 0.9500
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0617 - accuracy: 0.9970 - val_loss: 0.1897 - val_accuracy: 0.9625
Epoch 7/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0357 - accuracy: 1.0000 - val_loss: 0.1739 - val_accuracy: 0.9656
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0302 - accuracy: 1.0000 - val_loss: 0.1671 - val_accuracy: 0.9563
Epoch 9/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0192 - accuracy: 1.0000 - val_loss: 0.1580 - val_accuracy: 0.9563
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0151 - accuracy: 1.0000 - val_loss: 0.1578 - val_accuracy: 0.9625
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.2014 - accuracy: 0.3511 - val_loss: 0.8670 - val_accuracy: 0.6906
Epoch 2/10
45/45 [==============================] - 0s 7ms/step - loss: 0.6973 - accuracy: 0.7739 - val_loss: 0.6392 - val_accuracy: 0.8125
Epoch 3/10
45/45 [==============================] - 0s 6ms/step - loss: 0.4221 - accuracy: 0.9097 - val_loss: 0.4906 - val_accuracy: 0.8656
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.2574 - accuracy: 0.9617 - val_loss: 0.4151 - val_accuracy: 0.8906
Epoch 5/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1797 - accuracy: 0.9754 - val_loss: 0.3713 - val_accuracy: 0.8813
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1147 - accuracy: 0.9935 - val_loss: 0.3483 - val_accuracy: 0.9000
Epoch 7/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0843 - accuracy: 0.9957 - val_loss: 0.3210 - val_accuracy: 0.9031
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0618 - accuracy: 0.9977 - val_loss: 0.3275 - val_accuracy: 0.9125
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0476 - accuracy: 0.9982 - val_loss: 0.3028 - val_accuracy: 0.9125
Epoch 10/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0344 - accuracy: 0.9998 - val_loss: 0.2974 - val_accuracy: 0.9094
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.9052 - accuracy: 0.3392 - val_loss: 0.7875 - val_accuracy: 0.7437
Epoch 2/10
45/45 [==============================] - 0s 7ms/step - loss: 0.5499 - accuracy: 0.8438 - val_loss: 0.5012 - val_accuracy: 0.8406
Epoch 3/10
45/45 [==============================] - 0s 7ms/step - loss: 0.2768 - accuracy: 0.9504 - val_loss: 0.3944 - val_accuracy: 0.8938
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1770 - accuracy: 0.9739 - val_loss: 0.3226 - val_accuracy: 0.9219
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1055 - accuracy: 0.9945 - val_loss: 0.2937 - val_accuracy: 0.9281
Epoch 6/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0730 - accuracy: 0.9987 - val_loss: 0.2598 - val_accuracy: 0.9312
Epoch 7/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0519 - accuracy: 0.9994 - val_loss: 0.2530 - val_accuracy: 0.9281
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0383 - accuracy: 1.0000 - val_loss: 0.2581 - val_accuracy: 0.9250
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0281 - accuracy: 1.0000 - val_loss: 0.2265 - val_accuracy: 0.9406
Epoch 10/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0230 - accuracy: 1.0000 - val_loss: 0.2292 - val_accuracy: 0.9281
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.1120 - accuracy: 0.4169 - val_loss: 0.5158 - val_accuracy: 0.8438
Epoch 2/10
45/45 [==============================] - 0s 6ms/step - loss: 0.3176 - accuracy: 0.9168 - val_loss: 0.2947 - val_accuracy: 0.9094
Epoch 3/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1382 - accuracy: 0.9806 - val_loss: 0.2272 - val_accuracy: 0.9281
Epoch 4/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0661 - accuracy: 0.9959 - val_loss: 0.2120 - val_accuracy: 0.9375
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0399 - accuracy: 0.9994 - val_loss: 0.1898 - val_accuracy: 0.9375
Epoch 6/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0238 - accuracy: 1.0000 - val_loss: 0.1754 - val_accuracy: 0.9469
Epoch 7/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0172 - accuracy: 1.0000 - val_loss: 0.1769 - val_accuracy: 0.9344
Epoch 8/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0131 - accuracy: 1.0000 - val_loss: 0.1698 - val_accuracy: 0.9438
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0116 - accuracy: 1.0000 - val_loss: 0.1685 - val_accuracy: 0.9438
Epoch 10/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0087 - accuracy: 1.0000 - val_loss: 0.1692 - val_accuracy: 0.9438
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.4135 - accuracy: 0.3638 - val_loss: 0.8577 - val_accuracy: 0.6750
Epoch 2/10
45/45 [==============================] - 0s 6ms/step - loss: 0.5958 - accuracy: 0.8420 - val_loss: 0.5768 - val_accuracy: 0.7969
Epoch 3/10
45/45 [==============================] - 0s 6ms/step - loss: 0.3599 - accuracy: 0.9117 - val_loss: 0.4977 - val_accuracy: 0.8406
Epoch 4/10
45/45 [==============================] - 0s 6ms/step - loss: 0.2330 - accuracy: 0.9629 - val_loss: 0.3527 - val_accuracy: 0.8844
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1348 - accuracy: 0.9895 - val_loss: 0.3043 - val_accuracy: 0.8969
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1012 - accuracy: 0.9908 - val_loss: 0.2929 - val_accuracy: 0.9062
Epoch 7/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0683 - accuracy: 0.9976 - val_loss: 0.2615 - val_accuracy: 0.9156
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0497 - accuracy: 0.9995 - val_loss: 0.2250 - val_accuracy: 0.9250
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0372 - accuracy: 1.0000 - val_loss: 0.2279 - val_accuracy: 0.9312
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0284 - accuracy: 1.0000 - val_loss: 0.2228 - val_accuracy: 0.9344
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 1.8665 - accuracy: 0.4245 - val_loss: 0.6087 - val_accuracy: 0.7781
Epoch 2/10
45/45 [==============================] - 0s 7ms/step - loss: 0.3551 - accuracy: 0.8953 - val_loss: 0.3399 - val_accuracy: 0.9156
Epoch 3/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1476 - accuracy: 0.9772 - val_loss: 0.2929 - val_accuracy: 0.9156
Epoch 4/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0767 - accuracy: 0.9972 - val_loss: 0.2542 - val_accuracy: 0.9344
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0447 - accuracy: 0.9987 - val_loss: 0.2191 - val_accuracy: 0.9438
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0286 - accuracy: 1.0000 - val_loss: 0.2117 - val_accuracy: 0.9563
Epoch 7/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0220 - accuracy: 1.0000 - val_loss: 0.2031 - val_accuracy: 0.9500
Epoch 8/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0162 - accuracy: 1.0000 - val_loss: 0.2149 - val_accuracy: 0.9344
Epoch 9/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0127 - accuracy: 1.0000 - val_loss: 0.2075 - val_accuracy: 0.9469
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0110 - accuracy: 1.0000 - val_loss: 0.2015 - val_accuracy: 0.9500
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.4010 - accuracy: 0.3874 - val_loss: 0.9141 - val_accuracy: 0.7000
Epoch 2/10
45/45 [==============================] - 0s 7ms/step - loss: 0.7456 - accuracy: 0.7663 - val_loss: 0.5889 - val_accuracy: 0.8062
Epoch 3/10
45/45 [==============================] - 0s 7ms/step - loss: 0.3749 - accuracy: 0.9113 - val_loss: 0.3908 - val_accuracy: 0.8781
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1923 - accuracy: 0.9694 - val_loss: 0.2747 - val_accuracy: 0.8969
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1132 - accuracy: 0.9871 - val_loss: 0.2336 - val_accuracy: 0.9281
Epoch 6/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0662 - accuracy: 0.9937 - val_loss: 0.2369 - val_accuracy: 0.9062
Epoch 7/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0419 - accuracy: 0.9990 - val_loss: 0.2026 - val_accuracy: 0.9406
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0250 - accuracy: 0.9993 - val_loss: 0.2055 - val_accuracy: 0.9406
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0180 - accuracy: 1.0000 - val_loss: 0.1853 - val_accuracy: 0.9344
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0120 - accuracy: 1.0000 - val_loss: 0.1871 - val_accuracy: 0.9312
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.0665 - accuracy: 0.4211 - val_loss: 0.5312 - val_accuracy: 0.8156
Epoch 2/10
45/45 [==============================] - 0s 7ms/step - loss: 0.3373 - accuracy: 0.9030 - val_loss: 0.3433 - val_accuracy: 0.9031
Epoch 3/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1310 - accuracy: 0.9791 - val_loss: 0.2772 - val_accuracy: 0.9125
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0733 - accuracy: 0.9955 - val_loss: 0.2638 - val_accuracy: 0.9187
Epoch 5/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0448 - accuracy: 1.0000 - val_loss: 0.2496 - val_accuracy: 0.9281
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0293 - accuracy: 0.9997 - val_loss: 0.2377 - val_accuracy: 0.9312
Epoch 7/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0232 - accuracy: 1.0000 - val_loss: 0.2428 - val_accuracy: 0.9312
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0165 - accuracy: 1.0000 - val_loss: 0.2363 - val_accuracy: 0.9312
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0123 - accuracy: 1.0000 - val_loss: 0.2325 - val_accuracy: 0.9312
Epoch 10/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0106 - accuracy: 1.0000 - val_loss: 0.2233 - val_accuracy: 0.9375
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.2188 - accuracy: 0.3826 - val_loss: 0.7416 - val_accuracy: 0.7563
Epoch 2/10
45/45 [==============================] - 0s 6ms/step - loss: 0.5199 - accuracy: 0.8513 - val_loss: 0.4587 - val_accuracy: 0.8656
Epoch 3/10
45/45 [==============================] - 0s 6ms/step - loss: 0.2311 - accuracy: 0.9584 - val_loss: 0.3136 - val_accuracy: 0.9375
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1234 - accuracy: 0.9838 - val_loss: 0.2402 - val_accuracy: 0.9469
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0727 - accuracy: 0.9968 - val_loss: 0.2090 - val_accuracy: 0.9500
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0448 - accuracy: 1.0000 - val_loss: 0.1904 - val_accuracy: 0.9531
Epoch 7/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0323 - accuracy: 1.0000 - val_loss: 0.1819 - val_accuracy: 0.9531
Epoch 8/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0230 - accuracy: 1.0000 - val_loss: 0.1719 - val_accuracy: 0.9594
Epoch 9/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0167 - accuracy: 1.0000 - val_loss: 0.1676 - val_accuracy: 0.9469
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0133 - accuracy: 1.0000 - val_loss: 0.1618 - val_accuracy: 0.9469
predictions = []
for i in range(n_estimators):
    predictions.append(models[i].predict(test_features))
predictions = np.array(predictions)
predictions = predictions.sum(axis = 0)
pred_labels = predictions.argmax(axis=1)
print("Accuracy : {}".format(accuracy_score(test_labels, pred_labels)))
Accuracy : 0.8937875751503006
Performance improved a little further.
Simple: 0.41
Data Augmentation: 0.53
VGG16 + Data Augmentation: 0.89
VGG16 + Data Augmentation + Ensemble: 0.90 (0.89 to 0.91)
Applying to the Test Data
test_path = './test/0'
test_images = []
for dirname, _, filenames in os.walk(test_path):
    for filename in filenames:
        path = os.path.join(dirname, filename)
        image = cv2.imread(path)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        test_images.append(image)
test_images = np.array(test_images, dtype='float32')
test_images = test_images / 255.0  # scale to match the training preprocessing (this line was missing in the original run)
test_features = model.predict(test_images)
predictions = []
for i in range(n_estimators):
    predictions.append(models[i].predict(test_features))
predictions = np.array(predictions)
predictions = predictions.sum(axis = 0)
pred = predictions.argmax(axis=1)
Turn the model's predictions into a DataFrame and save it as a csv file.
def write_preds(pred, fname):
    pd.DataFrame({"answer value": pred}).to_csv(fname, index=False, header=True)

write_preds(pred, "result.csv")
Download the resulting csv file.
from google.colab import files
files.download('result.csv')
Epilogue
"Oh, looks like I did pretty well?? The performance is good!! So, the result??" If you're wondering: I ran out of time. It was entirely self-inflicted, which makes it sting a little. If I hadn't slept, I might have posted a decent score. Out of regret I submitted the .ipynb file anyway. My leaderboard score isn't even a 0. Just: not submitted.
I had nearly finished, but near the end I wrote a confusing variable name and got stuck on the resulting error for ages. I only solved it 30 minutes after the test ended.
It's a real shame, but for a first attempt at this kind of assignment I think I did alright. It was great to put the ensembling and Image Augmentation I learned recently to use, and it was my first time using the VGG16 model. The performance turned out quite decent, too.
Next time I want to be practiced and well-read enough to write long stretches of code in one go (that is, with far less googling)! I'll have a job before the next Dev-matching, so it's a shame I won't get to sit the next one!
A few things surprised me(?), so I'll jot them down before I go.
There were more 98-plus scores on the leaderboard than I expected. Toward the end there seemed to be at least 10 people on 100.
Four hours in, there were already three 100s.
I wonder what my score would have been ㅠㅠ
I probably wouldn't have made the top ranks anyway, since my test and validation accuracy were only around 90.
Exactly one person scored 100 on the private test data, and as far as I could tell they weren't one of the leaderboard 100s.
Luck(?) strikes again.
Colab kept running out of RAM while I was implementing the ensemble. I must have lost over an hour to this; the session kept resetting, and it was exhausting.
I went big on the number of classifiers, 10, but in exchange could only run 10 epochs each.
Even epoch = 12 reset the session ㅠㅠ
Image Augmentation also couldn't handle large amounts of data.
I tried around ten thousand images and it wouldn't run.
What I settled for was about 4,500 images...
The problem was easier than I expected.
This was my first time writing model code. Sure, it was googled and stitched together from assorted snippets... but who isn't doing that?
I need to study more. Fighting!