
2021 Dev-Matching: Machine Learning Assignment Test

Prologue

์˜ค๋Š˜์€ 8์‹œ๊ฐ„์ด๋‚˜ ์ง„ํ–‰ํ•˜๋Š” ๋จธ์‹ ๋Ÿฌ๋‹ ํ…Œ์ŠคํŠธ์ด๋‹ค. ์‚ฌ์‹ค ์ข€ ๋” ๋งŽ์€ ์ค€๋น„๋ฅผ ํ•˜๊ณ  ์‹ถ์—ˆ๋Š”๋ฐ, ์–ผ๋งˆ ์ค€๋น„๋ฅผ ๋ชปํ•ด์„œ ์ž์‹ ๊ฐ์ด ์—†์—ˆ๋‹ค. ๋‚ด๊ฐ€ ํ•  ์ˆ˜ ์žˆ์„ ๊ฑฐ๋ผ๋Š” ์ƒ๊ฐ๋„ ๋ชปํ–ˆ๊ณ  ๊ทธ๋ƒฅ ๊ฒฝํ—˜์‚ผ์•„ ํ•ด๋ณด๋‹ค๊ฐ€ ์–ด๋ ค์šฐ๋ฉด ํฌ๊ธฐํ•˜์ž๋ผ๋Š” ๋งˆ์ธ๋“œ๋ฅผ ๊ฐ€์ง€๊ณ  ์‹œ์ž‘ํ–ˆ๋‹ค.

๊ทธ๋ฆฌ๊ณ , ์˜ํ™” ๊ฐ™์€ ์ผ์€ ๋ฒŒ์–ด์ง€์ง€ ์•Š๋Š”๋‹ค. ์—ญ์‹œ๋‚˜ ์–ด๋ ค์› ๊ณ  ๋งŽ์€ ๊ฒ€์ƒ‰์œผ๋กœ ์กฐ๊ทธ๋งˆํ•œ ๋ฌธ์ œ๋“ค์„ ํ•ด๊ฒฐํ•จ์— ์žˆ์–ด ๋‹ต๋‹ตํ–ˆ๋‹ค. ์•„๋‹ˆ ์‹œ๊ฐ„์ด ์ด๋งŒํผ์ด๋‚˜ ์ง€๋‚ฌ๋Š”๋ฐ ์•„์ง ์ด๊ฒƒ๋ฐ–์— ๋ชปํ–ˆ๋‹ค๊ณ ? ๊ฒฐ๊ตญ ๋‹ต๋‹ตํ•œ ๋งˆ์Œ์„ ๋ชป์ด๊ธฐ๊ณ  ํฌ๊ธฐํ–ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  1์‹œ๊ฐ„ ๋ฐ˜์„ ์žค๋‹ค.

์ž๊ณ  ์ผ์–ด๋‚˜์„œ ๋ชปํ•ด๋„ ๋˜๋‹ˆ๊นŒ ํ•œ๋ฒˆ ํ•  ์ˆ˜ ์žˆ๋Š”๋ฐ ๊นŒ์ง€ ํ•ด๋ณด์ž ๋ผ๋Š” ๋งˆ์ธ๋“œ๋กœ ๋‹ค์‹œ ์‹œ์ž‘ํ–ˆ๋‹ค. 10์‹œ๋ถ€ํ„ฐ ์‹œ์ž‘์ธ ๊ณผ์ œ๋ฅผ 2์‹œ๋ถ€ํ„ฐ ๋‹ค์‹œ ์‹œ์ž‘ํ–ˆ๋‹ค. 4์‹œ๊ฐ„๋งŒ์— ํ•  ์ˆ˜ ์žˆ์„๊นŒ?

Problem

์‚ฌ๋žŒ, ๋ง, ์ง‘, ๊ฐœ, ๊ธฐํƒ€, ๊ธฐ๋ฆฐ, ์ฝ”๋ผ๋ฆฌ์˜ 7์ž๋ฆฌ ์นดํ…Œ๊ณ ๋ฆฌ ๋ถ„๋ฅ˜ ๋ฌธ์ œ์˜€๋‹ค. 1698๊ฐœ์˜ ๋ฐ์ดํ„ฐ์…‹์ด ์ฃผ์–ด์กŒ๊ณ  ๊ฐ๊ฐ์˜ ๋ฐ์ดํ„ฐ๋Š” ๊ฐ ๋‹จ์–ด์˜ ์˜์–ด๋‹จ์–ด์˜ ์ด๋ฆ„์„ ๊ฐ€์ง„ ํด๋”์•ˆ์— ์žˆ์—ˆ๋‹ค.

(Figures: the full training data; example images inside the dog folder)

Deliverables

  • Code annotated with explanations

  • ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ csv ๊ฒฐ๊ณผ ํŒŒ์ผ

Implementation

๋จธ์‹ ๋Ÿฌ๋‹ ๊ณผ์ œํ…Œ์ŠคํŠธ

Image ํŒŒ์ผ๋“ค์ด ๋‹ด๊ธด ์••์ถ• ํŒŒ์ผ์„ ํ’‰๋‹ˆ๋‹ค.

!unzip test.zip
!rm -r train
!unzip train.zip

ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ ์–ธํ•ฉ๋‹ˆ๋‹ค

import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import cv2
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPool2D, Activation, Flatten, Dense
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import load_model

์ด๋ฏธ์ง€์˜ ํฌ๊ธฐ๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด๋ฏธ์ง€์˜ ๊ฐ ๋ผ๋ฒจ๋“ค์„ ์ •์ˆ˜์™€ ๋งค์นญํ•ฉ๋‹ˆ๋‹ค

image_width = 227
image_height = 227

class_names = ['dog', 'elephant', 'giraffe', 'guitar', 'horse', 'house', 'person',]
class_names_label = {class_name:i for i, class_name in enumerate(class_names)}
num_classes = len(class_names)

train_path = './train'

trainํด๋”์—์„œ ์ด๋ฏธ์ง€๋ฅผ ๋ถˆ๋Ÿฌ์™€ ๋ฆฌ์ŠคํŠธ์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ์ด ๋•Œ ๊ฐ ์ด๋ฏธ์ง€์˜ ๋ผ๋ฒจ์€ ์ด๋ฏธ์ง€๊ฐ€ ์กด์žฌํ•˜๋Š” ํด๋” ์ด๋ฆ„์œผ๋กœ ์ •ํ•ฉ๋‹ˆ๋‹ค.

images = []
labels = []

# walk the train folder; each subfolder name is a class label
# (the competition images appear to be 227x227 already, so no resize step is needed)
for dirname, _, filenames in os.walk(train_path):
  category = dirname.replace('./train/','')
  if category == './train':   # skip the root folder itself
    continue
  label = class_names_label[category]
  for filename in filenames:
    path = os.path.join(dirname, filename)
    image = cv2.imread(path)                        # OpenCV loads images as BGR
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # convert to RGB
    images.append(image)
    labels.append(label)

images = np.array(images, dtype = 'float32')
labels = np.array(labels, dtype = 'int32')   

๋ฐ์ดํ„ฐ์…‹์„ ์…”ํ”Œํ•˜๊ณ  ์„ฑ๋Šฅ ์ธก์ •์„ ์œ„ํ•ด ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์…‹์„ 20% ๋ถ„๋ฆฌํ•ฉ๋‹ˆ๋‹ค

images, labels = shuffle(images, labels, random_state=25)
images = images / 255.0   # scale pixel values to [0, 1]
train_images, test_images, train_labels, test_labels = train_test_split(images, labels, test_size=0.2)

We now have 1,358 training images and 340 test images, for 1,698 images in total.

n_train = train_labels.shape[0]
n_test = test_labels.shape[0]

print ("Number of training examples: {}".format(n_train))
print ("Number of training examples: {}".format(n_test))
Number of training examples: 1358
Number of training examples: 340

Let's build a simple CNN by hand and check its performance. We additionally carve out 20% of the training data as a validation set to monitor training.

model = Sequential([
    Conv2D(32, (3, 3), activation = 'relu', input_shape = (image_height, image_width, 3)), 
    MaxPool2D(2,2),
    Conv2D(64, (3, 3), activation = 'relu'),
    MaxPool2D(2,2),
    Conv2D(128, (3, 3), activation = 'relu'),
    MaxPool2D(2,2),
    Flatten(),
    Dense(256, activation=tf.nn.relu),
    Dense(num_classes, activation=tf.nn.softmax)
])
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_images, train_labels, batch_size=64, epochs=20, validation_split = 0.2)
Epoch 1/20
17/17 [==============================] - 38s 240ms/step - loss: 3.6401 - accuracy: 0.1889 - val_loss: 1.9124 - val_accuracy: 0.2243
Epoch 2/20
17/17 [==============================] - 2s 110ms/step - loss: 1.8678 - accuracy: 0.2314 - val_loss: 1.8660 - val_accuracy: 0.2279
Epoch 3/20
17/17 [==============================] - 2s 110ms/step - loss: 1.7636 - accuracy: 0.2779 - val_loss: 1.7660 - val_accuracy: 0.3125
Epoch 4/20
17/17 [==============================] - 2s 111ms/step - loss: 1.6425 - accuracy: 0.3904 - val_loss: 1.6835 - val_accuracy: 0.3566
Epoch 5/20
17/17 [==============================] - 2s 110ms/step - loss: 1.4164 - accuracy: 0.4867 - val_loss: 1.7457 - val_accuracy: 0.3382
Epoch 6/20
17/17 [==============================] - 2s 110ms/step - loss: 1.0570 - accuracy: 0.6507 - val_loss: 1.6682 - val_accuracy: 0.3676
Epoch 7/20
17/17 [==============================] - 2s 109ms/step - loss: 0.7076 - accuracy: 0.8003 - val_loss: 1.8565 - val_accuracy: 0.3787
Epoch 8/20
17/17 [==============================] - 2s 109ms/step - loss: 0.4653 - accuracy: 0.8563 - val_loss: 2.2181 - val_accuracy: 0.3824
Epoch 9/20
17/17 [==============================] - 2s 111ms/step - loss: 0.2103 - accuracy: 0.9435 - val_loss: 2.4412 - val_accuracy: 0.4007
Epoch 10/20
17/17 [==============================] - 2s 112ms/step - loss: 0.0872 - accuracy: 0.9852 - val_loss: 3.1015 - val_accuracy: 0.3787
Epoch 11/20
17/17 [==============================] - 2s 110ms/step - loss: 0.0549 - accuracy: 0.9883 - val_loss: 3.1400 - val_accuracy: 0.3676
Epoch 12/20
17/17 [==============================] - 2s 112ms/step - loss: 0.0546 - accuracy: 0.9836 - val_loss: 3.5330 - val_accuracy: 0.3860
Epoch 13/20
17/17 [==============================] - 2s 111ms/step - loss: 0.0174 - accuracy: 0.9989 - val_loss: 3.5670 - val_accuracy: 0.4118
Epoch 14/20
17/17 [==============================] - 2s 110ms/step - loss: 0.0107 - accuracy: 1.0000 - val_loss: 3.6782 - val_accuracy: 0.4301
Epoch 15/20
17/17 [==============================] - 2s 110ms/step - loss: 0.0028 - accuracy: 1.0000 - val_loss: 3.9017 - val_accuracy: 0.4375
Epoch 16/20
17/17 [==============================] - 2s 111ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 4.0851 - val_accuracy: 0.4375
Epoch 17/20
17/17 [==============================] - 2s 111ms/step - loss: 9.3226e-04 - accuracy: 1.0000 - val_loss: 4.1403 - val_accuracy: 0.4412
Epoch 18/20
17/17 [==============================] - 2s 111ms/step - loss: 7.6645e-04 - accuracy: 1.0000 - val_loss: 4.1752 - val_accuracy: 0.4412
Epoch 19/20
17/17 [==============================] - 2s 111ms/step - loss: 5.6912e-04 - accuracy: 1.0000 - val_loss: 4.2507 - val_accuracy: 0.4412
Epoch 20/20
17/17 [==============================] - 2s 111ms/step - loss: 5.1643e-04 - accuracy: 1.0000 - val_loss: 4.3206 - val_accuracy: 0.4449

ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์˜ ์„ฑ๋Šฅ์€ ๋งค์šฐ ์ข‹์ง€๋งŒ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์˜ ์„ฑ๋Šฅ์€ ๋งŽ์ด ๋†’์ง€ ์•Š์€ ๊ฒฐ๊ณผ๊ฐ€ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

def plot_accuracy_loss(history):
    fig = plt.figure(figsize=(10,5))

    plt.subplot(221)
    plt.plot(history.history['accuracy'],'bo--', label = "accuracy")
    plt.plot(history.history['val_accuracy'], 'ro--', label = "val_accuracy")
    plt.title("train_acc vs val_acc")
    plt.ylabel("accuracy")
    plt.xlabel("epochs")
    plt.legend()

    plt.subplot(222)
    plt.plot(history.history['loss'],'bo--', label = "loss")
    plt.plot(history.history['val_loss'], 'ro--', label = "val_loss")
    plt.title("train_loss vs val_loss")
    plt.ylabel("loss")
    plt.xlabel("epochs")

    plt.legend()
    plt.show()
plot_accuracy_loss(history)

Looking at the graphs, validation performance stops improving after roughly 5 epochs.
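
Since the best validation score appears this early, an early-stopping callback would be a natural fix; here is a minimal sketch assuming the same model and data as above (this was not part of the original run):

from tensorflow.keras.callbacks import EarlyStopping

# stop once val_loss has not improved for 3 epochs, and keep the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
history = model.fit(train_images, train_labels, batch_size=64, epochs=20,
                    validation_split=0.2, callbacks=[early_stop])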

test_loss = model.evaluate(test_images, test_labels)
11/11 [==============================] - 1s 45ms/step - loss: 5.0795 - accuracy: 0.4118

ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์˜ ์„ฑ๋Šฅ์€ ๋Œ€๋žต 0.4 ๋‚ด์™ธ์ž…๋‹ˆ๋‹ค. (์—ฌ๋Ÿฌ ์‹œํ–‰ ๊ฒฐ๊ณผ 0.35 ~ 0.45 ์‚ฌ์ด)

Image Augmentation

๋ฐ์ดํ„ฐ์…‹์˜ ๊ฐฏ์ˆ˜๊ฐ€ ๋งŽ์•„์งˆ์ˆ˜๋ก ๋ชจ๋ธ์ด ์˜ค๋ฒ„ํ”ผํŒ…๋  ๊ฐ€๋Šฅ์„ฑ์ด ์ค„์–ด๋“ค๋ฉฐ, ์ด๋Š” ์„ฑ๋Šฅ์˜ ๊ฐœ์„ ์š”์†Œ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์…‹์˜ ๊ฐฏ์ˆ˜๋ฅผ ์ฆ๊ฐ€์‹œํ‚ค๊ธฐ ์œ„ํ•ด ImageDataGenerator ๋ฅผ ์ด์šฉํ•ฉ๋‹ˆ๋‹ค.

ํ˜„์žฌ ๋ฐ์ดํ„ฐ์…‹์€ ์ด 1698๊ฐœ์ž…๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์กฐ์ž‘์„ ํ†ตํ•ด ๋ฐ์ดํ„ฐ์…‹์˜ ๊ฐฏ์ˆ˜๋ฅผ ๋Š˜๋ ค์ค๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ 1๊ฐœ๋‹น ์กฐ์ž‘์„ ํ†ตํ•ด 3๊ฐœ์˜ ์ด๋ฏธ์ง€๊ฐ€ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด 1698 4 = 6792๊ฐœ์˜ ๋ฐ์ดํ„ฐ์…‹์ด ๋งˆ๋ จ๋ฉ๋‹ˆ๋‹ค. 1698 2 = 3396 1698 3 = 5094 1698 4 = 6792 1698 * 5 = 8490

  • ๋” ๋งŽ์€ ์ด๋ฏธ์ง€๋Š” colab์—์„œ ๋ฆฌ์†Œ์Šค ๋ถ€์กฑ์œผ๋กœ ์ž‘๋™ํ•˜์ง€ ์•Š์•„์„œ ์ถ”๊ฐ€๋กœ 2๊ฐœ๊นŒ์ง€๋งŒ ์ƒ์„ฑํ–ˆ์Šต๋‹ˆ๋‹ค.(์ด 3๊ฐœ)

from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img

imageGenerator = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    brightness_range=[0.8, 1.2],   # vary brightness by roughly ±20%
    horizontal_flip=True,
)

for dirname, _, filenames in os.walk(train_path):
  category = dirname.replace('./train/','')
  if category == './train':
    continue
  for filename in filenames:
    img = load_img(os.path.join(dirname, filename))
    x = img_to_array(img)
    x = x.reshape((1, ) + x.shape)   # flow() expects a batch dimension
    i = 0
    # write 2 augmented copies of each image back into its class folder
    for batch in imageGenerator.flow(x, batch_size = 1,
                                     save_to_dir = os.path.join(train_path, category),
                                     save_format ='jpg'):
        i += 1
        if i == 2:
            break

The image manipulation settings are as follows.

  • rotation_range = 20

    • Rotates by a random angle of up to 20 degrees.

  • width_shift_range = 0.1

  • height_shift_range = 0.1

    • Shifts the image horizontally or vertically by up to 10% of its size.

  • brightness_range = [0.8, 1.2]

    • Varies the brightness between -20% and +20%.

  • horizontal_flip = True

    • Flips the image horizontally at random.

train_path = './train'
images = []
labels = []

# reload the train folder, which now also contains the augmented images
for dirname, _, filenames in os.walk(train_path):
  category = dirname.split('/')[-1]
  if category == 'train':
    continue
  label = class_names_label[category]
  for filename in filenames:
    path = os.path.join(dirname, filename)
    image = cv2.imread(path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    images.append(image)
    labels.append(label)

images = np.array(images, dtype = 'float32')
labels = np.array(labels, dtype = 'int32')   


images, labels = shuffle(images, labels, random_state=25)
images = images / 255.0
train_images, test_images, train_labels, test_labels = train_test_split(images, labels, test_size=0.2)
n_train = train_labels.shape[0]
n_test = test_labels.shape[0]

print ("Number of training examples: {}".format(n_train))
print ("Number of training examples: {}".format(n_test))
Number of training examples: 3989
Number of training examples: 998
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_images, train_labels, batch_size=64, epochs=20, validation_split = 0.2)
Epoch 1/20
50/50 [==============================] - 8s 153ms/step - loss: 2.2086 - accuracy: 0.4334 - val_loss: 1.4567 - val_accuracy: 0.4900
Epoch 2/20
50/50 [==============================] - 5s 109ms/step - loss: 1.1767 - accuracy: 0.5928 - val_loss: 1.3364 - val_accuracy: 0.5426
Epoch 3/20
50/50 [==============================] - 5s 110ms/step - loss: 0.6962 - accuracy: 0.7762 - val_loss: 1.4134 - val_accuracy: 0.5564
Epoch 4/20
50/50 [==============================] - 6s 111ms/step - loss: 0.3223 - accuracy: 0.9011 - val_loss: 1.9013 - val_accuracy: 0.5501
Epoch 5/20
50/50 [==============================] - 6s 111ms/step - loss: 0.1151 - accuracy: 0.9695 - val_loss: 2.3870 - val_accuracy: 0.5313
Epoch 6/20
50/50 [==============================] - 6s 110ms/step - loss: 0.0513 - accuracy: 0.9889 - val_loss: 2.5908 - val_accuracy: 0.5602
Epoch 7/20
50/50 [==============================] - 6s 112ms/step - loss: 0.0282 - accuracy: 0.9938 - val_loss: 2.8788 - val_accuracy: 0.5238
Epoch 8/20
50/50 [==============================] - 6s 111ms/step - loss: 0.0187 - accuracy: 0.9968 - val_loss: 3.5123 - val_accuracy: 0.5125
Epoch 9/20
50/50 [==============================] - 6s 111ms/step - loss: 0.0382 - accuracy: 0.9899 - val_loss: 3.8634 - val_accuracy: 0.4561
Epoch 10/20
50/50 [==============================] - 6s 112ms/step - loss: 0.1146 - accuracy: 0.9645 - val_loss: 2.8725 - val_accuracy: 0.5088
Epoch 11/20
50/50 [==============================] - 6s 111ms/step - loss: 0.0543 - accuracy: 0.9849 - val_loss: 2.9627 - val_accuracy: 0.5276
Epoch 12/20
50/50 [==============================] - 6s 111ms/step - loss: 0.0163 - accuracy: 0.9961 - val_loss: 3.2457 - val_accuracy: 0.4987
Epoch 13/20
50/50 [==============================] - 6s 112ms/step - loss: 0.0150 - accuracy: 0.9974 - val_loss: 3.4321 - val_accuracy: 0.5238
Epoch 14/20
50/50 [==============================] - 6s 112ms/step - loss: 0.0058 - accuracy: 0.9990 - val_loss: 3.7935 - val_accuracy: 0.5288
Epoch 15/20
50/50 [==============================] - 6s 111ms/step - loss: 6.9671e-04 - accuracy: 1.0000 - val_loss: 3.8625 - val_accuracy: 0.5401
Epoch 16/20
50/50 [==============================] - 6s 112ms/step - loss: 2.6550e-04 - accuracy: 1.0000 - val_loss: 3.9490 - val_accuracy: 0.5414
Epoch 17/20
50/50 [==============================] - 6s 112ms/step - loss: 1.7575e-04 - accuracy: 1.0000 - val_loss: 3.9936 - val_accuracy: 0.5439
Epoch 18/20
50/50 [==============================] - 6s 112ms/step - loss: 1.4344e-04 - accuracy: 1.0000 - val_loss: 4.0374 - val_accuracy: 0.5439
Epoch 19/20
50/50 [==============================] - 6s 112ms/step - loss: 1.1886e-04 - accuracy: 1.0000 - val_loss: 4.0765 - val_accuracy: 0.5439
Epoch 20/20
50/50 [==============================] - 6s 112ms/step - loss: 1.0780e-04 - accuracy: 1.0000 - val_loss: 4.1103 - val_accuracy: 0.5439
plot_accuracy_loss(history)
test_loss = model.evaluate(test_images, test_labels)
32/32 [==============================] - 1s 25ms/step - loss: 3.8832 - accuracy: 0.5341

ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์˜ ์„ฑ๋Šฅ์ด ๋‚˜์•„์ง„ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

  • Simple 0.41

  • Data Augmentation 0.53

VGG16 ๋ชจ๋ธ

ํ˜„์žฌ ๊ตฌํ˜„๋œ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๊ธฐ๋Š” ๊ทธ ์ธต์˜ ๊นŠ์ด๊ฐ€ ์–•์Šต๋‹ˆ๋‹ค. ๋” ์ข‹์€ ๋ถ„๋ฅ˜๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด tensorflow์—์„œ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ์ธ VGG๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ VGG๋ชจ๋ธ์„ ๊ฑฐ์ณ ์ƒ์„ฑ๋œ ํŠน์ง•์œผ๋กœ ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์„ ๋ชจ๋‘ ์—ฐ๊ฒฐํ•ด์ฃผ๋Š” Dense Layer๋ฅผ ๊ฑฐ์น˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค.

from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input

# pretrained VGG16 without its classification head, used as a feature extractor
model = VGG16(weights='imagenet', include_top=False)
train_features = model.predict(train_images)
test_features = model.predict(test_images)
n_train, x, y, z = train_features.shape
n_test, x, y, z = test_features.shape
numFeatures = x * y * z
# small fully connected head trained on top of the extracted features
model2 = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape = (x, y, z)),
    tf.keras.layers.Dense(50, activation=tf.nn.relu),
    tf.keras.layers.Dense(num_classes, activation=tf.nn.softmax)
])

model2.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics=['accuracy'])

history2 = model2.fit(train_features, train_labels, batch_size=64, epochs=15, validation_split = 0.2)
Epoch 1/15
50/50 [==============================] - 1s 16ms/step - loss: 1.8851 - accuracy: 0.4079 - val_loss: 0.6769 - val_accuracy: 0.7581
Epoch 2/15
 1/50 [..............................] - ETA: 0s - loss: 0.5036 - accuracy: 0.8438
(KeyboardInterrupt traceback omitted: the fit call was interrupted by hand at this point)
plot_accuracy_loss(history2)
test_loss = model2.evaluate(test_features, test_labels)
32/32 [==============================] - 0s 3ms/step - loss: 0.3025 - accuracy: 0.8938

์„ฑ๋Šฅ์ด ๋น„์•ฝ์ ์œผ๋กœ ์ƒ์Šนํ•œ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

  • Simple 0.41

  • Data Augmentation 0.53

  • VGG16 + Data Augmentation 0.89

Ensemble

์•™์ƒ๋ธ” ๋Ÿฌ๋‹์€ ์—ฌ๋Ÿฌ๊ฐœ์˜ ๋ถ„๋ฅ˜๊ธฐ๋ฅผ ํ†ตํ•ด ํ•™์Šตํ•˜๊ณ  ํ•™์Šตํ•œ ๊ฒฐ๊ณผ๋ฅผ ์ข…ํ•ฉ(ํ‰๊ท  ๋˜๋Š” ์ตœ๋นˆ๊ฐ’)ํ•˜์—ฌ ์ตœ์ข… ๊ฒฐ๊ณผ๊ฐ’์„ ๊ฒฐ์ •ํ•˜๋Š” ๊ฒƒ์„ ๋งํ•ฉ๋‹ˆ๋‹ค.

์ด๋•Œ, ์•™์ƒ๋ธ” ๋Ÿฌ๋‹์„ ์šด์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ์„ธ ๊ฐ€์ง€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค.

  • Get different results from different classifiers on the same dataset

  • Get different results from the same classifier on different data

  • Or combine both approaches.

Here we take 10 different classifiers and get different results on the same dataset. "Different classifiers" means each Dense head receives its input differently: every head gets a randomly sized hidden layer and is trained on a random subsample of the VGG16 features, so the 10 heads effectively see 10 different views of the same feature set (a bagging-style setup rather than literal Dropout).
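
For reference, the two ways of combining predictions mentioned above could look roughly like this (a hypothetical sketch; soft_vote and hard_vote are illustrative names, and the actual code below combines by summing probabilities, which picks the same class as the soft vote here):

import numpy as np

# probs: predicted probabilities of shape (n_estimators, n_samples, n_classes)
def soft_vote(probs):
    # average the class probabilities across models, then take the argmax
    return probs.mean(axis=0).argmax(axis=1)

def hard_vote(probs):
    # each model casts one vote (its argmax); the most frequent class wins
    votes = probs.argmax(axis=2)
    n_classes = probs.shape[2]
    return np.array([np.bincount(v, minlength=n_classes).argmax() for v in votes.T])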

n_estimators = 10
max_samples = 0.8

max_samples *= n_train
max_samples = int(max_samples)   # each classifier sees 80% of the training data
models = list()
# a random hidden-layer size between 50 and 100 for each classifier
random = np.random.randint(50, 100, size = n_estimators)

for i in range(n_estimators):
    
    model3 = tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape = (x, y, z)),
                                    tf.keras.layers.Dense(random[i], activation=tf.nn.relu),
                                    tf.keras.layers.Dense(num_classes, activation=tf.nn.softmax)
                                ])
    
    model3.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics=['accuracy'])
    
    models.append(model3)
histories = []

for i in range(n_estimators):
    # draw a random (with-replacement) training subset for each classifier
    train_idx = np.random.choice(len(train_features), size = max_samples)
    histories.append(models[i].fit(train_features[train_idx], train_labels[train_idx], batch_size=64, epochs=10, validation_split = 0.1))
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 1.9823 - accuracy: 0.3059 - val_loss: 1.0176 - val_accuracy: 0.7063
Epoch 2/10
45/45 [==============================] - 0s 6ms/step - loss: 0.9019 - accuracy: 0.7247 - val_loss: 0.5121 - val_accuracy: 0.8875
Epoch 3/10
45/45 [==============================] - 0s 7ms/step - loss: 0.3701 - accuracy: 0.9178 - val_loss: 0.3479 - val_accuracy: 0.9094
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.2173 - accuracy: 0.9613 - val_loss: 0.2734 - val_accuracy: 0.9281
Epoch 5/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1204 - accuracy: 0.9880 - val_loss: 0.2201 - val_accuracy: 0.9375
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0818 - accuracy: 0.9940 - val_loss: 0.1895 - val_accuracy: 0.9406
Epoch 7/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0501 - accuracy: 0.9982 - val_loss: 0.1696 - val_accuracy: 0.9406
Epoch 8/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0344 - accuracy: 0.9996 - val_loss: 0.1632 - val_accuracy: 0.9312
Epoch 9/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0235 - accuracy: 0.9998 - val_loss: 0.1513 - val_accuracy: 0.9563
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0191 - accuracy: 0.9993 - val_loss: 0.1467 - val_accuracy: 0.9625
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.1596 - accuracy: 0.3024 - val_loss: 1.0193 - val_accuracy: 0.6375
Epoch 2/10
45/45 [==============================] - 0s 6ms/step - loss: 0.8358 - accuracy: 0.7512 - val_loss: 0.6540 - val_accuracy: 0.8031
Epoch 3/10
45/45 [==============================] - 0s 6ms/step - loss: 0.5051 - accuracy: 0.8747 - val_loss: 0.4970 - val_accuracy: 0.8375
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.3183 - accuracy: 0.9158 - val_loss: 0.3074 - val_accuracy: 0.9125
Epoch 5/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1145 - accuracy: 0.9833 - val_loss: 0.2050 - val_accuracy: 0.9500
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0617 - accuracy: 0.9970 - val_loss: 0.1897 - val_accuracy: 0.9625
Epoch 7/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0357 - accuracy: 1.0000 - val_loss: 0.1739 - val_accuracy: 0.9656
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0302 - accuracy: 1.0000 - val_loss: 0.1671 - val_accuracy: 0.9563
Epoch 9/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0192 - accuracy: 1.0000 - val_loss: 0.1580 - val_accuracy: 0.9563
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0151 - accuracy: 1.0000 - val_loss: 0.1578 - val_accuracy: 0.9625
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.2014 - accuracy: 0.3511 - val_loss: 0.8670 - val_accuracy: 0.6906
Epoch 2/10
45/45 [==============================] - 0s 7ms/step - loss: 0.6973 - accuracy: 0.7739 - val_loss: 0.6392 - val_accuracy: 0.8125
Epoch 3/10
45/45 [==============================] - 0s 6ms/step - loss: 0.4221 - accuracy: 0.9097 - val_loss: 0.4906 - val_accuracy: 0.8656
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.2574 - accuracy: 0.9617 - val_loss: 0.4151 - val_accuracy: 0.8906
Epoch 5/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1797 - accuracy: 0.9754 - val_loss: 0.3713 - val_accuracy: 0.8813
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1147 - accuracy: 0.9935 - val_loss: 0.3483 - val_accuracy: 0.9000
Epoch 7/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0843 - accuracy: 0.9957 - val_loss: 0.3210 - val_accuracy: 0.9031
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0618 - accuracy: 0.9977 - val_loss: 0.3275 - val_accuracy: 0.9125
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0476 - accuracy: 0.9982 - val_loss: 0.3028 - val_accuracy: 0.9125
Epoch 10/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0344 - accuracy: 0.9998 - val_loss: 0.2974 - val_accuracy: 0.9094
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.9052 - accuracy: 0.3392 - val_loss: 0.7875 - val_accuracy: 0.7437
Epoch 2/10
45/45 [==============================] - 0s 7ms/step - loss: 0.5499 - accuracy: 0.8438 - val_loss: 0.5012 - val_accuracy: 0.8406
Epoch 3/10
45/45 [==============================] - 0s 7ms/step - loss: 0.2768 - accuracy: 0.9504 - val_loss: 0.3944 - val_accuracy: 0.8938
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1770 - accuracy: 0.9739 - val_loss: 0.3226 - val_accuracy: 0.9219
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1055 - accuracy: 0.9945 - val_loss: 0.2937 - val_accuracy: 0.9281
Epoch 6/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0730 - accuracy: 0.9987 - val_loss: 0.2598 - val_accuracy: 0.9312
Epoch 7/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0519 - accuracy: 0.9994 - val_loss: 0.2530 - val_accuracy: 0.9281
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0383 - accuracy: 1.0000 - val_loss: 0.2581 - val_accuracy: 0.9250
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0281 - accuracy: 1.0000 - val_loss: 0.2265 - val_accuracy: 0.9406
Epoch 10/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0230 - accuracy: 1.0000 - val_loss: 0.2292 - val_accuracy: 0.9281
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.1120 - accuracy: 0.4169 - val_loss: 0.5158 - val_accuracy: 0.8438
Epoch 2/10
45/45 [==============================] - 0s 6ms/step - loss: 0.3176 - accuracy: 0.9168 - val_loss: 0.2947 - val_accuracy: 0.9094
Epoch 3/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1382 - accuracy: 0.9806 - val_loss: 0.2272 - val_accuracy: 0.9281
Epoch 4/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0661 - accuracy: 0.9959 - val_loss: 0.2120 - val_accuracy: 0.9375
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0399 - accuracy: 0.9994 - val_loss: 0.1898 - val_accuracy: 0.9375
Epoch 6/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0238 - accuracy: 1.0000 - val_loss: 0.1754 - val_accuracy: 0.9469
Epoch 7/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0172 - accuracy: 1.0000 - val_loss: 0.1769 - val_accuracy: 0.9344
Epoch 8/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0131 - accuracy: 1.0000 - val_loss: 0.1698 - val_accuracy: 0.9438
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0116 - accuracy: 1.0000 - val_loss: 0.1685 - val_accuracy: 0.9438
Epoch 10/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0087 - accuracy: 1.0000 - val_loss: 0.1692 - val_accuracy: 0.9438
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.4135 - accuracy: 0.3638 - val_loss: 0.8577 - val_accuracy: 0.6750
Epoch 2/10
45/45 [==============================] - 0s 6ms/step - loss: 0.5958 - accuracy: 0.8420 - val_loss: 0.5768 - val_accuracy: 0.7969
Epoch 3/10
45/45 [==============================] - 0s 6ms/step - loss: 0.3599 - accuracy: 0.9117 - val_loss: 0.4977 - val_accuracy: 0.8406
Epoch 4/10
45/45 [==============================] - 0s 6ms/step - loss: 0.2330 - accuracy: 0.9629 - val_loss: 0.3527 - val_accuracy: 0.8844
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1348 - accuracy: 0.9895 - val_loss: 0.3043 - val_accuracy: 0.8969
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1012 - accuracy: 0.9908 - val_loss: 0.2929 - val_accuracy: 0.9062
Epoch 7/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0683 - accuracy: 0.9976 - val_loss: 0.2615 - val_accuracy: 0.9156
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0497 - accuracy: 0.9995 - val_loss: 0.2250 - val_accuracy: 0.9250
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0372 - accuracy: 1.0000 - val_loss: 0.2279 - val_accuracy: 0.9312
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0284 - accuracy: 1.0000 - val_loss: 0.2228 - val_accuracy: 0.9344
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 1.8665 - accuracy: 0.4245 - val_loss: 0.6087 - val_accuracy: 0.7781
Epoch 2/10
45/45 [==============================] - 0s 7ms/step - loss: 0.3551 - accuracy: 0.8953 - val_loss: 0.3399 - val_accuracy: 0.9156
Epoch 3/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1476 - accuracy: 0.9772 - val_loss: 0.2929 - val_accuracy: 0.9156
Epoch 4/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0767 - accuracy: 0.9972 - val_loss: 0.2542 - val_accuracy: 0.9344
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0447 - accuracy: 0.9987 - val_loss: 0.2191 - val_accuracy: 0.9438
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0286 - accuracy: 1.0000 - val_loss: 0.2117 - val_accuracy: 0.9563
Epoch 7/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0220 - accuracy: 1.0000 - val_loss: 0.2031 - val_accuracy: 0.9500
Epoch 8/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0162 - accuracy: 1.0000 - val_loss: 0.2149 - val_accuracy: 0.9344
Epoch 9/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0127 - accuracy: 1.0000 - val_loss: 0.2075 - val_accuracy: 0.9469
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0110 - accuracy: 1.0000 - val_loss: 0.2015 - val_accuracy: 0.9500
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.4010 - accuracy: 0.3874 - val_loss: 0.9141 - val_accuracy: 0.7000
Epoch 2/10
45/45 [==============================] - 0s 7ms/step - loss: 0.7456 - accuracy: 0.7663 - val_loss: 0.5889 - val_accuracy: 0.8062
Epoch 3/10
45/45 [==============================] - 0s 7ms/step - loss: 0.3749 - accuracy: 0.9113 - val_loss: 0.3908 - val_accuracy: 0.8781
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1923 - accuracy: 0.9694 - val_loss: 0.2747 - val_accuracy: 0.8969
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1132 - accuracy: 0.9871 - val_loss: 0.2336 - val_accuracy: 0.9281
Epoch 6/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0662 - accuracy: 0.9937 - val_loss: 0.2369 - val_accuracy: 0.9062
Epoch 7/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0419 - accuracy: 0.9990 - val_loss: 0.2026 - val_accuracy: 0.9406
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0250 - accuracy: 0.9993 - val_loss: 0.2055 - val_accuracy: 0.9406
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0180 - accuracy: 1.0000 - val_loss: 0.1853 - val_accuracy: 0.9344
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0120 - accuracy: 1.0000 - val_loss: 0.1871 - val_accuracy: 0.9312
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.0665 - accuracy: 0.4211 - val_loss: 0.5312 - val_accuracy: 0.8156
Epoch 2/10
45/45 [==============================] - 0s 7ms/step - loss: 0.3373 - accuracy: 0.9030 - val_loss: 0.3433 - val_accuracy: 0.9031
Epoch 3/10
45/45 [==============================] - 0s 6ms/step - loss: 0.1310 - accuracy: 0.9791 - val_loss: 0.2772 - val_accuracy: 0.9125
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0733 - accuracy: 0.9955 - val_loss: 0.2638 - val_accuracy: 0.9187
Epoch 5/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0448 - accuracy: 1.0000 - val_loss: 0.2496 - val_accuracy: 0.9281
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0293 - accuracy: 0.9997 - val_loss: 0.2377 - val_accuracy: 0.9312
Epoch 7/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0232 - accuracy: 1.0000 - val_loss: 0.2428 - val_accuracy: 0.9312
Epoch 8/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0165 - accuracy: 1.0000 - val_loss: 0.2363 - val_accuracy: 0.9312
Epoch 9/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0123 - accuracy: 1.0000 - val_loss: 0.2325 - val_accuracy: 0.9312
Epoch 10/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0106 - accuracy: 1.0000 - val_loss: 0.2233 - val_accuracy: 0.9375
Epoch 1/10
45/45 [==============================] - 1s 9ms/step - loss: 2.2188 - accuracy: 0.3826 - val_loss: 0.7416 - val_accuracy: 0.7563
Epoch 2/10
45/45 [==============================] - 0s 6ms/step - loss: 0.5199 - accuracy: 0.8513 - val_loss: 0.4587 - val_accuracy: 0.8656
Epoch 3/10
45/45 [==============================] - 0s 6ms/step - loss: 0.2311 - accuracy: 0.9584 - val_loss: 0.3136 - val_accuracy: 0.9375
Epoch 4/10
45/45 [==============================] - 0s 7ms/step - loss: 0.1234 - accuracy: 0.9838 - val_loss: 0.2402 - val_accuracy: 0.9469
Epoch 5/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0727 - accuracy: 0.9968 - val_loss: 0.2090 - val_accuracy: 0.9500
Epoch 6/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0448 - accuracy: 1.0000 - val_loss: 0.1904 - val_accuracy: 0.9531
Epoch 7/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0323 - accuracy: 1.0000 - val_loss: 0.1819 - val_accuracy: 0.9531
Epoch 8/10
45/45 [==============================] - 0s 7ms/step - loss: 0.0230 - accuracy: 1.0000 - val_loss: 0.1719 - val_accuracy: 0.9594
Epoch 9/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0167 - accuracy: 1.0000 - val_loss: 0.1676 - val_accuracy: 0.9469
Epoch 10/10
45/45 [==============================] - 0s 6ms/step - loss: 0.0133 - accuracy: 1.0000 - val_loss: 0.1618 - val_accuracy: 0.9469
predictions = []
for i in range(n_estimators):
    predictions.append(models[i].predict(test_features))
    
predictions = np.array(predictions)
predictions = predictions.sum(axis = 0)   # soft vote: sum probabilities across the 10 models
pred_labels = predictions.argmax(axis=1)
print("Accuracy : {}".format(accuracy_score(test_labels, pred_labels)))
Accuracy : 0.8937875751503006

์„ฑ๋Šฅ์ด ์กฐ๊ธˆ ๋” ์ƒ์Šนํ•œ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

  • Simple 0.41

  • Data Augmentation 0.53

  • VGG16 + Data Augmentation 0.89

  • VGG16 + Data Augmentation + Ensemble 0.90 (0.89~0.91)

ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ ์ ์šฉํ•˜๊ธฐ

test_path = './test/0'
test_images = []

for dirname, _, filenames in os.walk(test_path):
  category = dirname.split('/')[-1]
  for filename in filenames:
    path = os.path.join(dirname, filename)
    image = cv2.imread(path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    test_images.append(image)

test_images = np.array(test_images, dtype = 'float32')
test_images = test_images / 255.0   # match the training preprocessing
test_features = model.predict(test_images)
predictions = []
for i in range(n_estimators):
    predictions.append(models[i].predict(test_features))
    
predictions = np.array(predictions)
predictions = predictions.sum(axis = 0)
pred = predictions.argmax(axis=1)

๋ชจ๋ธ์˜ ๊ฒฐ๊ณผ๊ฐ’์„ ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„์œผ๋กœ ๋งŒ๋“  ํ›„ csv ํŒŒ์ผ๋กœ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค.

def write_preds(pred, fname):
    pd.DataFrame({"answer value": pred}).to_csv(fname, index=False, header=True)

write_preds(pred, "result.csv")

๊ฒฐ๊ณผํŒŒ์ผ๋กœ ์–ป์€ csv ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œ ํ•ฉ๋‹ˆ๋‹ค.

from google.colab import files
files.download('result.csv')

Epilogue

์˜ค ์ž˜ ํ’€์€๊ฑฐ ๊ฐ™์€๋ฐ?? ์„ฑ๋Šฅ๋„ ์ข‹๊ณ !! ๊ฒฐ๊ณผ๋Š”?? ์ด๋ผ๊ณ  ํ•œ๋‹ค๋ฉด ์‹œ๊ฐ„์ด ๋ถ€์กฑํ–ˆ๋‹ค. ๋‚ด๊ฐ€ ์ž์ดˆํ•œ ์ผ์ด๊ธฐ์— ์‚ด์ง ์•„์‰ฝ๊ธด ํ•˜๋‹ค. ์ž ์„ ์•ˆ์žค์œผ๋ฉด ์ข‹์€ ์„ฑ์ ์„ ๋‚ด์ง€ ์•Š์•˜์„๊นŒ ์ƒ๊ฐ. ์•„์‰ฌ์šด ๋งˆ์Œ์— .ipynb ํŒŒ์ผ์ด๋ผ๋„ ์ œ์ถœํ–ˆ๋‹ค. ๋ฆฌ๋”๋ณด๋“œ ์„ฑ์ ์€ 0์ ๋„ ์•„๋‹ˆ๋‹ค. ๊ทธ๋ƒฅ ๋ฏธ์ œ์ถœ.

๊ฑฐ์˜ ๋‹ค ํ–ˆ๋Š”๋ฐ ๋งˆ์ง€๋ง‰์— ๋ณ€์ˆ˜๋ช…์„ ํ—ท๊ฐˆ๋ฆฌ๊ฒŒ ์จ์„œ ํ•œ์ฐธ ๋™์•ˆ ๋ง‰ํžŒ ์—๋Ÿฌ๋ฅผ ํ•ด๊ฒฐํ•˜์ง€ ๋ชปํ–ˆ๋‹ค. ๋๋‚˜๊ณ  30๋ถ„์ด ์ง€๋‚œ ํ›„์—์•ผ ์—๋Ÿฌ๋ฅผ ํ•ด๊ฒฐํ–ˆ๋‹ค.

It's a big disappointment, but for my first attempt at this kind of assignment I think I did reasonably well. It was great to put the ensembling and Image Augmentation I learned recently to use, and I tried the VGG16 model for the first time. The performance turned out quite respectable, too.

๋‹ค์Œ์—๋Š” ํ•œ๋ฒˆ์— ๊ธด ์ฝ”๋“œ๋ฅผ ์“ธ ์ˆ˜ ์žˆ์„ ์ •๋„๋กœ(๊ทธ๋งŒํผ ๊ฒ€์ƒ‰์„ ์ ๊ฒŒ ํ•  ์ •๋„๋กœ) ์ต์ˆ™ํ•ด์ง€๊ณ  ๋ฐ•ํ•™ํ•ด์ง€๊ณ  ์‹ถ๋‹ค! ๋‹ค์Œ Dev-matching ์ „์— ์ทจ์—…์„ ํ• ๊ฒƒ์ด๊ธฐ ๋•Œ๋ฌธ์— ๋‹ค์Œ ์‹œํ—˜์„ ๋ชป๋ณด๋Š” ๊ฒƒ์ด ์•„์‰ฝ์ง€๋งŒ!

๋†€๋ž€์ (?)์ด ๋ช‡ ๊ฐœ ์žˆ์–ด์„œ ๋„์ ์ด๊ณ  ๊ฐ„๋‹ค

  • ์ƒ๊ฐ๋ณด๋‹ค ๋ฆฌ๋”๋ณด๋“œ 98์  ์ด์ƒ์ด ์ˆ˜๋‘๋ฃฉ ํ–ˆ๋‹ค. 100์ ๋„ ๋ง‰ํŒ๊ฐ€์„œ๋Š” 10๋ช… ์ด์ƒ์€ ์žˆ๋˜ ๊ฒƒ ๊ฐ™๋‹ค.

    • ์ด๋ฏธ 4์‹œ๊ฐ„์ด ์ง€๋‚œ ์‹œ์ ์— 100์ ์ด 3๋ช…์ด์—ˆ๋‹ค.

    • ๋‚˜๋Š” ๋ช‡์ ์ด์—ˆ์„๊นŒ ใ… ใ… 

    • ๊ทธ๋ž˜๋„ ๊ณ ๋“์  ์ˆœ์œ„์—๋Š” ๋ชป๋“ค์—ˆ์„ ๊ฒƒ ๊ฐ™๋‹ค. ์ด๋ฏธ ํ…Œ์ŠคํŠธ๋ž‘ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ฑ๋Šฅ์ด 90์ ๋Œ€์ด๋‹ˆ๊นŒ?

  • private test data์— ๋Œ€ํ•œ 100์ ์€ ํ•œ๋ช…์ด์—ˆ๋‹ค. ๊ทผ๋ฐ ์ด ํ•œ๋ช…์ด ๋ฆฌ๋”๋ณด๋“œ์—์„œ 100์ ์€ ์•„๋‹ˆ์—ˆ๋˜ ๊ฒƒ ๊ฐ™๋‹ค.

    • ์—ญ์‹œ ์šด๋นจ(?)

  • colab์—์„œ ์•™์ƒ๋ธ” ๊ตฌํ˜„ํ•  ๋•Œ RAM์ด ๋„ˆ๋ฌด ๋ถ€์กฑํ•˜๋‹ค. ์ด๊ฒƒ ๋•Œ๋ฌธ์— ์žก์•„๋จน์€ ์‹œ๊ฐ„์ด 1์‹œ๊ฐ„์€ ๋„˜๋Š” ๊ฒƒ ๊ฐ™๋‹ค. ์ž๊พธ ์„ธ์…˜์ด ์ดˆ๊ธฐํ™”๋ผ์„œ ๋„ˆ๋ฌด ํž˜๋“ค์—ˆ๋‹ค.

    • ๋ถ„๋ฅ˜๊ธฐ ๊ฐฏ์ˆ˜๋Š” 10๊ฐœ๋กœ ๋งŽ์ด ์„ค์ •ํ–ˆ๋Š”๋ฐ ๊ทธ๋Œ€์‹  epoch๋ฅผ 10๊นŒ์ง€ ๋ฐ–์— ๋ชปํ–ˆ๋‹ค.

      • epoch = 12๋„ ์„ธ์…˜์ด ์ดˆ๊ธฐํ™”๋๋‹ค ใ… ใ… 

  • Image Augmentation์„ ๋Œ๋ฆด ๋•Œ๋„ ๋งŽ์€ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ์—๋Š” ๋„ˆ๋ฌด ๋ถ€์กฑํ–ˆ๋‹ค.

    • ๋งŒ ์žฅ ์ •๋„๋ฅผ ๋Œ๋ ธ๋Š”๋ฐ ๋Œ์ง€ ๋ชปํ–ˆ๋‹ค.

    • ํ•ฉ์˜ ๋ณธ๊ฒƒ์€ 4,500์žฅ ์ •๋„...

  • ์ƒ๊ฐ๋ณด๋‹ค ๋ฌธ์ œ๊ฐ€ ์‰ฌ์› ๋‹ค.

    • ๋‚œ ๋ชจ๋ธ ๊ตฌํ˜„ํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ด๋ฒˆ์— ์ฒ˜์Œ ์ž‘์„ฑํ•ด๋ดค๋‹ค. ๋ฌผ๋ก  ๊ตฌ๊ธ€๋ง๊ณผ ๊ฐ์ข… ์ฝ”๋“œ๋ฅผ ์ทจํ•ฉํ•œ ๊ฒƒ์ด๊ธด ํ•œ๋ฐ.. ๋ˆ„๊ตฌ๋Š” ์•ˆ๊ทธ๋Ÿด๊นŒ?

    • ๋” ๋งŽ์ด ๊ณต๋ถ€ํ•ด์•ผ๊ฒ ๋‹ค. ํ™”์ดํŒ…!
