DAY67. TensorFlow CNN model
mnist cnn basic
MNIST + CNN basic
Convolution layer : feature map - extracts image features
Pooling layer : pixel reduction (downsampling) - emphasizes image features
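For reference (not part of the original notes), the standard output-size formulas that the shape comments throughout this section rely on:
'VALID' padding : out = floor((in - kernel) / stride) + 1
'SAME'  padding : out = ceil(in / stride)
(pooling follows the same rules, with the pool window playing the role of the kernel)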
import tensorflow as tf
from tensorflow.keras.datasets import mnist # image dataset
import matplotlib.pyplot as plt # image visualization
import numpy as np
1. image dataset load
(x_train, y_train), (x_test,y_test) = mnist.load_data()
x_train.shape #(60000, 28, 28) - (size, h, w)
print(x_train[0])
2. convert integer type -> float type
x_train = x_train.astype('float32')
3. normalization : scale pixel values to 0~1
x_train = x_train/255.
x_train[0]
4. first image : feature extraction -> feature emphasis
img = x_train[0]
plt.imshow(X=img, cmap='gray')
plt.show()
img.shape # (28, 28)
1) input image : reshape to 4D
firstImg = img.reshape(1,28,28,1) #[size, h, w, color]
2) define Filter : plays the role of the w (weight) variable (updated during learning)
Filter = tf.Variable(tf.random.normal(shape=[3,3,1,5])) #[h,w,color,map_size]
3) convolution layer : extract image features
conv2d = tf.nn.conv2d(input=firstImg, filters=Filter,
strides=[1,1,1,1], padding='SAME')
Visualize the convolution results
print(conv2d.shape) #(1, 28, 28, 5)
conv2d_img = np.swapaxes(conv2d, 0, 3) #(5, 28, 28, 1) - swap axes
for i, img in enumerate(conv2d_img) : #unpack index, image
    plt.subplot(1, 5, i+1) #1-row, 5-column grid : the 5 feature maps side by side
    plt.imshow(img.reshape(28, 28), cmap='gray')
plt.show()
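The `pool` tensor visualized next is never defined in these notes; a minimal sketch, assuming 2x2 max pooling with stride 2 (this reproduces the (1, 14, 14, 5) shape printed below):
pool = tf.nn.max_pool(input=conv2d, ksize=[1,2,2,1],
                      strides=[1,2,2,1], padding='SAME') #pixel reduction : 28x28 -> 14x14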
Visualize the pooling results
print(pool.shape) # (1, 14, 14, 5)
pool_img = np.swapaxes(pool, 0, 3) # (5, 14, 14, 1) - swap axes
for i, img in enumerate(pool_img) : # unpack index, image
    plt.subplot(1, 5, i+1) # 1-row, 5-column grid : the 5 feature maps side by side
    plt.imshow(img.reshape(14, 14), cmap='gray')
plt.show()
keras cnn cifar10
Keras CNN model creation
1. image dataset load
2. image dataset preprocessing
3. create CNN model
4. build CNN model layers
5. model compile : learning environment
6. model training
7. model evaluation
8. model history
from tensorflow.keras.datasets import cifar10 #color image dataset
from tensorflow.keras.utils import to_categorical #y variable encoding
from tensorflow.keras import Sequential #keras model
from tensorflow.keras.layers import Conv2D, MaxPool2D #Conv layer
from tensorflow.keras.layers import Flatten, Dense, Dropout #DNN layer
import matplotlib.pyplot as plt #image show
1. image dataset load
(x_train, y_train), (x_val, y_val) = cifar10.load_data()
x_train.shape #(50000, 32, 32, 3) - (size, h, w, c)
y_train.shape #(50000, 1)
first_img = x_train[0]
first_img.shape #(32, 32, 3)
plt.imshow(X=first_img)
plt.show()
y_train[0] #[6] - frog
x_val.shape #(10000, 32, 32, 3)
2. image dataset preprocessing
1) convert image pixels to float type
x_train = x_train.astype(dtype='float32') #image vs filter
x_val = x_val.astype(dtype='float32')
2) image normalization : 0~1
x_train = x_train / 255
x_val = x_val / 255
3) label preprocessing : class (decimal) -> one hot encoding (binary)
y_train = to_categorical(y_train)
y_val = to_categorical(y_val)
y_train[0] #[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.] - 6
3. create CNN model
model = Sequential()
4. build CNN model layers
input_shape = (32, 32, 3) #input image shape
Conv layer1 : layer 1, [28, 28, 32] -> [13, 13, 32]
model.add(Conv2D(filters=32, kernel_size=(5,5),
                 input_shape=input_shape, activation='relu')) #convolution + activation function
filters : number of feature maps, kernel_size : filter size
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2))) #pooling : pixel reduction
pool_size : pooling window size, strides : window step size
model.add(Dropout(rate=0.3)) #randomly drop 30% of units
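A worked check of the [28, 28, 32] -> [13, 13, 32] shapes noted above (Conv2D and MaxPool2D default to 'VALID' padding, so out = (in - kernel)//stride + 1):
conv : (32 - 5)//1 + 1 = 28
pool : (28 - 3)//2 + 1 = 13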
Conv layer2 : layer 2, [9, 9, 64] -> [4, 4, 64]
model.add(Conv2D(filters=64, kernel_size=(5,5),
                 activation='relu')) #convolution + activation (input_shape is only needed on the first layer)
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2))) #pooling : pixel reduction
model.add(Dropout(rate=0.1)) #randomly drop 10% of units
Conv layer3 : layer 3, [2, 2, 128]
model.add(Conv2D(filters=128, kernel_size=(3,3),
                 activation='relu')) #convolution + activation function
model.add(Dropout(rate=0.1)) #randomly drop 10% of units
fully connected layer : Flatten layer : 3d(h,w,c) -> 1d(h*w*c)
model.add(Flatten())
DNN layer1 : layer 4 (hidden layer)
model.add(Dense(units=64, activation='relu'))
DNN layer2 : layer 5 (output layer)
model.add(Dense(units=10, activation='softmax'))
model.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_8 (Conv2D) (None, 28, 28, 32) 2432
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 13, 13, 32) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 9, 9, 64) 51264
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 4, 4, 64) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 2, 2, 128) 73856
_________________________________________________________________
flatten_1 (Flatten) (None, 512) 0
_________________________________________________________________
dense_5 (Dense) (None, 64) 32832
_________________________________________________________________
dense_6 (Dense) (None, 10) 650
=================================================================
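Where the Param # column comes from (a worked check; Conv2D params = (kh*kw*in_channels + 1 bias) * filters, Dense params = (inputs + 1 bias) * units):
conv2d_8  : (5*5*3  + 1) * 32  = 2432
conv2d_9  : (5*5*32 + 1) * 64  = 51264
conv2d_10 : (3*3*64 + 1) * 128 = 73856
dense_5   : (512 + 1) * 64     = 32832
dense_6   : (64  + 1) * 10     = 650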
5. model compile : learning environment (multiclass classifier)
model.compile(optimizer='adam',
loss='categorical_crossentropy', #y = one_hot encoding
metrics=['accuracy'])
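A minimal alternative sketch, assuming the labels had been left as integer class indices (0~9) instead of one-hot encoded with to_categorical; sparse_categorical_crossentropy accepts them directly:
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy', #y = integer class labels (no to_categorical)
              metrics=['accuracy'])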
6. model training
model_fit = model.fit(x=x_train, y=y_train, #training set
                      epochs=10, #number of training passes
                      batch_size = 100, #images per batch (1 epoch = 500 batches x 100 images)
                      verbose=1,
                      validation_data=(x_val, y_val)) #validation set
7. model evaluation
print('='*30)
model.evaluate(x=x_val, y=y_val)
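A minimal follow-up sketch (not in the original notes): converting the softmax outputs of model.predict into predicted class labels; numpy is assumed, so add the import since this script only imports matplotlib:
import numpy as np
y_prob = model.predict(x_val)       #(10000, 10) softmax probabilities
y_pred = np.argmax(y_prob, axis=1)  #predicted class index per image
y_true = np.argmax(y_val, axis=1)   #decode the one-hot labels back to class indices
print('accuracy :', np.mean(y_pred == y_true))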
8. model history
loss vs val_loss : overfitting starts around epoch 3.5
plt.plot(model_fit.history['loss'], 'y', label='train loss')
plt.plot(model_fit.history['val_loss'], 'r', label='val loss')
plt.legend(loc='best')
plt.show()
accuracy vs val_accuracy : overfitting starts around epoch 4
plt.plot(model_fit.history['accuracy'], 'y', label='train accuracy')
plt.plot(model_fit.history['val_accuracy'], 'r', label='val accuracy')
plt.legend(loc='best')
plt.show()
real image cnn basic
real image + CNN basic
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt #image show
from matplotlib.image import imread #image read
path = r'C:\ITWILL\5_Tensorflow\data\images'
1. image load
img = imread(path + '/parrots.png')
type(img) #numpy.ndarray
plt.imshow(X=img)
plt.show()
RGB pixel values
img.shape #(512, 768, 3) - (h, w, color)
2. image reshape
Img = img.reshape(1, 512, 768, 3) #(size, h, w, color)
3. define Filter
Filter = tf.Variable(tf.random.normal([9,9,3,5])) #[h,w,color,f-map]
4. convolution layer : extract image features
conv2d = tf.nn.conv2d(input=Img, filters=Filter,
strides=[1,2,2,1], padding='SAME')
conv2d.shape # [1, 256, 384, 5]
5. pooling layer : emphasize image features (pixel reduction)
pool = tf.nn.max_pool(input=conv2d, ksize=[1,7,7,1],
strides=[1,4,4,1], padding='SAME')
pool.shape #[1, 64, 96, 5]
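A worked check of the two shapes above ('SAME' padding, so out = ceil(in / stride)):
conv : stride 2 -> 512/2 = 256, 768/2 = 384 -> (1, 256, 384, 5)
pool : stride 4 -> 256/4 = 64,  384/4 = 96  -> (1, 64, 96, 5)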
Visualize the convolution results
print(conv2d.shape) #(1, 256, 384, 5)
conv2d_img = np.swapaxes(conv2d, 0, 3) #(5, 256, 384, 1) - swap axes
for i, img in enumerate(conv2d_img) : #unpack index, image
    plt.subplot(1, 5, i+1) #1-row, 5-column grid : the 5 feature maps side by side
    plt.imshow(img.reshape(256, 384), cmap='gray')
plt.show()
Visualize the pooling results
print(pool.shape) #(1, 64, 96, 5)
pool_img = np.swapaxes(pool, 0, 3) #(5, 64, 96, 1) - swap axes
for i, img in enumerate(pool_img) : #unpack index, image
    plt.subplot(1, 5, i+1) #1-row, 5-column grid : the 5 feature maps side by side
    plt.imshow(img.reshape(64, 96), cmap='gray')
plt.show()
keras cnn tensorboard
* refer to keras_cnn_cifar10
- Keras CNN layers : TensorBoard visualization
from tensorflow.keras.datasets import cifar10 #color image dataset
from tensorflow.keras.utils import to_categorical #y variable encoding
from tensorflow.keras import Sequential #keras model
from tensorflow.keras.layers import Conv2D, MaxPool2D #Conv layer
from tensorflow.keras.layers import Flatten, Dense, Dropout #DNN layer
import matplotlib.pyplot as plt #image show
import tensorflow as tf #[added]
[added] TensorBoard initialization : clear the Keras session
tf.keras.backend.clear_session()
1. image dataset load
(x_train, y_train), (x_val, y_val) = cifar10.load_data()
x_train.shape #(50000, 32, 32, 3) - (size, h, w, c)
y_train.shape #(50000, 1)
first_img = x_train[0]
first_img.shape #(32, 32, 3)
plt.imshow(X=first_img)
plt.show()
y_train[0] #[6] - frog
x_val.shape #(10000, 32, 32, 3)
2. image dataset preprocessing
1) convert image pixels to float type
x_train = x_train.astype(dtype='float32') #image vs filter
x_val = x_val.astype(dtype='float32')
2) image normalization : 0~1
x_train = x_train / 255
x_val = x_val / 255
3) label preprocessing : class (decimal) -> one hot encoding (binary)
y_train = to_categorical(y_train)
y_val = to_categorical(y_val)
y_train[0] #[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.] - 6
3. create CNN model
model = Sequential()
4. build CNN model layers
input_shape = (32, 32, 3) #input image shape
Conv layer1 : layer 1, [28, 28, 32] -> [13, 13, 32]
model.add(Conv2D(filters=32, kernel_size=(5,5),
                 input_shape=input_shape, activation='relu')) #convolution + activation function
filters : number of feature maps, kernel_size : filter size
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2))) #pooling : pixel reduction
pool_size : pooling window size, strides : window step size
model.add(Dropout(rate=0.3)) #randomly drop 30% of units
Conv layer2 : layer 2, [9, 9, 64] -> [4, 4, 64]
model.add(Conv2D(filters=64, kernel_size=(5,5),
                 activation='relu')) #convolution + activation (input_shape is only needed on the first layer)
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2))) #pooling : pixel reduction
model.add(Dropout(rate=0.1)) #randomly drop 10% of units
Conv layer3 : layer 3, [2, 2, 128]
model.add(Conv2D(filters=128, kernel_size=(3,3),
                 activation='relu')) #convolution + activation function
model.add(Dropout(rate=0.1)) #randomly drop 10% of units
fully connected layer : Flatten layer : 3d(h,w,c) -> 1d(h*w*c)
model.add(Flatten())
DNN layer1 : layer 4 (hidden layer)
model.add(Dense(units=64, activation='relu'))
DNN layer2 : layer 5 (output layer)
model.add(Dense(units=10, activation='softmax'))
model.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_8 (Conv2D) (None, 28, 28, 32) 2432
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 13, 13, 32) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 9, 9, 64) 51264
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 4, 4, 64) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 2, 2, 128) 73856
_________________________________________________________________
flatten_1 (Flatten) (None, 512) 0
_________________________________________________________________
dense_5 (Dense) (None, 64) 32832
_________________________________________________________________
dense_6 (Dense) (None, 10) 650
=================================================================
5. model compile : learning environment (multiclass classifier)
model.compile(optimizer='adam',
loss='categorical_crossentropy', #y = one_hot encoding
metrics=['accuracy'])
[added] TensorBoard
from tensorflow.keras.callbacks import TensorBoard
from datetime import datetime #'20211215-134135'
log file save location
logdir = 'c:/graph/' + datetime.now().strftime('%Y%m%d-%H%M%S')
callback = TensorBoard(log_dir = logdir)
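To view the logs after (or during) training, a typical invocation from a terminal (assuming TensorBoard was installed along with TensorFlow):
tensorboard --logdir c:/graph
then open http://localhost:6006 in a browser.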
6. model training
model_fit = model.fit(x=x_train, y=y_train, #training set
                      epochs=10, #number of training passes
                      batch_size = 100, #images per batch (1 epoch = 500 batches x 100 images)
                      verbose=1,
                      validation_data=(x_val, y_val), #validation set
                      callbacks = [callback]) #[added] write TensorBoard log files
7. model evaluation
print('='*30)
model.evaluate(x=x_val, y=y_val)
8. model history
loss vs val_loss : overfitting starts around epoch 3.5
plt.plot(model_fit.history['loss'], 'y', label='train loss')
plt.plot(model_fit.history['val_loss'], 'r', label='val loss')
plt.legend(loc='best')
plt.show()
accuracy vs val_accuracy : overfitting starts around epoch 4
plt.plot(model_fit.history['accuracy'], 'y', label='train accuracy')
plt.plot(model_fit.history['val_accuracy'], 'r', label='val accuracy')
plt.legend(loc='best')
plt.show()