LEE_BOMB 2021. 12. 24. 18:22
mnist cnn basic

MNIST + CNN basic 
Convolution layer : feature map - extracts image features 
Pooling layer : pixel reduction (downsampling) - emphasizes image features 

import tensorflow as tf 
from tensorflow.keras.datasets import mnist # image dataset 
import matplotlib.pyplot as plt # image visualization 
import numpy as np



1. image dataset load 

(x_train, y_train), (x_test,y_test) = mnist.load_data()

x_train.shape #(60000, 28, 28) - (size, h, w)

print(x_train[0])




2. Convert integer type -> float type 

x_train = x_train.astype('float32')




3. Normalization : scale pixels to 0~1 

x_train = x_train/255.
x_train[0]




4. First image : feature extraction -> feature emphasis 

img = x_train[0]

plt.imshow(X=img, cmap='gray')
plt.show()

img.shape # (28, 28)


1) Input image : reshape to 4D 

firstImg = img.reshape(1,28,28,1) #[size, h, w, color]


2) Filter definition : plays the role of the weight (w) variable (updated during training)

Filter = tf.Variable(tf.random.normal(shape=[3,3,1,5])) #[h,w,color,map_size]


3) Convolution layer : extract image features 

conv2d = tf.nn.conv2d(input=firstImg, filters=Filter, 
                      strides=[1,1,1,1], padding='SAME')
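
The pooling step whose result is visualized below does not appear in these notes; a minimal sketch that matches the (1, 14, 14, 5) shape used there, assuming a 2x2 max-pooling window with stride 2:

4) Pooling layer : emphasize image features (pixel reduction)

pool = tf.nn.max_pool(input=conv2d, ksize=[1,2,2,1],
                      strides=[1,2,2,1], padding='SAME')
pool.shape #(1, 14, 14, 5)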

Visualize the convolution result 

print(conv2d.shape) #(1, 28, 28, 5) 
conv2d_img = np.swapaxes(conv2d, 0, 3) #(5, 28, 28, 1) - swap axes 

for i, img in enumerate(conv2d_img) : #pass index and image 
    plt.subplot(1, 5, i+1) #1x5 grid : place the 5 feature maps side by side  
    plt.imshow(img.reshape(28, 28), cmap='gray')  
plt.show()


ํด๋ง ์—ฐ์‚ฐ ๊ฒฐ๊ณผ ์‹œ๊ฐํ™”

print(pool.shape) # (1, 14, 14, 5)
pool_img = np.swapaxes(pool, 0, 3) # (5, 14, 14, 1) - swap axes 

for i, img in enumerate(pool_img) : # pass index and image 
    plt.subplot(1,5, i+1) # 1x5 grid : place the 5 feature maps side by side
    plt.imshow(img.reshape(14, 14), cmap='gray') 
plt.show()


keras cnn cifar10

Keras CNN model ์ƒ์„ฑ 
1. image dataset load
2. image dataset ์ „์ฒ˜๋ฆฌ 
3. CNN model ์ƒ์„ฑ 
4. CNN model layer ๊ตฌ์ถ• 
5. model compile : ํ•™์Šตํ™˜๊ฒฝ 
6. model training : ํ•™์Šต 
7. model ํ‰๊ฐ€ 
8. model history 


from tensorflow.keras.datasets import cifar10 #color image dataset
from tensorflow.keras.utils import to_categorical #y variable encoding
from tensorflow.keras import Sequential #keras model
from tensorflow.keras.layers import Conv2D, MaxPool2D #Conv layer
from tensorflow.keras.layers import Flatten, Dense, Dropout #DNN layer
import matplotlib.pyplot as plt #image show



1. image dataset load

(x_train, y_train), (x_val, y_val) = cifar10.load_data()

x_train.shape #(50000, 32, 32, 3) - (size, h, w, c)
y_train.shape #(50000, 1)

first_img = x_train[0]
first_img.shape #(32, 32, 3)

plt.imshow(X=first_img)
plt.show()

y_train[0] #[6] - frog 

x_val.shape #(10000, 32, 32, 3)

 



2. image dataset preprocessing
1) convert image pixels to float 

x_train = x_train.astype(dtype='float32') #match the float dtype of the filter 
x_val = x_val.astype(dtype='float32')


2) image normalization : 0~1

x_train = x_train / 255 
x_val = x_val / 255


3) label preprocessing : class index (decimal) -> one-hot encoding (binary)

y_train = to_categorical(y_train)
y_val = to_categorical(y_val)

y_train[0] #[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.] - 6




3. CNN model ์ƒ์„ฑ 

model = Sequential()




4. Build the CNN model layers 

input_shape = (32, 32, 3) #input image shape


Conv layer1 : layer 1, [28, 28, 32] -> [13, 13, 32]

model.add(Conv2D(filters=32, kernel_size=(5,5),
                 input_shape = input_shape, activation='relu')) #convolution + activation function


filters : number of feature maps, kernel_size : filter size  

model.add(MaxPool2D(pool_size=(3,3), strides=(2,2))) #pooling : pixel reduction


pool_size : ํด๋ง ์œˆ๋„, strides : ์œˆ๋„ ์ด๋™ ํฌ๊ธฐ 

model.add(Dropout(rate=0.3)) #30% ๋ฌด์ž‘์œ„ n/w ์ œ๊ฑฐ


Conv layer2 : layer 2, [9, 9, 64] -> [4, 4, 64]

model.add(Conv2D(filters=64, kernel_size=(5,5),
                 activation='relu')) #convolution + activation function 
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2))) #pooling : pixel reduction 
model.add(Dropout(rate=0.1)) #randomly drop 10% of the nodes


Conv layer3 : layer 3, [2, 2, 128]

model.add(Conv2D(filters=128, kernel_size=(3,3),
                 activation='relu')) #convolution + activation function 
model.add(Dropout(rate=0.1)) #randomly drop 10% of the nodes


Fully connected stage - Flatten layer : 3d (h,w,c) -> 1d (h*w*c)

model.add(Flatten())


DNN layer1 : layer 4 (hidden layer)

model.add(Dense(units=64, activation='relu'))


DNN layer2 : layer 5 (output layer)

model.add(Dense(units=10, activation='softmax'))          
model.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_8 (Conv2D)            (None, 28, 28, 32)        2432      
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 13, 13, 32)        0         
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 9, 9, 64)          51264     
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 2, 2, 128)         73856     
_________________________________________________________________
flatten_1 (Flatten)          (None, 512)               0         
_________________________________________________________________
dense_5 (Dense)              (None, 64)                32832     
_________________________________________________________________
dense_6 (Dense)              (None, 10)                650       
=================================================================
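

As a sanity check on the Param # column, a small sketch of the usual parameter formulas (Conv2D : (kh*kw*in_channels + 1) * filters, Dense : (inputs + 1) * units); the numbers are computed here, not copied from Keras output:

print((5*5*3 + 1) * 32)   # 2432  - conv2d_8
print((5*5*32 + 1) * 64)  # 51264 - conv2d_9
print((3*3*64 + 1) * 128) # 73856 - conv2d_10
print((512 + 1) * 64)     # 32832 - dense_5
print((64 + 1) * 10)      # 650   - dense_6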



5. model compile : training setup (multiclass classifier)

model.compile(optimizer='adam',
              loss='categorical_crossentropy', #y = one_hot encoding 
              metrics=['accuracy'])

 


               
6. model training

model_fit = model.fit(x=x_train, y=y_train, #training set 
          epochs=10, #number of training epochs
          batch_size = 100, #images per batch (1 epoch = 500 steps x 100 images) 
          verbose=1,
          validation_data=(x_val, y_val)) #validation set




7. model ํ‰๊ฐ€

print('='*30)
model.evaluate(x=x_val, y=y_val)
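

For reference, a short sketch of inspecting individual predictions; the class-name list assumes the standard CIFAR-10 label order and is not part of the original notes:

import numpy as np

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck'] #assumed CIFAR-10 label order

y_pred = model.predict(x_val[:5])            #softmax class probabilities
y_pred_class = np.argmax(y_pred, axis=1)     #predicted class index
y_true_class = np.argmax(y_val[:5], axis=1)  #true class index (one-hot -> index)

for p, t in zip(y_pred_class, y_true_class):
    print('pred :', class_names[p], '/ true :', class_names[t])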




8. model history 
loss vs val_loss : overfitting starts around epoch 3.5

plt.plot(model_fit.history['loss'], 'y', label='train loss')
plt.plot(model_fit.history['val_loss'], 'r', label='val loss')
plt.legend(loc='best')
plt.show()


accuracy vs val_accuracy : overfitting starts around epoch 4

plt.plot(model_fit.history['accuracy'], 'y', label='train accuracy')
plt.plot(model_fit.history['val_accuracy'], 'r', label='val accuracy')
plt.legend(loc='best')
plt.show()


real image cnn basic

real image + CNN basic

import tensorflow as tf 
import numpy as np 
import matplotlib.pyplot as plt #image show 
from matplotlib.image import imread #image  read 

path = r'C:\ITWILL\5_Tensorflow\data\images'



1. image load 

img = imread(path + '/parrots.png')
type(img) #numpy.ndarray
plt.imshow(X=img)
plt.show()


RGB pixel values 

img.shape #(512, 768, 3) - (h, w, color)




2. image reshape

Img = img.reshape(1, 512, 768, 3) #(size, h, w, color)




3. Filter ์ •์˜ 

Filter = tf.Variable(tf.random.normal([9,9,3,5])) #[h,w,color,f-map]

 



4. Convolution layer : extract image features 

conv2d = tf.nn.conv2d(input=Img, filters=Filter, 
                      strides=[1,2,2,1], padding='SAME')

conv2d.shape # [1, 256, 384, 5]




5. ํด๋ง์ธต : ์ด๋ฏธ์ง€ ํŠน์ง• ๊ฐ•์กฐ(ํ”ฝ์…€ ์ถ•์†Œ)

pool = tf.nn.max_pool(input=conv2d, ksize=[1,7,7,1], 
               strides=[1,4,4,1], padding='SAME')
pool.shape #[1, 64, 96, 5]
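
Both shapes follow from the 'SAME' padding rule, output = ceil(input / stride):

# conv (stride 2) : 512/2 = 256, 768/2 = 384 -> (1, 256, 384, 5)
# pool (stride 4) : 256/4 = 64,  384/4 = 96  -> (1, 64, 96, 5)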


Visualize the convolution result 

print(conv2d.shape) #(1, 256, 384, 5)
conv2d_img = np.swapaxes(conv2d, 0, 3) #(5, 256, 384, 1) - swap axes 

for i, img in enumerate(conv2d_img) : #pass index and image 
    plt.subplot(1, 5, i+1) #1x5 grid : place the 5 feature maps side by side  
    plt.imshow(img.reshape(256, 384), cmap='gray')  
plt.show()


ํด๋ง ์—ฐ์‚ฐ ๊ฒฐ๊ณผ ์‹œ๊ฐํ™”

print(pool.shape) #(1, 64, 96, 5)
pool_img = np.swapaxes(pool, 0, 3) #(5, 14, 14, 1) - ์ถ•๊ตํ™˜ 

for i, img in enumerate(pool_img) : #index, image ๋„˜๊น€ 
    plt.subplot(1,5, i+1) #1ํ–‰5์—ด ๊ฒฉ์ž : 1 -> 5 ์ด๋ฏธ์ง€ ๋ฐฐ์—ด
    plt.imshow(img.reshape(64, 96), cmap='gray') 
plt.show()


keras cnn tensorboard

* Based on keras_cnn_cifar10 
- Visualize the Keras CNN training with TensorBoard 

from tensorflow.keras.datasets import cifar10 #color image dataset
from tensorflow.keras.utils import to_categorical #y variable encoding
from tensorflow.keras import Sequential #keras model
from tensorflow.keras.layers import Conv2D, MaxPool2D #Conv layer
from tensorflow.keras.layers import Flatten, Dense, Dropout #DNN layer
import matplotlib.pyplot as plt #image show   
import tensorflow as tf #[added]


[Added] Reset for TensorBoard : clear any previous Keras session state 

tf.keras.backend.clear_session()



1. image dataset load

(x_train, y_train), (x_val, y_val) = cifar10.load_data()

x_train.shape #(50000, 32, 32, 3) - (size, h, w, c)
y_train.shape #(50000, 1)

first_img = x_train[0]
first_img.shape #(32, 32, 3)

plt.imshow(X=first_img)
plt.show()

y_train[0] #[6] - frog 

x_val.shape #(10000, 32, 32, 3)




2. image dataset preprocessing
1) convert image pixels to float 

x_train = x_train.astype(dtype='float32') #match the float dtype of the filter 
x_val = x_val.astype(dtype='float32')


2) image normalization : 0~1

x_train = x_train / 255 
x_val = x_val / 255


3) label preprocessing : class index (decimal) -> one-hot encoding (binary)

y_train = to_categorical(y_train)
y_val = to_categorical(y_val)

y_train[0] #[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.] - 6




3. CNN model ์ƒ์„ฑ 

model = Sequential()




4. Build the CNN model layers 

input_shape = (32, 32, 3) #input image shape


Conv layer1 : layer 1, [28, 28, 32] -> [13, 13, 32]

model.add(Conv2D(filters=32, kernel_size=(5,5),
                 input_shape = input_shape, activation='relu')) #convolution + activation function


filters : number of feature maps, kernel_size : filter size  

model.add(MaxPool2D(pool_size=(3,3), strides=(2,2))) #pooling : pixel reduction


pool_size : ํด๋ง ์œˆ๋„, strides : ์œˆ๋„ ์ด๋™ ํฌ๊ธฐ 

model.add(Dropout(rate=0.3)) # 30% ๋ฌด์ž‘์œ„ n/w ์ œ๊ฑฐ


Conv layer2 : layer 2, [9, 9, 64] -> [4, 4, 64]

model.add(Conv2D(filters=64, kernel_size=(5,5),
                 activation='relu')) #convolution + activation function 
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2))) #pooling : pixel reduction 
model.add(Dropout(rate=0.1)) #randomly drop 10% of the nodes


Conv layer3 : layer 3, [2, 2, 128]

model.add(Conv2D(filters=128, kernel_size=(3,3),
                 activation='relu')) #convolution + activation function 
model.add(Dropout(rate=0.1)) #randomly drop 10% of the nodes


Fully connected stage - Flatten layer : 3d (h,w,c) -> 1d (h*w*c)

model.add(Flatten())


DNN layer1 : layer 4 (hidden layer)

model.add(Dense(units=64, activation='relu'))


DNN layer2 : layer 5 (output layer)

model.add(Dense(units=10, activation='softmax'))          
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_8 (Conv2D)            (None, 28, 28, 32)        2432      
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 13, 13, 32)        0         
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 9, 9, 64)          51264     
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 2, 2, 128)         73856     
_________________________________________________________________
flatten_1 (Flatten)          (None, 512)               0         
_________________________________________________________________
dense_5 (Dense)              (None, 64)                32832     
_________________________________________________________________
dense_6 (Dense)              (None, 10)                650       
=================================================================



5. model compile : training setup (multiclass classifier)

model.compile(optimizer='adam',
              loss='categorical_crossentropy', #y = one_hot encoding 
              metrics=['accuracy'])


[Added] TensorBoard callback

from tensorflow.keras.callbacks import TensorBoard
from datetime import datetime #'20211215-134135'


Log file save location 

logdir = 'c:/graph/' + datetime.now().strftime('%Y%m%d-%H%M%S')
callback = TensorBoard(log_dir = logdir)
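
After training writes its logs, TensorBoard can be started from a terminal and viewed in a browser (standard TensorBoard usage, not shown in the original notes):

# at the command prompt:
#   tensorboard --logdir=c:/graph
# then open http://localhost:6006 in the browser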




6. model training

model_fit = model.fit(x=x_train, y=y_train, #training set 
          epochs=10, #number of training epochs
          batch_size = 100, #images per batch (1 epoch = 500 steps x 100 images) 
          verbose=1,
          validation_data=(x_val, y_val), #validation set 
          callbacks = [callback]) #[added] TensorBoard log file callback




7. model ํ‰๊ฐ€

print('='*30)
model.evaluate(x=x_val, y=y_val)



8. model history 
loss vs val_loss : overfitting starts around epoch 3.5

plt.plot(model_fit.history['loss'], 'y', label='train loss')
plt.plot(model_fit.history['val_loss'], 'r', label='val loss')
plt.legend(loc='best')
plt.show()


accuracy vs val_accuracy : overfitting starts around epoch 4

plt.plot(model_fit.history['accuracy'], 'y', label='train accuracy')
plt.plot(model_fit.history['val_accuracy'], 'r', label='val accuracy')
plt.legend(loc='best')
plt.show()