Because syntax-rendering problems hurt readability here, please read this post on the blog~
GitPage address of this article

Tensorflow-Numbers-k

Honestly, I don't even remember what this was= = Judging from the code, it looks like a small Keras CNN for the Kaggle digit-recognizer dataset (train.csv): 28x28 grayscale digit images classified into 10 classes.

```python
#!/usr/local/bin/python3.6
import pandas as pd
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout

img_rows, img_cols = 28, 28   # MNIST-style 28x28 grayscale images
num_classes = 10              # digits 0-9

def data_prep(raw):
    """Split a Kaggle train.csv DataFrame into (images, one-hot labels)."""
    out_y = keras.utils.to_categorical(raw.label, num_classes)
    num_images = raw.shape[0]
    x_as_array = raw.values[:, 1:]                 # drop the label column
    x_shaped_array = x_as_array.reshape(num_images, img_rows, img_cols, 1)
    out_x = x_shaped_array / 255                   # scale pixels to [0, 1]
    return out_x, out_y

train_file = "train.csv"
raw_data = pd.read_csv(train_file)
x, y = data_prep(raw_data)

# Two small strided conv layers, each followed by dropout, then a dense head.
model = Sequential()
model.add(Conv2D(30, kernel_size=(3, 3),
                 strides=2,
                 activation='relu',
                 input_shape=(img_rows, img_cols, 1)))
model.add(Dropout(0.5))
model.add(Conv2D(30, kernel_size=(3, 3), strides=2, activation='relu'))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer='adam',
              metrics=['accuracy'])

model.fit(x, y,
          batch_size=128,
          epochs=2,
          validation_split=0.2)
```
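
If you want to actually use the trained model, here is a minimal follow-up sketch. It assumes a Kaggle-style `test.csv` with the same 784 pixel columns as `train.csv` but no `label` column, and it reuses `model`, `img_rows`, and `img_cols` from the script above; the file names and submission format are assumptions, not part of the original post.

```python
import numpy as np
import pandas as pd

# Assumed: test.csv has one row per image and 784 pixel columns, no label.
test = pd.read_csv("test.csv")
x_test = test.values.reshape(test.shape[0], img_rows, img_cols, 1) / 255

# Predict class probabilities and take the most likely digit for each image.
probs = model.predict(x_test)
labels = np.argmax(probs, axis=1)

# Write a Kaggle-style submission file (ImageId starts at 1).
submission = pd.DataFrame({"ImageId": np.arange(1, len(labels) + 1),
                           "Label": labels})
submission.to_csv("submission.csv", index=False)
```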

Enjoy~

This post is updated automatically by a Python script via GitHub/Yuque.


GitHub: Karobben
Blog: Karobben
BiliBili: 史上最不正經的生物狗