Showing posts from June, 2017

Automatically Tuning Deep Learning Parameters with Hyperparameter Optimization

Deep learning models usually require extensive parameter tuning to reach their best results. To address this, researchers have applied several algorithms that tune hyperparameters automatically, including Tree of Parzen Estimators (TPE), Random Search, Grid Search, and Bayesian Optimization. In Python, the hyperopt package implements these methods, and hyperas wraps it for the Keras neural-network framework. Usage is as follows:

```python
from __future__ import print_function

from hyperopt import Trials, STATUS_OK, tpe
from keras.datasets import mnist
from keras.layers.core import Dense, Dropout, Activation
from keras.models import Sequential
from keras.utils import np_utils

from hyperas import optim
from hyperas.distributions import choice, uniform, conditional


def data():
    """
    Data providing function:

    This function is separated from model() so that hyperopt
    won't reload data for each evaluation run.
    """
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(60000, 784)
    x_test = x_test.reshape(10000, 784)
    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    # Scale pixel values to [0, 1] and one-hot encode the labels.
    x_train /= 255
    x_test /= 255
    nb_classes = 10
    y_train = np_utils.to_categorical(y_train, nb_classes)
    y_test = np_utils.to_categorical(y_test, nb_classes)
    return x_train, y_train, x_test, y_test
```
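The search strategies named above can be illustrated without Keras at all. The sketch below is a minimal, self-contained toy (the objective function and its optimum are hypothetical, not from hyperopt) comparing grid search, which exhaustively evaluates a fixed grid, with random search, which samples the space at random:

```python
import itertools
import random


def objective(lr, units):
    # Hypothetical validation loss: minimized at lr=0.01, units=128.
    return (lr - 0.01) ** 2 * 1e4 + ((units - 128) / 128) ** 2


def grid_search():
    # Evaluate every combination on a fixed grid; return (loss, lr, units).
    return min(
        (objective(lr, u), lr, u)
        for lr, u in itertools.product([0.001, 0.01, 0.1], [64, 128, 256])
    )


def random_search(n_trials=50, seed=0):
    # Sample lr log-uniformly and units from a discrete choice,
    # keeping the best configuration seen so far.
    rng = random.Random(seed)
    best = (float("inf"), None, None)
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-3, -1)      # log-uniform over [1e-3, 1e-1]
        units = rng.choice([64, 128, 256])
        loss = objective(lr, units)
        if loss < best[0]:
            best = (loss, lr, units)
    return best


print(grid_search())    # the grid happens to contain the exact optimum
print(random_search())
```

Grid search only wins here because the grid happens to contain the optimum; random search scales better when only a few hyperparameters matter, and TPE/Bayesian methods go further by modeling the objective to choose the next trial.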