You can use Google Colab as well if you need better computational power. Along with these four, we will also use scikit-learn. The purpose of these libraries will become clearer once we dive into the code.

Okay! We have our tools and libraries ready. Start by importing all the above-mentioned libraries. Along with the libraries themselves, I have also imported some specific modules from them.

import numpy as np
from sklearn.model_selection import train_test_split
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Flatten, BatchNormalization

train_test_split: This module splits the training dataset into training and validation data. The reason behind this split is to check whether our model is overfitting. We use the training data to train our model and then compare the resulting accuracy to the validation accuracy. If the difference between the two is significantly large, our model is probably overfitting. We will iterate through the model-building process, making the required changes along the way. Once we are satisfied with our training and validation accuracies, we will make final predictions on our test data.

to_categorical: to_categorical is a Keras utility. It is used to convert categorical labels into one-hot encodings. Say we have three labels ("apples", "oranges", "bananas"); then the one-hot encodings for each of these would be [1, 0, 0] -> "apples", [0, 1, 0] -> "oranges", [0, 0, 1] -> "bananas".

The rest of the Keras modules we have imported are layers for our convolutional network. We will discuss convolutional layers when we start building our model, and we will also take a quick look at what each of these layers does. For now, we will shift our attention to getting our data and analysing it. You should always remember the importance of pre-processing and analysing the data.
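To make the train_test_split step above concrete, here is a minimal sketch of how the split could look. The arrays X and y are hypothetical placeholders standing in for our images and labels (they are not part of the original code), and the 80/20 split ratio is just an illustrative choice.

import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for our data: 100 dummy 64x64 RGB images and 3-class integer labels.
X = np.random.rand(100, 64, 64, 3)
y = np.random.randint(0, 3, size=100)

# Hold out 20% of the training data as a validation set.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

print(X_train.shape, X_val.shape)  # (80, 64, 64, 3) (20, 64, 64, 3)

During training we would fit on X_train and y_train while monitoring accuracy on X_val and y_val; a large gap between the two accuracies is the overfitting signal described above.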
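Similarly, here is a minimal sketch of the one-hot encoding step with to_categorical, using the three fruit labels from the example above. The label-to-index mapping is an assumption for illustration, and depending on your Keras version the utility may need to be imported from tensorflow.keras.utils instead.

import numpy as np
from keras.utils import to_categorical

labels = ["apples", "oranges", "bananas"]
# to_categorical expects integer class indices, so map each label name to an index first.
label_to_index = {name: i for i, name in enumerate(labels)}
y = np.array([label_to_index[name] for name in ["apples", "oranges", "bananas"]])

one_hot = to_categorical(y, num_classes=3)
print(one_hot)
# [[1. 0. 0.]   -> "apples"
#  [0. 1. 0.]   -> "oranges"
#  [0. 0. 1.]]  -> "bananas"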