Building neural network models involves compiling code for efficiency and measuring performance through critical metrics. Understanding model compilation and interpreting accuracy and loss results can significantly improve your neural network's effectiveness.
Key Insights
- Compiling a neural network model optimizes the code for better performance, typically using parameters such as the Adam optimizer and accuracy as a primary metric.
- Loss metrics are essential for evaluating the model's effectiveness, as decreasing loss values indicate the model is improving its predictions during training.
- Increasing the number of epochs generally enhances model accuracy, but after multiple epochs (such as five), improvements become marginal, signaling diminishing returns for additional training.
Note: These materials offer prospective students a preview of how our classes are structured. Students enrolled in this course will receive access to the full set of materials, including video lectures, project-based assignments, and instructor feedback.
We've built our model. Before we actually run it, which we'll do in just a moment, we need to compile it. Compiling may sound like a strange step here.
It's unusual for machine learning models, but it's very common in the broader programming world. Compiling takes code and makes it more efficient. And we're going to pass in a few parameters when we compile.
A full discussion of the hyperparameters, as the phrase is (the little settings you can change before you even train on your data), is outside the scope of this course. But you can read more about these, and they're fascinating. So here's the standard way we're going to compile: our model has a compile method, because again, this is a big part of our training.
It's a big part of neural networks. So this model just has a compile method. We'll set the optimizer to the Adam optimizer.
And again, you can learn more about that. The loss, we can talk about briefly: it's essentially how we measure how badly things are going, how far the model's predictions are from the correct answers during training.
And finally, what are our metrics? How should we measure how we're doing? Because it's going to measure that for us. And the answer, for us, is just accuracy.
It might depend on what you want, but we don't actually care here about precision and recall, which we learned about earlier. We don't care as much about which specific ones it's getting wrong. We just care about its overall accuracy.
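The compile step described above can be sketched in Keras roughly like this. The model architecture shown (layer sizes, the 28x28 input shape for digit images) is an illustrative assumption, not the exact model from the course; the compile arguments match the walkthrough: the Adam optimizer, a loss function, and accuracy as the only metric.

```python
from tensorflow import keras

# An illustrative digit-classification model (layer sizes are assumptions).
model = keras.Sequential([
    keras.Input(shape=(28, 28)),                   # one 28x28 grayscale image
    keras.layers.Flatten(),                        # flatten to a 784-value vector
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),  # one output per digit 0-9
])

# Compile: choose the Adam optimizer, a loss suited to integer labels,
# and track accuracy as our single metric.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

`sparse_categorical_crossentropy` is the usual loss choice when labels are plain integers (0 through 9) rather than one-hot vectors.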
And we're going to dive more into analyzing neural network results in the next section. All right. Did I run this? You know what? I didn't run this. That sounds like me. There we go, ran it.
Let's run this one too. There we go. And now let's train the model on our X train and y train.
We'll say model.fit, as we typically do. Our X train is our training images, normalized, meaning we took them and scaled them. And we'll pass in our training labels, the digits 0 through 9, the correct answer for each image.
And finally, we'll pass in our epochs value. "Epoch" is how it's typically pronounced, though I have heard people say "epics," and I tend to say it that way.
But I'll say "epoch," because it is more technically correct. And we'll give it 5. This is basically how many times it should pass over the training data, trying to improve itself each time. It's going to run 5 times.
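The training call described above can be sketched as follows. The variable names `train_images_normalized` and `train_labels` follow the walkthrough's description; to keep the sketch self-contained and quick to run, tiny random arrays stand in for the real scaled images and digit labels.

```python
import numpy as np
from tensorflow import keras

# Stand-in data (assumption): 64 fake "images" scaled to 0-1, with digit labels.
train_images_normalized = np.random.rand(64, 28, 28)
train_labels = np.random.randint(0, 10, size=64)

model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# epochs=5: make five passes over the training data, improving each time.
history = model.fit(train_images_normalized, train_labels, epochs=5, verbose=0)
```

The returned `history` object records the loss and accuracy for each of the five epochs, which is what you watch scroll by in the output.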
We'll talk a lot in the next section about what makes a good epoch number. Okay. When we run this, you're going to see some neat stuff.
During the first epoch, it's improving its accuracy: 83%, 85%, 86%. It's like, yep, nailed it.
Now it's going to make another pass. And you see how much the accuracy is improving as it goes.
You'll also see the loss going down as we do this. By now accuracy is at 97%, almost 98%, and the loss is even lower.
Epoch 4: accuracy has gone up to 98%, and the loss has gotten lower still.
Epoch 5: you can see there are some diminishing returns happening here. Sure, it's at 98%, almost 99%, but that's only 0.3% or 0.4% better than the previous epoch. Each jump is a little smaller than the one before it.
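The diminishing returns are easiest to see by looking at how much accuracy improves from one epoch to the next. The numbers below are illustrative, loosely matching the run described above, not real training output.

```python
# Illustrative per-epoch accuracies (assumption, not real output).
accuracies = [0.83, 0.95, 0.97, 0.978, 0.981]

# How much each epoch improved on the previous one.
improvements = [round(b - a, 3) for a, b in zip(accuracies, accuracies[1:])]
print(improvements)  # -> [0.12, 0.02, 0.008, 0.003]
```

Each jump is smaller than the last: the model gains 12 points of accuracy in epoch 2 but only about 0.3 points by epoch 5, which is the signal that extra epochs are buying less and less.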
And again, we'll talk about that a bunch in our next section.