Overfitting to the Test Set: AWS Certified Cloud Practitioner Exam
In an effort to build a constructive path to continuing education, I decided I wanted to "work my weaknesses," an adage we use in fitness. I have always had a keen eye when it comes to self-awareness: not only am I aware of what I can do, but I am also aware of what I cannot do.
This led me toward learning more about AWS, arguably one of the most highly sought-after "branded" skills. I put "branded" in quotations to make sure everyone knows it's a term I just made up. What I mean is that AWS (Amazon Web Services) is a cloud provider, so to be certified in a given technology you have to choose a "brand" with well-respected certifications in order to make yourself stand out. Having worked with AWS in the past, and having seen it appear on countless job applications, it looked to be my most fruitful path forward.
Now to the title. Overfitting to the test set: this is absolutely a pun. The idea for the title, and most of the topic of this blog, came about after I had finished about 600 practice questions in preparation for the exam. I had previously gone through 6–10 hours' worth of videos that covered, in depth and with examples and labs, what was going to be on the exam. Now the goal was to build my understanding around the testing format itself.
The test was structured as single-answer and multi-answer multiple choice, so to pass I wanted to make sure I had adequate practice in that format. When you know the structure of an upcoming test and don't practice in that structure, you are not giving yourself every possible advantage.
I would answer questions until I got one wrong. When I did, I would identify the area in which my knowledge was lacking, create a notecard, study that material until I understood it, and go back to answering more questions. This proved effective, as I passed the exam on the first try.
Now to the more complex idea of this blog. My "model" (aka my brain) was overfitting to the "test set" (aka the exam). In an ideal world, I would have loved to create a project that utilized every single service the exam covers and build an incredibly robust, applicable cloud solution that stood tall against all 5 pillars of the Well-Architected Framework while leveraging the incredible services AWS has to offer. But… I lack the data, the monetary investment, and the time to build that platform.
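For anyone who hasn't seen overfitting in the machine-learning sense, here is a minimal sketch of the phenomenon using synthetic data (not the exam, obviously). A flexible model can memorize a small "training set" almost perfectly while doing much worse on fresh data from the same process; numpy's `polyfit` stands in for the model here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny "training set": 10 noisy samples of a simple underlying function.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)

# Fresh "test set" drawn from the same process.
x_test = np.linspace(0.02, 0.98, 50)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 50)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# A degree-9 polynomial can thread through all 10 training points...
overfit = np.polyfit(x_train, y_train, 9)
# ...while a degree-3 polynomial only captures the broad trend.
simple = np.polyfit(x_train, y_train, 3)

print(f"degree 9: train MSE={mse(overfit, x_train, y_train):.4f}, "
      f"test MSE={mse(overfit, x_test, y_test):.4f}")
print(f"degree 3: train MSE={mse(simple, x_train, y_train):.4f}, "
      f"test MSE={mse(simple, x_test, y_test):.4f}")
```

The degree-9 fit posts a near-zero training error (it essentially memorized the data) but a much larger test error, while the simpler model's two errors stay comparable. That gap between training and test performance is overfitting, and it's exactly the gap between drilling 600 practice questions and having production cloud experience.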
What does that mean? It means that not every test is completely reflective of the result. Don't get me wrong: I have seen a massive improvement in my understanding of the services AWS offers, and through instruction from multiple free resources I have gained a much better grasp of many universal cloud principles that will do nothing but fill my bucket (an S3 bucket, if I had the choice… 🤦🏼♂️) with far more knowledge than I had before.
No test is without its faults, but when we relate these ideas back to machine learning (ahem, the title 🔝), we need to be aware of the metrics we use to define the success of a model. Someone much smarter than me once said, "As soon as we look to optimize for the success of a certain metric, it fails to be a true measurement of success" (essentially a restatement of Goodhart's law). I loved that idea. When we measure for success, it need not be based on just accuracy or pass/fail, but on how the model interacts with the data, and how we can then use that model to build on what we already know.
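A quick illustration of why accuracy alone can be a misleading yardstick: on imbalanced data, a "model" that does nothing useful can still post a great accuracy score. This is a toy sketch with made-up labels, not any real classifier.

```python
import numpy as np

# Hypothetical imbalanced labels: 95 negatives, 5 positives.
y_true = np.array([0] * 95 + [1] * 5)

# A "model" that always predicts the majority class.
y_pred = np.zeros_like(y_true)

# Accuracy looks great: 95% of predictions match the labels.
accuracy = float(np.mean(y_pred == y_true))  # 0.95

# But recall on the positive class is zero: it never finds a positive.
true_pos = int(np.sum((y_pred == 1) & (y_true == 1)))
recall = true_pos / int(np.sum(y_true == 1))  # 0.0

print(f"accuracy={accuracy:.2f}, recall={recall:.2f}")
```

The moment we treat that 95% as the target, we've optimized a number while ignoring the outcome we actually care about, which is exactly the failure mode the quote warns against.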
While building models in practice and in production, an important concept is to optimize for applicability. A model is most useful when it serves the desired outcome, not when it posts the highest score on a metric. Here comes another quote, or rather a conversation:
"You know what they call the guy that got straight As and a perfect score on his MCAT…?"
“You know what they call the guy that just got passing grades and struggled through school…?”
The idea I like to get across in comparing this to building a model is that it's not always about having the fanciest model or the highest score; it's how you use that model. How are you going to build it into production? What is the leanest, most effective model that yields the results you desire? All of these questions need to be answered before the model work begins, before you get too deep in trying to optimize hyperparameters. Model building starts before you even look at the data. It starts with defining the problem, and thus defining what a successful outcome would look like.
Successful outcomes come from increased preparation far more often than from creative on-the-spot solutions.