When Copy‑Paste Stops Working
In marketing, we live and die by templates. Landing page templates. Ad copy frameworks. Campaign recipes. Machine learning wouldn’t be that different, right? Just plug in your dataset, borrow someone’s notebook from Kaggle, tweak a few parameters, and be done with it.
That’s how a lot of people approach it. And to be fair, it kind of works, until it doesn’t. You get a model that runs, a prediction that looks reasonable, a chart that impresses in a meeting. But under the surface, there’s often a gap. You don’t really know what’s going on.
It’s like using a microwave without understanding how it heats food. You press buttons. It beeps. Something comes out hot. But if it breaks or if you need it to do something different, you’re stuck.
That’s the part I couldn’t accept. I don’t want to be a microwave user. I want to be the person who understands how it works.
Why I’m Going Back to Basics
I’ve finished my first year of a master’s in data science. A lot of it was applied—building models, using libraries, working on projects. Some of it covered math. Enough to get by.
But now I’m in the second year. And I want to go deeper. Not just because it’ll make me a better engineer, but because I don’t like the feeling of bluffing my way through a concept I barely understand.
I’ve started spending time each day revisiting the fundamentals. Reading slowly. Solving problems. Writing code from scratch.
The two books guiding me right now are:
- Introduction to Statistical Learning with Python — clear, practical, and great for intuition.
- Mathematics for Machine Learning — a bit heavier, but it’s giving me the tools I skipped over before: linear algebra, calculus, and probability.
It’s not fast, but it’s already changing how I think about problems.
From Marketer to ML Engineer
Switching from marketing to machine learning isn’t about starting over. It’s more like building a bridge between two worlds.
Marketing taught me how to spot patterns, think about behavior, and tell a story around data. Machine learning gives me the tools to dig deeper into those patterns—to test them, model them, and make them more than just gut instinct.
Understanding the mechanics matters. I want to be able to write a function and know exactly what it’s doing. Not just trust the output, but trace it. Break it. Fix it.
That mindset is shaping how I approach AI, too. Instead of relying on GPT like a magic box, I’m starting to study retrieval-augmented generation (RAG), embeddings, and the actual architecture behind language models.
What This Looks Like (So Far)
Here’s the learning path I’m setting up right now:
- Revisiting the math: linear algebra, probability, and calculus, using Mathematics for Machine Learning
- Applying it to ML: working through ISLR with Python, doing the coding exercises by hand
- Building basic models from scratch—linear regression, logistic regression, simple neural nets
- Exploring the foundations of modern AI: transformers, vector search, RAG systems
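To make the “from scratch” item concrete, here’s the kind of exercise I mean: linear regression trained with plain gradient descent on mean squared error, using nothing but NumPy. This is my own minimal sketch, not code from either book—the function name and hyperparameters are just illustrative choices.

```python
import numpy as np

def fit_linear_regression(X, y, lr=0.01, epochs=1000):
    """Fit y ≈ Xw + b by gradient descent on mean squared error."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        pred = X @ w + b
        error = pred - y
        # Gradients of MSE = (1/n) * sum((pred - y)^2) w.r.t. w and b
        grad_w = (2 / n) * X.T @ error
        grad_b = (2 / n) * error.sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Tiny sanity check: recover y = 3x + 1 from noiseless data
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 3 * X.ravel() + 1
w, b = fit_linear_regression(X, y, lr=0.02, epochs=5000)
```

Writing out the gradient updates by hand, instead of calling `sklearn.linear_model.LinearRegression`, is exactly the point: every term in `grad_w` comes straight from the calculus and linear algebra in Mathematics for Machine Learning.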
This is a long-haul effort. I’m not sprinting. I’m building something I can stand on.
If You’re on a Similar Path
If you came from a non-technical background and now find yourself deep in machine learning: hi, I know how disorienting it can be. Everyone around you seems to speak fluent numpy and think in eigenvectors.
You don’t need to fake it. You can build your way in. That’s what I’m doing right now.
I’ve got one year of a data science master’s behind me, and a couple of months into the second. Some foundations are there. But I want more. I want to go beyond the surface. That’s why I’m relearning the math. Writing code by hand. Slowing things down so I actually understand what I’m building.