I never expected to learn about people by writing code. I didn’t go into data science thinking it would make me more aware of the assumptions we live with. But over the past few years, I’ve come to realise that every model we build reflects not just data, but the decisions we make about the world.
At InLogic, I’ve been working on AI systems for property and art: tools designed to help staff retrieve information or estimate the value of a painting. These projects are still in progress, but I’ve already come across gaps. Sometimes a valid question returns nothing because the wording isn’t what the system expects. Sometimes a relevant result gets missed because of a slight difference in how an artist’s name is written. These moments aren’t just bugs. They point to a deeper issue: the system’s idea of how things should be doesn’t always line up with how people actually work.
That realisation didn’t start here. At university, I built a model to predict heart failure readmissions using real hospital data. I worked with clinical records and tested different classifiers. But even with high accuracy, I couldn’t shake the feeling that something was missing. These weren’t just numbers; they were patients. A wrong prediction wouldn’t just mean a mislabelled datapoint. It could affect someone’s life.
In another project, I helped develop a real-time pipeline to track traffic, pollution, and weather across the east coast. It handled over half a million data points a day. Technically, it worked. But the more I watched the visualisations update, the more I thought about the people behind them. Someone was living under that red cloud on the map. Someone else was stuck in that rush hour pattern every weekday. What responsibility do we have when we turn people’s daily realities into rows of numbers?
I’ve learned that building AI is not just about getting it to work. It’s about asking what the system assumes, who it includes, and what kind of world it quietly supports. If it’s built only for efficiency, or trained only on what’s been done before, it risks leaving people out, sometimes the very people who need it most.
The AI we build doesn’t just sit in the background. It guides decisions. It sets expectations. And sometimes, it reinforces biases that were never meant to be encoded. That’s why I’ve learned to design carefully, and to always keep people, not just patterns, at the centre.
AI can do incredible things. But only if we teach it to see the world as it really is: messy, unequal, and full of meaning we haven’t yet defined in code.