The AI We Build is the Society We Shape

InLogic Media

Working across art, healthcare, and property, I’ve learned that building AI isn’t just about models; it’s about the values we carry into them.

Jun 27, 2025

Brian King, AI Engineer

Across different projects, both in my academic journey and at InLogic, I’ve helped build AI systems that work in spaces most people forget to design for. Along the way, I’ve realised the hardest part of building AI isn’t the code. It’s asking who the system is for, who it leaves behind, and whether it understands the world it’s trying to help.

I never expected to learn about people by writing code. I didn’t go into data science thinking it would make me more aware of the assumptions we live with. But over the past few years, I’ve come to realise that every model we build reflects not just data, but the decisions we make about the world. 

At InLogic, I’ve been working on AI systems for property and art, tools designed to help staff retrieve information or estimate the value of a painting. These projects are still in progress, but I’ve already come across gaps. Sometimes a valid question returns nothing because the wording isn’t what the system expects. Sometimes a relevant result gets missed because of a slight difference in how an artist’s name is written. These moments aren’t just bugs. They point to a deeper issue: the system’s idea of how things should be doesn’t always line up with how people actually work. 
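The artist-name problem is a concrete example of where an exact-match lookup quietly fails. As a minimal sketch (the names below are illustrative, not from the actual system), normalising and fuzzy-comparing strings with Python's standard-library `difflib` shows how small formatting differences can defeat a strict comparison while still being recoverable:

```python
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    """Lower-case, drop dots, and collapse whitespace so trivial
    formatting differences don't block a match."""
    return " ".join(name.lower().replace(".", " ").split())

def name_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two normalised names."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

# An exact string comparison treats these as different artists;
# after normalisation they match exactly.
print(name_similarity("Vincent van Gogh", "Vincent Van Gogh"))  # 1.0

# Historical spelling variants still score high, so a threshold
# (rather than equality) can surface them as candidate matches.
print(name_similarity("Hieronymus Bosch", "Jheronimus Bosch"))
```

In practice a production system would likely combine this kind of fuzzy scoring with a curated table of known name variants; the point here is only that "the wording isn't what the system expects" is a design choice, not an unavoidable limit.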

That realisation didn’t start here. At university, I built a model to predict heart failure readmissions using real hospital data. I worked with clinical records and tested different classifiers. But even with high accuracy, I couldn’t shake the feeling that something was missing. These weren’t just numbers; they were patients. A wrong prediction wouldn’t just mean a mislabelled datapoint. It could affect someone’s life.
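The gap between "high accuracy" and "something was missing" can be made precise. With illustrative numbers (these are not from the actual project), a readmission model can score over 90% accuracy while missing most of the patients who are actually readmitted, simply because readmissions are the minority class:

```python
# Hypothetical confusion-matrix counts for 1000 patients,
# of whom 100 are truly readmitted and the model catches only 20.
tp, fp, tn, fn = 20, 10, 890, 80

def accuracy(tp: int, fp: int, tn: int, fn: int) -> float:
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + fp + tn + fn)

def recall(tp: int, fn: int) -> float:
    """Fraction of true readmissions the model actually caught."""
    return tp / (tp + fn)

print(accuracy(tp, fp, tn, fn))  # 0.91 -- looks excellent
print(recall(tp, fn))            # 0.2  -- 80% of readmitted patients missed
```

This is why class-sensitive metrics like recall (and the cost of a false negative) matter more than headline accuracy when the mislabelled datapoint is a person.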

In another project, I helped develop a real-time pipeline to track traffic, pollution, and weather across the east coast. It handled over half a million data points a day. Technically, it worked. But the more I watched the visualisations update, the more I thought about the people behind them. Someone was living under that red cloud on the map. Someone else was stuck in that rush hour pattern every weekday. What responsibility do we have when we turn people’s daily realities into rows of numbers? 

I’ve learned that building AI is not just about getting it to work. It’s about asking what the system assumes, who it includes, and what kind of world it quietly supports. If it’s built only for efficiency, or trained only on what’s been done before, it risks leaving people out. Sometimes the people who need it most. 

The AI we build doesn’t just sit in the background. It guides decisions. It sets expectations. And sometimes, it reinforces biases that were never meant to be encoded. That’s why I’ve learned to design carefully, and to always keep people, not just patterns, at the centre. 

AI can do incredible things. But only if we teach it to see the world as it really is: messy, unequal, and full of meaning we haven’t yet defined in code.

More Articles

Brian King

Jun 27, 2025

The AI We Build is the Society We Shape

Working across art, healthcare, and property, I’ve learned that building AI isn’t just about models; it’s about the values we carry into them.

Franz Pretorius

Jun 14, 2025

Open-Source vs Closed-Source AI Models: An Evolving Landscape

By 2025, open-source AI models, prized for their flexibility and transparency, have begun challenging closed-source giants known for industry-leading performance and support. Meta’s LLaMA 3 directly competes with OpenAI’s GPT-4.5, reshaping the AI landscape by offering powerful, community-driven alternatives.

Nathan Luo

May 29, 2025

Cheaper, Smarter AI

AI costs are falling at an unprecedented pace, turning advanced models into an affordable utility. OpenAI’s 80-percent price cut and Sam Altman’s forecast of ten-fold annual declines reveal a clear trajectory: every year delivers dramatically more compute for less money.

Alina Noor

May 24, 2025

Explainable AI: Building Trust in Intelligent Decisions

Transforming AI from a mysterious "black box" into a transparent decision-making partner, Explainable AI empowers businesses to confidently leverage artificial intelligence for critical decisions.
