Perspectives: Daniel Hulme, expert in AI and emerging tech and CEO of Satalia

Leanne Kelly
January 25, 2021

Daniel Hulme (PhD) is a leading expert in Artificial Intelligence (AI) and emerging technologies, and is the CEO of Satalia. Satalia is an award-winning company that provides AI products and solutions for global companies such as Tesco and PwC. 

Daniel is a popular keynote speaker specialising in the topics of AI, ethics, technology, innovation, decentralization and organisational design. He is a serial speaker for Google and TEDx, and is a contributor to numerous books, podcasts and articles on AI and the future of work.


Let’s start with a bit about you and your background.

My undergraduate degree was in Artificial Intelligence (AI), and I went on to do a Master’s and PhD in AI at UCL. I’ve kept one foot in academia by running a Master’s programme on Applied AI, and I’m an Entrepreneur in Residence for Computer Science. I work with people to help them understand and apply technology, so that it has a big impact on the world.

For the last 12 years, I’ve also been the CEO of Satalia. We build AI solutions for some of the biggest companies in the world. Before Covid-19, I was also fortunate enough to travel the world, educating business leaders, academics, and investors on the impact of technology on society.


Can you tell me a bit more about Satalia?

There are about 120 of us, and we build AI innovations for global companies like Tesco and PwC.

We’re considered a thought leader in AI, pioneering the future of work by combining technology with organisational psychology to create swarm-like organisations. We try to use these new technologies and organisational paradigms to reinvent how companies are structured - moving away from hierarchy and instead operating very much like a swarm.


Let’s talk about the work you’ve done with Tesco and PwC.

With Tesco, we co-created a last-mile delivery system, optimising vehicle schedules throughout the UK. They deliver to nearly 2 million customers a week and, as a result of the system, we were able to save them millions of miles, reducing costs and their carbon footprint.
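To make that concrete, here’s a minimal, hypothetical sketch of what “optimising vehicle schedules” can mean in code. It is not Satalia’s solver - production vehicle-routing systems handle time windows, capacities, and far stronger algorithms - just a nearest-neighbour heuristic over invented coordinates:

```python
import math

# Hypothetical depot and customer locations (miles on a flat grid).
depot = (0.0, 0.0)
customers = [(2.0, 3.0), (5.0, 1.0), (1.0, 7.0), (6.0, 6.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_route(start, stops):
    """Greedy route: always drive to the closest unvisited customer."""
    route, remaining, here = [], list(stops), start
    while remaining:
        nxt = min(remaining, key=lambda c: dist(here, c))
        route.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return route

route = nearest_neighbour_route(depot, customers)
miles = dist(depot, route[0]) + sum(dist(a, b) for a, b in zip(route, route[1:]))
print(f"visit order: {route}, total miles: {miles:.1f}")
```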

More recently, we built a workforce solution for PwC. The solution uses expert modelling and advanced optimisation algorithms to allocate thousands of auditors to clients. Our solution maximises effective utilisation, with a focus on employee wellbeing and diversity and inclusion.
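At its core this is an assignment problem. As a hedged illustration - the auditors, scores, and use of SciPy below are my invention, not a description of Satalia’s production system - the Hungarian algorithm can match auditors to clients so that total fit is maximised:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# fit[i][j]: how well auditor i suits client j (higher is better); invented scores.
fit = np.array([
    [9, 4, 7],
    [6, 8, 3],
    [5, 7, 9],
])

# linear_sum_assignment minimises total cost, so negate the scores to maximise fit.
auditors, clients = linear_sum_assignment(-fit)
for a, c in zip(auditors, clients):
    print(f"auditor {a} -> client {c} (fit {fit[a, c]})")
print("total fit:", fit[auditors, clients].sum())
```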


Satalia is passionate about using AI to combat some of the humanitarian challenges we face. Can you give me an example?

Our work with Tesco is a good example. We were able to help Tesco deliver to more customers using less fuel. They saved something incredible like 20 million miles a year, which is to the moon and back 40 times, so we were able to help them massively reduce their carbon footprint.

Most of our clients are in the private sector, but we want to work more with NGOs and governments in the future.


Can you tell me a bit about your TED talks?

Sure. I’ve done three so far and I’m doing another one this year.

My most recent talk was called ‘Unifying Humanity in the Digital Age’. I explored what a completely new social, economic, and political system would look like, and whether technology could remove the need for things like money. I also spoke about some of the global challenges facing humanity, and how we can unify and collaborate to solve them.


My TED talk later this year focuses on the impact of technology on society - but at a truly macro level, so the political and social implications. It’s something I’m passionate about.


There are so many definitions of Artificial Intelligence, what it is, and what it isn’t. It can often be mistaken for Machine Learning. Can you help clear up some of the confusion and offer a definition?

Sure, this is one of my favourite subjects to talk about!

There are two definitions of AI and unfortunately neither of them is very popular in industry. An issue we have when trying to define AI is that industry has made ‘AI’ synonymous with technology, and that’s incorrect.

The most popular definition is getting computers to do things that humans can do. Over the past 10 years, advances in technology mean that we’ve managed to get machines to do things that traditionally only human beings could do - things like recognising objects in images and responding to natural language. Because humans are the most intelligent things we know in the universe, when we manage to get machines to do ‘human tasks’, we assume it’s intelligence.

Now, I would argue that humans are not that intelligent, so benchmarking machines against humans probably isn’t the most sensible thing to do. For many decades we’ve been getting computers to do things better than humans.


And the second definition?

The second definition is much more rigorous and stems from a definition of intelligence: goal-directed adaptive behaviour. Ultimately, there’s a goal - for example, maximising the utilisation of your employees, or making as many deliveries as you can. Behaviour relates to how quickly you can move your resources to achieve that goal, or how quickly you can answer that question.

For me, the keyword in this definition is the word ‘adaptive’. You want to build a system that adapts, one that learns whether decisions are good or bad, so that it makes a better decision tomorrow. For the most part you don't see adaptive systems in production, so I would argue that actually nobody is really doing AI.
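A toy way to see ‘goal-directed adaptive behaviour’ in code is a system that tries decisions, observes whether they turned out well, and chooses better tomorrow. The sketch below is a textbook epsilon-greedy bandit with invented success rates, not any particular production system:

```python
import random

true_success = [0.3, 0.6, 0.8]   # hidden quality of three possible decisions
counts = [0, 0, 0]               # how often each decision has been tried
values = [0.0, 0.0, 0.0]         # learned estimates of decision quality

for step in range(1000):
    if random.random() < 0.1:    # occasionally explore a random decision
        choice = random.randrange(3)
    else:                        # otherwise exploit what has been learned
        choice = max(range(3), key=lambda i: values[i])
    reward = 1.0 if random.random() < true_success[choice] else 0.0
    counts[choice] += 1
    # Incremental running average: this update is the 'adaptive' part.
    values[choice] += (reward - values[choice]) / counts[choice]

print("learned values:", [round(v, 2) for v in values])  # approaches 0.3 / 0.6 / 0.8
```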


So, how would you define Machine Learning?

Machine Learning is simply a component of an AI architecture. It recognises patterns, which are then used in decision-making.

A common misconception is that companies have Machine Learning problems, when often they have decision-making problems. Machine Learning is a technology that’s good at finding patterns in data, but it’s not going to make decisions. It’s just one component of the AI stack - the broader concept is building adaptive systems, and Machine Learning feeds them.
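A small, hypothetical sketch of that separation: the Machine Learning part finds a pattern (here, a demand trend), and a distinct decision step turns the prediction into an action. All the numbers are invented:

```python
import numpy as np

days = np.arange(7)
demand = np.array([100, 108, 113, 122, 128, 133, 141])  # past daily deliveries

# 'ML' component: fit a linear trend to the historical pattern.
slope, intercept = np.polyfit(days, demand, 1)
forecast = slope * 7 + intercept             # predicted demand for day 7

# Decision component: the forecast alone decides nothing; a rule (or a full
# optimiser) must turn it into an action, e.g. how many vans to schedule.
vans = int(np.ceil(forecast / 30))           # assume one van covers 30 deliveries
print(f"forecast {forecast:.0f} deliveries -> schedule {vans} vans")
```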


Let’s talk use cases. Can you give me an example of a use case of AI for HR teams?

It’s useful in allocating staff to projects, particularly if the allocation process is complex - for example, if there are labour laws, skill preferences, and career development considerations to take into account. You can then go a step further and use Machine Learning to determine whether the right people were on those projects. How did they perform? Was there any feedback? We can then profile the quality of contribution, so we know how to allocate people to projects in the future.

We can profile people’s skills, and we can use their skill profile to allocate opportunities that align with both their aspirations and the aspirations of your company. We can join the process up into a full feedback loop.


Something that’s a major talking point for the HR community right now is bias in AI. Where could bias creep in?

If we take the classic example of screening people in interviews using AI, there are two areas where bias could be introduced.

If I only want smiley people, or people of a certain demographic, to progress to the next round of interviews, I’m introducing a biased objective.

If I then build and train a model using a dataset of smiley people of a certain demographic, I’m introducing bias, as it’s going to be hard for the model to identify people who fall outside those characteristics. So, the breadth and richness of the dataset will also determine the decisions of the Machine Learning component.
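One simple way to surface this kind of dataset-driven bias is to compare a model’s selection rates across demographic groups - the ‘four-fifths rule’ used in hiring analytics. A minimal sketch with invented outcomes:

```python
from collections import defaultdict

# (group, model_said_yes) pairs for a batch of screened candidates; invented data.
outcomes = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
            ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

selected, total = defaultdict(int), defaultdict(int)
for grp, passed in outcomes:
    total[grp] += 1
    selected[grp] += passed

rates = {g: selected[g] / total[g] for g in total}
print("selection rates:", rates)                  # A: 0.75, B: 0.25
ratio = min(rates.values()) / max(rates.values())
print(f"impact ratio: {ratio:.2f} (below 0.8 flags potential bias)")
```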


Is there a risk of bias if there’s a lack of diversity in those writing the code?

This is a bit of a myth. There’s a common misconception that if white middle-class males are writing algorithms, then they are programming their own bias. This isn’t the case; you don't program Machine Learning, you train it. Now, if you are giving it a data set full of bias to learn from, or if the objective itself is biased, then yes there’s a risk - but it’s not due to programming.


What can we do to reduce risk?

We need to have AI governance frameworks. Unfortunately, we tend to be reactive, and I suspect it will take some bad decisions made by AI before frameworks are put in place.

AI governance frameworks are slightly different from other governance frameworks, because there are two key components: safety and ethics. I’ll give you an example. Imagine a driverless car crashes into a traffic cone because it thinks it’s a turning. The safety consideration could be that the component hasn’t been trained on enough data. The ethical consideration could be what happens if there’s a family in front of the car and it can’t stop. Who does the car hit? The adults or the child? It’s extremely complicated, particularly the ethics. We have to agree on the right decision and then determine who is liable for making that decision.


Can you tell me a bit about the research you’re doing into AI and Diversity and Inclusion?

If we use gender as our example here, you may think you have a diverse team, because you have equal numbers of males and females - but is it inclusive? If all the females interact with other females and the males only interact with males, it’s possible that you’ve got diversity, but not inclusion.

If we come back to our workforce solution, we have technologies that understand the make-up of teams at a social level. We can profile people, and their relationships, and ultimately use technology to get a better understanding of the belonging and immersion of different demographic groups.
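As a toy illustration of that distinction (invented names and interactions, not Satalia’s profiling technology), you can compare the share of cross-group interactions against what random mixing would predict:

```python
from itertools import combinations

group = {"ana": "F", "bea": "F", "cai": "M", "dan": "M"}   # 50/50 by headcount
interactions = [("ana", "bea"), ("ana", "bea"), ("cai", "dan"),
                ("cai", "dan"), ("ana", "cai")]            # who talked to whom

cross = sum(1 for a, b in interactions if group[a] != group[b])
print(f"observed cross-group share: {cross / len(interactions):.0%}")   # 20%

# Baseline: share of cross-group pairs if interactions ignored gender entirely.
pairs = list(combinations(group, 2))
baseline = sum(1 for a, b in pairs if group[a] != group[b]) / len(pairs)
print(f"random-mixing baseline: {baseline:.0%}")   # 67% - diverse, but not inclusive
```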

You can find more information on the research here.


Something else you’re particularly interested in is company values and what they mean to an organisation.

There’s a really good website, 190 Brilliant Examples of Company Values. I find it fascinating because there’s so much overlap; so many companies have almost identical values.

Personally, I think we need a rethink. Rather than setting values top-down, I think we need to create the infrastructure for those behaviours to emerge naturally. If integrity is one of our values, we expect everyone to behave with integrity. I think we need to shift the focus and look at how we create organisational structures where people act with integrity. That means going further than just creating flat organisations; it means reinventing how organisations are actually structured. It’s really interesting.


What changes might we see to the future of work as a result of AI?

One concept which I think is cool is the notion of marketplaces within organisations. So instead of managers giving team members a project to work on, AI allocates the right people for the right opportunities. If we go back to my TED talk, I hope that in the future we don't have the concept of companies anymore. We’ll all just be individual workers, with a profile or portfolio that develops over time as we contribute to projects. Our portfolio will be available to everybody in the world and as a new project appears, we’ll be notified and can work on it. We’ll no longer be employed by one organisation but instead we'll be allocated to multiple projects, all facilitated by AI. But that’s a while away.


What’s next for Satalia?

We want to show that we can build innovations. We want to show that technology can be used to organise companies in a way that’s efficient and that allows people to perform optimally. Satalia will continue to be the experimental playground for all our crazy ideas.

I'm trying to pioneer what it means to be a purposeful company. We want to have the biggest positive impact we can have on the world. I don't just want people to love their work, I want them to be free to give to other people, and to contribute to society and humanity.

Thanks Daniel.

Click here to learn more about Satalia.

If you're interested in being interviewed as part of the BPS Perspectives series, I'd love to hear from you. You can get in touch with me at leanne.kelly@bps-world.com.
