Doctor Awesome welcomes Daniel Hulme, a leading expert in artificial intelligence with over 25 years of experience in the field, to discuss singularities and how AI will shape our future. Daniel shares his insights on AI’s potential impact on society, the economy, and human consciousness, offering an optimistic view that emphasizes the importance of responsible development and implementation. He challenges us all to consider how AI could free humanity from economic constraints, allowing people to pursue their true passions and contribute meaningfully to a protopian society.


The Future of AI and Human Potential – Navigating Singularities with Daniel Hulme

Hey, everybody. Welcome back to The Futurist Society. As always, I’m your host, Doctor Awesome. And today, as always, we are talking in the present, but talking about the future.

Today I have a really special guest, Daniel Hulme, who is a thought leader in artificial intelligence, how it interacts with business, and many other related topics. Thanks so much for joining us, Daniel. Tell us a little bit about yourself, how you got into AI, and also what the future holds for AI.

Thank you for your time. I’ve been involved in AI for 25 years. My undergraduate was in AI at UCL, which is one of the world’s leading academic institutes, and where I did my PhD. I have run a master’s program in applied AI for many years. I’m currently Entrepreneur in Residence for UCL, so I help them spin out deep tech companies.

I started a company 16 years ago that’s been building AI solutions for some of the biggest companies in the world. I sold that company three years ago to WPP, which is the biggest media marketing communication company in the world, where I continue to be the CEO of Satalia, which you can think of as being WPP’s DeepMind. I’m also the chief AI officer for WPP, so I coordinate AI across about 120,000 people. I get to invest in AI. And I’ve just started a company, actually, to try to solve machine consciousness, which maybe we’ll talk about later on.

That’s super interesting. I feel like artificial general intelligence and machine consciousness border on the realm of philosophy, and so it’s very ethereal. So I’d love to hear your take on it.

I think that’s probably where we’ll start because I think most people have this existential threat idea of AI. They’re very scared of it, they’re worried that it will gain consciousness. So tell us what your thoughts are, because I know that you’re an optimist about it.

My day job is bringing algorithms or AI into organizations to solve problems, make things more efficient, make things more effective. So for the past 16-17 years, I’ve been applying AI to solving problems across supply chains. That’s my day job. And, you know, doing that in a safe and responsible way is hard but it’s something we’ve been learning to do extremely well over the past decade.

Singularities

Obviously, I do think about the macro impact that these technologies will have on society. About four or five years ago, I did a TEDx talk that referred to six singularities. I’m sure that most of the listeners have heard the term singularity. Singularity comes from physics. It’s a point in time that we can’t see beyond.

It was adopted by the AI community, notably Ray Kurzweil, to refer to the technological singularity, which is the point in time where we build a brain a million times smarter than us. And a good friend of mine coined the term the economic singularity, which is the point in time where we automate the majority of human labor. I realized there’s probably an environmental singularity, which is where we either gain control or lose control of our ecosystem. And I realized that those three words, technological, environmental, and economic, are three of the words in PESTLE.

Now, I don’t know if anybody’s written a business plan before, but they might have come across the PEST or PESTLE analysis. PESTLE stands for political, economic, social, technological, legal, and environmental. And so I asked myself, is there a PESTLE of singularities: points in time that we can’t see beyond that AI is accelerating us towards?

And I should just emphasize, when we hear the terms associated with a singularity, we often hear them in a negative way. But the point of a singularity is that we don’t know what will happen beyond that point. It could be incredibly good for humanity, or it could be incredibly bad. So maybe it would be helpful to quickly go through the PESTLE of singularities. And I’ll come back to this question of machines and conscious machines.

So the political singularity, I think, is something we’re facing every day. It’s the choice between a world where we know what is true, where we can guarantee and prove what is true, and a post-truth world. And, you know, AI misinformation bots have challenged our political foundations, and they seem to be continuing to challenge them, but they’re also challenging the fabric of our reality. So, you know, with a few dollars and a little bit of data, you can clone people, you can clone their voice, you can clone their appearance. That’s being used by bad actors to commit fraud and all sorts of stuff. So I actually think that we can solve that post-truth world by creating infrastructure that guarantees or authenticates content. So that’s the political singularity.
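(To give a flavor of what content-authenticating infrastructure could mean in practice, here is a minimal sketch in Python, assuming a shared signing key for simplicity. Real provenance schemes such as C2PA use public-key signatures and certificate chains; the key and the content strings below are invented for illustration.)

```python
# Minimal sketch of content authentication: a publisher signs content,
# and anyone holding the key can verify it hasn't been altered.
# Real provenance systems (e.g. C2PA) use public-key signatures and
# certificate chains; the key and content below are invented.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical key for illustration


def sign(content: bytes) -> str:
    """Produce an authentication tag for a piece of content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()


def verify(content: bytes, signature: str) -> bool:
    """Check that content matches the tag it was published with."""
    return hmac.compare_digest(sign(content), signature)


original = b"A video statement by the candidate."
tag = sign(original)
print(verify(original, tag))                  # True: content is authentic
print(verify(b"A doctored statement.", tag))  # False: content was altered
```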

The environmental singularity I’ve already alluded to, which is where we either gain control or lose control of our environment. And I would say that if we apply algorithms and AI in the right way, over the next decade, I think we can get control of our ecosystem. Now, it’s a big if. We need to make sure that we’re applying these technologies to reduce the carbon it takes to create and disseminate goods. But I think it’s possible.

The point of a singularity is that we don’t know what will happen beyond that point. It could be incredibly good for humanity, or it could be incredibly bad.

The social singularity is not my expertise. It’s often referred to as the Methuselarity. It’s the point in time where we cure death. And AI is advancing medicine rapidly. It’s able to monitor our cells, and eventually it will be able to clean them out. And I guess the idea is, if you can stay on top of cellular damage, then, like a car that’s constantly repaired, the body would never, ever break down. I don’t know what the world will look like if we realize there are people amongst us that won’t have to die. But there are scientists who believe there are people alive today that might not die.

The technological singularity we’ve already talked about, which is the point in time where we become the second most intelligent species on this planet. And I guess my community felt superintelligence wasn’t going to happen for another 30 or 40 years. We now think it might happen in the next ten or 20 years. And there are, I guess, two types of superintelligence.

One type you might argue is a zombie superintelligence, where you’ve given it a goal. That goal might be to eradicate cancer. But, of course, the easiest way to eradicate cancer is to eradicate humans. Or, the classic example, the goal is to manufacture paperclips. And so it uses all of the resources on our planet to manufacture gazillions of paperclips without being aware of the impact of those decisions on itself and the rest of humanity. So, how do we mitigate the risk of building a zombie superintelligence that ultimately makes bad decisions for the wider good?

I don’t think, necessarily, large language models will get us to consciousness. There is a suggestion that you need embodiment. You need to have innate drives, evolutionary drives let’s say, to direct you. And then you need to have this concept of feeling: being able to feel whether the decisions you’re making in your environment are good or bad.

That said, there is a new emerging technology that I’ve been tracking for the past few decades now called neuromorphic computing, which is modeled much more closely on how our brains work. Our brains actually operate on the power of a light bulb. We learn very quickly. And I think that these neuromorphic technologies will end up being embedded in drones, in physical devices, that actually might end up becoming self-aware.

And what we want to do is make sure that we’re not creating “mind crime”, which is a term that Nick Bostrom coined, where we’re essentially putting conscious beings in torturous situations. And so the startup that I’ve created, called Conscium, is trying to figure out: how do we understand what consciousness is? How do we identify behavioral markers of consciousness in machines? How do we align machines with some sort of value system? So it’s really facing into how we build a potentially safe superintelligence over the next decade.

I’ve been very lucky because I’ve managed to gather together what I think is the world’s leading advisory board of thinkers on consciousness.

The fifth singularity is the legal singularity, which is around surveillance and persuasion. We know that AI is very, very good at understanding people’s behavior. It’s potentially very good at influencing people, manipulating people to get them to do things that they should or shouldn’t be doing. That’s an incredibly powerful position to be in. And we want to mitigate the risk of that technology being used by bad actors to accumulate more wealth and more power.

And then there’s the economic singularity, which is the point in time where we automate the majority of human labor. I’ve been building AIs for the past 15 years, and people have not lost their jobs. That said, I think over the next ten years we’re going to see a Cambrian explosion of new innovations, new opportunities for people to contribute in ways that we haven’t been able to before. But beyond ten years, nobody knows what they’re talking about.

I’ll just kind of give you two extremes, and then I’ll shut up.

The first extreme is that if companies can free up whole jobs, they probably will. And if that happens very quickly, we might end up with technological unemployment, with our economy not able to rebalance fast enough. That might lead to social unrest. I think it’s a real risk, and we need to really face into it. And things like UBI (Universal Basic Income) and four-day working weeks could be some of the answers to take the edge off that problem.

There is a controversial view, though, that we should be accelerating towards this singularity as fast as possible. If we can automate away the friction in the creation and dissemination of goods like food, healthcare, education, transport, and energy, we can actually bring the cost of those goods down to zero. So imagine being born into a world where people don’t have access to paid work, but everything they need to survive and thrive as a human being is free. It’s abundant.

Now, people say to me, “Daniel, what would I do if I didn’t have a job? A job defines who I am as a human being.” I know lots of people who don’t have jobs, because they’ve become financially independent or whatever, and they’re not sitting at home bored and depressed. They typically use their time to contribute to humanity.

And I ask all of my audiences this when I engage them: what would you do if you didn’t have to do paid work and everything was free, everything was abundant? Most people say, I’ll travel, I’ll play golf, I’ll indulge in my hobbies, I’ll spend more time with my family. And if you keep pushing people long enough and hard enough, they say the same thing, which is that they want to do something that contributes to humanity.

I think we all have an innate desire to contribute to humanity, and I think that if we apply AI in the right way, over the next decade, we can free people from those economic constraints to live their true humanity, which is to not just live for themselves, but to live for others.

That’s a very high-level analysis, and I really appreciate it. You’ve introduced me to new words, like “mind crime”, that I’m going to go back and Google to find out what the next stage of artificial intelligence has in store for us.

That being said, what do you say when it comes to your own family members? I’m sure that, just like in my own family, there’s a certain number of people who have a real hesitancy towards technology, advancement, and progress. And here you are, in the field, making this thing that everybody’s talking about in very scary terms. What do you say to the layperson to help them understand that AI is going to be good for them?

Well, first of all, I think that AI doesn’t have intent yet. Human beings have intent, and it’s the intent that ultimately drives the use of these technologies. Technology is neither good nor bad; it’s the intended use. And so what I try to encourage people to do is hold themselves accountable, hold their leadership accountable, hold their governments accountable for making sure that we’re using these technologies in the right way.

It’s the intent that ultimately drives the use of these technologies. Technology is neither good nor bad; it’s the intended use.

Actually, I think enterprise is the main driving force behind these technologies. The purpose of most enterprises is to create some sort of abundance or to do some sort of positive in the world. Now, if AI can help enterprises achieve their purpose very, very quickly, then it’s the effective pursuit of that purpose that actually could make this world better.

Risks Associated with AI

So what I say to my friends and family is, first of all, educate yourself about what these technologies are and aren’t. And broadly, there are sort of three categories of risk associated with AI.

The first category of risk is implementing these technologies in the day-to-day, in your organizations, in a safe, responsible and ethical way. And that comes down to intent. Am I using these technologies to enrich my employees’ experiences? Or am I using them to exploit people? The intent needs to get scrutinized. There are very real safety questions you need to ask yourself when implementing these technologies.

The second category of risk is malicious risk: essentially preventing bad actors from creating pathogens and misinformation and all that kind of stuff. And I think it is the role of government and regulation to make sure that we’re preventing bad actors from using these technologies for bad reasons.

And then there are the macro risks that I’ve mentioned, which are the six singularities. I do a lot of podcasts and public speaking about AI, and I think if people are aware of where we are moving as a species over the next 10, 15, 20 years, it’s ultimately down to all of us to make good decisions. We can make better decisions if we educate ourselves. So that’s what I encourage my friends and family to do: educate yourselves. Don’t get seduced by the scaremongering that’s in the press, on LinkedIn, or any of the social media channels. Educate yourself, engage with experts, and also really hold yourself to account for making sure that you’re using these technologies in a way that’s positive.

So I am a firm believer in the positive aspects of artificial intelligence. But one thing that I do worry about is that it seems to be concentrated in a few hands. It’s not as distributed as, say, the cellular phone, which has spread throughout society. You know, people in Africa have cell phones, and it’s really improved their lives.

I feel like when I look at artificial intelligence, it’s only a handful of actors that have the capacity, the compute, and the power to do these kinds of things. And I wonder, if I’m a small-to-medium-sized business (let’s say I have a few hundred people that work for me), rather than a large company, like some of the Fortune 500 companies that you’ve worked for or built AI for, is this going to trickle down to have some benefit for me? Or is it just going to be another scenario, which you kind of alluded to in one of your answers, where it’s just going to give more power to those that are already powerful?

Access and the Trickle Down Effect

I think it already is trickling down. We’re obviously seeing big tech companies creating these technologies. But what we’re also seeing, ironically, is that the way they compete with each other is by making some of them open source, trying to fight against an organization that has somehow captured the market and has a very strong commercial model.

One way to beat that commercial model is to make these technologies free. And if you make them free, you get access to talent, you get access to data. Ironically, we’re seeing the open-sourcing, and in some respects the giving away for free, of probably the most powerful technology that humanity has ever created.

Access is a different question. And you made a very good point: people do have access to mobile phones, at least smartphones connected to the Internet, but not everybody on the planet, unfortunately, has access to this type of technology. By the way, think about how many phones you go through every couple of years. If those phones were recycled or given to the people that don’t have access to smartphones, and if they have access to the Internet, then first of all, education becomes free, because pretty much all of the things that you need to learn, certainly technologically, are free on the Internet. And second, people will be able to access these open-source models and get them to go and build stuff. You’ll be able to describe an application, a technology, or whatever, and these agents, these large language models, will be able to go out there and construct it.

So actually, I think that it is not just trickling down, it’s going to be a waterfall that’s going to wash over many, many people and give them access to these phenomenally powerful technologies. I think we need to give them access to those technologies. And what I hope will happen is that, as more people become free from these economic constraints, their impulse will be to try to make the world better. And making the world better means giving access, democratizing these technologies to allow people to use them for useful things.

The example I always give is that you might have, for example, old people in old people’s homes, sitting there bored and depressed, and you might have people in some corner of society that want to access some knowledge base, or want to learn a new language, or just have a conversation. If somebody had access to these technologies, they could describe an app that would connect those two types of people: those old people with those people that want to learn. There might not be a commercial model there, but it gives people purpose and meaning and a way to contribute to humanity.

This concept is called a protopia. It’s not a utopia. I don’t think anybody would agree on what a utopia is. I think we agree on what a dystopia is. Well, a protopia is essentially a system, a mechanism, a flywheel effect, where things are only getting more positive. And I think things will become more positive if we can free people up from these economic constraints, so they can use their creativity to free other people up and enable those people to use their creativity to make the world better.

Yeah, I look forward to that future. I think about the amount of drudgery and menial tasks that I do even in my own life on a regular basis; I would love to have an AI assistant that does that for me. Let me give you an example. I order the same thing for lunch every single day, and the only thing that varies is the time that I order that lunch, right? So I go on Uber Eats, I order the same chicken salad from Sweetgreen… and, you know, I have friends in the AI community. I live in Cambridge, Massachusetts, so there’s this whole ecosystem around MIT that’s doing all this kind of stuff. And they’re like, Imran, it would be so easy for you to just make a simple little bot that does this. But I don’t know how to do that, I’m a surgeon.
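(For the curious, a bot like that really can be small. Here is a minimal sketch in Python: the timing uses the real third-party schedule library, but place_usual_order() is a hypothetical stub, since Uber Eats has no public ordering API; a real version would need a delivery platform’s partner API or browser automation.)

```python
# A minimal "order my usual lunch" bot. Uses the third-party `schedule`
# library (pip install schedule) for the timing; the actual ordering
# step is a hypothetical stub, because Uber Eats has no public ordering
# API. A real version would need a partner API or browser automation.
import random
import time

import schedule


def place_usual_order() -> None:
    # Hypothetical stub: a real bot would call a delivery platform's
    # partner API or drive a browser here.
    print("Ordering the usual chicken salad from Sweetgreen...")


# Pick a slightly different lunchtime at startup, since only the time varies.
minute = random.randint(0, 45)
schedule.every().day.at(f"12:{minute:02d}").do(place_usual_order)

while True:
    schedule.run_pending()
    time.sleep(60)
```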

So I wonder, when is it going to trickle down to use cases like this, where small little benefits of everyday life for people are going to be affected? Or even in my company: I employ 200 people, and I know there are people out there doing repetitive tasks like data entry and things like that. I just don’t see that happening now, but maybe I’m just naive because I’m not in that space.

So tell us a little bit about, like drudgery, when is that kind of going to go away? Because I feel like that’s the AI promised land. That’s when people are going to sign up and be like, “Oh, my God, this stuff is amazing.”

According to most people, AI has only been around for the past few years. It’s actually been around for over 70 years. And so I think we have to remember that AI as it is now is the dumbest it will ever be. It’s only going to get smarter.

I use this very bad analogy that at the moment these AIs are a little bit like intoxicated graduates. You can ask them questions and they will probably give you an answer. You don’t know if that answer is right or wrong, but they do a good job of getting the answer for you. What we’re going to see over the next several years is a graduation from being intoxicated graduates to being at a Master’s level, where they have the ability to reason and apply, maybe, the scientific method to a relatively straightforward problem. In 18 months to two years’ time, we’ll have a PhD level. I’m not talking about PhD-level knowledge, I’m talking about PhD-level capabilities. So you’ll be able to say, go and solve this complex problem, and it’s able to break the problem down into multiple tasks, set up experiments or hypotheses, and then test those and solve those tasks. Then there’ll be a postdoc, and then it’ll be a professor. We’ll have a professor in our pocket probably by the end of this decade.

I think things will become more positive if we can free people up from economic constraints, so they can use their creativity to free other people up and enable those people to use their creativity to make the world better.

So right now it’s very good at answering questions in a relatively confident but sometimes incorrect way. I think reasoning is going to be the next level of intelligence, the next step change: mathematical reasoning, logical reasoning, which we’re probably going to see in the next year. And then I think we’re going to start to see tasks being solved on your digital machine in the next two to two and a half years. So you’ll be able to say, go and order me my usual salad from Sweetgreen, or go and send this email to a person. It knows that it needs to go to your email and find that person, because it’s breaking the problem down into smaller chunks that it’s solving for. So I think we’re going to see a step change over the next six to nine months, and then another step change in 18 months.
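(To make the “breaking the problem down into smaller chunks” idea concrete, here is a minimal sketch in Python. It is not any vendor’s actual agent framework: the plan() function stands in for an LLM planner, and the tools are hypothetical stubs for real integrations such as an email client.)

```python
# Minimal sketch of an agent breaking a request into smaller chunks.
# plan() stands in for an LLM planner; the tools are hypothetical stubs
# for real integrations (email client, contacts, delivery app, ...).
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "open_email": lambda arg: f"opened email client ({arg})",
    "find_contact": lambda arg: f"found address for {arg}",
    "send_email": lambda arg: f"sent message to {arg}",
}


def plan(request: str) -> list[tuple[str, str]]:
    # Stand-in for an LLM planner mapping a request to a sequence of
    # tool calls; a real planner would generate this list itself.
    if "email" in request:
        return [
            ("open_email", "default account"),
            ("find_contact", "the person named in the request"),
            ("send_email", "the resolved address"),
        ]
    return []


def run_agent(request: str) -> None:
    # Execute each sub-task in order: the "smaller chunks" being solved for.
    for tool_name, arg in plan(request):
        print(f"{tool_name}: {TOOLS[tool_name](arg)}")


run_agent("go and send this email to a person")
```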

I hope so. And I hope, more importantly, that it trickles down in such a way that it’s accessible to individuals and medium-sized businesses, and not just Fortune 500 companies. Do you think that’s going to happen?

Yeah. At WPP, we’re obviously implementing these technologies to enable people to do their work more efficiently, more effectively, and more creatively, and also using them to improve productivity: writing, emails, PowerPoints, all that kind of stuff. So we’re starting to see these technologies being used in organizations to improve core productivity, but also to solve big problems across supply chains that allow you to create goods more effectively.

I’m at the forefront of that, where we’re really pushing the boundaries in media, marketing, and communications. In some other types of industries, like, for example, what you do, or industries where you have large-scale logistics problems, etcetera, where generative AI is not the right technology to solve those problems, you might feel, why am I not seeing the benefit of these technologies? Because generative AI right now is really only good at creating content: imagery and text, soon video and soon sound. Getting it to solve complex problems, optimization problems, or real-world problems is still a few years away.

So we’re seeing some industries absolutely embracing this, absolutely being disrupted by these technologies. It’ll take some other industries a little bit of time for these technologies to actually start to have an impact, but it’s absolutely happening. AI is not a bubble. It’s not like it’s overhyped or it’s going to go away. These technologies are phenomenally powerful. The issue is people not understanding what they’re good at solving, and then hiring generative AI experts or applying generative AI to problems that are not suited to it. The issue is that people think AI is generative AI. The fact is that there’s a whole load of different types of algorithms out there that can solve problems across organizations and that are really not being used.
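(To make that distinction concrete, here is a tiny example of the non-generative kind of algorithm Daniel is pointing at: a toy logistics problem solved exactly with classical linear programming via SciPy, no language model involved. All the numbers are invented for illustration.)

```python
# A toy logistics problem solved with classical optimization, not a
# language model: minimize shipping cost from two warehouses to meet a
# demand of 100 units. All numbers are invented for illustration.
from scipy.optimize import linprog

costs = [4.0, 6.0]  # cost per unit shipped from warehouse A and B

result = linprog(
    c=costs,
    A_eq=[[1, 1]],              # x_A + x_B must equal...
    b_eq=[100],                 # ...the total demand of 100 units
    bounds=[(0, 70), (0, 80)],  # warehouse capacities
)

print(result.x)    # [70. 30.]: ship as much as possible from cheaper A
print(result.fun)  # 460.0 = 70*4 + 30*6, the minimal total cost
```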

And that’s the issue, it is not being utilized. When I started in my field, automation was something that was talked about even when I was training. This is something that’s been around for a while, and I feel like only now do we have the capacity to do stuff with automation. So I just personally haven’t seen it, and I know that it’s going to come.

I was just wondering, because you have the inside track on it, if this is something that is out there right now for other fields. I know it’s out there to make an email, I know it’s out there to answer basic questions. But, you know, the Google assistant that makes you an appointment for a barber or something like that, that’s some pretty amazing stuff. And I just haven’t seen that trickle down to me yet. So I don’t know if that’s something that is a year away or a couple of years away.

It feels like it’s probably about 18 months away.

I hope so. I want that to be available for people. Because in the same way that when Amazon created Alexa and we had voice activation, all of a sudden it spread like wildfire, right? It’s now accepted in society that we have this in our homes, and it’s totally fine. And I want to see a technology where I don’t have to scroll through reams of websites just to get a barber appointment. It’s a relatively easy task; it just requires a lot of drudgery from a human being, whereas if AI could do it, that would be great.

Interests and Sources of Inspiration

Anyways, I really appreciate all of your insight. We’re getting close to the end of our time now, so I wanted to finish up with the three questions that I ask all my guests. They’re much more general questions.

Starting out with the first one. I can see that there’s a Terminator skull in the background, as well as some other science fiction novelties. What science fiction inspires you? Because you’re on the leading edge of a lot of stuff that was written about for decades, and obviously people are still writing about it. There’s all sorts of artificial intelligence themed science fiction that’s coming into even thrillers like Mission Impossible, near future science fiction. What are some books that people can read that might give them a better version of the future, the utopian version of the future that you’re trying to create?

Well, first of all, these are here to remind me of the possible future that we might face, but I’m a Trekkie. So I guess The Next Generation is a really optimistic view. This protopia that I mentioned is often referred to as the Star Trek economy. And I have a quote somewhere from Jean-Luc Picard, which is that wealth and material goods are not what matters in society; it’s our contribution to bettering ourselves and making society better that matters. And that’s really, ultimately, the world that I want to create: this Star Trek economy.

I totally agree. I think it’s the best utopian vision, and I think that The Next Generation, for me, developed a lot of my moral and ethical sensibilities. So I really appreciate you saying that.

So, next question. Where do you see AI being in ten years, from the layperson’s view? What is it going to look like for, let’s say, your mom who is in her sixties or seventies? How is that going to change her life?

Well, I think technology is going to develop dramatically over the next five to ten years. So, as I said, we’re going to have, probably by the end of this decade, a professor in our pockets.

How quickly these technologies are going to be embraced, adopted, and implemented in organizations, and then ultimately affect the world, is a different question, because it takes a while to invest in them, to make mistakes, to drive value. I think that these new neuromorphic technologies are going to spawn the robotic revolution. So we’re going to see a lot more robots and agents operating in the physical world.

A natural thing you would ask a professor is to go and build a smarter version of itself. And there’s a concept called the fast takeoff, where we get from AGI, let’s say a professor-level intelligence, all the way through to superintelligence very, very quickly. I think ten years out is very, very hard to predict, but I certainly know it’s going to be a very exciting decade.

It’s within our gift to create the future. This is the point. The future has not been created. It’s not been determined by the news or by science fiction. It’s down to all of us to make good decisions.

The last question. I know artificial intelligence is something that you are very passionate about, and it’s a leading technology that’s in the newspaper headlines on a regular basis. Aside from artificial intelligence, what technology are you so interested in that you can’t get enough of it, that you see it in the headlines and you can’t stop reading?

For me, I would say it’s robotics. Just like you were alluding to earlier, I can’t wait until we have a humanoid robot that’s able to do our dishes and fold our laundry. I’ll be the first in line for a laundry folding robot. But what about you? I mean, you’re in this artificial intelligence space, so aside from artificial intelligence, what are you really interested in?

I think there’s quite a lot of interesting stuff around longevity. Could we cure death? I have these gadgets that tell me that I need to sleep better and eat better and exercise more, which, of course, I don’t. But there is this interesting singularity that we’re facing over maybe the coming decades, which is the idea that we might be able to cure death or prolong life for a significant amount of time. That will challenge our social constructs: how we educate ourselves, how we have families, and all that kind of stuff. So longevity is the other thing I’m very interested in.

That’s cool. So, just a follow-up question, because you kind of alluded to it: how do you feel about Ray Kurzweil? Because he talks a lot about these singularities, and he’s been right consistently. He thinks in 2039 we’re going to start adding years to our lives. That’s the takeoff point that you’re talking about. Do you think that, especially with artificial intelligence, he’s been right, and therefore you’re prone to believe that he’s right about the other stuff?

Well, he’s a very, very smart guy. I think he’s reassessing his view of when we might reach the technological singularity. But, you know, we’ve got lots of interesting challenges to face as a species before 2039. So even if it were possible to live for a very long time from 2039, between now and then we’ve got some big challenges to solve for. But again, I think that with AI, and with people making the right decisions about applying these technologies, we’re going to solve those problems.

That’s great. Well, thank you so much for joining us today.

And thank you, everybody who is listening out in the world, as always, please like and subscribe. And for those of you guys who listen on a regular basis, I will see you again in the future. Thanks, everybody. Have a great day.


About Daniel Hulme


Daniel Hulme (PhD) is a globally recognised expert in Artificial Intelligence (AI) and an investor in emerging technologies. He’s the CEO of Satalia, an award-winning company that provides AI products and solutions for global companies such as Tesco and PwC. Satalia exited in 2021 to WPP, the world’s largest marketing company, where Daniel is now the Chief AI Officer, helping define, identify, curate and promote AI capability and new opportunities for the benefit of the wider group and society. He is a co-founder of Faculty AI, as well as an advisor to responsible AI and AI assurance startups such as Holistic AI.

Daniel has over 20 years of academic experience with AI. Having received a Master’s and a Doctorate in AI at UCL, Daniel was previously Director of UCL’s Applied AI MSc (Business Analytics); he is now UCL’s Computer Science Entrepreneur-in-Residence and a lecturer at LSE’s Marshall Institute, focused on using AI to solve business and social problems. Daniel is also an Impact Board Member of the University of St Andrews’ Computer Science department.

Passionate about how technology can be used to govern organisations and bring positive social impact, Daniel is a popular keynote speaker specialising in the topics of AI, ethics, metaverse, emerging technology, innovation, decentralization and organisational design. He is a serial speaker for Google and TEDx, holds an international Kauffman Global Entrepreneur Scholarship, and is a faculty member of Singularity University.

Daniel is a contributor to numerous books, podcasts and articles on AI and the future of work. His mission is to create a world where everyone has the freedom to spawn and contribute to innovations, and have those innovations become free to everyone. He has advisory and executive positions across companies and governments, and actively promotes purposeful entrepreneurship and technology innovation across the globe.


By: The Futurist Society