
E93
Anduril CEO Brian Schimpf on how to make defence faster, cheaper and better
Episode Transcript
Stop the world.
Welcome to Stop the World.
I'm Olivia Nelson.
And I'm David Wroe.
The defence company Anduril was founded less than 10 years ago but is now valued at more than 30 billion US dollars and has contracts around the world, including with the Australian Defence Force.
For the ADF, they're making the Ghost Shark underwater drone, which is about 6 metres long and has just started rolling off the production line and will provide all kinds of undersea capabilities.
Yeah, Anduril is one of the more interesting companies around in national security at the moment.
It was co-founded by the colourful Palmer Luckey.
He's literally colourful.
He's known for his penchant for tropical shirts, and he is also famous for creating the Oculus Rift virtual reality headset.
Which he sold in his early 20s.
Right, correct.
I think he was possibly the youngest billionaire in the world at the time, self made.
He went on to all sorts of fame and fortune.
And then, along with a group from the defence tech firm Palantir, including the now CEO Brian Schimpf, Palmer co-founded Anduril.
So it's Brian who's our guest today on Stop the World.
He's a great customer.
So Brian's in Australia for the Indo Pacific Maritime Expo at the International Convention Centre in Sydney, which is the same venue where ASPI's own Sydney Dialogue will be taking place in just a month's time.
Grab your tickets while you can, folks.
Yeah, Anduril's philosophy is basically to make stuff that works, make it quickly, make it affordably, and design it in ways that mean it can be built using a country's existing industrial capacity for years and years.
That's important because it recognises that we may be entering a danger period sooner rather than later, and things like long delays, excruciating supply chains and dependencies, ballooning costs and so forth just aren't practical in this strategic environment.
So Brian talks about all of that.
He also talks about the need to deter adversaries by projecting a sense that you have the national capacity to be a difficult target not just for a week but for years to come, so the costs of aggression rise.
He talks about Ukraine and Taiwan as well, as real-time examples.
Yeah, and he also talks about being a company that, a bit like Palantir, in which Anduril has its roots, is unashamedly on the side of liberal democracy.
So they're thinking about the long term strategic needs of liberal democracies and that's what's going to shape the world for the better, not just selling stuff in the here and now.
Agree, and Brian also talks about the growing role of autonomy and artificial intelligence.
Your favourite topic, Dave?
I see that it was left to the end of these voice overs.
Best to last.
And about the need to keep human decision-making in the loop, human-machine teaming, and having software systems that can actually process all the information coming in from millions of sensors around the world.
Yeah, it's a great one for the defence nerds.
It was a really good chat with Brian.
I very much enjoyed it.
Just for full disclosure, I recently made a trip to Anduril's facilities in California, spent a couple of days touring there, and my travel was partly covered by the company.
So that said, let's hear the thoughts of Brian Schimpf.
Brian Schimpf, welcome to Canberra.
Thanks for coming on Stop the World.
Hey.
Thanks for having me.
I want to start with an overview of Anduril: a relatively new company, founded I think in 2017.
You co-founded it with Palmer and some others, taking a new approach to defence industry.
It's fair to say I'm going to lead you a little bit down a particular direction here.
But I'm interested in speed and affordability, in building a massive force in a strategically useful time frame, and this idea of harnessing the industrial capacity of a nation so that it can potentially sustain itself for years during a crisis or a full fight.
I mean, just talk a little bit about what you're doing and why it's new.
Yeah.
So like you said, founded in 2017, and there were a lot of principles when we got started that we really believed; we've really leaned into this class of next-generation, more autonomous, lower-cost systems.
And so there's a lot of different parts of that.
One is, you know, the business model of how you can build them can be very, very different.
You know, you don't necessarily need to serialise it in the way these things have always been done where the government carries all the risk.
We've been able to lean in on our own dollar for building factories, co-developing capabilities or just building them off balance sheet.
And that's enabled us to scale to just a huge number of different capabilities very quickly.
So we've got something like 20 different products, everything from smaller-scale systems through to things like Ghost Shark, a school-bus-size autonomous submarine, and we're working on autonomous fighter jets; a huge span of capabilities.
But this other idea was really around how do we think about mass production in a different way.
My analogy for a lot of the traditional defence systems is that we went down this path for a long time where technical exquisiteness was the most important capability.
If we had a missile that was 10% better, could reach a little bit higher, a little bit further, find slightly stealthier targets, that was the right answer.
And we asked industry to build the Swiss watch: luxury-good missiles, aircraft, all these things.
And they did it.
They were responsive to that, but nobody asked the question, can you scale a Swiss watch factory?
And turns out you cannot.
And so in a lot of ways this was kind of inevitable; we ran into the problem you see today.
There are massive multi-year backlogs on these foreign military sales deals out of the US, and a lot of allies and partners worldwide are feeling that the urgency to rearm, the urgency to build capacity, is very real, but the availability of actual systems they can acquire quickly is just not there, right?
You've got to wait four or five years to get a lot of these capabilities.
It's a huge problem.
And talking with folks in the commercial world, you know, learning lessons from automotive, consumer electronics, we've pulled in a really interesting blend of people from both traditional defence as well as a lot of the commercial space.
The reality is that how mass-producible something is gets set when you design the system.
And if you design it with these commercial processes and you find smart ways to get, you know, nearly identical capability, but at a much more producible process, you can really scale quickly.
You can lean into the automotive and industrial suppliers that do exist.
You can take advantage of composites and metallics providers, all these different things.
The capacity is already there.
You just got to be able to design your systems to take advantage of it.
And when you look at World War Two, the US was able to scale up very, very quickly.
But that was because, when they designed those systems, they took advantage of what we could already make.
Like that was the key, right?
It wasn't, well, here's this super exotic material that is unmanufacturable.
And that's the linchpin, right?
And the reality with manufacturing is you can have 1,000 parts on a thing, and if one doesn't show up, you're still not producing.
So it's always these relatively few defence-exotic things that you really have to solve for; designing those out, getting to a commercial supply chain and being able to scale quickly, that's the formula we've found works well for us.
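To make that last point concrete, here's a minimal sketch, with entirely hypothetical part names and quantities, of why one scarce part gates an entire production line: the number of units you can build is the minimum over every part in the bill of materials.

```python
# Hypothetical illustration of "1,000 parts and if one doesn't show up,
# you're still not producing": output is capped by the scarcest part.

def buildable_units(stock: dict[str, int], bom: dict[str, int]) -> int:
    """Units buildable today = min over parts of (stock / quantity per unit)."""
    return min(stock.get(part, 0) // qty for part, qty in bom.items())

bom = {"airframe_composite": 1, "commercial_imu": 1, "exotic_seeker": 1}
stock = {"airframe_composite": 900, "commercial_imu": 1200, "exotic_seeker": 40}

print(buildable_units(stock, bom))  # 40 -- the one defence-exotic part caps the line
```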
To add a personal touch, my grandfather on my dad's side in Lancashire in northwest England worked in a fireworks factory at the start of World War Two.
So you can imagine what he started doing after a very quick pivot.
Now, what you're talking about makes a lot of sense during a fight or a crisis.
It also has deterrence advantages, of course, because if an adversary can see that this is what you're able to do, they have to know that any fight with you is going to run over the longer term.
I'm interested in some of the real-life examples we're seeing unfold at the moment, Ukraine being the most obvious one.
They've reoriented a bunch of their national capacity to make drones, and that's been decisive in keeping them competitive for three and a half years.
In fact, I'd go so far as to say there's a real question about whether Putin would have actually launched the full-scale invasion in 2022 if he had known it was going to go the way it's gone.
At the same time, we've got Russia sending cheap drones over the border into NATO countries, forcing them to deploy very expensive F-35s to match very, very cheap drones, which is not sustainable over the longer term.
So it's an interesting demonstration of the way in which sustainable management of security challenges is actually going to be essential, including to deterrence.
So what are you sort of learning and taking away from these examples that are going on around the world at the moment?
Yeah, so I think if history is any indicator, most of these conflicts, when you look at state-on-state conflicts, are protracted wars, right?
Like the reality is industrial capacity really matters.
Again, the World War Two example: the US tanks were worse than the German tanks, but the US could just out-produce them by a massive quantity.
And so the industrial capacity really does matter.
And when you look at a lot of the war plans, the way the different services think about these conflicts, the goal is to have it over in two weeks; but very few conflicts are over in two weeks.
So the question of, you know, what sort of happens after that phase is really critical.
And deterrence is always about this capability.
I think the Western allies have just this immense capacity to deliver very decisive, very strong initial salvos; it's the best in the world.
There's no question we can do that very, very quickly.
But if you're China, if you're Russia, there is a legitimate question of how long it can be sustained, right?
So Ukraine's now producing something like 4 million drones a year; I think Russia's at something like 5 million.
A DoD official estimated that from the US industrial base, they believe they can get to 100,000 drones a year.
Seems like a problem, right?
And so this issue is very severe: most of the war gaming shows a lot of the key munitions run out in something like eight to nine days. It is a real problem.
And you know, why do most people start wars?
Well, there's a lot of reasons, but I think the biggest one is they thought they were going to win.
And if you can make it a little more clear that we have the will to fight back, that we have the capacity to do so, not just the sheer volume of force we could bring to the fight but the sustained political and industrial capacity to keep that fight up.
In fact, I'd refine it to: win at a politically acceptable cost.
That's right.
Yeah, exactly.
And so there's the political dimension of sustainability on this as well.
And historically, the sort of free-market view of economics and free trade has been the dominant force.
But I think people are waking up to the realisation that countries like China have intentionally manipulated that free trade system to create strategic strangleholds on the supply chain, right.
The obvious example everyone points to is China exercising the restriction on rare earth magnets and the massive impact that has had, or the US utilising high-end GPU export controls.
Like all of these are instruments of, you know, national security that are becoming really important.
But the West has not had practice in thinking through this industrial policy question, right?
Which is: I need to not just say free trade is the ultimate goal, but have a degree of sovereignty, control and assurance, not just for defence purposes but for commercial purposes as well.
The ability to deleverage any of these adversaries I'm dealing with is incredibly important.
And so in defence, when we look at this, there are a lot of things we can do directly, right?
Like we can tap into these commercial supply chains, we can look for alternative supply, we can have a much more scalable approach that creates a lot of benefit.
You then have a set of things that defence alone cannot solve.
The volume does not exist, the demand does not exist in sufficient quantity, to solve rare earth magnets, to solve lithium production, to solve germanium.
All these things are quite critical.
And so I think there's got to be a whole-of-government approach across the allied countries around this question of how we mutually assure that the supplies we need will exist, particularly upstream in the supply chains, and that in a time of crisis we are not going to be collapsed economically; that we have a more resilient, holistic strategy to this.
Yeah, Yeah.
I mean, it's interesting to be even thinking about these things from the point of view of a defence company.
One might think that as a commercial entity your job is just to provide capabilities: particular platforms, ammunition, whatever it might be.
Just the fact that you're thinking about things as far upstream as raw materials, in this case critical minerals; reflect on that a little bit more for me.
Yeah.
I mean, our sort of view is, one, there's just a straight commercial interest in this, which is that we've got to be able to produce these systems.
And so if that means we've got to go stockpile germanium or strike deals with people doing mining and refining operations, hey, we'll do it if that's the right strategy to take.
But I think there's a broader responsibility that we're typically never asked about, which is the question of how you would be resilient in a time of crisis.
And that is not something that is ever written into a contract.
Nobody knows how to really measure that.
Even the data to get at that question is very opaque, right?
And the supply chains historically have been incredibly deep.
So our strategy on this: the historic strategy is, I buy my avionics module and my control module from these vendors, and then they have sub-vendors, and those have sub-vendors; you might go back five, six, seven layers deep before you start getting to this question of short supply and raw materials.
It lengthens your lead time, makes scaling much harder, and you have a really critical problem.
What we've kind of tried to do is pull in more of that design so that we can work with a broader base of suppliers that are a little bit closer to the problem.
And we can start to get a little more transparency around what is actually scalable, for our own commercial purposes too: how fast can we ramp, how quickly can we respond to these things?
We don't want two-to-three-year lead times on this; we want three-to-six-month lead times.
So that becomes really critical.
But then this question of wartime, I think, is an obligation that every defence provider should be really looking at, right?
If we do not understand the realities of what it will take to sustain this industrial scale in a conflict, well, whose responsibility is it then?
And if we have the best first-hand knowledge, then we should be obligated to do this.
Now nobody asks us for this.
I would love to see it in contracts.
I think it would be something that would be very healthy.
I don't know what I would write to put that contractual clause in, but it's a really important concept.
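As a rough illustration of the earlier point about vendor depth, here's a toy model, with invented tiers and numbers, where an assembly's lead time is its own vendor's time plus the slowest upstream chain; the raw material at the bottom dominates everything above it.

```python
# Hedged sketch of why deep vendor chains stretch lead times: an assembly is
# ready only after its slowest sub-vendor delivers, recursively.

def lead_time(part: str, days: dict[str, int], suppliers: dict[str, list[str]]) -> int:
    """Total lead time = this vendor's own days + the slowest upstream chain."""
    upstream = suppliers.get(part, [])
    return days[part] + (max(lead_time(p, days, suppliers) for p in upstream) if upstream else 0)

days = {"avionics_module": 30, "control_board": 45, "asic": 120, "germanium_feedstock": 200}
suppliers = {"avionics_module": ["control_board"], "control_board": ["asic"], "asic": ["germanium_feedstock"]}

print(lead_time("avionics_module", days, suppliers))  # 395 days, driven by the raw material
```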
But I mean, this idea of taking responsibility is a really interesting one.
I think something like supply chains, for instance, is inherently geopolitical, and the recent rare earths deal between Australia and the US is a classic example.
We are working together to deny China its ongoing stranglehold over something that it has clearly been willing to weaponize.
So what you're talking about ties in quite well with this idea of, I don't want to say patriotic industry, but industry that does actually take a position in geopolitics.
Now, obviously, US companies during the Cold War worked with the US government very closely.
But one thing that, at the risk of sounding ingratiating here, I do admire about Anduril is that you're willing to say, OK, we are here for liberal democracies.
We are here to enable them to provide security.
We are actually taking a position on these things.
Does that just work from a commercial point of view, or do you find yourself having to balance one thing against another in certain contexts?
Or does it just align of its own accord?
Well, I think there's a lot of nuance to these things.
But yes: we've taken a very clear position that, look, we're aligned to Western allies.
We have a clear set of principles and we are not going to do things outside of the policies that the governments are looking for.
And I think that's our sort of view on this: you don't want tech bros setting policy on these things.
It seems like a very bad strategy for the world.
You want tech companies to actually be accountable for saying, here's the best information I can provide, here's the trade-offs; and you need the government accountable for actually making policy decisions on these things.
And so we try to be good partners in that, and really look at creative ways that we can work within the legal and regulatory frameworks to still create a set of policy options.
So one that's been really key to us is this question of sovereignty and export.
The traditional approach is: buy through the US government, it's export controlled, that's it, right?
Sometimes there's some localisation and offsets in production, but I think there are a lot more creative strategies, where maybe we can take some of the technology that's built in the US, and maybe not all of the most sensitive pieces need to be exported, because other countries also have great investments in unique radar capabilities or unique warhead capabilities, things that may be much more specialised and sensitive, where the US believes it needs control.
But we can find win-win strategies, where we can accelerate allies and partners on what they need to produce, what they need to build up, and give the US, Australia, any of the countries we work with closely, the security assurances for their own militaries that they need to have.
And so there are these fairly straightforward, but creative, strategies around this first-principles problem of how we enable allies and partners while still protecting the sovereign interests of every country we operate in.
And I just don't think you can be a defence company and not have a view on these geopolitical questions.
I think everyone was allowed to forget for about 20 years that this mattered, when there was just so clearly a hegemony underpinning the international order.
That's just not true anymore.
Every country, every company, needs to take a bit of a side.
And we've seen this in the tech community, and for a long time in the finance community as well: they were so thoroughly compromised by the promise of returns in China, returns that nobody in the national security community ever thought were going to materialise.
It's like, what does it mean to get a high return in China?
Are you ever going to get your money out?
What does it mean for your investment to go to literally zero because the autocrat decided today that he doesn't want that industry, or that the money is not coming out?
It was a big wake-up call to a lot of these guys who had huge interests in China, realising that that was not someone you could trust, that was not a good bet to make, and that they had taken for granted the rules of order that the liberal democracies shared.
So we've seen a huge change even in the tech community over the last few years.
When we started, I remember there were massive protests and concerns that Google was working with the US government on AI.
Last year, I think nearly every tech company openly committed to supporting AI work with the government; that is a huge change.
The world has really shifted in the last few years; this is not optional anymore.
You actually have to think through these key strategic questions.
The governments are getting more sophisticated, the companies are feeling more responsible, and there's just a much greater alignment around not taking for granted the conditions that have provided so much peace and prosperity.
Interesting.
I mean, yes, you're a defence company, but the distinction between a defence company and a tech company is much less clear than it might once have been.
I'm just interested in that transition, where the tech industry has started to come a little closer to where companies like Anduril and Palantir have been for some years now; what has that experience been like from Anduril's point of view?
You presumably feel somewhat vindicated by it.
Yeah. When we were going out in 2017, remember, it was Trump One.
There was a massive backlash against doing anything with the federal government.
And our sort of view has been that this is a bipartisan issue; this matters.
The system is in much worse shape than anyone realised, and the world's a much more dangerous place than anyone realised.
And I've been a lifelong Democrat, as have a lot of the co-founders.
We've had a very balanced kind of partisan view.
But everyone agrees on the mission of what we're trying to do.
It's: the best technology to the US, our allies and partners, and our warfighters.
Like that's what we need, right?
Like no question.
Also, it was this period where employee revolts over working on anything controversial were massive.
So to start a company that was working on defence; we had investors saying, commit to never making weapons.
Of course we're not going to commit to that.
Like we're a defence company.
Like this is an important aspect of it.
But having that clear vision of like, this is what we are going to do.
Do not come here if you do not support this mission.
But here is what we are doing and we're clear with it.
We have communicated it clearly, publicly and transparently, with our employees, with the media, with government, and that has worked really well.
So I think there was a huge lack of authenticity and transparency around these things.
And then, really, I think Vladimir Putin and Xi have done an amazing job of convincing the rest of the world that this actually matters, right?
They've really made it very clear.
Have you ever found it difficult to recruit people as a result of that stated position, especially when the demand for highly qualified tech people is so high?
Honestly, not really, because there was such a desire for clear, authentic views of what your mission was, at a time when there was just this sort of very crazy view of your relationship to companies and their missions.
So even when it was more controversial, pre the Ukraine invasion, even then the clarity of what we were doing resonated with a lot of people.
This idea that people in tech don't want to work on defence is, I think, a minority position, but a loud minority.
I think the vast majority are in favour of a strong national defence.
It's the underpinning of the safety and security we've had for a century.
So I never found it that challenging, right?
And I actually think it was almost better when we were more controversial. When too many MBAs show up because you're very cool, you should really watch out; you might have jumped the shark at that point.
When you're cool, you're not sure if people are there for the right reason. Are you there because this is a great stop in your career and it's a popular company, or because you actually believe in it?
We want people who thoroughly believe in the mission, because it can really corrupt you very quickly otherwise.
So that's something we still filter for; we're still very transparent about what we do for that reason.
And I'm not sure being a popular company is actually that much of an advantage in reality, right?
All right, speaking of sharks jumping or otherwise: Ghost Shark, let's get on to that.
The uncrewed underwater vehicle that you're building in collaboration with Australia.
I'm interested in a couple of things.
One is that, to me, it epitomises this issue we've got around fast capability acquisition at the moment.
Australia is going to have fantastic submarines in the 2040s, right?
But there is a window of danger that we're in now.
Some China experts say 2027 to 2030 is potentially that peak period when Xi Jinping might say, OK, we're not going to get any stronger than we are now relative to other powers.
So quick capability acquisition is something a country like Australia really needs to think about, given that we've got these more exquisite capabilities coming online somewhat further in the future.
I'm interested in your views on that and how Ghost Shark fits into it.
But also, in broad terms, as much as you're able to on an obviously unclassified podcast, give us your thoughts on why Ghost Shark is useful for Australia, given its geography, its strategic challenges and so forth.
Yeah.
So, the Ghost Shark programme. As a company we really look for forward-leaning customers where there's this combination of creative military thinking, how are you going to solve these critical operational challenges in different ways, with a willingness to move fast, try different models of acquisition and capability development, and really experiment with more modern approaches.
And with Ghost Shark we really found that with the RAN and ASCA, and it was fantastic; we went in a very, very fast period from first conversations to them saying, this is a key capability, let's move out.
We entered a co-development programme, one of the first we'd ever done, with a sort of risk-sharing approach: very open in thinking through governance structures and how we break out of the standard mould of destroying schedules and costs with obsolete requirements and overburdened regulation, really just trying to trim to what's necessary for the capability we're trying to deliver.
And that sort of creativity and willingness to break the mould is rare.
I was really impressed with how this has gone throughout the programme. I've seen this go so badly in the US and plenty of other places, where that risk aversion, that "this is the way we've always done it", comes in with manned-submarine requirements imposed onto an uncrewed capability.
That can just destroy a programme, destroy a capability.
So it required a degree of flexibility and a willingness to re-examine all aspects of how you think about building, testing, deploying and operating these capabilities.
And these are never just an acquisition question, right?
This often gets viewed as: does the government have the authority to acquire differently?
That's one of the five dimensions that need to be true.
There are so many parts of building and deploying these things quickly that you have to solve simultaneously.
And this is really key to how you're actually going to get the mass and capability in a rapid way.
You're going to buy some things off the shelf, but that has the backlog problem we've already talked about for things in production, and getting new capabilities through the traditional pipeline has gotten longer and longer and longer.
There's really no way out of it.
And this is where I think this class of autonomous and lower-cost systems really can shine: the cost to develop is not prohibitive, the cost to purchase is much more affordable, and the ability to project power at scale is really, really compelling.
So take Ghost Shark: what capability can this provide to Australia?
Obviously Australia is surrounded by ocean, right?
The maritime side has to be solved, the naval side has to be solved.
And the sheer volume of coastline and key choke points and straits that you have to control and protect is massive, right?
And at reasonable ranges, crewed submarines are incredible, right?
Like I am a huge fan of Virginia class.
I think it is the most impressive capable weapons programme ever done.
Like it is just a phenomenal capability.
I have nothing but good things to say about it.
But it is also the most complex weapon system ever developed, which is, you know, part of the reason it's so spectacular.
And you're just inherently limited by the scale at which you can actually deploy these.
And so as we look at where these uncrewed systems can really shine, it really is about scale, right?
It is about the number of places you can be positioned, the amount of effects and surveillance you can acquire, and the number of dilemmas you can create for your adversaries. If you can put dozens of these systems into the ocean, that creates a lot of uncertainty for your adversary, particularly subsea, where it's very hard to find things.
It is very hard to mitigate these capabilities.
And it can create a massive amount of uncertainty for an adversary trying to approach the coasts, trying to conduct a manoeuvre, where you can really hold them at risk without putting a lot of capital assets or sailors' lives at risk.
And that is a massive capability enabler that I think the RAN really understood and leaned into, as this bridge until you can get Virginia class to where you want it to be, but even then as an augmentation of just the sheer volume that you're going to need to deploy in any fight.
Let's broaden it out a little bit on autonomy, automation, uncrewed systems.
Pardon my ignorance, I should know this off the top of my head, but presumably something like Ghost Shark could team with a Virginia class, for instance; that seems like a natural use of it, a bit like what air platforms are looking at doing in a similar way.
Just talk about human-machine teaming a little bit, human-AI teaming or human-autonomous teaming, into the future.
It seems to me quite possible that teaming is a transition until automation and AI get to the point where the value of the human contribution starts to shrink to something negligible.
Do you see it that way, or do you see the human role as being sustainable forever?
So fundamentally, this is a question of values and how you're going to hold people accountable for what military systems do.
To me, the West will always want human accountability around lethal decision-making and the deployment of effects.
All these things that have real political and societal consequence, we're going to want human accountability around them.
We're never going to get to a world where we say that's absolved; the AI told me to do it and it's not my responsibility.
And everything we've designed is really about thinking through where the right point is for humans to engage, and how that further enhances human accountability, not degrades it, right?
So the AI and the autonomy can be phenomenal at doing a lot of the more mechanical tasks.
I don't need a human to plot the route that a missile flies or a submarine takes; that is a perfectly suitable task for software, for AI, to do.
The AI can recommend courses of action and say, hey, I think you should do A, B and C, and a human can approve that and understand the context and the consequences of those decisions in a way that's very hard to digitise.
And so I think there will always be this role for human accountability in it, but it's always a matter of degree on these things where in a lot of ways there have been lethal autonomous weapons for decades now.
The way a modern torpedo works, you give it a box and you say: anything that matches this acoustic signature in this box is a legitimate military target.
That sounds a lot to me like an autonomous weapon.
And so in a lot of ways, what we're debating is really these controls: in what context do we allow the machine to make these decisions?
What are the rules of engagement, and how much freedom and latitude do you provide it?
And then where does the human decision making and human accountability really come in?
In a lot of the Western systems, whoever launched the weapon is accountable for the results of what that weapon does.
I think it's a good system.
We should probably keep that.
So there will always be that part, right?
That's how we really think about it.
Yeah, there are the stepping-stone dimensions of how sophisticated the AI is, how much control it can really handle.
I think that is true.
But I think the fundamental thing is really this question of values and how you want to provide accountability in the system.
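A hypothetical sketch of the torpedo-style rule described above: a human authorises a geographic box and a signature class, and the weapon may only engage contacts matching both, with the accountable operator recorded. This is illustrative only, not any real system's logic.

```python
# Invented illustration of human-authorised autonomous engagement:
# the machine applies the rule; the named human owns the decision.

from dataclasses import dataclass

@dataclass
class Authorisation:
    operator_id: str                            # the accountable human
    box: tuple[float, float, float, float]      # lat_min, lat_max, lon_min, lon_max
    signature: str                              # acoustic class cleared for engagement

@dataclass
class Contact:
    lat: float
    lon: float
    signature: str

def may_engage(auth: Authorisation, c: Contact) -> bool:
    lat_min, lat_max, lon_min, lon_max = auth.box
    in_box = lat_min <= c.lat <= lat_max and lon_min <= c.lon <= lon_max
    return in_box and c.signature == auth.signature

auth = Authorisation("PO1-Smith", (-34.0, -33.0, 151.0, 152.0), "diesel_sub_class_x")
print(may_engage(auth, Contact(-33.5, 151.5, "diesel_sub_class_x")))  # True
print(may_engage(auth, Contact(-33.5, 151.5, "merchant_vessel")))     # False
```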
But it's largely the values and ethics question rather than the capability question.
I think that that is the long term question is values and ethics.
I think the near-term question is one of adoption, right?
Even if the technology were perfect at solving all these problems, you are inherently going to see, correctly, the desire to have a relatively short leash to start: relatively prescriptive mission sets.
Then over time, as trust is built and we understand what the capabilities are, you release more control to these systems to perform more autonomously.
We've seen this with nearly everything we've deployed, even counter drone systems.
When it starts, typically the operators want to move the camera themselves.
Then they're like, OK, now I'll let the system move the camera.
OK, now it can recommend engagement patterns.
So you see a natural progression, as people build more trust in the system, of what that right balance is.
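That progression can be pictured as a simple gate over autonomy levels; the levels and action names below are invented for illustration.

```python
# Sketch of graduated autonomy: each level moves one more action from
# "operator does it" to "system does it, operator supervises".

from enum import IntEnum

class AutonomyLevel(IntEnum):
    MANUAL_CAMERA = 0      # operator slews the camera themselves
    AUTO_TRACK = 1         # system moves the camera, operator watches
    RECOMMEND_ENGAGE = 2   # system proposes engagement patterns

def needs_operator(action: str, level: AutonomyLevel) -> bool:
    required = {"slew_camera": AutonomyLevel.AUTO_TRACK,
                "propose_engagement": AutonomyLevel.RECOMMEND_ENGAGE}
    return level < required.get(action, AutonomyLevel.RECOMMEND_ENGAGE)

print(needs_operator("slew_camera", AutonomyLevel.MANUAL_CAMERA))  # True: human slews
print(needs_operator("slew_camera", AutonomyLevel.AUTO_TRACK))     # False: system tracks
```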
OK, just back on Ghost Shark for a moment.
It's described by Anduril as a sovereign undersea capability.
Just explain to me what sovereign means in that context.
Does it mean that Australia can do everything on its own, at some stage or for the foreseeable future, potentially including a scenario in which something happens in the Australia-US alliance? Hopefully not a political collapse, but something else that might just mean Australia needs to do it on its own for some defined period.
Yeah.
So for us, the team that designed Ghost Shark is based in Australia; it's Australian engineers.
We have tight collaborations with the US counterparts, but all of the core design expertise resides in Australia.
The production capacity is based in Australia.
We can fully assemble these Ghost Sharks here.
And on the supply chain, there's a degree of AUKUS componentry to this, some UK suppliers and things like that, but the vast majority of the metallic components, the composites, the battery systems, the core electronics, the assembly, all of those pieces, we've built out an Australian supply chain around.
So the ability to actually produce independently of nearly any other country is quite high.
So for us, sovereign means that independent ability, a mindful understanding of where interdependencies exist, and something acknowledged by the government: yep, this looks correct in terms of the interdependencies I want to have.
It's very much about creating the ability for Australia to act quite independently as needed.
There are several payloads that I don't get briefed on.
I'm not an Australian, so that's perfectly fine with me.
I think that's a good thing.
Good, good.
All right.
One thing we haven't talked about yet is Lattice, which is really the core of your company.
That's the software system that's essentially a command and control system.
Just talk about how that forms the core and how you build around it.
And interestingly, will Ghost Shark employ Lattice?
Lattice isn't just for Anduril capabilities, correct?
You can use it for other things as well.
But just talk a little bit about the role of Lattice.
So stepping back slightly, the way modern software has been built in Silicon Valley is this idea of very general-purpose platforms, and these are a lot harder to build with quality than most people would appreciate.
You have to build immense tooling around your development infrastructure and simulation.
How do I test my software?
I have to simulate sensors and autonomous behaviours and all these network conditions, all these different pieces.
I have to build all this tooling and instrumentation for monitoring, deploying and controlling configuration.
It's an enormous amount of software you end up having to build.
And that all sits below the tip of the iceberg, which is what people think of with these systems: I get a drone to go from point A to point B and do a mission.
So the complexity of these modern software stacks is just immense.
And so really every major tech company has thought about this: how do I share that load across a number of different capabilities and build these very flexible software platforms that allow me to tailor to any application?
And when we look at the span of problems we're trying to solve, it's really this question of how I build for a world where I have much larger numbers of sensors and drones on the battlefield, subsea and in the air, and I'm pairing those with manned systems.
So it's solving all those fundamental problems: how do I think about networking these?
How do I use AI to process that information at the edge?
What's the right way to move that information around to create this holistic picture of a battle space?
How do I express an intent to a robot?
Like what does that mean?
And how do I represent that in a digital way that can be transmitted over a network?
How does a human want to interact with this, right?
What do I want to see?
What decisions do I need to make?
How do I make sense of an increasingly complex battlefield?
So we have this view of where a lot of these future battlefields go: many more systems, much, much more disaggregated.
Everything is networked, but you have constant jamming.
So how do I have resilience baked into that?
And I need to create this transparent picture as best as I can for all the information I can get.
And that at the core is what we've tried to build Lattice to solve.
Now, obviously making a subsea drone is very different from making a small quadcopter, which is different from making a fighter jet, and very different from talking to a manned F-35.
The literal components, yeah, those vary from project to project.
But that core infrastructure, how I network these things, represent information and think about managing it, that is shared across.
And so Ghost Shark, nearly everything we do, has these core Lattice pieces in it.
And the greatest part of this is that because we've shared this code base across so many systems, all those learnings fold back in, and we can provide updates to every system as it gets smarter and better; we take all those learnings and distribute them across the whole fleet of capabilities we're building.
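One concrete reading of "expressing an intent to a robot in a digital way that can be transmitted over a network" is a small task message serialised to JSON. The schema below is hypothetical, not Lattice's actual wire format.

```python
# Invented task-intent message: a structured, machine-readable order that
# can cross a lossy network, rather than a joystick command stream.

import json
from dataclasses import dataclass, asdict

@dataclass
class TaskIntent:
    task: str                 # e.g. "surveil", "loiter", "investigate"
    area: list[list[float]]   # polygon of (lat, lon) vertices
    priority: int             # lower = more urgent
    constraints: dict         # e.g. emission limits, depth limits

intent = TaskIntent(
    task="surveil",
    area=[[-33.8, 151.2], [-33.8, 151.4], [-34.0, 151.3]],
    priority=1,
    constraints={"emcon": "silent", "max_depth_m": 200},
)

wire = json.dumps(asdict(intent))  # what actually crosses the jammed, lossy network
print(wire)
```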
I find the idea of eyes, ears and brains a useful analogy, just to grasp it more instinctively.
If I had 10,000 eyes and 5,000 ears, that might sound great, but I've still got one brain, unless it's supercharged in some way with AI, which may come down the track.
Actually, let's talk about that for a moment.
What's the role of AI in Lattice, and what's the role of AI going forward in managing all of that sensor data, all of that decision-making complexity, which already exists for warfighters but is going to become potentially overwhelming unless we manage it well with AI?
Just tell us a little bit about the role of AI there.
Yeah, I always like to simplify these problems down a little bit.
What are you really trying to do with the sensors, for the vast majority of military applications?
You're trying to figure out where your friends and allies are.
So where's the blue force?
And you're trying to figure out where everything else is in the environment.
So what's the map?
Where's the bad guys?
What are they doing and how do I find them?
So when you boil that down, it's this question of how I interpret sensor data to know: hey, is this a ship?
Is this a tank?
What am I actually looking at?
And then you can think of relatively simple ways of moving this information around.
So a lot of what I think we've done well is thinking in these relatively simple abstractions.
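A toy version of that "what am I actually looking at" abstraction, assuming a made-up detection format: fuse repeated, noisy classifications of each tracked contact into a single best label. Real systems fuse kinematics and much more; this only shows the shape of the abstraction.

```python
# Hypothetical classification fusion: accumulate confidence per label
# per track, then report the best label for each track.

from collections import defaultdict

def fuse_classifications(detections: list[tuple[str, str, float]]) -> dict[str, str]:
    """detections: (track_id, label, confidence) -> best label per track."""
    scores: dict[str, dict[str, float]] = defaultdict(lambda: defaultdict(float))
    for track_id, label, conf in detections:
        scores[track_id][label] += conf
    return {tid: max(labels, key=labels.get) for tid, labels in scores.items()}

dets = [("T1", "ship", 0.6), ("T1", "ship", 0.7), ("T1", "tank", 0.2),
        ("T2", "tank", 0.9)]
print(fuse_classifications(dets))  # {'T1': 'ship', 'T2': 'tank'}
```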
On the AI side, I think we've been doing what's called the prior generation of AI: computer vision, using machine learning to detect signals of interest in the electromagnetic spectrum; what I think we would now call more discriminative AI.
We've been doing that since the founding, right? We deployed border security towers and counter-drone systems that have been using computer vision and sensor fusion for eight years now, and it's proven to work incredibly well.
We probably have the largest number of deployed production AI systems out there.
And now there's more of this question of generative AI and these more reasoning systems, and what implications that has.
And I'm extremely optimistic around this.
So I've been really fascinated by things like AlphaGo and AlphaStar, a lot of the things that came out of Google DeepMind around the ability to learn these strategy games.
And when we think about fighter jet tactics, or how you'd use an attack helicopter or anything, there are so many variables these pilots process.
It's so hard to code into a particular algorithm, and the ability of these machine-learned, generative strategies to outpace human-level performance, or certainly any of these traditional approaches, is massive.
And we've seen this in some of the research we've been doing in-house with these reinforcement learning approaches: within something like four weeks of work they can massively outpace any of the alternatives we've tried.
And so I think the ability to get to these superhuman levels of performance on these tactical behaviours is very, very near term.
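For readers unfamiliar with the technique, here's a minimal tabular Q-learning loop on a toy one-dimensional pursuit game. It bears no resemblance to real fighter tactics; it only shows the learning mechanism being referred to.

```python
# Toy reinforcement learning: learn to close the range to a target at cell 9
# on a 1-D track, using tabular Q-learning. Purely illustrative.

import random

N_STATES, GOAL, ACTIONS = 10, 9, (-1, +1)    # positions 0..9, move left/right
Q = [[0.0, 0.0] for _ in range(N_STATES)]    # Q[state][action]
alpha, gamma, eps = 0.5, 0.9, 0.1            # learning rate, discount, exploration

for _ in range(2000):                        # training episodes
    s = random.randrange(N_STATES)
    for _ in range(50):
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else -0.01     # reward for reaching the target
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if s == GOAL:
            break

# Learned policy: should be mostly 1s ("move right", toward the target).
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)])
```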
What I don't think works in practice is this idea that we're going to have a massive central brain puppeteering a large-scale operation.
It's not resilient.
I don't even think it works with the laws of physics, given how much information you'd have to move over networks to get that data back; no amount of Starlink is going to solve that, it's just too much data to move.
And you've suddenly created this incredibly non-resilient system.
So when we think about this, it's really about how I create these layered, intelligent systems where I understand how they operate in concert, but they're each making smart, independent decisions.
That's a resilient strategy.
And it also, by the way, is how the Western militaries are organised.
They figured out this mission command structure is the right approach: give a degree of autonomy and discretion down to your lower echelons and you get phenomenal results.
You train for that, you operate that way, you let them have initiative, and that tends to deliver much better results.
So we mirror that in how we think about it from a technology perspective: the right answer probably looks like something more resilient, more distributed, more at the edge, than this mythical central brain planning out the whole battle space.
I just don't see it.
It doesn't make sense to me.
I don't know why you would do that.
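The mission-command idea maps naturally onto code: each node carries a standing intent and keeps acting on it when the link to higher echelons drops, rather than freezing. A minimal sketch, with placeholder behaviours:

```python
# Sketch of mission command in code: act on fresh orders when the link is up,
# fall back to standing intent when jammed. Behaviours are placeholders.

from typing import Optional

def node_step(link_up: bool, central_order: Optional[str], standing_intent: str) -> str:
    if link_up and central_order:
        return central_order      # fresh direction from the higher echelon
    return standing_intent        # comms denied: keep executing intent, don't freeze

print(node_step(True, "shift_sector_east", "patrol_sector_A"))  # follows the new order
print(node_step(False, None, "patrol_sector_A"))                # degrades gracefully
```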
It speaks to my lack of real-world experience in either engineering or warfighting that my head immediately goes to the human president or prime minister with one button: shall I win the war, yes or no, which one do you want to click?
That's where I see it eventually graduating towards.
But you're doing a good job of dissuading me that that's going to be the case; rather, it's lots of decisions contributing at the tactical, operational and even strategic levels.
That's correct.
That's right.
And maybe some systems get implemented like that, but the way I'm saying it is really about thinking through how future robot armies and these command and control echelons will work.
I think it has to mimic the structures that we have, that we've learned over a century of what is an effective way to organise and operate a military.
I think those are sort of timeless principles that likely are going to transcend the specifics of the technology.
Now, maybe the effect a human at each echelon can have will be massively amplified, right?
So one pilot might be able to control 100 drones.
That is a massive increase in the amount of capability we can get at every echelon, and a huge change that technology can bring forward, but it doesn't remove the need for that initiative, that creativity and that mission command structure at every echelon.
The other side of this, though, is what you can start to do with simulation to really understand what is possible, what are the likely outcomes of any given conflict, what strategies actually work.
There's this game-theoretic notion of thinking about capability development that we've really leaned into.
The way we operate is, when we think of a new capability, we say: here's the warfighting problem I'm trying to solve.
We'll run dozens of different technology strategies: what if we had this range, or this capability, or it flew this fast? What if it was on the land? What if it was in the air?
And we can simulate those, and you can say, hey, here's how these potential conflicts and these engagements would go.
The state of the art today is reasonably complex; you can take a fairly complex engagement and simulate it out.
But there's a huge amount of potential with generative AI to very rapidly lay down super complex scenarios over a huge number of variables.
And you can get a much more comprehensive idea of which are going to be the dominant strategies, how likely I am to prevail with which strategies, and how to create those deterrent effects.
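A toy of that simulation loop: sweep hypothetical strategy variants and Monte Carlo-estimate which prevails. The one-line outcome model is a stand-in, not a real wargame.

```python
# Invented strategy sweep: for each (range, speed, basing) variant, estimate
# a win probability by repeated simulation of a placeholder engagement model.

import random
from itertools import product

def engagement_wins(range_km: int, speed_kt: int, basing: str) -> bool:
    # Placeholder outcome model: longer range and speed help, land basing helps a bit.
    p = min(0.9, 0.2 + range_km / 2000 + speed_kt / 400 + (0.1 if basing == "land" else 0.0))
    return random.random() < p

results = {}
for rng, spd, base in product([200, 600, 1000], [50, 150], ["land", "air"]):
    wins = sum(engagement_wins(rng, spd, base) for _ in range(5000))
    results[(rng, spd, base)] = wins / 5000

best = max(results, key=results.get)
print(best, round(results[best], 3))  # the dominant variant under this toy model
```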
Now, the hard part with this is that so many of these decisions are really in the brain of one human at the end of the day.
Will China invade Taiwan?
You can do all the military analysis you want, but ultimately Xi will have his reasons for doing it or not doing it, based on his perceptions of the likelihood of success, the political reality around him and what he needs to do.
Same with Putin, right?
And the ability to use AI to forecast that human's brain, that doesn't seem like it's going to happen anytime soon.
And so the goal we think about with these things is really: what are we trying to construct as a company?
We're trying to provide that capability dimension in the deterrence equation.
And if you can make the outcome of these conflicts quite clear, and AI can probably help make these things much more obvious.
That is a good thing, right?
Like that is what we were trying to do.
So that conflict does not happen.
And that's ultimately what I think our goals and ambitions are on these things.
It's a great message to end on and wrap up with.
Brian, thanks so much for coming on Stop the World.
It's been a great chat.
Thank you.
Appreciate it.
Thank you.
Thanks for listening to this special episode of Stop the World, the TSD Summit Sessions.
We'll be back with another episode next week.