Episode Transcript
1
00:00:00,240 --> 00:00:03,432
What It Takes to Onboard AI Agents by Anna Piñol at
2
00:00:03,456 --> 00:00:07,944
NFX, as voiced by the AOK Voicebot. About
3
00:00:08,032 --> 00:00:11,944
18 months ago, we started getting our first AI agent
4
00:00:11,992 --> 00:00:15,240
pitches. It was clear this had huge potential,
5
00:00:15,400 --> 00:00:18,696
but now we're seeing the full map with even more clarity.
6
00:00:18,888 --> 00:00:22,648
Quick recap: we see AI agents turning labor into
7
00:00:22,704 --> 00:00:25,740
software, a market size in the trillions.
8
00:00:25,880 --> 00:00:29,476
Since our first essay on this, we've worked with amazing companies
9
00:00:29,548 --> 00:00:32,756
in this space and want to do more of it. But if you're
10
00:00:32,788 --> 00:00:36,052
following this space as closely as we are, you probably
11
00:00:36,116 --> 00:00:39,172
have noticed something: progress and adoption are out
12
00:00:39,196 --> 00:00:43,508
of sync. On one hand, there is rapid technological progress.
13
00:00:43,684 --> 00:00:48,484
Just recently, tool use (Operator, Gemini
14
00:00:48,532 --> 00:00:54,168
2.0) and improved reasoning (o3, R1, 3.7
15
00:00:54,224 --> 00:00:57,688
Sonnet) emerged as new AI capabilities,
16
00:00:57,864 --> 00:01:02,008
both of which represent fundamental prerequisites for AI agents
17
00:01:02,104 --> 00:01:06,184
and get us closer to the future: a world where AI
18
00:01:06,232 --> 00:01:10,104
agents can act autonomously and execute complex tasks
19
00:01:10,232 --> 00:01:13,640
at a far cheaper price than we thought possible even a few months
20
00:01:13,680 --> 00:01:17,114
ago is very real. Novel capabilities
21
00:01:17,312 --> 00:01:21,126
paired with the continuous improvements in AI performance and
22
00:01:21,198 --> 00:01:25,190
cost (see DeepSeek and this) are setting the foundation
23
00:01:25,270 --> 00:01:28,806
for future exploding demand. That's the good news.
24
00:01:28,958 --> 00:01:33,254
The less good news is there's still a disconnect between progress and adoption,
25
00:01:33,382 --> 00:01:37,606
a gap between the intent to implement AI at work and
26
00:01:37,678 --> 00:01:41,494
actually doing it. For example, a recent McKinsey
27
00:01:41,542 --> 00:01:44,930
survey of 100 organizations doing greater than
28
00:01:44,970 --> 00:01:48,658
$50 million in annual revenue found that
29
00:01:49,158 --> 00:01:53,490
63% of leaders thought implementing AI was a high priority,
30
00:01:53,650 --> 00:01:56,738
but 91% of those respondents didn't feel
31
00:01:56,794 --> 00:01:59,986
prepared to do so. It's very early days,
32
00:02:00,138 --> 00:02:03,362
and that's where you come in. Your primary job is
33
00:02:03,386 --> 00:02:06,658
to be a bridge between deep technical progress and mass
34
00:02:06,714 --> 00:02:10,578
adoption. You have to figure out how to make people actually see this
35
00:02:10,634 --> 00:02:14,324
change, want it, and have it actually work for them.
36
00:02:14,492 --> 00:02:18,212
So how do we get there? It turns out we may be missing
37
00:02:18,276 --> 00:02:21,188
a few layers of the AI agent stack.
38
00:02:21,364 --> 00:02:24,436
Actually, we are missing three necessary layers right now,
39
00:02:24,508 --> 00:02:28,420
plus a bonus. The accountability layer: the foundation of
40
00:02:28,460 --> 00:02:32,180
transparency, verifiable work and reasoning.
41
00:02:32,340 --> 00:02:35,876
The context layer: a system to unlock company
42
00:02:35,948 --> 00:02:38,530
knowledge, culture and goals.
43
00:02:39,030 --> 00:02:42,862
The coordination layer: enabling agents to collaborate
44
00:02:42,926 --> 00:02:47,214
seamlessly with shared knowledge systems. Empowering
45
00:02:47,262 --> 00:02:50,702
AI agents: equipping them with the tools and software
46
00:02:50,766 --> 00:02:54,318
to maximize their autonomy in the rising B2A
47
00:02:54,374 --> 00:02:58,238
sphere. We are interested in companies building across each
48
00:02:58,294 --> 00:03:01,982
one of these layers or connecting them all, like NFX
49
00:03:02,046 --> 00:03:05,352
portfolio company Maisa. More on that below.
50
00:03:05,536 --> 00:03:08,696
As we solve these challenges and build this infrastructure,
51
00:03:08,808 --> 00:03:12,888
we'll be able to tackle new and more complex and valuable tasks with
52
00:03:12,944 --> 00:03:16,712
AI. And once that's the norm, many more markets we
53
00:03:16,736 --> 00:03:20,440
can barely even conceive of now will emerge. But first
54
00:03:20,560 --> 00:03:24,664
we need these layers, and here's why. Unlocking Autonomy:
55
00:03:24,792 --> 00:03:28,088
from RPA to APA (Agentic Process
56
00:03:28,144 --> 00:03:32,048
Automation). To understand how we are going to unlock full
57
00:03:32,104 --> 00:03:35,664
autonomy, we first have to understand a major shift
58
00:03:35,712 --> 00:03:38,672
in the way people look at process automation.
59
00:03:38,816 --> 00:03:42,560
For lack of a more interesting word, we are moving from
60
00:03:42,600 --> 00:03:46,272
robotic process automation to
61
00:03:46,296 --> 00:03:48,500
agentic process automation.
62
00:03:51,240 --> 00:03:55,344
RPA is a multi billion dollar industry with massive
63
00:03:55,392 --> 00:03:59,478
companies like UiPath, Blue Prism and Workfusion among others.
64
00:03:59,664 --> 00:04:03,674
It's proof of concept that people are more than willing to adopt automation
65
00:04:03,722 --> 00:04:07,162
for high value tasks. To understand how we can bring
66
00:04:07,186 --> 00:04:10,890
on the agent economy, it's useful to use RPA as
67
00:04:10,930 --> 00:04:14,730
a starting point. Once you see its benefits and limitations,
68
00:04:14,890 --> 00:04:19,082
it's clear how agents are the natural and massive next step.
69
00:04:19,266 --> 00:04:23,338
The Benefits: RPA excels at rule-based, structured
70
00:04:23,434 --> 00:04:27,076
tasks spanning multiple business systems (100
71
00:04:27,148 --> 00:04:30,996
to 200 steps). It was effective at capturing company
72
00:04:31,068 --> 00:04:35,252
knowledge within rules, for example VAT number processing,
73
00:04:35,396 --> 00:04:39,956
making automations reliable as long as underlying systems are static
74
00:04:40,148 --> 00:04:43,284
and RPA has strong product market fit
75
00:04:43,332 --> 00:04:47,012
already. The Limitations: the universe of possible
76
00:04:47,116 --> 00:04:50,452
RPA-able tasks was always going to be limited
77
00:04:50,596 --> 00:04:54,014
because you had to be able to map out, in detail,
78
00:04:54,062 --> 00:04:57,294
exactly what process the RPA should take,
79
00:04:57,462 --> 00:05:00,558
move a mouse here, design this spreadsheet that way,
80
00:05:00,614 --> 00:05:04,222
etc. And importantly, expect it to remain the
81
00:05:04,246 --> 00:05:08,110
same or it breaks. RPA can only go so
82
00:05:08,150 --> 00:05:11,774
far because you can't process map and expect perfect
83
00:05:11,822 --> 00:05:15,566
exact repeatability in everything you do. Some companies
84
00:05:15,638 --> 00:05:19,582
can't even process map at all without hiring outside consultants
85
00:05:19,646 --> 00:05:23,582
to mine their own processes. In fact,
86
00:05:23,686 --> 00:05:27,102
you may not even want that dynamic all the time. Part of
87
00:05:27,126 --> 00:05:30,062
doing great work is reacting to an environment,
88
00:05:30,246 --> 00:05:33,850
taking in changes, tweaking things as you go.
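To make that brittleness concrete, here is a minimal sketch in Python of a hard-coded, RPA-style rule; the field name and the VAT format are invented for illustration, and the point is that the automation only works while the underlying form never changes.

import re

# Hypothetical RPA-style step: validate a VAT number pulled from a known form field.
# Every locator and rule is hard-coded, so a renamed field or an unexpected format
# breaks the automation outright.
VAT_PATTERN = re.compile(r"^[A-Z]{2}\d{8,12}$")  # simplified EU-style format

def process_vat_form(form: dict) -> dict:
    vat = form["vat_number_field_23"]            # assumes this exact field exists
    if not VAT_PATTERN.match(vat):
        raise ValueError(f"Unrecognized VAT format: {vat}")
    return {"vat": vat, "status": "validated"}   # downstream steps expect this exact shape

Reliable, but only within the rails it was given.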
89
00:05:34,230 --> 00:05:38,126
In summary, RPA works extremely well for certain tasks,
90
00:05:38,238 --> 00:05:41,022
but RPA is completely inflexible.
91
00:05:41,166 --> 00:05:45,170
Reliable but inflexible. Enter LLMs.
92
00:05:45,760 --> 00:05:48,824
The rise of LLMs represents a major shift.
93
00:05:48,952 --> 00:05:52,856
LLMs provide unlimited, cheap adaptive intelligence.
94
00:05:53,048 --> 00:05:56,424
They allowed us to define and collate the context needed
95
00:05:56,472 --> 00:06:00,024
to solve more complex problems. And as they began to learn
96
00:06:00,072 --> 00:06:03,480
reasoning, they hugely expanded the surface area of
97
00:06:03,520 --> 00:06:07,464
automatable tasks. That said, LLMs aren't perfect
98
00:06:07,512 --> 00:06:11,576
either. LLMs struggle with repetitive steps, but work well
99
00:06:11,648 --> 00:06:15,354
for unstructured parts of business processes. This can be a
100
00:06:15,362 --> 00:06:19,466
blessing or a curse depending on how creative versus deterministic
101
00:06:19,498 --> 00:06:22,954
you want your outcome to be. But either way, they're a
102
00:06:22,962 --> 00:06:26,730
black box. You can't be 100% sure of what the system is
103
00:06:26,770 --> 00:06:30,266
going to do, nor why it will do it. Even reasoning
104
00:06:30,298 --> 00:06:33,450
traces or model-provided rationales can be
105
00:06:33,490 --> 00:06:36,618
completely hallucinated. Organizations need
106
00:06:36,674 --> 00:06:40,390
certainty or it's hard to implement any kind of system.
107
00:06:40,720 --> 00:06:43,784
Even if you want an LLM to be more creative,
108
00:06:43,912 --> 00:06:47,848
that's useless to you if you don't understand why and how
109
00:06:47,904 --> 00:06:51,688
it's arriving at certain conclusions. So where does this leave
110
00:06:51,704 --> 00:06:54,520
us? RPAs have strong PMF.
111
00:06:54,680 --> 00:06:57,432
It's easy to see how your system is working,
112
00:06:57,616 --> 00:07:02,136
but tasks are limited and they have no true flexibility or understanding
113
00:07:02,168 --> 00:07:06,180
of context. They also require a lot of pre work.
114
00:07:07,190 --> 00:07:11,326
LLMs are more capable with the unstructured information that's
115
00:07:11,358 --> 00:07:14,878
hard to express in rules, but they're a black box.
116
00:07:15,054 --> 00:07:18,142
The answer for agents and APA: we need a bit
117
00:07:18,166 --> 00:07:21,902
of both. We need the reliability of the RPA system with
118
00:07:21,926 --> 00:07:25,790
the flexibility and affordability of the LLM.
119
00:07:25,950 --> 00:07:29,582
This takes shape as an auditability and context layer
120
00:07:29,646 --> 00:07:33,374
that we can implement into the AI agent stack. As a
121
00:07:33,382 --> 00:07:36,782
builder in this space, you need to be working on this if you want to
122
00:07:36,806 --> 00:07:40,366
have a chance at widespread adoption. The Accountability
123
00:07:40,478 --> 00:07:44,530
Layer: an unlock for adoption, learning, and supervision.
124
00:07:45,030 --> 00:07:48,110
Think back to your math classes in elementary school.
125
00:07:48,230 --> 00:07:51,806
When you were asked to solve a problem, you didn't get full credit
126
00:07:51,838 --> 00:07:56,046
for just writing the answer. You were asked to show your work.
127
00:07:56,198 --> 00:07:59,742
The teacher does this to verify that you actually understand
128
00:07:59,846 --> 00:08:03,960
the process that led to that correct answer. This is a
129
00:08:04,000 --> 00:08:07,592
step that many AI systems, even those that seem to show
130
00:08:07,616 --> 00:08:11,592
us logical trains of thought, are missing. We have no idea
131
00:08:11,656 --> 00:08:15,464
why AI actually generated those exact actions or chains
132
00:08:15,512 --> 00:08:19,480
of thought; they're just generated. We first became aware of how
133
00:08:19,520 --> 00:08:23,656
big of a deal this was when we met Maisa. This metaphor
134
00:08:23,688 --> 00:08:27,412
was developed by David Villalon and Manuel Romero,
135
00:08:27,576 --> 00:08:30,764
the company's co founders, and it perfectly
136
00:08:30,812 --> 00:08:34,924
encapsulates the problem with so many AI agent ecosystems
137
00:08:34,972 --> 00:08:38,524
right now: enterprises feel like they're supposed to blindly
138
00:08:38,572 --> 00:08:42,748
trust the AI's thought process. Early during product development,
139
00:08:42,884 --> 00:08:46,476
Maisa met with a client that said they needed to prove exactly
140
00:08:46,508 --> 00:08:49,804
what was being done by their AI systems. For auditors,
141
00:08:49,932 --> 00:08:53,260
they needed evidence of each step taken and, critically,
142
00:08:53,340 --> 00:08:57,300
why those steps were taken at all. Conversations like that
143
00:08:57,340 --> 00:09:00,640
gave rise to Maisa's concept of chain of work,
144
00:09:01,020 --> 00:09:04,404
a factor we now believe will be key to AI agent
145
00:09:04,452 --> 00:09:07,700
implementation in the workforce. At the heart
146
00:09:07,740 --> 00:09:11,320
of it sits Maisa's Knowledge Processing Unit (KPU),
147
00:09:12,780 --> 00:09:16,388
their proprietary reasoning engine for orchestrating each
148
00:09:16,444 --> 00:09:20,260
AI step as code rather than relying on ephemeral
149
00:09:20,340 --> 00:09:24,076
chain of thought text. By separating reasoning
150
00:09:24,108 --> 00:09:27,260
from execution they achieve deterministic,
151
00:09:27,340 --> 00:09:30,732
auditable outcomes. Every action is logged in
152
00:09:30,756 --> 00:09:33,900
an explicit chain of work, bridging the best
153
00:09:33,940 --> 00:09:37,308
of LLM style creativity with the reliability
154
00:09:37,404 --> 00:09:41,292
of traditional software. Unlike typical RPA or
155
00:09:41,316 --> 00:09:45,068
Frontier Lab solutions, which remain mostly guesswork behind
156
00:09:45,124 --> 00:09:48,240
the scenes, the KPU fosters trust.
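Maisa hasn't published the KPU's internals, but the idea of a chain of work can be sketched in Python: planning is captured as explicit step descriptions, execution happens in ordinary code, and every action is logged so a reviewer can audit it. The names below are ours, not Maisa's.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

# Illustrative chain-of-work record: reasoning (the step description) is kept
# separate from execution (the callable), and every step is logged.
@dataclass
class WorkStep:
    description: str               # why this step exists
    action: Callable[[], object]   # the code that actually performs it
    result: object = None
    finished_at: str = ""

@dataclass
class ChainOfWork:
    goal: str
    steps: list[WorkStep] = field(default_factory=list)

    def run(self) -> list[dict]:
        log = []
        for step in self.steps:
            step.result = step.action()   # deterministic execution
            step.finished_at = datetime.now(timezone.utc).isoformat()
            log.append({"step": step.description,
                        "result": step.result,
                        "at": step.finished_at})
        return log                        # the auditable trail handed to reviewers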
157
00:09:48,620 --> 00:09:52,596
Teams can see precisely why and how the AI
158
00:09:52,708 --> 00:09:56,452
took each action, correct or refine any step and
159
00:09:56,476 --> 00:10:00,164
roll out changes consistently. I like to joke with founders
160
00:10:00,212 --> 00:10:03,812
that I work with that the best B2B software products are
161
00:10:03,836 --> 00:10:07,812
those that help people get promoted, the ones internal stakeholders
162
00:10:07,876 --> 00:10:11,396
sense they can get big recognition by bringing in.
163
00:10:11,548 --> 00:10:15,132
That's the reward that AI promises today, but it also comes
164
00:10:15,156 --> 00:10:18,796
with risk. No one wants to bring in a system that ultimately
165
00:10:18,828 --> 00:10:22,476
doesn't work. Building this accountability tips the risk
166
00:10:22,548 --> 00:10:25,884
reward ratio back into your favor. It's a
167
00:10:25,892 --> 00:10:29,840
given that AI automation is a huge win for enterprises.
168
00:10:30,260 --> 00:10:34,252
The key is reducing the risks, real and perceived,
169
00:10:34,396 --> 00:10:38,316
associated with implementation. Maisa's chain
170
00:10:38,348 --> 00:10:41,730
of work helps with that ratio, and it's working.
171
00:10:42,430 --> 00:10:46,518
The Context Layer: what makes a great employee?
172
00:10:46,694 --> 00:10:49,990
What makes a great hire? It's not just the credentials.
173
00:10:50,070 --> 00:10:54,006
It's not just the experience. Ultimately, an employee's success
174
00:10:54,078 --> 00:10:57,158
in your organization will depend on their style,
175
00:10:57,254 --> 00:11:00,902
adaptability, and critically also on your ability
176
00:11:00,966 --> 00:11:04,450
to communicate what and how you want things to be done.
177
00:11:04,750 --> 00:11:08,230
Example: you hire a marketer who takes the time to
178
00:11:08,270 --> 00:11:12,280
understand your brand's voice and why you say what you need to say
179
00:11:12,400 --> 00:11:15,660
rather than just churning out bland marketing copy.
180
00:11:16,320 --> 00:11:19,944
Example: you hire an HR person who understands
181
00:11:19,992 --> 00:11:23,928
that he or she is actually building company culture, not just
182
00:11:23,984 --> 00:11:28,632
creating an employee handbook. This is the key reason GPT-4
183
00:11:28,736 --> 00:11:32,008
isn't an amazing employee. No matter what you do,
184
00:11:32,144 --> 00:11:35,762
GPT-4 doesn't get you or your company. It
185
00:11:35,786 --> 00:11:39,634
acts according to a set of rules, but it lacks the nuance and decision
186
00:11:39,722 --> 00:11:43,538
making context you'd expect from a human employee. Even if you
187
00:11:43,594 --> 00:11:47,234
were to articulate those rules to an AI workflow or custom
188
00:11:47,322 --> 00:11:51,282
GPT, you'd never get all of them, for a few reasons:
189
00:11:51,346 --> 00:11:55,234
A lot of what we learn at a new job isn't written down anywhere.
190
00:11:55,362 --> 00:11:59,122
It's learned by observation, intuition, through receiving
191
00:11:59,186 --> 00:12:02,992
feedback and asking clarifying questions. It's usually the
192
00:12:03,016 --> 00:12:06,512
ability to access and incorporate the unwritten stuff that
193
00:12:06,536 --> 00:12:09,460
distinguishes a great from a good employee.
194
00:12:10,040 --> 00:12:13,728
The actual stuff that is written is all in unstructured data,
195
00:12:13,864 --> 00:12:17,264
not in a database, but in PDFs with instructions,
196
00:12:17,392 --> 00:12:20,128
code, even in company emails.
197
00:12:20,304 --> 00:12:23,552
Most AI tools at the moment aren't plugged into the
198
00:12:23,576 --> 00:12:26,752
unstructured data ecosystem of a company, let
199
00:12:26,776 --> 00:12:30,348
alone the minds of the current employees. We've talked about
200
00:12:30,404 --> 00:12:34,684
how one of the advantages of agents versus RPA is precisely
201
00:12:34,732 --> 00:12:38,492
this contextual understanding. It provides adaptability
202
00:12:38,636 --> 00:12:42,800
and eliminates the need for insanely costly process mapping.
203
00:12:43,220 --> 00:12:46,844
Organizing this knowledge is possible and it's been proven
204
00:12:46,892 --> 00:12:50,684
in more constrained environments. Industry standard
205
00:12:50,732 --> 00:12:55,132
retrieval-augmented generation (RAG) systems are
206
00:12:55,156 --> 00:12:58,876
a decent start, but they eventually break under large data
207
00:12:58,948 --> 00:13:02,652
sets or specialized knowledge, making this a challenge.
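For reference, the standard RAG loop being described is roughly the following sketch; embed and generate stand in for whatever embedding model and LLM are in use. Retrieve the nearest chunks, stuff them into the prompt, answer. It works until the corpus grows large or the knowledge becomes too specialized for generic similarity search.

import numpy as np

# Minimal retrieval-augmented generation loop (illustrative only).
def rag_answer(query, chunks, embed, generate, k=3):
    chunk_vecs = np.array([embed(c) for c in chunks])   # pre-computable in practice
    q = np.array(embed(query))
    scores = chunk_vecs @ q / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    top = [chunks[i] for i in np.argsort(scores)[::-1][:k]]
    context = "\n\n".join(top)
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {query}")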
208
00:13:02,796 --> 00:13:06,700
Maisa approaches this differently by developing a virtual context
209
00:13:06,780 --> 00:13:10,476
window (VCW). It bypasses these complexities
210
00:13:10,588 --> 00:13:13,800
by functioning as an OS-like paging system.
211
00:13:14,180 --> 00:13:17,900
Digital workers load and navigate only the data they
212
00:13:17,940 --> 00:13:22,076
need per step, giving them effectively unlimited memory and zero
213
00:13:22,148 --> 00:13:25,932
collisions. No fine-tuning or unwieldy indexes
214
00:13:25,996 --> 00:13:29,516
needed. Crucially, the VCW also doubles
215
00:13:29,548 --> 00:13:33,516
as a long-term know-how store for each worker, meaning they adapt
216
00:13:33,548 --> 00:13:36,720
to new instructions or data seamlessly.
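The VCW itself is proprietary, so the following is only our interpretation of the paging metaphor, with invented names: a small active window of pages is loaded per step, evicted when full, and backed by a durable note store the worker can write to and page back in later.

# Hypothetical sketch of an OS-like paging approach to agent context.
class PagedContext:
    def __init__(self, store: dict[str, str], window_size: int = 4):
        self.store = store                 # full knowledge base, keyed by page id
        self.window: dict[str, str] = {}   # small active context for the current step
        self.window_size = window_size
        self.notes: dict[str, str] = {}    # long-term know-how accrued by the worker

    def page_in(self, page_id: str) -> str:
        if len(self.window) >= self.window_size:
            self.window.pop(next(iter(self.window)))        # evict the oldest page
        self.window[page_id] = self.store.get(page_id) or self.notes.get(page_id, "")
        return self.window[page_id]

    def remember(self, key: str, value: str) -> None:
        self.notes[key] = value            # survives beyond any single task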
217
00:13:37,220 --> 00:13:41,004
A critical part of the AI agent stack must be this contextual
218
00:13:41,052 --> 00:13:44,972
layer. Your customer will think of this as the space where they onboard an
219
00:13:44,996 --> 00:13:48,748
AI worker into their organization's unique approach and style.
220
00:13:48,924 --> 00:13:53,084
The challenge is to devise a way to encapsulate that context
221
00:13:53,132 --> 00:13:56,876
for your customers and translate that into your agent's
222
00:13:56,908 --> 00:14:00,892
DNA, both at the moment of onboarding and in the future,
223
00:14:01,076 --> 00:14:04,828
enabling usage of that knowledge and continuous learning.
224
00:14:05,004 --> 00:14:08,892
Some other initiatives in this broader area we have seen: unstructured
225
00:14:08,956 --> 00:14:11,612
data preparation for AI agents,
226
00:14:11,756 --> 00:14:15,052
continuous systems to gather and generate new context
227
00:14:15,116 --> 00:14:18,800
data, systems that allow us to fine-tune models more
228
00:14:18,840 --> 00:14:22,704
easily, memory systems and long context windows
229
00:14:22,832 --> 00:14:26,912
(see one of the latest advancements here), and AI with
230
00:14:26,936 --> 00:14:30,656
an intuitive understanding of emotional intelligence and personality
231
00:14:30,768 --> 00:14:34,208
which will help with all of the above. See our
232
00:14:34,264 --> 00:14:38,288
piece Software with a Soul. The Coordination
233
00:14:38,384 --> 00:14:41,776
Layer: managing the agentic workforce. In the future,
234
00:14:41,848 --> 00:14:45,312
businesses are probably going to manage a set of AI agent
235
00:14:45,376 --> 00:14:48,822
employees. You'll have agents for customer service,
236
00:14:49,006 --> 00:14:52,486
sales, HR, accounting, and it's
237
00:14:52,518 --> 00:14:56,230
likely that different companies will provide each of these workforces.
238
00:14:56,390 --> 00:15:00,582
It's already starting to happen. We're seeing job listings for AI
239
00:15:00,646 --> 00:15:03,958
agents in the wild. Those agents will have to talk
240
00:15:04,014 --> 00:15:07,254
to humans and to each other. Those agents
241
00:15:07,302 --> 00:15:11,878
will also require permissioning and rules with important considerations
242
00:15:11,974 --> 00:15:15,750
for privacy and security. This is an interesting crux
243
00:15:15,830 --> 00:15:18,930
moment in the development of the AI agent space.
244
00:15:19,230 --> 00:15:22,902
It seems obvious that we will have swarms of agents speaking to
245
00:15:22,926 --> 00:15:26,022
one another, but you could imagine a world where that
246
00:15:26,046 --> 00:15:29,446
isn't the case. You could see a dynamic where companies,
247
00:15:29,598 --> 00:15:32,870
likely incumbents, look to own the whole system of
248
00:15:32,910 --> 00:15:36,822
agent building and managing. In that case, they would probably look
249
00:15:36,846 --> 00:15:39,716
to discourage collaboration with other systems.
250
00:15:39,838 --> 00:15:43,304
A winner-take-all dynamic. That said, there's not a
251
00:15:43,312 --> 00:15:46,440
ton of evidence to suggest any AI products have
252
00:15:46,480 --> 00:15:50,584
developed that way so far. With the exception of GPUs,
253
00:15:50,712 --> 00:15:54,968
most of the raw materials needed to build AI products and systems like
254
00:15:55,024 --> 00:15:58,580
foundational models aren't owned by one or two companies.
255
00:15:58,880 --> 00:16:02,056
We have OpenAI, Claude, Gemini,
256
00:16:02,168 --> 00:16:05,102
Mistral, and now DeepSeek.
257
00:16:05,296 --> 00:16:08,626
With the sheer number of startups we're seeing in the agent space
258
00:16:08,698 --> 00:16:12,914
right now, it seems more likely that someone deep in the AI agent
259
00:16:12,962 --> 00:16:16,706
world will solve the communications and permissioning problem
260
00:16:16,858 --> 00:16:20,546
faster than an incumbent can shut them out. Ultimately,
261
00:16:20,658 --> 00:16:24,270
a thriving agent ecosystem is a win win for everyone.
262
00:16:24,810 --> 00:16:28,594
From the customer perspective, it provides you with an endless pool
263
00:16:28,642 --> 00:16:32,706
of potential AI talent and the ability to choose the best fit
264
00:16:32,738 --> 00:16:36,518
for you. From a founder's perspective, it opens
265
00:16:36,534 --> 00:16:39,814
the door to network effects. Each new agent
266
00:16:39,862 --> 00:16:43,542
that's added to the ecosystem actually benefits you if
267
00:16:43,566 --> 00:16:47,078
you are the one facilitating the connections. In that
268
00:16:47,134 --> 00:16:50,406
case, interagent communication is essential.
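What that communication might carry is easier to see with a sketch. The message and scope names below are invented; the point is that identity, intent, and explicitly granted permissions travel with every request between agents from different vendors, and anything beyond the grant is rejected.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentMessage:
    sender: str        # e.g. "billing-agent@vendor-a" (hypothetical)
    recipient: str     # e.g. "crm-agent@vendor-b"
    intent: str        # what the sender wants done
    payload: dict      # task data, kept need-to-know
    scopes: tuple      # permissions the sender claims, e.g. ("invoices:read",)

def authorize(msg: AgentMessage, granted: dict[str, set]) -> bool:
    # Reject any request whose claimed scopes exceed what the organization granted.
    return set(msg.scopes) <= granted.get(msg.sender, set())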
269
00:16:50,598 --> 00:16:54,006
Companies on the forefront of this wave already understand this
270
00:16:54,078 --> 00:16:56,646
and are building multimodal capabilities.
271
00:16:56,838 --> 00:17:01,616
Maisa's KPU, for example, is model agnostic.
272
00:17:01,808 --> 00:17:05,712
In a world where foundational models are continuously improving,
273
00:17:05,856 --> 00:17:09,088
flexibility is essential. But we will also need
274
00:17:09,144 --> 00:17:12,800
systems for agents to safely exchange and share knowledge.
275
00:17:12,960 --> 00:17:16,624
This is something to be thinking about now as all these agent
276
00:17:16,672 --> 00:17:20,432
ecosystems get up and running. The Frontier:
277
00:17:20,576 --> 00:17:24,384
Giving AI agents tools for the job. Once we tackle
278
00:17:24,432 --> 00:17:27,842
accountability, context and coordination, we get
279
00:17:27,866 --> 00:17:31,474
to the fun stuff. We're already seeing a market emerge
280
00:17:31,522 --> 00:17:35,138
for tools for AI agents, software that will make them better
281
00:17:35,194 --> 00:17:39,826
at their jobs. Some are calling this nascent space B2A
282
00:17:39,978 --> 00:17:43,282
business-to-agent. This will be a major unlock that
283
00:17:43,306 --> 00:17:46,674
takes agents from rank and file workers to autonomous
284
00:17:46,722 --> 00:17:50,178
decision makers. Imagine if humans weren't allowed to use
285
00:17:50,234 --> 00:17:53,630
calculators or computers. Once you deploy an agent,
286
00:17:53,710 --> 00:17:57,614
you have to set them up for success. We're already at the beginning of
287
00:17:57,622 --> 00:18:01,822
this world. We've seen ChatGPT use a web browser,
288
00:18:01,966 --> 00:18:05,022
Claude move a cursor around a screen.
289
00:18:05,166 --> 00:18:08,830
ElevenLabs can give them a voice, but we can imagine this
290
00:18:08,870 --> 00:18:12,590
world getting 10 times better. Agents will need to be able
291
00:18:12,630 --> 00:18:16,094
to pay one another for services. They'll need to be able to
292
00:18:16,102 --> 00:18:20,374
enter into contracts or plug into systems where humans and programs
293
00:18:20,422 --> 00:18:24,182
already interact. Apps can inspire infrastructure
294
00:18:24,246 --> 00:18:28,390
and vice versa. Within the AI agent space,
295
00:18:28,510 --> 00:18:32,566
we're seeing this dynamic as well. These infrastructural layers
296
00:18:32,598 --> 00:18:35,878
will inspire apps, new types of agents,
297
00:18:35,974 --> 00:18:39,414
plus tools for agents which will inform progress at
298
00:18:39,422 --> 00:18:43,414
the infrastructure layer. Creating tools where agents themselves are
299
00:18:43,422 --> 00:18:46,914
the end user is a massive area of white space.
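A tool whose end user is an agent usually reduces to a machine-readable description plus a guarded callable. The sketch below follows the common function-calling convention; the tool name, fields, and payments example are ours, purely for illustration.

PAY_TOOL = {
    "name": "pay_invoice",
    "description": "Pay an approved invoice from the operating account.",
    "parameters": {
        "type": "object",
        "properties": {
            "invoice_id": {"type": "string"},
            "amount_cents": {"type": "integer", "minimum": 1},
            "currency": {"type": "string", "enum": ["USD", "EUR"]},
        },
        "required": ["invoice_id", "amount_cents", "currency"],
    },
}

def pay_invoice(invoice_id: str, amount_cents: int, currency: str) -> dict:
    # A real implementation would call a payments API, enforce spending limits,
    # and require human approval above a threshold.
    return {"invoice_id": invoice_id, "status": "queued"}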
300
00:18:47,082 --> 00:18:51,410
We're watching it closely. What it takes to onboard AI
301
00:18:51,490 --> 00:18:54,946
agents. Let's be clear. We are all in
302
00:18:55,018 --> 00:18:59,362
on agents and excited about the potential they hold. To us
303
00:18:59,466 --> 00:19:02,882
and most of the founders we work with, the world where we are
304
00:19:02,906 --> 00:19:06,870
all using AI agents each day is an inevitability.
305
00:19:07,370 --> 00:19:11,122
Part of this excitement is building this new ecosystem
306
00:19:11,186 --> 00:19:14,814
from the bottom up. We have to really understand what it takes
307
00:19:14,862 --> 00:19:18,478
to get people to adopt a whole new computing paradigm.
308
00:19:18,654 --> 00:19:21,982
There is a life cycle to these things and we're only at
309
00:19:22,006 --> 00:19:25,582
the beginning. Creating these layers will be key to
310
00:19:25,606 --> 00:19:29,166
making AI agents tools that most people trust and
311
00:19:29,238 --> 00:19:33,070
use every day. These are the challenges that will catapult us over
312
00:19:33,110 --> 00:19:37,102
the adoption gap. We're excited for the companies that recognize
313
00:19:37,166 --> 00:19:40,982
this challenge and dove right in. They're the new infrastructure
314
00:19:41,046 --> 00:19:43,990
upon which the AI agent revolution will be built.