If you have closely followed the development of OpenAI, the company run by Sam Altman whose neural networks can now write original text and create original pictures with astonishing ease and speed, feel free to skip this part.

If, on the other hand, you’ve been paying only casual attention to the company’s progress and to the sudden appeal that other so-called “generative” AI companies are gaining, and you want to better understand why, you might enjoy this interview with James Currier, a five-time founder and now venture capitalist who co-founded the firm NFX five years ago with several of his serial-founder friends.

Currier falls into the camp of people who follow this progress closely, so closely that NFX has made numerous related investments in “generative tech,” as he describes it, and it’s getting more of the team’s attention every month. In fact, Currier thinks the excitement about this new wrinkle in AI is less hype than a realization that the wider startup world is suddenly facing a very big opportunity for the first time in a long time. “Every 14 years,” says Currier, “we get one of these Cambrian explosions. We had one around the internet in ’94. We had one around mobile phones in 2008. Now we have another one in 2022.”

In retrospect, I wish I’d asked better questions, but I’m learning here, too. What follows are excerpts from our chat, edited for length and clarity. You can listen to our longer conversation here.

TC: There’s a lot of confusion about generative AI, including how new exactly it is or whether it’s just become the latest buzzword.

JC: I think what happened in the AI world broadly is that we felt we could have deterministic AI that would help us identify the truth of something. For example, is this piece on the production line broken? Is this an appropriate date? It’s where you’re identifying something using AI in the same way a human would identify it. That’s largely what AI has been for the last 10 to 15 years.

The other set of algorithms in AI were more of these diffusion algorithms, designed to look at huge corpora of content and then generate something new from them, saying, “Here are 10,000 examples. Can we create the 10,001st example that is similar?”

They were pretty fragile until about a year and a half ago. [Now] the algorithms have gotten better, but more importantly, the corpora of content we’re looking at have gotten bigger because we simply have more processing power. So these algorithms followed Moore’s law, [with vastly improved] storage, bandwidth, and computing speed, and suddenly became capable of producing something that looks very much like what a human would produce. That means the face value of the text it writes and the face value of the drawing it draws look very similar to what a human would do. And all of this has happened in the last two years. So this is not a new idea, but it has newly crossed this threshold. That’s why everyone looks at it and says, “Wow, this is magic.”

So it’s computing power that has suddenly changed the game, not some previously missing piece of technological infrastructure?

It didn’t change suddenly; it changed gradually, until the quality of the generation got to where it was meaningful to us. So the answer is generally no, the algorithms are very similar. These diffusion algorithms have gotten a little better, but it’s really about the processing power. Then, about two years ago, [the powerful language model] GPT came out, which was a local compute thing, and then GPT-3 came out, where [the AI company OpenAI] would do [the computation] for you in the cloud; since the data models were so much larger, they had to do it on their own servers. You just can’t afford to do it [on your own]. And that’s when things really took off.

We know because we invested in a company making AI-based generative games, including “AI Dungeon,” and I think the majority of all GPT-3 compute was coming through “AI Dungeon” at one point.

Does “AI Dungeon” then require a smaller team than another game maker would need?

That’s one of the big advantages, absolutely. They don’t have to spend all that money to house all that data, and with a small group of people they can create dozens of game experiences that all benefit from it. [In fact] the idea is that you’ll add generative AI to old games so your non-player characters can actually say something more interesting than they do today, though you’ll get fundamentally different gameplay experiences from games built around in-game AI versus AI added to existing games.

So the big change is in quality? Will this technology plateau at some point?

No, it will always get better. It’s just that the increments of improvement will be smaller over time, because the models are already getting pretty good.

But the other big change is that OpenAI wasn’t really open. They generated this amazing thing, but then it wasn’t open and it was very expensive. So groups came together, like Stability AI and others, and said, “Let’s just make open-source versions of this.” And at that point, the price dropped 100x, in just the last two or three months.

These are not forks of OpenAI’s technology?

All of this generative technology won’t be built just on OpenAI’s GPT-3 model; it was simply the first. The open-source community has already replicated a lot of its work, and it’s probably six to eight months behind in terms of quality. But it will get there. And since the open-source versions are one-third, one-fifth, or one-twentieth the price of OpenAI’s, you’re going to see a lot of price competition, and you’re going to see a proliferation of models that compete with OpenAI. You’ll probably end up with five or six or eight of them, or maybe 100.

Then unique AI models will be built on top of them. So you might have an AI model that really focuses on making poetry, or models that really focus on making visuals of dogs and dog hair, or one that really specializes in writing sales emails. You’ll have a whole layer of these specialized, custom-built AI models. Then on top of those, you’re going to have all the generative tech, which is going to be: how do you get people to use the product? How do you get people to pay for it? How do you keep people coming back? How do you get people to share it? How do you create network effects?

Who’s making money here?

The application layer, where people will go after distribution and network effects, is where the money will be made.

What about the big companies that will be able to incorporate this technology into their networks? Wouldn’t it be very difficult for a company that doesn’t have that advantage to come out of nowhere and make money?

I think what you’re going to see is something like Twitch, where YouTube could have integrated this into its model but didn’t, and Twitch created a new platform and a valuable new piece of culture, and value for investors and founders, even though it was difficult. So you’re going to have great founders who will use this technology to give themselves an edge, and that will create a seam in the market. And while the big guys are busy doing other things, they’ll be able to build billion-dollar companies.

The New York Times recently published a piece featuring a handful of creatives who said the generative AI applications they use in their respective fields are just tools in a broader toolbox. Are these people being naive? Are they at risk of being replaced by this technology? As you mentioned, the team working on “AI Dungeon” is smaller. That’s good for the company, but potentially bad for developers who might otherwise have been working on the game.

I think with most technologies, people feel some discomfort, [for example with] robots replacing jobs in a car factory. When the internet came along, many of the people doing direct mail felt threatened that companies would be able to sell directly and not use their paper advertising services. But [after they] embraced digital marketing or digital communication through email, they probably got a huge boost in their careers; their productivity, speed, and efficiency increased. The same thing happened with credit cards online. We didn’t feel comfortable putting credit cards online until maybe 2002. But those who embraced [that wave in] 2000 to 2003 did better.

I think that’s what is happening now. Writers, designers, and architects who think ahead and adopt these tools to give themselves a 2x, 3x, or 5x productivity boost will do incredibly well. I think the whole world will see a productivity increase over the next 10 years. It’s a huge opportunity for 90% of people to just do more, make more, be more, and connect more.

Do you think it was a mistake on OpenAI’s part not to [open source] what it was building, given what has sprung up around it?

In the end, a leader behaves differently from the followers. I don’t know; I’m not in the company, so I can’t say exactly. What I do know is that there will be a large ecosystem of AI models, and it’s not clear to me how an AI model stays differentiated as they all reach the same quality and it just becomes a price game. It seems to me that the people who win are Google Cloud and AWS, because we’re all just going to be generating stuff like crazy.

It’s possible for OpenAI to move up or down the stack. Maybe they become like AWS themselves, or maybe they start making specialized AI models that they sell into certain verticals. I think anyone in this space will have the opportunity to do well if they navigate it right; they’ll just have to be smart about it.

NFX has a lot more about generative AI on its site that’s worth reading, by the way; you can find it here.

Why ‘generative AI’ is suddenly on everyone’s lips: It’s an ‘open field’