Generative AI. You heard about Jasper, Stability AI, and more last week, and it continues. Mike and Paul break down three news stories this week to help you navigate this developing trend in the marketing AI world.
Adobe, along with many other innovators, has been experimenting with generative AI, and will be infusing AI into their Creative Cloud and other products.
It is a transformational technology, one that will accelerate the ways artists brainstorm and explore creative avenues—and make creativity accessible to millions more people. At the same time, like a lot of new technologies, generative AI has raised valid concerns. Among the questions: How is the work of creative people being used to train AI models? And how will we know whether something we see was created by a human or a computer?
Adobe says, “We are committed to helping lead this transition by implementing Generative AI in our tools in a way that empowers artists and opens creativity to new people, but never seeks to replace human imagination and judgment. And we are working with the creative and technological communities to ensure that Generative AI is developed with transparency.”
Next, New York City-based CALA, a startup that bills itself as the “world’s first operating system for fashion,” is adding DALL-E 2 image generation to its platform to enable users to create AI-powered product designs.
With the addition of DALL-E 2 text-to-image generating tools, users can generate new visual design ideas from natural text descriptions or uploaded reference images—which the company says are first-of-its-kind capabilities for the fashion industry.
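For readers curious what an integration like this might look like under the hood, here is a minimal, illustrative sketch of calling OpenAI’s image endpoints from Python. It is not CALA’s actual implementation; the prompt text, file name, and parameter choices are assumptions for illustration, using the generation and variation methods exposed by the openai Python package.

```python
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder; in practice load from config or env

# Generate several candidate designs from a natural-language description,
# similar to the hoodie example discussed later in the episode.
generated = openai.Image.create(
    prompt="a hoodie made of dark, delicate velvet with sewn logo patches, studio product shot",
    n=6,                # six candidate designs, as in the CALA example
    size="1024x1024",
)
for i, item in enumerate(generated["data"]):
    print(f"Design option {i + 1}: {item['url']}")

# Or riff on an uploaded reference image instead of starting from text.
with open("reference_design.png", "rb") as reference:  # hypothetical reference file
    variations = openai.Image.create_variation(image=reference, n=3, size="1024x1024")
print([item["url"] for item in variations["data"]])
```

A product like CALA presumably wraps calls like these in its own design workflow, so users can regenerate or tweak results without ever touching the API directly.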
Finally, Paul and Mike discuss Sequoia Capital, one of the best-known venture capital firms on the planet, and the fact they just signaled that they’re all-in on generative AI.
They published a post called “Generative AI: A Creative New World,” where they laid out their position on generative AI tools like Jasper, DALL-E 2, Stable Diffusion, and others that are taking the world by storm. In the article, Sequoia says that generative AI means “Every industry that requires humans to create original work—from social media to gaming, advertising to architecture, coding to graphic design, product design to law, marketing to sales—is up for reinvention.”
Listen to this great conversation with our team:
Timestamps
00:03:19 Adobe to use AI in Creative Cloud
00:13:39 CALA uses OpenAI’s DALL-E 2 API for product design
00:20:49 Sequoia Capital goes all-in on generative AI
Links referenced in the show
Watch the Video
Read the Interview Transcription
Disclaimer: This transcription was written by AI, thanks to Descript.
[00:00:00] Paul Roetzer: If you look at all the work we do as knowledge workers, and you start piecing it off and say, Okay, let’s do these five things this year and these five things next year, we’ll infuse AI into these aspects of what we do.
[00:00:09] Not replacing the humans per se, but the efficiency with which the outputs are produced, I think a 3x-5x is absolutely within reason. And if you’re in a business, again, if you’re a law firm, an accounting practice, a marketing agency, whatever it is, you should be thinking about a very real near-term future where you can 3x-5x productivity.
[00:00:34] Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You’ll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.
[00:00:55] My name is Paul Roetzer. I’m the founder of Marketing AI Institute, and I’m your host.
[00:01:03] Welcome to episode 22 of the Marketing AI Show. I’m your host, Paul Roetzer, along with my co-host, Mike Kaput. What’s up, Mike? How’s it going? Good. We’re back again. Three straight weeks. Cathy would be proud of us right off. Cathy McPhillips, our Chief Growth Officer who helps us with the podcast,
[00:01:20] Mike Kaput: slash keeps us on schedule.
[00:01:22] Yes. So, Mike is our Chief Content Officer at Marketing AI Institute, and Mike and I co-authored the book, Marketing Artificial Intelligence: AI, Marketing, and the Future of Business. So let’s get started. First, I want to give a shout out to rasa.io, our sponsor of today’s episode. It is the only AI-driven smart newsletter platform, designed so organizations of any size can get more from their email list: more sales, more leads, and more engagement.
[00:01:50] rasa.io knows that publishing a newsletter is time consuming, but they’ve not only simplified the process, they’ve helped create automated, personalized, and more relevant newsletters. Be the compelling and trustworthy voice your customers need, and keep them engaged with the content they want. In return, you’ll receive richer insights and data that can be used for marketing, sales, and product development.
[00:02:13] Want to see how it works? Visit rasa.io. That’s r a s a.io, and click “how it works.” All right, Mike, what do we got? What? What are our three big topics today? Our weekly format is Mike and I spend the week creating a sandbox of things to talk about. Mike picks the three things we’re going to talk about and we go back and forth and talk about them, and then we try and keep ’em extremely newsworthy, hot topics that are going on that you should care about as a marketer or business leader.
[00:02:44] And so if this is your first time on our AI Weekly Show, welcome. What do we got, Mike? What are today’s topics?
[00:02:51] Well,
[00:02:51] thankfully for us it seems like every 24 hours there’s some monumental development in the world of ai. So picking fresh topics is pretty easy,
[00:03:01] Paul Roetzer: So much so that this week I was like we need to add a rapid fire at the end that we can just like hit on things that happen in like the 24 hours before we go.
[00:03:10] And if I’m not mistaken, I glanced at the show notes, there were like eight rapid fire items. Yep, yep. Yes. Things are happening fast in the world of AI.
[00:03:19] Mike Kaput: So first up is a really interesting development very directly related to marketing, sales, and design. So this past week at the Adobe Max conference, one of Adobe’s annual conferences, Adobe announced that it plans to actually integrate AI image generation into its products.
[00:03:39] So those tools that we’ve talked about, like DALL-E 2, technology very similar to that. They’re going to be including it in products like Creative Cloud, Photoshop, Adobe Express, and what’s really interesting is that Scott Belsky, who is their chief product officer for Creative Cloud specifically, wrote that not only will AI help Adobe customers create images, but that they “believe that technology will help creatives working in all mediums, video, 3D design, texture creation, logo design, and more, do their work better.” And what’s really interesting about the announcement is not just the generative AI piece, but also they’re really emphasizing responsible use of the technology. So Adobe, which we’ve talked about in the past, has a stated AI ethics policy, and it’s using that to make sure that image generation AI is up to ethical standards.
[00:04:37] And it also promotes its “Content Authenticity Initiative” technology, or CAI technology, which helps creators actually show how AI was used to create a piece of content. So really interesting developments and really interesting big swings from Adobe in the world of generative AI. And I wanted to start off by asking Paul, kind of, what are your thoughts on Adobe’s moves here?
[00:05:04] Paul Roetzer: I mean, I think it’s fascinating what they’re doing. It’s essential for sure. So, I mean, one of the big challenges we’ve had with DALL-E from, you know, April or so when it came out was, you and I, I mean, we have friends that are graphic designers. I think I’ve told this on this show before, my wife is an artist.
[00:05:21] My daughter is 10 and wants to be an artist, like do illustrations, paintings, you know, they’re creative people. And I think there’s been a lot of fear within the creative world, especially with designers and illustrators who see DALL-E and generative AI as a threat. And these are probably a lot of the people who use Adobe products for doing their job.
[00:05:45] And so Adobe is in an interesting spot where they’re kind of walking this fine line where they need to infuse this technology. Like they have no choice. It has to be a part of what they do. And Adobe is one of the more innovative AI companies in the marketing space. For years they’ve been one of the ones we’ve looked to and said, “Well, this is what platform companies should be doing with AI.”
[00:06:07] Adobe has hundreds of AI features within their platform. So I think it makes total sense that they have to make a play into this. You need generative image capabilities baked within Adobe products; it’s essential. Like there’s no alternative but to do that. And yes, it will make people like you and I have new capabilities with design, like maybe we wouldn’t have used Adobe products before, but if that stuff’s infused in and all of a sudden we have the ability to do things that previously we would’ve needed designers or illustrators to do, like logo design’s a great one.
[00:06:39] We would have to hire people to do logo design. If the future or the current state of AI is that we can now design logos without needing a designer, great for us, not great for designers we would’ve paid to do it. And so I think Adobe’s, again, they’re in a difficult spot where they have to innovate.
[00:06:59] They want to innovate. They’re an innovative company. But your core users, at least within Photoshop and Illustrator and some of the other products, are creative professionals who may feel threatened by this very innovation. So the innovation opens the door for other people to become creative in new ways, and it potentially takes away from people who have spent their life being the creative person other people relied on.
[00:07:25] And I get that it’s going to enhance, the designers and illustrators who, you know, embrace generative AI, can do all kinds of new things and interesting things. And we’ve talked about that and written about it before. But there’s also the reality that a lot of those people, their value as a human comes from their creativity.
[00:07:43] Like it’s what they are, what they do, who they are. And they’re not going to just like kumbaya around a campfire, like, great welcome generative AI to our world like, “now you can, anybody can, do what I did for them.” So I don’t know. I think it’s important that Adobe’s taking a leadership role. I like that they’re trying to find ways to support artists and creative professionals in the process through things like trying to identify if their work inspired the AI-generated work.
[00:08:11] Like I have no idea how that’s going to work. I mean, I was reading the Shutterstock stuff from earlier today, which we might get to in the rapid fire, where Shutterstock’s images are used to train the AI and they’re trying to create a creator fund. If your image was uploaded to Shutterstock and it was used to train the AI, and then the AI created an image that was inspired by it, you’re going to get money somehow.
[00:08:32] Like, I don’t know. I mean, it’s almost like harking back to the like Napster days where all of a sudden music was just free and everywhere and we were trying to figure out how to help the musicians still make money in the process. I feel like we’re in a similar state right now where you’re going to be able to create anything you want and any style inspired by any past artist.
[00:08:53] And how you make money in that, I don’t know. So I just think it’s a really great thing that they’re doing it. I don’t know that they have the answers yet. I don’t know if anybody really has the answers for this. And I think it’s just going to keep moving really, really fast. And it’s really important as an industry that we’re at least talking about it and trying to figure out what does this mean for people who are impacted by AI’s newfound abilities to generate stuff.
[00:09:19] Mike Kaput: So one thing you said really struck me because it’s a question I actually wanted to ask you before you even said it. It’s this sense that in some ways Adobe has to make this move to remain competitive. Once generative AI hits the scene, there’s no world, it seems, where Adobe can get away with not figuring out the technology.
[00:09:40] They can’t stick their head in the sand. So, I am led to believe that other incumbents in other areas, other industries, whether it’s with generative AI or other types of AI, are going to face the same thing. How quickly do incumbents need to move when AI invades their industry?
[00:10:00] Paul Roetzer: I think it’s going to be industry specific.
[00:10:03] So as we’ve talked about on this show before, if you’re in a highly conservative, highly regulated industry like healthcare or financial services, you know, it’s probably not going to move as fast. You probably have a lot more barriers to this stuff. You’re in retail, you know, e-commerce, consumer goods, like there are far fewer restrictions on how you’re going to use this stuff.
[00:10:26] And so I do think that there are going to be some people who are going to be left behind pretty quickly. Like I mean, we think about marketing agencies and I know we’ll, you know, probably touch on those throughout today, but you think about like the marketing agency world and you could get left behind in 24 months, like mm-hmm.
[00:10:42] you could be game over in 24 months if you’re not figuring out how to infuse generative AI into your copywriting and your image generation and your video production and editing. I just feel like that tech is going to move so fast in this space. And … if you provide services around this stuff, or if you are on a corporate brand team that’s involved with content creation in any way, I’ve got to think we’re probably looking at a, I don’t know, two to three year window here where if you haven’t moved within the next one to two years, it’s going to be really hard to catch up. That being said, there’s also going to be an argument around the first movers may not be the winners here, because I don’t know if you and I were talking about this the other day, but, there was a tweet this morning from Sam Altman, the CEO of OpenAI, and let me pull it up real quick.
[00:11:44] Because I thought it was fascinating. He said, “The challenge for companies based on large language models (which is what powers the language writing tools; they make GPT-3) is they can do a lot of technical work to get a current generation model to perform okay and then have to get beat by a next-gen model out of the box.”
[00:12:07] So what he’s saying is, let’s say you build your entire content team or your agency or whatever around GPT-3. What happens when GPT-4 comes out and maybe it dwarfs the abilities of GPT-3? Like now all of a sudden you have to figure out, okay, now it’s able to do things 10 times faster than it was doing before.
[00:12:28] It needs far less prompt engineering. Or the things that we just spent all this time tuning it to do, it does out of the box. So if you come in a year from now but you come in at GPT-4 instead of GPT-3, are you that much further behind? I don’t know. All we know, and I am very confident that it has never changed in the 10+ years we’ve done this, is that you need to be paying attention and you need to be watching what’s happening in these trends and the changes in the tech, to figure out the play. And there are too many businesses and brands and agencies that are completely unaware that generative AI is even a thing. And I think that’s, to me, what we’re trying to do: educate a generation of leaders and marketers and practitioners that are aware and are proactively trying to understand and apply the knowledge, because that’s what’s going to make you unique.
[00:13:22] That’s
[00:13:23] Mike Kaput: a great point, and I think it’s one of the reasons I actually teed up this next topic, which is on the surface going to seem very specific, but I think it’s really important for the audience to understand how quickly things are moving. So, back in like April or May of this year, we reported on DALL-E 2 being released.
[00:13:48] Several months later, we reported on it becoming commercially available and now, DALL-E 2 is being deployed in at least one specific case study and use case in product design. So VentureBeat, this past week, illustrated an example, a really concrete example, of how DALL-E 2 is being integrated into an existing product in the fashion industry to essentially augment product designers.
[00:14:17] So a startup called CALA, C A L A, sells a digital platform that helps fashion professionals design clothing lines. So you know, picking designs for any type of apparel you can think of, designing it and then eventually selling that product. And they announced recently, as VentureBeat reported, that their customers can now use DALL-E 2 within the platform to actually generate product designs and/or tweak existing designs. So, an example here would be like, you can go into this platform and say, I want to design a hoodie and you can use DALL-E 2 to actually type in things like, “I want the hoodie to be made of dark and delicate velvet. I want some sewn logo patches on it.
[00:15:07] Show me some variations.” And then CALA, this platform, will actually generate six different example product designs. And you can keep regenerating designs, you can take a design you like and tweak it further. I mean, we knew from the moment we tried out DALL-E 2, stuff like this would become possible. It is stunning to me that we’re, I don’t know, six months out from the thing even being released. And suddenly startups and, you know, major companies and major job roles within marketing, sales and product are starting to use this for these types of concrete product design use cases. So Paul, what were your thoughts on kind of this use case for the fashion industry and just in general in product design?
[00:15:54] Paul Roetzer: I think it is inevitable. I mean, it was inevitable as you said. Like we looked at it like, Wow, this is going to affect everything. I remember shortly after I got access to DALL-E 2, so this is probably in like late May, early June, I had an architect friend over looking at doing an addition on my house and we were trying to visualize how it would look. It was this specific thing my wife wanted to put off the back of the house and there was complexities around the wires and the limbs and all this stuff. And so here’s this architect who I would have to pay to sketch this up for me and think about all the challenges of it and basically come back to me with here’s what would be possible.
[00:16:37] And in the moment I’m thinking, I don’t know that this guy’s going to be needed in five years, like maybe three years. Because again, I had seen what DALL-E 2 could do in its very, very early versions. And I was immediately thinking, well, architecture’s natural. I upload a picture of my house, say I would like to put an orangery off the back or whatever the thing was.
[00:16:58] And, I’d like it to be 20 feet by 40 feet and I’d like the roof to be peaked this way and I’d like 10 windows. No, no, no, make the windows big. Like in my mind, I’m prompt engineering the design I want, and I’m relying on this person to draw it for me and to sketch it. But if you trained DALL-E or you used their API and it was capable of doing design of homes with all these specifications, all I had to do was prompt it with changes.
[00:17:32] Now show it with more brick, add stone here, and it could just do it. Immediately, again, like a month or two into DALL-E, you could see a day where that would absolutely be done. All it takes is a training set. It takes tuning the model to be able to do architecture or shoe design, or clothing design or whatever else you can imagine that you need to do: landscaping, interior design, anything.
[00:18:00] And so I do think, and I believe our last topic will get into sort of this vertical idea, that the near future of the application of these models is domain specific. The models right now are very general, so the ability for GPT-3 to create language, you go get Jasper, copy.ai, Writer, GoCharlie, any of these writing tools, they’re trained in kind of a very general way.
[00:18:27] They’re based on the GPT-3 model that’s general in nature. It’s not domain-specific and not trained on a core set of things that are specific to an industry or something. Same with image technology. It’s trained on just general data. But if you take it and tune it to a specific industry or use case like apparel design or product design, it would seem to me that the possibilities are almost infinite.
[00:18:50] And every one of those use cases does have a trickle-down effect on the people whose job it is, the knowledge workers who have spent their careers gaining the expertise to do their job. It’s possible that a lot of the work they do in the future is actually training and tuning the models that intelligently automate what they used to do.
[00:19:13] I don’t know. I mean, again, it’s the Shutterstock idea. Shutterstock was a stock photography company, and then they used their billions of images or however many images it is to train the DALL-E 2 model so their images became the training data and so, I just think that this is where the real innovation’s going to happen.
[00:19:34] It’s where billion-dollar companies are going to be built: narrow applications of these generative models applied to all kinds of industries and use cases. And it’s really just as far as your imagination can go; it’s taking your domain knowledge and figuring out, how can I apply a generative model to what I do?
[00:19:54] And those are the people who are going to win. There are going to be business people and creative people and leaders who refuse to accept the day it can do their job. That’s fine. You have that choice. But I really just think that at the end of the day, AI is going to have these capabilities whether you want it to or not, and we’re going to have this next five to 10 year window where the people who embrace it and accept it and figure out what to do about it are going to build massive value.
[00:20:24] And the other people just gradually won’t be able to create the same value that they create today without it.
[00:20:33] Mike Kaput: Yeah, and I think that that really segues fantastically into the way that the people funding these new companies, these new roles and these new innovations are looking at this. So in the past week, Sequoia Capital, which is one of the best known venture capital firms on the planet, they’ve invested in everyone from Apple to Google, LinkedIn, PayPal.
[00:21:02] They’re everywhere. They’re a Silicon Valley institution, essentially. They just wrote a post that basically signals that they’re all in on generative AI and outlines really, really interesting considerations along the lines of what you just said related to how this technology can take over any and every domain you can think of.
[00:21:27] So the post that we’re referring to is titled “Generative AI: A Creative New World.” And basically in it, Sequoia lays out their position on the entire space of generative AI and highlights some really interesting opportunities. A couple things that really jumped out at me. Sequoia said that generative AI means “every industry that requires humans to create original work, from social media to gaming, advertising to architecture, coding to graphic design, product design to law, marketing to sales, is up for reinvention.”
[00:22:02] And they go on to outline that they think generative AI is going to bring the marginal cost of creation and knowledge work down to zero. They think it’s going to create vast labor productivity and economic value, and they’re estimating, if this all comes to pass, the market for generative AI solutions alone is potentially worth trillions.
[00:22:26] So that is a very strong statement from some of the smartest people in the game. What do you make of Sequoia’s position on generative AI?
[00:22:37] Paul Roetzer: The part about the trillions, I’ll just read the excerpt because I think it summarizes the basic thesis and kind of adds onto what you’re saying. It says, “The fields that generative AI addresses, knowledge work and creative work, comprise billions of workers.
[00:22:49] Generative AI can make these workers at least 10% more efficient and/or creative. They become not only faster and more efficient, but more capable than before. Therefore, generative AI has the potential to generate trillions of dollars of economic value.” So what I love is rather than spending 50 pages going through a bunch of data and predictions, it was just a very simple equation.
[00:23:11] Let’s accept that there are billions of these people. Let’s assume 10%, which is an insanely conservative assumption because, depending on the knowledge work, we’re talking about four to five times the output, not a 10% increase. So again, very conservatively, they arrive at, “well, the answer is trillions.” Like let’s not mess around and just pretend like we know it’s 800 billion or 4 trillion.
[00:23:35] It’s just trillions. It’s a lot. And to build on that, there was a study that I used in a recent talk from ARC, which is a global asset management company, and they said, by 2030, AI will increase the output of knowledge workers by 140%. So they looked at office admin support, which was actually a 4x output.
[00:23:53] Paralegals, lawyers, bookkeepers, accountants, software developers, you could throw marketers in there, consultants, web developers, whatever; the whole point is it is not just coming for knowledge workers, It’s here and it’s only going to get better. So today I could see, like if you and I apply AI to what we do and say, okay, let’s try and get a 10% lift in efficiency for the tasks we do daily, I think that is infinitely doable right now with today’s technology. I think you need to be looking at it as a business and saying, Okay, how do we get a 30 to 50% lift in efficiency over the next three years, because I think that is infinitely doable. Like if you look at all the work we do as knowledge workers, and you start piecing it off and say, Okay, let’s do these five things this year and these five things next year, we’ll infuse AI into these aspects of what we do.
[00:24:45] Not replacing the humans per se, but the efficiency with which the outputs are produced, I think a 3x-5x is absolutely within reason. And if you’re in a business, again, if you’re a law firm, an accounting practice, a marketing agency, whatever it is, you should be thinking about a very real near-term future where you can 3x-5x productivity.
[00:25:11] I think that’s very feasible.
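To see why that simple equation lands in the trillions, here is a quick back-of-envelope version of the math. The worker count and average output figure are round-number assumptions for illustration only; they are not numbers from the Sequoia post or from Paul.

```python
# Back-of-envelope version of the Sequoia argument (illustrative numbers only).
knowledge_workers = 1_000_000_000   # "billions of workers" -> assume 1 billion
avg_annual_output_usd = 50_000      # assumed average economic output per worker
efficiency_gain = 0.10              # Sequoia's "at least 10%" figure

value_created = knowledge_workers * avg_annual_output_usd * efficiency_gain
print(f"${value_created / 1e12:.1f} trillion per year")  # -> $5.0 trillion
```

Even with deliberately conservative inputs the result is measured in trillions, and swapping in the 3x-5x productivity gains Paul describes only pushes it higher.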
[00:25:17] Mike Kaput: Yeah. That’s really interesting. And they really dive into all of the phases of AI that have led us to the current moment. And I won’t get into the details. I would highly recommend you read the article for yourself.
[00:25:32] It’s
[00:25:32] Paul Roetzer: a long post, but it’s really good.
[00:25:34] Mike Kaput: Yeah, it’s packed. For how much information and insight is in it, it’s very information dense, which is great. And basically Sequoia posits that of all the waves of innovation that have led us here in AI, the phase we’re at right now is where we develop the killer apps for generative AI. And I wanted to kind of get your take, you know, killer apps being kind of a reference to when the iPhone came out.
[00:25:58] At first, it was an incredible innovation, but the ecosystem around it took some time to adapt to just how game changing it was. And eventually you developed these killer mobile apps that caused the ecosystem to just take off with rocket fuel in terms of users, usage, and how many people are now enabled with mobile devices.
[00:26:20] So in that area of generative AI, specifically in marketing and sales, what do you see as potential killer apps that would be game changers if they came into being?
[00:26:31] Paul Roetzer: I mean, we look at language and vision and prediction. So when we think about AI and we teach our intro to AI course and things like that, we always break things into language, vision and prediction as the kind of the broad applications of AI.
[00:26:43] And then underneath those you have more specific applications. So under language you have natural language processing, natural language generation, text-to-speech, speech-to-text, transcription, translation… I would just look at those categories, and then I would apply them to my industry, my business model, whatever it may be.
[00:27:03] And so I agree a hundred percent. This goes back to our domain-specific thing. It’s going to be really hard. So, like, Jasper just raised $125 million for generative AI specifically within language. Now they’re infusing image generation within the language model. Basically it’s like a language model.
[00:27:22] It’s going to be hard to win the platform play to be a language platform. Like, you know, if you think about HubSpot, for example, as a CRM platform across marketing, sales, and service, lots and lots of apps built into one big platform. There’s probably going to be a select few winners in the platform game for language platforms and vision or image platforms.
[00:27:45] But when you start getting to the specific applications, the ability to build those apps, that’s the part where it gets almost limitless with what you can do. So, like, they give the example of a vertical-specific writing assistant. It says most writing assistants today are horizontal. That’s what we were saying.
[00:28:00] Like GPT-3 is trained horizontally. It’s just broadly trained on a bunch of general data, and then people build some specific capabilities into it, like blog post writing, social media shares, newsletter writing, things like that. But the training in the model is basically horizontal. They say, we believe there’s an opportunity to build much better generative applications for specific end markets, from legal contract writing to screenwriting.
[00:28:23] That’s the app. That’s the killer app model. That is the, “We are in insurance, we are in media, we are in entertainment. This is the thing we do.” How do we build a very specific solution on top of GPT-3 or DALL-E or whatever the platform is, whatever the model is, how do we actually build something very specific that end users would find immense value in?
[00:28:47] And then how do we kind of plug that into the larger ecosystem? So yeah, I agree a hundred percent. I think, again, with marketing, sales, service, where we spend most of our time thinking about, you could come up with hundreds of things to build. I mean, it’s just the resource question and, like, do you have the money and the people to do it?
[00:29:07] But the ideas are almost endless of what to build. And
[00:29:11] Mike Kaput: what’s exciting is, as we’ve seen from some of the funding announcements, despite some of the negative economic outlook moving forward, the money is out there. There’s a lot of money flowing into the space and a lot of money sitting on the sidelines that is looking for ideas that, you know, bold entrepreneurs have, whether it’s in marketing or sales or another domain to use this technology.
[00:29:36] Paul Roetzer: Yeah, and the VC, if you think about the traditional VC world, they’re going to be wanting to make their big bets on who are going to be the big platform companies. But once they’ve made those bets, you know, they’re going to need to make a bunch of point solution bets: which are the companies that maybe it’s a hundred million dollar play, or maybe it can get to a billion.
[00:29:54] But I mean, you can build a lot of $100 to $500 million exit companies on very specific applications in vertical industries. I mean, there’s just so much value to be created for people. Because again, just take any industry and say, how do we make that industry more efficient? And then just go build apps to do it.
[00:30:15] That’s the model. Maybe we’ll do it
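To make the “narrow application on top of a general model” idea Paul describes concrete, here is a minimal sketch of a vertical-specific writing assistant built on OpenAI’s completions API. The model name, prompt template, and legal-contract example are assumptions chosen purely for illustration; a real product would add fine-tuning on domain data, retrieval of firm-specific language, and human review.

```python
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder; load from config or env in practice

# The domain knowledge lives in the prompt template: a general-purpose language
# model is steered toward one narrow, high-value task (here, drafting a contract clause).
TEMPLATE = """You are a drafting assistant for commercial contracts.
Write a clear, plain-English {clause_type} clause for an agreement between
{party_a} and {party_b}. Jurisdiction: {jurisdiction}.

Clause:"""

def draft_clause(clause_type, party_a, party_b, jurisdiction):
    response = openai.Completion.create(
        model="text-davinci-002",   # GPT-3-family model available at the time
        prompt=TEMPLATE.format(
            clause_type=clause_type,
            party_a=party_a,
            party_b=party_b,
            jurisdiction=jurisdiction,
        ),
        max_tokens=300,
        temperature=0.3,            # keep drafts conservative rather than creative
    )
    return response["choices"][0]["text"].strip()

print(draft_clause("confidentiality", "Acme Corp", "Globex LLC", "Ohio"))
```

The same pattern, with a different template and tuning data, is what a screenwriting, insurance, or ad-copy assistant would look like; the vertical expertise is the differentiator, not the underlying model.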
[00:30:23] Mike Kaput: All right, as we kind of wrap up here, I’ve got just a one-shot rapid fire this week I want to throw at you, one that you alluded to: Shutterstock. So we saw this morning that Shutterstock will start selling AI-generated stock imagery with help from OpenAI, the creators of DALL-E 2. As part of this move, DALL-E 2 is being directly integrated into Shutterstock’s platform, so you’re going to be able to get AI-generated stock images. And what’s really interesting is, like you said, Shutterstock was actually used at least in part to train DALL-E 2; they took the images from the platform to teach DALL-E 2 all the different features and commonalities of images, thus making DALL-E 2 possible. And Shutterstock has actually launched a contributor fund to pay creators whose content is sold to firms like OpenAI to train these models. So there’s kind of two really interesting pieces to that. What’s your initial take on this? We may dive into it deeper on another podcast, but I want to kind of get your initial read on Shutterstock leaning into this in this way.
[00:31:32] Paul Roetzer: We knew something was coming from Shutterstock. They bought three AI companies last year, I believe, including, you know, our friends at Pattern 89. And so you knew they were making a bigger play. You could theorize what it might be, so in some ways, I would say this is kind of an inevitable outcome, but much like Adobe, their entire business model was built on people, creative professionals who take amazing photos and create amazing images.
[00:32:04] And so they have to walk this fine line between respecting that core audience and trying to find a way for those people, those creative professionals, to have some value created in return for what they did. But as an organization it’s real hard not to look to the future. I don’t know how you differentiate. I mean, I could almost see a day… maybe this is what they’re going to do, I don’t know, because there was a lot of, “in a few months we’re going to announce these things.” It didn’t actually tell you exactly how this was going to work. I could see a day where we actually go into like a Shutterstock and say, I want to see human-generated content and I want to see AI-generated content.
[00:32:50] Human only. They couldn’t have used, weren’t allowed to use, AI. This is raw human creation based on experiences and knowledge and all these things. So my raw human creation, I mean, it’s a premium on raw human creation. I don’t know. I’m glad I’m not the one at Shutterstock and Adobe who has to figure this out, because it’s a really hard problem.
[00:33:10] But, you know, I also think, and maybe this is a topic for another time, but you gotta wonder if at some point, when the ability to generate AI images is so commoditized, like you can go in any platform and just create them. So if I’m in HubSpot, I can create a DALL-E image. I’m in Shutterstock, I can create a DALL-E image.
[00:33:27] I’m in, you know, Jasper or wherever I’m at, I can create AI images on the fly and they’re amazing. And it’s just on-demand stuff. Maybe language gets that way. I can create social shares and blog posts and ad copy just on the fly, and it’s just readily available. I wonder like, what does that future state look like where human created stuff is differentiated and maybe more scarce?
[00:33:52] Because I mean, how long until most of the images and the content you find on Google is actually generated by AI, right? And so does human generated creativity become a scarce resource and in some way become more valuable? I don’t know. Like I’m thinking out loud here, but I think again, it’s a fascinating topic.
[00:34:14] It’s intriguing to me how Getty Images and Shutterstock and Unsplash and all these companies are trying to figure this out in real time, because I feel like everybody got caught flat footed. No one that I’m aware of was ready for how quickly this generative AI stuff was going to take over. And there’s a lot of companies that are having to try and figure this out really fast right now.
[00:34:37] Yeah, I think
[00:34:37] Mike Kaput: that that’s a perfect kind of sentiment to end on. That’s why we try to counsel people as much as possible to start playing around with and understanding artificial intelligence as best you can, at least get started, because nobody can predict exactly how fast things are going to move and what the next big development will be.
[00:34:58] It’s almost like you have to prepare far before the development even happens, so that you’re in a position to take advantage of stuff like this when it does take the world by storm. Yep.
[00:35:11] Paul Roetzer: Good stuff, man. I know we got other stuff that was sitting in the sandbox. We’ll be back again next week. If you find this content valuable, subscribe, follow, download, whatever it is you do to follow the podcast. It’s published on YouTube. It’s available on the podcast network, it’s published on our blog each week. So yeah, just stay connected and then reach out to Mike and I on LinkedIn; it’s a great way to do it. We’d love to hear your topics and concerns and questions. So maybe we’ll start infusing those in. I think last week we introduced rapid fire, maybe next time it’s going to be a community question, so if you’ve got ideas or questions you want to hear, throw those our way. Maybe we’ll start answering some listener questions each week as well, because I know I’ve got a million of them, so hopefully you do too. All right.
[00:35:59] Well, Mike, great job as always. Thanks again and hopefully you’ll join us again and everyone will join us again next week. Same time, same place. All right, Mike. Talk to you soon, man. All right.
[00:36:09] Mike Kaput: See ya. All right.
[00:36:11] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you’re ready to continue your learning, head over to marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.
[00:36:33] Until next time, stay curious and explore AI.