We aren’t building AI models for the sake of it: Adobe’s Deepa Subramaniam


Miami, Florida: The timing may have surprised many, but the move itself wasn’t really a surprise. OpenAI announced its generative video AI, Sora, earlier this year, and a while ago Meta talked about its take on a similar generative model. With the Firefly Video model announced at the annual MAX keynote this week, Adobe has claimed bragging rights of its own as the generative video chapter is being written. Expect things to develop quickly in this space, as generative video tools find relevance in video editing workflows. That is just one part of Adobe’s artificial intelligence (AI) push, which now underpins most of the 100 new features introduced across the company’s software platforms.

Deepa Subramaniam, Vice President, Product Marketing, Creative Professional at Adobe.

Clearly, Deepa Subramaniam, Vice President, Product Marketing, Creative Professional at Adobe, has a lot filling the in-tray on her desk. She oversees the development of Adobe’s most popular platforms, with a focus on sticking to the AI ethics and safety policies that have defined the Firefly models’ evolution since they were first introduced in March last year. “We’re focusing on just improving that even further, maturing that. It means more models, and more workflows in our applications,” she says, in a conversation with HT. She talks about the Firefly models, how workflows find new relevance within apps including Photoshop and Lightroom, the generative AI tools such as Fill and Extend that were developed based on user feedback, and how the competitive landscape is shaping up. Edited excerpts.

Q. Compared to when Firefly was first announced, there is a better understanding of generative AI now, of what works and what needs work. Would you say there have been unique takeaways from its integration across Adobe’s apps?

Deepa Subramaniam: It’s just such an exciting time as this incredibly powerful new technology develops, both within the industry and through our own learnings, as we develop these foundational models, improve them, expand into more mediums and also deeply integrate them into our applications. We’ve been on that journey, and the journey continues.

We debuted the image model as our first foundational Firefly model, and it was really well received. That was because of the approach we took, and that is what really differentiates us, which continues to this day. It is designed to be safer for commercial use based on how we trained our model, and we are really developing models while thinking about the workflows and the tooling that we want to unlock. In that regard, there are more and more models: an image model, a vector model, a design model and, as we announced at the keynote, a video model. But our approach continues to stay the same because it’s really working, going by the reception, response and usage we’re seeing from customers across all segments.

Everyone from enterprises all the way to individual freelancers has really reinforced that this is how they want to see AI brought to life.

We’re focusing on just improving that even further, maturing that. It means more models, and more workflows in our applications. It is very exciting that Premiere is now joining the set of professional applications, alongside Photoshop, Illustrator and Lightroom, that have generative AI integrated, with Generative Extend and Firefly deeply built in. But it is also about improving the control capabilities that creators have. Content authenticity, the web app and provenance data, again, that has been a foundational approach. Using content credentials to drive literacy and transparency in the provenance of content, and now maturing that by actually having this web application that we really want all creators to use to essentially verify and sign their content.

Our approach hasn’t changed, but the proof points behind it, the milestones, the maturity of it, have just been getting deeper and deeper, and better and better. That’s really what’s been so enjoyable about this whole process.

Q. There is powerful generative tech such as distraction removal and generative expand now available. Do you believe AI’s influence, in a way, changes the definition of creativity?

DS: I would argue that one’s definition of creativity is almost a personal choice. The choice to express yourself, to tap into your creative spirit, is something we believe is an innately human thing. To want to create and express yourself is very much a personal choice.

Is the act of removing distractions from a photo actually creativity? I would ask the person removing the distractions if they think of that as creative expression, to get the picture that they want to share. Again, I don’t even mean on social media, just with friends and family.

I’m an avid Lightroom mobile user, but I barely post on social media. However, I use Lightroom to curate the photos that really matter to me and to share those photos with my friends and family.

The act of editing in Lightroom to me is not just about getting the photo I want, but reliving that photo through the act of editing and tapping into the nostalgia, reliving the moment of my son laughing at the zoo, to then share that with family members. To me, that’s creativity. I think creativity spans many definitions across as many spectrums, and its definition is up to the creator.

Q. Even after all these years, Adobe doesn’t have direct competition of this scale, but challenges are being posed by smaller, more focused platforms. How is Adobe contending with that, and does that warrant a change in approach?

DS: Adobe has products across any number of categories: imaging, 3D, video, professional design, photography, you name it. That’s a wide spectrum of applications across categories. Growth and more people coming into those categories is a good thing. Innovation in those categories is a good thing. I love that video is exploding, as more people are being asked to do video. I love that digital photography is more accessible to your consumer photo hobbyist, moving beyond the realms of professional photography, and that design is an increasingly growing category with more and more people wanting to do it, whether it’s graphic design or website design, for example.

What’s really great about all that is, that’s an opportunity for us. We are helping to drive that accessibility and that growth through our products. We showed Neo, which is all about making 3D more accessible to designers. We showed the power line removal (Deepa references a major update, the Distraction Removal tool). It’s just a very exciting time in these creative categories, and that’s what fuels us every day. That’s why we’re innovating to bring our applications, especially our flagship applications, out of the desktop and into the browser and onto the phone. We want to continue reaching new audiences who are coming into these categories wanting to create and wanting the best tools to do that.

Q. Adobe always makes it a point to say that functionality and updates are shaped by listening to feedback from users. Does that approach carry an inherent disadvantage over time?

DS: Rather than a disadvantage to speed and innovation, I would say look at the speed of innovation. For instance, our image model was released last March. Since then, we followed with deep integrations into Photoshop; May 2023 was when Generative Fill was brought into Photoshop.

Our vector model came in the fall of 2023, followed by deep integration into Illustrator. At the same time came the maturing of our image model from version one to version two and now to version three, and the introduction of our design model in Express. Our pace of innovation on Firefly has been incredible, and I don’t think it is in any way constrained by our approach.

In fact, I think this fuels our approach, because we’re developing these models and putting these workflows out in public betas (Deepa refers to the beta test versions of certain functionality, a regular fixture in Adobe’s apps) with the express purpose of getting feedback.

And I want to say a huge thank you to our community for using these betas, giving us feedback, helping us harden our models and improve our workflows. It has helped us ideate new workflows. Generative Expand in Photoshop came out of the Generative Fill public beta, from Photoshop users who were saying they could use this inpainting to do some outpainting. Similarly, there have been improvements in Illustrator’s generative vector workflows, such as Generative Shape Fill, which was recently released. It has been just an incredible pace of innovation, and that pace has been bolstered by our community engaging with us in these public betas to help improve workflows and models.

Q. The generative video AI chapter has just begun to be written. How do you envision this developing over the next year, with a backdrop that includes a responsible and ethical approach to AI, as well as inevitable competition?

DS: It has been really exciting to work on all things video. The Firefly Video model was the culmination of a lot of hard work, and it’s honestly just the start of a journey with Firefly and video, and integration into our applications. I think it just comes back to the fact that we didn’t develop this in a silo. We’re always talking to our users as we work on the video model. That is how it has been from day one, thinking about workflows and how we want to unlock them in our applications. We have spent thousands of hours talking with our professional video editing community, and this challenge around being able to extend footage came up constantly. Honestly, it was a challenge that was really hard to address prior to generative AI. With generative AI, that’s a perfect use case, done in a way exactly how our community wanted to see generative AI used, as an ingredient in bigger workflows. It really came together through this close partnership between our research team working on the models and our application team working on the app.

Our Premiere team was really thinking through what actually has to be in the model to be able to support workflows with the quality, speed and precision that we sought. That’s just one example among many, which brought Generative Fill to Photoshop, Generative Remove to Lightroom and now Generative Extend to Premiere Pro. There’s a real reception that we’re seeing for this extra layer of innovation.

We are not developing models for the model’s sake. We’re thinking about the model in conjunction with a workflow and the tooling that we want to support. It’s not just the capabilities and the latent space within the model. It’s also the experience. With Generative Extend, for example, it’s directly in the timeline, and when you’re creating the footage, it’s marked as AI generated. We always want our editors to know what is their human-shot footage versus AI-generated footage. These details are what come through that partnership. And of course, we’ll only get better through the public betas, so we want everyone to use the beta.


