Miami, Florida: For Adobe, artificial intelligence (AI) is playing a pivotal role across its wide portfolio of software for enterprises, creators and consumers, spanning desktop and mobile use cases. The scale and rapid pace of that development were plain to see at the company’s MAX 2024 keynote, or in the “more than 100 new features”, as Adobe likes to put it, across Photoshop, Lightroom, Express, Illustrator and more. Underlying most of that new functionality is the Firefly AI model suite, including the rather precise distraction-removal generative AI features for photo editing, and Generative Extend for filling and editing video timelines.
The company understands it is important to set the tone for how these AI models are developed. As Grace Yee, Senior Director of Ethical Innovation (AI Ethics and Accessibility) at Adobe, puts it, the goal is “our AI ethics framework to ensure ethical considerations are central to our technology development.” In a conversation with HT, Yee details how AI ethics have been developed as a company-wide effort involving many teams, as Firefly finds deeper integration across different apps, as well as her outlook on how these ethics may evolve as generative AI and the broader space continue to develop.
At Adobe MAX 2024, the company talked about scaling for the future, with the five-year milestone of its AI ethics principles indicating maturity in the space. “Over the past five years, my team has transformed our AI ethics principles from theoretical concepts into actionable guidelines embedded within our engineering and product development practices. These principles have become foundational to our approach,” she says. Edited excerpts.
Q. How has Adobe woven AI, and the ethical conundrums that come with it, into Firefly as the model continues to scale?
Grace Yee: At Adobe, we came up with three principles: accountability, responsibility and transparency. Accountability is about making sure we have proper mechanisms in place to receive and respond to concerns. Responsibility is about making sure that our AI innovation undergoes evaluation and careful diligence. Transparency is about making sure that our customers know how we put AI in our products. The principles we put together back then, with what people are now calling classic AI, are the same principles we have today for generative AI.
What’s changed is that we’re embedding these AI features into our products, and we have a lot of different products and features. It really is the same process and approach; there’s just more of it. It’s about understanding how these features, these AI technologies, are being embedded to power what we have, and understanding our customers and how they’re using these features and products.
Based on that, we understand the specifics, such as the harms we need to think about, both from an intentional and an unintentional perspective. That’s really the biggest change we’ve seen, and the assessment has obviously changed a little bit along with it. We have a lot of questions about generative AI versus classic AI, and about our approach as we move into these spaces.
Q. Going forward, will it be challenging to uphold the same principles as AI scales up?
GY: One thing we have started to do, since we are taking our Firefly model and embedding it into different applications at Adobe, such as Photoshop and Illustrator, is making sure we understand the guardrails that are, and need to be, in place. Different teams are taking the model and doing a little bit more with it, and we’ve helped them with the innovation piece of it.
That’s because you may be using Firefly in a particular way, but you don’t have to go through such an extensive evaluation, because we know what guardrails are there. We are really just focused on the additional changes being made, so it’s a slightly less intensive evaluation than if you were building another Firefly model.
Q. AI, they say, is only as good as the data it is trained on. How is Adobe balancing its AI ethics principles for the data sets in use, without putting the models at a learning disadvantage?
GY: We’ve trained Firefly on our Adobe Stock content and on public domain content where copyright has expired. It is a much smaller training set than what others have used, but I believe this is where the Adobe magic comes in. What our teams are able to do is use some of the magic we’ve put into our previous features and really make the output amazing.
Q. How important will it be for Adobe to update its vision for AI and ethics, based on a rapidly changing landscape?
GY: I think one of the great things is that the product teams we’re working with engage early through a risk discovery process. I am sure things will change, and we’ll probably need to make tweaks as we go along. It’s an iterative process, with more learning and the anticipation that we will make changes. In terms of what those changes are, that will depend on the types of technologies we develop, and eventually on how we integrate those into our products and features in the future.