Microsoft will watermark AI-generated content to fight deepfakes

With more and more "deepfake" images and videos emerging, Microsoft has decided to take proactive steps to make its AI-generated content identifiable, the Financial Associated Press reported on May 24. At its Build 2023 developer conference, the company announced that it will add a feature in the coming months that lets anyone verify whether an image or video clip produced with Bing Image Creator or Microsoft Designer was created by AI. Microsoft says the technology uses cryptographic methods to tag and sign AI-generated content and to attach metadata about its origin. The feature will work with the "major image and video formats" supported by the two AI content-generation tools, but Microsoft did not specify which formats those are.
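Microsoft has not published the details of its scheme, but the general pattern the announcement describes, hashing the generated content, attaching provenance metadata, and signing both so later tampering can be detected, can be sketched in a few lines. The code below is a minimal illustration only; the library choice, field names, and key type are assumptions, not Microsoft's implementation.

```python
# Illustrative sketch of content provenance signing (not Microsoft's actual scheme).
import json
import hashlib
from datetime import datetime, timezone

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def sign_provenance(content: bytes, generator: str, private_key) -> dict:
    """Build and sign a provenance record for a piece of generated content."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. "Bing Image Creator" (hypothetical field)
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return {"record": record, "signature": signature.hex()}


def verify_provenance(content: bytes, manifest: dict, public_key) -> bool:
    """Check that the content hash matches and the signature is valid."""
    record = manifest["record"]
    if hashlib.sha256(content).hexdigest() != record["content_sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        public_key.verify(
            bytes.fromhex(manifest["signature"]),
            payload,
            ec.ECDSA(hashes.SHA256()),
        )
        return True
    except Exception:
        return False  # signature does not match the metadata


if __name__ == "__main__":
    key = ec.generate_private_key(ec.SECP256R1())
    image = b"...generated image bytes..."
    manifest = sign_provenance(image, "Bing Image Creator", key)
    print(verify_provenance(image, manifest, key.public_key()))         # True
    print(verify_provenance(image + b"x", manifest, key.public_key()))  # False
```

In a real deployment the signed record would be embedded in the file's metadata rather than carried separately, which is why Microsoft's feature depends on format support in the image and video containers it writes.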
