Pushing the Limits of Deep Space Exploration! Chinese scientists use an astronomical AI model to create the "Ultimate Deep Space Map"

Exploring distant, faint celestial bodies and structures is key to solving scientific mysteries such as the origin and evolution of the universe and the cycle of matter and energy. Chinese scientists have developed an astronomical AI model called “Xingyan” based on computational optical principles and artificial intelligence algorithms. The model can decode signals from faint celestial bodies, detect galaxies more than 13 billion light-years away, and has produced the deepest space images yet achieved internationally. The results were published online in Science on February 20.

Faint celestial bodies contain critical information for understanding the origin and evolution of the universe. However, background noise from starlight and thermal radiation from telescopes can interfere with signals from faint objects, posing a major challenge to exploring the cosmos.

The figure shows a conceptual diagram of the astronomical AI model Xingyan. (Provided by interviewee)

Led by Professor Dai Qionghai from Tsinghua University’s Department of Automation, Associate Professor Cai Zheng from the Department of Astronomy, and Associate Professor Wu Jiamin from the Department of Automation, the team independently developed the Xingyan model. It can decode vast amounts of data from space telescopes and is compatible with multiple detection devices, potentially becoming a universal deep space data enhancement platform.

“Apparent magnitude” is a measure of celestial brightness: the higher the value, the dimmer the object. The research shows that when Xingyan is applied to the James Webb Space Telescope, its coverage extends from visible light (about 500 nanometers) to the mid-infrared (5 micrometers), its deep-space detection depth improves by one magnitude, and its detection accuracy improves by 1.6 magnitudes, equivalent to enlarging the telescope’s effective aperture from about 6 meters to nearly 10 meters.
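The aperture equivalence follows from how magnitudes relate to flux. Below is a rough back-of-envelope sketch in Python, not the paper’s calculation: it assumes point-source depth scales with collecting area (the square of the mirror diameter) and uses JWST’s 6.5-meter mirror with the article’s one-magnitude depth gain as inputs.

```python
# Rough check of the magnitude-to-aperture equivalence (a sketch, not the
# paper's method). Assumption: point-source depth scales with collecting
# area, i.e. with the square of the mirror diameter D.

def effective_aperture(d_meters: float, delta_mag: float) -> float:
    """Equivalent mirror diameter after gaining delta_mag magnitudes of depth."""
    flux_gain = 10 ** (0.4 * delta_mag)  # 1 mag deeper ~ 2.512x fainter flux
    return d_meters * flux_gain ** 0.5   # area ~ D^2, so D grows as sqrt(flux_gain)

print(round(effective_aperture(6.5, 1.0), 1))  # JWST's 6.5 m mirror -> ~10.3 m
```

With these assumptions, a one-magnitude gain on a 6.5-meter mirror works out to an effective diameter of roughly 10.3 meters, consistent with the figure quoted above.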

“We have generated the deepest space images achieved internationally to date, breaking through the limits of deep-space detection,” Cai Zheng said. Using Xingyan, the team discovered more than 160 candidate early-universe galaxies dating from 200 million to 500 million years after the Big Bang; previously, only about 50 such galaxies had been identified worldwide.

The figure compares previous research (blue-purple star markers) with candidate galaxies discovered by Xingyan (orange star markers). (Provided by interviewee)

Wu Jiamin explained that Xingyan’s “self-supervised spatiotemporal denoising” technology is trained directly on large volumes of observational data and extracts and reconstructs faint signals by jointly modeling noise fluctuations and celestial brightness, improving detection depth without sacrificing accuracy.
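The article does not describe Xingyan’s architecture, so the sketch below is only a generic illustration of what “self-supervised” denoising can mean: a Noise2Noise-style setup that trains a small network to map one noisy exposure of a field onto an independent exposure of the same field, so that with zero-mean noise the network converges toward the clean scene. The network, data shapes, and training loop here are all hypothetical stand-ins, not the team’s method.

```python
# Generic self-supervised denoising sketch (Noise2Noise-style), NOT Xingyan's
# actual model: learn to predict one noisy exposure from another of the same
# field; the optimum under zero-mean noise approximates the clean image.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: two independent noisy exposures of the same (unknown) scene.
scene = torch.rand(8, 1, 64, 64)                  # hypothetical clean frames
exposure_a = scene + 0.1 * torch.randn_like(scene)
exposure_b = scene + 0.1 * torch.randn_like(scene)

for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(exposure_a), exposure_b)
    loss.backward()
    opt.step()
```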

Reviewers of Science commented that this research provides a “powerful tool” for exploring the universe and “will have a significant impact on the field of astronomy.”

Dai Qionghai said that with Xingyan, faint celestial bodies obscured by noise in astronomical observations can be faithfully reconstructed. The technology is expected to be applied to more next-generation telescopes, helping address major scientific questions such as dark energy, dark matter, the origin of the universe, and exoplanets.

(Source: Xinhua News Agency)
