Google has released a generative AI learning path consisting of 8 courses and 2 quizzes, covering everything from technical principles and implementation methods to application scenarios and development and deployment. Some courses run on Google Cloud, but the content and structure are excellent, and everything is completely free.


The 8 courses are introduced and linked below.
Note from Xiaopang: if you are not sure what to learn, read the introductions first. If you are itching to get hands-on, jump straight to #4, #5, and #8 for three practical projects: generating images from text, generating text from text, and generating text from images.
1. What generative AI is, what its applications are, and how it differs from traditional machine learning.
[After this one, you have achieved basic literacy.]
2. What a large language model (LLM) is, its application scenarios, and how prompts and fine-tuning can improve model performance.
[After this one, you know more than 90% of your Chinese Twitter friends.]
3. What responsible AI is, why it matters that AI models are safe, reliable, and ethical, and how to build a product around responsible AI.
[Not much practical value. After this one, you can show off at the dinner table, but people may be put off.]
4. The theory behind Diffusion Models for image generation, how to train them, and how to deploy a model to the cloud (the upselling begins!).
[After this one, you will see how those image-generation startups actually work.]
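The core trick behind diffusion models is that the "forward" noising process has a closed form: you can jump straight to any timestep without simulating every intermediate step. Below is a minimal NumPy sketch of that forward process (DDPM-style, with an assumed linear noise schedule); the course covers the full training and deployment story, and a real model would additionally learn a network that predicts the noise.

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form (DDPM-style).

    x0    : clean data, any shape
    t     : timestep index (0-based)
    betas : per-step noise schedule
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]      # cumulative product up to step t
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))           # stand-in for an 8x8 "image"
betas = np.linspace(1e-4, 0.02, 1000)      # assumed linear schedule
xt, eps = forward_diffusion(x0, t=999, betas=betas, rng=rng)
# At the final step alpha_bar is close to 0, so x_t is almost pure noise;
# training teaches a network to predict eps given (x_t, t).
```

Generation then runs this process in reverse: start from pure noise and repeatedly subtract the predicted noise, step by step, until an image emerges.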
5. The principles of the encoder-decoder architecture widely used in tasks such as machine translation and speech recognition, and how to build a poetry-generating AI with this architecture in TensorFlow.
[Actually, most text-generation startups don't use this stack... it's too hard for them... but you can learn the building blocks in advance and think about how they might fit your business.]
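The encoder-decoder idea itself is simple: the encoder compresses the input sequence into a context representation, and the decoder emits output tokens one at a time conditioned on that context and its own previous output. Here is a deliberately tiny NumPy sketch of that data flow with untrained random weights (a mean-of-embeddings encoder instead of the RNN/Transformer a real model would use), just to show the shapes moving through the two halves:

```python
import numpy as np

rng = np.random.default_rng(42)
VOCAB, DIM = 20, 16                       # toy vocabulary and hidden size

# Randomly initialised weights: untrained, purely to illustrate data flow.
embed = rng.standard_normal((VOCAB, DIM)) * 0.1
W_dec = rng.standard_normal((2 * DIM, DIM)) * 0.1
W_out = rng.standard_normal((DIM, VOCAB)) * 0.1

def encode(src_tokens):
    """Encoder: compress the source sequence into one context vector.
    (Here a mean over embeddings; real models use an RNN or Transformer.)"""
    return embed[src_tokens].mean(axis=0)

def decode(context, start_token=0, max_len=5):
    """Decoder: emit tokens one at a time, each step conditioned on the
    context vector and the previously generated token."""
    out, prev = [], start_token
    for _ in range(max_len):
        h = np.tanh(np.concatenate([context, embed[prev]]) @ W_dec)
        logits = h @ W_out
        prev = int(np.argmax(logits))     # greedy decoding
        out.append(prev)
    return out

generated = decode(encode(np.array([3, 7, 7, 1])))
```

With random weights the output tokens are meaningless; training adjusts `embed`, `W_dec`, and `W_out` so the decoder's greedy choices form fluent text (e.g. poetry, in the course's project).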
6. How the attention mechanism in a neural network allocates limited compute to the most important parts of the input, improving performance on translation, summarization, question answering, and more.
[Most VCs and non-technical founders never reach this level, so bragging here is hard to call out.]
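The "allocating resources to what matters" intuition has a precise form: scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Each query position distributes a weight budget that sums to 1 over the key positions, so relevant positions get more influence on the output. A minimal NumPy sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V.

    Each row of the weight matrix sums to 1: a query's fixed 'budget'
    of attention, spent on the key positions most similar to it.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # query-key similarities
    weights = softmax(scores, axis=-1)        # attention distribution per query
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))               # 4 query positions, d_k = 8
K = rng.standard_normal((6, 8))               # 6 key/value positions
V = rng.standard_normal((6, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```

This single operation is the building block that the Transformer (and hence BERT in course #7) stacks many times over.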
7. The basic principles of BERT (Bidirectional Encoder Representations from Transformers), a pre-training technique in natural language processing, and how it dramatically improves a model's ability to understand unlabeled text in context across many different tasks.
[Scholarly... genuinely impressive... though it does feel like Google promoting its own technology.]
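The reason BERT can learn from unlabeled text is its masked-language-model pre-training objective: randomly hide some tokens and train the model to recover them from both left and right context. The published recipe selects roughly 15% of tokens; of those, 80% become a [MASK] token, 10% a random token, and 10% stay unchanged. A sketch of just that corruption step, using toy integer token ids (the hypothetical `VOCAB_SIZE` is only for the random-replacement branch):

```python
import random

MASK, VOCAB_SIZE = "[MASK]", 30000   # VOCAB_SIZE assumed, for random replacement

def mask_for_mlm(tokens, rng, mask_prob=0.15):
    """BERT-style masked-language-model corruption.

    Each token is selected with probability mask_prob; of the selected,
    80% become [MASK], 10% become a random token id, 10% stay unchanged.
    The model must predict the original token at every selected position.
    """
    corrupted, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok                 # position the model must recover
            r = rng.random()
            if r < 0.8:
                corrupted[i] = MASK
            elif r < 0.9:
                corrupted[i] = rng.randrange(VOCAB_SIZE)  # random replacement
            # else: keep the original token (model still predicts it here)
    return corrupted, targets

rng = random.Random(0)
sent = list(range(100, 140))                 # 40 toy token ids
corrupted, targets = mask_for_mlm(sent, rng)
```

Because the prediction targets come from the text itself, no human labeling is needed, which is what lets BERT pre-train on huge unlabeled corpora before fine-tuning on specific tasks.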
8. Image understanding and captioning: how to build an AI model that looks at a picture, describes it, and understands it.
[Difficult and fun! I haven't seen many applications in this area yet.]