ChatGPT, Grok, Gemini: why do none of them seem to work very well?
Recently, while tracking the PCI index, I thought I'd use AI to help write a TradingView script. I was hoping to get it done in one shot, but the code ran exactly once and then broke: it wouldn't run again and kept throwing errors.
I tried to fix it by rotating through the three tools. ChatGPT's version still hit all kinds of issues in TradingView. Grok gave the same result. With Gemini I thought there might be a breakthrough, but it was the same old problems. Every revision looked logically correct, yet the moment I put it up for testing, it failed.
Honestly, after this whole process my impression of these tools has taken a hit. Is the indicator logic itself flawed, or is there really a ceiling for AI when it comes to debugging complex scripts right now? Has anyone run into the same thing and found a workaround?
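For reference, this is roughly the shape of script I was asking for. Below is a hypothetical sketch, not my actual code: the two symbols and the simple ratio stand in for the real PCI calculation, which I'm leaving out. Even a Pine v5 skeleton this small should compile cleanly; it was everything beyond this level that kept falling apart.

//@version=5
indicator("PCI sketch (placeholder)", overlay=false)

// Hypothetical placeholder: a ratio of two symbols stands in for the
// actual PCI formula, which is omitted here.
num = request.security("BINANCE:BTCUSDT", timeframe.period, close)
den = request.security("BINANCE:ETHUSDT", timeframe.period, close)

pci = num / den
plot(pci, title="PCI (placeholder)", color=color.teal)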
TestnetFreeloader
· 01-01 10:50
All three striking out really is something else... Code is AI's Achilles' heel; I've been grinding at it for ages with no progress either.
OnchainUndercover
· 01-01 10:38
Bro, I tried all three as well and still failed. It honestly feels a bit hopeless.
AI script writing just isn't there yet; it falls apart once the logic gets complex. I've been burned too.
It seems they haven't fully absorbed TradingView's Pine Script rules.
BearMarketBard
· 01-01 10:27
AI is all showmanship; hard-core tasks like scripting still rely on humans.
---
All three failed, maybe the problem isn't with AI at all.
---
AI really doesn't have a good grasp of TradingView's conventions, that much is true.
---
There's no such thing as one-and-done; you still have to debug the code yourself.
---
Gemini is the worst, this time it's confirmed.
---
The logic looks correct, then the moment you upload it for testing it fails; I know that feeling well haha.
---
I suspect the AI simply doesn't understand your indicator logic.
---
I've encountered this before, and in the end, I solved it manually.
---
Today's AI is just a sophisticated search engine; don't expect too much.
---
Script debugging is indeed a weakness of AI, so don't be too surprised.
MemeKingNFT
· 01-01 10:25
Haha, isn't this exactly my experience from six months ago? Back then I was trying to use AI to quickly generate scripts from on-chain data, and all three struck out too. A classic case of "looks plausible, doesn't actually run."
Debugging complex logic with AI is like trading crypto: it's easy to overestimate your own abilities, brother. I feel large models still have a ceiling here.
To be honest, if the indicator logic is complex, it's more reliable to do it yourself; otherwise AI will lead you into a pit.
Rather than expecting AI to nail everything in one step, it's better to spell out the requirements carefully first and then feed them in. More tedious, but more stable.
My suggestion is to break the script logic down into the smallest possible units and have AI write each piece separately; see the sketch below. It's far more reliable than letting it freestyle the whole thing.
Code debugging may really require manual intervention; AI still has limits, so don't trust it too blindly.
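To make "smallest units" concrete, here's the kind of piece I mean. This is a hypothetical Pine v5 sketch of one self-contained helper, with names I made up for illustration: ask the AI for one unit like this, confirm it compiles and plots, then request the next piece.

//@version=5
indicator("unit: smoothing helper")

// One self-contained unit to request from the AI in isolation:
// an RMA-then-EMA smoother. Verify it compiles and plots sensibly
// before asking for the next piece of the indicator.
smooth(series float src, simple int len) =>
    ta.ema(ta.rma(src, len), len)

plot(smooth(close, 14), title="smoothed close")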