Yeah, this is the approach more people are taking now. The problem is generally the amount of data needed and verifying that it's high quality in the first place, but these systems are positive feedback loops both in training and in use. If you train on higher-quality code, the model will write higher-quality code, but it will be less able to handle edge cases, or to sensibly complete code that wasn't at the same quality bar or style as the training data.
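To make the "verifying quality" step concrete, here's a minimal sketch of the kind of gate a training pipeline might apply, assuming Python source and a crude "valid syntax plus docstrings" heuristic. Real filters use far richer signals (tests, linters, repo reputation); `passes_quality_bar` and its threshold are made up for illustration:

```python
import ast

def passes_quality_bar(source: str) -> bool:
    """Crude heuristic: valid syntax, and most functions documented."""
    try:
        tree = ast.parse(source)  # reject anything that doesn't parse
    except SyntaxError:
        return False
    funcs = [node for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]
    if not funcs:
        return True  # nothing to judge, let it through
    documented = sum(1 for f in funcs if ast.get_docstring(f))
    return documented / len(funcs) >= 0.5  # arbitrary quality threshold

corpus = [
    "def add(a, b):\n    return a + b\n",  # undocumented: filtered out
    'def add(a: int, b: int) -> int:\n'
    '    """Return the sum of a and b."""\n'
    '    return a + b\n',                  # documented: kept
]
filtered = [sample for sample in corpus if passes_quality_bar(sample)]
print(len(filtered))  # -> 1
```

Whatever bar you pick at this step is the bar the model inherits, which is exactly the feedback loop described above.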
On the use side, if you provide higher-quality code as input when prompting, the model is more likely to predict higher-quality code, because it's continuing what was written. Using standard approaches, documenting, and just generally following good practice with your code before sending it to the LLM will majorly improve results.
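As a rough illustration of that last point, compare the two context snippets below. No particular LLM client is assumed, and `build_prompt` is a hypothetical helper standing in for however you assemble context for your model; the point is just that the model continues whatever style it's handed:

```python
# Neither snippet calls a real LLM API; build_prompt is a hypothetical
# stand-in for assembling the context you send to your model.

SLOPPY_CONTEXT = """
def f(x, y):
    r = []
    for i in x:
        if i not in y: r.append(i)
    return r
"""

CLEAN_CONTEXT = '''
def difference(items: list[str], exclude: set[str]) -> list[str]:
    """Return the items not present in exclude, preserving order."""
    return [item for item in items if item not in exclude]
'''

def build_prompt(context: str, request: str) -> str:
    """Prepend the code context so the model continues its style."""
    return f"{context.strip()}\n\n# TODO: {request}\n"

# Same request, different priming: the second context nudges the model
# toward typed, documented output, the first toward more of the same slop.
print(build_prompt(SLOPPY_CONTEXT, "add a helper that keeps duplicates"))
print(build_prompt(CLEAN_CONTEXT, "add a helper that keeps duplicates"))
```

Sending the clean version tends to elicit typed, documented continuations, simply because that's the most likely next-token stream given what came before.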
I’ll say, one thing that helped me here was starting to see the “depth in the breadth,” so to speak, and recognizing this jumping around for what it was: a lot of novelty seeking and bouncing between hobbies to avoid conscious self-regulation, which was tiring.
Now, in things I consider important, I try to find the novelty and breadth that come with sticking to something for a long time: stare at a hobby or occupation long enough to see the big world inside of it, realize it’s more than you can ever take in, and then put up some blinders so you can home in on one area and see it as lots of cool, novel things within a smaller space.
Also, realizing that bouncing around to all kinds of things… well, that’s my form of relaxing. If I’m totally depleted, chances are what I need isn’t to sit in one place and “rest,” or to focus on one thing; it’s to schedule time to deliberately not focus on anything and allow myself to bounce all over the place and do whatever feels good (within responsible limits). It’s usually a chaotic mess that amounts to no long-term benefit, but it’s much more restful than trying to relax. Trying was the problem, after all.