This comment is evidence that your write-up would have been just fine and understandable by other humans. Using AI to write your technical writing for you makes me lose trust in what you're saying. No one cares if you're a non-native English speaker, just write.
Your English seems good enough to communicate. I'd encourage you to trust your abilities; any misunderstandings can be clarified with follow-up questions if necessary.
Me explaining to a teacher why I cheated on the test: well, did you stop to consider the cognitive load of doing the problems myself and how much easier it was to cheat?
Hmm, I don’t see this much anymore. I typically start a project in plan mode and tell Claude to do some research to bring me 2-3 alternatives. Then we talk about the pros and cons before deciding on the libraries, etc.
On the other hand, if you just tell it to do a thing, I could believe that it would just do the thing. It is pretty bad at high level design judgment. Human guidance on architecture choices results in much better output.
The root issue here is that Claude (and most LLMs) optimize for producing working code, not minimal code. When given an ambiguous task they'll reach for a full implementation before checking if a library exists.

A pattern I've found helps: before writing any code, explicitly ask the model to list its assumptions and identify what libraries/modules could handle each part. Something like 'before coding, tell me what existing Python packages could solve each sub-problem.' This forces a discovery step.

The CLAUDE.md / system prompt approach also works well - you can specify project conventions like 'always check PyPI before implementing utility functions from scratch.' Takes a bit of upfront setup but catches this class of error reliably.
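For concreteness, a hypothetical CLAUDE.md entry along those lines might look like this (the wording and section name are illustrative, not any required format):

    ## Dependencies
    - Before writing any utility from scratch, check whether a maintained
      PyPI package already covers it, and propose the package first.
    - Before coding, list your assumptions and the candidate library for
      each sub-problem.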
This is why you should set up a project ruleset/constitution when you start. Do you want to prefer libraries or inline code? You can even choose at what point you think the tradeoff is worth it. 1000 lines of code? 10 functions? You can choose whatever.
Then, you tell your AI to stick to that rule, and it will. There are tradeoffs to each choice, and people fall into different camps. Make your choice, write it down, and tell the AI to always follow that rule, and then you have it your way.
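As a sketch, a rule like this is usually enough for the agent to follow consistently (the file name and the threshold are just examples):

    # project-rules.md (hypothetical)
    - Prefer an existing, maintained library once a from-scratch
      implementation would exceed ~200 lines.
    - Name and justify any new dependency in the plan before installing it.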
Seen this pattern repeatedly building a shell plugin with Claude. It defaults to writing everything from first principles rather than reaching for existing tools. 200 lines of custom YAML parsing when a one-liner would do. Adding "always check if a library or existing tool solves this before writing custom code" to CLAUDE.md cut this down significantly.
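For comparison, the one-liner version of that YAML case, assuming PyYAML is available and a hypothetical config.yml:

    import yaml  # PyYAML, assumed installed

    # One call replaces the hand-rolled parser; config.yml is a placeholder name.
    with open("config.yml") as f:
        config = yaml.safe_load(f)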
Honest question, no shade: wasn't that a bit your fault for not googling or asking it to consider existing approaches and solutions? AI will be as dumb as you let it, imo. I always ask it to do a bit of research as I craft a plan with it.
I consider myself AI skeptical-ish and I detest when people defend LLMs with "it's user error, prompt better," but in this case it actually is user error.
If you want a particular implementation approach, you need to specify not only the features you want, but the implementation strategy, at least at a high level. This could be as simple as adding "use pywikibot" or "use relevant packages from PyPI" to the end of your prompt. Or you could seed your project with some manually written scaffolding, including a pyproject.toml.
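To make the pywikibot hint concrete, here is a rough sketch of the "use the existing package" route, assuming Pywikibot is installed and configured (the page title is just a placeholder):

    import pywikibot

    site = pywikibot.Site("en", "wikipedia")  # English Wikipedia
    page = pywikibot.Page(site, "Python (programming language)")
    print(page.text[:200])  # read wikitext via the library instead of hand-rolling API calls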
While LLMs do tend to have NIH syndrome by default, I think this is a good default. I'd much rather have tight control over when and how to include external dependencies, as opposed to letting a prompt fire for 40 minutes and coming back to find 2 GB of newly installed node packages with a dependency tree 300 levels deep.
On the other hand, I often want an LLM to write things from scratch instead of bringing in 10x the surface area in unnecessary dependencies, and I very, very rarely get better results when these things are let loose on a cesspool of a web. Given that real people have vastly different preferences, you either have to cater to a subset or else require everyone to be a bit more specific with their desires. It's not that surprising.
Yeah, and you can tell the AI to just write the bits of the code that it actually needs for the functionality you are using. If you end up needing more of it, that is fine, the AI will just write more of it when it needs it.
The tradeoffs are very different with AI code than human written code. There are still tradeoffs, but they are different now.
https://www.pangram.com/history/dee030c0-0362-43d0-8fbd-bbab...
If you tell it to leverage dependencies, it will. If you (like me) prefer that it avoid dependencies, it will.