Providing an LLM with a streamlined yet complete initial prompt is vital to avoid confusing answers. It also greatly reduces the chance of the model straying off topic and diluting the results. While I believe this applies to all models, SaaS or local, it is especially important with local models, where compute and memory are more limited.
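To make this concrete, here is a minimal sketch of what "streamlined but complete" can look like in practice: one self-contained prompt that states the task, the constraints, and the supporting context up front, trimmed to a token budget. The `build_prompt` helper, the budget values, and the rough 4-characters-per-token heuristic are all assumptions for illustration, not any specific library's API.

```python
MAX_TOKENS = 2048      # assumed context budget for a small local model
CHARS_PER_TOKEN = 4    # rough heuristic; real tokenizers vary

def build_prompt(task: str, context: str, constraints: list[str]) -> str:
    """Assemble one self-contained prompt: task, rules, then context."""
    rules = "\n".join(f"- {c}" for c in constraints)
    prompt = (
        f"Task: {task}\n"
        f"Rules:\n{rules}\n"
        f"Context:\n{context}\n"
        "Answer using only the context above."
    )
    # If the prompt exceeds the budget, trim the context first,
    # keeping the task and rules intact.
    budget_chars = MAX_TOKENS * CHARS_PER_TOKEN
    if len(prompt) > budget_chars:
        overflow = len(prompt) - budget_chars
        trimmed = context[: max(0, len(context) - overflow)]
        prompt = prompt.replace(context, trimmed, 1)
    return prompt

print(build_prompt(
    task="Summarize the release notes in three bullet points.",
    context="v2.1 adds offline mode, fixes login crashes, drops 32-bit builds.",
    constraints=["No speculation beyond the notes", "Plain language"],
))
```

The design choice worth noting is that the task and rules are never trimmed: on a local model with a small context window, it is the background material, not the instructions, that should give way first.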