If you're experiencing context loss (the model "forgetting" earlier parts of a long prompt or conversation) with the mistral-nemo or llama3 models in Ollama, the likely cause is Ollama's default context length of 2048 tokens: input beyond that limit is silently truncated, even though mistral-nemo advertises a context window of up to 128K tokens and llama3 supports 8K. To use more of a model's context window, raise the num_ctx parameter. Here's how.
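One way to do this per request is through Ollama's HTTP API, which accepts `num_ctx` under `options`. Below is a minimal sketch in Python using the `requests` library against the documented `/api/generate` endpoint; the model name, prompt, and the value 8192 are illustrative, so substitute whatever fits your model and available memory (larger `num_ctx` values consume significantly more RAM/VRAM).

```python
import requests

# Ask Ollama for a larger context window on this request.
# "num_ctx" is the Ollama option controlling context length;
# 8192 is an illustrative value -- mistral-nemo supports far more,
# but memory usage grows with the window size.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral-nemo",
        "prompt": "Summarize the following document: ...",
        "stream": False,
        "options": {"num_ctx": 8192},
    },
    timeout=300,
)
print(response.json()["response"])
```

If you prefer to set this once rather than per request, two CLI alternatives do the same thing: run `/set parameter num_ctx 8192` inside an interactive `ollama run` session, or add `PARAMETER num_ctx 8192` to a Modelfile and build a variant with `ollama create`.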