Thin wrapper around the chosen LLM backend. By default it uses ollamar if installed; otherwise it returns only the prompt so the caller can still inspect it without failing.
Usage:
  trainer_core_llm_generate(model, prompt, engine = c("ollamar", "none"), ...)

Value:
  A list with elements prompt, response, model, and engine. If the backend isn't available, response is NULL.
Arguments:
  model: Character scalar, model name (e.g., "llama3").
  prompt: Character scalar, the prompt to send.
  engine: Character scalar, backend engine. Currently "ollamar" or "none". If "none", or if the backend is not available, only the prompt is returned.
  ...: Passed to the backend generator.
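A minimal sketch of how such a wrapper could be written, assuming ollamar's `generate(model, prompt, ...)` API; the actual implementation may differ:

```r
trainer_core_llm_generate <- function(model, prompt,
                                      engine = c("ollamar", "none"), ...) {
  engine <- match.arg(engine)
  response <- NULL
  # Only call the backend when it was requested and is actually installed;
  # otherwise fall through and return the prompt with response = NULL.
  if (engine == "ollamar" && requireNamespace("ollamar", quietly = TRUE)) {
    response <- ollamar::generate(model, prompt, ...)
  }
  list(prompt = prompt, response = response, model = model, engine = engine)
}
```

For example, `res <- trainer_core_llm_generate("llama3", "Say hi")` returns a list where `res$response` holds the backend output when ollamar is available, and `NULL` otherwise, so callers can always inspect `res$prompt`.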