
Prompting for blog generation

LLMs such as ChatGPT, Llama, Claude, or Bard have become immensely capable and can solve complex problems out of the box. An LLM's generative power depends to a large extent on the prompts the user provides, and the process of crafting prompts that produce optimal output is known as prompt engineering.

Prompting Guidelines

Be clear and concise

LLMs do well with direct instructions.

Example Prompt: “Generate a blog on frontend development”

Use examples

If you want the output in a specific form, show the LLM an example.

Example Prompt: “Generate a convo on frontend development, in the form:
Person 1: Frontend development sure is fun!
Person 2: I probably agree”
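
This "show, don't tell" pattern is known as few-shot prompting: example exchanges are prepended to the actual request so the model imitates their form. A minimal sketch of composing such a prompt (the function and its arguments are illustrative, not part of any library):

```python
# Hypothetical sketch: build a few-shot prompt by joining the
# instruction, one or more worked examples, and the actual query.

def build_few_shot_prompt(instruction, examples, query):
    """Compose a prompt that shows the model the desired output form."""
    parts = [instruction]
    parts.extend(examples)   # each example demonstrates the target format
    parts.append(query)
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    instruction="Generate a convo on frontend development, in the form:",
    examples=[
        "Person 1: Frontend development sure is fun!\n"
        "Person 2: I probably agree",
    ],
    query="Topic: state management in React",
)
print(prompt)
```

The more examples you include, the more strongly the model anchors on their format, at the cost of a longer prompt.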

Prime the model

Many models retain conversational context, so they can be given instructions up front that shape all subsequent answers.

Prompt 1: “You are Yoda and will answer with all of your infinite wisdom”
Prompt 2: “Blog on frontend development”
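
In chat-style APIs, priming is typically done with a system message that sets the persona, followed by user messages carrying the actual tasks. A sketch of the message structure (the format mirrors common chat-completion APIs; the actual client call is provider-specific and omitted here):

```python
# Priming sketch: the system message sets persona/behavior once;
# later user messages carry the tasks. Format follows common
# chat-completion APIs; the client call itself is provider-specific.

messages = [
    {"role": "system",
     "content": "You are Yoda and will answer with all of your infinite wisdom"},
    {"role": "user",
     "content": "Blog on frontend development"},
]

# response = client.chat.completions.create(model=..., messages=messages)
# (hypothetical call; consult your provider's API documentation)
```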


Example Prompts

Priming Prompt for Blog Generation
You are a marketing expert, and have broad and deep knowledge in business, marketing, personal growth, leadership as well as technical subjects such as front-end development, UI/UX, and AI. You have been tasked with writing informative, entertaining and click-inducing blog posts. Your blog posts include eye-catching titles. You write these blog posts based on single prompts, such as "ChatGPT for developers", "Leading an effective meeting" or "Pros and Cons of using Julia". The prompts can even be single words such as "Django", ".Net", "Leadership" or "Frameworks".


Best Practices for Prompting

  • Favor instructions that say “what to do” instead of those that say “what not to do”.

  • Start with a simple and short prompt, and iterate from there.

  • When choosing the model to work with, the latest and most capable models are likely to perform better.
    • You can investigate the current state of the art using a benchmark.

    • LiveBench is one of the leading LLM benchmarks: https://livebench.ai/#/

  • Put the instructions at the beginning of the prompt, or at the very end.
    • When working with large contexts, models apply various optimizations to keep attention from scaling quadratically. This can make a model more attentive to the beginning or end of a prompt than to the middle.

  • Clearly separate instructions from the text they apply to - more on this in the next section.

  • Be specific and descriptive about the task and the desired outcome - its format, length, style, language, etc.

  • Avoid ambiguous descriptions and instructions.

  • “Lead” the output in the right direction by writing the first word (or even begin the first sentence for the model).

  • Use advanced techniques like few-shot prompting and chain-of-thought.

  • Test your prompts with different models to assess their robustness.

  • Version and track the performance of your prompts.
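
Several of these practices can be combined in one prompt template: instructions first, a clear delimiter separating them from the input text, a chain-of-thought cue, and a leading word to steer the output. A minimal sketch (the delimiter and phrasing are illustrative):

```python
# Sketch of a prompt template combining several best practices:
# instructions up front, "###" as a delimiter between instructions
# and input text, a chain-of-thought cue, and a leading word
# ("Answer:") to start the model's output. All phrasing is illustrative.

def build_prompt(task, text):
    return (
        f"{task}\n"
        "Think step by step before giving the final answer.\n"
        "###\n"              # delimiter: instructions end here
        f"{text}\n"
        "###\n"
        "Answer:"            # leading word to steer the output
    )

prompt = build_prompt(
    task="Summarize the following blog draft in one sentence.",
    text="Frontend development is evolving quickly...",
)
print(prompt)
```

Keeping such templates as named, versioned functions also makes the last practice above (tracking prompt performance) straightforward.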




