We've already established that large language models can't do math, but here's the thing – they actually can if you know how to ask. Welcome to the whimsical world of Prompt Engineering, where the most powerful programming technique is the synonym.

Alright, you might need more than a synonym, but we're playing fast and loose here because we want to use the thing, not get bogged down in details. If your LLM isn't giving the response you want, try rephrasing your message. Here are a few tricks I've employed that have proven to be remarkably successful.

  • Try some role play. Often, when you're researching factual information, the LLM defaults to generic answers with a caveat about speaking to a professional for specifics. Most recently, I was stuck looking into medical aid schemes in South Africa: the LLM was hesitant to narrow down a list of providers given my preferences, repeatedly telling me I needed to consult a financial advisor or broker. In these cases, it can really help to start a fresh conversation with a prompt like, "You are an expert in [insert field here] with 20 years of consulting experience. Your job is to advise me on how to [insert question here]." It's amazing how asking an LLM to hide behind a persona can loosen it up. To make the most of the persona, ensure it has the relevant information by adding a phrase like, "Ask me all the questions you deem relevant, including follow-up questions, to ensure you have all the facts before answering my question." Experiment with different wording; it makes a difference. (The first sketch after this list shows this prompt in action.)
    • If you want to get all meta (the non-Facebook kind) about a problem, try telling the LLM that it's an expert Prompt Engineer tasked with using an LLM to solve a problem. Then ask it, as that Prompt Engineer, what steps it would follow to get the LLM to solve the problem.
  • If the response to your query requires intermediate steps before converging on the final answer, break the problem down into those steps. You can do this across multiple messages if you only need to solve the problem once. If, however, you're going to solve similar problems repeatedly, provide one example that works through all the steps in a single message, and then just provide the problem description in each new message after that (see the second sketch after this list). This is called "chain-of-thought" reasoning. I've used this to:
    • Reduce the LLM's propensity to hallucinate (e.g., once it has answered the question, ask it to fact-check the answer, providing references)
    • Solve word problems or logic problems
    • Perform common-sense reasoning
  • Since your friendly neighborhood LLM has likely been trained on the user manuals for the software you use, you can ask the LLM to create instructions for that software to solve a problem. This works particularly well for math problems: instead of asking the LLM to solve the problem itself, ask it how to solve the problem in Excel or Python, then implement the solution it provides (see the third sketch after this list). LLMs excel at writing code (yes, the pun is intended).
  • Finally, because the model produces its response one word at a time, it can commit to something inaccurate (a hallucination). Surprisingly, if you then ask the model to review its answer and check whether it's correct, it can often fix the mistakes it made when creating the response in the first place (the last sketch after this list shows this two-step review).
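
To make the first bullet concrete, here's a minimal sketch of the role-play prompt wired up to the OpenAI Python client (any chat-style API works the same way). The field, question, and model name are placeholders I've invented for illustration, not recommendations.

```python
# A minimal sketch of the role-play prompt, assuming the OpenAI Python client
# (openai >= 1.0). The field, question, and model name below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

field = "medical aid schemes in South Africa"
question = "choose a provider given my budget and health needs"

persona_prompt = (
    f"You are an expert in {field} with 20 years of consulting experience. "
    f"Your job is to advise me on how to {question}. "
    "Ask me all the questions you deem relevant, including follow-up "
    "questions, to ensure you have all the facts before answering my question."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
    messages=[{"role": "user", "content": persona_prompt}],
)
print(response.choices[0].message.content)
```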
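
For the second bullet, the "one worked example, then just the problem" pattern is easy to package as a helper. This sketch only builds the prompt text; the problems are made up, and you'd send the result with whatever client you use.

```python
# A sketch of the one-example chain-of-thought pattern: a single worked example
# teaches the model the step-by-step format, and each new message only states
# the new problem. The problems are invented for illustration.
WORKED_EXAMPLE = (
    "Q: Pens cost R12 each, or 3 for R30. What is the cheapest price for 7 pens?\n"
    "Steps:\n"
    "1. 7 pens = 2 bundles of 3 pens + 1 single pen.\n"
    "2. 2 bundles cost 2 x R30 = R60.\n"
    "3. 1 single pen costs R12.\n"
    "4. Total: R60 + R12 = R72.\n"
    "A: R72"
)

def chain_of_thought_prompt(new_problem: str) -> str:
    """Prepend the worked example so the model imitates the same steps."""
    return f"{WORKED_EXAMPLE}\n\nQ: {new_problem}\nSteps:"

# Each new problem reuses the same example.
print(chain_of_thought_prompt(
    "Apples cost R5 each, or 4 for R18. What is the cheapest price for 9 apples?"
))
```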
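
For the Excel/Python trick, the point is to keep the arithmetic out of the model's mouth. A sketch with an invented loan problem: the first half is the prompt you'd send, and the second half is the kind of function the LLM might hand back, which you then run yourself.

```python
# A sketch of asking for code instead of an answer. The loan problem is invented;
# loan_balance() is the sort of function the LLM might return, which you run
# yourself so Python does the arithmetic.
problem = (
    "A loan of R100,000 accrues 11.75% interest per year, compounded monthly. "
    "What is the balance after 5 years if no repayments are made?"
)

prompt = (
    "Don't calculate the answer yourself. Write a short Python function that "
    f"solves the following problem, and show how to call it:\n\n{problem}"
)

# What comes back might look something like this:
def loan_balance(principal: float, annual_rate: float, years: int) -> float:
    monthly_rate = annual_rate / 12
    return principal * (1 + monthly_rate) ** (years * 12)

print(round(loan_balance(100_000, 0.1175, 5), 2))
```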
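
And the review-your-own-answer trick from the last bullet is just a second turn in the same conversation. Another sketch, again assuming the OpenAI Python client, with an invented question and an assumed model name.

```python
# A sketch of the answer-then-review loop; the question and model name are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed model name

history = [{
    "role": "user",
    "content": "List the three largest medical aid schemes in South Africa by membership.",
}]
first = client.chat.completions.create(model=MODEL, messages=history)
answer = first.choices[0].message.content

# Feed the answer back and ask the model to check its own work before you trust it.
history += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": (
        "Review your previous answer. Fact-check each claim, provide references, "
        "and correct any mistakes you find."
    )},
]
reviewed = client.chat.completions.create(model=MODEL, messages=history)
print(reviewed.choices[0].message.content)
```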

Prompt Engineering is quite the buzzword these days. Creative thinking and careful wording can coax an LLM into producing results that wouldn't be possible with a naïve query, and new prompting tricks appear every day. I'm particularly fond of the site ShareGPT, where users post their chats with ChatGPT and you can browse interesting examples of prompts.

5. Prompt Engineering – the art of asking nicely.