Artjom Shestajev’s Post

Product @ Clarifai | ex-Twilio

One of the first methods you can try to improve an LLM's results is to spend more effort crafting your prompts. Many treat a prompt as just "the way you ask," but it is handier to think of prompts as... a programming language.

When you write code (say, Python or Java), it is compiled into instructions the computer executes. Every developer gets stuck from time to time because the code they wrote does not match the idea in their head. Prompts are the same kind of building blocks, just written in natural language, so they sit closer to thoughts than to "print({weather_data})".

This comes with drawbacks. Ambiguity is higher: LLM output is non-deterministic, and in complex scenarios the same prompt may never produce the same answer twice. The same prompt will also yield different results on different target models, so test against the LLM you actually plan to use.

But think of it as a programming challenge: steer the LLM's output in the direction you need without diving into fine-tuning (yet). You can create templates with variables inside them, test variants, and keep the ones that work best. Then chain them together depending on the output received or any other external condition, as in the sketch below.

Is prompt-steering a magic pill? Surely not. Can it improve the results you get from generic LLMs? Definitely, and compared to other methods, with relatively low investment.

#ai #ml #promptengineering #llm
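As a minimal sketch of what templating and chaining can look like, here is plain Python string formatting; every name in it is illustrative, and `call_llm` is a hypothetical stand-in for whichever provider's client you actually use:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire this up to your provider's client."""
    raise NotImplementedError("replace with your model of choice")

# A template with variables you can A/B test across wordings and models.
SUMMARY_TEMPLATE = (
    "You are a {role}. Summarize the text below in {n_sentences} sentences "
    "for a {audience} audience.\n\nText:\n{text}"
)

def summarize(text: str, role: str = "technical editor",
              n_sentences: int = 3, audience: str = "general") -> str:
    # Fill the template's variables, then send the finished prompt.
    prompt = SUMMARY_TEMPLATE.format(
        role=role, n_sentences=n_sentences, audience=audience, text=text
    )
    return call_llm(prompt)

def summarize_and_maybe_simplify(text: str) -> str:
    # Chaining: branch on the first output (an external condition,
    # here just word count) before issuing a follow-up prompt.
    summary = summarize(text)
    if len(summary.split()) > 80:
        return call_llm(f"Rewrite this more concisely:\n\n{summary}")
    return summary
```

Keeping the variables explicit is what makes the templates testable: you can sweep {role} or {audience} across candidate values, score the outputs, and promote the winning wording without touching the surrounding chain logic.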

Isabelle Dang

Partner @ Qualified Capital | Pharma | AI | VR | AR | Blockchain

7mo

Great insights! Prompt engineering is indeed a skill that can greatly enhance the effectiveness of AI models.
