Allie K. Miller’s Post

Allie K. Miller

#1 Most Followed Voice in AI Business (1.5M) | Leading AI Entrepreneur & Advisor | Former Amazon, IBM | LinkedIn Top Voice | @alliekmiller on Instagram, X, TikTok

One of my favorite ChatGPT / Claude hacks: Don’t just give good examples. Give both good and bad examples. Optional: explain why the bad is bad. My outputs have massively improved.
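The tip above can be sketched as a prompt template: pair each good example with a bad one and say why the bad one fails. This is a minimal illustration, not Allie's exact wording; the example sentences are invented.

```python
# Contrastive prompting: show the model what "good" and "bad" look like,
# and (optionally) explain why the bad one is bad. Examples are invented.
GOOD = "Q3 revenue rose 12% YoY, driven by enterprise renewals."
BAD = "Revenue went up a lot because business was good."

prompt = f"""Rewrite my draft as a crisp executive summary.

GOOD example:
{GOOD}

BAD example (too vague -- no numbers, no stated driver):
{BAD}

Draft to rewrite:
{{draft}}"""

print(prompt)
```

The `{{draft}}` placeholder is left for the user's own text; the contrast plus the one-line "why it's bad" note is the whole technique.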

I hadn't thought of doing that. That's very insightful! 😊

Gerlyn Tiigemäe

Helping You Leverage AI and Data | Strategy Consulting & Training | AI for Business | Data Governance | Co-host of AIPowerment Podcast | Sharing AI Insights & News | Writing in 🇪🇪

2w

Interesting tip! In my experience, negative prompts sometimes have the opposite effect: ChatGPT decides it has to use the very thing that was stated as "do not use".
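A common workaround for the pitfall Gerlyn describes is to state constraints affirmatively instead of negatively, so the unwanted token never appears in the prompt at all. A toy sketch (both prompt strings are invented):

```python
# Negative constraints can backfire: the model may fixate on the very
# thing it was told to avoid, because that thing is named in the prompt.
negative = "Summarize this article. Do not use bullet points. Do not use emojis."

# Affirmative reframing of the same constraints: name only what you want.
positive = "Summarize this article as two short plain-text paragraphs."

print(positive)
```

The reframed prompt never mentions bullet points or emojis, so there is nothing for the model to latch onto.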

Pankaj S.

CEO, Co-founder @ mCSquared.AI

2w

My hack is similar - I tell it to play the role of a critic and critique the previous response. I also always ask it to provide references that can be checked, and to respond based on facts.

Even though the public versions of these LLMs have a somewhat neutral default value for temperature, they do tend to be slightly on the creative side (probably defaulted that way initially to impress the general public and journalists). Temperature controls how much an LLM is allowed to "hallucinate", or in other words, how much randomness it introduces into the response. The value ranges from 0 to 1 - 0 being the most deterministic and 1 being the most "creative", for lack of a better word. The default value of the public version of ChatGPT is 0.7, which OpenAI claims is a "balance" between creativity and coherence. I personally think it is that way to impress the crowds.
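What temperature actually does can be shown with a toy softmax sampler. This is not ChatGPT's implementation, and the logits are invented; it just illustrates the "0 = deterministic, higher = more random" behaviour Pankaj describes.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Temperature rescales logits before softmax: low T sharpens the
    distribution (near-deterministic), high T flattens it (more random)."""
    if temperature == 0:  # degenerate case: greedy argmax
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]  # toy next-token scores
greedy = sample_with_temperature(logits, 0, rng)
print(greedy)  # index 0, the highest-scoring token
```

At temperature 0 the top token always wins; as temperature rises, the lower-scoring tokens get sampled more and more often.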

Richard Davies

Chief Technology Officer (CTO) @ Vance | Artificial Intelligence Specialist (Computer Vision, Natural Language Processing) | Author of "Prompt Engineering in Practice"

2w

Yes, this is called contrastive few-shot prompting. Contrastive learning has been a part of the machine learning literature for decades; recently, the research community has become excited about it again. This is also the case with the concept of model collapse, although it was previously known by a different name.

Roger Kibbe

Conversational and Generative AI Technology and Strategy Leader. Head of Conversational AI Developer Relations

2w

That's an excellent tip. This technique is widely used in fine-tuning. DPO (Direct Preference Optimization) tuning is typically done with a tuple:
- Question
- Good response
- Bad response

This works very well in fine-tuning. Using it in a multi-shot prompt (which is akin to fine-tuning that prompt in many ways) should give excellent results as well. One thing that works well with DPO is that the "bad" example is not truly bad - it may be decent to good, but it's not the best example, which is reserved for the good one. I haven't tried this within a prompt, but it's worth investigating.
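The tuple Roger describes can be written down concretely. The field names below follow the common (prompt, chosen, rejected) convention used by preference-tuning libraries such as TRL; the example content is invented, and the recast into a few-shot prompt is the untested idea from the comment.

```python
# A DPO-style preference record: one prompt, a preferred ("chosen")
# response, and a dispreferred ("rejected") one. Content is invented.
record = {
    "prompt": "Explain what an API rate limit is.",
    "chosen": "A rate limit caps how many requests a client may send "
              "per time window, e.g. 100 requests per minute.",
    "rejected": "It's when the API stops working sometimes.",  # vague, not wrong
}

# The same triple recast as a contrastive few-shot prompt.
few_shot = (
    f"Question: {record['prompt']}\n"
    f"Good answer: {record['chosen']}\n"
    f"Bad answer (too vague): {record['rejected']}\n"
    "Now answer the next question in the style of the good answer."
)
print(few_shot)
```

Note the rejected answer is merely vague, not wrong - matching Roger's point that the "bad" side of a DPO pair is often decent, just not best.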

Looking at all the research and development happening on prompt engineering, sometimes I feel the prompts are going to get as descriptive as what we were trying to generate in the first place without genAI 😀

Julia Lopes

Lawyer | Capital Markets | Corporate Law | M&A | Contracts

2w

I have a hack that has been working for me: before giving instructions, ask it how to do the task. For example, if I want it to improve a text, I first ask "What does it mean to improve a text? How do you do it?", then I say thanks and ask it, based on its own answer, to improve my text. In this case I also ask it to make a comparative table so that I don't have to keep looking for the improvements: "Please output suggestions as a table with columns: Index | "verbatim words to change" | "words in context" (include some extra words around it for easier reading) | What to change to | Explain why"
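Julia's two-step workflow maps onto a chat transcript: ask the model to define the task, then feed its own definition back as context before the real request. This sketch uses a generic role/content message format; the model call itself is omitted and the step-1 answer is a placeholder.

```python
# Step 1: ask the model how it would do the task.
step_1 = [{"role": "user",
           "content": "What does it mean to improve a text? How do you do it?"}]

# Placeholder for whatever the model answered in step 1.
definition = "<model's answer from step 1>"

# Step 2: thank it, then ask it to apply its own definition,
# with results in a comparison table.
step_2 = step_1 + [
    {"role": "assistant", "content": definition},
    {"role": "user", "content": (
        "Thanks. Based on your answer, improve the text below. "
        "Please output suggestions as a table with columns: "
        'Index | "verbatim words to change" | "words in context" | '
        "What to change to | Explain why\n\nTEXT: ...")},
]
print(len(step_2))  # 3 messages: user, assistant, user
```

Keeping the model's step-1 answer in the history is what makes step 2 "based on your answer" rather than a fresh request.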

Isabella Williamson

Helping teams excel with AI | Founder of Tyde.AI | AI Policy & Strategy | Speaker

2w

100%. This is something I teach in my Gen AI for Comms workshops - pinpoint the characteristics of what 'good' and 'bad' outcomes look like, and ensure your descriptions are clear enough for a 12-year-old to understand. I see this as a fundamental capability for any professional, regardless of whether Gen AI is involved.

Tim Miner

AI Strategist for Financial Services Companies | Keynote Speaker | Digital Transformation Leader | Salesforce Optimization Expert

2w

In our courses at Pratt Institute and Rutgers Business School, we always push students through exercises that practice multi-shot prompting, framing it as a dialogue with the AI. You would be surprised how few people know about this. Asking it for points you have missed, or for the negative arguments that will come up in the board meeting about your report, pushes you as a human to think at a higher level than you would without AI. As Ethan Mollick would say: Prosthesis for Thought! #prosthesis4thought
