Watson Chua’s Post

Watson Chua

PhD in Natural Language Processing | Applied Generative AI Researcher & Developer | Bridging AI Innovation with Real-World Solutions | Lead Data Scientist @GovTech

After the release of the RAG playbook, the AI CapDev team in GovTech Singapore’s AI Practice Group continues to explore ways to improve LLM apps. One question we are often asked is: “Beyond prompt engineering and few-shot learning, is there a way to improve the quality of LLM responses in context-based question answering?” We took the fine-tuning approach and trained two models, Llama-3-8B and Gemma-2-9B, to draft replies to parliamentary questions. How do the results compare with those of pre-trained instruction-tuned models? Read my new blog post to find out! https://lnkd.in/gWgqfxfW #artificialintelligence #generativeai #llm #finetuning #evaluation
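For readers wondering what such a fine-tuning run can look like in practice, here is a minimal sketch using LoRA adapters with the Hugging Face transformers/peft/trl stack (an assumption about tooling; the blog post does not specify it here). The dataset file, its context/question/reply fields, and the hyperparameters are illustrative placeholders, not the actual setup from the post.

```python
# Minimal LoRA fine-tuning sketch (assumes transformers + peft + trl, ~trl 0.8 API;
# kwargs such as dataset_text_field vary across trl versions).
# "parliamentary_qa.jsonl" and its fields are hypothetical placeholders.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # swap in google/gemma-2-9b-it for Gemma 2
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

dataset = load_dataset("json", data_files="parliamentary_qa.jsonl", split="train")

def to_chat_text(example):
    # Render each (context, question, reply) record with the model's chat template.
    messages = [
        {"role": "user",
         "content": f"Context:\n{example['context']}\n\nQuestion:\n{example['question']}"},
        {"role": "assistant", "content": example["reply"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_chat_text)

# LoRA keeps the trainable parameter count small enough for a single GPU.
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        output_dir="llama3-parl-qa",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()
```

At inference time, the same chat template would be applied to the context and question alone, and the fine-tuned model's draft reply compared against the instruction-tuned baseline.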

Training LLMs to Draft Replies to Parliamentary Questions — Fine-tuning Llama 3 and Gemma 2 with…

medium.com

Chaitanya Jadhav

Final Year Computer Science Student at NTU

1mo

Very interesting! Did you consider using fine-tuned LLM-as-a-Judge models like Prometheus 2.0 for the evaluation process?

Richard Cottrill

architect, tech. lead, solutions, LLM

1mo

Jaan Murphy, JD, GDLP, LLM, GDip Tax, MScApp. You want one? I know a guy.


