Li Yin’s Post


Author of AdalFlow | AI researcher | ex-Meta AI

The data flow of LLM applications in one diagram.

Prompts can be roughly divided into two parts:
- System prompt, such as <task_desc>, <tools>, <examples>, <chat_history>, <context>, <agent_step_history>, <output_format>.
- User part, such as <user_query>.

LightRAG encapsulates the first part using <SYS></SYS>. The placement of these elements can be flexible.

Leveraging the model's internal knowledge: if you only ask a question, it is simple QA using the model's internal knowledge. To better distill that knowledge, you can add <task_desc>, known as zero-shot in-context learning (ICL), or add few-shot to many-shot <examples> (few-shot and many-shot ICL).

Beyond internal knowledge, LLMs have four major ways to interact with the world:
(1) taking external context, such as <context> retrieved from a retriever (RAG) and <chat_history> to enable memory (MemGPT);
(2) using predefined tools/function calls;
(3) code generation, with output from a code executor;
(4) working as agents that use tools/code generation either in series or in a DAG with parallel processing.

The diagram lets you visualize all these parts clearly. 👉 Links in comments! #lightrag #llms #ai #ml
________________________
LightRAG: The Lightning Library for LLM Applications. It is light and modular, with a 100% readable codebase. Follow + hit 🔔 to stay updated.
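To make the two-part structure concrete, here is a minimal sketch in plain Python of assembling such a prompt. The tag names (<task_desc>, <tools>, <SYS>, etc.) come from the post; the `build_prompt` helper itself is illustrative and is not LightRAG's actual API.

```python
# Hypothetical sketch (not LightRAG's real API): assemble a prompt from
# the system-side sections named in the diagram, wrapped in <SYS></SYS>,
# followed by the user part.

def build_prompt(task_desc="", tools="", examples="", chat_history="",
                 context="", agent_step_history="", output_format="",
                 user_query=""):
    """Wrap non-empty system sections in <SYS></SYS>, then append <user_query>."""
    sections = [
        ("task_desc", task_desc),
        ("tools", tools),
        ("examples", examples),
        ("chat_history", chat_history),
        ("context", context),
        ("agent_step_history", agent_step_history),
        ("output_format", output_format),
    ]
    sys_body = "\n".join(
        f"<{name}>\n{value}\n</{name}>" for name, value in sections if value
    )
    return (f"<SYS>\n{sys_body}\n</SYS>\n"
            f"<user_query>\n{user_query}\n</user_query>")

# Zero-shot ICL: only a task description plus the question.
prompt = build_prompt(
    task_desc="Answer the question concisely.",
    user_query="What is in-context learning?",
)
print(prompt)
```

Adding RAG or memory is then just a matter of filling the <context> or <chat_history> slots before the call; empty sections are dropped, which is one way the "flexible placement" the post mentions can play out.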

Daniel Sautot

Chief Data Scientist @ AIris

3w

Thanks for your post. You can find the advantages and downsides of each technique using internal and/or external knowledge here: https://www.linkedin.com/posts/daniel-sautot_what-are-the-best-techniques-for-improving-activity-7216307788677791744-bkzt?utm_source=share&utm_medium=member_ios For internal knowledge, the ORPO paper also looks interesting: it uses a relative ratio loss, meaning it is a mix between LoRA and DPO.

Mahmoud Draz

Data strategist energy & AI lead | hands-on - bridge the gap between AI and energy

3w

Don’t you think that this data flow can be expensive to scale and may compromise latency? The loop (LLM → prompt) lacks constraints, with no guarantee of fast (enough) convergence to a final, good answer.

Rémy Fannader

Author of 'Enterprise Architecture Fundamentals', Founder & Owner of Caminao

3w

Looks like old-fashioned GO-TO programming languages ...

Sudeep Makwana

धर्मो रक्षति रक्षितः 🚩 | Tech lead at iorta | Founder @ Diggaj Coder | Node JS | Vuejs | NuxtJS | Kafka | Mongodb | WebRTC | ReactJs & Native | weexapp | Bitcoin | Tech Education Innovator | Husband Father Geek

3w

Good job Li Yin, thanks for sharing. 👍


That's great. Thank you Li Yin

Mike Morgan PhD

Emerging Science & Technology Professional

3w

Very informative Li Yin


