The data flow of LLM applications in one diagram:

Prompts can be roughly divided into two parts:
- System prompt: <task_desc>, <tools>, <examples>, <chat_history>, <context>, <agent_step_history>, <output_format>.
- User part: <user_query>.

LightRAG encapsulates the system part with <SYS></SYS>. The placement of these elements can be flexible.

Leveraging the model's internal knowledge: if you only ask a question, it is simple QA using the model's internal knowledge. To better distill that knowledge, you can add <task_desc> (zero-shot in-context learning, ICL) or add few-shot to many-shot <examples> (few-shot and many-shot ICL).

Beyond internal knowledge, LLMs have four major ways to interact with the world:
(1) taking external context, such as <context> retrieved from a retriever (RAG) and <chat_history> to enable memory (MemGPT);
(2) using predefined tools/function calls;
(3) generating code and consuming the output of a code executor;
(4) working as agents that use tools/code generation either in series or in a DAG with parallel processing.

The diagram we provide lets you visualize all these parts clearly.

👉 Links in comments!

#lightrag #llms #ai #ml
________________________
LightRAG: The Lightning Library for LLM Applications. It is light and modular, with a 100% readable codebase.

Follow + hit 🔔 to stay updated.
Thanks for your post. You can find the advantages and downsides of each technique using internal and/or external knowledge here: https://www.linkedin.com/posts/daniel-sautot_what-are-the-best-techniques-for-improving-activity-7216307788677791744-bkzt?utm_source=share&utm_medium=member_ios For internal knowledge, the ORPO paper also looks interesting: it uses a relative ratio loss, meaning it's a mix between LoRA and DPO.
Don't you think this data flow can be expensive to scale and may compromise latency? The LLM-prompt loop lacks constraints, with no guarantee of fast (enough) convergence to a final, good, answer.
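One common way to address this concern is to bound the LLM-prompt loop with a hard step limit and an explicit stop condition, so the loop terminates even when the model never converges. A minimal sketch, where `call_llm` and `is_final_answer` are hypothetical placeholders (not LightRAG APIs):

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return "FINAL: 42"


def is_final_answer(response: str) -> bool:
    # Hypothetical stop condition: the model marks its final answer.
    return response.startswith("FINAL:")


def bounded_agent_loop(prompt: str, max_steps: int = 5) -> str:
    """Iterate LLM -> prompt, but never more than max_steps times."""
    response = ""
    for _ in range(max_steps):
        response = call_llm(prompt)
        if is_final_answer(response):
            return response
        # Feed the intermediate step back into the prompt and retry.
        prompt = f"{prompt}\n<agent_step_history>{response}</agent_step_history>"
    # Fallback: return the last response rather than looping forever.
    return response
```

The step cap trades answer quality for a latency guarantee: the loop always finishes within `max_steps` model calls, and the caller decides what to do with a non-final last response.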
Looks like old-fashioned GOTO programming ...
Good job Li Yin, thanks for sharing! 👍
That's great. Thank you Li Yin
Very informative Li Yin
Author of AdalFlow | AI researcher | ex-MetaAI
LightRAG GitHub: https://github.com/SylphAI-Inc/LightRAG
LightRAG docs: https://lightrag.sylph.ai/
Discord: https://discord.gg/ezzszrRZvT