LlamaIndex’s Post


Worked with AI but wondering why to use LlamaIndex? This new video is an introduction to LlamaIndex and in particular its agentic capabilities, covering:

➡️ What LlamaIndex is, including:
⭐️ Frameworks in Python and TypeScript
⭐️ LlamaParse, a service for parsing complex documents for LLMs
⭐️ LlamaCloud, a service for end-to-end enterprise RAG
⭐️ LlamaHub, free downloads of data connectors, LLM adapters, embedding models, vector stores, agent tools, and more

➡️ Why you should use it, including:
⭐️ Build faster
⭐️ Eliminate boilerplate
⭐️ Avoid early pitfalls
⭐️ Get into production
⭐️ Deliver business value

➡️ What it can do, including code examples of how to build:
⭐️ Retrieval-augmented generation (RAG)
⭐️ World-class parsing
⭐️ Agents and multi-agent systems

https://lnkd.in/d_D3fenm
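
For readers who want a concrete feel for the framework before watching, here is a minimal RAG sketch in Python. It is an illustration only: it assumes the llama-index package is installed, an OPENAI_API_KEY is set so the default LLM and embedding model work, and that ./data and the sample question are placeholders.

"""
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local folder (./data is an illustrative path)
documents = SimpleDirectoryReader("./data").load_data()

# Build an in-memory vector index over the documents
index = VectorStoreIndex.from_documents(documents)

# Ask a question grounded in the indexed documents
query_engine = index.as_query_engine()
response = query_engine.query("What are the key points of the report?")
print(response)
"""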

Farhad Davaripour, Ph.D.

Lead Data Scientist at Arcurve | GenAI & Simulation Specialist | Sessional Instructor at University of Calgary | Mentor

1mo

It took me a few hours to figure out how to store all the intermediate reasoning steps and display them in the Streamlit app. Here's the code for anyone who might find it useful:

"""
task = agent.create_task(query)

if st.button("Submit"):
    with st.spinner("Processing your query..."):
        # Run agent steps one at a time until the task is complete
        step_output = agent.run_step(task.task_id)
        while not step_output.is_last:
            step_output = agent.run_step(task.task_id)

        # Show the final answer
        st.markdown(step_output.dict()["output"].response)

        # Show the intermediate reasoning collected on the completed task
        st.subheader("Reasoning:")
        with st.expander("Show Reasoning"):
            for step in agent.get_completed_tasks()[-1].extra_state["current_reasoning"]:
                st.markdown(step)
"""

This is the main part that stores the reasoning steps:
agent.get_completed_tasks()[-1].extra_state["current_reasoning"]

Ideally, the chat_history would include the intermediate steps, but since it doesn't (it only includes the first and last messages), this is the best solution I've found so far.

cc: Jerry Liu
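
A small variation on the snippet above, for anyone who wants the intermediate steps to appear while the agent is still working rather than only after completion. This is a sketch built on the same create_task/run_step loop; the steps_box name is illustrative, and the per-step output is whatever each step's output object exposes as .response:

"""
task = agent.create_task(query)

if st.button("Submit"):
    steps_box = st.container()  # illustrative container to stream step outputs into
    with st.spinner("Processing your query..."):
        step_output = agent.run_step(task.task_id)
        while not step_output.is_last:
            # Render each intermediate step's output as soon as it is produced
            steps_box.markdown(step_output.dict()["output"].response)
            step_output = agent.run_step(task.task_id)
        # Final answer
        st.markdown(step_output.dict()["output"].response)
"""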

Agentic capabilities let AI tools handle complex, dynamic tasks through self-reflective, iterative processes: the agent plans, acts, and adjusts based on intermediate results. This improves performance and efficiency on sophisticated workflows, with applications in software development, business strategy, and operational optimization.

Chirawat Chitpakdee

Technology Lead for AI Engineering/Applications, Automation & RPA, Python Development, and Performance Testing

1mo

Could you please do a cookbook or tutorial on multi-agent setups (one agent per function tool)? I really need it.
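
Not a full cookbook, but here is a rough sketch of the "one agent per function tool" pattern using ReActAgent and FunctionTool from llama_index.core. The tool functions, names, and descriptions are illustrative, and a default LLM configured via OPENAI_API_KEY (or Settings.llm) is assumed:

"""
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool

# Illustrative plain-Python tools
def add(a: float, b: float) -> float:
    return a + b

def multiply(a: float, b: float) -> float:
    return a * b

# One specialist agent per function tool
add_agent = ReActAgent.from_tools(
    [FunctionTool.from_defaults(fn=add, name="add", description="Add two numbers.")],
    verbose=True,
)
mul_agent = ReActAgent.from_tools(
    [FunctionTool.from_defaults(fn=multiply, name="multiply", description="Multiply two numbers.")],
    verbose=True,
)

# Each specialist agent is itself wrapped as a tool for a top-level agent
def ask_add_agent(query: str) -> str:
    return add_agent.chat(query).response

def ask_mul_agent(query: str) -> str:
    return mul_agent.chat(query).response

top_agent = ReActAgent.from_tools(
    [
        FunctionTool.from_defaults(fn=ask_add_agent, name="addition_agent",
                                   description="Answers questions that require addition."),
        FunctionTool.from_defaults(fn=ask_mul_agent, name="multiplication_agent",
                                   description="Answers questions that require multiplication."),
    ],
    verbose=True,
)

print(top_agent.chat("What is (2 + 3) * 7?").response)
"""

The same wrapping trick extends to RAG: replace the plain functions with sub-agents that own their own query engines and tools.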

Hamid Zade(Gholizadegan), MBA

Co-Founder at JazzB.com | LLM | Data Science | E-Commerce

1mo

This is going fantastically! Great job. A multi-agent system with RAG is perfect for automating blog solutions; the only missing part is multimedia.

Wady Bensalah

Senior AI Engineer & Data Scientist

1mo

Can you seamlessly use this agent with a CrewAI Crew?

Vivek Prasad

Data scientist at WNS Global Services

1mo

Can we use Mistral, Cohere, Claude, or any other LLMs for agentic chunking, apart from ChatGPT?
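
In general, yes: LlamaIndex is LLM-agnostic, so any component that reads the default LLM from Settings (agents, query engines, LLM-driven chunking) can run on a different provider. A minimal sketch, assuming the llama-index-llms-anthropic and llama-index-llms-mistralai integration packages are installed, with illustrative model names:

"""
from llama_index.core import Settings
from llama_index.llms.anthropic import Anthropic  # pip install llama-index-llms-anthropic

# Everything that uses the default LLM now calls Claude instead of OpenAI
Settings.llm = Anthropic(model="claude-3-5-sonnet-20240620")

# Or, for Mistral (pip install llama-index-llms-mistralai):
# from llama_index.llms.mistralai import MistralAI
# Settings.llm = MistralAI(model="mistral-large-latest")
"""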

Philemon Kiprono

AI Engineer | Microsoft Certified | Azure | LlamaIndex | RAGs | DSPy

1mo

Amazing insights

Swapnil Agrawal

ACS With AI | Data Scientist at Capgemini | AWS | Machine Learning | Computer Vision | Python Developer | Data Modeling | CDAC

1mo

Very helpful!
