TrueFoundry Newsletter #13: LLM #1 Exploring their applications! 💯

Hello There👋

We have entered March, the fiscal year-end. I hope you end it on a high.⚡

A lot has already been written about ChatGPT, but we found much of it overwhelming and generic. So we decided to write a series of three blogs hyper-focused on the practical use of LLMs. We will cover how other companies are using Large Language Models (LLMs), how to train and fine-tune LLMs on your own data, and the infrastructure requirements for deploying LLMs.

In this week's newsletter, we will talk about:

✅ Use of LLMs in different industries and areas - how companies similar to yours are leveraging LLMs

✅ A summary of three interesting conversations from the MLOps slack community


LLMs, LLMs Everywhere - where to use 🤔

With precise and descriptive prompts, users can get ChatGPT to produce high-quality results. In this blog, we explore how businesses have realized the massive value of large language models and have started integrating their APIs into products.
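To make the "precise and descriptive prompts" point concrete, here is a minimal sketch of how a product might assemble a role-scoped prompt before sending it to a chat-completion API. The function name, product name, and wording are illustrative assumptions; the message schema follows the common OpenAI-style chat format, and the actual API call is omitted.

```python
def build_support_messages(product: str, question: str) -> list[dict]:
    """Assemble a precise, role-scoped prompt for a chat-completion API.

    A descriptive system message constrains tone and scope, which is what
    turns a generic LLM into a usable product feature. (Sketch only; the
    model name and network call are left out.)
    """
    return [
        {
            "role": "system",
            "content": (
                f"You are a concise support agent for {product}. "
                "Answer only from the product documentation, and say "
                "'I don't know' if the answer is not covered there."
            ),
        },
        {"role": "user", "content": question},
    ]

messages = build_support_messages("Acme CRM", "How do I reset my password?")
```

The resulting `messages` list would be passed as the prompt payload to whichever LLM API the business integrates.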

We look at the areas of AI assistants, entertainment, customer service, healthcare, and marketing & sales.

[Image: different brand logos, generated by DALL·E]

Read the full blog here


🎯MLOps.Community slack discussions 

Below is a summary of some informative conversations from the most popular MLOps community. Click on a title to read the discussion in detail.

⭐️ A task orchestration tool that supports AWS Spot instances and auto-resume

Spot instances can be used to train ML models, but they run the risk of being terminated at any time. In workflow orchestrators, since most tasks in a workflow can be retried without side effects, we can rerun only the failed steps. During model training, we can checkpoint the model periodically so that training does not have to restart from scratch. AWS Spot instances emit an interruption notice shortly before termination, and we can use this as a signal to write one final checkpoint.

⭐️ Resource allocation between different teams with predictable spend

Genv is a platform for provisioning and orchestrating GPUs between users either on a single machine or a cluster of machines. Genv will find an available GPU on one of the GPU machines, connect to the machine over SSH, activate an environment within the shell, and mark the GPU as provisioned for this environment.

⭐️Experiment Tracking in complex pipelines

In complex DAG pipelines, various experiments might be conducted in a single run of the pipeline. Tracking these experiments and organizing them might become complicated. One solution could be to create an experiment run corresponding to the entire pipeline run and for each sub-graph in the DAG, we create individual runs. By finding a way to link the sub-graph runs to the top level pipeline run, we can create a graph structure for the experiment runs as well which might be a better way to organize your experiments in complex pipelines.


If you like our newsletter, spread the love to friends and colleagues.

🐦 Tweet to your followers

👋 Share on WhatsApp

With 💙 by TrueFoundry
