es404020
Understanding the OpenAI JSONL Format: Organising the Records

In the early days of sorting mail for the postal service, the Six Triple Eight faced challenges with returned letters marked as invalid. This was often due to errors stemming from their lack of prior experience processing such an enormous volume of mail. Over time, they developed innovative indexing systems to match names with regiments and ranks, significantly improving efficiency and accuracy.

Similarly, when working with OpenAI's Large Language Models (LLMs), understanding and adhering to the required input format is crucial. Just as improperly indexed mail led to returned letters, poorly formatted data can result in ineffective fine-tuning and suboptimal results. OpenAI uses the JSONL (JSON Lines) format as the organisational framework for fine-tuning, ensuring data is structured and ready for processing.

Why JSONL Format?

The JSONL format allows data to be stored in a line-by-line structure, where each line represents a single record in JSON format. This structure is compact, easy to read, and compatible with OpenAI’s fine-tuning API. Proper formatting ensures:

  • Accuracy: The model processes data as intended, avoiding errors.

  • Efficiency: Fine-tuning becomes seamless with a consistent structure.

  • Scalability: Large datasets can be managed effectively without complex configurations.
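To make this concrete, a JSONL file is nothing more than one JSON object per line. The sketch below (using only the standard library, with made-up toy records) writes and reads a small JSONL file:

```python
import json

# Two toy records, one JSON object per line
records = [
    {"text": "stocks rally after earnings", "label": "business"},
    {"text": "midfielder signs new contract", "label": "sport"},
]

# Write: dump each record on its own line
with open("sample.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Read: parse the file back line by line
with open("sample.jsonl") as f:
    loaded = [json.loads(line) for line in f]

print(loaded[0]["label"])  # prints: business
```

Because each line is independent, large files can be streamed record by record without loading the whole dataset into memory.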

Example JSONL Format for Fine-Tuning

Here’s how data is typically formatted in JSONL for fine-tuning OpenAI models:

openai_format = {
    "messages": [
        {"role": "system", "content": system},
        {"role": "user", "content": ""},
        {"role": "assistant", "content": ""}
    ]
}

Each record has three key components:

  • system: The instruction that defines the model's role and task.

  • user: The sample input data.

  • assistant: The expected output, i.e. the label for the input.

Let's convert a CSV dataset into this format:

import json
import pandas as pd

# Load the dataset, skipping malformed rows
df = pd.read_csv('/content/dataset/train.csv', on_bad_lines='skip')

# Keep a small subset and estimate its token count
final_df = df.head(150)
total_tokens = cal_num_tokens_from_df(final_df, 'gpt-3.5-turbo')
print(f"total {total_tokens}")

system = "You are an intelligent assistant designed to classify news articles into five categories: business, entertainment, sport, tech, politics"

# Write one chat-formatted record per line
with open('dataset/train.jsonl', 'w') as f:
    for _, row in final_df.iterrows():
        openai_format = {
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": row['text']},
                {"role": "assistant", "content": row['label']}
            ]
        }
        json.dump(openai_format, f)
        f.write('\n')

Sample output (one line of train.jsonl):

{"messages": [{"role": "system", "content": "You are an intelligent assistant designed to classify news articles into five categories: business, entertainment, sport, tech, politics"}, {"role": "user", "content": "qantas considers offshore option australian airline qantas could transfer as"}, {"role": "assistant", "content": "business"}]}
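Before uploading the file for fine-tuning, it's worth checking that every line parses and follows the chat structure OpenAI expects (a `messages` list with system, user, and assistant turns). A minimal validation sketch:

```python
import json

def validate_jsonl(path: str) -> int:
    """Return the number of records, raising if any line is malformed."""
    expected_roles = ["system", "user", "assistant"]
    count = 0
    with open(path) as f:
        for line_no, line in enumerate(f, start=1):
            record = json.loads(line)  # raises if the line isn't valid JSON
            roles = [m["role"] for m in record["messages"]]
            assert roles == expected_roles, f"line {line_no}: unexpected roles {roles}"
            count += 1
    return count
```

Calling `validate_jsonl('dataset/train.jsonl')` on the file generated above returns the record count if everything is well-formed, and fails loudly on the first bad line, which is far cheaper than discovering the problem after submitting a fine-tuning job.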

Lessons from the Six Triple Eight

The Six Triple Eight's early challenges in processing mail highlight the importance of preparation and learning. Their indexing innovations ensured that records were correctly matched and delivered, just as adhering to the JSONL format ensures that fine-tuning yields effective and accurate results.

When fine-tuning LLMs, understanding and structuring data in the correct format is as critical as the Six Triple Eight's journey to mastering the art of mail sorting. By learning from both history and technology, we can achieve remarkable results in solving complex logistical challenges.
