Guardrails AI

Adding guardrails to large language models

About Guardrails AI

Guardrails is a Python package that lets users add structure, type, and quality assurance to the output generated by large language models (LLMs). Guardrails AI:

  • uses Pydantic-style validation to thoroughly check LLM output, including detecting bias, errors, and other irregularities;
  • takes corrective measures when validation fails (see the sketch after this list);
  • ensures that the generated output conforms to specific structure and type requirements, such as a JSON format.
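To picture the validate-then-correct pattern that Guardrails automates, here is a minimal sketch using plain Pydantic (v2). This is not the Guardrails API itself: the ProductSummary model and the call_llm stub are hypothetical names used only to illustrate the idea.

```python
from pydantic import BaseModel, Field, ValidationError


class ProductSummary(BaseModel):
    name: str
    rating: float = Field(ge=0, le=5)  # out-of-range ratings fail validation
    pros: list[str]


def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns the model's raw text output."""
    return '{"name": "Acme Mug", "rating": 4.5, "pros": ["sturdy", "cheap"]}'


def get_validated_output(prompt: str, retries: int = 2) -> ProductSummary:
    """Ask the LLM, validate its JSON against the schema, re-ask on failure."""
    for _ in range(retries + 1):
        raw = call_llm(prompt)
        try:
            # Structure and type checks happen here, Pydantic-style.
            return ProductSummary.model_validate_json(raw)
        except ValidationError as err:
            # Corrective measure: feed the validation errors back to the model.
            prompt = (
                f"{prompt}\n\nYour last answer was invalid:\n{err}\n"
                "Return corrected JSON only."
            )
    raise RuntimeError("LLM output failed validation after retries")


if __name__ == "__main__":
    print(get_validated_output("Summarize the Acme Mug review as JSON."))
```

In the library itself, this loop is driven by a Guard object built from your schema; see the Guardrails documentation for the current API.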
