GPT vs Gemini for Structured Information Extraction
Published 11/2024
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English | Size: 856.27 MB | Duration: 0h 34m
A systematic approach for evaluating the Structured Output accuracy of Large Language Models
What you'll learn
How to use the Structured Output feature in GPT
How to use the Structured Output feature in Gemini
How to extract different data types, such as numerical values, booleans, etc.
How to measure the accuracy of the structured information you extracted (see the sketch after this list)
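As a preview of the evaluation idea, here is a minimal sketch of field-level exact-match accuracy against a hand-labelled gold set. The record structure and field names are illustrative assumptions, not the course's own evaluation code.

```python
# Illustrative only: field-level exact-match accuracy against hand-labelled gold records.
# The gold/predicted record layout and field names are assumptions for this sketch.

def field_accuracy(gold_records: list[dict], predicted_records: list[dict]) -> dict[str, float]:
    """Fraction of records where each extracted field exactly matches the gold label."""
    fields = gold_records[0].keys()
    return {
        field: sum(
            pred.get(field) == gold[field]
            for gold, pred in zip(gold_records, predicted_records)
        ) / len(gold_records)
        for field in fields
    }

gold = [{"total_amount": 1250.0, "is_paid": False}]
pred = [{"total_amount": 1250.0, "is_paid": True}]
print(field_accuracy(gold, pred))  # {'total_amount': 1.0, 'is_paid': 0.0}
```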
Requirements
Fair proficiency in Python
Familiarity with Jupyter notebooks
Preferable: basic knowledge of the spaCy NLP library
Description
Natural Language Processing (NLP) is often* considered to be the combination of two branches of study: Natural Language Understanding (NLU) and Natural Language Generation (NLG). Large Language Models can do both NLU and NLG. In this course we are primarily interested in the NLU aspect, more specifically in how to extract structured information from free-form text. (There is also an NLG aspect to the course, which you will notice as you watch the video lessons.)

Recently, both GPT and Gemini introduced the ability to extract structured output from the prompt text. As of this writing (November 2024), they are the only LLMs that provide native support for this feature via their APIs: you simply specify the response schema as a Python class, and the LLMs give you a "best effort" response that is guaranteed to follow the schema. It is best effort because, while the response always follows the schema, sometimes the fields come back empty.

How can we assess the accuracy of this structured information extraction? This course provides a practical and systematic approach for assessing the accuracy of LLM Structured Output responses. So which one is better, GPT or Gemini? Watch the course to find out :-)

*For example, that is how Ines Montani, co-founder of spaCy, recently described the fields in a podcast interview.
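To make the "response schema as a Python class" idea concrete, here are minimal sketches of both APIs as they looked in November 2024. The Invoice schema, model names, prompt text, and API-key handling are illustrative assumptions, not material taken from the course.

```python
# GPT: structured output via the OpenAI Python SDK's parse helper,
# using a Pydantic model as the response schema.
from pydantic import BaseModel
from openai import OpenAI


class Invoice(BaseModel):
    vendor: str
    total_amount: float
    is_paid: bool


client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract the invoice details."},
        {"role": "user", "content": "Acme Corp billed us $1,250.00; the invoice is still unpaid."},
    ],
    response_format=Invoice,
)
invoice = completion.choices[0].message.parsed  # an Invoice instance; fields may still be empty on a best-effort miss
print(invoice)
```

Gemini takes a similar schema-first approach, but through a generation config that requests JSON conforming to a typed class:

```python
# Gemini: structured output via the google-generativeai SDK,
# using a TypedDict as the response schema (as in the Gemini docs of the time).
import json
import os
from typing_extensions import TypedDict

import google.generativeai as genai


class Invoice(TypedDict):
    vendor: str
    total_amount: float
    is_paid: bool


genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Extract the invoice details: Acme Corp billed us $1,250.00; the invoice is still unpaid.",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",
        response_schema=Invoice,
    ),
)
print(json.loads(response.text))  # dict that conforms to the Invoice schema
```

The design difference worth noticing: GPT returns an already-parsed Pydantic object, while Gemini returns schema-conforming JSON text that you parse yourself; either way, the accuracy of the extracted field values still has to be measured, which is what the course is about.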