Case Studies
May 6, 2024

Smaller and Better: How to Outperform ChatGPT at 1% of the Cost

A supply chain analytics company wanted to enhance its investment decision-making by accurately extracting organization names from financial documents using Named Entity Recognition (NER). Our team employed a two-step approach that delivered significant improvements: the fine-tuned small model achieved an impressive 87.15% F1 score at only 1% of the cost of GPT-3.5.

Problem

A supply chain analytics company based in Silicon Valley wanted to enhance its investment decision-making by accurately extracting organization names from financial documents, a task known as Named Entity Recognition (NER). However, financial data poses a unique challenge for NER models, as it contains a high density of multi-word organization entities compared to the more common person or location entities found in general text.
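To make the challenge concrete: NER systems typically emit one BIO tag per token, and a multi-word organization spans several tokens that must be decoded back into a single entity. A minimal sketch of that decoding step (the sentence, tags, and company name below are illustrative, not the client's data):

```python
def decode_bio(tokens, tags):
    """Decode per-token BIO tags into (entity_text, label) spans."""
    entities, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new entity begins
            if current:
                entities.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)         # continue the open entity
        else:                             # "O" tag: close any open entity
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    return entities

tokens = ["Shares", "of", "Acme", "Supply", "Chain", "Holdings", "rose", "."]
tags   = ["O", "O", "B-ORG", "I-ORG", "I-ORG", "I-ORG", "O", "O"]
orgs = decode_bio(tokens, tags)
# → [("Acme Supply Chain Holdings", "ORG")]
```

A single boundary mistake anywhere in that four-token span breaks the whole entity, which is why dense multi-word organizations make financial text harder than general text.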

Approach

Our team employed a two-step approach to speed up development, reduce cost, and increase accuracy:

  1. AI-Assisted Label Correction: We used an AI-powered process to quickly identify and correct any labeling discrepancies in the training data, ensuring high-quality annotations.
  2. Fine-Tuning a Smaller Language Model: We fine-tuned the RoBERTa-base model, a more compact language model, on the cleansed dataset.
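Step 1 can be sketched as a disagreement filter: wherever a strong checker model's predictions disagree with the stored annotations, the example is flagged for human review. The function and helper names here are illustrative, not the production pipeline:

```python
def flag_label_discrepancies(examples, predict):
    """Flag training examples whose annotations disagree with a
    checker model, so a reviewer can correct noisy labels.

    examples: list of dicts with "tokens" and "labels" (BIO tags).
    predict:  callable mapping a token list to predicted BIO tags
              (e.g. an ensemble or a large model used as a checker).
    """
    flagged = []
    for ex in examples:
        pred_tags = predict(ex["tokens"])
        mismatches = [
            (i, gold, pred)
            for i, (gold, pred) in enumerate(zip(ex["labels"], pred_tags))
            if gold != pred
        ]
        if mismatches:
            flagged.append({"tokens": ex["tokens"], "mismatches": mismatches})
    return flagged

# Toy checker: tags "Acme" / "Corp" as an ORG span, everything else as O.
def toy_predict(tokens):
    return ["B-ORG" if t == "Acme" else "I-ORG" if t == "Corp" else "O"
            for t in tokens]

data = [{"tokens": ["Acme", "Corp", "fell"], "labels": ["B-ORG", "O", "O"]}]
review_queue = flag_label_discrepancies(data, toy_predict)
# → one flagged example: "Corp" was annotated "O" but predicted "I-ORG"
```

Only the flagged examples need human attention, which is what makes the correction pass fast and cheap compared to re-annotating the whole dataset.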

Before

Before working with our team, the client had tried several off-the-shelf AI solutions, including GPT-3.5, the model behind the popular ChatGPT. These models achieved F1 scores (a measure of accuracy) of only 50-60%.
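The F1 score used in these comparisons combines precision and recall over predicted entity spans, where an entity counts as correct only on an exact boundary-and-label match. A minimal sketch of span-level F1 (the scoring code and example spans are ours, an illustration rather than the exact evaluation script):

```python
def span_f1(gold, pred):
    """Span-level precision, recall, and F1.

    gold, pred: sets of (start, end, label) tuples; a prediction is a
    true positive only if boundaries and label match exactly.
    """
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

gold = {(0, 2, "ORG"), (5, 7, "ORG"), (9, 10, "ORG")}
pred = {(0, 2, "ORG"), (5, 6, "ORG"), (9, 10, "ORG")}  # one boundary error
p, r, f1 = span_f1(gold, pred)
```

Note how the single boundary error costs both a false negative and a false positive at once, which is why multi-word entities drag F1 down so sharply.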

After

The fine-tuned RoBERTa-base model achieved an 87.15% F1 score, roughly 27 percentage points higher than off-the-shelf GPT-3.5. It also slightly outperformed GPT-3.5 fine-tuned on the same data, which achieved an 87% F1 score.

Result

Our approach yielded a remarkable improvement of roughly 27 percentage points in financial NER performance.

Not only that, but our RoBERTa-base models in production processed 100,000 articles at 1% of the cost of the GPT-3.5 model, a staggering 99% reduction in ongoing operational expenses. While GPT-3.5 costs around $1,000, RoBERTa-base costs only $13.09!
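The arithmetic behind that 99% figure can be checked directly from the numbers in this write-up:

```python
# Cost figures from this case study: 100,000 articles processed for
# about $1,000 with GPT-3.5 vs $13.09 with fine-tuned RoBERTa-base.
ARTICLES = 100_000
GPT35_TOTAL = 1_000.00
ROBERTA_TOTAL = 13.09

gpt35_per_article = GPT35_TOTAL / ARTICLES      # $0.01 per article
roberta_per_article = ROBERTA_TOTAL / ARTICLES  # ~$0.00013 per article
cost_ratio = ROBERTA_TOTAL / GPT35_TOTAL        # ~1.3% of GPT-3.5's cost
savings_pct = 100 * (1 - cost_ratio)            # ~98.7%, i.e. ~99% cheaper

print(f"Per article: GPT-3.5 ${gpt35_per_article:.4f} vs "
      f"RoBERTa-base ${roberta_per_article:.6f}")
print(f"Savings: {savings_pct:.1f}%")
```

The exact per-article costs depend on token counts and hosting setup, so treat this as a sanity check on the headline ratio rather than a pricing model.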

To learn more about this case study, including how we fine-tuned both the winning RoBERTa-base model and GPT-3.5 and how we computed the ongoing cost of each model, contact our team here.