
Can Foundation Models Wrangle Your Data?

Avanika Narayan Ines Chami Laurel Orr Simran Arora Christopher Ré

Abstract

Foundation Models (FMs) are models trained on large corpora of data that, at very large scale, can generalize to new tasks without any task-specific fine-tuning. As these models continue to grow in size, innovations continue to push the boundaries of what they can do on language and image tasks. This paper aims to understand an underexplored area of FMs: classical data tasks like cleaning and integration. As a proof of concept, we cast five data cleaning and integration tasks as prompting tasks and evaluate the performance of FMs on these tasks. We find that large FMs generalize and achieve SoTA performance on data cleaning and integration tasks, even though they are not trained for these data tasks. We identify specific research challenges and opportunities that these models present, including challenges with private and domain-specific data, and opportunities to make data management systems more accessible to non-experts. We make our code and experiments publicly available at: https://github.com/HazyResearch/fm_data_tasks.
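As a concrete illustration of what "casting a data task as a prompting task" means, the sketch below serializes two product records as text and asks a completion-style FM whether they refer to the same entity. The helper names, record fields, and prompt wording are illustrative assumptions, not the paper's exact templates.

```python
# Hypothetical sketch: casting entity resolution as a prompt for a completion-style FM.
# The serialization format and question template are illustrative, not the paper's exact prompts.

def serialize(row: dict) -> str:
    """Flatten a record into an "attribute: value" string the model can read."""
    return " ".join(f"{k}: {v}" for k, v in row.items())

def entity_match_prompt(row_a: dict, row_b: dict) -> str:
    """Ask the FM whether two product records refer to the same entity."""
    return (
        f"Product A is {serialize(row_a)}. "
        f"Product B is {serialize(row_b)}. "
        "Are Product A and Product B the same? Yes or No?"
    )

row_a = {"title": "google sketchup pro 6", "manufacturer": "google", "price": "458.99"}
row_b = {"title": "sketchup pro 6.0", "manufacturer": "", "price": "459.00"}
print(entity_match_prompt(row_a, row_b))
# The model's completion ("Yes" / "No") is then mapped back to a match / non-match label.
```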

Code Repositories

hazyresearch/fm_data_tasks (official)
fminference/flexgen

Benchmarks

Benchmark                            Methodology                    F1 (%)
entity-resolution-on-amazon-google   text-davinci-002_zeroshot      54.30
entity-resolution-on-amazon-google   text-davinci-002_fewshot-10    63.50
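The fewshot-10 row differs from the zeroshot row only in how the prompt is assembled: roughly ten labeled demonstrations are prepended before the query pair. Below is a minimal sketch of that construction, with illustrative demonstration text rather than the paper's actual examples.

```python
# Hypothetical sketch of the zero-shot vs. few-shot setup behind the table above:
# the few-shot variant prepends k labeled demonstrations (k = 10 for fewshot-10)
# before the query prompt. Demonstration text and labels are illustrative.

def few_shot_prompt(demonstrations: list, query: str) -> str:
    """Prepend (prompt, answer) demonstration pairs to the query prompt."""
    shots = "\n\n".join(f"{p}\n{a}" for p, a in demonstrations)
    return f"{shots}\n\n{query}" if demonstrations else query

demos = [
    ("Product A is title: adobe photoshop cs2. Product B is title: photoshop cs2 upgrade. "
     "Are Product A and Product B the same? Yes or No?", "No"),
]
query = ("Product A is title: google sketchup pro 6. Product B is title: sketchup pro 6.0. "
         "Are Product A and Product B the same? Yes or No?")

print(few_shot_prompt([], query))     # zero-shot prompt
print(few_shot_prompt(demos, query))  # few-shot prompt with one demonstration
```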
