Cheng Da, Chuwei Luo, Qi Zheng, Cong Yao

Abstract
Document pre-trained models and grid-based models have proven to be very effective on various tasks in Document AI. However, for the document layout analysis (DLA) task, existing document pre-trained models, even those pre-trained in a multi-modal fashion, usually rely on either textual features or visual features. Grid-based models for DLA are multi-modality but largely neglect the effect of pre-training. To fully leverage multi-modal information and exploit pre-training techniques to learn better representations for DLA, in this paper, we present VGT, a two-stream Vision Grid Transformer, in which Grid Transformer (GiT) is proposed and pre-trained for 2D token-level and segment-level semantic understanding. Furthermore, a new dataset named D$^4$LA, which is so far the most diverse and detailed manually-annotated benchmark for document layout analysis, is curated and released. Experiment results have illustrated that the proposed VGT model achieves new state-of-the-art results on DLA tasks, e.g. PubLayNet ($95.7\%$ $\rightarrow$ $96.2\%$), DocBank ($79.6\%$ $\rightarrow$ $84.1\%$), and D$^4$LA ($67.7\%$ $\rightarrow$ $68.8\%$). The code and models as well as the D$^4$LA dataset will be made publicly available at \url{https://github.com/AlibabaResearch/AdvancedLiterateMachinery}.
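The abstract describes a two-stream design: a vision stream over image patches and a Grid Transformer (GiT) stream over a 2D text grid, whose features are combined for layout detection. The following is a minimal sketch of that two-stream idea only, not the authors' implementation: the linear-projection stand-ins, feature dimensions, and concatenation fusion are illustrative assumptions.

```python
import numpy as np

def vision_stream(image_patches):
    # Stand-in for a Vision Transformer encoder: one linear projection
    # per image patch (assumed 256-dim output).
    rng = np.random.default_rng(0)
    W = rng.standard_normal((image_patches.shape[-1], 256))
    return image_patches @ W

def grid_stream(char_grid_patches):
    # Stand-in for the Grid Transformer (GiT): projects patches of the
    # 2D text grid that is spatially aligned with the image patches.
    rng = np.random.default_rng(1)
    W = rng.standard_normal((char_grid_patches.shape[-1], 256))
    return char_grid_patches @ W

def fuse(vis_feats, grid_feats):
    # Two-stream fusion by concatenation along the feature axis;
    # a detection head would consume the fused features.
    return np.concatenate([vis_feats, grid_feats], axis=-1)

patches = np.ones((196, 768))   # e.g. 14x14 image patches, 768-dim each
grid = np.ones((196, 768))      # aligned text-grid patches, same layout
fused = fuse(vision_stream(patches), grid_stream(grid))
print(fused.shape)              # (196, 512)
```

The key property the sketch illustrates is that both streams operate over the same spatial patch layout, so their token-level features can be fused position-by-position.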
Code Repositories
https://github.com/AlibabaResearch/AdvancedLiterateMachinery
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| document-layout-analysis-on-d4la | VGT | mAP: 68.8 Model Parameters: 174M |
| document-layout-analysis-on-publaynet-val | ResNeXt-101-32×8d | Figure: 0.968 List: 0.940 Overall: 0.935 Table: 0.976 Text: 0.930 Title: 0.862 |
| document-layout-analysis-on-publaynet-val | VGT | Figure: 0.971 List: 0.968 Overall: 0.962 Table: 0.981 Text: 0.950 Title: 0.939 |