Table 2. Model performance of Transformer models on different datasets. For Bert-Base/Large and Roberta-Base, Matthews correlation is reported for CoLA and accuracy for the other datasets; perplexity is reported for GPT2-Base/Medium/Large.

Model      Bert-Base            Roberta-Base         Bert-Large
Dataset    CoLA   RTE    QNLI   CoLA   RTE    QNLI   CoLA   RTE    QNLI
Plaintext  0.616  0.700  0.916  0.629  0.805  0.920  0.686  0.755  0.922
PUMA       0.613  0.700  0.916  0.618  0.805  0.918  0.690  0.747  0.918

Model      GPT2-Base  GPT2-Medium  GPT2-Large
Plaintext  16.284     12.536       10.142
PUMA       16.284     12.540       10.161
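
For reference, the metrics reported in Table 2 follow their standard definitions. The sketch below shows how Matthews correlation, accuracy, and perplexity are typically computed; the function names and the example counts are illustrative assumptions, not taken from the paper's evaluation code.

```python
import math

def matthews_corr(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from a binary confusion matrix
    (the metric reported for CoLA)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def accuracy(correct: int, total: int) -> float:
    """Classification accuracy (the metric reported for RTE and QNLI)."""
    return correct / total

def perplexity(total_nll: float, num_tokens: int) -> float:
    """Perplexity from the summed token negative log-likelihoods
    (the metric reported for GPT2-Base/Medium/Large)."""
    return math.exp(total_nll / num_tokens)

if __name__ == "__main__":
    # Hypothetical counts, for illustration only.
    print(round(matthews_corr(tp=320, tn=210, fp=45, fn=50), 3))
    print(round(perplexity(total_nll=2.79e6, num_tokens=1_000_000), 3))
```

Lower perplexity is better, so the GPT2 rows show PUMA's secure inference tracking the plaintext models closely; for the classification tasks, higher Matthews correlation and accuracy are better.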
