Student Project

Tell Me What I Don’t Know: Generating Selective Abstract Summaries

Incident reports may contain categorical data and free-text descriptions. This paper simulates abstractive summarization of such reports, with the goal of producing summaries that expand on, rather than repeat, the precise categorical data. Stories from the CNN / Daily Mail summarization dataset are cleaned up to serve as a proxy for our incident reports.

A baseline T5 transformer model is built by fine-tuning a small pre-trained model on the CNN training stories and their associated reference summaries. Summaries of the CNN test stories achieve ROUGE scores comparable to prior work. Single-sentence reference summaries are also modeled to measure the reduction in ROUGE scores that results from shorter summaries.
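For concreteness, the sketch below shows one way such a baseline could be assembled with the Hugging Face transformers and rouge_score libraries; the checkpoint name, generation parameters, and scoring setup are illustrative assumptions, not the exact configuration used in this project.

    from transformers import T5ForConditionalGeneration, T5Tokenizer
    from rouge_score import rouge_scorer

    # Assumed small pre-trained checkpoint; any compact T5 variant would do.
    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    def summarize(story: str, max_len: int = 150) -> str:
        # T5 expects a task prefix for summarization.
        inputs = tokenizer("summarize: " + story, return_tensors="pt",
                           truncation=True, max_length=512)
        ids = model.generate(inputs.input_ids, max_length=max_len,
                             num_beams=4, early_stopping=True)
        return tokenizer.decode(ids[0], skip_special_tokens=True)

    # ROUGE scoring of a generated summary against its reference.
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                      use_stemmer=True)
    # scores = scorer.score(reference_summary, summarize(story))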

A sentence is separated from each reference summary to represent 'known' categorical data to be excluded from predicted summaries. A naïve model trained without the known sentences generates summaries with lower ROUGE scores overall and no detectable improvement in the ROUGE gap between unknown and known sentences. Filtering the output of the baseline model to remove generated summary sentences that resemble the known data is also unsuccessful. However, a novel approach that appends each 'known' sentence to its input story before modeling is successful, roughly tripling the increase in ROUGE score for unknown versus known sentences compared with simply leaving the known data out of the modeling.
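To make the mechanism concrete, the sketch below illustrates how a training example might be assembled under this scheme; the "known:" marker, the choice of the first reference sentence as the known data, the naive sentence split, and the field names are assumptions for illustration rather than the exact format used in the project.

    def make_known_aware_example(story: str, reference: str) -> dict:
        # Naive sentence split for illustration; a real pipeline would use a
        # proper sentence segmenter.
        sentences = [s.strip() for s in reference.split(". ") if s.strip()]
        known, unknown = sentences[0], sentences[1:]
        # Append the 'known' sentence to the input story so the model can
        # learn, during fine-tuning, to leave that content out of its summary.
        return {
            "input_text": "summarize: " + story + " known: " + known,
            "target_text": ". ".join(unknown),
        }

    example = make_known_aware_example(
        "Full story or incident-report text ...",
        "Known categorical fact. New detail one. New detail two.",
    )

At inference time the same "known:" suffix would carry the categorical data already available for the report, so the generated summary concentrates on what the reader does not yet know.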

Last updated: November 27, 2021