Argument Summarization: Enhancing Summary Generation and Evaluation Metrics
Abstract
In the current era of mass digital information, the need for effective argument summarization has become paramount. This thesis explores the domain of argument summarization, focusing on the development of techniques and evaluation metrics that improve the quality of summarization models. The study first investigates the task of key point analysis and the challenges associated with previous approaches to it, emphasizing the importance of summary coverage. To address these challenges, we propose a novel clustering-based framework that leverages the inherent semantics of arguments to identify and group similar arguments. The proposed approach is evaluated on a benchmark dataset and compared with previous state-of-the-art methods, demonstrating its effectiveness.
In addition to the proposed framework, this thesis presents an analysis of the evaluation metric previously used for argument summarization. The commonly used metric, ROUGE, is evaluated, revealing its limitations in capturing the nuanced aspects of argument quality. To address these limitations, we introduce new evaluation metrics and methods that consider the coverage and redundancy of the generated summaries, providing more accurate and informative assessments of summarization models. We further show that our evaluation metrics correlate better with actual summary quality, a relationship that previous metrics fail to capture.