Sentence-Level Enhanced Graph-BERT (SEG-BERT) for Extractive Text Summarization

Osama E. Emam *

Information Systems Department, Faculty of Computers and Artificial Intelligence, Capital University (Formerly Helwan University), Cairo 11795, Egypt.

Helal A. Suleiman

Information Systems Department, Faculty of Computers and Artificial Intelligence, Capital University (Formerly Helwan University), Cairo 11795, Egypt.

Baraa A. Elhady

Information Systems Department, Faculty of Computers and Artificial Intelligence, Capital University (Formerly Helwan University), Cairo 11795, Egypt.

*Author to whom correspondence should be addressed.


Abstract

Background: The exponential growth of digital text across news media and online platforms has intensified the need for automatic summarization models that capture both semantic nuances and document-level structure. Methods: This study introduces Sentence-level Enhanced Graph-BERT (SEG-BERT), a hybrid extractive summarization framework that combines BERT-base contextual embeddings with graph convolutional networks (GCNs) and Sentence-level Positional Encoding (SLPE) to model documents as dynamic semantic graphs. Each sentence is encoded with BERT, enriched with trainable positional encodings, connected through a learned attention-based adjacency matrix, and refined via stacked GCN layers before a classification head assigns sentence salience scores for extractive selection. Results: SEG-BERT was trained on 287,113 news articles from the CNN/DailyMail corpus in a single-epoch proof-of-concept experiment and evaluated on a held-out sample of 1,000 unseen documents using ROUGE metrics, achieving F1 scores of 42.1 (ROUGE-1), 19.5 (ROUGE-2), and 40.2 (ROUGE-L). Conclusions: The results demonstrate the viability of sentence-level graph-augmented transformer architectures for large-scale extractive summarization and provide a solid foundation for multi-epoch optimization and cross-domain evaluation of SEG-BERT.
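The forward pass described in the Methods section can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the weight matrices are random stand-ins, the BERT sentence embeddings are simulated as plain arrays, and all function and parameter names are assumptions introduced here for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def seg_bert_scores(sent_emb, pos_emb, Wq, Wk, gcn_weights, w_out):
    """Hypothetical sketch of the SEG-BERT salience-scoring pipeline.

    sent_emb:    (n, d) sentence embeddings (stand-in for BERT-base outputs)
    pos_emb:     (n, d) trainable sentence-level positional encodings (SLPE)
    Wq, Wk:      (d, d) projections for the learned attention-based adjacency
    gcn_weights: list of (d, d) weight matrices, one per stacked GCN layer
    w_out:       (d,) classification head producing per-sentence salience
    """
    # 1) Enrich contextual embeddings with sentence-level positional encodings.
    h = sent_emb + pos_emb
    # 2) Build a learned attention-based adjacency matrix over sentences.
    q, k = h @ Wq, h @ Wk
    adj = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    # 3) Refine node states via stacked GCN layers (aggregate, project, ReLU).
    for W in gcn_weights:
        h = np.maximum(adj @ h @ W, 0.0)
    # 4) Classification head: sigmoid salience score in (0, 1) per sentence.
    logits = h @ w_out
    return 1.0 / (1.0 + np.exp(-logits))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 6, 16  # 6 sentences, 16-dim toy embeddings
    scores = seg_bert_scores(
        rng.standard_normal((n, d)), rng.standard_normal((n, d)),
        rng.standard_normal((d, d)), rng.standard_normal((d, d)),
        [rng.standard_normal((d, d)) * 0.1 for _ in range(2)],
        rng.standard_normal(d),
    )
    # Extractive selection: pick the top-3 sentences by salience score.
    summary_idx = np.argsort(scores)[::-1][:3]
    print(summary_idx, scores.round(3))
```

In a real implementation the first step would use actual BERT-base sentence representations and the weights would be learned end-to-end; the sketch only shows how the four stages compose.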

Keywords: Extractive summarization, BERT, graph convolutional networks, sentence-level positional encoding, CNN/DailyMail, ROUGE


How to Cite

Emam, Osama E., Helal A. Suleiman, and Baraa A. Elhady. 2026. “Sentence-Level Enhanced Graph-BERT (SEG-BERT) for Extractive Text Summarization”. Journal of Advances in Mathematics and Computer Science 41 (5):1-15. https://doi.org/10.9734/jamcs/2026/v41i52134.
