Data Representation for Deep Learning-Based Arabic Text Summarization Performance Using Python Results

Authors

  • Mohamed Yassin Abdelwahab Yassin, Faculty of Computer and Information Technology, Al-Madinah International University, Taman Desa Petaling, 57100 Kuala Lumpur, Malaysia
  • Yazeed Al Moaiad, Faculty of Computer and Information Technology, Al-Madinah International University, Taman Desa Petaling, 57100 Kuala Lumpur, Malaysia

DOI:

https://doi.org/10.15379/ijmst.v11i1.3646

Keywords:

Machine Learning, Sequence-to-sequence, Deep Learning Model, Natural Language Processing, Arabic Text Summarization.

Abstract

A sequence-to-sequence model serves as the foundation of the proposed abstractive Arabic text summarization system. Our goal is to build sequence-to-sequence models from several deep artificial neural network architectures and determine which one performs best. The encoder and decoder are constructed from multiple layers of recurrent neural networks, gated recurrent units, recursive neural networks, convolutional neural networks, long short-term memory, and bidirectional long short-term memory. In this study we re-implement the basic summarization model within the sequence-to-sequence framework. The models are built with the Keras library and run in a Google Colab Jupyter notebook. The results further demonstrate that one of the key techniques that has led to breakthrough performance with deep neural networks is the use of Gensim word embeddings, together with FastText, a library for efficient learning of word representations and sentence classification, in place of other text representations in the abstractive summarization models.
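As a rough illustration of the architecture the abstract describes, the sketch below builds a single-layer LSTM encoder-decoder in Keras. It is a minimal sketch, assuming a TensorFlow/Keras environment; the vocabulary sizes, embedding dimension, and hidden size are hypothetical placeholders rather than values from the paper, and the paper's experiments additionally swap in GRU, bidirectional LSTM, and other layers and initialize the embeddings from pretrained Gensim/FastText vectors.

```python
# Minimal seq2seq summarizer sketch (hypothetical sizes, not the paper's settings).
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

SRC_VOCAB = 30000   # hypothetical source (article) vocabulary size
TGT_VOCAB = 15000   # hypothetical target (summary) vocabulary size
EMB_DIM = 300       # common FastText/Gensim embedding dimensionality
HIDDEN = 256        # hypothetical recurrent hidden-state size

# Encoder: embed the article tokens and keep only the final LSTM states.
enc_inputs = Input(shape=(None,), name="article_tokens")
enc_emb = Embedding(SRC_VOCAB, EMB_DIM, mask_zero=True)(enc_inputs)
_, state_h, state_c = LSTM(HIDDEN, return_state=True)(enc_emb)

# Decoder: generate the summary token by token, initialized with the encoder states.
dec_inputs = Input(shape=(None,), name="summary_tokens")
dec_emb = Embedding(TGT_VOCAB, EMB_DIM, mask_zero=True)(dec_inputs)
dec_seq, _, _ = LSTM(HIDDEN, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c]
)
dec_probs = Dense(TGT_VOCAB, activation="softmax")(dec_seq)

model = Model([enc_inputs, dec_inputs], dec_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

Comparing architectures, as the abstract describes, mainly involves swapping the `LSTM` layers for `GRU` or bidirectional variants, and pretrained word vectors can be supplied to the `Embedding` layers through an `embeddings_initializer` or by calling `set_weights` before training.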


Published

2024-04-13

How to Cite

[1] M. Y. A. Yassin and Y. A. Moaiad, “Data Representation for Deep Learning-Based Arabic Text Summarization Performance Using Python Results”, ijmst, vol. 11, no. 1, pp. 339-356, Apr. 2024.