Thesis Open Access
KOCHITO, TESHALE MENGESHA
<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>KOCHITO, TESHALE MENGESHA</dc:creator>
  <dc:date>2025-07-09</dc:date>
  <dc:description>Text summarization is the technique of reducing a lengthy text to a manageable length while preserving its essential concepts and points. Its goal is to provide a concise synopsis that captures the main ideas of the original work. There are two main approaches to text summarization: extractive and abstractive. Extractive text summarization identifies the important details by selecting key sentences or phrases from the source text to produce a succinct summary. Abstractive text summarization builds an internal semantic representation of the source text and rewrites it in new words using natural language processing. This study focuses on extractive text summarization. No text summarization research is available for the Kafi-noonoo language, and the main objective of this study is to develop Kafi-noonoo text summarizer models with a deep learning approach. For this study, 402 Kafi-noonoo texts with summaries were used as input documents. Three deep learning models were proposed: CNN (convolutional neural network), LSTM (long short-term memory), and Bi-LSTM (bi-directional long short-term memory), in order to perform a comparative analysis on the Kafi-noonoo dataset. The developed models for the Kafi-noonoo language address the aforementioned problems of content selection bias, information overload, and wasted time, effort, and materials.

In our experiments, the results indicate that the LSTM model achieves 98.2% precision, 98.6% recall, 98.1% F1 score, 93% accuracy, 98.5% validation accuracy, and 96.7% training accuracy; the Bi-LSTM model achieves 98.3% precision, 99.2% recall, 98.6% F1 score, 98% accuracy, 98.6% validation accuracy, and 97.8% training accuracy; and the CNN model achieves 88% precision, 87% recall, 93.9% F1 score, 93.5% accuracy, 94% validation accuracy, and 93.6% training accuracy.</dc:description>
  <dc:identifier>https://zenodo.org/record/8277</dc:identifier>
  <dc:identifier>10.20372/nadre:8277</dc:identifier>
  <dc:identifier>oai:zenodo.org:8277</dc:identifier>
  <dc:relation>doi:10.20372/nadre:8276</dc:relation>
  <dc:relation>url:https://nadre.ethernet.edu.et/communities/mattu_university</dc:relation>
  <dc:relation>url:https://nadre.ethernet.edu.et/communities/zenodo</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>http://www.opendefinition.org/licenses/cc-by</dc:rights>
  <dc:subject>Automatic text summarization, Natural language processing, Abstractive summarization, Extractive summarization, Kafi-noonoo language</dc:subject>
  <dc:title>KAFI-NOONOO TEXT SUMMARIZATION WITH A DEEP LEARNING APPROACH</dc:title>
  <dc:type>info:eu-repo/semantics/doctoralThesis</dc:type>
  <dc:type>publication-thesis</dc:type>
</oai_dc:dc>