Thesis Open Access

KAFI-NOONOO TEXT SUMMARIZATION WITH A DEEP LEARNING APPROACH

KOCHITO, TESHALE MENGESHA


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Automatic text summarization, Natural language processing, Abstractive summarization, Extractive summarization, Kafi-noonoo language</subfield>
  </datafield>
  <datafield tag="502" ind1=" " ind2=" ">
    <subfield code="c">Mattu University</subfield>
  </datafield>
  <controlfield tag="005">20250709065412.0</controlfield>
  <controlfield tag="001">8277</controlfield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Mattu University</subfield>
    <subfield code="4">ths</subfield>
    <subfield code="a">ARULMURUGAN RAMU</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Mattu University</subfield>
    <subfield code="4">ths</subfield>
    <subfield code="a">TESHOME DEBUSHE</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">1930440</subfield>
    <subfield code="z">md5:59af11115968c6020fd6bb712eaf002e</subfield>
    <subfield code="u">https://zenodo.org/record/8277/files/1735033673_Teshale  Mengesha Final thesis edited l (1).pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2025-07-09</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">user-mattu_university</subfield>
    <subfield code="p">user-zenodo</subfield>
    <subfield code="o">oai:zenodo.org:8277</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">Mattu University</subfield>
    <subfield code="a">KOCHITO, TESHALE MENGESHA</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">KAFI-NOONOO TEXT SUMMARIZATION WITH A DEEP LEARNING APPROACH</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-mattu_university</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-zenodo</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">http://www.opendefinition.org/licenses/cc-by</subfield>
    <subfield code="a">Creative Commons Attribution</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;The technique of reducing a lengthy text to a manageable length while maintaining its essential concepts and points are known as text summary. Its goal is to give a concise synopsis that encapsulates the main ideas of the original work. In text summarization, there are two main approaches: the extractive approach and the abstractive approach. In order to provide a succinct summary, extractive text summarizing entails determining the important details by picking out key sentences or phrases from the source text. The technique of creating an internal semantic representation of the source text and rewriting it in new words using natural language processing is known as abstractive text summarization. Extractive text summarization is the main focus of this study. There is no available text summarization research for the Kafi-noonoo language. The main objective of this study is to develop Kafinoonoo text summarizer models with a deep learning approach. For the purpose of this study, 402 kafi-noonoo texts with summaries were used as input documents. Consequently, three deep learning models were proposed in this study: CNN (convolutional neural network), LSTM (long short-term memory), and Bi-LSTM (bi-directional long short-term memory) to perform a comparison analysis for a Kafi-noonoo dataset. So, the developed models for Kafinoonoo language eliminate the mentioned problems of content selection bias, information overload, and wasting time, effort, and materials. 
In our experiments, the result indicates that the LSTM model achieves precision 98.2%, recall 98.6%, F1 score 98.1%, accuracy 93% and 98.5% of validation accuracy and 96.7% of training accuracy; Bi-LSTM scores precision 98.3%, recall 99.2%, F1 score 98.6%, accuracy 98% and 98.6% of validation accuracy and 97.8% of training accuracy; and the CNN model scores precision 88%, recall 87%, F1 score 93.9%, accuracy 93.5% and 94% of validation accuracy and 93.6% of training accuracy&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.20372/nadre:8276</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.20372/nadre:8277</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">thesis</subfield>
  </datafield>
</record>
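The abstract above frames extractive summarization as selecting key sentences from the source text rather than generating new wording. As a minimal illustration of that idea only, here is a simple frequency-based sentence scorer in Python; this is an assumed baseline sketch, not the thesis's CNN/LSTM/Bi-LSTM models or its Kafi-noonoo dataset.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Score each sentence by the average frequency of its words and
    keep the top-scoring ones, preserving their original order.
    A frequency baseline illustrating extractive selection only."""
    # Split into sentences on ., ! or ? followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Word frequencies over the whole document, lowercased.
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # Rank sentence indices by score, keep the best, restore order.
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)
    keep = sorted(ranked[:num_sentences])
    return ' '.join(sentences[i] for i in keep)

doc = ("Text summarization reduces a long text to a short one. "
       "Extractive methods select key sentences from the source. "
       "Abstractive methods rewrite the content in new words. "
       "This sketch selects sentences by word frequency.")
print(extractive_summary(doc, num_sentences=2))
```

A deep-learning extractive model such as those compared in the thesis would replace the frequency score with a learned classifier over sentence representations, but the selection step is the same: the summary is composed of sentences taken verbatim from the input.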