Iqra Journal of Engineering and Computing

Fine-Tuning Deep Learning Models for Sentiment Analysis: A Study on Movie Titles

Research Article 4, Volume 1, Issue 1 (2025)
By Hayyan Qasim, Muhammad Zain, Lubna Aziz and Muhammad Ayaz Shirazi
Keywords: Bi-LSTM, Deep Learning, Sentiment Analysis

Sentiment analysis is an essential capability in natural language processing (NLP), and it continues to advance alongside deep learning. Automatically identifying the sentiment expressed in text such as movie titles can support social media monitoring, marketing, and customer feedback analysis across a range of industries. This paper investigates a Bi-LSTM (Bidirectional Long Short-Term Memory) model that integrates GloVe (Global Vectors for Word Representation) embeddings for movie title sentiment classification. A Bi-LSTM processes a sequence in both the forward and backward directions, which strengthens its ability to capture textual context, while pre-trained GloVe embeddings supply semantically meaningful word representations that improve sentiment understanding in movie titles. We fine-tune this model to assign each movie title a sentiment label of Positive, Negative, or Neutral, and the fine-tuning process further improves its accuracy. The model outperforms standard sentiment analysis techniques, reaching above 90% accuracy with substantial gains in precision, recall, and F1-score. These results highlight the effectiveness of deep learning for sentiment analysis tasks. We also describe training complications, including overfitting, class imbalance, and limited computing resources, and detail techniques for addressing them: dropout regularization, early stopping, and data augmentation. Finally, we suggest directions for future work, including transformer-based models such as BERT and larger datasets to improve the model's generalizability.
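As a concrete illustration of the architecture described above, the sketch below builds a Bi-LSTM classifier over pre-trained GloVe vectors in Keras, with dropout regularization and early stopping as mentioned in the abstract. It is a minimal sketch under stated assumptions: the vocabulary size, embedding dimension, layer widths, GloVe file path, and helper names are illustrative choices, not the paper's reported configuration.

```python
# Illustrative sketch: Bi-LSTM sentiment classifier with frozen GloVe embeddings.
# Hyperparameters and the GloVe file path are assumptions for demonstration,
# not the configuration reported in the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks, initializers

VOCAB_SIZE = 20_000   # assumed vocabulary size
EMBED_DIM = 100       # e.g. glove.6B.100d vectors (assumed)
NUM_CLASSES = 3       # Positive, Negative, Neutral

def load_glove_matrix(path, word_index):
    """Build an embedding matrix from a GloVe text file (hypothetical path).
    Words not found in GloVe keep zero vectors."""
    matrix = np.zeros((VOCAB_SIZE, EMBED_DIM))
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, vec = parts[0], np.asarray(parts[1:], dtype="float32")
            idx = word_index.get(word)
            if idx is not None and idx < VOCAB_SIZE:
                matrix[idx] = vec
    return matrix

def build_model(embedding_matrix):
    model = models.Sequential([
        # Pre-trained GloVe vectors supply word semantics; kept frozen here.
        layers.Embedding(
            VOCAB_SIZE, EMBED_DIM,
            embeddings_initializer=initializers.Constant(embedding_matrix),
            trainable=False),
        # Bidirectional LSTM reads each title forward and backward.
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dropout(0.5),  # dropout regularization against overfitting
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping halts training once validation loss stops improving.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                     restore_best_weights=True)
# model = build_model(load_glove_matrix("glove.6B.100d.txt", word_index))
# model.fit(x_train, y_train, validation_split=0.1,
#           epochs=30, batch_size=32, callbacks=[early_stop])
```

Freezing the embedding layer preserves the GloVe semantics on a small dataset of short titles; unfreezing it after initial convergence is one common fine-tuning variant.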
